Article https://doi.org/10.1038/s41467-022-33450-2
Metasurface-enhanced light detection and ranging technology
Renato Juliano Martins¹, Emil Marinov¹, M. Aziz Ben Youssef¹, Christina Kyrou¹, Mathilde Joubert¹, Constance Colmagro¹,², Valentin Gâté², Colette Turbil², Pierre-Marie Coulon¹, Daniel Turover², Samira Khadir¹, Massimo Giudici³, Charalambos Klitis⁴, Marc Sorel⁴,⁵ & Patrice Genevet¹

¹Université Côte d'Azur, CNRS, CRHEA, Rue Bernard Gregory, Sophia Antipolis, 06560 Valbonne, France. ²NAPA-Technologies, 74160 Archamps, France. ³Université Côte d'Azur, Centre National de la Recherche Scientifique, Institut de Physique de Nice, F-06560 Valbonne, France. ⁴School of Engineering, University of Glasgow, Glasgow G12 8LT, UK. ⁵Institute of Technologies for Communication, Information and Perception (TeCIP), Sant'Anna School of Advanced Studies, Via Moruzzi 1, 56127 Pisa, Italy. e-mail: Patrice.Genevet@crhea.cnrs.fr

Received: 7 April 2022. Accepted: 20 September 2022.
Deploying advanced imaging solutions to robotic and autonomous systems by mimicking human vision requires the simultaneous acquisition of multiple fields of view, named the peripheral and fovea regions. Among 3D computer vision techniques, LiDAR is currently considered at the industrial level for robotic vision. Notwithstanding the efforts on LiDAR integration and optimization, commercially available devices have slow frame rates and low resolution, notably limited by the performance of mechanical or solid-state deflection systems. Metasurfaces are versatile optical components that can distribute the optical power in desired regions of space. Here, we report on an advanced LiDAR technology that leverages ultrafast low-FoV deflectors cascaded with large-area metasurfaces to achieve a large FoV (150°) and high frame rate (kHz), providing simultaneous peripheral and central imaging zones. The use of our disruptive LiDAR technology with advanced learning algorithms offers perspectives to improve the perception and decision-making processes of ADAS and robotic systems.
Autonomous mobile systems such as autonomous cars and warehouse robots include multiple sensors to acquire information about their surrounding environments, defining their position, velocity, and acceleration in real time. Among them, range sensors, and in particular optical ranging sensors, provide vision to robotic systems1–3 and are thus at the core of the automation of industrial processes, the so-called 4.0 industrial revolution. Several optical imaging techniques are currently integrated into industrial robots for 3D image acquisition, including stereoscopic cameras, RADAR, structured light illumination, and laser range finders or LiDARs. LiDAR is a technological concept introduced in the early 60s, when Massachusetts Institute of Technology (MIT) scientists reported on the detection of echo signals upon sending optical radiation to the moon surface4. Since the pioneering MIT work, LiDARs have been using laser sources to illuminate targeted objects and to collect the returning echo signals, offering the possibility of reconstructing highly resolved three-dimensional (3D) images. Conventional LiDARs rely on time-of-flight (ToF) measurement, which employs a pulsed laser directed toward a distant reflective object to measure the round-trip time of light pulses propagating from the laser to the scanned scene and back to a detection module. All LiDAR components must act synchronously to tag single returning pulses for ranging-image reconstruction. The recovered distance follows from the formula $2d = c \cdot \mathrm{ToF}$, where $c$ is the speed of light and ToF is the measured round-trip time. To sense the space, the LiDAR source must be able to sweep a large field of view (FoV). The objects in the scene are then detected, point-by-point, by measuring the ToF from every single direction to build an optical echo map.
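As a minimal numerical illustration of the ToF relation above (our sketch, not code from the paper; the example echo delay is hypothetical):

```python
C = 299_792_458.0  # speed of light (m/s)

def range_from_tof(tof_s: float) -> float:
    """Recover the target distance d from the round-trip time via 2d = c * ToF."""
    return 0.5 * C * tof_s

# Example: an echo returning 33.4 ns after emission sits ~5 m away,
# i.e. the short-range regime demonstrated later in the paper.
print(f"{range_from_tof(33.4e-9):.2f} m")
```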
The other measurement processes, known as Amplitude Modulation Continuous Wave (AMCW)5,6, Frequency Modulation Continuous Wave (FMCW)7,8 or Stepped Frequency Continuous Wave (SFCW)9, employ continuous waves with constant or time-modulated frequency to measure the round-trip time of the modulated light information. LiDAR systems enable the real-time 3D mapping of objects located at long, medium or short-range distances from the source, finding a vast variety of applications beyond robotic vision, spanning from landscape mapping10–12, atmospheric particle detection13–16, wind speed measurements17,18, static and/or moving object tracking19–22, to AR/VR23, among others. Generally, LiDARs are classified into scanning or non-scanning (Flash LiDAR) systems, depending on whether the laser sources simply illuminate24 or scan the targeted scene. A scanning LiDAR system can be essentially described in terms of three key components: (i) the light source for illumination, (ii) the scanning module for fast beam direction at different points in the scene, and (iii) the detection system for high-speed recovery of the optical information received from the scene. Over the past decades, nanophotonics-based LiDAR systems have blossomed, and more advanced scanning and detection techniques have been proposed25,26. The expected massive use of LiDARs in the automotive industry for advanced driver-assistance systems (ADAS) or even fully autonomous driving brought out new challenges for the scanning systems, including low fabrication complexity, potential for scalable manufacturing, cost, lightweight design, tolerance to vibrations and so on. Today, industrially relevant LiDARs mainly use macro-mechanical systems to scan the entire 360° FoV. Besides their large FoV, these bulk systems present limited imaging rates of the order of a few tens of Hz. A promising evolution in mechanical scanners are the micro-electromechanical systems27 (MEMS), which shift the scanning frequency to the kHz range. However, a major drawback of MEMS is the low FoV, typically not exceeding 25° for horizontal and 15° for vertical scanning. At the research level, beam steering with optical phased arrays (OPA)28,29 provides remarkable speeds while reaching FoVs around 60°. However, OPA technology is less likely to be massively deployed in industrial systems due to its manufacturing challenges. The industrially mature liquid crystal modulators are also not adequate as LiDAR scanners due to their poor FoVs, usually remaining below 20° depending on the wavelength, as well as their kHz modulation frequency30,31. Moreover, acousto-optic deflectors (AODs), enabling ultrafast MHz scanning32,33, have never been considered in LiDARs because of their narrow FoV reaching at maximum 2°, imposing a compromise between high-speed imaging and large FoV.
During the last decade, metasurfaces (MS)34 have spurred the interest of the entire international photonics community by unveiling the possibility of engineering the properties of light (i.e., the amplitude, the phase, the frequency and/or the polarization) at will35. They are flat optical components made of arrangements of scattering objects (meta-atoms) of subwavelength size and periodicity. Currently, four light modulation mechanisms are used to create metasurfaces: light scattering from resonant nanoparticles36,37, geometric phase occurring during polarization conversion (Pancharatnam–Berry phase)38, accumulated propagation phase in pillars with controllable effective refractive index (ERI)39, and the topological phase in the vicinity of singularities40. Usually, MSs comprise inherently passive components, designed to perform a fixed optical functionality after fabrication. For instance, by properly selecting the size and the spacing of the meta-atoms, MSs allow one to redirect a laser beam at any arbitrary but fixed angle dictated by the generalized Snell's law. Clearly, a passive MS alone cannot be used in LiDARs requiring real-time beam scanning. On the contrary, dynamic MSs, designed by or combined with materials possessing tunable optical properties triggered by external stimuli41–45, stand as promising alternatives for real-time deflection. Recently, the US startup company LUMOTIVE introduced electrically addressable reflective resonant MSs infiltrated with liquid crystals and demonstrated a scanning frequency that exceeds the switching speed of common liquid crystal displays, as well as a FoV of around 120°46. The latter approach has been proven auspicious for miniaturized, scalable LiDARs, but it involves complex electronic architectures, and likely significant optical losses in the case of metallic MS building blocks.
Here, we propose an alternative high-frequency beam scanning approach that exploits the light-deflecting capabilities of passive MSs to expand the LiDAR FoV to 150° × 150°, and to achieve simultaneous low- and high-resolution multizone imaging. We make use of an ERI multibeam deflecting MS cascaded with a commercial AOD. The system offers large flexibility in terms of beam scanning performance, operation wavelength and materials. The angular resolution, referring to the ability of the system to distinguish adjacent targets and retrieve shapes, becomes very important in applications requiring simultaneous long- and short-range detection. Our multizone LiDAR imaging demonstration can mimic human vision by achieving simultaneous high-frame-rate acquisition of high- and low-field zones with different spatial resolutions. The large design flexibility of MSs provides imaging capabilities of interest to LiDAR systems, meanwhile offering new industrial applications.
Ultrafast and high-FoV metasurface scanning module
MHz beam scanning can be achieved over a large FoV by coupling AODs with ERI MSs exhibiting spatially varying deflection angles. Figure 1a illustrates the experimental concept, where a modulated laser source at λ = 633 nm (TOPTICA i-beam smart) generates single pulses at any arbitrary rate up to 250 MHz. For single-pulse LiDAR, the repetition rate $f_{rep}$ is related to the maximum ranging distance $d_{max}$ by the expression:

$$d_{max} = \frac{1}{2}\frac{c}{f_{rep}} \qquad (1)$$
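To make the trade-off encoded in Eq. (1) concrete, the following sketch (ours; the sampled repetition rates are arbitrary) relates pulse repetition rate to the maximum unambiguous range:

```python
C = 299_792_458.0  # speed of light (m/s)

def max_range(f_rep_hz: float) -> float:
    """Maximum unambiguous ranging distance for single-pulse ToF, Eq. (1)."""
    return 0.5 * C / f_rep_hz

for f_rep in (1e6, 10e6, 250e6):  # Hz; 250 MHz is the source's upper limit
    print(f"f_rep = {f_rep / 1e6:6.1f} MHz -> d_max = {max_range(f_rep):7.2f} m")
# 1 MHz -> ~150 m, 10 MHz -> ~15 m, 250 MHz -> ~0.6 m
```

Faster pulsing buys frame rate at the cost of ambiguity distance, a compromise revisited in the Discussion.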
The focused beam with a small deflection is angularly magnified to scan in both azimuth θ and elevation φ angles. A detailed scheme of the FoV-amplifying system is shown in Fig. 1b. A photograph of the built proof-of-concept system is shown in Fig. 1c, where we highlighted (shaded red region) the expansion of the small two-degree (2°) AOD FoV into an enhanced 150° FoV. The angle deflected by the MS is controlled by the impact position of the impinging focused beam on the MS plane, associated with the radial and angular coordinates r and θMS, respectively (see Fig. 1e). By applying voltage to the AOD, one can actively re-point the beam at any arbitrary angle within the small (~2° × 2°) AOD FoV, thus sweeping the focused beam across the metasurface to vary θMS and r in the ranges [0, 2π] and [0, rmax], respectively, where rmax is the radius of the metasurface. Note that θMS and r denote, in polar coordinates, the position of the impact beam on the metasurface according to Fig. 1e. For simplicity in connecting incident and deflected angles, we designed a circular metasurface with a radially symmetric phase-delaying response, but given the versatility in controlling the optical wavefront, various MSs with any other beam-deflecting properties can be adjusted according to the specific application. We must also highlight that, in principle, there is no limitation on the observed FoV, as it is fully dependent on the metasurface phase function, within the limit [0, π] for the transmission scheme. In this initial demonstration, we implemented (Fig. 1f, g) the simple concept of an ERI MS designed to spatially impart linearly increasing momentum with respect to the radial dimension r, given by the expression:
by the expression:
Φ
r=k0
r
rmax
ð2Þ
where, k0is the free space momentum, and Φthe local-phase
retardation. Such design results in parabolic-phase retardation as
represented in Fig. 1d. In this design, the deected beam will be
delayed by a maximum phase retardation of Φ=πrmax
λand Φ=0 for
the peripheral points ±rmax and the central point, respectively. Moreover, Eq. (2), transformed into Cartesian coordinates, determines the value of the deflected angles along both axes, denoted (θ, φ), according to the generalized Snell's law41:
$$\begin{cases} k_{x,t} = k_{x,i} + \nabla\Phi_x = k_0\sin\theta_i\sin\varphi_i + \nabla\Phi_r\,\dfrac{x}{r} \\[6pt] k_{y,t} = k_{y,i} + \nabla\Phi_y = k_0\sin\theta_i\cos\varphi_i + \nabla\Phi_r\,\dfrac{y}{r} \end{cases} \qquad (3)$$
where the phase gradient is defined at the metasurface plane at z = 0. Considering small incident angles originating from the AOD, the expressions simplify to:
$$\begin{cases} k_0\sin\theta_t\sin\varphi_t = k_0\,\dfrac{r}{r_{max}}\,\cos\theta_{MS} \\[6pt] k_0\sin\theta_t\cos\varphi_t = k_0\,\dfrac{r}{r_{max}}\,\sin\theta_{MS} \end{cases} \qquad (4)$$
Such an expression validates the linearity observed for small angles [−40°, 40°], according to the experimental measurements of the voltage dependence of the deflection angles (Supplementary Fig. S2c).
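As a sanity check of Eqs. (2)–(4), the sketch below (our illustration; the metasurface radius and spot positions are hypothetical values) maps an impact position (r, θMS) on the metasurface to the transmitted deflection angles at normal incidence. Squaring and summing the two lines of Eq. (4) gives sin θt = r/rmax, while their ratio gives tan φt = cos θMS / sin θMS:

```python
import numpy as np

R_MAX = 5e-3  # metasurface radius (m); hypothetical value for illustration

def deflection_angles(r: float, theta_ms: float):
    """Solve Eq. (4) for the transmitted angles (theta_t, phi_t), normal incidence."""
    if r > R_MAX:
        raise ValueError("spot lies outside the metasurface")
    theta_t = np.arcsin(r / R_MAX)                          # polar deflection
    phi_t = np.arctan2(np.cos(theta_ms), np.sin(theta_ms))  # azimuth
    return np.degrees(theta_t), np.degrees(phi_t)

# Sweeping the AOD spot radially outward increases the polar deflection:
for r in (0.0, 0.25 * R_MAX, 0.5 * R_MAX, 0.97 * R_MAX):
    theta_t, phi_t = deflection_angles(r, theta_ms=np.pi / 4)
    print(f"r = {r * 1e3:.2f} mm -> theta_t = {theta_t:5.1f} deg")
```

A spot at r ≈ 0.97 rmax corresponds to a ~76° polar deflection, consistent with the ±75° scanning range of Fig. 1a.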
Results
2D and 3D LiDAR image acquisition
To show the angular and depth 2D imaging capabilities of our LiDAR system, we start by performing 1D scanning of three distinct objects placed on a table: (1) a square reflector mounted on a post, (2) a round deflector and (3) a box reflector, angularly distributed at different locations as shown in Fig. 2a. The associated 2D LiDAR ranging image is displayed in Fig. 2b, indicating that high-reflectivity objects are observed at LiDAR positions matching those observed with a conventional camera (Fig. 2a). In particular, we found that the three objects shown in Fig. 2c were located at the following width [x] and depth [z] positions: [−0.4 m, 1.5 m], [0.1 m, 2.4 m] and [0.6 m, 3.5 m] for the square, the round, and the box reflector, respectively. In the graph, we also observe the difference in reflectivity of the three objects at various distances, leading to distinct intensities: the objects on the left and right (square and box deflectors) correspond to lower signals due to their angular locations, size and distance, while the round deflector in the middle has higher reflectivity and appears with higher reflectance. This first example validates the short-range (~5 m) imaging capabilities of our LiDAR system.
To further investigate the capabilities of the system, we extended the performance to achieve 3D imaging. To this end, an additional FoV dimension is added by cascading a second AOD, orthogonally oriented, on the elevation axis. The extended FoV is now improved over both dimensions considering a MS with radial symmetry, as schematized in Fig. 1b. To demonstrate the two-axis scanning capability, we present in Fig. 3a the elevation (top) and the azimuthal (bottom) line scanning, respectively, to highlight that a 150° FoV (Supplementary Materials S1) is accessible for both scanning axes (see video V1 in supplementary materials). These examples of line scanning are realized by fixing the voltage value on one deflector and scanning the voltage of the second deflector over the entire range at a scanning rate that exceeds the acquisition speed of either our eye or the CCD refreshing frame rate, resulting in an apparently continuous line scan. We prepared a scene (Fig. 3b, bottom) with three different actors located at different angular and depth positions of 1.2, 2.7, and 4.9 m to demonstrate 3D imaging. Due to the low laser pulse peak power (about 10 mW), we performed our demonstrations in an indoor environment using highly reflective suits; considerations of power and losses are addressed in Section S2 of the Supplementary Materials. For the demonstration, we chose a visible laser operating at λ = 633 nm, which is very convenient to observe and monitor the deflected beam.
Fig. 1 | Concept of a metasurface-augmented FoV LiDAR. a Schematic representation of the LiDAR system. A triggered laser source, emitting single pulses for ToF detection, is directed to a synchronized acousto-optic deflector (AOD) offering ultrafast light scanning with low FoV (~2°). The deflected beam is directed to a scanning lens to scan the laser spot on the metasurface at different radial and azimuthal positions. The transmitted light across the metasurface is deviated according to the position of the impinging beam on the component to cover a scanning range between −75° and 75°. The scattered light from the scene is collected using a fast detector. Data are processed to extract the single-echo ToF for 2D and 3D imaging of the scene. b Detail of the cascaded AOD-metasurface assembled deflection system. c Top-view photograph of the optical setup. d Bottom: graphical representation of the metasurface phase distribution along the radial axis. Top: representations of beam deflection according to the incident beam positioning on the metasurface. The inset equation represents the designed phase function. e Illustration of axial symmetry for the laser impact point. f Photograph of the 1 cm MS fabricated using nanoimprint lithography. g SEM image of the sample showing the nanopillar building blocks of varying sizes employed to achieve beam deflection by considering lateral effective refractive index variations.
After calibrating the system (see Supplementary Material), arbitrary or random-access beam scanning across the high FoV can be realized, and arbitrary intensity patterns can be projected by rapidly steering the beam to different locations at very short time intervals (see Supplementary Video V2). Figure 3c shows examples of several scanning profiles implemented on the metasurface beam scanner to project Lissajous curves, demonstrating the random-point access mode.
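For intuition, here is a minimal sketch of the Lissajous drive shown in Fig. 3c (Ψ = 0° and A = B = 30 follow the caption; the drive frequencies are our own illustrative choices):

```python
import numpy as np

def lissajous_angles(t, A=30.0, B=30.0, alpha=2 * np.pi * 1.0e6,
                     beta=2 * np.pi * 1.5e6, psi=0.0):
    """Deflection commands of Fig. 3c: theta = A sin(alpha t + psi), phi = B sin(beta t).

    A, B are angular amplitudes in degrees; alpha, beta are drive angular
    frequencies (rad/s), set here to MHz-scale values for illustration.
    """
    theta = A * np.sin(alpha * t + psi)  # azimuth command (deg)
    phi = B * np.sin(beta * t)           # elevation command (deg)
    return theta, phi

t = np.linspace(0.0, 4e-6, 2001)   # a few microseconds of trajectory
theta, phi = lissajous_angles(t)   # one command stream per AOD axis
```

The ratio α/β sets the figure drawn; since both axes are addressed electronically, the pattern can be changed on the fly, which is the random-point access mode referred to above.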
Mimicking human peripheral and fovea vision with multizone LiDAR imaging
The previous experiments were performed by focusing the light deflected by the AOD on relatively small metasurfaces (1, 2, and 3 mm diameters) using a scanning lens. This configuration favors a small spot (of the order of 50 μm) to contain the MS angular divergence to a small parametric region, i.e., scanning the MS with a small spot prevents large overlap with the spatially varying deflecting area.
Fig. 2 | 1D time-of-flight imaging. a Photograph of the scene. b Ranging image of three objects placed on a table, using highly reflective tapes to improve the intensity of the returned signal. In (1) a post with a small reflector was used, in (2) a round object with a reflector, and in (3) there is a box reflector with a tape around it. The graph shows the image at the correct ranging distance X (scanning dimension) and Z (ranging dimension), showing the capability to sense all three objects. c Position of single objects according to the ranging image in (b). d Raw signal collected for the respective image, showing that objects oriented in the normal direction have stronger scattering intensity; the inset displays single pulses used to determine the ToF ranging distance.
Fig. 3 | 3D imaging and wide-angle scanning capabilities. a LiDAR line scanning of our laboratory room showing the large FoV in both elevation (top) and azimuth (bottom) angles. Note the top picture showing a scanning line profile covering the whole range from the ground to the ceiling of the testing room over 150°. b 3D ranging demonstration (top): the scene (bottom) was set up with actors wearing reflective suits positioned at distances Z varying from 1.2 to 4.9 m. Color encodes distance. c Lissajous scanning using deflecting functions θ = A sin(αt + Ψ) and φ = B sin(βt) for different parameters α and β, to illustrate the laser projection capabilities of a fast beam scanner in a large-FoV configuration. Ψ was set to 0° and A = B = 30, although any configuration can be actively changed.
The beam divergence as a function of the metasurface size is provided in Supplementary Material S5, indicating that a 3 mm device results in a divergence lower than 1.5°. Robotic systems aiming to reproduce human vision require peripheral and central vision, as illustrated in Fig. 4a, where several zones featuring different spatial resolutions are acquired simultaneously. A low-resolution peripheral field provides coarse scene exploration, usually needed for humans to direct the eye to focus on a highly resolved fovea region for sharp imaging. The scene thus needs to be scanned differently according to the zones of interest. To further reduce beam divergence and improve the resolution as needed, it is necessary to increase the diameter and complexity of the metasurface and work with fully collimated beams. For this purpose, we realized a cm-size metasurface deflector using nanoimprint lithography (NIL), as shown in Fig. 1f, g (further details on the fabrication are provided in S9). In the latter configuration, the deflector is placed directly after the AOD without utilizing a scanning lens. We specifically designed a large-area deflector that achieves a moderate 1st-order deflection efficiency of ~40% and took advantage of the non-deflected zero-order narrow scanning FoV to simultaneously scan two zones with different FoVs and resolutions. This demonstration specifically exploits the multibeam addressing capability of metasurfaces, resulting in a dual-mode imaging: (i) a high-resolution scanning provided by the nearly collimated zero-order beam deflected by the AOD only, and (ii) a large-FoV, lower-resolution image provided by the 1st-order beam deflected by the metasurface. As illustrated in the inset of Fig. 4b, we spatially selected the returned/scattered signal from the different parts of the scene. For this purpose, we used a double-detector monitoring scheme. The first detector collects light from the full numerical aperture (~2π solid angle) but blocks the central small numerical aperture (a beam blocker is placed in front of the detector). The second detector covers only a small NA for the narrow FoV resulting from zero-order light scanning (a spatial filter is used to select the observation area). A dual-beam metasurface scanning scheme is used to image a scene (Fig. 4b, top) with two fields of interest: (i) three actors placed at different regions of the space periphery, as measured in Fig. 4c (top), and (ii) a highly resolved chessboard-like object placed in the forward direction at a small FoV, measured in Fig. 4c (bottom). The images presented in Fig. 4c correspond to low- and highly resolved imaging, acquired by both detectors simultaneously. Multizone scanning with a high resolution forward and a lower lateral resolution over a high-FoV peripheral vision could be a disruptive solution for addressing the needs of advanced driver-assistance systems (ADAS).
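To make the dual-zone geometry concrete, here is a small illustrative model (ours; the angular extents come from the text, while the linear command-to-angle mapping is an assumption) of how a single AOD command simultaneously yields a narrow central scan via the zero order and a wide peripheral scan via the 1st order:

```python
FOV_CENTRAL_DEG = 2.0       # zero-order scan: the AOD's native ~2 degree FoV
FOV_PERIPHERAL_DEG = 150.0  # 1st-order scan, amplified by the metasurface

def dual_zone_directions(aod_cmd: float):
    """Map a normalized AOD command in [-1, 1] to the two emitted beams.

    The same command steers both diffraction orders at once: the zero order
    stays within the narrow central cone (high resolution, fovea-like), while
    the 1st order is magnified onto the peripheral FoV (coarse, wide).
    """
    central = aod_cmd * FOV_CENTRAL_DEG / 2        # degrees
    peripheral = aod_cmd * FOV_PERIPHERAL_DEG / 2  # degrees
    return central, peripheral

for cmd in (-1.0, 0.0, 0.5, 1.0):
    c, p = dual_zone_directions(cmd)
    print(f"cmd {cmd:+.1f} -> central {c:+5.2f} deg | peripheral {p:+6.1f} deg")
```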
High-speed velocimetry and time-series imaging
To characterize the MHz deflection speed and the possibility of achieving real-time frame-rate imaging, we measured the beam deflection speed, i.e., the minimum frequency at which the beam can be re-pointed to a new direction. To do so, we placed highly reflecting tapes on the wall and measured the amplitude of the backscattered signal for distinct scanning frequencies. We define as the system cutoff frequency the condition when the amplitude of the reflected signal decays to the −3 dB point (see Supplementary Information S4).
Fig. 4 | Multizone imaging. a Schematic representation of human multizone viewing, with the concept to be adapted in ADAS systems. Such mimicking characteristics enable double vision for dual-purpose imaging features: high resolution, long range, in the center, and lower resolution, bigger FoV, for the peripheral view. b Experimental realization to test the dual-zone imaging functionality of the LiDAR system, including the dual detection scheme (inset) for simultaneous multiplexed image collection. The central 0th-diffraction-order beam scans a small area with high resolution directed at the center of the image, while the 1st diffracted order scans the whole field. c Top: result of the scanned scenes described in (b); the top panel represents the LiDAR large-FoV ranging image. The image is obtained by blocking the central part of the numerical aperture using an obstacle, as sketched in (b). The bottom LiDAR high-resolution ranging image presents the central part of the scene captured using the 0th-diffraction beam, covering a FoV of about 2°.
The measurements were made by considering: (i) a single scanner in the azimuth angle (see Supplementary Fig. S5b, red curve) and (ii) a cascaded system comprised of two orthogonally oriented deflectors for scanning at both azimuth and elevation angles (see Supplementary Fig. S5b, blue curve). The results indicate less than 3 dB loss up to around 6 MHz and 10 MHz for single- and double-axis scanning, respectively. We also demonstrate the modulation of a laser beam over a large FoV (>140°) at MHz speed and correct imaging with scanning frequencies up to 6.25 MHz (see Supplementary Fig. S5c). This corresponds to about two orders of magnitude faster than any other beam-pointing technology reported so far. Operating beyond the 3 dB loss at higher frequency was also realized, leading to reduced resolution but increased imaging frame rate, up to 1 MHz for 1D scanning at 40 MHz (see discussion in Supplementary Materials, Section S7).
Measurements of time events were performed to investigate dynamic imaging. The most convenient dynamic system observable in our laboratory was a spinning chopper composed of a rotating wheel with a nominal rotation speed of 100 Hz. We prepared the scene composed of a chopper, located 70 cm away from the source, decorated with a highly reflective tape on one of the mechanical shutters, as illustrated in Fig. 5a (top). As described in Supplementary Table 1, we performed three time-series experiments using acquisition frame rates of 741, 1020, and 3401 fps (see Supplementary Information S8 and Supplementary Videos SGIF 1–3). We tracked the center position of the reflective tape in both the space and time domains by integrating the radial axis of the ranging image from the center of the chopper and fitting a Gaussian curve plotted over the entire [0, 2π] angular axis (see Fig. 5a, bottom). The curves are manually offset by 6π to differentiate the experiments. All experiments revealed an averaged rotation speed value of 92.71 Hz. We attribute the ~7.3 Hz difference between the measured and nominal speed of 100 Hz to the phase-jitter control mechanism of the chopper. In principle, rotating mechanical shutters are designed with closed-loop circuitry providing an electronic signal that maintains a linear rotation speed. Interestingly, displaying time events on the angular dimension reveals a small wobbling-wheel imperfection caused by the presence of the reflective tape, resulting in a slowdown at angles around 3π/2, as evidenced in Fig. 5b (Experiment 2, 1020 fps). One can indeed observe a rotation slope change at periodic times corresponding to the position of the reflective tape at the bottom (for instance at t = 1.0 ms/10.8 ms in Fig. 5b, bottom panel). Using the recovered ranging information, we estimate the size of the tape to be 4 cm, as illustrated in Fig. 5c (bottom). The 1 cm difference from the real object (Fig. 5c, top) is due to the high reflectivity of the screws located close to the center, causing additional scattering at the same ranging distance.
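A minimal sketch of the tracking step described above (our reconstruction; the array layout is assumed, and the Gaussian fit of the paper is replaced by a simpler peak pick):

```python
import numpy as np

def rotation_speed_hz(frames: np.ndarray, fps: float) -> float:
    """Estimate wheel rotation speed from a stack of polar-resampled frames.

    frames: intensity stack of shape (n_frames, n_radii, n_angles), each
    ranging image resampled on an (r, angle) grid centered on the chopper hub.
    """
    radial_profile = frames.sum(axis=1)  # integrate the radial axis
    angles = np.linspace(0.0, 2 * np.pi, frames.shape[2], endpoint=False)
    tape_angle = angles[radial_profile.argmax(axis=1)]  # brightest angular bin
    # Unwrap so the angle grows monotonically, then fit a line: slope = omega.
    t = np.arange(len(tape_angle)) / fps
    omega = np.polyfit(t, np.unwrap(tape_angle), 1)[0]
    return omega / (2 * np.pi)  # rotations per second (Hz)

# e.g. rotation_speed_hz(stack, fps=1020) -> ~92.7 Hz in these experiments
```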
Discussion
We realized an ultrafast beam scanning system composed of a fast deflector and a passive metasurface to achieve beam steering at MHz speed over a 150° × 150° FoV, improving the wide-angle scanning rate of mechanical devices by five orders of magnitude. We performed fast steering in one and two angular dimensions and retrieved the associated time of flight for ranging measurements, leading to high-speed LiDAR imaging of very fast-moving objects over a large FoV. Employing the parameters described in the second row of Supplementary Table 1, we achieved a time step of 980 µs, see Fig. 5b. An object traveling at the speed of sound (1234 km/h) at 15 m away from the source will take ~74 ms to cover a 120° FoV. Such a supersonic object can be detected within 76 time-series events. Considering the Nyquist limit, i.e., four time-series samples to recover the speed, the maximum event detection can increase up to a speed of 47 megameters per hour.
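A back-of-the-envelope check of these numbers (ours; we assume the object crosses the straight chord subtending the 120° FoV at 15 m):

```python
import math

v_sound = 1234 / 3.6                              # 1234 km/h -> ~343 m/s
chord = 2 * 15 * math.sin(math.radians(120 / 2))  # ~26 m across the FoV
crossing_time = chord / v_sound                   # ~0.076 s
events = crossing_time / 980e-6                   # snapshots at a 980 us step
print(f"{crossing_time * 1e3:.0f} ms, {events:.0f} events")
```

This lands within a few percent of the quoted ~74 ms and 76 events; the small residual depends on the exact trajectory assumed.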
High-speed scanning modules for LiDAR applications have to trade off between the maximum distance and the spatial resolution (see Supplementary Information S5, S10). The frame rate of a single ToF system can be expressed as:

$$f_{Rate} = \frac{c}{2\,n\,d_{max}} \qquad (5)$$

where $c$ is the speed of light. Equation (5) thus indicates that both the number of pixels in the image (n) and the maximum ambiguity distance ($d_{max}$) define the imaging frame rate. Such echoing time can be reduced by encoding the signal sent in each scanning direction with a specific identification code, namely code-division multiple access (CDMA)47. Multiplexed observation is realized by decorrelating the ToF signal using a matched-filter technique. LiDAR companies often multiplex the source with an array of diode lasers to increase the frame rate, increasing the LiDAR complexity and multiplying the system cost by the number of sources. Such a CDMA technique could realistically be exploited in combination with our fast beam deflection system to reach an imaging frame rate of 125 frames/s with a high spatial resolution of 200 × 200 pixels. Beyond applications for the ADAS industry, beam steering systems with similar performance have potential in real-time imaging for applications requiring short ambiguity distances, for example in microscopy and wide-angle optical coherence tomography48. Our main limitation in achieving a high frame rate is related to the extremely large volume of real-time data treatment to be handled synchronously during the acquisition. Here we only performed calculations using a conventional CPU (LabVIEW based); as such, we cannot output and save data at the same speed as their acquisition.
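Plugging numbers into Eq. (5) (a sketch; the 30 m ambiguity distance is our assumption, chosen because it reproduces the 125 frames/s quoted above):

```python
C = 299_792_458.0  # speed of light (m/s)

def frame_rate(n_pixels: int, d_max_m: float) -> float:
    """Single-ToF frame rate of Eq. (5): f_Rate = c / (2 * n * d_max)."""
    return C / (2 * n_pixels * d_max_m)

print(f"{frame_rate(200 * 200, 30.0):.0f} fps")  # ~125 fps for 200 x 200 px
```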
Fig. 5 | Measurement of fast real-time series events. a Top: illustration of the scene: a mechanical chopper was set up with a nominal speed of 100 Hz and some slabs were covered with a reflective tape. Bottom: measurement of the rotation speed for three different frame rates. b Top: normalized intensity map for the radial axis, illustrating the dynamics of the wheel. Note the different slope at rotation angles around 3π/2, representing a lessening of the speed. Bottom: single-frame intensity data illustrating various angular positions. c Top: photograph of the chopper and the size of the reflective tape. Bottom: ranging image for t = 1.0 ms and the measurement of the tape from the recovered data.
The Supplementary Video V3, showing a moving person in 3D space, was taken by achieving the best compromise, that is, by acquiring single-frame raw data (with 200 × 200 pixels, for instance) and outputting the data directly to an SSD drive frame-by-frame. Our data treatment process creates latencies related to asynchronous data storage, which result in stuttered or choppy movements with occasional video speed-up. This problem is generally mitigated in LiDARs by implementing FPGA/ASIC processing.
Our approach also offers random-access beam steering capabilities. Multizone ranging images mimicking human vision at high frame rates have been realized. The versatility of MSs for wavefront engineering could improve the capabilities of simultaneous localization and mapping algorithms. Furthermore, incorporating this system in ADAS could provide a disruptive solution for medium/long-range perception, in which the central view scans the front scene, while the peripheral view provides additional sensing, for pedestrian safety for example. We finally demonstrated time-event series for imaging in a real-time regime (>1k fps and up to MHz frame rate for 1D scanning). Outperforming existing LiDAR technologies, our tool offers a perspective for future applications, in particular by contributing to reducing the decision-making latency of robotic and advanced driver-assistance systems.
Methods
Experimental methodology
A collimated beam is sent to an AOD device (AA Opto-electronic DTSXY-400-633) to deflect light at small arbitrary angles, within 49 mrad. The AOD is driven by a voltage-controlled RF generator (AA Opto-electronics DRFA10Y2X-D-34-90.210). The deflected signal is directed to a scanning lens (THORLABS LSM03-VIS) that focuses the light at different transverse positions on the MS. The MS acts as a designer-defined passive device to convert the small AOD FoV (a few degrees across) into an enhanced 150° × 150° FoV. ToF is obtained by monitoring the scattered light at each scanned angle using a detector (Hamamatsu C14193-1325SA), and the reconstructed ranging image is built by associating each period ($1/f_{rep}$) with individual pixels and extracting the ToF. In our detection scheme, the detection path is separated from the excitation path, which may result in non-overlapping illumination/observation regions. We believe that a mono-static approach could also be implemented in our configuration by utilizing a beam splitter before sending the laser beam into the acousto-optic deflector. A PXI (National Instruments) system is used for data generation, recovery, and treatment (more details can be found in Section S6 of the Supplementary Materials). The angular scanning of the whole 1D range was performed in a single shot, during which we orchestrated pulse repetition, scanning position angles and collection for precise measurement of ToF in the system. With an acquisition scope card of 3 Gsamples/s sample rate and considering a rise time of the detector smaller than ~330 ps, the maximum z (depth) resolution of a single echo per laser-shot measurement is about Δz = 5 cm. In Fig. 2d, we show the collected raw signal corresponding to the three objects. For ToF recovery, we used the derivative of the signal and collected the peak of the differentiated signal. Single pulses were collected (inset of Fig. 2d) and separated to evaluate the ToF for each scanned direction, and then folded at the scanning frequency to form an image. The fabrication of the different MSs was realized using GaN-on-sapphire nanofabrication processes. Details are available in the supplementary materials.
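A sketch of the single-echo ToF extraction described above (ours; the trigger bookkeeping is simplified and the waveform is assumed to be a clean single echo):

```python
import numpy as np

FS = 3e9           # scope sample rate: 3 Gsamples/s, as in the Methods
C = 299_792_458.0  # speed of light (m/s)

def tof_from_trace(trace: np.ndarray, emit_idx: int) -> float:
    """Locate the echo as the peak of the differentiated waveform (the
    steepest rising edge) and convert the sample delay to a round-trip time."""
    echo_idx = int(np.argmax(np.diff(trace)))
    return (echo_idx - emit_idx) / FS

# Depth resolution set by the ~330 ps detector rise time:
print(f"dz ~ {0.5 * C * 330e-12 * 100:.1f} cm")  # ~5 cm, matching the Methods
```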
Data availability
Source data are available from the corresponding author upon request. All data needed to evaluate the conclusions are present in the manuscript and/or the Supplementary Information. Videos are available as Supplementary Materials, and the associated raw data are available upon request.
References
1. Kolhatkar, C. & Wagle, K. Review of SLAM algorithms for indoor mobile robot with LIDAR and RGB-D camera technology. In Innovations in Electrical and Electronic Engineering (eds Favorskaya, M. N. et al.) 397–409 (Springer Singapore, 2021).
2. Sujiwo, A., Ando, T., Takeuchi, E., Ninomiya, Y. & Edahiro, M. Monocular vision-based localization using ORB-SLAM with LIDAR-aided mapping in real-world robot challenge. J. Robot. Mechatron. 28, 479–490 (2016).
3. Royo, S. & Ballesta-Garcia, M. An overview of lidar imaging systems for autonomous vehicles. Appl. Sci. 9, 4093 (2019).
4. Smullin, L. D. & Fiocco, G. Optical echoes from the moon. Nature 194, 1267 (1962).
5. Heide, F., Xiao, L., Kolb, A., Hullin, M. B. & Heidrich, W. Imaging in scattering media using correlation image sensors and sparse convolutional coding. Opt. Express 22, 26338 (2014).
6. Bamji, C. S. et al. A 0.13 μm CMOS system-on-chip for a 512 × 424 time-of-flight image sensor with multi-frequency photo-demodulation up to 130 MHz and 2 GS/s ADC. IEEE J. Solid-State Circuits 50, 303–319 (2015).
7. Martin, A. et al. Photonic integrated circuit-based FMCW coherent LiDAR. J. Lightwave Technol. 36, 4640–4645 (2018).
8. Barber, Z. W., Dahl, J. R., Mateo, A. B., Crouch, S. C. & Reibel, R. R. High resolution FMCW ladar for imaging and metrology. In Imaging and Applied Optics 2015, LM4F.2, OSA Technical Digest (online) (Optica Publishing Group, 2015). https://opg.optica.org/abstract.cfm?uri=lsc-2015-LM4F.2.
9. Whyte, R., Streeter, L., Cree, M. J. & Dorrington, A. A. Application of lidar techniques to time-of-flight range imaging. Appl. Opt. 54, 9654 (2015).
10. Lefsky, M. A., Cohen, W. B., Parker, G. G. & Harding, D. J. Lidar remote sensing for ecosystem studies: lidar, an emerging remote sensing technology that directly measures the three-dimensional distribution of plant canopies, can accurately estimate vegetation structural attributes and should be of particular interest to forest, landscape, and global ecologists. Bioscience 52, 19–30 (2002).
11. Bewley, R. H., Crutchley, S. P. & Shell, C. A. New light on an ancient landscape: lidar survey in the Stonehenge World Heritage Site. Antiquity 79, 636–647 (2005).
12. Chase, A. F. et al. Airborne LiDAR, archaeology, and the ancient Maya landscape at Caracol, Belize. J. Archaeol. Sci. 38, 387–398 (2011).
13. Miffre, A., Anselmo, C., Geffroy, S., Fréjafon, E. & Rairoux, P. Lidar remote sensing of laser-induced incandescence on light absorbing particles in the atmosphere. Opt. Express 23, 2347 (2015).
14. Collis, R. T. H. Lidar: a new atmospheric probe. Q. J. R. Meteorological Soc. 92, 220–230 (1966).
15. Badarinath, K. V. S., Kumar Kharol, S. & Rani Sharma, A. Long-range transport of aerosols from agriculture crop residue burning in Indo-Gangetic Plains – a study using LIDAR, ground measurements and satellite data. J. Atmos. Sol. Terr. Phys. 71, 112–120 (2009).
16. Xie, C. et al. Study of the scanning lidar on the atmospheric detection. J. Quant. Spectrosc. Radiat. Transf. 150, 114–120 (2015).
17. Baker, W. E. et al. Lidar-measured wind profiles: the missing link in the global observing system. Bull. Am. Meteorol. Soc. 95, 543–564 (2014).
18. Sathe, A. & Mann, J. A review of turbulence measurements using ground-based wind lidars. Atmos. Meas. Tech. 6, 3147–3167 (2013).
19. Llamazares, Á., Molinos, E. J. & Ocaña, M. Detection and tracking of moving obstacles (DATMO): a review. Robotica 38, 761–774 (2020).
20. Debeunne, C. & Vivet, D. A review of visual-LiDAR fusion based simultaneous localization and mapping. Sensors 20, 2068 (2020).
21. Liu, C., Li, S., Chang, F. & Wang, Y. Machine vision based traffic sign detection methods: review, analyses and perspectives. IEEE Access 7, 86578–86596 (2019).
22. Yoo, H. W. et al. MEMS-based lidar for autonomous driving. e&i Elektrotechnik und Informationstechnik 135, 408–415 (2018).
23. Liu, W. et al. Learning to match 2D images and 3D LiDAR point clouds for outdoor augmented reality. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW) 654–655 (IEEE, 2020).
24. Zhang, P., Du, X., Zhao, J., Song, Y. & Chen, H. High resolution flash three-dimensional LIDAR systems based on polarization modulation. Appl. Opt. 56, 3889–3894 (2017).
25. Kim, I. et al. Nanophotonics for light detection and ranging technology. Nat. Nanotechnol. 16, 508–524 (2021).
26. Rogers, C. et al. A universal 3D imaging sensor on a silicon photonics platform. Nature 590, 256–261 (2021).
27. Wang, D., Watkins, C. & Xie, H. MEMS mirrors for LiDAR: a review. Micromachines 11, 456 (2020).
28. Poulton, C. V. et al. Long-range LiDAR and free-space data communication with high-performance optical phased arrays. IEEE J. Sel. Top. Quantum Electron. 25, 1–8 (2019).
29. Hsu, C.-P. et al. A review and perspective on optical phased array for automotive LiDAR. IEEE J. Sel. Top. Quantum Electron. 27, 1–16 (2021).
30. Kim, Y. et al. Large-area liquid crystal beam deflector with wide steering angle. Appl. Opt. 59, 7462–7468 (2020).
31. Park, J. et al. All-solid-state spatial light modulator with independent phase and amplitude control for three-dimensional LiDAR applications. Nat. Nanotechnol. 16, 69–76 (2021).
32. Uchida, N. & Niizeki, N. Acoustooptic deflection materials and techniques. Proc. IEEE 61, 1073–1092 (1973).
33. Römer, G. R. B. E. & Bechtold, P. Electro-optic and acousto-optic laser beam scanners. Phys. Procedia 56, 29–39 (2014).
34. Zhao, X., Duan, G., Li, A., Chen, C. & Zhang, X. Integrating microsystems with metamaterials towards metadevices. Microsyst. Nanoeng. 5, 1–17 (2019).
35. Genevet, P., Capasso, F., Aieta, F., Khorasaninejad, M. & Devlin, R. Recent advances in planar optics: from plasmonic to dielectric metasurfaces. Optica 4, 139–152 (2017).
36. Genevet, P. et al. Ultra-thin plasmonic optical vortex plate based on phase discontinuities. Appl. Phys. Lett. 100, 013101 (2012).
37. Decker, M. et al. High-efficiency dielectric Huygens' surfaces. Adv. Opt. Mater. 3, 813–820 (2015).
38. Gao, Z. et al. Revealing topological phase in Pancharatnam–Berry metasurfaces using mesoscopic electrodynamics. Nanophotonics 9, 4711–4718 (2020).
39. Zhou, Z. et al. Efficient silicon metasurfaces for visible light. ACS Photonics 4, 544–551 (2017).
40. Song, Q., Odeh, M., Zúñiga-Pérez, J., Kanté, B. & Genevet, P. Plasmonic topological metasurface by encircling an exceptional point. Science 373, 1133–1137 (2021).
41. Yu, N. et al. Light propagation with phase discontinuities: generalized laws of reflection and refraction. Science 334, 333–337 (2011).
42. He, Q., Sun, S. & Zhou, L. Tunable/reconfigurable metasurfaces: physics and applications. Research 2019, 1–6 (2019).
43. Li, S.-Q. et al. Phase-only transmissive spatial light modulator based on tunable dielectric metasurface. Science 364, 1087–1090 (2019).
44. Zhang, Y. et al. Electrically reconfigurable non-volatile metasurface using low-loss optical phase-change material. Nat. Nanotechnol. 16, 661–666 (2021).
45. Wu, P. C. et al. Dynamic beam steering with all-dielectric electro-optic III–V multiple-quantum-well metasurfaces. Nat. Commun. 10, 1–9 (2019).
46. Akselrod, G. M., Yang, Y. & Bowen, P. Tunable liquid crystal metasurfaces. (2020). https://patentscope.wipo.int/search/en/detail.jsf?docId=WO2020190704.
47. Kim, G. & Park, Y. LIDAR pulse coding for high resolution range imaging at improved refresh rate. Opt. Express 24, 23810 (2016).
48. Pahlevaninezhad, M. et al. Metasurface-based bijective illumination collection imaging provides high-resolution tomography in three dimensions. Nat. Photonics 16, 203–211 (2022).
Acknowledgements
This work was financially supported by the European Research Council proof of concept (ERC POC) under the European Union's Horizon 2020 research and innovation program (Project i-LiDAR, grant number 874986), the CNRS prématuration, the UCA Innovation Program (2020 startup deepTech) and the French defense procurement agency under the ANR ASTRID Maturation program, grant agreement number ANR-18-ASMA-0006. C.K. and M.S. acknowledge inputs of the technical staff at the James Watt Nanofabrication Centre at Glasgow University. C. Kyrou has been supported with a postdoctoral fellowship grant by the Bodossaki Foundation (Athens, Greece).
Author contributions
Sample fabrication: C.C., V.G., C.T., P.M.C., D.T., C. Klitis, and M.S.;
conceptualization and supervision: P.G.; Instrumentation support: M.G.;
experimental realization: R.J.M. and P.G.; data collection: R.J.M., E.M.,
A.B.Y., and M.J.; data analysis: R.J.M., E.M., A.B.Y. and P.G.; manuscript
writing: R.J.M., P.G., C. Kyrou, E.M., A.B.Y., and S.K.
Competing interests
A patent has been filed on this technology: Renato J. Martins, Samira Khadir, Massimo Giudici, and Patrice Genevet, SYSTEM AND METHOD FOR IMAGING IN THE OPTICAL DOMAIN, EP21305472 (2021).
Additional information
Supplementary information The online version contains
supplementary material available at
https://doi.org/10.1038/s41467-022-33450-2.
Correspondence and requests for materials should be addressed to
Patrice Genevet.
Peer review information Nature Communications thanks the other
anonymous reviewer(s) for their contribution to the peer review of this
work. Peer review reports are available.
Reprints and permission information is available at
http://www.nature.com/reprints
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
© The Author(s) 2022
1.
2.
3.
4.
5.
6.
Terms and Conditions
Springer Nature journal content, brought to you courtesy of Springer Nature Customer Service Center GmbH (“Springer Nature”).
Springer Nature supports a reasonable amount of sharing of research papers by authors, subscribers and authorised users (“Users”), for small-
scale personal, non-commercial use provided that all copyright, trade and service marks and other proprietary notices are maintained. By
accessing, sharing, receiving or otherwise using the Springer Nature journal content you agree to these terms of use (“Terms”). For these
purposes, Springer Nature considers academic use (by researchers and students) to be non-commercial.
These Terms are supplementary and will apply in addition to any applicable website terms and conditions, a relevant site licence or a personal
subscription. These Terms will prevail over any conflict or ambiguity with regards to the relevant terms, a site licence or a personal subscription
(to the extent of the conflict or ambiguity only). For Creative Commons-licensed articles, the terms of the Creative Commons license used will
apply.
We collect and use personal data to provide access to the Springer Nature journal content. We may also use these personal data internally within
ResearchGate and Springer Nature and as agreed share it, in an anonymised way, for purposes of tracking, analysis and reporting. We will not
otherwise disclose your personal data outside the ResearchGate or the Springer Nature group of companies unless we have your permission as
detailed in the Privacy Policy.
While Users may use the Springer Nature journal content for small scale, personal non-commercial use, it is important to note that Users may
not:
use such content for the purpose of providing other users with access on a regular or large scale basis or as a means to circumvent access
control;
use such content where to do so would be considered a criminal or statutory offence in any jurisdiction, or gives rise to civil liability, or is
otherwise unlawful;
falsely or misleadingly imply or suggest endorsement, approval , sponsorship, or association unless explicitly agreed to by Springer Nature in
writing;
use bots or other automated methods to access the content or redirect messages
override any security feature or exclusionary protocol; or
share the content in order to create substitute for Springer Nature products or services or a systematic database of Springer Nature journal
content.
In line with the restriction against commercial use, Springer Nature does not permit the creation of a product or service that creates revenue,
royalties, rent or income from our content or its inclusion as part of a paid for service or for other commercial gain. Springer Nature journal
content cannot be used for inter-library loans and librarians may not upload Springer Nature journal content on a large scale into their, or any
other, institutional repository.
These terms of use are reviewed regularly and may be amended at any time. Springer Nature is not obligated to publish any information or
content on this website and may remove it or features or functionality at our sole discretion, at any time with or without notice. Springer Nature
may revoke this licence to you at any time and remove access to any copies of the Springer Nature journal content which have been saved.
To the fullest extent permitted by law, Springer Nature makes no warranties, representations or guarantees to Users, either express or implied
with respect to the Springer nature journal content and all parties disclaim and waive any implied warranties or warranties imposed by law,
including merchantability or fitness for any particular purpose.
Please note that these rights do not automatically extend to content, data or other material published by Springer Nature that may be licensed
from third parties.
If you would like to use or distribute our Springer Nature journal content to a wider audience or on a regular basis or in any other manner not
expressly permitted by these Terms, please contact Springer Nature at
onlineservice@springernature.com
... Metasurfaces, for example, could be combined with dynamic SERS. 109 Martins et al. combined metasurfaces with ultrafast low field-of-view (FoV) deflectors to achieve high frame rates (kHz) and a large FoV for LiDAR, which has potential use in dynamic SERS. ...
Article
Full-text available
Dynamic surface-enhanced Raman spectroscopy (SERS) is nowadays one of the most interesting applications of SERS, in particular for single molecule studies. In fact, it enables the study of real-time processes at the molecular level. This review summarizes the latest developments in dynamic SERS techniques and their applications, focusing on new instrumentation, data analysis methods, temporal resolution and sensitivity improvements, and novel substrates. We highlight the progress and applications of single-molecule dynamic SERS in monitoring chemical reactions, catalysis, biomolecular interactions, conformational dynamics, and real-time sensing and detection. We aim to provide a comprehensive review on its advancements, applications as well as its current challenges and development frontiers.
Article
Full-text available
Sampling is a pivotal element in the design of metasurfaces, enabling a broad spectrum of applications. Despite its flexibility, sampling can result in reduced efficiency and unintended diffractions, which are more pronounced at high numerical aperture or shorter wavelengths, e.g. ultraviolet spectrum. Prevailing metasurface research has often relied on the conventional Nyquist sampling theorem to assess sampling appropriateness, however, our findings reveal that the Nyquist criterion is insufficient guidance for sampling in metasurface. Specifically, we find that the performance of a metasurface is significantly correlated to the geometric relationship between the spectrum morphology and sampling lattice. Based on lattice-based diffraction analysis, we demonstrate several anti-aliasing strategies from visible to ultraviolet regimes. These approaches significantly reduce aliasing phenomena occurring in high numerical aperture metasurfaces. Our findings not only deepen the understanding in phase gradient metasurface but also pave the way for high numerical aperture operation down to the ultraviolet spectrum.
Article
To improve the performance of next-generation optical metasurface devices, we investigated the feasibility of practical design and fabrication processes for 3D optical metasurfaces. 3D nanoimprint lithography can replicate the multilayer pattern of a device in a single high-resolution fabrication step, demonstrating its promise for manufacturing 3D optical metasurfaces. To verify the merits of this method, we designed a novel multilayer optical metasurface 1-to-8 beam splitter that achieves high energy efficiency and uniform light-intensity distribution across the eight beams, based on the principle of the Dammann grating. The multilayer structure was first prepared on a Si wafer; the pattern was then replicated by 3D nanoimprint lithography. We also performed a sensitivity analysis of how fabrication errors influence the optical properties of the device, and the analytical results show that the fabrication process is robust. The sample fabricated with 3D nanoimprint lithography achieves 86.4% power efficiency with only 2.33% light-intensity deviation. The high device performance and low fabrication cost show that 3D nanoimprint lithography is a solid route to manufacturing optical metasurfaces with complex structures.
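As an aside, the two headline figures can be checked from measured spot powers. A minimal Python sketch; the spot powers below are illustrative placeholders chosen to echo the reported values, and the deviation metric (worst-case relative departure from the mean) is our assumption, since the abstract does not define one:

    # Figures of merit for a 1-to-8 beam splitter (illustrative sketch).
    def splitter_metrics(spot_powers, input_power):
        total = sum(spot_powers)
        efficiency = total / input_power  # fraction of input power in the 8 beams
        mean = total / len(spot_powers)
        # Assumed definition: worst-case relative deviation from the mean.
        deviation = max(abs(p - mean) for p in spot_powers) / mean
        return efficiency, deviation

    # Placeholder powers (normalized to unit input), not measured data.
    eff, dev = splitter_metrics(
        [0.1105, 0.1055, 0.108, 0.108, 0.1085, 0.1075, 0.108, 0.108], 1.0)
    print(f"power efficiency {eff:.1%}, intensity deviation {dev:.2%}")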
Article
Full-text available
van der Waals topological insulators, characterized by their high-index dielectric response, offer a promising materials platform for nanophotonics. Among these materials, Bi2Te3 has one of the highest refractive indices and extinction coefficients. However, the precise determination of Bi2Te3 optical properties remains challenging owing to its complicated physical model, which includes an oxide layer, topological conducting states, and optical anisotropy. Here, we resolve this problem and develop an accurate optical model for Bi2Te3 in a broad (450–1500 nm) spectral range. Our study shows that the oxide layer plays a major role in the optical model at these wavelengths, while the influence of topological conducting states and optical anisotropy is minimal. Our model allows us to obtain accurate Bi2Te3 optical constants and demonstrate their use in biosensors, thermal theranostics, and topological phase singularities. Moreover, we observe a polarization transition of the topological phase singularity for Bi2Se3, which opens a new direction for the development of topological phase effects. Therefore, our results open new avenues for photonic applications of Bi2Te3.
Article
Full-text available
Microscopic imaging in three dimensions enables numerous biological and clinical applications. However, high-resolution optical imaging preserved over a relatively large depth range is hampered by the rapid spread of tightly confined light due to diffraction. Here, we show that a particular disposition of the light illumination and collection paths liberates optical imaging from the restrictions imposed by diffraction. This arrangement, realized by metasurfaces, decouples lateral resolution from the depth of focus by establishing a one-to-one correspondence (bijection) along a focal line between the incident and collected light. Implementing this approach in optical coherence tomography, we demonstrate tissue imaging at a wavelength of 1.3 µm with ~3.2 µm lateral resolution, maintained nearly intact over a 1.25 mm depth of focus, with no additional acquisition or computational burden. This method, termed bijective illumination collection imaging, is general and might be adapted across various existing imaging modalities. A custom-designed metasurface for sample illumination and light collection in optical coherence tomography overcomes the usual trade-off between lateral resolution and depth of field.
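The trade-off being decoupled here can be made concrete with textbook Gaussian-optics scalings (our sketch, not the paper's derivation). Lateral resolution δx and depth of focus (DOF) both depend on the numerical aperture:

    \delta x \approx \frac{\lambda}{2\,\mathrm{NA}},
    \qquad
    \mathrm{DOF} \approx \frac{\lambda}{\mathrm{NA}^{2}}
    \;\Rightarrow\;
    \mathrm{DOF} \approx \frac{4\,\delta x^{2}}{\lambda}.

For the quoted numbers (δx ≈ 3.2 µm at λ = 1.3 µm), a conventional system would hold focus over only about 32 µm; the demonstrated 1.25 mm depth of focus is therefore roughly a 40-fold extension.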
Article
Full-text available
Light detection and ranging (LiDAR) technology, a laser-based imaging technique for accurate distance measurement, is considered one of the most crucial sensor technologies for autonomous vehicles, artificially intelligent robots and unmanned aerial vehicle reconnaissance. Until recently, LiDAR has relied on light sources and detectors mounted on multiple mechanically rotating optical transmitters and receivers to cover an entire scene. Such an architecture gives rise to limitations in terms of the imaging frame rate and resolution. In this Review, we examine how novel nanophotonic platforms could overcome the hardware restrictions of existing LiDAR technologies. After briefly introducing the basic principles of LiDAR, we present the device specifications required by the industrial sector. We then review a variety of LiDAR-relevant nanophotonic approaches such as integrated photonic circuits, optical phased antenna arrays and flat optical devices based on metasurfaces. The latter have already demonstrated exceptional functional beam manipulation properties, such as active beam deflection, point-cloud generation and device integration using scalable manufacturing methods, and are expected to disrupt modern optical technologies. In the outlook, we address the upcoming physics and engineering challenges that must be overcome from the viewpoint of incorporating nanophotonic technologies into commercially viable, fast, ultrathin and lightweight LiDAR systems. This Review highlights the technological challenges linked to the application of nanophotonics for light detection and ranging (LiDAR).
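To make the ranging principle concrete, a pulsed (direct-detection) LiDAR converts the round-trip delay of each echo into distance. A minimal Python sketch, not tied to any specific device in this Review:

    # Direct time-of-flight ranging: a pulse's round-trip delay t
    # gives the target range as d = c * t / 2.
    C = 299_792_458.0  # speed of light, m/s

    def tof_to_range(delay_s):
        return C * delay_s / 2.0

    # A ~33 ns round trip corresponds to roughly 5 m.
    print(f"{tof_to_range(33.3e-9):.2f} m")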
Article
Full-text available
Active metasurfaces promise reconfigurable optics with drastically improved compactness, ruggedness, manufacturability and functionality compared to their traditional bulk counterparts. Optical phase-change materials (PCMs) offer an appealing material solution for active metasurface devices with their large index contrast and non-volatile switching characteristics. Here we report a large-scale, electrically reconfigurable non-volatile metasurface platform based on optical PCMs. The optical PCM alloy used in the devices, Ge2Sb2Se4Te (GSST), uniquely combines giant non-volatile index modulation capability, broadband low optical loss and a large reversible switching volume, enabling notably enhanced light–matter interactions within the active optical PCM medium. Capitalizing on these favourable attributes, we demonstrated quasi-continuously tuneable active metasurfaces with record half-octave spectral tuning range and large optical contrast of over 400%. We further prototyped a polarization-insensitive phase-gradient metasurface to realize dynamic optical beam steering. An electrically reconfigurable optical metasurface using a Ge2Sb2Se4Te phase change material shows half an octave spectral tuning and promising performances for optical beam steering applications.
Article
Full-text available
Accurate three-dimensional (3D) imaging is essential for machines to map and interact with the physical world [1,2]. Although numerous 3D imaging technologies exist, each addressing niche applications with varying degrees of success, none has achieved the breadth of applicability and impact that digital image sensors have in the two-dimensional imaging world [3–10]. A large-scale two-dimensional array of coherent detector pixels operating as a light detection and ranging system could serve as a universal 3D imaging platform. Such a system would offer high depth accuracy and immunity to interference from sunlight, as well as the ability to measure the velocity of moving objects directly [11]. Owing to difficulties in providing electrical and photonic connections to every pixel, previous systems have been restricted to fewer than 20 pixels [12–15]. Here we demonstrate the operation of a large-scale coherent detector array, consisting of 512 pixels, in a 3D imaging system. Leveraging recent advances in the monolithic integration of photonic and electronic circuits, a dense array of optical heterodyne detectors is combined with an integrated electronic readout architecture, enabling straightforward scaling to arbitrarily large arrays. Two-axis solid-state beam steering eliminates any trade-off between field of view and range. Operating at the quantum noise limit [16,17], our system achieves an accuracy of 3.1 millimetres at a distance of 75 metres when using only 4 milliwatts of light, an order of magnitude more accurate than existing solid-state systems at such ranges. Future reductions of pixel size using state-of-the-art components could yield resolutions in excess of 20 megapixels for arrays the size of a consumer camera sensor. This result paves the way for the development and proliferation of low-cost, compact and high-performance 3D imaging cameras that could be used in applications from robotics and autonomous navigation to augmented reality and healthcare.
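The direct velocity measurement and sunlight immunity claimed above are generic properties of coherent (heterodyne) detection. Assuming the common FMCW variant (the abstract does not name the modulation scheme), a linear optical chirp of slope γ (in Hz/s) reflected from a target at range R produces a beat note proportional to range, while target velocity v adds a Doppler shift:

    f_{\mathrm{beat}} = \frac{2\gamma R}{c},
    \qquad
    f_{\mathrm{Doppler}} = \frac{2v}{\lambda}.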
Article
Full-text available
Relying on the local orientation of nanostructures, Pancharatnam–Berry metasurfaces are currently enabling a new generation of polarization-sensitive optical devices. A systematic mesoscopic description of topological metasurfaces is developed, providing a deeper understanding of the physical mechanisms leading to the polarization-dependent breaking of translational symmetry, in contrast with propagation-phase effects. These theoretical results, along with interferometric experiments, contribute to the development of a solid analytical framework for arbitrary polarization-dependent metasurfaces.
Article
Full-text available
Spatial light modulators are essential optical elements in applications that require the ability to regulate the amplitude, phase and polarization of light, such as digital holography, optical communications and biomedical imaging. With the push towards miniaturization of optical components, static metasurfaces have emerged as competent alternatives, and these have evolved into active metasurfaces in which light-wavefront manipulation can be performed in a time-dependent fashion. The active metasurfaces reported so far, however, still show incomplete phase modulation (below 360°). Here we present an all-solid-state, electrically tunable and reflective metasurface array that can generate a specific phase or a continuous sweep between 0 and 360° at an estimated rate of 5.4 MHz while independently adjusting the amplitude. The metasurface features 550 individually addressable nanoresonators in a 250 × 250 μm² area with no micromechanical elements or liquid crystals. A key feature of our design is the presence of two independent control parameters (top and bottom gate voltages) in each nanoresonator, which are used to adjust the real and imaginary parts of the reflection coefficient independently. To demonstrate this array's use in light detection and ranging, we performed a three-dimensional depth scan of an emulated street scene that consisted of a model car and a human figure up to a distance of 4.7 m.
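The two-knob control described above amounts to targeting an arbitrary point in the complex plane of the reflection coefficient. A toy Python sketch (the mapping from gate voltages to amplitude and phase is hypothetical and purely illustrative): sweeping the phase at fixed amplitude traces a circle in the (Re r, Im r) plane, which is why two independent parameters are needed for full coverage.

    import cmath

    # Target complex reflection coefficient r = A * exp(i * phi).
    def reflection(amplitude, phase_deg):
        return amplitude * cmath.exp(1j * cmath.pi * phase_deg / 180.0)

    # A phase sweep at fixed amplitude traces a circle in the complex plane.
    for phi in range(0, 361, 90):
        r = reflection(0.8, phi)
        print(f"phi = {phi:3d} deg -> Re(r) = {r.real:+.2f}, Im(r) = {r.imag:+.2f}")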
Article
Full-text available
A slim beam deflector that offers both a large steering angle and a large area can be very useful in various applications. However, a smaller electrode pitch (for a larger steering angle) and a larger active area are in a trade-off relation, owing to the limited number of control channels in an electrically tunable beam-deflector system. For a large steering angle in the active area where the actual diffraction occurs, an indium tin oxide electrode with a 2 µm pitch was implemented through stepper lithography. A via-hole process was developed to recover the active area reduced by the small electrode pitch. We developed a beam deflector with 7200 controllable channels in an active area of 14.4 mm × 14.4 mm. The maximum steering angle is 7.643° at a wavelength of 532 nm.
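As a consistency check (under our assumption that the finest writable phase-grating period is two electrode pitches, Λ_min = 2 × 2 µm = 4 µm), the grating equation reproduces the reported maximum angle, and 7200 channels × 2 µm indeed spans the 14.4 mm aperture:

    \sin\theta_{\max} = \frac{\lambda}{\Lambda_{\min}}
    = \frac{0.532\,\mu\mathrm{m}}{4\,\mu\mathrm{m}} \approx 0.133
    \;\Rightarrow\; \theta_{\max} \approx 7.64^{\circ}.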
Article
Upon reflection, modulate phase. Metasurfaces provide a platform to fabricate optical devices in a compact form much thinner than their corresponding bulk optical components. Recognizing that metasurfaces are also open systems interacting with their environment, Song et al. designed a metasurface that exploits those non-Hermitian properties such that they can encircle an exceptional point. Subsequent scattering from such an exceptional point was shown to be polarization dependent, thus providing an additional control knob in designing metasurfaces for wavefront engineering. —ISO
Article
This paper reviews the state of the art of Light Detection and Ranging (LiDAR) sensors for automotive applications, particularly for automated vehicles, focusing on recent advances in integrated LiDAR and on one of its key components: the Optical Phased Array (OPA). LiDAR is still a sensor that divides the automotive community, with several automotive companies investing in it and some stating that LiDAR is a 'useless appendix'. However, no single sensor technology is currently able to robustly and completely support automated navigation. Therefore LiDAR, with its capability to map the vehicle surroundings in three dimensions (3D), is a strong candidate to support Automated Vehicles (AVs). This manuscript highlights current AV sensor challenges and analyses the strengths and weaknesses of the perception sensors currently deployed. It then discusses the main LiDAR technologies emerging in automotive, focusing on integrated LiDAR, the challenges associated with steering a light beam on a chip, and the use of Optical Phased Arrays, before finally discussing the factors currently hindering the adoption of silicon-photonics OPAs and their future research directions.
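For reference, on-chip OPA beam steering follows the textbook phased-array relation (a generic sketch, not specific to this review): with emitter pitch d and a phase increment Δφ applied between adjacent emitters, the main lobe steers to

    \sin\theta = \frac{\lambda\,\Delta\varphi}{2\pi d},

which is why a sub-wavelength emitter pitch, difficult to realize in silicon photonics, is required for wide, grating-lobe-free steering.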