DEEP NEURAL OPERATOR ENABLED CONCURRENT MULTITASK
DESIGN FOR MULTIFUNCTIONAL METAMATERIALS UNDER
HETEROGENEOUS FIELDS
Doksoo Lee
Department of Mechanical Engineering
Northwestern University
Evanston, IL 60208
dslee@u.northwestern.edu
Lu Zhang
Department of Mathematics
Lehigh University
Bethlehem, PA 18015
luz319@lehigh.edu
Yue Yu
Department of Mathematics
Lehigh University
Bethlehem, PA 18015
yuy214@lehigh.edu
Wei Chen*
Department of Mechanical Engineering
Northwestern University
Evanston, IL 60208
weichen@northwestern.edu
Keywords multifunctional metamaterials, neural operator, heterogeneous fields, data-driven design, concurrent
optimization
ABSTRACT
Multifunctional metamaterials (MMM) bear promise as next-generation material platforms supporting miniaturization and customization. Despite many proof-of-concept demonstrations and the proliferation of deep-learning-assisted design, the grand challenges of inverse design for MMM, especially those involving heterogeneous fields possibly subject to either mutual meta-atom coupling or long-range interactions, remain largely under-explored. To this end, we present a data-driven design framework that streamlines the inverse design of MMMs involving heterogeneous fields. A core enabler is the implicit Fourier neural operator (IFNO), which predicts heterogeneous fields distributed across a metamaterial array, in general at odds with homogenization assumptions, in a parameter- and sample-efficient fashion. Additionally, we propose a standard formulation of the inverse problem covering a broad class of MMMs, and a gradient-based multitask concurrent optimization that identifies a set of Pareto-optimal architecture-stimulus (A-S) pairs. Fourier multiclass blending is proposed to synthesize inter-class meta-atoms anchored on a set of geometric motifs, while enjoying training-free dimension reduction and built-in reconstruction. Interlocking the three pillars, the framework is validated on a light-by-light programmable plasmonic nanoantenna, whose design involves a vast space jointly spanned by quasi-freeform supercells, maneuverable incident phase distributions, and conflicting figures of merit involving on-demand localization patterns. Accommodating all these challenges without a-priori simplifications, our framework could propel future advancements of MMM.
1 Introduction
Metamaterials are engineered architectured systems that support superior, often exotic, functionalities not found in, or beyond those of, conventional systems [1]. The emergent behaviors usually stem from structure rather than composition [2]. Among many, multifunctional metamaterials (MMM) form a branch of next-generation emerging material systems that support miniaturization and customization [3–5]. They hold promise for making a large societal impact, including in medicine, defense, and energy. A plethora of demonstrations of MMM have been reported in the literature [6–15].
On one end, recent advancements in fabrication techniques have fueled the dissemination of multiscale systems often possessing sophisticated architectural details [16–20]. In contrast, design automation tools, particularly for MMM, are still underway. The lack of solid, rigorous design support has arguably been a root cause of the current design practice on MMM, which largely resorts to a-priori simplifications of designable entities, e.g., material, topology of unit cells, and tailorable incident light. Pre-specifying any of them not only risks a substantial compromise of what MMM
*The corresponding author.
arXiv:2312.02403v1 [cs.CE] 5 Dec 2023
could exclusively offer, but also impedes shedding light on the opaque inter-relationships among material, architecture, and stimulus [21]. In addition, the prevalent trade-off across multiple conflicting functionalities of MMM has rarely received the in-depth investigation it deserves. Neither has rigorous decision making under the trade-off, arguably due to the absence of a standard formulation of the inverse problem dedicated to MMM. Addressing all the design challenges at once remains an open problem, while it has certainly been a core quest for the design of next-generation metamaterials to embrace challenges, more than conventional approaches do, without losing tractability.
Since the inception of metamaterials [1], a core quest of metamaterials design has been to achieve the sought-after ideal of multiscale architectures through systematic, rational decision making. In doing so, recent years have seen a new wave of design paradigm: data-driven design for metamaterials and multiscale systems. The concept broadly refers to a design practice built on a common thread: discovering patterns from a finite collection of observational data, rather than domain knowledge, and then harnessing the patterns to expedite the otherwise arduous multiscale design procedure [22]. Evidenced by a growing volume of reviews from diverse perspectives [22–30], the new paradigm has drawn immense attention across a broad realm of engineering sciences, perhaps with the vision that it may unlock the potential of metamaterials.
The workhorse of the rising paradigm has been data-driven surrogates that offer on-the-fly, sufficiently accurate prediction of quantities of interest, which by extension accelerates the resource-intensive, repetitive computations of solving governing equations for inverse design. The surrogate usually involves the mapping from a parameterized meta-atom to the corresponding homogenized, or "aggregated", spectral responses, e.g., scattering parameters, that are computed from the raw physical field. Grounded on the universal approximation theorem [31–33], there always exists a data-driven model that approximates the associated mapping between input and output to a desired accuracy, given a sufficient amount of training data, a low-dimensional output, and fine-tuning of hyperparameters [22]. This camp of design approaches, which we dub "the palette approach" [34] throughout this article, has underpinned foundational achievements reported in the communities [35–41].
Behind the proliferation, relatively sparse attention has been given to its intrinsic drawbacks. First, it typically resorts to the "shortcut" modeling strategy, where the model has access only to homogenized spectral responses at the downstream, which are at most a proxy of the high-dimensional fields available in full-wave analysis, e.g., rigorous coupled wave analysis [42], finite-difference time-domain [43], and the finite element method [44]. Skipping the raw physical fields, shortcut-based modeling hosts a disconnect from the domain knowledge that has been progressively forged by the communities, such as spatiotemporal causal effects formally described by either differential equations (e.g., Maxwell's equations) or analytic models. Without proper supervision, the model would be black-box and prone to merely imitating a superficial, statistical relation specific to the given training data, rather than truly absorbing the fundamental light-matter interactions that are universal. Second, another underpinning of the palette approach is the assumption of scale separation, where (i) the multiscale system of interest is either deterministic under periodicity or stochastic following the ergodic principle, thus stochastically periodic [22], and (ii) uncoupled operation among non-identical neighboring meta-atoms can be justified with an acceptable amount of deviation from the spectral responses under periodic boundary conditions. These two widely used conditions, however, are at odds with many advanced metamaterials that are subject to (i) long-range interaction [45–47], (ii) mutual coupling among non-identical meta-atoms [48–50], (iii) fabrication uncertainty (e.g., heterogeneous local material defects; freeform meta-atom uncertainty) [51–53], (iv) engineered disorder [54–56], and (v) heterogeneously modulated incident waves [57–59]. For those cases, the conceptualization, corroborated by follow-up experimental demonstrations, was presented long ago; yet design automation with any optimality guarantee still remains open to question. Developing a suite of inverse design tools dedicated to those systems could pave an avenue to further explore the rich design freedom in deterministic/stochastic multiscale architectures, succeeding the recent achievements that have been primarily driven by the palette approach.
We claim that releasing the potential of MMM would in part be enabled by embracing the vast design space as is, jointly formed by multiple field-type design entities, followed by rigorous inverse optimization directly involving the set of on-demand functionalities at the system level, as well as their commonplace yet overlooked trade-off. To do so, we propose a data-driven design framework dedicated to MMM, with particular focus on programmable metamaterials that embody the multifunctionality of interest through a functional switch under modulation of external stimuli. The proposed framework builds on three pillars. First, we construct a field-to-field surrogate through the implicit Fourier neural operator (IFNO) [60–62] (Section 2.1). Neural operators [60, 63–69] learn a surrogate mapping between infinite-dimensional function spaces and thus feature resolution independence and generalizability to different input instances, in contrast to classical neural networks, e.g., U-net [70], that only handle a mapping between finite-dimensional discrete vectors with a prefixed resolution. In our context, the neural operator offers the joint mapping from a pair of architecture (A) field (e.g., meta-atom and its arrangement) and stimulus (S) field (e.g., modulated incident phase), as the input instance, to the corresponding output field that could be heterogeneous beyond periodic boundary conditions. Once trained, the neural operator offers on-the-fly full-field prediction, thus enjoying physical transparency and generality to unseen field-type A-S pairs, all of which are challenging for the palette approach
to achieve. Second, we present a standard formulation of the inverse problem for a class of MMM, where a system of interest is open to multiple types of designable entities and multiple target functional states (Section 2.2). Tackling the problem enables us to identify a set of Pareto-optimal architecture-stimulus pairs with respect to a user-defined set of functionalities, and to address task importance thereof, whether it is given before or after the optimization. Third, we conceive Fourier multiclass blending (FMB), inspired by the topological encoding method for meta-atoms [71] and class remixing [72], in order to create a large amount of quasi-freeform meta-atoms in relation to canonical families (Section 2.3). The proposed meta-atom generation scheme accommodates domain knowledge and produces inter-class instances specified through a compact yet expressive meta-atom representation that enjoys training-free dimension reduction and built-in reconstruction.
To show the efficacy of the framework, the design of spatially addressable plasmonic metasurfaces [58, 59, 73–75] is revisited as a case study, where the design goal is to embody a "metasurface chessboard" maneuverable with spatial light modulators (Section 3). As opposed to prior reports that simplified the problem setting to make it amenable, we tackle all the core challenges simultaneously, namely:
• drastically varying fields involving heterogeneously distributed plasmonic hotspots
• opaque long-range interaction across meta-atoms
• the vast input space jointly formed by both a quasi-freeform metasurface supercell and a spatially modulated phase field
• a trade-off across the multiple functionalities of interest.
Through the case study, we also present the key findings, including:
• the spatially drastically varying fields can be accurately predicted through the proposed IFNO architecture, with the local energy hotspots robustly reproduced
• the machine learning (ML) surrogate, combined with gradient-based multitask concurrent optimization, helps navigate the vast stimulus-architecture joint space in a tractable way
• inverse design of multifunctional devices forms a decision-making problem on task importance specificity, which can be addressed by a set of diverse Pareto-optimal architectures and one-to-many A-S solutions.
In addition to the results in our case study, we delineate some feasible extensions of the proposed framework that require only minor modifications, in terms of alternative input representations for the neural operator, model transparency thereof, geometrically aperiodic metasurface arrays, and other optimization methods (Section 4.4).
2 Method
Figure 1 gives a visual abstract of the proposed framework. The three key pillars are (1) field-to-field modeling with the implicit Fourier neural operator (Section 2.1), (2) multitask concurrent A-S optimization (Section 2.2), and (3) quasi-freeform shape generation through Fourier multiclass blending (Section 2.3).
2.1 Modeling: Field-to-Field Modeling through Implicit Fourier Neural Operator
2.1.1 Operator Learning
An operator refers to a mapping, or a function, that acts on elements of one space and generates elements of another space (or, optionally, of the same space) [76]. The goal of operator learning is to learn a mapping between infinite-dimensional function spaces using a finite collection of observational data. As such, the trained operator can serve as an efficient surrogate for PDE solvers [63–65]. Compared with classical PDE solvers and physics-informed neural networks (PINNs), the neural operator approach requires no preliminary knowledge of the governing equation [61] and, more importantly, is generalizable to different input instances. That means, once the network is trained, it provides on-the-fly predictions for new and unseen material architectures and stimuli.
Figure 1: A visual overview of the proposed framework.

We briefly state a formal description of operator learning. Let $\Omega \subset \mathbb{R}^d$ and $x \in \Omega$ be the spatial domain of the PDE and an arbitrary point within the domain, respectively. We consider two function spaces: (1) $\mathcal{A} = \mathcal{A}(\Omega; \mathbb{R}^{d_a})$, a Banach space spanned by input functions taking values in $\mathbb{R}^{d_a}$, and (2) $\mathcal{U} = \mathcal{U}(\Omega; \mathbb{R}^{d_u})$, another Banach space spanned by solution functions taking values in $\mathbb{R}^{d_u}$. The overarching goal of operator learning is to construct a (non-linear) mapping between the two function spaces by learning a solution operator $\mathcal{M}^\dagger : \mathcal{A} \to \mathcal{U}$, given a finite set of function pair observations $\{a_j(x), u_j(x)\}$, $x \in \Omega$. The ground-truth, infinite-dimensional map $\mathcal{M}^\dagger$ is approximated through $\mathcal{M}$ endowed with a parameterization $\theta \in \Theta$ following the mapping $\mathcal{M}_\theta : \mathcal{A} \to \mathcal{U}$, where $\Theta$ is the finite-dimensional parameter space for the operator. The optimal surrogate operator, $\mathcal{M}$, is then obtained by solving for $\theta$ via the optimization problem:

$$\theta^* = \operatorname*{argmin}_{\theta \in \Theta} \; \mathbb{E}_a\!\left[\mathcal{L}\!\left(\mathcal{M}_\theta(a), \mathcal{M}^\dagger(a)\right)\right], \quad (1)$$
where $\mathcal{L}(\cdot, \cdot)$ is the regression loss, e.g., mean squared error. When passing observational data to the neural operator, the collection involves a finite discretization. Importantly, the learned operator $\mathcal{M}_{\theta^*}$ offers prediction at any $x \in \Omega$, including points not necessarily covered by the finite discretization. This explicates why operator learning, as opposed to mainstream image-to-image networks like U-net [70], exclusively enjoys discretization invariance: solutions can be transferred to different uniform grids (e.g., low-resolution to high-resolution). In addition, some reports claim that operator learning also stands out regarding sample-/parameter-efficiency among its competitors [77].
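In practice, the expectation in Eq. 1 is estimated on the finite collection of discretized function pairs. A minimal sketch of this empirical-risk estimate (illustrative only; `model` and `empirical_risk` are our own names standing in for any discretized surrogate operator $\mathcal{M}_\theta$ and loss):

```python
import numpy as np

def empirical_risk(model, pairs):
    """Monte-Carlo estimate of Eq. 1: mean squared error of the surrogate
    operator over sampled function pairs {(a_j, u_j)} on a shared grid."""
    return float(np.mean([np.mean((model(a) - u) ** 2) for a, u in pairs]))

# Toy check: a surrogate that doubles its input fits u = 2a exactly.
grid_a = np.linspace(0.0, 1.0, 8)
pairs = [(grid_a, 2.0 * grid_a)]
loss = empirical_risk(lambda a: 2.0 * a, pairs)
```

Minimizing this quantity over the operator's parameters is what Eq. 1 prescribes once the function pairs are sampled on a grid.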
2.1.2 Neural Operator
Neural operator offers a powerful means to approximate non-linear operators with data. The general construction
principle follows that of conventional, standard neural networks: taking a layer as a building block that involves some
linear transformations (e.g., weight and bias), followed by nonlinear activations (e.g., ReLU or sigmoid), and then build
a cascade of such blocks to boost modeling capability. While the standard neural networks capture the dependencies
between neurons as discrete linear combinations, the integral neural operator (INO) architectures, as originally proposed
by Li et al. [
63
] and extended in [
60
,
64
,
66
,
67
], model the long-range dependencies through an integral operator. An
L-layer INO has the following form:
Mθ[a;θ](x) := Q◦IL◦ · ·· ◦ I1◦ R[a](x),(2)
where $\mathcal{R}$ and $\mathcal{Q}$ are shallow neural networks that provide point-wise mappings from a low-dimensional vector to a high-dimensional vector, and vice versa. Each intermediate layer, $\mathcal{I}_l$, consists of a local linear transformation operator $W_l$, an integral (nonlocal) kernel operator $\mathcal{K}_l$, and an activation function $\sigma$. NO architectures mainly differ in the design of their intermediate-layer update rules. As a popular example, when considering problems with a structured domain $\Omega$ and a uniform grid, the Fourier neural operator (FNO) is widely used [64], where the integral kernel operators $\mathcal{K}_l$ are linear transformations in frequency space. The core idea of FNO is to parameterize the kernel integral operator $\mathcal{K}_l$ as a convolution operator defined in Fourier space. Formally, let $\mathcal{F}$ denote the Fourier transform of a function $f : \Omega \to \mathbb{R}^{d_v}$ and $\mathcal{F}^{-1}$ its inverse. FNO takes a sequence of Fourier layer blocks, such that the $(l+1)$th-layer feature function $v_{l+1}(x)$ is calculated from the $l$th-layer feature function $v_l(x)$ via:

$$v_{l+1}(x) = \mathcal{I}^{FNO}_l[v_l(x)] := \sigma\!\left(W_l v_l(x) + \mathcal{F}^{-1}\!\left[\phi_l \cdot \mathcal{F}[v_l(\cdot)]\right](x) + c_l\right), \quad (3)$$
where $W_l$, $c_l$, and $\phi_l$ are trainable matrices to be optimized. Compared to other NO architectures, FNO inherits the advantages of INOs on resolution independence while accelerating the computation of the kernel integral operator $\mathcal{K}$ using the Fast Fourier Transform (FFT). Its approximation capabilities have been demonstrated on a diverse array of benchmarks [60, 77–79].
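A single Fourier layer in Eq. 3 can be sketched as follows. This is a 1D, NumPy-only illustration for readability (the paper's network is 2D and implemented in PyTorch); the function and argument names are ours:

```python
import numpy as np

def fourier_layer(v, W, phi, c, sigma=np.tanh):
    """One FNO layer (Eq. 3): local linear term plus a spectral convolution
    that acts only on the first few retained Fourier modes.

    v   : (n, d) features sampled on n uniform grid points, d channels
    W   : (d, d) local linear transformation W_l
    phi : (m, d, d) complex weights phi_l on the m retained Fourier modes
    c   : (d,) bias c_l
    """
    n, _ = v.shape
    v_hat = np.fft.rfft(v, axis=0)                 # F[v], channel-wise FFT
    out_hat = np.zeros_like(v_hat)
    for k in range(min(phi.shape[0], v_hat.shape[0])):
        out_hat[k] = phi[k] @ v_hat[k]             # phi . F[v] on mode k
    conv = np.fft.irfft(out_hat, n=n, axis=0)      # F^{-1}[phi . F[v]]
    return sigma(v @ W.T + conv + c)
```

Setting `phi` to zero degenerates the layer to a point-wise network sigma(Wv + c), which makes the role of the nonlocal spectral term explicit.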
2.1.3 Implicit Fourier Neural Operator
Given a sufficient amount of data, FNOs are universal in the sense that they can approximate any continuous operator to a desired accuracy [68]. However, in the vanilla FNO architecture (3), different integral layers have different parameters $W_l$, $c_l$, and $\phi_l$, which makes the number of trainable parameters increase as the network gets deeper. As a result, training FNOs may become challenging and potentially prone to over-fitting in small-data regimes [60, 67]. To overcome these limitations, implicit neural operator architectures were proposed [60, 80, 81], where skip connections are added between the integral layers and all layers share the same set of trainable parameters, $W$, $c$, and $\phi$. In particular, the architecture of Eq. 3 is modified as:

$$v_{l+1}(x) = \mathcal{I}^{IFNO}[v_l(x)] := v_l(x) + \frac{1}{L}\,\sigma\!\left(W v_l(x) + \mathcal{F}^{-1}\!\left[\phi \cdot \mathcal{F}[v_l(\cdot)]\right](x) + c\right). \quad (4)$$
As a result, the NO architecture has a substantially smaller number of trainable parameters, while still serving as a universal approximator for fixed-point PDE solvers [60]. Moreover, the above architecture can also be interpreted as a discretized nonlocal differential equation, and consequently allows for the shallow-to-deep initialization technique, where optimal parameters learned on shallow networks are taken as (quasi-optimal) initial guesses for deeper networks [67]. This technique was found helpful in enhancing network consistency across different layer numbers and mitigating the vanishing gradient issue in the deep-layer limit [67, 82].
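The weight-tied update of Eq. 4 differs from Eq. 3 only by the identity skip connection and the 1/L scaling. A NumPy sketch (1D and illustrative, with our own names; the shared parameters make the parameter count independent of depth L):

```python
import numpy as np

def ifno_layer(v, W, phi, c, L, sigma=np.tanh):
    """One implicit Fourier layer (Eq. 4): identity skip connection plus a
    1/L-scaled update; W, phi, c are shared across all L layers."""
    n, _ = v.shape
    v_hat = np.fft.rfft(v, axis=0)
    out_hat = np.zeros_like(v_hat)
    for k in range(min(phi.shape[0], v_hat.shape[0])):
        out_hat[k] = phi[k] @ v_hat[k]
    conv = np.fft.irfft(out_hat, n=n, axis=0)
    return v + (1.0 / L) * sigma(v @ W.T + conv + c)

def ifno_forward(v0, W, phi, c, L):
    """Cascade the same layer L times (the implicit, weight-tied network)."""
    v = v0
    for _ in range(L):
        v = ifno_layer(v, W, phi, c, L)
    return v
```

Because every layer reuses the same (W, phi, c), deepening the cascade refines the fixed-point iteration rather than adding parameters.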
In this paper we consider $\Omega$ a square-shaped 2D measurement domain with uniform grids, and employ the implicit Fourier neural operator (IFNO) [60] as the surrogate solver. In practice, the implementation of FNO and its variants is accelerated through the FFT, which incurs quasilinear time complexity, provided that the discretization over $\Omega$ is uniform. Otherwise, special strategies dedicated to unconventional domains [83, 84] have been proposed; these are beyond the scope of the current work.
Herein we introduce the architectural details of the IFNO network $\mathcal{M}_{AS}$ conceived for this work. The concatenated vector of the 2D coordinate $x$, the architecture function, and the stimulus function is set as the input field, and the corresponding electric field as the output function. As a result, the input function $a$ is vector-valued with $a(x) \in \mathbb{R}^4$, and the output $u$ is scalar-valued, i.e., $u(x) \in \mathbb{R}$. In the proposed network architecture, the key model parameters that primarily determine the model complexity are as follows:
• modes: the number of Fourier modes along each dimension
• width: the number of channels
• lastwidth: the number of input channels at the last fully connected linear layer
• depth: the number of cascaded Fourier layers
Through a parameter study on the model parameters, we use modes = 64, width = 32, lastwidth = 64, and depth = 8. The parameter study on depth can be found in Appendix A. This setting yields 8,392,001 parameters, all of which are trainable. The hyperparameter setting includes $lr = 0.01$ as the initial learning rate, $\gamma = 0.5$ as the scheduler exponent, $wd = 10^{-5}$ as the weight decay in the Adam optimizer [85], $n_{epochs} = 500$ as the number of epochs, and $n_{step} = 100$ as the period of steps for updating the scheduler. All training is performed on an NVIDIA Tesla T4 GPU card with 16GB memory. Further details associated with the training, e.g., scheduling [86], are stated in Appendix A.
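For concreteness, the step-decay schedule implied by these hyperparameters (halving the learning rate every $n_{step} = 100$ epochs, as PyTorch's `StepLR` with `step_size=100`, `gamma=0.5` would do) can be written as a one-line function; this is our own reconstruction of the stated setting, not the authors' code:

```python
def scheduled_lr(epoch, lr0=0.01, gamma=0.5, nstep=100):
    """Step-decay learning rate implied by the stated hyperparameters:
    lr is multiplied by gamma = 0.5 every nstep = 100 epochs."""
    return lr0 * gamma ** (epoch // nstep)
```

Over the stated 500 epochs, the learning rate thus takes five plateaus, ending at 0.01 * 0.5^4 = 6.25e-4.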
2.2 Optimization: An Inverse Problem Formulation on Multitask Concurrent Optimization of Multifunctional Devices
2.2.1 The Inverse Problem on Programmable Material Systems
The end goal of this work is to present an inverse design framework for multifunctional systems that: (1) exhibit an on-demand functional switch subject to dynamic stimuli and (2) are possibly open to multiple types of design entities, e.g., architecture (A), material (M), and stimulus (S), not limited to architecture (i.e., meta-atom and its tessellations). The proposed design framework is expected to navigate the vast design space, jointly formed by M-A-S entities, with possibly conflicting target tasks addressed along the way. More specifically, we are interested in A-S concurrent optimization for plasmonic metasurfaces that are programmable under spatial phase modulation [58, 59, 73–75], which offers a means to embody multifunctionality with a geometrically stationary metasurface. In addition, the presented framework is applicable to more than two target functional states, as opposed to a plethora of demonstrations of on-off type programmability [87–93], mostly presented without the trade-off investigated. The case study of main interest, which involves all the challenges, is elaborated in Section 3.
2.2.2 A Generic Formalism for Multitask Architecture-Stimulus Concurrent Optimization
As one form of support for multifunctionality, a functional switch in programmable material systems involves a finite set of target states, each of which can generally be conceptualized as a task. For a single task $t$, let $\theta_A \in \Theta_A$ be the parameterization specifying architecture, e.g., a set of bar lengths for I-beam meta-atoms, $\theta_S \in \Theta_S$ be that given to stimulus, and $\mathcal{J}_t : \Theta_A \times \Theta_S \to \mathbb{R}$ be the task-specific objective that quantifies system performance with regard to task $t \in \mathcal{T} = \{1, \cdots, T\} \subset \mathbb{N}$. For each task, the A-S design problem can be formulated as:

$$\left((\theta_A)^*_t, (\theta_S)^*_t\right) = \operatorname*{argmin}_{(\theta_A, \theta_S) \in \Theta_A \times \Theta_S} \mathcal{J}_t(\theta_A, \theta_S), \quad (5)$$
where the superscript $(\cdot)^*$ and subscript $(\cdot)_t$ indicate optimality and the associated task index, respectively. Given a single task $t$ of concern, an A-S pair $((\theta_A)^*_t, (\theta_S)^*_t)$ can be optimized by navigating the joint space $\Theta_A \times \Theta_S$. By extension, given a set of tasks, optimized A-S pairs can be found individually. Seemingly straightforward, the formulation of Eq. 5 turns out to be ineffective for real-world deployment of multifunctional devices. It is highly likely that A-S pairs optimized for individual tasks would involve task-specific optimal architectures; demanding different architectures for different functionalities would hinder practical deployment. It is warranted to find a single architecture that supports multiple functionalities under dynamic stimuli. To do so, we alternatively consider the following optimization problem:

$$\boldsymbol{\theta}^* = (\theta^*_A, \boldsymbol{\theta}^*_S) = \operatorname*{argmin}_{\theta_A \in \Theta_A,\, \boldsymbol{\theta}_S \in \boldsymbol{\Theta}_S} \boldsymbol{\mathcal{J}}(\theta_A, \boldsymbol{\theta}_S) = \operatorname*{argmin}_{\theta_A \in \Theta_A,\, \boldsymbol{\theta}_S \in \boldsymbol{\Theta}_S} \boldsymbol{\mathcal{J}}\!\left(\theta_A, (\theta_S)_1, \cdots, (\theta_S)_T\right), \quad (6)$$
where $\boldsymbol{\theta}$ is the concatenation of the A-S entities; $\boldsymbol{\theta}_S = \{(\theta_S)_1, \cdots, (\theta_S)_T\}$ denotes the set of stimuli; and $\boldsymbol{\mathcal{J}}$ is the multitask objective, formulated as a vector-valued function of the individual figures of merit (FoMs) $\mathcal{J}_t$. The way to specify $\boldsymbol{\mathcal{J}}$ with respect to multiple FoMs is not unique. A conventional, empirical approach is to construct a so-called scalarized objective as a proxy, which casts the multiobjective optimization problem into a single-objective one as follows:

$$\min_{(\theta_A, \boldsymbol{\theta}_S) \in \Theta_A \times \boldsymbol{\Theta}_S} \sum_{t=1}^{T} c_t\, \mathcal{J}_t(\theta_A, \boldsymbol{\theta}_S), \quad (7)$$
where $0 \le c_t \le 1$ is the weighting factor subject to the linear constraint $\sum_{t=1}^{T} c_t = 1$. Provided that all the FoMs have been properly scaled relative to one another, the key is how to assign the weights $c_t$ under the constraint. A large body of prior work has formed a dichotomy between static weighting, where $c_t$ remains constant during optimization, and dynamic weighting based on some heuristics [94]. Whichever is chosen, it is inevitable for the weighted formulation to involve a grid search or heuristics unless task importance has been declared. Such is rarely the case; hence this line of approaches leaves room for subjectivity in terms of global optimality.
The optimization goal of Eq. 6, for generic purposes, should ideally be to find a set of solutions $P_{\boldsymbol{\theta}}$ that meets Pareto optimality [95]. The concept builds on the notion of a dominated solution: $\boldsymbol{\theta}^*$ is said to dominate $\boldsymbol{\theta}$ if $\mathcal{J}_t(\boldsymbol{\theta}^*) \le \mathcal{J}_t(\boldsymbol{\theta})$ for all $t \in \mathcal{T}$ and $\mathcal{J}_t(\boldsymbol{\theta}^*) < \mathcal{J}_t(\boldsymbol{\theta})$ for at least one $t$. No solution $\boldsymbol{\theta}$ dominates a Pareto-optimal solution $\boldsymbol{\theta}^*$. Pareto-optimal solutions constitute the Pareto set $P_{\boldsymbol{\theta}}$, whose image (generally with infinitely many elements) forms the Pareto front $P_{\mathcal{J}}$.
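The dominance relation above translates directly into code; a minimal, framework-free sketch for filtering a finite candidate set under the minimization convention (helper names are ours):

```python
def dominates(J_a, J_b):
    """True if objective vector J_a dominates J_b: no worse in every task
    and strictly better in at least one (minimization convention)."""
    return (all(a <= b for a, b in zip(J_a, J_b))
            and any(a < b for a, b in zip(J_a, J_b)))

def pareto_indices(J_all):
    """Indices of the non-dominated (Pareto-optimal) objective vectors
    among a finite list of candidates."""
    return [i for i, Ji in enumerate(J_all)
            if not any(dominates(Jj, Ji)
                       for j, Jj in enumerate(J_all) if j != i)]
```

For instance, among the bi-objective candidates (1, 2), (2, 1), (2, 2), (3, 3), only the first two are non-dominated and survive the filter.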
The PyTorch implementation of the proposed IFNO (Section 2.1.3) offers access to numerical design sensitivities of $\mathcal{M}_{AS}$ through backpropagation with automatic differentiation [96] (Figure 3). Capitalizing on the gradients, we tackle the optimization problem in Eq. 6 based on gradient descent. Analogous to conventional optimization with a single objective, gradient descent for multiobjective optimization boils down to finding a Pareto stationary point, defined upon the Karush-Kuhn-Tucker (KKT) conditions. Pareto stationary points can be found through the following line search [95]:
$$\min_{\{c_t\}} \left\| \sum_{t=1}^{T} c_t\, \nabla_{\theta_A} \mathcal{J}_t(\theta_A, \boldsymbol{\theta}_S) \right\|^2, \quad (8)$$
under the aforementioned constraints on the weights $\{c_t\}$. It has been shown that the minimizer of Eq. 8 either yields a gradient direction that decreases all task-specific objectives $\mathcal{J}_t$, or meets the KKT conditions with the minimum norm being zero [95]. Built on the availability of an analytic solution for $|\mathcal{T}| = 2$, a common descent direction for $|\mathcal{T}| \ge 3$ can be identified. At each iteration, the design update with respect to stimulus, i.e., $(\theta_S)_t \to (\theta_S)'_t$, is task-specific and hence done independently across tasks as follows:

$$(\theta_S)'_t = (\theta_S)_t - \eta_S\, \nabla_{(\theta_S)_t} \mathcal{J}_t(\theta_A, \boldsymbol{\theta}_S), \quad (9)$$
with $\eta_S$ the step size on stimulus. On the other hand, the design update for architecture is shared across all tasks. The design update for architecture ($\theta_A \to \theta'_A$), the most challenging part of this optimization, is specified by aggregating task-specific gradients through

$$\theta'_A = \theta_A - \eta_A \sum_{t=1}^{T} c_t\, \nabla_{\theta_A} \mathcal{J}_t(\theta_A, \boldsymbol{\theta}_S), \quad (10)$$

where $\eta_A$ is the step size for architecture, and $\{c_t\}$ is identified via the line search (Eq. 8). The proposed formalism will be employed in the case study in Section 3, where the modulated incident phase and the meta-atom geometry are regarded as stimulus and architecture, respectively.
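For $|\mathcal{T}| = 2$, the line search of Eq. 8 admits a closed-form minimizer, which then weights the shared architecture update of Eq. 10. A sketch with our own function names, assuming gradients flattened to vectors:

```python
import numpy as np

def two_task_weight(g1, g2):
    """Closed-form minimizer of ||c*g1 + (1-c)*g2||^2 over c in [0, 1]
    (the |T| = 2 case of the line search in Eq. 8)."""
    d = g2 - g1
    denom = d @ d
    if denom == 0.0:                # identical gradients: any c is optimal
        return 0.5
    return float(np.clip((g2 @ d) / denom, 0.0, 1.0))

def architecture_step(theta_A, grads, weights, eta_A):
    """Shared architecture update (Eq. 10): descend along the aggregated,
    weighted task gradients."""
    return theta_A - eta_A * sum(c * g for c, g in zip(weights, grads))
```

For two orthogonal unit gradients the weights split evenly (c = 0.5), producing a common direction that decreases both objectives at once.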
2.3 Data Generation: Meta-Atom Synthesis through Fourier multiclass blending
In data-driven design, data itself is a design element [97]. Data acquisition for data-driven design (D3) has been tackled through a diverse array of strategies [22, 24, 30]. Lee et al. [22] claimed that what commonly underlies each strategy is (1) a unit cell representation, e.g., pixel/voxel, and (2) a reproduction strategy, e.g., parametric sweep. A representation refers to a set of parameters, or models, used to directly characterize unit cells [98]. Meanwhile, reproduction refers to the way of producing generic shape instances, particularly of "growing" a sparse shape set into a massive one [22]. Determining the pair of representation and reproduction, specifically for the meta-atoms within our scope, is a key decision that should be made at the early stages of data acquisition, as the pair (1) dictates the distributional nature of the resulting data, e.g., space-filling and coverage, and (2) could significantly affect the difficulty of downstream tasks, e.g., both unit-cell-level and system-level design optimization.
2.3.1 Fourier multiclass blending
Harnessing the Fourier transform pair, Liu et al. [71] established a procedure to obtain a versatile design representation that is powerful for specifying topologically free meta-atoms. The key idea is that, given an image of a meta-atom, one can apply $\mathcal{F}$ to obtain the corresponding sparse representation in the frequency domain. The method enjoys (1) substantial dimension reduction with topological skeletons preserved, (2) built-in, training-free reconstruction capability supported by $\mathcal{F}^{-1}$, and (3) perfect control over some topological symmetries of meta-atoms.
Hinging on these benefits, we present FMB. The core motivation of multiclass blending [72], in general, is to generate a large number of building blocks, pivoted on a set of geometric motifs, as many as needed. We advocate this line of reproduction strategy for D3 purposes in that it directly accommodates domain knowledge by including canonical meta-atom families and smoothly bridges them, often in a unified landscape with follow-up representation learning [98]. In doing so, we propose to perform multiclass blending in the Fourier feature space in order to promote smooth interpolation between an arbitrary pair of meta-atom instances. This is a departure from Chan et al. [72], where the blending takes place in the ambient image space. Empirical observations support that the proposed blending offers a smoother transition (Figure 13, Appendix C).
We assume that FMB starts from a predefined set of $n_c$ families, whose individual "class representatives" are given as either binary images or level-set functions $\chi(x, y) \in [0, 1]^{N \times N}$. For each binary image, a corresponding sparse Fourier representation $z_k$ ($k = 1, 2, \cdots, n_c$) can be found with the Fourier transform $\mathcal{F}$, where $\dim(z_k) \ll N^2$. The feature dimension is empirically determined by observing the trade-off between the dimensionality $\dim(z_k)$ and the reconstruction error (Figure 12, Appendix C). Since the Fourier features involve highly imbalanced distributions across components, it is useful to apply proper feature-wise scaling, e.g., standardization or normalization. We create an inter-class instance through a simple linear combination of the features in the Fourier feature space as

$$z = \sum_{k=1}^{n_b} \alpha_k z_k, \quad (11)$$
where $n_b \in \mathbb{N}$, $n_b \ge 2$, is the number of classes considered for blending, and $\alpha_k$ are the weights subject to $\sum_{k=1}^{n_b} \alpha_k = 1$ and $0 \le \alpha_k \le 1$. For reconstruction, the corresponding level-set representation is found through:

$$\hat{\phi}(x, y) = \mathcal{F}^{-1}[z]. \quad (12)$$

Given a cutoff threshold $\phi_0$, the binary image of the inter-class meta-atom is identified as

$$\hat{\chi}(x, y) = \begin{cases} 1 & \text{where } \hat{\phi}(x, y) \ge \phi_0, \\ 0 & \text{otherwise}. \end{cases} \quad (13)$$
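Eqs. 11-13 amount to a convex combination in frequency space followed by an inverse transform and a threshold. A NumPy sketch (for simplicity we blend the full 2D spectrum rather than the paper's sparse 36D subset of modes, and all names are ours):

```python
import numpy as np

def blend(z_list, alphas):
    """Eq. 11: convex combination of Fourier-domain class representatives."""
    assert abs(sum(alphas) - 1.0) < 1e-9 and all(0.0 <= a <= 1.0 for a in alphas)
    return sum(a * z for a, z in zip(alphas, z_list))

def reconstruct(z, phi0=0.5):
    """Eqs. 12-13: inverse FFT to a level-set function, then threshold
    to recover the binary meta-atom image."""
    phi_hat = np.real(np.fft.ifft2(z))
    return (phi_hat >= phi0).astype(np.uint8)

# Toy check: blending a class representative with itself reproduces it.
chi = np.zeros((8, 8)); chi[2:6, 2:6] = 1.0     # a square "class"
z = np.fft.fft2(chi)
chi_hat = reconstruct(blend([z, z], [0.5, 0.5]))
```

The built-in reconstruction is exactly the inverse transform; no learned decoder is involved, which is the training-free property the text emphasizes.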
Figure 2: An overview of the proposed FMB and its instantiation $\mathcal{D}_S$. (a) The six start-up classes chosen from the literature. (b) Seventy randomly selected inter-class instances. (c) Two-class linear traversal in the 36D Fourier feature space $\mathcal{Z}$. (d) A 2D shape manifold obtained through Uniform Manifold Approximation and Projection [99].
Figure 2 shows example instances of the proposed blending. Without loss of generality of the blending scheme, six classes were selected from literature (Figure 2(a)). By construction, all instances generated by FMB inherit the advantages of the sparse Fourier representation, i.e., dimensional compactness, built-in reconstruction, and symmetry control. The proposed blending offers a smoother transition between/among user-defined class representatives and the inter-class instances born from them (Figure 2(c)). Based on the FMB approach, we built a ground shape set, called $\mathcal{D}_S$ throughout this article, including 15k instances with $n_b = 2, 3, 4$. In Figure 2(b), some instances show clear closeness to one of the user-defined families, e.g., bow-tie, ellipse, I-beam. Most of the instances, however, exhibit "organic" deviations from the families, which are difficult to describe explicitly through simple primitives, e.g., bars and holes. More instantiations for two- and three-class blending are listed in Figure 14 of Appendix C. Figure 2(d) visualizes a 2D shape manifold of the unified landscape obtained through Uniform Manifold Approximation and Projection [99]. The visualization inevitably distorts the original 36D Fourier feature space; yet some qualitative observations are available, e.g., the closeness between the bow-tie and ellipse families, and that between the ring and square-ring families. The color denotes the distance to the nearest class representative, which can serve as a rough metric to quantify the degree of blending.
3 Case study: Light-by-Light Programmable Plasmonic Metasurfaces

Active light control using plasmonic metasurfaces paves a way to enhance the speed of nanoscale optical imaging using far-field optical components [100, 101]. Spatial phase modulation [58, 59, 73-75] offers a route to dynamically address light confinement on plasmonic metasurfaces [102]. The inverse problem involves two key designable entities: plasmonic meta-atoms (Section 2.3), whose array channels propagating light into localized evanescent electromagnetic waves on the sample surface, and the phase distribution of the incoming wave (Section 3.1), which can be modulated through commercial spatial light modulators.
3.1 Phase Representation

Based on our perspective on inverse design, phase is a special field-type instance of stimulus that is open to design. As a proof-of-concept of the proposed framework, we employ the continuous phase representation presented in Lee et al. [58]:

$$\phi(x, y) = \sum_{j=1}^{n_h} \phi_j \Phi_j(x, y), \quad \Phi_j(x, y) = \cos\left(\frac{\pi M x}{j \lambda} + \alpha_j\right) \cos\left(\frac{\pi M y}{j \lambda} + \beta_j\right), \quad (14)$$

where $0 \leq \phi_j \leq \pi$ and $0 \leq \alpha_j, \beta_j \leq 2\pi$; $\phi$ is the spatial phase lag function; $n_h$ is the order of the expansion; $\phi_j$ represents the amplitude of harmonic $j$; $\alpha_j$ and $\beta_j$ establish a translational shift of harmonic $j$ along the $x$- and $y$-directions, respectively; $\lambda$ is the periodicity of a meta-atom; and $M$ is the demagnification factor. An individual harmonic is specified by three design variables $[\phi_j, \alpha_j, \beta_j]^T$. With $n_h = 2$, a phase profile is represented by six design variables $\boldsymbol{\phi} = [\phi_1, \alpha_1, \beta_1, \phi_2, \alpha_2, \beta_2]^T$. Further details of the full-field analysis using COMSOL Multiphysics® v5.6 [103] can be found in either Lee et al. [58] or Appendix B. Note that the proposed framework involving IFNO (Section 2.1) is by no means restricted to this particular analytic representation.
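Evaluating the harmonic representation of Eq. 14 on a pixel grid can be sketched as below. This is a minimal NumPy stand-in: the function name, the default grid extent, and $M = 1$ are placeholder assumptions, not values from the paper's setup.

```python
import numpy as np

def phase_field(phis, alphas, betas, lam=440e-9, M=1.0, N=64, extent=1320e-9):
    """Evaluate the harmonic phase representation (Eq. 14) on an N x N grid.
    phis/alphas/betas hold [phi_j, alpha_j, beta_j] for j = 1..n_h;
    lam is the meta-atom periodicity and M the demagnification factor."""
    x = np.linspace(0.0, extent, N)
    X, Y = np.meshgrid(x, x, indexing="ij")
    phi = np.zeros_like(X)
    for j, (p, a, b) in enumerate(zip(phis, alphas, betas), start=1):
        # harmonic j: amplitude p, translational shifts (a, b)
        phi += p * np.cos(np.pi * M * X / (j * lam) + a) \
                 * np.cos(np.pi * M * Y / (j * lam) + b)
    return phi
```

With $n_h = 2$, the six entries of $\boldsymbol{\phi}$ fully specify the field, which is then passed to the neural operator as an input channel.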
3.2 The Inverse Problem on Digitally Addressable Plasmonic Metasurfaces

The full-wave analysis offers access to the norm of the electric field $||E(x, y)||$ for $(x, y) \in \Omega$, where $\Omega$ is the measuring domain located right above the metasurface array. The planar measuring domain is subdivided into $N_m \times N_n$ square patches of equal size, each of which is denoted as $\Omega^{(m,n)}$ for $m = 1, 2, \cdots, N_m$ and $n = 1, 2, \cdots, N_n$. We consider the following array response matrix $L = [L_{mn}]$ given a plasmonic metasurface antenna array:

$$L_{mn} = \int_{\Omega^{(m,n)}} ||E(x, y)||^2 \, d\Omega. \quad (15)$$
As a proof-of-concept, we set the three patterns as the target tasks of multitask concurrent A-S optimization (i.e., $\mathcal{T} = \{1, 2, 3\}$). In order to construct a scalar FoM specific to each energy redistribution pattern, task-specific weight matrices $W_t$ are introduced as:

$$W_1 = \begin{bmatrix} -1 & -1 & -1 \\ -1 & 8 & -1 \\ -1 & -1 & -1 \end{bmatrix}, \quad W_2 = \begin{bmatrix} -1 & 8 & -1 \\ -1 & -1 & -1 \\ -1 & -1 & -1 \end{bmatrix}, \quad W_3 = \begin{bmatrix} -1 & -1 & -1 \\ -1 & -1 & -1 \\ -1 & -1 & 8 \end{bmatrix}. \quad (16)$$
An intuitive visualization will be provided in Section 4.3. With $W_t$ specific to a target localization pattern, an individual FoM $J_t$ is quantified as

$$J_t = \sum_{m,n} \left[ W_t \circ L \right]_{mn}, \quad (17)$$

where $\circ$ is the Hadamard product (i.e., elementwise multiplication). Deduced from Eq. 6, the associated inverse problem reads

$$\min_{(z, \boldsymbol{\phi}) \in \Theta_A \times \Theta_S} \sum_{t=1}^{|\mathcal{T}|} c_t J_t(z, \boldsymbol{\phi}), \quad (18)$$

where $z$ is the Fourier representation (Eq. 11) and $\boldsymbol{\phi}$ is the harmonic phase representation (Eq. 14), subject to the associated bounds. For each iteration, the design sensitivities with respect to both entity groups, $\partial J_t / \partial z$ and $\partial J_t / \partial \boldsymbol{\phi}$, are computed through backpropagation (Figure 3) [96].
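The patch integration and the task-specific scalar FoM can be sketched in NumPy as below. This is a minimal stand-in: the integral of Eq. 15 is approximated by a pixel-wise sum, and the scalar FoM is our reading of Eq. 17 as the summed Hadamard product; function names are hypothetical.

```python
import numpy as np

def array_response(E2, Nm=3):
    """Eq. 15, discretized: integrate |E|^2 over Nm x Nm equal square
    patches of the measuring domain via a simple Riemann sum."""
    s = E2.shape[0] // Nm
    L = np.zeros((Nm, Nm))
    for m in range(Nm):
        for n in range(Nm):
            L[m, n] = E2[m * s:(m + 1) * s, n * s:(n + 1) * s].sum()
    return L

def fom(L, W):
    """Eq. 17 read as a scalar: sum of the Hadamard product W o L."""
    return float(np.sum(W * L))

# Task 1 weights of Eq. 16 (energy localized in the center patch)
W1 = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], dtype=float)
```

Because the entries of each $W_t$ sum to zero, a uniform energy distribution scores zero, while energy concentrated in the target patch is rewarded.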
3.3 Data Acquisition Strategy

As a class of operator learning, IFNO demands a finite collection of data pairs to approximate the joint mapping among the function spaces of interest, e.g., $\mathcal{M}_{AS} : \mathcal{A} \times \mathcal{S} \rightarrow \mathcal{E}$ in our case study, where $\mathcal{A}$, $\mathcal{S}$, $\mathcal{E}$ are associated with the meta-atom array $\chi(x, y)$, the modulated phase field $\phi(x, y)$, and the electric energy intensity $||E(x, y)||^2$, respectively. A training dataset is structured in the form of

$$\mathcal{D} = \left\{ \left( \chi(x, y), \phi(x, y); ||E||^2(x, y) \right) \mid z \in \Theta_A, \boldsymbol{\phi} \in \Theta_S \right\}, \quad (19)$$

where $\chi(x, y)$ is reconstructed from $z$ (Eqs. 12 and 13) and $\phi(x, y)$ is specified from $\boldsymbol{\phi}$ (Eq. 14).

Figure 3: An illustration of backpropagation in $\mathcal{M}_{AS}$ to obtain the numerical gradients.
It has been empirically seen in the corpus of the palette approach that, when aiming to build a conventional meta-atom-to-spectrum surrogate of quality, the order of training data typically ranges from $O(10^3)$ to $O(10^4)$. Preparing a comparable amount of observational data for both the architecture field and the phase field could be very resource-intensive. To this end, we implement an efficient data acquisition strategy. In Section 2.3, a 15k-size shape-only dataset $\mathcal{D}_S$ born from the six meta-atom families has been prepared. Taking the 36D Fourier representation $z$ as the shape descriptor, we apply the shape-diversity-based sequential acquisition proposed in Lee et al. [97] to sequentially identify a 500-size subset with the largest shape diversity. Regarding phase, on the other hand, Optimal Latin Hypercube Sampling [104] is employed to identify 36 space-filling samples, all at once, with the side constraints imposed on the phase variables $\boldsymbol{\phi}$ taken into account. The resulting dataset $\mathcal{D}$, with all the responses available, ends up containing 500 × 36 input instances. We empirically confirmed that $\mathcal{M}_{AS}$ trained on $\mathcal{D}$ exhibits decent predictive performance (Section 4.1), even in the presence of huge fluctuations of local energy distributions in the output fields (Section 2.1). In cases where a more rigorous answer to the question "How much data?" is sought, deep active learning [105] may offer a thriftier route by directly including the particular ML model of interest in the loop of data acquisition.
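The cross-product construction of the 500 × 36 input set can be sketched as follows. Plain Latin Hypercube Sampling from SciPy stands in for the Optimal LHS of [104], and random arrays stand in for the actual diversity-selected Fourier features; both substitutions are for illustration only.

```python
import numpy as np
from scipy.stats import qmc

# Bounds on the six phase variables [phi1, a1, b1, phi2, a2, b2] (Eq. 14)
lower = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
upper = [np.pi, 2 * np.pi, 2 * np.pi, np.pi, 2 * np.pi, 2 * np.pi]

# 36 space-filling phase samples (plain LHS here; the paper uses Optimal LHS)
sampler = qmc.LatinHypercube(d=6, seed=0)
phase_samples = qmc.scale(sampler.random(n=36), lower, upper)

# Stand-in for the 500 diversity-selected 36D shape descriptors z
shape_samples = np.random.default_rng(0).random((500, 36))

# Cross every selected shape with every phase sample: 500 x 36 inputs
inputs = [(z, p) for z in shape_samples for p in phase_samples]
```

Each input pair is then rendered into the field pair $(\chi(x, y), \phi(x, y))$ before simulation, per Eq. 19.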
4 Results

4.1 Predictive Performance

Details on the training of $\mathcal{M}_{AS}$ are stated in Appendix A. Figure 4 gives a visual impression of the predictive performance of the trained $\mathcal{M}_{AS}$ with respect to the test shape dataset. The 3 × 3 plasmonic metasurface array of interest is geometrically periodic, yet able to produce heterogeneous energy distributions when illuminated with heterogeneously modulated phase distributions. The five randomly selected meta-atom instances $\chi(x, y)$ manifest huge topological freedom, associated with holes, gaps, and organic boundary variations, achievable through the proposed FMB (Section 2.3). Meanwhile, the phase distributions $\phi(x, y)$ are fully specified by the six phase variables $\boldsymbol{\phi}$ according to Eq. 14 and passed to $\mathcal{M}_{AS}$ through another input channel. The interplay between meta-atom $\chi$ and incident phase $\phi$ gives rise to a diverse array of energy distribution patterns, as seen in the third column of Figure 4. The key challenge herein from a modeling perspective is to capture the locally distributed plasmonic energy hotspots, typically exhibiting $O(10^3)$-times stronger energy intensity than the surroundings, formed either along part of the meta-atom boundary or within a gap. For all the cases, the prediction by $\mathcal{M}_{AS}$ shows good agreement with the ground-truth counterpart. More examples with respect to both training and test sets are listed in the Supporting Information (Figures S8-11 under Section D).

Figure 4: Prediction results of $\mathcal{M}_{AS}$ for five randomly selected pairs of meta-atoms and input phase fields from the test dataset.
4.2 Single-Task Concurrent Optimization

Without loss of generality, we consider the task set $\mathcal{T} = \{1, 2, 3\}$ specified in Section 3.2. As a prior step to the proposed multitask concurrent optimization, single-task concurrent optimization is run for the individual tasks. By doing so, we intend to:

• corroborate the efficacy of the proposed A-S concurrent optimization. The optimized results, with diverse topologies as well as case-specific phase distributions, indicate that concurrent design is necessary to avoid suboptimality.

• identify the upper bounds of $J_t$ that the following multiobjective optimization can reach for each target pattern. In other words, the values will enable us to assess the multiobjective optimization results in relation to the single-objective ones. The resulting objective values $J_t$ ($t \in \mathcal{T}$) read 3.28, 3.11, and 3.16, respectively.

For an individual task, $n_{rep} = 100$ attempts with random initialization were made to mitigate the initialization dependence of gradient-based search in the vast joint design space. Herein we postulate the task-specific results as referential bounds for the following multitask concurrent optimization.
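The random-restart gradient search can be sketched generically as below. This is a toy stand-in: in the actual framework the gradient comes from backpropagation through $\mathcal{M}_{AS}$ (Figure 3), the bounds follow $\Theta_A \times \Theta_S$, and all names here are hypothetical.

```python
import numpy as np

def multistart_ascent(J, grad, dim, n_rep=100, n_iter=200, lr=0.05, seed=0):
    """Gradient ascent on a FoM J with n_rep random initializations,
    keeping the best solution found over all restarts."""
    rng = np.random.default_rng(seed)
    best_x, best_j = None, -np.inf
    for _ in range(n_rep):
        x = rng.uniform(-1.0, 1.0, dim)          # random restart
        for _ in range(n_iter):
            # box-constrained ascent step along the sensitivity
            x = np.clip(x + lr * grad(x), -1.0, 1.0)
        if J(x) > best_j:
            best_x, best_j = x, J(x)
    return best_x, best_j
```

Keeping all $n_{rep}$ endpoints, rather than only the best, is what later supplies the candidate pool for the Pareto analysis of Section 4.3.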
4.3 Multitask Concurrent Optimization

Figure 5 illustrates an instance of the Pareto-optimal solutions with respect to the task set $\mathcal{T} = \{1, 2, 3\}$, identified through the proposed multitask concurrent optimization (Section 2.2). The optimized meta-atom $\chi^*_{\mathcal{T}}$ ends up converging to an inter-class instance. For each target pattern, the phase variable $\boldsymbol{\phi}_{\mathcal{T}}$ is optimized to tailor the resulting energy distribution to be as close as possible to the target. A qualitative observation on the optimized phase distributions $\phi^*_{\mathcal{T}}$ indicates that the local hills of phase lags tend to suppress energy localization around them. The FoMs with respect to each task read 2.71, 2.96, and 2.46, respectively. Recalling the single-task optimization results, this Pareto solution supports a decent performance for Pattern II ($J_2 = 2.96$ vs. 3.11), despite the other two patterns being simultaneously taken into account during the optimization. The performance nearly as good as that of the single-task optimization, however, could have come at the cost of the performance gap against the single-task counterpart in Pattern III ($J_3 = 2.46$ vs. 3.16).

Looking into the Pareto solutions all together, we further investigate the trade-off among tasks in the proposed multitask concurrent optimization. The 3D FoM space in Figure 6(a) shows the scatter plot of the 100 optimized solutions $\{z^*_k\}_{k=1}^{n_{rep}}$, each of which started with random initialization on $z$. Denoted with red stars, the 15 Pareto solutions form the finite Pareto set $\mathcal{P}_\theta$, which is a finite collection of landmarks for the Pareto surface $\mathcal{P}_J$. The ground-truth surface can be approximated as $\hat{\mathcal{P}}_J$ upon proper surface fitting (yellow region in Figure 6). The 2D scatter plots in Figure 6(b)-(d) are projections of Figure 6(a) onto the three principal planes. The projections reveal that the FoMs of interest manifest a trade-off in the design of MMM. Without an arbitrary, user-defined importance specified, global optimality across the Pareto solutions would remain ambiguous.
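Extracting the finite Pareto set $\mathcal{P}_\theta$ from the $n_{rep}$ optimized solutions amounts to standard non-dominated filtering, sketched below under the assumption (consistent with Section 4.2) that a larger FoM is better for every task; the $O(n^2)$ loop is for clarity, not efficiency.

```python
import numpy as np

def pareto_mask(F):
    """Return a boolean mask of the non-dominated rows of F, where each
    row holds the FoMs (J1, ..., JT) of one optimized solution and
    larger is better for every task."""
    n = F.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            # solution j dominates i if it is >= everywhere and > somewhere
            if i != j and np.all(F[j] >= F[i]) and np.any(F[j] > F[i]):
                mask[i] = False
                break
    return mask
```

Applied to the 100 × 3 FoM matrix of the optimized solutions, the surviving rows correspond to the red-star landmarks of the Pareto surface.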
In Figure 7, all the optimized meta-atoms in the Pareto set are enumerated with the corresponding radar plots of FoMs. By definition, a Pareto solution outperforms any other solution with regard to at least one task (Section 2.2). The diverse topology observed in these Pareto-optimal meta-atoms, arguably blended from bow-tie, ellipse, and ring, is a strong indication of (1) task importance specificity in the design of multifunctional systems and (2) a one-to-many mapping in the inverse problem involving the joint $\mathcal{A}$-$\mathcal{S}$ space, an extension of that in conventional geometry-only inverse problems. Meanwhile, the radar plots show FoM footprints that seem as diverse as the meta-atoms; this implies that the proposed multitask concurrent optimization can accommodate on-demand task importance, no matter whether it has been declared in advance or is yet to be specified. Figure 7(a)-(c) shows examples of the latter, where the relative task importance for $\mathcal{T}$ is hypothetically given as (1, 1, 1), (2, 1, 1), and (5, 1, 4), respectively.
So far we have intentionally limited our discussion to the specific task set $\mathcal{T}$. We claim that the key observations and findings discussed in this section generalize to other task sets.
4.4 Discussion

Alternatives to the Input Representations Through the case study, we have shown that the trained neural operator $\mathcal{M}_{AS}$ can be coupled with the Fourier meta-atom representation $z$ and the harmonic phase $\boldsymbol{\phi}$ for inverse design purposes. Without re-training, the model can also be connected to other input representations, as long as the training data covers the space spanned by the chosen representations well. Depending on the primary concern of inverse design, alternative meta-atom representations include latent representations learned from generative models [38, 39], fabrication-aware representations [106-108], and mixed qualitative-quantitative representations [40, 109], to name a few. Different stimulus representations can also be of interest [57]. In case the training data does not cover the selected input representations well, the neural operator can be re-trained on either (1) new out-of-distribution sparse observations or (2) a physics-based residual, as has been postulated by some recent works scrutinizing the extrapolation capability of deep neural operators [110].
Figure 5: A Pareto-optimal solution of the proposed multitask optimization constructed for $\mathcal{T} = \{1, 2, 3\}$. (a) A selected set of target patterns, where each target focusing region is marked with a red box. (b) The optimized energy distributions $\{||E||^2_1, ||E||^2_2, ||E||^2_3\}$ that are programmable through the Pareto-optimal meta-atom $\chi^*$ paired with the optimized set of task-specific stimuli $\{\phi^*\} = \{\phi^*_1, \phi^*_2, \phi^*_3\}$. (c) The figures-of-merit for each task.
Model Transparency The proposed neural operator based modeling is, essentially, a field-to-field surrogate that offers full-field predictions. By construction, the approach features model transparency: the plausibility of a field prediction can be directly inspected with relevant domain knowledge, e.g., strong energy localization forming either within a gap or around meta-atom boundaries. This is a sharp contrast to commonplace black-box models that are often given a direct "shortcut" to the downstream output quantity of interest (e.g., spectra of transmission/phase delay). Despite its practicality and easy implementation, such a modeling approach may merely scratch the surface of the underlying light-matter interactions dictated by rich spatiotemporal causal effects. In our case, the proposed neural operator combats this issue by encoding long-range, across-meta-atom spatial interactions into the model. Importantly, the advantage comes without extra data acquisition cost compared to that of conventional models fed with full-field simulation data; a possible exception would be training data computed through analytic approximations that directly give spectral quantities of interest without full-field information.
Extending to Geometrically Aperiodic Arrays Another assumption regarding the modeling and inverse design presented is that the meta-atoms on the array are periodic. We argue that the proposed framework, as is, faces no technical hurdles that impede either modeling or inverse design of geometrically aperiodic arrays. In fact, the proposed framework could rather be an exclusive means to conduct a system-level, top-down inverse design all at once, as opposed to the bottom-up camp popular in the literature, where a massive meta-atom library is prepared, often under the assumption of negligible meta-atom coupling, and then simply tiled in the array during the system-level design. A conceptual extension of the meta-atom library based approach has been proposed by An et al. [48], where a supercell library is prepared to compensate for the deviations of response spectra under near-field coupling effects. Even with the extension, however, the surrogate does not offer field predictions, and the supercell-based inverse design is intrinsically restricted to the class of design problems where the required distributions of transmission/phase delay have been identified (e.g., meta-lenses). Such cases can be handled by our proposed framework. We envision that the extension of the proposed framework to aperiodic meta-atom arrays would be feasible with (1) intelligent data acquisition strategies redefining shape diversity for supercells (not for unit cells) as a core pillar and (2) gradient-based search with a large number of randomly initialized replicates to deal with the extra dimensionality incurred in the architecture space.

Figure 6: Scatter plots of the Pareto-optimal solutions. The upper bounds for multitask concurrent optimization (red dotted lines) have been identified by the single-task counterpart individually conducted for each task (Section 4.2). (a) The yellow surface denotes the Pareto surface $\mathcal{P}_J$, approximated as the convex hull of the Pareto set $\mathcal{P}_\theta$. (b)-(d) 2D projections of the Pareto surface $\mathcal{P}_J$.
Figure 7: Task importance specificity of multifunctional metasurfaces. Optimized meta-atoms in the Pareto set are enumerated with their radar plots of FoMs. (a)-(c) The best solutions when a hypothetical task importance $(J_1, J_2, J_3)$ is given as (a) (1, 1, 1), (b) (2, 1, 1), and (c) (5, 1, 4).
Other Optimization Methods Throughout this article, the design sensitivities of interest, e.g., $\partial J / \partial z$ and $\partial J / \partial \boldsymbol{\phi}$, are assumed to be accessible through automatic differentiation. Augenstein et al. [77] articulated that the combination of a data-driven surrogate and gradient-based search helps offset the cost of data acquisition and model construction. Echoing this claim, we have also harnessed gradient-based, iterative design updates, summarized in Eqs. 9 and 10, with a number of random initial starting points. Nevertheless, we reflect upon the claim by postulating some scenarios, beyond our case study, that could make it refutable: (i) the design sensitivity is not reliable enough to be directly used for design search, and (ii) the predictive performance of the associated ML model is not accurate enough, such that for concurrent optimization it seems more reasonable to use the ground-truth solver with thrifty sequential sampling. If such is the case, the proposed formalism in Eq. 6 may be better addressed through efficient global optimization algorithms, e.g., Bayesian optimization [111], provided that the scalability issue with respect to input dimensionality can be resolved. Regarding multitask/multiobjective Bayesian optimization, readers are referred to some foundational works [112]. In the meantime, widely-used metaheuristic optimization [113], e.g., genetic algorithms and particle swarm optimization, seems in general not effective for inverse design of MMM because of (1) the vast, high-dimensional design space jointly formed by architecture and stimulus and (2) the many optimization hyperparameters that are critical for search performance and thus need to be exhaustively fine-tuned.
5 Conclusion

In this paper, we aim to address three grand challenges associated with the design of multifunctional metamaterials involving heterogeneous fields, namely, (1) the vast design space jointly formed by architecture, stimulus, and optionally material, (2) a prevalent trade-off across multiple functionalities of interest, and (3) a lack of standardized inverse problem formulations on multifunctional metamaterials and solution procedures thereof. To overcome the limitations of the palette approach that assumes scale separation, we presented a data-driven design framework that can streamline the inverse design of multifunctional metamaterials whose functionality/operating conditions feature heterogeneous fields. The framework interlocks three methodological pillars:

• Implicit Fourier neural operator, which serves as a field-to-field surrogate from a pair of architecture-stimulus fields to the corresponding high-dimensional, possibly heterogeneous physical fields.

• A standard formulation of the inverse problem on a class of multifunctional metamaterials, where a system is open to both architecture and stimulus, and multiple target functionalities are often subject to a trade-off. We also propose a principled, gradient-based solution procedure involving the joint architecture-stimulus space, with the Pareto-optimality of the multifunctionality addressed.

• Fourier multiclass blending as a new data generation scheme. It facilitates accommodating domain knowledge, produces inter-class, quasi-free meta-atoms with smooth topological transitions, and features training-free dimension reduction and built-in reconstruction.

By seamlessly integrating the three pillars, we demonstrated our approach on the inverse design of plasmonic metasurfaces whose field distribution can be dynamically programmed through spatial light modulators. Our proposed approach simultaneously addresses the aforementioned three grand challenges. The prediction results of the proposed implicit Fourier neural operator demonstrated satisfactory predictive performance over heterogeneous energy fields, featuring plasmonic hotspots with an energy intensity on the order of $10^3$ times stronger than the surroundings, with respect to virtually infinite field-type pairs of quasi-free supercell and incident phase. The optimization results corroborated that the proposed framework can automatically identify a Pareto set of meta-atoms and incident phases. Looking into the Pareto-optimal solutions, we reported a huge diversity regarding both meta-atom topology and figure-of-merit profiles, which can handle both mutable task importance specificity and the one-to-many mapping of inverse design. In addition to the technical contributions and observations centered on the case study, we also articulated three crucial advantages that the proposed framework offers by construction, namely (1) direct connectivity to alternative combinations of input representations, (2) model transparency that allows a physics-based sanity check, and (3) sample-/parameter-efficiency. In a broader sense, we shared our perspectives on feasible extensions of our framework to accommodate geometrically aperiodic arrays, as well as other optimization algorithms. Collecting evidence to support these claims with design scenarios not directly covered in our case study will be our future work. We believe that this design framework, succeeding the palette approach, qualifies as an important step toward next-generation data-driven design for multiscale architectures.
Acknowledgements
W. Chen and D. Lee appreciate the support by the NSF BRITE Fellow program (CMMI 2227641), the NSF CSSI
program (OAC 1835782), and the Northwestern McCormick Catalyst Award. Y. Yu and L. Zhang would like to
acknowledge support by the NSF Award (DMS-1753031) and the AFOSR grant (FA9550-22-1-0197). Portions of this
research were conducted on Lehigh University’s Research Computing infrastructure partially supported by NSF Award
(2019035).
Appendix A. Training Details of the Proposed Implicit Fourier Neural Operator $\mathcal{M}_{AS}$

Scheduling When training a machine learning model based on stochastic gradient descent, a source of noise is introduced due to the random sampling of training instances, and it does not vanish even when we arrive at a minimum [86]. A workaround for better convergence is to employ a learning rate schedule that gradually decreases the learning rate between epochs as the training progresses. Following this, the learning rate $lr^{(i)}$ for each iteration $i$ is updated during network training according to

$$lr^{(i)} = lr_0 \, \gamma^{\lfloor i / n_{step} \rfloor}, \quad (20)$$

where $lr_0$ is the initial learning rate, set as $lr_0 = 0.01$ in our case, and $n_{step}$ is the update period, set as 100 in our training. The scheduling is incorporated into the Adam optimizer [85].
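The stepwise decay of Eq. 20 can be written compactly as below; the decay factor $\gamma = 0.5$ is a placeholder, since the paper does not state its value here.

```python
def lr_schedule(i, lr0=0.01, gamma=0.5, n_step=100):
    """Eq. 20: the learning rate drops by a factor gamma every n_step
    iterations (gamma = 0.5 is an assumed placeholder value)."""
    return lr0 * gamma ** (i // n_step)
```

This matches the behavior of common step-decay schedulers, e.g., the discrete jumps visible in the training curves of Figure 8.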
Training Figure 8 displays the training results with regard to different depths. All the results show acceptable differences in loss between training and test data. We conducted a parameter study on depth and empirically chose depth = 8 considering (1) the balance between model complexity and predictive performance and (2) the gap between $\mathcal{L}_{train}$ and $\mathcal{L}_{test}$. The discrete jumps in each training run are due to the scheduling, following Eq. 20. The proposed IFNO $\mathcal{M}_{AS}$ is open to further performance improvement upon a hyperparameter study on model complexity as well as training parameters, using either conventional grid search or Bayesian optimization [114, 115].
Appendix B. The Wave Analysis Simulation

A visual illustration of the wave analysis is given in Figure 9. The incident wave has a wavelength of 660 nm (or a frequency of 454 THz) and is polarized in the $x$-direction. The input phase is adjusted by the six design variables from our suggested phase representation. The periodicity is set at 440 nm, which is two-thirds of the wavelength. The unit nanoantenna, composed of gold with a permittivity of $\varepsilon = -13.682 + 1.3056i$ at this frequency, forms an array through the periodic tessellation of 30-nm-thick unit cells. The array sits on a cuboid of SiO$_2$, which is half a wavelength thick and has a permittivity of $\varepsilon = 3.75$. Perfectly matched layers surround the entire analysis domain to minimize boundary reflections. The Frequency Domain study in the RF Module of COMSOL Multiphysics [116] was used for the full-wave analysis.

Figure 8: Training history of $\mathcal{M}_{AS}$ for different layer depths.

Figure 9: A schematic of the wave analysis. (a) A side view of the whole plasmonic metasurface system. (b) Virtual profiles of a modulated incident phase and the resulting norm of electric field intensity.
Appendix C. Fourier Multiclass Blending

Figure 10: A visual pipeline from a binary meta-atom to the Fourier representation. (a) A 2D binary image. (b) The Fourier transform of the given image. (c) The sparse representation after cropping with padding depth $d$. (d) The reconstructed binary image via inverse Fourier transform.

Our implementation procedure of Fourier multiclass blending builds on the topological encoding procedure proposed by Liu et al. [71]. Upon proper translations, high-frequency components of the Fourier representation are located along the boundary of the frequency image $\psi(x, y)$. $R_{keep}$ denotes the internal region to be preserved as is, while the complementary region is zeroed out. $R_{keep}$ is specified through the depth $d$ that represents the thickness of padding (Figure 10). The resulting dimension of the sparse representation becomes $(N - 2d)^2$. For instance, if the resolution $N$ and depth $d$ happen to be 64 and 30, respectively, the corresponding dimension of $\psi_r(x, y)$ would reduce to $(64 - 2 \times 30)^2 = 16$. Figure 11 gives an example of reconstruction errors with respect to $d$ for a couple of binary images. By construction, the Fourier transform produces a frequency map that inherits some types of symmetries in the image space. For example, given a binary image $\chi(x, y)$ with four-fold symmetry, the corresponding sparse representation $\psi_r(x, y)$ also holds four-fold symmetry and thus is reducible to one quadrant. The resulting dimensionality reduces from $N \times N$ to $(N - 2d)^2 / 4$. Revisiting the above-mentioned case where $N = 64$ and $d = 30$, this results in a dimension reduction from $64^2$D to 4D. Based on our empirical observation of the L1 error distribution, visualized in Figure 12, $d = 11$ was chosen as the padding depth. This results in a feature matrix of $\{(11 + 1)/2\}^2 = 36$D. For notational simplicity, the flattened vector of the matrix, called the Fourier feature, is used. This is the final form of the Fourier representation $z$ used for Fourier multiclass blending in the main body. So far the description of the encoding procedure has assumed its deployment to 2D images in $\mathbb{R}^2$ space. The procedure trivially generalizes to $\mathbb{R}^3$ space.
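The crop-and-reconstruct encoding can be sketched in NumPy as follows. This is a minimal stand-in for the procedure of [71]: function names and the binarization threshold are hypothetical, and the quadrant reduction under symmetry is omitted for brevity.

```python
import numpy as np

def encode(chi, d):
    """Sparse Fourier encoding: FFT, shift low frequencies to the
    center, and keep only the interior region R_keep of width N - 2d
    (the cropped high frequencies are implicitly zeroed)."""
    psi = np.fft.fftshift(np.fft.fft2(chi))
    return psi[d:-d, d:-d]

def decode(z, N, d, thresh=0.5):
    """Built-in reconstruction: zero-pad back to N x N, inverse FFT,
    and threshold to recover a binary image (thresh is a placeholder)."""
    psi = np.zeros((N, N), dtype=complex)
    psi[d:-d, d:-d] = z
    phi = np.real(np.fft.ifft2(np.fft.ifftshift(psi)))
    return (phi >= thresh).astype(int)
```

For shapes whose spectral content is concentrated in the retained low-frequency block, the reconstruction is near-exact; increasing $d$ trades reconstruction fidelity for dimensionality, per Figures 11 and 12.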
18
Figure 11: Built-in reconstruction with the Fourier representation. All black boxes denote the original binary image $\chi(x, y)$, while red lines denote its boundaries. (a) Reconstructed images $\chi_r(x, y)$ of the given unit cell for $d = 5, 7, 9$, respectively. (b) L1 reconstruction error as a function of depth $d$. The red dotted line denotes 5% error. (c) L1 reconstruction error of another unit cell.

Figure 12: L1 error distribution of reconstruction under $d = 11$ and an error threshold of 0.05. Approximately 87.2% of instances show less error than the threshold.
Figure 13: Visual comparison of linear traversal between (top) the proposed blending (FMB) and (bottom) blending in the image space.

Figure 14: Additional examples of (a) two-class blending, ordered monotonically and hence viewable as a linear traversal, and (b) three-class blending, displayed without a specific order.
Appendix D. Prediction Results of $\mathcal{M}_{AS}$: Additional Examples

Figure 15: Prediction results of $\mathcal{M}_{AS}$ for five samples randomly selected from the training set (1/2). All scale bars in each column are in arbitrary units.

Figure 16: Prediction results of $\mathcal{M}_{AS}$ for five samples randomly selected from the training set (2/2). All scale bars in each column are in arbitrary units.

Figure 17: Prediction results of $\mathcal{M}_{AS}$ for five samples randomly selected from the test set (1/2). All scale bars in each column are in arbitrary units.

Figure 18: Prediction results of $\mathcal{M}_{AS}$ for five samples randomly selected from the test set (2/2). All scale bars in each column are in arbitrary units.
References
[1] John Brian Pendry. Negative refraction makes a perfect lens. Physical review letters, 85(18):3966, 2000.
[2]
Xianglong Yu, Ji Zhou, Haiyi Liang, Zhengyi Jiang, and Lingling Wu. Mechanical metamaterials associated
with stiffness, rigidity and compressibility: A brief review. Progress in Materials Science, 94:114–173, 2018.
[3]
Reece L Lincoln, Fabrizio Scarpa, Valeska P Ting, and Richard S Trask. Multifunctional composites: A
metamaterial perspective. Multifunctional Materials, 2(4):043001, 2019.
24
[4] Wenwang Wu, Wenxia Hu, Guian Qian, Haitao Liao, Xiaoying Xu, and Filippo Berto. Mechanical design and multifunctional applications of chiral mechanical metamaterials: A review. Materials & Design, 180:107950, 2019.
[5] Xujin Yuan, Mingji Chen, Yin Yao, Xiaogang Guo, Yixing Huang, Zhilong Peng, Baosheng Xu, Bowen Lv, Ran Tao, Shenyu Duan, et al. Recent progress in the design and fabrication of multifunctional structures based on metamaterials. Current Opinion in Solid State and Materials Science, 25(1):100883, 2021.
[6] Zi-Lan Deng, Yaoyu Cao, Xiangping Li, and Guo Ping Wang. Multifunctional metasurface: from extraordinary optical transmission to extraordinary optical diffraction in a single structure. Photonics Research, 6(5):443–450, 2018.
[7] Shuai Wu, Qiji Ze, Rundong Zhang, Nan Hu, Yang Cheng, Fengyuan Yang, and Ruike Zhao. Symmetry-breaking actuation mechanism for soft robotics and active metamaterials. ACS Applied Materials & Interfaces, 11(44):41649–41658, 2019.
[8] Junyu Li, Li Bao, Shun Jiang, Qiushi Guo, Dehui Xu, Bin Xiong, Guangzu Zhang, and Fei Yi. Inverse design of multifunctional plasmonic metamaterial absorbers for infrared polarimetric imaging. Optics Express, 27(6):8375–8386, 2019.
[9] Oraib Al-Ketan and Rashid K Abu Al-Rub. Multifunctional mechanical metamaterials based on triply periodic minimal surface lattices. Advanced Engineering Materials, 21(10):1900524, 2019.
[10] Shuqi Chen, Wenwei Liu, Zhancheng Li, Hua Cheng, and Jianguo Tian. Metasurface-empowered optical multiplexing and multifunction. Advanced Materials, 32(3):1805912, 2020.
[11] Qiji Ze, Xiao Kuang, Shuai Wu, Janet Wong, S Macrae Montgomery, Rundong Zhang, Joshua M Kovitz, Fengyuan Yang, H Jerry Qi, and Ruike Zhao. Magnetic shape memory polymers with integrated multifunctional shape manipulation. Advanced Materials, 32(4):1906657, 2020.
[12] Hongcheng Tao and James Gibert. Multifunctional mechanical metamaterials with embedded triboelectric nanogenerators. Advanced Functional Materials, 30(23):2001720, 2020.
[13] Ghazaleh Kafaie Shirmanesh, Ruzan Sokhoyan, Pin Chieh Wu, and Harry A Atwater. Electro-optically tunable multifunctional metasurfaces. ACS Nano, 14(6):6912–6920, 2020.
[14] Sensong An, Bowen Zheng, Hong Tang, Mikhail Y Shalaginov, Li Zhou, Hang Li, Myungkoo Kang, Kathleen A Richardson, Tian Gu, Juejun Hu, et al. Multifunctional metasurface design with a generative adversarial network. Advanced Optical Materials, 9(5):2001433, 2021.
[15] Mingze Liu, Wenqi Zhu, Pengcheng Huo, Lei Feng, Maowen Song, Cheng Zhang, Lu Chen, Henri J Lezec, Yanqing Lu, Amit Agrawal, et al. Multifunctional metasurfaces enabled by simultaneous and independent control of phase and amplitude for orthogonal polarization states. Light: Science & Applications, 10(1):107, 2021.
[16] Meisam Askari, David A Hutchins, Peter J Thomas, Lorenzo Astolfi, Richard L Watson, Meisam Abdi, Marco Ricci, Stefano Laureti, Luzhen Nie, Steven Freear, et al. Additive manufacturing of metamaterials: A review. Additive Manufacturing, 36:101562, 2020.
[17] Ke Bi, Qingmin Wang, Jianchun Xu, Lihao Chen, Chuwen Lan, and Ming Lei. All-dielectric metamaterial fabrication techniques. Advanced Optical Materials, 9(1):2001474, 2021.
[18] Gwanho Yoon, Inki Kim, and Junsuk Rho. Challenges in fabrication towards realization of practical metamaterials. Microelectronic Engineering, 163:7–20, 2016.
[19] Igor Levchenko, Kateryna Bazaka, Michael Keidar, Shuyan Xu, and Jinghua Fang. Hierarchical multicomponent inorganic metamaterials: intrinsically driven self-assembly at the nanoscale. Advanced Materials, 30(2):1702226, 2018.
[20] Ketki M Lichade, Yizhou Jiang, and Yayue Pan. Hierarchical nano/micro-structured surfaces with high surface area/volume ratios. Journal of Manufacturing Science and Engineering, 143(8):081002, 2021.
[21] Tuba Dolar, Doksoo Lee, and Wei Chen. Interpretable neural network analyses for understanding complex physical interactions in engineering design. In International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, volume 87301, page V03AT03A021. American Society of Mechanical Engineers, 2023.
[22] Doksoo Lee, Wei Wayne Chen, Liwei Wang, Yu-Chin Chan, and Wei Chen. Data-driven design for metamaterials and multiscale systems: A review. arXiv preprint arXiv:2307.05506, 2023.
[23] Sunae So, Trevon Badloe, Jaebum Noh, Junsuk Rho, and Jorge Bravo-Abad. Deep learning enabled inverse design in nanophotonics. Nanophotonics, 9(5):1041–1057, 2020.
[24] Lyle Regenwetter, Amin Heyrani Nobari, and Faez Ahmed. Deep generative models in engineering design: A review. Journal of Mechanical Design, 144(7):071704, 2022.
[25] Siddhant Kumar and Dennis M Kochmann. What machine learning can do for computational solid mechanics. In Current Trends and Open Problems in Computational Mechanics, pages 275–285. Springer, 2022.
[26] Yabin Jin, Liangshu He, Zhihui Wen, Bohayra Mortazavi, Hongwei Guo, Daniel Torrent, Bahram Djafari-Rouhani, Timon Rabczuk, Xiaoying Zhuang, and Yan Li. Intelligent on-demand design of phononic metamaterials. Nanophotonics, 11(3):439–460, 2022.
[27] Rebekka V Woldseth, Niels Aage, J Andreas Bærentzen, and Ole Sigmund. On the use of artificial neural networks in topology optimisation. Structural and Multidisciplinary Optimization, 65(10):1–36, 2022.
[28] Sunae So, Jungho Mun, Junghyun Park, and Junsuk Rho. Revisiting the design strategies for metasurfaces: Fundamental physics, optimization, and beyond. Advanced Materials, page 2206399, 2022.
[29] Chen-Xu Liu and Gui-Lan Yu. Deep learning for the design of phononic crystals and elastic metamaterials. Journal of Computational Design and Engineering, 10(2):602–614, 2023.
[30] Xiaoyang Zheng, Xubo Zhang, Ta-Te Chen, and Ikumu Watanabe. Deep learning in mechanical metamaterials: From prediction and generation to inverse design. Advanced Materials, page 2302530, 2023.
[31] George Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems, 2(4):303–314, 1989.
[32] Tianping Chen and Hong Chen. Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks, 6(4):911–917, 1995.
[33] Samuel Lanthaler, Zongyi Li, and Andrew M Stuart. The nonlocal neural operator: Universal approximation. arXiv preprint arXiv:2304.13221, 2023.
[34] Bo Zhu, Mélina Skouras, Desai Chen, and Wojciech Matusik. Two-scale topology optimization with microstructures. ACM Transactions on Graphics, 2017.
[35] Christian Schumacher, Bernd Bickel, Jan Rys, Steve Marschner, Chiara Daraio, and Markus Gross. Microstructures to control elasticity in 3D printing. ACM Transactions on Graphics (TOG), 34(4):1–13, 2015.
[36] Julian Panetta, Qingnan Zhou, Luigi Malomo, Nico Pietroni, Paolo Cignoni, and Denis Zorin. Elastic textures for additive fabrication. ACM Transactions on Graphics (TOG), 34(4):1–12, 2015.
[37] Itzik Malkiel, Michael Mrejen, Achiya Nagler, Uri Arieli, Lior Wolf, and Haim Suchowski. Plasmonic nanostructure design and characterization via deep learning. Light: Science & Applications, 7(1):1–8, 2018.
[38] W. Ma, F. Cheng, Y. Xu, Q. Wen, and Y. Liu. Probabilistic representation and inverse design of metamaterials based on a deep generative model with semi-supervised learning strategy. Advanced Materials, 31(35):1–9, 2019.
[39] Z. Liu, D. Zhu, S. P. Rodrigues, K. T. Lee, and W. Cai. Generative model for the inverse design of metasurfaces. Nano Letters, 18(10):6570–6576, 2018.
[40] Sunae So and Junsuk Rho. Designing nanophotonic structures using conditional deep convolutional generative adversarial networks. Nanophotonics, 8(7):1255–1261, 2019.
[41] Ke Liu, Rachel Sun, and Chiara Daraio. Growth rules for irregular architected materials with programmable properties. Science, 377(6609):975–981, 2022.
[42] MG Moharam, Eric B Grann, Drew A Pommet, and TK Gaylord. Formulation for stable and efficient implementation of the rigorous coupled-wave analysis of binary gratings. JOSA A, 12(5):1068–1076, 1995.
[43] Allen Taflove, Susan C Hagness, and Melinda Piket-May. Computational electromagnetics: the finite-difference time-domain method. The Electrical Engineering Handbook, 3(629-670):15, 2005.
[44] Jian-Ming Jin. The Finite Element Method in Electromagnetics. John Wiley & Sons, 2015.
[45] Hoyeong Kwon, Dimitrios Sounas, Andrea Cordaro, Albert Polman, and Andrea Alù. Nonlocal metasurfaces for optical signal processing. Physical Review Letters, 121(17):173004, 2018.
[46] Adam C Overvig, Stephanie C Malek, and Nanfang Yu. Multifunctional nonlocal metasurfaces. Physical Review Letters, 125(1):017402, 2020.
[47] James R Capers, Stephen J Boyes, Alastair P Hibbins, and Simon AR Horsley. Designing the collective non-local responses of metasurfaces. Communications Physics, 4(1):209, 2021.
[48] Sensong An, Bowen Zheng, Mikhail Y Shalaginov, Hong Tang, Hang Li, Li Zhou, Yunxi Dong, Mohammad Haerinia, Anuradha Murthy Agarwal, Clara Rivero-Baleine, et al. Deep convolutional neural networks to predict mutual coupling effects in metasurfaces. Advanced Optical Materials, 10(3):2102113, 2022.
[49] Jaebum Noh, Yong-Hyun Nam, Sun-Gyu Lee, In-Gon Lee, Yongjune Kim, Jeong-Hae Lee, and Junsuk Rho. Reconfigurable reflective metasurface reinforced by optimizing mutual coupling based on a deep neural network. Photonics and Nanostructures - Fundamentals and Applications, 52:101071, 2022.
[50] Yihan Ma, Jonas Florentin Kolb, Achintha Avin Ihalage, Andre Sarker Andy, and Yang Hao. Incorporating meta-atom interactions in rapid optimization of large-scale disordered metasurfaces based on deep interactive learning. Advanced Photonics Research, 4(4):2200099, 2023.
[51] Evan W Wang, David Sell, Thaibao Phan, and Jonathan A Fan. Robust design of topology-optimized metasurfaces. Optical Materials Express, 9(2):469–482, 2019.
[52] Ke Liu, Larissa S Novelino, Paolo Gardoni, and Glaucio H Paulino. Big influence of small random imperfections in origami-based metamaterials. Proceedings of the Royal Society A, 476(2241):20200236, 2020.
[53] Wei Chen, Doksoo Lee, Oluwaseyi Balogun, and Wei Chen. GAN-DUF: Hierarchical deep generative models for design under free-form geometric uncertainty. Journal of Mechanical Design, 145(1):011703, 2022.
[54] Mooseok Jang, Yu Horie, Atsushi Shibukawa, Joshua Brake, Yan Liu, Seyedeh Mahsa Kamali, Amir Arbabi, Haowen Ruan, Andrei Faraon, and Changhuei Yang. Wavefront shaping with disorder-engineered metasurfaces. Nature Photonics, 12(2):84–90, 2018.
[55] Mingfeng Xu, Qiong He, Mingbo Pu, Fei Zhang, Ling Li, Di Sang, Yinghui Guo, Renyan Zhang, Xiong Li, Xiaoliang Ma, et al. Emerging long-range order from a freeform disordered metasurface. Advanced Materials, 34(12):2108709, 2022.
[56] Sunkyu Yu, Cheng-Wei Qiu, Yidong Chong, Salvatore Torquato, and Namkyoo Park. Engineered disorder in photonics. Nature Reviews Materials, 6(3):226–243, 2021.
[57] T. S. Kao, E. T. F. Rogers, J. Y. Ou, and N. I. Zheludev. "Digitally" addressable focusing of light into a subwavelength hot spot. Nano Letters, 12(6):2728–2731, 2012.
[58] Doksoo Lee, Shizhou Jiang, Oluwaseyi Balogun, and Wei Chen. Dynamic control of plasmonic localization by inverse optimization of spatial phase modulation. ACS Photonics, 9(2):351–359, 2021.
[59] Myungjoon Kim, Nayoung Kim, and Jonghwa Shin. Concurrent inverse design of structured light and metasurface for nanopatterning process. In Frontiers in Optics, pages FM5C–6. Optica Publishing Group, 2022.
[60] Huaiqian You, Quinn Zhang, Colton J Ross, Chung-Hao Lee, and Yue Yu. Learning deep implicit Fourier neural operators (IFNOs) with applications to heterogeneous material modeling. Computer Methods in Applied Mechanics and Engineering, 398:115296, 2022.
[61] Huaiqian You, Quinn Zhang, Colton J Ross, Chung-Hao Lee, Ming-Chen Hsu, and Yue Yu. A physics-guided neural operator learning approach to model biological tissues from digital image correlation measurements. Journal of Biomechanical Engineering, 144(12):121012, 2022.
[62] Ning Liu, Yue Yu, Huaiqian You, and Neeraj Tatikola. INO: Invariant neural operators for learning complex physical systems with momentum conservation. In International Conference on Artificial Intelligence and Statistics, pages 6822–6838. PMLR, 2023.
[63] Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Neural operator: Graph kernel network for partial differential equations. arXiv preprint arXiv:2003.03485, 2020.
[64] Zongyi Li, Nikola Borislavov Kovachki, Kamyar Azizzadenesheli, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Fourier neural operator for parametric partial differential equations. In International Conference on Learning Representations, 2021.
[65] Lu Lu, Pengzhan Jin, and George Em Karniadakis. DeepONet: Learning nonlinear operators for identifying differential equations based on the universal approximation theorem of operators. arXiv preprint arXiv:1910.03193, 2019.
[66] Gaurav Gupta, Xiongye Xiao, and Paul Bogdan. Multiwavelet-based operator learning for differential equations. Advances in Neural Information Processing Systems, 34:24048–24062, 2021.
[67] Huaiqian You, Yue Yu, Marta D'Elia, Tian Gao, and Stewart Silling. Nonlocal kernel network (NKN): A stable and resolution-independent deep neural network. Journal of Computational Physics, 2022. arXiv preprint arXiv:2201.02217.
[68] Nikola Kovachki, Zongyi Li, Burigede Liu, Kamyar Azizzadenesheli, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Neural operator: Learning maps between function spaces. arXiv preprint arXiv:2108.08481, 2021.
[69] Lu Zhang, Huaiqian You, Tian Gao, Mo Yu, Chung-Hao Lee, and Yue Yu. MetaNO: How to transfer your knowledge on learning hidden physics. arXiv preprint arXiv:2301.12095, 2023.
[70] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention (MICCAI 2015): 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III, pages 234–241. Springer, 2015.
[71] Zhaocheng Liu, Zhaoming Zhu, and Wenshan Cai. Topological encoding method for data-driven photonics inverse design. Optics Express, 28(4):4825–4835, 2020.
[72] Yu-Chin Chan, Daicong Da, Liwei Wang, and Wei Chen. Remixing functionally graded structures: data-driven topology optimization with multiclass shape blending. Structural and Multidisciplinary Optimization, 65(5), April 2022.
[73] Anshuman Singh, James T Hugall, Gaetan Calbris, and Niek F van Hulst. Far-field control of nanoscale hotspots by near-field interference. ACS Photonics, 7(9):2381–2389, 2020.
[74] Robin D Buijs, Tom AW Wolterink, Giampiero Gerini, Ewold Verhagen, and A Femius Koenderink. Programming metasurface near-fields for nano-optical sensing. Advanced Optical Materials, 9(15):2100435, 2021.
[75] Hajun Yoo, Hyunwoong Lee, Seongmin Im, Sukhyeon Ka, Gwiyeong Moon, Kyungnam Kang, and Donghyun Kim. Switching on versatility: Recent advances in switchable plasmonic nanostructures. Small Science, page 2300048, 2023.
[76] John M Erdman. Functional analysis and operator algebras: An introduction. Version October 4, 2015.
[77] Yannick Augenstein, Taavi Repan, and Carsten Rockstuhl. Neural operator-based surrogate solver for free-form electromagnetic inverse design. ACS Photonics, 2023.
[78] Gege Wen, Zongyi Li, Kamyar Azizzadenesheli, Anima Anandkumar, and Sally M Benson. U-FNO: An enhanced Fourier neural operator-based deep-learning model for multiphase flow. Advances in Water Resources, 163:104180, 2022.
[79] Lu Lu, Raphaël Pestourie, Steven G Johnson, and Giuseppe Romano. Multifidelity deep neural operators for efficient learning of partial differential equations with application to fast inverse design of nanoscale heat transport. Physical Review Research, 4(2):023210, 2022.
[80] Zhijie Li, Wenhui Peng, Zelong Yuan, and Jianchun Wang. Modeling long-term large-scale dynamics of turbulence by implicit U-Net enhanced Fourier neural operator. arXiv preprint arXiv:2305.10215, 2023.
[81] Jose Antonio Lara Benitez, Takashi Furuya, Florian Faucher, Xavier Tricoche, and Maarten V de Hoop. Fine-tuning neural-operator architectures for training and generalization. arXiv preprint arXiv:2301.11509, 2023.
[82] Eldad Haber, Lars Ruthotto, Elliot Holtham, and Seong-Hwan Jun. Learning across scales: multiscale methods for convolution neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.
[83] Zongyi Li, Daniel Zhengyu Huang, Burigede Liu, and Anima Anandkumar. Fourier neural operator with learned deformations for PDEs on general geometries. arXiv preprint arXiv:2207.05209, 2022.
[84] Ning Liu, Siavash Jafarzadeh, and Yue Yu. Domain agnostic Fourier neural operators. arXiv preprint arXiv:2305.00478, 2023.
[85] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[86] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016.
[87] Oleksandr Buchnev, Nina Podoliak, Malgosia Kaczmarek, Nikolay I Zheludev, and Vassili A Fedotov. Electrically controlled nanostructured metasurface loaded with liquid crystal: toward multifunctional photonic switch. Advanced Optical Materials, 3(5):674–679, 2015.
[88] Bing Li, Chao Zhang, Fang Peng, Wenzhi Wang, Bryan D Vogt, and KT Tan. 4D printed shape memory metamaterial for vibration bandgap switching and active elastic-wave guiding. Journal of Materials Chemistry C, 9(4):1164–1173, 2021.
[89] Liwei Wang, Yilong Chang, Shuai Wu, Ruike Renee Zhao, and Wei Chen. Physics-aware differentiable design of magnetically actuated kirigami for shape morphing. arXiv preprint arXiv:2308.05054, 2023.
[90] Chao Wang, Zhi Zhao, and Xiaojia Shelly Zhang. Inverse design of magneto-active metasurfaces and robots: Theory, computation, and experimental validation. Computer Methods in Applied Mechanics and Engineering, 413:116065, 2023.
[91] Stephanie C Malek, Ho-Seok Ee, and Ritesh Agarwal. Strain multiplexed metasurface holograms on a stretchable substrate. Nano Letters, 17(6):3641–3645, 2017.
[92] Osama R Bilal, André Foehr, and Chiara Daraio. Reprogrammable phononic metasurfaces. Advanced Materials, 29(39):1700628, 2017.
[93] Zefeng Xu and Yu-Sheng Lin. A stretchable terahertz parabolic-shaped metamaterial. Advanced Optical Materials, 7(19):1900379, 2019.
[94] R Timothy Marler and Jasbir S Arora. The weighted sum method for multi-objective optimization: new insights. Structural and Multidisciplinary Optimization, 41:853–862, 2010.
[95] Ozan Sener and Vladlen Koltun. Multi-task learning as multi-objective optimization. Advances in Neural Information Processing Systems, 31, 2018.
[96] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in PyTorch, 2017.
[97] Doksoo Lee, Yu-Chin Chan, Wei Chen, Liwei Wang, Anton van Beek, and Wei Chen. t-METASET: Task-aware acquisition of metamaterial datasets through diversity-based active learning. Journal of Mechanical Design, 145(3):031704, 2023.
[98] Yu-Chin Chan. PhD dissertation, 2022.
[99] Leland McInnes, John Healy, and James Melville. UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426, 2018.
[100] Amr M Shaltout, Vladimir M Shalaev, and Mark L Brongersma. Spatiotemporal light control with active metasurfaces. Science, 364(6441):eaat3100, 2019.
[101] Lei Kang, Ronald P Jenkins, and Douglas H Werner. Recent progress in active optical metasurfaces. Advanced Optical Materials, 7(14):1801813, 2019.
[102] Oluwaseyi Balogun. Optically detecting acoustic oscillations at the nanoscale: Exploring techniques suitable for studying elastic wave propagation. IEEE Nanotechnology Magazine, 13(3):39–54, 2019.
[103] COMSOL AB. COMSOL Multiphysics® v.5.6. Stockholm, Sweden, 2020. Available from: https://www.comsol.com.
[104] Ruichen Jin, Wei Chen, and Agus Sudjianto. An efficient algorithm for constructing optimal design of computer experiments. In International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, volume 37009, pages 545–554, 2003.
[105] Pengzhen Ren, Yun Xiao, Xiaojun Chang, Po-Yao Huang, Zhihui Li, Brij B Gupta, Xiaojiang Chen, and Xin Wang. A survey of deep active learning. ACM Computing Surveys (CSUR), 54(9):1–40, 2021.
[106] M. Chen, J. Jiang, and J. A. Fan. Design space reparameterization enforces hard geometric constraints in inverse-designed nanophotonic devices. ACS Photonics, 7(8):2039–2046, 2020.
[107] Alec M. Hammond et al. Photonic topology optimization with semiconductor-foundry design-rule constraints. Optics Express, 29(15):23916–23938, 2021.
[108] Ibrahim Tanriover, Doksoo Lee, Wei Chen, and Koray Aydin. Deep generative modeling and inverse design of manufacturable free-form dielectric metasurfaces. ACS Photonics, 2022.
[109] Liwei Wang, Siyu Tao, Ping Zhu, and Wei Chen. Data-driven topology optimization with multiclass microstructures using latent variable Gaussian process. Journal of Mechanical Design, 143(3):031708, 2021.
[110] Min Zhu, Handi Zhang, Anran Jiao, George Em Karniadakis, and Lu Lu. Reliable extrapolation of deep neural operators informed by physics or sparse observations. Computer Methods in Applied Mechanics and Engineering, 412:116064, 2023.
[111] J. Snoek, H. Larochelle, and R. P. Adams. Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, volume 4, pages 2951–2959, 2012.
[112] Kevin Swersky, Jasper Snoek, and Ryan P. Adams. Multi-task Bayesian optimization. In Advances in Neural Information Processing Systems 26, 2013.
[113] A. Hanif Halim, Idris Ismail, and Swagatam Das. Performance assessment of the metaheuristic optimization algorithms: an exhaustive review. Artificial Intelligence Review, 54:2323–2409, 2021.
[114] Bobak Shahriari, Kevin Swersky, Ziyu Wang, Ryan P Adams, and Nando De Freitas. Taking the human out of the loop: A review of Bayesian optimization. Proceedings of the IEEE, 104(1):148–175, 2015.
[115] Jasper Snoek, Hugo Larochelle, and Ryan P Adams. Practical Bayesian optimization of machine learning algorithms. Advances in Neural Information Processing Systems, 25, 2012.
[116] COMSOL Multiphysics v. 5.3. https://www.comsol.com, 2021. Accessed: 2021-12-12.