
Sangeun Oh¹
Department of Mechanical Systems Engineering,
Sookmyung Women's University,
Cheongpa-ro 47-gil 100, Yongsan-gu,
Seoul 04310, Korea
e-mail: ohsaeu@gmail.com

Yongsu Jung¹
Department of Mechanical Engineering,
Korea Advanced Institute of Science and Technology,
291, Daehak-ro, Yuseong-gu,
Daejeon 34141, Korea
e-mail: yongsu50@kaist.ac.kr

Seongsin Kim
Department of Mechanical Systems Engineering,
Sookmyung Women's University,
Cheongpa-ro 47-gil 100, Yongsan-gu,
Seoul 04310, Korea
e-mail: kss@sm.ac.kr

Ikjin Lee²
Department of Mechanical Engineering,
Korea Advanced Institute of Science and Technology,
291, Daehak-ro, Yuseong-gu,
Daejeon 34141, Korea
e-mail: ikjin.lee@kaist.ac.kr

Namwoo Kang²
Department of Mechanical Systems Engineering,
Sookmyung Women's University,
Cheongpa-ro 47-gil 100, Yongsan-gu,
Seoul 04310, Korea
e-mail: nwkang@sm.ac.kr

Deep Generative Design: Integration of Topology Optimization and Generative Models

Deep learning has recently been applied to various research areas of design optimization. This study presents the need for and effectiveness of adopting deep learning in the generative design (or design exploration) research area. This work proposes an artificial intelligence (AI)-based deep generative design framework that is capable of generating numerous design options which are not only aesthetic but also optimized for engineering performance. The proposed framework integrates topology optimization and generative models (e.g., generative adversarial networks (GANs)) in an iterative manner to explore new design options, thus generating a large number of designs starting from limited previous design data. In addition, anomaly detection can evaluate the novelty of generated designs, thus helping designers choose among design options. A 2D wheel design problem is applied as a case study for validation of the proposed framework. The framework yields better aesthetics, diversity, and robustness of generated designs than previous generative design methods. [DOI: 10.1115/1.4044229]

Keywords: generative design, design exploration, topology optimization, deep learning, generative models, generative adversarial networks, design automation, design methodology, design optimization, expert systems, product design

1 Introduction

Artificial intelligence (AI) covers all technologies that pursue machines imitating human behavior. Machine learning is a subset of AI that attempts to learn meaningful patterns from raw data using statistical methods. Deep learning seeks to enhance this learning ability with a hierarchical neural network structure that consists of several layers [1,2]. Recently, deep learning has been employed not only in computer science but also in various engineering domains, where physics-based approaches can often be replaced effectively with data-driven approaches. In mechanical engineering, it has been widely applied to autonomous driving, robot control, biomedical engineering, prognostics and health management, and design optimization.

Deep learning research related to design optimization can be classified as follows: (1) topology optimization [3–7], (2) shape parameterization [8,9], (3) computer-aided engineering (CAE) simulation and metamodeling [10–12], (4) material design [13–15], and (5) design preference estimation [16,17]. Section 2 introduces each line of research in detail.

This study commenced from the idea that deep learning is indispensable for generative design (i.e., design exploration) and that the aforementioned deep learning research can be integrated through the proposed framework in a holistic view.

1.1 Generative Design Using Topology Optimization. Generative design is a design exploration method typically performed by varying the design geometry parametrically and assessing the performance of the output designs [18,19]. Recent research on generative design utilizes topology optimization as a design generator instead of design parameterization and develops methods to generate numerous designs in parallel with cloud computing [20]. A designer provides diverse boundary conditions for topology optimization, which yields different optimized designs under the different boundary conditions. Matejka et al. [21] state that "generative design varies the parameters of the problem definition while parametric design varies parameters of the geometry directly." Generative design aspires to explore design options that satisfy structural performance and to choose designs suitable for various designers' needs, whereas conventional topology optimization seeks a single optimal design. The generative design concept has been developed rapidly, implemented in commercial software [22], and applied to the design of various structures such as automobiles, buildings, and aircraft.

The overall process of generative design consists of four stages [20]:

• Step 1: Set the design parameters and goals for topology optimization.

¹Sangeun Oh and Yongsu Jung contributed equally to this work.
²Corresponding authors.
Paper presented at the ASME 2018 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference, Quebec City, Canada, Aug. 26–29, 2018.
Contributed by the Design Automation Committee of ASME for publication in the JOURNAL OF MECHANICAL DESIGN. Manuscript received March 1, 2019; final manuscript received July 1, 2019; published online July 15, 2019. Assoc. Editor: Xiaoping Qian.
Journal of Mechanical Design, NOVEMBER 2019, Vol. 141 / 111405-1. Copyright © 2019 by ASME


• Step 2: Generate designs by running topology optimization under different parameters.
• Step 3: Study the options, iterate, and select the best design.
• Step 4: Manufacture the design by 3D printing.

In particular, the development of 3D printing technology has enabled the production of complicated geometric designs, which further accelerates the practical use of generative design.

However, several drawbacks of current generative design have been identified [23]. First, it does not use state-of-the-art AI technology (e.g., deep learning), even though topology optimization can be considered AI in a broad sense. Second, it is unable to create aesthetic designs: topology optimization focuses solely on engineering performance, so the results appear counterintuitive from an aesthetic point of view. However, aesthetics is an essential factor for customers and should be balanced with engineering performance [24]. Third, the diversity of optimized designs is low. Varying conditions can yield a design that is new in terms of pixel intensity or density, yet such designs may still be similar in terms of human perception.

1.2 Generative Models for the Generative Design. Generative models, one of the promising deep learning areas, can enhance research on generative design. A generative model is an algorithm for constructing a generator that learns the probability distribution of training data and generates new data based on the learned distribution. In particular, the variational autoencoder (VAE) and the generative adversarial network (GAN) are popular generative models used in design optimization, where high-dimensional design variables are encoded in a low-dimensional design space [13,14]. These models are also utilized in design exploration and shape parameterization [8,9].

The use of generative models to produce engineering designs directly is limited [23]. However, this study claims that the limitations can be overcome by integration with topology optimization. First, a generative model requires a large amount of training data, but the training data accumulated for various designs in industry are confidential and difficult to access. The numerous designs obtained from topology optimization are expected to serve as training data. Second, a generative model cannot guarantee engineering feasibility. In this case, engineering performance can be evaluated through topology optimization. Third, mode collapse is one of the main problems of generative models, producing only specific results and causing large variance in output quality. However, low-quality designs can be improved through postprocessing with topology optimization.

1.3 Research Purpose. This study proposes a new framework for generative design that integrates topology optimization and generative models. The proposed framework can provide a large number of meaningful design options accounting for engineering performance and aesthetics, and it allows evaluation and visualization of the new design options according to design attributes (e.g., novelty, compliance, and cost).

The proposed framework consists of an iterative design exploration part and a design evaluation part. Iterative design exploration generates a large number of new designs iteratively from a small number of previous designs. Design evaluation quantifies the novelty of the generated designs in comparison with the previous designs and visualizes the design options together with other design attributes. The proposed framework is applied to the 2D wheel design of an automobile for demonstration.

The rest of this study is structured as follows. Section 2 reviews previous design optimization studies that employ deep learning. Section 3 proposes the deep generative design framework. Sections 4 and 5 present topology optimization and generative models, respectively, which are the main methodologies used in our study. Sections 6 and 7 present and discuss the case study results, respectively. Finally, Sec. 8 summarizes the conclusions and limitations and introduces future work.

2 Literature Review: Deep Learning in Design Optimization Research

Deep learning-based design optimization research can be summarized as follows. First, topology optimization can be interpreted as deep learning from the perspective of pixel-wise image labeling because it distributes material in the design domain while accounting for objective and constraint functions [3]. Intensive computational demand is a drawback of topology optimization owing to the iterative finite element analysis (FEA) of the structure. Thus, Yu et al. [4] propose a framework in which a low-resolution structure is first generated using a convolutional neural network (CNN)-based encoder and decoder trained on 2D optimal topologies, and is then upscaled to a high-resolution structure through a conditional GAN (cGAN) without any iterative FEA. Banga et al. [5] employ a 3D CNN-based encoder and decoder, which allows the final optimized structure to be obtained from intermediate structural inputs. Guo et al. [6] perform topology optimization on latent variables of a reduced-dimensional space using a VAE and style transfer. Cang et al. [7] use active learning to constrain the training of a neural network so that the network produces near-optimal topologies.

Second, deep learning applications on shape parameterization (i.e., design representation) have been developed. Parameterization to define the geometry is necessary for shape optimization. However, defining variables for complicated geometry is extremely difficult, and strong correlation between variables can hinder a mathematical approach to parameterization. Burnap et al. [8] show the possibility of parameterizing the 2D shape of an automobile using a VAE, and Umetani [9] shows the parameterization of 3D automobile meshes using an autoencoder.

Third, deep learning has been applied to metamodeling and simulation-based optimization. Many studies have tried to apply deep learning to computational fluid dynamics (CFD) because CFD simulation has a high computational cost. Guo et al. [10] propose a CNN model to predict the CFD responses for the 2D shape of an automobile, Tompson et al. [11] accelerate Eulerian fluid simulation by approximating a linear system with a CNN, and Farimani et al. [12] propose a cGAN to solve steady-state heat conduction and incompressible fluid flow problems.

Fourth, deep learning applications on material design have been developed because of the direct relationship between the density of elements in the material structure and the pixels of images, which makes the transformation from material structures to images easy. Yang et al. [14] obtain optimal microstructures using a Bayesian optimization framework in which the microstructure is mapped into low-dimensional latent variables by a GAN. Cang et al. [13] propose a feature extraction method that converts the microstructure to a low-dimensional design space through a convolutional deep belief network. Cang et al. [15] show that an arbitrary amount of microstructure can be generated from a small amount of training data, proposing a VAE under a morphology constraint.

Finally, deep learning applications on design preference estimation have been developed. Burnap et al. [16] improve the prediction accuracy of a customer preference model by training a restricted Boltzmann machine with the original design variables as input and extracting features. Pan et al. [17] propose learning preferences on aesthetic appeal using a Siamese neural network architecture with a cGAN.

In addition, deep learning is used in various other engineering design tasks. For instance, Dering and Tucker [25] successfully map the form and function of a design using a 3D convolutional neural network, and Dering and Tucker [26] propose an image generation model that integrates deep learning and big data.

The categorization of the aforementioned research is not mutually independent, thus allowing integration and subsequently enhancing the conventional design optimization process. In particular, the authors claim that generative design lies at the intersection of all these research areas and would be a very promising research area within an AI-based design automation system.

3 Deep Generative Design Framework

We propose a deep generative design framework that integrates topology optimization and generative models. Figure 1 shows the entire process, which consists of two main parts (i.e., iterative design exploration and design evaluation) and nine stages. Iterative design exploration integrates topology optimization and generative models to produce new designs, and design evaluation quantifies and evaluates the novelty and main attributes of the new designs. Each stage is explained as follows.

In stage 1, previous designs from the market and industry are collected as reference designs for stage 2. In this study, a reference design is defined as a benchmark design used to create new designs in topology optimization.

In stage 2, new designs are obtained by topology optimization based on the reference designs. In this step, topology optimization has a multi-objective function of (1) compliance minimization, which represents engineering performance, and (2) minimization of the difference (i.e., pixel-wise L1 distance) from the reference design, which aims to improve aesthetics and diversity. Because a trade-off between the two objectives exists, different designs are obtained by varying their relative weights. Here, we assume that previous designs in the market are more aesthetic than conventional topology optimization results because they were created by human designers. If topology optimization can benchmark the shape of previous designs, the final optimization result is expected to be more aesthetic. In terms of diversity, the more diverse the reference designs used as input, the more diverse the topology optimization results. Details of this stage are given in Sec. 4.2.

In stage 3, similar designs obtained from topology optimization are filtered out by a similarity criterion with a user-specified threshold, to reduce the computational cost incurred by irrelevant designs. This study uses the pixel-wise L1 distance as the similarity criterion, with the threshold set to 10³. If this value is set more tightly, the number of generated designs is reduced, but the differentiation between designs improves. This process is repeated in stage 6 to filter the designs generated by the generative models. It is important to note that the L1 norm treats two identical designs with different rotations as different. To resolve this limitation, one can map the design variables into the latent space through the generative models and compute the L1 norm there. In our case study, however, such cases were rare, so we used the L1 norm in the design space only.
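As a concrete sketch, the stage-3 filtering reduces to a greedy pass over the candidates (a Python/NumPy illustration; the function and variable names are ours, and the tiny 2 × 2 "designs" stand in for the 128 × 128 pixel arrays of the paper):

```python
import numpy as np

def filter_similar(designs, threshold):
    """Keep only designs whose pixel-wise L1 distance to every
    already-kept design exceeds the threshold (stage-3 sketch)."""
    kept = []
    for d in designs:
        if all(np.abs(d - k).sum() > threshold for k in kept):
            kept.append(d)
    return kept

# Toy candidates: b duplicates a and is filtered out; c survives.
a = np.array([[1.0, 0.0], [0.0, 1.0]])
b = a.copy()
c = np.array([[0.0, 1.0], [1.0, 0.0]])
unique = filter_similar([a, b, c], threshold=0.5)
```

With a full-resolution design, the same code applies unchanged; only the threshold scale (e.g., 10³) reflects the larger pixel count.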

In stage 4, the ratio of the number of new designs in the current iteration to the number of total designs in the previous iteration is calculated. If it is smaller than a user-specified threshold, the iterative design exploration terminates and the process jumps to stage 8; otherwise, it proceeds to stage 5. In our study, the termination threshold is set to 0.3. This value can be adjusted according to how diverse the user wants the generated designs to be.

In stage 5, new designs are created by the generative models after learning the designs aggregated in the current iteration, and they are used as reference designs in stage 2 after similar designs are filtered out in stage 6. We use a boundary equilibrium GAN (BEGAN), whose structure and settings are introduced in Sec. 5.2. The iterative design exploration is performed continuously from stage 2 to stage 6 until the termination criterion is satisfied, i.e., until the number of generated designs is substantial. The purpose of this iterative process is to create a large number of diverse designs starting from the small number of previous designs of stage 1.
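The control flow of stages 2–6 can be sketched as follows (a hedged Python illustration: every callback is a toy stand-in for topology optimization, the generative model, or the similarity filter, and the integer "designs" are placeholders, not actual wheel images):

```python
def explore(initial_designs, ratio_threshold, topology_optimize, generate, filter_new):
    """Control-flow sketch of stages 2-6 (names are illustrative):
    alternate topology optimization and a generative model, stopping
    when the share of genuinely new designs falls below the threshold."""
    total = list(initial_designs)
    references = list(initial_designs)
    while True:
        new = filter_new(topology_optimize(references), total)  # stages 2-3
        ratio = len(new) / len(total)                           # stage 4
        total += new
        if ratio < ratio_threshold:
            return total                                        # go to stage 8
        references = filter_new(generate(total), total)         # stages 5-6

# Toy stand-ins: "designs" are integers, and each step proposes one candidate.
result = explore(
    [0], ratio_threshold=0.3,
    topology_optimize=lambda refs: [r + 1 for r in refs],
    generate=lambda total: [max(total) + 2],
    filter_new=lambda cand, total: [d for d in cand if d not in total],
)
```

The loop terminates when a single new candidate against four accumulated designs yields a ratio of 0.25 < 0.3, mirroring the stage-4 criterion.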

Next, the design evaluation part consists of stages 7–9. Stage 7 builds a loss function (i.e., a reconstruction error function) by employing an autoencoder trained on the previous designs of stage 1. This function can be used to evaluate design novelty in comparison with the previous designs; details of the autoencoder model are introduced in Sec. 5.2. In stage 8, the design options obtained from the iterative design exploration are evaluated on the basis of various design attributes that are essential to the designers, including not only the novelty of the generated designs but also physical quantities such as volume and compliance. Finally, in stage 9, the trade-off between attributes is demonstrated by plotting the designs along each axis of the design attributes, and appropriate designs can be chosen according to the relative importance of each attribute.
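A minimal illustration of the stage-7 idea, using a rank-k linear model (a PCA-style stand-in, not the paper's actual autoencoder) so that the reconstruction-error novelty measure can be shown end to end:

```python
import numpy as np

def fit_linear_autoencoder(X, k):
    """Rank-k linear stand-in for the stage-7 autoencoder, fitted on
    previous designs (rows of X are flattened design images)."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]                  # encoder/decoder share this basis

def novelty(x, mean, basis):
    """Reconstruction error: small for designs resembling the training
    set, large for novel ones."""
    code = basis @ (x - mean)            # encode
    recon = mean + basis.T @ code        # decode
    return float(np.sum((x - recon) ** 2))

# Toy data: previous designs are mixtures of two fixed 4-pixel patterns.
p1 = np.array([1.0, 1.0, 0.0, 0.0])
p2 = np.array([0.0, 0.0, 1.0, 1.0])
rng = np.random.default_rng(0)
previous = rng.random((20, 2)) @ np.vstack([p1, p2])
mean, basis = fit_linear_autoencoder(previous, k=2)
seen = novelty(previous[0], mean, basis)
unseen = novelty(np.array([1.0, 0.0, 0.0, 1.0]), mean, basis)  # off-pattern
```

A design inside the training subspace reconstructs almost exactly, while one outside it does not; the paper's deep autoencoder plays the same role with a nonlinear encoder and decoder.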

The proposed framework is applied to the 2D wheel of an automobile as a case study. Sections 4 and 5 present detailed descriptions of the two main methodologies of our proposed framework, i.e., topology optimization and generative models.

Fig. 1 Deep generative design framework

4 Topology Optimization

Section 4.1 introduces the basic theory of topology optimization from which our study stems, and Sec. 4.2 presents the proposed topology optimization method for the wheel design case study.

4.1 Basic Theory

4.1.1 Density-Based Approach. Topology optimization is commonly referred to as a material distribution method; it has been developed and spread to a wide range of disciplines. The basic concept is how to distribute material in a given design domain without any preconceived design [27–29]. In this study, compliance minimization, which relates to the stiffness of a structure, is carried out to redesign existing wheels. Many approaches, such as homogenization and level-set methods, could be applied, but we choose the density-based approach, in which the material distribution is parameterized by element densities. In particular, solid isotropic material with penalization (SIMP) implicitly penalizes intermediate density values to lead to a black-and-white design. The basic formulation of SIMP for compliance minimization can be written as [29,30]

$$
\begin{aligned}
\min \quad & c(\mathbf{x}) = \mathbf{U}^{T}\mathbf{K}\mathbf{U} = \sum_{e=1}^{N_e} \mathbf{u}_e^{T}\,(E_e(x_e)\,\mathbf{k}_0)\,\mathbf{u}_e \\
\text{s.t.} \quad & V(\mathbf{x})/V_0 = f \\
& \mathbf{K}\mathbf{U} = \mathbf{F} \\
& 0 \le x_e \le 1, \quad e = 1, \ldots, N_e
\end{aligned}
\tag{1}
$$

where $\mathbf{U}$ is a displacement vector, $\mathbf{K}$ is a global stiffness matrix, $c(\mathbf{x})$ is the compliance, $\mathbf{u}_e$ is an element displacement vector, $\mathbf{k}_0$ is an element stiffness matrix, $f$ is the volume fraction, $N_e$ is the number of elements, $x_e$ is the design variable (i.e., density) of element $e$, and $V(\mathbf{x})$ and $V_0$ are the material volume and the volume of the design domain, respectively. In modified SIMP, the density is directly associated with Young's modulus, expressed as [31]

$$E_e(x_e) = E_{\min} + x_e^{p}\,(E_0 - E_{\min}) \tag{2}$$

where $p$ is a penalization factor to ensure a black-and-white design, and $E_{\min}$ is introduced to avoid numerical instability when the density of an element becomes zero.
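Equation (2) is a one-liner in code; the sketch below transcribes it in Python/NumPy using the conventional defaults of the 88-line MATLAB code ($E_0 = 1$, $E_{\min} = 10^{-9}$, $p = 3$):

```python
import numpy as np

def young_modulus(x, E0=1.0, Emin=1e-9, p=3.0):
    """Modified SIMP interpolation of Eq. (2): intermediate densities are
    penalized so the optimizer is pushed toward 0/1 designs."""
    return Emin + x**p * (E0 - Emin)

# Void, intermediate, and solid densities.
E = young_modulus(np.array([0.0, 0.5, 1.0]))
```

A half-density element yields a stiffness of only 0.5³ = 0.125 of the solid value, which is what makes intermediate densities uneconomical for the optimizer.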

Many studies have been conducted to enhance the performance of topology optimization, for example, through filtering techniques. In this study, we develop our code based on the 99- and 88-line MATLAB codes, which are among the simplest and most efficient two-dimensional topology optimization codes written in MATLAB [30,32]. We therefore briefly explain the algorithms used in this study, such as the sensitivity analyses and filtering techniques.

4.1.2 Sensitivity Analysis and Filtering Techniques. In gradient-based optimization, the sensitivity analysis of the objective and constraint functions with respect to each design variable is required to provide an accurate search direction to the optimizer. The sensitivities with respect to the element densities are given by

$$\frac{\partial c}{\partial x_e} = -p\,x_e^{p-1}(E_0 - E_{\min})\,\mathbf{u}_e^{T}\mathbf{k}_0\mathbf{u}_e \tag{3}$$

and

$$\frac{\partial V}{\partial x_e} = \frac{\partial}{\partial x_e}\left(\sum_{e=1}^{N_e} x_e v_e\right) = 1 \tag{4}$$

under the assumption that all elements have a unit volume. The optimality criteria (OC) method, one of the classical approaches to structural optimization problems, is employed in this paper. The OC method updates the design variables as

$$
x_e^{\text{new}} =
\begin{cases}
\max(0,\, x_e - m) & \text{if } x_e B_e^{\eta} \le \max(0,\, x_e - m) \\
\min(1,\, x_e + m) & \text{if } x_e B_e^{\eta} \ge \min(1,\, x_e + m) \\
x_e B_e^{\eta} & \text{otherwise}
\end{cases}
\tag{5}
$$

where $m$ is a positive move limit, $\eta$ is a numerical damping coefficient, and

$$B_e = \frac{-\,\partial c/\partial x_e}{\lambda\,\partial V/\partial x_e} \tag{6}$$

The Lagrange multiplier $\lambda$ associated with the volume fraction constraint can be obtained from a bisection algorithm, one of the popular root-finding algorithms. The termination criterion for convergence can be written as

$$\|\mathbf{x}^{\text{new}} - \mathbf{x}\|_{\infty} \le \varepsilon \tag{7}$$

where $\varepsilon$ is a tolerance usually set to a relatively small value such as 0.01.
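Equations (5)–(7) together form the OC update with a bisection on the Lagrange multiplier; a compact Python/NumPy sketch is given below (the sensitivities are toy values, and the move limit and damping follow common 88-line-code conventions):

```python
import numpy as np

def oc_update(x, dc, dv, volfrac, move=0.2, eta=0.5):
    """Optimality-criteria update of Eqs. (5)-(6): bisection on the
    Lagrange multiplier until the volume fraction constraint is met."""
    l1, l2 = 1e-9, 1e9
    while (l2 - l1) / (l1 + l2) > 1e-3:
        lmid = 0.5 * (l1 + l2)
        Be = -dc / (lmid * dv)                     # Eq. (6)
        x_new = np.clip(x * Be**eta,               # Eq. (5) with move limits
                        np.maximum(0.0, x - move),
                        np.minimum(1.0, x + move))
        if x_new.mean() > volfrac:                 # too much material ->
            l1 = lmid                              # increase the multiplier
        else:
            l2 = lmid
    return x_new

x = np.full(100, 0.5)
dc = -np.linspace(1.0, 2.0, 100)   # toy compliance sensitivities (negative)
dv = np.ones(100)                  # unit element volumes, Eq. (4)
x_new = oc_update(x, dc, dv, volfrac=0.5)
```

Elements with larger sensitivity magnitude receive more material, while the bisection drives the mean density onto the prescribed volume fraction.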

To ensure the existence of well-posed, mesh-independent solutions, several strategies to avoid checkerboard patterns and gray-scale issues are introduced. In this study, we apply the so-called three-field SIMP, which has a projection scheme; the three fields are the original density, the filtered density, and the projected density. Detailed descriptions can be found in the literature [33].

The basic filters applied to topology optimization are the sensitivity and density filters, used in one-field and two-field SIMP, respectively. The main idea of both techniques is to replace the sensitivity or the physical element density with a weighted average over a neighborhood. The neighborhood is defined based on the distance from the center of the element, and the maximum distance included in the neighborhood is a user-specified parameter referred to as the mesh-independent radius. The sensitivity filter can be written as [34]

$$\widetilde{\frac{\partial c}{\partial x_e}} = \frac{1}{x_e \sum_{f=1}^{N} H_f}\sum_{f=1}^{N} H_f\,x_f\,\frac{\partial c}{\partial x_f} \tag{8}$$

where the convolution operator is

$$H_f = r_{\min} - \operatorname{dist}(e, f) \tag{9}$$

and the subscript $f$ denotes an element whose center-to-center distance $\operatorname{dist}(e, f)$ from element $e$ is smaller than $r_{\min}$.

The density filter instead defines the physical density by weighted averaging. The weighted-average concept is the same as in the sensitivity filter of Eq. (8), but the density is filtered instead of the sensitivity [35,36]:

$$\tilde{x}_e = \frac{\sum_{f=1}^{N} H_f x_f}{\sum_{f=1}^{N} H_f} \tag{10}$$

Therefore, the original and filtered densities can be referred to as the design variable and the physical density, respectively. The sensitivity analysis with respect to the design variables should then be modified by introducing the physical density via the chain rule. A detailed description can be found in the literature [37].
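A direct, unoptimized transcription of Eqs. (9) and (10) in Python/NumPy on a tiny square grid (a sketch only; production codes precompute the weights as a sparse matrix rather than looping):

```python
import numpy as np

def density_filter(x, rmin):
    """Density filter of Eqs. (9)-(10) on a square grid: each physical
    density is a distance-weighted average of its neighborhood."""
    n = x.shape[0]
    xf = np.zeros_like(x)
    for i in range(n):
        for j in range(n):
            wsum, val = 0.0, 0.0
            for k in range(n):
                for l in range(n):
                    H = rmin - np.hypot(i - k, j - l)   # Eq. (9)
                    if H > 0:                            # inside the radius
                        wsum += H
                        val += H * x[k, l]
            xf[i, j] = val / wsum                        # Eq. (10)
    return xf

x = np.zeros((5, 5))
x[2, 2] = 1.0                      # a single solid element
xf = density_filter(x, rmin=1.5)   # smeared over its neighborhood
</n```

The single solid pixel is smeared over its neighbors, which is exactly the mechanism that rules out checkerboard patterns but also introduces the gray transitions addressed next.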

The weighted average in both filtering methods avoids the checkerboard pattern in the optimum design. However, the density filter can induce gray transitions between the solid and void regions. Thus, a third density field, the so-called projection filter, is introduced. It mitigates the gray-transition problem by projecting densities toward solid and void, usually with a smoothed Heaviside projection [31,37,38]. In this study, we apply the Heaviside projection filter to the filtered density obtained from Eq. (10). The projection filter can be written as

$$\bar{x}_e = 1 - e^{-\beta \tilde{x}_e} + \tilde{x}_e\,e^{-\beta} \tag{11}$$

where $\beta$ is a parameter related to the slope of the projection and can be updated during the optimization. In three-field SIMP with the projection filter, the sensitivity analysis is modified relative to Eq. (8) because the finite element analysis is performed on the physical density obtained from Eq. (11); the sensitivities with respect to the design variables can be easily derived using the chain rule.
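The effect of Eq. (11) can be illustrated directly (a Python/NumPy sketch; the β values are arbitrary examples, not the continuation schedule used in the paper):

```python
import numpy as np

def heaviside_projection(x_filt, beta):
    """Projection filter of Eq. (11): pushes filtered densities toward
    0/1; a larger beta gives a sharper (more black-and-white) design."""
    return 1.0 - np.exp(-beta * x_filt) + x_filt * np.exp(-beta)

x = np.array([0.0, 0.5, 1.0])
mild = heaviside_projection(x, beta=1.0)    # nearly the identity map
sharp = heaviside_projection(x, beta=16.0)  # intermediate -> almost solid
```

The endpoints 0 and 1 are fixed points of the projection for any β, while intermediate densities are driven toward solid as β grows, which is why β is typically increased gradually during the optimization.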

4.2 Proposed Topology Optimization. In the deep generative design framework of Fig. 1, stage 2 generates new engineering designs through topology optimization reflecting the shape of the reference designs, which can be either previous wheel designs from stage 1 or designs generated by the generative models (stages 5 and 6). A number of engineering performance measures have to be considered when designing the wheel of a vehicle, but for simplicity this research generally employs the compliance obtained from static analysis. Figure 2 sketches the design domain and boundary conditions for the 2D wheel design. The original design domain is 128 by 128 elements, and the reference designs also have 128 by 128 pixels. The outer ring of the wheel is set as a non-design region to maintain the shape of the rim, and the inner region is set as a fixed boundary condition for the connecting parts. Therefore, the spokes are the main components in the design domain.

The element stiffness matrix is based on the four-node bilinear square elements of the 88-line MATLAB code [32]. Normal and shear forces are uniformly exerted along the surface, which are common load conditions in 2D wheel optimization: the normal force represents uniform tire pressure, and the shear force represents tangential traction. The ground reaction induced by the vehicle weight is disregarded because it would require an additional symmetry condition.

The ratio between the normal and shear forces is a user-specified parameter that can significantly change the optimized wheel design. The force ratio is defined as the magnitude of the normal force divided by that of the shear force.

However, varying only the boundary conditions limits the production of meaningful and diverse designs. Thus, a new objective function for topology optimization is introduced so that the design generator can produce engineering designs while maintaining the shape of various reference designs. The modified objective function is formulated as

$$f(\mathbf{x}) = \mathbf{U}^{T}\mathbf{K}(\mathbf{x})\mathbf{U} + \lambda\,\|\mathbf{x}^{*} - \mathbf{x}\|_1 \tag{12}$$

where $\mathbf{U}^{T}\mathbf{K}(\mathbf{x})\mathbf{U}$ is the compliance, $\lambda$ is a user-specified similarity parameter, and $\mathbf{x}^{*}$ contains the elements of the reference design. The L1 norm between the generated and reference designs therefore represents their similarity. The L1 norm is preferred over the L2 norm because it alleviates blurring and yields better design quality. Cross-entropy loss could also serve as an alternative to the L1 norm depending on how the problem is defined; however, since cross entropy is more commonly used in discrete problems, the pixel-wise L1 norm is more appropriate for our continuous problem.

All other processes are identical to the conventional three-field SIMP explained in Sec. 4.1. The reference design $\mathbf{x}^{*}$ is a binary matrix with entries from the Boolean domain, since it is a black-and-white design. Hence, the sensitivity of the additional similarity term can be expressed as

$$\frac{\partial}{\partial \mathbf{x}}\big(\lambda\,\|\mathbf{x}^{*} - \mathbf{x}\|_1\big) \cong -\lambda\,\mathbf{x}^{*} \tag{13}$$

This expression means that if a specific element of the reference design is solid, its sensitivity is set to $-\lambda$, and to 0 otherwise, so that no positive sensitivity is provided to the OC optimizer. In other words, the purpose of Eq. (13) is to give additional sensitivity weight to the solid elements of the reference design so that the optimized design is drawn toward the shape of the reference design.
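The rule of Eq. (13) reduces to an element-wise operation that is simply added to the compliance sensitivity of Eq. (3); a Python/NumPy sketch with a toy binary reference and hypothetical compliance sensitivities:

```python
import numpy as np

def similarity_sensitivity(x_ref, lam):
    """Sensitivity of the similarity term, Eq. (13): each solid element of
    the binary reference design contributes -lambda (attracting material);
    void elements contribute zero, never a positive value."""
    return -lam * x_ref

x_ref = np.array([1.0, 0.0, 1.0, 0.0])        # toy binary reference design
dc_compliance = np.array([-2.0, -1.0, -0.5, -0.25])  # hypothetical Eq. (3) values
dc_total = dc_compliance + similarity_sensitivity(x_ref, lam=0.05)
```

The combined sensitivity `dc_total` is what the OC update of Eq. (5) would consume; solid reference elements become slightly more attractive to the optimizer, with λ controlling how strongly.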

Consequently, five discrete levels of the similarity parameter and the force ratio are configured to generate new designs from topology optimization, as listed in Table 1. For instance, if 100 reference designs are available, then 2500 designs (100 reference designs × 5 similarity levels × 5 force-ratio levels) can be obtained from topology optimization. The types of conditions and the number of levels, such as voxel sizes, solver parameters, and the number of iterations in topology optimization, can be determined by the designers [21].
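The 5 × 5 condition grid of Table 1 and the 2500-design count quoted above can be reproduced directly (a short Python sketch; the variable names are ours):

```python
from itertools import product

# Condition levels of Table 1 and the example count quoted in the text.
similarity_levels = [0.0005, 0.005, 0.05, 0.5, 5]
force_ratio_levels = [0, 0.1, 0.2, 0.3, 0.4]
n_reference_designs = 100

# One topology optimization run per (similarity, force ratio) pair
# and per reference design.
conditions = list(product(similarity_levels, force_ratio_levels))
n_generated = n_reference_designs * len(conditions)
```

Each tuple in `conditions` parameterizes one run of the modified topology optimization of Eq. (12) for every reference design.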

5 Generative Models

Section 5.1 briefly introduces popular generative models and explains the BEGAN model, which is mainly used in our proposed framework. Section 5.2 shows how BEGAN is utilized for generating and evaluating wheel designs.

5.1 Basic Theory

5.1.1 Generative Adversarial Networks. GANs are designed to infer the data-generating distribution $p_{\text{data}}(x)$ by making the model distribution produced by the generator, $p_g$, close to the real data distribution, where $x$ is a sample of real data and $G$ is the generator's differentiable function with parameters $\theta_g$. The function $G$ takes an input noise variable $z$ and tries to map it to the real data space by adjusting $\theta_g$, and is thus written as $G(z; \theta_g)$. Similarly, the discriminator's differentiable function is denoted $D(x; \theta_d)$, which attempts to predict the probability that its input comes from the real dataset. The zero-sum game between the discriminator and the generator is equivalent to maximizing $\log D(x)$ and minimizing $\log(1 - D(G(z)))$ [39,40]:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))] \qquad (14)$$

Fig. 2 Design domain and boundary conditions of a 2D wheel design

Table 1 Condition parameter levels

Condition    Levels
Similarity   Five levels: 0.0005 / 0.005 / 0.05 / 0.5 / 5
Force ratio  Five levels: 0 / 0.1 / 0.2 / 0.3 / 0.4

Journal of Mechanical Design NOVEMBER 2019, Vol. 141 / 111405-5
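The value function of Eq. (14) can be evaluated numerically for batches of discriminator outputs; this is a minimal NumPy sketch of the objective itself, not a training loop:

```python
import numpy as np

def gan_value(d_real, d_fake):
    """V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))] for batches of
    discriminator outputs on real and generated samples."""
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# At the theoretical optimum D(x) = D(G(z)) = 0.5, the value is -2 log 2 ≈ -1.386.
print(gan_value(np.array([0.5, 0.5]), np.array([0.5, 0.5])))
```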

With this standard GAN structure, various GANs have been

developed by modifying the generator, the discriminator, or objec-

tive function.

To overcome the notorious difﬁculty in training GANs, deep con-

volutional GANs (DCGANs) provide a stable training model,

which works on various datasets by constructing a convolutional

neural network in the generator and discriminator. In addition,

DCGANs suggest certain techniques such as removing a fully con-

nected layer on top, applying batch normalization, and using the

leaky rectiﬁed linear unit activation function [41].

Adversarially learned inference and bidirectional GANs (BiGANs) add an encoding–decoding model to the generator to improve the quality of generated samples efficiently. Moreover, BiGANs emphasize taking advantage of the learned features [42,43]. For high-resolution images with stable convergence

and scalability, energy-based GANs (EBGANs) have proven to

produce realistic 128 × 128 images. EBGANs consider the discrimi-

nator as an energy function and the energy as the reconstruction

error. From this point of view, an autoencoder architecture is used

for the discriminator [44]. The autoencoder consists of encoder

and decoder functions. The input value is transformed through the

encoder and is restored to its original form again through the

decoder [2]. Wasserstein GANs (WGANs) achieve good image quality by changing the distance measure between two probability distributions. WGANs show that the earth-mover

distance, which is also called Wasserstein-1, provides a differenti-

able function and, thus, produces meaningful gradients, whereas

Kullback–Leibler and Jensen–Shannon divergence in previous

research do not when the two probability distributions are disjoint

[40]. BEGANs also use the Wasserstein distance as a measure of

convergence. BEGANs present an equilibrium concept, balancing

the discriminator and the generator in training and the numerical

way of global convergence [45].

Aside from enhancing the image quality, the way to control the

mode of generated outputs is presented by cGANs [46] and informa-

tion maximizing generative adversarial nets (InfoGAN) [47]. cGANs

provide additional input values to the generator and the discriminator

for categorical image generation. Furthermore, InfoGAN lets the

generator produce uncategorical images by adding a latent code

that can be categorical and continuous. It is useful for ﬁnding

hidden representations from large amounts of data. However, inten-

tionally creating a speciﬁc image is still difﬁcult.

Hitherto, many studies on GANs have contributed to good image quality in terms of convergence and stability. However, GANs are still difficult to utilize from the design engineering point of view; for example, image quality can be uneven even for models saved at the same checkpoint, especially when relatively small amounts of training data are available and the images have insufficient engineering features.

5.1.2 Boundary Equilibrium GAN. This paper employs

BEGAN among GANs for the proposed framework because it pro-

vides a robust visual quality in a fast and stable manner. The auto-

encoder architecture as the discriminator used in EBGANs is also

introduced by BEGANs. Similar to WGANs, BEGANs use the

Wasserstein distance as a measure of convergence. With these techniques, BEGANs achieve reliable gradients even for high-resolution 128 × 128 images, for which stable training is otherwise difficult.

Rather than trying to match the probability distribution of real

data, BEGANs focus on matching autoencoder loss distribution.

It measures the loss, which is the difference between the sample

and its output that passed through the autoencoder. Subsequently,

a lower bound of the Wasserstein distance between the autoencoder

loss distribution of real and that of generated samples is derived.

The autoencoder loss function, $L: \mathbb{R}^{N_x} \rightarrow \mathbb{R}^{+}$, is defined as

$$L(v) = |v - A(v)|^{\eta} \qquad (15)$$

where $A: \mathbb{R}^{N_x} \rightarrow \mathbb{R}^{N_x}$ is the autoencoder function, $\eta \in \{1, 2\}$ is the target norm, and $v \in \mathbb{R}^{N_x}$ is a sample of dimension $N_x$.

Applying Jensen's inequality, the lower bound of the Wasserstein distance is derived as

$$|m_1 - m_2| \qquad (16)$$

where $m_i \in \mathbb{R}$ is the mean of the autoencoder loss distribution.

For the maximization of Eq. (16) for the discriminator, with $m_1 \rightarrow 0$ and $m_2 \rightarrow \infty$, the BEGAN objective is described as minimizing the discriminator's autoencoder loss function $L_D$ and the generator's loss function $L_G$, where $\theta_D$ and $\theta_G$ are the parameters of the discriminator and the generator, $G: \mathbb{R}^{N_z} \rightarrow \mathbb{R}^{N_x}$ is the generator function, $z \in [-1, 1]^{N_z}$ are uniform random samples of dimension $N_z$, and $z_D$ and $z_G$ are samples drawn from $z$. The objective functions are defined as

$$
\begin{cases}
L_D = L(x) - k_t\, L(G(z_D)) & \text{for } \theta_D \\
L_G = L(G(z_G)) & \text{for } \theta_G \\
k_{t+1} = k_t + \lambda_k\,(\gamma L(x) - L(G(z_G))) & \text{for } \gamma = \dfrac{\mathbb{E}[L(G(z))]}{\mathbb{E}[L(x)]}
\end{cases}
\qquad (17)
$$

where $k_t \in [0, 1]$ is a control factor that determines how much $L(G(z_D))$ is reflected during gradient descent, $\lambda_k$ is a proportional gain for $k$ (like a learning rate in machine learning terms), and $\gamma \in [0, 1]$ is a diversity ratio that yields higher image diversity as it increases. Given that $k_t$ is updated at every training step to maintain the equilibrium $\mathbb{E}[L(G(z))] = \gamma\, \mathbb{E}[L(x)]$, the global measure of convergence is regarded as the closest reconstruction $L(x)$ plus the minimum absolute value of the proportional control error $|\gamma L(x) - L(G(z_G))|$ [31]:

$$M_{\text{global}} = L(x) + |\gamma L(x) - L(G(z_G))| \qquad (18)$$
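The proportional control update for k_t and the convergence measure M_global in Eqs. (17) and (18) reduce to a few lines; the clamping of k_t to [0, 1] follows its stated range, and the function names are ours:

```python
def update_k(k_t, loss_real, loss_fake, gamma=0.7, lambda_k=0.001):
    """One step of the k_t update in Eq. (17); k_t is kept in [0, 1]."""
    k_next = k_t + lambda_k * (gamma * loss_real - loss_fake)
    return min(max(k_next, 0.0), 1.0)

def m_global(loss_real, loss_fake, gamma=0.7):
    """Global convergence measure of Eq. (18)."""
    return loss_real + abs(gamma * loss_real - loss_fake)

# At equilibrium L(G(z)) = gamma * L(x), k_t stops changing and
# M_global equals the real reconstruction loss alone.
print(update_k(0.0, loss_real=1.0, loss_fake=0.7))  # 0.0
print(m_global(1.0, 0.7))                           # 1.0
```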

5.2 Proposed Generative Models. Although designers can

utilize various state-of-the-art generative models in parallel for

stage 5 in the proposed framework (see Fig. 1), we chose and mod-

iﬁed BEGAN architecture as illustrated in Fig. 3. The encoder of the

discriminator is a network consisting of ﬁve 3 × 3 convolutional

layers, four 2 × 2 subsampling layers with stride 2, and one fully connected layer. For the dimension of each layer, w × h × n represents the width, height, and the number of kernels, respectively.

The exponential linear unit is used for the activation function. Gen-

erator and decoder of the discriminator use a similar structure as this

but by replacing subsampling to upsampling. The model was

trained with 16, 32, 64, and 128-dimensional latent variables z,

and all the results were utilized. Adam optimizer was used with a

learning rate of 0.00008, an initial value of k_t of 0, λ_k of 0.001, γ of 0.7, and a minibatch size of 16 (see Sec. 5.1.2). The learning rate

parameter is set with reference to the settings of previous papers

that studied image quality using BEGAN [48,49].
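As a rough check of the described encoder, the spatial sizes can be traced, assuming the 3 × 3 convolutions preserve width and height (an assumption; the padding is not stated in the text) while each 2 × 2 stride-2 subsampling layer halves them:

```python
def encoder_shapes(width=128, height=128, n_subsample=4):
    """Spatial size after each of the four stride-2 subsampling layers,
    assuming the 3x3 convolutions are size-preserving."""
    shapes = [(width, height)]
    for _ in range(n_subsample):
        width, height = width // 2, height // 2
        shapes.append((width, height))
    return shapes

print(encoder_shapes())  # [(128, 128), (64, 64), (32, 32), (16, 16), (8, 8)]
```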

In stage 7, autoencoder is used to evaluate design novelty. Recon-

struction errors, which are the loss functions of autoencoder, are

widely used to detect anomaly [50]. The idea is that autoencoder

can effectively reconstruct similar data to training data but not


dissimilar data. This study assumes that design novelty can be measured in the same way as anomaly detection. The previous designs from stage 1 are used as training data for the autoencoder; the trained autoencoder can then calculate the reconstruction error of new designs generated by iterative design exploration. A new design with a higher reconstruction error is regarded as more novel. This study employs the autoencoder structure used as the discriminator of BEGAN in Fig. 3, with the same hyperparameter settings. The reconstruction error is computed as in Eq. (15).
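The novelty scoring can be sketched with a toy linear "autoencoder" (a PCA projection standing in for the trained network; the dimensions and data below are synthetic): designs far from the training manifold reconstruct poorly and thus score as more novel.

```python
import numpy as np

rng = np.random.default_rng(0)
previous = rng.normal(size=(200, 64))                             # stand-ins for flattened previous designs
generated = previous[:10] + rng.normal(scale=3.0, size=(10, 64))  # dissimilar "generated" designs

# Toy linear autoencoder: project onto the top principal components of the training data.
mean = previous.mean(axis=0)
_, _, vt = np.linalg.svd(previous - mean, full_matrices=False)
basis = vt[:8]  # 8-dimensional latent code

def novelty(x):
    """L1 reconstruction error per design (eta = 1 in Eq. (15)); higher = more novel."""
    recon = (x - mean) @ basis.T @ basis + mean
    return np.abs(x - recon).sum(axis=1)

print(novelty(generated).mean() > novelty(previous).mean())  # True
```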

In addition, a regression model can be used as an alternative to the autoencoder when previous design data are insufficient. The topology design results (from stage 2) at the first iteration carry the similarity parameter (λ), so a regression model can be built with the output y set to the similarity parameter. VGG16 [51], a popular CNN, was tested as the regression model. Results show that CNN-based regression can also predict similarity, which is the inverse of novelty.

6 Results

This section shows the results of a case study applying the proposed framework to the 2D wheel design. In stage 1, frontal wheel design images on the market were collected by web crawling and converted to binary images. A total of 658 binary wheel images were obtained through postprocessing and serve as reference designs for the first iteration of iterative design exploration.
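The conversion to binary images amounts to thresholding; a minimal sketch (the 0.5 threshold is an assumed value, and real crawled images would also need cropping and resizing):

```python
import numpy as np

def to_binary(gray, threshold=0.5):
    """Threshold a grayscale wheel image (values in [0, 1]) into a
    binary material/void image usable as a reference design."""
    return (np.asarray(gray) >= threshold).astype(np.uint8)

print(to_binary([[0.1, 0.9], [0.6, 0.4]]))  # [[0 1] [1 0]]
```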

6.1 Iterative Design Exploration

6.1.1 Design Exploration by Topology Optimization (Stages 2

and 3). Topology optimization is performed in parallel according

to similarity and force ratio parameters listed in Table 1. Figure 4

shows an example of optimized designs according to ﬁve levels

of force ratio when the similarity is 0.0005. A large shear force is

observed to make many thin spokes of whirlwind shape. On the

other hand, large normal force makes thick and less curved spokes.

Figure 5shows the optimal designs under ﬁve levels of similarity

(i.e., Figs. 5(b)–5(f)). Figure 5(g)shows the reference designs, and

Fig. 5(a)shows the result when the reference design is unused. The

ratio between normal and shear force is set to 0.1 in case of refer-

ence design A and 0.2 in case of reference design B, and the

volume fraction is identical to reference designs. Table 2lists the

similarity and compliance of each optimal design. The optimal

designs from topology optimization evidently indicate the trade-off

between engineering performance and similarity to the reference

design.

Figure 6demonstrates the effectiveness of the proposed objective

function in Eq. (12), which states that the proposed method can

yield different designs reﬂecting the shape of the reference design

when other boundary conditions are the same for all optimized

design, such as the force ratio. The results also suggest that the range of similarity values over which the optimal designs change continuously varies with the reference design. Therefore, an experimental investigation of this range is encouraged in advance.

Consequently, 1619 new designs have been created after ﬁltering

at stage 3. Stage 4 is passed automatically in the ﬁrst iteration of the

iterative design exploration, and the ratio of new designs is calcu-

lated as a criterion from the second iteration.

6.1.2 Design Exploration by the Generative Model (Stages 5

and 6). A total of 2277 designs were available after stage 4: 658 previous designs identified at stage 1 and 1619 new designs obtained from topology optimization. These 2277 designs are used as training data for BEGAN at stage 5. The training

takes around 3 h on four GTX 1080 GPUs in parallel. Figure 7

Fig. 3 Network architecture of BEGAN and autoencoder: (a) overall architecture, (b) encoder of discriminator, and

(c) generator/decoder of discriminator

Fig. 4 Topology optimization of the wheel design when the

force ratio is set to (a)0,(b) 0.1, (c) 0.2, (d) 0.3, and (e) 0.4


presents examples of 128 × 128 images from the BEGAN generator,

and Fig. 8shows that global convergence Mglobal is achieved

without oscillation. A total of 128 designs were achieved through

ﬁltering at stage 6. These designs were used as reference designs

for stage 2 at the second iteration of iterative design exploration.

A total of 385 new topology designs are then generated by topology optimization (stages 2 and 3).

As shown in Fig. 7, the BEGAN design results are roughly sym-

metrical, circular, and have holes in the center. Many GAN studies in computer science use face datasets as benchmark data, and their results also demonstrate that GANs capture the symmetry of human faces very well without being taught [41,45].

6.2 Design Evaluation

6.2.1 Novelty Evaluation by the Autoencoder (Stage 7). For the

autoencoder, 80% of the previous designs were used as training data

and 20% as test data. Figure 9shows examples comparing recon-

struction results between test data of previous designs and generated

designs. Designs similar to previous ones portray satisfactory recon-

struction, while dissimilar designs portray otherwise.

6.2.2 Evaluation and Visualization (Stages 8 and 9). Finally,

2004 new designs are generated after two iterations, which are

not included in the previous design set. Table 3summarizes the

number of input and output designs used at each stage. The termi-

nation criterion calculated at stage 4 after two iterations is 23.8% (385/1619 ≈ 0.238), which is less than the threshold of 0.3.
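The termination check at stage 4 reduces to a one-line ratio test; the function name is ours:

```python
def should_terminate(n_new_designs, n_prev_new_designs, threshold=0.3):
    """Stop iterating when the ratio of newly generated designs
    to the previous iteration's count falls below the threshold."""
    return n_new_designs / n_prev_new_designs < threshold

print(should_terminate(385, 1619))  # True: 385/1619 ≈ 0.238 < 0.3
```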

Examples of design options are shown in Fig. 10. A 3D scatter

plot for 2004 design options was crafted using three design attri-

butes (novelty, cost, and compliance) as an axis (Fig. 11(a)).

Each attribute value used in the plot is normalized from 0 to

1. Figure 11(b)shows that trade-offs between compliance and

cost make a smooth Pareto curve because designs are all topologi-

cally optimized. On the Pareto curve, two designs are shown as

examples, one with the lowest cost and the other with the lowest

compliance. Figures 11(c)and 11(d)show trade-offs between

novelty and other attributes. Assuming novelty is a positive trait,

two designs located on the Pareto curves are shown as examples

in the ﬁgures, respectively.
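Extracting the Pareto-optimal designs from the three normalized attributes is straightforward; here cost and compliance are minimized and novelty is negated so it is treated as a benefit (a sketch with synthetic points, not the paper's data):

```python
import numpy as np

def pareto_indices(points):
    """Indices of non-dominated points when every column is minimized."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = any(
            np.all(q <= p) and np.any(q < p)
            for j, q in enumerate(pts) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

# Columns: (cost, compliance, -novelty); (3, 3, -0.1) is dominated by (2, 2, -0.5).
designs = [(1, 5, -0.2), (2, 2, -0.5), (5, 1, -0.9), (3, 3, -0.1)]
print(pareto_indices(designs))  # [0, 1, 2]
```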

Designers can trade-off three attributes and select designs accord-

ing to their design purpose and preference. Then, they can create a

3D design based on a 2D design and prototype it by 3D printing.

Figure 12 shows an example of a 3D wheel design (i.e., STL ﬁle

for 3D printing) after selecting a 2D design.

Fig. 5 Topology optimization results when the similarity is set to (a)0,(b) 0.0005, (c) 0.005, (d) 0.05, (e) 0.5, (f) 5, and (g) reference

design

Fig. 6 Optimized designs under the same boundary conditions and different reference designs

Table 2 Similarity and compliance of each reference design in Fig. 5

Similarity                       0     0.0005  0.005  0.05  0.5    5
Compliance (reference design A)  5.28  5.23    6.17   7.14  8.87   9.28
Compliance (reference design B)  8.88  8.90    8.94   9.76  10.71  13.02


7 Discussion

This section analyzes and discusses performance and necessity of

the main methods used in the proposed framework.

7.1 Topology Optimization. As an additional analysis, to

check the necessity of the reference designs, we conducted topology

optimization without reference designs as shown in Fig. 13. Results

without reference designs converge to an identical optimum if the boundary conditions do not change; therefore, this approach cannot yield aesthetic diversity. It also sometimes fails to converge when starting from a uniform density with no shear force, since the displacement caused by the normal force exerted on the surface is almost uniform (see Figs. 13(a) and 13(b)). Therefore, the reference designs enhance the diversity of designs while achieving convergence.

In addition, topology optimization can theoretically generate inﬁ-

nite designs without the help from generative models, when topol-

ogy optimization results are used as reference designs for topology

optimization in the next iteration. We used the topology optimiza-

tion results of Figs. 4(b)–4(e)as reference designs for the next

Fig. 7 Generated wheel designs by BEGAN

Fig. 8 Convergence results of BEGAN

Fig. 9 Comparison between previous designs and generated designs in reconstruction of the

autoencoder

Table 3 Number of newly generated designs at each stage

Iteration                           First                  First             Second
Method                              Topology optimization  BEGAN             Topology optimization
                                    (stages 2 and 3)       (stages 5 and 6)  (stages 2 and 3)
Input                               658                    2277 (=1619+658)  128
Output                              1619                   128               385
New topology designs (accumulated)  1619                   —                 2004 (=1619+385)


topology optimization. Figure 14 shows the selected results that were most different from the reference designs. The results show that iterative topology optimization generates similar designs because their original parent (reference design) is the same. This is empirical evidence that reference designs with fundamentally different topologies are needed to obtain diverse generated designs, and generative models make this possible. Therefore, we do not use topology optimization results as reference designs in the proposed process.

7.2 Boundary Equilibrium Generative Adversarial

Network. One of the main problems of GANs is that there is no standardized method of measuring model performance. Several approaches exist. First, one can visually inspect whether the generated data look reasonable. Second, convergence criteria can be checked as shown in

Fig. 8. Third, we can compute the difference between real data and

generated data in feature space, for example, using Inception Score

or Fréchet Inception Distance [52]. In our proposed method, we

take the second approach and check the global convergence for

Fig. 10 Generated design options

Fig. 11 Visualized design options by three attributes: novelty, cost, and compliance


model validation. One advantage of our framework is that generated

designs by BEGAN are not used as ﬁnal output but as input for topol-

ogy optimization. Therefore, even though the performance of

BEGAN is slightly low, the framework can still work robustly.

Other than BEGAN, we additionally tested other generative

models such as DCGAN and VAE which were introduced in

Sec. 5.1. In our experiments, DCGAN and VAE display similar

designs as shown in Fig. 15. They appear to produce more detailed

and complex shapes than BEGAN, but they are blurrier and less

symmetrical. In addition, the generated designs lack novelty in

that the designs are more similar to those in the train data. We

also tested these results as reference designs for topology optimiza-

tion, but the results produced are less diverse.

In sum, we acquire some empirical insights for utilizing generative

models in design exploration. First, BEGAN is a good choice for

generating reference designs because it tends to create topologically

novel, yet simple, designs. Reference designs with overly detailed shapes fail to generate diverse designs in topology optimization. In fact, CAE software packages also have a feature that simplifies computer-aided

design (CAD) models before conducting topology optimization

(e.g., ﬁlling up holes). In future research, we plan to train a

network that selects only the “good”reference designs from the

results of multiple generative models. This is because DCGAN

and VAE could also have some reference designs that BEGAN

cannot create. Second, designers have to try different latent space

dimensions and epochs. In our study, we varied dimensions for

latent variables (i.e., 16, 32, 64, and 128), saved the models at differ-

ent epochs, and ultimately obtained data from many variations of the

model. It is not practical to select only one latent dimension or epoch

because each model generates unique designs.

Fig. 12 An example of a 3D wheel design using the selected 2D design

Fig. 13 Topology optimization without reference designs when the force ratio is set to (a)0,

(b) 0.1, (c) 0.2, (d) 0.3, and (e) 0.4

Fig. 14 Iterative design exploration by topology optimization only


7.3 Autoencoder. To validate the performance of the autoencoder more quantitatively, we tested how well the model can classify previous designs versus generated designs, assuming generated designs have more novelty than previous designs. We selected 131 test designs (20% of the 658 previous designs) and 131 generated designs and created a confusion matrix as shown in Fig. 16. The autoencoder calculates a reconstruction error for the 262 test designs, and we sort them by error size. We classify the top 50% of the designs as generated designs and the bottom 50% as previous designs. Looking at the false positive and false negative cases, which represent misclassified designs, we see that it is not easy to distinguish between previous and generated designs even by the human eye. As measuring criteria, both precision and recall are 91.6%.
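The evaluation protocol, thresholding at the median reconstruction error and computing precision and recall, can be sketched as follows (the error values below are synthetic, not the paper's):

```python
import numpy as np

def precision_recall(errors_previous, errors_generated):
    """Label the top 50% of designs by reconstruction error as 'generated'
    and score the split against the true labels."""
    errors = np.concatenate([errors_previous, errors_generated])
    is_generated = np.concatenate([
        np.zeros(len(errors_previous), dtype=bool),
        np.ones(len(errors_generated), dtype=bool),
    ])
    top_half = np.argsort(errors)[::-1][: len(errors) // 2]
    predicted = np.zeros(len(errors), dtype=bool)
    predicted[top_half] = True
    tp = np.sum(predicted & is_generated)
    precision = tp / predicted.sum()
    recall = tp / is_generated.sum()
    return precision, recall

# One previous design (error 4.0) outranks one generated design (error 3.5),
# so one of each is misclassified: precision = recall = 0.75.
print(precision_recall(np.array([1.0, 2.0, 3.0, 4.0]), np.array([3.5, 5.0, 6.0, 7.0])))
```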

8 Conclusion

This study proposes a design automation framework that gener-

ates various designs ensuring engineering performance and aesthet-

ics, and its effectiveness is demonstrated by the 2D wheel design

case study. The contributions of this research can be summarized as follows.

First, this research considers engineering performance and aes-

thetics simultaneously. The proposed framework is able to control

the similarity with reference designs and engineering performance

as a multi-objective function.

Second, a large number of designs starting from a small number

of designs was generated. An iterative process is proposed where

topology optimization is conducted to create training data for the

generative models, and output designs from generative models are

used as reference designs for topology optimization again.

Third, the proposed framework offered diverse designs in com-

parison with the conventional generative design. Moreover,

the increased diversity is attributed to the use of reference designs generated by generative models.

Fourth, robustness in the quality of designs is improved. Conventional generative models are prone to mode collapse and large variance in quality. However, in the proposed framework, the results of the generative model are refined through topology optimization instead of being used directly (i.e., topology optimization acts as a postprocessing step).

Finally, the novelty of generated designs relative to the previous designs can be evaluated; the reconstruction error of the autoencoder is used as an index of similarity to existing designs.

This research is performed on a 2D design space and pixel-wise

images, which is identiﬁed as its main drawback. Thus, a 3D design

application with voxel data should be further investigated for prac-

tical design, and various case studies should be tested. In addition, a

recommendation system that suggests the appropriate designs (i.e.,

predicting the preference of designers and consumers among design

candidates) will be carried out.

Acknowledgment

The authors would like to thank Ah Hyeon Jin, Seo Hui Joung,

Gyuwon Lee, and Yun Ha Park who are undergraduate interns of

Sookmyung Women’s University for preprocessing and postproces-

sing of data and also Yonggyun Yu of Korea Advanced Atomic

Research Institute for his advice and ideas.

Funding Data

•National Research Foundation of Korea (NRF) (Grant No.

2017R1C1B2005266; Funder ID: 10.13039/501100003725).

•NRF (Grant No. 2018R1A5A7025409; Funder ID: 10.13039/

501100001321).

Fig. 15 Example of generated designs by DCGAN and VAE

Fig. 16 Confusion matrix for the autoencoder


Nomenclature

c=compliance

p=penalization factor

z=noise variable

D=differentiable function of the discriminator

G=differentiable function of the generator

E=expectation

L=loss function of the autoencoder

x_e = density variable
x̃_e = filtered density variable
x̄_e = projected density variable

References

[1] LeCun, Y., Bengio, Y., and Hinton, G., 2015, “Deep Learning,”Nature,

521(7553), p. 436.

[2] Goodfellow, I., Bengio, Y., and Courville, A., 2016, Deep Learning, The MIT

Press, Cambridge, MA.

[3] Sosnovik, I., and Oseledets, I., 2017, “Neural Networks for Topology

Optimization,”preprint arXiv:1709.09578.

[4] Yu, Y., Hur, T., Jung, J., and Jang, I. G., 2019, “Deep Learning for Determining a

Near-Optimal Topological Design Without Any Iteration,”Struct. Multidiscipl.

Optim.,59(3), pp. 787–799.

[5] Banga, S., Gehani, H., Bhilare, S., Patel, S., and Kara, L., 2018, “3D Topology

Optimization Using Convolutional Neural Networks,”preprint

arXiv:1808.07440.

[6] Guo, T., Lohan, D. J., Cang, R., Ren, M. Y., and Allison, J. T., 2018, “An Indirect

Design Representation for Topology Optimization Using Variational

Autoencoder and Style Transfer,”2018 AIAA/ASCE/AHS/ASC Structures,

Structural Dynamics, and Materials Conference, Kissimmee, FL, Jan. 8–12,

p. 0804.

[7] Cang, R., Yao, H., and Ren, Y., 2019, “One-Shot Generation of Near-Optimal

Topology Through Theory-Driven Machine Learning,”Comput. Aided Des.,

109, pp. 12–21.

[8] Burnap, A., Liu, Y., Pan, Y., Lee, H., Gonzalez, R., and Papalambros, P. Y., 2016,

“Estimating and Exploring the Product Form Design Space Using Deep

Generative Models,”ASME 2016 International Design Engineering Technical

Conferences and Computers and Information in Engineering Conference, Paper

No. V02AT03A013.

[9] Umetani, N., 2017, “Exploring Generative 3D Shapes Using Autoencoder

Networks,”SIGGRAPH Asia 2017 Technical Briefs, Bangkok, Thailand, Nov.

27–30, ACM, p. 24.

[10] Guo, X., Li, W., and Iorio, F., 2016, “Convolutional Neural Networks for Steady

Flow Approximation,”Proceedings of the 22nd ACM SIGKDD International

Conference on Knowledge Discovery and Data Mining, San Francisco, CA,

Aug. 13–17, ACM, pp. 481–490.

[11] Tompson, J., Schlachter, K., Sprechmann, P., and Perlin, K., 2017, “Accelerating

Eulerian Fluid Simulation With Convolutional Networks,”Proceedings of the

34th International Conference on Machine Learning-Volume 70, Sydney,

NSW, Australia, Aug. 6–11, JMLR, pp. 3424–3433.

[12] Farimani, A. B., Gomes, J., and Pande, V. S., 2017, “Deep Learning the Physics

of Transport Phenomena,”preprint arXiv:1709.02432.

[13] Cang, R., Xu, Y., Chen, S., Liu, Y., Jiao, Y., and Ren, M. Y., 2017,

“Microstructure Representation and Reconstruction of Heterogeneous Materials

Via Deep Belief Network for Computational Material Design,”ASME J. Mech.

Des.,139(7), p. 071404.

[14] Yang, Z., Li, X., Brinson, L. C., Choudhary, A. N., Chen, W., and Agrawal, A.,

2018, “Microstructural Materials Design Via Deep Adversarial Learning

Methodology,”ASME J. Mech. Des.,140(11), p. 111416.

[15] Cang, R., Li, H., Yao, H., Jiao, Y., and Ren, Y., 2018, “Improving Direct Physical

Properties Prediction of Heterogeneous Materials From Imaging Data Via

Convolutional Neural Network and a Morphology-Aware Generative Model,”

Comput. Mater. Sci.,150, pp. 212–221.

[16] Burnap, A., Pan, Y., Liu, Y., Ren, Y., Lee, H., Gonzalez, R., and Papalambros,

P. Y., 2016, “Improving Design Preference Prediction Accuracy Using Feature

Learning,”ASME J. Mech. Des.,138(7), p. 071404.

[17] Pan, Y., Burnap, A., Hartley, J., Gonzalez, R., and Papalambros, P. Y., 2017,

“Deep Design: Product Aesthetics for Heterogeneous Markets,”Proceedings of

the 23rd ACM SIGKDD International Conference on Knowledge Discovery

and Data Mining, Halifax, NS, Canada, Aug. 13–17, ACM, pp. 1961–1970.

[18] Shea, K., Aish, R., and Gourtovaia, M., 2005, “Towards Integrated Performance-

Driven Generative Design Tools,”Autom. Constr.,14(2), pp. 253–264.

[19] Krish, S., 2011, “A Practical Generative Design Method,”Comput. Aided Des.,

43(1), pp. 88–100.

[20] McKnight, M., 2017, “Generative Design: What It Is? How Is It Being Used?

Why It’s a Game Changer,”KnE Eng.,2(2), pp. 176–181.

[21] Matejka, J., Glueck, M., Bradner, E., Hashemi, A., Grossman, T., and

Fitzmaurice, G., 2018, “Dream Lens: Exploration and Visualization of

Large-Scale Generative Design Datasets,”Proceedings of the 2018 CHI

Conference on Human Factors in Computing Systems, Montreal QC, Canada,

Apr. 21–26, ACM, p. 369.

[22] Autodesk, 2019, “Generative Design,”https://www.autodesk.com/solutions/

generative-design

[23] Oh, S., Jung, Y., Lee, I., and Kang, N., 2018, “Design Automation By Integrating

Generative Adversarial Networks and Topology Optimization,”ASME 2018

International Design Engineering Technical Conferences and Computers and

Information in Engineering Conference, Paper No. V02AT03A008.

[24] Kang, N., 2014, “Multidomain Demand Modeling in Design for Market

Systems,”PhD Thesis, University of Michigan, Ann Arbor, MI.

[25] Dering, M. L., and Tucker, C. S., 2017, “A Convolutional Neural Network Model

for Predicting a Product’s Function, Given Its Form,”ASME J. Mech. Des.,

139(11), p. 111408.

[26] Dering, M. L., and Tucker, C. S., 2017, “Generative Adversarial Networks for

Increasing the Veracity of Big Data,”2017 IEEE International Conference on

Big Data (Big Data), Boston, MA, Dec. 11–14, IEEE, pp. 2595–2602.

[27] Bendsøe, M. P., and Kikuchi, N., 1988, “Generating Optimal Topologies in

Structural Design Using a Homogenization Method,”Comput. Methods Appl.

Mech. Eng.,71(2), pp. 197–224.

[28] Bendsøe, M. P., 1989, “Optimal Shape Design as a Material Distribution

Problem,”Struct. Optim.,1(4), pp. 193–202.

[29] Bendsoe, M. P., and Sigmund, O., 2013, Topology Optimization: Theory,

Methods, and Applications, Springer Science & Business Media, Berlin.

[30] Sigmund, O., 2001, “A 99 Line Topology Optimization Code Written in Matlab,”

Struct. Multidiscipl. Optim.,21(2), pp. 120–127.

[31] Sigmund, O., 2007, “Morphology-Based Black and White Filters for Topology

Optimization,”Struct. Multidiscipl. Optim.,33(4–5), pp. 401–424.

[32] Andreassen, E., Clausen, A., Schevenels, M., Lazarov, B. S., and Sigmund, O.,

2011, “Efﬁcient Topology Optimization in MATLAB Using 88 Lines of

Code,”Struct. Multidiscipl. Optim.,43(1), pp. 1–16.

[33] Sigmund, O., and Maute, K., 2013, “Topology Optimization Approaches,”Struct.

Multidiscipl. Optim.,48(6), pp. 1031–1055.

[34] Sigmund, O., 1997, “On the Design of Compliant Mechanisms Using Topology

Optimization,”J. Struct. Mech., 25(4), pp. 493–524.

[35] Bruns, T. E., and Tortorelli, D. A., 2001, “Topology Optimization of Non-Linear

Elastic Structures and Compliant Mechanisms,”Comput. Methods Appl. Mech.

Eng.,190(26–27), pp. 3443–3459.

[36] Bourdin, B., 2001, “Filters in Topology Optimization,” Int. J. Numer. Methods

Eng.,50(9), pp. 2143–2158.

[37] Guest, J. K., Prévost, J. H., and Belytschko, T., 2004, “Achieving Minimum

Length Scale in Topology Optimization Using Nodal Design Variables and

Projection Functions,”Int. J. Numer. Methods Eng.,61(2), pp. 238–254.

[38] Xu, S., Cai, Y., and Cheng, G., 2010, “Volume Preserving Nonlinear Density Filter

Based on Heaviside Functions,”Struct. Multidiscipl. Optim.,41(4), pp. 495–505.

[39] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S.,

Courville, A., and Bengio, Y., 2014, “Generative Adversarial Nets,”Advances in

Neural Information Processing Systems, Palais des Congrès de Montréal,

Montréal, Canada, Dec. 8–13, pp. 2672–2680.

[40] Arjovsky, M., Chintala, S., and Bottou, L., 2017, “Wasserstein GAN,” preprint

arXiv:1701.07875.

[41] Radford, A., Metz, L., and Chintala, S., 2016, “Unsupervised Representation

Learning With Deep Convolutional Generative Adversarial Networks,”ICLR

2016, San Juan, Puerto Rico, May 2–4.

[42] Dumoulin, V., Belghazi, I., Poole, B., Mastropietro, O., Lamb, A., Arjovsky, M.,

and Courville, A., 2017, “Adversarially Learned Inference,”ICLR 2017, Palais

des Congrès Neptune, Toulon, France, Apr. 24–26.

[43] Donahue, J., Krähenbühl, P., and Darrell, T., 2017, “Adversarial Feature Learning,” ICLR 2017, Palais des Congrès Neptune, Toulon, France, Apr. 24–26.

[44] Zhao, J., Mathieu, M., and LeCun, Y., 2017, “Energy-Based Generative

Adversarial Network,” ICLR 2017, Palais des Congrès Neptune, Toulon,

France, Apr. 24–26.

[45] Berthelot, D., Schumm, T., and Metz, L., 2017, “BEGAN: Boundary Equilibrium

Generative Adversarial Networks,”preprint arXiv:1703.10717.

[46] Mirza, M., and Osindero, S., 2014, “Conditional Generative Adversarial Nets,”

preprint arXiv:1411.1784.

[47] Chen, X., Duan, Y., Houthooft, R., Schulman, J., Sutskever, I., and Abbeel, P.,

2016, “Infogan: Interpretable Representation Learning by Information

Maximizing Generative Adversarial Nets,”Advances in Neural Information

Processing Systems, Centre Convencions Internacional Barcelona, Spain, Dec.

5–10, pp. 2172–2180.

[48] Vertolli, M. O., and Davies, J., 2017, “Image Quality Assessment Techniques

Show Improved Training and Evaluation of Autoencoder Generative

Adversarial Networks,”preprint arXiv:1708.02237.

[49] Kocaoglu, M., Snyder, C., Dimakis, A. G., and Vishwanath, S., 2017,

“Causalgan: Learning Causal Implicit Generative Models With Adversarial

Training,”preprint arXiv:1709.02023.

[50] Sakurada, M., and Yairi, T., 2014, “Anomaly Detection Using Autoencoders

With Nonlinear Dimensionality Reduction,”Proceedings of the MLSDA

2014 2nd Workshop on Machine Learning for Sensory Data Analysis, Gold

Coast, Australia QLD, Australia, Dec. 2, ACM, p. 4.

[51] Simonyan, K., and Zisserman, A., 2015, “Very Deep Convolutional

Networks for Large-Scale Image Recognition,”ICLR 2015, San Diego, CA,

May 7–9.

[52] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., and Hochreiter, S., 2017,

“GANs Trained By a Two Time-Scale Update Rule Converge to a Local Nash

Equilibrium,”Advances in Neural Information Processing Systems, Long

Beach, CA, Dec. 4–9, pp. 6626–6637.
