Quantitative microstructure analysis for solid-state metal
additive manufacturing via deep learning
Yi Han1, R. Joey Griffiths2, Hang Z. Yu2, Yunhui Zhu1,a)
1 Bradley Department of Electrical and Computer Engineering, Virginia Tech, Blacksburg, Virginia 24061, USA
2 Department of Materials Science and Engineering, Virginia Tech, Blacksburg, Virginia 24061, USA
a) Address all correspondence to this author. e-mail: yunhuiz@vt.edu
Received: 13 January 2020; accepted: 27 April 2020
Metal additive manufacturing (AM) provides a platform for microstructure optimization via process control, but
establishing a quantitative processing-microstructure linkage necessitates an efficient scheme for microstructure
representation and regeneration. Here, we present a deep learning framework to quantitatively analyze the
microstructural variations of metals fabricated by AM under different processing conditions. The principal
microstructural descriptors are extracted directly from the electron backscatter diffraction patterns, enabling a
quantitative measure of the microstructure differences in a reduced representation domain. We also demonstrate
the capability of predicting new microstructures within the representation domain using a regeneration neural
network, from which we are able to explore the physical insights into the implicitly expressed microstructure
descriptors by mapping the regenerated microstructures as a function of principal component values. We validate
the effectiveness of the framework using samples fabricated by a solid-state AM technology, additive friction stir
deposition, which typically results in equiaxed microstructures.
INTRODUCTION
The last decade has witnessed waves of advances in metal addi-
tive manufacturing (AM), from the widely used beam-based
technologies, such as powder bed fusion and directed energy
deposition [1], to the emerging solid-state technologies,
such as ultrasonic AM [2] and additive friction stir deposition
[3, 4]. Given the far-from-equilibrium processing conditions in
most metal AM, the microstructure in the as-printed material
is dictated by the processing kinetics and is sensitively depen-
dent on the processing parameters [1, 3, 5, 6]. Typically involv-
ing a significant number of tunable processing parameters and
therefore a large processing space [1, 7], metal AM not only
unlocks the freedom of 3D shaping with complex geome-
tries but also allows for microstructure design in the as-printed
components, from which the mechanical properties can be
controlled. Unfortunately, achieving the desired microstructure
by AM parameter optimization is still mostly a trial-and-error
process, which is slow and expensive.
With metal AM providing an optimal platform for micro-
structure control through processing, establishment of a quan-
titative processing-microstructure linkage is essential for
microstructure optimization for a given application. However,
such an establishment is impossible without an efficient
scheme for quantitative description of the microstructures
resulting from metal AM. The microstructure of a polycrystal-
line material is traditionally described by imaging-based
qualitative interpretation. This relies on characterization
techniques such as optical microscopy, electron microscopy,
and most representatively electron backscatter diffraction
(EBSD), which provides orientation and positional information
of individual grains [8]. Recently, simple quantitative micro-
structure descriptions based on the EBSD patterns have
become widespread in material research, wherein pre-defined
microstructure descriptors, such as average grain size, grain
size deviation, micro-texture, grain boundary misorientation
distribution, and other features, are quantitatively analyzed
[8, 9, 10]. There is no doubt that this type of description can
provide important information about the microstructure, but it is
insufficient in two critical aspects. First, the selection of the
pre-defined microstructure descriptors is arbitrary; it is not
guaranteed that they can comprehensively or efficiently repre-
sent the essence of any given microstructure. Second, the quan-
tification is based on statistical homogenization and
distribution functions, so location-dependent microstructure
Article
DOI: 10.1557/jmr.2020.120
information is not effectively preserved. To address these prob-
lems, more sophisticated microstructure representation
approaches have been proposed, including hyperspherical har-
monics [11, 12], network representation and spectral graphic
theory [13], and n-point correlation functions [14]. It has
also been proposed to register the complete geometry informa-
tion for each grain using microstructure basis functions in the
rotational grain boundary space [15, 16].
Fundamentally, the challenges in microstructure represen-
tation lie in the processing and analysis of the high-
dimensional data describing the microstructure as well as the
identification of the principal descriptors that most effectively
represent the microstructural features or variations for a
given problem. It is important to note that the principal micro-
structure descriptors may differ case by case, depending on the
goal of the target problem—e.g., whether it is to identify the
most salient feature changes by varying processing parameters,
or to recognize the most influential features that control the
yield strength or the fracture toughness. With the advance
and resurgence of artificial intelligence, new opportunities
arise in terms of resolving the microstructure representation
problem using data-driven approaches in addition to the con-
ventional physics-based approaches. This strategy is promising
as the data-driven approaches have been proven effective for
big-data analytics and feature extraction [17, 18, 19, 20]. In par-
ticular, deep learning [21] has emerged as a prominent
approach for quantitative analysis of high-dimensional data
and has demonstrated unprecedented capabilities of identifying
multiscale features from complex data patterns. Examples
abound in computer vision and data processing, such as
image classification [22, 23, 24], semantic segmentation [25],
object detection [26, 27, 28], instance segmentation [29], cluster-
ing analysis [30, 31, 32], texture synthesis and reconstruction
[33, 34], and computer-aided material design [35, 36, 37].
Deep learning has also been actively applied in EBSD image
denoising and indexing [38, 39, 40], wherein the crystal orienta-
tion information is extracted from noisy and blurred Kikuchi
patterns. Noteworthy recent advancements in deep learning-
based image analysis include StyleGAN, which is a generative
adversarial network that can generate visually indistinguishable
fake images from pre-defined attributes [41], as well as
PointRend, which is an instance segmentation algorithm with
high accuracy and efficiency [42].
Encouraged by the success in feature extraction and image
reconstruction, we explore the potential of using deep learning
and deep neural networks (DNNs) for microstructure represen-
tation and regeneration in metal AM, which is based on the
analysis and feature extraction of EBSD patterns (i.e., the
inverse pole figure maps). Following Gatys' multilayer pattern
representation scheme [34], we present a deep learning frame-
work that extracts the key microstructural features from a
pre-trained DNN. Unlike the conventional feature extraction
method that only uses the last layer, Gatys' method uses the
Gram matrix in multiple layers to permit a multiscale micro-
structure representation. This framework allows us to examine
the microstructure formed under drastically different processing
conditions in metal AM and to identify the most distinguishable
microstructural features (i.e., microstructure descriptors) using
principal component analysis, from which a reduced representa-
tion of the microstructure is established. Within the reduced rep-
resentation domain, possible microstructures are predicted via
microstructure regeneration through convolutional neural net-
works [43].
As one of the first attempts to leverage DNN in microstruc-
ture representation and regeneration, here we test the frame-
work using samples fabricated by a solid-state metal AM
technology, additive friction stir deposition, which is known
to result in simple equiaxed grains rather than the complicated
dendritic microstructures commonly seen in powder bed fusion
and directed energy deposition. We confirm the effectiveness of
the resultant microstructure representation and regeneration
and discuss the physical insights into the implicitly expressed
microstructural descriptors by mapping the regenerated micro-
structures in the reduced representation domain. This explor-
atory work lays the foundation toward establishment of
quantitative processing-structure linkages in metal AM and
can be potentially employed in general materials science prob-
lems, such as heterogeneous material design and optimization.
FRAMEWORK OF QUANTITATIVE
MICROSTRUCTURE REPRESENTATION VIA
DEEP LEARNING
Quantitative microstructure analysis based on the EBSD pat-
terns or inverse pole figure maps can be viewed as a quantita-
tive image analysis problem, which has benefited significantly
from deep neural networks (DNNs), such as VGG16 [44],
Inception [24], and ResNet [23]. These networks are all trained
on ImageNet, an extremely large dataset with over 10⁶ images
from 1000 different classes [45]. With such a large database, the
network is able to learn the representative features of the
images for successful classification. By taking advantage of
the learned representations on the convolutional and fully con-
nected layers, transfer learning can be applied to enable feature
extraction in unsupervised machine learning [46] and super-
vised learning on a small dataset [47]. The special characteristic
of EBSD patterns is that the image is "textured"—which, in
image processing terms, means that it involves repeated local
patterns that are translationally invariant. This should be distin-
guished from the concept of texture in metallurgy, which
corresponds to preferred crystallographic orientations in a
polycrystal. In the realm of image analysis, the texture
representation scheme developed by Gatys has seen great suc-
cess in texture synthesis [34] and image style transfer [41]
and has been proven to be capable of generating images with
astonishingly high similarity [43]. The presented deep learning
framework follows the same philosophy of this scheme to
analyze the EBSD patterns for microstructure representation
and regeneration in metal AM, and in addition uses principal
component analysis to extract the microstructural descriptors
that represent the differences between the measured
microstructures.
Figure 1 shows the overall flowchart of the presented deep
learning framework for microstructure representation and
regeneration in metal AM, including three major steps. First,
an EBSD measurement is implemented on samples processed
with drastically different additive friction stir deposition condi-
tions. The measured EBSD patterns are augmented to generate
a large dataset for multiscale feature extraction through a pre-
trained DNN in Step 2. The Gram matrix is calculated for the
last layer on each of the five different scales, yielding a multi-
scale feature vector. Based on this multiscale feature extraction,
a reduced representation is generated via principal component
analysis (PCA), which maximizes the microstructure differ-
ences among samples made by various additive friction stir
deposition conditions. Supervised classification is then per-
formed on the reduced representation, which only requires
the use of a few principal components. Finally, in Step 3, a
regeneration network is established to retrieve the microstruc-
tures at given locations within the reduced representation
domain. This framework is implemented using Python with
TensorFlow [48], Keras [49], and scikit-learn [50].
Preprocessing of EBSD data
The training dataset comes from commercially pure copper
fabricated by additive friction stir deposition under different
manufacturing conditions [51], which are mainly defined by
the tool head rotation rate Ω and in-plane motion velocity V.
There are three different sets of processing conditions:
Ω = 300 RPM and V = 9 in/min, Ω = 600 RPM and V =
3 in/min, and Ω = 600 RPM and V = 9 in/min. The obtained
EBSD patterns (inverse pole figures) are shown in Figs. 2(a)–
2(c) with the same field of view and labeled as M1, M2, and
M3, respectively [51].
To generate the training data from such a small dataset
(only three EBSD orientation maps), we implement a data aug-
mentation procedure. The large microstructure images are
cropped into a set of smaller pieces with a scanning window.
The size of the cropping window is chosen to be slightly
above the scale at which the microstructure characteristics can
be considered uniform. Random horizontal and vertical flips
are applied to further increase the size of the training dataset.
All cropped images are generated with the same scale and
aspect ratio to maintain the grain size and shape characteristics.
The dataset is then randomly split into training and testing sets
at a ratio of 8:2.
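A minimal sketch of this patch-generation step is given below. The window size, stride, and variable names are illustrative assumptions rather than the values used in the paper, and deterministic flips stand in for the random flips described above.

```python
import numpy as np
from sklearn.model_selection import train_test_split

def crop_patches(ebsd_map, window=200, stride=100):
    """Slide a square window over an RGB EBSD orientation map (H, W, 3)
    and collect the crops together with flipped copies."""
    h, w = ebsd_map.shape[:2]
    patches = []
    for top in range(0, h - window + 1, stride):
        for left in range(0, w - window + 1, stride):
            p = ebsd_map[top:top + window, left:left + window]
            patches.extend([p, np.fliplr(p), np.flipud(p)])
    return np.stack(patches)

# Hypothetical usage: m1_image, m2_image, m3_image are the EBSD maps of M1-M3.
# patch_sets = [crop_patches(m) for m in (m1_image, m2_image, m3_image)]
# X = np.concatenate(patch_sets)
# y = np.concatenate([np.full(len(p), i) for i, p in enumerate(patch_sets)])
# X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
```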
Multilayer feature extraction from VGG16
The generated training data is then fed into a DNN. In this
work, we use a convolutional neural network (VGG16),
which was trained on ImageNet to classify natural images
[44]. VGG16 is a very deep DNN developed for large-scale
Figure 1: Experimental flowchart. Step 1: Microstructures of the processed samples are measured via EBSD. Step 2: Reduced representation is established through
PCA analysis using multilayer feature extraction. Step 3: Predictions of microstructures are generated through the restoration of the Gram matrix from designated
principal component values.
image recognition tasks and has demonstrated the capability of
processing and extracting spatial features of very high
dimensionality. The DNN contains five convolutional blocks
that perform convolutional calculations on five different
length scales. At the end of each block, a max-pooling layer
downsamples the extracted feature map. Normally, activation
of the final convolutional layer is used as the extracted feature
to identify the object in the image. In our application, how-
ever, more emphasis is needed on the statistical correlation
of patterns across multiple scales. We apply Gatys's scheme
[34] to extract image texture features based on the Gram
matrix, where we focus on the pair-wise correlation between
feature maps generated on multiple layers of the network. By
explicitly including correlation features on each of the five
scales of the blocks, the texture feature extraction becomes
more sensitive to the multiscale patterns. Features are
extracted based on the Gram matrix calculated for multiple
layers with the following equation:
$$G^l_{ij} = \sum_k F^l_{ik} F^l_{jk},$$

where l denotes the index of the layer, F^l is the matrix that stores the activations of the feature maps of the filters, i and j are the indices for one pair of features, and F^l_{ik} corresponds to the activation of the ith filter in layer l at position k. Five Gram matrices are calculated from the activation of the last layer of each convolutional block of VGG16. The numbers of feature maps generated by VGG16 for the five blocks are 64, 128, 256, 512, and 512, respectively. After the correlation calculation, the Gram matrices of all five layers together contain 64 × 64 + 128 × 128 + 256 × 256 + 512 × 512 + 512 × 512 = 610,304 elements. This set of Gram matrices is flattened into a feature vector and is used in the following analysis.
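The extraction can be sketched as follows with the stock Keras VGG16; the layer names are simply the last convolutional layer of each block in the standard Keras model and are an assumption, not a detail reported in the paper.

```python
import numpy as np
import tensorflow as tf

# Last convolutional layer of each of the five VGG16 blocks (standard Keras naming).
LAYERS = ["block1_conv2", "block2_conv2", "block3_conv3",
          "block4_conv3", "block5_conv3"]

vgg = tf.keras.applications.VGG16(weights="imagenet", include_top=False)
extractor = tf.keras.Model(vgg.input,
                           [vgg.get_layer(name).output for name in LAYERS])

def gram_feature_vector(patch_rgb):
    """(H, W, 3) EBSD patch -> flattened multiscale Gram-matrix feature vector."""
    x = tf.keras.applications.vgg16.preprocess_input(
        patch_rgb[np.newaxis].astype("float32"))
    grams = []
    for fmap in extractor(x):                          # one feature map per block
        f = tf.reshape(fmap[0], (-1, fmap.shape[-1]))  # rows: positions k, columns: filters i
        g = tf.matmul(f, f, transpose_a=True)          # G^l_ij = sum_k F^l_ik F^l_jk
        grams.append(tf.reshape(g, [-1]))
    return tf.concat(grams, axis=0).numpy()            # 64^2 + 128^2 + 256^2 + 2*512^2 = 610,304
```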
Reduced microstructure representation
The Gram matrix-based feature vector, while extracting com-
prehensive correlation information, is cumbersome. Direct
classification and visualization analysis based on such high-
dimensional data is infeasible. We, thus, perform a principal
component analysis (PCA) on the flattened Gram matrix to
generate a reduced representation from the training data of
the EBSD measurements. The analysis identifies the principal
components that maximally preserve the variance between
the input microstructures. In this way, PCA can significantly
reduce the dimensions of representation by removing the
correlated features.
The established principal components (PC) from the train-
ing data can be used as basis features to span a reduced repre-
sentation domain for microstructures. Arbitrary microstructure
can be projected into the reduced representation domain by
evaluating its Gram matrices from the pre-trained DNN and
applying the PCA transform. As a result, the microstructure
differences can be quantitatively measured in terms of distances
in the PC representation domain.
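A hedged sketch of this reduction and projection step, assuming the flattened Gram vectors of the training and testing patches have been stacked into features_train and features_test:

```python
import numpy as np
from sklearn.decomposition import PCA

# features_train, features_test: (n_patches, 610304) arrays of flattened Gram vectors
pca = PCA(n_components=5)
pc_train = pca.fit_transform(features_train)   # 5-PC coordinates of each training patch
pc_test = pca.transform(features_test)

# Any new microstructure patch can be projected into the same reduced domain
pc_new = pca.transform(gram_feature_vector(new_patch)[np.newaxis])
```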
Note that the reduced PC domain is established to best rep-
resent the differences between the microstructures in the
Figure 2: Microstructure measurements from EBSD for three different processing conditions, labeled as (a) M1, (b) M2, and (c) M3, from Ref. [51]. (d–f) show a
small subset of randomly augmented patches cropped with a scanning window size of 179 × 179 μm² for each microstructure, respectively. Random horizontal and
vertical flips are applied to increase the number of data patches.
training datasets. It is possible that new microstructure data
may contain differences that are not captured by the PCs gen-
erated from the training dataset. The effectiveness of the repre-
sentation can be evaluated by restoring the Gram matrix feature
vector from the reduced representation using the PCA inverse
transformation process, followed by calculating the fidelity η
through the complement of normalized mean square error
(NMSE) between the restoration and the ground truth via
$$\eta = 1 - \frac{\sum_i \left(G_i - \hat{G}_i\right)^2}{\sum_i G_i^2}, \qquad (1)$$
where the largest possible value of 1 means full restoration and
lower value means worse restoration. If the differences of the
new microstructure are well captured by the established PCs,
we know that the new microstructure is well represented in
the reduced domain. On the other hand, a low fidelity value
indicates that the new microstructure involves feature variance
on a different dimension and that the established representa-
tion needs to be extended.
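As an illustration, the fidelity of Eq. (1) can be computed by restoring the Gram feature vector through the inverse PCA transform; this sketch reuses the assumed pca object and gram_feature_vector from the earlier sketches.

```python
import numpy as np

def fidelity(gram_vec, pca):
    """Complement of the NMSE between a Gram feature vector and its PCA restoration."""
    restored = pca.inverse_transform(pca.transform(gram_vec[np.newaxis]))[0]
    nmse = np.sum((gram_vec - restored) ** 2) / np.sum(gram_vec ** 2)
    return 1.0 - nmse          # eta = 1 corresponds to full restoration

# eta = fidelity(gram_feature_vector(new_patch), pca)
```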
Classification of microstructures
The established reduced representation is then used to classify
microstructure patches for each measurement. The support vector
machine (SVM) is a reliable method for binary classifica-
tion, in which a hyperplane is formed to separate the datasets
while maximizing the margin of separation [52, 53]. For multi-
class classification, a one-vs-rest strategy can be used to train one
binary SVM classifier for each class. In our framework, linear
SVM classifiers are trained and validated on the principal fea-
tures to classify the microstructures from the three processing
conditions. The classification accuracy of the SVM has been
used as a metric for finding the best number of dimensions
to keep. The accuracy is evaluated from 2 to 10 principal com-
ponents. The number of principal components is chosen at the
point where accuracy converges.
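A sketch of this accuracy sweep using one-vs-rest linear SVMs is given below; the feature scaling and iteration cap are illustrative choices, and pc_train, pc_test, y_train, and y_test are the assumed variables from the earlier sketches.

```python
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

for n_pc in range(2, 11):                      # evaluate 2 to 10 principal components
    clf = OneVsRestClassifier(
        make_pipeline(StandardScaler(), LinearSVC(max_iter=10000)))
    clf.fit(pc_train[:, :n_pc], y_train)
    acc = clf.score(pc_test[:, :n_pc], y_test)
    print(f"{n_pc} PCs: test accuracy = {acc:.3f}")  # keep the count at which accuracy converges
```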
Microstructure regeneration based on the reduced
representation
While we have demonstrated the approach that reduces the
number of microstructural descriptors through PCA, a success-
ful reduced representation also requires that high-dimensional
microstructure data be restored from the few-PC based repre-
sentation. The restoration of the microstructure from the
reduced representation is achieved by feature-based image
regeneration. We first restore the feature vector from PC by
the inverse transform process of PCA. As illustrated in the pre-
vious section, the quality of the restoration can be evaluated by
the NMSE. Next, the restored Gram matrix feature vector G is
used as the target feature. Following Gatys' style transfer work,
an initial random noise image with feature vector Ĝ is
repeatedly updated and optimized to match the restored Gram
matrix G. The loss between the Gram matrices is evaluated for
each layer l via

$$E_l = \frac{1}{4|F^l|^2} \sum_i \left(G^l_i - \hat{G}^l_i\right)^2, \qquad (2)$$
where |F^l| denotes the size of F^l. The total loss is summed
over all five layers. A total variation loss [54] is added to the
loss function as a regularization term to increase the smooth-
ness of the generated image and suppress noise. Optimization
is achieved via the L-BFGS-B algorithm [55] and backpropagation.
After a number of iterations, or after the loss converges, the
algorithm stops and the output of the updated image resembles
the predicted microstructure for the given data point in the PC
domain. This method allows us not only to regenerate micro-
structures similar to the input EBSD measurements but also
to generate “in-between” microstructures with a set of
in-between PC values. In this way, we can predict new micro-
structures at arbitrary locations within the PC domain. Note
that in Gatys' original work, one can generate an image with
a similar Gram matrix from an image. In our case, the Gram
matrix is extracted directly from the image or restored from
the PC representation.
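A hedged sketch of this regeneration loop is shown below, reusing the extractor defined in the feature-extraction sketch. The restored 610,304-component vector is split back into five square Gram matrices, and a random image is optimized with L-BFGS-B under the per-layer loss of Eq. (2) plus a total variation term; the image size, iteration count, and TV weight are assumptions rather than the paper's settings.

```python
import numpy as np
import tensorflow as tf
from scipy.optimize import minimize

BLOCK_CHANNELS = [64, 128, 256, 512, 512]         # Gram matrix sizes per VGG16 block

def split_gram_vector(gram_vec):
    """Split a flattened 610,304-component feature vector into five square Gram matrices."""
    targets, start = [], 0
    for c in BLOCK_CHANNELS:
        targets.append(tf.constant(gram_vec[start:start + c * c].reshape(c, c),
                                   dtype=tf.float32))
        start += c * c
    return targets

def regenerate(target_grams, shape=(224, 224, 3), iters=500, tv_weight=1e-4):
    """Optimize a random image so that its multilayer Gram matrices match target_grams."""
    def loss_and_grad(x_flat):
        x = tf.Variable(x_flat.reshape((1,) + shape), dtype=tf.float32)
        with tf.GradientTape() as tape:
            x_pre = tf.keras.applications.vgg16.preprocess_input(x * 1.0)
            loss = tf.constant(0.0)
            for fmap, g_hat in zip(extractor(x_pre), target_grams):
                f = tf.reshape(fmap[0], (-1, fmap.shape[-1]))
                g = tf.matmul(f, f, transpose_a=True)
                norm = 4.0 * tf.cast(tf.size(f), tf.float32) ** 2   # 4|F^l|^2 in Eq. (2)
                loss += tf.reduce_sum((g - g_hat) ** 2) / norm
            loss += tv_weight * tf.reduce_sum(tf.image.total_variation(x))  # smoothness term
        grad = tape.gradient(loss, x)
        return float(loss.numpy()), grad.numpy().ravel().astype("float64")

    x0 = np.random.uniform(0, 255, size=shape).ravel()
    res = minimize(loss_and_grad, x0, jac=True, method="L-BFGS-B",
                   options={"maxiter": iters})
    return res.x.reshape(shape).clip(0, 255).astype("uint8")

# Hypothetical usage: restore the Gram vector for a chosen PC point and regenerate.
# gram_restored = pca.inverse_transform(pc_point[np.newaxis])[0]
# image = regenerate(split_gram_vector(gram_restored))
```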
RESULTS AND DISCUSSION
Microstructure characterization via EBSD
The obtained EBSD patterns from the three samples for data
training are shown in Figs. 2(a)–2(c). All three microstructures
demonstrate clear equiaxed grain shapes that typically result
from thermomechanical processing, while exhibiting multiple
scale features. On a small length scale, it is evident that there
are many grains of various sizes, orientations, and lattice dis-
tortion, the latter of which is noted by color gradients within
a single grain. At a large length scale, these patterns repeat
themselves to some extent. In comparison, conditions M1 and
M3 produce relatively large grains with minimal stored
internal energy. M1 is notable for having prominent twin
boundaries, and low lattice distortion within individual color-
gradient grains. Condition M2 results in much smaller grains
than the majority of those produced by M1 and M3.
These EBSD images are cropped and flipped to generate a
large dataset for each of the microstructures. Four random
patches from the generated dataset [Figs. 2(d)–2(f )] are
shown for each of the microstructures, respectively. The crop-
ping window size of 179 μm × 179 μm is chosen to avoid
local fluctuations and incomplete grain sampling. The symmet-
ric characteristics of the microstructures allow us to flip the
patches to increase the size of the training data. According to
the original sizes of the EBSD images, a total of 315,
448, and 1038 patches are generated for M1, M2, and M3,
respectively. 80% of the generated patches are used as training
data to extract the multiscale features, while 20% of the patch
data are used for testing.
Reduced representation of microstructure
The patch data shown in Figs. 2(d)–2(f) are then processed
through the pre-trained DNN. A multiscale feature
vector is generated via calculating the correlation Gram matrix
on each of the five layers. In this way, we capture the statistical
features of the microstructure pattern across different
length scales. As discussed earlier, this Gram matrix-based fea-
ture vector contains 610,304 components. Most of these com-
ponents represent the common features shared by all EBSD
orientation maps, which are not of interest for the analysis of
microstructure variations in material processing. Instead, the
feature components that represent the differences between
the measured microstructures are of interest. Such a reduced
representation is generated through the PCA analysis illustrated
in Sec. II.D, which identifies the principal components (i.e.,
microstructural descriptors) that are responsible for the vari-
ances among the input data patches from the three different
microstructures (M1, M2, and M3). Efficiency of the represen-
tation is evaluated by the capability of classifying different
microstructures from the testing data patches. As shown in
Fig. 3(a), the classification achieves near 100% accuracy with
merely five principal components, indicating that the differ-
ences in the detected microstructures can be represented with
a dramatically reduced number of microstructural descriptors.
Figure 3(c) shows a visualization of clustering for the sample
data points in the 3D space spanned by the first three PCs in
the 5PC domain, where training data and testing data are rep-
resented by bright and dark colors, respectively. Clear cluster-
ing is observed for the three microstructures. In addition, all
of the test data are correctly categorized into the corresponding
cluster, as shown in the confusion matrix in Fig. 3(b). The PC
values at the centroid location for the M1, M2, and M3 clusters
are summarized in Table I.
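For illustration, the centroid values in Table I are simply the per-cluster means of the projected PC coordinates; the sketch below assumes the pc_train and y_train variables from the earlier sketches, with labels 0, 1, and 2 corresponding to M1, M2, and M3.

```python
import numpy as np

# Mean 5-PC coordinates of the training patches in each cluster
centroids = {label: pc_train[y_train == label].mean(axis=0)
             for label in np.unique(y_train)}
```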
The generated principal components span a continuous 5D
space that can be used to represent microstructures beyond the
training datasets. In particular, this representation can be used
to characterize “in-between” microstructures and quantify the
differences from the input microstructures. This capability is
especially important for future works on processing-structure
modeling. To demonstrate this point, we introduce two new
microstructures and characterize them with the established
5PC representation. One EBSD microstructure is obtained
from a Cu sample fabricated using additive friction stir deposi-
tion, but with a different processing condition (Ω = 300 RPM
and V = 3 in/min). This new microstructure is labeled as M4
and its EBSD measurement is shown in Fig. 4(a). A similar
cropping process is implemented to generate a series of
EBSD data patches with the same size as the training data.
By processing the patches from M4 through the established
neural network and calculating the Gram matrix and its prin-
cipal components, we are able to locate the new microstructure
in the established 5PC space. It is found that the Gram matrix
for M4 is well represented by the five principal components,
with a fidelity score of η = 0.9909. As shown in the projection
3D plot in Fig. 4(b), the new microstructure is clustered
between M1 and M3 and is relatively far from M2. This quan-
titative analysis result is consistent with the qualitative observa-
tion; M4 is visually more similar to M1 and M3. The centroid
location for M4 patches in the 5PC domain is calculated with
the PC values shown in Table II.
On the other hand, dramatically different microstructures
can be identified as outliers. We characterize another EBSD
microstructure pattern obtained from Al samples fabricated
using additive friction stir deposition (Ω = 200 RPM and V =
3 in/min). The elongated grain shapes in the Al data, shown
in Fig. 4(c), are significantly different from the equiaxed Cu
data. Such microstructure differences originate from different
levels of stacking fault energy and dynamic recovery capability,
which is elaborated in a separate work [51]. Consistent with
this observation, the Al cluster is located far away from all Cu
microstructure clusters [Fig. 4(d)]. The fidelity score for the
Al data also decreases drastically, yielding η = 0.9572. In
other words, 1 − η increases from 0.0091 for Cu to 0.0428 for
Al. This indicates that the current training data from Cu may
not be sufficient to represent the outlier of Al. Re-training
should be considered when the fidelity drops significantly
with the inclusion of new data, indicating a poor representation
of the new microstructure. In our case, re-training is needed for
the added Al data. Al and Cu have different microstructure
evolution mechanisms during solid-state AM, so the micro-
structures are characterized by different microstructural
descriptors.
Regeneration of microstructures
In the previous section, we demonstrate the reduced represen-
tation of the EBSD microstructure patterns in the five-
dimensional PC representation domain. Here, we demonstrate
the reverse process of generating new microstructure patterns
from the 5PC representation. The regeneration is demonstrated
for both the measured microstructures as well as the new
“in-between” microstructures.
We first regenerate microstructures for the measured
microstructures M1, M2, and M3. These regenerations are
implemented by matching the generated Gram matrix Ĝ^l to
the measured Gram matrix G^l via Eq. (2). Figure 5 shows the
regenerated microstructures for M1, M2, and M3, respectively.
We can visually observe the similarities of grain size and orien-
tation (color) distribution between the original measurements
and the regeneration results. Note that the regeneration
described in Sec. II.E is a stochastic process that aims to
match the statistical multiscale correlation features. As a result,
the regeneration procedure will produce microstructures with
random variations for each generated image. These random
variation details, however, are not considered to influence the
key characteristics of the microstructure. As shown in Fig. 5,
the regenerated microstructures R1, R2, and R3 have generally
preserved the features of the experimental measurements M1,
M2, and M3. In particular, the grain sizes are observed to be
similarly distributed, and the color tones, which represent the
grain orientations, resemble those of their origins. Figure 6
shows the predicted microstructures at arbitrary locations in
the representation 5PC domain. Three nonexistent microstruc-
tures (R4, R5, and R6) are predicted at the centroid location of
M4, between M1 and M3, and between M2 and M3,
Figure 3: PCA and classification results. (a) Classification accuracy as a function of the number of principal components kept. Near 100% of accuracy is achieved
with five principal components, demonstrating the efficiency of the deep learning-based EBSD microstructure representation. (b) Confusion matrix shows the per-
fect classification results with 5PC. (c) Scattering plot of patch data projected in the reduced domain of the first three principal components. Clear clustering can be
observed for patches cropped from different microstructure measurements.
respectively. While R4, R5, and R6 are regenerated virtual
microstructures, they resemble the experimentally processed
microstructures obtained in AFSD. In particular, R4 can be
compared with M4, with the similar grain sizes and color dis-
tributions, despite the fact that M4 is not included in the train-
ing data, and thus cannot influence the regeneration of R4.
This proves the accuracy and uniqueness of the established
5PC representation.
Physics interpretation of the representation results
The 5PC representation is highly efficient and accurate for char-
acterizing and simulating microstructures for the chosen metal AM
process, additive friction stir deposition. However, using the
presented deep learning framework, the principal microstruc-
tural descriptors, i.e., the principal components, are implicitly
expressed. To explore the physical insights into the principal
components for microstructure representation, we study the
trend of the regenerated microstructures with varying PC val-
ues (Table III). As shown in Fig. 7, microstructures are regen-
erated by sequentially stepping along the PC1, PC2, and PC3
axes, respectively. Each row consists of five microstructures.
The center one is regenerated at the centroid location of M3,
while others are regenerated by decreasing or increasing the
PC value with one unit step (the 2nd and 4th in the row) or
two unit steps (the 1st and 5th in the row).
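A sketch of how such a sweep through the PC domain can be constructed from the M3 centroid is shown below; the paper does not state the unit step explicitly, so the per-PC standard deviation of the training projections is used here as an assumed step size.

```python
import numpy as np

m3_centroid = centroids[2]            # assumed label 2 = M3, from the centroid sketch above
step = pc_train.std(axis=0)           # assumed "unit step" along each principal component

sweep_points = []
for pc_axis in range(3):              # step along PC1, PC2, and PC3 in turn
    for k in (-2, -1, 0, 1, 2):
        point = m3_centroid.copy()
        point[pc_axis] += k * step[pc_axis]
        sweep_points.append(point)    # regenerate a microstructure at each PC point
```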
As shown in the 1st row, it appears that PC1, the most
important feature component, is mostly associated with the
grain size. Larger PC1 values correspond to smaller grain
sizes. PC2 appears to be associated with the color distribution
of the EBSD pattern, which corresponds to the crystallographic
orientation distribution of the microstructure. The increase of
PC2 changes the microstructure orientation from purple and
red to bright green. On the other hand, increasing PC3 gradually
shifts the color from light green to dark blue. PC2 and PC3 are, thus, likely to
TABLE I. The PC values for the centroid of M1, M2, and M3.

Indexing      PC1 (10¹¹)   PC2 (10¹¹)   PC3 (10¹¹)   PC4 (10¹¹)   PC5 (10¹¹)
M1 centroid   −8.021       4.245        4.083        0.6232       2.448
M2 centroid   17.790       0.418        0.105        0.135        0.328
M3 centroid   −5.352       −1.418       −1.174       −0.242       −0.814
Figure 4: Projection of new microstructures into the 5PC representation domain. (a) EBSD measurement of a Cu sample processed at a new processing condition
(M4), from Ref. [51]. (b) The projected dots for cropped patches from M4 and their averaged centroid location. (c) EBSD measurement of an Al sample (M_Al), from
Ref. [51]. (d) The projected dots for cropped patches from M_Al. The result indicates that M4 is similar to M1 and M3, while M_Al is not close to any of the input
microstructures.
be two orthogonal orientation descriptors, representing a set of
orthogonal angles or their linear combinations from the 3D
Euler angles for crystal orientation. Furthermore, the increase
of PC2 and PC3 introduces more straight grain boundary seg-
ments. Therefore, PC2 and PC3 should correspond to the crys-
tallographic orientation and grain boundary morphology of the
microstructure.
Limitations of microstructure regeneration in this
work
The regenerated and newly created EBSD maps (R1–R6)
appear fairly realistic and in line with typical maps collected
from real samples. Grains appear roughly rounded and vary
in size according to their parent condition. Even the color gra-
dient visualization of lattice distortion has been preserved in
the regenerations, with R4 showing it clearly in a number of
its grains. The deep learning network can identify features
from EBSD images regardless of their underlying physical ori-
gins. Possible regeneration artifacts, however, may stem from the
encoding scheme based on the Gram matrix. Such a pixel–pixel
correlation scheme has been demonstrated to represent the tex-
ture pattern with high fidelity, but it is less sensitive to nonsta-
tistical features, such as large-scale shapes. In our attempt, it
does appear to perform less ideally for M1, which has larger grain
sizes, and it also misses the occasional twinning.
To improve microstructure regeneration in future work, the
most pressing need is to implement quantitative measures of
regeneration quality. While the regeneration approach pro-
duces realistic orientation maps, the accuracy of these maps
is not yet quantified, and important information such as mis-
orientation across a grain boundary remains to be extracted
from the newly generated images. These challenges originate
from the color coding of inverse pole figure maps (i.e., the col-
ored EBSD patterns), which is based on the function of rotation
referenced to a user-assigned direction. While it is straightfor-
ward to convert the grain orientation into the referenced rota-
tion, it requires at least two sets of rotational angles from
orthogonal reference directions to fully retrieve the grain orien-
tation based on color patterns. In other words, the 3D orienta-
tion information (i.e., Euler angles) is not fully preserved in the
two-dimensional color-coded maps. An alternative EBSD plot-
ting scheme known as Quaternions can potentially solve the
problem by displaying all of the Euler angle orientation infor-
mation with a complex dual color-contrast color scheme [56].
The complex color-coding scheme is difficult for human
visualization due to the involvement of less saturated colors
and, thus, is less popular than the inverse pole
TABLE II. The PC values for the centroid of M4.

Indexing   PC1 (10¹¹)   PC2 (10¹¹)   PC3 (10¹¹)   PC4 (10¹¹)   PC5 (10¹¹)
M4/R4      9.086        7.893        −5.691       2.793        0.962
Figure 5: Regenerated microstructures (R1, R2, and R3) based on the Gram matrices from M1, M2, and M3, respectively. (a) Original EBSD measurements and (b)
regenerated microstructures. We see that the grain orientation distribution and grain size distribution are preserved for each regenerated microstructure category.
figure. However, this will not be an issue for machine learning
and computer vision. Another notable feature lost in the regen-
erated microstructures is twinning, which is a characteristic fea-
ture in M1. This problem may be addressed by increasing the
depth of the DNN to improve the emphasis on the small-scale
features. In addition, removing flipped samples from the train-
ing data can also be helpful to preserve the twin features in
microstructure regeneration, because twin features are not sym-
metric upon image flipping.
CONCLUSIONS
In conclusion, we have presented a deep learning-enabled
framework for microstructure representation and regeneration
and have demonstrated its effectiveness by examining the
microstructures of samples fabricated by a solid-state AM tech-
nology: additive friction stir deposition. The most important
conclusions from this exploratory study include:
•By analyzing the EBSD patterns through a DNN, we
successfully identify and extract a set of principal
microstructural descriptors to reveal the most salient changes
between the input microstructures. In the example of copper
samples processed under different conditions, the difference
of microstructures in the processing domain is captured by
merely five principal components. The efficiency and
Figure 6: Prediction of microstructures from arbitrary PC values via regeneration. (a) Predicted microstructure R4 with PC values set at the centroid location of M4
in Table II, R5 from the middle point between M1 and M3, and R6 from the middle point between M2 and M3. (b) Locations of the predicted microstructure in the
reduced domain of the first three PCs. (c) Patch data from the measurement of M4. The similarity between M4 and R4 demonstrates the accuracy of the repre-
sentation framework.
TABLE III. The PC values for the regenerated microstructures in Fig. 7.

Indexing   PC1 (10¹¹)   PC2 (10¹¹)   PC3 (10¹¹)   PC4 (10¹¹)   PC5 (10¹¹)
Row 1A     −15.295      −1.418       −1.174       −0.242       −0.814
Row 1B     −10.033      −1.418       −1.174       −0.242       −0.814
Row 1C     −5.352       −1.418       −1.174       −0.242       −0.814
Row 1D     −0.404       −1.418       −1.174       −0.242       −0.814
Row 1E     4.560        −1.418       −1.174       −0.242       −0.814
Row 2A     −5.352       −29.385      −1.174       −0.242       −0.814
Row 2B     −5.352       −15.401      −1.174       −0.242       −0.814
Row 2C     −5.352       −1.418       −1.174       −0.242       −0.814
Row 2D     −5.352       12.565       −1.174       −0.242       −0.814
Row 2E     −5.352       26.549       −1.174       −0.242       −0.814
Row 3A     −5.352       −1.418       −12.107      −0.242       −0.814
Row 3B     −5.352       −1.418       −6.641       −0.242       −0.814
Row 3C     −5.352       −1.418       −1.174       −0.242       −0.814
Row 3D     −5.352       −1.418       4.292        −0.242       −0.814
Row 3E     −5.352       −1.418       9.758        −0.242       −0.814
accuracy of the established representation are validated
by the nearly perfect classification and visually similar
regeneration.
•Microstructure regeneration is successfully implemented
within the 5D reduced representation domain. This includes
both repetition of known microstructures and prediction of
“in-between” microstructures at arbitrary locations in the 5D
domain.
•The physical meaning of the microstructure descriptors is
explored by mapping the regenerated microstructures in the
representation domain, wherein the most important
descriptors are suggested to correspond to the grain size,
grain orientation, and grain boundary morphology.
Quantitative grain orientation analysis can be implemented
in the regenerated microstructures by employing more complex
color-coding schemes in future work. The presented frame-
work will then enable quantitative trend analysis and micro-
structure prediction, thereby paving the way toward
resolving several core materials science problems, such as
microstructural evolution prediction, processing-structure link-
age modeling, and heterogeneous material design and
optimization.
EXPERIMENTAL PROCEDURES
EBSD data were obtained from Cu-110 and Al 6061 samples
fabricated by additive friction stir deposition using an
MELD R2 machine (MELD Manufacturing Corporation,
Christiansburg, Virginia, USA). Deposition conditions were
changed by adjusting the tool rotational velocity and tool
travel velocity during deposition. The deposited materials
were cut and sectioned in order to examine the cross section
in line with the longitudinal direction of the tool during dep-
osition. The imaging was implemented on approximately the
same position for each sample, which was in the top layer
close to the centerline of the deposit. The samples were
electro-polished in preparation for EBSD, which was per-
formed using an FEI Helios 600 NanoLab DualBeam
Microscope (Hillsboro, Oregon, USA). The inverse pole figure
maps were generated from the EBSD data using the open-
source software ATEX [57].
Acknowledgment
Y.Z. gratefully acknowledges the support of the National
Science Foundation under Award No. CMMI-1825646.
Figure 7: Microstructure evolution with varying principal component values. Starting from the centroid location of M3, microstructures are generated by sequen-
tially stepping along the PC1, PC2, and PC3 axes by –2, –1, 0, 1, and 2 units, respectively. Representative grain boundary morphology in 2A, 2E and 3A, 3E is
highlighted. The PC values used to generate these microstructures are summarized in Table III.
References
1. I. Gibson, D.W. Rosen, and B. Stucker:Additive Manufacturing
Technologies : 3D Printing, Rapid Prototyping and Direct Digital
Manufacturing, 2nd ed. (Springer, New York; London, 2015).
2. A. Hehr and M. Norfolk: A comprehensive review of ultrasonic
additive manufacturing. Rapid Prototyp. J. 26(3), 445–458 (2019).
3. R.J. Griffiths, M.E. Perry, J.M. Sietins, Y. Zhu, N. Hardwick,
C.D. Cox, H.A. Rauch, and Z.Y. Hang: A perspective on solid-
state additive manufacturing of aluminum matrix composites
using MELD. J. Mater. Eng. Perform.28, 648–656 (2019).
4. H.Z. Yu, M.E. Jones, G.W. Brady, R.J. Griffiths, D. Garcia,
H.A. Rauch, C.D. Cox, and N. Hardwick: Non-beam-based metal
additive manufacturing enabled by additive friction stir deposition.
Scr. Mater.153, 122–130 (2018).
5. A. Yadollahi and N. Shamsaei: Additive manufacturing of fatigue
resistant materials: Challenges and opportunities. Int. J. Fatigue 98,
14–31 (2017).
6. Z. Wang, T.A. Palmer, and A.M. Beese: Effect of processing
parameters on microstructure and tensile properties of austenitic
stainless steel 304L made by directed energy deposition additive
manufacturing. Acta Mater.110, 226–235 (2016).
7. W.E. Frazier: Metal additive manufacturing: A review. J. Mater.
Eng. Perform.23, 1917–1928 (2014).
8. A.J. Schwartz, M. Kumar, B.L. Adams, and D.P. Field:Electron
Backscatter Diffraction in Materials Science (Springer, Boston, MA,
2000).
9. F. Humphreys: Characterisation of fine-scale microstructures by
electron backscatter diffraction (EBSD). Scr. Mater.51, 771–776
(2004).
10. T. Maitland and S. Sitzman:Electron Backscatter Diffraction
(EBSD) Technique and Materials Characterization Examples
(Springer, Berlin, 2007).
11. J. Mason and C. Schuh: The generalized Mackenzie distribution:
Disorientation angle distributions for arbitrary textures. Acta
Mater.57, 4186–4197 (2009).
12. J. Mason and C. Schuh: Hyperspherical harmonics for the rep-
resentation of crystallographic texture. Acta Mater.56, 6141–6155
(2008).
13. O.K. Johnson, J.M. Lund, and T.R. Critchfield: Spectral graph
theory for characterization and homogenization of grain boundary
networks. Acta Mater. 146,42–54 (2018).
14. D.T. Fullwood, S.R. Niezgoda, B.L. Adams, and S.R. Kalidindi:
Microstructure sensitive design for performance optimization.
Prog. Mater. Sci.55, 477–562 (2010).
15. S. Patala: Understanding grain boundaries –The role of crystal-
lography, structural descriptors and machine learning. Comput.
Mater. Sci.162, 281–294 (2019).
16. J.K. Mason and S. Patala: Basis functions on the grain boundary
space: Theory. arXiv preprint arXiv:1909.11838 (2019).
17. T. Guo, D.J. Lohan, R. Cang, M.Y. Ren, and J.T. Allison:An
indirect design representation for topology optimization using
variational autoencoder and style transfer. In AIAA/ASCE/AHS/
ASC Structures, Structural Dynamics, and Materials. 210049 edn,
American Institute of Aeronautics and Astronautics Inc, AIAA,
AIAA/ASCE/AHS/ASC Structures, Structural Dynamics, and
Materials Conference, 2018, Kissimmee, United States, 1/8/18.
18. Y. Chen, H. Jiang, C. Li, X. Jia, and P. Ghamisi: Deep feature
extraction and classification of hyperspectral images based on
convolutional neural networks. IEEE Trans. Geosci. Remote Sens.
54, 6232–6251 (2016).
19. Y. Lv, Y. Duan, W. Kang, Z. Li, and F.-Y. Wang: Traffic flow
prediction with big data: A deep learning approach. IEEE Trans.
Intell. Transp. Syst. 16, 865–873 (2014).
20. X.-W. Chen and X. Lin: Big data deep learning: Challenges and
perspectives. IEEE Access 2, 514–525 (2014).
21. Y. LeCun, Y. Bengio, and G. Hinton: Deep learning. Nature 521,
436 (2015).
22. A. Krizhevsky, I. Sutskever, and G.E. Hinton: Imagenet classifi-
cation with deep convolutional neural networks. Adv. Neural
Inform. Process. Syst. 1, 1097–1105 (2012).
23. K. He, X. Zhang, S. Ren, and J. Sun: Deep residual learning for
image recognition. In 2016 IEEE Conference on Computer Vision
and Pattern Recognition (CVPR),(LasVegas,NV,2016);
pp. 770–778.
24. C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov,
D. Erhan, V. Vanhoucke, and A. Rabinovich: Going deeper with
convolutions. In 2015 IEEE Conference on Computer Vision and
Pattern Recognition (CVPR), (Boston, MA, 2015); pp. 1–9.
25. J. Long, E. Shelhamer, and T. Darrell: Fully convolutional
networks for semantic segmentation. In 2015 IEEE Conference on
Computer Vision and Pattern Recognition (CVPR), (Boston, MA,
2015); pp. 3431–3440.
26. W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu,
and A.C. Berg: SSD: Single Shot Multibox Detector. In
Computer Vision –ECCV 2016. ECCV 2016. Lecture Notes in
Computer Science, vol 9905,B.Leibe,J.Matas,N.Sebe,and
M.Welling,eds.(Springer,Cham,2016);pp.21–37.
27. S. Ren, K. He, R. Girshick, and J. Sun: Faster R-CNN: Towards
real-time object detection with region proposal networks. In IEEE
Transactions on Pattern Analysis and Machine Intelligence,
(vol. 39, no. 6, 2017) pp. 1137–1149.
28. J. Redmon, S. Divvala, R. Girshick, and A. Farhadi: You only
look once: Unified, real-time object detection. In 2016 IEEE
Conference on Computer Vision and Pattern Recognition (CVPR),
(Las Vegas, NV, 2016); pp. 779–788.
29. K. He, G. Gkioxari, P. Dollár, and R. Girshick: Mask R-CNN. In
2017 IEEE International Conference on Computer Vision (ICCV),
(Venice, 2017); pp. 2980–2988.
30. S.M. Azimi, D. Britz, M. Engstler, M. Fritz, and F. Mücklich:
Advanced steel microstructural classification by deep learning
methods. Sci. Rep.8, 2128 (2018).
31. A. Chowdhury, E. Kautz, B. Yener, and D. Lewis: Image driven
machine learning methods for microstructure recognition.
Comput. Mat. Sci. 123, 176–187 (2016).
32. A.R. Kitahara and E.A. Holm: Microstructure cluster analysis
with transfer learning and unsupervised learning. Integr. Mater.
Manuf. Innov. 7, 148–156 (2018).
33. Q. Gao and S. Roth: Texture synthesis: From convolutional RBMs
to efficient deterministic algorithms. In Structural, Syntactic, and
Statistical Pattern Recognition. S+SSPR 2014. Lecture Notes in
Computer Science, vol 8621, P. Fränti, G. Brown, M. Loog,
F. Escolano, and M. Pelillo, eds. (Springer, Berlin, Heidelberg,
2014); pp. 434–443.
34. L. Gatys, A.S. Ecker, and M. Bethge: Texture synthesis using
convolutional neural networks. Adv. Neural Inf. Process. Syst.1,
262–270 (2015).
35. R. Cang, Y. Xu, S. Chen, Y. Liu, Y. Jiao, and M. Yi Ren:
Microstructure representation and reconstruction of heteroge-
neous materials via deep belief network for computational material
design. J. Mech. Des.139(7), 071404 (2017).
36. R. Liu, A. Kumar, Z. Chen, A. Agrawal, V. Sundararaghavan,
and A. Choudhary: A predictive machine learning approach for
microstructure optimization and materials design. Sci. Rep.5,
11551 (2015).
37. X. Li, Y. Zhang, H. Zhao, C. Burkhart, L.C. Brinson, and
W. Chen: A transfer learning approach for microstructure recon-
struction and structure-property predictions. Sci. Rep.8, 13461
(2018).
38. R. Liu, A. Agrawal, W.-K. Liao, A. Choudhary, and M. De Graef:
Materials discovery: Understanding polycrystals from large-scale
electron patterns. In 2016 IEEE International Conference on Big
Data (Big Data), (Washington, DC, 2016); pp. 2261–2269.
39. K. Kaufmann, C. Zhu, A.S. Rosengarten, D. Maryanovsky,
T.J. Harrington, E. Marin, and K.S. Vecchio: Paradigm shift in
electron-based crystallography via machine learning. arXiv pre-
print arXiv:1902.03682 (2019).
40. D. Jha, S. Singh, R. Al-Bahrani, W.-K. Liao, A. Choudhary,
M. De Graef, and A. Agrawal: Extracting grain orientations from
EBSD patterns of polycrystalline materials using convolutional
neural networks. Microsc. Microanal.24, 497–502 (2018).
41. T. Karras, S. Laine, and T. Aila: A style-based generator archi-
tecture for generative adversarial networks. In 2019 IEEE/CVF
Conference on Computer Vision and Pattern Recognition (CVPR),
(Long Beach, CA, USA, 2019); pp. 4396–4405.
42. A. Kirillov, Y. Wu, K. He, and R. Girshick: PointRend: Image
segmentation as rendering. arXiv preprint arXiv:1912.08193
(2019).
43. N. Lubbers, T. Lookman, and K. Barros: Inferring low-
dimensional microstructure representations using convolutional
neural networks. Phys. Rev. E 96, 052111 (2017).
44. K. Simonyan and A. Zisserman: Very deep convolutional net-
works for large-scale image recognition. arXiv preprint
arXiv:1409.1556 (2014).
45. J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei:
Imagenet: A large-scale hierarchical image database. In 2009 IEEE
Conference on Computer Vision and Pattern Recognition, (Miami,
FL, 2009); pp. 248–255.
46. G. Mesnil, Y. Dauphin, X. Glorot, S. Rifai, Y. Bengio,
I. Goodfellow, E. Lavoie, X. Muller, G. Desjardins, and
D. Warde-Farley: Using Recurrent Neural Networks for Slot
Filling in Spoken Language Understanding. In IEEE/ACM
Transactions on Audio, Speech, and Language Processing, (vol. 23,
no. 3, 2015) pp. 530–539.
47. J. Chi, E. Walia, P. Babyn, J. Wang, G. Groot, and M. Eramian:
Thyroid nodule classification in ultrasound images by fine-tuning deep
convolutional neural network. J. Digit. Imaging 30,477–486 (2017).
48. M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean,
M. Devin, S. Ghemawat, G. Irving, and M. Isard: Tensorflow:
A system for large-scale machine learning. In 12th {USENIX}
Symposium on Operating Systems Design and Implementation
({OSDI} 16), (Savannah, GA, USA, 2016); pp. 265–283.
49. F. Chollet, Keras: https://github.com/fchollet/keras, 2015.
50. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel,
B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, and
V. Dubourg: Scikit-learn: Machine learning in Python. J. Mach.
Learn. Res.12, 2825–2830 (2011).
51. R.J. Griffiths, D. Garcia, J. Song, V.K. Vasudevan, M.A. Steiner,
W. Cai, and H.Z. Yu: Solid-state additive manufacturing of
aluminum and copper via additive friction stir deposition: Process-
microstructure linkages. (Materialia, 2020). (under review).
52. C. Cortes and V. Vapnik: Support-vector networks. Mach. Learn.
20, 273–297 (1995).
53. J.A. Suykens and J. Vandewalle: Least squares support vector
machine classifiers. Neural Process. Lett.9, 293–300 (1999).
54. A. Chambolle: An algorithm for total variation minimization and
applications. J. Math. Imaging Vis.20,89–97 (2004).
55. R.H. Byrd, P. Lu, J. Nocedal, and C. Zhu: A limited memory
algorithm for bound constrained optimization. SIAM J. Sci.
Comput.16, 1190–1208 (1995).
56. A. Melcher, A. Unser, M. Reichhardt, B. Nestler, M. Pötschke,
and M. Selzer: Conversion of EBSD data by a quaternion based
algorithm to be used for grain structure simulations. Tech. Mech.
30, 401–413 (2010).
57. B. Beausir and J. Fundenberger: Analysis tools for electron and
X-ray diffraction. ATEX software (2017). Available at: www.atex-
software.eu (accessed May 22–June 12, 2019).