Machine Learning Model for High-Frequency Magnetic Loss
Predictions Based on Loss Map by a Measurement Kit
Xiaobing Shen and Wilmar Martinez
KU Leuven - EnergyVille
Thor Park 8310, 3600 Genk, Belgium
xiaobing.shen, wilmar.martinez@kuleuven.be
https://homes.esat.kuleuven.be/∼wmartine/
Index Terms—Core loss modelling, Deep Neural Network, Measurements, Machine learning, Magnetic device.
Abstract—Accurately forecasting the losses in high-frequency magnetic materials is a significant challenge when optimizing the design of high-frequency (HF) magnetic components. Existing models do not adequately consider geometry and temperature, factors that have distinct and substantial impacts on core losses. A new method is introduced that uses a Deep Neural Network (DNN) to construct parameterized models of high-frequency magnetic core loss based on measurement data. The DNN employs the Gaussian Error Linear Unit (GELU) activation function and the Huber loss function, and its performance is compared to that of a conventional Rectified Linear Unit (ReLU) activation and Mean Squared Error (MSE) loss function. The proposed DNN demonstrates significantly higher accuracy and improved robustness.
I. INTRODUCTION
HF magnetic components play a critical role in power electronics systems and in their evolution towards higher power density and efficiency. The design of HF magnetic components can be the bottleneck when designing a whole power conversion system, since they are the bulkiest components of the system and account for a major share of the losses, including winding and magnetic iron losses [1]. Nevertheless, accurately modelling magnetic core losses is not a trivial task and requires extensive measurements. These challenges arise from several developments. Modulation control methods such as pulse width modulation (PWM) introduce a diversity of excitation waveforms beyond the traditional sinusoidal excitation of magnetic materials [2]; as a result, these waveforms generally contain a significant amount of harmonic content. Additionally, the increase of switching frequencies into the MHz range, driven by advances in semiconductor technology towards wide-bandgap devices, is necessary to realize compact power converters, but it further exacerbates the complexity of magnetic component characterization, modelling, and measurements [3]. Magnetic materials also exhibit changing behaviour: for a given material such as Ferrite N87, different sizes and shapes, as well as different temperatures and excitations, can lead to noticeably different magnetic losses [4].
To understand core losses, measurements should be performed first; modelling can then be used to predict the losses in a specific situation. In practice, engineers tend to go directly to core loss models for predictions. There are many methods of magnetic loss modelling, which can be divided into the following categories.
• Core Loss Separation: the loss is decomposed into hysteresis and eddy-current losses, as illustrated in [5]. It can be applied when sufficient measurements are not available.
• Steinmetz Equation (SE) Family: the core loss density is derived from the SE [6]. [7]–[11] improved the SE to predict losses more accurately under various excitations.
• BH-Curve Based: the energy dissipated inside the core corresponds to the area of the BH loop, as in the Preisach [12] and Jiles-Atherton models. These models need many parameters and accurate BH loops.
Since SE-based models are used most often by engineers, such approaches require measurements. Core loss measurements can be categorized into calorimetric and electric methods. In the calorimetric method, core losses cause temperature increases, and the losses can be estimated from the thermal properties and temperature changes; however, it is challenging to separate the iron loss from the winding conduction loss. The electric method avoids this problem: by measuring the induced voltage and the excitation current, the winding losses can be
excluded from the measurements. The electric method is therefore chosen, and a measurement kit is proposed in this paper.
All SE-based models rely on the Steinmetz parameters, and engineers typically use them as constants. However, the parameters change with frequency and with the core size and structure. All these discrete factors, such as the toroid size of a Ferrite N87 core, affect the core losses. Therefore, this paper provides comprehensive measurement data across different frequency ranges and sizes of magnetic material to further improve magnetic loss modelling, shifting the model from one based on material datasheets to one based on empirical measurements.
With all these discrete parameters, an accurate core loss model must contain information on the core geometry, the excitations, and the nonlinear features under different operating conditions. In this paper, machine learning techniques, proven to be effective in solving nonlinear problems, are applied to deal with these challenges. In the context of the general advances of machine learning technologies in power electronics applications, DNN approaches are widely used for designing and modelling high-frequency magnetics in many different ways [13]–[19]. For example, the MagNet project in [15]–[17] aims at storing massive amounts of data for data-driven research on high-frequency magnetics with machine learning tools. MagNet collected massive core loss data for various core materials. However, for a specific core, the coverage of core sizes and operating conditions is not sufficient for modelling the losses of that specific core. [18] uses the MagNet database with frequency and maximum flux density as the inputs to train a DNN for loss prediction, without considering the size of the magnetic material, the temperature, and other factors. [19] includes the DC bias as an input and predicts both core losses and winding losses with a trained DNN. This paper builds a large dataset that takes these side effects and different operating conditions into account to improve the core loss model with a DNN approach. During the modelling process, a comparison of various activation functions and loss functions reveals that the Gaussian Error Linear Unit (GELU) activation function and the Huber loss function outperform the conventional Rectified Linear Unit (ReLU) activation function and the MSE loss function for core loss modelling.
The paper first introduces the core loss map obtained from the SE family. Then a measurement kit is proposed for practical measurements, and the measurement results are verified against the datasheet. Next, the datasets from the measurements, including discrete size factors and operating conditions, are used to train the proposed DNN model. Finally, the model is tested and conclusions are drawn.
II. CORE LOSS MAP FOR SE
For SE, core losses can be predicted by a formula
where the losses are derived from the excitation fre-
quency and the maximum flux density.
P_{\text{core loss}} = k \cdot f^{\alpha} \cdot B^{\beta} \quad (1)

where k, \alpha, and \beta are the Steinmetz parameters, which are usually given by the manufacturer or can be derived by curve fitting. From the manufacturer datasheet [20], the SE parameters can be characterized as shown in Table I.
TABLE I: N87 SE Parameters by Datasheet

k        α     β       T (°C)   f (kHz)
29.865   1.2   2.421   25       100
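As noted above, the Steinmetz parameters can also be derived by curve fitting measured loss data. A minimal Python/NumPy sketch is given below; the sample points are synthetic placeholders generated from assumed parameters (not measured values), only to show the log-linear least-squares fit of Eq. (1).

```python
import numpy as np

# Fit SE parameters k, alpha, beta from (f, B, P) samples via Eq. (1):
# log P = log k + alpha*log f + beta*log B is linear in the unknowns.
rng = np.random.default_rng(0)
k_true, a_true, b_true = 29.865, 1.2, 2.421        # Table I values, used only to generate synthetic data
f = rng.uniform(50, 500, 200)                      # frequency samples (kHz, assumed unit convention)
B = rng.uniform(0.05, 0.3, 200)                    # peak flux density samples (T, assumed)
P = k_true * f**a_true * B**b_true * rng.lognormal(0.0, 0.02, 200)   # "measured" loss density with noise

A = np.column_stack([np.ones_like(f), np.log(f), np.log(B)])
coef, *_ = np.linalg.lstsq(A, np.log(P), rcond=None)
k_fit, a_fit, b_fit = np.exp(coef[0]), coef[1], coef[2]
print(f"k = {k_fit:.3f}, alpha = {a_fit:.3f}, beta = {b_fit:.3f}")   # recovers ~29.865, 1.2, 2.421
```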
With these constant SE parameters, a constant Ferrite N87 core loss map can be drawn, as in Fig. 1. The problem is that, for all the different core sizes, the losses are then predicted under the assumption of constant SE parameters. As will be shown in the following, the core size and the excitation change the core losses even at the same frequency and maximum flux density. This paper takes these factors into account to obtain better core loss predictions.
Fig. 1: Ferrite N87 Core Loss Map by SE
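For reference, the constant-parameter loss map of Fig. 1 is simply Eq. (1) evaluated on a frequency/flux-density grid. A short sketch follows; the grid ranges and unit convention (f in kHz, B in T) are assumptions, not reported values.

```python
import numpy as np

# Evaluate Eq. (1) with the constant Table I parameters on a log-spaced grid,
# reproducing the kind of loss map shown in Fig. 1 (axis ranges are assumptions).
k, alpha, beta = 29.865, 1.2, 2.421
f = np.logspace(np.log10(25), np.log10(1000), 60)     # frequency grid, 25 kHz .. 1 MHz
B = np.logspace(np.log10(0.025), np.log10(0.3), 60)   # flux density grid, 25 mT .. 300 mT (in T)
F, Bm = np.meshgrid(f, B)
P_map = k * F**alpha * Bm**beta                        # core loss density, one value per (f, B) point
```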
III. MEASUREMENT SETUP AND DATA COLLECTION
This paper is aimed at giving comprehensive measurement datasets for the core loss prediction model with
characterizing size and temperature factors. A measurement kit was proposed, and the device under test (DUT) is the core to be characterized and tested. Four different sizes of Ferrite N87 cores (B64290L0618X087) were selected, as shown in Fig. 2.
Fig. 2: (a) Measurement kit setup, (b) measurements with rectangular excitations, (c) Ferrite N87 cores under test.
TABLE II: Ferrite N87 Core Materials

Core       Ae (mm²)   le (mm)   Ve (mm³)
N87 R50    195.7      120.4     23560
N87 R34    82.6       82.06     6678
N87 R25    51.26      60.07     3079
N87 R16    19.73      38.52     760
The magnetic characteristics of these four sizes of cores are shown in Table II, where Ae stands for the effective cross-sectional area, le for the effective magnetic path length, and Ve for the effective magnetic volume.
With the measurement setup, the current and voltage data are first collected from the measurement kit. The data processing techniques are implemented in MATLAB. First, based on the current and voltage waveforms, the operational BH loop can be derived by (2) and (3).
H(t) = \frac{N_1}{l_e} \cdot i(t) \quad (2)

B_m(t) = \frac{1}{N_2 A_e} \int_0^{t} v(\tau)\, d\tau \quad (3)
The core power loss can then be derived, and with the provided core effective magnetic volume V_e, the power loss density is given by (4).

P_{\text{loss density}} = \frac{N_1}{N_2\, T\, V_e} \int_0^{T} i(t)\, v(t)\, dt \quad (4)
Based on (4), the core loss density of each size under different operating conditions can be derived, ready for core loss characterization and data collection.
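To make the post-processing concrete, here is a minimal Python/NumPy sketch of Eqs. (2)–(4); the paper's implementation is in MATLAB, and the turns counts and waveforms below are illustrative placeholders (the geometry is the N87 R34 entry from Table II).

```python
import numpy as np

# Turns counts and waveforms are illustrative placeholders; geometry is N87 R34 from Table II.
N1, N2 = 5, 5                                  # primary / secondary turns (assumed)
Ae, le, Ve = 82.6e-6, 82.06e-3, 6678e-9        # effective area (m^2), path length (m), volume (m^3)

f_exc = 100e3                                  # excitation frequency (Hz)
t = np.linspace(0.0, 1.0 / f_exc, 2000)        # one period T
T = t[-1] - t[0]
i = 0.5 * np.sin(2 * np.pi * f_exc * t) + 0.02 * np.cos(2 * np.pi * f_exc * t)  # primary current (A)
v = 3.0 * np.cos(2 * np.pi * f_exc * t)                                         # secondary voltage (V)

H = (N1 / le) * i                              # Eq. (2): magnetic field strength H(t) in A/m
# Eq. (3): flux density B(t) by cumulative trapezoidal integration of v(t)
B = np.concatenate(([0.0], np.cumsum(np.diff(t) * 0.5 * (v[1:] + v[:-1])))) / (N2 * Ae)
P_density = (N1 / (N2 * T * Ve)) * np.trapz(i * v, t)   # Eq. (4): core loss density (W/m^3)
print(f"Loss density: {P_density:.1f} W/m^3")
```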
All the data post-processing steps are done in MATLAB. First, the Ferrite N87 R34 core was tested, as it is the size characterized in the datasheet, to verify our measurement results. Fig. 3 shows the comparison between our measurements and the datasheet values for the same core size at a temperature of T = 25 °C.

Fig. 3: Comparison of measured and datasheet core loss density (W/m³) versus maximum flux density (mT) at f = 100 kHz and 25 °C.
Then, triangular excitations are used for the measurements. Our measurements are done without DC bias: a capacitor is connected to the DUT to filter out the DC offset. An example of how core losses change
with sizes and different operating conditions is shown in Fig. 4.

Fig. 4: Measured core loss density versus maximum flux density for the N87 R16, R25, R34, and R50 cores at f = 100 kHz, T = 25 °C.

With constant Steinmetz parameters, the deviations of the core loss model predictions become large for different core sizes; an example comparison between the measurement results and the SE predictions for the N87 R16 core is shown in Fig. 5. Based on this, this paper aims at providing an efficient and more accurate DNN prediction model. The data set is then collected following the flow in Fig. 6.

Fig. 5: Measured core loss density versus maximum flux density for the N87 R16 core compared with the SE prediction using datasheet SE parameters (f = 100 kHz, T = 25 °C).

Overall, for
each size, 3440 data points were collected and in total,
13760 sets of data points were aggregated for further
DNN training.
Fig. 6: The data collection flow: primary current and secondary voltage data from the oscilloscope → data post-processing in MATLAB for core loss density at each f and Bmax → data aggregation for the DNN model.
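As an illustration of the aggregation step, the sketch below turns processed measurements into rows of (frequency, maximum flux density, size descriptor, temperature, loss density). The field names, the choice of the effective volume Ve as the size feature, the two sample values (in the range shown in Fig. 4), and the output file name are all assumptions for illustration only.

```python
import numpy as np

# Each processed measurement becomes one dataset row:
# frequency, max flux density, size descriptor (Ve), temperature, measured loss density.
processed = [
    {"f_hz": 100e3, "b_max_t": 0.20, "ve_mm3": 6678.0, "temp_c": 25.0, "p_wpm3": 4.1e2},
    {"f_hz": 100e3, "b_max_t": 0.20, "ve_mm3": 760.0,  "temp_c": 25.0, "p_wpm3": 5.3e2},
]
dataset = np.array([[r["f_hz"], r["b_max_t"], r["ve_mm3"], r["temp_c"], r["p_wpm3"]]
                    for r in processed])            # full set: 3440 rows per core size, 13760 total
np.save("dnncl_dataset.npy", dataset)               # stored for DNN training (file name assumed)
```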
IV. DNN-BASED CORE LOSS (DNNCL) PREDICTIONS
The DNN is a well-known technique for tackling discrete, multi-input, and nonlinear regression problems [13]–[17]. It can model high-frequency magnetic behaviour with less effort and without profound and detailed physical theories [21]–[23]. Fig. 7 gives an illustration of the DNN structure. A DNN has three main layers: one input layer, one output layer, and hidden layers between the input and output layers. With multiple hidden layers, the network becomes "deep".
Fig. 7: DNNCL Structure
In this paper, the input layer contains four input features: frequency, flux density, size information, and temperature. The output layer is the core loss power density. Note that the DNNCL can work as a surrogate model to accelerate the power electronics design process.
As the basic component of a neural network, a neuron with an activation function is mathematically expressed as:

y = \phi\left( \sum_{i=1}^{n} (a_i X_i + b) \right) \quad (5)
X_i represents the i-th element of the input matrix. The weight factor of this element is denoted by a_i, and b
represents the offset or bias term. The activation func-
tion, represented by the symbol ϕ, is applied to introduce
nonlinearity to the neurons and can be utilized to fit
different nonlinear models. Among various activation
functions, the ReLU is widely employed in practical
applications.
One limitation of ReLU is the "dying ReLU" problem, where some neurons may become inactive and output zero for a specific range of inputs, effectively "dying" and not contributing to the learning process.
GELU can help alleviate this issue by providing non-
zero outputs for a wider range of inputs, reducing the
likelihood of dead neurons. Unlike ReLU, which is a
piecewise linear function, GELU is a smooth function.
It is derived from the Gaussian cumulative distribution
function, which gives it a smooth and continuous shape.
This smoothness can help with better gradient flow
during training and can result in more stable and efficient
learning [24]. The GELU activation function can be
mathematically represented as follows:
\mathrm{GELU}(x) = x \cdot P(X \le x) = x \cdot \Phi(x) \quad (6)

In this equation, \Phi(x) denotes the cumulative distribution function of the standard Gaussian distribution evaluated at x. Since directly computing this expression in closed form is not feasible, an approximation can be used, which is expressed as follows:
\mathrm{GELU}(x) \approx 0.5\, x \left( 1 + \tanh\!\left[ \sqrt{\tfrac{2}{\pi}} \left( x + 0.044715\, x^{3} \right) \right] \right) \quad (7)
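A quick numerical check (pure Python, no framework assumed) confirms that the tanh approximation in (7) tracks the exact definition in (6):

```python
import math

def gelu_exact(x):
    """Eq. (6): x * Phi(x), with Phi the standard Gaussian CDF (via erf)."""
    return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_tanh(x):
    """Eq. (7): tanh-based approximation."""
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))

for x in (-3.0, -1.0, 0.0, 0.5, 2.0):
    print(f"x = {x:+.1f}   exact = {gelu_exact(x):+.6f}   tanh approx = {gelu_tanh(x):+.6f}")
```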
When it comes to the number of neurons and hidden layers, various choices can yield good DNN performance; the DNNCL structure is therefore not fixed. In general, for power electronics applications [17], there is no strict principle for selecting these numbers. A small network can limit the data-driven learning capacity, while a large network can lead to overfitting and prolonged training time. In addition, data scaling techniques and the training process have a strong effect on the DNN performance [18].
Using the dataset of measurements collected above, the DNNCL can be trained, and the structure and parameters of the trained DNNCL can later be used by engineers to predict core losses for user-defined inputs. The MATLAB built-in DNN toolbox is used to explore suitable DNN structures. Finally, a DNN with 15 hidden layers was chosen. The dataset was split randomly into three subsets: training (70%), validation (15%), and testing (15%).
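For illustration, a PyTorch counterpart of the described structure is sketched below; the authors used MATLAB's built-in DNN toolbox, so the framework, hidden-layer width, and placeholder tensors here are assumptions, not the reported implementation.

```python
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, random_split

# Hypothetical DNNCL-like network: 4 input features (frequency, max flux density,
# size descriptor, temperature), 15 hidden layers with GELU, 1 output (loss density).
HIDDEN, DEPTH = 64, 15                              # hidden width is an assumption
layers = [nn.Linear(4, HIDDEN), nn.GELU()]
for _ in range(DEPTH - 1):
    layers += [nn.Linear(HIDDEN, HIDDEN), nn.GELU()]
layers.append(nn.Linear(HIDDEN, 1))
dnncl = nn.Sequential(*layers)

# Random 70 / 15 / 15 split of a placeholder dataset of the paper's size (13760 samples).
X = torch.randn(13760, 4)          # normalized input features (placeholder values)
y = torch.randn(13760, 1)          # normalized core loss density (placeholder values)
full = TensorDataset(X, y)
n_train, n_val = int(0.70 * len(full)), int(0.15 * len(full))
train_set, val_set, test_set = random_split(full, [n_train, n_val, len(full) - n_train - n_val])
```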
The Mean Squared Error (MSE) loss function is
commonly employed in general regression problems due
to its effectiveness in gradient descent convergence [25].
However, when it comes to DNNCL modeling, the
presence of outlier samples poses a challenge. The MSE
loss function squares the error, amplifying the impact of
outliers and leading to a slower decrease in the test loss
during training. Consequently, achieving the desired loss
threshold becomes difficult, or even impossible, under
these circumstances.
The Huber loss function is a robust loss function
that combines the best properties of the Mean Absolute
Error (MAE) and MSE [26]. It is less sensitive to outliers
compared to MSE. The Huber loss function calculates
the squared error when the absolute error is small and
the absolute error itself when the error is large. This
allows the Huber loss function to balance the advantages
of both MAE and MSE. Hence, the Huber loss function
is applied to DNNCL, which is defined as follows:
L_{\delta}(y, f(x)) =
\begin{cases}
\frac{1}{2}\,(y - f(x))^{2}, & \text{if } |y - f(x)| \le \delta \\
\delta\,|y - f(x)| - \frac{1}{2}\,\delta^{2}, & \text{otherwise}
\end{cases} \quad (8)
In this paper, the hyperparameter (denoted as δ) of
the Huber loss function is set to 1. The Huber loss
function is defined in terms of the true value (y) and
the predicted value (f(x)). In this case, y stands for the
measured core loss density, and f(x) for the predicted
core loss density from DNNCL.
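A direct implementation of Eq. (8) with δ = 1 is sketched below (PyTorch's built-in torch.nn.HuberLoss(delta=1.0) computes the same per-element loss); the sample values are placeholders that illustrate how an outlier is penalized linearly rather than quadratically.

```python
import numpy as np

def huber(y, f_x, delta=1.0):
    """Element-wise Huber loss of Eq. (8)."""
    err = np.abs(y - f_x)
    quadratic = 0.5 * (y - f_x) ** 2                 # used when |error| <= delta
    linear = delta * err - 0.5 * delta ** 2          # used when |error| >  delta
    return np.where(err <= delta, quadratic, linear)

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.2, 2.0, 6.0])                   # the last sample acts as an outlier
print(huber(y_true, y_pred))                         # -> [0.02, 0.0, 2.5]
```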
Fig. 8: The workflow of training DNNCL: automatic measurement data collection → data normalization and data split → DNNCL construction and loss function definition → DNNCL model optimization → trained DNNCL.
An optimization algorithm is employed to minimize
the loss function, which serves as the objective function
during the optimization process. The weight factors and
offset in the neural network are continuously adjusted
to reduce the training error associated with the loss
function. To leverage the benefits of the adaptive moment
estimation (Adam) algorithm [27], which allows for
independent adaptive learning rates by computing first- and second-order moment estimates of the gradients, Adam is chosen as the optimization function. The workflow of training the DNNCL is shown in Fig. 8.
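A compact, self-contained sketch of the training step in Fig. 8 using the Huber objective and Adam is given below; the model size, learning rate, batch size, epoch count, and placeholder tensors are assumptions, not the authors' MATLAB settings.

```python
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader

# Placeholder data and a small network stand in for the measured dataset and the DNNCL.
X, y = torch.randn(1024, 4), torch.randn(1024, 1)
train_loader = DataLoader(TensorDataset(X, y), batch_size=128, shuffle=True)
model = nn.Sequential(nn.Linear(4, 64), nn.GELU(), nn.Linear(64, 64), nn.GELU(), nn.Linear(64, 1))

criterion = nn.HuberLoss(delta=1.0)                        # Huber objective, Eq. (8) with delta = 1
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam with adaptive per-parameter rates

for epoch in range(100):                                   # epoch count is an assumption
    for xb, yb in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)                    # training error under the Huber loss
        loss.backward()                                    # gradients w.r.t. weights and biases
        optimizer.step()                                   # weights/offsets adjusted to reduce the loss
```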
V. DNNCL PREDICTION RESULTS
Fig. 9 (a) and (b) show the training performance in terms of the MSE and Huber loss values, respectively. From the figures, it can be concluded that, with the measured dataset, the DNNCL with GELU and the Huber loss function learns the mapping from the input parameters to the output core loss densities faster and is more robust. The Huber loss function offers robustness to outliers while providing a balanced measure of error; it is a suitable alternative to MSE when outliers may be present or when a balance between penalizing large errors and maintaining training stability is desired.
Fig. 9: (a) DNN with ReLU and MSE, (b) DNNCL with GELU and Huber loss function.
Also, Fig. 10 depicts the error histogram, representing the data points and the relative error between the measured and predicted values. It confirms that the DNNCL is well trained on the tested data, verifying its prediction accuracy and capacity. Based on these performance results, the trained DNNCL structure and parameters are stored for testing and future predictions.
Fig. 10: Training Error Histogram
Based on the comparisons of predicted vs. target values for the training, validation, and test datasets, which achieve an impressive R-value of 0.9997 in Fig. 11, it is evident that there is a strong correlation between the predicted and target values. This showcases the strong predictive capability of the DNNCL with GELU activation and the Huber loss function, enabling accurate core loss predictions using four discrete input features.
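For completeness, the reported R-value and the relative errors of Fig. 10 correspond to standard computations such as the sketch below; the arrays are placeholders standing in for DNNCL predictions and measured targets.

```python
import numpy as np

# Placeholder predicted/measured loss densities standing in for the DNNCL test outputs.
y_meas = np.array([410.0, 530.0, 250.0, 980.0, 120.0])
y_pred = np.array([415.0, 522.0, 255.0, 990.0, 118.0])

R = np.corrcoef(y_meas, y_pred)[0, 1]            # Pearson correlation; values near 1 mean strong agreement
rel_err = 100.0 * (y_pred - y_meas) / y_meas     # relative error (%) as used for the error histogram
print(f"R = {R:.4f}")
print("relative errors (%):", np.round(rel_err, 2))
```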
VI. CONCLUSION
This paper proposed a DNNCL with GELU activation and the Huber loss function, incorporating geometry and temperature information as well as different operating points. The results indicate that this specific DNNCL is able to distinguish different core sizes under the same operating conditions and to give relatively accurate core loss density predictions. The proposed model for magnetic loss estimation can work as an accurate surrogate model in the high-frequency magnetic design process for power electronics designers. With the achieved performance, the DNNCL can benefit various power electronics engineering challenges, such as loss estimation for HF magnetic component design.
REFERENCES
[1] W. Martinez, S. Odawara, and K. Fujisaki, “Iron loss
characteristics evaluation using a high-frequency GAN
inverter excitation,” IEEE Transactions on Magnetics,
vol. 53, no. 11, pp. 1–7, 2017.
Fig. 11: Predicted vs. target values for (a) training data, (b) validation data, (c) test data.
[2] Z. Zhao et al., “Modeling Magnetic Hysteresis Under
DC-Biased Magnetization Using the Neural Network,”
IEEE Trans. Magn., vol. 45, no. 10, pp. 3958–3961, Oct.
2009.
[3] W. Martinez and C. Suarez, “Total Harmonic Dis-
tortion Analysis in Magnetic Characterization using
High Frequency GaN Inverter in the MHz Order,” in
EPE ’19 ECCE Europe, Sep. 2019, p. P.1-P.9. doi:
10.23919/EPE.2019.8915546
[4] W. Martinez, X. Shen, S. Lin and J. Friebe, ”Magnetic
Core Evaluation Kit for the Comparison of Core Losses,”
2022 24th European Conference on Power Electronics
and Applications (EPE’22 ECCE Europe), Hanover, Ger-
many, 2022, pp. P.1-P.9
[5] W. Roshen, “Ferrite core loss for power magnetic components design,” IEEE Transactions on Magnetics, vol. 27, no. 6, Nov. 1991.
[6] C. P. Steinmetz, “On the Law of Hysteresis,” Transactions of the American Institute of Electrical Engineers, vol. IX, no. 1, 1892.
[7] M. Albach, T. Durbaum, and A. Brockmeyer, “Calculat-
ing core losses in transformers for arbitrary magnetizing
currents a comparison of different approaches,” in PESC
Record. 27th Annual IEEE Power Electronics Specialists
Conference. IEEE, 1996.
[8] J. Reinert, A. Brockmeyer, and R. De Doncker, “Calcula-
tion of losses in ferro- and ferrimagnetic materials based
on the modified Steinmetz equation,” IEEE Transactions
on Industry Applications, vol. 37, no. 4, 2001.
[9] Jieli Li, T. Abdallah, and C. Sullivan, “Improved cal-
culation of core loss with nonsinusoidal waveforms,”
in Conference Record of the 2001 IEEE Industry Ap-
plications Conference. 36th IAS Annual Meeting (Cat.
No.01CH37248). IEEE, 2001.
[10] K. Venkatachalam, C. Sullivan, T. Abdallah, and H.
Tacca, “Accurate prediction of ferrite core loss with
nonsinusoidal waveforms using only Steinmetz param-
eters,” in 2002 IEEE Workshop on Computers in Power
Electronics, 2002. Proceedings. IEEE, 2002.
[11] J. Muhlethaler, J. Biela, J. W. Kolar, and A. Ecklebe,
“Improved Core-Loss Calculation for Magnetic Com-
ponents Employed in Power Electronic Systems,” IEEE
Transactions on Power Electronics, vol. 27, no. 2, 2 2012.
[12] I. D. Mayergoyz and G. Friedman, “Generalized Preisach
model of hysteresis,” IEEE Transactions on Magnetics,
vol. 24, no. 1, pp. 212–217, 1988.
[13] H. Li, S. R. Lee, M. Luo, C. R. Sullivan, Y. Chen,
and M. Chen, “MagNet: A machine learning framework
for magnetic core loss modeling,” in Proc. IEEE 21st
Workshop Control Model. Power Electron., 2020,pp. 1–8.
[14] E. Dogariu, H. Li, D. S. López, S. Wang, M. Luo, and M. Chen, “Transfer learning methods for magnetic core
loss modeling,” in Proc. IEEE 22nd Workshop Control
Model. Power Electron., 2021, pp. 1–6.
[15] H. Li et al., “Magnet: An open-source database for data-
driven magnetic core loss modeling,” in Proc. IEEE Appl.
Power Electron. Conf. Expo., 2022, pp. 588–595.
[16] D. Serrano et al., “Neural network as datasheet: Mod-
eling B-H loops of power magnetics with sequence-to-
sequence lstm encoder-decoder architecture,” in Proc.
IEEE 23rd Workshop Control Model. Power Electron.,
2022, pp. 1–8.
[17] S. Zhao, F. Blaabjerg, and H. Wang, “An overview
of artificial intelligence applications for power electron-
ics,” IEEE Trans. Power Electron., vol. 36, no. 4, pp.
4633–4658, Apr. 2021.
[18] X. Shen, H. Wouters and W. Martinez, ”Deep Neu-
ral Network for Magnetic Core Loss Estimation using
the MagNet Experimental Database,” 2022 24th Euro-
pean Conference on Power Electronics and Applications
(EPE’22 ECCE Europe), Hanover, Germany, 2022, pp.
1-8.
[19] N. Rasekh, J. Wang and X. Yuan, ”Artificial Neural Net-
work Aided Loss Maps for Inductors and Transformers,”
in IEEE Open Journal of Power Electronics, vol. 3, pp.
886-898, 2022, doi: 10.1109/OJPEL.2022.3223936.
[20] TDK “Ferrites and accessories SIFERRIT material N87”
[21] T. Guillod, P. Papamanolis, and J. W. Kolar, “Artificial
neural network (ANN) based fast and accurate inductor
modeling and design,” IEEE Open J. Power Electron.,
vol. 1, pp. 284–299, 2020.
[22] E. I. Amoiralis, P. S. Georgilakis, T. D. Kefalas, M. A.
Tsili, and A. G. Kladas, “Artificial intelligence combined
with hybrid FEM-BE techniques for global transformer
optimization,” IEEE Trans. Magn., vol. 43, no. 4, pp.
1633–1636, Apr. 2007.
[23] C. Nussbaum, H. Pfutzner, T. Booth, N. Baumgartinger,
A. Ilo, and M. Clabian, “Neural networks for the predic-
tion of magnetic transformer core characteristics,” IEEE
Trans. Magn., vol. 36, no. 1, pp. 313–329, Jan. 2000.
[24] D. Hendrycks and K. Gimpel, “Gaussian error linear
units (GELUs),” 2016, arXiv:1606.08415.
[25] B. Girod, “Psychovisual aspects of image processing:
What’s wrong with mean squared error?” in Proc. 7th
Workshop Multidimensional Signal Process., Sep. 1991,
p. P-2, doi: 10.1109/MDSP.1991.639240.
[26] C. Willmott, S. Ackleson, R. Davis, J. Feddema, K.
Klink, D. Legates, J. O’donnell, and C. Rowe, “Statistics
for the evaluation of model performance,” J. Geophys.
Res., vol. 90, no. C5, pp. 8995-9005, Sep. 1985, doi:
10.1029/jc090ic05p08995.
[27] D. P. Kingma and J. Ba, “Adam: A method for stochastic
optimization,”2014, arXiv:1412.6980.