Coupled Monte Carlo and Thermal-Hydraulics Modeling of a
Prismatic Gas Reactor Fuel Assembly Using Cardinal
A.J. Novak1, D. Andrs2, P. Shriwise1, D. Shaver1, P.K. Romano1,
E. Merzari3, and P. Keutelian4
1Argonne National Laboratory
{anovak, pshriwise, dshaver, promano}@anl.gov
2Idaho National Laboratory 3Pennsylvania State University 4Radiant Industries, Inc.
David.Andrs@inl.gov ebm5351@psu.edu paul@radiantnuclear.com
ABSTRACT
Cardinal is a MOOSE application that couples OpenMC Monte Carlo transport and NekRS com-
putational fluid dynamics to the MOOSE framework, closing the neutronics and thermal-fluid
gaps in conducting tightly-coupled, high-resolution multiscale and multiphysics analyses. By
leveraging MOOSE’s interfaces for wrapping external codes, Cardinal overcomes many chal-
lenges encountered in earlier multiphysics coupling works, such as file-based I/O or overly-
restrictive geometry mapping requirements. In this work, we leverage a subset of the multiphysics
interfaces in Cardinal to perform coupling of OpenMC neutron transport, MOOSE heat conduc-
tion, and THM thermal-fluids for steady-state modeling of a prismatic gas reactor fuel assembly.
1. INTRODUCTION
Many reactor phenomena are inherently multiscale and multiphysics, with scales spanning many orders
of magnitude. A challenge common to multiscale and multiphysics modeling of nuclear systems is the
historical development of state-of-the-art tools for individual physics domains using a wide variety of dis-
cretization schemes and software architectures. Depending on code design, it may not be a simple feat to
establish just the “mechanics” of code coupling — the data transfers, parallel communication, and iterative
solutions — between codes developed across varying groups. These challenges may limit the scope of mul-
tiphysics studies by restricting the pool of available modeling tools, limiting a project's future growth when coupling scripts are confined to specific geometries, and shifting engineering effort toward API development.
The Multiphysics Object-Oriented Simulation Environment (MOOSE) is a finite element framework devel-
oped at Idaho National Laboratory (INL) that allows applied math practitioners to translate physics mod-
els into high-quality, state-of-the-art engineering software [1]. A wide variety of MOOSE-based applica-
tions are under development across various institutions and include physics such as coarse-mesh Thermal-
Hydraulics (T/H) [2] and nuclear fuel performance [3]. Because all MOOSE applications share the same
code base, a common data transfer and field interpolation system can be used to couple MOOSE applica-
tions to one another through source terms, Boundary Conditions (BCs), and virtually any other mechanism
by which physics and scales can be coupled.
In 2018, a new mechanism for coupling external applications to the MOOSE “ecosystem” was introduced.
This ExternalProblem interface provides insertion points for the initialization, solution, and postpro-
cessing of non-MOOSE codes while exposing the time stepping, synchronization, and data transfer systems
in MOOSE. Plugging an external code into the MOOSE framework in this manner is referred to as “wrap-
ping.” The ExternalProblem interface allows external tools to couple in a physics- and dimension-
agnostic manner to any other MOOSE application, greatly expanding capabilities for multiphysics studies.
Cardinal seeks to bring Computational Fluid Dynamics (CFD) and Monte Carlo (MC) transport to the
MOOSE ecosystem in order to close neutronics and T/H gaps in conducting multiscale and multiphysics
analyses of nuclear systems. Cardinal leverages many years of effort at Argonne National Laboratory
(ANL) in the development of CFD and MC tools by wrapping NekRS [4], a spectral element CFD code
targeting Graphics Processing Unit (GPU) architectures; and OpenMC, a neutron and photon transport MC
code [5]. Cardinal seeks to eliminate several limitations common to earlier T/H and MC couplings:
• All data is communicated in memory, obviating the need for code-specific I/O programs and reducing the
potential for file-based communication bottlenecks.
• Mappings between the NekRS CFD mesh, the OpenMC Constructive Solid Geometry (CSG) cells, and
the MOOSE meshes are constructed automatically with no requirements on node/element/cell alignment.
• All data is transferred using existing MOOSE interpolation and transfer systems. A general design allows
NekRS and OpenMC to be coupled to any MOOSE application. For instance, the same OpenMC model
can provide neutronics feedback to NekRS CFD, Pronghorn porous media, and SAM 1-D flow loops.
Our initial applications of Cardinal have been to Pebble Bed Reactors (PBRs) and Sodium Fast Reactors
(SFRs). In 2021, we demonstrated fully-coupled NekRS, OpenMC, and BISON simulations of a salt-
cooled PBR with 127,000 pebbles [6] and of a 7-pin SFR bundle [7]. Additional ongoing activities include
coupling NekRS to the MOOSE tensor mechanics module for thermal striping applications and coupling
NekRS to SAM for multiscale loop analysis [8]. Under the support of the DOE-NE Nuclear Energy Ad-
vanced Modeling and Simulation (NEAMS) Center of Excellence (CoE) for Thermal Fluids Applications
in Nuclear Energy, a joint collaboration between ANL, INL, and Radiant, a reactor development company
pursuing a prismatic-block gas-cooled microreactor, is underway to develop high-resolution multiphysics
modeling capabilities for High Temperature Gas Reactors (HTGRs) using Cardinal. The relatively small
size of microreactors allows high-resolution computational tools to serve an important role in areas such as
optimization and closure development. The objective of this CoE project is to demonstrate Cardinal as a
tool for full-core modeling of prismatic gas reactors; this paper presents our progress towards this goal.
We demonstrate tight coupling of OpenMC, MOOSE heat conduction, and Thermal Hydraulics Module
(THM) 1-D Navier-Stokes [9] for steady-state modeling of a prismatic gas-cooled fuel assembly (a tutorial based on this paper, developed as part of the CoE objective to provide documentation for industry users, is available on the Cardinal website, https://cardinal.cels.anl.gov). This
work emphasizes the dimension-agnostic and “plug-and-play” nature of MOOSE simulations — without
any additional software development, in this paper we substitute a 1-D THM flow solution for the NekRS
CFD feedback used in our earlier applications [6,7]. In Section 2, we provide a high-level description of
how Cardinal wraps OpenMC and NekRS within MOOSE; Section 3 describes the target prismatic gas
reactor fuel assembly design; Section 4 discusses the computational models built for each single-physics
tool and the data transfers used to achieve coupling. Finally, Section 5 provides fully-coupled predictions
for the power, temperatures, pressures, and velocities. Section 6 then presents concluding remarks and
outlines remaining work to be conducted through the CoE.
2. CARDINAL
Cardinal leverages MOOSE’s ExternalProblem interface to wrap OpenMC and NekRS as MOOSE
applications. Cardinal is itself a MOOSE application, but essentially replaces MOOSE’s finite element
solution with API calls to external codes. Each wrapping has three main parts:
• Create a “mirror” of the external application’s mesh/geometry as a MooseMesh; this mesh is the receiv-
ing point for all field data to be sent in/out of the external application.
• Map from the external application to the MooseMesh; this mapping facilitates all data transfers in/out
of the external application by reading/writing to MooseVariables defined on the mesh. After data is
mapped to the mesh, the MOOSE Transfers handle transfers to/from coupled MOOSE applications.
• Run the external application by calling API functions (such as nekrs::runStep) at various insertion
points defined by the ExternalProblem interface.
To simplify model setup, the spatial mappings, tally creation, and transfer of coupling data are fully automated. The input files used to run a simulation consist of the usual standalone code input files plus thin “wrapper” input files that specify which data are to be transferred to MOOSE.
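To make the wrapping concrete, the control flow can be sketched as follows. This is Python-style pseudocode only; the actual interface is the C++ ExternalProblem class, and the object and method names below (build_mesh_mirror, sync_in, sync_out, run_step) are hypothetical stand-ins for the insertion points described above, not Cardinal's API.

```python
# Conceptual sketch of a Cardinal-style wrapping; all names on the
# external-code object are hypothetical stand-ins for the insertion
# points described in the text, not the actual C++ interface.

class WrappedExternalProblem:
    def __init__(self, external_code):
        self.code = external_code
        # 1) build a MooseMesh "mirror" of the external geometry
        self.mirror = self.code.build_mesh_mirror()
        # 2) map external cells/elements to mirror elements once, up front
        self.mapping = self.code.map_to(self.mirror)

    def sync_in(self):
        # read MooseVariables from the mirror and write into the external
        # code's source-term and/or boundary-condition arrays
        for field in self.code.incoming_fields():
            self.code.write_field(field, self.mirror.read(field), self.mapping)

    def external_solve(self):
        # 3) advance the external code through its API (e.g. one time step)
        self.code.run_step()

    def sync_out(self):
        # read the external solution back onto the mesh mirror, where
        # MOOSE Transfers pick it up for coupled MOOSE applications
        for field in self.code.outgoing_fields():
            self.mirror.write(field, self.code.read_field(field, self.mapping))
```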
2.1. NekRS Wrapping
Cardinal includes two modes for coupling NekRS to MOOSE – 1) Conjugate Heat Transfer (CHT) coupling
via temperature and heat flux BCs, and 2) volume coupling via volumetric heat sources, temperatures, and
densities. In our previous SFR work, we combined both modes, coupling NekRS to MOOSE via CHT and to OpenMC via volumetric temperatures and densities [7]. The mesh mirror is constructed
automatically by looping over NekRS’s mesh data structures and reconstructing the mesh as a first- or
second-order MooseMesh. The overall calculation workflow for each time step is then to 1) read data
from the mesh mirror and write into NekRS’s source term and/or BC arrays; 2) run NekRS for one time
step; and 3) read coupling data from NekRS’s arrays and write onto the mesh mirror. Interpolation is used
to transfer data between the mesh mirror and the high-order NekRS solution, with conservation ensured
through normalization. There are no requirements on node/element alignment between NekRS’s mesh
and that of the coupled MOOSE application, which allows boundary layers to be highly refined without
requiring comparable refinement in adjacent solids.
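Because interpolation between the high-order NekRS solution and the (at most second-order) mesh mirror does not by itself preserve integrated quantities, the transferred field is rescaled. A minimal sketch of this normalization idea, using simple per-element values and volumes rather than Cardinal's actual implementation, is:

```python
import numpy as np

def normalize_to_conserve(interp_values, elem_volumes, target_integral):
    """Rescale an interpolated field so that its volume integral matches
    the integral of the source field (e.g. total power or heat flux)."""
    current_integral = np.sum(interp_values * elem_volumes)
    if current_integral == 0.0:
        return interp_values
    return interp_values * (target_integral / current_integral)

# usage: rescale a heat source interpolated onto the mesh mirror so that
# the total power matches the value computed on the NekRS side
q_mirror = np.array([1.1e7, 0.9e7, 1.0e7])   # W/m^3, illustrative values
vols = np.array([2.0e-6, 2.0e-6, 2.0e-6])    # m^3, illustrative values
q_conserved = normalize_to_conserve(q_mirror, vols, target_integral=62.0)
```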
NekRS supports both Central Processing Unit (CPU) and GPU backends; when using a GPU backend,
Cardinal facilitates all necessary copies between CPU (where MOOSE runs) and GPU (where NekRS
runs). An MPI distributed mesh coupling of NekRS and MOOSE is available, which keeps the cost of data
transfers to less than 5% of runtime even for very large problems.
2.2. OpenMC Wrapping
Cardinal couples OpenMC to MOOSE through a kappa-fission tally (recoverable fission energy) and
cross section feedback from temperature and density. The fission power is tallied using either 1) cell tallies
or 2) libMesh unstructured mesh tallies. OpenMC tracks particles through a CSG space, or a collection of
cells formed as intersections of the half-spaces of common surfaces. Because a delta tracking implementation is currently under development, all results presented in this work use surface tracking, which requires the temperature and density feedback to be uniform within each cell.
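For reference, the tally and temperature-handling features described above are exposed through the OpenMC Python API; a minimal standalone sketch is shown below. The cell IDs are illustrative, and Cardinal itself drives OpenMC in memory rather than through Python scripts.

```python
import openmc

# recoverable fission energy tally over the coupled cells (IDs illustrative)
heat_source = openmc.Tally(name='heat_source')
heat_source.filters = [openmc.CellFilter([1, 2, 3])]
heat_source.scores = ['kappa-fission']
tallies = openmc.Tallies([heat_source])

# evaluate temperature-dependent cross sections by statistical interpolation
# between the temperatures available in the nuclear data library
settings = openmc.Settings()
settings.temperature = {'method': 'interpolation'}
```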
The CSG space causes the significance of the mesh mirror to differ slightly from that for the NekRS
wrapping. With the OpenMC wrapping, the mesh mirror is created off-line using mesh generation software
based on the transfer resolution desired by the modeler. In many cases, we simply use the same mesh of
the coupled MOOSE application, though this is not required. During initialization, Cardinal automatically
loops over all the elements in this mesh mirror and maps by centroid to an OpenMC cell. There are no
requirements on alignment of elements/cells or on preserving volumes — the OpenMC cells and mesh
mirror elements do not need to be conformal. Elements that do not map to an OpenMC cell simply do not
participate in the coupling (and vice versa for the cells); this feature is used in the present work to exclude
axial reflectors from multiphysics feedback. An illustrative example of the mapping for an SFR application
is shown in Fig. 1. The inset shows the centroid mapping in a region where cells do not align with elements.
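A minimal sketch of this centroid-based mapping, using a hypothetical find_cell(point) lookup in place of OpenMC's internal cell search, is:

```python
def build_cell_mapping(mesh_elements, find_cell):
    """Map each mesh-mirror element to an OpenMC (cell ID, instance) pair by
    its centroid; elements whose centroid does not land in an OpenMC cell are
    simply excluded from coupling (e.g. the axial reflectors in this work)."""
    mapping = {}
    for elem in mesh_elements:
        result = find_cell(elem.centroid)   # returns (cell_id, instance) or None
        if result is not None:
            mapping.setdefault(result, []).append(elem)
    return mapping
```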
The overall calculation workflow for each time step is then to 1) read temperature and density from the
mesh mirror, and for each OpenMC cell set the temperature and density from a volume average over the
corresponding elements; 2) run a k-eigenvalue calculation; and 3) read the tally from OpenMC’s arrays and
write onto the mesh mirror. This data transfer conserves power through normalization by element volumes.
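The feedback and tally transfers in this workflow reduce to volume averaging and volume-weighted normalization over the mapped elements. A compact sketch, reusing the hypothetical mapping dictionary from the previous sketch and simple per-element data, is:

```python
import numpy as np

def cell_average_temperature(elems):
    """Volume-average the mesh-mirror temperature over the elements mapped
    to a single OpenMC cell (one uniform value per cell ID + instance)."""
    vols = np.array([e.volume for e in elems])
    temps = np.array([e.temperature for e in elems])
    return np.sum(vols * temps) / np.sum(vols)

def normalized_heat_source(tally_per_cell, mapping, total_power):
    """Distribute the kappa-fission tally onto the mapped elements and rescale
    so the integrated heat source equals the known total power."""
    q = {}
    for cell, elems in mapping.items():
        cell_volume = sum(e.volume for e in elems)
        for e in elems:
            q[e] = tally_per_cell[cell] / cell_volume   # uniform within the cell
    norm = total_power / sum(val * e.volume for e, val in q.items())
    return {e: val * norm for e, val in q.items()}
```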
For TRIstructural ISOtropic (TRISO) applications, many T/H tools homogenize the TRISO particles into
the matrix using multiscale treatments to avoid explicitly meshing the TRISO particles [2]. Cardinal op-
tionally allows all the cells contained within a parent cell to be grouped together from the perspective of
tally and temperature feedback. In other words, a homogenized temperature from a T/H tool can be applied
to all cells contained within an OpenMC TRISO universe; this feature is used in this work.
Figure 1: Illustration of an OpenMC geometry and the mapping of cells to a user-supplied mesh.
3. PRISMATIC GAS REACTOR DESIGN
Fig. 2 shows the fuel assembly of interest; all specifications are taken from a 2016 point design developed as part of a DOE initiative to explore high-temperature test reactors [10]. Without loss of generality, these design specifications are considered sufficiently representative of the Radiant design while not containing any proprietary information. The assembly is a graphite prismatic hexagonal block with 108 helium
coolant channels, 210 fuel compacts, and 6 poison compacts in a triangular lattice.
Figure 2: Top-down view of a fuel assembly, colored by material.
The core contains 12 fuel assemblies and a number of graphite reflector assemblies in a hexagonal lattice,
enclosed within a steel reactor vessel. Upper and lower graphite reflectors reduce leakage. The present
analysis considers a single fuel assembly plus the reflectors above and below the core. Each fuel compact
contains TRISO particles dispersed in graphite with a packing fraction of 15%. Because explicit modeling
of the inter-assembly flow is outside the scope, the inter-assembly gaps are treated as solid graphite.
The TRISO particles are based on a fairly conventional design with a 435 µm diameter kernel of uranium oxycarbide enriched to 15.5 w/o 235U. The boron carbide poison compacts are defined with a 0.2 w/o 10B enrichment. The remaining TRISO particle dimensions and material properties are available elsewhere [10]. The 200 MWth core power is removed by helium flowing at 117.3 kg/s downwards through the core with an inlet temperature of 325°C. The outlet pressure is 7.1 MPa. These parameters are scaled to the single-assembly case by assuming average core conditions, or an assembly power of 16.67 MWth and an assembly mass flowrate of 9.775 kg/s. The mass flowrate in each flow channel is assumed uniform.
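As a quick check of the core-average scaling quoted above (all values from the point design [10]):

```python
# Core-average scaling of the point-design parameters to one assembly
core_power_MW = 200.0
core_mdot = 117.3            # kg/s
n_fuel_assemblies = 12
n_channels = 108

assembly_power_MW = core_power_MW / n_fuel_assemblies   # ~16.67 MW
assembly_mdot = core_mdot / n_fuel_assemblies           # ~9.775 kg/s
channel_mdot = assembly_mdot / n_channels               # ~0.0905 kg/s per channel

print(assembly_power_MW, assembly_mdot, channel_mdot)
```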
4. COMPUTATIONAL MODEL
This section describes the computational model of the prismatic gas reactor assembly. Mesh and physics
refinement studies are conducted for each code in a “standalone” setting for which coupling is approximated
through simple feedback terms. These single-physics convergence studies are taken as sufficient justification for convergence of the coupled problem because the homogenized temperature feedback in OpenMC, helium's transparency to neutrons, and THM's area-averaged equations are expected to induce solution gradients of similar magnitude in the fully-coupled case as in the standalone cases.
4.1. OpenMC Neutron Transport Model
The OpenMC model is shown in Fig. 2 (colored by material) and Fig. 3 (colored by cell ID or instance).
In the axial direction, the geometry is divided into $n_l$ layers of equal height. The outer surface of the
assembly is a periodic boundary, while the top and bottom surfaces are vacuum boundaries. The TRISO
particles are represented explicitly, with positions determined using Random Sequential Addition (RSA).
To accelerate the particle tracking, a Cartesian search lattice is superimposed in the fuel compacts, and the
same TRISO universe fills each fuel compact. Cell tallies are used to measure the recoverable fission energy
release. Temperature-dependent ENDF/B-VII.1 cross sections are evaluated using statistical linear-linear
interpolation between data set temperatures. $S(\alpha,\beta)$ data is applied for all graphite materials.
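A condensed example of how a TRISO-filled compact of this kind can be constructed with the OpenMC Python API is shown below. The kernel radius, enrichment, and packing fraction follow the values given in this paper, while the coating radii, densities, layer fills, and compact dimensions are placeholders standing in for the specifications of [10].

```python
import openmc
import openmc.model

# materials (compositions abbreviated; UCO stoichiometry and densities illustrative)
kernel = openmc.Material(name='UCO kernel')
kernel.add_element('U', 1.0, enrichment=15.5)
kernel.add_element('O', 1.5)
kernel.add_element('C', 0.5)
kernel.set_density('g/cm3', 10.4)          # placeholder density

graphite = openmc.Material(name='matrix graphite')
graphite.add_element('C', 1.0)
graphite.set_density('g/cm3', 1.7)         # placeholder density
graphite.add_s_alpha_beta('c_Graphite')    # S(alpha,beta) thermal scattering data

# TRISO particle: kernel radius from the 435 um diameter; coating radii and
# fills are placeholders standing in for the buffer/PyC/SiC layers of [10]
radii = [0.02175, 0.0250, 0.0290, 0.0325, 0.0365]                # cm
layer_mats = [kernel, graphite, graphite, graphite, graphite]    # placeholder fills
spheres = [openmc.Sphere(r=r) for r in radii]
cells = [openmc.Cell(fill=layer_mats[0], region=-spheres[0])]
for i in range(1, len(radii)):
    cells.append(openmc.Cell(fill=layer_mats[i], region=+spheres[i - 1] & -spheres[i]))
triso_univ = openmc.Universe(cells=cells)

# randomly pack particles into a cylindrical compact at 15% packing fraction
compact_region = (-openmc.ZCylinder(r=0.6225) &
                  +openmc.ZPlane(z0=0.0) & -openmc.ZPlane(z0=2.0))
centers = openmc.model.pack_spheres(radius=radii[-1], region=compact_region, pf=0.15)
trisos = [openmc.model.TRISO(radii[-1], triso_univ, c) for c in centers]

# Cartesian search lattice superimposed on the compact to accelerate tracking
compact_cell = openmc.Cell(region=compact_region)
compact_cell.fill = openmc.model.create_triso_lattice(
    trisos, lower_left=(-0.6225, -0.6225, 0.0), pitch=(0.125, 0.125, 0.125),
    shape=(10, 10, 16), background=graphite)
```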
OpenMC uses “distributed cells” to repeat a cell multiple times throughout the domain. Each new occur-
rence of a cell is referred to as an “instance.” Cardinal applies uniform temperature and density feedback
to OpenMC for each unique cell ID + instance combination. The distributed cell feature is used here to
construct the fuel assembly from four universes: 1) a fuel pin; 2) a coolant pin; 3) a poison pin; and 4) a
solid graphite region to represent the “borders.” OpenMC receives on each axial plane a total of 721 tem-
peratures and 108 densities (one per coolant channel). With references to the colors shown in Fig. 3, the
721 cell temperatures correspond to:
$$
721 = \underbrace{210 \times \left(1\ \text{TRISO compact [rainbow]} + 1\ \text{matrix region [purple]}\right)}_{\text{fuel compacts}}
    + \underbrace{108 \times \left(1\ \text{coolant region [various]} + 1\ \text{matrix region [various]}\right)}_{\text{coolant channels}}
    + \underbrace{6 \times \left(1\ \text{poison region [brown]} + 1\ \text{matrix region [blue]}\right)}_{\text{poison compacts}}
    + \underbrace{73 \times 1\ \text{matrix region [mustard]}}_{\text{graphite fillers}}
$$
The number of axial layers is selected by performing a cell refinement study considering 25, 50, 75, and
100 axial layers. For each choice of $n_l$, the Shannon entropy and generation $k$ are monitored to determine
the number of inactive cycles. OpenMC’s tally trigger system is used to automatically terminate the active
cycles once reaching less than 1.5% relative tally uncertainty. The number of layers is chosen by requiring
less than 3% difference in the power distribution relative to the next-coarser mesh. Based on this study, the
OpenMC model uses 10,000 particles/cycle with 500 inactive cycles, 2000 active cycles, and $n_l = 50$.
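The run settings and tally trigger described above can be expressed through the OpenMC Python API as follows; the entropy-mesh bounds and the minimum batch count are illustrative, while the particle count, inactive cycles, and trigger threshold match the values above.

```python
import openmc

settings = openmc.Settings()
settings.run_mode = 'eigenvalue'
settings.particles = 10000
settings.inactive = 500
settings.batches = 600                 # minimum total batches; triggers may extend the run

# Shannon entropy mesh for monitoring fission-source convergence (bounds illustrative)
entropy_mesh = openmc.RegularMesh()
entropy_mesh.lower_left = (-20.0, -20.0, 0.0)
entropy_mesh.upper_right = (20.0, 20.0, 400.0)
entropy_mesh.dimension = (8, 8, 50)
settings.entropy_mesh = entropy_mesh

# terminate the active cycles automatically at < 1.5% relative tally uncertainty
heat_source = openmc.Tally(name='heat_source')
heat_source.scores = ['kappa-fission']
heat_source.triggers = [openmc.Trigger('rel_err', 0.015)]
settings.trigger_active = True
settings.trigger_max_batches = 2500    # 500 inactive + 2000 active cycles
```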
Figure 3: OpenMC CSG geometry, colored by cell ID or instance.
4.2. MOOSE Heat Conduction Model
The MOOSE heat conduction module is used to solve for heat transfer within the solid phase,
$$-\nabla \cdot \left(k_s \nabla T_s\right) - \dot{q}_s = 0, \qquad (1)$$
where $k_s$ is the thermal conductivity, $T_s$ is the temperature, and $\dot{q}_s$ is a volumetric heat source. On the
fluid-solid interface, the temperature is set to the fluid temperature computed by THM. All other surfaces
are assumed insulated. The heat source is set to the normalized fission tally computed by OpenMC. The
TRISO particles are homogenized into the compacts by volume averaging material properties. This ho-
mogenization of the buffer layer thermal resistance is well-known to under-predict kernel temperatures [2];
future work will leverage Pronghorn’s multiscale TRISO model [11].
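For illustration, one simple reading of this volume averaging is a rule-of-mixtures blend of particle and matrix properties at the 15% packing fraction; a minimal sketch with placeholder conductivities (not the actual property set of [10]) is:

```python
def homogenized_property(packing_fraction, k_particle, k_matrix):
    """Volume-average a material property over the TRISO particles and the
    surrounding graphite matrix (simple rule-of-mixtures homogenization)."""
    return packing_fraction * k_particle + (1.0 - packing_fraction) * k_matrix

# illustrative placeholder conductivities in W/(m*K)
k_compact = homogenized_property(0.15, k_particle=3.5, k_matrix=30.0)
```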
Figure 4: Top-down view of the converged MOOSE heat conduction mesh.
The solid mesh is shown in Fig. 4. This mesh is created with the reactor MOOSE module [12] with
easy-to-use mesh generators that programmatically construct reactor core meshes as building blocks of
bundle and pincell meshes. Mesh convergence is ensured by uniformly refining the mesh and requiring less
than 1% difference in radially-averaged axial temperature distributions between successive meshes.
4.3. THM Navier-Stokes Model
THM is a MOOSE-based T/H code applicable to single-phase systems-level analysis. THM solves for
conservation of mass, momentum, and energy with 1-D area averages of the Navier-Stokes equations,
$$\frac{\partial}{\partial t}\left(A\rho_f\right) + \frac{\partial}{\partial x}\left(A\rho_f u\right) = 0, \qquad (2)$$

$$\frac{\partial}{\partial t}\left(A\rho_f u\right) + \frac{\partial}{\partial x}\left(A\rho_f u^2 + AP\right) = \tilde{P}\frac{\partial A}{\partial x} - \frac{f}{2D_h}\rho_f u|u|A, \qquad (3)$$

$$\frac{\partial}{\partial t}\left(A\rho_f E_f\right) + \frac{\partial}{\partial x}\left[Au\left(\rho_f E_f + P\right)\right] = H_w a_w \left(T_{wall} - T_{bulk}\right)A, \qquad (4)$$

where $x$ is the coordinate along the flow length, $A$ is the cross-sectional area, $\rho_f$ is the density, $u$ is the velocity, $P$ is the pressure, $\tilde{P}$ is the average pressure on the curve boundary, $f$ is the friction factor, $D_h$ is the hydraulic diameter, $E_f$ is the fluid total energy, $H_w$ is the wall heat transfer coefficient, $a_w$ is the heat transfer area density, $T_{wall}$ is the wall temperature, and $T_{bulk}$ is the area-average bulk fluid temperature. The Churchill correlation is used for $f$ and the Dittus-Boelter correlation is used for $H_w$ [9].
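For reference, the Dittus-Boelter form used for $H_w$ is the familiar $Nu = 0.023\,Re^{0.8}\,Pr^{0.4}$ with $H_w = Nu\,k_f/D_h$. A short sketch follows; the channel dimensions and helium-like property values are placeholders, not the THM property set.

```python
def dittus_boelter_htc(mdot, area, d_h, mu, cp, k):
    """Wall heat transfer coefficient from the Dittus-Boelter correlation,
    Nu = 0.023 Re^0.8 Pr^0.4 (fluid being heated), with H_w = Nu * k / D_h."""
    G = mdot / area                    # mass flux, kg/(m^2 s)
    re = G * d_h / mu                  # Reynolds number
    pr = cp * mu / k                   # Prandtl number
    nu = 0.023 * re**0.8 * pr**0.4     # Nusselt number
    return nu * k / d_h                # W/(m^2 K)

# illustrative helium-like properties at ~7 MPa; ~16 mm channel (placeholders)
h_wall = dittus_boelter_htc(mdot=0.0905, area=2.0e-4, d_h=0.016,
                            mu=4.0e-5, cp=5190.0, k=0.33)
```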
The heat flux imposed in the $N_{THM}$ elements is an area average of the heat flux from MOOSE in $N_{THM}$ layers along the fluid-solid interface. For the reverse transfer, the wall temperature sent to MOOSE is set to a uniform value along the fluid-solid interface according to a nearest-node mapping to the $N_{THM}$ elements. Mesh convergence is based on the same uniform refinement strategy described in Section 4.2.
4.4. Multiphysics Coupling
Fig. 5 summarizes the multiphysics data transfers; the inset describes the 1-D/3-D data transfers with THM.
While the coolant is quite transparent to neutrons, we include fluid feedback for completeness.
Figure 5: Summary of data transfers between OpenMC, MOOSE, and THM.
Note that all data transfers are achieved using existing MOOSE MultiApp transfers and are not specific
to Cardinal — this enables a seamless interchange of any of OpenMC, MOOSE heat conduction, and THM
for other MOOSE applications. OpenMC, MOOSE, and THM are coupled together using Picard iteration.
For each iteration, each application fully converges its physics (OpenMC and MOOSE heat conduction
solve steady equations, while THM is time evolved to steady state). The coupled solution is considered
converged once $k$ is within the uncertainty band of the previous iteration and there is less than 2 K change
in maximum fluid, fuel, and matrix temperatures. A constant relaxation factor of 0.5 applied to the fission
power was required to achieve convergence.
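Conceptually, the coupling reduces to a fixed-point (Picard) loop with under-relaxation of the fission power. A sketch of this control flow, with hypothetical callables standing in for the fully-converged OpenMC and MOOSE/THM solves, is:

```python
import numpy as np

def coupled_picard(solve_openmc, solve_heat_conduction, alpha=0.5,
                   dT_tol=2.0, max_iters=20):
    """Picard iteration over OpenMC neutronics and MOOSE/THM thermal-fluids
    with a constant relaxation factor applied to the fission power. The two
    solver arguments are hypothetical callables for the single-physics solves
    (solve_openmc must accept temps=None on the first iteration)."""
    power, temps, k_prev, k_std_prev = None, None, None, None
    for _ in range(max_iters):
        k, k_std, new_power = solve_openmc(temps)          # k-eigenvalue + tally
        power = new_power if power is None else \
            (1.0 - alpha) * power + alpha * new_power      # under-relax the power
        new_temps = solve_heat_conduction(power)           # MOOSE + THM to steady state
        # converged when k lies within the previous uncertainty band and the
        # maximum temperature change is below 2 K
        converged = (temps is not None and k_prev is not None
                     and abs(k - k_prev) < k_std_prev
                     and np.max(np.abs(new_temps - temps)) < dT_tol)
        temps, k_prev, k_std_prev = new_temps, k, k_std
        if converged:
            break
    return power, temps, k
```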
5. RESULTS
Coupled convergence between OpenMC, MOOSE heat conduction, and THM is achieved in 3 Picard iter-
ations. Fig. 6 shows the OpenMC fission power distribution. The inset shows the power distribution on
the x-y plane where the maximum power occurs. Slight azimuthal asymmetries exist due to the non-zero
tally uncertainty. Neutrons reflecting from the axial reflectors cause local power peaking at the ends of the
assembly, while the negative fuel temperature coefficient causes the power distribution to shift towards the
reactor inlet where temperatures are lower. The six corner poison compacts induce local power depression;
enhanced moderation and distance from the poisons result in high center-assembly powers.
Figure 6: OpenMC fission power; note the use of a separate color scale for the inset.
Fig. 7 shows the MOOSE solid temperature (left) and the cell temperature imposed in OpenMC (right).
The bottom row shows the temperature on an x-y plane. The color scales differ because the volume
(top) and slice (bottom) plots are scaled to the max/min visible temperature. The insulated BCs, combined
with a lower “density” of coolant channels near the lateral faces, result in higher compact temperatures
near the assembly peripheries, except in the vicinity of the poison pins. Each OpenMC cell is set to the
volume-average temperature from the mesh mirror elements whose centroids mapped to each cell; a similar
procedure is used to set cell temperatures and densities for the fluid cells.
Fig. 8 shows the MOOSE solid temperature on several x-y planes with the THM fluid temperature
shown as tubes. An inset shows the fluid temperature on the outlet plane. The absence of compacts in
the center region results in lower fluid temperatures in this region, while the highest fluid temperatures are
observed for channels surrounded by 6 compacts that are sufficiently close to the periphery to be affected
by the lateral insulated BCs. Finally, Fig. 9 shows the radially-averaged (a) fission distribution and fluid,
compact, and graphite temperatures; and (b) velocity and pressure as a function of axial position. The
negative temperature feedback results in an inlet-peaked power distribution. The fuel temperature peaks
near the mid-plane due to the combined effects of the high power density and the fluid temperature, which
increases monotonically from the inlet. The pressure gradient is nearly constant with axial position. Due
to mass conservation, the heating of the fluid causes the velocity to increase with distance from the inlet.
Figure 7: MOOSE solid temperature (left) and volume-average temperature imposed in OpenMC
(right). Insets show a plane at z = 2.15 m from the inlet. Note the use of separate color scales.
Figure 8: Fluid temperature predicted by THM (tubes and inset) and solid temperature predicted
by MOOSE (five slices). Note the use of three separate color scales.
The total runtime is 464 core·hours, broken down as 93.4% solving OpenMC, 1.6% solving MOOSE +
THM, 1.6% transferring data, and 3.4% on other activities such as computing user objects and writing
output files. We expect the implementation of delta tracking in OpenMC, together with more aggressive physics-based relaxation schemes that decrease the total number of Picard iterations, to significantly reduce this cost.
6. CONCLUSIONS
This paper described Cardinal, an application that couples OpenMC and NekRS to MOOSE. The data
transfers and coupling algorithms use in-memory communication, distributed parallel meshes, and contin-
uous field transfers without requirements on conformal geometries/meshes/cells. As part of a CoE project
between ANL, INL, and Radiant, this paper presented a multiphysics coupling of OpenMC, THM, and
MOOSE heat conduction for a prismatic gas reactor assembly using Cardinal. By leveraging existing
MOOSE data transfer and field interpolation systems, OpenMC was coupled — without any additional
source code developments — to mixed-dimension 1-D/3-D fluid and solid feedback. Coupled physics
predictions were then provided for the distribution of fission reaction rates, temperatures, pressure, and
velocities. Future work will expand this analysis from a single fuel assembly to a full core, with select
“high-importance” flow channels modeled using NekRS CFD, while also exploring the multiscale TRISO
models to better capture the heterogeneous solid temperatures for neutronics feedback.
(a) Temperatures and power (b) Pressure and velocity
Figure 9: Radially-averaged temperatures, power, pressure, and velocity as a function of position.
ACKNOWLEDGEMENTS
Argonne National Laboratory’s work was supported by the U.S. Department of Energy, Office of Nuclear
Energy, NEAMS program, under contract DE-AC02-06CH11357.
REFERENCES
[1] C. J. Permann et al. “MOOSE: Enabling massively parallel multiphysics simulation.” SoftwareX,
volume 11, p. 100430 (2020).
[2] A. Novak et al. “Pronghorn: A Multidimensional Coarse-Mesh Application for Advanced Reactor
Thermal Hydraulics.” Nuclear Technology, volume 207, pp. 1015–1046 (2021).
[3] J. Hales et al. “BISON Theory Manual.” INL/EXT-13-29930 Rev. 3 (2016).
[4] P. Fischer et al. “NekRS, a GPU-Accelerated Spectral Element Navier-Stokes Solver.” (2021).
ArXiv:2104.05829.
[5] P. Romano et al. “OpenMC: A State-of-the-Art Monte Carlo Code for Research and Development.”
Annals of Nuclear Energy, volume 82, pp. 90–97 (2015).
[6] P. Fischer et al. “Highly Optimized Full-Core Reactor Simulations on Summit.” (2021).
ArXiv:2110.01716.
[7] A. Novak et al. “Coupled Monte Carlo Transport and Conjugate Heat Transfer for Wire-Wrapped
Bundles Within the MOOSE Framework.” In Proceedings of NURETH (2022).
[8] A. Huxford et al. “Development of Innovative Overlapping-Domain Coupling Between SAM and
nekRS.” In Proceedings of NURETH (2022).
[9] R. A. Berry et al. “RELAP-7 Theory Manual.” INL/EXT-14-31366 (2014).
[10] “High-Temperature Gas-Cooled Test Reactor Point Design.” INL/EXT-16-38296 (2016).
[11] A. Novak et al. “Multiscale Thermal-Hydraulic Modeling of the Pebble Bed Fluoride-Salt-Cooled
High-Temperature Reactor.” Annals of Nuclear Energy, volume 154, p. 107968 (2021).
[12] E. Shemon et al. “MOOSE Framework Enhancements for Meshing Reactor Geometries.” In Proceedings of PHYSOR (2022).