The 19th International Topical Meeting on Nuclear Reactor Thermal Hydraulics (NURETH-19) Log nr.: 35310
Brussels, Belgium, March 6 - 11, 2022
COUPLED MONTE CARLO TRANSPORT AND CONJUGATE
HEAT TRANSFER FOR WIRE-WRAPPED BUNDLES WITHIN THE
MOOSE FRAMEWORK
A.J. Novak, P. Shriwise, R. Rahaman, P.K. Romano
Argonne National Laboratory
anovak@anl.gov; pshriwise@anl.gov; rahaman@anl.gov; promano@anl.gov
E. Merzari
Pennsylvania State University
ebm5351@psu.edu

D. Gaston
Idaho National Laboratory
Derek.Gaston@inl.gov
ABSTRACT
This paper introduces a new application, Cardinal, that couples OpenMC Monte Carlo transport and NekRS com-
putational fluid dynamics to the MOOSE framework, closing the neutronics and thermal-fluid gaps in conducting
tightly-coupled, high-resolution multiscale and multiphysics analyses of nuclear systems. This coupling specifically
aims to address and overcome challenges encountered in earlier multiphysics coupling works such as file-based com-
munication, overly-restrictive mesh mappings, or rigid limitations to pin-type fuels. In addition, coupling within the
MOOSE framework enables a broad range of applications by leveraging the efforts and progress of a diverse user
community. This work describes the data transfers and solution algorithms in Cardinal and demonstrates the analysis
framework via a tightly coupled simulation of a 7-pin fast reactor bundle.
KEYWORDS
Multiphysics, Monte Carlo, CFD, Cardinal
1. INTRODUCTION
Many reactor phenomena are inherently multiscale and multiphysics. The time and length scales charac-
terizing fluid flow and heat transfer in reactor systems typically span many orders of magnitude. At the
lower length scale, fine-scale effects include turbulent energy dissipation, turbulent boundary layers, and
heat transfer through irradiated nuclear fuel. At the higher length scales, large-scale effects are typically
described in terms of core pressure drop and bulk heat balances with characteristic lengths on the order of
meters. Physics phenomena on this broad range in scales have significant implications for reactor design and
licensing, and multiscale techniques are often necessary to reduce the high computational cost of resolving
all scales in full-core simulations.
An excellent example of a strongly-coupled multiphysics phenomenon can be found in many fast spectrum
reactors, where interactions between solid mechanics, neutronics, fuel performance, and Thermal-Hydraulic
(T/H) physics produce a complicated reactivity feedback effect known as “core radial expansion.” The
combination of thermal expansion, irradiation swelling, and irradiation creep results in bowing that has
significant implications for reactor control [1, 2] and refueling operations [3]. The tightly coupled nature of
the core physics often requires multiphysics analyses to accurately predict the core bowing phenomenon.
Historically, a challenge common to multiscale and multiphysics modeling of nuclear systems is that state-of-the-art tools for neutron transport, T/H, and solid mechanics have been developed with a wide variety of spatial discretization schemes, software architectures, and solution data structures. Depending on code design, it may not be a simple feat to establish even the "mechanics" of scale and physics coupling – the data transfers, the parallel communication, and the iterative solution.
The Multiphysics Object-Oriented Simulation Environment (MOOSE) is a finite element framework devel-
oped at Idaho National Laboratory (INL) that allows applied math practitioners to translate physics models
into high-quality, state-of-the-art engineering software [4]. Because all MOOSE applications share the same
code base, a common data transfer and field interpolation system can be used to couple MOOSE applica-
tions to one another through source terms, Boundary Conditions (BCs), and virtually any other mechanism
by which physics and scales might be coupled. This work describes the development of a new MOOSE
application that seeks to bring high-resolution Computational Fluid Dynamics (CFD) and particle transport
capabilities to the MOOSE “ecosystem” in order to close neutronics and T/H gaps in conducting tightly-
coupled, high-resolution multiscale and multiphysics analyses of nuclear systems.
This application, named Cardinal, aims to leverage many years of effort at Argonne National Laboratory
(ANL) in the development of CFD and Monte Carlo particle transport tools by “wrapping” externally-
developed codes as MOOSE applications [5]. Cardinal is a wrapping of two codes – NekRS [6], a GPU-
oriented version of the spectral element CFD code Nek5000; and OpenMC [7], a neutron and photon Monte
Carlo transport code. By developing Cardinal, we seek to eliminate many limitations common to earlier T/H
and Monte Carlo couplings:
• All data is communicated in-memory, obviating the need for code-specific I/O programs [8] and
reducing potential file-based communication bottlenecks [9].
• Mappings between the NekRS CFD mesh, the OpenMC Constructive Solid Geometry (CSG) cells,
and MOOSE meshes are constructed automatically with no requirements on node/element/cell align-
ment. This eliminates the need for rigid one-to-one mappings [10].
• By using MOOSE’s field interpolation and data transfer system, Cardinal represents much more than
just a coupling of NekRS and OpenMC. A purposefully general design allows NekRS and OpenMC
to be coupled to any other MOOSE application. For instance, the same OpenMC model can pro-
vide neutronics feedback to NekRS turbulence-resolved CFD, Pronghorn subchannel/porous media
models, and SAM 1-D flow loop models.
Our initial application of Cardinal has been to pebble bed systems; in early 2021, we demonstrated a fully-
coupled NekRS, OpenMC, and BISON model of a salt-cooled pebble bed with 127,000 pebbles [11]. Build-
ing off the success of this work, Cardinal has recently expanded into fast reactor applications. This paper
is the first from a multi-year project that aims to improve the understanding of core bowing reactivity feed-
back effects by simulating the tightly-coupled neutronics, T/H, and solid mechanics physics with NekRS,
OpenMC, and the MOOSE tensor mechanics module. The objectives of this paper are to 1) introduce the
coupling methodologies used in Cardinal and 2) demonstrate a tight coupling of NekRS, OpenMC, and
the MOOSE heat conduction module for a smaller-scale version of a driver fuel assembly in the Advanced
Burner Reactor (ABR) [12]. The remainder of this paper is organized as follows. Section 2 describes the
data transfers and coupling algorithms in Cardinal; Section 3 discusses the various single-physics models
used in the present analysis; and Section 4 presents a fully-coupled multiphysics simulation of a 7-pin ABR
fuel bundle. Section 5 then provides concluding remarks.
2. CARDINAL MULTIPHYSICS INTERFACES
In 2018, the MOOSE framework introduced a new mechanism for coupling to external applications via the
ExternalProblem interface. Up until this point, non-MOOSE applications interacted with the framework
through custom Transfer classes that directly mapped from an external application’s solution to a coupled
MOOSE application; our first coupling of OpenMC and Nek5000 with MOOSE was based on this approach
[13]. It was the code developer’s responsibility to implement all details of the data transfer from the external
application to MOOSE. To avoid implementing OpenMC- and Nek5000-specific nearest-point lookups and
mesh interpolations, we based this initial coupling on functional expansion tallies so that all data transfers
were agnostic of the spatial discretization (but still particular to cylindrical fuel rod geometries).
The ExternalProblem class replaces code-specific data transfers with the same transfer systems used by native MOOSE applications; it essentially replaces the libMesh and PETSc solves with an abstract interface to an external application:
void ExternalProblem::solve()
{
  // Write coupling data from the mesh mirror into the external application
  syncSolutions(Direction::TO_EXTERNAL_APP);
  // Run the external application (e.g., one time step or one eigenvalue solve)
  externalSolve();
  // Read the external solution back onto the mesh mirror
  syncSolutions(Direction::FROM_EXTERNAL_APP);
}
In connection with this interface, there are three main steps to wrap an application as an ExternalProblem (a minimal sketch of such a wrapper follows this list):
1. Create a “mirror” of the external application’s mesh; this involves looping over all the elements that
will be coupled to MOOSE and constructing the same data in the MooseMesh format used by native
MOOSE applications. This mesh mirror is the receiving point for all field data to be sent in/out of
the external application, and essentially governs the “resolution” of the data transfers. For external
applications that may not be based on a mesh, such as OpenMC, the mesh mirror is instead constructed
off-line using mesh generation software based on the field transfer resolution desired by the modeler.
2. Establish a mapping from the external application’s geometry to the MooseMesh. This mapping is
used to 1) read an external solution and write it into a MooseVariable defined on the mesh mirror
and 2) read a MooseVariable defined on the mesh mirror and write it into the source terms/BCs in
the external application. This mapping is used to facilitate all subsequent data transfers.
3. Run the external application. The externalSolve method calls functions such as openmc::openmc_run or nekrs::runStep to run a k-eigenvalue calculation or a single time step of the Navier-Stokes equations, respectively.
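To make these steps concrete, the following is a minimal sketch of what such a wrapper might look like; aside from the ExternalProblem base class and the externalSolve() and syncSolutions() methods shown above, the class name, the myCode:: calls, and the omitted registration and parameter boilerplate are hypothetical placeholders rather than Cardinal's actual implementation.

#include "ExternalProblem.h"

// Hypothetical wrapper illustrating the three wrapping steps; the myCode::*
// calls are placeholders for an external application's API.
class MyCodeProblem : public ExternalProblem
{
public:
  MyCodeProblem(const InputParameters & params) : ExternalProblem(params)
  {
    // Step 1: build (or load) the mesh mirror that will receive coupling data.
    // Step 2: establish the mapping from the external geometry to the MooseMesh,
    //         e.g. by matching element centroids to external cells/elements.
  }

  void externalSolve() override
  {
    // Step 3: advance the external application (one time step, one eigenvalue
    //         solve, etc.).
    myCode::run(); // placeholder
  }

  void syncSolutions(Direction direction) override
  {
    if (direction == Direction::TO_EXTERNAL_APP)
    {
      // Read MooseVariables on the mesh mirror and write them into the external
      // code's source terms and boundary conditions.
    }
    else
    {
      // Read the external solution and write it into MooseVariables on the mesh
      // mirror, where MOOSE Transfers can access it.
    }
  }
};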
The primary difference from the pre-2018 implementation is that 1) all data transfers to/from the external
application go through an intermediate mesh mirror and 2) any existing Transfer can then be used to
send the mirrored data to a coupled MOOSE application. That is, wrapped applications can automatically
leverage all available MOOSE data transfers, such as nearest node transfers, interpolations, projections, and
even general features such as transfers between solves on different dimensions (such as between 3D and 1D
domains) – all agnostic of the fact that the actual physics solve is performed with an external application.
For all intents and purposes, Cardinal’s use of MOOSE transfers allows NekRS and OpenMC to behave as
native MOOSE applications. The NekRS and OpenMC solutions can be used for any purpose that a native
MOOSE solution would be used for – such as providing physics feedback to another MOOSE application,
applying an initial condition from a restart file, or projecting closure terms from one application to another.
To simplify model setup, all of the wrapping steps – constructing mesh mirrors, establishing mappings, and
exchanging data – are performed automatically by Cardinal. The input files required to run a simulation
consist of the usual standalone code input files plus a thin “wrapper” input file that simply creates the
appropriate ExternalProblem class (NekRSProblem or OpenMCCellAverageProblem).
It is important to stress the generality of the calculations enabled by Cardinal. Cardinal allows NekRS to be
coupled via Conjugate Heat Transfer (CHT) to any MOOSE application that can compute a heat flux (such
as Pronghorn and SAM) and via temperatures and densities to any MOOSE application that can compute
a heat source (such as Griffin and Cardinal’s OpenMC wrapping). Similarly, Cardinal allows OpenMC to
be coupled via a heat source to any MOOSE application that can compute temperatures and densities (such
as Pronghorn, SAM, and Cardinal’s NekRS wrapping). Sections 2.1 and 2.2 next describe the NekRS and
OpenMC wrappings in Cardinal in greater detail, with particular emphasis on this generality.
2.1. NekRS Wrapping
Cardinal includes two modes for coupling NekRS to MOOSE – 1) boundary CHT coupling via temperature
and heat flux BCs, and 2) volume coupling via volumetric heat sources, temperatures, and densities. In this
work, we combine both modes together, such that NekRS communicates via CHT with the MOOSE heat
conduction module, but via volumetric material properties, temperatures, and fission power with OpenMC.
Fig. 1 depicts the NekRS mesh and the various mesh mirrors constructed by Cardinal; the form of the mesh
mirror depends on whether NekRS is coupled to MOOSE via boundaries, volumes, or both. NekRS solves
the Navier-Stokes equations on the mesh indicated as “NekRS mesh;” note that the lines shown on this mesh
correspond to the Gauss-Lobatto-Legendre (GLL) quadrature points, and not the edges of elements.
Figure 1. Illustration of NekRS CFD mesh and the mesh mirrors used to facilitate data transfers.
For CHT-only coupling, the mesh mirror contains the boundaries to be coupled to MOOSE (for this example,
the pincell outer surfaces and the duct inner surface). Otherwise, for volume and boundary/volume coupling,
the mesh mirror contains the entire NekRS volume mesh. No solve occurs on the mesh mirrors – the mesh
mirrors are only used to receive field data for coupling. Both first and second order mesh mirrors are
available; the insets in Fig. 1 show the nodes created in the mesh mirror when using either first or second order. After constructing the mesh mirror, the overall calculation workflow for each time step is as follows:
1. Read coupling data from the mesh mirror and write it into NekRS's source term and/or BC arrays. Interpolation from the mesh mirror (either a first or second order version of NekRS's mesh) to the GLL points (which typically represent a polynomial basis of order five or higher) is performed using Vandermonde matrices. For CHT coupling, a boundary heat flux is applied to NekRS's energy equation. For volume coupling, a heat source is applied as a source term to NekRS's energy equation. In both cases, the transfer conserves power through a normalization (sketched after this list).
2. Run NekRS for one time step.
3. Read coupling data from NekRS’s internal arrays and write onto the mesh mirror. Interpolation is
performed using Vandermonde matrices. For CHT coupling, a boundary temperature is written to the
mesh mirror. For volume coupling, both temperature and density are written to the mesh mirror.
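As a sketch of the normalization mentioned in step 1 (the symbols here are introduced for illustration and are not necessarily Cardinal's exact implementation), the heat flux interpolated onto the GLL points can be rescaled so that its integral over the coupled boundary Γ matches the power received from the MOOSE application,

\[
q''_{\mathrm{nek}}(\mathbf{x}) = q''_{\mathrm{gll}}(\mathbf{x}) \, \frac{\int_{\Gamma} q''_{\mathrm{mirror}} \, d\Gamma}{\int_{\Gamma} q''_{\mathrm{gll}} \, d\Gamma} ,
\]

where q''_{mirror} is the flux received on the mesh mirror and q''_{gll} is its Vandermonde interpolation onto the GLL points; an analogous volume normalization applies to the heat source in volume coupling.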
The MOOSE transfer system is then used to transfer the fields on the mesh mirror to a coupled MOOSE
application, where the NekRS solution is used to apply BCs, set source terms, and/or update material prop-
erties (such as cross sections). NekRS supports both CPU and GPU backends; when using a GPU backend,
Cardinal facilitates all necessary copies between CPU (where MOOSE runs) and GPU (where NekRS runs).
Note that Cardinal supports a distributed mesh coupling of NekRS and MOOSE, where both the NekRS and
MOOSE meshes are distributed among MPI processes. For very large problems, this feature allows the data
transfers between applications to remain below about 5% of the total runtime.
With MOOSE’s nearest node transfer, there are no requirements on node/element alignment between NekRS’s
mesh mirror and the coupled MOOSE application’s mesh, which allows fluid boundary layers to be highly
refined without also incurring transition layers in adjacent solid regions. Because NekRS is often run in
non-dimensional form, Cardinal also handles conversions between non-dimensional and dimensional scales.
Finally, to monitor the CFD solution, Cardinal contains many postprocessors to evaluate maximums, mini-
mums, and averages of the NekRS solution. These postprocessors have recently been used to couple NekRS
to 1-D SAM flow networks [14].
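As an illustration of the dimensional conversions mentioned above (the reference scales are written generically here and are simply whatever the NekRS model uses, not a Cardinal-specific convention), a non-dimensional NekRS solution is re-dimensionalized before being written to the mesh mirror as

\[
T = T_{\mathrm{ref}} + T^{*} \, \Delta T_{\mathrm{ref}} , \qquad
u = U_{\mathrm{ref}} \, u^{*} , \qquad
x = L_{\mathrm{ref}} \, x^{*} ,
\]

where starred quantities are non-dimensional and T_ref, ΔT_ref, U_ref, and L_ref are the reference temperature, temperature scale, velocity, and length of the non-dimensional formulation.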
2.2. OpenMC Wrapping
Cardinal couples OpenMC to MOOSE through a volumetric kappa-fission tally (recoverable fission
energy) and cross section feedback from temperatures and densities. Cardinal includes two options for
tallying the fission power in OpenMC – 1) cell tallies or 2) libMesh unstructured mesh tallies. At the
time of writing, OpenMC does not have the ability to track particles on an unstructured mesh – OpenMC’s
solution is represented as volume averages over CSG cells or mesh elements. This causes the significance
of the mesh mirror to differ slightly from that of the NekRS wrapping. For the NekRS wrapping, recall
that the mesh mirror represents a lower-order version of the mesh on which the CFD calculation occurs.
For the OpenMC wrapping, the mesh mirror is instead created off-line by the user, and (combined with the
cell definitions in the OpenMC model) represents the resolution of coupling data sent in/out of OpenMC.
Because the mesh mirror is only used for receiving data, there are no requirements on node continuity across
elements.
Fig. 2 depicts an OpenMC geometry, a mesh mirror on which coupling data is received, and the mapping
from the mesh mirror to the OpenMC cells. During initialization, Cardinal loops over all the elements in
the mesh mirror and maps each element to an OpenMC cell according to the element centroid. For the cell
IDs colored in the lower left, the element-to-cell mapping is shown on the right. The inset in the lower right
shows the boundary of an OpenMC cell as a white dashed line; the element centroids, shown as white dots,
determine the cell-to-element mapping. There are no requirements on alignment of elements/cells or on
preserving volumes – the OpenMC cells and mesh mirror elements do not need to be conformal. Elements
that don’t map to an OpenMC cell simply do not participate in the multiphysics coupling (and vice versa for
the cells); this feature may be used to exclude regions such as reflectors from multiphysics feedback.
Figure 2. Illustration of OpenMC particle transport geometry and the mapping of OpenMC cells to
a user-supplied mesh (referred to as the “mesh mirror”).
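A hedged sketch of this centroid-based mapping is shown below. The Point, CellHandle, and findCellContaining() names are hypothetical stand-ins for the MOOSE and OpenMC data structures and cell search, chosen only to keep the sketch self-contained.

#include <map>
#include <utility>
#include <vector>

// Hypothetical stand-ins for MOOSE/OpenMC types used in this sketch.
struct Point { double x, y, z; };
struct CellHandle { int id; int instance; bool found; };

// Stand-in for a search of the OpenMC CSG geometry; a real implementation
// would query OpenMC for the cell containing the point.
CellHandle findCellContaining(const Point & p)
{
  return {static_cast<int>(p.z), 0, true};
}

// Map each mesh-mirror element to the OpenMC cell containing its centroid.
// Elements whose centroid does not lie in any cell are skipped; they simply
// do not participate in the multiphysics coupling.
std::map<std::pair<int, int>, std::vector<int>>
buildCellToElementMap(const std::vector<Point> & element_centroids)
{
  std::map<std::pair<int, int>, std::vector<int>> cell_to_elements;
  for (std::size_t e = 0; e < element_centroids.size(); ++e)
  {
    const CellHandle cell = findCellContaining(element_centroids[e]);
    if (cell.found)
      cell_to_elements[{cell.id, cell.instance}].push_back(static_cast<int>(e));
  }
  return cell_to_elements;
}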
After establishing the mapping, the overall calculation workflow for each time step is as follows:
1. Read temperature and density from the mesh mirror; for each OpenMC cell, set its temperature and
density according to a volume average over that cell’s corresponding elements.
2. Run a k-eigenvalue OpenMC calculation.
3. Read the kappa-fission tally from OpenMC's internal arrays and write it onto the mesh mirror using the cell-to-element mapping (for cell tallies) or the element-to-element mapping (for unstructured mesh tallies). For cell tallies, all elements that correspond to cell ID/instance pair i receive the same heat source, while for unstructured mesh tallies, each element in the mesh mirror receives a unique heat source corresponding to its element tally bin. In both cases, the transfer conserves power through a normalization using the total (global) kappa-fission tally, as sketched below.
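A minimal sketch of this normalization (again with notation introduced here for illustration, not taken from Cardinal's source) distributes a user-specified total power Q among the tally bins in proportion to their share of the global kappa-fission score,

\[
\dot{q}_i = \frac{\tau_i}{\sum_j \tau_j} \, \frac{Q}{V_i} ,
\]

where τ_i is the kappa-fission score of bin i (a cell or a mesh element), V_i is the volume of the mesh-mirror region associated with bin i, and q̇_i is the resulting volumetric heat source written onto that region.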
The MOOSE transfer system is then used to transfer the fields on the mesh mirror to a coupled MOOSE ap-
plication where they are used to apply a fission heat source term. Note that while the notion of “time steps”
is used here, it should be understood that the OpenMC-MOOSE coupling in Cardinal is limited to pseudo-
steady state type calculations due to the use of a k-eigenvalue transport calculation. Note that because
MOOSE simulations are dimension-agnostic (but typically in SI units), Cardinal also handles conversions
between meters and OpenMC’s length unit of centimeters. To visualize the resolution of the mapping and
data sent into OpenMC, Cardinal also contains many auxiliary kernels to evaluate cell IDs/instances, tem-
peratures, and densities projected onto the mesh mirror. These auxiliary kernels are helpful when building
coupled models; a CellIDAux was used to depict the mapping in Fig. 2, for instance.
3. COMPUTATIONAL MODEL
This section describes the ABR fuel assembly and the single-physics computational models constructed to
describe the heat transfer, fluid dynamics, and neutron transport physics. Because the primary emphases of
this work are to 1) describe Cardinal's multiphysics coupling methodologies and 2) demonstrate preliminary multiphysics predictions for hexagonal fuel assemblies, the present proof-of-concept simulations are solely intended to demonstrate Cardinal's "readiness" for steady-state fast reactor applications. That is, the purpose at this early stage is not to validate Cardinal for fast reactor applications,
but only to show that the coupling data transfers are sufficiently sophisticated to capture the multiphysics
interactions between T/H, solid heat conduction, and neutron transport. As such, several simplifications are
made in the formulation of the geometry, material properties, and operating conditions. Because Cardinal’s
physics-agnostic coupling methodology allows each single-physics model to be substituted with higher-
fidelity inputs as-needed, the specifications for this test problem are chosen to exhibit the full range of data
transfers and code couplings needed for follow-on high-fidelity modeling and validation studies, but with a
smaller domain and simpler single-physics models.
The domain consists of a 7-pin version of the nominally 271-pin ABR metal driver fuel assemblies; nominal
fuel dimensions and material properties for the ABR driver fuel are taken from benchmark specifications
in the literature [12]. Differences between the present demonstration problem and the nominal design appear in the final five rows of Table I. All dimensions related to the pins are identical; the present test case shortens the active region from 4.22 L_w to 3 L_w, neglects the rod sodium/gas plenum, and models the upper and lower reflectors each with a height of 25 cm. To compensate for the reduced fuel volume fraction associated with the 7-pin geometry, the duct thickness is halved. The fuel also nominally consists of five axial enrichment zones; for simplicity, the fuel properties are set uniformly to the composition of layer 2, but with the Pu-239 enrichment arbitrarily increased to reach an initial eigenvalue close to unity. The nominal core power of 1000 MWth is removed by sodium coolant flowing at various mass flowrates through a number of orifice groups, resulting in a nominal core temperature rise of 155 K, or 36.7 K per wire pitch.
Table I. Geometric comparison between the present demonstration case and the nominal design [12].

Parameter                      Demonstration Case    Nominal Design
Pin diameter, D_p              0.7646 cm             0.7646 cm
Pin pitch, P                   0.8966 cm             0.8966 cm
Clad thickness, t_c            0.0587 cm             0.0587 cm
Wire diameter, D_w             0.103 cm              0.103 cm
Wire axial pitch, L_w          20.32 cm              20.32 cm
Duct thickness, t_d            0.1983 cm             0.3966 cm
Rod plenum height              0.0 cm                121.07 cm
Top reflector height           25.0 cm               112.39 cm
Bottom reflector height        25.0 cm               35.76 cm
Active height                  60.96 cm              85.82 cm
Additional specifications particular to each physics are described in Sections 3.1–3.3. Section 3.4 then
revisits the data transfers between the three applications and depicts the geometries used by each application.
3.1. NekRS Model
NekRS is used to solve for the flow and heat transfer within the fluid phase with the incompressible Navier-
Stokes equations,
\[
\nabla \cdot \mathbf{u} = 0 , \tag{1}
\]
\[
\rho_f \left( \frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u} \right) = -\nabla P + \nabla \cdot \tau + \rho_f \mathbf{f} , \tag{2}
\]
\[
\rho_f C_{p,f} \left( \frac{\partial T_f}{\partial t} + \mathbf{u} \cdot \nabla T_f \right) = \nabla \cdot \left( k_f \nabla T_f \right) + \dot{q}_f , \tag{3}
\]
where k_f, ρ_f, and C_{p,f} are the fluid thermal conductivity, density, and isobaric specific heat capacity; u is the velocity; P is the pressure; τ is the viscous stress tensor; f is a force term; T_f is the fluid temperature; and q̇_f is a volumetric heat source. In this case q̇_f = 0, but Cardinal does allow setting a fluid heat source in NekRS. On walls, the fluid velocity is zero and the heat flux is provided by MOOSE. At the inlet, a uniform temperature and axial velocity are specified to obtain the desired Reynolds number. At the outlet, velocity and temperature use outflow conditions, while the pressure is set to zero.
To approximate the effect of the wire wraps on crossflow, the momentum source model of Hu and Fanning [15] was implemented in NekRS. This model attempts to reproduce crossflow velocities by imposing the body force f in Eq. (2) at all quadrature points "inside" the wire region (if the wire region were explicitly represented). The momentum sink has components tangent to the wire (f_t), tangential to the pin but perpendicular to the wire (f_n), and perpendicular to both the wire and the pin (f_pn),
\[
-\mathbf{f} =
\underbrace{f_B \, \frac{u_t^2}{2 D_w} \, \hat{n}_t}_{\mathbf{f}_t}
+ \underbrace{\left( u_n \frac{\partial u_n}{\partial n_n}
  + u_t \frac{\partial u_n}{\partial n_t}
  + u_{pn} \frac{\partial u_n}{\partial n_{pn}} \right) \hat{n}_n}_{\mathbf{f}_n}
+ \underbrace{\left( u_n \frac{\partial u_{pn}}{\partial n_n}
  + u_t \frac{\partial u_{pn}}{\partial n_t}
  + u_{pn} \frac{\partial u_{pn}}{\partial n_{pn}} \right) \hat{n}_{pn}}_{\mathbf{f}_{pn}} , \tag{4}
\]
where the t, n, and pn subscripts correspond to the three force component directions and f_B is a friction factor. In the two wire-normal directions, the primary effect of the wire is to block the flow, such that the body force components are the dominant contributions to the change in momentum. In the direction tangent to the wire, the primary effect of the wire is instead to introduce a frictional resistance, which is approximated by the Blasius correlation. Additional implementation details are described by Hu and Fanning [15].
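For reference, the Blasius correlation invoked here is the standard smooth-wall friction factor form; whether Hu and Fanning apply it in the Darcy or the Fanning convention, and with which local Reynolds number, is not restated in this paper, so the expression below (Darcy convention) is given only as a reminder of its functional form,

\[
f_B = 0.316 \, \mathrm{Re}^{-1/4} .
\]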
While this capability is under development at ANL, at the time of writing NekRS lacks wall functions for its k-τ turbulence model. To simplify the fluid flow physics and reduce boundary layer meshing requirements, the Reynolds number for this demonstration problem is set to 500 such that the flow is laminar. As stressed previously, this simplification does not compromise the proof-of-concept intention of this test case because all data transfers are independent of the underlying physics approximations. Finally, NekRS's mesh consists of 1.04×10^6 HEX20 elements, and the solution is represented with 5th order Lagrange interpolants on the GLL quadrature points. Mesh convergence was ascertained by performing p-refinement of a separate CHT calculation with a uniform power distribution and enforcing less than 1% relative change in the pressure drop, maximum temperature, and maximum velocity components.
3.2. OpenMC Model
OpenMC is used to solve for the neutron transport over the fluid and solid phases. The outer surface of
the duct is assumed periodic (neglecting the thin sodium interwrapper space), while the top and bottom of
the domain are vacuum boundaries. Because the Reynolds number is decreased to 500 to ensure laminar conditions, the OpenMC tallies are normalized by the power that yields the same temperature rise per wire pitch as the nominal design. In other words, the total imposed power of 1150 W results in a temperature rise of 110.1 K for flow at a Reynolds number of 500.
OpenMC receives physics feedback in the form of solid temperature, fluid temperature, and fluid density;
NekRS computes fluid temperatures and densities, while the MOOSE heat conduction module computes
solid temperatures. Cross sections from the ENDF/B-VII.1 library are evaluated as a function of temperature
using statistical linear-linear interpolation between the two loaded data set temperatures that bound the cell
temperature. A libMesh unstructured mesh tally is used for scoring the recoverable fission energy release; to
save memory, a single mesh with 1680 tally bins is translated to each pin location. The geometry is divided
into 60 axial layers, with each layer divided into 38 cells (18 subchannel-type fluid cells, 7 fuel cells, 7
cladding cells, and 6 duct cells). Aside from requirements related to maximum tally error, no optimization
or mesh convergence study was performed for either the cell divisions or the mesh tally construction.
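As a brief sketch of what this statistical interpolation means in practice (consistent with how stochastic temperature interpolation is commonly described, though the exact implementation details are not restated here): for a cell temperature T bracketed by loaded data temperatures T_1 ≤ T ≤ T_2, the higher-temperature data set is selected with probability

\[
p = \frac{T - T_1}{T_2 - T_1} ,
\]

and the lower-temperature set with probability 1 − p, so that cross sections are linearly interpolated in expectation.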
OpenMC is run in k-eigenvalue mode with 50,000 particles per batch, 200 inactive batches, and 800 active batches. These selections were obtained by requiring the maximum tally relative error to be less than 1% and the initial axial offset (the power difference between the top and bottom halves of the bundle, normalized by the total power) to be less than 10^-6 for an uncoupled, uniform temperature and density case.
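In equation form (with symbols introduced here for clarity), this axial offset criterion reads

\[
\mathrm{AO} = \frac{P_{\mathrm{top}} - P_{\mathrm{bottom}}}{P_{\mathrm{total}}} < 10^{-6} ,
\]

where P_top and P_bottom are the powers tallied in the top and bottom halves of the bundle and P_total is the total power.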
3.3. MOOSE Heat Conduction Model
The MOOSE heat conduction module is used to solve for heat transfer within the solid phase,
\[
\rho_s C_{p,s} \frac{\partial T_s}{\partial t} - \nabla \cdot \left( k_s \nabla T_s \right) - \dot{q}_s = 0 , \tag{5}
\]
where k_s, ρ_s, and C_{p,s} are the solid thermal conductivity, density, and isobaric specific heat capacity; T_s is the solid temperature; and q̇_s is a volumetric heat source in the solid. Because the coupled calculation represents a pseudo-steady state, the time derivative term is neglected to accelerate the approach to steady state. On all fluid-solid interfaces, the surface temperature is set to the fluid temperature predicted by NekRS. All other surfaces are assumed insulated. The heat source is obtained by normalizing OpenMC's fission power onto the heat conduction mesh.
The thermal conductivities of the metal fuel (approximated by the material properties for uranium metal) and of the HT9 cladding and duct are taken from the literature [16, 17]. In order to obtain solid temperatures characteristic of prototypic conditions, all k_s values are divided by a factor of 200, since the Reynolds number of 500 is on the order of 200 times smaller than the prototypic Reynolds number. Without such an adjustment to k_s, the domain would be nearly isothermal due to the small heat source used to obtain a temperature rise of 110.1 K at a Reynolds number of 500. Finally, the MOOSE heat conduction mesh consists of 7.2×10^5 HEX8 elements; the solution is represented with a 1st order Lagrange basis. Mesh convergence was ascertained by performing uniform h-refinement of a separate heat conduction simulation with a uniform heat source and linearly-increasing surface temperature and enforcing less than 1% relative change in the maximum temperature.
3.4. Multiphysics Coupling
Because OpenMC lacks transient Monte Carlo transport, all Cardinal simulations involving OpenMC seek
the solution to the converged steady-state coupled physics. That is, the converged solution is obtained
by evolving time-dependent coupled physics (or time-independent physics, for the case of OpenMC’s k-
eigenvalue mode and the MOOSE heat conduction model used in the present work) until the coupled solution
becomes independent of time. MOOSE, NekRS, and OpenMC are coupled using Picard iteration. Each
application uses a unique time step. Due to the pseudo-steady nature of this example, no iterations are
performed within a given time step because nonlinearities are instead resolved in time (as opposed to in a
fixed point sense).
For each time step in the pseudo-steady calculation, Fig. 3 summarizes the data transfers that occur among
the three applications. All data transfers are facilitated through MOOSE’s MultiApp system, with OpenMC
as the master application and NekRS and MOOSE heat conduction both as “peer-level” sub-applications.
Both OpenMC and the MOOSE heat conduction module use a time step 750× larger than NekRS's time step of 0.02 (in non-dimensional units). By allowing sub-cycling, this results in OpenMC and MOOSE running 750× fewer time steps than NekRS. Because NekRS's model uses a characteristic length equal to the pin diameter, each coupled iteration effectively solves for the T/H physics over 15 length units. As the total height is 3L_w/D_p ≈ 80 length units, the NekRS solve is not fully converged within a single Picard iteration (but is fully converged through the multi-iteration Picard solve). This sub-cycling execution strategy is similar in concept to many physics-based relaxation schemes [18]. No optimization was performed in selecting the time step ratio between OpenMC/MOOSE and NekRS. A top-down view of the mesh used in each application is also shown in Fig. 3; the reflector regions do not participate in the coupling, so they are not shown. Finally, the solution is considered converged once k is within the uncertainty band of the previous iteration and there is less than a 1 K change in the maximum fluid, fuel, clad, and duct temperatures.
Figure 3. Illustration of all data transfers that occur within each time step.
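The sub-cycling arithmetic quoted above can be checked directly using the dimensions in Table I (a simple consistency check, not an additional result):

\[
750 \times 0.02 = 15 \ \text{length units per coupled step} , \qquad
\frac{3 L_w}{D_p} = \frac{3 \times 20.32\ \mathrm{cm}}{0.7646\ \mathrm{cm}} \approx 80 \ \text{length units} ,
\]

so roughly 80/15 ≈ 5–6 coupled steps advect the flow through the full height once, consistent with the nine iterations (just under two flow-through times) reported in Section 4.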
4. RESULTS
This section presents the results of a tight coupling of OpenMC, NekRS, and the MOOSE heat conduction
module for steady-state analysis of a 7-pin Sodium Fast Reactor (SFR) bundle. Fig. 4 shows the simulation
convergence metrics as a function of iteration number. Based on the criteria discussed in Section 3.4, the
coupled calculation is converged after 9 iterations, or slightly less than two full-length flow-through times.
The convergence metrics in Fig. 4 are shown individually for each physics. The leftmost plot shows the temperatures used to assess MOOSE's convergence, the middle plot shows k used to assess OpenMC's convergence, and the rightmost plot shows the fluid temperature used to assess NekRS's convergence.
Figure 4. Metrics used for evaluating convergence of the coupled physics solution.
Within a single iteration, A) MOOSE runs first with q̇ and T_f from the previous iteration; B) OpenMC runs second with T_s from the most recent MOOSE solve and T_f and ρ_f from the previous iteration; and C) NekRS runs third with a heat flux q'' from the most recent MOOSE solve. Because the first OpenMC solution is performed with both nonuniform temperatures and densities, there is not a very large variation in k with iteration. The gray marker at iteration 0B represents the eigenvalue had OpenMC been run outside the context of this multiphysics calculation using uniform temperatures and densities. As can be seen, adding thermal feedback causes a decrease in k due to increased axial leakage in this short geometry. With thermal feedback, the leakage fraction increases from 22.87% ± 0.007% to 28.84% ± 0.0078%.
Fig. 5 shows the pressure and velocity, both in nondimensional units. The locations of the wires and their
counterclockwise wrapping are shown in the left image. As the wire cuts into each channel, it tends to introduce a pressure loss "behind" the wire, which qualitatively matches experimental and numerical predictions in the literature [19, 20].
Figure 5. Pressure on the plane z = L_w and velocity on the plane z = 3L_w predicted by NekRS.
Using subchannel terminology, the combined effects of differing flow areas and hydraulic diameters result in the highest channel-averaged velocities occurring in the edge > interior > corner channels. Define a flow split factor X_i for channel type i as
\[
X_i \equiv \frac{U_i}{U} , \tag{6}
\]
where U_i is the averaged axial channel velocity for channel type i and U is the bulk inlet velocity. The flow split factor for the edge channels is predicted to be 1.23, which is similar in magnitude to the 1.186
predicted by Song et al. for a 19-pin bundle [21]. However, a 7-pin bundle is generally considered too small
to reflect the flow distribution among channels observed in larger bundles [22, 23]. Combining this caveat
with the use of a low Reynolds number, additional verification and validation of the momentum source
model implementation in NekRS is needed for further interpretation of the predictions shown in Fig. 5.
Based on simulations with larger bundles, greater asymmetry in the velocity (higher velocities in the peripheral channels near the wire as it wraps) than what is shown in Fig. 5 is expected. Most SFR T/H simulations employ periodic BCs, but inlet/outlet BCs are used in the present work. It is possible that the use of a laminar Reynolds number extends the flow development length beyond the total height of 3L_w such that, even though the flow is steady, the comparatively short height results in qualitatively different velocity distributions from the turbulent flow computations available for comparison in the literature. Future work will further investigate the sensitivity to the flow BCs for this 3L_w-height geometry by incorporating periodic BCs in the mass and momentum equations.
Fig. 6 shows the fluid temperature along five equi-spaced planes ranging from z = 0.275 m to z = 0.375 m.
Also shown is a portion of the duct temperature; two pincells are shown as gray cylinders for context. Heat
removal from the fuel causes temperature to increase in the axial direction, with the highest temperatures
occurring in the center of the bundle where the fluid is in close proximity to all seven pins.
Figure 6. Fluid temperature along five equi-spaced planes ranging from z = 0.275 m to 0.375 m, with duct temperature shown on a volume slice.
To provide additional insight into the temperature predictions, Fig. 7 shows the fuel, fluid, and duct tem-
peratures at the outlet. The small bundle size results in fairly small temperature variations in the fluid on a
given plane; combined with the nearly uniform power distribution resulting from the large mean free path
in this fast spectrum system, the fuel temperature in each pin is nearly azimuthally symmetric.
Figure 7. (a) Fuel and clad temperature and (b) fluid and duct temperature shown on the outlet.
Fig. 8 shows the fluid and fuel temperatures over the entire volume. The duct temperatures in the left image
are shown as contours. As also shown in Fig. 7, the highest duct temperatures occur at the corners due to
the proximity to the fuel. The CHT results in pin surface temperatures that vary with the fluid temperature.
Figure 8. Fluid, fuel, and duct temperatures.
Fig. 9 shows the fission power tallied on a libMesh unstructured mesh. On a separate color scale from the full-geometry heat source, the inset provides a closer view of the heat source at the top of the bundle. While the long mean free path of the neutrons in this fast spectrum system results in very little power variation along a given z plane, the fairly small tally bins still result in minor radial power asymmetries between the tally bins on a given z plane.
Figure 9. Fission power predicted by OpenMC.
Finally, Fig. 10 depicts the temperatures and heat source as a function of axial position by averaging over
x-y planes. Due to the small radial extent of the problem and the assumption of insulated boundaries on
the duct surface, the average fluid and duct temperatures nearly overlay one another. As also shown in Fig.
9, the converged power distribution is very symmetric in this domain, even with thermal feedback. This
demonstration problem is much shorter than prototypical SFR geometries, and the mean free path in fast
spectrum systems is on the order of tens of centimeters. Therefore, the thermal feedback in this geometry
acts more in a global sense than in thermal spectrum systems. Future extensions of Cardinal’s application to
fast spectrum systems will include the full axial height and more accurate fuel-to-coolant volume ratios to
better reflect the thermal feedback in the predicted power distribution.
Figure 10. Radially-averaged temperatures (left axis) and heat source (right axis).
5. CONCLUSIONS
This paper introduced Cardinal, a new application that couples OpenMC Monte Carlo transport and NekRS
CFD to the MOOSE framework. The data transfers and coupling algorithms use in-memory communication,
distributed parallel meshes, and continuous field transfers without any requirements on conformal geome-
tries/meshes/cells. As part of a multi-year project focusing on the development of high-resolution tools for
core bowing analysis in SFRs, this work presented a “proof-of-concept” application of Cardinal to steady-
state simulation of a small-scale SFR fuel bundle. While many physics simplifications were made in the
construction of this demonstration problem, the mesh-to-mesh solution interpolations, CSG cell feedback,
and unstructured mesh tallies exhibit all requirements necessary for further extensions to more accurate and
to-scale models – such as Reynolds Averaged Navier Stokes (RANS) turbulence models in NekRS, fuel
performance models in BISON, and larger geometries more reflective of the nominal ABR fuel design. A
recent addition of on-line mesh deformation to Cardinal’s NekRS wrapping will also be incorporated into
these future applications to account for the effect of thermal expansion and irradiation swelling and creep
on the core physics.
Cardinal represents an outward-facing integration of high-fidelity tools within a large multiphysics com-
munity based on MOOSE. While the present work emphasized fast reactor analysis, Cardinal’s physics-
and geometry-agnostic coupling design has enabled diverse applications to areas such as pebble bed reac-
tor core analysis (OpenMC–NekRS–MOOSE heat conduction), thermal striping (NekRS–MOOSE tensor
mechanics), and flow loop systems (NekRS–SAM). We hope the potential for diverse applications will mo-
tivate others to consider approaching the multiphysics software design question from the point of view of
“plug-ins” to broader multiphysics frameworks.
ACKNOWLEDGMENTS
This material is based upon work supported by Laboratory Directed Research and Development (LDRD)
funding from Argonne National Laboratory, provided by the Director, Office of Science, of the U.S. Depart-
ment of Energy (DOE) under Contract No. DE-AC02-06CH11357.
The submitted manuscript has been created by UChicago Argonne, LLC, Operator of Argonne National
Laboratory (“Argonne”). Argonne, a U.S. DOE Office of Science laboratory, is operated under Contract No.
DE-AC02-06CH11357. The U.S. Government retains for itself, and others acting on its behalf, a paid-up
nonexclusive, irrevocable worldwide license in said article to reproduce, prepare derivative works, distribute
copies to the public, and perform publicly and display publicly, by or on behalf of the Government. The
DOE will provide public access to these results of federally sponsored research in accordance with the DOE
Public Access Plan.
REFERENCES
1. R. Haroldsen, The Story of the BORAX Nuclear Reactor and the EBR-I Meltdown (2008)
2. B. Fontaine et al., "Description and Preliminary Results of PHENIX Core Flowering Test," Nuclear Engineering and Design, 241, pp. 4143–4151 (2011)
3. J. A. Shields, "Bowing in Experimental Breeder Reactor II Reflector Subassemblies," Nuclear Technology, 52, pp. 214–227 (1981)
4. C. J. Permann et al., "MOOSE: Enabling massively parallel multiphysics simulation," SoftwareX, 11, pp. 100430 (2020)
5. E. Merzari et al., "Cardinal: A Lower Length-Scale Multiphysics Simulator for Pebble-Bed Reactors," Nuclear Technology (2021)
6. P. Fischer et al., "NekRS, a GPU-Accelerated Spectral Element Navier-Stokes Solver," arXiv:2104.05829 (2021)
7. P. Romano et al., "OpenMC: A State-of-the-Art Monte Carlo Code for Research and Development," Annals of Nuclear Energy, 82, pp. 90–97 (2015)
8. A. Ivanov et al., "High Fidelity Simulation of Conventional and Innovative LWR with the Coupled Monte-Carlo Thermal-Hydraulic System MCNP5-SUBCHANFLOW," Nuclear Engineering and Design, 262, pp. 264–275 (2013)
9. J. Guo et al., "A Versatile Method of Coupled Neutronics/Thermal-Hydraulics Based on HDF5," Proceedings of M&C (2017)
10. W. Gurecky and E. Schneider, "Development of an MCNP6-ANSYS FLUENT Multiphysics Coupling Capability," Proceedings of ICONE (2016)
11. P. Fischer et al., "Highly Optimized Full-Core Reactor Simulations on Summit," arXiv:2110.01716 (2021)
12. J. Cahalan et al., "Advanced Burner Reactor 1000 MWth Reference Concept," ANL-AFCI-202, Argonne National Laboratory (2007)
13. A. Novak et al., "Preliminary Coupling of OpenMC and Nek5000 Within the MOOSE Framework," Proceedings of PHYSOR (2018)
14. A. Huxford et al., "Development of Innovative Overlapping-Domain Coupling Between SAM and nekRS," Proceedings of NURETH (2022)
15. R. Hu and T. Fanning, "A Momentum Source Model for Wire-Wrapped Rod Bundles: Concept, Validation, and Application," Nuclear Engineering and Design, 262, pp. 371–389 (2013)
16. L. Leibowitz and R. Blomquist, "Thermal Conductivity and Thermal Expansion of Stainless Steels D9 and HT9," International Journal of Thermophysics, 9, pp. 873–883 (1988)
17. J. Hales et al., "BISON Theory Manual," INL/EXT-13-29930 Rev. 3, Idaho National Laboratory (2016)
18. B. Aviles et al., "MC21/COBRA-IE and VERA-CS Multiphysics Solutions to VERA Core Physics Benchmark Problem #6," Progress in Nuclear Energy, 101, pp. 1–14 (2017)
19. R. Roidt et al., "Experimental Investigations of the Hydraulic Field in Wire-Wrapped LMFBR Core Assemblies," Nuclear Engineering and Design, 62 (1980)
20. K. Hamman and R. Berry, "A CFD Simulation Process for Fast Reactor Fuel Assemblies," Nuclear Engineering and Design, 240, pp. 2304–2312 (2010)
21. M. Song et al., "Flow Visualization on SFR Wire-Wrapped 19-Pin Bundle Geometry Using MIR-PIV-PLIF and Comparisons with RANS-Based CFD Analysis," Annals of Nuclear Energy, 147 (2020)
22. L. Brockmeyer et al., "CFD Investigation of Wire-Wrapped Fuel Rod Bundles and Flow Sensitivity to Bundle Size," Proceedings of NURETH-16 (2015)
23. R. Gajapathy et al., "A Comparative CFD Investigation of Helical Wire-Wrapped 7, 19, and 37 Fuel Pin Bundles and its Extendibility to 217 Pin Bundle," Nuclear Engineering and Design, 239, pp. 2279–2292 (2009)