Coupled Monte Carlo and Thermal-Hydraulics Modeling of a
Prismatic Gas Reactor Fuel Assembly Using Cardinal
A.J. Novak1, D. Andrs2, P. Shriwise1, D. Shaver1, P.K. Romano1,
E. Merzari3, and P. Keutelian4
1Argonne National Laboratory
{anovak, pshriwise, dshaver, promano}@anl.gov
2Idaho National Laboratory 3Pennsylvania State University 4Radiant Industries, Inc.
David.Andrs@inl.gov ebm5351@psu.edu paul@radiantnuclear.com
ABSTRACT
Cardinal is a MOOSE application that couples OpenMC Monte Carlo transport and NekRS computational fluid dynamics to the MOOSE framework, closing the neutronics and thermal-fluid gaps in conducting tightly-coupled, high-resolution multiscale and multiphysics analyses. By leveraging MOOSE's interfaces for wrapping external codes, Cardinal overcomes many challenges encountered in earlier multiphysics coupling works, such as file-based I/O or overly-restrictive geometry mapping requirements. In this work, we leverage a subset of the multiphysics interfaces in Cardinal to perform coupling of OpenMC neutron transport, MOOSE heat conduction, and THM thermal-fluids for steady-state modeling of a prismatic gas reactor fuel assembly.
1. INTRODUCTION
Many reactor phenomena are inherently multiscale and multiphysics, with scales spanning many orders of magnitude. A challenge common to multiscale and multiphysics modeling of nuclear systems is the historical development of state-of-the-art tools for individual physics domains using a wide variety of discretization schemes and software architectures. Depending on code design, it may not be a simple feat to establish just the “mechanics” of code coupling — the data transfers, parallel communication, and iterative solutions — between codes developed across varying groups. These challenges may limit the scope of multiphysics studies by restricting the pool of available modeling tools, curtailing a project's future growth when coupling scripts are confined to specific geometries, and shifting engineering effort to API development.
The Multiphysics Object-Oriented Simulation Environment (MOOSE) is a finite element framework developed at Idaho National Laboratory (INL) that allows applied math practitioners to translate physics models into high-quality, state-of-the-art engineering software [1]. A wide variety of MOOSE-based applications are under development across various institutions and include physics such as coarse-mesh Thermal-Hydraulics (T/H) [2] and nuclear fuel performance [3]. Because all MOOSE applications share the same code base, a common data transfer and field interpolation system can be used to couple MOOSE applications to one another through source terms, Boundary Conditions (BCs), and virtually any other mechanism by which physics and scales can be coupled.
In 2018, a new mechanism for coupling external applications to the MOOSE “ecosystem” was introduced. This ExternalProblem interface provides insertion points for the initialization, solution, and postprocessing of non-MOOSE codes while exposing the time stepping, synchronization, and data transfer systems in MOOSE. Plugging an external code into the MOOSE framework in this manner is referred to as “wrapping.” The ExternalProblem interface allows external tools to couple in a physics- and dimension-agnostic manner to any other MOOSE application, greatly expanding capabilities for multiphysics studies.
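To make the wrapping pattern concrete, the following is a minimal sketch of an ExternalProblem subclass. The externalSolve() and syncSolutions() hooks and the Direction enumeration follow MOOSE's ExternalProblem API, while the mysolver namespace and its three calls are hypothetical placeholders for an arbitrary external code; this is not Cardinal's actual source.

```cpp
#include "ExternalProblem.h"

// Hypothetical external-code API; stands in for calls like nekrs::runStep()
namespace mysolver
{
void runStep();          // advance the external solver
void setSourcesAndBCs(); // push coupling data into the solver
void extractSolution();  // pull the new solution out of the solver
}

class MySolverProblem : public ExternalProblem
{
public:
  MySolverProblem(const InputParameters & params) : ExternalProblem(params) {}

  // Called in place of MOOSE's own finite element solve
  void externalSolve() override { mysolver::runStep(); }

  // Move field data between the external code and the MooseVariables
  // defined on the mesh mirror, in the direction MOOSE requests
  void syncSolutions(Direction direction) override
  {
    if (direction == Direction::TO_EXTERNAL_APP)
      mysolver::setSourcesAndBCs(); // read coupling data from the mesh mirror
    else
      mysolver::extractSolution();  // write the new solution onto the mirror
  }
};
```

Because MOOSE drives the time stepping and data transfers, a code wrapped this way participates in MultiApp hierarchies like any native MOOSE application.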
Cardinal seeks to bring Computational Fluid Dynamics (CFD) and Monte Carlo (MC) transport to the
MOOSE ecosystem in order to close neutronics and T/H gaps in conducting multiscale and multiphysics
analyses of nuclear systems. Cardinal leverages many years of effort at Argonne National Laboratory
(ANL) in the development of CFD and MC tools by wrapping NekRS [4], a spectral element CFD code
targeting Graphics Processing Unit (GPU) architectures; and OpenMC, a neutron and photon transport MC
code [5]. Cardinal seeks to eliminate several limitations common to earlier T/H and MC couplings:
• All data is communicated in memory, obviating the need for code-specific I/O programs and reducing the potential for file-based communication bottlenecks.
• Mappings between the NekRS CFD mesh, the OpenMC Constructive Solid Geometry (CSG) cells, and the MOOSE meshes are constructed automatically, with no requirements on node/element/cell alignment.
• All data is transferred using existing MOOSE interpolation and transfer systems. A general design allows NekRS and OpenMC to be coupled to any MOOSE application. For instance, the same OpenMC model can provide neutronics feedback to NekRS CFD, Pronghorn porous media, and SAM 1-D flow loops.
Our initial applications of Cardinal have been to Pebble Bed Reactors (PBRs) and Sodium Fast Reactors
(SFRs). In 2021, we demonstrated fully-coupled NekRS, OpenMC, and BISON simulations of a salt-
cooled PBR with 127,000 pebbles [6] and of a 7-pin SFR bundle [7]. Additional ongoing activities include
coupling NekRS to the MOOSE tensor mechanics module for thermal striping applications and coupling
NekRS to SAM for multiscale loop analysis [8]. Under the support of the DOE-NE Nuclear Energy Advanced Modeling and Simulation (NEAMS) Center of Excellence (CoE) for Thermal Fluids Applications
in Nuclear Energy, a joint collaboration between ANL, INL, and Radiant, a reactor development company
pursuing a prismatic-block gas-cooled microreactor, is underway to develop high-resolution multiphysics
modeling capabilities for High Temperature Gas Reactors (HTGRs) using Cardinal. The relatively small
size of microreactors allows high-resolution computational tools to serve an important role in areas such as
optimization and closure development. The objective of this CoE project is to demonstrate Cardinal as a
tool for full-core modeling of prismatic gas reactors; this paper presents our progress towards this goal.
We demonstrate tight coupling of OpenMC, MOOSE heat conduction, and Thermal Hydraulics Module
(THM) 1-D Navier-Stokes [9] for steady-state modeling of a prismatic gas-cooled fuel assembly*. This
work emphasizes the dimension-agnostic and “plug-and-play” nature of MOOSE simulations — without
any additional software development, in this paper we substitute a 1-D THM flow solution for the NekRS
CFD feedback used in our earlier applications [6,7]. In Section 2, we provide a high-level description of
how Cardinal wraps OpenMC and NekRS within MOOSE; Section 3 describes the target prismatic gas
reactor fuel assembly design; Section 4 discusses the computational models built for each single-physics
tool and the data transfers used to achieve coupling. Finally, Section 5 provides fully-coupled predictions
for the power, temperatures, pressures, and velocities. Section 6 then presents concluding remarks and
outlines remaining work to be conducted through the CoE.
2. CARDINAL
Cardinal leverages MOOSE’s ExternalProblem interface to wrap OpenMC and NekRS as MOOSE
applications. Cardinal is itself a MOOSE application, but essentially replaces MOOSE’s finite element
solution with API calls to external codes. Each wrapping has three main parts:
• Create a “mirror” of the external application's mesh/geometry as a MooseMesh; this mesh is the receiving point for all field data to be sent in/out of the external application.
• Map from the external application to the MooseMesh; this mapping facilitates all data transfers in/out of the external application by reading/writing to MooseVariables defined on the mesh. After data is mapped to the mesh, the MOOSE Transfers handle transfers to/from coupled MOOSE applications.
• Run the external application by calling API functions (such as nekrs::runStep) at various insertion points defined by the ExternalProblem interface.

*Part of the CoE project objectives is to develop documentation to support industry users. A tutorial based on this paper can be found on the Cardinal website, https://cardinal.cels.anl.gov
To simplify model setup, the spatial mappings, tally creation, and transfer of coupling data are fully automated. The input files used to run a simulation consist of the usual standalone code input files plus thin “wrapper” input files that specify what data is to be transferred to MOOSE.
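As an illustration, a wrapper input for the OpenMC side might look like the fragment below. FileMeshGenerator is a standard MOOSE object and OpenMCCellAverageProblem is Cardinal's documented OpenMC wrapping, but this is a minimal, hypothetical sketch rather than a complete, verified input file.

```
# Illustrative OpenMC wrapper input (abbreviated; not a complete input)
[Mesh]
  [mirror]
    type = FileMeshGenerator
    file = assembly.e   # mesh mirror that receives power and supplies T, rho
  []
[]

[Problem]
  type = OpenMCCellAverageProblem
  power = 16.67e6       # total power (W) used to normalize the fission tally
[]
```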
2.1. NekRS Wrapping
Cardinal includes two modes for coupling NekRS to MOOSE – 1) Conjugate Heat Transfer (CHT) coupling
via temperature and heat flux BCs, and 2) volume coupling via volumetric heat sources, temperatures, and
densities. In our previous SFR work, we combined both modes together to couple NekRS via CHT to
MOOSE but via volumetric temperatures and densities to OpenMC [7]. The mesh mirror is constructed
automatically by looping over NekRS’s mesh data structures and reconstructing the mesh as a first- or
second-order MooseMesh. The overall calculation workflow for each time step is then to 1) read data
from the mesh mirror and write into NekRS’s source term and/or BC arrays; 2) run NekRS for one time
step; and 3) read coupling data from NekRS’s arrays and write onto the mesh mirror. Interpolation is used
to transfer data between the mesh mirror and the high-order NekRS solution, with conservation ensured
through normalization. There are no requirements on node/element alignment between NekRS’s mesh
and that of the coupled MOOSE application, which allows boundary layers to be highly refined without
requiring comparable refinement in adjacent solids.
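The per-time-step workflow described above can be summarized with the sketch below. nekrs::runStep is the NekRS API call named earlier; the header name and the two helper functions are illustrative stand-ins for Cardinal's internals, not its actual code.

```cpp
#include "nekrs.hpp" // NekRS driver API (header name may vary by version)

void writeFluxToNek();         // illustrative helper: mirror -> NekRS BC arrays
void readTemperatureFromNek(); // illustrative helper: NekRS arrays -> mirror

// Sketch of one CHT-coupled NekRS time step
void solveStep(double time, double dt, int tstep)
{
  // 1) interpolate heat flux from the mesh mirror into NekRS's boundary
  //    condition arrays, normalizing so the integral heat flux is conserved
  writeFluxToNek();

  // 2) advance NekRS one time step (data is copied to the GPU as needed)
  nekrs::runStep(time, dt, tstep);

  // 3) interpolate temperature from NekRS's high-order solution (copied back
  //    from the GPU) onto the mesh mirror for transfer to coupled MOOSE apps
  readTemperatureFromNek();
}
```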
NekRS supports both Central Processing Unit (CPU) and GPU backends; when using a GPU backend,
Cardinal facilitates all necessary copies between CPU (where MOOSE runs) and GPU (where NekRS
runs). An MPI distributed mesh coupling of NekRS and MOOSE is available, which keeps the cost of data transfers below 5% of total runtime even for very large problems.
2.2. OpenMC Wrapping
Cardinal couples OpenMC to MOOSE through a kappa-fission tally (recoverable fission energy) and
cross section feedback from temperature and density. The fission power is tallied using either 1) cell tallies
or 2) libMesh unstructured mesh tallies. OpenMC tracks particles through a CSG space, or a collection of
cells formed as intersections of the half-spaces of common surfaces. Because a delta tracking implementation is currently under development, all results presented in this work use surface tracking, which requires a uniform temperature and density in each cell so that cross sections are constant within a cell.
The use of CSG causes the role of the mesh mirror to differ slightly from that in the NekRS wrapping. With the OpenMC wrapping, the mesh mirror is created offline using mesh generation software
based on the transfer resolution desired by the modeler. In many cases, we simply use the same mesh of
the coupled MOOSE application, though this is not required. During initialization, Cardinal automatically
loops over all the elements in this mesh mirror and maps by centroid to an OpenMC cell. There are no
requirements on alignment of elements/cells or on preserving volumes — the OpenMC cells and mesh
mirror elements do not need to be conformal. Elements that do not map to an OpenMC cell simply do not participate in the coupling (and vice versa for the cells); this feature is used in the present work to exclude
axial reflectors from multiphysics feedback. An illustrative example of the mapping for an SFR application
is shown in Fig. 1. The inset shows the centroid mapping in a region where cells do not align with elements.
The overall calculation workflow for each time step is then to 1) read temperature and density from the
mesh mirror, and for each OpenMC cell set the temperature and density from a volume average over the
corresponding elements; 2) run a k-eigenvalue calculation; and 3) read the tally from OpenMC’s arrays and
write onto the mesh mirror. This data transfer conserves power through normalization by element volumes.
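The sketch below illustrates this workflow. openmc_run() and openmc_cell_set_temperature() are functions in OpenMC's C API; the mapping container and the two helpers are illustrative stand-ins for Cardinal's internals, not its actual code.

```cpp
#include <cstdint>
#include <map>
#include <utility>
#include <vector>
#include "openmc/capi.h"

// (cell index, instance) -> mesh mirror elements whose centroids map to it
extern std::map<std::pair<int32_t, int32_t>, std::vector<int>> cell_to_elems;

double volumeAverageTemperature(const std::vector<int> & elems); // illustrative
void extractAndNormalizeTally();                                 // illustrative

// Sketch of one coupled OpenMC solve
void solveStep()
{
  // 1) apply a single volume-averaged temperature to each mapped cell;
  //    set_contained = true pushes it to all cells within, e.g., a TRISO universe
  for (const auto & [cell, elems] : cell_to_elems)
  {
    const double T = volumeAverageTemperature(elems);
    openmc_cell_set_temperature(cell.first, T, &cell.second,
                                /* set_contained = */ true);
  }

  // 2) run a k-eigenvalue calculation with the updated feedback
  openmc_run();

  // 3) read the kappa-fission tally and normalize by element volumes so the
  //    total deposited power matches the user-specified value
  extractAndNormalizeTally();
}
```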
For TRistructural ISOtropic (TRISO) applications, many T/H tools homogenize the TRISO particles into the matrix using multiscale treatments to avoid explicitly meshing the TRISO particles [2]. Cardinal optionally allows all the cells contained within a parent cell to be grouped together from the perspective of
tionally allows all the cells contained within a parent cell to be grouped together from the perspective of
tally and temperature feedback. In other words, a homogenized temperature from a T/H tool can be applied
to all cells contained within an OpenMC TRISO universe; this feature is used in this work.
Figure 1: Illustration of an OpenMC geometry and the mapping of cells to a user-supplied mesh.
3. PRISMATIC GAS REACTOR DESIGN
Fig. 2 shows the fuel assembly of interest; all specifications are taken from a 2016 point design developed as part of a DOE initiative to explore high-temperature test reactors [10]. Without loss of generality, these design specifications are considered sufficiently representative of the Radiant design while not containing any proprietary information. The assembly is a graphite prismatic hexagonal block with 108 helium
coolant channels, 210 fuel compacts, and 6 poison compacts in a triangular lattice.
Figure 2: Top-down view of a fuel assembly, colored by material.
The core contains 12 fuel assemblies and a number of graphite reflector assemblies in a hexagonal lattice,
enclosed within a steel reactor vessel. Upper and lower graphite reflectors reduce leakage. The present
analysis considers a single fuel assembly plus the reflectors above and below the core. Each fuel compact
contains TRISO particles dispersed in graphite with a packing fraction of 15%. Because explicit modeling
of the inter-assembly flow is outside the scope, the inter-assembly gaps are treated as solid graphite.
The TRISO particles are based on a fairly conventional design with a 435 µm diameter kernel of uranium oxycarbide enriched to 15.5 w/o in ²³⁵U. The boron carbide poison compacts are defined with a 0.2 w/o ¹⁰B enrichment. The remaining TRISO particle dimensions and material properties are available elsewhere [10]. The 200 MWth core power is removed by helium flowing at 117.3 kg/s downwards through the core with an inlet temperature of 325°C. The outlet pressure is 7.1 MPa. These parameters are scaled to the single-assembly case by assuming average core conditions, or an assembly power of 16.67 MWth and an assembly mass flowrate of 9.775 kg/s. The mass flowrate in each flow channel is assumed uniform.
4. COMPUTATIONAL MODEL
This section describes the computational model of the prismatic gas reactor assembly. Mesh and physics
refinement studies are conducted for each code in a “standalone” setting for which coupling is approximated
through simple feedback terms. It is assumed that these single-physics convergence studies are sufficient
justification for convergence of the coupled problem because the homogenized temperature feedback in
OpenMC, helium’s transparency to neutrons, and THM’s area-averaged equations are assumed to induce
similar solution gradient magnitudes in the fully-coupled case as the standalone cases.
4.1. OpenMC Neutron Transport Model
The OpenMC model is shown in Fig. 2 (colored by material) and Fig. 3 (colored by cell ID or instance).
In the axial direction, the geometry is divided into $n_l$ layers of equal height. The outer surface of the
assembly is a periodic boundary, while the top and bottom surfaces are vacuum boundaries. The TRISO
particles are represented explicitly, with positions determined using Random Sequential Addition (RSA).
To accelerate the particle tracking, a Cartesian search lattice is superimposed in the fuel compacts, and the
same TRISO universe fills each fuel compact. Cell tallies are used to measure the recoverable fission energy
release. Temperature-dependent ENDF/B-VII.1 cross sections are evaluated using statistical linear-linear
interpolation between data set temperatures. S(α, β) data is applied for all graphite materials.
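The statistical interpolation can be sketched as follows: cross sections are evaluated at one of the two bracketing library temperatures, sampled with probability proportional to proximity, so the result is exact in expectation without storing data at every material temperature. This is a minimal sketch of the sampling step under that assumption, not OpenMC's implementation.

```cpp
#include <random>

// Given T1 <= T <= T2 (bracketing library temperatures), return the data set
// temperature to use: T2 is chosen with probability (T - T1)/(T2 - T1)
double sampleDatasetTemperature(double T, double T1, double T2,
                                std::mt19937 & rng)
{
  std::uniform_real_distribution<double> u(0.0, 1.0);
  const double p_high = (T - T1) / (T2 - T1); // linear interpolation factor
  return (u(rng) < p_high) ? T2 : T1;
}
```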
OpenMC uses “distributed cells” to repeat a cell multiple times throughout the domain. Each new occurrence of a cell is referred to as an “instance.” Cardinal applies uniform temperature and density feedback to OpenMC for each unique cell ID + instance combination. The distributed cell feature is used here to construct the fuel assembly from four universes: 1) a fuel pin; 2) a coolant pin; 3) a poison pin; and 4) a solid graphite region to represent the “borders.” OpenMC receives on each axial plane a total of 721 temperatures and 108 densities (one per coolant channel). With reference to the colors shown in Fig. 3, the
721 cell temperatures correspond to:
210 fuel compacts × [1 TRISO compact (rainbow) + 1 matrix region (purple)]
+ 108 coolant channels × [1 coolant region (various) + 1 matrix region (various)]
+ 6 poison compacts × [1 poison region (brown) + 1 matrix region (blue)]
+ 73 graphite fillers × [1 matrix region (mustard)]
= 420 + 216 + 12 + 73 = 721.
The number of axial layers is selected by performing a cell refinement study considering 25, 50, 75, and 100 axial layers. For each choice of $n_l$, the Shannon entropy and the generation $k$ are monitored to determine the number of inactive cycles. OpenMC's tally trigger system is used to automatically terminate the active cycles once reaching less than 1.5% relative tally uncertainty. The number of layers is chosen by requiring less than 3% difference in the power distribution relative to the next-coarser mesh. Based on this study, the OpenMC model uses 10,000 particles/cycle with 500 inactive cycles, 2000 active cycles, and $n_l = 50$.
Figure 3: OpenMC CSG geometry, colored by cell ID or instance.
4.2. MOOSE Heat Conduction Model
The MOOSE heat conduction module is used to solve for heat transfer within the solid phase,
$-\nabla \cdot \left( k_s \nabla T_s \right) - \dot{q}_s = 0, \qquad (1)$

where $k_s$ is the thermal conductivity, $T_s$ is the temperature, and $\dot{q}_s$ is a volumetric heat source. On the
fluid-solid interface, the temperature is set to the fluid temperature computed by THM. All other surfaces
are assumed insulated. The heat source is set to the normalized fission tally computed by OpenMC. The
TRISO particles are homogenized into the compacts by volume averaging material properties. This homogenization of the buffer layer thermal resistance is well known to under-predict kernel temperatures [2]; future work will leverage Pronghorn's multiscale TRISO model [11].
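For illustration, the volume-averaging step amounts to a volume-fraction-weighted mean over the TRISO layer and matrix materials. The sketch below assumes the fractions and conductivities are supplied elsewhere; it is not Cardinal or BISON code, and the inputs are placeholders rather than the reference design's properties.

```cpp
#include <cstddef>
#include <vector>

// Effective conductivity of a homogenized compact: k_eff = sum_i f_i * k_i,
// where f_i are volume fractions (summing to 1) over TRISO layers and matrix
double homogenizedConductivity(const std::vector<double> & volume_fractions,
                               const std::vector<double> & conductivities)
{
  double k_eff = 0.0;
  for (std::size_t i = 0; i < conductivities.size(); ++i)
    k_eff += volume_fractions[i] * conductivities[i];
  return k_eff; // W/(m K)
}
```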
Figure 4: Top-down view of the converged MOOSE heat conduction mesh.
The solid mesh is shown in Fig. 4. This mesh is created with the MOOSE reactor module [12], whose easy-to-use mesh generators programmatically construct reactor core meshes from bundle and pincell building blocks. Mesh convergence is ensured by uniformly refining the mesh and requiring less
than 1% difference in radially-averaged axial temperature distributions between successive meshes.
4.3. THM Navier-Stokes Model
THM is a MOOSE-based T/H code applicable to single-phase systems-level analysis. THM solves for
conservation of mass, momentum, and energy with 1-D area averages of the Navier-Stokes equations,
$\frac{\partial}{\partial t} \left( A \rho_f \right) + \frac{\partial}{\partial x} \left( A \rho_f u \right) = 0, \qquad (2)$

$\frac{\partial}{\partial t} \left( A \rho_f u \right) + \frac{\partial}{\partial x} \left( A \rho_f u^2 + A P \right) = \tilde{P} \frac{\partial A}{\partial x} - \frac{f}{2 D_h} \rho_f u \lvert u \rvert A, \qquad (3)$

$\frac{\partial}{\partial t} \left( A \rho_f E_f \right) + \frac{\partial}{\partial x} \left[ A u \left( \rho_f E_f + P \right) \right] = H_w a_w \left( T_{\text{wall}} - T_{\text{bulk}} \right) A, \qquad (4)$

where $x$ is the coordinate along the flow length, $A$ is the cross-sectional area, $\rho_f$ is the density, $u$ is the velocity, $P$ is the pressure, $\tilde{P}$ is the average pressure on the curve boundary, $f$ is the friction factor, $D_h$ is the hydraulic diameter, $E_f$ is the fluid total energy, $H_w$ is the wall heat transfer coefficient, $a_w$ is the heat transfer area density, $T_{\text{wall}}$ is the wall temperature, and $T_{\text{bulk}}$ is the area-average bulk fluid temperature. The Churchill correlation is used for $f$ and the Dittus-Boelter correlation is used for $H_w$ [9].
The heat flux imposed in the $N_{THM}$ elements is an area average of the heat flux from MOOSE in $N_{THM}$ layers along the fluid-solid interface. For the reverse transfer, the wall temperature sent to MOOSE is set to a uniform value along the fluid-solid interface according to a nearest-node mapping to the $N_{THM}$ elements. Mesh convergence is based on the same uniform refinement strategy described in Section 4.2.
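This 3-D to 1-D reduction can be sketched as a per-layer area average over the interface faces; the face-to-layer assignment and data layout below are illustrative assumptions, not the actual MOOSE transfer code.

```cpp
#include <cstddef>
#include <vector>

// Area-average the surface heat flux within each of n_layers axial layers;
// face_layer[f] gives the layer containing interface face f
std::vector<double> layerAveragedFlux(const std::vector<double> & face_flux,
                                      const std::vector<double> & face_area,
                                      const std::vector<int> & face_layer,
                                      int n_layers)
{
  std::vector<double> q(n_layers, 0.0), A(n_layers, 0.0);
  for (std::size_t f = 0; f < face_flux.size(); ++f)
  {
    q[face_layer[f]] += face_flux[f] * face_area[f]; // integrate flux
    A[face_layer[f]] += face_area[f];                // accumulate area
  }
  for (int l = 0; l < n_layers; ++l)
    if (A[l] > 0.0)
      q[l] /= A[l]; // area-averaged flux imposed on THM element l
  return q;
}
```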
4.4. Multiphysics Coupling
Fig. 5 summarizes the multiphysics data transfers; the inset describes the 1-D/3-D data transfers with THM.
While the coolant is quite transparent to neutrons, we include fluid feedback for completeness.
Figure 5: Summary of data transfers between OpenMC, MOOSE, and THM.
Note that all data transfers are achieved using existing MOOSE MultiApp transfers and are not specific
to Cardinal — this enables a seamless interchange of any of OpenMC, MOOSE heat conduction, and THM
for other MOOSE applications. OpenMC, MOOSE, and THM are coupled together using Picard iteration.
For each iteration, each application fully converges its physics (OpenMC and MOOSE heat conduction solve steady equations, while THM is time-evolved to steady state). The coupled solution is considered converged once $k$ is within the uncertainty band of the previous iteration and there is less than 2 K change in the maximum fluid, fuel, and matrix temperatures. A constant relaxation factor of 0.5 applied to the fission power was required to achieve convergence.
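The under-relaxation amounts to blending the previous power iterate with the newest Monte Carlo solution before it is passed to the thermal solvers. The sketch below shows this update for the constant factor $\alpha = 0.5$ used here; it is a minimal illustration, not Cardinal's actual code.

```cpp
#include <cstddef>
#include <vector>

// Relaxed Picard update of the fission power between iterations:
// q <- (1 - alpha) * q_old + alpha * q_new, with alpha = 0.5 in this work
void relaxPower(std::vector<double> & q_old,
                const std::vector<double> & q_new, double alpha = 0.5)
{
  for (std::size_t i = 0; i < q_old.size(); ++i)
    q_old[i] = (1.0 - alpha) * q_old[i] + alpha * q_new[i];
}
```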
5. RESULTS
Coupled convergence between OpenMC, MOOSE heat conduction, and THM is achieved in 3 Picard iterations. Fig. 6 shows the OpenMC fission power distribution. The inset shows the power distribution on the $x$-$y$ plane where the maximum power occurs. Slight azimuthal asymmetries exist due to the non-zero tally uncertainty. Neutrons reflecting from the axial reflectors cause local power peaking at the ends of the assembly, while the negative fuel temperature coefficient causes the power distribution to shift towards the reactor inlet where temperatures are lower. The six corner poison compacts induce local power depression; enhanced moderation and distance from the poisons result in high center-assembly powers.
Figure 6: OpenMC fission power; note the use of a separate color scale for the inset.
Fig. 7 shows the MOOSE solid temperature (left) and the cell temperature imposed in OpenMC (right). The bottom row shows the temperature on an $x$-$y$ plane. The color scales differ because the volume (top) and slice (bottom) plots are scaled to the max/min visible temperature. The insulated BCs, combined
with a lower “density” of coolant channels near the lateral faces, result in higher compact temperatures
near the assembly peripheries, except in the vicinity of the poison pins. Each OpenMC cell is set to the
volume-average temperature from the mesh mirror elements whose centroids mapped to each cell; a similar
procedure is used to set cell temperatures and densities for the fluid cells.
Fig. 8 shows the MOOSE solid temperature on several $x$-$y$ planes with the THM fluid temperature
shown as tubes. An inset shows the fluid temperature on the outlet plane. The absence of compacts in
the center region results in lower fluid temperatures in this region, while the highest fluid temperatures are
observed for channels surrounded by 6 compacts that are sufficiently close to the periphery to be affected
by the lateral insulated BCs. Finally, Fig. 9 shows the radially-averaged (a) fission distribution and fluid,
compact, and graphite temperatures; and (b) velocity and pressure as a function of axial position. The
negative temperature feedback results in an inlet-peaked power distribution. The fuel temperature peaks
near the mid-plane due to the combined effects of the high power density and the fluid temperature, which
increases monotonically from the inlet. The pressure gradient is nearly constant with axial position. Due
to mass conservation, the heating of the fluid causes the velocity to increase with distance from the inlet.
Figure 7: MOOSE solid temperature (left) and volume-average temperature imposed in OpenMC
(right). Insets show a plane at $z = 2.15$ m from the inlet. Note the use of separate color scales.
Figure 8: Fluid temperature predicted by THM (tubes and inset) and solid temperature predicted
by MOOSE (five slices). Note the use of three separate color scales.
The total runtime is 464 core·hours, broken down as 93.4% solving OpenMC, 1.6% solving MOOSE +
THM, 1.6% transferring data, and 3.4% on other activities such as computing user objects and writing
output files. We expect that the implementation of delta tracking in OpenMC, along with more aggressive physics-based relaxation schemes that decrease the total number of Picard iterations, will significantly reduce this cost.
6. CONCLUSIONS
This paper described Cardinal, an application that couples OpenMC and NekRS to MOOSE. The data
transfers and coupling algorithms use in-memory communication, distributed parallel meshes, and continuous field transfers without requirements on conformal geometries/meshes/cells. As part of a CoE project
between ANL, INL, and Radiant, this paper presented a multiphysics coupling of OpenMC, THM, and
MOOSE heat conduction for a prismatic gas reactor assembly using Cardinal. By leveraging existing
MOOSE data transfer and field interpolation systems, OpenMC was coupled — without any additional
source code developments — to mixed-dimension 1-D/3-D fluid and solid feedback. Coupled physics
predictions were then provided for the distribution of fission reaction rates, temperatures, pressure, and
velocities. Future work will expand this analysis from a single fuel assembly to a full core, with select
“high-importance” flow channels modeled using NekRS CFD, while also exploring the multiscale TRISO
models to better capture the heterogeneous solid temperatures for neutronics feedback.
(a) Temperatures and power (b) Pressure and velocity
Figure 9: Radially-averaged temperatures, power, pressure, and velocity as a function of position.
ACKNOWLEDGEMENTS
Argonne National Laboratory’s work was supported by the U.S. Department of Energy, Office of Nuclear
Energy, NEAMS program, under contract DE-AC02-06CH11357.
REFERENCES
[1] C. J. Permann et al. “MOOSE: Enabling massively parallel multiphysics simulation.” SoftwareX,
volume 11, p. 100430 (2020).
[2] A. Novak et al. “Pronghorn: A Multidimensional Coarse-Mesh Application for Advanced Reactor
Thermal Hydraulics.” Nuclear Technology, volume 207, pp. 1015–1046 (2021).
[3] J. Hales et al. “BISON Theory Manual.” INL/EXT-13-29930 Rev. 3 (2016).
[4] P. Fischer et al. “NekRS, a GPU-Accelerated Spectral Element Navier-Stokes Solver.” (2021).
ArXiv:2104.05829.
[5] P. Romano et al. “OpenMC: A State-of-the-Art Monte Carlo Code for Research and Development.”
Annals of Nuclear Energy, volume 82, pp. 90–97 (2015).
[6] P. Fischer et al. “Highly Optimized Full-Core Reactor Simulations on Summit.” (2021).
ArXiv:2110.01716.
[7] A. Novak et al. “Coupled Monte Carlo Transport and Conjugate Heat Transfer for Wire-Wrapped
Bundles Within the MOOSE Framework.” In Proceedings of NURETH (2022).
[8] A. Huxford et al. “Development of Innovative Overlapping-Domain Coupling Between SAM and
nekRS.” In Proceedings of NURETH (2022).
[9] R. A. Berry et al. “RELAP-7 Theory Manual.” INL/EXT-14-31366 (2014).
[10] “High-Temperature Gas-Cooled Test Reactor Point Design.” INL/EXT-16-38296 (2016).
[11] A. Novak et al. “Multiscale Thermal-Hydraulic Modeling of the Pebble Bed Fluoride-Salt-Cooled
High-Temperature Reactor.” Annals of Nuclear Energy, volume 154, p. 107968 (2021).
[12] E. Shemon et al. “MOOSE Framework Enhancements for Meshing Reactor Geometries.” In Proceedings of PHYSOR (2022).