Computer Methods in Applied Mechanics and Engineering manuscript No.
(will be inserted by the editor)
Equivariant geometric learning for digital rock physics: estimating
formation factor and effective permeability tensors from Morse graph
Chen Cai ·Nikolaos Vlassis ·Lucas Magee ·Ran Ma ·
Zeyu Xiong ·Bahador Bahmani ·Teng-Fong Wong ·Yusu
Wang ·WaiChing Sun
April 13, 2021
Abstract
We present an SE(3)-equivariant graph neural network (GNN) approach that directly predicts the
formation factor and effective permeability from micro-CT images. FFT solvers are established to compute
both the formation factor and effective permeability, while the topology and geometry of the pore space
are represented by a persistence-based Morse graph. Together, they constitute the database for training,
validating, and testing the neural networks. While the graph and Euclidean convolutional approaches
both employ neural networks to generate low-dimensional latent space to represent the features of the
micro-structures for forward predictions, the SE(3) equivariant neural network is found to generate more
accurate predictions, especially when the training data is limited. Numerical experiments have also shown
that the new SE(3) approach leads to predictions that fulfill the material frame indifference whereas the
predictions from classical convolutional neural networks (CNN) may suffer from spurious dependence on
the coordinate system of the training data. Comparisons among predictions inferred from training the CNN
and those from graph convolutional neural networks (GNN) with and without the equivariant constraint
indicate that the equivariant graph neural network seems to perform better than the CNN and GNN without
enforcing equivariant constraints.
Keywords effective permeability, graph neural network, equivariance, objectivity, deep learning
1 Introduction
Formation factor and effective permeability are both important hydraulic and engineering properties of
porous media. While formation factor measures the relative electric resistance of the void space of the porous
media to the reference fluid, the effective permeability measures the ability of the fully fluid-saturated
porous media to transmit fluid under a given pressure gradient. Predicting effective permeability and
conductivity from micro-structures has been a crucial task for numerous science and engineering disciplines.
Ranging from biomedical engineering (cf. Cowin and Doty [2007]), geotechnical engineering (e.g. Terzaghi
et al. [1996]), nuclear waste disposal to petroleum engineering, the knowledge on permeability may affect
the economical values of reservoirs, the success of tissue regeneration, and the likelihood of landslides and
earthquakes [White et al.,2006,Sun et al.,2011a,2014,Kuhn et al.,2015,Paterson and Wong,2005,Suh and
Sun,2021].
Effective permeability can be estimated via the establishment of porosity-formation-factor-permeability
relationship (cf. Jaeger et al. [2009], Bosl et al. [1998]). This relationship can be completely empirical and
established via regression analysis or they can be based on theoretical justification such as fractal geometry
Chen Cai, Lucas Magee, Yusu Wang
Halicioglu Data Science Institute, University of California, San Diego, California
Nikolaos N. Vlassis, Ran Ma, Zeyu Xiong, Bahador Bahmani, WaiChing Sun
Department of Civil Engineering and Engineering Mechanics, Columbia University, New York, New York
Teng-fong Wong
Department of Geosciences, Stony Brook University, Stony Brook, New York
arXiv:2104.05608v1 [cs.LG] 12 Apr 2021
assumption Costa [2006] and percolation threshold [Mavko and Nur,1997]. The recent advancement of
tomographic imaging techniques has made it possible to obtain 3D images of microstructures of various
porous media [Arns et al.,2004,Fredrich et al.,2006,Sun and Wong,2018,Sun et al.,2011b]. These newly
available images provide a new avenue to establish a fuller and precise picture of the relationship between
the pore geometry and macroscopic transport properties. If image segmentation is conducted properly, the
pore space inferred from micro-CT images can be directly used to obtain formation factor and effective
permeability through inverse problems. Finite volume, finite element as well as recently lattice Boltzmann
and Fast Fourier transform solvers can all be employed to solve Poisson’s and Stokes’ equation to obtain
formation factor and effective permeability. These image-based simulations and inverse problems have led
to an emerging new research area often referred to as Digital Rock Physics (DRP) [Andrä et al.,2013a,b]. By
using images to predict physical properties, the digital rock physics framework could be viewed as a digital
twin that enables a non-destructive way to infer material properties in a more repeatable and cost-efficient
manner compared to physical tests of which the results are regarded as the ground truth.
Nevertheless, conducting 3D simulations on images can be quite costly, as the computational time and
memory requirement both scale up with the domain size and become the technical barriers for the digital
rock physics approach. Alternatively, recent advancements in convolutional neural networks have
led to many attempts to directly infer relationships between the 3D images and the resultant effective
permeability.
Among the earlier works, Srisutthiyakorn [2016] shows the application of CNN and multi-layer
perceptron (MLP) frameworks for predicting scalar-valued permeability of rock samples directly from the
2D and 3D images of rock samples, instead of detailed numerical simulations. Wu et al. [2018] proposed
a novel CNN architecture that utilizes other material descriptors such as porosity in the CNN’s fully
connected layer. They show that extra physical or geometrical features may enhance the prediction capacity.
Sudakov et al. [2019] study the effect of feature engineering among various geometrical descriptors on
the accuracy of the permeability regression task. They empirically show that the conventional 3D CNN
outperforms 2D CNN and MLP for permeability prediction. The diffusivity of synthetic 2D porous rock
samples is successfully predicted for a wide range of porosity values via a CNN architecture enhanced by
incorporating field knowledge (the same idea as Wu et al. [2018]), pre-processing of the image input, and
customizing the loss function [Wu et al.,2019]. These earlier studies focus on developing surrogate models
for the scalar-valued quantities, while the permeability is usually an anisotropic tensor-valued quantity for
heterogeneous rock samples.
Santos et al. [2020] introduce a convolutional neural network framework called PoreFlow-Net that
directly predicts flow velocity field at each voxel of the input image then infer the effective permeability via
inverse problems. They incorporate additional geometrical and physical features, e.g., Euclidean distance
map, in their proposed CNN pipeline to inform the neural network about local and global boundary
conditions. They use the L1-norm loss function that makes the prediction more robust to outliers compared
to the mean squared error loss function. They show that the proposed model generalizes well in dealing
with complex geometries not seen in the training stage. While the proposed work shows good matches
between predicted scalar permeability and the benchmark values, the framework has yet to be extended to
predict anisotropic permeability, which may require more simulation data and predictions of more complex
flow patterns. Furthermore, as the classical CNN architecture cannot admit rotational symmetry groups to
conform with the frame-indifference property of the flow field, an intractable amount of rotated data
may be required for augmentation in 3D applications.
Despite the aforementioned progress, training classical convolutional neural networks that leads to
robust property predictions is not trivial. First, a fundamental issue that may lead to erroneous prediction is
that the classical CNN is not designed to give predictions independent of coordinate frames. For instance, if
the micro-CT image is rotated by a rotation tensor $R$, then the same permeability tensor represented by the
old and new coordinate systems should be related by
$$ k' = R\, k\, R^T, \qquad (1) $$
whereas the eigenvalues and the three invariants of the permeability tensors should be independent of
the frame. Furthermore, the effective permeability is expected to remain constant when the images are
in rigid body translation. However, while the earlier designs of convolutional neural network do exhibit
translational equivariant behavior, the predictions may suffer from spurious dependence due to the rotation
of the observer [Cohen and Welling,2016]. As such, the resultant predictions on effective permeability and
formation factor tensors may therefore exhibit sensitivity on the frame of reference and hence reduce the
quality of the predictions. Another important issue that deserves attention is the representation of the pore
structures. While it is common to use the binary indicator function on the Euclidean voxel space of the binary
image to represent the pore structures, such an approach may not be the most efficient and convenient. For
example, the Euclidean representation may incorrectly incorporate ring artifacts into the encoded feature
vector. Furthermore, the micro-CT images are often sufficiently large that the 3D convolutional layer may
demand significant memory and GPU time and hence make the training expensive [Vlassis et al.,2020,
Frankel et al.,2019].
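The frame-indifference requirement of Eq. (1) can be checked numerically. In the sketch below, `predict` is a hypothetical stand-in for a trained model mapping pore coordinates to a symmetric permeability-like tensor; the second-moment tensor used here is equivariant by construction, so the check passes, whereas a voxel-based CNN would generally fail it without rotational data augmentation.

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Hypothetical stand-in for a learned predictor: maps a point cloud of
# pore coordinates to a symmetric 3x3 tensor. The second-moment tensor
# is SO(3)-equivariant by construction.
def predict(points):
    return points.T @ points / len(points)

rng = np.random.default_rng(1)
pts = rng.standard_normal((100, 3))              # toy pore-network coordinates
R = Rotation.random(random_state=2).as_matrix()  # random change of frame

k = predict(pts)
k_rot = predict(pts @ R.T)                       # predict on the rotated input
assert np.allclose(k_rot, R @ k @ R.T)           # Eq. (1): k' = R k R^T
```

The same test, applied to a trained network, is how the spurious frame dependence discussed above can be diagnosed.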
The goal of this research is to overcome these two major technical barriers to enable faster and more
accurate and rigorous predictions on the hydraulic properties (effective permeability and formation factor).
In particular, we introduce three new techniques that have yet to be considered for geomechanics predictions,
(1) the representation of pore geometry via Morse graph, (2) the equivariant neural network that generates
predictions equivariant with respect to 3D rotation, and (3) the graph convolutional neural network that
generates low-dimensional features to aid predictions for the formation factor and effective permeability.
Our results show that the graph neural network (GNN) consistently performs better than the classical
convolutional neural network counterpart and the SE(3) equivariant neural network performs the best
among other options.
This paper is organized as follows. We first provide a brief account of how the database is generated.
Our goal is to introduce supervised learning where the inputs are the pore space represented by either a
binary indicator function in the Euclidean physical domain or a Morse graph that represents the topology of
the pore space, and the outputs are the formation factor and permeability.
2 Database generation via direct numerical simulations with FFT solvers
To generate the database, we use a fast Fourier transform (FFT) solver to infer both effective permeability and
formation factor from the micro-CT images. For completeness, the procedures to obtain the formation factor
and effective permeability are outlined in the two sub-sections. The results are summarized in Figure 1.
Fig. 1: Eigenvalue distributions for the permeability and formation factor inferred from FFT simulations
performed on sub-volumes of a sandstone micro-CT image. The image voxel resolution is 8 microns and each
sub-volume is 150 × 150 × 150 voxels.
2.1 Effective permeability inferred from micro-CT images
Consider a unit cell $\Omega = \prod_\alpha [-Y_\alpha, Y_\alpha]$ defined in the Euclidean space $\mathbb{R}^d$, with $e_\alpha$ being the $\alpha$-th basis vector. A scalar-valued function $f: \mathbb{R}^d \to \mathbb{R}$ is called a periodic function if
$$ f\left(x + \sum_{\alpha=1}^{d} Y_\alpha k_\alpha e_\alpha\right) = f(x), \quad \forall x \in \mathbb{R}^d \text{ and } k \in \mathbb{Z}^d. \qquad (2) $$
Furthermore, the Sobolev space of periodic functions $H^s_\#(\mathbb{R}^d)$ is defined as
$$ H^s_\#(\mathbb{R}^d) = \{ f \mid f \in H^s(\mathbb{R}^d),\ f \text{ is periodic} \}. \qquad (3) $$
The unit cell $\Omega$ is further divided into the solid skeleton $\Omega_s$ and the fluid-filled void $\Omega_f$, with $\Omega_f \cup \Omega_s = \Omega$ and $\Omega_f \cap \Omega_s = \emptyset$. The solid skeleton is a closed set with a periodic pattern, i.e., $\bar{\Omega}_s = \Omega_s$. The local pressure $p: \Omega \to \mathbb{R}$ within the unit cell is split into an average part and a fluctuation term,
$$ \nabla_x p(x) = G + \nabla_x \tilde{p}(x), \qquad \langle \nabla_x \tilde{p}(x) \rangle = \frac{1}{V} \int_\Omega \nabla_x \tilde{p}(x)\, \mathrm{d}V = 0, \qquad (4) $$
where $G \in \mathbb{R}^d$ is the macroscopic pressure gradient and $\tilde{p}(x): \Omega \to \mathbb{R}$ is the microscopic pressure perturbation. Suppose that the incompressible viscous fluid flows within the void region with the flow velocity $v: \Omega \to \mathbb{R}^d$, such that the velocity field and the pressure field fulfill the incompressible Stokes equation
$$ -\mu_f \nabla_x^2 v(x) + \nabla_x p(x) = 0, \quad \nabla_x \cdot v = 0, \quad x \in \Omega_f; \qquad v(x) = 0, \quad x \in \Omega_s, \qquad (5) $$
where $\mu_f$ is the viscosity. Then, the permeability tensor $\kappa$ of the porous media is computed as
$$ \langle v \rangle = -\frac{\kappa}{\mu_f} G. \qquad (6) $$
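Because Eq. (6) relates the averaged velocity linearly to $G$, the full permeability tensor can be assembled column by column from $d$ solves with unit macroscopic pressure gradients. The sketch below illustrates this bookkeeping; `solve_stokes` is a hypothetical stand-in for the FFT Stokes solver (here it simply applies a known $\kappa$, so the recovery can be verified end to end).

```python
import numpy as np

# Stand-in data: a known anisotropic permeability tensor and viscosity,
# chosen only so the column-wise recovery of Eq. (6) can be checked.
mu_f = 1.0e-3
kappa_true = 1e-12 * np.array([[2.0, 0.1, 0.0],
                               [0.1, 1.5, 0.2],
                               [0.0, 0.2, 1.0]])

def solve_stokes(G):
    # hypothetical solver: returns the volume-averaged velocity <v>
    # under macroscopic pressure gradient G (here, by applying Eq. (6))
    return -(kappa_true / mu_f) @ G

kappa = np.zeros((3, 3))
for j in range(3):
    G = np.eye(3)[:, j]                  # unit macroscopic pressure gradient
    kappa[:, j] = -mu_f * solve_stokes(G)  # invert Eq. (6) column-wise
assert np.allclose(kappa, kappa_true)
```

In the actual pipeline, `solve_stokes` would be the conjugate-gradient FFT scheme described next.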
An FFT-based method is utilized to solve the Stokes equation (5) within the porous media [Monchiet et al.,2009]. A fourth-order compliance tensor $M(x)$ is first introduced,
$$ \mathrm{sym}(\nabla_x v) = M(x) : \sigma, \qquad M(x) = \begin{cases} \dfrac{1}{2\mu_f}\left(\mathbb{I} - \dfrac{1}{3}\, I \otimes I\right), & x \in \Omega_f, \\ 0, & x \in \Omega_s, \end{cases} \qquad (7) $$
where $\mathbb{I}$ is the fourth-order identity tensor, $I$ is the second-order identity tensor, and a small value $\zeta$ is introduced to maintain the stress continuity within the solid skeleton. The incompressible Stokes equation (5) is equivalently written as
$$ \Gamma(x) * \left[M(x) : \left(\sigma^0(x) + \delta\sigma(x)\right)\right] = 0 \;\Longrightarrow\; \Gamma(x) * \left(M(x) : \delta\sigma(x)\right) = -\Gamma(x) * \left(M(x) : \sigma^0(x)\right), \qquad (8) $$
where $*$ denotes the convolution operator. The fourth-order Green's tensor $\Gamma$ is defined in the frequency space as [Nguyen et al.,2013]:
$$ \hat{\Gamma} = 2\mu_0 \left(\beta \otimes \beta + \beta \,\bar{\otimes}\, \beta\right), \qquad \beta = I - \frac{1}{|\xi|^2}\, \xi \otimes \xi, \qquad (a \,\bar{\otimes}\, b)_{ijkl} = \frac{1}{2}\left(a_{ik} b_{jl} + a_{il} b_{jk}\right), \qquad (9) $$
where $\xi_\alpha = k_\alpha / (2Y_\alpha)$, $k \in \mathbb{Z}^d$, and $\mu_0$ is a reference viscosity. For $\xi = 0$, $\hat{\Gamma} = 0$ because of the zero-mean property. For an arbitrary second-order tensor field $T(x)$, $\Gamma * T$ is divergence-free and its spatial average vanishes. Also, $\Gamma * \varepsilon = 0$ for any compatible strain field $\varepsilon(x) = \mathrm{sym}(\nabla_x u)$. The macroscopic pressure gradient $G$ is introduced through the initial guess of the stress distribution $\sigma^0$ [Monchiet et al.,2009,Nguyen et al.,2013]:
$$ \sigma^0 = \Lambda(x) \cdot f(x), \qquad (10) $$
where $f(x)$ extends the macroscopic pressure gradient $G$ from the void $\Omega_f$ to the unit cell $\Omega$, and $\Lambda(x)$ projects $f$ to a divergence-free stress field. The third-order tensor field $\Lambda(x)$ and the vector field $f(x)$ are derived as
$$ \Lambda_{ijk}(\xi) = \begin{cases} \dfrac{i}{|\xi|^4}\left[\left(\delta_{ij}\xi_k + \delta_{ik}\xi_j + \delta_{jk}\xi_i\right)|\xi|^2 - 2\,\xi_i \xi_j \xi_k\right], & \xi \neq 0, \\ 0, & \xi = 0, \end{cases} \qquad (11) $$
$$ f(x) = \begin{cases} G, & x \in \Omega_f, \\ -\dfrac{1 - c_s}{c_s}\, G, & x \in \Omega_s, \end{cases} \qquad (12) $$
where $c_s$ denotes the volume fraction of the solid phase.
Equation (8) is linear and is solved by the conjugate gradient method. Figure 2 shows the streamlines of
flow velocity obtained from four FFT simulations performed on RVEs with porosity ranging from 0.208 to 0.257.
To aid the visualization, the color map of the streamline is plotted on the logarithmic scale. Note that the
true flow velocity is equal to Darcy's velocity divided by the porosity (assuming that the area and volume
fractions of void are equal) [Bear,2013]. Hence, if two microstructures are of the same effective permeability
but with different porosities, the one with the lower porosity is expected to have the higher flow velocity. What
we observed in the streamlines plotted in Figure 2 nevertheless indicates that the flow speed is lower in the
RVE of lower porosity. This observation is consistent with the porosity-permeability relationship established
in previous work such as Andrä et al. [2013b].
2.2 Formation factor inferred from micro-CT images
The formation factor $F$, which is defined as the ratio between the fluid conductivity and the effective
conductivity of the saturated porous rock, is used to quantify the electric resistivity of a porous media
relative to that of water. The formation factor is considered a physical property that solely depends on
the geometry and topology of the pore space in the porous media [Archie et al.,1942]. Previous studies have
shown that the formation factor is significantly influenced by the electric double layer between the solid
phase and the liquid phase, and that the surface conductivity should be taken into consideration to perform a
micro-CT image-based formation factor prediction at the pore scale [Zhan et al.,2010].
Suppose that all the related physical quantities, including the local conductivity $A$, the current density $J$, and the electrostatic potential gradient $e$, are defined within the periodic unit cell $\Omega$; the charge conservation equation reads:
$$ \nabla_x \times e(x) = 0, \qquad J(x) = A(x) \cdot e(x), \qquad \nabla_x \cdot J(x) = 0. \qquad (13) $$
The local electrostatic potential gradient field $e(x)$ is further split into its average $E$ and a fluctuation term $\delta e(x) \in H^1_\#(\mathbb{R}^d)$:
$$ e(x) = E + \delta e(x), \qquad \langle \delta e(x) \rangle = 0. \qquad (14) $$
The charge conservation equation (13) defined within the periodic domain $\Omega$ is equivalently written as the periodic Lippmann-Schwinger equation:
$$ G(x) * \left[A(x) \cdot \left(E + \delta e(x)\right)\right] = 0 \;\Longrightarrow\; G(x) * \left[A(x) \cdot \delta e(x)\right] = -G(x) * \left[A(x) \cdot E\right]. \qquad (15) $$
The second-order Green's operator $G$ is provided in the frequency space as [Vondřejc et al.,2014]:
$$ \hat{G}(\xi) = \frac{\xi \otimes \xi}{\xi \cdot A^0 \cdot \xi}, \qquad (16) $$
where $A^0$ is a reference conductivity. For $\xi = 0$, $\hat{G} = 0$ because of the zero-mean property. By applying the average electric potential gradient $E$ upon the unit cell, the local current density $J$ is obtained by solving Equation (15) with the conjugate gradient method.
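Both Eq. (8) and Eq. (15) are solved matrix-free: the linear operator is only ever applied, never assembled. A minimal sketch of that structure with SciPy's conjugate gradient, where the dense SPD matrix is a stand-in for the FFT-based operator action (voxel-wise material law plus Green operator applied via forward/inverse FFTs):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Stand-in symmetric positive-definite operator; in the actual solver
# the matvec would apply A(x)·(·) voxel-wise and the Green operator in
# Fourier space instead of a dense matrix product.
n = 64
rng = np.random.default_rng(0)
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # SPD, accessed only through matvecs

op = LinearOperator((n, n), matvec=lambda x: A @ x, dtype=float)
b = rng.standard_normal(n)
x, info = cg(op, b)                  # matrix-free conjugate gradient
assert info == 0
assert np.linalg.norm(A @ x - b) <= 1e-4 * np.linalg.norm(b)
```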
It has been demonstrated that the anisotropic conductivity within the interfacial region has a strong
influence on the formation factor due to the presence of electric double layers between the solid and liquid
constituents [Weller et al.,2013,Bussian,1983]. However, the thickness of the double layer is usually smaller
than the voxel length. As such, running the FFT electric conductance simulations on the three-phase porous
media consisting of solid, double layer, and water is not feasible. In this work, we extend the upscaling
approach in [Zhan et al.,2010] to first compute the tensorial effective conductivity of the surface voxels such
that the voxels at the solid-fluid interfacial regions are upscaled to effective values analytically. Then, the
FFT simulation is run to determine the effective conductivity of the entire RVE (see Figure 3).

Fig. 2: Streamlines of the flow velocity for four RVEs with porosity = 0.257 (a, upper left), 0.228 (b, upper
right), 0.223 (c, lower left), and 0.208 (d, lower right) obtained from FFT direct numerical simulations
in which the effective permeabilities are extracted.
The interface was first identified from the three-dimensional binary image and labeled as yellow
voxels with the unit normal vectors in Figure 3 (LEFT). To extract the interface, each voxel was analyzed
along with the 26 closest neighboring voxels in the 3x3x3 window centered around it. The isolated voxels,
whose neighbors are all in the opposite phase, were eliminated. Then the solid voxels neighboring the fluid
were selected to represent the interface if they have at least 9 fluid neighboring voxels. Furthermore, as
shown in Figure 3 (RIGHT), the interface voxels were modeled as electric double layers, consisting of a layer
of fluid and a layer with conductivity $\sigma_{surf}$. The normal component $\sigma_n$ of the interface conductivity was
calculated as two electrical resistors in series, while the tangential component $\sigma_t$ was computed by resistors
in parallel, as shown in (17):
$$ \sigma_t = \frac{L - \chi_d}{L}\, \sigma_f + \frac{\chi_d}{L}\, \sigma_{surf}, \qquad \sigma_n = \left[ \frac{L - \chi_d}{L}\, \sigma_f^{-1} + \frac{\chi_d}{L}\, \sigma_{surf}^{-1} \right]^{-1}, \qquad (17) $$
where $L$ is the voxel length, $\chi_d$ is the thickness of the surface layer, and $\sigma_f$ is the fluid conductivity. To determine
the normal vectors of the interface, the signed distance function was generated by the open-source software
ImageJ (cf. Abràmoff et al. [2004]), in which the value of each voxel represents its shortest distance to the
interface.

Fig. 3: Surface and unit normal vectors between solid and fluid phases (LEFT) and structure of the electric
double layer (RIGHT).

The normal vectors $n_1$ were then determined by the gradient of the signed distance function, as
shown in Figure 3 (LEFT). The upscaled electric conductivity tensors $\sigma_3$ of the interface voxels are calculated
via
$$ \sigma_3 = \sigma_n\, n_1 \otimes n_1 + \sigma_t\, n_2 \otimes n_2 + \sigma_t\, n_3 \otimes n_3, \qquad (18) $$
where $n_1$ is the normal vector of the solid surface and $n_2$ and $n_3$ are two orthogonal unit vectors that
span the tangential plane of the surface. This approach differs from Zhan et al. [2010] in that
the analytical electric conductivity at the solid-fluid boundary is tensorial and anisotropic instead of
volume-averaged and isotropic. Such a treatment enables more precise predictions of the anisotropic
surface effect on the electric conductivity, which is important for estimating the conductivity tensor.
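A small numeric sketch of Eqs. (17)-(18); the voxel length, layer thickness, and conductivities below are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

# Assumed illustrative values (not from the paper)
L, chi_d = 8.0, 0.01            # voxel length and surface-layer thickness
sigma_f, sigma_surf = 5.0, 0.5  # fluid and surface-layer conductivities

# Eq. (17): resistors in parallel (tangential) and in series (normal)
sigma_t = (L - chi_d) / L * sigma_f + chi_d / L * sigma_surf
sigma_n = ((L - chi_d) / L / sigma_f + chi_d / L / sigma_surf) ** -1

# Eq. (18): upscaled conductivity tensor for an interface voxel whose
# surface normal n1 is the z-axis; n2, n3 span the tangential plane
n1, n2, n3 = np.eye(3)[2], np.eye(3)[0], np.eye(3)[1]
sigma3 = (sigma_n * np.outer(n1, n1)
          + sigma_t * np.outer(n2, n2)
          + sigma_t * np.outer(n3, n3))

# the series (normal) component is the smallest eigenvalue of the tensor
assert np.isclose(np.linalg.eigvalsh(sigma3)[0], sigma_n)
```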
Once all voxels in the spatial domain are assigned the corresponding electrical conductivity, we
obtain a heterogeneous domain where the electric conductivity may vary spatially. The effective conductivity
tensor can be obtained through computational homogenization via the FFT solver described above. The
effective conductivity $\sigma_{eff}$ is defined as the second-order tensor that projects the applied electric potential
gradient $E$ to the homogenized current density as:
$$ \langle J(x) \rangle = \sigma_{eff} \cdot E. \qquad (19) $$
Since the charge conservation equation (13) is linear, the effective conductivity $\sigma_{eff}$ is constant for each
micro-CT image. Then, the formation factor $F$ is defined as the ratio between the fluid conductivity $\sigma_f$ and
the effective conductivity $\sigma_{eff}$ as:
$$ F = \sigma_f\, \sigma_{eff}^{-1}. \qquad (20) $$
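Eq. (20) amounts to a single matrix inversion per image. A sketch with an illustrative (assumed) effective-conductivity tensor:

```python
import numpy as np

# Illustrative values, not paper data
sigma_f = 5.0                                   # fluid conductivity
sigma_eff = np.array([[0.50, 0.02, 0.00],
                      [0.02, 0.40, 0.01],
                      [0.00, 0.01, 0.35]])      # symmetric, positive definite

F = sigma_f * np.linalg.inv(sigma_eff)          # Eq. (20)
assert np.allclose(F, F.T)                      # F inherits the symmetry (cf. Remark 1)
```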
Figure 4 shows the simulated streamlines of the electric current along the x-axis for the same RVEs
plotted in Figure 2. As expected, the RVEs with larger porosity tend to be more conductive. However, it
is also obvious that the streamlines in Figures 2 and 4 exhibit significantly different patterns. This
difference in the streamline pattern could be attributed to the surface effect of the electric conductivity and
could be affected by the periodic boundary condition enforced by the FFT solver.

Regardless of the reasons, the streamline patterns of the inverse problems may dictate the difficulty of
the neural network predictions of the formation factor and permeability and will be explored further in
Section 5.
Remark 1 The homogenized permeability $\kappa$ is non-symmetric in general since the macroscopic pressure gradient $G$ and the homogenized velocity $\langle v \rangle$ are not power-conjugate to each other. In fact, the homogenized stress $\langle \sigma \rangle$ and the homogenized velocity gradient $\langle \nabla_x v \rangle$ are power-conjugate and their inner product equals the input power due to the Hill-Mandel condition. Similar results have also been reported previously [White et al.,2006,Sun and Wong,2018]. On the other hand, the formation factor $F$ is symmetric, since the homogenized current density $\langle J \rangle$ and the applied electric potential gradient $E$ are power-conjugate to each other.
Fig. 4: Streamlines of the gradient of electric current for four RVEs with porosity = 0.257 (a, upper left),
0.228 (b, upper right), 0.223 (c, lower left), and 0.208 (d, lower right) obtained from FFT direct numerical
simulations in which the effective conductivities are extracted.
3 Discrete Morse Graph Generation
To predict permeability, one could directly apply a CNN-type architecture to the input 3D images. However,
recognizing that the effective permeability is in fact determined by the pores among grains, it is intuitive
that a representation capturing such a pore network is more informative and provides a better inductive bias
for the learning algorithm.
Our framework will first construct a graph-skeleton representation of the "pore" networks from the 3D
images, using the discrete Morse-based algorithm of [Dey et al.,2018], which we refer to as the DM-algorithm
herein. Below we first briefly provide the intuition behind this algorithm. We then describe the enrichment
of such a DM-graph representation, which will be used as input for a GNN architecture. Note that our
experimental results (presented in Tables 1 and 2) show that such a graph skeleton + GNN framework
indeed leads to much better prediction accuracy compared to an image + CNN-based framework.
3.1 Discrete Morse based graph skeletonization
To provide the intuition of the DM-algorithm of Wang et al. [2015], Dey et al. [2018], first consider the
continuous case where we have a density function
$\rho: \mathbb{R}^d \to \mathbb{R}$. In our setting, an image can be thought of as a discretization of a function on $\mathbb{R}^3$. If we view the
graph of this density function as a terrain in $\mathbb{R}^{d+1}$ (see Figure 5A), then its graph skeleton can be captured
by the "mountain ridges" of this density terrain, as intuitively, the density on such ridges is higher than the
density off the ridges. Mathematically, such mountain ridges can be described by the so-called 1-stable
manifolds from Morse theory, which intuitively correspond to curves connecting maxima (mountain peaks)
to saddles (of index $(d-1)$) and to other maxima, forming boundaries separating different basins/valleys in
the terrain. To simplify the terrain and only capture "important" mountain ridges, we use persistent
homology [Zomorodian and Carlsson,2005], one of the most important developments in the field of
topological data analysis in the past two decades, and simplify those less important mountain peaks/valleys
(deemed as noise). As the graph is computed via the global mountain-ridge structures, this approach is very
robust to gaps in signals, as well as non-uniform distributions of signals.

Fig. 5: An example of discrete Morse graph reconstruction on 2D data. (A) A single image from our 3D stack.
(B) The signed distance from the boundary of (A). (C) The image (B) is converted into a triangulation and
density function. The discrete Morse graph reconstruction then captures the mountain ridges of the density
function. (D) These ridges capture the maximal distance from the boundary of (A).
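The persistence-driven simplification can be illustrated in one dimension, where pairing peaks with their merge levels reduces to a union-find sweep from high to low function values. The function below is our own toy analog of that idea, not the DM-algorithm itself:

```python
import math

# 1D persistence of maxima: process samples from highest to lowest; when
# two hill components merge, the lower peak dies and its persistence is
# (peak height - merge height). The global maximum never dies.
def peak_persistence(values):
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i], reverse=True)
    parent = {}

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    pers = {}
    for i in order:
        parent[i] = i                       # component born at this sample
        for j in (i - 1, i + 1):
            if j in parent:                 # neighbor already above the level
                ri, rj = find(i), find(j)
                if ri != rj:
                    lo, hi = sorted((ri, rj), key=lambda r: values[r])
                    pers[lo] = values[lo] - values[i]   # lower peak dies
                    parent[lo] = hi
    pers[find(order[0])] = math.inf
    return {i: p for i, p in pers.items() if p > 0}     # keep true peaks only

# two peaks: height 2 (killed at the saddle of height 1, persistence 1)
# and the global maximum of height 3
assert peak_persistence([0, 2, 1, 3, 0]) == {1: 1, 3: math.inf}
```

Low-persistence peaks are exactly the ones the threshold $\delta$ discards in the 3D computation.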
In the discrete setting, our input is a cell complex as a discretization of the continuous domain. For
example, our input is a 3D image, which is a cubic complex decomposition of a subset of $\mathbb{R}^3$, consisting of
vertices, edges, square faces, and cube cells. The 1-stable manifolds in Morse theory are differential objects
and sensitive to discretization. Instead, one can use the discrete Morse theory developed by Robin Forman
[Forman,1995], which is not a discretization of the classical Morse theory, but rather a combinatorial analog
of it. Due to the combinatorial nature, the resulting algorithm is very stable, and simplification via persistent
homology can easily be incorporated. The final algorithm is conceptually clean and easy to implement.
Some theoretical guarantees of this algorithm are presented in [Dey et al.,2018] and it has already been
applied in applications, e.g., in geographic information systems and neuron-image analysis [Wang et al.,2015,
Dey et al.,2017,2019,Banerjee et al.,2020].
3.2 Morse Graph representation of pore space
For each simulation, we have a corresponding 150 x 150 x 150 binary image stack representing the rock
boundary. We first compute the signed distance to the boundary on this domain, shift all non-zero values
by 255 minus the maximum signed distance in the domain (this ensures the new maximum signed distance is
255), and apply a Gaussian filter with $\sigma = 1$. We then run the discrete Morse algorithm with a persistence
threshold $\delta = 48$.
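The pre-processing steps above can be sketched with SciPy on a small synthetic binary volume (the real stacks are 150^3; the toy geometry below is an assumption, while the 255-shift and $\sigma = 1$ follow the text):

```python
import numpy as np
from scipy import ndimage

# Toy pore space: a solid cube of "pore" voxels inside a 32^3 domain
pore = np.zeros((32, 32, 32), dtype=bool)
pore[8:24, 8:24, 8:24] = True

dist = ndimage.distance_transform_edt(pore)    # distance to the boundary
f = np.where(dist > 0, dist + (255.0 - dist.max()), 0.0)  # shift non-zero values
assert f.max() == 255.0                        # new maximum distance is 255
f = ndimage.gaussian_filter(f, sigma=1.0)      # smoothed input for the DM step
```

The smoothed field `f` is what the discrete Morse graph reconstruction would then consume.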
On the output Morse graph for each simulation, we assign features used later by the GNN. The features
assigned to each node are its coordinates $(x, y, z)$, its (Gaussian-smoothed) signed distance value, its original
signed distance value, whether or not the vertex is a maximum, and three node flow values. The node flow
values are calculated from the underlying vector field computed during the Morse graph calculation. Every
node in the domain will be connected to a single node in the Morse output in the vector field. The first node
flow feature simply counts the number of nodes that flow into that node, the second counts the number of
non-boundary nodes that flow into the node, and the third sums the function values of the nodes that flow
into the node. The features assigned to each edge are whether or not it is a saddle, its persistence value
if it is a saddle (all non-saddle edges have a persistence value below $\delta$, and we simply assign -1), its length, and
the minimum, average, and total function values of the vertices along the edge, for both the input Morse function
and the original signed distance function. Prior to removing degree-two nodes, as discussed in the next paragraph,
each edge has length 1, and zero is assigned to the minimum, average, and total function features. These
values are properly updated at every removal of a degree-two node.

Fig. 6: The Morse graph pipeline we use on our data. (A) A single image from our 3D stack. (B) The 3D
volume of the entire image stack. (C) The signed distance function from the boundary in (B). (D) The signed
distance function is shifted by 255 minus its maximum value at all locations where it is nonzero; then a
Gaussian filter with sigma equal to 1 is applied. This is the input function for the discrete Morse graph
reconstruction. (E) The discrete Morse graph reconstruction output (persistence threshold = 48) overlaid
on the original signed distance function. (F) The discrete Morse graph reconstruction output on the first
octant of the original signed distance function. By zooming in, we see that the graph accurately captures
the signed distance function.
Finally, for the sake of computational efficiency in training the GNN, we reduce the number of nodes
and edges in the graph by removing nodes of degree two that are not maxima or adjacent to a saddle edge.
The only node features from the original graph that need to be updated are the node flow features. Each
node removed lies along a unique edge connecting two vertices of the simplified graph. We add the removed
node's flow features to the node flow features of the node in the direction of the maxima on this edge. The
edge features mentioned previously are also updated to capture the information lost by removing nodes.
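The degree-two contraction can be sketched with networkx; here `keep` stands in for the maxima and saddle-adjacent nodes, and only the `length` feature is accumulated (the flow/function-value bookkeeping described above is omitted for brevity):

```python
import networkx as nx

# Remove degree-2 nodes outside `keep`, merging their two incident edges
# and accumulating the "length" edge feature.
def simplify(G, keep):
    H = G.copy()
    changed = True
    while changed:
        changed = False
        for v in list(H.nodes):
            if v not in keep and H.degree(v) == 2:
                nbrs = list(H.neighbors(v))
                if len(nbrs) == 2 and not H.has_edge(*nbrs):
                    u, w = nbrs
                    length = H[u][v]["length"] + H[v][w]["length"]
                    H.remove_node(v)
                    H.add_edge(u, w, length=length)
                    changed = True
    return H

G = nx.path_graph(4)                      # chain 0-1-2-3 with unit lengths
nx.set_edge_attributes(G, 1.0, "length")
H = simplify(G, keep={0, 3})
assert set(H.nodes) == {0, 3} and H[0][3]["length"] == 3.0
```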
4 Equivariant GNN
We now introduce the notion of equivariance and strategies to build equivariant neural networks. The
equivariant neural networks designed here take the Morse graph described in Section 3.2 as input and
output the formation factor and effective permeability tensors. One of the key challenges of predicting a
property tensor lies in frame indifference. Given a frame $B_1$, we can represent the position of each graph
node with respect to $B_1$ and obtain the model prediction with respect to $B_1$. If we change the frame
from $B_1$ to a different frame $B_2$, the model prediction with respect to $B_2$ will certainly differ
numerically, but in principle the two predictions under different frames represent the same geometric object.
The requirement that the model prediction be independent of an arbitrarily chosen frame is natural and can be
mathematically formulated as equivariance.
However, traditional neural networks have no such inductive bias built in and are therefore not ideal for
permeability prediction. To achieve frame indifference, we adapt recent progress on equivariant neural
networks, a class of neural networks that are equivariant to symmetry transformations. We first outline the
notions of a group, its representations, and feature types, and then introduce equivariance and strategies
for building an equivariant neural network.
Group, its representation, and feature types: Let $G$ be a set. We say that $G$ is a group with law of
composition $\star$ if the following axioms hold: 1) closure of $\star$: the assignment $(g, h) \mapsto g \star h \in G$ defines a
function $G \times G \to G$; we call $g \star h$ the product of $g$ and $h$. 2) existence of identity: there exists an
$e_G \in G$ such that for every $g \in G$, we have $g \star e_G = g = e_G \star g$; we call $e_G$ the identity of $G$. 3) existence of inverse:
for every $g \in G$, there exists an $h \in G$ such that $h \star g = e_G = g \star h$; we call such $h$ an inverse of $g$. 4)
associativity: for any $g, h, k \in G$ we have $(g \star h) \star k = g \star (h \star k)$.
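As a concrete numerical check (an illustrative sketch, not part of the paper's workflow), the four axioms can be verified for the rotation group SO(2) with matrix multiplication as the composition law $\star$:

```python
import numpy as np

def rot(theta):
    """2D rotation matrix: a group element of SO(2) under matrix multiplication."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

g, h, k = rot(0.3), rot(1.1), rot(-0.7)
e = np.eye(2)  # the identity element e_G

# Closure: the product of two rotations is again a rotation (by the summed angle).
assert np.allclose(g @ h, rot(0.3 + 1.1))
# Identity: g * e = g = e * g.
assert np.allclose(g @ e, g) and np.allclose(e @ g, g)
# Inverse: the transpose of a rotation matrix undoes it.
assert np.allclose(g @ g.T, e)
# Associativity: (g * h) * k = g * (h * k).
assert np.allclose((g @ h) @ k, g @ (h @ k))
```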
A group representation $\rho: G \to GL(N)$ is a map from a group $G$ to the set of $N \times N$ invertible matrices
$GL(N)$. Critically, $\rho$ is a group homomorphism; that is, it satisfies $\rho(g_1 \star g_2) = \rho(g_1)\rho(g_2)$
for any $g_1, g_2 \in G$, where the multiplication on the right side of the equality denotes matrix
multiplication. Specifically, the 3D rotation group $SO(3)$ has a few interesting properties: 1) its
representations are orthogonal matrices, and 2) all representations can be decomposed as
$$\rho(g) = Q^\top \left[ \bigoplus_{\ell} D_\ell(g) \right] Q, \qquad (21)$$
where $Q$ is an orthogonal $N \times N$ change-of-basis matrix whose entries are the Clebsch-Gordan coefficients, $D_\ell$ for $\ell =
0, 1, 2, \dots$ is a $(2\ell+1) \times (2\ell+1)$ matrix known as a Wigner-D matrix, and $\bigoplus$ denotes the direct sum of matrices
along the diagonal. Features transforming according to $D_\ell$ are called type-$\ell$ features. Type-0 features (scalars)
are invariant under rotations and type-1 features (vectors) rotate according to 3D rotation matrices. A rank-2
tensor decomposes into representations of dimension 1 (trace), 3 (antisymmetric part), and 5 (traceless
symmetric part). Symmetric tensors such as the permeability and formation factor are therefore $0 \oplus 2$
(type-0 plus type-2) features.
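The decomposition of a rank-2 tensor into these irreducible pieces can be illustrated in a few lines of NumPy (a minimal sketch; the dimensions 1 + 3 + 5 = 9 account for all components of a 3 x 3 tensor):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))  # an arbitrary rank-2 tensor

trace_part    = np.trace(T) / 3.0 * np.eye(3)   # 1 number  (type-0)
antisym       = 0.5 * (T - T.T)                 # 3 numbers (type-1)
sym_traceless = 0.5 * (T + T.T) - trace_part    # 5 numbers (type-2)

# The three irreducible pieces recombine to the original tensor.
assert np.allclose(trace_part + antisym + sym_traceless, T)
assert np.isclose(np.trace(sym_traceless), 0.0)

# A symmetric tensor (e.g. permeability) has no antisymmetric part,
# so it carries only type-0 (+) type-2 features.
S = 0.5 * (T + T.T)
assert np.allclose(0.5 * (S - S.T), np.zeros((3, 3)))
```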
Equivariance: Given a set of transformations $T_g: \mathcal{V} \to \mathcal{V}$ for $g \in G$, where $G$ is a group, a function
$\phi: \mathcal{V} \to \mathcal{Y}$ is equivariant if and only if for every $g \in G$ there exists a transformation $S_g: \mathcal{Y} \to \mathcal{Y}$ such that
$$S_g[\phi(v)] = \phi(T_g[v]) \quad \text{for all } g \in G, \; v \in \mathcal{V}.$$
In this paper, we are interested in building neural networks that are equivariant with respect to the 3D
rotation group SO(3).
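The definition can be verified for a toy tensor-valued map on a point cloud, $\phi(X) = \sum_j x_j x_j^\top$, for which $T_g$ rotates the input points and $S_g$ conjugates the output tensor (an illustrative sketch; `random_rotation` is a hypothetical helper, not from the paper):

```python
import numpy as np

def random_rotation(rng):
    """A rotation in SO(3) via QR decomposition with sign and determinant fixes."""
    A = rng.standard_normal((3, 3))
    Q, R = np.linalg.qr(A)
    Q = Q @ np.diag(np.sign(np.diag(R)))
    if np.linalg.det(Q) < 0:
        Q[:, 0] *= -1
    return Q

def phi(X):
    """Toy tensor-valued map on a point cloud: second moment sum_j x_j x_j^T."""
    return X.T @ X

rng = np.random.default_rng(1)
X = rng.standard_normal((10, 3))   # point cloud, one point per row
Rm = random_rotation(rng)

lhs = Rm @ phi(X) @ Rm.T           # S_g[phi(X)]: conjugate the output tensor
rhs = phi(X @ Rm.T)                # phi(T_g[X]): rotate every input point first
assert np.allclose(lhs, rhs)       # the two orders of operations agree
```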
Efforts have been made to build equivariant neural networks for different types of data and symmetry
groups [Thomas et al.,2018,Weiler et al.,2018,Fuchs et al.,2020,Cohen et al.,2019,Finzi et al.,2020,Weiler
and Cesa,2019,Worrall and Welling,2019]. In the case of 3D roto-translations, the tensor field
network (TFN) [Thomas et al.,2018] is a good choice for parametrizing the map $\phi$. It satisfies the
equivariance constraint by construction and has recently been shown to be universal [Dym and Maron,2020], i.e., any continuous
equivariant function on point clouds can be approximated uniformly on compact sets by a composition of
TFN layers.
Equivariant Neural Network: Just as a convolutional neural network (CNN) is composed of linear, pooling,
and nonlinear layers, one follows the same principle to build an equivariant neural network. The
major challenge is to characterize the equivariant linear maps and to design an architecture that parametrizes such
maps. We sketch how to build linear equivariant layers below. Note that the input of our equivariant neural
network is a Morse graph constructed from 3D voxel images. For simplicity of presentation, we follow the
original tensor field network paper (see below) and use a point cloud as the input; adding connections
between points can be done easily by modifying Equations 22 and 23.
Tensor Field Network: The tensor field network (TFN) is an equivariant neural network targeted
at point clouds. A TFN maps feature fields on point clouds to feature fields on point clouds
under the constraint of SE(3) equivariance, where SE(3) is the group of 3D rotations and translations. For a point cloud
$\{x_i\}_{i=1}^{N}$ of size $N$, the input is a field $f: \mathbb{R}^3 \to \mathbb{R}^d$ of the form
$$f(x) = \sum_{j=1}^{N} f_j \, \delta(x - x_j), \qquad (22)$$
where $\{f_j\}$ are node features such as atom types or degrees of Morse graph nodes, $\delta$ is the Dirac delta function,
and $\{x_j\}$ are the 3D point coordinates. In order to satisfy the equivariance constraint, each $f_j \in \mathbb{R}^d$ has to
be a concatenation of vectors of different types, where a subvector of type $\ell$ is denoted $f^{\ell}_j$. A TFN layer
takes type-$k$ features to type-$\ell$ features via a learnable kernel $W^{\ell k}: \mathbb{R}^3 \to \mathbb{R}^{(2\ell+1) \times (2k+1)}$. The type-$\ell$ output at
position $x_i$ is
Fig. 7: We visualize the real-valued spherical harmonics $Y_1$, $Y_2$, $Y_3$ in each row, where the color and density
indicate the sign and absolute value of the functions. Spherical harmonics are the angular solutions of the Laplace equation
restricted to the sphere. They form a complete set of orthogonal functions on the sphere and are used to parametrize
the weights of an equivariant neural network.
$$f^{\ell}_{\mathrm{out},i} = \sum_{k \ge 0} \underbrace{\int W^{\ell k}(x' - x_i)\, f^{k}_{\mathrm{in}}(x')\, dx'}_{k \to \ell \text{ convolution}} = \sum_{k \ge 0} \sum_{j=1}^{n} \underbrace{W^{\ell k}(x_j - x_i)\, f^{k}_{\mathrm{in},j}}_{\text{node } j \to \text{node } i \text{ message}} \qquad (23)$$
It has been shown [Weiler et al.,2018,Kondor,2018,Thomas et al.,2018] that the kernel $W^{\ell k}$ has to lie in the
span of an equivariant basis $\left\{ W^{\ell k}_J \right\}_{J=|k-\ell|}^{k+\ell}$. Mathematically,
$$W^{\ell k}(x) = \sum_{J=|k-\ell|}^{k+\ell} \varphi^{\ell k}_J(\|x\|)\, W^{\ell k}_J(x), \quad \text{where} \quad W^{\ell k}_J(x) = \sum_{m=-J}^{J} Y_{Jm}(x/\|x\|)\, Q^{\ell k}_{Jm}. \qquad (24)$$
Each basis kernel $W^{\ell k}_J: \mathbb{R}^3 \to \mathbb{R}^{(2\ell+1) \times (2k+1)}$ is formed by taking a linear combination of Clebsch-
Gordan matrices $Q^{\ell k}_{Jm}$ of shape $(2\ell+1) \times (2k+1)$, where the $(J, m)$-th linear combination coefficient is the
$m$-th dimension of the $J$-th spherical harmonic. Note that the only learnable part of $W^{\ell k}$ is the radial function
$\varphi^{\ell k}_J(\|x\|)$; $Q^{\ell k}_{Jm}$ and $Y_{Jm}$ (and therefore $W^{\ell k}_J(x)$) are precomputed and fixed. See [Thomas et al.,2018,Fuchs
et al.,2020] for more details.
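In the special case $\ell = k = 0$, the kernel in Eq. (23) reduces to a pure radial function and the layer output is a rotation-invariant scalar field. The sketch below illustrates this degenerate case with an assumed Gaussian radial function; the general TFN layer additionally needs the spherical harmonics and Clebsch-Gordan matrices of Eq. (24), as implemented in e3nn:

```python
import numpy as np

def radial_layer(X, f, sigma=1.0):
    """Special case l = k = 0 of Eq. (23): the kernel reduces to a radial
    function phi(||x_j - x_i||). A Gaussian phi is an assumed choice here."""
    diff = X[:, None, :] - X[None, :, :]          # pairwise displacements x_j - x_i
    dist = np.linalg.norm(diff, axis=-1)
    W = np.exp(-dist**2 / (2.0 * sigma**2))       # radial kernel, depends on distance only
    return W @ f                                  # sum of node-j -> node-i messages

rng = np.random.default_rng(2)
X = rng.standard_normal((8, 3))   # point positions
f = rng.standard_normal(8)        # type-0 (scalar) node features

# Rotating the point cloud leaves the scalar output unchanged (type-0 invariance),
# because the kernel only sees rotation-invariant distances.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1
assert np.allclose(radial_layer(X, f), radial_layer(X @ Q.T, f))
```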
CNN: The non-Euclidean graph neural network architectures' performance is tested against a classically
used Euclidean 3D convolutional neural network. 3D convolutional networks have previously been
employed in the identification of material parameters [Santos et al.,2020]. Convolutional layers have been
successfully implemented for feature extraction from both 2D and 3D images; however, they can be
prone to noise and grid resolution issues, and they do not guarantee frame indifference in general.
The architecture employed in this control experiment predicts the formation factor and permeability
tensors directly from the 3D microstructure image. The input of this architecture is a 3D binary voxel image
(150 x 150 x 150 voxels) and the output is either the formation factor tensor or the permeability tensor. The
architecture consists of five 3D convolution layers with ReLU activation functions, each followed by a
3D max pooling layer. The output of the last pooling layer is flattened and then fed into
two consecutive dense layers (50 neurons each) with ReLU activations for the final prediction.
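Since the kernel widths are not specified above, the following sketch only illustrates how the spatial resolution would shrink through five assumed stages of (3x3x3 valid convolution + 2x2x2 max pooling) starting from the 150 cubed input; the kernel and pooling sizes are assumptions for illustration:

```python
def conv_pool_sizes(n, layers=5, k_conv=3, k_pool=2):
    """Spatial size of the feature map after each (conv + max-pool) stage.
    k_conv and k_pool are assumed values: the text specifies five conv layers,
    each followed by max pooling, but not the kernel widths."""
    sizes = []
    for _ in range(layers):
        n = n - (k_conv - 1)   # valid 3D convolution, stride 1
        n = n // k_pool        # max pooling with kernel = stride = 2
        sizes.append(n)
    return sizes

sizes = conv_pool_sizes(150)
print(sizes)  # [74, 36, 17, 7, 2]
# Under these assumptions, the flattened vector fed to the two 50-neuron
# dense layers has (number of channels) * 2**3 entries per sample.
```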
GNN: We also build a baseline GNN to compare against the equivariant GNN. There are various choices
of graph convolution layers, and we experimented with popular choices such as GCN [Kipf and Welling,2016],
GraphSage [Hamilton et al.,2017], GAT [Veličković et al.,2017], CGCNN [Xie and Grossman,2018], and
GIN [Xu et al.,2018]. Empirically, we find GIN works the best. The building block of our graph neural
network is a modification of the Graph Isomorphism Network (GIN) that can handle both node
and edge features. In particular, we first linearly transform both node features and edge features into vectors of
the same dimension. At the $k$-th layer, the GNN updates node representations by
$$h^{(k)}_v = \mathrm{ReLU}\left( \mathrm{MLP}^{(k)}\left( \sum_{u \in \mathcal{N}(v) \cup \{v\}} h^{(k-1)}_u + \sum_{e=(v,u):\, u \in \mathcal{N}(v) \cup \{v\}} h^{(k-1)}_e \right) \right), \qquad (25)$$
where $\mathcal{N}(v)$ is the set of nodes adjacent to $v$, $e = (v, v)$ represents the self-loop edge, and MLP stands for
multilayer perceptron. The edge features $h^{(k-1)}_e$ are the same across the layers.
We use average graph pooling to obtain the graph representation from the node embeddings, i.e., $h_G = \mathrm{MEAN}\left\{ h^{(K)}_v \mid v \in G \right\}$. We set the number of layers to 20 and the embedding dimension to 50.
Other experiment details: The metric used during training and testing to measure the agreement between the predicted
and true permeability is $\mathrm{loss}(\hat{y}, y) = \|y - \hat{y}\|_F / \|y\|_F$, where $y$ and $\hat{y}$ stand for the true and predicted
permeability tensors and $\|y\|_F$ stands for the Frobenius norm, i.e., $\|y\|_F = \sqrt{\sum_{i=1}^{3} \sum_{j=1}^{3} y_{ij}^2} = \sqrt{\mathrm{trace}(y y^\top)}$.
Note that $\|y\|_F = \|Uy\|_F = \|yU\|_F$ for any unitary matrix $U$. We use 67% of the data for training and the rest for
testing, and we try 5 different random splits to obtain a more accurate result. To tune the hyperparameters (learning
rate, number of layers, etc.) of the different models, we take 1/3 of the training data for validation and pick the best
hyperparameter combination on the validation data. We use the software e3nn [Geiger et al.,2020] for the Tensor Field
Network and PyTorch Geometric [Fey and Lenssen,2019] for the baseline graph neural networks.
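The relative Frobenius loss and its unitary invariance can be sketched as:

```python
import numpy as np

def rel_frobenius_loss(y_hat, y):
    """loss(y_hat, y) = ||y - y_hat||_F / ||y||_F."""
    return np.linalg.norm(y - y_hat) / np.linalg.norm(y)  # matrix norm defaults to Frobenius

rng = np.random.default_rng(4)
y = rng.standard_normal((3, 3))
y_hat = y + 0.1 * rng.standard_normal((3, 3))

# The Frobenius norm is unitarily invariant: ||U y||_F = ||y U||_F = ||y||_F.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
assert np.isclose(np.linalg.norm(Q @ y), np.linalg.norm(y))
assert np.isclose(np.linalg.norm(y @ Q), np.linalg.norm(y))

# A perfect prediction has zero loss, and the metric is scale-free.
assert np.isclose(rel_frobenius_loss(y, y), 0.0)
assert np.isclose(rel_frobenius_loss(2 * y_hat, 2 * y), rel_frobenius_loss(y_hat, y))
```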
Equivariance Error: Following Fuchs et al. [2020], we use $\mathrm{EQ} = \| L_s \Phi(f) - \Phi L_s(f) \|_2 / \| L_s \Phi(f) \|_2$ to
measure the equivariance error, where $L_s$, $\Phi$, and $f$ denote the group action on the output feature field, the neural
network, and the input feature field, respectively. We measure the exactness of equivariance by applying
uniformly sampled SO(3) transformations to the input and output; the distance between the two, averaged
over samples, yields the equivariance error. The EQ for TFN is $1.1 \times 10^{-7}$, indicating that TFN is equivariant
with respect to SO(3) up to a small error.
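A minimal version of this measurement, applied to a toy tensor-valued map rather than a trained network, could look like the following (a sketch; Frobenius norms are used in place of the generic 2-norm):

```python
import numpy as np

def equivariance_error(phi, X, n_samples=20, rng=None):
    """EQ = ||L_s phi(f) - phi(L_s f)|| / ||L_s phi(f)||, averaged over sampled
    rotations. phi maps an (n, 3) point cloud to a 3x3 tensor."""
    if rng is None:
        rng = np.random.default_rng(0)
    errs = []
    for _ in range(n_samples):
        Q, R = np.linalg.qr(rng.standard_normal((3, 3)))
        Q = Q @ np.diag(np.sign(np.diag(R)))
        if np.linalg.det(Q) < 0:
            Q[:, 0] *= -1
        out_then_rot = Q @ phi(X) @ Q.T   # L_s applied to the output field
        rot_then_out = phi(X @ Q.T)       # L_s applied to the input field
        errs.append(np.linalg.norm(out_then_rot - rot_then_out)
                    / np.linalg.norm(out_then_rot))
    return float(np.mean(errs))

X = np.random.default_rng(5).standard_normal((12, 3))
# An equivariant map (the second moment) has EQ near machine precision ...
assert equivariance_error(lambda Z: Z.T @ Z, X) < 1e-8
# ... while a frame-dependent map does not.
assert equivariance_error(lambda Z: np.diag(Z.sum(axis=0)), X) > 1e-6
```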
Visualization: Given a positive semi-definite matrix $M$, we can visualize it as an ellipsoid centered at
the origin, where the distance from the origin to the boundary in a unit direction $v$ is $v^\top M v$. We visualize the
predictions of the equivariant neural network for Morse graphs of different orientations in Figures 8 and 9.
It can be seen that as we rotate the Morse graph, the output of the equivariant neural network, visualized as an
ellipsoid, also rotates accordingly.
Results on the augmented dataset: As the dataset is small and may not play to the advantage of the
equivariant neural network, we augment each graph with property $P$ by randomly rotating the input by
$R$ and taking $R P R^\top$ as the ground-truth property of the rotated input. Results are shown in Figure 10. We
Fig. 8: Rotating the Morse graph about the y-axis (see top figures) makes the prediction of the equivariant neural
network rotate accordingly (see bottom figures). The second-order tensor predicted by the equivariant
neural network is visualized as an ellipsoid.
Fig. 9: Rotating the Morse graph about the z-axis (see top figures) makes the prediction of the equivariant neural
network rotate accordingly (see bottom figures). The second-order tensor predicted by the equivariant
neural network is visualized as an ellipsoid.
tested two ways of augmentation: 1) augmenting all datasets and 2) augmenting only the training dataset. The results
are shown in Figures 10 and 11.
We find that for the equivariant neural network, the test error is significantly lower than for the non-equivariant
ones. As the augmentation multiplier increases from 2 to 5, the error reduces significantly to nearly perfect
performance. For non-equivariant neural networks, although the data augmentation improves the performance
measured by the metric, the result is still far from (one order of magnitude worse than) the results
obtained with the equivariant neural network.
In the case of augmenting only the training data, we find that the performance slowly improves as the
number of copies increases for the non-equivariant GNN, but the equivariant neural network without data
augmentation is still better than the GNN. This demonstrates the data efficiency of the equivariant neural
network.
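The augmentation step described above can be sketched as follows: draw a random rotation $R$, rotate the node coordinates, and conjugate the property tensor (an illustrative sketch with a synthetic SPD tensor standing in for the permeability):

```python
import numpy as np

def augment(X, P, rng):
    """Rotate a graph's node coordinates X by a random R and transform the
    ground-truth property tensor accordingly: P -> R P R^T."""
    Q, R = np.linalg.qr(rng.standard_normal((3, 3)))
    Q = Q @ np.diag(np.sign(np.diag(R)))
    if np.linalg.det(Q) < 0:
        Q[:, 0] *= -1
    return X @ Q.T, Q @ P @ Q.T

rng = np.random.default_rng(6)
X = rng.standard_normal((20, 3))    # node positions of one Morse graph
A = rng.standard_normal((3, 3))
P = A @ A.T                         # a synthetic SPD property tensor

X_rot, P_rot = augment(X, P, rng)
# The rotated copy has the same eigenvalues (an objective quantity) ...
assert np.allclose(np.linalg.eigvalsh(P), np.linalg.eigvalsh(P_rot))
# ... and the same Frobenius norm, so the loss metric treats it consistently.
assert np.isclose(np.linalg.norm(P), np.linalg.norm(P_rot))
```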
Fig. 10: Augmenting the whole dataset. The loss after augmenting all data by $k$ copies (shown on the x-axis as the augmentation
multiplier). The blue/orange curves indicate the performance of the GNN under different numbers of copies
($k$ = 2, 5, 10, 20, 50). The equivariant neural network outperforms the GNN by a large margin.
Fig. 11: Augmenting only the training data. The loss after augmenting the training data by $k$ copies (shown on the x-axis as
the augmentation multiplier). As $k$ increases, the loss of the GNN on the test data slightly decreases, but it is
still not better than that of the equivariant neural network without data augmentation, shown as a
dashed line.
5 Prediction results compared with CNN
The permeability and formation factor predictions made by the trained CNN, GNN, and equivariant GNN
are shown in Tables 1 and 2, respectively. Generally speaking, the GNN outperforms the CNN,
and the equivariant GNN outperforms the GNN. We believe the Morse graph representation, although only
a rough representation of the 3D images, captures the essence of the geometrical and topological information
that aids the downstream tasks. This trend is more obvious for the formation factor than for the
effective permeability.
Another advantage of the GNN over the CNN is computational efficiency: performing convolutions on 3D
images takes about 100 s per epoch while the GNN takes only 4 s. Even though using Morse graphs comes with
the overhead of computing them (roughly 3 minutes per graph), the overall computational
time of working with the graph representation is still desirable.
Previously published works that employ convolutional neural networks for effective permeability
predictions are often trained with datasets far smaller than those used in other scientific disciplines. For
instance, the convolutional neural network predictions of velocity fields in Santos et al. [2020] employ only 1080 3D images with
80³
Table 1: Permeability. Comparison between three models. The error for baseline CNN, baseline GNN,
equivariant GNN are shown for five different splits of data. The improvement indicates the improvement of
equivariant GNN over GNN.
seed CNN GNN Equivariant GNN Improvement
1 0.312 0.246 0.218 11.3%
2 0.354 0.247 0.221 10.5%
3 0.339 0.251 0.226 10.0%
4 0.379 0.247 0.228 7.7%
5 0.382 0.298 0.252 15.4%
mean 0.353 0.258 0.229 11.2%
Table 2: Formation factor. Comparison between three models. The error for baseline CNN, baseline GNN,
equivariant GNN are shown for five different splits of data. The improvement indicates the improvement of
equivariant GNN over GNN.
seed CNN GNN Equivariant GNN Improvement
1 0.081 0.048 0.039 17.6%
2 0.091 0.049 0.044 9.9%
3 0.129 0.050 0.043 14.6%
4 0.127 0.051 0.042 18.3%
5 0.151 0.047 0.039 17.4%
mean 0.116 0.049 0.041 15.6%
voxels, whereas similar research on predicting effective permeability by Srisutthiyakorn* [2016] and
Sudakov et al. [2019] employs databases that consist of 1000 images with 100³ voxels and 9261 3D images with 100³
voxels, respectively. In our case, we employ 300 3D images with 150³
voxel data for training and testing, where only 2/3 of the data are used for training the neural network and
1/3 of the data are used in the test cases that measure the performance reported in Tables 1 and 2. As a
comparison, QM9, a common dataset for showcasing the advantage of equivariant neural networks, is
much larger (133,885 small molecule graphs) and more diverse than the dataset employed in this study.
However, since both pore-scale numerical simulations and experiments are expensive [Arns et al.,
2004,Sun et al.,2011b,Andrä et al.,2013b,Sun et al.,2011a,Vlassis et al.,2020,Wang et al.,2021,Fuchs
et al.,2021,Heider et al.,2021], one may argue that the ability of a neural network to function properly
with a relatively small dataset is crucial for practical purposes. Although the dataset may not play to
our advantage, the equivariant neural network still outperforms the GNN. The advantage of the equivariant neural
network over the GNN is much more prominent in the case of the augmented dataset, shown in Figure 10. In
addition, equivariance with respect to the SE(3) group guarantees material frame indifference.
Figures 12 and 13 showcase the results of 100 blind predictions of the formation factor and effective permeability
on binary micro-CT images that are excluded from the training database. In both cases, we compare
the ratios of the major, intermediate, and minor principal values of the tensors predicted by the
CNN, GNN, and equivariant GNN over the benchmark calculations done by the FFT solver; if the results were
perfect, the ratio would be 1. In both the formation factor and effective permeability predictions, there are a few trends
worth noticing. First, the formation factor predictions are generally more accurate than those of the
effective permeability. Second, the benefits of using the GNN and equivariant GNN are more obvious in the
formation factor predictions than in those of the effective permeability. This might be attributed to the fact that,
due to the surface conductivity, the topology of the flux in the inverse problem correlates more strongly
with the Morse graph.
Another common trend we noticed is that the predictions of the two graph neural networks tend to
underestimate the largest eigenvalue of the formation factor and are more likely to overestimate the middle
and smallest eigenvalues, whereas the CNN generally underestimates all eigenvalues, with the
smallest eigenvalue being the most underestimated. This trend, however, is not observed in the effective
permeability predictions in Figure 13.
Fig. 12: For each test sample, we plot the ratio of the predicted eigenvalues over the true eigenvalues of the formation
factor tensor. From left to right: predictions of the CNN, GNN, and equivariant GNN.
On the other hand, the CNN predictions of effective permeability do not exhibit a similar trend. While
the errors of the CNN permeability predictions do not exhibit a clear pattern, the equivariant GNN and,
to a lesser extent, the GNN predictions both show that the largest eigenvalue tends to be
underestimated whereas the smallest one is likely to be overestimated. These results suggest that the
predictions are less accurate when the anisotropic effect is strong.
To examine this issue further, we compare the spectra of the eigenvalues of both the formation factor
and the effective permeability in Figure 14. It can be seen that the formation factor has a much smaller eigenvalue
ratio than the permeability. While the ratio of the eigenvalues of the effective permeability may vary by
about an order of magnitude, that of the formation factor is generally within a factor of 2, suggesting
the nearly isotropic nature of the formation factor. Since the formation factor predictions are about one order of magnitude
more accurate than the effective permeability predictions, these results suggest that the degree of anisotropy
may play an important role in the accuracy of the GNN and equivariant GNN predictions. This limitation might be
circumvented by including more anisotropic data in the training dataset or by adopting different graph representations
that better capture the anisotropy of the pore space. Research on both topics is ongoing but out of
the scope of this study.
In addition to the eigenvalues, we also compare the predictions of the principal directions of both the
formation factor and the effective permeability. Notice that the permeability tensors obtained from the inverse
problems are often non-symmetric (see White et al. [2006] and Sun and Wong [2018]) whereas the formation
factors we obtained from the FFT are symmetric up to machine precision. Given the
true and predicted permeability/formation factor tensors, we compute their unit-length eigenvectors and
show the distribution (in boxplots) of the angle error. The angle error measures how close a predicted eigenvector
$\hat{v}$ is to the true eigenvector $v$, measured in terms of $1 - |v \cdot \hat{v}| \in [0, 1]$. The equivariant GNN is better at predicting
the eigenvector corresponding to the largest eigenvalue (but not necessarily best for the other eigenvectors),
which might be the reason that it performs better than the other two models. Notice that the spectrum of the
Fig. 13: For each test sample, we plot the ratio of the predicted eigenvalues over the true eigenvalues of the effective
permeability tensor. From left to right: predictions of the CNN, GNN, and equivariant GNN.
Fig. 14: Let $v_0$, $v_1$, $v_2$ denote the eigenvalues of the true permeability/formation factor from smallest to largest. We
plot the distribution of eigenvalue ratios over all data.
eigenvalues for the formation factor is very narrow, and hence predicting the eigenvectors of the formation
factor could be more difficult but of less importance. On the other hand, since the permeability exhibits
a higher degree of anisotropy, the eigenvector predictions are more crucial for practical purposes. A closer
examination of the eigenvector predictions by the CNN, GNN, and equivariant GNN indicates
that the equivariant neural network may lead to modest improvements in the accuracy of the orientation of
the principal directions (see Figure 15). Presumably, one may expect this improvement to be more
significant with more data. A systematic study of the relationship between the amount of training data
and performance may provide a fuller picture of this issue but is out of the scope of this research.
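The angle-error metric used above can be sketched as:

```python
import numpy as np

def angle_error(v_hat, v):
    """1 - |v . v_hat| in [0, 1] for unit-length eigenvectors. The absolute
    value makes the metric insensitive to the sign ambiguity of eigenvectors."""
    return 1.0 - abs(np.dot(v_hat, v))

v = np.array([1.0, 0.0, 0.0])
assert angle_error(v, v) == 0.0       # a perfect prediction scores zero
assert angle_error(-v, v) == 0.0      # a flipped eigenvector counts as exact
assert np.isclose(angle_error(np.array([0.0, 1.0, 0.0]), v), 1.0)  # orthogonal
```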
Fig. 15: Angle errors for the different properties and models. The first/second/third eigenvector denotes
the eigenvector corresponding to the smallest, intermediate, and largest eigenvalue, respectively.
6 Conclusions
This paper introduces an equivariant GNN and a Morse graph representation to enable a direct image-
to-prediction workflow that estimates the formation factor and effective permeability of a series of
sandstone images with improved accuracy. There are several departures from the standard convolutional
neural network that may contribute to the improved performance. First, the introduction of Morse graphs to
represent the pore topology and geometrical information provides a more efficient way to represent and store
the microstructures. Second, incorporating the SE(3) equivariance constraint into the neural network is the key
to enforcing material frame indifference and ensuring that the machine learning predictions are not affected
by the observer's frame. This work demonstrates how graph and group representations can be leveraged to
enforce physics principles for mechanistic predictions relevant to image-based engineering analysis.
7 Acknowledgements
Chen Cai would like to thank Maurice Weiler, Pim de Haan, Fabian Fuchs, Tess Smidt, and Mario Geiger
for helpful discussions on the theory and implementation of equivariant neural networks. This collaborative
work is primarily supported by grant contracts OAC-1940203 and OAC-1940125. The additional efforts and
labor hours of the Columbia research team members are supported by the NSF CAREER grant from the
Mechanics of Materials and Structures program at the National Science Foundation under grant contract CMMI-
1846875, the Dynamic Materials and Interactions Program from the Air Force Office of Scientific Research
under grant contract FA9550-19-1-0318, and the Army Research Office under grant contract W911NF-18-2-0306.
This support is gratefully acknowledged.
References
Michael D Abràmoff, Paulo J Magalhães, and Sunanda J Ram. Image processing with imagej. Biophotonics
International, 11(7):36–42, 2004.
Heiko Andrä, Nicolas Combaret, Jack Dvorkin, Erik Glatt, Junehee Han, Matthias Kabel, Youngseuk Keehm,
Fabian Krzikalla, Minhui Lee, Claudio Madonna, et al. Digital rock physics benchmarks—part i: Imaging
and segmentation. Computers & Geosciences, 50:25–32, 2013a.
Heiko Andrä, Nicolas Combaret, Jack Dvorkin, Erik Glatt, Junehee Han, Matthias Kabel, Youngseuk
Keehm, Fabian Krzikalla, Minhui Lee, Claudio Madonna, et al. Digital rock physics benchmarks—part ii:
Computing effective properties. Computers & Geosciences, 50:33–43, 2013b.
Gustave E Archie et al. The electrical resistivity log as an aid in determining some reservoir characteristics.
Transactions of the AIME, 146(01):54–62, 1942.
Christoph H Arns, Mark A Knackstedt, W Val Pinczewski, and Nicos S Martys. Virtual permeametry on
microtomographic images. Journal of Petroleum Science and Engineering, 45(1-2):41–46, 2004.
S. Banerjee, L. Magee, D. Wang, X. Li, B. Huo, J. Jayakumar, K. Matho, M. Lin, K. Ram, M. Sivaprakasam,
J. Huang, Y. Wang, and P. Mitra. Semantic segmentation of microscopic neuroanatomical data by
combining topological priors with encoder-decoder deep networks.
Nature Machine Intelligence
, 2:
585–594, 2020.
Jacob Bear. Dynamics of fluids in porous media. Courier Corporation, 2013.
William J Bosl, Jack Dvorkin, and Amos Nur. A study of porosity and permeability using a lattice boltzmann
simulation. Geophysical Research Letters, 25(9):1475–1478, 1998.
AE Bussian. Electrical conductance in a porous medium. Geophysics, 48(9):1258–1268, 1983.
Taco Cohen and Max Welling. Group equivariant convolutional networks. In
International conference on
machine learning, pages 2990–2999. PMLR, 2016.
Taco S Cohen, Maurice Weiler, Berkay Kicanaoglu, and Max Welling. Gauge equivariant convolutional
networks and the icosahedral cnn. arXiv preprint arXiv:1902.04615, 2019.
Antonio Costa. Permeability-porosity relationship: A reexamination of the kozeny-carman equation based
on a fractal pore-space geometry assumption. Geophysical research letters, 33(2), 2006.
Stephen C Cowin and Stephen B Doty. Tissue mechanics. Springer Science & Business Media, 2007.
T. K. Dey, J. Wang, and Y. Wang. Graph reconstruction by discrete morse theory. In
Proc. Internat. Sympos.
Comput. Geom., pages 31:1–31:15, 2018.
Tamal Dey, Jiayuan Wang, and Yusu Wang. Road network reconstruction from satellite images with machine
learning supported by topological methods. In
Proc. 27th ACM SIGSPATIAL Intl. Conf. Adv. Geographic
Information Systems (GIS), pages 520–523, 2019.
Tamal K. Dey, Jiayuan Wang, and Yusu Wang. Improved road network reconstruction using discrete morse
theory. In
Proc. 25th ACM SIGSPATIAL Intl. Conf. Adv. Geographic Information Systems (GIS)
, pages
58:1–58:4, 2017.
Nadav Dym and Haggai Maron. On the universality of rotation equivariant point cloud networks.
arXiv
preprint arXiv:2010.02449, 2020.
Matthias Fey and Jan Eric Lenssen. Fast graph representation learning with pytorch geometric.
arXiv
preprint arXiv:1903.02428, 2019.
Marc Finzi, Samuel Stanton, Pavel Izmailov, and Andrew Gordon Wilson. Generalizing convolutional neural
networks for equivariance to lie groups on arbitrary continuous data.
arXiv preprint arXiv:2002.12880
,
2020.
R. Forman. A discrete Morse theory for cell complexes. Geometry, Topology and Physics for Raoul Bott,
1995.
Ari L Frankel, Reese E Jones, Coleman Alleman, and Jeremy A Templeton. Predicting the mechanical
response of oligocrystals with deep learning. Computational Materials Science, 169:109099, 2019.
JT Fredrich, AA DiGiovanni, and DR Noble. Predicting macroscopic transport properties using microscopic
image data. Journal of Geophysical Research: Solid Earth, 111(B3), 2006.
Alexander Fuchs, Yousef Heider, Kun Wang, WaiChing Sun, and Michael Kaliske. Dnn2: A hyper-parameter
reinforcement learning game for self-design of neural network based elasto-plastic constitutive descrip-
tions. Computers & Structures, 249:106505, 2021.
Fabian B Fuchs, Daniel E Worrall, Volker Fischer, and Max Welling. Se (3)-transformers: 3d roto-translation
equivariant attention networks. arXiv preprint arXiv:2006.10503, 2020.
Mario Geiger, Tess Smidt, Benjamin K. Miller, Wouter Boomsma, Kostiantyn Lapchevskyi, Maurice Weiler,
Michał Tyszkiewicz, and Jes Frellsen. github.com/e3nn/e3nn, May 2020. URL https://doi.org/10.5281/zenodo.3723557.
William L Hamilton, Rex Ying, and Jure Leskovec. Inductive representation learning on large graphs.
arXiv
preprint arXiv:1706.02216, 2017.
Yousef Heider, Hyoung Suk Suh, and WaiChing Sun. An offline multi-scale unsaturated poromechanics
model enabled by self-designed/self-improved neural networks.
International Journal for Numerical
and Analytical Methods in Geomechanics, 2021.
John Conrad Jaeger, Neville GW Cook, and Robert Zimmerman.
Fundamentals of rock mechanics
. John
Wiley & Sons, 2009.
Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks.
arXiv
preprint arXiv:1609.02907, 2016.
Risi Kondor. N-body networks: a covariant hierarchical neural network architecture for learning atomic
potentials. arXiv preprint arXiv:1803.01588, 2018.
Matthew R Kuhn, WaiChing Sun, and Qi Wang. Stress-induced anisotropy in granular materials: fabric,
stiffness, and permeability. Acta Geotechnica, 10(4):399–419, 2015.
Gary Mavko and Amos Nur. The effect of a percolation threshold in the kozeny-carman relation.
Geophysics
,
62(5):1480–1482, 1997.
Vincent Monchiet, Guy Bonnet, and Guy Lauriat. A fft-based method to compute the permeability induced
by a stokes slip flow through a porous medium. Comptes Rendus Mécanique, 337(4):192–197, 2009.
Trung-Kien Nguyen, Vincent Monchiet, and Guy Bonnet. A fourier based numerical method for computing
the dynamic permeability of periodic porous media.
European Journal of Mechanics-B/Fluids
, 37:90–98,
2013.
Mervyn S Paterson and Teng-fong Wong.
Experimental rock deformation-the brittle field
. Springer Science
& Business Media, 2005.
Javier E Santos, Duo Xu, Honggeun Jo, Christopher J Landry, Maša Prodanović, and Michael J Pyrcz.
Poreflow-net: A 3d convolutional neural network to predict fluid flow through porous media. Advances
in Water Resources, 138:103539, 2020.
Nattavadee Srisutthiyakorn*. Deep-learning methods for predicting permeability from 2d/3d binary-
segmented images. In
SEG technical program expanded abstracts 2016
, pages 3042–3046. Society of
Exploration Geophysicists, 2016.
Oleg Sudakov, Evgeny Burnaev, and Dmitry Koroteev. Driving digital rock towards machine learning:
Predicting permeability with gradient boosting and deep neural networks.
Computers & geosciences
,
127:91–98, 2019.
Hyoung Suk Suh and WaiChing Sun. An immersed phase field fracture model for microporomechanics
with darcy–stokes flow. Physics of Fluids, 33(1):016603, 2021.
Wai Ching Sun, Jose E Andrade, and John W Rudnicki. Multiscale method for characterization of porous mi-
crostructures and their impact on macroscopic effective permeability.
International Journal for Numerical
Methods in Engineering, 88(12):1260–1279, 2011a.
WaiChing Sun and Teng-fong Wong. Prediction of permeability and formation factor of sandstone with
hybrid lattice boltzmann/finite element simulation on microtomographic images.
International Journal
of Rock Mechanics and Mining Sciences, 106:269–277, 2018.
WaiChing Sun, José E Andrade, John W Rudnicki, and Peter Eichhubl. Connecting microstructural at-
tributes and permeability from 3d tomographic images of in situ shear-enhanced compaction bands using
multiscale computations. Geophysical Research Letters, 38(10), 2011b.
WaiChing Sun, Qiushi Chen, and Jakob T Ostien. Modeling the hydro-mechanical responses of strip and
circular punch loadings on water-saturated collapsible geomaterials.
Acta Geotechnica
, 9(5):903–934,
2014.
Karl Terzaghi, Ralph B Peck, and Gholamreza Mesri.
Soil mechanics in engineering practice
. John Wiley &
Sons, 1996.
Nathaniel Thomas, Tess Smidt, Steven Kearnes, Lusann Yang, Li Li, Kai Kohlhoff, and Patrick Riley. Tensor
field networks: Rotation-and translation-equivariant neural networks for 3d point clouds.
arXiv preprint
arXiv:1802.08219, 2018.
Petar Veli
ˇ
ckovi
´
c, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio.
Graph attention networks. arXiv preprint arXiv:1710.10903, 2017.
Nikolaos N Vlassis, Ran Ma, and WaiChing Sun. Geometric deep learning for computational mechanics part
i: Anisotropic hyperelasticity.
Computer Methods in Applied Mechanics and Engineering
, 371:113299,
2020.
Jaroslav Vond
ˇ
rejc, Jan Zeman, and Ivo Marek. An fft-based galerkin method for homogenization of periodic
media. Computers & Mathematics with Applications, 68(3):156–173, 2014.
Kun Wang, WaiChing Sun, and Qiang Du. A non-cooperative meta-modeling game for automated
third-party calibrating, validating and falsifying constitutive laws with parallelized adversarial attacks.
Computer Methods in Applied Mechanics and Engineering, 373:113514, 2021.
S. Wang, Y. Wang, and Y. Li. Efficient map reconstruction and augmentation via topological methods. In Proc. 23rd ACM SIGSPATIAL, page 25. ACM, 2015.
Maurice Weiler and Gabriele Cesa. General E(2)-equivariant steerable CNNs. In Advances in Neural Information Processing Systems, pages 14334–14345, 2019.
Maurice Weiler, Mario Geiger, Max Welling, Wouter Boomsma, and Taco S Cohen. 3d steerable CNNs: Learning rotationally equivariant features in volumetric data. In Advances in Neural Information Processing Systems, pages 10381–10392, 2018.
Andreas Weller, Lee Slater, and Sven Nordsiek. On the relationship between induced polarization and surface conductivity: Implications for petrophysical interpretation of electrical measurements. Geophysics, 78(5):D315–D325, 2013.
Joshua A White, Ronaldo I Borja, and Joanne T Fredrich. Calculating the effective permeability of sandstone with multiscale lattice Boltzmann/finite element simulations. Acta Geotechnica, 1(4):195–209, 2006.
Daniel Worrall and Max Welling. Deep scale-spaces: Equivariance over scale. In Advances in Neural Information Processing Systems, pages 7366–7378, 2019.
Haiyi Wu, Wen-Zhen Fang, Qinjun Kang, Wen-Quan Tao, and Rui Qiao. Predicting effective diffusivity of porous media from images by deep learning. Scientific Reports, 9(1):1–12, 2019.
Jinlong Wu, Xiaolong Yin, and Heng Xiao. Seeing permeability from images: Fast prediction with convolutional neural networks. Science Bulletin, 63(18):1215–1222, 2018.
Tian Xie and Jeffrey C Grossman. Crystal graph convolutional neural networks for an accurate and interpretable prediction of material properties. Physical Review Letters, 120(14):145301, 2018.
Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826, 2018.
Xin Zhan, Lawrence M Schwartz, M Nafi Toksöz, Wave C Smith, and F Dale Morgan. Pore-scale modeling of electrical and fluid transport in Berea sandstone. Geophysics, 75(5):F135–F142, 2010.
Afra Zomorodian and Gunnar Carlsson. Computing persistent homology. Discrete & Computational Geometry, 33(2):249–274, 2005.