
Accepted Manuscript

Stochastic Simulation by Image Quilting of Process-based Geological Models

Júlio Hoffimann a,∗, Céline Scheidt a, Adrian Barfod b, Jef Caers c

a Department of Energy Resources Engineering, Stanford University

b Geological Survey of Denmark and Greenland

c Department of Geological Sciences, Stanford University

Abstract

Process-based modeling offers a way to represent realistic geological heterogeneity in subsurface models. The main limitation lies in conditioning such models to data. Multiple-point geostatistics can use these process-based models as training images and address the data conditioning problem. In this work, we further develop image quilting as a method for 3D stochastic simulation capable of mimicking the realism of process-based geological models with minimal modeling effort (i.e. parameter tuning) and at the same time condition them to a variety of data. In particular, we develop a new probabilistic data aggregation method for image quilting that bypasses traditional ad-hoc weighting of auxiliary variables. In addition, we propose a novel criterion for template design in image quilting that generalizes the entropy plot for continuous training images. The criterion is based on the new concept of voxel reuse—a stochastic and quilting-aware function of the training image. We compare our proposed method with other established simulation methods on a set of process-based training images of varying complexity, including a real-case example of stochastic simulation of the buried-valley groundwater system in Denmark.

Keywords: Voxel reuse, Shannon entropy, Relaxation, Tau model, Multiple-point statistics, FFT, GPGPU

1. Introduction

Process-based geological models such as flume experiments Paola et al. (2009); Straub et al. (2009); Kim et al. (2010); Tal and Paola (2010); Paola et al. (2011); Paola (2000) and advanced computer simulations of flow and sediment transport Elias et al. (2001); Giri et al. (2008); Lesser et al. (2004) are now widely used to study the effects of geological processes in the sedimentary record. These models are known for providing more insight into physical realism compared to rule-based models Xu (2014); Lopez (2003a), and are the de facto standard for addressing fundamental questions in sedimentary geology. One of the major drawbacks with the application of process-based models in practice is that they cannot be easily matched with the data acquired after deposition such as drilled wells or geophysical data. This limitation is inherent to any and all forward models, which are fully determined given well-posed boundary conditions (e.g. sea level rise, sediment supply). Furthermore, process-based geological models are complex, as demonstrated by Figure 1: they demand superb modeling expertise and a great amount of time (computational or laboratorial), and can be quite laborious to design Briere et al. (2004).

Software is available at https://github.com/juliohm/ImageQuilting.jl.

∗Corresponding author

Email addresses: juliohm@stanford.edu (Júlio Hoffimann), scheidtc@stanford.edu (Céline Scheidt), adrianbarfod@geo.au.dk (Adrian Barfod), jcaers@stanford.edu (Jef Caers)

Figure 1: Flume experiment of a delta with low Froude number performed by John Martin, Ben Sheets, Chris Paola and Michael Kelberer. Image source: https://www.esci.umn.edu/orgs/seds/Sedi_Research.htm

In geostatistics, the process of conditioning 3D models to data has been actively investigated Matheron (1963); Mariethoz and Caers (2014). Although the research community has developed various modern algorithms in the past 15 years Strebelle (2002); Arpat and Caers (2007); Zhang et al. (2006, 2015); Honarkhah and Caers (2010); El Ouassini et al. (2008); Faucher et al. (2014); Tahmasebi et al. (2012); Mahmud et al. (2014); Yang et al. (2016); Mariethoz et al. (2010), most still have problems in handling the complexity of process-based models, suffer from low computational performance, and/or depend on non-intuitive input parameters that lack clear geological meaning. The most recent algorithms developed for geostatistical (or stochastic) simulation rely on training images from which multiple-point statistics (MPS) are reproduced Mariethoz and Caers (2014). Compared to alternative approaches such as object-based Maharaja (2008) and surface-based or event-based Xu (2014) simulation, training-image-based approaches have more flexible conditioning capabilities. In order to exploit process-based models as training images and condition them to data, we first need to efficiently manage their non-stationarity and arbitrary landforms.

Preprint submitted to Computers & Geosciences May 25, 2017

The term non-stationarity refers to the concept that statistics vary with location and time. For example, the channel morphology in the deltaic system of Figure 1 is a function of the distance to the delta apex. It is expected that channels by the sea present different characteristics compared to those evolving near the discharge point upstream in the highlands. Previous successful attempts to model non-stationarity in MPS simulation utilize auxiliary variables Chugunova and Hu (2008). Although effective, these attempts incorporate the variables by ad-hoc weighting; therefore, they do not scale to the complexity of 3D geological models.

Among the most used MPS simulation algorithms that model non-stationarity, we list Single Normal Equation Simulation (SNESIM) Strebelle (2002), Direct Sampling (DS) Mariethoz et al. (2010) and Cross-correlation Simulation (CCSIM) Tahmasebi et al. (2012). In SNESIM, probability maps that indicate the occurrence of rock facies in the subsurface are incorporated in the simulation via a probabilistic model known as the Tau model Journel (2002); Allard et al. (2012). Although more principled than ad-hoc weighting, the SNESIM algorithm does not support auxiliary variables that are not probability maps. Even if adapted to handle arbitrary variables, SNESIM would still perform poorly with process-based training images because of its underlying tree structure, originally developed for processing categorical values.

In DS and CCSIM, auxiliary variables are incorporated with ad-hoc weighting. As previously mentioned, this technique does not scale with complex 3D process-based models. Nevertheless, both algorithms support continuous training images and present a remarkable computational speedup compared to previous alternatives in pixel-based and patch-based stochastic simulation, respectively.

In DS, the speedup can be explained by the direct sampling of the first pattern for which the distance to the data is below a pre-specified threshold. If the threshold is large, the algorithm is fast but suboptimal. If the threshold is small, the simulation of 3D models is unfeasible. Given the resolution of process-based training images, an appropriate threshold is hardly available.

In CCSIM, the speedup can be explained by the pasting of many voxels (or pixels in 2D) at once. In this case, the choice of a threshold is less important and can be fixed to a very small value for process-based models of order 10²×10²×10² voxels or larger. This quality of CCSIM is inherited from the original, seminal paper “Image Quilting for Texture Synthesis and Transfer” by Efros and Freeman (2001), who came up with the idea of quilting images in computer vision.

Efros and Freeman introduce a novel, simple, and efficient algorithm for sampling 2D images from arbitrary reference (a.k.a. training) images. In its simplest form, image quilting simulation (IQSIM) consists of 1) a raster path over which patterns (i.e. sub-images of fixed size) are pasted together with some overlap; 2) a similarity measure between patterns already pasted in the simulation grid and patterns in the training image; and 3) a boundary cut algorithm Boykov and Jolly (2001); Boykov and Kolmogorov (2001); Kwatra et al. (2003) applied in order to minimize the overlap error of the paste operation.

The Efros-Freeman algorithm addresses the texture synthesis problem. In the same paper, the authors apply image quilting for texture transfer by iterating the procedure until a mismatch with a background image is below a pre-specified threshold. The texture transfer problem is closer to the problem that is addressed in this paper, and is closer to geostatistics in general, because it involves (spatial) data that needs to be honored. Their proposed iteration technique, utilized by CCSIM and other variants, however, becomes computationally burdensome with 3D geological models.

Based upon the advances made by the computer vision community, Mahmud et al. (2014) extend 2D image quilting to 3D grids and attempt to incorporate hard data (or simply point data) along the raster path. The authors introduce a distance to the data and propose a weighting scheme with the distance computed in the overlap with previously pasted patterns. This scheme has two major limitations: 1) distances must be normalized before they can be weighted and summed, and 2) the weights are case-dependent and are obtained by trial and error. Although flexible, the weighting scheme proposed by Mahmud et al., and the template splitting procedure described therein, are unfeasible in real 3D applications.

In a similar attempt, Faucher et al. (2014) formulate a patch-based stochastic simulation as an unconstrained optimization where the objective function has penalty terms for hard data and local-mean histograms. In this formulation, the weights appear directly in the objective function and are chosen under a set of simplifying assumptions. Despite the very good analysis, Faucher et al.'s assumptions may be considered too strong for arbitrary process-based training images and field data. Furthermore, there is no theoretical result that proves the existence of global weights for conditioning arbitrary random fields.

Conditioning image quilting to hard data is particularly challenging, as demonstrated by all previously published attempts. The raster path is suboptimal for this task as it does not sense the data ahead in the simulation domain. In the extreme case, the data is clustered near the end of the path and is invisible to the algorithm until the very last iteration. Tahmasebi et al. (2014) alleviate the raster path issue by incorporating data ahead of the path. The proposed solution comes with an extra unknown parameter, there called the “co-template”, that is not trivial to set, and yet determines the data conditioning performance. Co-templates add an unnecessary layer of complexity to grids with arbitrary landforms, and as will be discussed in the next sections, there exists a much simpler and more effective solution.

Besides the unknown weights for combining different variables and data defined over the domain, MPS simulation algorithms usually depend on a non-trivial list of input parameters that convey neither geological nor physical understanding. In particular, the Efros-Freeman image quilting algorithm requires a window (or template) size for scanning the training image. The choice of this window can greatly affect the quality of the realizations and there is still no good criterion for its design.

In this paper, we propose a systematic probabilistic procedure for data aggregation in the original Efros-Freeman algorithm. Our proposed algorithm is faster than any other MPS simulation algorithm previously published, bypasses the ad-hoc weighting limitation, and produces visually realistic images conditioned to data. The paper is organized as follows. In Section 2, we introduce a new method for data aggregation and other minor modifications to the original Efros-Freeman algorithm to accommodate hard data (e.g. wells). In Section 3, we apply the proposed algorithm to 2D process-based and 3D process-mimicking models with real-field complexity. In Section 4, we discuss the choice of the template size in image quilting and introduce a novel criterion for template design. In Section 5, we conclude the work pointing to future research directions.

2. Data aggregation in image quilting

In this section, we introduce a new method for data aggregation in image quilting as an alternative to ad-hoc weighting. This method is introduced with auxiliary variables and is extended later to conditioning with hard data.

2.1. Efros-Freeman algorithm

The original Efros-Freeman image quilting for unconditional simulation is illustrated in Figure 2. In iteration 1, a pattern “A” is randomly selected from the training image and placed in the top left corner of the simulation domain. In iteration 2, the sliding window leaves an overlap region highlighted in red. This region is compared to all regions of equal size in the training image using a Euclidean distance as measure of similarity; the next pattern “B” is drawn at random from a uniform distribution over a set of candidates colored in red (e.g. the most similar patterns). The two patterns are stitched together by means of a cut that maximizes continuity Boykov and Jolly (2001); Boykov and Kolmogorov (2001); Kwatra et al. (2003). After the first row is filled, the second row is simulated similarly except that there are two overlap regions instead of one. Tile by tile the puzzle is solved. Resulting images and all the cuts performed along the path are shown in Figure 3.
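The raster-path procedure above can be sketched in a few lines. The following is a minimal 2D illustration only, not the authors' implementation (which is the Julia package ImageQuilting.jl): it matches the L-shaped overlap by brute force, draws uniformly among the best candidates, and omits the boundary cut for brevity. All names (`quilt`, `tile`, `overlap`, `nbest`) are hypothetical.

```python
import numpy as np

def quilt(ti, out_shape, tile=16, overlap=4, nbest=5, rng=None):
    """Minimal raster-path image quilting sketch (no boundary cut)."""
    rng = rng or np.random.default_rng()
    step = tile - overlap
    H, W = out_shape
    sim = np.zeros((H, W))
    # all candidate patterns (sub-images of fixed size) in the training image
    pats = np.array([ti[i:i+tile, j:j+tile]
                     for i in range(ti.shape[0] - tile + 1)
                     for j in range(ti.shape[1] - tile + 1)])
    for i in range(0, H - tile + 1, step):
        for j in range(0, W - tile + 1, step):
            if i == 0 and j == 0:
                # first tile: random pattern in the top left corner
                sim[:tile, :tile] = pats[rng.integers(len(pats))]
                continue
            # L-shaped overlap mask with previously pasted patterns
            mask = np.zeros((tile, tile), dtype=bool)
            if j > 0: mask[:, :overlap] = True
            if i > 0: mask[:overlap, :] = True
            target = sim[i:i+tile, j:j+tile]
            # Euclidean distance restricted to the overlap region
            d = (((pats - target) * mask)**2).sum(axis=(1, 2))
            best = np.argsort(d)[:nbest]        # set of best candidates
            sim[i:i+tile, j:j+tile] = pats[rng.choice(best)]
    return sim
```

Note the simplification: when the domain size is not compatible with the tile and step sizes, a border of the domain is left at its initial value.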

Figure 2: Efros-Freeman algorithm. Patches are extracted from the training image and pasted in the simulation domain in raster path order. A cut is performed in the overlap with the previously pasted patch to maximize continuity. Black pixels are copied from pattern A whereas white pixels are copied from pattern B.

Figure 3: Image quilting realizations of two training images and their corresponding cut masks. Texture is reproduced in both examples. Template size for the binary training image is 62×62×1 and template size for the continuous training image is 48×48×1 in the example.

2.2. Incorporation of auxiliary variables

Consider the setup of the problem in Figure 4 with the introduction of an auxiliary variable. A training image TI, an auxiliary variable AUXD defined over the simulation domain, and a forward operator G∗ : TI → AUXTI are given. The goal is to generate multiple realizations that honor the relationship established by the auxiliary variables AUXD and AUXTI. The operator G∗ approximates the mapping G used to generate the auxiliary variable AUXD. G may be a simple mathematical expression G = G(i, j, k) in terms of the spatial indices of the grid or may consist of a series of elaborate engineering workflows that produce a property cube over the domain of interest.

Figure 4: Problem setup. Training image in the upper left is used to simulate the domain in the bottom left. An auxiliary variable AUXD is provided over the domain as well as a proxy G∗ of the forward operator G used to create AUXD.

Our method for data aggregation is illustrated in 2D for clarity. We start by placing a small window in the simulation domain along any overlapping path (e.g. raster path). As illustrated in Figure 5, this placement defines a local variable AUXD(it, jt) for every location (it, jt) in the path.

At a current location (it, jt), the local variable AUXD(it, jt) is compared to all local variables AUXTI(ip, jp) in the auxiliary training image. The subscript t in (it, jt) refers to the few tile locations in the simulation domain whereas the subscript p in (ip, jp) refers to the many pixel locations in the training image. Although there are as many variables AUXTI(ip, jp) as there are pixels (or voxels in 3D), these local comparisons are simple Euclidean distance calculations that can be implemented very efficiently with Fast Fourier Transforms (FFTs) and Graphics Processing Units (GPUs).

Therefore, the auxiliary distances

Daux(p) := ‖AUXD(it, jt) − AUXTI(ip, jp)‖₂²   (1)

are computed with a convolution pass on the auxiliary training image, similar to the procedure introduced in the original Efros-Freeman algorithm for computing overlap distances

Dov(p) := ‖Domain(it, jt) − TI(ip, jp)‖₂²   (2)

at the location (it, jt). While Daux(p) is a distance between square-shaped (i.e. rectangular-shaped) auxiliary variables, Dov(p) is a distance between L-shaped overlap regions.

Figure 5: Proposed method (part I). Euclidean distance with “FFT trick” between current tile location (it, jt) in the domain and all pixel locations (ip, jp) in the training image. Pattern AUXD(it, jt) is compared to all patterns AUXTI(ip, jp) in a single pass.
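The “FFT trick” behind the single convolution pass rests on the identity ‖A − B‖² = Σ A² − 2 Σ AB + Σ B², where the cross term over every sliding window is a single (FFT-based) convolution. A minimal CPU sketch with SciPy, not the paper's GPU implementation; the function name `ssd_all_patches` is hypothetical:

```python
import numpy as np
from scipy.signal import fftconvolve

def ssd_all_patches(image, template):
    """Sum of squared differences between `template` and every
    same-size patch of `image`, via the FFT trick:
    ||A - B||^2 = sum(A^2) - 2*sum(A*B) + sum(B^2)."""
    ones = np.ones_like(template)
    # sliding sum of squared image values inside each window
    win_sq = fftconvolve(image**2, ones[::-1, ::-1], mode="valid")
    # cross-correlation of image with template (convolution with flip)
    cross = fftconvolve(image, template[::-1, ::-1], mode="valid")
    return win_sq - 2*cross + np.sum(template**2)
```

A single call returns the distance from the template to all patterns at once, instead of looping over every pixel location (ip, jp).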

In order to address unit and scaling issues, the distances Dov(p) and Daux(p) are converted into ranks. For a training image with Npat patterns, ranks are permutations of the integers (1, 2, …, Npat). A permutation (p1, p2, …, pNpat) is a valid rank for the distance D(p) if D(pi) ≤ D(pj) for all 1 ≤ i ≤ j ≤ Npat. Two such permutations exist, one for Dov(p) and another for Daux(p). In order to guarantee a smooth transition from the previous pattern simulated in the domain to the pattern being pasted, we introduce a tolerance for the overlap distance and use it to define an initial subset of Nbest best candidate patterns according to the overlap information. Such tolerance is not a sensitive parameter of the algorithm and can be made arbitrarily small. In Figure 6 we illustrate the two ranks on the training image and the reduced set of Nbest ≪ Npat best candidate patterns based on the overlap information.

Next, we introduce a relaxation technique whereby a subset of the Nbest best candidate patterns is selected. This subset S contains patterns that are in agreement with both the overlap information and the auxiliary variable defined at the location (it, jt). We define a chain of sets A1 ⊆ A2 ⊆ ··· ⊆ Ak with Ai for i = 1, 2, …, k containing the first Ni best candidate patterns according to the auxiliary variable, N1 ≠ 0 and Nk = Npat. By denoting O the set of Nbest best candidate patterns according to the overlap, the relaxation technique consists of iterating i from 1 to k until the intersection Si = O ∩ Ai is non-empty. Let S be the first non-empty intersection.

Figure 6: Proposed method (part II). Ranking of patterns based on overlap and auxiliary distances followed by successive relaxation of auxiliary information. Given a tolerance, the best patterns are selected according to the overlap (e.g. 2, 3, 7, 1) and the set is intersected with a growing set of patterns (e.g. 8, 1, 3, …) until the intersection is non-empty.
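The relaxation loop can be sketched as follows, assuming pattern indices sorted from best to worst by each distance. The geometric growth schedule for the Ni is a hypothetical choice for illustration (the text does not prescribe one), and the name `relax` is ours; termination is guaranteed because Nk = Npat makes the last intersection equal to O.

```python
def relax(overlap_order, aux_order, n_best, growth=2):
    """Intersect the overlap candidate set O with a growing chain of
    auxiliary candidate sets A1 ⊆ A2 ⊆ ... until non-empty.
    `overlap_order` and `aux_order` are pattern indices sorted from
    smallest to largest distance; n_best >= 1."""
    O = set(overlap_order[:n_best])
    n = n_best
    while True:
        S = O & set(aux_order[:n])           # Si = O ∩ Ai
        if S:
            return S                         # first non-empty intersection
        n = min(len(aux_order), growth * n)  # grow the chain A1 ⊆ A2 ⊆ ...
```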

The patterns in S have two ranks, one associated to Dov(p) and another associated to Daux(p). In order to draw a pattern at random we convert the ranks into probabilities with a simple linear transformation. The conditional probability of a pattern in S given its overlap rank rov is given by

Prob(pattern | rov) = (|S| − rov + 1) / kov   (3)

with |S| the cardinality of S and kov a normalization constant. kov is the sum of |S| − rov + 1 over all patterns in S. Similarly, the conditional probability of the same pattern given the auxiliary rank raux is given by

Prob(pattern | raux) = (|S| − raux + 1) / kaux   (4)

These two probabilities are combined into Prob(pattern | rov, raux) with the Tau model assuming no information redundancy (i.e. τ = 1). In Figure 7, all the patterns in S are assigned a color representing their probability (e.g. |S| = 985). After a pattern is drawn, the entire procedure is repeated for the next location in the overlapping path.
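The rank-to-probability transformation of Equations 3-4 and the Tau model combination (Journel, 2002) can be sketched as below. The function names are hypothetical, and taking a uniform prior 1/|S| over the patterns in S is our illustrative assumption:

```python
import numpy as np

def rank_to_prob(ranks):
    """Equations 3-4: linear transformation of ranks (1 = best)
    within the set S into probabilities."""
    r = np.asarray(ranks)
    w = len(r) - r + 1           # |S| - rank + 1
    return w / w.sum()           # divide by normalization constant k

def tau_combine(p1, p2, p0, tau1=1.0, tau2=1.0):
    """Tau model (Journel, 2002): combine conditional probabilities
    p1 = P(A|B) and p2 = P(A|C) given the prior p0 = P(A)."""
    a, b, c = (1 - p0)/p0, (1 - p1)/p1, (1 - p2)/p2
    x = a * (b/a)**tau1 * (c/a)**tau2
    return 1 / (1 + x)
```

With τ1 = τ2 = 1 no redundancy correction is applied; combining the overlap and auxiliary probabilities of each pattern in S this way and renormalizing over S yields a distribution Prob(pattern | rov, raux) to draw from.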

The relaxation technique can be applied to multiple auxiliary variables. In this case, multiple chains A(c)1 ⊆ A(c)2 ⊆ ··· ⊆ A(c)k for c = 1, 2, …, Nc are run in parallel instead of one. The intersection Si = O ∩ A(1)i ∩ A(2)i ∩ ··· ∩ A(Nc)i is guaranteed to be non-empty for some index i and the subset S is defined as before. Taking intersections of large sets is a CPU demanding operation in general; however, we exploit the fact that the maximum rank possible for a pattern is Npat and implement a fast intersection algorithm for bounded sets with O(Npat) time complexity. In fact, the algorithm is a simple element-wise logical & (AND) comparison between two vectors of size Npat. In Figure 8, we compare the traditional weighting scheme with the proposed relaxation technique. Our method produces realizations that honor the auxiliary variable without the specification of weights.

Figure 7: Proposed method (part III). Conditional probability of pasting a pattern given both overlap and auxiliary information computed from the Tau model over all patterns in the non-empty set obtained from relaxation.
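The bounded-set intersection reduces to an element-wise AND of boolean membership vectors of length Npat. A minimal sketch (the function name `fast_intersect` is hypothetical):

```python
import numpy as np

def fast_intersect(npat, *index_sets):
    """O(Npat) intersection of sets of pattern indices in 0..npat-1
    via element-wise logical AND of boolean membership vectors."""
    mask = np.ones(npat, dtype=bool)
    for s in index_sets:
        member = np.zeros(npat, dtype=bool)
        member[list(s)] = True     # mark members of this set
        mask &= member             # element-wise AND
    return np.flatnonzero(mask)    # indices in the intersection
```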

Figure 8: Comparison of ad-hoc weighting and proposed method. Different weight configurations A, B and C lead to different conditioning results. Our method, shown at the bottom left, does not require specification of weights and produces the most likely outcomes given the data. Training image size: 400×400×1, Domain size: 300×260×1, Template size: 27×27×1.

2.3. Incorporation of hard data

We apply the same relaxation technique to conditioning with hard data HD(it, jt). Besides the distance to the overlap and to the auxiliary variables, we define a distance

Dhard(p) := ‖HD(it, jt) − W ⊙ TI(ip, jp)‖₂²   (5)

to the point data that may exist at the current location (it, jt) in the simulation domain. In Equation 5, the matrix (or tensor in 3D) W is a mask that is only active at the pixels with datum in HD(it, jt), and ⊙ is the element-wise multiplication. The ranking induced by the hard data is combined with the other rankings through the same Tau model used for incorporating auxiliary variables.

We introduce two additional modifications to the Efros-Freeman algorithm to increase the quality of the hard data match. The first modification is the replacement of the raster path by a data-first path illustrated in Figure 9. In this path, locations that have data are visited first and the rest of the simulation domain is filled outwards from the data using successive morphological dilations, a well-known operation in image processing. We stress that this path is not related to the data-driven path described by Abdollahifard (2016), which was originally introduced by Criminisi et al. (2003).

Figure 9: Data-first path. Tiles are first pasted where hard data exists and outwards until the entire domain is filled.
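The data-first path can be reproduced with successive morphological dilations of the data mask, each dilation ring being visited after the previous one. A sketch with SciPy over a 2D grid of tile locations (the function name `data_first_order` is hypothetical):

```python
import numpy as np
from scipy.ndimage import binary_dilation

def data_first_order(has_data):
    """Tile visiting order for the data-first path: tiles with hard
    data get order 0, then successive morphological dilations of the
    data mask assign orders 1, 2, ... outwards."""
    has_data = np.asarray(has_data, dtype=bool)
    if not has_data.any():
        return np.zeros(has_data.shape, dtype=int)  # no data: any path
    order = np.full(has_data.shape, -1, dtype=int)
    frontier = has_data
    step = 0
    while not frontier.all():
        order[frontier & (order < 0)] = step  # newly reached tiles
        frontier = binary_dilation(frontier)  # grow outwards by one ring
        step += 1
    order[order < 0] = step                   # last ring fills the domain
    return order
```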

The data-first path, when applied together with the relaxation technique, leads to a perfect match in most data configurations. There are still two scenarios in which data is not honored: 1) the data configuration is not present in the training image, and 2) the configuration is present in the training image but not in S due to conflicting ranks. We propose a simple restoration of the data (i.e. we enforce values at hard data locations) at the end of the simulation in a post-processing step. Although this construction may introduce local discontinuities under very complex settings, it is effective with many realistic process-based training images.

3. Image quilting of deterministic process-based geological models

In this section, we apply the proposed method with 2D process-based and 3D process-mimicking models. Four applications of varying complexity are presented: 1) stochastic simulation of meandering rivers constrained to thickness maps, 2) spatial variability analysis with flume experiments as proposed by Scheidt et al. (2016, 2015), 3) subsurface modeling with moderately dense well configurations, and 4) completion of buried valley models with SkyTEM and partial interpretation.

Applications 1) and 2) serve to illustrate the efficiency of the relaxation technique on large 3D grids and with complex process-based training images, respectively. Application 3) highlights a known limitation of the method in the case where hard data is moderately dense. Finally, application 4) illustrates a real project in Denmark where both hard data and auxiliary variables are available.

3.1. Stochastic simulation of meandering rivers

In this application, models of a meandering river generated with the FLUMY software Lopez et al. (2008); Lopez (2003b) are used as training images. Our goal is to assess the performance of the relaxation technique with the Tau model on large 3D grids. We focus on a single training image with 200×300×45 cells and utilize the thickness of the basin as an auxiliary variable. This variable is introduced to minimize the appearance of channels in areas of low sediment transport.

In our method, the quality of the realizations is still a function of the template size, and because the choice of this parameter is complex, we discuss it in detail in Section 4, where we propose a novel criterion for template design. By using this criterion, we select a template size of 49×49×14 and run IQSIM to obtain 50 realizations. In Figure 10, we observe that the thickness map constrains the placement of channels to the center of the basin as intended. However, we also observe illegitimate patterns near the boundary of the realizations caused by the arbitrary landform of the model. Artifacts like these can be easily pruned with a post-processing step for a specific geometry, but the problem is still unsolved for arbitrarily shaped training images and simulation domains.

Figure 10: Image quilting realizations of a meandering river. Realizations conditioned to the thickness map have channels in the center. Artifacts are observed near the boundary of the basin. Training image size: 200×300×45, Domain size: 200×300×45, Template size: 49×49×14.

A conditional simulation of the model is generated in 6 minutes on an integrated Intel® HD Graphics Skylake ULT GT2 GPU of a Dell XPS 13 laptop. Our algorithm and implementation are orders of magnitude faster than most (and probably all) other MPS simulation software in the literature. Besides the FFT on the GPU, we exploit the shape of the basin to save computation. For reference, alternative methods like SNESIM require many hours to handle grids of this size.

3.2. Spatial variability analysis with flume experiments

In the flume experiment provided by the St. Anthony Falls Laboratory (http://www.safl.umn.edu), we are given 136 overhead shots of a delta. Our goal is to compare the spatial variability of the given snapshots with that of image quilting realizations. We rely on the definition of a distance between these 2D models in order to quantify variability. In this work, the modified Hausdorff distance Dubuisson and Jain (1994); Huttenlocher et al. (1993) is investigated, which only takes into account the shape of geobodies deposited in the delta.
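The modified Hausdorff distance of Dubuisson and Jain (1994) replaces the max over points in the classical Hausdorff distance by a mean, making it more robust to outlier points. A direct sketch for small point sets (the function name is hypothetical; a pairwise distance matrix is fine here, though large point clouds would need a spatial index):

```python
import numpy as np

def modified_hausdorff(A, B):
    """Modified Hausdorff distance between point sets A (n, d)
    and B (m, d): max of the two mean nearest-neighbor distances."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    d_ab = D.min(axis=1).mean()   # mean distance from A to B
    d_ba = D.min(axis=0).mean()   # mean distance from B to A
    return max(d_ab, d_ba)
```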

We select a template size of 26×26×1 via the criterion discussed in Section 4 and run IQSIM with overhead shots constrained to two auxiliary variables as illustrated in Figure 11.

Figure 11: Image quilting realizations of an overhead shot from the flume experiment with two auxiliary variables incorporated by the proposed method. Training image size: 300×260×1, Domain size: 300×260×1, Template size: 26×26×1.

The simulation is performed with 13 such snapshots (or training images) previously selected by clustering points in a multidimensional scaling projection Scheidt et al. (2015, 2016); Borg and Groenen (2005). For performance reasons, the modified Hausdorff distance is computed between point sets that represent the edges of the corresponding geobodies as illustrated in Figure 12. Because distances are ultimately computed between black & white images, we further run DS with the 13 intermediate binary images of the delta in order to compare the proposed algorithm with an existing software that requires fine parameter tuning.

Figure 12: Distance calculation between images. First, images are thresholded to wet/dry binary images. Second, an edge filter is applied to produce a reduced set of points. Finally, the modified Hausdorff distance is computed between the resulting point clouds.

In Figure 13, we show the Q-Q plot between the distribution of distances originated from the experiment and the distribution of distances artificially created with geostatistics. Although the comparison of spatial variability with the modified Hausdorff distance is limited, we observe that both image quilting and direct sampling approximate the natural variability in the delta reasonably well. Outlier images exist particularly in the upper tail, and most importantly, we observe that spatial variability is usually underestimated by geostatistical simulation. This underestimation is caused by the many auxiliary variables and constraints imposed during simulation, and is depicted by the reduced interquartile range in the kernel density estimation plot in Figure 13.

Figure 13: Comparison of natural variability present in the flume experiment with variability created by means of geostatistical simulation. Outliers are present in the upper tail of the distribution. Underestimation of spatial variability is depicted by the reduced interquartile range.

3.3. Stochastic simulation with dense well configurations

In this example, we assess the performance of the proposed method with moderately dense well configurations. The training image consists of channels generated with the Fluvsim software Deutsch and Tran (2002), and 9 vertical wells are placed with equal spacing in a domain of the same size as illustrated in Figure 14.

Figure 14: Image quilting realizations of fluvial river channels conditioned to 9 vertical wells. Placement of channels illustrated on horizontal slices. Training image size: 250×250×100, Domain size: 250×250×100, Template size: 25×25×20.

After selecting a template size of 25×25×20 via the criterion discussed in Section 4, we run image quilting and obtain 50 realizations. Three of these realizations are illustrated in Figure 14. We observe that channels are correctly placed at the wells, but we also notice discontinuity in the generated patterns. This discontinuity is caused by the combination of the data-first path and the chosen template size, and can be quantified with various metrics as discussed in Renard and Allard (2013). We use the number and size of geobodies as metrics in Figure 15 to illustrate the difference in connectivity between the training image and the IQSIM realizations for this well configuration.
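The number and size of geobodies used as connectivity metrics can be computed with connected-component labeling; a sketch with SciPy, assuming face connectivity (the function name `geobody_sizes` is hypothetical):

```python
import numpy as np
from scipy.ndimage import label

def geobody_sizes(facies, code=1):
    """Number and sizes of connected geobodies of a given facies
    code, a simple connectivity metric (cf. Renard & Allard, 2013)."""
    labeled, n = label(facies == code)          # face-connected components
    sizes = np.bincount(labeled.ravel())[1:]    # drop background count
    return n, sizes
```

The resulting size distributions can then be compared between the training image and the realizations.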

Figure 15: Cumulative distribution of geobody size for a moderately dense well configuration. Positively skewed distributions for image quilting realizations indicate pattern discontinuity compared to the training image.

Reducing the template size to accommodate the wells is a valid strategy, but it increases the computational time and can diminish the performance of the simulation to that of alternative methods.

In Figure 16, we illustrate the ensemble average and variance of the 50 realizations. High average and low variance at the well locations are guaranteed by design.

Figure 16: Ensemble average and variance over 50 realizations. Channels are placed where indicated in the wells, with correspondingly low variance.

3.4. Completion of buried valleys with SkyTEM and partial interpretation

A collection of buried valleys interpreted from SkyTEM measurements Sørensen and Auken (2004) in Denmark is used to illustrate the application of our method in a case with real field complexity. In Figure 17, we show a single 3D model with 229×133×39 voxels interpreted by hydrologists who are working on mapping groundwater in the country Thomsen et al. (2004); Høyer et al. (2015).

Figure 17: Single interpretation of buried valleys from SkyTEM measurements. The resulting model has three categories: 0) sand & gravel (Quaternary meltwater sand and sand till, Miocene sand, and Quaternary buried valleys infilled with sand), 1) coarse clay (Quaternary clay till, meltwater clay, and buried valleys infilled with clay and clay till), and 2) hemipelagic clay (hemipelagic, fine-grained Paleogene and Oligocene clays).

To test our method in this real field case, we propose an experiment in which we assume that half of the interpretation is unavailable. In the first case, we use the patterns in the left half of the model to simulate the right half ("L→R"). In the second case, we reverse the setup ("R→L"), as illustrated in Figure 18.

In this experiment, we have hard data conditioning (the known half of the interpretation) and the SkyTEM measurements as an auxiliary variable. For each case, we generate 50 realizations with a template size of 49×49×18.


Figure 18: Experiment setup. Half of the interpretation is discarded and then simulated with image quilting. The known half is used as hard data and the SkyTEM measurements are incorporated as an auxiliary variable.

Realizations of the valleys are shown in Figure 19 for the setup "L→R".

Figure 19: Image quilting realizations of buried valleys conditioned to SkyTEM measurements and the known half of the basin. Training image size: 229×133×39. Domain size: 229×133×39. Template size: 49×49×18.

In Figure 20, we show the average of indicator variables (a probability) defined for the first two categories of the training image: sand & gravel and coarse clay. The third category corresponding to the background red color, hemipelagic clay, is omitted. We observe that many geobodies are correctly recovered from the SkyTEM data, but that a limited number of patterns in the training image can only approximate the other half of the most likely interpretation.
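The ensemble average of indicator variables is a pointwise mean of indicator arrays over the realizations. A minimal sketch (the function name is ours):

```python
import numpy as np

def indicator_probability(realizations, category):
    """Pointwise probability of a category across an ensemble.

    `realizations` is a list of equally-sized categorical arrays; the
    ensemble average of the indicator 1[realization == category]
    estimates P(category) at each voxel.
    """
    stack = np.stack([np.asarray(r) == category for r in realizations])
    return stack.mean(axis=0)
```

Applying it once per category of interest yields the probability maps displayed in Figure 20.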

For the case "L→R", we run SNESIM with a set of tuned parameters. Similar to the comparison of IQSIM and DS in the 2D flume experiment, we want to emphasize that our method does not require fine parameter tuning to produce decent results. In Figure 21, we illustrate the distribution of modified Hausdorff distances per category computed between each of the 50 realizations and the most likely interpretation from SkyTEM. The distributions obtained with the two methods are compared on a per-category basis.
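The modified Hausdorff distance of Dubuisson and Jain (1994) can be computed between the coordinates of the voxels of one category in two models. A generic NumPy sketch (not the code used in the paper; for large point sets a spatial index would be preferable to the dense distance matrix used here):

```python
import numpy as np

def modified_hausdorff(A, B):
    """Modified Hausdorff distance between point sets A and B.

    A and B are (n, d) arrays of point coordinates. Following
    Dubuisson and Jain (1994), d(A, B) = max(h(A, B), h(B, A)), where
    h(A, B) is the average distance from each point of A to its
    nearest point of B.
    """
    A, B = np.atleast_2d(A), np.atleast_2d(B)
    # pairwise Euclidean distances, shape (len(A), len(B))
    dists = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    h_ab = dists.min(axis=1).mean()  # directed distance A -> B
    h_ba = dists.min(axis=0).mean()  # directed distance B -> A
    return max(h_ab, h_ba)
```

Unlike the classical Hausdorff distance, the averaging makes the metric robust to a few outlying voxels, which is why it is commonly used for shape matching.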

Figure 20: Ensemble average of indicator variables for categories 1 and 2. Single 3D model interpreted from SkyTEM illustrated in the first column for reference.

Figure 21: Distance-per-category between geostatistical realizations and the single 3D model interpreted from SkyTEM. Image quilting (IQSIM) presents lower distances in distribution than single normal equation simulation (SNESIM).

Image quilting realizations present lower distances in distribution and better reproduce the texture of the training image. For this specific setup, a single realization is generated in 3 minutes with IQSIM on an Intel® HD Graphics Skylake ULT GT2 GPU versus 30 minutes with SNESIM on an Intel® Core™ i7-6500U CPU. For completeness, another realization is generated in 5 minutes with IQSIM on the same CPU.

4. Criterion for template design

In this section, we introduce a novel criterion for choosing template configurations in image quilting. We start by motivating the criterion with a simple example in 2D, where we compare image quilting realizations of two different training images. Next, we state the proposed criterion as an optimization problem and derive an efficient approximation that can be solved in little CPU time. Finally, we compare the criterion with the traditional entropy plot and assess its robustness with basic checks and well-known training images.

In Figure 22 and Figure 23, we illustrate a few image quilting realizations of 2D training images with different template configurations. In this example, template configurations are squares of the form (T, T, 1), with T the template size in pixels. We observe that different template sizes lead to different textures in the realizations. For the channelized training image, increasing the template size from T = 12 to T = 63 improves the results, whereas for the Gaussian training image, the improvement is obtained by decreasing from T = 82 to T = 32.

Figure 22: Image quilting realizations of Strebelle training image. Texture reproduction improves by increasing template size.

Figure 23: Image quilting realizations of Gaussian training image. Texture reproduction improves by decreasing template size.

The interesting observation is that template selection based on a monotonically increasing measure (e.g. entropy; Tahmasebi and Sahimi (2012); Journel and Deutsch (1993); Honarkhah and Caers (2010)) is suboptimal. We propose a function inspired by the principle of minimum energy from thermodynamics. This principle can be rephrased in the context of image quilting as follows:

    A good image quilting simulation pastes patterns sequentially
    without overwriting what was already pasted in previous iterations.

The motivation for this principle is better understood by considering the boundary cuts in Figure 24. According to the principle of minimum energy (or minimum overwrite), the quilting algorithm should be designed to maximize the number of black pixels in the overlap region, which is invaded by white pixels only when there is misalignment between the pattern coming from the training image and the patterns already pasted along the overlapping path.

Figure 24: Zoom into a 2D boundary cut mask. Voxel reuse is defined as the number of black pixels divided by the overlap area.

Definition (voxel reuse). The voxel (or pixel in 2D) reuse V ∈ [0, 1] of an image quilting realization is the number of black voxels in the boundary cut divided by the total number of voxels in the overlap region.
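Given a boundary-cut mask over the overlap region, the definition translates directly into code. A minimal sketch; representing the cut as a boolean array (True, i.e. "black", where the cut kept the previously pasted voxels) is our assumption, not a detail fixed by the paper:

```python
import numpy as np

def voxel_reuse(cut_mask):
    """Voxel reuse V in [0, 1] from a boundary-cut mask.

    `cut_mask` is a boolean array over the overlap region of a pasted
    tile: True ("black") where the cut kept the previously pasted
    voxels, False ("white") where they were overwritten.
    """
    cut_mask = np.asarray(cut_mask, dtype=bool)
    return cut_mask.sum() / cut_mask.size
```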

For a fixed template size to overlap ratio (e.g. 6:1), the voxel reuse is a function of the template size, V(T). We seek its maximum or, alternatively, the minimum overwrite defined as the complement 1 − V(T). Because the function is stochastic, we formally state the optimization in terms of the mean voxel reuse:

$$T^\ast = \operatorname*{arg\,max}_{T}\; \mathbb{E}[V(T)] \tag{6}$$

We argue that, given a set of image quilting realizations generated with template size T and their corresponding boundary cuts, the number E[V(T)] ∈ [0, 1] is a measure of texture reproduction. Consequently, the multiple optima T* are also solutions to the template design problem. In Figure 25, we illustrate the mean voxel reuse as a function of the template size for a few training images in our library. We observe that the mean voxel reuse generalizes the Shannon entropy to continuous training images.
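The brute-force search implied by Equation 6 can be sketched as follows. Here `simulate` is a hypothetical stand-in for one unconditional quilting run that reports the voxel reuse of its boundary cuts; it is not part of the paper's software, and any quilting implementation exposing that number could be plugged in:

```python
import numpy as np

def select_template(training_image, sizes, simulate, nreal=10, rng=None):
    """Brute-force template design: maximize mean voxel reuse.

    For each candidate template size T, run `nreal` unconditional
    simulations and average their voxel reuse; return the argmax and
    the whole curve so it can be plotted like Figure 25.
    """
    rng = rng or np.random.default_rng(0)
    mean_reuse = {T: np.mean([simulate(training_image, T, rng)
                              for _ in range(nreal)])
                  for T in sizes}
    best = max(mean_reuse, key=mean_reuse.get)
    return best, mean_reuse
```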

The plots in Figure 25 were generated by brute force: for each template size T, we generated 10 unconditional image quilting realizations with the same size as the training image and averaged the voxel reuse. However, an estimate of the mean voxel reuse does not require full simulation, only a few boundary cuts performed with the training image. We derive a fast approximation with the notion of elementary overlapping paths as follows.

Given any 3D template configuration (T_x, T_y, T_z), the simplest path that exhibits all overlap combinations has 2×2×2 tiles (or blocks); it is shown in Figure 26. For the vast majority of the lookups in the training image that consider the overlaps x, y and z separately, there exists a perfect pattern match. We can therefore assume no overwrite, E[V^x] = E[V^y] = E[V^z] = 1, and conclude that these boundary cuts are irrelevant to the estimate of the mean voxel reuse. On the other hand, the combinations xy, xz, yz and xyz, at which misalignment is likely to happen, contain valuable information (e.g. E[V^xy] is a function of the texture).


Figure 25: Mean voxel reuse (solid line) and standard deviation (colored area) for a few training images in our library. Generalization of Shannon entropy (dashed line) to continuous training images.


Figure 26: Elementary overlapping path. 2×2×2 tiles stitched together.

We consider the average over a small number N of elementary overlapping paths (i.e. 2×2×2 tiles) in Equation 7 and discuss the implications of using this average instead of averaging full image quilting realizations:

$$\mathbb{E}[V] \approx \frac{1}{N}\sum_{k=1}^{N} V_k \tag{7}$$

The voxel reuse of an elementary overlapping path can be decomposed into its different overlap combinations:

$$V = f_x V^x + f_y V^y + f_z V^z + f_{xy} V^{xy} + \cdots + f_{xyz} V^{xyz} \tag{8}$$

where f_c is the fraction of the overlap volume associated with the combination c ∈ C = {x, y, z, xy, xz, yz, xyz}. Denote by (T_x, T_y, T_z) the template size and by (o_x, o_y, o_z) the overlap. There are (2T_x − o_x) × (2T_y − o_y) × (2T_z − o_z) voxels in the path, or n_x × n_y × n_z for short. We can write the fractions of the overlap volume V_ov in terms of these geometrical parameters, for example:

$$f_x = \frac{V_x}{V_{ov}} = \frac{o_x T_y T_z}{n_x n_y n_z - (n_x - o_x)(n_y - o_y)(n_z - o_z)} \tag{9}$$
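Equation 9 is simple arithmetic on the template and overlap sizes. A sketch that also computes f_y and f_z, which the text does not spell out; we assume they follow by symmetry of the axes:

```python
def overlap_fractions(T, o):
    """Geometric factors f_x, f_y, f_z of Equation 9 for a 2x2x2 path.

    T = (Tx, Ty, Tz) is the template size and o = (ox, oy, oz) the
    overlap size. The path spans n = 2T - o voxels per axis and the
    overlap volume is Vov = nx*ny*nz - (nx-ox)*(ny-oy)*(nz-oz).
    """
    (Tx, Ty, Tz), (ox, oy, oz) = T, o
    nx, ny, nz = 2*Tx - ox, 2*Ty - oy, 2*Tz - oz
    Vov = nx*ny*nz - (nx - ox)*(ny - oy)*(nz - oz)
    fx = ox*Ty*Tz / Vov  # Equation 9
    fy = Tx*oy*Tz / Vov  # assumed analogue for the y overlap
    fz = Tx*Ty*oz / Vov  # assumed analogue for the z overlap
    return fx, fy, fz
```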

Thus, the terms in the expansion V = Σ_{c∈C} f_c V^c introduced in Equation 8 are products of geometric factors f_c and texture terms V^c. The mean voxel reuse is given by:

$$\mathbb{E}[V] = \sum_{c \in C} f_c\,\mathbb{E}[V^c] = f_x + f_y + f_z + \sum_{c \in \{xy,\,xz,\,yz,\,xyz\}} f_c\,\mathbb{E}[V^c] \tag{10}$$

We first consider the 2D case, where E[V] = f_x + f_y + f_xy E[V^xy]. If instead of 2×2 tiles we had m_x × m_y tiles in the path, the derived expression would be

$$\mathbb{E}[V] = (m_x - 1) f_x + (m_y - 1) f_y + f_{xy} \sum_{i=1}^{(m_x-1)(m_y-1)} \mathbb{E}[V^{xy,i}] \tag{11}$$

with the variable i looping over all tiles for which both cuts in x and y are performed. Equation 11 can be further simplified to

$$\mathbb{E}[V] = (m_x - 1) f_x + (m_y - 1) f_y + (m_x - 1)(m_y - 1) f_{xy}\,\mathbb{E}[V^{xy}] \tag{12}$$

if we assume that the texture is the same everywhere in the training image (i.e. a first-order stationary random process assumption). Notice that the fractions f_c are a function of the number of tiles m_x × m_y in the realization, but not of the template size (T_x, T_y). Equation 12 can be rewritten in the simpler form E[V] = a_0 + a_1 E[V^xy], with a_0 and a_1 functions of the overlapping path size. The effects of a_0 and a_1 on the mean voxel reuse plot are a vertical shift and a scaling, respectively. These operations do not affect the locations of the maxima T* = arg max_T E[V(T)], which proves that the use of elementary overlapping paths for template design of 2D stationary random processes is error-free. Although we do not prove the result for non-stationary random processes, where boundary cuts are also a function of space, we expect the error to be very low in practice.
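The shift-and-scale argument can be checked numerically on a hypothetical reuse curve (the bell-shaped E[V^xy](T) below is an illustrative assumption, not data from the paper):

```python
import numpy as np

# The positions of the maxima are invariant under the affine map
# a0 + a1 * E[Vxy(T)] with a1 > 0, which is why elementary paths
# suffice for template design of 2D stationary processes.
T = np.arange(5, 100)
reuse = np.exp(-((T - 40) / 20.0) ** 2)  # hypothetical E[Vxy](T) curve
a0, a1 = 0.3, 0.5                        # path-size dependent factors
assert T[np.argmax(reuse)] == T[np.argmax(a0 + a1 * reuse)]
```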

This approximation with elementary overlapping paths cannot, in general, be extended to 3D random processes without errors. By following a similar derivation, we can write

$$\mathbb{E}[V] = a_0 + a_1\,\mathbb{E}[V^{xy}] + a_2\,\mathbb{E}[V^{xz}] + a_3\,\mathbb{E}[V^{yz}] + a_4\,\mathbb{E}[V^{xyz}] \tag{13}$$

which is the equation of a hyperplane defined by the normal vector (a_1, a_2, a_3, a_4) ∈ R^4_+. This vector is a function of (m_x, m_y, m_z), and there are counter-examples where the maxima T* are altered by the overlapping path size. If besides stationarity we assume that the training image is isotropic (i.e. statistics do not vary with direction), we have E[V^xy] = E[V^xz] = E[V^yz] = E[V^xyz] = E[V^=] and the approximation E[V] = a_0 + (a_1 + a_2 + a_3 + a_4) E[V^=] is error-free again.

We emphasize that the mean voxel reuse criterion is a function of both the training image and the quilting algorithm itself. To our knowledge, no other criterion in the literature has this property. In order to assess the robustness of the criterion, we perform a few basic checks with overhead shots of the flume experiment.

The first check consists of plotting the mean voxel reuse for different times of the experiment. In Figure 27, we observe that the function is preserved across time with very small fluctuations. This result matches our expectation, given that this is an autogenic deltaic system without external forcing that could alter the texture.

Figure 27: Mean voxel reuse for different overhead shots of the flume experiment. All curves match except for small fluctuations.

The second and last check consists of choosing template sizes T_h and T_l for which the mean voxel reuse is high and low, respectively. The criterion states that T_h leads to good texture reproduction in image quilting, whereas T_l does not. In Figure 28, we illustrate the mean voxel reuse and optimum template ranges for the Strebelle and Gaussian training images. Figure 22 was generated with T_h = 63 and T_l = 12, and Figure 23 was generated with T_h = 32 and T_l = 82.

Figure 28: Mean voxel reuse for the Strebelle and Gaussian training images, with ascending and descending trends, respectively. Optimum range for template size depicted on the horizontal axis.

5. Conclusions

In this work, we proposed a systematic probabilistic procedure for data aggregation in MPS simulation. We implemented the procedure within image quilting and tested it on 2D process-based and 3D process-mimicking geological models. Our results show that the procedure is fast, dispenses with fine parameter tuning, and produces realistic-looking realizations conditioned to auxiliary variables and hard data.

We introduced a novel criterion for template design that generalizes the Shannon entropy to continuous training images. The criterion is based on the concept of voxel reuse and is the first in the literature that is quilting-aware. We proposed an efficient approximation of the mean voxel reuse and proved that it is error-free under stationarity assumptions. We recognized artifacts in the image quilting realizations caused by complex landforms in 3D. These artifacts call for a better representation of incomplete patterns in the training image and should be seen as a current defect of the algorithm. Another limitation that deserves attention is that of suboptimal texture reproduction with dense hard data configurations. Our method can work with dense configurations, but may lead to suboptimal texture reproduction if speed is to be maintained. Future developments should concentrate on these two fronts.

Another important issue that is not addressed in this work is that of data uncertainty. We assumed that both hard and soft data are free of errors. For applications where measurement errors are large, the proposed algorithm, like most other stochastic simulation algorithms mentioned in the paper, is not appropriate.

The accompanying software was made available as a Julia package. Documentation can be found online, including examples of use and instructions for fast simulation with GPUs: https://github.com/juliohm/ImageQuilting.jl.

Acknowledgments

We thank CAPES and SCRF at Stanford University for funding this research. We also thank Anjali Fernandes and Chris Paola for providing data and insight on flume experiments, and Marco Pontiggia and Andrea Da Pra for giving feedback on the software.

References

Abdollahifard, M.J., 2016. Fast multiple-point simulation using a data-driven path and an efficient gradient-based search. Computers & Geosciences 86, 64–74. doi:10.1016/j.cageo.2015.10.010.

Allard, D., Comunian, A., Renard, P., 2012. Probability Aggregation Methods in Geoscience. doi:10.1007/s11004-012-9396-3.

Arpat, G.B., Caers, J., 2007. Conditional simulation with patterns. Mathematical Geology 39, 177–203. doi:10.1007/s11004-006-9075-3.

Borg, I., Groenen, P.J., 2005. Modern Multidimensional Scaling. Springer. doi:10.1007/0-387-28981-X.

Boykov, Y., Kolmogorov, V., 2001. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision, in: Lecture Notes in Computer Science, pp. 359–374. doi:10.1007/3-540-44745-8_24.

Boykov, Y.Y., Jolly, M.P., 2001. Interactive graph cuts for optimal boundary & region segmentation of objects in N-D images, in: Proceedings Eighth IEEE International Conference on Computer Vision (ICCV 2001), pp. 105–112. doi:10.1109/ICCV.2001.937505.

Briere, C., Giardino, A., Werf, J.J.V.D., 2004. Morphological Modelling of Bar Dynamics with Delft3D: The Quest for Optimal Free Parameter Settings Using an Automatic Calibration Technique. Coastal Engineering 2010, 1–12. doi:10.9753/icce.v32.sediment.60.

Chugunova, T.L., Hu, L.Y., 2008. Multiple-point simulations constrained by continuous auxiliary data. Mathematical Geosciences 40, 133–146. doi:10.1007/s11004-007-9142-4.

Criminisi, A., Perez, P., Toyama, K., 2003. Object removal by exemplar-based inpainting, in: Proc. IEEE Computer Vision and Pattern Recognition (CVPR). URL: https://www.microsoft.com/en-us/research/publication/object-removal-by-exemplar-based-inpainting/.

Deutsch, C.V., Tran, T.T., 2002. FLUVSIM: A program for object-based stochastic modeling of fluvial depositional systems. Computers and Geosciences 28, 525–535. doi:10.1016/S0098-3004(01)00075-9.

Dubuisson, M.P., Jain, A.K., 1994. A modified Hausdorff distance for object matching. Proceedings of 12th International Conference on Pattern Recognition 1, 566–568. doi:10.1109/ICPR.1994.576361.

Efros, A., Freeman, W., 2001. Image Quilting for Texture Synthesis and Transfer. Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, 1–6. doi:10.1145/383259.383296.

El Ouassini, A., Saucier, A., Marcotte, D., Favis, B.D., 2008. A patchwork approach to stochastic simulation: A route towards the analysis of morphology in multiphase systems. Chaos, Solitons and Fractals 36, 418–436. doi:10.1016/j.chaos.2006.06.100.

Elias, E.P.L., Walstra, D.J.R., Roelvink, J.A., Stive, M.J.F., Klein, M.D., 2001. Hydrodynamic Validation of Delft3D with Field Measurements at Egmond. Coastal Engineering 2000, 2714–2727. doi:10.1061/40549(276)212.

Faucher, C., Saucier, A., Marcotte, D., 2014. Corrective pattern-matching simulation with controlled local-mean histogram. Stochastic Environmental Research and Risk Assessment 28, 2027–2050. doi:10.1007/s00477-014-0864-9.

Giri, S., Vuren, S.V., Ottevanger, W., Sloff, K., 2008. A preliminary analysis of bedform evolution in the Waal during the 2002–2003 flood event using Delft3D. Marine and River Dune Dynamics, 141–148.

Honarkhah, M., Caers, J., 2010. Stochastic simulation of patterns using distance-based pattern modeling. Mathematical Geosciences 42, 487–517. doi:10.1007/s11004-010-9276-7.

Høyer, A.S., Jørgensen, F., Sandersen, P.B.E., Viezzoli, A., Møller, I., 2015. 3D geological modelling of a complex buried-valley network delineated from borehole and AEM data. Journal of Applied Geophysics 122, 94–102. doi:10.1016/j.jappgeo.2015.09.004.

Huttenlocher, D.P., Klanderman, G.A., Rucklidge, W.J., 1993. Comparing Images Using the Hausdorff Distance. IEEE Transactions on Pattern Analysis and Machine Intelligence 15, 850–863. doi:10.1109/34.232073.

Journel, A.G., 2002. Combining knowledge from diverse sources: An alternative to traditional data independence hypotheses. Mathematical Geology 34, 573–596. doi:10.1023/A:1016047012594.

Journel, A.G., Deutsch, C.V., 1993. Entropy and spatial disorder. Mathematical Geology 25, 329–355. doi:10.1007/BF00901422.

Kim, W., Sheets, B.A., Paola, C., 2010. Steering of experimental channels by lateral basin tilting. Basin Research 22, 286–301. doi:10.1111/j.1365-2117.2009.00419.x.

Kwatra, V., Schodl, A., Essa, I., Turk, G., Bobick, A., 2003. Graphcut textures: Image and video synthesis using graph cuts. ACM Transactions on Graphics 22, 277–286. doi:10.1145/882262.882264.

Lesser, G.R., Roelvink, J.A., van Kester, J.A.T.M., Stelling, G.S., 2004. Development and validation of a three-dimensional morphological model. Coastal Engineering 51, 883–915. doi:10.1016/j.coastaleng.2004.07.014.

Lopez, S., 2003a. Channelized Reservoir Modeling: a Stochastic Process-based Approach. Thesis. École Nationale Supérieure des Mines de Paris. URL: https://pastel.archives-ouvertes.fr/pastel-00000630.

Lopez, S., 2003b. Modélisation de réservoirs chenalisés méandriformes : une approche génétique et stochastique. Ph.D. thesis. Centre de Géostatistique.

Lopez, S., Cojan, I., Rivoirard, J., Galli, A., 2008. Process-based stochastic modelling: meandering channelized reservoirs. Spec. Publ. Int. Assoc. Sedimentol. 40, 139–144.

Maharaja, A., 2008. TiGenerator: Object-based training image generator. Computers and Geosciences 34, 1753–1761. doi:10.1016/j.cageo.2007.08.012.

Mahmud, K., Mariethoz, G., Caers, J., Tahmasebi, P., Baker, A., 2014. Simulation of Earth textures by conditional image quilting. Water Resources Research 50, 3088–3107. doi:10.1002/2013WR015069.

Mariethoz, G., Caers, J., 2014. Multiple-point Geostatistics: Stochastic Modeling with Training Images. doi:10.1002/9781118662953.

Mariethoz, G., Renard, P., Straubhaar, J., 2010. The direct sampling method to perform multiple-point geostatistical simulations. Water Resources Research 46. doi:10.1029/2008WR007621.

Matheron, G., 1963. Principles of geostatistics. Economic Geology 58, 1246–1266. doi:10.2113/gsecongeo.58.8.1246.

Paola, C., 2000. Quantitative models of sedimentary basin filling. doi:10.1046/j.1365-3091.2000.00006.x.

Paola, C., Straub, K., Mohrig, D., Reinhardt, L., 2009. The "unreasonable effectiveness" of stratigraphic and geomorphic experiments. Earth-Science Reviews 97, 1–43. doi:10.1016/j.earscirev.2009.05.003.

Paola, C., Twilley, R.R., Edmonds, D.A., Kim, W., Mohrig, D., Parker, G., Viparelli, E., Voller, V.R., 2011. Natural processes in delta restoration: application to the Mississippi Delta. Ann Rev Mar Sci 3, 67–91. doi:10.1146/annurev-marine-120709-142856.

Renard, P., Allard, D., 2013. Connectivity metrics for subsurface flow and transport. Advances in Water Resources 51, 168–196. doi:10.1016/j.advwatres.2011.12.001.

Scheidt, C., Fernandes, A.M., Paola, C., Caers, J., 2015. Can Geostatistical Models Represent Nature's Variability? An Analysis Using Flume Experiments. AGU Fall Meeting Abstracts.

Scheidt, C., Fernandes, A.M., Paola, C., Caers, J., 2016. Quantifying natural delta variability using a multiple-point geostatistics prior uncertainty model. Journal of Geophysical Research: Earth Surface, 1–19. doi:10.1002/2016JF003922.

Sørensen, K., Auken, E., 2004. SkyTEM – a new high-resolution helicopter transient electromagnetic system. Exploration Geophysics 35, 194. doi:10.1071/eg04194.

Straub, K.M., Paola, C., Mohrig, D., Wolinsky, M.A., George, T., 2009. Compensational Stacking of Channelized Sedimentary Deposits. Journal of Sedimentary Research 79, 673–688. doi:10.2110/jsr.2009.070.

Strebelle, S., 2002. Conditional simulation of complex geological structures using multiple-point statistics. Mathematical Geology 34, 1–21. doi:10.1023/A:1014009426274.

Tahmasebi, P., Hezarkhani, A., Sahimi, M., 2012. Multiple-point geostatistical modeling based on the cross-correlation functions. Computational Geosciences 16, 779–797. doi:10.1007/s10596-012-9287-1.

Tahmasebi, P., Sahimi, M., 2012. Reconstruction of three-dimensional porous media using a single thin section. Physical Review E 85, 1–13. doi:10.1103/PhysRevE.85.066709.

Tahmasebi, P., Sahimi, M., Caers, J., 2014. MS-CCSIM: Accelerating pattern-based geostatistical simulation of categorical variables using a multi-scale search in Fourier space. Computers and Geosciences 67, 75–88. doi:10.1016/j.cageo.2014.03.009.

Tal, M., Paola, C., 2010. Effects of vegetation on channel morphodynamics: Results and insights from laboratory experiments. Earth Surface Processes and Landforms 35, 1014–1028. doi:10.1002/esp.1908.

Thomsen, R., Søndergaard, V.H., Sørensen, K.I., 2004. Hydrogeological mapping as a basis for establishing site-specific groundwater protection zones in Denmark. Hydrogeology Journal 12, 550–562. doi:10.1007/s10040-004-0345-1.

Xu, S., 2014. Integration of Geomorphic Experiment Data in Surface-Based Modeling: From Characterization to Simulation.

Yang, L., Hou, W., Cui, C., Cui, J., 2016. GOSIM: A multi-scale iterative multiple-point statistics algorithm with global optimization. Computers and Geosciences 89, 57–70. doi:10.1016/j.cageo.2015.12.020.

Zhang, T., Du, Y., Huang, T., Yang, J., Li, X., 2015. Stochastic simulation of patterns using ISOMAP for dimensionality reduction of training images. Computers and Geosciences 79, 82–93. doi:10.1016/j.cageo.2015.03.010.

Zhang, T., Switzer, P., Journel, A., 2006. Filter-based classification of training image patterns for spatial simulation. Mathematical Geology 38, 63–80. doi:10.1007/s11004-005-9004-x.
