Artificial-Intelligence Technology
Predicts Relative Permeability of Giant
Carbonate Reservoirs
Saud M. Al-Fattah, SPE, and Hamad A. Al-Naim, SPE, Saudi Aramco
Summary
Determination of relative permeability data is required for almost
all calculations of fluid flow in petroleum reservoirs. Water/oil
relative permeability data play important roles in characterizing
the simultaneous two-phase flow in porous rocks and in predicting
the performance of immiscible displacement processes in oil
reservoirs. They are used, among other applications, for determin-
ing fluid distributions and residual saturations, predicting future
reservoir performance, and estimating ultimate recovery. These
data are arguably the most valuable information required in
reservoir-simulation studies. Estimates
of relative permeability are generally obtained from laboratory
experiments with reservoir-core samples. In the absence of
laboratory measurements, efforts to develop empirical correlations
that yield accurate estimates of relative permeability have shown
limited success, particularly for carbonate reservoir rocks.
Artificial-neural-network (ANN) technology has proved suc-
cessful and useful in solving complex structured and nonlinear
problems. This paper presents a new modeling technology to
predict accurately water/oil relative permeability using ANNs.
The ANN models of relative permeability were developed using
experimental data from waterflood-core-tests samples collected
from carbonate reservoirs of giant Saudi Arabian oil fields. Three
groups of data sets were used for training, verification, and testing
the ANN models. Analysis of results of the testing data set shows
excellent agreement with the experimental relative permeability
data. In addition, error analyses show that the ANN models devel-
oped in this study outperform all published correlations.
The benefits of this work include meeting the increased demand
for conducting special core analysis (SCAL), optimizing the
number of laboratory measurements, integrating into reservoir-
simulation and reservoir-management studies, and providing
significant cost savings on extensive laboratory work and substan-
tial required time.
Introduction
ANNs have attracted greatly increased interest during the past
few years. They are powerful and useful tools for solving
practical problems in the petroleum industry (Mohaghegh
2005; Al-Fattah and Startzman 2003). Advantages of neural
network techniques (Bishop 1995; Fausett 1994; Haykin 1994;
Patterson 1996) over conventional techniques include the ability
to address highly nonlinear relationships, independence from
assumptions about the distribution of input or output variables,
and the ability to address either continuous or categorical data
as either inputs or outputs. In addition, neural networks are
intuitively appealing because they are based on crude low-level
models of biological systems. Neural networks, like biological
systems, learn by example. The neural-network user
provides representative data and trains the neural networks to
learn the behavior of the data.
Design and Development of ANN Models
In regression problems, the objective is to estimate the value of a
continuous variable given the known input variables. Regression pro-
blems can be solved using the following network types: multilayer
perceptrons (MLPs), radial basis function (RBF), generalized-regres-
sion neural network (GRNN), and linear. In this study, we experimen-
ted with the first three types: MLP, RBF, and GRNN. The linear
model is basically the conventional linear-regression analysis. For
the regression problem at hand, we found that GRNN performs best,
for several reasons:
• It usually trains extremely quickly, making feasible the large
number of evaluations required by the input-selection algorithm.
• It is capable of modeling nonlinear functions quite accurately.
• It is relatively sensitive to the inclusion of irrelevant input
variables, which is actually an advantage when trying to decide
whether input variables are required.
Hence, it is worth giving a brief description of this neural-
network type. GRNN uses kernel-based approximation to perform
regression (Patterson 1996; Bishop 1995). It is one of the
so-called Bayesian networks. GRNN has exactly four layers: the
input layer, radial centers’ layer, regression nodes’ layer, and an
output layer, as shown by Fig. 1. The input layer has the same
number of nodes as there are input variables. The radial-layer
nodes represent the centers of clusters of known training data.
This layer must be trained by a clustering algorithm such as sub-
sampling, K-means, or Kohonen training. The regression layer,
which contains linear nodes, must have exactly one node more
than the output layer. There are two types of nodes: The first type
of node calculates the conditional regression for each output vari-
able, whereas the second type of node calculates the probability
density. The output layer performs a specialized function such that
each node simply divides the output of the associated first-type
node by that of the second-type node in the previous layer.
GRNNs can be used only for regression problems. A GRNN
trains almost instantly, but tends to be large and slow. Although it
is not necessary to have one radial neuron for each training data
point, the number still needs to be large. Like the RBF network,
the GRNN does not extrapolate.
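The computation a GRNN performs is kernel-based regression, and it can be sketched in a few lines. The sketch below is ours, not the study's code: Gaussian kernels, a single smoothing factor `sigma`, and the toy data are all assumptions for illustration.

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma=0.1):
    """Kernel-based regression as performed by a GRNN: each training point
    acts as a radial center; the output node divides the regression node's
    weighted sum by the density node's sum of weights."""
    d2 = np.sum((x_train - x_query) ** 2, axis=1)   # distance to each center
    w = np.exp(-d2 / (2.0 * sigma ** 2))            # radial-layer activations
    return np.sum(w * y_train) / np.sum(w)          # conditional mean estimate

# Toy usage: interpolate y = x^2 from 21 samples; a GRNN interpolates
# within the data but, as noted above, does not extrapolate beyond it.
x = np.linspace(0.0, 1.0, 21).reshape(-1, 1)
y = x.ravel() ** 2
print(grnn_predict(x, y, np.array([0.5]), sigma=0.05))  # close to 0.25
```

Note how the division of the weighted sum by the sum of weights mirrors the first-type and second-type nodes described above.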
There are several important procedures that must be taken into
consideration during the design and development of an ANN
model. Fig. 2 is a flowchart illustrating the ANN-development
strategies proposed and implemented in this study.
Data Preparation
Data acquisition, preparation, and quality control are considered
the most important and most time-consuming tasks (Fig. 2). The
quantity of data required for training a neural network frequently
presents difficulties. There are heuristic rules that relate the
number of data points needed to the size of the network. The
simplest of these indicates that there should be 10 times as many
data points as connections in the network. In fact, the number
needed is also related to the complexity of the underlying function
that the network is trying to model, and to the variance of the
additive noise. As the number of input variables increases, the
number of input data points required increases nonlinearly. Even
a fairly small number of input variables (perhaps 50 or less)
requires a huge number of input data points. This problem is
known as “the curse of dimensionality.” If there is a larger, but
Copyright © 2009 Society of Petroleum Engineers
This paper (SPE 109018) was accepted for presentation at Offshore Europe, Aberdeen, 4–7 September 2007, and revised for publication. Original manuscript received for review 16 September 2007. Revised manuscript received for review 14 April 2008. Paper peer approved 10 June 2008.
96 February 2009 SPE Reservoir Evaluation & Engineering
still restricted, data set, then this can be compensated for to some extent
by forming an ensemble of networks, with each network being
trained using a different resampling of the available data and then
averaging across the predictions of the networks in the ensemble.
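The resampling-ensemble remedy just described can be sketched as follows; here an ordinary least-squares line fit stands in for training one network, and all names and toy data are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_line(x, y):
    """Least-squares line fit, standing in for training one network."""
    A = np.column_stack([x, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def ensemble_predict(x, y, x_new, n_models=20):
    """Train each member on a different bootstrap resample of the limited
    data set, then average across the ensemble's predictions."""
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, len(x), size=len(x))  # resample with replacement
        a, b = fit_line(x[idx], y[idx])
        preds.append(a * x_new + b)
    return float(np.mean(preds))

x = np.arange(10.0)
y = 2.0 * x + 1.0                     # noise-free toy data
print(ensemble_predict(x, y, 5.0))    # recovers 2*5 + 1 = 11
```

With noisy data, the averaging reduces the variance that any single member would exhibit on a restricted data set.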
Water/oil relative permeability measurements were collected
for all wells having SCAL data in carbonate reservoirs of Saudi
Arabian oil fields. These reservoirs include Arab-D, Shuaibah, Arab-C,
Arab-AB, Fadhili, Upper Fadhili, Hanifa, and Hadriyah. The
major fields included in this study are the Ghawar field, which is
the largest oil field in the world; and Abqaiq; Shaybah; Qatif;
Khurais; and Berri. SCAL reports were studied thoroughly, and
each relative permeability curve was carefully screened, exam-
ined, and checked for consistency and reliability. Hence, a large
database of water/oil relative permeability data for carbonate
reservoirs was created. All relative permeability experimental data
measurements were conducted using the unsteady-state method.
Developing ANN models for water/oil relative permeability
with easily obtainable input variables was one of the objectives
of this study. Initial water saturation, residual-oil saturation,
porosity, well location, and wettability are considered the main
input variables that contribute significantly to the prediction of
relative permeability data. From these input variables, we derived
several transformational forms, or functional links, that are thought
to play a role in predicting the relative permeability. Table 1
presents a list of all input variables and functional links used in
this study. The initial water saturation, residual-oil saturation, and
porosity of each well can be obtained easily from either well logs
or routine core analysis. Wettability is an important input variable
for predicting the relative permeability data and is, thus, included in
the pool of input variables. We found that not all wells with relative
permeability measurements have wettability data. For those wells
missing wettability data, we used Craig’s rule (Craig 1971) to deter-
mine the wettability of each relative permeability curve, which is
classified as oil-wet, water-wet, or mixed-wet. It should be noted
that Craig’s rule helps to distinguish between strongly-water-wet
and -oil-wet systems on the basis of relative permeability curves. If
no information is available on the wettability of a well, it then can be
estimated by use of offset-well data, or sensitivity analysis can be
performed. The output of each network in this study is a single
variable, either water or oil relative permeability.
Because of the variety of reservoir characteristics, and using
data statistics, the database was divided into three categories of
reservoirs: the Arab-D reservoir, the Shuaibah reservoir, and all
other reservoirs having limited data. This necessitated the
development of six ANN models for predicting water and oil relative
permeability, resulting in two ANN models for each reservoir
category. The database of relative permeability that is used in this
study comprises a total of 3,711 records or cases. Table 2
presents the distribution of these data cases in the three categories
of reservoirs (Arab-D, Shuaibah, and the others).
Data Preprocessing
Data preprocessing is an important procedure in the development
of ANN models. All input and output variables must be converted
into numerical values to be introduced to the network. Nominal
values require special handling. Since the wettability is a nominal
input variable, it is converted into a set of numerical values. Oil-
wet was represented as {1, 0, 0}, mixed-wet as {0, 1, 0}, and
water-wet as {0, 0, 1}. In this study, we applied two normalization
algorithms—mean/standard deviation, and minimax—to ensure
that the network’s input and output will be in a sensible range
(Al-Fattah and Startzman 2003). The simplest normalization
function is the minimax, which finds the minimum and max-
imum values of a variable in the data and performs a linear
transformation using a shift and a scale factor to convert the
values into the target range, which is typically [0.0, 1.0]. After
network execution, denormalizing the output follows the reverse
procedure: subtraction of the shift factor, followed by division by
the scale factor. The mean/SD technique subtracts the data mean
from the input-variable value and divides by the SD. Both methods
process the input and output variables without any loss of
information, and their transforms are mathematically reversible.

Fig. 1—Design of a GRNN used in this study.

Fig. 2—Flowchart of the procedure of ANN design and development proposed in this study.
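The preprocessing steps above, the one-of-N encoding of the nominal wettability variable and the two reversible normalization algorithms, can be illustrated as follows. This is a minimal NumPy sketch; the porosity values are made up for illustration.

```python
import numpy as np

# One-of-N encoding of the nominal wettability variable
WETTABILITY = {"oil-wet": [1, 0, 0], "mixed-wet": [0, 1, 0], "water-wet": [0, 0, 1]}

def minimax(v, lo=0.0, hi=1.0):
    """Linear shift-and-scale transform into the target range [lo, hi]."""
    scale = (hi - lo) / (v.max() - v.min())
    shift = lo - v.min() * scale
    return v * scale + shift, shift, scale

def minimax_inverse(n, shift, scale):
    # Denormalize: subtract the shift factor, then divide by the scale factor
    return (n - shift) / scale

def mean_sd(v):
    """Mean/SD normalization: (value - data mean) / SD."""
    return (v - v.mean()) / v.std(), v.mean(), v.std()

phi = np.array([0.12, 0.18, 0.25, 0.31])  # porosity, fraction (made-up values)
n, shift, scale = minimax(phi)
print(np.allclose(minimax_inverse(n, shift, scale), phi))  # True: lossless
```

Both transforms are exactly reversible, which is why no information is lost in preprocessing.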
Input Selection and Dimensionality Reduction
One of the most difficult tasks in the design of the neural network
is the decision on which of the available variables to use as inputs
to the neural network. The only guaranteed method to select the
best input set is to train networks with all possible input sets and
all possible architectures, and to select the best. Practically, this is
impossible for any significant number of candidate input vari-
ables. The problem is complicated further when there are inter-
dependencies or correlations between some of the input variables,
which means that any of a number of subsets might be adequate.
To some extent, some neural-network architectures can actually
learn to ignore useless variables. Other architectures are adversely
affected, and in all cases a larger number of inputs implies
that a larger number of training cases is required to prevent over-
learning. As a consequence, the performance of a network can be
improved by reducing the number of input variables, even some-
times at the cost of losing some input information. There are
sophisticated algorithms that determine the selection of input vari-
ables. The following describes the input-selection and dimension-
ality-reduction techniques that are used in this study.
Genetic Algorithm. A genetic algorithm is an optimization algo-
rithm that can search efficiently for binary strings by processing
an initially random population of strings using artificial mutation,
crossover, and selection operators, in an analogy with the process
of natural selection (Goldberg 1989). It is applied in this study to
determine an optimal set of input variables that contribute signif-
icantly to the performance of the neural network. The method is
used as part of the model-building process, in which variables
identified as the most relevant are then used in a traditional
model-building stage of the analysis. The genetic algorithm is a
particularly effective technique for combinatorial problems of
this type, in which a set of interrelated yes/no decisions needs
to be made. For this study, it is used to determine whether the
input variable under evaluation is significantly important or not.
The genetic algorithm is therefore a good alternative where there
are large numbers of variables (e.g., more than 50), and it also
provides a valuable second opinion for smaller numbers of vari-
ables. It is particularly good at spotting interdependencies be-
tween variables located close together on the masking strings.
The genetic algorithm can sometimes identify subsets of inputs
that are not discovered by other techniques. However, the meth-
od is time consuming; it typically requires building and testing
many thousands of networks, resulting in running the program
for a couple of days.
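A minimal genetic algorithm over binary input masks can be sketched as follows. The fitness function here is a stand-in of our own: the validation error of a simple least-squares model on synthetic data, plus a small per-input penalty; in the study the fitness was the error of a trained network, which is why the real search takes days rather than seconds.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the database: only inputs 0 and 2 carry signal
X = rng.normal(size=(200, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + 0.1 * rng.normal(size=200)

def fitness(mask):
    """Validation error of a least-squares stand-in model on the masked
    inputs, plus a small penalty so irrelevant inputs cost something."""
    if not mask.any():
        return np.inf
    Xm = X[:, mask]
    tr, va = slice(0, 150), slice(150, None)
    coef, *_ = np.linalg.lstsq(Xm[tr], y[tr], rcond=None)
    return float(np.mean((y[va] - Xm[va] @ coef) ** 2)) + 0.01 * mask.sum()

def genetic_select(n_inputs=5, pop=20, gens=30, p_mut=0.1):
    masks = rng.integers(0, 2, size=(pop, n_inputs)).astype(bool)
    for _ in range(gens):
        masks = masks[np.argsort([fitness(m) for m in masks])]
        children = []
        for _ in range(pop // 2):                  # breed from the fitter half
            a, b = masks[rng.integers(0, pop // 2, size=2)]
            cut = int(rng.integers(1, n_inputs))   # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n_inputs) < p_mut  # random mutation
            children.append(child)
        masks[pop // 2:] = children                # elitism: keep the fitter half
    return min(masks, key=fitness)

print(genetic_select())  # mask should retain inputs 0 and 2
```

The mutation, crossover, and selection operators are exactly the yes/no combinatorial machinery described above; only the fitness evaluation differs from the study's.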
Forward and Backward Stepwise Algorithms. These algo-
rithms (Hill and Lewicki 2006) are usually quicker than the genet-
ic algorithm if there is a reasonably small number of variables.
They are also equally effective if there are not too many complex
interdependencies between variables. Forward and backward
stepwise-input-selection algorithms work by adding or removing
variables one at a time. Forward selection begins by locating the
single input variable that, on its own, best predicts the output
variable. It then checks for a second variable that, when added to
the first, improves the model most, repeating this process until
either all variables have been selected or no further improvement
is made. Backward stepwise feature selection is the reverse pro-
cess; it starts with a model including all variables, and then
removes them one at a time, at each stage finding the variable
that, when it is removed, degrades the model least.
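The forward stepwise procedure described above can be sketched generically. The scoring function, a least-squares validation error on synthetic data, is our stand-in for testing a GRNN on each candidate input set.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 6))                    # six candidate inputs
y = 2.0 * X[:, 1] + X[:, 4] + 0.05 * rng.normal(size=200)

def val_mse(Xs, y):
    """Stand-in scorer: validation error of a least-squares fit."""
    tr, va = slice(0, 150), slice(150, None)
    coef, *_ = np.linalg.lstsq(Xs[tr], y[tr], rcond=None)
    return float(np.mean((y[va] - Xs[va] @ coef) ** 2))

def forward_select(X, y, score):
    """Greedy forward stepwise selection: repeatedly add the single
    variable whose inclusion improves the model most; stop when no
    candidate improves the score."""
    selected, best_err = [], np.inf
    remaining = list(range(X.shape[1]))
    while remaining:
        err, j = min((score(X[:, selected + [j]], y), j) for j in remaining)
        if err >= best_err:
            break
        selected.append(j)
        remaining.remove(j)
        best_err = err
    return selected, best_err

sel, err = forward_select(X, y, val_mse)
print(sel[:2])  # the informative inputs, 1 then 4, are chosen first
```

Backward selection is the mirror image: start with all columns and drop, at each step, the one whose removal degrades the score least.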
Forward- and backward-selection methods each have their
advantages and disadvantages. The forward-selection method is
generally faster. It may miss key variables if they are interdepen-
dent or correlated. The backward-selection method does not suffer
from this problem, but because it starts with the whole set of
variables, the initial evaluations are the most time consuming.
Furthermore, the sheer number of variables can degrade the model,
making it difficult for the algorithm to behave sensibly when there
are many variables, especially if only a few weakly predictive
ones are in the set. In contrast, because
it selects only a few variables initially, forward selection can
succeed in this situation. Forward selection is also much faster if
there are few relevant variables because it will locate them at the
beginning of its search, whereas backward selection will not whit-
tle away the irrelevant ones until the very end of its search.
In general, backward selection is to be preferred if there is a
small number of variables (e.g., 20 variables or less), and forward
selection may be better for larger numbers. All of the above-
mentioned input-selection algorithms, including the genetic
algorithm and forward and backward selection, evaluate feature-
selection masks. These are used to select the input variables for a
new training set, and a GRNN is tested on this training set.
Sensitivity Analysis. This is performed on the inputs to a neural
network to indicate those input variables that are considered most
important by that particular neural network. Sensitivity analysis
can be used purely for informative purposes or to perform input
pruning, which is removing excess neurons from input or hidden
layers. In general, input variables are not independent. Sensitivity
analysis gauges variables according to the deterioration in modeling
performance that occurs if that variable is not available to the
model. However, the interdependence between variables means
that no scheme of single ratings per variable can ever reflect the
subtlety of the true situation. In addition, there may be interdepen-
dent variables that are useful only if included as a set. If the entire
set is included in a model, they can be accorded significant sensi-
tivity, but this does not reveal the interdependency. Worse, if only
part of the interdependent set is included, their sensitivity will be
zero because they carry no discernable information. In summary,
precautions should be exercised when drawing conclusions about
the importance of variables because sensitivity analysis does not
rate the usefulness of variables in modeling in a reliable or abso-
lute manner. Nonetheless, in practice, sensitivity analysis is quite
useful. If a number of models are studied, it is often possible to
identify variables that are always of high sensitivity, variables that
are always of low sensitivity, and ambiguous variables that change
ratings and probably carry mutually redundant information.
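The error-ratio form of sensitivity analysis can be sketched as follows: withhold one input at a time by replacing it with its mean, and report the ratio of the resulting error to the baseline error. The model and data below are toy stand-ins of our own, not the study's networks.

```python
import numpy as np

def sensitivity_ratios(predict, X, y):
    """Error ratio per input: error with that input withheld (replaced by
    its mean) divided by the baseline error. Ratios well above 1 mark
    influential inputs; ratios near 1 mark candidates for removal."""
    base = np.sqrt(np.mean((y - predict(X)) ** 2))
    ratios = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = X[:, j].mean()      # make input j uninformative
        ratios.append(np.sqrt(np.mean((y - predict(Xp)) ** 2)) / base)
    return np.array(ratios)

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 3))
y = 4.0 * X[:, 0] + 0.5 * X[:, 2] + 0.2 * rng.normal(size=500)
predict = lambda Z: 4.0 * Z[:, 0] + 0.5 * Z[:, 2]   # a "trained" toy model
print(sensitivity_ratios(predict, X, y))  # input 0 largest, input 1 near 1
```

As the caveats above warn, such single-variable ratios say nothing about interdependent sets; they are most trustworthy when consistent across several models.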
Another common approach to dimensionality reduction is
principal-component analysis (Bishop 1995), which can be
represented in a linear network. It can often extract a very small
number of components from quite high-dimensional original data
and still retain the important structure.
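A principal-component reduction along these lines can be sketched with a plain SVD on toy data; the SVD route shown here is one standard way to compute the components that a linear network can also represent.

```python
import numpy as np

def pca_scores(X, k):
    """Project mean-centered data onto its first k principal components."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s ** 2 / (len(X) - 1)        # variance captured by each component
    return Xc @ Vt[:k].T, var

rng = np.random.default_rng(4)
t = rng.normal(size=(300, 1))          # one underlying factor
X = np.hstack([t,
               2.0 * t + 0.05 * rng.normal(size=(300, 1)),
               -1.0 * t + 0.05 * rng.normal(size=(300, 1))])
scores, var = pca_scores(X, 1)
print(var[0] / var.sum())  # close to 1: one component retains the structure
```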
Applying the above-mentioned input-selection methods in this
study, we first used the genetic algorithm to identify redundant input
variables from the 25 variables given in Table 1. For the Arab-D-
reservoir ANN model, this step identified four redundant input vari-
ables that can be removed from the input pool. In the second step,
we applied forward and backward stepwise selection on the remain-
ing input variables. Both the forward and the backward algorithms
yielded the same results by identifying six additional redundant
input variables. We then ran the network with the remaining 15 input
variables while running the sensitivity analysis simultaneously.

Fig. 3—Error ratio and ranking of the influence of input variables on the ANN model for Arab-D-reservoir water relative permeability.

Fig. 4—Results of the ANN model compared to experimental data for oil relative permeability.

The
network gave a very good performance; however, sensitivity anal-
ysis indicated that two input variables can be removed from the
network without affecting the performance of the ANN model.
Fig. 3 presents the results of sensitivity analysis for the top 10 most
influential input variables for the Arab-D-reservoir ANN model,
showing the error that the model would give if a particular input
variable were excluded from the network. Wettability gives the
highest error when removed, indicating its significance to the
performance of the ANN model; thus, it was ranked first among
the variables. In this study, we determined that input variables that have
error ratios greater than one are significant and influential to the
network performance. Input variables having error ratios less than
one are removed from the network. The final ANN model con-
sisted of 13 input variables as an optimum input set.
Training, Verifying, and Testing
By exposing the network repeatedly to input data, the weights and
thresholds of the post-synaptic potential function are adjusted
using special training algorithms until the network predicts the
output accurately. In this study, the data are
divided into three subsets: training set (50–60% of data), verifica-
tion or validation set (20–25% of data), and testing set (20–25%
of data), as presented in Table 2. Typically, the training-data
subset is presented to the network in several or even hundreds of
iterations. Each presentation of the training data to the network for
adjustment of weights and thresholds is referred to as an epoch.
The procedure continues until the overall error function has been
minimized sufficiently. The overall error is also computed for the
second subset of the data, which is sometimes referred to as the
verification or validation data. The verification data act as a
watchdog and take no part in the adjustment of weights and
thresholds during training, but the network's performance is
checked against this subset continually as training proceeds. The training
is stopped when the error for the verification data stops decreasing
or starts to increase. Use of the verification subset of data is
important because with unlimited training, the neural network
usually starts “overlearning” the training data. Given no restric-
tions on training, a neural network may describe the training data
almost perfectly but may generalize very poorly to new data.
The use of the verification subset to stop training at a point when
generalization potential is best is a critical consideration in train-
ing neural networks. A third subset of testing data is used to serve
as an additional independent check on the generalization capabil-
ities of the neural network, and as a blind test of the performance
and accuracy of the network. Several neural-network architectures
and training algorithms were tried to achieve the best
results. The results were obtained using a hybrid approach of
genetic algorithm and neural network.
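The role of the verification subset described above amounts to early stopping, which can be sketched generically. The `step` and `verify_err` callbacks below are toy stand-ins, with names of our choosing, for one epoch of weight/threshold adjustment and an error check against the verification data.

```python
import numpy as np

def train_with_early_stopping(step, verify_err, max_epochs=500, patience=10):
    """The verification set takes no part in weight adjustment; it is the
    watchdog: training halts once its error stops decreasing."""
    best, best_epoch, since = np.inf, -1, 0
    for epoch in range(max_epochs):
        step()                        # one epoch of weight/threshold updates
        v = verify_err()              # watchdog check on verification subset
        if v < best:
            best, best_epoch, since = v, epoch, 0
        else:
            since += 1
            if since >= patience:     # overlearning has begun: stop
                break
    return best_epoch, best

# Toy stand-in: a verification-error curve that falls, then rises (overlearning)
state = {"e": 0}
step = lambda: state.update(e=state["e"] + 1)
verify_err = lambda: (state["e"] - 50) ** 2 / 2500.0 + 1.0
print(train_with_early_stopping(step, verify_err))  # stops near the epoch-49 minimum
```

A small patience window guards against stopping on a momentary plateau rather than a genuine rise in verification error.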
Results
All six networks developed in this study were successfully
trained, verified, and checked for generalization. An
important measure of the network performance is the plot of the
root-mean-square error vs. the number of iterations or epochs. A
well-trained network is characterized by decreasing errors for both
the training and verification data sets as the number of iterations
increases (Al-Fattah and Startzman 2003). The statistical measures
used in this study to examine the performance of a network are
the output-data SD, output error mean, output error SD, output
absolute error mean, SD ratio, and Pearson-R correlation coefficient
(Hill and Lewicki 2006). The most significant parameter is the SD
ratio, which measures the performance of the neural network. It is
the best indicator of the goodness of a regression model, and it is
defined as the ratio of the prediction-error SD to the data SD. One
minus this regression ratio is sometimes referred to as the explained
variance of the model.

Fig. 5—Results of the ANN model compared to experimental data for oil relative permeability.

Fig. 6—Results of the ANN model compared to experimental data for water relative permeability.

Fig. 7—Results of the ANN model compared to experimental data for water relative permeability of Well U-628.

Fig. 8—Results of the ANN model compared to experimental data for water relative permeability of Well SB-50.

The degree of predictive accuracy needed varies from application
to application. Generally, an SD ratio of 0.3
or lower indicates a very good regression-performance network.
Another important parameter is the standard Pearson-R correlation
coefficient between the network’s prediction and the observed
values. A perfect prediction will have a correlation coefficient of
1.0. In this study, we used the verification-data subset to judge
and compare the performance of competing networks.
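The performance measures defined above can be computed directly. The sketch below uses synthetic near-perfect predictions, and the function and variable names are ours.

```python
import numpy as np

def regression_report(y_true, y_pred):
    """SD ratio (prediction-error SD / data SD), Pearson R, and the
    'explained variance' defined as one minus the SD ratio."""
    sd_ratio = (y_true - y_pred).std() / y_true.std()
    pearson_r = np.corrcoef(y_true, y_pred)[0, 1]
    return sd_ratio, pearson_r, 1.0 - sd_ratio

rng = np.random.default_rng(5)
y_true = rng.uniform(0.0, 1.0, size=200)         # e.g. measured kr values
y_pred = y_true + 0.02 * rng.normal(size=200)    # near-perfect predictions
sd_ratio, r, explained = regression_report(y_true, y_pred)
print(sd_ratio < 0.3, r > 0.95)  # a very good regression by both criteria
```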
Because of its large quantity of data (70% of the database), most
of the results presented in this paper belong to the ANN models
developed for the Arab-D reservoir. Tables 3 and 4 present statisti-
cal analysis of the ANN models for determining oil and water
relative permeability, respectively, for Arab-D reservoir. Table 3
shows that Arab-D-reservoir ANN models for predicting oil relative
permeability achieved excellent accuracy by having low values of
SD ratios, which are lower than 0.2 for all data subsets including
training-, verification-, and testing-data sets. Table 3 also shows that
a correlation coefficient of 99% was achieved for all data subsets of
the Arab-D-reservoir model, indicating the high accuracy of the
ANN models for predicting the oil relative permeability data. Table 4
shows that the water relative permeability ANN model yielded a
correlation coefficient of 96% for all three data subsets of the
Arab-D-reservoir model, with SD ratios less than 0.3 indicating a
high degree of accuracy and excellent performance.
Figs. 4 through 8 show that the results of ANN models are in
excellent agreement with the experimental data of oil and water
relative permeability. Crossplots of measured vs. predicted data of
oil and water relative permeability are presented in Figs. 9 and 10,
respectively. The majority of the data fall close to the perfect 45°
straight line, indicating the high degree of accuracy of the ANN
models. Figs. 11 and 12 are histograms, respectively, of residual
errors of oil and water relative permeability ANN models for Arab-
D reservoir.
Sensitivity analysis was performed on all input variables to
identify significant variables that are influential on the network’s
performance. Wettability was not found to be an important input
parameter for determining oil relative permeability in any of the ANN
models. On the other hand, wettability was found to be the
most influential input parameter for determining water relative
permeability. Fig. 3 presents the most influential input variables
that play an important role on the network’s outcome for deter-
mining water relative permeability. As can be seen from this
figure, wettability is placed first as the most significant input
parameter for predicting water relative permeability. To study the
effect of wettability on the network predictions, we removed the
wettability from the input variables and ran the network. Statisti-
cal analysis of the network performance is presented in Table 5.
The model's accuracy deteriorated markedly after removing the
wettability, indicating the significance of wettability in
determining water relative permeability. Without using the wettability
as input, the ANN model has a correlation coefficient of 79% for
the verification subset and 51% for the testing subset. In addition,
SD ratios of more than 0.6 were given by this model, indicating
the poor performance of the ANN model for water relative perme-
ability after removing the wettability as input.
Comparison of ANN against Correlations
The newly developed ANN models for predicting water/oil
relative permeability of carbonate reservoirs were validated using
data that were not used in the training of the ANN models.
Fig. 9—Crossplot of ANN-predicted vs. measured kro for the Arab-D reservoir.

Fig. 10—Crossplot of ANN-predicted vs. measured krw for the Arab-D reservoir.

Fig. 11—Histogram of kro residual error for the Arab-D-reservoir model.

Fig. 12—Histogram of krw residual error for the Arab-D-reservoir model.
This step was performed to examine the applicability of the
ANN models and to evaluate their accuracy against correlations
previously published in the literature. The new ANN models were
compared with published correlations of Wyllie (1951), Pirson
(1958), Naar et al. (1962), Jones and Roszelle (1978), Land
(1968), and Honarpour et al. (1986, 1982). The relative perme-
ability data used for the comparison are for wells in the testing-
data subset. No attempt was made in this study to generate new
coefficients of the published correlations by use of the same data
used for the comparison. The correlations were used with their
original coefficients to be compared with the GRNN predictions
using the testing-data subset. Fig. 13 shows the results of compar-
ison of the ANN model against published correlations for predict-
ing oil relative permeability for one of the oil wells in the
carbonate reservoir. The results of comparison showed that the
ANN models reproduced more accurately the experimental rela-
tive permeability data than the published correlations. Although
the Honarpour et al. (1986) correlation gives the closest results
to the experimental data among the correlations, it does not
honor the oil relative permeability data at the initial water satura-
tion by yielding a value greater than one.
Fig. 14 presents a comparison of results of ANN models
against the correlations for predicting water relative permeability
data for an oil well in the Ghawar field. The results clearly show
the excellent agreement of the ANN model with the experimental
data and the high degree of accuracy achieved by the ANN model
compared to all published correlations considered in this study.
This study differs from previous ANN work (Slipngarmlers et al.
2002) in several respects: it used a large database of relative
permeability for giant carbonate reservoirs; its models rely mainly
on six input variables that can be obtained easily without additional
complicated experiments; it used a different network type (GRNN);
and it achieved a higher degree of accuracy and performance. In
addition, for the development of the ANN models, this study
implemented several input-
selection methods and also used three data subsets (training, verifica-
tion, and testing), making sure that the network trained very well
and avoiding the overlearning problem. Slipngarmlers et al. (2002)
used only two data subsets (training and testing).
Conclusion
In this study, we developed new prediction models for determining
water/oil relative permeability using ANN modeling technology for
giant and complex carbonate reservoirs. The ANN models were
developed by use of a hybrid approach of genetic algorithms and
ANNs. The models were successfully trained, verified, and tested
using the GRNN algorithm. To the authors’ knowledge, this is the
first study that uses this type of network, GRNN, in the application
of relative permeability determination. Variable-selection and
dimensionality-reduction techniques, critical procedures in the
design and development of ANN models, were also presented and
applied in this study.
Analysis of results of the blind-testing data set of all ANN
models shows excellent agreement with the experimental relative
permeability data. The ANN models outperform all published
empirical equations considered, achieving excellent performance
and a high degree of accuracy.
It is hoped that this study will benefit reservoir-simulation
and reservoir-management studies. It reduces the cycle time of the
history-matching process, which leads to improved performance
predictions, and it provides reliable estimates of relative
permeability for existing and newly drilled wells for which
experimental data are unavailable.
Nomenclature
Son = normalized oil saturation
Sor = residual oil saturation, fraction
Sw = water saturation, fraction
Swi = irreducible water saturation, fraction
Swn = normalized water saturation
φ = porosity, fraction
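The normalized saturations listed above are conventionally built from the endpoint saturations. The sketch below uses the standard textbook definitions, which are an assumption here since the paper's own equations fall outside this excerpt; the function name and the example values are illustrative:

```python
def normalized_saturations(sw, swi, sor):
    """Conventional normalized water/oil saturations (assumed standard
    definitions, not necessarily the paper's exact equations).
    sw: water saturation; swi: irreducible water saturation;
    sor: residual oil saturation -- all fractions of pore volume."""
    movable = 1.0 - swi - sor           # movable saturation range
    swn = (sw - swi) / movable          # normalized water saturation, Swn
    son = (1.0 - sw - sor) / movable    # normalized oil saturation, Son
    return swn, son
```

With these definitions the two normalized saturations always sum to unity, and Swn runs from 0 at Swi to 1 at (1 − Sor), which is why correlations are usually expressed in terms of them.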
Acknowledgments
The authors would like to thank Saudi Aramco management for
their permission to publish this paper. Special thanks to Ahmed A.
Al-Moosa, Faisal Al-Shuraidah, and Fawzi Al-Matar, all of Saudi
Aramco, for the great support received during the course of this
project. Thanks are extended to the Petrophysics Unit of Saudi
Aramco’s EXPEC Advanced Research Center for providing the
data used in this work.
References
Al-Fattah, S.M., and Startzman, R.A. 2003. Neural Network Approach
Predicts US Natural Gas Production. SPEPF 18 (2): 84–91. SPE-
82411-PA. DOI: 10.2118/82411-PA.
Fig. 13—Comparison of ANN model and correlations for predicting kro for one well in the Ghawar field.
Fig. 14—Comparison of ANN model and correlations for predicting krw for one well in the Ghawar field.
102 February 2009 SPE Reservoir Evaluation & Engineering
Bishop, C.M. 1995. Neural Networks for Pattern Recognition. Oxford,
UK: Clarendon Press.
Craig, F.F. 1971. The Reservoir Engineering Aspects of Waterflooding.
Monograph Series, SPE, Richardson, Texas, 3.
Fausett, L. 1994. Fundamentals of Neural Networks Architectures, Algo-
rithms, and Applications. Englewood Cliffs, New Jersey: Prentice-
Hall Inc.
Goldberg, D.E. 1989. Genetic Algorithms in Search, Optimization, and
Machine Learning. Columbus, Ohio: Addison-Wesley.
Haykin, S. 1994. Neural Networks: A Comprehensive Foundation. New
York City: MacMillan Publishing.
Hill, T. and Lewicki, P. 2006. Statistics: Methods and Applications. Tulsa:
StatSoft.
Honarpour, M., Koederitz, L., and Harvey, A.H. 1982. Empirical Equa-
tions for Estimating Two-Phase Relative Permeability in Consolidated
Rock. JPT 34 (12): 2905–2908. SPE-9966-PA. DOI: 10.2118/9966-PA.
Honarpour, M., Koederitz, L., and Harvey, A.H. 1986. Relative Perme-
ability of Petroleum Reservoirs. Boca Raton, Florida: CRC Press Inc.
Jones, S.C. and Roszelle, W.O. 1978. Graphical Techniques for Determin-
ing Relative Permeability from Displacement Experiments. JPT 30 (5):
807–817; Trans., AIME, 265. SPE-6045-PA. DOI: 10.2118/6045-PA.
Land, C.S. 1968. Calculation of Imbibition Relative Permeability for Two-
and Three-Phase Flow From Rock Properties. SPEJ 8 (2): 149–156;
Trans., AIME, 243. SPE-1942-PA. DOI: 10.2118/1942-PA.
Mohaghegh, S.D. 2005. Recent Developments in Application of Artifi-
cial Intelligence in Petroleum Engineering. JPT 57 (4): 86–91. SPE-
89033-MS. DOI: 10.2118/89033-MS.
Naar, J., Wygal, R.J., and Henderson, J.H. 1962. Imbibition Relative
Permeability in Unconsolidated Porous Media. SPEJ 2 (1): 13–17;
Trans., AIME, 225. SPE-213-PA. DOI: 10.2118/213-PA.
Patterson, D. 1996. Artificial Neural Networks. Singapore: Prentice Hall.
Pirson, S.J. 1958. Oil Reservoir Engineering. New York City: McGraw-
Hill.
Slipngarmlers, N., Guler, B., Ertekin, T., and Grader, A.S. 2002. Develop-
ment and Testing of Two-Phase Relative Permeability Predictors
Using Artificial Neural Networks. SPEJ 7 (3): 299–308. SPE-79547-
PA. DOI: 10.2118/79547-PA.
Wyllie, M.R.J. 1951. A Note on the Interrelationship Between Wetting
and Nonwetting Phase Relative Permeability. Trans., AIME 192:
381–382.
Saud M. Al-Fattah is a researcher currently at Saudi
Aramco’s corporate planning in Dhahran, Saudi Arabia.
E-mail: saud.fattah@aramco.com. He worked in several
departments of E&P at Saudi Aramco including reservoir
management, production and facilities development, and
petroleum engineering applications services. Al-Fattah’s
areas of specialty include reservoir engineering, artificial
intelligence, operations research, economic evaluation,
and energy forecasting. He holds a PhD degree from Texas
A&M University, College Station, Texas, and MS and BS
degrees with honors from King Fahd University of Petroleum
and Minerals (KFUPM), Dhahran, Saudi Arabia, all in petro-
leum engineering. Al-Fattah was awarded the 2006 SPE
Saudi Arabia Technical Symposium’s Best Paper of the Year
award (first place). Al-Fattah is an active member of SPE; he
is a technical editor of SPE Reservoir Evaluation & Engineering,
a mentor in the SPE e-Mentoring program since
2005, vice chairman of the 2006 SPE Saudi Arabia Annual
Technical Symposium, and chairman of the 2007 SPE Saudi
Arabia Annual Technical Symposium. Hamad A. Al-Naim is
currently on assignment as a reservoir engineer in the reser-
voir management department at Saudi Aramco, Dhahran,
Saudi Arabia. E-mail: hamad.naim@aramco.com. He
worked previously as a petroleum engineering systems analyst
in the petroleum engineering applications services department
at Saudi Aramco. Al-Naim holds a BS degree in computer
engineering from KFUPM, Saudi Arabia (2001) and an MS degree
in petroleum engineering from the University of Calgary (2006).