Neural Network Approach Predicts U.S. Natural Gas Production

Copyright 2001, Society of Petroleum Engineers Inc.
This paper was prepared for presentation at the 2001 SPE Production and Operations
Symposium held in Oklahoma City, Oklahoma, 24-27 March 2001.
This paper was selected for presentation by an SPE Program Committee following review of
information contained in an abstract submitted by the author(s). Contents of the paper, as
presented, have not been reviewed by the Society of Petroleum Engineers and are subject to
correction by the author(s). The material, as presented, does not necessarily reflect any
position of the Society of Petroleum Engineers, its officers, or members. Papers presented at
SPE meetings are subject to publication review by Editorial Committees of the Society of
Petroleum Engineers. Electronic reproduction, distribution, or storage of any part of this paper
for commercial purposes without the written consent of the Society of Petroleum Engineers is
prohibited. Permission to reproduce in print is restricted to an abstract of not more than 300
words; illustrations may not be copied. The abstract must contain conspicuous
acknowledgment of where and by whom the paper was presented. Write Librarian, SPE, P.O.
Box 833836, Richardson, TX 75083-3836, U.S.A., fax 01-972-952-9435.
Abstract
The industrial and residential market for natural gas produced
in the United States has become increasingly significant.
Within the past ten years the wellhead value of produced
natural gas has rivaled and sometimes exceeded the value of
crude oil. Forecasting natural gas supply is an economically
important and challenging endeavor. This paper presents a
new approach to predict natural gas production for the United
States using an artificial neural network.
We developed a neural network model to forecast U.S.
natural gas supply to the year 2020. Our results indicate that
the U.S. will maintain its 1999 level of natural gas production
until 2001, after which production starts to increase. The
network model indicates that natural gas production will
increase during the period 2002 to 2012 at an average rate of
0.5%/yr. This rate of increase will more than double for the
period 2013 to 2020.
The neural network was developed with an initial large
pool of input parameters. The input pool included exploratory,
drilling, production, and econometric data. Preprocessing the
input data involved normalization and functional
transformation. Dimension reduction techniques and
sensitivity analysis of input variables were used to reduce
redundant and unimportant input parameters, and to simplify
the neural network. The remaining input parameters of the
reduced neural network included data of gas exploratory wells,
oil/gas exploratory wells, oil exploratory wells, gas depletion
rate, proved reserves, gas wellhead prices, and growth rate of
gross domestic product. The three-layer neural network was
successfully trained with yearly data starting from 1950 to
1989 using the quick-propagation learning algorithm. The
target output of the neural network is the production rate of
natural gas. The agreement between predicted and actual
production rates was excellent. A test set, not used to train the
network and containing data from 1990 to 1998, was used to
verify and validate the network performance for prediction.
Analysis of the test results shows that the neural network
approach provides an excellent match of actual gas production
data. An econometric approach, called stochastic modeling or
time series analysis, was used to develop forecasting models
for the neural network input parameters. A comparison of
forecasts between this study and another forecast is presented.
The neural network model is useful as both a short-term and
a long-term predictive tool of natural gas supply. The model
can also be used to examine quantitatively the effects of the
various physical and economic factors on future gas
production.
Introduction
In recent years, there has been a growing interest in applying
artificial neural networks1-4 (NN) to various areas of science,
engineering, and finance. Among other applications4 of neural
networks to petroleum engineering, NN’s have been used for
pattern recognition in well test interpretation5, and for
prediction in well logs4 and in phase behavior.6
Artificial neural networks are an information processing
technology inspired by the studies of the brain and nervous
system. In other words, they are computational models of
biological neural structures. Each NN generally consists of a
number of interconnected processing elements (PE) or neurons
grouped in layers. Fig. 1 shows the basic structure of a three-
layer network: one input layer, one hidden layer, and one
output layer. The neuron consists of multiple inputs and a
single output. Input is the values of the independent variables
and output is the dependent variables. Each input is modified
by a weight, which multiplies with the input value. The input
can be raw data or output of other PE’s or neurons. With
reference to a threshold value and activation function, the
neuron will combine these weighted inputs and use them to
determine its output. The output can be either the final product
or an input to another neuron.
This paper describes the methodology of developing an
artificial neural network model to predict U.S. natural gas
production. It presents the results of neural network modeling
approach and compares it to other modeling approaches.
SPE 67260
Neural Network Approach Predicts U.S. Natural Gas Production
S.M. Al-Fattah, SPE, Saudi Aramco, and R.A. Startzman, SPE, Texas A&M University
Data Sources
The data used to develop the ANN model for U.S. gas
production were collected mostly from the Energy Information
Admin. (EIA).7 U.S. marketed-gas production for the period
1918-97 was obtained from the Twentieth Century Petroleum
Statistics8-9; the 1998 production data were from the EIA. Gas
discovery data from 1900 to 1998 were from Refs. 7 and 10.
Proved gas reserves for the period 1949 to 1999 were from the
Oil and Gas Journal (OGJ) database.11 The EIA provides various
statistics of historical U.S. energy data, including gas
production, exploration, drilling, and econometrics. These data
are publicly available and can be downloaded from the EIA
website.7 The following data (1949 to 1998) were downloaded:
- Gas discovery rate
- Population
- Gas wellhead price
- Oil wellhead price
- Gross domestic product (DG) using purchasing power parity (PPP), based on 1992 U.S. dollars
- Gas exploratory wells:
  o Footage drilled
  o Wells drilled
- Oil exploratory wells:
  o Footage drilled
  o Wells drilled
  o Percentage of successful wells drilled
- Oil and gas exploratory wells:
  o Footage drilled
  o Wells drilled
- Proved gas reserves
Other input parameters were also derived from the data
parameters above. These derived input parameters include:
Gross Domestic Product growth rate: measures the
growth rate of the gross domestic product. This input
parameter was calculated using the following formula12:
$$\mathrm{GDP}_i = \frac{D_{G,i+1} - D_{G,i}}{D_{G,i}\,(t_{i+1} - t_i)} \times 100 , \quad \ldots\ldots\ldots (1)$$
where DG = gross domestic product, GDP = growth rate of
gross domestic product, t = time, and i = observation number.
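As a minimal sketch, the growth-rate formula of Eq. 1 can be coded as follows; the GDP values and years below are hypothetical illustration figures, not data from the paper.

```python
import numpy as np

def gdp_growth_rate(dg, t):
    """GDP growth rate (Eq. 1): percent change in GDP per unit time.

    dg : GDP values (D_G); t : observation times (years).
    Returns one fewer point than the input, as in the paper's indexing.
    """
    dg = np.asarray(dg, dtype=float)
    t = np.asarray(t, dtype=float)
    return (dg[1:] - dg[:-1]) / (dg[:-1] * (t[1:] - t[:-1])) * 100.0

# Hypothetical GDP series (1992 U.S. dollars, PPP basis assumed)
years = [1995, 1996, 1997]
gdp = [7000.0, 7210.0, 7426.3]
print(gdp_growth_rate(gdp, years))  # ~= [3.0, 3.0] %/yr
```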
Average depth drilled per well: calculated by dividing the
footage drilled by the number of exploratory wells drilled each
year. This is done for the gas exploratory wells, oil
exploratory wells, and oil-and-gas exploratory wells, resulting
in three additional input variables.
Depletion rate: measures how fast the reserves are being
depleted each year at that year’s production rate. It is
calculated as the annual production divided by the proved
reserves expressed in percentage.
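The two derived parameters above can be sketched directly from their definitions; all numbers below are hypothetical yearly values for illustration only.

```python
def avg_depth_per_well(footage, wells):
    # Average exploratory-well depth: footage drilled / wells drilled per year
    return [f / w for f, w in zip(footage, wells)]

def depletion_rate(production, proved_reserves):
    # Annual depletion rate, %: production / proved reserves x 100
    return [100.0 * q / r for q, r in zip(production, proved_reserves)]

# Hypothetical yearly data (footage in ft, production/reserves in Tcf)
print(avg_depth_per_well([1.2e6, 1.5e6], [200, 250]))  # [6000.0, 6000.0]
print(depletion_rate([19.0, 19.5], [167.0, 164.0]))    # ~= [11.38, 11.89] %
```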
Data Preprocessing
Data preparation is a critical procedure in the development of
an artificial neural network system. The preprocessing
procedures used in the construction process of the NN model
of this study are input/output normalization and
transformation.
Normalization
Normalization is a process of standardizing the possible
numerical range that the input data can take. It enhances the
fairness of training by preventing an input with large values
from swamping out another input that is equally important but
with smaller values. Normalization is also recommended
because the network training parameters can be tuned for a
given range of input data; thus, the training process can be
carried over to similar tasks.
We used the mean/standard deviation normalization
method to normalize all the input and output variables of the
neural network. Mean/standard-deviation preprocessing is
the most commonly used method and generally works well
in almost every case. It has the advantages that it processes
the input variable without any loss of information and its
transform is mathematically reversible. Each input variable as
well as the output were normalized using the following
formula13:
$$\bar{X}_i = \frac{X_i - \mu_i}{\sigma_i} , \quad \ldots\ldots\ldots (2)$$
where $\bar{X}$ = normalized input/output vector, $X$ = original
input/output vector, $\mu$ = mean of the original input/output
vector, $\sigma$ = standard deviation of the input/output vector,
and $i$ = index of the input/output vector. Each input/output variable was
normalized using the equation above with its mean and
standard deviation values. This normalization process was
applied to the whole data including the training and testing
sets. The single set of normalization parameters of each
variable (i.e. the standard deviation and the mean) were then
preserved to be applied to new data during forecasting.
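The normalization of Eq. 2, including preserving the parameters for later use on new data, can be sketched as follows (the sample values are hypothetical):

```python
import numpy as np

def fit_normalizer(x):
    """Mean/standard-deviation normalization parameters (Eq. 2).

    Returns (mu, sigma) so the same single set of parameters can be
    reused on new data at forecast time, as done in the paper.
    """
    x = np.asarray(x, dtype=float)
    return x.mean(), x.std()

def normalize(x, mu, sigma):
    # Apply Eq. 2: subtract the mean, divide by the standard deviation
    return (np.asarray(x, dtype=float) - mu) / sigma

def denormalize(xn, mu, sigma):
    # The transform is mathematically reversible
    return np.asarray(xn, dtype=float) * sigma + mu

mu, sigma = fit_normalizer([10.0, 12.0, 14.0])
z = normalize([10.0, 12.0, 14.0], mu, sigma)
print(z)                          # zero mean, unit standard deviation
print(denormalize(z, mu, sigma))  # recovers the original values
```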
Transformation
Our experience shows that neural networks perform better with
normally distributed, deseasonalized data. Input data that
exhibit a trend or periodic variations therefore require
transformation. Input variables can be transformed in several
ways into forms that make the data easier for the neural
network to interpret and faster to process during training.
Examples of such transforms include the first derivative of a
variable, the relative variable difference, the natural logarithm
of the relative variable, the square root of a variable, and
trigonometric functions. In this study, all input
variables as well as the output were transformed using the first
derivative of each variable. The choice of this transform
removed the trend in each input variable, thus helping to
reduce the multicollinearity among the input variables.
Using the first derivative also results in greater fluctuation and
contrast in the values of the input variables. This improves the
ability of the NN model to detect significant changes in
patterns. For instance, if gas exploratory footage (one of the
input variables) is continuously increasing the actual level
may not be as important as the first time derivative of footage,
or the rate of change in footage from year to year.
The first-derivative transformation, however, resulted in a
loss of one data point due to its mathematical formulation.
Selection of Neural Network Inputs and Output
Gas production was selected as the output of the neural
network since it is the target for prediction. Diagnostic
techniques such as scatter plots and a correlation matrix were
applied to the data to check their validity and to study the
relationships between the target and each of the predictor
variables. As an example, a scatter plot for average footage
drilled per oil and gas exploratory well versus gas production
is shown in Fig. 2. The correlation coefficients for all inputs
versus the target, gas production, are given in Table 1. The
highest correlation coefficient value is 0.924 for the input I-9,
average footage drilled per oil and gas exploratory well. This
is also shown in Fig. 2 by the high linear correlation of this
variable with gas production. The correlation matrix helps to
reduce the number of input variables by excluding those with
high correlation coefficients. Some input variables with high
correlation coefficients, however, are important and needed to
be included in the network model because of their physical
relations with the target. This problem can be alleviated by
applying transformation techniques to remove the trend and
reduce the high correlation coefficient. Fig. 3 shows a scatter
plot of the input I-9 versus gas production after performing the
normalization and the first derivative transformation. The
figure shows the data points are more scattered and fairly
distributed around the zero horizontal line. The preprocessing
procedure resulted in a 45% reduction of the correlation
coefficient of this input from 0.924 to 0.512.
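The correlation-reduction effect described above can be demonstrated on synthetic data; the two trending series below merely stand in for an input such as I-9 and gas production (the 0.924 and 0.512 values are the paper's, not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(50, dtype=float)

# Two series sharing a common trend (e.g. footage per well and gas
# production): the shared trend inflates their linear correlation.
x = t + rng.normal(0, 2.0, 50)
y = 0.5 * t + rng.normal(0, 2.0, 50)

r_raw = np.corrcoef(x, y)[0, 1]
r_diff = np.corrcoef(np.diff(x), np.diff(y))[0, 1]
print(r_raw, r_diff)  # first-differencing sharply reduces the correlation
```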
Neural Network Model Design
There are a number of design factors that must be considered
in constructing a neural network model. These considerations
include the selection of neural network architecture, the
learning rule, the number of processing elements in each layer,
the number of hidden layers, and the type of transfer function.
Fig. 4 depicts an illustration of the neural network model
designed in this study.
Architecture
The neural network architecture determines how the weights
are interconnected in the network and specifies the
types of learning rules that may be used. Selection of network
architecture is one of the first things done in setting up a
neural network. The multilayer normal feed forward1-3 is the
most commonly used architecture and is generally
recommended for most applications; hence it is selected to be
used for this study.
Learning Algorithm
Selection of a learning rule is also an important step because it
affects the determination of input functions, transfer functions,
and associated parameters. The network used is based on a
back-propagation (BP) design,1 the most widely recognized
and most commonly used supervised-learning algorithm. In
this study, quick propagation (QP)14 learning algorithm, which
is an enhanced version of the back-propagation algorithm, is
used for its performance and speed. The advantage of QP is
that it runs faster than BP by minimizing the time required for
finding a good set of weights using heuristic rules. These rules
automatically regulate the step size and detect conditions that
accelerate learning. The optimum step size is then determined
by evaluating the trend of the weight updates over time.
The fundamental design of a backpropagation neural
network consists of an input layer, a hidden layer, and an
output layer. Fig. 4 shows the architecture of a BP neural
network. A layer consists of a number of processing elements
or neurons. The layers are fully connected, indicating that each
neuron of the input layer is connected to each hidden layer
node. Similarly, each hidden layer node is connected to each
output layer node. The number of nodes needed for the input
and output layers depends on the number of inputs and outputs
designed for the neural network.
Activation Rule
A transfer function acts on the value returned by the input
function. An input function combines the input vector with the
weight vector to obtain the net input to the processing element
given a particular input vector. Each of the transfer functions
introduces a nonlinearity into the neural network, enriching its
representational capacity. In fact, it is the nonlinearity of the
transfer function that gives a neural network its advantage
over conventional or traditional regression techniques. There
are a number of transfer functions. Among those are sigmoid,
arctan, sin, linear, Gaussian, and Cauchy. The most commonly
used transfer function is the sigmoid function. It squashes and
compresses the input function when it takes on large positive
or large negative values. Large positive values asymptotically
approach 1, while large negative values are squashed to 0. The
sigmoid is given by1
$$f(x) = \frac{1}{1 + \exp(-x)} . \quad \ldots\ldots\ldots (3)$$
Fig. 5 is a typical plot of the sigmoid function. In essence,
the activation function acts as a nonlinear gain for the
processing element. The gain is actually the slope of the
sigmoid at a specific point. It varies from a low value at large
negative inputs, to a high value at zero input, and then drops
back toward zero as the input becomes large and positive.
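The sigmoid of Eq. 3 and its gain (the slope, which peaks at zero input and falls toward zero for large inputs) can be sketched as:

```python
import math

def sigmoid(x):
    # Eq. 3: squashes any input into the interval (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_gain(x):
    # Slope of the sigmoid: the processing element's nonlinear "gain",
    # highest at zero input and dropping toward zero for large |x|
    s = sigmoid(x)
    return s * (1.0 - s)

print(sigmoid(0.0))       # 0.5
print(sigmoid_gain(0.0))  # 0.25, the maximum gain, at zero input
print(sigmoid(10.0))      # approaches 1 for large positive input
print(sigmoid(-10.0))     # squashed toward 0 for large negative input
```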
4 S.M. AL-FATTAH, R.A. STARTZMAN SPE 67260
Training Procedure
In the first step of the development process, the available data
were divided into training and test sets. The training set was
selected to cover the data from 1949 to 1989 (40 yearly data
points), while the testing set covers the data from 1990 to 1998
(nine yearly data points). We chose to split the data based on an
80/20 rule. We first normalized all input variables and the
output using the average/standard deviation method, then, took
the first derivative of all input variables including the output.
In the initial training and testing phases, we developed the
network model using most of the default parameters in the
neural network software. Generally, these default settings
provided satisfactory results to start with. We examined
different architectures, different learning rules, different input
and transfer functions, with increasing numbers of hidden-
layer neurons, on the training set to find the optimal learning
parameters, then the optimal architecture. We used primarily
the black-box testing approach, comparing network results to
actual historical results, to verify that the inputs produce the
desired outputs. During training, we used several diagnostic
tools to facilitate understanding how the network is training.
These include:
- the mean square error of the entire output,
- a plot of the mean square error versus the number of iterations,
- the percentage of training or testing set samples that are correct based on a chosen tolerance value,
- a plot of the actual output and the network output, and
- a histogram of all the weights in the network.
The three-layer network with initially all 15 input variables
was trained over the training samples. We chose the number of
neurons in the hidden layer on the basis of existing rules of
thumb2,3 and on experimentation. One rule of thumb states that
the number of hidden-layer neurons should be about 75% of
the input variables. Another rule suggests that the number of
hidden-layer neurons be approximately 50% of the total
number of input and output variables. One of the advantages
of the neural software used in this study is that it allows the
user to specify a range for the minimum and maximum
number of hidden neurons. Putting all this knowledge
together, along with our experimentation experience, we
specified the range of 5 to 12 hidden neurons for the single
hidden layer.
We used the input sensitivity analysis to study the
significance of each input parameter and how it is affecting
the performance of the network. This procedure helps to
reduce the redundant input parameters and to determine the
optimum number of input parameters of the neural network. In
each training run, the results of the input sensitivity
analysis were examined and the least-significant input
parameter was deleted; the weights were then reset and the
network-training process was restarted with the remaining
input parameters. This process was repeated until all remaining
input parameters contributed significantly to the network
performance. An input was considered significant when its
normalized effect value was equal to or greater than 0.7 in the
training set and 0.5 in the test set. We varied the number of
iterations used to train the network from 500 to 7,000 to find
the optimal number of iterations. A total of 3,000 iterations
was used for most of the training runs performed on the
network. In the training process, training is automatically
terminated when the maximum number of iterations is reached
or the mean square error of the network falls below the
specified limit of 1.0x10^-5. While training the network, the
test set is also evaluated: each pass through the training set
triggers a test pass through the test set. This step does not
interfere with the training statistics; it merely evaluates the
test set during training to help fine-tune and generalize the
network parameters.
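The one-at-a-time input-pruning procedure described above can be sketched as follows. This is a simplified stand-in: the `retrain_and_rank` callable below represents retraining the network and rerunning the sensitivity analysis, and the effect values are hypothetical.

```python
def prune_inputs(inputs, retrain_and_rank, threshold=0.7):
    """Delete the least-significant input, 'retrain', and repeat until
    every remaining input meets the significance threshold."""
    inputs = list(inputs)
    while True:
        effects = retrain_and_rank(inputs)  # {input: normalized effect}
        worst = min(inputs, key=lambda k: effects[k])
        if effects[worst] >= threshold:
            return inputs                   # all remaining inputs significant
        inputs.remove(worst)                # delete least-significant input,
                                            # reset weights, retrain

# Hypothetical effect values standing in for a real sensitivity analysis
fixed = {"I1": 0.9, "I2": 0.4, "I3": 0.7, "I4": 0.6, "I5": 1.0}
kept = prune_inputs(fixed, lambda ins: {k: fixed[k] for k in ins})
print(kept)  # ['I1', 'I3', 'I5']
```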
After the training has been performed, the performance of
the network was then tested. During the testing process, the
test set was used to determine how well the network
performed on data it had not seen during
training.
To evaluate the performance of the network, we used the
option of classifying a network output as correct based on a
specified tolerance. This method reports the percentage of
training and testing samples whose outputs faithfully
reproduce the target values. We used a tolerance of 0.05 (the
default value is 0.5) in this study, meaning that all outputs for
a sample must be within this tolerance for the sample to be
considered correct.
Another measure that we took to examine the network
performance is the plot of the mean square error versus the
number of iterations. A well-trained network is characterized
by decreasing errors for both the training and the test sets as
the number of iterations increases.
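The two performance measures above, the tolerance-based percent correct and the mean square error, can be sketched as follows; the actual and predicted values are hypothetical.

```python
import numpy as np

def percent_correct(actual, predicted, tol=0.05):
    # Percentage of samples whose output is within the tolerance
    a, p = np.asarray(actual), np.asarray(predicted)
    return 100.0 * np.mean(np.abs(a - p) <= tol)

def mse(actual, predicted):
    # Mean square error over all samples
    a, p = np.asarray(actual), np.asarray(predicted)
    return np.mean((a - p) ** 2)

actual    = [0.10, 0.20, 0.30, 0.40]
predicted = [0.12, 0.19, 0.30, 0.50]  # hypothetical network outputs
print(percent_correct(actual, predicted))  # 75.0: last sample misses by 0.10
print(mse(actual, predicted))              # ~= 0.0026
```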
Results of Training and Testing
We used the input sensitivity analysis technique2,14 for
gauging the sensitivity of the gas production rate (output) for
any particular input. The method makes use of the weight
values of a successfully trained network to extract the
information relevant to any particular input node. The
outcome of the method is the effect values as well as
normalized effect values for each input variable on the output
of gas production rate. These effect values represent an
assessment of the influence of any particular input node on the
output node.
The results of the input-identification process and training
procedure indicated that the network has excellent
performance with 11 input parameters. We found these input
parameters, which are described in Table 2, have significant
contributions on the network performance.
Tables 3 and 4 present the results of the input sensitivity
analysis for the training and test sets, respectively. The
normalized effect values indicate that all 11 inputs
significantly contribute to the improvement of the network
performance and to the prediction of the U.S. natural gas-
production rate for both the training and test sets. The training
set input-sensitivity analysis, Table 3, shows that the gas
annual depletion rate (I15) is the most significant input
parameter that contributes to the network performance, hence
in predicting the U.S. natural gas production. Although we
found it important to the improvement of the network
performance and kept it in the network model, the input of gas
wellhead prices (I3) has the lowest normalized effect value,
0.7, among all inputs in the training set. Table 4 shows
that all inputs in the test set exceed the arbitrary specified
threshold value of 0.5, indicating that all inputs contribute
significantly to the network model.
The network was trained with 5,000 iterations using the
QP learning algorithm. We found that the optimum number of
hidden-layer nodes is 5. Fig. 6 shows the prediction of the
neural network model, after the training and validation
processes, superimposed on the normalized actual U.S. gas
production. The neural network prediction results show
excellent agreement with the actual production data in both the
training and testing stages. These results indicate that the
network is trained and validated very well, and the network is
ready to be used for forecasting. In addition, statistical and
graphical error analyses were used to examine the
performance of the network.
Optimization of Network Parameters
We attempted different configurations of the network to
optimize the number of hidden nodes and number of
iterations, and thus fine-tune the network performance,
running numerous simulations in the optimization process.
Table 5 presents only potential cases for illustration purposes.
The table shows that increasing the number of iterations to
more than 5,000 improves the training-set performance but
worsens the test-set performance. In addition, decreasing the
number of iterations to 3,000 yields higher errors for both the
training and test sets. The number of hidden-layer nodes was
also varied in the range of 4 to 22 nodes. Increasing the
number of hidden nodes more than 5 shows good results for
the training set but gives unsatisfactory results for the test set,
which is the most important. From these analyses, the optimal
network configuration for this specific U.S. gas production
model is a three-layer QP network with 11 input nodes, 5
hidden nodes, and 1 output node. The network is optimally
trained with 5,000 iterations.
Error Analysis
Statistical accuracy of this network performance is given in
Table 5 (Case 11a). The mean squared error (MSE) of the
training set is 0.0034 and for the test set is 0.0252. Fig. 7
shows the MSE versus the iterations for both the training and
test sets. The errors of training-set samples decrease
consistently throughout the training process. In addition, the
errors of the test-set samples decrease fairly consistently along
with the training-set samples, indicating that the network is
generalizing rather than memorizing. All the training- and test-
set samples yield results of 100% correct based on 0.05
tolerance, as shown in Fig. 8.
Fig. 9 shows the residual plot of the neural network model
for both the training and test samples. The plot shows not only
that the errors of the training set are minimal but also that they
are evenly distributed around zero, as shown by Fig. 10. As is
usually the case, the errors of the test samples are slightly
higher than the training samples. The crossplots of predicted
vs. actual values for natural gas production are presented in
Figs. 11 and 12. Almost all the plotted points of this study’s
neural network model fall very close to the perfect 45° straight
line, indicating its high degree of accuracy.
Forecasting
After the successful development of the neural network model
for the U.S. natural gas production, future gas production rates
must be forecast. To implement the network model for
prediction, forecast models should be developed for all 11
network inputs or be obtained from independent studies. We
developed forecasting models for all the independent network
inputs, except for the input of gas wellhead prices, using the
time-series analysis approach. The forecasts for the gas
wellhead prices came from the Annual Energy Outlook 2000
of EIA.15 We adjusted the EIA forecasts of gas prices, based
on 1998 U.S. dollars/Mcf, to 1992 U.S. dollars/Mcf so that the
forecasts would be compatible with the gas prices historical
data used in the network development. We developed the
forecasting models for the input variables of the neural
network using the Box-Jenkins16 methodology of time-series
analysis. Details of forecasts development for other network
inputs are described in Ref. 17.
Before implementing the network model for forecasting,
we took one additional step: we took the test set back and
added it to the original training set. The network could then be
trained one final time, keeping the configuration and
parameters of the original trained network intact. The purpose
of this step is to have the network take into account the effects
of all available data, since the number of data points is limited,
and to ensure that the network generalizes well enough to yield
better forecasts.
Next, we saved data for the forecasted network inputs for
the period 1999 to 2020 as a test-set file, whereas the training
set-file contained data from 1950 to 1998. Then we ran the
network with one pass through all the training and test sets.
We restored the output results to their original form by
adding each output value at a given time to the previous one.
After decoding the first-difference output values, we
denormalized the resulting values for the training and test
samples using the same normalization parameters as in the
data preprocessing.
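The decoding steps above (undo the first difference, then denormalize with the preserved parameters) can be sketched as follows; the outputs, starting level, and (mu, sigma) values are hypothetical.

```python
import numpy as np

def decode_first_difference(diffs, start):
    # Invert the first-difference transform: add each output value
    # to the previously reconstructed level (a cumulative sum)
    return start + np.cumsum(diffs)

def denormalize(xn, mu, sigma):
    # Invert Eq. 2 using the normalization parameters preserved
    # during data preprocessing
    return np.asarray(xn, dtype=float) * sigma + mu

# Hypothetical values: network outputs are normalized first differences;
# `start` is the last known normalized production level, and (mu, sigma)
# are the preserved normalization parameters for gas production.
outputs = [0.1, 0.2, -0.05]
mu, sigma = 18.0, 2.0
levels = decode_first_difference(outputs, start=0.5)
production = denormalize(levels, mu, sigma)
print(production)  # reconstructed production levels, ~= [19.2, 19.6, 19.5]
```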
Fig. 13 shows this study’s neural network forecasting
model for the U.S. gas production to the year 2020. The figure
also shows the excellent match between the neural network
model results and the actual data of natural gas production.
The neural network forecasting model indicates that U.S.
gas production in 1999 declines by 1.8% from the 1998 level.
Production will remain near the 1999 level, with a slight
decline, until the year 2001, after which
gas production starts to increase. From 2002 to 2012 gas
production will increase steadily, with an average growth rate
of approximately 0.5%/yr. The neural network model indicates
that this growth will more than double for the period 2013 to
2020, with a 1.3%/yr average growth rate. By 2019, gas
production is predicted at 22.6 Tcf/yr, approximately the same
as the 1973 level of production.
The neural network-forecasting model developed in this
study is not only dependent on the performance of the trained
data set, but also on the future performance of forecasted input
parameters. Therefore, the network model should be updated
periodically when new data become available. While it is
desirable to update the network model with new data, the
network’s architecture and parameters need not necessarily be
changed. However, a one-time run to retrain the
network with the updated data is necessary.
Comparison of Forecasts
This section compares the forecasts of U.S. natural gas
production from the Energy Information Admin. (EIA)15 with
the neural network approach, and stochastic modeling
approach developed by Al-Fattah17. The EIA 2000 forecast of
U.S. gas supply is based on USGS estimates of U.S. natural
gas resources, including conventional and unconventional gas.
The main assumptions of the EIA forecast are:
- Drilling, operating, and lease equipment costs are expected to decline by 0.3 to 2%.
- Exploratory success rates are expected to increase by 0.5%/yr.
- Finding rates will improve by 1 to 6%/yr.
Fig. 14 shows the forecast of EIA compared with the
forecasts of this study using the neural network and time series
analysis (or stochastic modeling).
The stochastic forecast modeling approach we used was
based on the Box-Jenkins time-series method as described in
detail by Al-Fattah.17 We studied the past trends of all input
data to determine if their values could be predicted using an
“autoregressive integrated moving average” (ARIMA) time
series model. An ARIMA model predicts a value in a time
series as a linear combination of its own past values and past
errors. A separate ARIMA model was developed for each
input variable in the NN forecasting model. Analyses of all
input time series showed that the ARIMA model was both
adequate (errors were small) and stationary (errors showed no
time trend).
When we used the ARIMA model to directly forecast gas
production using only time-dependent gas production data we
were unable to achieve time-independent errors over the entire
production history from 1918 to 1998. However, since we had
earlier determined that both depletion rate and reserves
discovery rate were stationary time series we used these two
ARIMA models to forecast gas production by multiplying
depletion rate and gas reserves. It is the product of these two
time series that determined the stochastic gas forecast in Fig.
14.
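The product-of-two-series idea above can be sketched in code. This is a greatly simplified stand-in for the Box-Jenkins ARIMA models of the paper: a least-squares AR(1) fit on the differenced series, with entirely hypothetical depletion-rate and reserves histories.

```python
import numpy as np

def ar1_forecast(series, horizon):
    """Minimal AR(1)-on-differences forecaster, a simplified stand-in
    for a full Box-Jenkins ARIMA model."""
    d = np.diff(np.asarray(series, dtype=float))
    # Least-squares AR(1) coefficient on the differenced (stationary) series
    phi = np.dot(d[:-1], d[1:]) / np.dot(d[:-1], d[:-1])
    level, step = series[-1], d[-1]
    out = []
    for _ in range(horizon):
        step = phi * step        # propagate the differenced series
        level = level + step     # integrate back to levels
        out.append(level)
    return np.array(out)

# Hypothetical depletion-rate (%) and proved-reserves (Tcf) histories
depletion = [10.0, 10.3, 10.5, 10.8, 11.0, 11.3]
reserves  = [170.0, 169.0, 168.5, 167.0, 166.0, 165.0]

# Gas production forecast = depletion rate x reserves (as in the paper)
prod = ar1_forecast(depletion, 3) / 100.0 * ar1_forecast(reserves, 3)
print(prod)
```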
The EIA forecast of U.S. gas supply, approximately
20 Tcf/yr for the year 2000, is higher than the neural network
forecast of approximately 19.5 Tcf/yr. However, the EIA
forecast matches the neural network forecast from 2001 to
2003, after which the EIA forecast increases considerably with
annual average increases of 2.4% from 2004 to 2014 and 1.3%
thereafter.
The stochastic-derived model gives a production forecast
much higher than those of the EIA and the neural network.
The forecast of U.S. gas supply by the stochastic-derived
model shows an exponential trend with an average growth rate
of 2.3%/yr.
The neural network forecast is based on the following
assumptions about its independent input forecasts:
Gas prices are expected to increase by 1.5%/yr.
The gas depletion rate is expected to increase by
1.45%/yr.
Drilling of gas exploratory wells will increase by 3.5%/yr.
Drilling of oil/gas exploratory wells will increase by an
average of 2.5%/yr.
DG (gross domestic product) will increase by an average of 2.1%/yr.
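Applied mechanically, the growth-rate assumptions above amount to compounding each input forward from a base year. A minimal sketch follows; the 1999 base values are hypothetical placeholders, not figures from the paper:

```python
# Project network inputs forward at the assumed constant annual growth
# rates. Base values for 1999 are illustrative assumptions only.
base_1999 = {"gas_price": 2.1, "depletion_rate": 0.10, "gas_expl_wells": 700.0}
growth = {"gas_price": 0.015, "depletion_rate": 0.0145, "gas_expl_wells": 0.035}

def project(value, rate, years):
    """Compound a base value forward by a constant annual growth rate."""
    return value * (1.0 + rate) ** years

# Project each input from 1999 to 2020 (21 years of growth).
inputs_2020 = {k: project(v, growth[k], 21) for k, v in base_1999.items()}
print({k: round(v, 3) for k, v in inputs_2020.items()})
```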
The forecast of the neural network takes into account the
effects of physical and economic factors on U.S. gas
production, which makes its forecasts of natural gas supply
more reliable. The neural network model indicates that U.S.
gas production will increase from 2002 to 2012 at a 0.5%/yr
average rate. Thereafter, gas production will increase faster,
averaging 1.3%/yr through 2020.
Conclusions
This paper presented a new approach to forecast the future
production of U.S. natural gas using a neural network. The
three-layer network was trained and tested successfully, and
comparison with actual production data showed excellent
agreement. Forecasts of the network input parameters were
developed using a stochastic modeling approach to time-series
analysis. The network model includes various physical and
economic input parameters, rendering the model a useful
short-term as well as long-term forecasting tool for future gas
production.
The forecasting results of the neural network model
showed that U.S. gas production would decline from its 1998
level at a rate of 1.8%/yr in 1999, then hold at the 1999
production level through 2001. After 2001, gas production
starts to increase steadily to 2012 with an average growth rate
of approximately 0.5%/yr. This growth rate will more than
double for the period 2013 to 2020, averaging 1.3%/yr. By
2020, gas production is predicted to reach 23 Tcf/yr, slightly
higher than the 1973 level of production.
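As a quick arithmetic check of the figures quoted above (assuming a base of roughly 19.5 Tcf/yr in 2001, which is taken from the EIA comparison earlier in the paper):

```python
# Compound the stated growth rates from an assumed 2001 base of
# 19.5 Tcf/yr and confirm the result lands near the predicted 23 Tcf/yr.
p = 19.5            # Tcf/yr, assumed 2001 level
p *= 1.005 ** 11    # 2002-2012 at 0.5%/yr average growth
p *= 1.013 ** 8     # 2013-2020 at 1.3%/yr average growth
print(round(p, 1))
```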
The neural network model is useful as both a short-term
and a long-term predictive tool for future gas production. It
can also be used to quantitatively examine the effects of
various physical and economic factors on future gas
production. With the neural network model developed in this
study, we recommend further analysis to evaluate those effects
quantitatively.
Nomenclature
DG = gross domestic product, US $
GDP = growth rate of gross domestic product
t = time, yr
i = observation number
X = input/output vector
μ = mean or arithmetic average
σ = standard deviation
References
1. Haykin, S.: Neural Networks: A Comprehensive Foundation,
Macmillan College Publishing Co., New York City (1994).
2. Azoff, E.M.: Neural Network Time Series Forecasting of
Financial Markets, John Wiley & Sons Ltd. Inc., Chichester,
England (1994).
3. Neural Networks in Finance and Investing: Using Artificial
Intelligence to Improve Real-World Performance, revised edition,
R.R. Trippi and E. Turban (eds.), Irwin Professional Publishing,
Chicago (1996).
4. Mohaghegh, S.: “Virtual-Intelligence Applications in Petroleum
Engineering: Part I-Artificial Neural Networks,” JPT (September
2000) 64.
5. Al-Kaabi, A.U. and Lee, W.J.: “Using Artificial Neural Nets to
Identify the Well-Test Interpretation Model,” SPEFE (September
1993) 233.
6. Habiballah, W.A., Startzman, R.A., and Barrufet, M.A.: “Use of
Neural Networks for Prediction of Vapor/Liquid Equilibrium K
Values for Light-Hydrocarbon Mixtures,” SPERE (May 1996)
121.
7. EIA, Internet Home Page: http://www.eia.doe.gov/.
8. Twentieth Century Petroleum Statistics, 52nd ed., DeGolyer and
MacNaughton, Dallas, TX (1996).
9. Twentieth Century Petroleum Statistics, 54th ed., DeGolyer and
MacNaughton, Dallas, TX (1998).
10. Attanasi, E.D. and Root, D.H.: “The Enigma of Oil and Gas Field
Growth,” AAPG Bulletin (March 1994) 78, 321.
11. Energy Statistics Sourcebook, 13th ed., OGJ Energy Database,
PennWell Pub. Co., Tulsa, OK (1998).
12. “World Energy Projection System,” DOE/EIA-M050, Office of
Integrated Analysis and Forecasting, U.S. Dept. of Energy, EIA,
Washington, DC (September 1997).
13. Kutner, M.H., Nachtschiem, C.J., Wasserman W., and Neter, J.:
Applied Linear Statistical Models, fourth edition, Irwin, Chicago
(1996).
14. ThinksPro: Neural Networks Software for Windows User’s Guide,
Logical Designs Consulting Inc., La Jolla, CA (1995).
15. “Annual Energy Outlook 2000,” DOE/EIA-0383, Office of
Integrated Analysis and Forecasting, U.S. Dept. of Energy, EIA,
Washington, DC (December 1999).
16. Box, G.E., Jenkins, G.M., and Reinsel, G.C.: Time Series Analysis
Forecasting and Control, third edition, Prentice-Hall Inc.,
Englewood Cliffs, NJ (1994).
17. Al-Fattah, S.M.: “New Approaches for Analyzing and Predicting
Global Natural Gas Production,” PhD dissertation, Texas A&M
U., College Station, TX (2000).
SI Metric Conversion Factors
ft³ × 2.831 685 E-02 = m³
8 S.M. AL-FATTAH, R.A. STARTZMAN SPE 67260
Fig. 1-Basic structure of a three-layer back-propagation neural
network.
Fig. 2-Scatter plot of gas production (Bcf/yr) vs. average footage drilled
per oil and gas exploratory well (ft/well).
TABLE 1-CORRELATION COEFFICIENTS OF NETWORK INPUTS
WITH GAS PRODUCTION
Input   Correlation with gas production
I-1     -0.6507
I-2      0.7675
I-3      0.4446
I-4      0.2132
I-5      0.6863
I-6      0.6111
I-7      0.3493
I-8      0.2875
I-9      0.9243
I-10    -0.1878
I-11     0.4211
I-12    -0.4688
I-13     0.7692
I-14    -0.3709
I-15    -0.5537
I-16    -0.3015
I-17     0.0044
I-18     0.8118
Fig. 3-Scatter plot of gas production (Bcf/yr) vs. average footage drilled
per oil and gas exploratory well (ft/well), after data preprocessing.
Fig. 4-Neural network design of this study: input layer (11 nodes,
including gas prices, GDP growth rate, annual depletion, wells drilled,
and footage drilled), hidden layer (5 nodes), and output layer (one
node: gas production).
Fig. 5-Sigmoid function (maps inputs from -20 to 20 onto outputs
between 0 and 1).
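The transfer function of Fig. 5 is the standard logistic sigmoid, which squashes any real input into the (0, 1) range. A minimal implementation:

```python
# Logistic sigmoid transfer function: 1 / (1 + e^-x).
import math

def sigmoid(x):
    """Map any real input to the open interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# Near 0 for large negative inputs, 0.5 at zero, near 1 for large positives.
print(sigmoid(-20), sigmoid(0), sigmoid(20))
```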
TABLE 2-INPUT VARIABLES OF TRAINED NEURAL NETWORK
No.  Input  Description
1    I3     Gas wellhead prices
2    I6     Average depth drilled per well in gas exploratory wells
3    I7     Footage drilled in gas exploratory wells
4    I8     Number of wells drilled in gas exploratory wells
5    I9     Average depth drilled per well in oil/gas exploratory wells
6    I10    Footage drilled in oil/gas exploratory wells
7    I11    Number of wells completed in oil/gas exploratory wells
8    I12    Average depth drilled per well in oil exploratory wells
9    I13    Growth rate of gross domestic product
10   I14    Gas proved reserves
11   I15    Annual gas depletion rate
TABLE 3-RESULTS OF INPUT SENSITIVITY ANALYSIS FOR
TRAINING SET
Input  Effect  Normalized effect
I3     0.145   0.699
I6     0.187   0.904
I7     0.202   0.973
I8     0.168   0.810
I9     0.212   1.025
I10    0.155   0.748
I11    0.193   0.932
I12    0.158   0.761
I13    0.264   1.276
I14    0.285   1.376
I15    0.310   1.496
TABLE 4-RESULTS OF INPUT SENSITIVITY ANALYSIS FOR TEST
SET
Input  Effect  Normalized effect
I3     0.142   0.856
I6     0.186   1.124
I7     0.136   0.821
I8     0.167   1.011
I9     0.192   1.163
I10    0.084   0.506
I11    0.094   0.567
I12    0.229   1.385
I13    0.222   1.340
I14    0.169   1.020
I15    0.200   1.207
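The "effect" values in Tables 3 and 4 can be approximated by one-at-a-time input perturbation: nudge one input, hold the rest fixed, and record the change in the network output. A hedged sketch with a stand-in model (the trained network's weights are not reproduced here, so `model` is a hypothetical placeholder):

```python
# One-at-a-time input sensitivity analysis: perturb each input by eps,
# measure the absolute output change per unit perturbation, then
# normalize by the mean effect (analogue of the "Normalized effect" column).
import numpy as np

def model(x):
    # Stand-in for the trained 11-input network: any smooth map R^11 -> R.
    w = np.linspace(0.1, 1.1, 11)
    return float(np.tanh(x @ w))

def sensitivities(f, x0, eps=0.01):
    """Absolute output change per unit perturbation of each input."""
    base = f(x0)
    effects = []
    for i in range(x0.size):
        xp = x0.copy()
        xp[i] += eps
        effects.append(abs(f(xp) - base) / eps)
    return np.array(effects)

x0 = np.zeros(11)                 # evaluation point (assumed)
eff = sensitivities(model, x0)
normalized = eff / eff.mean()     # mean-normalized effects
print(np.round(normalized, 3))
```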
Fig. 6-Performance of the neural network model against actual U.S.
gas production (normalized, 1950-2000), showing the training and
testing periods.
TABLE 5-NEURAL NETWORK PERFORMANCE WITH DIFFERENT
HIDDEN NODES AND ITERATIONS*
                                  Training            Testing
Case    Hidden  Iterations    MSE     Correct, %  MSE     Correct, %
11-9    5       3000          0.0065  100         0.0321  78
11a     5       5000          0.0034  100         0.0252  100
11b1    5       6000          0.0025  100         0.0286  89
11b2    5       7000          0.0020  100         0.0323  89
11c     6       5000          0.0012  100         0.0619  44
* All cases were run with the same 11 inputs.
Fig. 7-Convergence behavior (mean squared error vs. iteration,
training and test sets) of the Quick Propagation three-layer network
(11,5,1) that learned from the U.S. natural gas production data.
Fig. 8-Percentage of training and test samples classified correctly
vs. iteration.
SPE 67260 NEURAL NETWORK APPROACH PREDICTS U.S. NATURAL GAS PRODUCTION 11
Fig. 9-Residual plot of the neural network model (residuals vs. time,
1950-2000).
Fig. 10-Frequency distribution of residuals of the neural network
model.
Fig. 11-Crossplot of the neural network prediction model and actual
gas production (first difference, Tcf/yr).
Fig. 12-Crossplot of the neural network prediction model and actual
gas production (normalized).
Fig. 13-Neural network forecasting model of U.S. gas production
(actual and neural network, Bcf/yr, 1950-2020).
Fig. 14-Comparison of U.S. gas production forecasts: actual
production, neural network model, stochastic-derived model, and
EIA forecast (Bcf/yr, 1950-2020).