Science and Technology, 7 (2002) 55-70
© 2002 Sultan Qaboos University
Development and Application of Back-Propagation-Based Artificial Neural Network Models in Solving Engineering Problems
Saleh Mohammed Al-Alawi
Department of Electrical Engineering, College of Engineering, Sultan Qaboos
University, P.O. Box 33, Al Khod 123, Muscat, Sultanate of Oman.
Using Artificial Neural Network Techniques to Solve Engineering Problems
Saleh Mohammed Al-Alawi
ABSTRACT (translated from the Arabic): Artificial neural networks are computer programs that mimic the human ability to recognize objects, to make decisions based on available information, or to predict events. This field is part of the science of artificial intelligence, which emerged and spread rapidly, and whose applications over the past decade have covered many areas of human life. This paper aims to raise awareness of this science and the techniques it employs, to show how it can be applied to solve engineering problems, to guide the beginner who wishes to use this technique, to point to references that can be used in this field, and to list the various engineering applications carried out by the author in the Sultanate of Oman.
ABSTRACT: Artificial Neural Networks (ANNs) are computer software programs that mimic the
human brain's ability to classify patterns or to make forecasts or decisions based on past experience.
The development of this research area can be attributed to two factors: sufficient computer power to
begin practical ANN-based research in the late 1970s, and the development of back-propagation in
1986, which enabled ANN models to solve everyday business, scientific, and industrial problems. Since
then, significant applications have been implemented in several fields of study, and many useful
intelligent applications and systems have been developed. The objective of this paper is to generate
awareness and to encourage applications development using artificial intelligence-based systems.
Therefore, this paper provides basic ANN concepts, outlines steps used for ANN model development,
and lists examples of engineering applications based on the use of the back-propagation paradigm
conducted in Oman. The paper is intended to provide guidelines and necessary references and
resources for novice individuals interested in conducting research in engineering or other fields of
study using back-propagation artificial neural networks.
KEYWORDS: Artificial Neural Network Applications, Engineering Problems, Back-Propagation,
Forecasting, Classification, Oman.
1. Introduction
Artificial Neural Networks (ANNs) form a research area that has evolved from Artificial
Intelligence (AI) research. Artificial Intelligence, in turn, is the branch of computer
science concerned with designing computer systems that exhibit characteristics
associated with intelligent human behavior. Artificial Intelligence research is based on many
interrelated sciences and technologies such as engineering, management science, computer
science, psychology, philosophy, and linguistics, and covers a wide range of applications. In
addition, ANNs and AI provide the scientific foundation for many other growing commercial
technologies such as machine learning, expert systems, natural language processing, computer
vision and robotics, speech recognition systems, automatic programming, and computer-aided
instruction.
ANNs are computer programs that are trained to recognize both linear and nonlinear
relationships between the input and output variables in a given data set. In general, ANN
applications in engineering have received wide acceptance. The popularity and acceptance of this
technique stems from ANN features that are particularly attractive for data analysis. These
features include the handling of fragmented and noisy data, the speed inherent in parallel distributed
architectures, the capability to generalize over new data, the ability to incorporate a large
number of input parameters effectively, and the capability to model nonlinear systems. Due to these
distinctive features, Artificial Neural Networks are used to add intelligent capabilities to computer
systems. ANN models allow computer systems to process and recognize different voices, read in
text, recognize and classify objects, sense the environment and control robotic movements, predict
future trends, and even decide whether to grant a bank loan to a specific customer or not.
2. Artificial Neural Network Concepts
2.1 Back-Propagation Paradigm
One of the most common and frequently used ANN paradigms is the Back-propagation
paradigm (Simpson, 1990). This supervised learning method was developed by Rumelhart based
on the generalization of the least mean square error (LMS) algorithm. The Back-propagation
algorithm uses the gradient descent search technique to minimize a cost function equal to the mean
square difference between the desired and the actual net output. The network is trained by
selecting small random weights and internal thresholds, and then presenting all training data
repeatedly by using the supervised training technique. The weights are changed until the network
reaches the desired error level or the cost function is reduced to an acceptable value.
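The underlying idea, gradient descent on the squared error, can be sketched for the simplest possible case: a single linear unit trained with the LMS rule that back-propagation generalizes. This is an illustrative sketch only; the data, learning rate, and epoch count below are invented.

```python
import random

# Minimal LMS sketch: one linear unit, gradient descent on the squared error.
# The target relationship y = 2x + 1 and the learning rate are invented here.
random.seed(0)
w = random.uniform(-0.1, 0.1)   # small random initial weight
b = random.uniform(-0.1, 0.1)   # small random initial threshold (bias)
data = [(x, 2.0 * x + 1.0) for x in (0.0, 0.25, 0.5, 0.75, 1.0)]
lr = 0.1

for _ in range(2000):           # present all training data repeatedly
    for x, target in data:
        out = w * x + b
        err = target - out      # desired minus actual output
        w += lr * err * x       # descend the gradient of 0.5 * err**2
        b += lr * err
```

After repeated presentations, the weight and bias converge close to the underlying relationship, which is exactly the behavior the back-propagation rule extends to multilayer networks.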
2.2 ANN Architecture
The major building block for any ANN architecture is the processing element or neuron.
These neurons are located in one of three types of layers: the input layer, the hidden layer, or the
output layer. The input neurons receive data from the outside environment, the hidden neurons
receive signals from all of the neurons in the preceding layer, and the output neurons send
information back to the external environment. These neurons are connected by a line of
communication called a connection. Stanley (1990) indicated that the way in which the neurons are
connected to each other in a network topology has a great effect on the operation and performance
of the network. ANN models come in a variety of topologies or paradigms. Simpson (1990)
provides a coherent description of 27 different popular ANN paradigms and presents comparative
analyses, applications, and implementations of these paradigms.
2.3 ANN Operations
In the back-propagation (BP) architecture, shown in Figure 1, each element or neuron receives
input from the real-world environment or from other processing elements, processes this input, and
produces a specific output. Generally, many of these processing elements perform their operations
at the same time. This parallelism is a unique feature of the ANN that distinguishes it from the
serial processing that is usually performed by conventional computer systems. Each neuron has a
straightforward assignment. Input coming to the neuron is associated with a weight indicating its
strength. In the neuron, the values of the input are multiplied by the corresponding weights and all
products are added to obtain a net value (Net_i). After summation, the net input of the neuron is
combined with the previous state of the neurons to produce a new activation value. Whether the
neurons fire or not will depend on the magnitude of this value. The activation is then passed
through an output or a transfer function (fi) that generates the actual neuron output. The transfer
function modifies the value of the output signal. This function can be either a simple threshold
function that only produces output if the combined input is greater than the threshold value, or it
can be a continuous function that changes the output based on the strength of the combined input.
Typical transfer functions employed in building ANN applications include a linear threshold
transfer function, step function, sigmoid function, and others.
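The transfer functions just mentioned can be written down directly. The definitions below are illustrative; the ramp form of the linear threshold function is one common variant.

```python
import math

# Common transfer functions mentioned above (illustrative definitions).
def step(net, threshold=0.0):
    # simple threshold function: fires only when input exceeds the threshold
    return 1.0 if net > threshold else 0.0

def sigmoid(net):
    # continuous function: output grows smoothly with input strength
    return 1.0 / (1.0 + math.exp(-net))

def linear_threshold(net, threshold=0.0, slope=1.0):
    # linear above the threshold, zero below (a ramp)
    return slope * (net - threshold) if net > threshold else 0.0
```

The sigmoid is the usual choice for back-propagation because it is differentiable everywhere, which the gradient-descent weight updates require.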
Net_i = Σ_{j=1}^{n} W_ij O_j

[Figure 1: inputs O_1 and O_2 enter a neuron through the weights W_1 and W_2 and are summed into Net_i, which passes through the activation and transfer functions to produce the output O; the network is organized into an input layer, a hidden layer, and an output layer.]
Figure 1. A typical back-propagation architecture.
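The weighted-sum-and-transfer operation of a single processing element can be sketched as follows. The sigmoid transfer function and all numerical values are chosen here purely for illustration.

```python
import math

# Sketch of a single processing element: weighted sum following
# Net_i = sum_j W_ij * O_j, then a transfer function.
def neuron_output(inputs, weights, threshold):
    net = sum(w * o for w, o in zip(weights, inputs))
    # sigmoid transfer; the internal threshold acts as a bias term
    return 1.0 / (1.0 + math.exp(-(net - threshold)))

# Invented example: two inputs, two weights, one threshold.
out = neuron_output([1.0, 0.5], [0.8, -0.4], threshold=0.2)
```

In a full network, many such elements compute their outputs simultaneously, which is the parallelism described above.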
2.4 ANN Training
The first and the most critical step in developing an effective ANN model is input and output
definition and data preparation. This includes identifying variables of interest, gathering the
relevant data and inspecting them for possible errors, missing values, and outliers. Data accuracy
is vital for the development of an efficient model that can provide accurate prediction. If incorrect
or erroneous data are fed to the model, this will result in incorrect prediction. As the saying goes,
"garbage in, garbage out".
Once the ANN model architecture is defined, data are collected and fed to the model. The
network is then trained to recognize the relationships between the input and output parameters.
The BP algorithm uses the supervised training technique. In this technique, the interlayer
connection weights and the processing elements' thresholds are first initialized to small random
values. The network is then presented with a set of training patterns, each consisting of an example
of the problem to be solved (the input) and the desired solution to this problem (the output). These
training patterns are presented repeatedly to the ANN model, and the error between actual and
predicted results is calculated. Weights are then adjusted by small amounts that are dictated by the
General Delta Rule (Rumelhart et al, 1988). This adjustment is performed after each iteration
whenever the network's computed output is different from the desired output. This process
continues until weights converge to the desired error level or the output reaches an acceptable
level. Simpson (1990) describes the system of equations that provides a generalized description of
how the learning process is performed by the BP algorithm.
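The training cycle described above can be sketched end to end. This is a toy illustration, not the implementation used in the paper's software; the XOR pattern set, network size, learning rate, and epoch count are all invented.

```python
import math
import random

# Hedged sketch of supervised BP training: small random initial weights,
# repeated presentation of training patterns, delta-rule weight updates.
random.seed(1)
N_IN, N_HID = 2, 2
w_hid = [[random.uniform(-0.5, 0.5) for _ in range(N_IN + 1)] for _ in range(N_HID)]
w_out = [random.uniform(-0.5, 0.5) for _ in range(N_HID + 1)]

def sig(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x):
    hid = [sig(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hid]
    out = sig(sum(wj * h for wj, h in zip(w_out, hid)) + w_out[N_HID])
    return hid, out

patterns = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR toy set

def mse():
    return sum((t - forward(x)[1]) ** 2 for x, t in patterns) / len(patterns)

err_before = mse()
LR = 0.5
for _ in range(3000):                    # present patterns repeatedly
    for x, t in patterns:
        hid, out = forward(x)
        d_out = (t - out) * out * (1.0 - out)                 # output delta
        d_hid = [d_out * w_out[j] * hid[j] * (1.0 - hid[j]) for j in range(N_HID)]
        w_out[N_HID] += LR * d_out                            # output bias
        for j in range(N_HID):
            w_out[j] += LR * d_out * hid[j]
            w_hid[j][N_IN] += LR * d_hid[j]                   # hidden bias
            for i in range(N_IN):
                w_hid[j][i] += LR * d_hid[j] * x[i]
err_after = mse()
```

Each pass adjusts the weights by small amounts in proportion to the computed deltas, and the mean square error falls as training proceeds, just as the text describes.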
2.5 ANN Testing and Validation
The ANN model can sometimes learn features other than the intended relationships in the
data. It can also memorize the data, or part of it, without learning the relationships between
variables or trends in the data. Hence, to ensure network accuracy and generalization capability,
the network must be tested on a continuous basis and should be monitored during the training and
testing operations. The testing operation involves passing a separate testing set to the trained ANN
model and recording the results. These results are compared to actual results. The trained model
is assumed to be successful if it gives good results for that test set. To ensure that ANN
models provide correct predictions or classifications, the prediction results produced by ANN
models can be validated against expert predictions for the same cases or against the results of
other computer programs.
3. Developing an Artificial Neural Network Model
Step #1: ANN model development starts by first conducting a feasibility study and validating
the proposed application. Bailey and Thompson (1990) pointed out some common characteristics
of a successful neural network application. They suggested that the application must be data-
intensive and dependent upon multiple interacting parameters. The problem area should be rich in
historical data or examples. The data set available may be incomplete, contain errors, and describe
specific examples. The discriminator or function to determine solutions is unknown or expensive
to discover, and the problem should require qualitative or complex quantitative reasoning. Once
the application is judged to be feasible and valid, resource constraints (time, equipment, money)
should be evaluated. Data, sources, and solution requirements should also be identified and
appropriate data should be secured.
Step #2: The next step in the ANN development process is data preparation and training. The
ability of the ANN to effectively learn the training set and provide accurate results is dependent
upon the data preparation activity. Data preparation for modeling can be broadly classified into
three distinct areas: data specification, in which variables of interest are identified and collected;
data inspection, in which data is examined and analyzed; and data pre-processing, in which some
data may be restructured or transformed to make it more useful.
Data specification involves two primary activities: variable selection and determining data
sources. For example, there are many social, economic, and weather variables that could
possibly affect the demand forecast. A wish list of these variables for model building should be
generated by the planner through scanning available literature, consulting experts in the area in
question, and by conducting brainstorming sessions with colleagues. Variables in this list should
then be examined to assess whether historical data for such variables are available or not. For
variables with readily available historical data, data sources should be identified and the data
should be collected. Once data for a set of candidate variables are collected, data analysis should
then be used to weed out the potential input variables from the wish list generated so that only the
most relevant variables are used to develop the forecasting model. To do this, several statistical
methods are available for determining linear significance of variables. Some of the more popular
statistical techniques used are the coefficient of correlation (R), the coefficient of determination R2,
and the ordinary least squares (OLS) regression analysis. A detailed discussion of these statistical
techniques is beyond the scope of this article, but treatments of these techniques can be found in
Bunn and Farmer (1985), Mendenhall and Beaver (1994), and Burden and Fairies (1985).
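The first two of these measures are straightforward to compute. The helper below and the temperature/load figures are invented for illustration; only the standard formulas for R and R² come from the statistics literature cited above.

```python
import math

# Illustrative computation of the correlation coefficient R and the
# coefficient of determination R^2 used to screen candidate variables.
def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

temp = [30, 35, 40, 44, 48]        # invented monthly maximum temperatures
load = [310, 380, 455, 500, 560]   # invented monthly peak loads, MW
r = correlation(temp, load)
r_squared = r ** 2
```

A value of R close to 1 (or -1) suggests the candidate variable carries a strong linear relationship to the target and is worth keeping on the input list.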
As many people discover when they try to model real-world problems and processes, clean
data is a luxury that is all too rare. Data collected from the different sources are generally noisy,
contain many gaps and outliers, and are poorly distributed. These issues, if not properly addressed
prior to the model's development, could lead to inaccurate and unreliable predictions. Collected
data, hence, should be inspected and analyzed carefully. The first step in data inspection is to
examine individual variables for erroneous values and to remove these values from the data set
after careful analysis and only if these values prove to be erroneous. Each variable should also be
inspected for outliers as well as missing data.
Once the most significant input variables are selected and carefully inspected, the forecaster
should then examine the distribution of each of these variables. The shape of the distribution will
indicate to the planner whether a particular variable needs data pre-processing. Data pre-processing
may involve any mathematical operations. Common techniques include calculating sums,
differences, differentials, inverses, powers, roots, averages, etc. Anderson (1990) and Lawrence
(1991) provide a detailed description on how to perform this important task.
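Two of the pre-processing operations listed above can be illustrated briefly; the monthly series below is invented, and min-max scaling is one common transformation of this kind, not necessarily the one used in the cited works.

```python
# Illustrative pre-processing helpers: first differences of a series,
# and min-max scaling of its values into a fixed range.
def first_differences(series):
    return [b - a for a, b in zip(series, series[1:])]

def min_max_scale(series, lo=0.0, hi=1.0):
    s_min, s_max = min(series), max(series)
    span = s_max - s_min
    return [lo + (hi - lo) * (v - s_min) / span for v in series]

monthly_load = [320, 355, 410, 480, 545, 560]   # invented monthly data
diffs = first_differences(monthly_load)
scaled = min_max_scale(monthly_load)
```

Scaling inputs into a common range also helps sigmoid-based networks train, since it keeps the weighted sums away from the flat tails of the transfer function.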
Step #3: The third step in the development process is the selection of an appropriate neural
network paradigm. ANN models come in a variety of typologies or paradigms. Simpson (1990)
provides a coherent description of 27 different popular ANN paradigms and presents comparative
analyses, applications, and implementations of these paradigms. The selection among these
paradigms should be based on the application requirements and the available neural network
software containing the specific paradigm. Some of the factors that should be considered in the
selection process include neural network size, required output type, method of training, and the
time available for model development and testing.
Step #4: Having selected the appropriate paradigm, the fourth step is to determine the
network's architecture design and to select its parameters. This process involves the selection of
the number of input nodes, hidden nodes, and output nodes. In addition, it also involves the
selection of the network parameters such as the transfer function, learning algorithm, learning rate,
momentum, and learning threshold.
Step #5: The next step in this process is training the model. Training involves presenting the
training set to the network and periodically monitoring the network's performance. This is
accomplished automatically by the Back-propagation simulation software that was
selected in step #3. Based on the user's choice, training cases can be presented to the network either
sequentially or by following a random process. During the training process, one or several of the
network parameters are changed to improve the network's performance. This process continues
until weights converge to the desired error level or the output reaches an acceptable level.
Step #6: After training is complete, testing and validation is the final step in the development
process. It is important to test the resulting ANN model against both the training set and the test
set. The test set should contain examples of input vectors that the network did not encounter
previously. This test is a benchmark that determines how well and how accurately the trained
network performs. Model validation, on the other hand, compares the results of
the developed model to results obtained from common or classical models or techniques being used
by the industry. The training, testing and validation process is explained in the NeuroShell
simulation package (1991).
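In outline, the testing step of step #6 might look like this; the stand-in model and the test cases below are invented, and any trained network could take the model's place.

```python
# Sketch of the testing step: pass a held-out test set through a trained
# model (any callable here) and record the error on unseen examples.
def mean_abs_error(model, test_set):
    errors = [abs(target - model(x)) for x, target in test_set]
    return sum(errors) / len(errors)

trained_model = lambda x: 2.0 * x + 1.0          # stand-in for a trained ANN
test_set = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2)]  # inputs the model never saw
mae = mean_abs_error(trained_model, test_set)
```

A small error on examples the network never encountered during training is what indicates generalization rather than memorization.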
4. An Example of an ANN Application
The following is an illustrative example of an ANN application to forecast electrical demand
for a Muscat power system. Detailed steps of this example can be found in Islam et al. (1995).
For any utility, medium-term load and energy forecasting is useful for planning fuel
procurement, reserve margins, unit maintenance schedules, diversity interchange, and system
expansion. This type of forecast is normally prepared for a range of one to five years.
5. Problem Definition and its Importance
According to the historical monthly peak load and energy data collected from 1986 to 1992,
the system's load and energy consumption appeared to be more or less cyclic, keeping in harmony
with temperature, which varies from an extreme maximum of 48ºC in summer to an extreme
minimum of 10ºC in winter. Load demand, therefore, varies considerably from hour to hour. It is
apparent that the electrical load and energy consumption pattern of this power system depends
heavily on weather. On the other hand, the growth in load and energy demand depends largely on
the number of consumers connected to the system. Variables such as temperature, humidity, wind
speed, number of connections, and other variables can be used to develop load and energy models
for the Muscat Power System. In fact, the number of such variables is large and, depending on the
type and nature of the forecast, should be carefully selected. The selection criteria could be based
on human intuition, knowledge and experience and should be validated using statistical techniques
to determine their contribution and correlation to the load or energy.
6. Data Requirements and Data Processing
In medium-term load forecasting, two forecasts are generally prepared: the load
forecast and the energy forecast. To develop these forecasts, the following variables
were identified using human intuition, brainstorming sessions, and consultation with experts in the
area: Absolute Maximum Temperature (Tmax), Average Maximum Temperature (Tavmax), Average
Maximum Relative Humidity (RHavmax), Average Relative Humidity (RHav), Wind Speed (W),
Duration of Bright Sunshine (S), Global Radiation (R), Precipitation (PR), Vapor Pressure (VP),
Degree Days (DD), Comfort Index (CI), and Number of Connections (CON). Data for all of the
above variables, with the exception of CI and CON, were collected from the historical records of
the Ministry of Housing, Electricity and Water and from the records provided by an automated
weather station, while CI and CON are processed variables (Bunn and Farmer, 1985). The
collected data were then examined to remove errors and outliers and to replace missing values. In
addition, a correlation analysis was performed to select the appropriate variables suitable for the
load and energy models. As a result of this test, the variables PR and VP were eliminated from the
energy model because of their low correlation and contribution. It was also interesting to find that
although some variables like W and RHmax had strong correlation to monthly electrical energy
consumption, they had very little correlation to monthly peak load.
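The correlation-based variable screening described here can be outlined as follows. All the data, the 0.5 cutoff, and the variable set are invented for illustration; they are not the paper's actual measurements.

```python
import math

# Screen candidate input variables by correlation with the target series.
def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs) *
                    sum((y - my) ** 2 for y in ys))
    return num / den

# Invented monthly data for illustration only.
energy = [100, 120, 150, 170, 200, 210]
candidates = {
    "Tmax": [30, 33, 38, 42, 46, 47],   # tracks energy closely
    "PR":   [5, 0, 2, 0, 1, 3],         # precipitation: weak relationship
}
# Keep only variables whose |R| clears an (invented) 0.5 cutoff.
kept = {name: corr(vals, energy)
        for name, vals in candidates.items()
        if abs(corr(vals, energy)) >= 0.5}
```

Under this invented data, a temperature-like variable survives the cutoff while a precipitation-like one is dropped, mirroring the elimination of PR and VP described above.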
7. Developing the Model
Using the historical data for the selected variables, monthly peak load and energy consumption
forecasting models were developed using an artificial neural network (ANN) simulation package,
NeuroShell (1991). For the Energy Model, monthly data from 1986 to 1990 were used in model
development. For the Load Model, the input variables selected were Tpkld, RHpkld, Tmax, and CON.
The monthly historical data used for developing the model also covered the same periods as in the
energy model. Similarly, in validating the ANN models' results, Socio-Economic models (Barakat
and Al Rashad, 1993) were also generated using the same variables and historical data. These
models were particularly selected for comparison since they have been shown to give better
accuracy than the Box and Jenkins models, and they are more suited to high-growth systems such
as the Muscat Power System.
8. Validation of Results
To test and validate the forecasts generated by Socio-Economic (SE) models and ANN
models, monthly historical data for 1991 and 1992 were used to test these models' prediction
capabilities. The resulting forecasts were then compared to the actual results, and statistical
numerical measures were then calculated. For the SE models, the mean absolute percentage error
(MAPE) for the energy model was approximately 10.969 while the load model was 10.786. The
testing set R2 was 0.946 and 0.719 for the two models, respectively. In comparison, the ANN-
based energy model's MAPE was 1.787 and the load model was 1.870. The testing set R2 was
0.996 and 0.989, respectively. Table 1 shows the comparison of these results. The monthly actual
and forecasted energy consumption and peak load for 1991 and 1992 are shown in Figures 2 and 3.
From this Figure, we can see that the SE models as well as the other models did not provide highly
accurate results as the ANN models did.
Table 1: Statistical results of the Socio-Economic and ANN models' prediction validation.

Technique                        ME         MAD        MAPE     R2      Accuracy (%)
Energy Model (Socio-Economic)    18638.17   25783.50   10.969   0.946   89.03
Energy Model (ANN)               489.67     4839.83    1.787    0.996   98.21
Load Model (Socio-Economic)      21.944     51.53      10.786   0.719   89.21
Load Model (ANN)                 -1.846     1.11       1.870    0.989   98.13
Where ME=Mean Error, MAD=Mean Absolute Deviation, MAPE=Mean Absolute Percentage
Error.
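The measures in Table 1 can be computed as follows. This is a sketch with invented sample values; accuracy is taken as 100 - MAPE, which is consistent with the figures in the table.

```python
# Illustrative definitions of the numerical measures in Table 1:
# ME, MAD, MAPE, and accuracy taken as 100 - MAPE.
def forecast_metrics(actual, forecast):
    n = len(actual)
    errors = [a - f for a, f in zip(actual, forecast)]
    me = sum(errors) / n                                    # mean error
    mad = sum(abs(e) for e in errors) / n                   # mean abs. deviation
    mape = 100.0 * sum(abs(e) / a for e, a in zip(errors, actual)) / n
    return {"ME": me, "MAD": mad, "MAPE": mape, "Accuracy": 100.0 - mape}

actual = [400.0, 500.0, 640.0]     # invented monthly peak loads, MW
forecast = [380.0, 520.0, 640.0]   # invented model forecasts
m = forecast_metrics(actual, forecast)
```

Note that ME can be small even when individual errors are large, since positive and negative errors cancel; MAD and MAPE avoid that cancellation, which is why all three are reported together.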
[Figure 2: monthly (January to December) actual, ANN forecast, and SE forecast peak load, in MW, on a 0 to 800 MW scale.]
Figure 2. Actual and forecasted results for the load model.
[Figure 3: monthly (January to December) actual, ANN forecast, and SE forecast energy consumption (axis labeled "Energy Consumption GW"), on a 0 to 400,000 scale.]
Figure 3. Actual and forecasted results for the energy consumption model.
9. Engineering Applications
Clear guidelines on how these steps are implemented to develop ANN models for different
engineering applications can be found in the following application papers, written by the author
and his colleagues.
10. Electrical Engineering
1. Short-term Load Forecasting Using Artificial Neural Networks (Al-Alawi and Islam, 1995).
2. Forecasting Monthly Electrical Load and Energy for a Fast Growing Utility Using an Artificial
Neural Network (Islam et al 1995).
3. Forecasting Long-term Electrical Peak Load and Energy Consumption for a Fast Growing
Utility Using Artificial Neural Networks (Islam and Al-Alawi, 1995).
4. Tuning of SVC Damping Controllers Over Wide Range of Load Models Using Artificial
Neural Network (Ellithy and Al-Alawi, 2000).
5. Tuning Power System Stabilizers over a Wide Range of Load Models Using Artificial
Neural Networks (Ellithy et al, 1997).
6. ANN-Based Load Identification and Control of AC Voltage Regulator (Gastli et al 2000).
7. Statistical Signal Characterization-Artificial Neural Network Based Hybrid System for
Electrocardiogram Interpretation (Al-Alawi et al 1998).
8. On-Line Unit Commitment for a Generation Constrained Fast Growing Utility Using Artificial
Neural Networks (Islam et al 1996).
11. Mechanical Engineering
1. Experimental Investigation and Failure Analysis of Fastened GRP Under Bending Using the
Finite Element Method and Artificial Neural Network Modeling (Seibi and Al-Alawi, 1999).
2. An ANN based approach for predicting global radiation in locations with no direct
measurement instrumentation (Al-Alawi and Al-Hinai, 1998).
3. Prediction of Fracture Toughness Using Artificial Neural Networks (Seibi and Al-Alawi,
1997).
4. Analysis and Prediction of Clearness Index Using Artificial Neural Networks (Al-Alawi and
Al-Hinai, 1996).
5. Design of Fiberglass/Copper Moulds Using Finite Element Analysis (Seibi and Al-Alawi,
1999).
6. Artificial Neural Networks: A Novel Approach for the Analysis and Prediction of Mechanical
Properties of 6063 Aluminum Alloy (Al-Alawi et al 1997).
7. Prediction of Failure Mechanisms and Mechanical Properties of Fastened GRP Under Bending
Using Artificial Neural Networks (Al-Alawi et al 1996).
8. Effects of Joint Geometry on the Flexural Behavior of Glass Reinforced Plastics (Seibi et al
1996).
12. Petroleum and Mineral Resources Engineering
1. Matrix and Cement Effects on Residual Oil Saturation in Sandstone Formations: A Neural
Network Approach (Al-Alawi et al 1998).
2. Establishing PVT Correlation for Omani Oils (Boukadi et al 1998).
3. A Comparison between an Artificial Neural Network and Geostatistical Technique in the
Estimation of Regionalized Variables (Tawo and Al-Alawi, 1998).
4. Matrix and Cement Effects on Residual Oil Saturation in Sandstone Formations (Boukadi and
Al-Alawi, 1998).
5. Application of ANN in Mineral Resource Evaluation (Al-Alawi and Tawo, 1998).
6. Analysis and Prediction of Oil Recovery Efficiency in Limestone Cores Using Artificial Neural
Networks (Boukadi and Al-Alawi, 1997).
7. Application of ANN to Predict Wettability and Relative Permeability of Sandstone Rocks (Al-
Alawi et al 1996).
8. Preliminary Studies on Using Artificial Neural Networks to Predict Sedimentary Facies of the
Petro-Carboniferous Glacigenic Al Khlata Formation in Oman (Schuniker et al 1999).
9. Assessment of Formation Damage Using Artificial Neural Networks (Kalam et al 1996).
10. The Application of Artificial Neural Networks to Reservoir Engineering (Kalam et al 1995).
13. Civil Engineering
1. A Comparative Analysis and Prediction of Traffic Accident Casualties in the Sultanate of
Oman Using ANN and Statistical Methods (Ali et al 1998).
2. A Novel Approach for Traffic Accident Analysis and Prediction Using Artificial Neural
Networks (Al-Alawi and Ali, 1996).
3. Intelligent Monitoring and Control of Large Engineering Projects (Al-Alawi et al 1992).
14. Other Applications
1. Forecasting Fish Exports in the Sultanate of Oman Using Artificial Neural Networks (Luqman
and Al-Alawi 2000).
2. Water Sorption Isotherms of Dates: Modeling Using GAB Equation and Artificial Neural
Network Applications (Myhara et al 1998).
15. Conclusion
In the past ten years, the international community has given considerable attention to
developing more accurate systems and models based on Artificial Intelligence techniques such as
artificial neural networks, expert systems, and fuzzy logic. These techniques have been applied
successfully in a variety of fields, with reported accuracy higher than that of classical models
and methods.
This paper provides basic ANN concepts, outlines steps used for ANN model development,
and lists examples of ANN-based engineering applications conducted in Oman. The paper is
intended to provide guidelines and necessary references and resources for individuals interested in
conducting research in engineering or other fields of study using back-propagation artificial neural
networks. It is recommended, therefore, to explore, learn, and use such advanced techniques in
order to solve engineering problems and to remain competitive under current economic conditions.
References
AL-ALAWI, S.M. and AL-HINAI, H. 1998. An ANN based approach for predicting global
radiation in locations with no direct measurement instrumentation. The Sixth Arab
International Solar Energy Conference (ISEC 6), March 29-April 1, 1998, Sultanate of Oman.
AL-ALAWI, S.M. and AL-HINAI, H. 1998. An ANN based approach for predicting global
radiation in locations with no direct measurement instrumentation. Renewable Energy
Journal. Elsevier Science, U.K., 14: 199-204.
AL-ALAWI, S.M. and AL-HINAI, H.A. 1996. Analysis and Prediction of Clearness Index Using
Artificial Neural Networks, Proceedings of the World Renewable Energy Congress, 3: 2115-
2119, June 1996, Colorado, U.S.A.
AL-ALAWI, S.M. and AL-HINAI, H.A. 1996. Analysis and Prediction of Clearness Index Using
Artificial Neural Networks. Renewable Energy Journal. 3: 2115-2119.
AL-ALAWI, S.M. and ALI, G.A. 1996. A Novel Approach for Traffic Accident Analysis and
Prediction Using Artificial Neural Networks, Road & Transport Research Journal, 5: 118-
128.
AL-ALAWI, S.M., BENJAMIN, C.O., and OMURTAG, Y. 1992. Intelligent Monitoring and
Control of Large Engineering Projects, 6th Oklahoma Symposium on Artificial Intelligence,
November 11-12, 1992, pp. 115-124, Tulsa, Oklahoma, U.S.A.
AL-ALAWI, S.M., BOUKADI, F.H., and BEMANI, A.S. 1998. Matrix and Cement Effects on
Residual Oil Saturation in Sandstone Formations: A Neural Network Approach, the Journal
of Petroleum Science & Technology, SQU, January 24, 1998.
AL-ALAWI, S.M. and ELLITHY, K.A. 2000. Tuning of SVC Damping Controllers Over Wide
Range of Load Models Using Artificial Neural Network. Electrical Power & Energy Systems
Journal, 22: 405-420.
AL-ALAWI, S.M. and ISLAM, S.M. 1995. Short-Term Load Forecasting Using Artificial Neural
Networks. 2nd IEEE International Conference on Electronics, Circuits and Systems
(ICECS'95). December 17-21, 1995. Amman, Jordan. pp.381-384.
AL-ALAWI, S.M., JERVASE, J.A., and JAWAD, A.M. 1998. Statistical Signal Characterization –
Artificial Neural Network Based Hybrid System for Electrocardiogram Interpretation.
Conference on Computational Aspects and their Applications in Electrical Engineering, July
22-23, 1998, Amman, Jordan.
AL-ALAWI, S.M., KALAM, M.Z., and AL-MUKHEINI, M. 1996. Application of ANN to Predict
Wettability and Relative Permeability of Sandstone Rocks, Engineering Journal of Qatar
University, 9: 29-43.
AL-ALAWI, S.M., SEIBI, A.C. and AL-ORAIMI, S.K. 1996. Prediction of Failure Mechanisms
and Mechanical Properties of Fastened GRP Under Bending Using Artificial Neural
Networks, Proceedings of the First International Conference on Composite Science and
Technology, pp. 7-12, June 18-20, 1996, Durban, South Africa.
AL-ALAWI, S.M., SIDDIQUI, R.A., and ALBALUSHI, K. 1997. Artificial Neural Networks: A
Novel Approach for the Analysis and Prediction of Mechanical Properties of 6063 Aluminum
Alloy, Al-Azhar Engineering International Conference in Cairo, Egypt, September 1997.
AL-ALAWI, S.M. and TAWO, E.E. 1998. Application of Artificial Neural Networks in Mineral
Resource Evaluation, Journal of King Saud University for Engineering Sciences, 10: 127-139.
ALI, G.A., AL-ALAWI, S.M., and BAKHEIT, C.S. 1998. A Comparative Analysis and Prediction
of Traffic Accident Casualties in the Sultanate of Oman Using Artificial Neural Networks and
Statistical Methods, SQU Journal for Science and Technology, 3: 11-20.
ANDERSON, J.A. 1990. Data Representation in Neural Networks. AI Expert, June, pp. 30-37.
BAILEY, D. and THOMPSON, D. 1990. How to Develop Neural Networks. AI Expert, June, pp.
38-47.
BARAKAT, E.H. and AL RASHAD, S.A. 1993. Social, environmental and economic constraints
affecting power and energy requirements in fast developing areas, Power Eng. J., 7(4): 177-184.
BOUKADI, F. H. and AL-ALAWI, S.M. 1998. Matrix and Cement Effects on Residual Oil
Saturation in Sandstone Formations, Petroleum Science and Technology Journal, U.S.A., 17:
99-113.
BOUKADI, F. and AL-ALAWI, S.M. 1997. Analysis and Prediction of Oil Recovery Efficiency in
Limestone Cores Using Artificial Neural Networks, Energy & Fuels Journal, U.S.A., 11:
1056-1060.
BOUKADI, F., AL-ALAWI, S.M., AL-BEMANI, A. and AL-QASSABI, S. 1998. Establishing
PVT Correlations for Omani Oils, Petroleum Science and Technology, U.S.A., September 1998.
BUNN, D.W. and FARMER, E.D. 1985. Comparative Models for Electrical Load Forecasting,
John Wiley & Sons.
BURDEN, R.L. and FAIRES, J.D. 1985. Numerical Analysis, Prindle, Weber, and Schmidt.
ELLITHY, K.A., AL-ALAWI, S.M., and ZAINALABDEEN, H.M. 1997. Tuning Power System
Stabilizers over a Wide Range of Load Models Using Artificial Neural Networks, Journal of
Engineering and Applied Science, Cairo University, 44: 389-406.
GASTLI, A., AKHERRAZ, M. and AL-ALAWI, S.M. 2000. ANN-Based Load Identification and
Control of AC Voltage Regulator. Proceedings of the IEEE International Energy Conference
(IEC 2000), Al-Ain, UAE, May 7-9, 2000, CD-ROM.
ISLAM, S.M. and AL-ALAWI, S.M. 1995. Forecasting Long-Term Electrical Peak Load and
Energy Consumption for a Fast Growing Utility Using Artificial Neural Networks,
Proceedings of the IEE International Power Engineering Conference (IPEC’95), Singapore,
March 1995, pp. 690-695.
ISLAM, S.M., AL-ALAWI, S.M., and ELLITHY, K.A. 1995. Forecasting Monthly Electrical
Load and Energy for a Fast Growing Utility Using an Artificial Neural Network. Electric
Power Systems Research Journal, 34: 1-9.
ISLAM, M., AL-ALAWI, S.M. and LEDWICH, G. 1996. On-Line Unit Commitment for a
Generation Constrained Fast Growing Utility Using Artificial Neural Networks, The
Australian Universities Power Engineering Conference (AUPEC’96), October 2-4, 1996,
Melbourne, Australia.
KALAM, M.Z., AL-ALAWI, S.M., and AL-MUKHEINI, M. 1996. Assessment of Formation
Damage Using Artificial Neural Networks, SPE Paper #31100, Proceedings of the
International Symposium on Formation Damage Control, pp. 301-309, February 14-15,
1996, Lafayette, Louisiana, U.S.A.
KALAM, M.Z., AL-ALAWI, S.M. and AL-MUKHEINI, M. 1995. The Application of Artificial
Neural Networks to Reservoir Engineering, IATMI International Symposium on Production
Optimization, July 22-26, 1995, pp. 1-8, Bandung, Indonesia.
LAWRENCE, J. 1991. Data Preparation for a Neural Network. AI Expert, November, pp. 34-41.
LUQMAN, A. and AL-ALAWI, S.M. 2000. Forecasting Fish Exports in the Sultanate of Oman
Using Artificial Neural Networks, Jordanian Journal Derasat, 22(2).
MENDENHALL, W. and BEAVER, R.J. 1994. Introduction to Probability and Statistics,
Duxbury Press.
MYHARA, R.M., SABLANI, S.S., AL-ALAWI, S.M. and TAYLOR, M.S. 1998. Water
Sorption Isotherms of Dates: Modeling Using GAB Equation and Artificial Neural Network
Applications, Lebensmittel-Wissenschaft und -Technologie (Food Science and Technology),
33: 699-706.
RUMELHART, D., MCCLELLAND, J., and THE PDP RESEARCH GROUP. 1988. Parallel
Distributed Processing: Explorations in the Microstructure of Cognition. Vol. 1: Foundations.
Cambridge, MA: MIT Press/Bradford Books.
SCHUNIKER, O., AL-ALAWI, S.M., AL-BEMANI, A.S., and KALAM, M.Z. 1999.
Preliminary Studies on Using Artificial Neural Networks to Predict Sedimentary Facies of
the Petro-Carboniferous Glacigenic Al Khlata Formation in Oman, 11th SPE Middle East
Oil Show & Conference (MEOS’99), February 20-23, 1999, Bahrain.
SEIBI, A. and AL-ALAWI, S.M. 1999. Experimental Investigation and Failure Analysis of
Fastened GRP Under Bending Using the Finite Element Method and Artificial Neural
Network Modeling, the Journal of Science & Technology, 4: 71-78.
SEIBI, A.C. and AL-ALAWI, S.M. 1999. Design of Fiberglass/Copper Moulds Using Finite
Element Analysis. First International Conference on Composite Science and Technology,
Orlando, Florida, June 1999.
SEIBI, A. and AL-ALAWI, S.M. 1997. Prediction of Fracture Toughness Using Artificial Neural
Networks. Engineering Fracture Mechanics Journal, 56: 311-319.
SEIBI, A.C., AL-ORAIMI, S.K., and AL-ALAWI, S.M. 1996. Effects of Joint Geometry on the
Flexural Behaviour of Glass Reinforced Plastics, Proceedings of the First International
Conference on Composite Science and Technology, pp. 471-476, June 18-20, 1996, Durban,
South Africa.
SIMPSON, P.K. 1990. Artificial Neural Systems: Foundations, Paradigms, Applications, and
Implementations. (1st Edition). Pergamon Press, Inc., Elmsford, NY.
STANLEY, J. 1990. Introduction to Neural Networks. (3rd Edition). California Scientific Software, Sierra Madre, CA.
TAWO, E.E. and AL-ALAWI, S.M. 1999. A Comparison between an Artificial Neural Network
and Geostatistical Technique in the Estimation of Regionalized Variables, Engineering
Journal of Qatar University, 12: 125-149.
WARD SYSTEMS GROUP, INC. 1991. NeuroShell, Neural Network Shell Program. (4th
Edition). Frederick, MD.
Received 17 June 2001
Accepted 8 September 2001
Appendix A: Artificial Neural Networks Resources
Books, journals and other resources on ANNs included in this guide are obtained from the
following WebPages:
http://www.faqs.org/faqs/ai-faq/neural-nets/part4/
Please see the above WebPages for more detailed information and full review.
Some of the best popular introductions to ANNs:
Hinton, G.E. (1992), "How Neural Networks Learn from Experience", Scientific American, 267
(September), 144-151, Author's Webpages:
http://www.cs.utoronto.ca/DCS/People/Faculty/hinton.html (official) and
http://www.cs.toronto.edu/~hinton (private) Journal WebPages: http://www.sciam.com/
Some of the best introductory books for business executives:
Bigus, J.P. (1996), Data Mining with Neural Networks: Solving Business Problems--from
Application Development to Decision Support, NY: McGraw-Hill, ISBN 0-07-005779-6, xvii +221
pages.
Fausett, L. (1994), Fundamentals of Neural Networks: Architectures, Algorithms, and
Applications, Englewood Cliffs, NJ: Prentice Hall, ISBN 0-13-334186-0. Also published as a
Prentice Hall International Edition, ISBN 0-13-042250-9. Sample software (source code listings in
C and Fortran) is included in an Instructor's Manual. Book WebPages (Publisher):
http://www.prenhall.com/books/esm_0133341860.html
Smith, M. (1996). Neural Networks for Statistical Modeling, NY: Van Nostrand Reinhold, ISBN
0-442-01310-8. Apparently there is a new edition: Smith, M. (1996). Neural Networks for
Statistical Modeling, Boston: International Thomson Computer Press, ISBN 1-850-32842-0.
Book WebPages (Publisher): http://www.thompson.com/ Publisher's address: 20 Park Plaza,
Suite 1001, Boston, MA 02116, USA.
Reed, R.D., and Marks, R.J., II (1999), Neural Smithing: Supervised Learning in Feedforward
Artificial Neural Networks, Cambridge, MA: The MIT Press, ISBN 0-262-18190-8. Author's
Webpage (Marks): http://cialab.ee.washington.edu/Marks.html Book WebPages (Publisher):
http://mitpress.mit.edu/book-home.tcl?isbn=0262181908
Weiss, S.M. and Kulikowski, C.A. (1991), Computer Systems That Learn, Morgan Kaufmann.
ISBN 1-55860-065-5. Author's WebPages: Kulikowski:
http://ruccs.rutgers.edu/faculty/kulikowski.html Book WebPages (Publisher):
http://www.mkp.com/books_catalog/1-55860-065-5.asp
Some of the best books on using and programming ANNs:
Masters, T. (1993), Practical Neural Network Recipes in C++, Academic Press, ISBN 0-12-
479040-2, US $45 incl. disks. Book WebPage (Publisher): http://www.apcatalog.com/cgi-
bin/AP?ISBN=0124790402&LOCATION=US&FORM=FORM2
Masters, T. (1995), Advanced Algorithms for Neural Networks: A C++ Sourcebook, NY: John
Wiley and Sons, ISBN 0-471-10588-0. Book WebPage (Publisher): http://www.wiley.com/
Masters, T. (1994), Signal and Image Processing with Neural Networks: A C++ Sourcebook, NY:
Wiley, ISBN 0-471-04963-8. Book WebPage (Publisher): http://www.wiley.com/ Additional
Information: One has to search.
Some of the best intermediate textbooks on ANNs:
Bishop, C.M. (1995). Neural Networks for Pattern Recognition, Oxford: Oxford University Press.
ISBN 0-19-853849-9 (hardback) or 0-19-853864-2 (paperback), xvii +482 pages. Author's
WebPages: http://neural-server.aston.ac.uk/People/bishopc/Welcome.html
Hertz, J., Krogh, A., and Palmer, R. (1991). Introduction to the Theory of Neural Computation.
Redwood City, CA: Addison-Wesley, ISBN 0-201-50395-6 (hardbound) and 0-201-51560-1
(paperbound). Book WebPages (Publisher): http://www2.awl.com/gb/abp/sfi/computer.html
Ripley, B.D. (1996) Pattern Recognition and Neural Networks, Cambridge: Cambridge University
Press, ISBN 0-521-46086-7 (hardback), xii +403 pages. Author's WebPages:
http://www.stats.ox.ac.uk/~ripley/ Book WebPages (Publisher): http://www.cup.cam.ac.uk/
Devroye, L., Györfi, L., and Lugosi, G. (1996), A Probabilistic Theory of Pattern Recognition,
NY: Springer, ISBN 0-387-94618-7, vii + 636 pages.
Some of the best books on neurofuzzy systems:
Brown, M., and Harris, C. (1994), Neurofuzzy Adaptive Modeling and Control, NY: Prentice Hall,
ISBN 0-13-134453-6. Author's WebPages: http://www.isis.ecs.soton.ac.uk/people/m_brown.html
and http://www.ecs.soton.ac.uk/~cjh/ Book WebPages (Publisher):
http://www.prenhall.com/books/esm_0131344536.html Additional Information: Additional page
at: http://www.isis.ecs.soton.ac.uk/publications/neural/mqbcjh94e.html and an abstract can be
found at: http://www.isis.ecs.soton.ac.uk/publications/neural/mqb93.html
Some of the best comparison of ANNs with other classification methods:
Michie, D., Spiegelhalter, D.J. and Taylor, C.C. (1994), Machine Learning, Neural and Statistical
Classification, Ellis Horwood. Author's Webpage: Donald Michie:
http://www.aiai.ed.ac.uk/~dm/dm.html Additional Information: This book is out of print but
available online at http://www.amsta.leeds.ac.uk/~charles/statlog/
Other notable books:
Anderson, J.A. (1995), An Introduction to Neural Networks, Cambridge, MA: The MIT Press,
ISBN 0-262-01144-1. Author's WebPages: http://www.cog.brown.edu/~anderson Book WebPages
(Publisher): http://mitpress.mit.edu/book-home.tcl?isbn=0262510812
or http://mitpress.mit.edu/book-home.tcl?isbn=0262011441 (hardback) Additional Information:
Programs and additional information can be found at: ftp://mitpress.mit.edu/pub/Intro-to-
NeuralNets/
Feedforward networks:
Fine, T.L. (1999) Feedforward Neural Network Methodology, NY: Springer, ISBN 0-387-98745-2.
Husmeier, D. (1999), Neural Networks for Conditional Probability Estimation: Forecasting Beyond
Point Predictions, Berlin: Springer Verlag, ISBN 185233095.
Time-series forecasting:
Weigend, A.S. and Gershenfeld, N.A., eds. (1994) Time Series Prediction: Forecasting the Future
and Understanding the Past, Reading, MA: Addison-Wesley, ISBN 0201626020. Book WebPages
(Publisher): http://www2.awl.com/gb/abp/sfi/complexity.html
Gately, E. (1996). Neural Networks for Financial Forecasting. New York: John Wiley and Sons,
Inc., ISBN 0-471-11212-7. Book WebPages (Publisher): http://www.wiley.com/
Fuzzy logic and neurofuzzy systems:
Kosko, B. (1997), Fuzzy Engineering, Upper Saddle River, NJ: Prentice Hall, ISBN 0-13-124991-
6. Kosko's new book is a big improvement over his older neurofuzzy book and makes an excellent
sequel to Brown and Harris (1994).
Nauck, D., Klawonn, F., and Kruse, R. (1997), Foundations of Neuro-Fuzzy Systems, Chichester:
Wiley, ISBN 0-471-97151-0.
Optimization:
Cichocki, A. and Unbehauen, R. (1993). Neural Networks for Optimization and Signal Processing.
NY: John Wiley & Sons, ISBN 0-471-93010-5 (hardbound), 526 pages, $57.95. Book WebPages
(Publisher): http://www.wiley.com/
General Books for the Beginner:
Caudill, M. and Butler, C. (1990). Naturally Intelligent Systems. MIT Press: Cambridge,
Massachusetts. (ISBN 0-262-03156-6). Book WebPages (Publisher): http://mitpress.mit.edu/book-
home.tcl?isbn=0262531135
Chester, M. (1993). Neural Networks: A Tutorial, Englewood Cliffs, NJ: PTR Prentice Hall. Book
WebPage (Publisher): http://www.prenhall.com/
Dayhoff, J. E. (1990). Neural Network Architectures: An Introduction. Van Nostrand Reinhold:
New York.
Freeman, James (1994). Simulating Neural Networks with Mathematica, Addison-Wesley, ISBN:
0-201-56629-X. Book WebPage (Publisher): http://cseng.aw.com/bookdetail.qry?ISBN=0-201-
56629-X&ptype=0 Additional Information: Sourcecode available under:
ftp://ftp.mathsource.com/pub/Publications/BookSupplements/Freeman-1993
McCord Nelson, M. and Illingworth, W.T. (1990). A Practical Guide to Neural Nets. Addison-
Wesley Publishing Company, Inc. (ISBN 0-201-52376-0). Book WebPages (Publisher):
http://cseng.aw.com/bookdetail.qry?ISBN=0-201-63378-7&ptype=1174
Müller, B., Reinhardt, J., Strickland, M. T. (1995). Neural Networks: An Introduction (2nd ed.).
Berlin, Heidelberg, New York: Springer-Verlag. ISBN 3-540-60207-0. (DOS 3.5" disk included.)
Book WebPages (Publisher): http://www.springer.de/catalog/html-
files/deutsch/phys/3540602070.html
Not-quite-so-introductory literature:
Kung, S.Y. (1993). Digital Neural Networks, Prentice Hall, Englewood Cliffs, NJ. Book WebPages
(Publisher): http://www.prenhall.com/books/ptr_0136123260.html
Levine, D. S. (2000). Introduction to Neural and Cognitive Modeling. 2nd ed., Lawrence Erlbaum:
Hillsdale, N.J. Comments from readers of comp.ai.neural-nets: "Highly recommended".
Maren, A., Harston, C. and Pap, R., (1990). Handbook of Neural Computing Applications.
Academic Press. ISBN: 0-12-471260-6. (451 pages)
Pao, Y. H. (1989). Adaptive Pattern Recognition and Neural Networks, Addison-Wesley Publishing
Company, Inc. (ISBN 0-201-12584-6). Book WebPages (Publisher): http://www.awl.com/
Refenes, A. (Ed.) (1995). Neural Networks in the Capital Markets. Chichester, England: John
Wiley and Sons, Inc. Book WebPages (Publisher): http://www.wiley.com/
Simpson, P. K. (1990). Artificial Neural Systems: Foundations, Paradigms, Applications and
Implementations. Pergamon Press: New York.
Wasserman, P.D. (1993). Advanced Methods in Neural Computing. Van Nostrand Reinhold: New
York (ISBN: 0-442-00461-3).
Zeidenberg. M. (1990). Neural Networks in Artificial Intelligence. Ellis Horwood, Ltd., Chichester.
Comments from readers of comp.ai.neural-nets: "Gives the AI point of view".
Zornetzer, S. F., Davis, J. L. and Lau, C. (1990). An Introduction to Neural and Electronic
Networks. Academic Press. (ISBN 0-12-781881-2)
Subject: Journals and magazines about Neural Networks:
Title: Neural Networks. Publish: Pergamon Press. Address: Pergamon Journals Inc., Fairview Park,
Elmsford, New York 10523, USA and Pergamon Journals Ltd., Headington Hill Hall, Oxford
OX3 0BW, England. Freq.: 10 issues/year (vol. 1 in 1988). Cost/Yr: Free with INNS or JNNS or
ENNS membership ($45?), Individual $65, Institution $175. ISSN #: 0893-6080. URL:
http://www.elsevier.nl/locate/inca/841
Title: Neural Computation. Publish: MIT Press. Address: MIT Press Journals, 55 Hayward Street,
Cambridge, MA 02142-9949, USA. Phone: (617) 253-2889. Freq.: Quarterly (vol. 1 in 1989).
Cost/Yr: Individual $45, Institution $90, Students $35; Add $9 Outside USA. ISSN #: 0899-7667.
URL: http://mitpress.mit.edu/journals-legacy.tcl
Title: Neural Computing Surveys. Publish: Lawrence Erlbaum Associates. Address: 10
Industrial Avenue, Mahwah, NJ 07430-2262, USA. Freq.: Yearly. Cost/Yr: Free on-line. ISSN #:
1093-7609. URL: http://www.icsi.berkeley.edu/~jagota/NCS/
Title: IEEE Transactions on Neural Networks. Publish: Institute of Electrical and Electronics
Engineers (IEEE). Address: IEEE Service Center, 445 Hoes Lane, P.O. Box 1331, Piscataway, NJ
08855-1331, USA. Tel: (201) 981-0060. Cost/Yr: $10 for Members belonging to participating IEEE
societies. Freq.: Quarterly (vol. 1 in March 1990). URL:
http://www.ieee.org/nnc/pubs/transactions.html
Title: International Journal of Neural Systems. Publish: World Scientific Publishing. Address:
USA: World Scientific Publishing Co., 1060 Main Street, River Edge, NJ 07666, Tel: (201) 487
9655; Europe: World Scientific Publishing Co. Ltd., 57 Shelton Street, London WC2H 9HE,
England, Tel: (0171) 836 0888; Asia: World Scientific Publishing Co. Pte. Ltd., 1022 Hougang
Avenue 1 #05-3520, Singapore 1953, Rep. of Singapore, Tel: 382 5663. Freq.: Quarterly (Vol. 1 in
1990). Cost/Yr: Individual $122, Institution $255 (plus $15-$25 for postage). ISSN #: 0129-0657
(IJNS).
Title: International Journal of Neurocomputing. Publish: Elsevier Science Publishers, Journal
Dept., P.O. Box 211, 1000 AE Amsterdam, The Netherlands. Freq.: Quarterly (vol. 1 in 1989).
URL: http://www.elsevier.nl/locate/inca/505628
Title: Neural Processing Letters. Publish: Kluwer Academic Publishers. Address: P.O. Box 322,
3300 AH Dordrecht, The Netherlands. Freq.: 6 issues/year (vol. 1 in 1994). Cost/Yr: Individuals
$198, Institution $400 (including postage). ISSN #: 1370-4621. URL:
http://www.wkap.nl/journalhome.htm/1370-4621
Title: Neural Network News. Publish: AIWeek Inc. Address: Neural Network News, 2555
Cumberland Parkway, Suite 299, Atlanta, GA 30339, USA. Tel: (404) 434-2187. Freq.: Monthly
(beginning September 1989). Cost/Yr: USA and Canada $249, Elsewhere $299. Remark:
Commercial Newsletter.
Title: Network: Computation in Neural Systems. Publish: IOP Publishing Ltd. Address: Europe:
IOP Publishing Ltd, Techno House, Redcliffe Way, Bristol BS1 6NX, UK; USA: American
Institute of Physics, Subscriber Services, 500 Sunnyside Blvd., Woodbury, NY 11797-2999. Freq.:
Quarterly (1st issue 1990). Cost/Yr: USA: $180, Europe: 110 pounds. URL:
http://www.iop.org/Journals/ne
Title: Connection Science: Journal of Neural Computing, Artificial Intelligence and Cognitive
Research. Publish: Carfax Publishing. Address: Europe: Carfax Publishing Company, PO Box 25,
Abingdon, Oxfordshire OX14 3UE, UK; USA: Carfax Publishing Company, PO Box 2025,
Dunnellon, Florida 34430-2025, USA; Australia: Carfax Publishing Company, Locked Bag 25,
Deakin, ACT 2600, Australia. Freq.: Quarterly (vol. 1 in 1989). Cost/Yr: Personal rate: 48 pounds
(EC), 66 pounds (outside EC), US$118 (USA and Canada); Institutional rate: 176 pounds (EC),
198 pounds (outside EC), US$340 (USA and Canada).
Title: International Journal of Neural Networks. Publish: Learned Information. Freq.: Quarterly
(vol. 1 in 1989). Cost/Yr: 90 pounds. ISSN #: 0954-9889.
Subject: Conferences and Workshops on Neural Networks:
The journal "Neural Networks" has a list of conferences, workshops and meetings in each issue. It
is also available from http://www.ph.kcl.ac.uk/neuronet/bakker.html.
The IEEE Neural Network Council maintains a list of conferences at http://www.ieee.org/nnc.
Conferences, workshops, and other events concerned with neural networks, inductive learning,
genetic algorithms, data mining, agents, applications of AI, pattern recognition, vision, and related
fields are listed at Georg Thimm's web page http://www.drc.ntu.edu.sg/users/mgeorg/enter.epl
Subject: Freeware, Shareware, and Commercial Software for Neural Networks:
Parts 5 and 6 of the WebPages http://www.faqs.org/faqs/ai-faq/neural-nets/ contain URLs for the
following:
Several source codes for ANN in C/C++ and Java
A review of 44 freeware and shareware packages for ANN simulation
A review of 40 commercial software packages for ANN simulation
Please see the above WebPages for detailed information or for a full review.