Neuro-Fuzzy Models, BELRFS and LOLIMOT, for
Prediction of Chaotic Time Series
School of Information Science, Computer and Electrical
Abstract— This paper suggests a novel learning model for the prediction of chaotic time series, the brain emotional learning-based recurrent fuzzy system (BELRFS). The prediction model is inspired by the emotional learning system of the mammalian brain. BELRFS is applied to predicting the Lorenz and Ikeda time series, and the results are compared with those of a prediction model based on locally linear neuro-fuzzy models with the local linear model tree algorithm (LoLiMoT).
Keywords- brain emotional learning; LoLiMoT; neuro-fuzzy model; prediction of chaotic time series
I. INTRODUCTION

Artificial intelligence (AI) algorithms have been proposed for the prediction of chaotic systems, and recent advances have significantly increased the prediction accuracy for chaotic dynamic systems. Typical applications are found in medicine, economics, fluid mechanics, and astrophysics.
Traditionally, neural networks and neuro-fuzzy models such as the multilayer perceptron network (MLP), the radial basis function network (RBF), the adaptive neuro-fuzzy inference system (ANFIS) and the locally linear neuro-fuzzy (LLNF) model have been employed for time series prediction. Although these black-box models, neural networks and neuro-fuzzy models alike, have shown good results in short-term prediction, most of them cannot achieve accurate results for long-term prediction with low computational complexity. Thus, devising learning models that predict chaotic time series with low time complexity and high accuracy is still a challenging task for long-term prediction. This paper suggests a brain emotional learning-based recurrent fuzzy system (BELRFS) for predicting chaotic time series. The results will show that the BELRFS model can achieve very good long-term prediction with fairly low computational complexity.
Section II reviews brain emotional learning based models and provides an overview of the structure of BELRFS. Section III explains another neuro-fuzzy model, LoLiMoT, and describes its learning algorithm. In Section IV, two benchmark chaotic time series, Lorenz and Ikeda, are used to evaluate BELRFS and LoLiMoT. Finally, concluding remarks and further improvements to the model are given in Section V.
II. BRAIN EMOTIONAL LEARNING MODELS
The brain emotional learning based intelligent controller (BELBIC) can be considered the first practical implementation of brain emotional learning. BELBIC was developed on the basis of a computational model of the brain emotional system derived by Moren and Balkenius. It has been successfully applied in a number of application areas: heating and air-conditioning control, aerospace launch vehicles, intelligent washing machines, and micro heat exchangers. Several brain emotional learning models (e.g., ELFIS, BEL and RRFBEL) have been developed for prediction applications.
A. Computational model of emotional learning
The computational model of brain emotional learning proposed by Moren and Balkenius mimics the connections between the regions of the brain that are involved in emotional learning, i.e. the limbic system of a typical mammalian brain. The limbic system consists of the thalamus, the sensory cortex, the amygdala and the orbitofrontal cortex. The architecture of the amygdala-orbitofrontal system and the connections between its parts are depicted in Fig. 1. There is a bidirectional connection between the orbitofrontal cortex and the amygdala to exchange the outputs of each subsystem. The functions of the parts of the amygdala-orbitofrontal system are explained as follows:
1) Thalamus: The input of the model, S, enters this part, which calculates the maximum value of S using (1).
Figure 1. Graphical description of the amygdala-orbitofrontal system.
978-1-4673-1448-0/12/$31.00 ©2012 IEEE
S_th = max(S)    (1)
2) Amygdala: The output of this part, which consists of several linear nodes, is calculated using (2), where V_i is the weight of the ith node. The number of amygdala nodes is one more than the number of dimensions of the input.

A_i = S_i V_i    (2)

3) Orbitofrontal: The output of this part, which also consists of several linear nodes, is calculated using (3), where W_i is the weight of the ith node.

O_i = S_i W_i    (3)

4) The output of the model is obtained by using (4).

E = Σ_i A_i − Σ_i O_i    (4)

5) The weights of the amygdala and the orbitofrontal cortex are updated by (5) and (6), respectively,

ΔV_i = α S_i max(0, REW − Σ_j A_j)    (5)

ΔW_i = β S_i (E − REW)    (6)

where α and β are the two learning steps and REW is the reward function. In the amygdala-orbitofrontal system, two subsystems learn the input-output mapping. The simple structure and fast updating rules are the main advantages of this model. However, there is no explicit definition of the reward function, and the model has not shown reasonable results in prediction. Several types of learning models based on the amygdala-orbitofrontal system have been developed by defining different functions or connections.
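The amygdala-orbitofrontal update above can be sketched in a few lines of code. The following is a minimal illustration of one learning step under the standard published form of (1)-(6); the function and variable names, the vector shapes and the learning rates α = β = 0.1 are illustrative choices, not taken from the original model description.

```python
import numpy as np

def bel_step(S, V, W, REW, alpha=0.1, beta=0.1):
    """One step of the simplified amygdala-orbitofrontal learning model."""
    S_th = np.max(S)                 # (1) thalamic (coarse) input
    S_a = np.append(S, S_th)         # amygdala sees one extra input node
    A = S_a * V                      # (2) amygdala node outputs
    O = S * W                        # (3) orbitofrontal node outputs
    E = A.sum() - O.sum()            # (4) model output
    # (5) amygdala weights can only grow (monotonic learning)
    V = V + alpha * S_a * max(0.0, REW - A.sum())
    # (6) orbitofrontal weights track the mismatch with the reward
    W = W + beta * S * (E - REW)
    return E, V, W
```

Iterating `bel_step` over a stream of stimuli lets the amygdala weights accumulate the excitatory association while the orbitofrontal weights learn to inhibit the surplus.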
B. Brain emotional learning-based recurrent fuzzy system
The brain emotional learning-based recurrent fuzzy system (BELRFS) is an enhancement of previous brain emotional learning models. The structure of BELRFS is based on the internal representation of the brain emotional system: it mimics the connections between the main parts of the limbic system. BELRFS consists of four modules, named TH, CX, AMYG and ORBI. Fig. 2 depicts the structure of the model and the connections between its sub-modules. During the training phase, AMYG receives a recurrent signal, while during the test phase this signal is removed (see Fig. 3).
Consider an input vector i that enters the BELRFS. The TH provides the coarse value, the maximum value of the input stimulus, and sends it to the AMYG. The CX receives i, which carries no coarse value, and distributes it as s between the AMYG and the ORBI.
Figure 2. The connection between BELRFS’ modules during the training phase.
Figure 3. The connection between BELRFS’ modules during the test phase.
The function of each module is described as follows:
1) TH: It determines the maximum value of the input stimulus.
2) CX: This module provides the inputs for the ORBI and the AMYG.
3) AMYG: It consists of neuro-fuzzy systems that provide r_a and r, the overall output of the model, using (7) and (8), where f_1 and f_3 are neuro-fuzzy functions calculated by (9), w_i is the product of the membership functions, and h_i is calculated as (10). The number of fuzzy rules is equal to the number of fuzzy neurons. In fact, r is the output of BELRFS for the input i. During the test phase, the function g is calculated using the weighted k-nearest neighbor method. The AMYG also calculates P, which is sent to the ORBI.
4) ORBI: The output of this module is calculated using (11), where f_2 is the fuzzy function defined by (9).
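The weighted k-nearest neighbor estimate that the AMYG uses in the test phase can be sketched as follows. Since the exact weighting scheme is not specified here, inverse-distance weighting is assumed for illustration, and the function name is our own.

```python
import numpy as np

def weighted_knn(X_train, y_train, x_query, k=3, eps=1e-9):
    """Inverse-distance-weighted k-nearest-neighbour estimate (sketch)."""
    # Distance from the query point to every stored training sample.
    d = np.linalg.norm(X_train - x_query, axis=1)
    idx = np.argsort(d)[:k]            # indices of the k closest samples
    w = 1.0 / (d[idx] + eps)           # inverse-distance weights
    return float(np.dot(w, y_train[idx]) / w.sum())
```

Nearby samples dominate the estimate, so an exact match in the training set is returned (almost) unchanged.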
The nonlinear learning parameters of BELRFS are updated using the steepest descent algorithm, whereas the least squares method is used for the linear learning parameters. The number of learning parameters in BELRFS grows with p, the input dimension. BELRFS can be considered a type of neuro-fuzzy model with the ability to predict more accurately than previous neuro-fuzzy models. However, as in other neuro-fuzzy methods, an increase in the input dimension increases the model complexity.
III. LOCALLY LINEAR NEURO FUZZY MODEL TREE
The architecture of the local linear model tree algorithm (LoLiMoT) is depicted in Fig. 4. The model has a neuro-fuzzy structure with one hidden layer, which consists of fuzzy neurons, and one linear output layer. The fuzzy neurons divide the input space into small subspaces, each covered by a locally linear model and a validity function, a normalized Gaussian. The output of the jth local linear model is calculated using (12), where u = [u_1, ..., u_p] is the input vector of dimension p:

ŷ_j = w_j0 + w_j1 u_1 + ... + w_jp u_p    (12)

The output of the model is calculated as the weighted summation of the locally linear models according to (13), where φ_j, the validity function of each neuron, is calculated by (14) as the normalization of the Gaussian function μ_j that is calculated by (15):

ŷ = Σ_{j=1..M} ŷ_j φ_j(u)    (13)

φ_j(u) = μ_j(u) / Σ_{k=1..M} μ_k(u)    (14)

μ_j(u) = exp(−(1/2) Σ_{i=1..p} (u_i − c_ji)² / σ_ji²)    (15)

Here M is the number of local linear models, and c_ji and σ_ji are the center and the standard deviation of the Gaussian of the jth neuron.
The local linear model tree algorithm (LoLiMoT) is an incremental heuristic algorithm for optimizing the learning parameters: the nonlinear parameters, which correspond to the validity functions, and the linear parameters. The algorithm consists of two loops: the outer loop updates the nonlinear parameters and the nested loop optimizes the linear parameters. The nested loop is based on the least squares method and optimizes the M(p + 1) linear learning parameters, where M is the number of fuzzy neurons in the hidden layer and p is the dimension of the input vector. The following steps explain LoLiMoT:
Figure 4. The structure of LoLiMoT.
1) An initial model is selected and considered as the starting
point for the algorithm. Starting with M=1 means that the
initial model has one neuron in the hidden layer.
2) The output of the model, ŷ, is calculated by (16), where X is the regression matrix built from the inputs and validity functions and Ŵ, the vector of linear parameters (18), is obtained by the least squares method using (17).

ŷ = X Ŵ    (16)

Ŵ = (Xᵀ X)⁻¹ Xᵀ y    (17)

Ŵ = [w_10, ..., w_1p, ..., w_M0, ..., w_Mp]ᵀ    (18)
3) The local cost function, formulated as (19), is calculated for each neuron. The worst local linear model (LLM), the one with the maximum value of the loss function, is selected and denoted by l.
4) The loss function is calculated for all possible divisions of l. The best one, the partition with the minimum value of the loss function, is selected and denoted by k.
5) The validity function corresponding to k is added, and the number of locally linear models is incremented by one.
6) The algorithm stops if the termination condition is satisfied; otherwise it returns to step 2 and continues.
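The six steps above can be sketched for a one-dimensional input as follows. This is a simplified illustration, not the full algorithm: splits are made only at interval midpoints (in one dimension there is only one axis to divide), and the Gaussian widths are tied to the interval size by a heuristic ratio, which is an assumption rather than a setting taken from the text.

```python
import numpy as np

def fit_lolimot_1d(x, y, max_models=8, sigma_ratio=0.33):
    """Minimal 1-D LoLiMoT sketch: greedily split the worst interval and
    refit each local linear model by weighted least squares."""
    intervals = [(float(x.min()), float(x.max()))]    # step 1: M = 1

    def validity(ivals, xq):
        # Normalized Gaussian validity functions, as in (14)-(15).
        c = np.array([(a + b) / 2 for a, b in ivals])
        s = np.array([sigma_ratio * (b - a) + 1e-9 for a, b in ivals])
        g = np.exp(-0.5 * ((xq[:, None] - c) / s) ** 2)
        return g / g.sum(axis=1, keepdims=True)

    def fit(ivals):
        # Step 2: weighted least squares for every local linear model.
        phi = validity(ivals, x)
        theta = []
        for j in range(len(ivals)):
            A = np.column_stack([np.ones_like(x), x]) * np.sqrt(phi[:, j:j + 1])
            b = y * np.sqrt(phi[:, j])
            theta.append(np.linalg.lstsq(A, b, rcond=None)[0])
        return phi, np.array(theta)

    def predict(ivals, theta, xq):
        # Weighted sum of the local linear models, as in (13).
        phi = validity(ivals, xq)
        local = theta[:, 0][None, :] + np.outer(xq, theta[:, 1])
        return (phi * local).sum(axis=1)

    while len(intervals) < max_models:
        phi, theta = fit(intervals)
        err = (y - predict(intervals, theta, x)) ** 2
        local_loss = phi.T @ err                 # step 3: local cost per model
        worst = int(np.argmax(local_loss))       # worst local linear model
        a, b = intervals.pop(worst)
        mid = (a + b) / 2
        intervals += [(a, mid), (mid, b)]        # steps 4-5: split and add
    phi, theta = fit(intervals)                  # step 6 ends the loop here
    return lambda xq: predict(intervals, theta, np.asarray(xq, float))
```

Fitting y = |x| with eight local models, for example, recovers the kinked function closely even though every local model is linear.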
The main advantage of LoLiMoT is its low time complexity, which grows linearly with the number of fuzzy neurons. However, the curse of dimensionality is a significant issue for this algorithm.
IV. CHAOTIC TIME SERIES PREDICTION
As previously mentioned, time series prediction is one of the challenging applications for black-box models such as the multilayer perceptron (MLP), the radial basis function network (RBF) and the adaptive neuro-fuzzy inference system (ANFIS). These models, especially neural networks, have shown reasonable results in short-term prediction. However, for long-term prediction there is a noticeable decrease in prediction accuracy. It has been shown that the locally linear neuro-fuzzy model with the local linear model tree algorithm (LoLiMoT) has the ability to achieve an arbitrary accuracy for long-term prediction. In this section, long-term prediction of two chaotic time series, Ikeda and Lorenz, is conducted using BELRFS and LoLiMoT. This comparison aims to show the capability of BELRFS to accurately predict time series in the long term. For comparison of the results, the normalized mean square error (NMSE), formulated by (20), is considered as the error measure.

NMSE = Σ_i (Y_i − Ŷ_i)² / Σ_i (Y_i − Ȳ)²    (20)
where Y, Ŷ and Ȳ are the observed output, the predicted value and the average of the observed values, respectively.
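The NMSE of (20) can be computed directly from its definition; a minimal implementation:

```python
def nmse(y_true, y_pred):
    """Normalized mean square error, eq. (20): the sum of squared
    prediction errors divided by the total squared deviation of the
    observations from their mean."""
    y_bar = sum(y_true) / len(y_true)
    num = sum((y - yh) ** 2 for y, yh in zip(y_true, y_pred))
    den = sum((y - y_bar) ** 2 for y in y_true)
    return num / den
```

A perfect prediction gives NMSE = 0, while predicting the mean of the observations gives NMSE = 1, so values well below 1 indicate a useful predictor.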
A. Lorenz time series
The Lorenz time series, the first case study, is reconstructed from the Lorenz equations (21), with the parameter values of a, b and c and the sampling period T_s given in (22); i.e. the sampling rate used to reconstruct the time series is 0.01 seconds.

dx/dt = a(y − x),  dy/dt = x(c − z) − y,  dz/dt = xy − bz    (21)

a = 10,  c = 28,  b = 8/3,  T_s = 0.01 s    (22)
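For illustration, a Lorenz series sampled at the 0.01 s interval of (22) can be generated by numerically integrating (21). The forward-Euler scheme, the number of sub-steps and the initial condition below are illustrative assumptions, not settings taken from the experiments.

```python
def lorenz_series(n, dt=0.01, a=10.0, b=8.0 / 3.0, c=28.0,
                  x0=(1.0, 1.0, 1.0), substeps=10):
    """Generate n samples of the Lorenz x-variable at sampling period dt,
    integrating (21) with forward-Euler sub-steps of size dt/substeps."""
    x, y, z = x0
    h = dt / substeps
    series = []
    for _ in range(n):
        for _ in range(substeps):
            dx = a * (y - x)
            dy = x * (c - z) - y
            dz = x * y - b * z
            x, y, z = x + h * dx, y + h * dy, z + h * dz
        series.append(x)
    return series
```

A higher-order integrator (e.g. Runge-Kutta) would give a more faithful trajectory, but even this crude scheme produces the familiar bounded, aperiodic oscillation.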
The embedding dimension is selected as three and the data samples are taken from the 32nd to the 44th second. BELRFS and LoLiMoT are employed for long-term prediction, 10, 20 and 30 steps ahead of the Lorenz time series, using 500 training data samples and 700 test data samples. Studies that employed RBF and LLNF for 10-step-ahead prediction of Lorenz reported NMSEs of 0.4876 and 0.1682, respectively. Table I lists the NMSEs for 10-, 20- and 30-step-ahead prediction achieved by LoLiMoT and BELRFS. The NMSEs obtained with BELRFS are lower than those of LoLiMoT for multi-step-ahead prediction of Lorenz. The prediction errors of BELRFS and LoLiMoT for 10-step-ahead prediction of Lorenz are shown in Fig. 5; as the figure shows, the prediction accuracy of BELRFS is higher than that of LoLiMoT.
TABLE I. THE NMSES OF BELRFS AND LOLIMOT TO PREDICT MULTI-STEP AHEAD OF LORENZ.

Model      10 step ahead   20 step ahead   30 step ahead
BELRFS     1.463e-4        0.1250          0.6023
LoLiMoT    0.0012          0.1509          0.7086
Figure 5. The prediction errors of BELRFS and LoLiMoT for 10-step-ahead prediction of the Lorenz time series.
B. Ikeda time series
The second time series that is tested is Ikeda. It is
reconstructed by an Ikeda map using (23) .
)( )) cos()() sin()(( ) 1
)( )) sin()() cos()((1 ) 1
is the magnitude of the noise, and
random variable from a uniformly distribution [-1,1]. For
prediction of this time series, the training data set and the test
data set have 500 samples and 700 samples respectively.
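A noise-free Ikeda series of the form of (23) can be generated as follows. The value u = 0.9 and the initial state are conventional choices rather than settings taken from the experiments, and the noise term d ε(n) is omitted in this sketch.

```python
import math

def ikeda_series(n, u=0.9, x0=0.1, y0=0.1):
    """Generate n samples of the x-component of the deterministic
    Ikeda map (noise term omitted)."""
    x, y = x0, y0
    series = []
    for _ in range(n):
        t = 0.4 - 6.0 / (1.0 + x * x + y * y)   # state-dependent angle
        x, y = (1.0 + u * (x * math.cos(t) - y * math.sin(t)),
                u * (x * math.sin(t) + y * math.cos(t)))
        series.append(x)
    return series
```

For u = 0.9 the iterates stay bounded while never settling into a periodic orbit, which is what makes the series a standard chaotic prediction benchmark.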
Multi-step-ahead predictions of Ikeda are tested with both BELRFS and LoLiMoT.
Table II presents the NMSEs for predicting 20, 30 and 40 steps ahead of Ikeda; the NMSEs are lower for BELRFS than for LoLiMoT. Table III compares the characteristics of the two methods for 20-step-ahead prediction of Ikeda. The differences in time complexity are noticeable: the CPU time of LoLiMoT for this prediction is about 900 seconds, while BELRFS needs about 400 seconds. In addition, the number of neurons, which indicates the model complexity, differs between BELRFS and LoLiMoT. BELRFS accurately predicts the 20-step-ahead values of Ikeda using a smaller number of neurons: it consists of 30 neurons, while LoLiMoT has 79 locally linear neurons. These results indicate that BELRFS can be considered a good predictor model for long-term prediction.
TABLE II. NMSES OF LOLIMOT AND BELRFS FOR PREDICTING MULTI-STEP AHEAD OF IKEDA.

Model      20 step ahead   30 step ahead   40 step ahead
BELRFS     1.435e-7        1.714e-7        3.819e-7
LoLiMoT    1.505e-5        1.421e-5        6.570e-7
TABLE III. COMPARISON BETWEEN THE CHARACTERISTICS OF BELRFS AND LOLIMOT TO PREDICT THE 20 STEP AHEAD OF IKEDA.

Model      Learning algorithm   Number of neurons   CPU time
BELRFS     SD and LSE           30                  406 sec
LoLiMoT                         79                  921 sec
V. CONCLUSION

In this paper a new brain emotional learning model called BELRFS is proposed and applied to predicting chaotic time series. BELRFS is a neuro-fuzzy model that mimics the structure of the limbic system and is especially intended for long-step-ahead prediction of chaotic time series. Previous studies showed that LoLiMoT is a powerful method for long-term prediction. The results indicate a very good performance of BELRFS in long-term prediction, with a lower architectural complexity and a lower computational complexity than LoLiMoT.

The BELRFS model can be classified in the group of data-driven predictor models, e.g. ANFIS, MLP, RBF and LLNF. However, it has a better ability for long-step-ahead prediction of chaotic time series than previous models in this group. In this paper, BELRFS was examined by predicting two chaotic time series, Lorenz and Ikeda. In future work, the model will be examined for the prediction of natural time series such as sunspot numbers and the auroral electrojet (AE) index. Moreover, BELRFS may be extended to a nonlinear identification method and a classification method.
REFERENCES

A. Gholipour, B. N. Araabi, and C. Lucas, “Predicting chaotic time series using neural and neurofuzzy models: a comparative study,” Neural Processing Letters, vol. 24, no. 3, pp. 217-239, 2006.
A. Gholipour, C. Lucas, B. N. Araabi, M. Mirmomeni, and M. Shafiee, “Extracting the main patterns of natural time series for long-term neurofuzzy prediction,” Neural Computing & Applications, vol. 16, no. 4, pp. 383-393, 2006.
C. Lucas, D. Shahmirzadi, and N. Sheikholeslami, “Introducing BELBIC: brain emotional learning based intelligent controller,” International Journal of Intelligent Automation and Soft Computing, vol. 10, no. 1, pp. 11-22, 2004.
J. Moren and C. Balkenius, “A computational model of emotional learning in the amygdala,” in J.-A. Meyer, A. Berthoz, D. Floreano, H. L. Roitblat, and S. W. Wilson (eds.), From Animals to Animats 6: Proceedings of the Sixth International Conference on Simulation of Adaptive Behavior, MIT Press, pp. 383-391, 2000.
J. Moren, Emotion and Learning: A Computational Model of the Amygdala, Ph.D. dissertation, Department of Philosophy, Lund University.
M. Parsapoor, Prediction the Price of Virtual Supply Chain Management with Using Emotional Methods, M.Sc. thesis, Dept. of Computer Eng., Science and Research Branch, IAU, Tehran, Iran, 2008.
M. Parsapoor, C. Lucas, and S. Setayeshi, “Reinforcement recurrent fuzzy rule based system based on brain emotional learning structure to predict the complexity dynamic system,” in Proc. Third IEEE International Conference on Digital Information Management, pp. 22-32, 2008.
C. Lucas, A. Abbaspour, A. Gholipour, B. Nadjar Araabi, and M. Fatourechi, “Enhancing the performance of neurofuzzy predictors by emotional learning algorithm,” Informatica (Slovenia), vol. 27, no. 2, pp. 165-174, 2003.
T. Babaie, R. Karimizandi, and C. Lucas, “Learning based brain emotional intelligence as a new aspect for development of an alarm system,” Soft Computing - A Fusion of Foundations, Methodologies and Applications, vol. 9, no. 9, pp. 857-873, 2008.
B. Best, The Anatomical Basis of Mind, available online.
M. Gazzaniga, R. B. Ivry, and G. R. Mangun, Cognitive Neuroscience: The Biology of the Mind, 3rd ed., W. W. Norton & Company, New York, 2009.
D. Reisberg, Cognition: Exploring the Science of the Mind, 4th ed., W. W. Norton & Company, 2009.
F. Dadgostar and A. Sarrafzadeh, “A formal model of emotional-response, inspired from human cognition and emotion systems,” Res. Lett. Inf. Math. Sci., vol. 9, pp. 89-97, 2006, available online.
O. Nelles, Nonlinear System Identification: From Classical Approaches to Neural Networks and Fuzzy Models, Springer-Verlag, 2001.
J.-S. R. Jang, C.-T. Sun, and E. Mizutani, Neuro-Fuzzy and Soft Computing: A Computational Approach to Learning and Machine Intelligence, Prentice Hall, pp. 342-344, 1997.