Neuro-Fuzzy Models, BELRFS and LoLiMoT, for
Prediction of Chaotic Time Series
Mahboobeh Parsapoor
School of Information Science, Computer and Electrical
Engineering
Halmstad University
Sweden
mahpar11@student.hh.se
Urban Bilstrup
School of Information Science, Computer and Electrical
Engineering
Halmstad University
Sweden
urban.bilstrup@hh.se
Abstract— This paper suggests a novel learning model for the prediction of chaotic time series: the brain emotional learning-based recurrent fuzzy system (BELRFS). The prediction model is inspired by the emotional learning system of the mammalian brain. BELRFS is applied to predict the Lorenz and Ikeda time series, and the results are compared with those of a prediction model based on local linear neuro-fuzzy models with the linear model tree algorithm (LoLiMoT).
Keywords— brain emotional learning; LoLiMoT; neuro-fuzzy model; prediction of chaotic time series
I. INTRODUCTION
Artificial intelligence (AI) algorithms have been proposed for the prediction of chaotic systems, and recent advances have significantly increased the prediction accuracy for chaotic dynamic systems. Typical applications are found in medicine, economics, fluid mechanics, and astrophysics [1].
Traditionally, neural networks and neuro-fuzzy models such as the multilayer perceptron network (MLP), the radial basis function network (RBF), the adaptive neuro-fuzzy inference system (ANFIS) and the locally linear neuro-fuzzy model (LLNF) have been employed for time series prediction [1], [2]. Although black-box models, neural networks and neuro-fuzzy models have shown good results in short-term prediction, most of these models cannot achieve accurate results for long-term prediction with low computational complexity. Thus, devising learning models that predict chaotic time series with low time complexity and high accuracy is still a challenging task for long-term prediction. This paper suggests a brain emotional learning-based recurrent fuzzy system (BELRFS) for predicting chaotic time series. The obtained results will show that the BELRFS model can achieve very good results for long-term prediction with fairly low computational complexity.
Section II reviews brain emotional learning-based models and provides an overview of the structure of BELRFS. Section III explains another neuro-fuzzy model, LoLiMoT, and describes its learning algorithm. In Section IV, two benchmark chaotic time series, Lorenz and Ikeda, are used to evaluate BELRFS and LoLiMoT. Finally, concluding remarks and further improvements to the model are given in Section V.
II. BRAIN EMOTIONAL LEARNING MODELS
The brain emotional learning-based intelligent controller (BELBIC) [3] can be considered the first practical implementation of brain emotional learning. BELBIC was developed on the basis of a computational model of the brain's emotional system derived by Moren and Balkenius [4], [5]. It has been successfully applied in a number of application areas: controlling heating and air conditioning, aerospace launch vehicles, intelligent washing machines, and micro heat exchangers [6]. Several brain emotional learning models (e.g., ELFIS, BEL and RRFBEL) have been developed for prediction applications [7]-[10].
A. Computational model of emotional learning
The computational model of brain emotional learning proposed by Moren and Balkenius mimics the connections between the regions of the brain that are involved in emotional learning, i.e., the limbic system of a typical mammalian brain. The limbic system consists of the thalamus, the sensory cortex, the amygdala, and the orbitofrontal cortex [11]-[13]. The architecture of the amygdala-orbitofrontal system and the connections between its parts are depicted in fig. 1 [4], [5]. There is a bidirectional connection between the orbitofrontal cortex and the amygdala to exchange the outputs of each subsystem. The functions of the parts of the amygdala-orbitofrontal system are explained as follows ([4], [5]):
1) Thalamus: The input of the model, S, enters this part. It calculates the maximum value of S using (1).
Figure 1. Graphical description of the amygdala-orbitofrontal system.
978-1-4673-1448-0/12/$31.00 ©2012 IEEE
Th = max(S)  (1)

2) Amygdala: The output of this part, which consists of several linear nodes, is calculated using (2), where V_i is the weight of the i-th node. The number of amygdala nodes is one more than the number of dimensions of the input.

A_i = S_i V_i  (2)

3) Orbitofrontal: The output of this part, which also consists of several linear nodes, is calculated using (3), where W_i is the weight of the i-th node.

O_i = S_i W_i  (3)

4) The output of the model is obtained using (4).

E = Σ_i A_i − Σ_i O_i  (4)

5) The weights of the amygdala and the orbitofrontal cortex are updated by (5) and (6), respectively.

ΔV_i = α S_i max(0, REW − Σ_j A_j)  (5)

ΔW_i = β S_i (Σ_j O_j − REW)  (6)

where α and β are the two learning rates and REW is the reward function [4], [5]. In the amygdala-orbitofrontal system, the two subsystems learn the input-output mapping. The simple structure and fast updating rules are the main advantages of this model. However, the reward function has no explicit definition, and the model has not shown reasonable results in prediction. Several types of learning models based on the amygdala-orbitofrontal system have been developed by defining different functions or connections [7], [8] and [10].
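The update rules (1)-(6) can be sketched in a few lines of code. The sketch below is illustrative only: the function name, vector shapes and default learning rates are assumptions, not taken from [4], [5].

```python
import numpy as np

def bel_step(S, V, W, REW, alpha=0.1, beta=0.1):
    """One learning step of the amygdala-orbitofrontal model, eqs. (1)-(6).

    S: stimulus vector; V, W: amygdala / orbitofrontal weights;
    REW: scalar reward. Returns the output E and the updated weights.
    (Illustrative sketch; names and rates are assumptions.)
    """
    th = np.max(S)                    # (1) thalamus: coarse value of the input
    S_a = np.append(S, th)            # amygdala has one node more than dim(S)
    A = S_a * V                       # (2) amygdala node outputs
    O = S * W                         # (3) orbitofrontal node outputs
    E = A.sum() - O.sum()             # (4) model output
    V = V + alpha * S_a * max(0.0, REW - A.sum())   # (5) amygdala update
    W = W + beta * S * (O.sum() - REW)              # (6) orbitofrontal update
    return E, V, W
```

Iterating `bel_step` over a sequence of stimuli and rewards lets the two subsystems learn the input-output mapping described above.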
B. Brain emotional learning-based recurrent fuzzy system (BELRFS)
The brain emotional learning-based recurrent fuzzy system (BELRFS) is an enhancement of previous brain emotional learning models. The structure of BELRFS is based on the internal representation [14] of the brain's emotional system. BELRFS mimics the connections between the main parts of the limbic system. It consists of four modules, named TH, CX, AMYG and ORBI. Fig. 2 depicts the structure of the model and the connections between its sub-modules. During the training phase, AMYG receives a recurrent signal, while during the test phase this signal is removed (see fig. 3). Consider an input vector i that enters BELRFS. The TH provides the coarse value, the maximum value of the input stimulus, and sends it to the AMYG. The CX receives i, which does not have any coarse values, and distributes it as s between the AMYG and the ORBI.
Figure 2. The connections between the BELRFS modules during the training phase.
Figure 3. The connections between the BELRFS modules during the test phase.
The function of each module is described as follows:
1) TH: It determines the maximum value of the input vector, i.
2) CX: This module provides the input for ORBI and AMYG.
3) AMYG: It consists of neuro-fuzzy systems that provide r_A and r, the overall output of the model, using (7) and (8).

r_A = f_1(s, th)  (7)

r = f_3(g(r(k−1)), r_A, r_O)  (8)

Here f_1 and f_3 are neuro-fuzzy functions that are calculated using (9), where w_i is the product of the membership functions, h_i is calculated as (10), and the number of fuzzy rules is equal to M.

f(x) = Σ_{i=1}^{M} w_i h_i / Σ_{i=1}^{M} w_i  (9)

h_i(x) = b_i + Σ_{j=1}^{p} a_ij x_j  (10)

In fact, r is the output of BELRFS for the input i. During the test phase, the function g is calculated using the weighted k-nearest neighbor method. The AMYG also calculates the expected punishment, P_A, which is sent to the ORBI.
4) ORBI: The output of this module is calculated using (11), where f_2 is the fuzzy function defined by (9).

r_O = f_2(s)  (11)
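A minimal sketch of the fuzzy inference in (9)-(10), assuming Gaussian membership functions (the paper does not specify the membership function family); all parameter names are illustrative.

```python
import numpy as np

def tsk_output(x, centers, sigmas, a, b):
    """Normalized weighted sum of local linear rules, eqs. (9)-(10).

    x: input vector (p,); centers, sigmas: (M, p) Gaussian membership
    parameters (assumed form); a: (M, p) linear coefficients; b: (M,) biases.
    """
    # w_i: product of the membership degrees over the p input dimensions
    w = np.prod(np.exp(-0.5 * ((x - centers) / sigmas) ** 2), axis=1)
    h = b + a @ x                      # (10) linear rule consequents h_i(x)
    return np.sum(w * h) / np.sum(w)   # (9) normalized fuzzy output
```

When all rule consequents agree, the normalization in (9) guarantees the output equals that common value, regardless of the membership degrees.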
The nonlinear learning parameters of BELRFS are updated using the steepest descent algorithm, while the least squares method is utilized for the linear learning parameters. The number of learning parameters in BELRFS is O(M(p+1)), where M is the number of fuzzy rules and p is the input dimension. BELRFS can be considered a type of neuro-fuzzy model with the ability to predict more accurately than previous neuro-fuzzy models. However, similar to other neuro-fuzzy methods, an increase in the input dimension increases the model complexity [15], [16].
III. LOCALLY LINEAR NEURO-FUZZY MODEL WITH MODEL TREE ALGORITHM
The architecture of the local linear model tree algorithm (LoLiMoT) is depicted in fig. 4 [15]. The model has a neuro-fuzzy structure with one hidden layer, which consists of fuzzy neurons, and one linear output layer. The fuzzy neurons divide the input space into small subspaces using locally linear models and validity functions, normalized Gaussians [1], [15]. The output of each local linear model is calculated using (12), where U = [u_1, u_2, u_3, u_4, ..., u_p] is the input vector of the model.

ŷ_i = w_i0 + w_i1 u_1 + ... + w_ip u_p  (12)
The output of the model is calculated as the weighted summation of the locally linear models [15] according to (13), where Φ_i(u), the validity function of each neuron, is calculated by (14), and μ_i(u) is the Gaussian function [15] calculated by (15).

ŷ = Σ_{i=1}^{M} ŷ_i Φ_i(u)  (13)

Φ_i(u) = μ_i(u) / Σ_{j=1}^{M} μ_j(u)  (14)

μ_i(u) = exp(−(1/2) Σ_{j=1}^{p} (u_j − c_ij)² / σ_ij²)  (15)
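Equations (12)-(15) can be sketched as follows; the array shapes and function names are assumptions made for illustration, not taken from [15].

```python
import numpy as np

def validity_functions(u, centers, sigmas):
    """Normalized Gaussian validity functions, eqs. (14)-(15)."""
    # mu_i(u) = exp(-1/2 * sum_j (u_j - c_ij)^2 / sigma_ij^2)       (15)
    mu = np.exp(-0.5 * np.sum(((u - centers) / sigmas) ** 2, axis=1))
    return mu / mu.sum()                                          # (14)

def lolimot_output(u, centers, sigmas, Wlin):
    """Weighted sum of local linear models, eqs. (12)-(13).

    Wlin has shape (M, p+1): one row [w_i0, w_i1, ..., w_ip] per neuron.
    """
    phi = validity_functions(u, centers, sigmas)
    y_local = Wlin[:, 0] + Wlin[:, 1:] @ u                        # (12)
    return np.sum(phi * y_local)                                  # (13)
```

Because the validity functions sum to one, the model output is a smooth interpolation between the local linear models.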
The local linear model tree algorithm (LoLiMoT) is an incremental heuristic algorithm that optimizes the learning parameters: the linear parameters and the nonlinear parameters that correspond to the validity functions. The algorithm consists of two loops: the outer loop updates the nonlinear parameters and the nested loop optimizes the linear parameters. The nested loop is based on the least squares method to optimize the M(p+1) linear learning parameters [15], where M is the number of fuzzy neurons in the hidden layer and p is the dimension of the input vector. The following steps explain LoLiMoT [15]:
Figure 4. The structure of LoLiMoT.
1) An initial model is selected and considered as the starting
point for the algorithm. Starting with M=1 means that the
initial model has one neuron in the hidden layer.
2) The output of the model, ŷ, is calculated by (16), where X is the regression matrix weighted by the validity functions and Ŵ, the vector of linear parameters (18), is calculated by the least squares method using (17).

ŷ = X Ŵ  (16)

Ŵ = (XᵀX)⁻¹ Xᵀ Y  (17)

Ŵ = [w_10, w_11, ..., w_1p, w_20, w_21, ..., w_M0, ..., w_Mp]ᵀ  (18)
3) The local cost function, formulated as (19), is calculated for each neuron. The worst local linear model (LLM), the one with the maximum value of the loss function, is selected and denoted by l.

I_j = Σ_{i=1}^{N} e(i)² Φ_j(u(i))  (19)
4) The loss function is calculated for all possible divisions of l. The best one, the partition with the minimum value of the loss function, is selected and denoted by k.
5) The validity function corresponding to k is added, and the number of locally linear models is incremented by one.
6) The algorithm stops if the termination condition is satisfied; otherwise it returns to step 2 and continues.
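Steps 2 and 3 above can be sketched as follows. This is an illustrative fragment, not the authors' implementation: `lstsq` solves the least-squares problem of (17) without forming the matrix inverse explicitly, and the function names are assumptions.

```python
import numpy as np

def fit_linear_params(X, Y):
    # Step 2, eq. (17): W_hat = (X^T X)^(-1) X^T Y. lstsq solves the same
    # least-squares problem, which is numerically safer than inverting.
    W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W_hat

def worst_llm(errors, Phi):
    # Step 3, eq. (19): I_j = sum_i e(i)^2 * Phi_j(u(i)); the neuron with
    # the largest validity-weighted local loss is the one to split next.
    I = (errors[:, None] ** 2 * Phi).sum(axis=0)
    return int(np.argmax(I))
```

The validity weighting in (19) ensures each neuron is judged only on the region of the input space where it is actually responsible.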
The main advantage of LoLiMoT is its low time complexity, since the computation grows linearly with the number of fuzzy neurons. However, the curse of dimensionality [15] is a significant issue for this algorithm.
IV. CHAOTIC TIME SERIES PREDICTION
As previously mentioned, time series prediction is one of the challenging applications for black-box models such as the multilayer perceptron (MLP), the radial basis function network (RBF) and the adaptive neuro-fuzzy inference system (ANFIS).
These models, especially neural networks, have shown reasonable results in short-term prediction [1], [2], [8], and [10]. However, for long-term prediction, there is a noticeable decrease in prediction accuracy. It has been shown that the local linear neuro-fuzzy model with the linear model tree algorithm (LoLiMoT) has the ability to achieve arbitrary accuracy for long-term prediction [1], [2]. In this section, long-term prediction of two chaotic time series, Ikeda and Lorenz, is conducted using BELRFS and LoLiMoT. This comparison aims to show the capability of BELRFS to accurately predict time series over the long term. For comparison of the results, the normalized mean square error (NMSE), formulated in (20), is used as the error measure.
NMSE = Σ_{i=1}^{N} (Y_i − Ŷ_i)² / Σ_{i=1}^{N} (Y_i − Ȳ)²  (20)

where Y, Ŷ and Ȳ are the observed output, the predicted value and the average of the observed values, respectively.
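The error measure (20) is straightforward to compute; a minimal sketch:

```python
import numpy as np

def nmse(y_true, y_pred):
    # Eq. (20): squared prediction error normalized by the variance of the
    # observed series; 0 is a perfect prediction, 1 matches a predictor
    # that always outputs the mean of the observations.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)
```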
A. Lorenz time series
The Lorenz time series, the first case study, is reconstructed from the Lorenz system (21) with the parameter values given in (22). The sampling interval used to reconstruct the time series is 0.01 seconds [1].

ẋ = a(y − x)
ẏ = bx − y − xz
ż = xy − cz  (21)

a = 10, b = 28, c = 8/3, T_s = 0.01 s  (22)
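A sketch of reconstructing the series from (21)-(22) by numerical integration; the integration scheme (classical Runge-Kutta) and the initial condition are assumptions, as the paper does not state them.

```python
import numpy as np

def lorenz_series(n=1200, dt=0.01, a=10.0, b=28.0, c=8.0 / 3.0,
                  s0=(1.0, 1.0, 1.0)):
    """Integrate the Lorenz system (21)-(22) with a 4th-order Runge-Kutta
    scheme at T_s = 0.01 s and return the sampled x component.
    The initial condition s0 is illustrative only."""
    def f(s):
        x, y, z = s
        return np.array([a * (y - x), b * x - y - x * z, x * y - c * z])
    s = np.array(s0, dtype=float)
    out = np.empty(n)
    for i in range(n):
        k1 = f(s); k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2); k4 = f(s + dt * k3)
        s = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        out[i] = s[0]
    return out
```

Windows of this sampled series, embedded with dimension three as described below, form the training and test sets.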
The embedding dimension is selected as three, and the data samples are taken from the 32nd to the 44th second. BELRFS and LoLiMoT are employed for long-term prediction, 10, 20 and 30 steps ahead of the Lorenz time series, using 500 training samples and 700 test samples. Studies that employed RBF and LLNF for 10-step-ahead prediction of Lorenz reported NMSEs of 0.4876 and 0.1682, respectively [1], [2]. Table I lists the NMSEs for ten-, twenty- and thirty-step-ahead prediction achieved by LoLiMoT and BELRFS. It can be seen that the NMSEs obtained with BELRFS are lower than those of LoLiMoT for multi-step-ahead prediction of Lorenz. The prediction errors of BELRFS and LoLiMoT for 10-step-ahead prediction of Lorenz are shown in fig. 5. As the figure shows, the prediction accuracy of BELRFS is higher than that of LoLiMoT.
TABLE I. THE NMSES OF BELRFS AND LOLIMOT FOR PREDICTING MULTI-STEP AHEAD OF LORENZ

Learning Model | 10 steps ahead | 20 steps ahead | 30 steps ahead
BELRFS         | 1.463e-4       | 0.1250         | 0.6023
LoLiMoT        | 0.0012         | 0.1509         | 0.7086
Figure 5. The prediction errors obtained by applying BELRFS and LoLiMoT for 10-step-ahead prediction of the Lorenz time series.
B. Ikeda time series
The second time series tested is Ikeda. It is reconstructed from the Ikeda map using (23) [2], with parameters a = 1, b = 0.6 and d = 0, where d is the magnitude of the noise and {d(k)} is a random variable drawn from a uniform distribution on [−1, 1].

x(k+1) = 1 + a(x(k) cos(t(k)) − y(k) sin(t(k))) + d(k)
y(k+1) = a(x(k) sin(t(k)) + y(k) cos(t(k))) + d(k)  (23)

For prediction of this time series, the training data set and the test data set contain 500 and 700 samples, respectively. Multi-step-ahead predictions of Ikeda are tested with BELRFS and LoLiMoT.
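A sketch of generating an Ikeda-map series in the spirit of (23). The phase term t(k) = 0.4 − 6/(1 + x² + y²) and the map coefficient mu = 0.9 are assumptions taken from the standard Ikeda map (chosen for a clearly chaotic regime), since the paper's parameter values were garbled in extraction; d = 0 matches the noise-free setting above.

```python
import numpy as np

def ikeda_series(n=1200, mu=0.9, d=0.0, seed=0):
    """Iterate an Ikeda map in the spirit of eq. (23) and return x(k).
    The phase t(k) = 0.4 - 6/(1 + x^2 + y^2) is the standard Ikeda form,
    assumed here; d scales uniform noise on [-1, 1] (d = 0: no noise)."""
    rng = np.random.default_rng(seed)
    x, y = 0.1, 0.1                       # illustrative initial condition
    out = np.empty(n)
    for k in range(n):
        t = 0.4 - 6.0 / (1.0 + x * x + y * y)
        dk = d * rng.uniform(-1.0, 1.0)
        # tuple assignment evaluates the right side with the old x, y
        x, y = (1.0 + mu * (x * np.cos(t) - y * np.sin(t)) + dk,
                mu * (x * np.sin(t) + y * np.cos(t)) + dk)
        out[k] = x
    return out
```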
Table II presents the NMSEs for predicting 20, 30 and 40 steps ahead of Ikeda; it can be seen that the NMSEs are lower for BELRFS than for LoLiMoT. Table III compares the characteristics of the two methods for 20-step-ahead prediction of Ikeda. The differences between the time complexities of the methods are noticeable: the CPU time of LoLiMoT for this prediction is about 900 seconds, while BELRFS achieves it in about 400 seconds. In addition, the number of neurons, which reflects the model complexity, differs between BELRFS and LoLiMoT. BELRFS accurately predicts the 20 steps ahead of Ikeda using a smaller number of neurons: it consists of 30 neurons, while LoLiMoT has 79 locally linear neurons. These results indicate that BELRFS can be considered a good predictor model for long-term prediction.
TABLE II. NMSES OF LOLIMOT AND BELRFS FOR PREDICTING MULTI-STEP AHEAD OF IKEDA

Learning Model | 20 steps ahead | 30 steps ahead | 40 steps ahead
BELRFS         | 1.435e-7       | 1.714e-7       | 3.819e-7
LoLiMoT        | 1.505e-5       | 1.421e-5       | 6.570e-7
TABLE III. COMPARISON BETWEEN THE CHARACTERISTICS OF BELRFS AND LOLIMOT FOR PREDICTING THE 20 STEPS AHEAD OF IKEDA

Model   | Number of Neurons | Time Complexity | Learning algorithm
BELRFS  | 30                | 406 sec         | SD and LSE
LoLiMoT | 79                | 921 sec         | LoLiMoT
V. CONCLUSION
In this paper a new brain emotional learning model called BELRFS was proposed and applied to predicting chaotic time series. BELRFS is a neuro-fuzzy model that mimics the structure of the limbic system and is especially intended for long-step-ahead prediction of chaotic time series. Previous studies [1], [2] showed that LoLiMoT is a powerful method for long-term prediction. The results indicate very good performance of BELRFS in long-term prediction, with lower architectural complexity and lower computational complexity than LoLiMoT.
The BELRFS model can be classified in the group of data-driven predictor models, e.g., ANFIS, MLP, RBF and LLNF. However, it has a better ability for long-step-ahead prediction of chaotic time series than previous models in this group. In this paper, BELRFS was examined by predicting two chaotic time series, Lorenz and Ikeda. For future work, the model will be examined for prediction of natural time series such as sunspot numbers and the auroral electrojet (AE) index. Moreover, BELRFS can hopefully be extended to a nonlinear identification method and a classification method.
REFERENCES
[1] A. Gholipour, B. N. Araabi, and C. Lucas, "Predicting chaotic time series using neural and neurofuzzy models: a comparative study," Neural Processing Letters, vol. 24, no. 3, pp. 217-239, 2006.
[2] A. Gholipour, C. Lucas, B. N. Araabi, M. Mirmomeni, and M. Shafiee, "Extracting the main patterns of natural time series for long-term neurofuzzy prediction," Neural Computing & Applications, vol. 16, issue 4, pp. 383-393, 2006.
[3] C. Lucas, D. Shahmirzadi, and N. Sheikholeslami, "Introducing BELBIC: brain emotional learning based intelligent controller," International Journal of Intelligent Automation and Soft Computing, vol. 10, no. 1, pp. 11-22, 2004.
[4] J. Moren and C. Balkenius, "A computational model of emotional learning in the amygdala," Sixth International Conference on Simulation of Adaptive Behavior, in J.-A. Meyer, A. Berthoz, D. Floreano, H. L. Roitblat, and S. W. Wilson (eds), From Animals to Animats 6, MIT Press, pp. 383-391, 2000.
[5] J. Moren, Emotion and Learning: A Computational Model of the Amygdala, Ph.D. dissertation, Department of Philosophy, Lund University, 2002.
[6] http://en.wikipedia.org/wiki/BELBIC
[7] M. Parsapoor, Prediction of the Price of Virtual Supply Chain Management Using Emotional Methods, M.Sc. thesis, Dept. Computer Eng., Science and Research Branch, IAU, Tehran, Iran, 2008.
[8] M. Parsapoor, C. Lucas, and S. Setayeshi, "Reinforcement recurrent fuzzy rule based system based on brain emotional learning structure to predict the complexity dynamic system," in Proc. Third IEEE International Joint Conference on Digital Information Management, pp. 22-32, 2008.
[9] C. Lucas, A. Abbaspour, A. Gholipour, B. Nadjar Araabi, and M. Fatourechi, "Enhancing the performance of neurofuzzy predictors by emotional learning algorithm," Informatica (Slovenia), vol. 27, no. 2, pp. 165-174, 2003.
[10] T. Babaie, R. Karimizandi, and C. Lucas, "Learning based brain emotional intelligence as a new aspect for development of an alarm system," Soft Computing - A Fusion of Foundations, Methodologies and Applications, vol. 9, issue 9, pp. 857-873, 2008.
[11] B. Best, The Anatomical Basis of Mind, available at: http://www.benbest.com/science/anatmind/anatmind.html
[12] M. Gazzaniga, R. B. Ivry, and G. R. Mangun, Cognitive Neuroscience: The Biology of the Mind, 3rd ed., W. W. Norton & Company, New York, 2009.
[13] D. Reisberg, Cognition: Exploring the Science of the Mind, 4th ed., W. W. Norton & Company, 2009.
[14] F. Dadgostar and A. Sarrafzadeh, "A formal model of emotional response, inspired from human cognition and emotion systems," Res. Lett. Inf. Math. Sci., vol. 9, pp. 89-97, 2006. Available at: http://iims.massey.ac.nz/research/letters
[15] O. Nelles, Nonlinear System Identification: From Classical Approaches to Neural Networks and Fuzzy Models, Springer-Verlag, 2001.
[16] R. Jang, C. Sun, and E. Mizutani, Neuro-Fuzzy and Soft Computing: A Computational Approach to Learning and Machine Intelligence, Prentice Hall, pp. 342-344, 1997.