RESEARCH ARTICLE
Novel tracking function of moving target using chaotic dynamics
in a recurrent neural network model
Yongtao Li · Shigetoshi Nara
Graduate School of Natural Science and Technology, Okayama University, 3-1-1 Tsushima-naka, Okayama 700-8530, Japan
e-mail: li@chaos.elec.okayama-u.ac.jp
Received: 15 May 2007 / Accepted: 14 September 2007 / Published online: 9 October 2007
© Springer Science+Business Media B.V. 2007
Abstract Chaotic dynamics introduced in a recurrent neural network model is applied to controlling an object so that it tracks a moving target in two-dimensional space, which is set as an ill-posed problem. The motion increments of the object are determined by a group of motion functions calculated in real time from the firing states of the neurons in the network. Several cyclic memory attractors that correspond to simple motions of the object in two-dimensional space are embedded. Chaotic dynamics introduced in the network causes correspondingly complex motions of the object in two-dimensional space. Adaptive real-time switching of a control parameter results in constrained chaos (chaotic itinerancy) in the state space of the network and enables the object to track a target moving along a certain trajectory successfully. The tracking performance is evaluated by calculating the success rate over 100 trials for each of nine kinds of trajectories along which the target moves. Computer experiments show that chaotic dynamics is useful for tracking a moving target. To understand the relation between these cases and chaotic dynamics, the dynamical structure of the chaotic dynamics is investigated from a dynamical viewpoint.
Keywords Chaotic dynamics · Tracking · Moving target · Neural network
Introduction
With the rapid development of science and technology, biological systems have attracted great attention because of their excellent capabilities not only in information processing but also in well-regulated functioning and control, which work quite adaptively in various environments. Despite many attempts to understand the mechanisms of biological systems, our understanding of them is still poor.
In biological systems, well-regulated functioning and control originate from strongly nonlinear interactions between local subsystems and the total system. It is therefore very difficult to understand and describe these systems using conventional methodologies based on reductionism, in which a system is decomposed into parts or elements. Conventional reductionism more or less runs into two difficulties due to the enormous complexity originating from dynamics in systems with large but finite degrees of freedom. One is 'combinatorial explosion' and the other is 'divergence of algorithmic complexity'. These difficulties have not yet been solved in spite of many efforts. On the other hand, a novel idea based on a functional viewpoint was introduced to understand the mechanisms. It is a new approach called 'the methodology of complex dynamics', which has been developed in various fields of science and engineering over the past several decades, associated with the remarkable development of computers and simulation methods. In particular, chaotic dynamics observed in biological systems, including brains, has attracted great interest (Babloyantz and Destexhe 1986; Skarda and Freeman 1987). It is considered that chaotic dynamics plays important roles in the complex functioning and control of biological systems including brains. From this viewpoint, many dynamical models have been constructed for
approaching these mechanisms by means of large-scale simulations or heuristic methods. Artificial neural networks in which chaotic dynamics can be introduced have been attracting great interest.
More than a decade ago, chaotic itinerancy was observed in neural networks and was proposed as a universal dynamical concept in high-dimensional dynamical systems. Artificial neural networks exhibiting chaotic itinerancy have been studied with great interest (Aihara et al. 1990; Tsuda 1991, 2001; Kaneko and Tsuda 2003; Fujii et al. 1996). In one of those works, by Nara and Davis, chaotic dynamics was introduced in a recurrent neural network model (RNNM) consisting of binary neurons and, to investigate the functional aspects of chaos, was applied by numerical methods to solving, for instance, a memory search task set in an ill-posed context (Nara and Davis 1992, 1997; Nara et al. 1993, 1995; Kuroiwa et al. 1999; Suemitsu and Nara 2003). In those papers, it was proposed that chaotic itinerancy could be potentially useful dynamics for solving complex problems, such as ill-posed problems. From this viewpoint, the auditory behaviour of the cricket is a typical ill-posed problem in biological systems. A female cricket can track toward the position of a male, guided by the male's calling song, in dark fields containing a large number of obstacles (Huber and Thorson 1985). This behaviour includes two ill-posed properties. One is that darkness and noisy environments prevent the female from accurately deciding the direction of the male's position, and the other is that a large number of big obstacles in the field force the female to solve a two-dimensional maze, itself an ill-posed problem. Therefore, in order to investigate the brain from functional aspects, we try to construct a model that approaches insect behaviours solving ill-posed problems. As one functional example, chaotic dynamics introduced in a recurrent neural network model was applied to solving a two-dimensional maze, which is set as an ill-posed problem (Suemitsu and Nara 2004). A simple coding method translating the neural states into motion increments and a simple control algorithm adaptively switching a system parameter to produce chaotic itinerant behaviours were proposed. The results show that chaotic itinerant behaviours give better performance in solving a two-dimensional maze than a random walk.
In order to further investigate functional aspects of chaotic dynamics, it is applied here to tracking a moving target, which is set as another ill-posed problem. Generally speaking, when an object tracks a target moving along a certain trajectory in two-dimensional space, the task is ill-posed because there are many possible tracking results with uncertainty. In conventional methods, the object is set up to obtain information about the target that is as precise as possible, so as to capture the moving target successfully. In our study, however, the object obtains only rough information about the target and still successfully captures the moving target using chaotic dynamics in the RNNM.
In the case of tracking a moving target, the first problem is to realize two-dimensional motion control of the object. In our model, we assume that the object moves in discrete time steps. The firing state of the neural network is transformed into two-dimensional motion increments by the coding of motion functions, which will be illustrated in a later section. In addition, several limit cycle attractors, which correspond to prototypical simple motions of the object in two-dimensional space, are embedded in the neural network. At a certain time, if the firing pattern converges into a prototypical attractor, the object moves monotonically in one of several directions in two-dimensional space. If chaotic dynamics is introduced into the network, the firing pattern cannot converge into a prototypical attractor, that is, the attractors fall into ruin. At the same time, the corresponding motion of the object becomes chaotic in two-dimensional space. By adaptive switching of a certain system parameter, chaotic itinerancy generated in the neural network results in complex two-dimensional motions of the object in various environments. Considering this point, we have proposed a simple control algorithm for tracking a moving target, and quite good tracking performance has been obtained, as will be stated in a later section.
This paper is organized as follows. In the next section we describe the network model and the chaotic dynamics in it. The control algorithm for tracking a moving target is illustrated in Sect. 'Algorithm of tracking a moving target'. We discuss the results of computer simulations and evaluate the performance of tracking a moving target in Sect. 'Experimental results'.
Chaotic dynamics in a recurrent neural network model
Our study works with a fully interconnected recurrent neural network consisting of N neurons, which is shown in Fig. 1. Its updating rule is defined by

$$S_i(t+1) = \mathrm{sgn}\Bigl(\sum_{j \in G_i(r)} W_{ij}\, S_j(t)\Bigr), \qquad \mathrm{sgn}(u) = \begin{cases} +1, & u \ge 0 \\ -1, & u < 0 \end{cases} \qquad (1)$$
where S_i(t) = ±1 (i = 1 ~ N) represents the firing state of the neuron specified by index i at time t. W_ij is an asymmetrical connection weight (synaptic weight) from neuron S_j to neuron S_i, where W_ii is taken to be 0. G_i(r) denotes a connectivity configuration set of connectivity r (0 < r < N), that is, the fan-in number for neuron S_i. At a certain time t, the state of the neurons in the network can be represented as an N-dimensional state vector S(t), called the state pattern. The time development of the state pattern S(t) depends on the connection weight matrix W_ij and the connectivity r. Therefore, in the case of full connectivity r = N − 1, if W_ij is appropriately determined, arbitrarily chosen state patterns ξ can become multiple stationary states in the development of S(t), which is equivalent to storing memory states in the functional context. In our study, W_ij is determined by a kind of orthogonalized learning method and taken as follows:
$$W_{ij} = \sum_{\lambda=1}^{L} \sum_{k=1}^{K} \bigl(\xi_\lambda^{k+1}\bigr)_i \bigl(\xi_\lambda^{k\dagger}\bigr)_j \qquad (2)$$
where {ξ_λ^k | k = 1 ... K, λ = 1 ... L} is an attractor pattern set, K is the number of memory patterns included in a cycle and L is the number of memory cycles. ξ_λ^k† is the conjugate vector of ξ_λ^k, which satisfies ξ_λ^k† · ξ_λ'^k' = δ_λλ' δ_kk', where δ is Kronecker's delta. This method was confirmed to be effective in avoiding spurious attractors that affect the L attractors with K-step maps embedded in the network when the connectivity is r = N (Nara and Davis 1992, 1997; Nara et al. 1993, 1995; Nara 2003; Suemitsu and Nara 2003).
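To make the update rule of Eq. (1) and the embedding of Eq. (2) concrete, the following is a minimal numerical sketch in Python. Everything here is an illustration under stated assumptions: the pseudo-inverse is used to obtain the conjugate vectors, the patterns are random stand-ins rather than the actual prototype patterns introduced later, and reading G_i(r) as a fixed random fan-in subset per neuron is our interpretation, not something the paper specifies.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, K = 400, 4, 6                              # network size, number of cycles, cycle length

# Random stand-in +/-1 patterns; xi[lam, k] plays the role of xi_lambda^k.
xi = rng.choice([-1, 1], size=(L, K, N))

# Orthogonalized (pseudo-inverse) learning, Eq. (2): map xi^k -> xi^{k+1} within each cycle.
X = xi.reshape(L * K, N).T                       # columns are xi_lambda^k
Y = np.roll(xi, -1, axis=1).reshape(L * K, N).T  # columns are xi_lambda^{k+1}
W = Y @ np.linalg.pinv(X)                        # rows of pinv(X) act as the conjugate vectors
np.fill_diagonal(W, 0.0)                         # W_ii = 0, as stated in the text

def make_fanin(r):
    """Fixed fan-in sets G_i(r): r randomly chosen presynaptic neurons j != i (an assumption)."""
    return [rng.choice(np.delete(np.arange(N), i), size=r, replace=False) for i in range(N)]

def update(S, fanin):
    """One synchronous step of Eq. (1), with sgn(0) = +1."""
    u = np.array([W[i, fanin[i]] @ S[fanin[i]] for i in range(N)])
    return np.where(u >= 0, 1, -1)
```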
In the case of full connectivity r = N − 1, as time evolves, the state pattern S(t) converges into one of the cyclic memory patterns. Therefore, the network can function as a conventional associative memory. If the state pattern S(t) is one of the memory patterns, ξ_λ^k, then the next output S(t + 1) will be the next memory pattern of the cycle, ξ_λ^{k+1}. Even if the state pattern S(t) is only near one of the memory patterns ξ_λ^k, the output sequence S(t + kK) (k = 1, 2, 3, ...) will converge to the memory pattern ξ_λ^k. In other words, for each memory pattern there is a set of state patterns, called the memory basin B_λ^k: if S(t) is in the memory basin B_λ^k, then the output sequence S(t + kK) (k = 1, 2, 3, ...) converges to the memory pattern ξ_λ^k.
It is quite difficult to estimate the basin volume accurately because one must check the final state (lim_{k→∞} S(kK)) of all initial state patterns (the total number is 2^N), which requires an enormous amount of time. Therefore, a statistical method is applied to estimate the approximate basin volume. First, random initial state patterns are generated in a sufficiently large number so that they cover the entire N-dimensional state space uniformly. As the state updating develops, it is determined whether the final state lim_{k→∞} S(kK) of each initial pattern converges into a certain memory attractor. The ratio between the number of initial state patterns that converge into a certain memory attractor and the total number of initial state patterns is then taken. The rate of convergence to each memory attractor is proportional to its basin volume and is regarded as the approximate basin volume of that memory attractor. An actual example of the basin volumes is shown in Fig. 2. The basin volumes show that almost all initial state patterns converge into one of the memory attractors, that is, the memory attractors dominate the whole state space.
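As a rough illustration of this statistical estimate, the sketch below (building on the sketch above; the sample size, relaxation length and convergence test are arbitrary choices of ours) relaxes many random initial patterns and counts the fraction that end up exactly on one of the embedded cycles. The paper's finer classification into 26 categories (per-pattern basins, six-step cycles that are not stored attractors, and non-converged states) is not reproduced here.

```python
def basin_volume(n_samples=2000, r=N - 1, n_updates=120):
    """Monte-Carlo estimate of basin-volume fractions (illustrative parameters)."""
    fanin = make_fanin(r)
    counts = np.zeros(L + 1)                          # one slot per cycle + "did not converge"
    for _ in range(n_samples):
        S = rng.choice([-1, 1], size=N)
        for _ in range(n_updates):                    # relax the state
            S = update(S, fanin)
        hits = [lam for lam in range(L)
                if any(np.array_equal(S, xi[lam, k]) for k in range(K))]
        counts[hits[0] if hits else L] += 1
    return counts / n_samples                         # approximate basin-volume fractions
```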
Next, we continue to decrease the connectivity r. When r is large enough, r ≈ N, the memory attractors are stable and the network can still function as a conventional associative memory. As r becomes smaller and smaller, the basin volumes of all the memory attractors also become smaller and smaller; in other words, more and more state patterns gradually fail to converge into a certain memory pattern even though the network is updated for a long time, that is, each basin shrinks away and the attractor becomes unstable. Consequently, if the connectivity r becomes quite small, state patterns do not converge into any memory pattern even if the network is updated for a long time.
Fig. 1 Fully interconnected recurrent neural network model
Fig. 2 Basin volume fraction (r = N − 1 = 399): The horizontal axis represents the memory pattern number (1–24). Basin number 25 shows the volume fraction corresponding to the initial patterns that converged into cyclic output states with a period of six steps but not into any one of the memory attractors. Basin number 26 shows the volume fraction corresponding to the initial patterns that did not converge, that is, chaotically itinerant ones. The vertical axis represents the ratio of each sample to the total number of samples. Alternating hatching and non-hatching are used to show different cyclic attractors
Since chaotic dynamics in the network depends on the system parameter, the connectivity r, we have calculated a bifurcation diagram of the overlap in order to analyze the destabilizing process in our model, where the overlap means a one-dimensional projection of the state pattern S(t) onto a certain reference pattern. The overlap m(t) is defined by

$$m(t) = \frac{1}{N}\, S(0) \cdot S(t) \qquad (3)$$

where S(0) is an initial pattern (reference pattern) and S(t) is the state pattern at time step t. Because m(t) is a normalized inner product, −1 ≤ m(t) ≤ 1. For each connectivity from 1 to N − 1, we calculated the corresponding overlap m(t) as the state pattern S(t) evolves over a long time. Figure 3 shows the overlap m(t) as a function of connectivity. For sufficiently large connectivity r, the state pattern S(t) at every K-th time step is the same as S(0). With decreasing connectivity r, the state pattern S(t) at every K-th time step gradually becomes different from S(0), that is, the cyclic memory attractors become unstable. Finally, non-periodic dynamics occurs, that is, the cyclic memory attractors are ruined.

Fig. 3 Bifurcation diagram with respect to connectivity r: The horizontal axis represents connectivity r (0–399). The vertical axis represents the long-time behaviour of the overlap m(t) at K-step mappings
In our previous papers, we confirmed that the non-periodic dynamics in the network is chaotic wandering. In order to investigate the dynamical structure, we calculated basin visiting measures, and the results suggest that the trajectory can pass through the whole N-dimensional state space, that is, the cyclic memory attractors are ruined by a quite small connectivity (Nara and Davis 1992, 1997; Nara et al. 1993, 1995; Nara 2003; Suemitsu and Nara 2003).
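The bifurcation diagram of Fig. 3 can be reproduced in spirit with a sketch like the following (again building on the earlier sketches; the transient length and number of samples are our own illustrative choices): for each connectivity r the overlap of Eq. (3) is recorded once per K-step mapping after a transient, with a stored pattern as the reference S(0).

```python
def overlap_bifurcation(transient=50, samples=30):
    """Overlap m(t) = S(0).S(t)/N at K-step mappings, for each connectivity r."""
    points = []
    for r in range(1, N):
        fanin = make_fanin(r)
        S0 = xi[0, 0].copy()                 # reference pattern S(0): one stored pattern
        S = S0.copy()
        for _ in range(transient * K):       # discard the transient
            S = update(S, fanin)
        for _ in range(samples):
            for _ in range(K):               # observe once per cycle period
                S = update(S, fanin)
            points.append((r, S0 @ S / N))   # -1 <= m(t) <= +1
    return points
```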
Motion control and memory patterns
Biological data show that the number of neurons in the brain varies dramatically from species to species and that the human brain has about 100 billion (10^11) neurons, whereas a human has only a little over 600 muscles that produce force and cause motion. These motions are controlled by the nervous system; that is, the motions of relatively few muscles are controlled by the activities of an enormous number of neurons. In the same spirit, the neural network consisting of N neurons is used here to realize two-dimensional motion control of an object.
We confirmed that chaotic dynamics introduced in the network does not depend sensitively on the number of neurons (Nara 2003). However, if N is too small, chaotic dynamics cannot occur, whereas if N is too large, it results in excessive computing time. Therefore, the number of neurons is N = 400 in our actual computer simulations. At a certain time, the state pattern in the network is represented by a 400-dimensional state vector, while the motion in two-dimensional space is only a two-dimensional vector. Suppose that the object moves from the position (p_x(t), p_y(t)) to (p_x(t+1), p_y(t+1)) with a set of motion increments (Δf_x(t), Δf_y(t)). The state pattern S(t) at time t is a 400-dimensional vector, so we must transform it into two-dimensional motion increments by coding. The coding is implemented by replacing the motion increments with a group of motion functions (f_x(S(t)), f_y(S(t))). In two-dimensional space, the actual motion of the object is given by
$$p_x(t+1) = p_x(t) + f_x(S(t)) \qquad (4)$$
$$p_y(t+1) = p_y(t) + f_y(S(t)) \qquad (5)$$

where f_x(S(t)) and f_y(S(t)) are the x-axis and y-axis increments respectively; they are calculated from the firing states of the neural network model and defined by

$$f_x(S(t)) = \frac{4}{N}\, A \cdot C, \qquad f_y(S(t)) = \frac{4}{N}\, B \cdot D \qquad (6)$$

where A, B, C, D are four independent N/4-dimensional sub-space vectors of the state pattern S(t). Therefore, after the inner product between two independent sub-space vectors is normalized by 4/N, the motion functions range from −1 to +1, that is,

$$-1 \le f_x(S(t)) \le +1 \qquad (7)$$
$$-1 \le f_y(S(t)) \le +1 \qquad (8)$$

Referring to Eqs. (4) and (5) and the definition of the motion functions, in our actual simulations two-dimensional space is digitized with a resolution of 8/N = 0.02 due to the binary neuron states ±1 and N = 400.
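A literal transcription of this coding into Python might look as follows. This is only a sketch: how the four sub-vectors A, B, C, D are cut out of S(t) is not specified in the paper, so the equal consecutive quarters used here are an assumption, and the function names are ours.

```python
def motion_increments(S):
    """Motion functions of Eq. (6): normalized inner products of sub-space vectors."""
    A, B, C, D = np.split(S, 4)          # four N/4-dimensional sub-space vectors (assumed split)
    fx = 4.0 / N * (A @ C)               # f_x(S(t)) in [-1, +1], resolution 8/N = 0.02
    fy = 4.0 / N * (B @ D)               # f_y(S(t)) in [-1, +1]
    return fx, fy

def move(p, S):
    """Position update of Eqs. (4) and (5)."""
    fx, fy = motion_increments(S)
    return (p[0] + fx, p[1] + fy)
```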
Next, let us consider the construction of memory attractors corresponding to prototypical simple motions. It is reasonable to regard two-dimensional motion as composed of several prototypical simple motions. We take four types of
motion, in which the object moves toward (+1, +1), (−1, +1), (−1, −1) and (+1, −1) in two-dimensional space, as the prototypical simple motions. Accordingly, four groups of attractor patterns, which correspond to the prototypical simple motions of the object in two-dimensional space, are embedded in the neural network by means of the associative-memory embedding introduced in the previous section. Each group of attractor patterns includes six patterns corresponding to one prototypical simple motion; each group is a cyclic memory, that is, a limit cycle attractor in the 400-dimensional state space. We take ξ_λ^k (λ = 1, 2, 3, 4 and k = 1, 2, ..., 6) as the k-th attractor pattern in the λ-th group (see Fig. 4). Therefore, in our actual simulation, L = 4 and K = 6. We directly employed K = 6 because it is an optimized choice, made after one of the authors and his collaborators had done a number of simulations for various K (3, 5, 6, 10, for instance). All of those results show that, if K is too small, it is difficult to avoid spurious attractors; on the other hand, a quite large K does not give stronger attraction either. The corresponding relations between the attractor patterns and the prototypical simple motions are as follows.
$$\begin{aligned}
(f_x(\xi_1^k),\ f_y(\xi_1^k)) &= (+1,\ +1) \\
(f_x(\xi_2^k),\ f_y(\xi_2^k)) &= (-1,\ +1) \\
(f_x(\xi_3^k),\ f_y(\xi_3^k)) &= (-1,\ -1) \\
(f_x(\xi_4^k),\ f_y(\xi_4^k)) &= (+1,\ -1)
\end{aligned}$$
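One simple way to construct cycles satisfying these relations exactly (our own illustration, not necessarily the authors' construction) is to choose the sub-vectors of each pattern so that A = s_x C and B = s_y D; every pattern of that cycle then evaluates to the target increments (s_x, s_y). Replacing the random stand-in patterns of the first sketch:

```python
targets = [(+1, +1), (-1, +1), (-1, -1), (+1, -1)]    # prototypical simple motions

xi = np.empty((L, K, N), dtype=int)
for lam, (sx, sy) in enumerate(targets):
    for k in range(K):
        C = rng.choice([-1, 1], size=N // 4)
        D = rng.choice([-1, 1], size=N // 4)
        xi[lam, k] = np.concatenate([sx * C, sy * D, C, D])   # blocks A, B, C, D
# Re-running the learning step of the first sketch with these patterns embeds four
# limit cycles whose motion functions are exactly (+1,+1), (-1,+1), (-1,-1), (+1,-1).
```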
When the connectivity r is sufficiently large, a random initial pattern converges into one of the four limit cycle attractors as time evolves. The corresponding motion of the object in two-dimensional space becomes monotonic; a simulation example is shown in Fig. 5. On the other hand, when the connectivity r is quite small, chaotic dynamics is observed in the network as time develops. At the same time, the corresponding motion of the object is chaotic, and Fig. 6 shows a simulation example of chaotic motion in two-dimensional space. Therefore, as the network evolves, monotonic motion and chaotic motion can be switched between by switching the connectivity r.
Algorithm of tracking a moving target
Now we want to discuss how to realize motion control so as to track a moving target. In our study, we suppose that an object is tracking a target that is moving along a certain trajectory in two-dimensional space, and that the object can obtain only rough directional information D_1(t) about the moving target. At a certain time t, the present position of the object is assumed to be the point (p_x(t), p_y(t)). Taking this point as the origin, two-dimensional space can be divided into four quadrants. If the target is moving in the first quadrant, D_1(t) = 1; in general, if the target is in the n-th quadrant, D_1(t) = n (n = 1, 2, 3, 4). D_1(t) is called the global target direction, since it is only rough directional information.
Fig. 4 Memory attractor patterns: Patterns 1–24 comprise L = 4 groups of cyclic memories, each consisting of K = 6 patterns. Each cyclic memory corresponds to a prototypical simple motion

Fig. 5 An example of monotonic motion: When the associative network state (r = 399) occurs in the state space, the object performs the monotonic motion (+1, +1) after some updating steps, starting from the point (0, 0) in two-dimensional space

Fig. 6 An example of chaotic motion: When a chaotic network state (r = 40) occurs in the network, the object correspondingly moves chaotically from the start point (0, 0) in two-dimensional space
Next, we also suppose that the object knows another piece of directional information, D_2(t), namely toward which quadrant the object itself has moved from time t − 1 to t, that is, in the previous step. The direction D_2(t) is called the global motion direction and is defined as

$$D_2(t) = \begin{cases} 1 & (c_x(t) = +1 \ \text{and}\ c_y(t) = +1) \\ 2 & (c_x(t) = -1 \ \text{and}\ c_y(t) = +1) \\ 3 & (c_x(t) = -1 \ \text{and}\ c_y(t) = -1) \\ 4 & (c_x(t) = +1 \ \text{and}\ c_y(t) = -1) \end{cases} \qquad (9)$$

where c_x(t) and c_y(t) are given by

$$c_x(t) = \frac{p_x(t) - p_x(t-1)}{|p_x(t) - p_x(t-1)|} \qquad (10)$$

$$c_y(t) = \frac{p_y(t) - p_y(t-1)}{|p_y(t) - p_y(t-1)|} \qquad (11)$$
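In code, the two direction signals reduce to quadrant tests. A minimal sketch follows, with our own convention for the boundary cases where a displacement component is exactly zero (Eqs. (9)–(11) leave these undefined); the function names are illustrative.

```python
def quadrant(dx, dy):
    """Quadrant number 1..4 of a displacement (boundary cases resolved arbitrarily)."""
    if dx >= 0 and dy >= 0:
        return 1
    if dx < 0 and dy >= 0:
        return 2
    if dx < 0 and dy < 0:
        return 3
    return 4

def D1(p_obj, p_target):
    """Global target direction: quadrant of the target as seen from the object."""
    return quadrant(p_target[0] - p_obj[0], p_target[1] - p_obj[1])

def D2(p_now, p_prev):
    """Global motion direction: quadrant the object moved toward in the last step (Eqs. (9)-(11))."""
    return quadrant(p_now[0] - p_prev[0], p_now[1] - p_prev[1])
```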
The global target direction D_1(t) and the global motion direction D_2(t) are thus time-dependent variables. If the network receives feedback signals from these two directions in real time, the connectivity r also becomes a time-dependent variable r(t), determined by the global target direction D_1(t) and the global motion direction D_2(t). A simple control algorithm for tracking a moving target is therefore proposed and shown in Fig. 7, where R_L is a sufficiently large connectivity and R_S is a quite small connectivity that leads to chaotic dynamics in the neural network. Adaptive switching of the connectivity is the core idea of the algorithm. If the global motion direction and the global target direction coincide, that is, D_2(t) = D_1(t), the network is updated with the sufficiently large connectivity r(t) = R_L; otherwise, if they do not coincide, the network is updated with the quite small connectivity r(t) = R_S. Once the synaptic connectivity r(t) has been determined by comparing the two directions D_1(t − 1) and D_2(t − 1), the motion increments of the object are calculated from the state pattern of the network updated with r(t). The new motion yields the next D_1(t) and D_2(t), which produce the next synaptic connectivity r(t + 1). By repeating this process, the synaptic connectivity r(t) is adaptively switched between R_L and R_S, and the object alternately performs monotonic motion and chaotic motion in two-dimensional space.
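Putting the previous sketches together, the whole control loop of Fig. 7 fits in a few lines. The values of R_L, R_S, the capture radius and the 600-step limit below are illustrative (the paper scans the chaotic connectivity and the target velocity in the next section), and the very first step, for which no previous position exists, is handled arbitrarily here.

```python
R_L = N - 1                                   # attractor regime (associative memory)

def track(target_path, R_S=30, max_steps=600, capture=0.5):
    """Adaptive switching of connectivity between R_L and R_S (algorithm of Fig. 7)."""
    fanins = {R_L: make_fanin(R_L), R_S: make_fanin(R_S)}
    S = rng.choice([-1, 1], size=N)
    p_prev = p = (0.0, 0.0)                   # object starts at the origin
    for t in range(1, max_steps + 1):
        p_target = target_path(t)
        r = R_L if D1(p, p_target) == D2(p, p_prev) else R_S
        S = update(S, fanins[r])
        p_prev, p = p, move(p, S)
        if np.hypot(p[0] - p_target[0], p[1] - p_target[1]) < capture:
            return True, t                    # target captured
    return False, max_steps
```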
In closing this section, one point we must mention is that we use an engineering approach to switch the parameter r in the computer simulation experiments, because we started from a heuristic approach using a simple model to apply chaotic dynamics to complex problems with ill-posed properties. As future work, however, we will develop it to investigate the biological mechanisms of advanced functions and control in real biological systems.
Experimental results
Generally speaking, it is difficult to give a mathematical proof that our method always produces correct solutions when tracking an arbitrarily moving target. Necessarily, we must rely on computer experiments and show typical properties connected to the universal effectiveness of chaos in biological systems. Therefore, as a starting point, some simple trajectories along which the target moves should be set; more complex orbits of the moving target will be investigated in the future. In order to simplify our investigation, we have taken nine kinds of trajectories, consisting of one circular trajectory and eight linear trajectories, shown in Fig. 8.
Suppose that the initial position of the object is the origin (0, 0) of two-dimensional space. The distance d between the initial position of the object and that of the target is a constant value. Therefore, at the beginning of tracking, the object is at the centre of the circular trajectory, and each of the eight linear trajectories is tangential to the circular trajectory at a certain angle α, where the angle is measured from the x axis. The tangential angles are α = nπ/4 (n = 1, 2, ..., 8), so we number the eight linear trajectories as LT_n.
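For completeness, the target trajectories of Fig. 8 can be generated as below. This is a sketch with an arbitrary radius d; the sense in which the target runs along the circle and along each tangent line is our choice, since the figure fixes it only graphically.

```python
def circular_path(d=10.0, SL=0.2):
    """Target moving on a circle of radius d around the object's start, step length SL."""
    return lambda t: (d * np.cos(SL * t / d), d * np.sin(SL * t / d))

def linear_path(n, d=10.0, SL=0.2):
    """Target LT_n moving along the tangent to that circle at angle alpha = n*pi/4."""
    a = n * np.pi / 4
    x0, y0 = d * np.cos(a), d * np.sin(a)     # tangent point on the circle
    dx, dy = -np.sin(a), np.cos(a)            # tangential direction (one of the two choices)
    return lambda t: (x0 + SL * t * dx, y0 + SL * t * dy)
```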
Fig. 7 Control algorithm for tracking a moving target: By judging whether the global target direction D_1(t) coincides with the global motion direction D_2(t) or not, adaptive switching of the connectivity r between R_S and R_L results in chaotic dynamics or attractor dynamics in state space. Correspondingly, the object adaptively tracks a moving target in two-dimensional space

Fig. 8 Trajectories of the moving target: one is circular and eight are linear (LT_1–LT_8)
Next, let us consider the velocity of the target. In the computer simulation, the object moves one step per discrete time step; at the same time, the target also moves one step with a certain step length SL, which represents the velocity of the target. The motion increments of the object range from −1 to 1 (see Eqs. (7) and (8)), so the step length SL is taken from 0.01 to 1 at intervals of 0.01, giving 100 different velocities. Because velocity is a relative quantity, SL = 0.01 is a slow target velocity and SL = 1 is a fast target velocity relative to the object.
Now let us look at a simulation of tracking a moving target using the algorithm proposed above, shown in Fig. 9. When a target is moving along a circular trajectory at a certain velocity, the object captures the target at a certain point of the circular trajectory, which is a successful capture on a circular trajectory. Another simulation, of tracking a target that moves along a linear trajectory, is shown in Fig. 10, which is a successful capture on a linear trajectory.
Performance evaluation
To show the performance of tracking a moving target, we have evaluated the success rate of tracking a target that moves along one of the nine trajectories. However, even for the same target trajectory, the tracking performance depends not only on the synaptic connectivity r but also on the target velocity, that is, the target step length SL. Therefore, when we evaluate the success rate of tracking, a pair of parameters is taken: one connectivity r (1 ≤ r ≤ 60) and one target velocity SL (0.01 ≤ SL ≤ 1.0). Because we take 100 different target velocities with the same interval 0.01, we have 60 × 100 pairs of parameters. We have evaluated the success rate of tracking the circular trajectory, shown in Fig. 11. From the simulation results, we can see that the success rate of tracking the circular trajectory with chaotic dynamics is significantly high, and that the success rate depends strongly on the synaptic connectivity r and the velocity of the target.
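The evaluation itself is then a double loop over the parameter pairs. The sketch below identifies the scanned connectivity with the chaotic-regime value R_S of the control loop, which is our reading of the setup, and ignores the considerable computing time such a brute-force scan would take.

```python
def success_rates(path_factory, trials=100):
    """Success rate over `trials` runs for each pair (r, SL), r = 1..60, SL = 0.01..1.00."""
    rates = {}
    for r in range(1, 61):
        for v in range(1, 101):
            SL = 0.01 * v
            wins = sum(track(path_factory(SL=SL), R_S=r)[0] for _ in range(trials))
            rates[(r, SL)] = wins / trials
    return rates

# Example usage for the circular trajectory of Fig. 11:
# rates = success_rates(lambda SL: circular_path(SL=SL))
```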
In order to observe the performance more clearly, we have taken the data for certain connectivities and plotted them in two-dimensional coordinates, shown in Fig. 12. Comparing these figures, we can see a novel feature: when the target velocity becomes faster, the success rate shows an upward tendency, for example at r = 51. In other words, when the chaotic dynamics is not too strong, it seems useful for tracking a faster target.
Certainly, the performance of tracking a moving target also depends on the target trajectory. Because there are too many linear target trajectories, we show only two of them in Fig. 13. From the results of the computer experiments, we note the following two points. First, the success rate decreases rapidly as the target velocity increases. Second, comparing the success rates for the linear trajectories with that for the circular trajectory, tracking a moving target on a circular trajectory gives better performance than on a linear trajectory. Nevertheless, for some linear trajectories quite excellent performance was observed, as in Fig. 13b.
Fig. 9 An example of tracking a target that is moving along a circular trajectory with the simple algorithm. The object captured the moving target at the intersection point

Fig. 10 An example of tracking a target that is moving along a linear trajectory with the simple algorithm. The object captured the moving target at the intersection point

Fig. 11 Success rate of tracking a moving target along the circular trajectory: Over 100 random initial patterns, the rate of successfully capturing the moving target within 600 steps is estimated as the success rate. The positive orientation obeys the right-hand rule. The vertical axis represents the success rate, and the two axes in the horizontal plane represent the connectivity r and the target velocity SL (×10⁻²), respectively
Discussion
In order to clarify the relation between the above cases and chaotic dynamics, we have investigated the dynamical structure of the chaotic dynamics from a dynamical viewpoint. For small connectivities from 1 to 60, the network shows chaotic wandering. During this wandering, we have taken statistics of the time during which the system continuously stays in a certain basin (Suemitsu and Nara 2004) and evaluated the distribution p(l, λ), which is defined by

$$p(l, \lambda) = \#\{\, l \mid S(t) \in b_\lambda \ \text{for}\ s \le t \le s + l,\ S(s-1) \notin b_\lambda \ \text{and}\ S(s+l+1) \notin b_\lambda;\ \lambda \in [1, L] \,\} \qquad (12)$$

$$b_\lambda = \sum_{k=1}^{K} B_\lambda^k \qquad (13)$$

$$T = \sum_{l} l\, p(l, \lambda) \qquad (14)$$

where l is the length of the continuous staying time (in steps) in each attractor basin, and p(l, λ) represents the distribution of stays of l consecutive steps in attractor basin λ within T steps. In our actual simulation, T = 10^5. For the different connectivities r = 15 and r = 50, the distributions p(l, λ) are shown in Fig. 14a and b. In these figures, different basins are marked with different colours and symbols. From the results we see that, as the connectivity increases, the continuous staying time l becomes longer and longer.
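A sketch of this residence-time statistic is given below, building on the earlier sketches. The paper does not state how basin membership of an itinerant state is decided numerically; here we simply relax a copy of the state at full connectivity and see which cycle it reaches, which is our own choice, as are the relaxation length and the bookkeeping of runs.

```python
def which_basin(S, fanin_full, n_relax=60):
    """Return the index of the cycle whose basin contains S, or None."""
    T = S.copy()
    for _ in range(n_relax):
        T = update(T, fanin_full)
    for lam in range(L):
        if any(np.array_equal(T, xi[lam, k]) for k in range(K)):
            return lam
    return None

def residence_distribution(r, T_total=10**5):
    """Histogram p(l, lambda) of runs of l consecutive steps spent inside basin b_lambda."""
    fanin, fanin_full = make_fanin(r), make_fanin(N - 1)
    S = rng.choice([-1, 1], size=N)
    hist = {}                                    # (lambda, l) -> count
    current, length = None, 0
    for _ in range(T_total):
        S = update(S, fanin)
        lam = which_basin(S, fanin_full)
        if lam == current and lam is not None:
            length += 1
        else:
            if current is not None and length > 0:
                hist[(current, length)] = hist.get((current, length), 0) + 1
            current, length = lam, (1 if lam is not None else 0)
    return hist
```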
Fig. 12 Success rates drawn from Fig. 11. We take the data for a certain connectivity and show them in a two-dimensional diagram. The horizontal axis represents the target velocity from 0.01 to 1.0, and the vertical axis represents the success rate. With the increase of target velocity: (a) r = 16: downward tendency; (b) r = 51: upward tendency
Fig. 13 Success rates of tracking a moving target along different linear trajectories. (a) A moving target along linear trajectory LT_2; (b) A moving target along linear trajectory LT_6
Fig. 14 Log plot of the frequency distribution of the continuous staying time l: The horizontal axis represents the number of consecutive steps l spent in a certain basin λ during long-time chaotic wandering, and the vertical axis represents the accumulated number p(l, λ) of stays of the same length l in basin λ. The continuous staying time l becomes longer with the increase of connectivity r. (a) r = 15: l is shorter; (b) r = 50: l is longer
Referring to the novel performance features discussed in the previous section, let us consider the reasons. First, in the case of slower target velocities, a decreasing success rate with increasing connectivity r is observed for both the circular target trajectory and the linear ones. This shows that chaotic dynamics that stays localized in a certain basin for too long is not well suited to tracking a slower target.

Second, in the case of faster target velocities, chaotic dynamics that is not too strong seems useful for tracking a faster target. Computer experiments show that, when the target moves quickly, the action of the object remains largely chaotic while it tracks the target. From past experiments we know that the motion increments during chaotic motion are very short. Therefore, short motion increments combined with a fast target velocity result in poor tracking performance. However, when the continuous staying time l in a certain basin becomes longer, the object can move in a certain direction for l steps, which helps the object track the faster target. Therefore, when the connectivity becomes somewhat larger (r = 50 or so), the success rate rises with increasing target velocity, as in the case shown in Fig. 12.

Third, we try to explain why the success rate of tracking a moving target along a linear trajectory decreases rapidly when the target velocity increases even a little. In this case, a faster target velocity results in more chaotic motions of the object; at the same time, the target moves too far away from the object along the linear trajectory. Therefore, the success rates become worse. Generally speaking, chaotic dynamics is not always useful for solving an ill-posed problem; however, better performance can often be observed when chaotic dynamics is used to solve such problems. As an issue for future study, the functional aspects of chaotic dynamics remain context dependent.
Finally, let us consider our approach in the context of robot navigation, where many approaches exist. As an approach using dynamical neural networks, a simple mechanism, a dynamical neural Schmitt trigger, was applied to a small neural network controlling the behaviour of an autonomous miniature robot (Hülse and Pasemann 2002). Our model, on the other hand, is a recurrent neural network with N neurons, that is, a large neural network. Recent work on brain–machine interfaces and the parietal lobe suggests that, in cortical areas, the 'message' defining a given hand movement is widely disseminated (Wessberg et al. 2000; Nicolelis 2001). Therefore, the difference between our approach and the Schmitt trigger approach to robot navigation is quite large: our approach emphasizes the whole state of the neurons, whereas the Schmitt trigger focuses on the interaction between a few neurons. Furthermore, our approach has a huge reservoir of redundancy, which results in great robustness. Generally speaking, methods in robot navigation often run into enormous computational complexity, whereas our approach proposes a simple adaptive control algorithm.
Summary
We have proposed a simple method for tracking a moving target using chaotic dynamics in a recurrent neural network model. Although chaotic dynamics cannot always solve all complex problems with better performance, better results were often observed when using chaotic dynamics to solve certain ill-posed problems, such as tracking a moving target and solving mazes (Suemitsu and Nara 2004). From the results of the computer simulations, we can state the following points.

– A simple method for tracking a moving target was proposed.
– Chaotic dynamics is quite efficient for tracking a target that is moving along a circular trajectory.
– The performance of tracking a moving target on a linear trajectory is not as good as on a circular trajectory; however, for some linear trajectories excellent performance was observed.
– The length of the continuous staying time becomes longer with the increase of the synaptic connectivity r that leads to chaotic dynamics in the network.
– A longer continuous staying time in a certain basin seems useful for tracking a faster target.
References
Aihara K, Takabe T, Toyoda M (1990) Chaotic neural networks. Phys Lett A 144:333–340
Babloyantz A, Destexhe A (1986) Low-dimensional chaos in an instance of epilepsy. Proc Natl Acad Sci USA 83:3513–3517
Fujii H, Itoh H, Ichinose N, Tsukada M (1996) Dynamical cell assembly hypothesis—theoretical possibility of spatio-temporal coding in the cortex. Neural Netw 9:1303–1350
Huber F, Thorson J (1985) Cricket auditory communication. Sci Am 253:60–68
Hülse M, Pasemann F (2002) Dynamical neural Schmitt trigger for robot control. In: Dorronsoro J (ed) ICANN 2002: topics in artificial neural networks. International Conference on Artificial Neural Networks, Madrid, Spain, August 28–30, 2002. Lecture notes in computer science, vol 2415. Springer, Berlin, pp 783–788
Kaneko K, Tsuda I (2003) Chaotic itinerancy. Chaos 13(3):926–936
Kuroiwa J, Nara S, Aihara K (1999) Functional possibility of chaotic behaviour in a single chaotic neuron model for dynamical signal processing elements. In: 1999 IEEE International Conference on Systems, Man, and Cybernetics (SMC'99), Tokyo, October 1999, vol 1. p 290
Nara S (2003) Can potentially useful dynamics to solve complex problems emerge from constrained chaos and/or chaotic itinerancy? Chaos 13(3):1110–1121
Nara S, Davis P (1992) Chaotic wandering and search in a cycle memory neural network. Prog Theor Phys 88:845–855
Nara S, Davis P (1997) Learning feature constraints in a chaotic neural memory. Phys Rev E 55:826–830
Nara S, Davis P, Kawachi M, Totuji H (1993) Memory search using complex dynamics in a recurrent neural network model. Neural Netw 6:963–973
Nara S, Davis P, Kawachi M, Totuji H (1995) Chaotic memory dynamics in a recurrent neural network with cycle memories embedded by pseudo-inverse method. Int J Bifurcation Chaos Appl Sci Eng 5:1205–1212
Nicolelis M (2001) Actions from thoughts. Nature 409:403–407
Skarda CA, Freeman WJ (1987) How brains make chaos in order to make sense of the world. Behav Brain Sci 10:161–195
Suemitsu Y, Nara S (2003) A note on time delayed effect in a recurrent neural network model. Neural Comput Appl 11(3&4):137–143
Suemitsu Y, Nara S (2004) A solution for two-dimensional mazes with use of chaotic dynamics in a recurrent neural network model. Neural Comput 16(9):1943–1957
Tsuda I (1991) Chaotic itinerancy as a dynamical basis of hermeneutics in brain and mind. World Futures 32:167–184
Tsuda I (2001) Toward an interpretation of dynamic neural activity in terms of chaotic dynamical systems. Behav Brain Sci 24(5):793–847
Wessberg J, Stambaugh C, Kralik J, Beck P, Laubach M, Chapin J, Kim J, Biggs S, Srinivasan M, Nicolelis M (2000) Real-time prediction of hand trajectory by ensembles of cortical neurons in primates. Nature 408:361–365
    • "During this wandering, we have taken statistics of the residence time, the time during which the system continuously stays in a certain basin (Suemitsu and Nara 2004; Li and Nara 2008) and evaluated the distribution p(l, l) which is defined by pðl; lÞ ¼ fthe number of ljSðtÞ 2 b l in s t s þ l and Sðs À 1Þ 6 2 b l and Sðs þ l þ 1Þ 6 2 b l ; ljl 2 ½1; "
    [Show abstract] [Hide abstract] ABSTRACT: Chaotic dynamics generated in a chaotic neural network model are applied to 2-dimensional (2-D) motion control. The change of position of a moving object in each control time step is determined by a motion function which is calculated from the firing activity of the chaotic neural network. Prototype attractors which correspond to simple motions of the object toward four directions in 2-D space are embedded in the neural network model by designing synaptic connection strengths. Chaotic dynamics introduced by changing system parameters sample intermediate points in the high-dimensional state space between the embedded attractors, resulting in motion in various directions. By means of adaptive switching of the system parameters between a chaotic regime and an attractor regime, the object is able to reach a target in a 2-D maze. In computer experiments, the success rate of this method over many trials not only shows better performance than that of stochastic random pattern generators but also shows that chaotic dynamics can be useful for realizing robust, adaptive and complex control function with simple rules.
    Full-text · Article · Dec 2009
    • "Subsequently, chaotic artificial neural networks with the conventional sigmoidal activation functions were successfully implemented for solving various practical problems. Instances abound: chaotic simulated annealing was utilized to solve combinatorial optimization problems, such as the well-known traveling salesman problem [32]–[34]; chaotic itinerancy was applied to solve ill-posed problems, such as tracking a moving target [35], [36]. Moreover, it was found that the outputs of the chaotic neural network model with sigmoidal functions are always nonperiodic, changing continuously in a relatively narrow and asymmetric range, therefore, sometimes cannot be directly stabilized to one of its stored patterns or the corresponding reversed patterns. "
    [Show abstract] [Hide abstract] ABSTRACT: In the literature, it was reported that the chaotic artificial neural network model with sinusoidal activation functions possesses a large memory capacity as well as a remarkable ability of retrieving the stored patterns, better than the conventional chaotic model with only monotonic activation functions such as sigmoidal functions. This paper, from the viewpoint of the anti-integrable limit, elucidates the mechanism inducing the superiority of the model with periodic activation functions that includes sinusoidal functions. Particularly, by virtue of the anti-integrable limit technique, this paper shows that any finite-dimensional neural network model with periodic activation functions and properly selected parameters has much more abundant chaotic dynamics that truly determine the model's memory capacity and pattern-retrieval ability. To some extent, this paper mathematically and numerically demonstrates that an appropriate choice of the activation functions and control scheme can lead to a large memory capacity and better pattern-retrieval ability of the artificial neural network models.
    Full-text · Article · Sep 2009
    • "Furthermore, the idea is extended to challenging application of chaotic dynamics in control. Chaotic dynamics introduced in a recurrent network model was applied to control tasks that an object should solve a two-dimensional maze for catching a target (Suemitsu and Nara 2004), or should capture a target moving along different trajectories (Li and Nara 2008). A simple coding method is employed to project the higher dimensional neural states dynamics into lower dimensional motion increments. "
    [Show abstract] [Hide abstract] ABSTRACT: Originating from a viewpoint that complex/chaotic dynamics would play an important role in biological system including brains, chaotic dynamics introduced in a recurrent neural network was applied to control. The results of computer experiment was successfully implemented into a novel autonomous roving robot, which can only catch rough target information with uncertainty by a few sensors. It was employed to solve practical two-dimensional mazes using adaptive neural dynamics generated by the recurrent neural network in which four prototype simple motions are embedded. Adaptive switching of a system parameter in the neural network results in stationary motion or chaotic motion depending on dynamical situations. The results of hardware implementation and practical experiment using it show that, in given two-dimensional mazes, the robot can successfully avoid obstacles and reach the target. Therefore, we believe that chaotic dynamics has novel potential capability in controlling, and could be utilized to practical engineering application.
    Full-text · Article · Oct 2008
Show more