A Study on Convergence of Competitive CNNs
M. Di Marco, M. Forti, M. Grazzini, P. Nistri and L. Pancioni
Department of Information Engineering
University of Siena
Via Roma, 56 - 53100 Siena, Italy
E-mail: {dimarco, forti, grazzini, pnistri, pancioni}@dii.unisi.it
Abstract: In a series of papers published in the seventies, Grossberg developed a geometric approach for analyzing the global dynamical behavior and convergence properties of a class of competitive dynamical systems. In this paper, Grossberg's approach is extended to competitive standard Cellular Neural Networks (CNNs), and it is used to investigate convergence of classes of non-symmetric competitive CNNs under the hypothesis that they induce a globally consistent decision scheme.
I. INTRODUCTION
In [1]–[3], Grossberg has developed a geometric approach for analyzing the global dynamical behavior and convergence properties of the class of nonlinear competitive systems

\dot{x}_i = \alpha(x_i) M_i(x), \quad i = 1, 2, \ldots, n    (1)

where x = (x_i)_{i=1,2,\ldots,n} ∈ R^n is the vector of state variables. The continuously differentiable function M_i : R^n → R, which satisfies

\frac{\partial M_i}{\partial x_j}(x) \le 0, \quad j \ne i, \ x \in \mathbb{R}^n,

represents the competitive balance at the state x_i. The continuous function α : R → R is such that α(x_i) > 0 when x_i > 0, and α(0) = 0, i.e., α is an amplification function that converts the competitive balance into the growth rate \dot{x}_i. Moreover, since α(0) = 0, the hyperplanes x_i = 0 are invariant for the dynamics of (1). Hence there is a subset of R^n, namely the positive orthant O^+ = {x ∈ R^n : x_i > 0, i = 1, 2, ..., n}, where the state variables evolve: for initial conditions x_i(0) > 0, i = 1, 2, ..., n, the solution x(t) of (1) is such that x(t) ∈ O^+ for all t ≥ 0. An important special case is the classical Volterra-Lotka system for n competing species

\dot{x}_i = x_i \left( 1 - \sum_{k=1}^{n} B_{ik} x_k \right), \quad i = 1, 2, \ldots, n    (2)

where B_{ik} ≥ 0 for all i ≠ k.
It has been proved in [1]–[3] that each competitive system
(1) induces a decision scheme, and if the scheme is globally
consistent, then each solution is forced through a series of local
decisions (or jumps) which eventually lead to a final global
decision (or global consensus). This corresponds to the fact
that the solution has settled into an equilibrium point, i.e., (1)
is convergent. In other circumstances, it may happen that the
decision scheme is globally inconsistent, hence the series of
local decisions never terminates and the system can sustain
non-vanishing oscillations. This is true for example in the 'voting paradox' discussed by May and Leonard in 1975, i.e., a Volterra-Lotka model with three competing species x_1, x_2, and x_3, which has a contradictory decision scheme where x_1 beats x_2, x_2 beats x_3, and x_3 beats x_1.
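The voting-paradox dynamics can be made concrete with a short simulation. The sketch below is illustrative, not taken from [1]–[3]: the parameter values α = 0.6, β = 1.5 (so that α < 1 < β and α + β > 2, the oscillatory regime) and the initial condition are our own choices. It integrates the three-species May-Leonard system with a simple Euler scheme and tracks which species currently leads; the lead rotates through all three species and never settles, mirroring the contradictory decision scheme.

```python
# Illustrative May-Leonard simulation (three competing species).
# Parameters with alpha < 1 < beta and alpha + beta > 2 put the
# system in the oscillatory ("voting paradox") regime.

def simulate_may_leonard(alpha=0.6, beta=1.5, x0=(0.5, 0.2, 0.1),
                         dt=0.01, t_end=200.0):
    """Euler integration; returns the sequence of 'leading' species
    (index of the largest component, recorded whenever it changes)."""
    x = list(x0)
    leaders = []
    for _ in range(int(t_end / dt)):
        x1, x2, x3 = x
        # cyclic competitive interactions: each species inhibits the others
        dx = [x1 * (1 - x1 - alpha * x2 - beta * x3),
              x2 * (1 - beta * x1 - x2 - alpha * x3),
              x3 * (1 - alpha * x1 - beta * x2 - x3)]
        x = [xi + dt * dxi for xi, dxi in zip(x, dx)]
        lead = max(range(3), key=lambda i: x[i])
        if not leaders or leaders[-1] != lead:
            leaders.append(lead)
    return leaders

if __name__ == "__main__":
    seq = simulate_may_leonard()
    print("lead changes:", len(seq) - 1, "species seen:", sorted(set(seq)))
```

The series of local decisions never terminates: the leader keeps switching, so the simulation never reaches a global consensus.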
The goal of this paper is to extend Grossberg's approach in order to address convergence of standard competitive cellular neural networks (CNNs) [4], i.e., CNNs with inhibitory (non-positive) interconnections between distinct neurons. What makes Grossberg's approach really attractive in the CNN framework is that it does not require the existence of a Lyapunov function, hence it is applicable also to address convergence of non-symmetric CNNs. In this regard, we recall that the Lyapunov method developed by Chua and Yang [4] is applicable to symmetric (reciprocal) CNNs only.
The paper shows that it is possible to associate a decision
scheme with the competitive dynamical system satisfied by
the CNN, and to globally analyze the CNN dynamics and
convergence properties on the basis of the consistency or in-
consistency of the scheme. In particular, the paper investigates
the convergence properties implied by a globally consistent
decision scheme, in the case where there are three competing
neurons. The proofs of the main results in this paper are given
in [5]. There, the convergence results are also extended to
higher-order CNNs.
II. COMPETITIVE CNNS
The standard CNNs, introduced by Chua and Yang in 1988
[4], obey the system of nonlinear differential equations
\dot{x} = -Dx + TG(x) + I    (N)

where x = (x_i)_{i=1,2,\ldots,n} ∈ R^n is the vector of neuron state variables; D = diag(d_1, d_2, ..., d_n) ∈ R^{n×n}, with d_i > 0, i = 1, 2, ..., n, is a diagonal matrix of neuron self-inhibitions; T = (T_{ij})_{i,j=1,2,\ldots,n} ∈ R^{n×n} is the neuron interconnection matrix; I = (I_i)_{i=1,2,\ldots,n} ∈ R^n is the vector of biasing inputs; and G(x) = (g(x_1), ..., g(x_n))' : R^n → R^n, where the prime means transpose, is a diagonal mapping where

g(\rho) = \frac{1}{2} \left( |\rho + 1| - |\rho - 1| \right) : \mathbb{R} \to \mathbb{R}

is the piecewise-linear neuron activation.
Definition 1: The CNN (N) is said to be convergent (or completely stable), if and only if for any solution x(t) of (N) we have \lim_{t \to +\infty} x(t) = x_e, where x_e is an equilibrium point of (N).
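Definition 1 can be illustrated numerically. The following sketch uses parameter values of our own choosing (not an example from the paper): a two-neuron competitive CNN (N) with d_1 = d_2 = 1, T = [[2, -1], [-1, 2]] and I = 0, which is symmetric and hence convergent by the Chua-Yang Lyapunov theory recalled below. The simulation checks that the trajectory settles at an equilibrium point by measuring the residual of the equilibrium equation.

```python
# Euler simulation of the standard CNN (N): dx/dt = -D x + T g(x) + I.
# Illustrative two-neuron competitive example (T symmetric,
# off-diagonal entries <= 0), hence convergent.

def g(rho):
    """Piecewise-linear CNN activation g(rho) = (|rho+1| - |rho-1|) / 2."""
    return (abs(rho + 1.0) - abs(rho - 1.0)) / 2.0

def simulate_cnn(T, I, x0, d=None, dt=0.01, t_end=50.0):
    n = len(x0)
    d = d or [1.0] * n
    x = list(x0)
    for _ in range(int(t_end / dt)):
        y = [g(xi) for xi in x]
        dx = [-d[i] * x[i] + sum(T[i][j] * y[j] for j in range(n)) + I[i]
              for i in range(n)]
        x = [x[i] + dt * dx[i] for i in range(n)]
    # residual of the equilibrium equation 0 = -Dx + T g(x) + I
    y = [g(xi) for xi in x]
    res = [-d[i] * x[i] + sum(T[i][j] * y[j] for j in range(n)) + I[i]
           for i in range(n)]
    return x, max(abs(r) for r in res)

if __name__ == "__main__":
    x, res = simulate_cnn([[2.0, -1.0], [-1.0, 2.0]], [0.0, 0.0], [0.3, -0.2])
    print("final state:", x, "equilibrium residual:", res)
```

For this example the trajectory converges to the saturated equilibrium (3, -3), with a residual that vanishes up to numerical precision.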
1-4244-0921-7/07 $25.00 © 2007 IEEE.
Assumption 1: The CNN (N) is competitive, i.e., we have T_{ij} ≤ 0 for all i ≠ j.
Competitive CNNs are of great importance for applications; see [6]–[9] and references therein. Under the assumption that T is symmetric, it is well known that a competitive CNN admits a global Lyapunov function and is convergent by the general Lyapunov theory developed in [4]. On the contrary, non-symmetric competitive CNNs may exhibit non-convergent
dynamics including non-vanishing oscillations and chaos. For
example, in [10] a third-order non-symmetric competitive CNN has been studied, which displays Hopf bifurcations giving rise to a large-amplitude, globally attracting stable limit cycle in a wide range of parameters. In [11], a fourth-order non-symmetric competitive CNN is presented, which
exhibits a cascade of period-doubling bifurcations leading to
the birth of a complex attractor. Up to now, no general method
is available for determining conditions under which a non-
symmetric competitive CNN is convergent.
The goal of this paper is to extend Grossberg's approach to competitive CNNs. This involves two main steps: (i) first of all, we write the CNN equations with respect to the neuron outputs, and show that in this way we are brought back to a dynamical system that is structurally similar to the class of competitive systems (1) (Section III); (ii) we analyze the convergence properties of the dynamical system satisfied by the CNN outputs, by generalizing Grossberg's approach to this system (Sections IV-V). We stress that the extended method
does not require the existence of a Lyapunov function. As such, it is applicable to address convergence in the general case where the interconnection matrix T of the competitive CNN is not necessarily symmetric, and the CNN possesses multiple equilibrium points.
III. DYNAMIC SYSTEM FOR CNN OUTPUTS
The CNN (N) is quite different in structure with respect to (1), and Grossberg's approach is not directly applicable to analyze (N). In particular, a competitive balance and amplification function with properties analogous to those of model (1) are not identifiable for (N). Moreover, the state space of (N) is the whole space R^n. This notwithstanding, we prove in what follows that it is possible to put the CNN equations in a form structurally analogous to that of system (1), if we write the same equations with respect to the neuron outputs.
Let us consider the following system of differential inclusions

\dot{y} \in H(y) M(y), \quad y \in \mathbb{R}^n    (3)

where

M(y) = Ay + I = (-D + T)y + I, \quad y \in \mathbb{R}^n

is the affine vector field satisfied by the CNN (N) in the linear region {y ∈ R^n : |y_i| < 1, i = 1, 2, ..., n}, extended to the whole space R^n. Furthermore, H(y) = diag(h(y_1), ..., h(y_n)) : R^n → R^n is a diagonal set-valued map, such that h(ρ) : R → R is a non-negative set-valued map defined as

h(\rho) = \begin{cases} 1, & |\rho| < 1 \\ [0, 1], & |\rho| = 1 \\ 0, & |\rho| > 1. \end{cases}    (4)

Note that under Assumption 1 we have

\frac{\partial M_i}{\partial y_j}(y) = A_{ij} = T_{ij} \le 0, \quad i \ne j, \ y = (y_i)_{i=1,2,\ldots,n} \in \mathbb{R}^n.
Property 1: Let x(t), t ≥ 0, be a solution of the CNN (N), and y(t) = G(x(t)), t ≥ 0, the corresponding output solution. Then, y(t), t ≥ 0, is also a solution of (3).
By writing (3) in components we obtain

\dot{y}_i \in h(y_i) M_i(y) = h(y_i) \left( \sum_{k=1}^{n} A_{ik} y_k + I_i \right), \quad i = 1, 2, \ldots, n    (5)

for any y = (y_i)_{i=1,2,\ldots,n} ∈ R^n, whose form is analogous to that of the class of competitive systems (1), and in particular to the Volterra-Lotka system (2). Indeed, (5) is characterized by: (a) functions M_i, i = 1, 2, ..., n, which are the competitive balance at each CNN neuron output y_i; (b) a non-negative function h, which plays the role of an amplification function converting the competitive balance into the growth rate \dot{y}_i; and (c) a subset of the space R^n, namely the hypercube K^n = [-1, 1]^n, where the solutions of (5) starting in K^n are constrained to evolve.
Although (5) is structurally analogous to model (1), there is however a basic difference. In fact, the amplification h in (4) is a set-valued map assuming multiple values at the saturation levels ρ = ±1,¹ while the amplification function α in model (1) is a conventional single-valued function. Consequently, we need ad hoc techniques to analyze the dynamics of the differential inclusion (5), which differ substantially from those used to analyze the ordinary differential equation in model (1).
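The relation between (N) and the output inclusion (5) can be checked numerically. The sketch below uses illustrative parameters of our own choosing (the activation g and the field M are as defined in the text, with d_i = 1): it integrates a two-neuron CNN whose trajectory stays in the linear region, where h ≡ 1, and verifies at each Euler step that the discrete output velocity equals the competitive balance M_i(y).

```python
# In the linear region |x_i| < 1 we have y_i = g(x_i) = x_i and h(y_i) = 1,
# so the inclusion (5) reduces to dy_i/dt = M_i(y) = sum_k A_ik y_k + I_i.

def g(rho):
    return (abs(rho + 1.0) - abs(rho - 1.0)) / 2.0

def check_output_inclusion(T, I, x0, dt=0.01, steps=2000):
    """Euler-integrate (N) with d_i = 1; return the largest mismatch
    between the discrete output velocity and M_i(y) over steps that
    stay fully inside the linear region."""
    n = len(x0)
    x = list(x0)
    worst = 0.0
    for _ in range(steps):
        y = [g(xi) for xi in x]
        # A = -D + T with D the identity, so M_i(y) = -y_i + (T y)_i + I_i
        M = [-y[i] + sum(T[i][j] * y[j] for j in range(n)) + I[i]
             for i in range(n)]
        dx = [-x[i] + sum(T[i][j] * y[j] for j in range(n)) + I[i]
              for i in range(n)]
        x_new = [x[i] + dt * dx[i] for i in range(n)]
        if all(abs(v) < 1.0 for v in x) and all(abs(v) < 1.0 for v in x_new):
            y_new = [g(v) for v in x_new]
            for i in range(n):
                worst = max(worst, abs((y_new[i] - y[i]) / dt - M[i]))
        x = x_new
    return worst

if __name__ == "__main__":
    T = [[0.5, -0.2], [-0.2, 0.5]]
    print("max mismatch:", check_output_inclusion(T, [0.1, -0.1], [0.0, 0.0]))
```

At the saturation levels y_i = ±1 the output velocity is instead 0, which is contained in h(±1) M_i(y) = [0, 1] M_i(y), consistently with the set-valued formulation.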
IV. DECISION SCHEME FOR COMPETITIVE CNNS
By means of the competitive balance M, we define for the CNN the following relevant functions and subsets of K^n. Let

M^+(y) = \max_{i=1,2,\ldots,n} M_i(y) = \max_{i=1,2,\ldots,n} \left( \sum_{j=1}^{n} A_{ij} y_j + I_i \right) : K^n \to \mathbb{R}

be the maximal balance function, and

M^-(y) = \min_{i=1,2,\ldots,n} M_i(y) = \min_{i=1,2,\ldots,n} \left( \sum_{j=1}^{n} A_{ij} y_j + I_i \right) : K^n \to \mathbb{R}

the minimal balance function. Furthermore, consider the subsets of K^n

R^+ = \{ y \in K^n : M^+(y) \ge 0 \}
R^- = \{ y \in K^n : M^-(y) \le 0 \}
R = R^+ \cap R^-.
¹ This agrees with the fact that the velocity \dot{y}_i(t) of the CNN output solution y(t) is not uniquely defined at the saturation levels y_i(t) = ±1, but rather there are multiple feasible velocities when y_i(t) = ±1.
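These definitions translate directly into code. The helper below is an illustrative sketch (the 2x2 matrix A and input I are our own example values): it evaluates M^+, M^- and tests membership in R^+, R^- and R for a given output vector y in K^n.

```python
# M_i(y) = sum_j A_ij y_j + I_i; M+ and M- are its max and min over i.
# R+ = {M+ >= 0}, R- = {M- <= 0}, R is their intersection.

def balances(A, I, y):
    return [sum(A[i][j] * y[j] for j in range(len(y))) + I[i]
            for i in range(len(y))]

def in_R_plus(A, I, y):
    return max(balances(A, I, y)) >= 0.0

def in_R_minus(A, I, y):
    return min(balances(A, I, y)) <= 0.0

def in_R(A, I, y):
    return in_R_plus(A, I, y) and in_R_minus(A, I, y)

if __name__ == "__main__":
    A = [[1.0, -1.0], [-1.0, 1.0]]   # A = -D + T for T = [[2,-1],[-1,2]], d_i = 1
    I = [0.5, 0.5]
    print(in_R(A, I, [1.0, 0.0]), in_R(A, I, [0.0, 0.0]))
```

For this example the point (1, 0) lies in R (one balance positive, one negative), while at the origin both balances are positive, so the point belongs to R^+ but not to R^- or R.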
A. Ignition property
Property 2: If the CNN (N) satisfies Assumption 1, then R^+ is positively invariant for the output solutions of (N). This means that, given any output solution y(t) of (N) such that y(t^+) ∈ R^+ for some t^+ ≥ 0, then we have y(t) ∈ R^+ for all t ≥ t^+. The sets R^- and R are positively invariant for the output solutions of (N) as well.

Property 2 represents an ignition property for the competitive CNN. Suppose that at some instant t^+ the output of neuron i is enhanced, i.e., dy_i(t)/dt|_{t=t^+} ≥ 0. Then, for any time t ≥ t^+ there is at least one neuron j = j(t) ∈ {1, 2, ..., n}, the index j depending on t, which is enhanced. Said another way, when the competition between neurons starts, it thereafter never turns off.
Property 3: Suppose that the CNN (N) satisfies Assumption 1, and let y(t) be an output solution of (N) such that y(t) ∉ R for all t ≥ 0. Then, all outputs y_i(t), i = 1, 2, ..., n, are either monotonically increasing or monotonically decreasing for t ≥ 0, hence there exists the limit \lim_{t \to +\infty} y(t) = y(∞).

Property 3 implies that only within R is there an interesting dynamics for (N), where the neuron outputs are not necessarily monotone increasing or monotone decreasing functions. In what follows we thus restrict our analysis to any output solution y(t) of (N) that hits R at some finite instant t^*, and is such that y(t) ∈ R for all t ≥ t^* (see Property 2).
B. Jumps and jump sets
Let y(t) be an output solution of the competitive CNN (N), such that y(t) ∈ R for t ≥ t^*. In analogy with Grossberg's approach, we want to analyze the dynamical behavior of y(t) in R by keeping track of which neuron is winning the competition, i.e., by tracking the index w = w(t) ∈ {1, 2, ..., n}, in general depending on t, such that we have M_{w(t)}(y(t)) = M^+(y(t)). To this end, we will follow the jumps of y(t) between the regions

R_i^+ = \{ y \in R : M_i(y) = M^+(y) \}, \quad i = 1, 2, \ldots, n.

Definition 2: We say that an output solution y(t) of (N) makes no jump (between regions R_i^+) in the time interval (t_a, t_b) ⊆ (t^*, +∞), if and only if there exists an index w ∈ {1, 2, ..., n} such that M_w(y(t)) = M^+(y(t)) for all t ∈ (t_a, t_b).

Let

t_1 = \sup \{ \tau > t^* : y(t) \text{ makes no jump in } (t^*, \tau) \}

where t^* < t_1 ≤ +∞. Moreover, let w(1) ∈ {1, 2, ..., n} be such that M_{w(1)}(y(t)) = M^+(y(t)) for all t ∈ (t^*, t_1). If t_1 = +∞, then w(1) is the winning neuron for all t ≥ t^*. Otherwise, if t_1 < +∞ we let

t_2 = \sup \{ \tau > t_1 : y(t) \text{ makes no jump in } (t_1, \tau) \}

where t_1 < t_2 ≤ +∞. Also, let w(2) ∈ {1, 2, ..., n} be such that M_{w(2)}(y(t)) = M^+(y(t)) for all t ∈ (t_1, t_2). Of course, we have w(2) ≠ w(1). If the case t_1 < +∞ occurs, we will say that y(t) jumps from the winning region R_{w(1)}^+ to the winning region R_{w(2)}^+ at the instant t_1.

If t_2 = +∞, then w(2) is the winning neuron for all t ≥ t_1. Otherwise, if t_2 < +∞ we let

t_3 = \sup \{ \tau > t_2 : y(t) \text{ makes no jump in } (t_2, \tau) \}

where t_2 < t_3 ≤ +∞. Moreover, let w(3) ∈ {1, 2, ..., n} be such that M_{w(3)}(y(t)) = M^+(y(t)) for all t ∈ (t_2, t_3). We have w(3) ≠ w(2). If t_2 < +∞, we will say that y(t) jumps from region R_{w(2)}^+ to region R_{w(3)}^+ at the instant t_2.

Proceeding in this way, we can construct a sequence of instants t^* < t_1 < t_2 < t_3 < ..., and a corresponding sequence of indexes w(1), w(2), w(3), ... in the set {1, 2, ..., n}, such that y(t) does not jump in the intervals (t_{k-1}, t_k), k = 1, 2, ... (we have let t_0 = t^*), whereas y(t) jumps from region R_{w(k)}^+ to region R_{w(k+1)}^+ at the instants t_k, k = 1, 2, .... Such a sequence of jumps may be finite or infinite. If it is finite, then there exist an index w ∈ {1, 2, ..., n} and an instant t_w < +∞, such that M_w(y(t)) = M^+(y(t)) for all t ≥ t_w, namely y_w is the eventual winning neuron.
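The jump sequence w(1), w(2), ... can be extracted numerically by tracking, along a simulated output solution, the index attaining the maximal balance M^+. The sketch below uses illustrative parameters of our own choosing: a symmetric (hence convergent) three-neuron competitive CNN with d_i = 1. It records the winning index at each step; for this example the winner eventually stops changing, i.e., an eventual winning neuron exists.

```python
# Track the winning index w(t) = argmax_i M_i(y(t)) along an Euler
# trajectory of (N) with d_i = 1.

def g(rho):
    return (abs(rho + 1.0) - abs(rho - 1.0)) / 2.0

def winner_sequence(T, I, x0, dt=0.01, t_end=30.0):
    n = len(x0)
    x = list(x0)
    winners = []
    for _ in range(int(t_end / dt)):
        y = [g(xi) for xi in x]
        M = [-y[i] + sum(T[i][j] * y[j] for j in range(n)) + I[i]
             for i in range(n)]
        winners.append(max(range(n), key=lambda i: M[i]))
        dx = [-x[i] + sum(T[i][j] * y[j] for j in range(n)) + I[i]
              for i in range(n)]
        x = [x[i] + dt * dx[i] for i in range(n)]
    return winners

if __name__ == "__main__":
    T = [[2.0, -0.5, -0.5], [-0.5, 2.0, -0.5], [-0.5, -0.5, 2.0]]
    I = [0.1, 0.0, -0.1]
    ws = winner_sequence(T, I, [0.1, 0.05, 0.0])
    # compress consecutive repetitions into the jump sequence w(1), w(2), ...
    jumps = [w for k, w in enumerate(ws) if k == 0 or ws[k - 1] != w]
    print("jump sequence:", jumps, "eventual winner:", ws[-1])
```

With the positive bias on neuron 1 (index 0) and an initial condition favoring it, that neuron wins the competition and keeps winning, so the jump sequence is finite.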
C. Decision scheme
Suppose that y(t) jumps from region R_i^+ to region R_j^+, j ≠ i, at the instant t = t_j. We equivalently say that the CNN takes a local decision, where the CNN decides to maximally enhance neuron j instead of neuron i, at t = t_j. Of course, the jump can only occur on the positive jump set between the winning regions R_i^+ and R_j^+, which is given by

J_{ij}^+ = R_i^+ \cap R_j^+ = \{ y \in R : M_i(y) = M_j(y) = M^+(y) \}.

Definition 3: We say that the positive jump set J_{ij}^+, j ≠ i, is crossable from i to j, if and only if there exists at least one output solution y(t) of the CNN (N), such that y(t) jumps from R_i^+ to R_j^+ on J_{ij}^+ at some instant t = t_j. Otherwise, we say that the jump set J_{ij}^+ is non-crossable from i to j.

It is now possible to associate with the competitive CNN (N) a positive directed decision graph G^+. First, consider a directed graph specified by the set of nodes {1, 2, ..., n}, each node corresponding to a neuron, which is fully connected. Then, we construct a (reduced) positive directed decision graph G^+ as follows: for each i, j ∈ {1, 2, ..., n}, with j ≠ i, we remove the branch oriented from node i to node j if and only if J_{ij}^+ is non-crossable from i to j. The graph G^+ is identified with the positive decision scheme induced by (N).

Definition 4: Suppose that the CNN (N) satisfies Assumption 1. The positive directed decision graph G^+ associated with (N) is said to be globally consistent, if and only if G^+ is acyclic, i.e., G^+ has no directed jump cycle.
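Global consistency of G^+ is a purely graph-theoretic condition and can be tested mechanically. The sketch below is generic code, not tied to a specific CNN: it checks whether a directed graph given as an edge list is acyclic, using a depth-first search with three node colors. The two example graphs are the cyclic scheme 1 → 2, 2 → 3, 3 → 1 of the voting paradox and the acyclic scheme 2 → 1, 3 → 1, 3 → 2 appearing in Property 4 below.

```python
# A decision scheme G+ is globally consistent iff the directed graph is
# acyclic; detect directed cycles with a colored depth-first search.

def is_acyclic(nodes, edges):
    adj = {v: [] for v in nodes}
    for a, b in edges:
        adj[a].append(b)
    WHITE, GRAY, BLACK = 0, 1, 2       # unvisited / on DFS stack / done
    color = {v: WHITE for v in nodes}

    def dfs(v):
        color[v] = GRAY
        for u in adj[v]:
            if color[u] == GRAY:       # back edge: directed cycle found
                return False
            if color[u] == WHITE and not dfs(u):
                return False
        color[v] = BLACK
        return True

    for v in nodes:
        if color[v] == WHITE and not dfs(v):
            return False
    return True

if __name__ == "__main__":
    voting = [(1, 2), (2, 3), (3, 1)]   # 1 beats 2, 2 beats 3, 3 beats 1
    prop4 = [(2, 1), (3, 1), (3, 2)]    # scheme of Property 4
    print(is_acyclic([1, 2, 3], voting), is_acyclic([1, 2, 3], prop4))
```

The first scheme contains a directed jump cycle (globally inconsistent), while the second is acyclic (globally consistent).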
V. CONVERGENCE OF COMPETITIVE CNNS
A. A general result
The following general theorem on the asymptotic behavior of the solutions of a competitive CNN holds.

Theorem 1: Suppose that the CNN (N) satisfies Assumption 1, and that the positive directed decision graph G^+ associated with the CNN is globally consistent. Let y(t), t ≥ 0, be an output solution of (N) such that y(t) ∈ R for all t ≥ t^*. Then, the following hold.

a) The solution y(t) undergoes at most n - 1 jumps between regions R_i^+ for t ≥ t^*, hence there exist an index w ∈ {1, 2, ..., n} and t_w > t^*, such that we have y(t) ∈ R_w^+, t ≥ t_w (neuron w is the eventual winning neuron). Furthermore, there exists the limit \lim_{t \to +\infty} y_w(t) = y_w(∞) ∈ [-1, 1].

b) If we have -1 < y_w(t) < 1, t ≥ t_w, then there exists the limit \lim_{t \to +\infty} y(t) = y(∞) ∈ K^n. Otherwise, we have y_w(t) = 1 or y_w(t) = -1, t ≥ t_{w1}, for some t_{w1} ≥ t_w.
B. Second-Order and Third-Order Competitive CNNs

For competitive CNNs with two neurons the next basic result can be proved.

Theorem 2: Consider a second-order CNN (N) satisfying Assumption 1. Then, the positive directed decision graph G^+ is globally consistent; moreover, (N) is convergent.

A third-order CNN in general does not have a globally consistent decision scheme (G^+ is in general not acyclic), and it may be non-convergent, as shown in the next example.
Example. In [10], the following third-order competitive CNN has been considered

\begin{pmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{pmatrix} = - \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} + \begin{pmatrix} 0 & -\alpha & -\beta \\ -\beta & 0 & -\alpha \\ -\alpha & -\beta & 0 \end{pmatrix} \begin{pmatrix} g(x_1) \\ g(x_2) \\ g(x_3) \end{pmatrix}    (6)

where α, β > 0. It can be verified that when the parameters α, β belong to the open region

R_{osc} = \left\{ (\alpha, \beta) : \alpha + \beta > 2; \ \alpha < \frac{\beta^2 + 1}{\beta + 1}; \ \beta < \frac{\alpha^2 + 1}{\alpha + 1} \right\}    (7)

then (6) induces the positive directed decision scheme G^+: 1 → 2, 2 → 3, and 3 → 1, which has a directed jump cycle. It is shown in [10] that for these parameters the solutions of (6) display large-size non-vanishing oscillations.
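A simulation makes the non-convergent behavior visible. The sketch below is an illustrative experiment, not a worked example from [10]: the values α = 0.9, β = 1.3 (with α + β > 2) are our own choice of oscillatory parameters. It integrates (6) and tracks the winning index argmax_i M_i(y(t)); the winner keeps rotating through all three neurons instead of settling, reflecting the directed jump cycle of G^+.

```python
# Euler simulation of the third-order competitive CNN (6); the winning
# index argmax_i M_i(y) rotates persistently for oscillatory parameters.

def g(rho):
    return (abs(rho + 1.0) - abs(rho - 1.0)) / 2.0

def simulate_cycle(alpha=0.9, beta=1.3, dt=0.005, t_end=200.0):
    T = [[0.0, -alpha, -beta],
         [-beta, 0.0, -alpha],
         [-alpha, -beta, 0.0]]
    x = [0.1, -0.05, 0.02]
    winners = []
    for k in range(int(t_end / dt)):
        y = [g(v) for v in x]
        M = [-y[i] + sum(T[i][j] * y[j] for j in range(3)) for i in range(3)]
        dx = [-x[i] + sum(T[i][j] * y[j] for j in range(3)) for i in range(3)]
        x = [x[i] + dt * dx[i] for i in range(3)]
        if k * dt >= 20.0:              # skip the initial transient
            w = max(range(3), key=lambda i: M[i])
            if not winners or winners[-1] != w:
                winners.append(w)
    return winners

if __name__ == "__main__":
    ws = simulate_cycle()
    print("winner changes after transient:", len(ws) - 1,
          "winners seen:", sorted(set(ws)))
```

The series of local decisions never terminates, in contrast with the globally consistent case of Theorem 1.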
In the next basic result it is shown that, when the third-order CNN has a globally consistent decision scheme, it also enjoys the property of convergence.
Theorem 3: Consider a third-order CNN (N) satisfying Assumption 1, and assume that the positive directed decision graph G^+ is globally consistent. Then, (N) is convergent.

It is possible to analytically characterize some classes of third-order competitive CNNs satisfying the hypotheses of Theorem 3. Consider the third-order CNN (N) defined by A = (A_{ij})_{i,j=1,2,3} = -D + T, and suppose that d_i = 1, i = 1, 2, 3, and that the input I ≠ 0. The following holds.

Property 4: Let A ⊂ R^9 be the set of interconnection parameters A_{ij}, i, j ∈ {1, 2, 3}, satisfying

A_{13} < A_{23} \le 0; \quad A_{12} < A_{32} \le 0; \quad A_{21} < A_{31} \le 0; \quad A_{11} \in \mathbb{R}

A_{22} < A_{12} - |A_{11} - A_{21}| - |A_{13} - A_{23}| - 2 I_M

A_{33} < \min \{ a_1, a_2 \}

where

a_1 = A_{13} - |A_{11} - A_{31}| - |A_{12} - A_{32}| - 2 I_M

a_2 = A_{23} - |A_{22} - A_{32}| - |A_{21} - A_{31}| - 2 I_M

and I_M = \max_{i=1,2,3} |I_i|. Then, within A the third-order CNN has the globally consistent decision scheme G_1^+: 2 → 1, 3 → 1, 3 → 2. Moreover, A is the union of a finite number of convex polyhedra with non-empty interior in R^9.
Property 4 has the following consequence. Given a nominal third-order CNN whose interconnection parameters belong to the interior of the region A defined in Property 4, then: (a) the nominal CNN has a globally consistent decision scheme and is convergent; (b) global consistency of decisions and convergence are properties that are robust with respect to (small) perturbations of the nominal CNN interconnections. We stress that within the region A there are CNNs with non-symmetric interconnection matrices T, for which convergence cannot, to the authors' knowledge, be proved by means of other existing methods.
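The inequalities of Property 4, as reconstructed above, are straightforward to verify mechanically. The sketch below tests whether given interconnection parameters A and inputs I lie in the region of Property 4; the sample matrix and input vector are our own illustrative values, chosen to satisfy the conditions.

```python
# Membership test for the region of Property 4 (A = -D + T, d_i = 1).
# Indices are 0-based: A[0][2] stands for A_13, and so on.

def in_region_prop4(A, I):
    IM = max(abs(v) for v in I)
    a1 = A[0][2] - abs(A[0][0] - A[2][0]) - abs(A[0][1] - A[2][1]) - 2 * IM
    a2 = A[1][2] - abs(A[1][1] - A[2][1]) - abs(A[1][0] - A[2][0]) - 2 * IM
    return (A[0][2] < A[1][2] <= 0 and      # A13 < A23 <= 0
            A[0][1] < A[2][1] <= 0 and      # A12 < A32 <= 0
            A[1][0] < A[2][0] <= 0 and      # A21 < A31 <= 0
            A[1][1] < A[0][1] - abs(A[0][0] - A[1][0])
                      - abs(A[0][2] - A[1][2]) - 2 * IM and
            A[2][2] < min(a1, a2))

if __name__ == "__main__":
    A = [[-1.0, -0.5, -0.5],
         [-0.5, -1.5, -0.2],
         [-0.2, -0.2, -2.0]]
    I = [0.05, -0.05, 0.0]
    print(in_region_prop4(A, I))
```

Perturbing a single entry so that one inequality fails (for example, raising A_33 above min{a_1, a_2}) makes the test reject the parameters, consistently with the strict inequalities defining the open region.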
VI. CONCLUSION
The paper has extended to standard competitive CNNs a
geometric approach previously developed by Grossberg for
analyzing convergence of a different class of competitive
dynamical systems. The approach permits to associate with a
competitive CNN a decision scheme, and to globally analyze
its dynamical behavior and convergence properties under the
hypothesis that the decision scheme is globally consistent. An
application has been considered to convergence of a class of
non-symmetric third-order competitive CNNs.
REFERENCES
[1] S. Grossberg, "Competition, decision, and consensus," J. Math. Anal. Appl., vol. 66, pp. 470–493, 1978.
[2] ——, "Decisions, patterns, and oscillations in nonlinear competitive systems with applications to Volterra-Lotka systems," J. Theor. Biol., vol. 73, pp. 101–130, 1978.
[3] ——, "Biological competition," Proc. Natl. Acad. Sci., vol. 77, pp. 2338–2342, 1980.
[4] L. O. Chua and L. Yang, "Cellular neural networks: Theory," IEEE Trans. Circuits Syst., vol. 35, no. 10, pp. 1257–1272, Oct. 1988.
[5] M. Di Marco, M. Forti, M. Grazzini, P. Nistri, and L. Pancioni, "Decisions and trajectory convergence in competitive CNNs," Department of Information Engineering, University of Siena, Via Roma 56, 53100 Siena, Italy, Tech. Rep. 15, 2006.
[6] L. O. Chua and T. Roska, "Stability of a class of nonreciprocal neural networks," IEEE Trans. Circuits Syst., vol. 37, pp. 1520–1527, Dec. 1990.
[7] P. Thiran, G. Setti, and M. Hasler, "An approach to information propagation in 1-D cellular neural networks-Part I: Local diffusion," IEEE Trans. Circuits Syst. I, vol. 45, pp. 777–789, 1998.
[8] B. E. Shi and K. Boahen, "Competitively coupled orientation selective cellular neural networks," IEEE Trans. Circuits Syst. I, vol. 49, pp. 388–394, 2002.
[9] G. A. Barreto, J. C. M. Mota, L. G. M. Souza, R. A. Frota, and L. Aguayo, "Condition monitoring of 3G cellular networks through competitive neural models," IEEE Trans. Neural Networks, vol. 16, pp. 1064–1075, Sep. 2005.
[10] M. Di Marco, M. Forti, and A. Tesi, "Bifurcations and oscillatory behavior in a class of competitive cellular neural networks," Int. J. Bifurcation and Chaos, vol. 10, no. 6, pp. 1267–1293, June 2000.
[11] M. Di Marco, M. Forti, M. Grazzini, and L. Pancioni, "Fourth-order nearly-symmetric CNNs exhibiting complex dynamics," Int. J. Bifurcation and Chaos, vol. 15, no. 5, pp. 1579–1587, May 2005.