
Hindawi Publishing Corporation

Discrete Dynamics in Nature and Society

Volume 2010, Article ID 513218, 15 pages

doi:10.1155/2010/513218

Research Article

Mean Square Exponential Stability of

Stochastic Cohen-Grossberg Neural Networks

with Unbounded Distributed Delays

Chuangxia Huang,1 Lehua Huang,1 and Yigang He2

1College of Mathematics and Computing Science, Changsha University of Science and Technology, Changsha, Hunan 410114, China
2College of Electrical and Information Engineering, Hunan University, Changsha, Hunan 410082, China

Correspondence should be addressed to Chuangxia Huang, cxiahuang@126.com

Received 5 July 2010; Accepted 1 September 2010

Academic Editor: Juan J. Nieto

Copyright © 2010 Chuangxia Huang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper addresses the mean square exponential stability of stochastic Cohen-Grossberg neural networks (SCGNN), whose state variables are described by stochastic nonlinear integro-differential equations. With the help of a Lyapunov function, stochastic analysis, and inequality techniques, some novel sufficient conditions for mean square exponential stability of SCGNN are given. Furthermore, we also establish some sufficient conditions for checking exponential stability of Cohen-Grossberg neural networks with unbounded distributed delays.

1. Introduction

Consider the Cohen-Grossberg neural network (CGNN) described by a system of ordinary differential equations

\dot{x}_i(t) = -a_i(x_i(t)) \Big[ b_i(x_i(t)) - \sum_{j=1}^{n} c_{ij} f_j(x_j(t)) \Big],   (1.1)

where t ≥ 0 and n ≥ 2; n corresponds to the number of units in the neural network; x_i(t) denotes the potential (or voltage) of cell i at time t; f_j(·) denotes a nonlinear output function between cells i and j; a_i(·) > 0 represents an amplification function; b_i(·) represents an appropriately behaved function; the n × n connection matrix C = (c_{ij})_{n×n} denotes the strengths of connectivity between cells, and if the output from neuron j excites (resp., inhibits) neuron i, then c_{ij} ≥ 0 (resp., c_{ij} ≤ 0).
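As a concrete illustration (not part of the original analysis), system (1.1) can be integrated with a simple forward-Euler scheme; the parameter choices below (a_i ≡ 1, b_i(u) = 2u, f_j = tanh, and the matrix C) are hypothetical and serve only to show the qualitative behavior.

```python
import numpy as np

def simulate_cgnn(x0, a, b, C, f, dt=1e-3, steps=5000):
    """Forward-Euler integration of dx_i/dt = -a_i(x_i)[b_i(x_i) - sum_j c_ij f_j(x_j)]."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        x = x + dt * (-a(x) * (b(x) - C @ f(x)))
    return x

# Hypothetical two-neuron instance: a_i = 1, b_i(u) = 2u, f_j = tanh.
C = np.array([[0.5, -0.2],
              [0.1,  0.4]])
x_final = simulate_cgnn([1.0, -1.0],
                        a=lambda x: np.ones_like(x),
                        b=lambda x: 2.0 * x,
                        C=C, f=np.tanh)
```

With these (stabilizing) parameters the damping term b_i dominates the bounded coupling, so the trajectory is driven toward the origin.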


During hardware implementation, time delays inevitably occur due to the finite switching speed of the amplifiers and communication time; it is therefore important to incorporate delays into neural network models. Delayed cellular neural networks, for example, have been successfully applied to moving image processing problems [1]. For model (1.1), Ye et al. [2] introduced delays by considering the following delay differential equations:

\dot{x}_i(t) = -a_i(x_i(t)) \Big[ b_i(x_i(t)) - \sum_{j=1}^{n} c_{ij} f_j(x_j(t)) - \sum_{k=1}^{K} \sum_{j=1}^{n} d^{k}_{ij} f_j(x_j(t - \tau_k)) \Big], \quad i = 1, \ldots, n.   (1.2)

Guo and Huang [3] generalized model (1.1) to the following delay differential equations:

\dot{x}_i = -a_i(x_i) \Big[ b_i(x_i) - \sum_{j=1}^{n} d_{ij} f_j(x_j(t - \tau_{ij})) + I_i \Big].   (1.3)

Some other, more detailed justifications for introducing delays into neural network models can be found in [4, 5] and the references therein.

The delays in all of the above-mentioned papers are restricted to be discrete. As is well known, constant fixed delays in models of delayed feedback provide a good approximation in simple circuits consisting of a small number of cells. However, neural networks usually have a spatial extent due to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths. There will thus be a distribution of conduction velocities along these pathways and hence a distribution of propagation delays. In these circumstances the signal propagation is not instantaneous and cannot be modeled with discrete delays; a more appropriate approach is to incorporate continuously distributed delays. For instance, in [6], Tank and Hopfield designed an analog neural circuit with distributed delays, which can solve a general problem of recognizing patterns in a time-dependent signal; for further work on continuously distributed delays we refer to [7-11]. Model (1.3) can then be modified into a system of integro-differential equations of the form

\frac{dx_i(t)}{dt} = a_i(x_i(t)) \Big[ -b_i(x_i(t)) + \sum_{j=1}^{n} c_{ij} f_j(x_j(t)) + \sum_{j=1}^{n} d_{ij} g_j\Big( \int_{-\infty}^{t} K_{ij}(t - s) x_j(s)\, ds \Big) + I_i \Big],   (1.4)

with initial values given by x_i(s) = ψ_i(s) for s ∈ (−∞, 0], where each ψ_i(·) is bounded and continuous on (−∞, 0].

In the past few years, the dynamical behaviors of stochastic neural networks have emerged as a new subject of research, mainly for two reasons: (i) in real nervous systems and in implementations of artificial neural networks, synaptic transmission is a noisy process brought on by random fluctuations in the release of neurotransmitters and other probabilistic causes, so noise is unavoidable and should be taken into account in modeling [12–14]; (ii) it has been realized that a neural network can be stabilized or destabilized by certain stochastic effects [15–17]. Although systems are often perturbed by various types of environmental “noise” [12–14, 18], a reasonable


interpretation of the “noise” perturbation is the so-called white noise dω(t)/dt, where ω(t) is a Brownian motion, also called a Wiener process [17, 19]. A more detailed account of the mechanism by which stochastic effects influence the interaction of neurons can be found in [19]. However, because the Brownian motion ω(t) is nowhere differentiable, the derivative dω(t)/dt cannot be defined in the ordinary way, and the stability analysis of stochastic neural networks is therefore difficult. In [12], by constructing a novel Lyapunov-Krasovskii functional, Zhu and Cao obtained several sufficient conditions ensuring exponential stability of the trivial solution in the mean square. In [13], using a linear matrix inequality (LMI) approach, Zhu et al. investigated the asymptotic mean square stability of Cohen-Grossberg neural networks with random delay. In [14], by utilizing the Poincaré inequality, Pan and Zhong derived sufficient conditions for almost sure exponential stability and mean square exponential stability of stochastic reaction-diffusion Cohen-Grossberg neural networks. In [20], Wang et al. developed an LMI approach to study the stability of SCGNN with mixed delays. To the best of the authors' knowledge, the convergence dynamics of stochastic Cohen-Grossberg neural networks with unbounded distributed delays have not yet been studied and remain a challenging task.

Keeping this in mind, in this paper, we consider the SCGNN described by the

following stochastic nonlinear integro-differential equations:

dx_i(t) = a_i(x_i(t)) \Big[ -b_i(x_i(t)) + \sum_{j=1}^{n} c_{ij} f_j(x_j(t)) + \sum_{j=1}^{n} d_{ij} g_j\Big( \int_{-\infty}^{t} K_{ij}(t - s) x_j(s)\, ds \Big) + I_i \Big] dt + \sum_{j=1}^{n} \sigma_{ij}(x_j(t))\, d\omega_j(t),   (1.5)

where σ(t) = (σ_{ij}(t))_{n×n} is the diffusion coefficient matrix and ω(t) = (ω_1(t), ..., ω_n(t))^T is an n-dimensional Brownian motion defined on a complete probability space (Ω, F, P) with a natural filtration {F_t}_{t≥0} (i.e., F_t = σ{ω(s) : 0 ≤ s ≤ t}).
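To get a feel for how (1.5) behaves, a scalar instance can be simulated with the Euler-Maruyama scheme. For the particular exponential kernel K(s) = γe^{-γs}, the distributed-delay term z(t) = ∫_{-∞}^{t} K(t - s)x(s) ds satisfies the auxiliary equation ż = γ(x - z) (the so-called linear chain trick), so no history buffer is needed. All coefficients below, including the diffusion σ(x) = 0.1x, are hypothetical choices for this sketch, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scalar instance of (1.5): a(x) = 1, b(x) = 2x,
# c f(x) = 0.3 tanh(x), d g(u) = 0.2 tanh(u), sigma(x) = 0.1 x,
# exponential kernel K(s) = gamma * exp(-gamma * s).
gamma, dt, steps = 1.0, 1e-3, 5000
x, z = 1.0, 1.0          # z approximates int_0^inf K(s) x(t - s) ds
path = []
for _ in range(steps):
    drift = -2.0 * x + 0.3 * np.tanh(x) + 0.2 * np.tanh(z)
    x = x + dt * drift + 0.1 * x * np.sqrt(dt) * rng.standard_normal()
    z = z + dt * gamma * (x - z)   # linear chain trick for the delay term
    path.append(x)
```

Because the damping term 2x dominates the Lipschitz constants of the coupling terms, the simulated path decays toward the trivial solution, in line with the kind of stability conditions derived later.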

Obviously, model (1.5) is quite general, and it includes several well-known neural network models as special cases, such as Hopfield neural networks, cellular neural networks, and bidirectional associative memory neural networks [21, 22].

The remainder of this paper is organized as follows. In Section 2, the basic notation and assumptions are introduced. In Section 3, some criteria are proposed to determine mean square exponential stability for (1.5); furthermore, we also establish sufficient conditions for checking exponential stability of Cohen-Grossberg neural networks with unbounded distributed delays. In Section 4, an illustrative example is given. We conclude this paper in Section 5.

2. Preliminaries

Note that a vector x = (x_1, ..., x_n)^T ∈ R^n is equipped with the usual norm

\|x\| = \Big( \sum_{i=1}^{n} |x_i|^2 \Big)^{1/2}.   (2.1)


For the sake of convenience, some of the standing definitions and assumptions are formulated below.

Definition 2.1 (see [17]). The trivial solution of (1.5) is said to be mean square exponentially stable if there is a pair of positive constants λ and G such that

E\|x(t; t_0, x_0)\|^2 \le G \|x_0\|^2 e^{-\lambda (t - t_0)}, \quad t \ge t_0,   (2.2)

for all x_0 ∈ R^n, where λ is called the convergence rate.
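For intuition about Definition 2.1, consider the scalar linear test equation dx = -a x dt + σ x dω (an illustration chosen for this note, not an equation from the paper). Itô's formula gives dE[x²]/dt = (-2a + σ²)E[x²], so the trivial solution is mean square exponentially stable exactly when 2a > σ², with convergence rate λ = 2a - σ². The closed form can be cross-checked with a seeded Monte Carlo run:

```python
import numpy as np

def ms_rate(a, sigma):
    """Mean-square convergence rate for dx = -a*x dt + sigma*x dw:
    E[x(t)^2] = x0^2 * exp(-(2a - sigma^2) t), stable iff 2a > sigma^2."""
    return 2.0 * a - sigma ** 2

# Euler-Maruyama Monte Carlo estimate of E[x(t)^2] versus the closed form.
rng = np.random.default_rng(1)
a, sigma, x0 = 1.0, 0.5, 1.0
dt, steps, paths = 1e-3, 1000, 20000
x = np.full(paths, x0)
for _ in range(steps):
    x = x + dt * (-a * x) + sigma * x * np.sqrt(dt) * rng.standard_normal(paths)
t = steps * dt
empirical = float(np.mean(x ** 2))
analytic = x0 ** 2 * np.exp(-ms_rate(a, sigma) * t)
```

Here the empirical second moment agrees with the analytic decay e^{-(2a-σ²)t} up to Monte Carlo and discretization error, illustrating how noise (σ² > 0) slows the mean square convergence rate.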

One also assumes the following:

(H1) there exist positive constants L_j, G_j, j = 1, ..., n, such that

|f_j(u) - f_j(v)| \le L_j |u - v|, \quad |g_j(u) - g_j(v)| \le G_j |u - v|, \quad \forall u, v \in R;   (2.3)

(H2) there exist positive constants \underline{b}_j such that

(u - v)\big(b_j(u) - b_j(v)\big) \ge \underline{b}_j (u - v)^2, \quad \forall u, v \in R;   (2.4)

(H3) there exist positive constants \underline{\alpha}_i, \overline{\alpha}_i such that

\underline{\alpha}_i \le a_i(x_i(t)) \le \overline{\alpha}_i;   (2.5)

(H4) \sigma_{ij}(x^*_j) = 0 (where x^* = (x^*_1, ..., x^*_n)^T is to be determined later), and there exist positive constants M_{ij}, i, j = 1, ..., n, such that

|\sigma_{ij}(u) - \sigma_{ij}(v)| \le M_{ij} |u - v|, \quad \forall u, v \in R.   (2.6)

Remark 2.2. The activation functions are typically assumed to be continuous, bounded, differentiable, and monotonically increasing, such as sigmoid-type functions; these conditions are not needed in this paper. For example, when neural networks are designed to solve optimization problems in the presence of constraints (linear, quadratic, or more general programming problems), unbounded activations modeled by diode-like exponential-type functions are needed to impose constraint satisfaction [3]. In this paper, the activation functions f_i(·), g_i(·) also include some typical functions widely used in circuit design, such as nondifferentiable piecewise-linear output functions of the form f(u) = (1/2)(|u − 1| − |u + 1|) and nonmonotonically increasing functions such as the Gaussian and inverse Gaussian functions; see [4, 5, 23] and the references therein.
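The global Lipschitz hypothesis of (H1) can be spot-checked numerically for the two activation families just mentioned; the sampling interval and the concrete Gaussian form g(u) = e^{-u²} are illustrative assumptions for this sketch.

```python
import numpy as np

def f(u):
    """Piecewise-linear output function from the remark: f(u) = (1/2)(|u-1| - |u+1|)."""
    return 0.5 * (np.abs(u - 1.0) - np.abs(u + 1.0))

def g(u):
    """A nonmonotonic Gaussian activation (illustrative choice)."""
    return np.exp(-u ** 2)

# Empirical Lipschitz ratios |h(u) - h(v)| / |u - v| over random sample pairs.
rng = np.random.default_rng(2)
u = rng.uniform(-5.0, 5.0, 100000)
v = rng.uniform(-5.0, 5.0, 100000)
mask = u != v
L_f = float(np.max(np.abs(f(u[mask]) - f(v[mask])) / np.abs(u[mask] - v[mask])))
L_g = float(np.max(np.abs(g(u[mask]) - g(v[mask])) / np.abs(u[mask] - v[mask])))
```

The empirical ratios stay at or below 1, consistent with Lipschitz constants L = 1 for f (slope −1 on [−1, 1], constant outside) and max|g'| = √2·e^{−1/2} ≈ 0.858 for g, so both satisfy (H1) without being monotone or differentiable everywhere.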


Using the variable substitution t − s ↦ s, we get

\int_{-\infty}^{t} K_{ij}(t - s) x_j(s)\, ds = \int_{0}^{\infty} K_{ij}(s) x_j(t - s)\, ds;   (2.7)

therefore, for convenience, system (1.5) can be put in the form

dx_i(t) = a_i(x_i(t)) \Big[ -b_i(x_i(t)) + \sum_{j=1}^{n} c_{ij} f_j(x_j(t)) + \sum_{j=1}^{n} d_{ij} g_j\Big( \int_{0}^{\infty} K_{ij}(s) x_j(t - s)\, ds \Big) + I_i \Big] dt + \sum_{j=1}^{n} \sigma_{ij}(x_j(t))\, d\omega_j(t).   (2.8)

As usual, the initial conditions for system (2.8) are x(t) = \phi(t), -\infty < t \le 0, \phi \in L^2_{F_0}((-\infty, 0]; R^n), where L^2_{F_0}((-\infty, 0]; R^n) denotes the family of all F_0-measurable, R^n-valued random variables \phi satisfying E\{\sup_{-\infty < s \le 0} |\phi(s)|^2\} < \infty.

The conditions (H1) and (H4) imply that (2.8) has a unique global solution on t ≥ 0 for these initial conditions [17].

If V \in C^{1,2}(R \times R^n; R_+), define an operator LV associated with (2.8) by

LV(t, x) = V_t(t, x) + \sum_{i=1}^{n} \frac{\partial V(t, x)}{\partial x_i}\, a_i(x_i(t)) \Big[ -b_i(x_i(t)) + \sum_{j=1}^{n} c_{ij} f_j(x_j(t)) + \sum_{j=1}^{n} d_{ij} g_j\Big( \int_{0}^{\infty} K_{ij}(s) x_j(t - s)\, ds \Big) + I_i \Big] + \frac{1}{2} \mathrm{trace}\big( \sigma^T V_{xx}(t, x)\, \sigma \big),   (2.9)

where V_t(t, x) = \partial V(t, x)/\partial t and V_{xx}(t, x) = (\partial^2 V(t, x)/\partial x_i \partial x_j)_{n \times n}.

We always assume that the delay kernels K_{ij}, i, j = 1, ..., n, are real-valued nonnegative functions defined on [0, ∞) satisfying

\int_{0}^{\infty} K_{ij}(s)\, ds = 1, \qquad \int_{0}^{\infty} K_{ij}(s) e^{\mu s}\, ds < \infty,   (2.10)

for some positive constant μ. A typical example of such a delay kernel is K_{ij}(s) = (s^r / r!)\, \gamma_{ij}^{r+1} e^{-\gamma_{ij} s} for s ∈ [0, ∞), where \gamma_{ij} ∈ (0, ∞) and r ∈ {0, 1, ..., n}. These kernels have been used in [6, 9, 24] for various stability investigations on neural network models; in [24] they are called the Gamma Memory Filter.
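Both conditions in (2.10) can be verified numerically for the Gamma memory filter; for 0 < μ < γ_{ij} the exponentially weighted integral even has the closed form (γ/(γ − μ))^{r+1}. The specific values r = 2, γ = 1.5, μ = 0.5 below are arbitrary test choices, not parameters from the paper.

```python
import math
import numpy as np

def gamma_kernel(s, r, g):
    """Gamma memory filter K(s) = (s^r / r!) * g^(r+1) * exp(-g*s)."""
    return (s ** r / math.factorial(r)) * g ** (r + 1) * np.exp(-g * s)

r, g, mu = 2, 1.5, 0.5            # arbitrary choices with 0 < mu < gamma
ds = 1e-3
s = np.arange(0.0, 80.0, ds)      # the integrand is negligible beyond s = 80
total = np.sum(gamma_kernel(s, r, g)) * ds              # ~1, first condition in (2.10)
weighted = np.sum(gamma_kernel(s, r, g) * np.exp(mu * s)) * ds
closed_form = (g / (g - mu)) ** (r + 1)                 # = 3.375 for these values
```

The quadrature confirms that the kernel integrates to 1 and that its exponential moment is finite (and matches the closed form) whenever μ < γ, which is exactly what condition (2.10) requires.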