Parametric Stabilization of the Ring and Linear Neural Network with Two Delays
1. Introduction
Over the past decades, the analysis of the dynamics of neural networks such as Hopfield neural networks, cellular neural networks and Cohen-Grossberg neural networks has received considerable attention after the publication of the representative works [1] [2] [3]. A variety of local and global stability conditions for different classes of delayed neural networks have been published [4] [5] [6]. In particular, ring neural networks, which can describe biological networks (for example, nuclei of the human brain) and social networks, have attracted growing attention. The dynamics of a ring neural network with delayed coupling are completely analyzed via the characteristic equation and Lyapunov functionals in [7] [8] [9]. Some delay-dependent and delay-independent stability criteria are obtained for the neural network with one delay [10]. By applying the center manifold theorem and the normal form theory, the bifurcations and chaotic behavior of a discrete-time delayed Hopfield neural network with ring architecture are studied in [11]. On the other hand, if one of the links between two adjacent neurons of a ring is removed, a linear configuration results [12]. The comparative analysis of the stability of the ring and linear neural networks with delayed interactions is an interesting topic. From the point of view of the number of neurons, the principle that “the breaking of the ring neural network with one delay contributes to the stability” is proved in [13] [14], where a “paradoxical region” is also found in the parameter space, wherein the neural ring is stable while the linear neural configuration is unstable. However, if there are two or more delays in the connection links, the stability analysis becomes more complex. How do the time delays affect the stability and dynamics of the system?
In this paper, we first establish the stability conditions of a ring neural network with double delays, whose mathematical model is stated as follows (see Figure 1):
(1.1)
where
is the state of the ith neuron at time t,
is the neuron gain,
is the activation function of neurons,
is the time delay in signal transmission between neurons, and
,
is the forward connection weight between neurons,
is the reverse connection weight between neurons. Moreover, if the link between the first and last neurons is cut, system (1.1) becomes a linear neural network (see Figure 2):
(1.2)
Figure 1. Schematic of a ring neural network (1.1).
Figure 2. Schematic of a linear neural network (1.2).
A comparative analysis of (1.1) and (1.2) is also presented in the paper.
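A quick numerical illustration of a model of this type may help fix ideas. The sketch below assumes a common Hopfield-type ring form, x_i'(t) = -κ x_i(t) + a f(x_{i+1}(t-τ₁)) + b f(x_{i-1}(t-τ₂)) with f = tanh; the symbols κ, a, b, τ₁, τ₂ and the choice f = tanh are illustrative assumptions, not necessarily the exact notation of (1.1).

```python
import math

def simulate_ring(n=4, kappa=1.0, a=0.2, b=0.2, tau1=1.0, tau2=1.5,
                  x0=0.5, dt=0.01, T=50.0):
    """Euler integration of an assumed ring model
       x_i'(t) = -kappa*x_i + a*tanh(x_{i+1}(t-tau1)) + b*tanh(x_{i-1}(t-tau2))."""
    d1, d2 = int(round(tau1 / dt)), int(round(tau2 / dt))
    steps = int(round(T / dt))
    # constant initial history on [-max(tau1, tau2), 0]
    hist = [[x0] * n for _ in range(max(d1, d2) + 1)]
    for _ in range(steps):
        x = hist[-1]          # current state x(t)
        xd1 = hist[-1 - d1]   # state delayed by tau1
        xd2 = hist[-1 - d2]   # state delayed by tau2
        new = [x[i] + dt * (-kappa * x[i]
                            + a * math.tanh(xd1[(i + 1) % n])
                            + b * math.tanh(xd2[(i - 1) % n]))
               for i in range(n)]
        hist.append(new)
        hist.pop(0)  # keep only the history window we need
    return hist[-1]

final = simulate_ring()
print(max(abs(v) for v in final))
```

With |a| + |b| < κ the trajectory decays to the origin regardless of the two delays, which is the delay-independent regime studied in Section 3.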
The paper is organized as follows. In Section 2, we linearize Equation (1.1) and present the characteristic equation. Sections 3 and 4 are devoted to establishing the delay-dependent and delay-independent stability conditions. In Section 5, we discuss the linear case in which the link between two adjacent neurons is broken. We make a comparative analysis of the ring neural network and the linear one in Section 6.
Our contributions in this paper are as follows. Firstly, we establish the stability region in the parameter space of a Hopfield-type neural network (1.1) with bi-directional ring architecture and two delays; the delay-independent stability conditions (see Theorem 3.1 and Theorem 3.2) and delay-dependent stability conditions (see Theorem 3.3) are presented. Secondly, if a link between two adjacent neurons in (1.1) is cut, we obtain a linear network (1.2), whose stability conditions (see Theorem 5.5 and Theorem 5.6) are also deduced. Moreover, we show that the breaking of the neural ring extends the stability region.
2. Set-Up of (1.1) and the Characteristic Equation
Since
, the linearized equation of (1.1) at the origin can be written in the following vector form:
(2.1)
where
, and “T” denotes the transpose of a vector or a matrix. Moreover,
(2.2)
It is well known that the zero solution of (1.1) is locally stable if and only if the zero solution of (2.1) is (locally, and hence globally, since (2.1) is linear) asymptotically stable.
By a simple calculation, the characteristic equation of (2.1) is
(2.3)
where i is the imaginary unit, and
,
.
Remark. The roots of Equation (2.3) appear in pairs. If
is a root of (2.3), the same is true for
.
Remark. If there are three neurons,
, Equation (2.3) takes the following form:
(2.4)
3. The Distribution of the Roots of an Exponential Polynomial (2.3)
It is well known that system (2.1) is asymptotically stable if all the roots of Equation (2.3) have negative real parts. In this section, we discuss the distribution of the roots of a first-degree exponential polynomial:
(3.1)
which is derived from (2.3).
Theorem 3.1. If
, then all roots of Equation (2.3) have negative real parts for every
. So the trivial solution of system (2.1) is asymptotically stable.
Proof. Denote
and
(3.2)
Obviously,
(3.3)
Since
, we have
(3.4)
and
(3.5)
Combining with (3.3)-(3.5), if
,
(3.6)
Hence, all roots of Equation (2.3) have negative real parts. □
Theorem 3.2. If
, Equation (2.3) has at least one nonnegative real root for every
. So the trivial solution of system (2.1) is not asymptotically stable.
Proof. From Equation (3.1), we have
It is obvious that
, and
. Then if
, we have
. Hence, by the intermediate value theorem, we obtain that Equation (2.3) has at least one nonnegative real root. □
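The intermediate-value argument above can be checked numerically. The sketch below assumes a scalar factor of the characteristic equation of the hedged form h(λ) = λ + κ − c e^{−λτ} (the names h, κ, c, τ are illustrative, not necessarily the paper's notation); when c > κ > 0, h(0) = κ − c < 0 while h(λ) → +∞ as λ → +∞, so bisection locates the nonnegative real root:

```python
import math

def h(lam, kappa=1.0, c=1.5, tau=2.0):
    # assumed scalar factor of the characteristic equation:
    # h(lam) = lam + kappa - c*exp(-lam*tau)
    return lam + kappa - c * math.exp(-lam * tau)

# h(0) = kappa - c < 0 when c > kappa, and h(lam) -> +infinity,
# so the intermediate value theorem yields a nonnegative real root.
lo, hi = 0.0, 1.0
while h(hi) < 0:          # bracket the sign change
    hi *= 2.0
for _ in range(80):       # bisection
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if h(mid) < 0 else (lo, mid)
root = 0.5 * (lo + hi)
print(root, h(root))
```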
Remark. In fact, Theorem 3.1 and Theorem 3.2 completely cover the case
. Next, we focus our attention on the cases
or
.
Theorem 3.3. For
, if
(3.7)
and
, then all roots of Equation (2.3) have negative real parts. So the trivial solution of system (2.1) is asymptotically stable.
Proof. Let
be a root of Equation (2.3). Substituting it into (3.1), we have
Without loss of generality, we can assume
, since the roots of (2.3) appear in pairs. In fact, if
, and let
(3.8)
Obviously,
(3.9)
Next,
(3.10)
and
(3.11)
So, for
,
, if
. This contradicts
. Hence,
, and all roots of Equation (2.3) have negative real parts. □
Remark. For
, the condition (3.7) can be satisfied for some
except for
. Actually, if there are four neurons,
, then
, the condition (3.7) turns to
(3.12)
respectively. Hence, whether
or
, the range of the parameter k can be empty. This shows that the sufficient condition (3.7) is too restrictive for
.
4. The Case for n = 4
In this section, we discuss the case of four neurons in the ring neural network (2.1), as follows:
(4.1)
The characteristic equation of (4.1) is
(4.2)
Denote
(4.3)
Theorem 4.1. Assume
. If
or
, Equation (4.2) has at least one nonnegative real root for every
. So the trivial solution of system (4.1) is not asymptotically stable.
Proof. It is obvious that
and
,
. Then if
or
, we have
or
. Hence, by the intermediate value theorem, we obtain that Equation (4.2) has at least one nonnegative real root. □
Remark. Combining Theorem 3.1 and Theorem 4.1, we see that if
, the stability and instability of (4.1) have been fully discussed. Next, we will study the case
.
Theorem 4.2. If
, and one of the following conditions holds:
1)
,
,
;
2)
,
,
.
then all roots of Equation (4.2) have negative real parts. So the trivial solution of system (4.1) is asymptotically stable.
Proof. Since the complex roots
of (4.2) appear in pairs, we may assume
. Moreover, substituting
into each equation of (4.3), after a simple calculation we have
(4.4)
and
(4.5)
Case 1: If
is a root of
, separating the real and imaginary parts, we have
(4.6)
It is obvious that
. This contradicts
.
Case 2: If
is a root of
, we have
(4.7)
where,
Since
Then, if
,
. This contradicts
.
Case 3: If
is a root of
, separating the real and imaginary parts, we have
(4.8)
By the second equation of (4.8),
and then,
(4.9)
By squaring both sides of the above Equation (4.9), we have
(4.10)
Since
Then, if
,
. This contradicts
.
Case 4: If
is a root of
, separating the real and imaginary parts, we have
(4.11)
By squaring both sides of the above Equation (4.11), we have
(4.12)
Since
Then, if
,
. This contradicts
.
A similar argument applies for condition 2); we omit it here. □
5. Breaking of the Ring
In this section, we discuss the case in which a link between two adjacent neurons is broken, as shown in Figure 2. Then the link matrix
in Equation (2.1) becomes
(5.1)
and Equation (2.1) changes into
(5.2)
The characteristic polynomial of (5.2) is
(5.3)
which is a tridiagonal determinant. It is obvious that if
(
, or
), the bi-directional connection in (5.2) reduces to a single one-way connection. By (5.3), all the characteristic roots are
. Hence, system (5.2) is always stable, no matter how
or
are taken. Next we focus on the case
.
Lemma 5.1.
(5.4)
Proof. This result can be proved by a straightforward calculation and induction on the order of the determinant, so we omit it here. □
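The induction behind Lemma 5.1 rests on the standard three-term recurrence for a Toeplitz tridiagonal determinant with diagonal x, superdiagonal a and subdiagonal b: D_n = x D_{n−1} − ab D_{n−2}, with D_0 = 1 and D_1 = x. The check below compares this recurrence against a direct Gaussian-elimination determinant; the concrete values of x, a, b are arbitrary test data, not the paper's parameters.

```python
def det_gauss(M):
    # plain Gaussian elimination determinant (pivots stay nonzero here
    # because the test matrix is diagonally dominant)
    M = [row[:] for row in M]
    n, d = len(M), 1.0
    for i in range(n):
        d *= M[i][i]
        for j in range(i + 1, n):
            f = M[j][i] / M[i][i]
            for k in range(i, n):
                M[j][k] -= f * M[i][k]
    return d

def tridiag(n, x, a, b):
    # Toeplitz tridiagonal matrix: x on the diagonal, a above, b below
    return [[x if i == j else a if j == i + 1 else b if j == i - 1 else 0.0
             for j in range(n)] for i in range(n)]

x, a, b = 3.0, 1.2, 0.7
D = [1.0, x]                              # D_0 = 1, D_1 = x
for n in range(2, 8):
    D.append(x * D[-1] - a * b * D[-2])   # D_n = x*D_{n-1} - a*b*D_{n-2}
    assert abs(D[n] - det_gauss(tridiag(n, x, a, b))) < 1e-9
print([round(v, 6) for v in D])
```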
According to Lemma 5.1, we can get
in (5.3) as follows:
(5.5)
where
(5.6)
Solving the characteristic equation
, we have
(5.7)
By applying Euler’s formula, we know that
(5.8)
where
(5.9)
After a series of algebraic simplification, we have
(5.10)
Let
(5.11)
the Equation (5.10) is transformed into
(5.12)
Hence, the roots of Equation (5.12) are the characteristic roots of (5.2). The following result is then immediate.
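The reduction to the family of scalar equations (5.12) uses the spectrum of the tridiagonal link matrix. As a hedged numerical check, assume the link matrix of the linear chain is the standard n × n Toeplitz tridiagonal matrix with diagonal x, superdiagonal a and subdiagonal b, with ab > 0; its eigenvalues are then x + 2√(ab) cos(pπ/(n+1)), p = 1, …, n, which matches the cosine structure appearing in (5.8)-(5.9). Whether x, a, b correspond to the paper's exact symbols is an assumption.

```python
import math

def det_tridiag(n, x, a, b):
    # determinant of the n x n Toeplitz tridiagonal matrix via
    # the recurrence D_n = x*D_{n-1} - a*b*D_{n-2}, D_0 = 1, D_1 = x
    d0, d1 = 1.0, x
    for _ in range(n - 1):
        d0, d1 = d1, x * d1 - a * b * d0
    return d1

n, x, a, b = 6, 2.0, 0.9, 0.4
# claimed spectrum of the n x n Toeplitz tridiagonal matrix
mus = [x + 2.0 * math.sqrt(a * b) * math.cos(p * math.pi / (n + 1))
       for p in range(1, n + 1)]
for mu in mus:
    # mu is an eigenvalue iff det(M - mu*I) = 0; shifting the diagonal
    # replaces x by x - mu in the determinant recurrence
    assert abs(det_tridiag(n, x - mu, a, b)) < 1e-9
print([round(m, 4) for m in mus])
```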
Theorem 5.2. If there exists some p and n in (5.9) such that
, then
is a root of Equation (5.12). So the trivial solution of system (5.2) is unstable.
Theorem 5.3. Assume
, and for
,
. If
, all roots of Equation (5.12) have negative real parts. So the trivial solution of system (5.2) is asymptotically stable.
Proof. If
, the Equation (5.12) turns to
(5.13)
So we have
(5.14)
and then, whether
or
,
is always true. □
Theorem 5.4. Assume
and there exists some p and n in (5.9) such that
. Then Equation (5.12) has at least one nonnegative real root. So the trivial solution of system (5.2) is unstable.
Proof. Denote
(5.15)
If
and
hold,
. On the other hand,
(5.16)
Hence, by the intermediate value theorem, we obtain that Equation (5.12) has at least one nonnegative real root. □
Theorem 5.5. Assume
. If one of the following conditions holds:
(i)
, and for
,
;
(ii)
, and for
,
;
then all the roots of Equation (5.12) have negative real parts, for any
. So the trivial solution of system (5.2) is asymptotically stable.
Proof. Suppose
is a purely imaginary root of Equation (5.12). Then we have
that is,
(5.17)
Separating the real and imaginary parts, we get
(5.18)
From Equations (5.18), we see that
must satisfy
(5.19)
Case (i): If
, the Equation (5.19) turns to
(5.20)
which has no root, since
.
Case (ii): It is obvious that Equation (5.19) has no root if condition (ii) holds. □
Remark. If
, and there exists some
such that
, the Equation (5.19) turns to
(5.21)
It is clear that Equation (5.19) has only one positive root, denoted by
. Then, by the second equation of (5.18), we define
(5.22)
(5.23)
and we can have the following result.
Theorem 5.6. Assume
,
, and there exists some p such that
. Then the roots of Equation (5.12) have negative real parts for
. So the trivial solution of system (5.2) is asymptotically stable for
.
Remark. The results of Theorem 5.2-Theorem 5.6 indicate the stability and instability regions on the
parameter plane, as shown in Figure 3.
6. The Comparative Analysis When a Link Is Cut
In this section, we make a comparative analysis of the ring neural network (2.1) and the broken ring (5.2). The following results show that the breaking of the neural ring extends the stability region.
Figure 3. A chart of the stability region for (5.2) on the
parameter plane.
The Case of Delay-Independent Stability
Theorem 3.1 and Theorem 5.5 show that the neural network does not lose stability when a link between two adjacent neurons is broken. In fact, if
(6.1)
system (2.1) is delay-independent stable. Meanwhile, from Theorem 5.5, we have that the broken neural network (5.2) is stable if and only if
(6.2)
Obviously, the condition (6.2) is more relaxed than (6.1), since
(6.3)
The Case of Delay-Dependent Stability
Firstly, we take
as an example. Then the conditions (3.7) in Theorem 3.3 become
(6.4)
We can choose
, then (6.4) holds and the delay must satisfy
(6.5)
On the other hand, there exists
such that the conditions in Theorem 5.6 hold. Combining with (5.9) (5.22) (5.23), we get that
(6.6)
Obviously, the delay margin is enlarged from 0.707 to 0.898. Even if
, by Theorem 3.3, we have
. Hence, the stability region of the time delay
is enlarged after the neural ring is broken.
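The delay-margin computations above can be illustrated on a generic scalar delay equation; the numbers 0.707 and 0.898 come from the paper's specific parameter values (not reproduced here), so the sketch below uses arbitrary κ = 1, c = −2 in the assumed equation x'(t) = −κx(t) + c x(t−τ). For c < −κ the rightmost root crosses the imaginary axis at frequency ω = √(c²−κ²) when τ = arccos(κ/c)/ω, which gives the delay margin:

```python
import cmath, math

kappa, c = 1.0, -2.0   # assumed scalar equation x'(t) = -kappa*x + c*x(t - tau)
omega = math.sqrt(c * c - kappa * kappa)   # crossing frequency on the imaginary axis
tau_star = math.acos(kappa / c) / omega    # delay margin

# at tau = tau_star, the characteristic function
# h(lam) = lam + kappa - c*exp(-lam*tau) has a root at lam = i*omega
h = 1j * omega + kappa - c * cmath.exp(-1j * omega * tau_star)
print(round(tau_star, 4), abs(h))
```

For τ below this value all roots stay in the open left half-plane, so the system is asymptotically stable; at τ = τ* a pair of roots reaches the imaginary axis.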
7. Concluding Remarks
Based on the approach of the characteristic equation, we establish the stability region of the system parameters for a ring and a linear neural network, which is different from the work in [13]. The discussion in [13] is mainly from the point of view of the number of neurons (small or sufficiently large, odd or even). For example, it shows that if the number of neurons is small, then a “paradoxical” region exists in the parameter space, wherein the ring neural configuration is stable while the linear one is unstable. However, the results in our paper do not depend on the number of neurons as long as
. Hence, the conclusions in this paper may be considered a useful complement. Furthermore, exploring the dynamics of a ring neural network, such as Hopf bifurcation, chaos and the existence of periodic solutions, is still a challenging direction for future research. Applications of such neural networks in fields like the human psyche, signal processing and associative memories are also worth investigating.
Funding
This work has been supported by the Natural Science Foundation of China (12001343) and the General Project of the Natural Science Foundation of Shanxi Province (201801D121027).