
In this paper, we discuss the drive-response synchronization problem for two memristive neural networks with retarded and advanced arguments in the presence of additive noise. The control law consists of a linear time-delay feedback term and a discontinuous feedback term. Stability is proved within the framework of stochastic differential equations. Finally, simulation results verify the correctness of the theoretical results.

Over the past ten years, neural networks have shown great application potential in pattern classification and associative memory, and memristive neural networks in particular have attracted extensive attention. In [

Compared with traditional neural networks, and building on the earlier research in [

In this paper, we continue to discuss the master-slave synchronization of memristive neural networks (MNNs) with retarded and advanced arguments. The model is a stochastic differential equation subject to additive noise. First, we design a control law that consists of a discontinuous feedback term and a term involving the deviating function. A sufficient condition for global synchronization in the mean square is then given in the form of a linear matrix inequality (LMI).

In addition, the extended feedback term is built from an adaptive control law, which makes the control gain of the discontinuous feedback term adjustable. In this paper, as described in [

First, we present preliminaries on the memristor model underlying the RMNN (recursive memristive neural network) and the DMNN (delayed memristive neural network). We also recall some definitions, remarks, and lemmas, and we introduce the compression lag feature defined in [

U ( h ( t ) ) = { U ′ ( h ( t ) ) , D − h ( t ) < 0 ; U ″ ( h ( t ) ) , D − h ( t ) > 0 ; U ( h ( t − ) ) , D − h ( t ) = 0 , (1)

where h ( t ) is the voltage applied to the memristor, U ( h ( t ) ) is the memductance of the (voltage-controlled) memristor, D − h ( t ) is the left Dini derivative of h ( t ) with respect to t, and U ( h ( t − ) ) is the left limit of U ( h ( t ) ) ; at a switching instant, U ( h ( t − ) ) equals either U ′ ( h ( t ) ) or U ″ ( h ( t ) ) . The memductance function may be discontinuous.

As described in [

U ( h ( t ) ) = { U ′ , D − h ( t ) < 0 ; U ″ , D − h ( t ) > 0 ; U ( h ( t − ) ) , D − h ( t ) = 0 , (2)

where U ′ and U ″ are constants.
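As a concrete illustration, the switching rule (2) can be sketched as a small function; the tie-breaking at D − h ( t ) = 0 follows the definition above, while the function name and argument names are illustrative assumptions:

```python
# Hypothetical sketch of the piecewise memductance (2): the memductance
# switches between the two constants U' and U'' according to the sign of
# the left Dini derivative of the applied voltage h(t).
def memductance(dini_h: float, u_prime: float, u_dprime: float, u_left: float) -> float:
    """Return U(h(t)): U' if D-h(t) < 0, U'' if D-h(t) > 0,
    and the left limit U(h(t^-)) when D-h(t) = 0."""
    if dini_h < 0:
        return u_prime
    if dini_h > 0:
        return u_dprime
    return u_left  # D-h(t) = 0: keep the left limit
```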

In an MNN, the memristor replaces the resistor of the traditional neural network, as in [

d x ( t ) = [ − B x ( t ) + C ( x ( t ) ) f ( x ( t ) ) + A ( x ( t ) ) f ( x ( γ ( t ) ) ) ] d t + σ ( t , x ( t ) , x ( γ ( t ) ) ) d ω ( t ) , (3)

where x ( t ) ∈ R n is the state of the network; f ( ⋅ ) collects the activation functions of the neurons; γ ( t ) is the deviating function; C ( x ) = [ c i j ( f j ( x j ( t ) ) − x i ( t ) ) ] n × n and A ( x ) = [ a i j ( f j ( x j ( γ ( t ) ) ) − x j ( t ) ) ] n × n are the two memristive connection weight matrices, the latter associated with the delayed feedback. The functions c i j ( ⋅ ) and a i j ( ⋅ ) are defined as in (2): c i j represents the synaptic strength at time t and a i j the synaptic strength at γ ( t ) , and each connection weight can switch freely between its two values. The values of c i j ( ⋅ ) and a i j ( ⋅ ) are denoted { c ′ i j , c ″ i j } and { a ′ i j , a ″ i j } , respectively. We also write c ^ i j = max { c ′ i j , c ″ i j } , c ⌣ i j = min { c ′ i j , c ″ i j } , a ^ i j = max { a ′ i j , a ″ i j } , a ⌣ i j = min { a ′ i j , a ″ i j } .
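Under this switching rule, a state-dependent weight matrix such as C ( x ) can be assembled entrywise; the sign convention mapping the switching argument f j ( x j ) − x i to the two candidate values, and all numbers, are assumptions for illustration:

```python
import numpy as np

# Illustrative construction of a memristive connection weight matrix C(x):
# each entry c_ij switches between its two candidate values according to
# the sign of the switching argument f_j(x_j) - x_i, in the spirit of (2).
def memristive_weight_matrix(x, c_lo, c_hi, f=np.tanh):
    n = x.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            arg = f(x[j]) - x[i]          # switching argument of c_ij(.)
            C[i, j] = c_lo[i, j] if arg < 0 else c_hi[i, j]
    return C
```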

Let L F 0 2 ( [ − r , 0 ] ; R n ) be the family of R n -valued stochastic processes ξ ( s ) , s ∈ [ − r , 0 ] , such that ξ ( s ) is F 0 -measurable and ∫ − r 0 E ‖ ξ ( s ) ‖ 2 d s < ∞ , where E denotes mathematical expectation.

The initial condition of (3) is x ( t ) = φ ( t ) for t ∈ [ − r , 0 ] , with φ ∈ L F 0 2 ( [ − r , 0 ] ; R n ) . The solution x ( t ; φ ) of (3) is continuous and satisfies x ( s ; φ ) = φ ( s ) for s ∈ [ − r , 0 ] .

Throughout this paper, the following assumptions are used to support our proofs.

Assumption 1. f i ( 0 ) = g i ( 0 ) = σ i ( 0 ) = 0 , and | f i ( u ) | ≤ τ i for all u ∈ R with τ i > 0 . Moreover, for all u , v ∈ R there exist positive constants l i > 0 , G i > 0 and k i > 0 such that f , g and σ satisfy the following inequalities:

| f i ( v ) − f i ( u ) | ≤ l i | v − u | ,

| g i ( v ) − g i ( u ) | ≤ G i | v − u | ,

| σ i ( v ) − σ i ( u ) | ≤ k i | v − u | .
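A quick numerical spot-check of the first Lipschitz condition, assuming the common choice f = tanh (with Lipschitz constant l = 1); the sampling range is arbitrary:

```python
import numpy as np

# Spot-check of |f(v) - f(u)| <= l*|v - u| from Assumption 1, assuming the
# bounded activation f = tanh, whose Lipschitz constant is l = 1.
rng = np.random.default_rng(0)
u = rng.uniform(-5.0, 5.0, 1000)
v = rng.uniform(-5.0, 5.0, 1000)
lhs = np.abs(np.tanh(v) - np.tanh(u))   # |f(v) - f(u)|
rhs = 1.0 * np.abs(v - u)               # l*|v - u| with l = 1
```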

Assumption 2. (G1) There exists a constant θ * > 0 such that θ k * − θ k − 1 * ≤ θ * for all k ∈ N ,

(G2) 2 θ 2 [ ( N 1 + N 2 ) 2 + N 3 2 ] < 1 , 6 θ 2 ( N 1 2 + N 2 2 + N 3 2 ) e 6 θ 2 ( N 1 2 + N 3 2 ) < 1 ,

(G3) N 4 − μ N 5 > 0 . Moreover, | f i ( u ) | ≤ γ i holds for all u ∈ R , where γ i > 0 ,

(G4) The matrix diag ( a 1 , a 2 , ⋯ , a n ) − ( | a i j F j + b i j G j | ) n × n , where

N 1 = max 1 ≤ i ≤ n { ∑ j = 1 n | B i j | ( B − G 1 + F j ) } ,

N 2 = max 1 ≤ i ≤ n { ∑ j = 1 n | a i j | G 2 } ,

N 3 = max 1 ≤ i ≤ n { k i } .

Assumption 3. The function φ : R + × R n × R n → R n × n is uniformly Lipschitz continuous in the sense of the following trace bound:

t r a c e [ ( φ ( t , v 1 , u 1 ) − φ ( t , v 2 , u 2 ) ) T × ( φ ( t , v 1 , u 1 ) − φ ( t , v 2 , u 2 ) ) ] ≤ ‖ M 1 ( v 1 − v 2 ) ‖ 2 + ‖ M 2 ( u 1 − u 2 ) ‖ 2 ,

where M 1 and M 2 are constant matrices of appropriate dimensions.
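For one assumed special case of the noise intensity, namely σ ( t , v , u ) obtained by stacking M 1 v on top of M 2 u, the trace bound of Assumption 3 holds with equality, which the following sketch checks numerically (all matrices and test points are illustrative):

```python
import numpy as np

# Assumed special case of Assumption 3: sigma(t, v, u) = [M1 v; M2 u] stacked,
# for which trace[(d_sigma)^T (d_sigma)] equals ||M1 dv||^2 + ||M2 du||^2 exactly.
rng = np.random.default_rng(1)
M1 = rng.normal(size=(2, 2))
M2 = rng.normal(size=(2, 2))

def sigma(v, u):
    return np.concatenate([M1 @ v, M2 @ u])

v1, v2, u1, u2 = (rng.normal(size=2) for _ in range(4))
d = sigma(v1, u1) - sigma(v2, u2)
lhs = float(d @ d)                    # the trace reduces to a squared norm here
rhs = float(np.linalg.norm(M1 @ (v1 - v2))**2 + np.linalg.norm(M2 @ (u1 - u2))**2)
```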

Assumption 4. Let C 2,1 ( [ − r , + ∞ ) × R n ; R + ) denote the family of all nonnegative functions V ( t , x ) on [ − r , + ∞ ) × R n that are twice continuously differentiable in x and once continuously differentiable in t. For V ∈ C 2,1 ( [ − r , + ∞ ) × R n ; R + ) , let LV be the weak infinitesimal operator associated with the error system (6):

L V ( t , x ) = V t + V x [ − B e ( t ) + C ( y ( t ) ) f ( y ( t ) ) − C ( x ( t ) ) f ( x ( t ) ) + A ( y ( t ) ) f ( y ( γ ( t ) ) ) − A ( x ( t ) ) f ( x ( γ ( t ) ) ) + u ( t ) ] + t r a c e [ σ T V x x σ ] . (4)

where V x = ∂ V ( t , x ) ∂ x , V t = ∂ V ( t , x ) ∂ t , V x x = ( ∂ 2 V ( t , x ) ∂ x i ∂ x j ) n × n , and we abbreviate

σ ( t , e ( t ) , e ( γ ( t ) ) ) = σ .

Remark 1. For k ∈ N and t ∈ [ θ k , θ k + 1 ) , the deviating function is γ ( t ) = ξ k . System (3) is retarded when t ∈ ( ξ k , θ k + 1 ) , where t > γ ( t ) , and advanced when t ∈ [ θ k , ξ k ] , where t < γ ( t ) ; hence the deviating function γ ( t ) makes (3) a system of mixed type. In the drive-response setting, two identical RMNNs with different initial conditions are called the drive RMNN and the response RMNN, and the two are synchronized when the mean-square error between their states approaches 0 as time elapses. In this article, RMNN (3) is considered the master (drive) system. The slave (response) system is:

d y ( t ) = [ − B y ( t ) + C ( y ( t ) ) f ( y ( t ) ) + A ( y ( t ) ) f ( y ( γ ( t ) ) ) + u ( t ) ] d t + σ ( t , y ( t ) , y ( γ ( t ) ) ) d ω ( t ) , (5)

where u ( t ) ∈ R n is the control vector of this system. The initial condition of (5) is y ( t ) = ϕ ( t ) , t ∈ [ − r , 0 ] , with ϕ ∈ L F 0 2 ( [ − r , 0 ] ; R n ) . The core of this article is to design u ( t ) so that the master system synchronizes with the slave system. Setting e ( t ) = y ( t ) − x ( t ) and subtracting (3) from (5), we obtain the error system:

d e ( t ) = [ − B e ( t ) + C ( y ( t ) ) f ( y ( t ) ) − C ( x ( t ) ) f ( x ( t ) ) + A ( y ( t ) ) f ( y ( γ ( t ) ) ) − A ( x ( t ) ) f ( x ( γ ( t ) ) ) + u ( t ) ] d t + σ ( t , e ( t ) , e ( γ ( t ) ) ) d ω ( t ) . (6)

It is easy to see that the system (6) is equivalent to the following integral equation:

e i ( t ) = e i ( t 0 ) + ∫ t 0 t [ − ( B − G 1 ) e ( s ) + G 2 ( e ( γ ( s ) ) ) + G 3 s i g n ( e ( s ) ) + C ( y ( s ) ) f ( y ( s ) ) − C ( x ( s ) ) f ( x ( s ) ) + A ( y ( s ) ) f ( y ( γ ( s ) ) ) − A ( x ( s ) ) f ( x ( γ ( s ) ) ) ] d s + ∫ t 0 t σ i ( s , e ( s ) , e ( γ ( s ) ) ) d ω ( s ) . (7)

For i = 1 , 2 , ⋯ , n and t ≥ t 0 , we also have the equation:

y i ( t ) = y i ( t 0 ) + ∫ t 0 t [ − B y i ( s ) + C ( y ( s ) ) f ( y ( s ) ) + A ( y ( s ) ) f ( y ( γ ( s ) ) ) + I i ] d s + ∫ t 0 t σ ( s , y ( s ) , y ( γ ( s ) ) ) d B ( s ) .

Remark 2. Since the right-hand side of system (3) is discontinuous at the instants θ k , k ∈ N , standard results for stochastic differential equations do not apply directly to the deviating function γ ( t ) . A solution x ( t ) = ( x 1 ( t ) , x 2 ( t ) , ⋯ , x n ( t ) ) T of (3) is a continuous function; at each point θ k , k ∈ N , one-sided derivatives of x ( t ) exist, and on each interval [ θ k , θ k + 1 ] the derivative of x ( t ) exists, where σ ( t , e ( t ) , e ( γ ( t ) ) ) = σ ( t , y ( t ) , y ( γ ( t ) ) ) − σ ( t , x ( t ) , x ( γ ( t ) ) ) .
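The piecewise-constant deviating argument γ ( t ) described above can be sketched as a lookup over the switching grid; the grids θ k and ξ k below are illustrative assumptions:

```python
import bisect

# Sketch of the deviating argument: gamma(t) = xi_k for t in [theta_k, theta_{k+1}),
# with theta_k <= xi_k <= theta_{k+1}. The system is advanced where t < gamma(t)
# and retarded where t > gamma(t).
theta = [0.0, 1.0, 2.0, 3.0]          # switching instants theta_k (assumed)
xi = [0.5, 1.5, 2.5]                  # xi_k inside [theta_k, theta_{k+1}] (assumed)

def gamma(t: float) -> float:
    """Piecewise-constant deviating function gamma(t)."""
    k = bisect.bisect_right(theta, t) - 1
    k = max(0, min(k, len(xi) - 1))   # clamp to the defined intervals
    return xi[k]
```

For example, at t = 0.2 we get γ ( t ) = 0.5 > t (advanced), while at t = 0.8 we get γ ( t ) = 0.5 < t (retarded).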

The initial condition of the error system (6) is ψ ( t ) = ϕ ( t ) − φ ( t ) = e ( t ) for t ∈ [ − r , 0 ] , with ψ ∈ L F 0 2 ( [ − r , 0 ] ; R n ) . The aim is to design the control vector u ( t ) so that the synchronization problem for RMNNs (3) and (5) is solved, that is, e ( t ) → 0 in the mean square as t → + ∞ , with the influence of noise taken into account. Next, we recall the following lemma:

Lemma 1. For x , y ∈ R n and any positive definite matrix S ∈ R n × n , the following inequality holds: 2 x T y ≤ x T S x + y T S − 1 y .

In this part of the paper, we give a discontinuous control law under which the response RMNN (5) with time-delay feedback and the drive RMNN (3) are globally exponentially synchronized.

A Time-Delay Control Law with Constant Feedback Gains

The control vector D ( t ) is designed as:

D ( t ) = G 1 ( e ( t ) ) + G 2 ( e ( γ ( t ) ) ) + G 3 s i g n ( e ( t ) ) . (8)

where G 1 , G 2 and G 3 ∈ R n × n are constant gain matrices to be determined later, and G 3 is diagonal, G 3 = diag { g 31 , g 32 , ⋯ , g 3 n } . Substituting (8) into the error system (6) yields:

d e ( t ) = [ − ( B − G 1 ) e ( t ) + G 2 ( e ( γ ( t ) ) ) + G 3 s i g n ( e ( t ) ) + C ( y ( t ) ) f ( y ( t ) ) − C ( x ( t ) ) f ( x ( t ) ) + A ( y ( t ) ) f ( y ( γ ( t ) ) ) − A ( x ( t ) ) f ( x ( γ ( t ) ) ) ] d t + σ ( t , e ( t ) , e ( γ ( t ) ) ) d ω ( t ) . (9)
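The control law (8) itself is straightforward to evaluate once e ( t ) and e ( γ ( t ) ) are available; the following sketch assumes generic gain matrices and is only illustrative:

```python
import numpy as np

# Sketch of the control law (8): D(t) = G1 e(t) + G2 e(gamma(t)) + G3 sign(e(t)),
# with constant gain matrices G1, G2 and diagonal G3 (all values assumed).
def control(e_t, e_gamma, G1, G2, G3):
    return G1 @ e_t + G2 @ e_gamma + G3 @ np.sign(e_t)
```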

We define two matrices Q = diag { q 1 , q 2 , ⋯ , q n } and J = diag { j 1 , j 2 , ⋯ , j n } with j i = 2 ∑ j = 1 n ( | c ′ i j − c ″ i j | + | a ′ i j − a ″ i j | ) γ j , together with C ¨ = [ c ˜ i j ] n × n and A ¨ = [ a ˜ i j ] n × n , where c ˜ i j ∈ [ c ⌣ i j , c ^ i j ] and a ˜ i j ∈ [ a ⌣ i j , a ^ i j ] . The following theorem can now be stated.

Theorem 1. Let conditions (G1) and (G2) of Assumption 2 hold. Then any solution y ( t ) = ( y 1 ( t ) , y 2 ( t ) , ⋯ , y n ( t ) ) T of (5) satisfies the inequality:

E ‖ y ( γ ( t ) ) ‖ 2 ≤ μ E ‖ y ( t ) ‖ 2 .

for all t ∈ [ 0 , + ∞ ) , where μ is the constant given in the proof below.

Proof. Fix k ∈ N and t ∈ [ θ k − 1 , θ k ) ; it follows that

y i ( t ) = y i ( ξ k ) + ∫ ξ k t [ − B y i ( s ) + C ( y ( s ) ) f ( y ( s ) ) + A ( y ( s ) ) f ( y ( ξ k ) ) + I i ] d s + ∫ ξ k t σ ( s , y ( s ) , y ( γ ( s ) ) ) d B ( s ) .

then

E ‖ y ( t ) ‖ 2 = E { ∑ i = 1 n | y i ( t ) | } 2 ≤ E { ∑ i = 1 n | y i ( ξ k ) | + ∑ i = 1 n | ∫ ξ k t [ − B y i ( s ) + ∑ j = 1 n c i j ( y j ( s ) ) f j ( y j ( s ) ) + ∑ j = 1 n a i j ( y ( s ) ) f j ( y j ( ξ k ) ) ] d s | + ∑ i = 1 n | ∫ ξ k t σ i ( s , y ( s ) , y ( γ ( s ) ) ) d B ( s ) | } 2 ≤ E { ‖ y ( ξ k ) ‖ + θ N 2 ‖ y ( ξ k ) ‖ + N 1 ∫ ξ k t ‖ y ( s ) ‖ d s + N 3 ∫ ξ k t ‖ y ( s ) ‖ d B ( s ) } 2 ≤ 3 ( 1 + θ N 2 ) 2 E ‖ y ( ξ k ) ‖ 2 + 3 θ ( N 1 2 + N 3 2 ) ∫ ξ k t E ‖ y ( s ) ‖ 2 d s ,

E ‖ y ( t ) ‖ 2 ≤ 3 ( 1 + θ N 2 ) 2 E ‖ y ( ξ k ) ‖ 2 e 3 θ 2 ( N 1 2 + N 3 2 ) = ζ E ‖ y ( ξ k ) ‖ 2 ,

hence,

E ‖ y ( ξ k ) ‖ 2 ≤ 6 1 − 6 ( θ N 2 ) 2 + 3 θ 2 ( N 1 2 + N 3 2 ) E ‖ y ( t ) ‖ 2 = μ E ‖ y ( t ) ‖ 2 .

Theorem 2. Let Assumptions 1-3 hold. Under the control law (8), the RMNNs (3) and (5) achieve global asymptotic synchronization in the mean square if there exist a positive real number ρ , positive definite diagonal matrices H = diag { h 1 , h 2 , ⋯ , h n } and R = diag { r 1 , r 2 , ⋯ , r n } , and positive definite matrices P = [ p i j ] n × n and O = [ o i j ] n × n such that:

N = [ Π 1 , H G 2 , H C ¨ + H R , H A ¨ ; ∗ , Π 2 , 0 , 0 ; ∗ , ∗ , P − 2 R , 0 ; ∗ , ∗ , ∗ , − ( 1 − p ) P ] < 0 , (10)

G 3 + M < 0 , (11)

Q < ρ I . (12)


where Π 1 = H ( − C + G 1 ) + ( − C + G 1 ) T H + O + ρ M 1 T M 1 and Π 2 = ρ M 2 T M 2 − ( 1 − p ) O .
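Conditions (10)-(12) ask for negative (or bounded) definiteness of given matrices, so a candidate solution can be checked numerically by inspecting the largest eigenvalue of the symmetric part; the matrix below is an illustrative placeholder, not a solution of (10)-(12):

```python
import numpy as np

# Numerical feasibility check: a symmetric matrix S satisfies S < 0 iff its
# largest eigenvalue is negative. The example matrix only demonstrates the
# check itself; it is NOT a solution of (10)-(12).
def is_negative_definite(S):
    S = 0.5 * (S + S.T)                        # symmetrize before the eigenvalue test
    return float(np.max(np.linalg.eigvalsh(S))) < 0.0

example = -np.eye(3) + 0.1 * np.ones((3, 3))   # illustrative candidate
```

Condition (12), Q < ρ I , can be checked the same way by testing Q − ρ I for negative definiteness.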

Proof. Consider the Lyapunov functional:

V ( t ) = ∑ i = 1 3 V i ( t ) , (16)

where

V 1 ( t ) = e T ( t ) P e ( t ) . (17)

where H, O and P are the given matrices, x t = x ( t + s ) for t ≥ 0 and s ∈ [ − r , 0 ] , and L is the weak infinitesimal operator applied to V 1 ( t ) .

According to the control law (8), applied along the trajectories of the system (9), we have:

L V 1 ( t ) = 2 e T ( t ) H [ ( − C + G 1 ) e ( t ) + G 2 e ( γ ( t ) ) + G 3 s i g n ( e ( t ) ) + C g ( e ( t ) ) + A g ( e ( γ ( t ) ) ) + ( C ( y ( t ) ) − C ) f ( y ( t ) ) + ( C − C ( x ( t ) ) ) f ( x ( t ) ) + ( A ( y ( t ) ) − A ) f ( y ( γ ( t ) ) ) + ( A − A ( x ( t ) ) ) f ( x ( γ ( t ) ) ) ] + t r a c e [ σ T ( t , e ( t ) , e ( γ ( t ) ) ) P σ ( t , e ( t ) , e ( γ ( t ) ) ) ] . (18)

Since the activation functions f i ( ⋅ ) are bounded, we obtain the following estimates:

2 e T ( t ) H ( C ( y ( t ) ) − C ¨ ) f ( y ( t ) ) = 2 ∑ i = 1 n ∑ j = 1 n e i ( t ) h i ( c i j ( y ( t ) ) − c ˜ i j ) f j ( y j ( t ) ) ≤ 2 ∑ i = 1 n ( ∑ j = 1 n h i | c ′ i j − c ″ i j | γ j ) | e i ( t ) | . (19)

We can similarly obtain the following inequalities:

2 e T ( t ) H ( C ¨ − C ( x ( t ) ) ) f ( x ( t ) ) ≤ 2 ∑ i = 1 n ( ∑ j = 1 n h i | c ′ i j − c ″ i j | γ j ) | e i ( t ) | , (20)

2 e T ( t ) H ( A ( y ( t ) ) − A ¨ ) f ( y ( γ ( t ) ) ) ≤ 2 ∑ i = 1 n ( ∑ j = 1 n h i | a ′ i j − a ″ i j | γ j ) | e i ( t ) | , (21)

and

2 e T ( t ) H ( A ¨ − A ( x ( t ) ) ) f ( x ( γ ( t ) ) ) ≤ 2 ∑ i = 1 n ( ∑ j = 1 n h i | a ′ i j − a ″ i j | γ j ) | e i ( t ) | . (22)

Besides, we have that

2 e T ( t ) H G 3 s i g n ( e ( t ) ) = 2 ∑ i = 1 n h i g 3 i | e i ( t ) | . (23)

Based on Assumption 3 and condition (12):

t r a c e [ σ T ( t , e ( t ) , e ( γ ( t ) ) ) H σ ( t , e ( t ) , e ( γ ( t ) ) ) ] ≤ ρ t r a c e [ σ T ( t , e ( t ) , e ( γ ( t ) ) ) σ ( t , e ( t ) , e ( γ ( t ) ) ) ] ≤ ρ [ e T ( t ) M 1 T M 1 e ( t ) + e T ( γ ( t ) ) M 2 T M 2 e ( γ ( t ) ) ] . (24)

By Assumption 1, we can easily obtain:

g T ( t ) D L e ( t ) = ∑ i = 1 n g i ( t ) d i l i e i ( t ) ≥ ∑ i = 1 n d i g i 2 ( t ) = g T ( t ) D g ( t ) , (25)

which implies that

g T ( t ) D g ( t ) ≤ g T ( t ) D L e ( t ) . (26)

Let η = [ e T ( t ) , e T ( γ ( t ) ) , g T ( e ( t ) ) , g T ( e ( γ ( t ) ) ) ] T . Combining (10)-(12) with the estimates (18)-(26), we obtain:

L V ≤ ∑ i = 1 2 L V i ( t ) + g T ( t ) D L e ( t ) − g T ( t ) D g ( t ) ≤ η T N η + 2 H ( G 3 + J ) | e ( t ) | ≤ 0. (27)

By (27) and the Itô formula, we have:

E V ( t ) − E V ( 0 ) = E ∫ 0 t L V ( s ) d s . (28)

Based on (28), there exists a positive constant λ max such that:

E ‖ e ( t ) ‖ 2 ≤ E V ( 0 ) + E ∫ 0 t L V ( s ) d s ≤ E V ( 0 ) + λ max E ∫ 0 t ‖ e ( s ) ‖ 2 d s . (29)

By the result in [

Remark 1. In [ the synchronization conditions are derived in the form of LMIs in G 1 , G 2 and G 3 , taking into account the two properties of neurons: excitability and inhibition. This formulation has two advantages. First, by solving the LMIs, the conditions on G 1 , G 2 and G 3 can be verified numerically. Second, global synchronization is obtained without trial-and-error tuning of matrices or parameters.

Remark 2. We adopt the following decomposition technique in Theorem 2.

C ( y ( t ) ) f ( y ( t ) ) − C ( x ( t ) ) f ( x ( t ) ) = C ¨ g ( e ( t ) ) + ( C ( y ( t ) ) − C ¨ ) f ( y ( t ) ) + ( C ¨ − C ( x ( t ) ) ) f ( x ( t ) ) , (30)

and

A ( y ( t ) ) f ( y ( γ ( t ) ) ) − A ( x ( t ) ) f ( x ( γ ( t ) ) ) = A ¨ g ( e ( γ ( t ) ) ) + ( A ( y ( t ) ) − A ¨ ) f ( y ( γ ( t ) ) ) + ( A ¨ − A ( x ( t ) ) ) f ( x ( γ ( t ) ) ) . (31)

These results show that, by using the decomposition technique, it is worthwhile to revisit the previous results on synchronization of memristive neural network models in [

Remark 3. When the drive system and the response system are in different states, the active subsystems also differ, since an MNN is a state-dependent switched system. In the proof of Theorem 2, the discontinuous term in the control law is used to offset the mismatch between the two RMNNs that would otherwise produce an anti-synchronization effect. Moreover, the discontinuous feedback is used to reduce interference.

Remark 4. To eliminate the chattering caused by the discontinuous control law (8), the law can be modified as:

D ( t ) = G 1 e ( t ) + G 2 ( e ( γ ( t ) ) ) + G 3 e ( t ) | e ( t ) | + ε ,

where

e ( t ) | e ( t ) | + ε = ( e 1 ( t ) | e 1 ( t ) | + ε 1 , ⋯ , e n ( t ) | e n ( t ) | + ε n ) T .
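The smoothed term e ( t ) / ( | e ( t ) | + ε ) behaves like s i g n ( e ( t ) ) away from the origin but is continuous at e = 0, which is what removes the chattering; a minimal sketch:

```python
import numpy as np

# Chattering-free modification: replace sign(e) with e/(|e| + eps), a
# continuous approximation that tends to sign(e) componentwise as eps -> 0.
def smooth_sign(e: np.ndarray, eps: float = 1e-2) -> np.ndarray:
    return e / (np.abs(e) + eps)
```

At e = 0 the term is exactly 0, so the discontinuity responsible for chattering disappears, at the cost of a small boundary layer of width ε.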

Here i = 1 , 2 , ⋯ , n , and each ε i is a sufficiently small positive constant. Letting H = I n in Theorem 2, we obtain the following corollary:

Corollary 1. Let Assumptions 1-3 hold. Under the control law (8), the RMNNs (3) and (5) achieve global asymptotic synchronization in the mean square if there exist a positive diagonal matrix R = diag { r 1 , r 2 , ⋯ , r n } and positive definite matrices P = [ p i j ] n × n and O = [ o i j ] n × n such that:

N = [ Π 1 , G 2 , C ˜ + Q R , A ˜ ; ∗ , Π 2 , 0 , 0 ; ∗ , ∗ , P − 2 R , 0 ; ∗ , ∗ , ∗ , − ( 1 − h ) H ] < 0 , (32)

G 3 + J < 0 , (33)

where Π 1 = − D + G 1 + ( − D + G 1 ) T + O + M 1 T M 1 and Π 2 = M 2 T M 2 − ( 1 − p ) O .

Corollary 2. Let Assumptions 1-3 hold. Under the control law (8), the RMNNs (3) and (5) achieve global asymptotic synchronization in the mean square if there exist a positive real number ρ , positive definite diagonal matrices H = diag { h 1 , h 2 , ⋯ , h n } and R = diag { r 1 , r 2 , ⋯ , r n } , a positive definite matrix P ∈ R n × n , and a diagonal matrix G ′ 3 ∈ R n × n such that:

N = [ Π 1 , G 2 , P C ˜ + Q R , H A ; ∗ , Π 2 , 0 , 0 ; ∗ , ∗ , P − 2 R , 0 ; ∗ , ∗ , ∗ , − ( 1 − p ) P ] < 0 , (34)

G ′ 3 + J < 0 , (35)

and

H ≤ ρ I , (36)

where Π 1 = − 2 H D + G ′ 1 + G ′ 1 T + R + ρ M 1 T M 1 and Π 2 = ρ M 2 T M 2 − ( 1 − p ) O . Moreover, G 1 = H − 1 G ′ 1 , G 2 = H − 1 G ′ 2 , and G 3 = G ′ 3 .

Proof. The corollary can be verified directly by letting G 1 = H − 1 G ′ 1 , G 2 = H − 1 G ′ 2 and G 3 = G ′ 3 .

Remark 5. If we set G ′ 2 = 0 , the control law (8) is still consistent with this paper, and it can still synchronize two MNNs with random disturbance in the mean square. Introducing retarded and advanced arguments into the drive-response MNN systems, however, changes the results accordingly. Furthermore, drive-response MNNs are more meaningful owing to the deviating function γ ( t ) .

Without random disturbance, the drive-response MNN systems are given by:

d x ( t ) d t = − B x ( t ) + C ( x ( t ) ) f ( x ( t ) ) + A ( x ( t ) ) f ( x ( γ ( t ) ) ) , (37)

and

d y ( t ) d t = − B y ( t ) + C ( y ( t ) ) f ( y ( t ) ) + A ( y ( t ) ) f ( y ( γ ( t ) ) ) + u ( t ) . (38)
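A minimal Euler sketch of drive-response synchronization for (37)-(38), under simplifying assumptions: constant (non-memristive) weights, γ ( t ) = t , and a purely linear feedback u = G 1 e ( t ) ; all numerical values are illustrative placeholders:

```python
import numpy as np

# Drive-response synchronization sketch for (37)-(38) under the stated
# simplifying assumptions (constant weights, gamma(t) = t, u = G1*e).
n, dt, steps = 2, 1e-3, 20000
B = np.eye(n)
C = np.array([[0.5, -0.2], [0.1, 0.3]])
A = 0.1 * np.eye(n)
G1 = -5.0 * np.eye(n)                 # strong stabilizing gain (assumed)
f = np.tanh

x = np.array([1.0, -1.0])             # drive state
y = np.array([-0.5, 0.8])             # response state

for _ in range(steps):
    dx = -B @ x + C @ f(x) + A @ f(x)
    e = y - x                         # synchronization error e(t)
    dy = -B @ y + C @ f(y) + A @ f(y) + G1 @ e
    x = x + dt * dx
    y = y + dt * dy
```

With a gain of this size the error norm decays rapidly toward zero, consistent with the exponential synchronization asserted below.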

Based on Theorem 2, we obtain the following corollary:

Corollary 3. Let Assumptions 1 and 3 hold. Under the control law (8), the systems (37) and (38) achieve global asymptotic synchronization if there exist positive definite diagonal matrices H = diag { h 1 , h 2 , ⋯ , h n } and R = diag { r 1 , r 2 , ⋯ , r n } and positive definite matrices P = [ p i j ] n × n and O = [ o i j ] n × n such that:

N = [ Π 1 , H G 2 , P C ¨ + Q R , H A ; ∗ , − ( 1 − h ) O , 0 , 0 ; ∗ , ∗ , P − 2 R , 0 ; ∗ , ∗ , ∗ , − ( 1 − p ) P ] < 0 , (39)

and

G 3 + J < 0 , (40)

where Π 1 = H ( − C + G 1 ) + ( − C + G 1 ) T H + O .

Corollary 4. Let G 1 = diag { g 11 , ⋯ , g 1 n } and G 2 = 0 in (8). If Assumption 1 holds, then under the control law (8) the systems (37) and (38) achieve global exponential synchronization provided there exist positive constants r i ( i = 1 , 2 , ⋯ , n ) such that:

g 1 i > − c i + ∑ j = 1 n r j r i l i ( | c ¨ j i | + | a ¨ j i | ) , (41)

and

g 3 i ≤ − 2 ∑ j = 1 n ( | c ′ i j − c ″ i j | + | a ′ i j − a ″ i j | ) γ j . (42)

Proof. Consider the function defined by:

V ( t , e t ) = e δ t ∑ i = 1 n ∑ j = 1 n r j | a ¨ j i | ∫ γ ( t ) t | g i ( e i ( s ) ) | e δ s d s . (43)

As noted in Remark 2, the corollary can be verified quickly by following the proof of Theorem 1 and using the decomposition technique of Theorem 2.

Remark 6. Unlike traditional neural network models, the present framework can handle both the retarded and the advanced form of system (3) at once. Meanwhile, the comparative analysis of [

Remark 7. Comparing Theorem 2 with Theorem 3, it is worth noting that no extra parameters are needed to deal with the MNN, the additive noise, or the deviating function. Moreover, uncertain disturbances can be handled by this method.

Remark 8. Retarded and advanced arguments can describe nonlinear systems whose internal mechanisms combine both effects. Using the theory of such differential equations, retarded and advanced parameters are introduced in an alternating fashion.

This paper studies a class of memristive neural networks with discontinuous neuronal activations and deviating arguments. Based on non-smooth analysis, the generalized Lyapunov functional method and an equivalent transformation method are adopted to design the state feedback control scheme. Global drive-response synchronization results for memristive neural networks with discontinuous neuron activations are obtained. It is worth pointing out that the controllers and the non-smooth Lyapunov functionals in this paper are new. For memristive neural networks with unbounded discontinuous neuron activations, the main difficulty in synchronization research is handling the amplification of signals, and the equivalent transformation method is used to process the amplification functions. Unlike the previous paper, [

This work is supported by the Natural Science Foundation of China under Grants 61976084 and 61773152.

The author declares no conflicts of interest regarding the publication of this paper.

Xian, R.X. (2021) Synchronization of Stochastic Memristive Neural Networks with Retarded and Advanced Argument. Journal of Intelligent Learning Systems and Applications, 13, 1-14. https://doi.org/10.4236/jilsa.2021.131001