Synchronization of Stochastic Memristive Neural Networks with Retarded and Advanced Argument

In this paper, we discuss the drive-response synchronization problem for two memristive neural networks with retarded and advanced arguments subject to additive noise. The control law combines a linear time-delay feedback term with a discontinuous feedback term. Stochastic differential equation theory is used to prove the stability of the resulting error system. Finally, simulation results verify the correctness of the theoretical results.


Introduction
In the past decade, neural networks have shown great potential for applications in pattern classification and associative memory, and the memristive neural network in particular has attracted extensive attention. In [1] [2] [3], under the influence of memristors, the connection between each single neuron or pair of neurons replaces the traditional resistive one. The dynamical behavior of such networks has been widely studied, and the existence and uniqueness of the equilibrium point have been certified. The RMNN (recursive memristive neural network) was proposed in 1990 and is regarded as a generalization of the recurrent neural network. When each parent node of a recursive neural network is connected to only one child node, its structure is equivalent to that of a fully connected recurrent neural network. Recursive neural networks can also introduce a gating mechanism to learn long-distance dependencies.
Compared with traditional neural networks, based on the former re-

Preliminaries and Model
First, we present the memristor-based formulation of the RMNN (recursive memristive neural network) and the DMNN (delayed memristive neural network). We also recall some definitions, remarks, and lemmas, and define the compression lag feature used in [13] [14].
In the traditional memristive neural network, the MNN is constructed with memristors in place of resistors, as in [3]. We consider the dynamic system of a recursive memristive neural network with retarded and advanced argument from [15], in which x(t) is the state of the network and the activation functions of the neurons are given correspondingly. Throughout this paper, Assumption 1 is used to support our proofs, and LV denotes the weak infinitesimal operator associated with the error system; the deviating function γ(t) is therefore determined by the mixed system (1). In the drive-response framework, two identical RMNN systems with different initial conditions are called the drive system and the response system of the RMNN, and the states of the two systems are said to be synchronized when the mean square of their difference approaches 0 as time elapses. In this article, RMNN (3) is considered the master (or drive) system, and the slave (or response) system is (4), where u(t) ∈ R^n is the control vector U, with the initial condition of (4) specified accordingly. The core of this article is to synchronize the master system with the slave system, so we define the error e(t) = y(t) − x(t) and subtract (3) from (4) to obtain the error system. It is easy to see that system (6) is equivalent to an integral equation. Since the right-hand side of system (1) is discontinuous at the switching instants θ, the deviating function γ(t) prevents a direct application of standard stochastic differential equation theory; instead, each solution x(t) admits one-sided derivatives on every interval [θ_k, θ_{k+1}], which is used in the derivation of the initial condition of (5).
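To make the drive-response setup concrete, the following minimal sketch simulates a drive system and an uncontrolled response copy under a common noise path and reports the error e(t) = y(t) − x(t). The scalar model and all parameter values here are our own illustrative assumptions, not the paper's exact RMNN dynamics.

```python
import numpy as np

def simulate_error(steps=20000, dt=1e-3, c=1.0, a=1.5, sigma=0.05, seed=0):
    """Drive x and uncontrolled response y of the toy system
    dz = (-c*z + a*tanh(z)) dt + sigma dW, sharing one noise path."""
    rng = np.random.default_rng(seed)
    x, y = 1.0, -0.5                        # different initial conditions
    for _ in range(steps):
        dw = rng.normal(0.0, np.sqrt(dt))   # common Brownian increment
        x += (-c * x + a * np.tanh(x)) * dt + sigma * dw
        y += (-c * y + a * np.tanh(y)) * dt + sigma * dw
    return y - x                            # error e(t) = y(t) - x(t) at final time

print(abs(simulate_error()))  # stays far from 0: no synchronization without control
```

Because a > c makes this toy system bistable, the two trajectories settle into different attractors and the error does not vanish on its own, which motivates the feedback control introduced below.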

Main Results
In this part of the paper, we give a discontinuous control law:

u(t) = G_1 e(t) + G_2 e(γ(t)) + G_3 sign(e(t)),   (9)

where G_1, G_2, and G_3 are gain matrices and γ(t) is the deviating function. Substituting (9) into the error system (5) yields the closed-loop error dynamics, from which we obtain the inequality used below.
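The effect of such a controller can be illustrated with a toy stand-in for (9): a linear term, a delayed term (a fixed delay τ used here in place of the deviating argument γ(t)), and a discontinuous sign term. The scalar model and all gains below are illustrative assumptions, not the paper's constants.

```python
import numpy as np

def synchronize(steps=20000, dt=1e-3, c=1.0, a=1.5, sigma=0.05,
                g1=-3.0, g2=-0.2, g3=-0.5, tau_steps=100, seed=1):
    """Response y tracks drive x under the scalar control
    u = g1*e(t) + g2*e(t - tau) + g3*sign(e(t)),
    a hedged stand-in for control law (9)."""
    rng = np.random.default_rng(seed)
    x, y = 1.0, -0.5
    e_hist = [y - x] * (tau_steps + 1)        # delay buffer for e(t - tau)
    for _ in range(steps):
        e = y - x
        u = g1 * e + g2 * e_hist[-tau_steps - 1] + g3 * np.sign(e)
        dw = rng.normal(0.0, np.sqrt(dt))     # common noise path
        x += (-c * x + a * np.tanh(x)) * dt + sigma * dw
        y += (-c * y + a * np.tanh(y) + u) * dt + sigma * dw
        e_hist.append(y - x)
    return abs(y - x)

print(synchronize())   # close to 0: the controlled error converges
```

The sign term forces the error through zero despite the discontinuity-induced mismatch, while the linear gains dominate the Lipschitz growth of the activation difference; this mirrors the role the discontinuous and linear feedback terms play in the proof.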
Consider the following Lyapunov functional. According to the control law (7) of Definition 2, this definition applies only to the specific structure of system (8). Applying the weak infinitesimal operator LV to the functional along the closed-loop error system produces cross terms in e(t), the gains G_1, G_2, G_3, the sign term, and the activation differences Cg(e(t)) and Ag(e(γ(t))); the corresponding gain terms are bounded by similar inequalities. The Itô correction contributes trace terms in the noise intensity η(t, e(t), e(γ(t))), which, by Assumption 1, are bounded by the quadratic forms e(t)^T M_1^T M_1 e(t) and e(γ(t))^T M_2^T M_2 e(γ(t)). Combining (11)-(12) with (16)-(22) and the estimates above, we obtain a final inequality showing that LV is negative definite.
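The Lyapunov computation above can be summarized by the following generic mean-square stability sketch; the functional V and the constants λ_1, λ_2, κ are illustrative placeholders, not the paper's exact quantities.

```latex
% Hedged sketch of the mean-square stability step (generic constants):
V(e(t)) = e^{\mathsf T}(t)\, e(t), \qquad
\mathcal{L}V \le -\lambda_1 \|e(t)\|^2 + \lambda_2 \|e(\gamma(t))\|^2 .
% If the gains G_1, G_2, G_3 are chosen so that \lambda_1 > \lambda_2,
% a Halanay/Gronwall-type argument yields
\mathbb{E}\,\|e(t)\|^2 \le \mathbb{E}\,\|e(0)\|^2 \, e^{-\kappa t},
\qquad \kappa > 0,
% i.e. the drive and response states synchronize in mean square.
```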
The results show that, with the decomposition technique, it is worthwhile to revisit the previous results on synchronization of the memristive neural network models in [19].
Remark 3. When the drive system and the response system have different states, the switched system is also different, since the MNN depends on the switching behavior. In the proof of Theorem 2, the discontinuous term in the control law is used to offset the difference between the two RMNNs, resulting in the anti-synchronization effect. Moreover, the discontinuous feedback control is used to reduce interference.
(Journal of Intelligent Learning Systems and Applications)

Remark 7. By comparing Theorem 2 and Theorem 3, it is worth noting that these two parameters do not need to deal with the MNN, the additive noise, or the deviation function. Also, the uncertain disturbance can be determined by this method.
Remark 8. Retarded and advanced arguments can describe harmonized nonlinear systems with internal mechanisms. By using the theory of these differential equations, the substitutable, hysteresis, and advanced parameters are introduced.

Conclusion
This paper studies a class of memristive neural networks with discontinuous neuron dynamics and the robustness of the drive-response control system to the presence of parameter mismatch between the neural networks. Besides, we also briefly discussed the pinning strategy, for example, which neurons should be pinned first and how much control gain should be chosen; finite-time synchronization of discrete neural networks can be achieved by such a pinning control scheme.