<?xml version="1.0" encoding="UTF-8"?><!DOCTYPE article  PUBLIC "-//NLM//DTD Journal Publishing DTD v3.0 20080202//EN" "http://dtd.nlm.nih.gov/publishing/3.0/journalpublishing3.dtd"><article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" dtd-version="3.0" xml:lang="en" article-type="research article"><front><journal-meta><journal-id journal-id-type="publisher-id">JAMP</journal-id><journal-title-group><journal-title>Journal of Applied Mathematics and Physics</journal-title></journal-title-group><issn pub-type="epub">2327-4352</issn><publisher><publisher-name>Scientific Research Publishing</publisher-name></publisher></journal-meta><article-meta><article-id pub-id-type="doi">10.4236/jamp.2017.511173</article-id><article-id pub-id-type="publisher-id">JAMP-80159</article-id><article-categories><subj-group subj-group-type="heading"><subject>Articles</subject></subj-group><subj-group subj-group-type="Discipline-v2"><subject>Physics&amp;Mathematics</subject></subj-group></article-categories><title-group><article-title>
 
 
  New Results of Global Asymptotical Stability for Impulsive Hopfield Neural Networks with Leakage Time-Varying Delay
 
</article-title></title-group><contrib-group><contrib contrib-type="author" xlink:type="simple"><name name-style="western"><surname>Qiang</surname><given-names>Xi</given-names></name><xref ref-type="aff" rid="aff1"><sub>1</sub></xref></contrib></contrib-group><aff id="aff1"><label>1</label><addr-line>School of Mathematic and Quantitative Economics, Shandong University of Finance and Economics, Ji’nan, China</addr-line></aff><author-notes><corresp id="cor1">* E-mail:</corresp></author-notes><pub-date pub-type="epub"><day>02</day><month>11</month><year>2017</year></pub-date><volume>05</volume><issue>11</issue><fpage>2112</fpage><lpage>2126</lpage><history><date date-type="received"><day>9,</day>	<month>October</month>	<year>2017</year></date><date date-type="rev-recd"><day>4,</day>	<month>November</month>	<year>2017</year>	</date><date date-type="accepted"><day>7,</day>	<month>November</month>	<year>2017</year></date></history><permissions><copyright-statement>&#169; Copyright  2014 by authors and Scientific Research Publishing Inc. </copyright-statement><copyright-year>2014</copyright-year><license><license-p>This work is licensed under the Creative Commons Attribution International License (CC BY). http://creativecommons.org/licenses/by/4.0/</license-p></license></permissions><abstract><p>
 
 
  In this paper, Hopfield neural networks with impulses and leakage time-varying delay are considered. New sufficient conditions for global asymptotic stability of the equilibrium point are derived by using a Lyapunov-Krasovskii functional, a model transformation and some analysis techniques. The stability criterion depends on the impulses and on the bounds of the leakage time-varying delay and its derivative, and is presented in terms of a linear matrix inequality (LMI).
 
</p></abstract><kwd-group><kwd>Global Asymptotical Stability</kwd><kwd> Hopfield Neural Networks</kwd><kwd> Leakage Time-Varying Delay</kwd><kwd> Impulse</kwd><kwd> Lyapunov-Krasovskii Functional</kwd><kwd> Linear Matrix Inequality</kwd></kwd-group></article-meta></front><body><sec id="s1"><title>1. Introduction</title><p>As is well known, time delay is a common phenomenon reflecting the fact that the future state of a system depends not only on the present state but also on past states, and it is often encountered in many fields such as automatic control, biological chemistry, physical engineering, neural networks, and so on [<xref ref-type="bibr" rid="scirp.80159-ref1">1</xref>] [<xref ref-type="bibr" rid="scirp.80159-ref2">2</xref>] [<xref ref-type="bibr" rid="scirp.80159-ref3">3</xref>] [<xref ref-type="bibr" rid="scirp.80159-ref4">4</xref>] [<xref ref-type="bibr" rid="scirp.80159-ref5">5</xref>]. Moreover, the existence of time delay in a real system may lead to instability, oscillation and poor dynamic performance [<xref ref-type="bibr" rid="scirp.80159-ref3">3</xref>] [<xref ref-type="bibr" rid="scirp.80159-ref4">4</xref>] [<xref ref-type="bibr" rid="scirp.80159-ref5">5</xref>]. It is therefore significant and necessary to consider delay effects on the stability of dynamical systems. 
In recent years, one typical class of neural networks, Hopfield neural networks (HNN), has been successfully applied to associative memory, pattern recognition, automatic control, optimization problems, etc., and HNN with various types of delay have been widely investigated by many authors; some interesting and important results have been reported in the literature, see [<xref ref-type="bibr" rid="scirp.80159-ref6">6</xref>] - [<xref ref-type="bibr" rid="scirp.80159-ref16">16</xref>] and the references therein.</p><p>On the other hand, impulsive phenomena exist universally in a wide variety of evolutionary processes in which the state is changed abruptly at certain moments of time, in such fields as chemical technology, population dynamics, physics and economics [<xref ref-type="bibr" rid="scirp.80159-ref17">17</xref>] [<xref ref-type="bibr" rid="scirp.80159-ref18">18</xref>] [<xref ref-type="bibr" rid="scirp.80159-ref19">19</xref>]. Hopfield neural networks may experience abrupt changes of state, that is, they do exhibit impulsive effects. Recently, some results on the stability of HNN with impulses as well as delays have been obtained via different approaches [<xref ref-type="bibr" rid="scirp.80159-ref20">20</xref>] - [<xref ref-type="bibr" rid="scirp.80159-ref26">26</xref>].</p><p>In the past several years, a special type of time delay, namely leakage delay (or forgetting delay), has been identified and investigated due to its existence in many real systems such as neural networks, population dynamics and some fuzzy systems [<xref ref-type="bibr" rid="scirp.80159-ref1">1</xref>] [<xref ref-type="bibr" rid="scirp.80159-ref3">3</xref>]. Leakage delay is a time delay that exists in the negative feedback terms of a system, which are known as forgetting or leakage terms. It has been shown that this kind of time delay has a tendency to destabilize a system [<xref ref-type="bibr" rid="scirp.80159-ref27">27</xref>]. 
In [<xref ref-type="bibr" rid="scirp.80159-ref27">27</xref>], Gopalsamy initially investigated the dynamics of a bidirectional associative memory (BAM) network model with leakage delays by using a model transformation technique, a Lyapunov-Krasovskii functional and inequalities together with some properties of M-matrices. Based on this work, several papers have considered the stability of various kinds of neural networks [<xref ref-type="bibr" rid="scirp.80159-ref28">28</xref>] - [<xref ref-type="bibr" rid="scirp.80159-ref34">34</xref>]. More recently, Li et al. [<xref ref-type="bibr" rid="scirp.80159-ref35">35</xref>] initially studied the impulsive effects on existence-uniqueness and stability problems of recurrent neural networks with leakage delay via some analysis techniques for impulsive functional differential equations. However, it is worth noting that in those existing results the leakage delay considered is usually a constant; stability results for leakage time-varying delay have hardly been reported in the literature. In [<xref ref-type="bibr" rid="scirp.80159-ref36">36</xref>], Li et al. studied the effect of leakage time-varying delay on the stability of nonlinear differential systems, but ignored impulsive effects. It is therefore interesting to consider neural networks with leakage time-varying delay as well as impulses, which describe more realistic models [<xref ref-type="bibr" rid="scirp.80159-ref37">37</xref>] - [<xref ref-type="bibr" rid="scirp.80159-ref40">40</xref>].</p><p>With the above motivation, in this paper we consider Hopfield neural networks with leakage time-varying delay and impulses. By using a Lyapunov-Krasovskii functional, a model transformation and some analysis techniques, new sufficient conditions for global asymptotic stability of the equilibrium point are derived. The criterion depends on the impulses and on the bounds of the leakage time-varying delay and its derivative, and is given in terms of a linear matrix inequality (LMI). 
The developed results generalize the corresponding results in reference [<xref ref-type="bibr" rid="scirp.80159-ref36">36</xref>]. The work is organized as follows. In Section 2, we introduce the model, some basic notations and lemmas. In Section 3, we present the main results. Finally, the paper is concluded in Section 4.</p></sec><sec id="s2"><title>2. Preliminaries</title><p>Notations. Let ℝ denote the set of real numbers, ℝ + the set of nonnegative real numbers, ℤ + the set of positive integers, ℝ n the n-dimensional real space and ℝ n &#215; m the n &#215; m -dimensional real space equipped with the Euclidean norm |   ⋅   | ,</p><p>respectively. For S = ( s i j ) ∈ ℝ n &#215; n , set ‖ S ‖ 2 = ( ∑ i = 1 n ∑ j = 1 n s i j 2 ) 1 / 2 . A &gt; 0 (or A &lt; 0 )</p><p>denotes that the matrix A is symmetric and positive definite (or negative definite). The notations A T and A − 1 denote the transpose and the inverse of A , respectively. If A , B are symmetric matrices, A &gt; B means that A − B is a positive definite matrix. λ m a x ( A ) and λ m i n ( A ) denote the maximum eigenvalue and the minimum eigenvalue of matrix A , respectively. E denotes the identity matrix with appropriate dimensions and Λ = { 1 , 2 , ⋯ , n } . For any J ⊆ ℝ , S ⊆ ℝ k ( 1 ≤ k ≤ n ) , set ℂ ( J , S ) = { ϕ : J → S   is   continuous } and ℙ ℂ 1 ( J , S ) = { ϕ : J → S is continuously differentiable everywhere except at a finite number of points t at which ϕ ( t + ) , ϕ ( t − ) , ϕ ˙ ( t + ) , ϕ ˙ ( t − ) exist and ϕ ( t + ) = ϕ ( t ) , ϕ ˙ ( t + ) = ϕ ˙ ( t ) , where ϕ ˙ denotes the derivative of ϕ } . For any t ∈ ℝ + , x t is defined by x t = x ( t + s ) , x t − = x ( t − + s ) , s ∈ [ − σ , 0 ] . 
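</p><p>The following short sketch (illustrative data only, assuming NumPy) checks two pieces of the notation above numerically: the entrywise norm ‖ S ‖ 2 coincides with the Frobenius norm, and for a symmetric matrix X the Rayleigh-quotient bounds λ m i n ( X ) a T a ≤ a T X a ≤ λ m a x ( X ) a T a hold, as used later in the proofs.</p>

```python
import numpy as np

# Illustrative check of the notation in Section 2 (example data only).
# ||S||_2 = (sum_i sum_j s_ij^2)^(1/2) is the entrywise (Frobenius) norm.
S = np.array([[3.0, 4.0],
              [0.0, 0.0]])
norm_S = np.sqrt((S ** 2).sum())
assert np.isclose(norm_S, np.linalg.norm(S, "fro"))  # same quantity
print(norm_S)  # 5.0

# lambda_min(X) a^T a <= a^T X a <= lambda_max(X) a^T a for symmetric X.
rng = np.random.default_rng(1)
X = rng.standard_normal((4, 4))
X = (X + X.T) / 2                      # symmetrize
lam = np.linalg.eigvalsh(X)            # eigenvalues in ascending order
a = rng.standard_normal(4)
quad = a @ X @ a
print(lam[0] * (a @ a) <= quad <= lam[-1] * (a @ a))  # True
```

<p>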
The notation ⋆ always denotes the symmetric block in a symmetric matrix.</p><p>Consider the following impulsive Hopfield neural networks with leakage time-varying delay:</p><p>{ x ˙ ( t ) = − C x ( t − σ ( t ) ) + A f ( x ( t ) ) + B g ( x ( t − τ ( t ) ) ) + J , t &gt; 0 , t ≠ t k , Δ x ( t k ) = x ( t k ) − x ( t k − ) = J k ( x ( t k − ) , x t k − ) ,   k ∈ ℤ + , x ( t ) = φ ( t ) ,   t ∈ [ − η , 0 ] , (1)</p><p>where x ( t ) = ( x 1 ( t ) , ⋯ , x n ( t ) ) T is the neuron state vector of the neural networks; C = d i a g ( c 1 , ⋯ , c n ) is a diagonal matrix with c i &gt; 0, i ∈ Λ ; A and B are the connection weight matrix and the delayed weight matrix, respectively; J is an external input; f and g represent the neuron activation functions. Throughout this paper, we make the following assumptions:</p><p>(H<sub>1</sub>) σ ( t ) and τ ( t ) denote the time-varying leakage delay and the time-varying transmission delay, respectively, and satisfy 0 ≤ σ ( t ) ≤ σ , 0 ≤ τ ( t ) ≤ τ and | σ ˙ ( t ) | ≤ ρ σ &lt; 1 , τ ˙ ( t ) ≤ ρ τ &lt; 1 , where σ , τ , ρ σ , ρ τ are some real constants;</p><p>(H<sub>2</sub>) J k ( ⋅ , ⋅ ) : ℝ n &#215; ℝ n → ℝ n , k ∈ ℤ + , are some continuous functions;</p><p>(H<sub>3</sub>) The impulse times t k satisfy 0 = t 0 &lt; t 1 &lt; ⋯ &lt; t k → ∞ and i n f k ∈ ℤ + { t k − t k − 1 } &gt; 0 .</p><p>(H<sub>4</sub>) φ ∈ ℙ ℂ 1 = ˙ ℙ ℂ 1 ( [ − η ,0 ] , ℝ n ) , where η = ˙ max { σ , τ } . For φ ∈ ℙ ℂ 1 , define ‖ φ ‖ η = sup θ ∈ [ − η , 0 ] | φ ( θ ) | .</p><p>The following lemmas will be used to derive our main results.</p><p>Lemma 2.1. [<xref ref-type="bibr" rid="scirp.80159-ref41">41</xref>] Given any real matrices Σ 1 , Σ 2 , Σ 3 of appropriate dimensions and a scalar ϵ &gt; 0 such that 0 &lt; Σ 3 = Σ 3 T , the following inequality holds:</p><p>Σ 1 T Σ 2 + Σ 2 T Σ 1 ≤ ϵ Σ 1 T Σ 3 Σ 1 + ϵ − 1 Σ 2 T Σ 3 − 1 Σ 2 .</p><p>Lemma 2.2. 
[<xref ref-type="bibr" rid="scirp.80159-ref42">42</xref>] Given any real matrix M = M T &gt; 0 of appropriate dimension and a vector function ω ( ⋅ ) : [ a , b ] → ℝ n such that the integrations concerned are well defined, then</p><p>[ ∫ a b ω ( s ) d s ] T M [ ∫ a b ω ( s ) d s ] ≤ ( b − a ) ∫ a b ω T ( s ) M ω ( s ) d s .</p><p>Lemma 2.3. [<xref ref-type="bibr" rid="scirp.80159-ref43">43</xref>] Let X ∈ ℝ n &#215; n . Then</p><p>λ m i n ( X ) a T a ≤ a T X a ≤ λ m a x ( X ) a T a</p><p>for any a ∈ ℝ n if X is a symmetric matrix.</p><p>Lemma 2.4. [<xref ref-type="bibr" rid="scirp.80159-ref44">44</xref>] For a given matrix S = ( S 11 S 12 S 21 S 22 ) with S 11 T = S 11 , S 22 T = S 22 , the condition S &gt; 0 is equivalent to either one of the following conditions:</p><p>(1) S 22 &gt; 0 ,   S 11 − S 12 S 22 − 1 S 12 T &gt; 0 ;</p><p>(2) S 11 &gt; 0 ,   S 22 − S 12 T S 11 − 1 S 12 &gt; 0.</p><p>In the following, we assume that some normal conditions, such as the Lipschitz continuity of f and g, are satisfied so that the equilibrium point of system (1) does exist; see [<xref ref-type="bibr" rid="scirp.80159-ref13">13</xref>] [<xref ref-type="bibr" rid="scirp.80159-ref21">21</xref>], in which existence results for the equilibrium point are established by employing the contraction mapping theorem, Brouwer’s fixed point theorem and some functional methods. Note that these results are independent of time delays, so it is easy to extend the results in the literature to an impulsive neural network with leakage time-varying delays and other delays; we omit the details and mainly investigate the global asymptotic stability of the equilibrium point in the next section. As usual, we assume that x ∗ = ( x 1 ∗ , x 2 ∗ , ⋯ , x n ∗ ) T is an equilibrium point of system (1), i.e.</p><p>− C x * + A f ( x * ) + B g ( x * ) + J = 0 ,   J k ( x * , x * ) = 0 ,   k ∈ ℤ + .</p></sec><sec id="s3"><title>3. 
Global Asymptotic Stability</title><p>In this section, we investigate the global asymptotic stability of the unique equilibrium point of system (1). For this purpose, the impulsive function J k , which is viewed as a perturbation of the equilibrium point x * of model (1) without impulses, is defined by</p><p>J k ( x ( t k − ) , x t k − ) = − D k { x ( t k − ) − x * − C ∫ t k − σ ( t k ) t k ( x ( s ) − x * ) d s } , k ∈ ℤ + ,</p><p>where D k , k ∈ ℤ + , are some n &#215; n real symmetric matrices. It is clear that J k ( x * , x * ) = 0, k ∈ ℤ + . Such a type of impulse describes the fact that the instantaneous perturbations encountered depend not only on the state of the neurons at the impulse times t k but also on the state of the neurons in recent history, which reflects more realistic dynamics. Similar impulsive perturbations have also been investigated by some researchers recently [<xref ref-type="bibr" rid="scirp.80159-ref22">22</xref>] [<xref ref-type="bibr" rid="scirp.80159-ref23">23</xref>] [<xref ref-type="bibr" rid="scirp.80159-ref25">25</xref>].</p><p>For convenience, let y ( t ) = x ( t ) − x * ; then system (1) can be rewritten as</p><p>{ y ˙ ( t ) = − C y ( t − σ ( t ) ) + A Ω ( y ( t ) ) + B Γ ( y ( t − τ ( t ) ) ) , t &gt; 0 , t ≠ t k , Δ y ( t k ) = y ( t k ) − y ( t k − ) = − D k { y ( t k − ) − C ∫ t k − σ ( t k ) t k   y ( s ) d s } ,   k ∈ ℤ + , y ( t ) = φ ( t ) − x * ,   t ∈ [ − η , 0 ] , (2)</p><p>where</p><p>Ω ( y ( t ) ) = [ Ω 1 ( y 1 ( t ) ) , Ω 2 ( y 2 ( t ) ) , ⋯ , Ω n ( y n ( t ) ) ] T , Ω j ( y j ( t ) ) = f j ( x j * + y j ( t ) ) − f j ( x j * ) ,</p><p>Γ ( y ( t − τ ( t ) ) ) = [ Γ 1 ( y 1 ( t − τ ( t ) ) ) , Γ 2 ( y 2 ( t − τ ( t ) ) ) , ⋯ , Γ n ( y n ( t − τ ( t ) ) ) ] T ,</p><p>Γ j ( y j ( t − τ ( t ) ) ) = g j ( x j * + y j ( t − τ ( t ) ) ) − g j ( x j * ) .</p><p>Obviously, y ≡ 0 is a solution of system (2). 
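</p><p>To make the translated model concrete, the following sketch simulates a scalar ( n = 1 ) instance of system (2) by a simple forward Euler scheme with constant delays and tanh-type functions Ω and Γ. All parameter values are illustrative assumptions, not data from the paper; the scheme is only a rough numerical illustration of a trajectory approaching the zero solution.</p>

```python
import numpy as np

# Scalar illustration of system (2): parameters below are assumptions.
c, a, b = 2.0, 0.5, 0.3            # C, A, B as scalars
sigma, tau = 0.1, 0.2              # constant leakage / transmission delays
h = 0.001                          # Euler step
n_sig, n_tau = round(sigma / h), round(tau / h)
n_hist = max(n_sig, n_tau)         # history length for eta = max(sigma, tau)
steps = round(10 / h)              # integrate over [0, 10]

y = np.ones(n_hist + steps + 1)    # initial history phi - x* = 1 on [-eta, 0]
imp_every = round(1.0 / h)         # impulse times t_k = k (illustrative)
d_k = 0.5                          # impulse gain D_k (scalar)

for i in range(n_hist, n_hist + steps):
    rhs = (-c * y[i - n_sig]                 # leakage term -C y(t - sigma(t))
           + a * np.tanh(y[i])               # A Omega(y(t))
           + b * np.tanh(y[i - n_tau]))      # B Gamma(y(t - tau(t)))
    y[i + 1] = y[i] + h * rhs
    if (i + 1 - n_hist) % imp_every == 0:    # jump of system (2) at t_k
        leak_int = h * y[i + 1 - n_sig:i + 1].sum()
        y[i + 1] -= d_k * (y[i + 1] - c * leak_int)

print(abs(y[-1]))   # prints a small value: the trajectory tends to zero
```

<p>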
Therefore, to consider the stability of the equilibrium point of system (1), it suffices to consider the stability of the zero solution of system (2).</p><p>In this paper, we assume that there exist constants M ≥ 0, N ≥ 0 such that</p><p>(H<sub>5</sub>) Ω T ( y ) Ω ( y ) ≤ M y T y ,   Γ T ( y ) Γ ( y ) ≤ N y T y ,</p><p>which is a very important assumption on the activation functions f and g. Using a model transformation, system (2) has the following equivalent form:</p><p>{ d d t [ y ( t ) − C ∫ t − σ ( t ) t     y ( s ) d s ] = − C y ( t ) − C y ( t − σ ( t ) ) σ ˙ ( t ) + A Ω ( y ( t ) ) + B Γ ( y ( t − τ ( t ) ) ) , t &gt; 0 , t ≠ t k , Δ y ( t k ) = y ( t k ) − y ( t k − ) = − D k { y ( t k − ) − C ∫ t k − σ ( t k ) t k y ( s ) d s } ,   k ∈ ℤ + , y ( t ) = φ ( t ) − x * ,   t ∈ [ − η , 0 ] . (3)</p><p>In the following, we shall establish a theorem which provides sufficient conditions for global asymptotic stability of the zero solution of system (3). It implies that, if system (1) has an equilibrium point, then this equilibrium point is unique and globally attractive.</p><p>Theorem 3.1. Assume that system (1) has an equilibrium point and that assumptions (H<sub>1</sub>)-(H<sub>5</sub>) hold. Then the equilibrium point of system (1) is unique and globally asymptotically stable if there exist n &#215; n matrices P &gt; 0 , Q i &gt; 0 , i = 1 , 2 , ⋯ , 7 such that the following LMI holds:</p><p>[ ∏ ρ σ P C P A P B σ C T P C ρ σ σ C T P C σ C T P A σ C T P B ⋆ − Q 1 0 0 0 0 0 0 ⋆ ⋆ − Q 2 0 0 0 0 0 ⋆ ⋆ ⋆ − Q 3 0 0 0 0 ⋆ ⋆ ⋆ ⋆ − Q 4 0 0 0 ⋆ ⋆ ⋆ ⋆ ⋆ − Q 5 0 0 ⋆ ⋆ ⋆ ⋆ ⋆ ⋆ − Q 6 0 ⋆ ⋆ ⋆ ⋆ ⋆ ⋆ ⋆ − Q 7 ] &lt; 0 , (4)</p><p>and</p><p>[ P ( E − D k ) T P ⋆ P ] &gt; 0 , k ∈ ℤ + , (5)</p><p>where</p><p>∏ = − 2 P C + [ λ max ( Q 2 ) + λ max ( Q 6 ) ] M E + Q 4 + ρ σ 1 − ρ σ [ Q 1 + Q 5 ]     + 1 1 − ρ τ λ max ( Q 3 + Q 7 ) N E .</p><p>Proof. Let y ( t ) = y ( t , 0 , φ ) be a solution of system (2) through ( 0, φ ) , where φ ∈ ℂ . 
Construct a Lyapunov-Krasovskii functional in the form</p><p>V ( t , y ) = V 1 ( t , y ) + V 2 ( t , y ) + V 3 ( t , y ) + V 4 ( t , y ) , (6)</p><p>where</p><p>V 1 = [ y ( t ) − C ∫ t − σ ( t ) t y ( s ) d s ] T P [ y ( t ) − C ∫ t − σ ( t ) t y ( s ) d s ] ,</p><p>V 2 = ρ σ 1 − ρ σ ∫ t − σ ( t ) t y T ( s ) [ Q 1 + Q 5 ] y ( s ) d s ,</p><p>V 3 = 1 1 − ρ τ ∫ t − τ ( t ) t Γ T ( y ( s ) ) [ Q 3 + Q 7 ] Γ ( y ( s ) ) d s ,</p><p>V 4 = σ ∫ t − σ t ∫ s t y T ( u ) Q 8 y ( u ) d u d s ,</p><p>Q 8 = C T P C Q 4 − 1 C T P C + ρ σ C T P C Q 5 − 1 C T P C + C T P A Q 6 − 1 A T P C + C T P B Q 7 − 1 B T P C .</p><p>Calculating the upper right derivative of V ( t , y ) along the solution of system (2) at the continuous interval [ t k − 1 , t k ) , k ∈ ℤ + , and considering the Lemma 2.1-2.3, it can be deduced that</p><p>D + V 1 = 2 [ y ( t ) − C ∫ t − σ ( t ) t y ( s ) d s ] T P [ − C y ( t ) − C y ( t − σ ( t ) ) σ ˙ ( t )       + A Ω ( y ( t ) ) + B Γ ( y ( t − τ ( t ) ) ) ] = − 2 y T ( t ) P C y ( t ) − 2 y T ( t ) P C y ( t − σ ( t ) ) σ ˙ ( t ) + 2 y T ( t ) P A Ω ( y ( t ) )       + 2 y T ( t ) P B Γ ( y ( t − τ ( t ) ) ) + 2 [ ∫ t − σ ( t ) t y ( s ) d s ] T C T P C y ( t )       + 2 [ ∫ t − σ ( t ) t y ( s ) d s ] T C T P C y ( t − σ ( t ) ) σ ˙ ( t ) − 2 [ ∫ t − σ ( t ) t y ( s ) d s ] T C T P A Ω y (t)</p><p>      − 2 [ ∫ t − σ ( t ) t y ( s ) d s ] T C T P B Γ ( y ( t − τ ( t ) ) ) ≤ − 2 y T ( t ) P C y ( t ) + y T ( t ) P C Q 1 − 1 C T P y ( t ) ρ σ + ρ σ y T ( t − σ ( t ) ) Q 1 y ( t − σ ( t ) )     + Ω T ( y ( t ) ) Q 2 Ω ( y ( t ) ) + y T ( t ) P A Q 2 − 1 A T P y ( t )     + Γ T ( y ( t − τ ( t ) ) ) Q 3 Γ ( y ( t − τ ( t ) ) ) + y T ( t ) P B Q 3 − 1 B T P y ( t )     + [ ∫ t − σ ( t ) t y ( s ) d s ] T C T P C Q 4 − 1 C T P C [ ∫ t − σ ( t ) t y ( s ) d s ]</p><p>+   y T ( t ) Q 4 y ( t ) + ρ σ [ ∫ t − σ ( t ) t y ( s ) d s ] T C T P C Q 5 − 1 C T P C [ ∫ t − σ ( t ) t y ( s ) d s ] +   ρ σ y T ( t − σ ( t ) ) Q 5 y ( t − σ ( t ) ) + [ ∫ t − σ ( t ) t y ( 
s ) d s ] T C T P A Q 6 − 1 A T P C [ ∫ t − σ ( t ) t y ( s ) d s ] +   Ω T ( y ( t ) ) Q 6 Ω ( y ( t ) ) + [ ∫ t − σ ( t ) t y ( s ) d s ] T C T P B Q 7 − 1 B T P C [ ∫ t − σ ( t ) t y ( s ) d s ] +   Γ T ( y ( t − τ ( t ) ) ) Q 7 Γ ( y ( t − τ ( t ) ) )</p><p>≤ − 2 y T ( t ) P C y ( t ) + y T ( t ) P C Q 1 − 1 C T P y ( t ) ρ σ + ρ σ y T ( t − σ ( t ) ) Q 1 y ( t − σ ( t ) )     + y T ( t ) λ max ( Q 2 ) M E y ( t ) + y T ( t ) P A Q 2 − 1 A T P y ( t )     + Γ T ( y ( t − τ ( t ) ) ) Q 3 Γ ( y ( t − τ ( t ) ) ) + y T ( t ) P B Q 3 − 1 B T P y ( t )     + [ ∫ t − σ ( t ) t y ( s ) d s ] T C T P C Q 4 − 1 C T P C [ ∫ t − σ ( t ) t y ( s ) d s ]     + y T ( t ) Q 4 y ( t ) + ρ σ [ ∫ t − σ ( t ) t y ( s ) d s ] T C T P C Q 5 − 1 C T P C [ ∫ t − σ ( t ) t y ( s ) d s ] (7)</p><p>+ ρ σ y T ( t − σ ( t ) ) Q 5 y ( t − σ ( t ) ) + [ ∫ t − σ ( t ) t y ( s ) d s ] T C T P A Q 6 − 1 A T P C [ ∫ t − σ ( t ) t y ( s ) d s ] + y T ( t ) λ max ( Q 6 ) N E y ( t ) + [ ∫ t − σ ( t ) t y ( s ) d s ] T C T P B Q 7 − 1 B T P C [ ∫ t − σ ( t ) t y ( s ) d s ] + Γ T ( y ( t − τ ( t ) ) ) Q 7 Γ ( y ( t − τ ( t ) ) ) ,</p><p>D + V 2 = ρ σ 1 − ρ σ y T ( t ) [ Q 1 + Q 5 ] y ( t )     − y T ( t − σ ( t ) ) [ Q 1 + Q 5 ] y ( t − σ ( t ) ) ρ σ ( 1 − σ ˙ ( t ) ) 1 − ρ σ ≤ ρ σ 1 − ρ σ y T ( t ) [ Q 1 + Q 5 ] y ( t )     − y T ( t − σ ( t ) ) [ Q 1 + Q 5 ] y ( t − σ ( t ) ) ρ σ , (8)</p><p>D + V 3 = 1 1 − ρ τ Γ T ( y ( t ) ) [ Q 3 + Q 7 ] Γ ( y ( t ) )     − Γ T ( y ( t − τ ( t ) ) ) [ Q 3 + Q 7 ] Γ ( y ( t − τ ( t ) ) ) 1 − τ ˙ ( t ) 1 − ρ τ ≤ 1 1 − ρ τ Γ T ( y ( t ) ) [ Q 3 + Q 7 ] Γ ( y ( t ) )     − Γ T ( y ( t − τ ( t ) ) ) [ Q 3 + Q 7 ] Γ ( y ( t − τ ( t ) ) ) , (9)</p><p>D + V 4 = σ 2 y T ( t ) Q 8 y ( t ) − σ ∫ t − σ t     y T ( s ) Q 8 y ( s ) d s ≤ σ 2 y T ( t ) Q 8 y ( t ) − σ ( t ) ∫ t − σ ( t ) t     y T ( s ) Q 8 y ( s ) d s ≤ σ 2 y T ( t ) Q 8 y ( t ) − [ ∫ t − σ ( t ) t     y ( s ) d s ] T Q 8 [ ∫ t − σ ( t ) t     y ( s ) d s ] , (10)</p><p>where</p><p>Q 8 = C T P C Q 4 − 1 C 
T P C + ρ σ C T P C Q 5 − 1 C T P C + C T P A Q 6 − 1 A T P C + C T P B Q 7 − 1 B T P C .</p><p>Combining (6)-(10), one may deduce that</p><p>D + V ≤ y T ( t ) [ − 2 P C + ρ σ P C Q 1 − 1 C T P + λ max ( Q 2 ) M E + P A Q 2 − 1 A T P + P B Q 3 − 1 B T P     + Q 4 + λ max ( Q 6 ) M E + ρ σ 1 − ρ σ ( Q 1 + Q 5 ) + 1 1 − ρ τ λ max ( Q 3 + Q 7 ) N E     + σ 2 C T P C Q 4 − 1 C T P C + ρ σ σ 2 C T P C Q 5 − 1 C T P C     + σ 2 C T P A Q 6 − 1 A T P C + σ 2 C T P B Q 7 − 1 B T P C ] y ( t ) = y T ( t ) Σ y ( t ) ,</p><p>where</p><p>Σ = − 2 P C + ρ σ P C Q 1 − 1 C T P + λ max ( Q 2 ) M E + P A Q 2 − 1 A T P + P B Q 3 − 1 B T P     + Q 4 + λ max ( Q 6 ) M E + ρ σ 1 − ρ σ ( Q 1 + Q 5 ) + 1 1 − ρ τ λ max ( Q 3 + Q 7 ) N E     + σ 2 C T P C Q 4 − 1 C T P C + ρ σ σ 2 C T P C Q 5 − 1 C T P C     + σ 2 C T P A Q 6 − 1 A T P C + σ 2 C T P B Q 7 − 1 B T P C</p><p>By the well known Schur complements, we know that Σ &lt; 0 if and only if the LMI (4) holds. Hence, one may derive that</p><p>D + V ( t , y ) ≤ − y T ( t ) Σ * y ( t ) , t ∈ [ t k − 1 , t k ) , k ∈ ℤ + , (11)</p><p>where Σ * = − Σ &gt; 0 .</p><p>Suppose that t ∈ [ t n − 1 , t n ) , for some n ∈ ℤ + . Then integrating inequality (11) at each interval [ t k − 1 , t k ) ,1 ≤ k ≤ n − 1 , we derive that</p><p>V ( t 1 − ) ≤ V ( 0 ) − ∫ 0 t 1   y T ( s ) Σ * y ( s ) d s ,</p><p>V ( t 2 − ) ≤ V ( t 1 ) − ∫ t 1 t 2 y T ( s ) Σ * y ( s ) d s ,</p><p>⋮</p><p>V ( t n − 1 − ) ≤ V ( t n − 2 ) − ∫ t n − 2 t n − 1 y T ( s ) Σ * y ( s ) d s ,</p><p>V ( t ) ≤ V ( t n − 1 ) − ∫ t n − 1 t y T ( s ) Σ * y ( s ) d s ,</p><p>which implies that</p><p>V ( t ) ≤ V ( 0 ) − ∫ 0 t     y T ( s ) Σ * y ( s ) d s + ∑ 0 &lt; t k ≤ t [ V ( t k ) − V ( t k − ) ] , t ≥ 0. 
(12)</p><p>In order to analyze (12), we need to consider the change of V at the impulse times.</p><p>Firstly, it follows from (5) that</p><p>[ P ( E − D k ) T P ⋆ P ] &gt; 0 ⇔ [ E O O P − 1 ] [ P ( E − D k ) T P ⋆ P ] [ E O O P − 1 ] &gt; 0</p><p>⇔ [ P ( E − D k ) T ⋆ P − 1 ] &gt; 0</p><p>⇔ P − ( E − D k ) T P ( E − D k ) &gt; 0 , (13)</p><p>in which the last equivalence is obtained by Lemma 2.4.</p><p>Secondly, from system (3), it can be obtained that</p><p>y ( t k ) − C ∫ t k − σ ( t k ) t k   y ( s ) d s = y ( t k − ) − D k [ y ( t k − ) − C ∫ t k − σ ( t k ) t k y ( s ) d s ] − C ∫ t k − σ ( t k ) t k y ( s ) d s = ( E − D k ) [ y ( t k − ) − C ∫ t k − σ ( t k ) t k y ( s ) d s ] ,</p><p>which together with (13) yields</p><p>V 1 ( t k ) = [ y ( t k ) − C ∫ t k − σ ( t k ) t k     y ( s ) d s ] T P [ y ( t k ) − C ∫ t k − σ ( t k ) t k     y ( s ) d s ] = [ y ( t k − ) − C ∫ t k − σ ( t k ) t k     y ( s ) d s ] T ( E − D k ) T P ( E − D k ) [ y ( t k − ) − C ∫ t k − σ ( t k ) t k     y ( s ) d s ] &lt; [ y ( t k − ) − C ∫ t k − σ ( t k ) t k     y ( s ) d s ] T P [ y ( t k − ) − C ∫ t k − σ ( t k ) t k     y ( s ) d s ] = V 1 ( t k − ) .</p><p>Obviously, we have V i ( t k ) ≤ V i ( t k − ) , i = 2 , 3 , 4 , k ∈ ℤ + .</p><p>Thus, we can deduce that</p><p>V ( t k ) ≤ V ( t k − ) , k ∈ ℤ + .</p><p>Substituting the above inequality into (12) yields</p><p>V ( t ) + ∫ 0 t     y T ( s ) Σ * y ( s ) d s ≤ V ( 0 ) , t ≥ 0. 
(14)</p><p>By a simple calculation, it can be deduced that</p><p>V ( 0 ) ≤ { λ m a x ( P ) ( 1 + ‖ C ‖ σ ) 2 + ρ σ σ 1 − ρ σ λ m a x ( Q 1 + Q 5 )     + τ λ m a x ( Q 3 + Q 7 ) N 1 − ρ τ + σ 3 λ m a x ( Q 8 ) } ‖ φ ‖ η 2 = Δ ‖ φ ‖ η 2 ,</p><p>where Δ = λ m a x ( P ) ( 1 + ‖ C ‖ σ ) 2 + ρ σ σ 1 − ρ σ λ m a x ( Q 1 + Q 5 )     + τ λ m a x ( Q 3 + Q 7 ) N 1 − ρ τ + σ 3 λ m a x ( Q 8 ) .</p><p>It follows that</p><p>λ m i n ( P ) ‖ y ( t ) − C ∫ t − σ ( t ) t     y ( s ) d s ‖ 2 ≤ V 1 ≤ V ≤ V ( 0 ) ≤ Δ ‖ φ ‖ η 2 ,</p><p>which implies that</p><p>‖ y ( t ) ‖ ≤ ‖ C ‖ ∫ t − σ ( t ) t     ‖ y ( s ) ‖ d s + ( Δ / λ m i n ( P ) ) 1 / 2 ‖ φ ‖ η .</p><p>Employing the Gronwall inequality, we get</p><p>‖ y ( t ) ‖ ≤ ( Δ / λ m i n ( P ) ) 1 / 2 ‖ φ ‖ η e σ ( t ) ‖ C ‖                 ≤ ( Δ / λ m i n ( P ) ) 1 / 2 e σ ‖ C ‖ ‖ φ ‖ η &lt; ∞ ,</p><p>which implies that the zero solution of system (2) is locally stable and that y is uniformly bounded on [ 0, ∞ ) .</p><p>Thus, considering the continuity of the activation functions f and g, it can be deduced from system (2) that there exists some constant R &gt; 0 such that ‖ y ˙ ( t ) ‖ ≤ R , t ∈ [ t k − 1 , t k ) , k ∈ ℤ + , where y ˙ denotes the right-hand derivative of y at the impulse times t k − 1 , k ∈ ℤ + .</p><p>In the following, we shall prove that ‖ y ( t ) ‖ → 0 as t → ∞ .</p><p>We first show that</p><p>‖ y ( t k ) ‖ → 0, t k → ∞ . (15)</p><p>It is equivalent to prove that | y i ( t k ) | → 0 as t k → ∞ , i ∈ Λ . Note that</p><p>| y ˙ i ( t ) | ≤ R , t ∈ [ t k − 1 , t k ) , k ∈ ℤ + ; then for any ϵ &gt; 0 , there exists a δ = ϵ 2 R &gt; 0</p><p>such that, for any t ′ , t ″ ∈ [ t k − 1 , t k ) , k ∈ ℤ + , | t ′ − t ″ | &lt; δ implies that</p><p>| y i ( t ′ ) − y i ( t ″ ) | ≤ R | t ′ − t ″ | &lt; ϵ 2 , i ∈ Λ   . (16)</p><p>By (H<sub>3</sub>), we define δ ¯ = min { δ , 1 2 θ } , where θ = inf k ∈ ℤ + { t k − t k − 1 } &gt; 0 . 
From</p><p>(14), it can be obtained that</p><p>∫ 0 t | y i ( s ) | 2 d s ≤ ∫ 0 t     y ( s ) T y ( s ) d s ≤ 1 λ min ( Σ * ) ∫ 0 t     y ( s ) T Σ * y ( s ) d s &lt; ∞ , t &gt; 0 ,</p><p>which implies that ∫ t k t k + δ ¯ | y i ( s ) | 2 d s → 0 as t k → ∞ .</p><p>Applying Lemma 2.2, we get</p><p>∫ t k t k + δ ¯ | y i ( s ) | d s ≤ ( δ ¯ ∫ t k t k + δ ¯ | y i ( s ) | 2 d s ) 1 / 2 → 0 ,   t k → ∞ . (17)</p><p>So, for the above-given ϵ , there exists a T = T ( ϵ ) &gt; 0 such that t k &gt; T implies that</p><p>∫ t k t k + δ ¯ | y i ( s ) | d s &lt; ϵ 2 δ ¯ .</p><p>From the continuity of | y i ( t ) | on [ t k , t k + δ ¯ ] , and using the integral mean value theorem, there exists some constant ξ k ∈ [ t k , t k + δ ¯ ] such that</p><p>| y i ( ξ k ) | δ ¯ = ∫ t k t k + δ ¯ | y i ( s ) | d s &lt; ϵ 2 δ ¯ ,</p><p>which leads to</p><p>| y i ( ξ k ) | &lt; ϵ 2 . (18)</p><p>Combining (16) and (18), one may deduce that, for any ϵ &gt; 0 , there exists a T = T ( ϵ ) &gt; 0 such that t k &gt; T implies that</p><p>| y i ( t k ) | ≤ | y i ( t k ) − y i ( ξ k ) | + | y i ( ξ k ) | ≤ ϵ 2 + ϵ 2 = ϵ     .</p><p>This completes the proof of (15).</p><p>Now we are in a position to prove that | y i ( t ) | → 0 as t → ∞ , i ∈ Λ . In fact,</p><p>it follows from (16) that, for any ϵ &gt; 0 , there exists a δ = ϵ 2 R &gt; 0</p><p>such that, for any t ′ , t ″ ∈ [ t k − 1 , t k ) , k ∈ ℤ + , | t ′ − t ″ | &lt; δ implies that</p><p>| y i ( t ′ ) − y i ( t ″ ) | ≤ ϵ 2 , i ∈ Λ . (19)</p><p>Since (15) holds, there exists a constant T 1 = T 1 ( ϵ ) &gt; 0 such that</p><p>| y i ( t k ) | &lt; ϵ 2 , t k &gt; T 1 . 
(20)</p><p>In addition, applying the same argument as in (17), we can deduce that</p><p>∫ t t + δ ¯ | y i ( s ) | d s → 0 , t → ∞ ,</p><p>where δ ¯ = min { δ , 1 2 θ } , θ = inf k ∈ ℤ + { t k − t k − 1 } &gt; 0.</p><p>So, for the above-given ϵ , there exists a constant T 2 = T 2 ( ϵ ) &gt; 0 such that</p><p>∫ t − δ ¯ t | y i ( s ) | d s &lt; ϵ 2 δ ¯ , t &gt; T 2 . (21)</p><p>Set T * = min { t q | t q ≥ max { T 1 , T 2 } , q ∈ ℤ + } . Now we claim that | y i ( t ) | ≤ ϵ , t &gt; T * . In fact, for any t &gt; T * , assume without loss of generality that t ∈ [ t p , t p + 1 ) , p ≥ q . We consider the following two cases.</p><p>Case 1. t ∈ [ t p , t p + δ ¯ ] . In this case, it is obvious from (19) and (20) that</p><p>| y i ( t ) | ≤ | y i ( t ) − y i ( t p ) | + | y i ( t p ) | ≤ ϵ 2 + ϵ 2 = ϵ .</p><p>Case 2. t ∈ [ t p + δ ¯ , t p + 1 ) . In this case, we know that y i ( s ) is continuous on [ t − δ ¯ , t ] ⊆ [ t p , t p + 1 ) . By the integral mean value theorem, there exists at least one point υ t ∈ [ t − δ ¯ , t ] such that</p><p>∫ t − δ ¯ t | y i ( s ) | d s = | y i ( υ t ) | δ ¯ ,</p><p>which together with (21) yields | y i ( υ t ) | &lt; ϵ 2 . Then, in view of υ t ∈ [ t − δ ¯ , t ] , we obtain</p><p>| y i ( t ) | ≤ | y i ( t ) − y i ( υ t ) | + | y i ( υ t ) | ≤ ϵ 2 + ϵ 2 = ϵ .</p><p>So we have proved that | y i ( t ) | ≤ ϵ , t &gt; T * . Therefore, the zero solution of system (2) or (3) is globally asymptotically stable, which implies that system (1) has a unique equilibrium point which is globally asymptotically stable. The proof of Theorem 3.1 is therefore complete. □</p><p>Remark 3.1. Theorem 3.1 provides some delay-dependent conditions for the global asymptotic stability of the unique equilibrium point of impulsive Hopfield neural networks with leakage time-varying delay. 
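</p><p>Condition (5) and its Schur-complement form (13), P − ( E − D k ) T P ( E − D k ) &gt; 0 , are easy to check numerically for given matrices. The following sketch (illustrative matrices only, assuming NumPy) confirms that the two formulations agree, both for an impulse gain satisfying the condition and for one violating it.</p>

```python
import numpy as np

# Check LMI (5) against its Schur-complement form (13):
#   [[P, (E - D_k)^T P], [P (E - D_k), P]] > 0
#   iff  P - (E - D_k)^T P (E - D_k) > 0.
# P and D_k below are illustrative choices, not data from the paper.
def is_pd(A):
    """Positive definiteness via eigenvalues of the symmetric part."""
    return bool(np.all(np.linalg.eigvalsh((A + A.T) / 2) > 0))

rng = np.random.default_rng(0)
n = 3
M = rng.standard_normal((n, n))
P = M @ M.T + n * np.eye(n)        # a symmetric positive definite P
E = np.eye(n)

for D_k in (0.6 * E, 2.5 * E):     # the second choice violates (5)
    G = E - D_k
    block = np.block([[P, G.T @ P],
                      [P @ G, P]])  # block matrix of condition (5)
    schur = P - G.T @ P @ G         # condition (13)
    print(is_pd(block), is_pd(schur))  # the two verdicts always agree
```

<p>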
We would like to note that such a result has not been reported elsewhere in the literature.</p><p>In particular, when the leakage delay and the transmission delay are both constant, i.e., σ ( t ) ≡ σ , τ ( t ) ≡ τ , system (1) becomes</p><p>{ x ˙ ( t ) = − C x ( t − σ ) + A f ( x ( t ) ) + B g ( x ( t − τ ) ) + J , t &gt; 0, t ≠ t k , Δ x ( t k ) = x ( t k ) − x ( t k − ) = J k ( x ( t k − ) , x t k − ) ,   k ∈ ℤ + , x ( t ) = φ ( t ) ,   t ∈ [ − η ,0 ] . (22)</p><p>For system (22), we have the following result by Theorem 3.1.</p><p>Corollary 3.1. Assume that system (22) has an equilibrium point and that assumptions (H<sub>2</sub>)-(H<sub>5</sub>) hold. Then the equilibrium point of system (22) is unique and globally asymptotically stable if there exist n &#215; n matrices P &gt; 0 , Q i &gt; 0 , i = 1 , 2 , ⋯ , 5 such that the following LMI holds:</p><p>[ ∏ P A P B σ C T P C σ C T P A σ C T P B ⋆ − Q 1 0 0 0 0 ⋆ ⋆ − Q 2 0 0 0 ⋆ ⋆ ⋆ − Q 3 0 0 ⋆ ⋆ ⋆ ⋆ − Q 4 0 ⋆ ⋆ ⋆ ⋆ ⋆ − Q 5 ] &lt; 0,</p><p>and</p><p>[ P ( E − D k ) T P ⋆ P ] &gt; 0, k ∈ ℤ + ,</p><p>where</p><p>∏ = − 2 P C + [ λ m a x ( Q 1 ) + λ m a x ( Q 4 ) ] M E + Q 3 + λ m a x ( Q 2 + Q 5 ) N E .</p><p>Remark 3.2. The conditions in Corollary 3.1 are independent of the transmission delay and depend only on the leakage delay, since ρ τ = 0 in Theorem 3.1 in this case. So, based on our results, we would like to say that the stability of system (1) is more sensitive to the leakage delay, whether it is time-varying or constant. In other words, to obtain the stability of system (1) we should control not only the bound of the leakage delay but also the bound of the derivative of the leakage delay, while the bound of the transmission delay τ or τ ( t ) does not affect the stability of the system in our results.</p><p>Remark 3.3. 
So far, many papers have studied the dynamics of time-delay systems and impulsive systems, and many effective methods and results have been developed [<xref ref-type="bibr" rid="scirp.80159-ref19">19</xref>] - [<xref ref-type="bibr" rid="scirp.80159-ref26">26</xref>]. However, those results cannot be applied to systems with leakage time-varying delay and impulses, which can affect the dynamics of a system essentially. In this paper, we have investigated the stability of impulsive Hopfield neural networks with leakage time-varying delay by a model transformation technique and a certain Lyapunov-Krasovskii functional combined with the LMI technique, and established a new criterion. How to improve the dynamics of systems with leakage time-varying delay and impulses may be an interesting problem and requires further research.</p></sec><sec id="s4"><title>4. Conclusion</title><p>We have studied the global asymptotic stability of the equilibrium point of impulsive Hopfield neural networks with leakage time-varying delay. Via an appropriate Lyapunov-Krasovskii functional and a model transformation technique, a new stability criterion which depends on the impulses and the bounds of the leakage time-varying delay and its derivative has been presented in terms of a linear matrix inequality. To the best of our knowledge, so far, few authors have considered the dynamics of systems with leakage time-varying delay and impulses, which could affect the dynamics of neural networks essentially. How to further reduce the conservatism of the developed results is still a difficult problem and needs consideration in future work.</p></sec><sec id="s5"><title>Fund</title><p>Supported by the National Natural Science Foundation of China (11601269) and the Project of Shandong Province Higher Educational Science and Technology Program (J15LI02).</p></sec><sec id="s6"><title>Cite this paper</title><p>Xi, Q. 
(2017) New Results of Global Asymptotical Stability for Impulsive Hopfield Neural Networks with Leakage Time-Varying Delay. Journal of Applied Mathematics and Physics, 5, 2112-2126. https://doi.org/10.4236/jamp.2017.511173</p></sec></body><back><ref-list><title>References</title><ref id="scirp.80159-ref1"><label>1</label><mixed-citation publication-type="other" xlink:type="simple">Gopalsamy, K. (1992) Stability and Oscillations in Delay Differential Equations of Population Dynamics. Kluwer, Dordrecht.</mixed-citation></ref><ref id="scirp.80159-ref2"><label>2</label><mixed-citation publication-type="other" xlink:type="simple">Hale, J. and Verduyn Lunel, S. (1993) Introduction to Functional Differential Equations. Springer-Verlag, New York.</mixed-citation></ref><ref id="scirp.80159-ref3"><label>3</label><mixed-citation publication-type="other" xlink:type="simple">Haykin, S. (1994) Neural Networks. Prentice-Hall, NJ.</mixed-citation></ref><ref id="scirp.80159-ref4"><label>4</label><mixed-citation publication-type="other" xlink:type="simple">Kolmanovskii, V. and Myshkis, A. (1999) Introduction to the Theory and Applications of Functional Differential Equations. Kluwer, Dordrecht, The Netherlands.</mixed-citation></ref><ref id="scirp.80159-ref5"><label>5</label><mixed-citation publication-type="other" xlink:type="simple">Niculescu, S. (2001) Delay Effects on Stability: A Robust Control Approach. Springer-Verlag, New York.</mixed-citation></ref><ref id="scirp.80159-ref6"><label>6</label><mixed-citation publication-type="other" xlink:type="simple">Van den Driessche, P. and Zou, X. (1998) Global Attractivity in Delayed Hopfield Neural Network Models. SIAM Journal on Applied Mathematics, 58, 1878-1890. https://doi.org/10.1137/S0036139997321219</mixed-citation></ref><ref id="scirp.80159-ref7"><label>7</label><mixed-citation publication-type="other" xlink:type="simple">Liao, X. and Yu, J. (1998) Robust Stability for Interval Hopfield Neural Networks with Time Delay. 
IEEE Transactions on Neural Networks, 9, 1042-1045. https://doi.org/10.1109/72.712187</mixed-citation></ref><ref id="scirp.80159-ref8"><label>8</label><mixed-citation publication-type="journal" xlink:type="simple"><name name-style="western"><surname>Liao</surname><given-names> X. and Xiao D. </given-names></name>,<etal>et al</etal>. (<year>2000</year>)<article-title>Global Exponential Stability of Hopfield Neural Networks with Time Varying Delays</article-title><source> Acta Electronica Sinica</source><volume> 28</volume>,<fpage> 87</fpage>-<lpage>90</lpage>.<pub-id pub-id-type="doi"></pub-id></mixed-citation></ref><ref id="scirp.80159-ref9"><label>9</label><mixed-citation publication-type="journal" xlink:type="simple"><name name-style="western"><surname>Chen</surname><given-names> T. </given-names></name>,<etal>et al</etal>. (<year>2001</year>)<article-title>Global Exponential Stability of Delayed Hopfield Neural Networks</article-title><source> Neural Networks</source><volume> 14</volume>,<fpage> 977</fpage>-<lpage>980</lpage>.<pub-id pub-id-type="doi"></pub-id></mixed-citation></ref><ref id="scirp.80159-ref10"><label>10</label><mixed-citation publication-type="other" xlink:type="simple">Wang, L. and Xu, D. (2002) Stability Analysis of Hopfield Neural Networks with Time Delays. Applied Mathematics and Mechanics (English Edition), 23, 65-70. https://doi.org/10.1007/BF02437731</mixed-citation></ref><ref id="scirp.80159-ref11"><label>11</label><mixed-citation publication-type="other" xlink:type="simple">Chen, G., Pu, Z. and Zhang, J. (2005) Global Exponential Stability and Global Attractivity for Variably Delayed Hopfield Neural Network Models. Chinese Journal of Engineering Mathematics, 22, 821-826.</mixed-citation></ref><ref id="scirp.80159-ref12"><label>12</label><mixed-citation publication-type="other" xlink:type="simple">Hou, Y., Liao, T. and Yan, J. (2007) Global Asymptotic Stability for a Class of Nonlinear Neural Networks with Multiple Delays. 
Nonlinear Analysis, Theory, Methods, Applications, 67, 3037-3040.</mixed-citation></ref><ref id="scirp.80159-ref13"><label>13</label><mixed-citation publication-type="other" xlink:type="simple">Zhang, Q., Wei, X. and Xu, J. (2007) Delay-Dependent Global Stability Results for Delayed Hopfield Neural Networks. Chaos, Solitons and Fractals, 34, 662-668.</mixed-citation></ref><ref id="scirp.80159-ref14"><label>14</label><mixed-citation publication-type="other" xlink:type="simple">Lou, X. and Cui, B. (2007) Novel Global Stability Criteria for High-Order Hopfield-Type Neural Networks with Time-Varying Delays. Journal of Mathematical Analysis and Applications, 330, 144-158.</mixed-citation></ref><ref id="scirp.80159-ref15"><label>15</label><mixed-citation publication-type="other" xlink:type="simple">Zhou, Q. and Wan, L. (2008) Exponential Stability of Stochastic Delayed Hopfield Neural Networks. Applied Mathematics and Computation, 206, 818-824.</mixed-citation></ref><ref id="scirp.80159-ref16"><label>16</label><mixed-citation publication-type="journal" xlink:type="simple"><name name-style="western"><surname>Wu</surname><given-names> H. </given-names></name>,<etal>et al</etal>. (<year>2009</year>)<article-title>Global Exponential Stability of Hopfield Neural Networks with Delays and Inverse Lipschitz Neuron Activations</article-title><source> Nonlinear Analysis: Real World Applications</source><volume> 14</volume>,<fpage> 2776</fpage>-<lpage>2783</lpage>.<pub-id pub-id-type="doi"></pub-id></mixed-citation></ref><ref id="scirp.80159-ref17"><label>17</label><mixed-citation publication-type="other" xlink:type="simple">Bainov, D. and Simeonov, P. (1989) Systems with Impulse Effect: Stability, Theory and Applications. Ellis Horwood Limited, New York.</mixed-citation></ref><ref id="scirp.80159-ref18"><label>18</label><mixed-citation publication-type="other" xlink:type="simple">Fu, X., Yan, B. and Liu, Y. (2005) Introduction to Impulsive Differential Systems.
Science Press, Beijing.</mixed-citation></ref><ref id="scirp.80159-ref19"><label>19</label><mixed-citation publication-type="other" xlink:type="simple">Liu, X. and Wang, Q. (2007) The Method of Lyapunov Functionals and Exponential Stability of Impulsive Systems with Time Delay. Nonlinear Analysis, Theory, Methods, Applications, 66, 1465-1484.</mixed-citation></ref><ref id="scirp.80159-ref20"><label>20</label><mixed-citation publication-type="other" xlink:type="simple">Liu, B., Liu, X. and Liao, X. (2003) Robust H-Stability of Hopfield Neural Networks with Impulsive Effects and Design of Impulsive Controllers. Control Theory and Applications, 20, 168-172.</mixed-citation></ref><ref id="scirp.80159-ref21"><label>21</label><mixed-citation publication-type="other" xlink:type="simple">Yang, Z. and Xu, D. (2006) Global Exponential Stability of Hopfield Neural Networks with Variable Delays and Impulsive Effects. Applied Mathematics and Mechanics, 27, 1517-1522. https://doi.org/10.1007/s10483-006-1109-1</mixed-citation></ref><ref id="scirp.80159-ref22"><label>22</label><mixed-citation publication-type="other" xlink:type="simple">Shen, J., Liu, Y. and Li, J. (2007) Asymptotic Behavior of Solutions of Nonlinear Neutral Differential Equations with Impulses. Journal of Mathematical Analysis and Applications, 332, 179-189.</mixed-citation></ref><ref id="scirp.80159-ref23"><label>23</label><mixed-citation publication-type="journal" xlink:type="simple"><name name-style="western"><surname>Zhou</surname><given-names> Q. </given-names></name>,<etal>et al</etal>. (<year>2009</year>)<article-title>Global Exponential Stability of BAM Neural Networks with Distributed Delays and Impulses</article-title><source> Nonlinear Analysis: Real World Applications</source><volume> 10</volume>,<fpage> 144</fpage>-<lpage>153</lpage>.<pub-id pub-id-type="doi"></pub-id></mixed-citation></ref><ref id="scirp.80159-ref24"><label>24</label><mixed-citation publication-type="other" xlink:type="simple">Li, X.
and Chen, Z. (2009) Stability Properties for Hopfield Neural Networks with Delays and Impulsive Perturbations. Nonlinear Analysis, Theory, Methods, Applications, 10, 3253-3265.</mixed-citation></ref><ref id="scirp.80159-ref25"><label>25</label><mixed-citation publication-type="other" xlink:type="simple">Fu, X. and Li, X. (2009) Global Exponential Stability and Global Attractivity of Impulsive Hopfield Neural Networks with Time Delays. Journal of Computational and Applied Mathematics, 231, 187-199.</mixed-citation></ref><ref id="scirp.80159-ref26"><label>26</label><mixed-citation publication-type="other" xlink:type="simple">Chen, J., Li, X. and Wang, D. (2013) Asymptotic Stability and Exponential Stability of Impulsive Delayed Hopfield Neural Networks. Abstract and Applied Analysis, 2013, Article ID: 638496.</mixed-citation></ref><ref id="scirp.80159-ref27"><label>27</label><mixed-citation publication-type="journal" xlink:type="simple"><name name-style="western"><surname>Gopalsamy</surname><given-names> K. </given-names></name>,<etal>et al</etal>. (<year>2007</year>)<article-title>Leakage Delays in BAM</article-title><source> Journal of Mathematical Analysis and Applications</source><volume> 325</volume>,<fpage> 1117</fpage>-<lpage>1132</lpage>.<pub-id pub-id-type="doi"></pub-id></mixed-citation></ref><ref id="scirp.80159-ref28"><label>28</label><mixed-citation publication-type="journal" xlink:type="simple"><name name-style="western"><surname>Peng</surname><given-names> S. </given-names></name>,<etal>et al</etal>. (<year>2010</year>)<article-title>Attractive Periodic Solutions of BAM Neural Networks with Continuously Distributed Delays in the Leakage Terms</article-title><source> Nonlinear Analysis: Real World Applications</source><volume> 11</volume>,<fpage> 2141</fpage>-<lpage>2151</lpage>.<pub-id pub-id-type="doi"></pub-id></mixed-citation></ref><ref id="scirp.80159-ref29"><label>29</label><mixed-citation publication-type="other" xlink:type="simple">Li, X. 
and Cao, J. (2010) Delay-Dependent Stability of Neural Networks of Neutral Type with Time Delay in the Leakage Term. Nonlinearity, 23, 1709-1726. https://doi.org/10.1088/0951-7715/23/7/010</mixed-citation></ref><ref id="scirp.80159-ref30"><label>30</label><mixed-citation publication-type="other" xlink:type="simple">Balasubramaniam, P., Kalpana, M. and Rakkiyappan, R. (2011) State Estimation for Fuzzy Cellular Neural Networks with Time Delay in the Leakage Term, Discrete and Unbounded Distributed Delays. Computers and Mathematics with Applications, 62, 3959-3972.</mixed-citation></ref><ref id="scirp.80159-ref31"><label>31</label><mixed-citation publication-type="other" xlink:type="simple">Lakshmanan, S., Park, J., Jung, H. and Balasubramaniam, P. (2012) Design of State Estimator for Neural Networks with Leakage, Discrete and Distributed Delays. Applied Mathematics and Computation, 218, 11297-11310.</mixed-citation></ref><ref id="scirp.80159-ref32"><label>32</label><mixed-citation publication-type="other" xlink:type="simple">Li, Z. and Xu, R. (2012) Global Asymptotic Stability of Stochastic Reaction-Diffusion Neural Networks with Time Delays in the Leakage Terms. Communications in Nonlinear Science and Numerical Simulation, 17, 1681-1689.</mixed-citation></ref><ref id="scirp.80159-ref33"><label>33</label><mixed-citation publication-type="other" xlink:type="simple">Lakshmanan, S., Park, J., Lee, T., Jung, H. and Rakkiyappan, R. (2013) Stability Criteria for BAM Neural Networks with Leakage Delays and Probabilistic Time-Varying Delays. Applied Mathematics and Computation, 219, 9408-9423.</mixed-citation></ref><ref id="scirp.80159-ref34"><label>34</label><mixed-citation publication-type="other" xlink:type="simple">Li, X. and Rakkiyappan, R. (2013) Stability Results for Takagi-Sugeno Fuzzy Uncertain BAM Neural Networks with Time Delays in the Leakage Term.
Neural Computing and Applications, 22, 203-219.</mixed-citation></ref><ref id="scirp.80159-ref35"><label>35</label><mixed-citation publication-type="other" xlink:type="simple">Li, X., Fu, X., Balasubramaniam, P. and Rakkiyappan, R. (2010) Existence, Uniqueness and Stability Analysis of Recurrent Neural Networks with Time Delay in the Leakage Term under Impulsive Perturbations. Nonlinear Analysis: Real World Applications, 11, 4092-4108.</mixed-citation></ref><ref id="scirp.80159-ref36"><label>36</label><mixed-citation publication-type="other" xlink:type="simple">Li, X. and Fu, X. (2013) Effect of Leakage Time-Varying Delay on Stability of Nonlinear Differential Systems. Journal of the Franklin Institute, 350, 1335-1344.</mixed-citation></ref><ref id="scirp.80159-ref37"><label>37</label><mixed-citation publication-type="other" xlink:type="simple">Rakkiyappan, R., Chandrasekar, A. and Lakshmanan, S. (2013) Effects of Leakage Time-Varying Delays in Markovian Jump Neural Networks with Impulse Control. Neurocomputing, 121, 365-378.</mixed-citation></ref><ref id="scirp.80159-ref38"><label>38</label><mixed-citation publication-type="other" xlink:type="simple">Lu, C. and Wang, L. (2014) Robust Exponential Stability of Impulsive Stochastic Neural Networks with Leakage Time-Varying Delay. Abstract and Applied Analysis, 2014, Article ID: 831027.</mixed-citation></ref><ref id="scirp.80159-ref39"><label>39</label><mixed-citation publication-type="other" xlink:type="simple">Suresh Kumar, R., Sugumaran, G., Raja, R., Zhu, Q. and Karthik Raja, U. (2016) New Stability Criterion of Neural Networks with Leakage Delays and Impulses: A Piecewise Delay Method. Cognitive Neurodynamics, 10, 85-98. https://doi.org/10.1007/s11571-015-9356-y</mixed-citation></ref><ref id="scirp.80159-ref40"><label>40</label><mixed-citation publication-type="other" xlink:type="simple">Balasundaram, K., Raja, R., Zhu, Q., Chandrasekaran, S. and Zhou, H.
(2016) New Global Asymptotic Stability of Discrete-Time Recurrent Neural Networks with Multiple Time-Varying Delays in the Leakage Term and Impulsive Effects. Cognitive Neurodynamics, 214, 420-429.</mixed-citation></ref><ref id="scirp.80159-ref41"><label>41</label><mixed-citation publication-type="other" xlink:type="simple">Li, C. and Huang, T. (2009) On the Stability of Nonlinear Systems with Leakage Delay. Journal of the Franklin Institute, 346, 366-377.</mixed-citation></ref><ref id="scirp.80159-ref42"><label>42</label><mixed-citation publication-type="other" xlink:type="simple">Gu, K. (2000) An Integral Inequality in the Stability Problem of Time-Delay Systems. Proceedings of the 39th IEEE Conference on Decision and Control, Sydney, December 2000, 2805-2810.</mixed-citation></ref><ref id="scirp.80159-ref43"><label>43</label><mixed-citation publication-type="other" xlink:type="simple">Berman, A. and Plemmons, R.J. (1979) Nonnegative Matrices in the Mathematical Sciences. Academic Press, New York.</mixed-citation></ref><ref id="scirp.80159-ref44"><label>44</label><mixed-citation publication-type="other" xlink:type="simple">Boyd, S., El Ghaoui, L., Feron, E. and Balakrishnan, V. (1994) Linear Matrix Inequalities in System and Control Theory. SIAM, Philadelphia.</mixed-citation></ref></ref-list></back></article>