Existence of Periodic Solutions for an Output Hidden Feedback Elman Neural Network

Abstract

We first recall the sufficient conditions for the existence of a periodic output of a modified Elman neural network with a periodic input found by using Mawhin’s continuation theorem of coincidence degree theory. Using this result, we obtain sufficient conditions for the existence of a periodic output for an output hidden feedback Elman neural network with a periodic input. Examples illustrating these sufficient conditions are given.


1. Introduction

Artificial neural networks are computational paradigms which implement simplified models of their biological counterparts, biological neural networks. Biological neural networks are the local assemblages of neurons and their dendritic connections that form the (human) brain. Accordingly, artificial neural networks are characterized by

· local processing in artificial neurons (or processing elements);

· massively parallel processing, implemented by a rich connection pattern between processing elements;

· the ability to acquire knowledge via learning from experience;

· knowledge storage in distributed memory, the synaptic processing element connections.

Neural networks process information in a way similar to the human brain. The network is composed of a large number of highly interconnected processing elements (neurons) working in parallel to solve a specific problem. Neural networks learn by example.

An important application of neural networks is pattern recognition. Pattern recognition can be implemented by using a feed-forward neural network that has been trained accordingly. During training, the network learns to associate outputs with input patterns. When the network is used, it identifies the input pattern and tries to output the associated output pattern. The power of neural networks becomes apparent when a pattern that has no associated output is given as an input. In this case, the network gives the output that corresponds to a taught input pattern that is least different from the given pattern.

The Elman neural network [1] is a kind of recurrent neural network. Compared with traditional neural networks, an Elman neural network has additional inputs from the hidden layer, which form a new layer, the context layer. Accordingly, the standard back-propagation algorithm adapted to the Elman neural network is called the Elman back-propagation algorithm. Elman neural networks can be applied to prediction problems for discrete time sequences [2] [3] [4].

The Elman neural network is one of the most widely used and most effective neural network models in artificial neural networks and has powerful processing ability for nonlinear decisions [5] [6]. The Elman neural network can be considered as a special kind of feed-forward neural network with additional memory neurons and local feedback. Because its learning efficiency, approximation ability, and memory capacity exceed those of other neural networks, the Elman neural network can be used not only for time series prediction, but also for system identification and prediction [4] [7] [8] [9] [10].

Shi et al. [11] proposed the Output Hidden Feedback (OHF) Elman neural network, based on the modified Elman neural network [7], by introducing feedback between the output layer and an additional output context layer.

The existence of periodic solutions is a classical problem of the qualitative theory of differential and difference equations. Numerous papers have been devoted to the existence of periodic solutions of different kinds of neural networks with continuous and discrete time. In [12], sufficient conditions were found for the existence and global exponential stability of a class of Hopfield neural networks with periodic impulses and finite distributed delays. In [13], the authors found sufficient conditions for the global exponential periodicity of a discrete-time counterpart of a bidirectional associative memory neural network. In [14], sufficient conditions are obtained for the existence and global asymptotic stability of periodic solutions for delayed complex-valued simplified Cohen-Grossberg neural networks.

In [15] [16], for two different classes of Hopfield-type neural networks with periodic impulses and finite distributed delays we introduced discrete-time counterparts. Using different methods, we found sufficient conditions for the existence and global exponential stability of a unique periodic solution of the discrete systems considered. In [17], sufficient conditions were found for the existence of periodic solutions for the discrete-time counterpart of a neutral-type cellular neural network with time-varying delays and impulses. In [18], we found sufficient conditions for the existence of periodic solutions for the discrete-time counterpart of a complex-valued Hopfield neural network with time-varying delays and impulses. In [19], we proved the global exponential periodicity of a class of Hopfield neural networks with distributed delays and impulses. In our recent paper [20], we obtained sufficient conditions for the existence of a periodic output of a modified Elman neural network with a periodic input by using Mawhin’s continuation theorem of coincidence degree theory [21].

In the present paper, we consider an OHF Elman neural network [4] with a periodic input. Using the result of [20], we find sufficient conditions for the existence of a periodic output of the neural network considered. Furthermore, for a subclass of these OHF Elman neural networks, we shall find the periodic output in a straightforward way using another sufficient condition. Examples illustrating these sufficient conditions are given. The calculations are done using MATLAB.

2. Preliminaries: Modified Elman Neural Network

Here we recall the results of our paper [20]. We consider a modified Elman neural network with $r$ nodes in the input layer, $n$ nodes in each of the hidden and context layers, and $m$ nodes in the output layer, which adds a self-feedback factor $\alpha$, $0<\alpha<1$, to the context nodes, based on the traditional Elman neural network [4] [10]. Its mathematical model is:

$x_i(k) = f_i\Bigl(\sum_{j=1}^{n} a_{ij} x_j^C(k) + \sum_{j=1}^{r} b_{ij} u_j(k-1)\Bigr), \quad i=\overline{1,n}, \; k\in\mathbb{N},$ (1)

$x_i^C(k) = \alpha x_i^C(k-1) + x_i(k-1), \quad i=\overline{1,n}, \; k\in\mathbb{N},$ (2)

$y_i(k) = g_i\Bigl(\sum_{j=1}^{n} c_{ij} x_j(k)\Bigr), \quad i=\overline{1,m}, \; k\in\mathbb{N}.$ (3)

Here $\mathbb{N}$ is the set of all positive integers, the input $u$ is an $r$-dimensional vector, the output $x$ of the hidden layer and the output $x^C$ of the context nodes are $n$-dimensional vectors, while the output $y$ of the output layer is an $m$-dimensional vector. The weights $a_{ij}$, $b_{ij}$ and $c_{ij}$ of the context nodes, input nodes and hidden nodes are entries of $n\times n$-, $n\times r$- and $m\times n$-dimensional matrices, respectively; $f_i$, $i=\overline{1,n}$, are the transfer functions of the hidden layer, often taken as sigmoid functions, and $g_i$, $i=\overline{1,m}$, are the transfer functions of the output layer, often taken as linear functions. An Elman neural network with $r=2$, $n=3$ and $m=4$ is depicted in Figure 1. The values of the numbers $r$, $n$ and $m$ in Figure 1 and Figure 2 and in Examples 1 and 2 are chosen quite small for the sake of simplicity.

Clearly, for a given input $u(k)$, $k\in\mathbb{N}\cup\{0\}$, and initial values $x(0)$, $x^C(0)$, we can find the output $y(k)$, $k\in\mathbb{N}$, from Equations (1)-(3).
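To make the recursion in Equations (1)-(3) concrete, the following minimal NumPy sketch iterates the model for a given input sequence (the function and variable names are ours; for illustration the hidden-layer transfer functions are fixed to the sigmoid and the output-layer transfer functions to the identity, while the paper's own computations were carried out in MATLAB):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def modified_elman(a, b, c, alpha, u, x0, xc0, steps):
    """Iterate Equations (1)-(3).  a (n x n), b (n x r), c (m x n) are the
    weights of the context, input and hidden nodes, alpha is the self-feedback
    factor of the context nodes, u(k) returns the input vector at time k,
    and x0, xc0 are the initial values x(0), x^C(0)."""
    x, xc = np.asarray(x0, float), np.asarray(xc0, float)
    y = []
    for k in range(1, steps + 1):
        xc = alpha * xc + x                 # Equation (2): x^C(k)
        x = sigmoid(a @ xc + b @ u(k - 1))  # Equation (1): x(k)
        y.append(c @ x)                     # Equation (3) with linear g_i: y(k)
    return y
```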

Now suppose that the input $u(k)$ is $N$-periodic for some $N\in\mathbb{N}$, that is, $u(k+N)=u(k)$, $k\in\mathbb{N}\cup\{0\}$. We shall look for sufficient conditions for the existence of an $N$-periodic output $y(k)$, $k\in\mathbb{N}$. This means that, for a suitable

Figure 1. An Elman neural network with $r=2$, $n=3$ and $m=4$.

choice of the initial values $x(0)$, $x^C(0)$, the output $y(k)$ is $N$-periodic. For this purpose, it suffices that the output $x(k)$ of the hidden layer is $N$-periodic. From now to the end of the present section, we restrict our attention to Equations (1), (2).

Further on, for convenience, we consider Equations (1), (2) for $k\in\mathbb{Z}$, that is,

$x_i(k) = f_i\Bigl(\sum_{j=1}^{n} a_{ij} x_j^C(k) + \sum_{j=1}^{r} b_{ij} u_j(k-1)\Bigr), \quad i=\overline{1,n}, \; k\in\mathbb{Z},$ (4)

$x_i^C(k) = \alpha x_i^C(k-1) + x_i(k-1), \quad i=\overline{1,n}, \; k\in\mathbb{Z},$ (5)

where $\mathbb{Z}$ is the set of all integers. We assume that $u(k+N)=u(k)$ for $k\in\mathbb{Z}$. Sufficient conditions for the existence of an $N$-periodic solution $x(k)$, $k\in\mathbb{Z}$, of Equations (4), (5) are given below.

We make the following assumptions:

A1. $0<\alpha<1$.

A2. There exist positive constants $L_i$, $i=\overline{1,n}$, such that

$|f_i(x_i) - f_i(\tilde{x}_i)| \le L_i |x_i - \tilde{x}_i|$ for all $x_i, \tilde{x}_i\in\mathbb{R}$, $i=\overline{1,n}$. (6)

A3. There exists a positive integer N such that

$u_i(k+N) = u_i(k)$ for all $k\in\mathbb{Z}$, $i=\overline{1,r}$. (7)

A4. $\min_{i=\overline{1,n}} \Bigl( 1 - \frac{1}{1-\alpha} \sum_{j=1}^{n} L_j |a_{ji}| \Bigr) > 0$.

In order to formulate our main result, we introduce the $n\times n$ matrix

$A = \Bigl( \delta_{ij} - \frac{L_i}{1-\alpha}\,|a_{ij}|, \; i,j=\overline{1,n} \Bigr),$ (8)

where $\delta_{ij}$ is the Kronecker delta, and assume that:

A5. The matrix A is an M-matrix.

Assumption A5 implies that the matrix A is nonsingular and its inverse has only nonnegative entries [22] [23].
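The M-matrix property in assumption A5 is easy to check numerically. Below is a small NumPy sketch (the function name is ours) based on the characterization just quoted: a matrix with nonpositive off-diagonal entries is an M-matrix if and only if it is nonsingular and its inverse has only nonnegative entries [22] [23]. The matrix used for illustration is the one appearing in Example 1(a) below.

```python
import numpy as np

def is_m_matrix(A, tol=1e-12):
    """Check A5: nonpositive off-diagonal entries, nonsingular,
    and an entrywise nonnegative inverse."""
    A = np.asarray(A, float)
    if np.any(A - np.diag(np.diag(A)) > tol):     # a positive off-diagonal entry
        return False
    try:
        A_inv = np.linalg.inv(A)
    except np.linalg.LinAlgError:                 # singular matrix
        return False
    return bool(np.all(A_inv >= -tol))

# Matrix of Example 1(a): 3/4 on the diagonal, -1/4 elsewhere.
A = np.full((3, 3), -0.25) + np.eye(3)
print(is_m_matrix(A))        # True
print(np.linalg.inv(A))      # 2 on the diagonal, 1 elsewhere, as in Equation (14)
```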

The main result of [20] is the following theorem.

Theorem 1. Suppose that assumptions A1-A5 hold. Then the system of Equations (4), (5) has at least one $N$-periodic solution $x(k)$.

Theorem 1 is proved using Mawhin’s continuation theorem [21, p. 40].

Example 1. Consider a modified Elman neural network with $r=2$, $n=3$ and $m=4$ (as in Figure 1). Suppose that the transfer functions $f_i(x)$, $i=\overline{1,3}$, of the hidden layer all equal the sigmoid function $f(x)=\frac{1}{1+e^{-x}}$, $\alpha=\frac12$, and $u_1$, $u_2$ are arbitrary $N$-periodic functions for some positive integer $N$, say,

$u_1(k) = k - \Bigl[\frac{k}{N}\Bigr] N, \quad u_2(k) = k + 1 - \Bigl[\frac{k}{N}\Bigr] N, \quad k\in\mathbb{N}\cup\{0\},$ (9)

where $[\nu]$ is the greatest integer in the real number $\nu$, that is,

$u_1(0)=0, \; u_1(1)=1, \; \dots, \; u_1(N-1)=N-1, \; u_1(N)=0, \; u_1(N+1)=1, \; \dots, \; u_1(2N-1)=N-1, \; u_1(2N)=0, \; u_1(2N+1)=1, \; \dots;$ (10)

$u_2(0)=1, \; u_2(1)=2, \; \dots, \; u_2(N-1)=N, \; u_2(N)=1, \; u_2(N+1)=2, \; \dots, \; u_2(2N-1)=N, \; u_2(2N)=1, \; u_2(2N+1)=2, \; \dots$ (11)

(a) Further on, let us assume that the weights $a_{ij}$, $i,j=\overline{1,3}$, of the context nodes all equal $\frac12$, while the transfer functions $g_i$, $i=\overline{1,4}$, the weights $b_{ij}$, $i=\overline{1,3}$, $j=\overline{1,2}$, of the input nodes and $c_{ij}$, $i=\overline{1,4}$, $j=\overline{1,3}$, of the hidden nodes are arbitrary.

Then assumption A2 is satisfied with $L_i=\frac14$, $i=\overline{1,3}$, and assumption A4 is also satisfied since

$1 - \frac{1}{1-\alpha} \sum_{j=1}^{3} L_j |a_{ji}| = \frac14, \quad i=\overline{1,3}.$ (12)

Finally, the matrix

$A = \begin{pmatrix} \frac34 & -\frac14 & -\frac14 \\ -\frac14 & \frac34 & -\frac14 \\ -\frac14 & -\frac14 & \frac34 \end{pmatrix}$ (13)

is an M-matrix with inverse

$A^{-1} = \begin{pmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{pmatrix}.$ (14)

Since all assumptions of Theorem 1 are satisfied, the modified Elman neural network under consideration has an $N$-periodic output $y(k)$, $k\in\mathbb{N}$.

Let us assume, moreover, that $b_{ij}=\frac12$, $i=\overline{1,3}$, $j=\overline{1,2}$, and $N=3$. Then the system of Equations (4), (5) takes the form

$x_i(k) = f\Bigl( \tfrac12 \bigl( x_1^C(k) + x_2^C(k) + x_3^C(k) + u_1(k-1) + u_2(k-1) \bigr) \Bigr),$ (15)

$x_i^C(k) = \tfrac12 x_i^C(k-1) + x_i(k-1), \quad k\in\mathbb{Z}, \; i=\overline{1,3},$ (16)

where

$u_1(k) = \begin{cases} 1, & k\equiv 1 \ (\mathrm{mod}\ 3), \\ 2, & k\equiv 2 \ (\mathrm{mod}\ 3), \\ 0, & k\equiv 0 \ (\mathrm{mod}\ 3), \end{cases} \qquad u_2(k) = \begin{cases} 2, & k\equiv 1 \ (\mathrm{mod}\ 3), \\ 3, & k\equiv 2 \ (\mathrm{mod}\ 3), \\ 1, & k\equiv 0 \ (\mathrm{mod}\ 3). \end{cases}$ (17)

For $k,l\in\mathbb{Z}$ and $N\in\mathbb{N}$, we recall that $k\equiv l\ (\mathrm{mod}\ N)$ ($k$ and $l$ are congruent modulo $N$) if and only if $\frac{k-l}{N}\in\mathbb{Z}$. For instance, $k\equiv 1\ (\mathrm{mod}\ 3)$ means that $k=3\kappa+1$ for some $\kappa\in\mathbb{N}\cup\{0\}$.

It suffices to find the initial values $x_i(0)$, $x_i^C(0)$, $i=\overline{1,3}$, so that

$x_i^C(3) = x_i^C(0), \quad x_i(3) = x_i(0), \quad i=\overline{1,3}.$ (18)

Equations (15), (16) imply that $x_1(k)=x_2(k)=x_3(k)$, $k\in\mathbb{N}$, and $x_1^C(k)=x_2^C(k)=x_3^C(k)$, $k=2,3,\dots$ Thus, in order to satisfy Equations (18), the initial conditions must be chosen so that $x_1(0)=x_2(0)=x_3(0)$ and $x_1^C(0)=x_2^C(0)=x_3^C(0)$. The system of Equations (15), (16) reduces to

$x(k) = f\Bigl( \tfrac12 \bigl( 3x^C(k) + u_1(k-1) + u_2(k-1) \bigr) \Bigr),$ (19)

$x^C(k) = \tfrac12 x^C(k-1) + x(k-1), \quad k\in\mathbb{Z}.$ (20)

We have found that the initial values $x^C(0)$, $x(0)$ satisfying Equations (18) are approximately $x^C(0)=1.9634$, $x(0)=0.9810$. The first four values of the 3-periodic solution of Equations (15), (16) are presented in Table 1.

This solution can be found with arbitrarily high accuracy.
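One simple way to locate initial values satisfying Equations (18) numerically is to iterate the map that advances the reduced system (19), (20) over one period and look for an approximate fixed point. The sketch below does this in Python with NumPy (only for convenience; the paper's computations were done in MATLAB); in general the fixed-point equation could also be handed to a root finder.

```python
import numpy as np

def f(z):                                     # sigmoid transfer function
    return 1.0 / (1.0 + np.exp(-z))

# u1(k) + u2(k) for k = 0, 1, 2 (mod 3), from Equation (17)
u_sum = [1.0, 3.0, 5.0]

def one_period(xc, x):
    """Advance the reduced system (19), (20) over one period of three steps."""
    for k in range(1, 4):
        xc = 0.5 * xc + x                               # Equation (20)
        x = f(0.5 * (3.0 * xc + u_sum[(k - 1) % 3]))    # Equation (19)
    return xc, x

# Iterate the period map towards an approximate fixed point, i.e. towards
# initial values x^C(0), x(0) satisfying Equations (18).
xc, x = 0.0, 0.0
for _ in range(100):
    xc, x = one_period(xc, x)
print(xc, x)
```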

(b) Next, let us assume that $a_{11}=\frac12$, $a_{12}=a_{22}=\frac14$, $a_{13}=a_{21}=a_{33}=\frac18$,

Table 1. A 3-periodic solution of Equations (15), (16).

$a_{23}=a_{31}=\frac1{16}$, $a_{32}=\frac1{32}$, while the transfer functions $g_i$, $i=\overline{1,4}$, the weights $b_{ij}$, $i=\overline{1,3}$, $j=\overline{1,2}$, of the input nodes and $c_{ij}$, $i=\overline{1,4}$, $j=\overline{1,3}$, of the hidden nodes are still arbitrary.

Then assumption A2 is still satisfied with $L_i=\frac14$, $i=\overline{1,3}$, and assumption A4 is also satisfied since

$1 - \frac{1}{1-\alpha} \sum_{j=1}^{3} L_j |a_{j1}| = \frac{21}{32}, \quad 1 - \frac{1}{1-\alpha} \sum_{j=1}^{3} L_j |a_{j2}| = \frac{47}{64}, \quad 1 - \frac{1}{1-\alpha} \sum_{j=1}^{3} L_j |a_{j3}| = \frac{27}{32}.$ (21)

Finally, the matrix

$A = \begin{pmatrix} \frac34 & -\frac18 & -\frac1{16} \\ -\frac1{16} & \frac78 & -\frac1{32} \\ -\frac1{32} & -\frac1{64} & \frac{15}{16} \end{pmatrix}$ (22)

is an M-matrix with inverse

$A^{-1} = \begin{pmatrix} 1.3536229 & 0.1951023 & 0.0967449 \\ 0.0983574 & 1.1577144 & 0.0451476 \\ 0.0467601 & 0.0257986 & 1.070644 \end{pmatrix}.$ (23)

Since all assumptions of Theorem 1 are satisfied, the modified Elman neural network under consideration has an $N$-periodic output $y(k)$, $k\in\mathbb{N}$.

Let us assume, moreover, that $b_{11}=\frac12$, $b_{12}=b_{21}=\frac13$, $b_{22}=b_{31}=\frac14$, $b_{32}=\frac18$, and $N=3$. Then the system of Equations (4), (5) takes the form

$x_i^C(k) = \tfrac12 x_i^C(k-1) + x_i(k-1), \quad k\in\mathbb{Z}, \; i=\overline{1,3},$ (24)

$\left.\begin{aligned} x_1(k) &= f\bigl( \tfrac12 x_1^C(k) + \tfrac14 x_2^C(k) + \tfrac18 x_3^C(k) + \tfrac13 \bigr), \\ x_2(k) &= f\bigl( \tfrac18 x_1^C(k) + \tfrac14 x_2^C(k) + \tfrac1{16} x_3^C(k) + \tfrac14 \bigr), \\ x_3(k) &= f\bigl( \tfrac1{16} x_1^C(k) + \tfrac1{32} x_2^C(k) + \tfrac18 x_3^C(k) + \tfrac18 \bigr) \end{aligned}\right\} \quad k\equiv 1 \ (\mathrm{mod}\ 3),$ (25)

$\left.\begin{aligned} x_1(k) &= f\bigl( \tfrac12 x_1^C(k) + \tfrac14 x_2^C(k) + \tfrac18 x_3^C(k) + \tfrac76 \bigr), \\ x_2(k) &= f\bigl( \tfrac18 x_1^C(k) + \tfrac14 x_2^C(k) + \tfrac1{16} x_3^C(k) + \tfrac56 \bigr), \\ x_3(k) &= f\bigl( \tfrac1{16} x_1^C(k) + \tfrac1{32} x_2^C(k) + \tfrac18 x_3^C(k) + \tfrac12 \bigr) \end{aligned}\right\} \quad k\equiv 2 \ (\mathrm{mod}\ 3),$ (26)

$\left.\begin{aligned} x_1(k) &= f\bigl( \tfrac12 x_1^C(k) + \tfrac14 x_2^C(k) + \tfrac18 x_3^C(k) + 2 \bigr), \\ x_2(k) &= f\bigl( \tfrac18 x_1^C(k) + \tfrac14 x_2^C(k) + \tfrac1{16} x_3^C(k) + \tfrac{17}{12} \bigr), \\ x_3(k) &= f\bigl( \tfrac1{16} x_1^C(k) + \tfrac1{32} x_2^C(k) + \tfrac18 x_3^C(k) + \tfrac78 \bigr) \end{aligned}\right\} \quad k\equiv 0 \ (\mathrm{mod}\ 3).$ (27)

It suffices to find the initial values $x_i(0)$, $x_i^C(0)$, $i=\overline{1,3}$, so that Equations (18) are satisfied. We have found that, approximately, $x_1^C(0)=1.8403$, $x_2^C(0)=1.614$, $x_3^C(0)=1.3689$, $x_1(0)=0.9705$, $x_2(0)=0.89425$, $x_3(0)=0.77054$ (see Table 2 for the first four values of the 3-periodic solution of Equations (24)-(27)).

The initial values in Example 1, (a) and (b), have been found after numerous experiments with different sets of possible initial values, using MATLAB. They can be found with arbitrarily high accuracy after sufficiently many iterations.
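The reported values for case (b) are easy to check: starting Equations (4), (5) with the weights of case (b) from these initial values, one period of the iteration returns close to the starting point, so that Equations (18) hold up to the quoted accuracy. A short NumPy verification sketch (variable names are ours; the original computations were done in MATLAB):

```python
import numpy as np

f = lambda z: 1.0 / (1.0 + np.exp(-z))          # sigmoid transfer function

A = np.array([[1/2,  1/4,  1/8 ],               # context weights a_ij, case (b)
              [1/8,  1/4,  1/16],
              [1/16, 1/32, 1/8 ]])
B = np.array([[1/2, 1/3],                       # input weights b_ij, case (b)
              [1/3, 1/4],
              [1/4, 1/8]])
u = [np.array([0., 1.]),                        # u(k) for k = 0, 1, 2 (mod 3),
     np.array([1., 2.]),                        # from Equation (17)
     np.array([2., 3.])]

xc = np.array([1.8403, 1.614, 1.3689])          # reported x^C(0)
x = np.array([0.9705, 0.89425, 0.77054])        # reported x(0)
x0 = x.copy()
for k in range(1, 4):
    xc = 0.5 * xc + x                           # Equation (5) with alpha = 1/2
    x = f(A @ xc + B @ u[(k - 1) % 3])          # Equation (4)
print(np.max(np.abs(x - x0)))                   # small residual of Equations (18)
```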

3. Output Hidden Feedback Elman Neural Networks: Main Results

The OHF Elman neural network achieves its ability to process dynamic data by adding, to the Elman neural network, feedback from the output layer to an output context layer (second context layer). The mathematical model of an OHF Elman neural network is [4]:

$x_i(k) = f_i\Bigl(\sum_{j=1}^{n} a_{ij} x_j^C(k) + \sum_{j=1}^{r} b_{ij} u_j(k-1)\Bigr), \quad i=\overline{1,n}, \; k\in\mathbb{N},$ (28)

$x_i^C(k) = \alpha x_i^C(k-1) + x_i(k-1), \quad i=\overline{1,n}, \; k\in\mathbb{N},$ (29)

$y_i(k) = g_i\Bigl(\sum_{j=1}^{n} c_{ij} x_j(k) + \sum_{j=1}^{m} d_{ij} y_j^C(k)\Bigr), \quad i=\overline{1,m}, \; k\in\mathbb{N},$ (30)

$y_i^C(k) = \gamma y_i^C(k-1) + y_i(k-1), \quad i=\overline{1,m}, \; k\in\mathbb{N},$ (31)

where Equations (28), (29) are the same as Equations (1), (2), the transfer functions $g_i$, $i=\overline{1,m}$, of the output layer are as in Equation (3), $\gamma\in(0,1)$ is the gain factor of the self-feedback of the output layer, $d_{ij}$ are the connection

Table 2. A 3-periodic solution of system (1), (2) in Case (b).

Figure 2. An OHF Elman neural network with $r=2$, $n=3$ and $m=4$.

weights of the second context layer nodes, and $y^C$ is the output of the second context layer. An OHF Elman neural network with $r=2$, $n=3$ and $m=4$ is depicted in Figure 2.

Clearly, for a given input $u(k)$, $k\in\mathbb{N}\cup\{0\}$, and initial values $x(0)$, $x^C(0)$, $y(0)$, $y^C(0)$, we can find the output $y(k)$, $k\in\mathbb{N}$, from Equations (28)-(31).
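For completeness, the forward iteration of Equations (28)-(31) can be sketched in the same way as for the modified Elman network in Section 2 (again a NumPy sketch with our own names; the output transfer functions are taken linear for illustration, as in assumption A10 below):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ohf_elman(a, b, c, d, alpha, gamma, u, x0, xc0, y0, yc0, steps):
    """Iterate Equations (28)-(31).  a, b, c, d are the weight matrices of the
    context, input, hidden and second-context connections; alpha and gamma
    are the self-feedback factors of the two context layers."""
    x, xc = np.asarray(x0, float), np.asarray(xc0, float)
    y, yc = np.asarray(y0, float), np.asarray(yc0, float)
    out = []
    for k in range(1, steps + 1):
        xc = alpha * xc + x                    # Equation (29): x^C(k)
        x = sigmoid(a @ xc + b @ u(k - 1))     # Equation (28): x(k)
        yc = gamma * yc + y                    # Equation (31): y^C(k)
        y = c @ x + d @ yc                     # Equation (30) with linear g_i
        out.append(y)
    return out
```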

Now suppose that the input $u(k)$ is $N$-periodic for some $N\in\mathbb{N}$, that is, $u(k+N)=u(k)$, $k\in\mathbb{N}\cup\{0\}$. We shall look for sufficient conditions for the existence of an $N$-periodic output $y(k)$, $k\in\mathbb{N}$. This means that for a suitable choice of the initial values $x(0)$, $x^C(0)$, $y(0)$, $y^C(0)$, the output $y(k)$ is $N$-periodic.

In addition to assumptions A1-A5, we make the following assumptions:

A6. $0<\gamma<1$.

A7. There exist positive constants $\hat L_i$, $i=\overline{1,m}$, such that

$|g_i(y_i) - g_i(\tilde{y}_i)| \le \hat L_i |y_i - \tilde{y}_i|$ for all $y_i, \tilde{y}_i\in\mathbb{R}$, $i=\overline{1,m}$. (32)

A8. $\min_{i=\overline{1,m}} \Bigl( 1 - \frac{1}{1-\gamma} \sum_{j=1}^{m} \hat L_j |d_{ji}| \Bigr) > 0$.

In order to formulate our main result, we introduce the $m\times m$ matrix

$D = \Bigl( \delta_{ij} - \frac{\hat L_i}{1-\gamma}\,|d_{ij}|, \; i,j=\overline{1,m} \Bigr)$ (33)

and assume that:

A9. The matrix D is an M-matrix.

Now, we can state our main result as the following theorem.

Theorem 2. Suppose that assumptions A1-A9 hold. Then the system of Equations (28)-(31) has at least one $N$-periodic solution $y(k)$.

Proof. According to Theorem 1, the system of Equations (28), (29) has an $N$-periodic solution $x(k)$. Then the system of Equations (30), (31) with the $N$-periodic input $x(k)$ is of the form of Equations (1), (2), and thus it has at least one $N$-periodic solution $y(k)$.

As mentioned in Section 2, the transfer functions g i , i = 1 , m ¯ , of the output layer are often taken as linear functions. Without loss of generality, we can assume that:

A10. $g_i(y_i) = y_i + \tilde{y}_i$, $i=\overline{1,m}$,

where $\tilde{y}_i$ are some constants. In this case, in the assumptions of Theorem 2 we have $\hat L_i = 1$, $i=\overline{1,m}$.

Now we show that, once the N-periodic solution x ( k ) of the system of Equations (28), (29) has been found, in the case of linear transfer functions of the output layer the N-periodic solution y ( k ) can be found in a straightforward way using another sufficient condition.

For convenience, we introduce the matrices

$C = \bigl( c_{ij}, \; i=\overline{1,m}, \; j=\overline{1,n} \bigr), \quad D = \bigl( d_{ij}, \; i,j=\overline{1,m} \bigr),$ (34)

and denote by $I_m$ the $m\times m$ unit matrix. Then Equations (30), (31) can be written in matrix form as

$y(k) = Cx(k) + Dy^C(k) + \tilde{y}, \quad y^C(k) = \gamma y^C(k-1) + y(k-1), \quad k\in\mathbb{N}.$ (35)

We successively obtain

$y^C(1) = \gamma y^C(0) + y(0), \quad y(1) = D\bigl(\gamma y^C(0) + y(0)\bigr) + Cx(1) + \tilde{y},$ (36)

$y^C(2) = (\gamma I_m + D)\bigl(\gamma y^C(0) + y(0)\bigr) + Cx(1) + \tilde{y},$ (37)

$y(2) = D(\gamma I_m + D)\bigl(\gamma y^C(0) + y(0)\bigr) + Cx(2) + DCx(1) + (D + I_m)\tilde{y},$ (38)

$y^C(3) = (\gamma I_m + D)^2\bigl(\gamma y^C(0) + y(0)\bigr) + Cx(2) + (\gamma I_m + D)Cx(1) + \bigl[D + (\gamma+1)I_m\bigr]\tilde{y},$ (39)

$y(3) = D(\gamma I_m + D)^2\bigl(\gamma y^C(0) + y(0)\bigr) + Cx(3) + DCx(2) + (\gamma I_m + D)DCx(1) + \bigl[D^2 + (\gamma+1)D + I_m\bigr]\tilde{y}.$ (40)

By induction, we prove that

$y^C(k) = (\gamma I_m + D)^{k-1}\bigl(\gamma y^C(0) + y(0)\bigr) + Cx(k-1) + F_k\bigl(Cx(k-2), \dots, Cx(1), \tilde{y}\bigr),$ (41)

$y(k) = D(\gamma I_m + D)^{k-1}\bigl(\gamma y^C(0) + y(0)\bigr) + Cx(k) + G_k\bigl(DCx(k-1), \dots, DCx(1), \tilde{y}\bigr), \quad k\in\mathbb{N},$ (42)

where $F_k$, $G_k$ are some linear functions of their arguments.

In order to obtain an $N$-periodic solution $y(k)$ of the system of Equations (30), (31), we need to find initial conditions $y^C(0)$, $y(0)$ satisfying

$y^C(N) = y^C(0), \quad y(N) = y(0).$ (43)

From Equations (41)-(43) we derive

$D_{m,N} \begin{pmatrix} y^C(0) \\ y(0) \end{pmatrix} = \begin{pmatrix} Cx(N-1) + F_N\bigl(Cx(N-2), \dots, Cx(1), \tilde{y}\bigr) \\ Cx(N) + G_N\bigl(DCx(N-1), \dots, DCx(1), \tilde{y}\bigr) \end{pmatrix},$ (44)

where

$D_{m,N} = \begin{pmatrix} I_m - \gamma(\gamma I_m + D)^{N-1} & -(\gamma I_m + D)^{N-1} \\ -\gamma D(\gamma I_m + D)^{N-1} & I_m - D(\gamma I_m + D)^{N-1} \end{pmatrix}.$ (45)

We make one more assumption:

A11. The $2m\times 2m$ matrix $D_{m,N}$ is nonsingular.

Under assumption A11, from Equation (44) we can determine the initial values $y^C(0)$, $y(0)$. Thus, we have proved the following theorem.

Theorem 3. Suppose that assumptions A1-A6, A10, A11 hold. Then the system of Equations (28)-(31) has at least one $N$-periodic solution $y(k)$.
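In practice, once the $N$-periodic hidden output $x(k)$ is known and the output transfer functions are linear, the initial values $y^C(0)$, $y(0)$ of Theorem 3 can be computed without writing out $F_N$ and $G_N$ explicitly: over one period, Equations (35) define an affine map of $(y^C(0), y(0))$, and the matrix $D_{m,N}$ of Equation (45) is exactly $I_{2m}$ minus its linear part. The following NumPy sketch (function and variable names are ours, and it is only one possible way to organize the computation) implements this:

```python
import numpy as np

def periodic_initial_values(C, D, gamma, y_tilde, x_period):
    """Solve Equation (44) for y^C(0), y(0) under assumption A10 (linear
    output transfer functions).  x_period = [x(1), ..., x(N)] is one period
    of the hidden-layer output.  Over one period, Equations (35) act as an
    affine map z -> M z + v on z = (y^C, y); the matrix of Equation (45) is
    D_{m,N} = I_{2m} - M, and periodicity means (I_{2m} - M) z(0) = v."""
    m, N = D.shape[0], len(x_period)

    def one_period(yc, y, with_input):
        for k in range(1, N + 1):
            yc = gamma * yc + y                                          # Eq. (31)
            y = D @ yc + (C @ x_period[k - 1] + y_tilde if with_input else 0)  # Eq. (30)
        return np.concatenate([yc, y])

    v = one_period(np.zeros(m), np.zeros(m), True)       # affine part of the map
    M = np.column_stack([one_period(*np.split(e, 2), False)
                         for e in np.eye(2 * m)])         # linear part, by columns
    z0 = np.linalg.solve(np.eye(2 * m) - M, v)            # A11: D_{m,N} nonsingular
    return z0[:m], z0[m:]                                 # y^C(0), y(0)
```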

Example 2. Now let us consider an OHF Elman neural network given by Equations (28)-(31), with $r=2$, $n=3$, $m=4$ and $N=3$, where Equations (28), (29) are as in Example 1(b), $\gamma=\frac13$, assumption A7 is satisfied with $\hat L_i=1$, $i=\overline{1,4}$, and the matrix of the connection weights of the nodes of the second context layer is

$D = \begin{pmatrix} \frac13 & \frac19 & \frac1{27} & \frac1{81} \\ \frac1{27} & \frac19 & \frac1{81} & \frac1{243} \\ \frac1{81} & \frac1{243} & \frac1{27} & \frac19 \\ \frac19 & \frac1{81} & \frac1{243} & \frac1{81} \end{pmatrix},$ (46)

the matrix C of the weights of the hidden layer is arbitrary.

Assumption A8 is satisfied since

$1 - \frac{1}{1-\gamma} \sum_{j=1}^{4} \hat L_j |d_{j1}| = \frac{7}{27}, \quad 1 - \frac{1}{1-\gamma} \sum_{j=1}^{4} \hat L_j |d_{j2}| = \frac{52}{81}, \quad 1 - \frac{1}{1-\gamma} \sum_{j=1}^{4} \hat L_j |d_{j3}| = \frac{70}{81}, \quad 1 - \frac{1}{1-\gamma} \sum_{j=1}^{4} \hat L_j |d_{j4}| = \frac{64}{81}.$ (47)

The matrix

$D = \begin{pmatrix} \frac12 & -\frac16 & -\frac1{18} & -\frac1{54} \\ -\frac1{18} & \frac56 & -\frac1{54} & -\frac1{162} \\ -\frac1{54} & -\frac1{162} & \frac{17}{18} & -\frac16 \\ -\frac16 & -\frac1{54} & -\frac1{162} & \frac{53}{54} \end{pmatrix}$ (48)

is an M-matrix with inverse

$D^{-1} = \begin{pmatrix} 2.0724479 & 0.4168759 & 0.1305003 & 0.0638851 \\ 0.143112 & 1.229219 & 0.0326251 & 0.0159713 \\ 0.1042683 & 0.0328298 & 1.0667989 & 0.1833283 \\ 0.3552811 & 0.0941895 & 0.0294854 & 1.0311707 \end{pmatrix}.$ (49)

Since all assumptions of Theorem 2 are satisfied, the OHF Elman neural network under consideration has a 3-periodic output $y(k)$.

Now let also assumption A10 be satisfied, with still arbitrary $\tilde{y}_i$, $i=\overline{1,4}$. Equation (44) takes the form

$D_{4,3} \begin{pmatrix} y^C(0) \\ y(0) \end{pmatrix} = \begin{pmatrix} Cx(2) + \bigl(\frac13 I_4 + D\bigr)Cx(1) + \bigl(\frac43 I_4 + D\bigr)\tilde{y} \\ Cx(3) + DCx(2) + \bigl(\frac13 I_4 + D\bigr)DCx(1) + \bigl(D^2 + \frac43 D + I_4\bigr)\tilde{y} \end{pmatrix},$ (50)

where

$D_{4,3} = \begin{pmatrix} I_4 - \frac13\bigl(\frac13 I_4 + D\bigr)^2 & -\bigl(\frac13 I_4 + D\bigr)^2 \\ -\frac13 D\bigl(\frac13 I_4 + D\bigr)^2 & I_4 - D\bigl(\frac13 I_4 + D\bigr)^2 \end{pmatrix}.$ (51)

For the 4 × 4 blocks of this matrix we find

$I_4 - \tfrac13\bigl(\tfrac13 I_4 + D\bigr)^2 = \begin{pmatrix} 0.8498704 & -0.0412539 & -0.0132771 & -0.0056902 \\ -0.0139206 & 0.9327508 & -0.003816 & -0.0016935 \\ -0.008434 & -0.0020322 & 0.9539535 & -0.0265768 \\ -0.0376636 & -0.0073724 & -0.0024048 & 0.9595421 \end{pmatrix},$ (52)

$\bigl(\tfrac13 I_4 + D\bigr)^2 = \begin{pmatrix} 0.4503887 & 0.1237616 & 0.0398313 & 0.0170706 \\ 0.0417619 & 0.2017477 & 0.0114481 & 0.0050805 \\ 0.025301 & 0.0060966 & 0.1381395 & 0.0797304 \\ 0.1129909 & 0.0221172 & 0.0072143 & 0.1213738 \end{pmatrix},$ (53)

$\tfrac13 D\bigl(\tfrac13 I_4 + D\bigr)^2 = \begin{pmatrix} 0.0523673 & 0.0213897 & 0.0065848 & 0.0035687 \\ 0.0073662 & 0.0090555 & 0.0014941 & 0.0008935 \\ 0.0064079 & 0.0016805 & 0.0021522 & 0.0055569 \\ 0.0173526 & 0.0055134 & 0.0017415 & 0.001262 \end{pmatrix},$ (54)

$I_4 - D\bigl(\tfrac13 I_4 + D\bigr)^2 = \begin{pmatrix} 0.8428982 & -0.0641691 & -0.0197545 & -0.0107061 \\ -0.0220986 & 0.9728335 & -0.0044824 & -0.0026806 \\ -0.0192238 & -0.0050414 & 0.9935433 & -0.0166706 \\ -0.0520578 & -0.016540 & -0.0052246 & 0.996214 \end{pmatrix}.$ (55)

We give the entries of the $8\times 8$ matrix $D_{4,3}^{-1}$ in the form of Table 3.

Clearly, the matrix $D_{4,3}$ is nonsingular. By virtue of Theorem 3, the system of Equations (28)-(31) has a 3-periodic solution $y(k)$. Moreover, for a 3-periodic solution $x(k)$, $k\in\mathbb{N}$, of Equations (28), (29), and given a $4\times 3$ weight matrix $C$ of the hidden layer and a constant vector $\tilde{y}\in\mathbb{R}^4$, we can express the initial conditions $y^C(0)$, $y(0)$ from Equation (50), making use of the inverse matrix $D_{4,3}^{-1}$.
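To illustrate, the matrix $D_{4,3}$ of Equation (51) and its blocks can be reproduced directly from the weight matrix (46). A short NumPy check (our own variable names) against Equations (52)-(55) and the nonsingularity claim:

```python
import numpy as np

# Weight matrix of the second context layer, Equation (46), and gamma = 1/3
D = np.array([[1/3,  1/9,   1/27,  1/81 ],
              [1/27, 1/9,   1/81,  1/243],
              [1/81, 1/243, 1/27,  1/9  ],
              [1/9,  1/81,  1/243, 1/81 ]])
gamma, I4 = 1/3, np.eye(4)

P = np.linalg.matrix_power(gamma * I4 + D, 2)      # (1/3 I_4 + D)^2, Equation (53)
D43 = np.block([[I4 - gamma * P, -P],              # Equation (51)
                [-gamma * D @ P, I4 - D @ P]])

print(I4 - gamma * P)            # compare with Equation (52)
print(I4 - D @ P)                # compare with Equation (55)
print(np.linalg.det(D43) != 0)   # True: assumption A11 holds, D_{4,3} is invertible
```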

Now, let us recall the 3-periodic solution $x(k)$, $k\in\mathbb{N}$, of the system of Equations (28), (29):

Table 3. Entries of the matrix D 4 , 3 1 .

$x(k) = \begin{cases} (0.868281,\, 0.7315657,\, 0.6173236)^{T}, & k\equiv 1 \ (\mathrm{mod}\ 3), \\ (0.9332127,\, 0.8234114,\, 0.6965444)^{T}, & k\equiv 2 \ (\mathrm{mod}\ 3), \\ (0.9705381,\, 0.8943396,\, 0.7705772)^{T}, & k\equiv 0 \ (\mathrm{mod}\ 3). \end{cases}$ (56)

We assume that

$C = \begin{pmatrix} 1 & 3 & 9 \\ 3 & 1 & 3 \\ 9 & 84 & 1 \\ 27 & 0 & 81 \end{pmatrix}, \quad \tilde{y} = \begin{pmatrix} 0 \\ 1 \\ 3 \\ 3 \end{pmatrix},$ (57)

and evaluate the right-hand side of Equation (50):

$Cx(2) + \bigl(\tfrac13 I_4 + D\bigr)Cx(1) + \bigl(\tfrac43 I_4 + D\bigr)\tilde{y} = \begin{pmatrix} 19.667758 \\ 10.969587 \\ 113.59873 \\ 112.36785 \end{pmatrix},$ (58)

$Cx(3) + DCx(2) + \bigl(\tfrac13 I_4 + D\bigr)DCx(1) + \bigl(D^2 + \tfrac43 D + I_4\bigr)\tilde{y} = \begin{pmatrix} 23.958132 \\ 10.929837 \\ 101.92756 \\ 95.796757 \end{pmatrix}.$ (59)

Now from Equation (50) we find

$\begin{pmatrix} y^C(0) \\ y(0) \end{pmatrix} = D_{4,3}^{-1} \begin{pmatrix} 19.667758 \\ 10.969587 \\ 113.59873 \\ 112.36785 \\ 23.958132 \\ 10.929837 \\ 101.92756 \\ 95.796757 \end{pmatrix} = \begin{pmatrix} 57.09375 \\ 20.074371 \\ 148.44606 \\ 138.30942 \\ 39.056013 \\ 13.864602 \\ 106.60011 \\ 100.53127 \end{pmatrix}.$ (60)

Table 4. A 3-periodic solution of the system of Equations (28)-(31).

Finally, we find the first four values of the approximate 3-periodic solution (see Table 4). We notice that Equations (43) are satisfied much more precisely than Equations (18) in Example 1. On the other hand, the application of the method becomes much more difficult for a larger number $m$ of nodes in the output layer.

4. Conclusion

We presented sufficient conditions for the existence of a periodic output of modified and OHF Elman neural networks with a periodic input. Examples illustrating the results obtained were given. The models considered can be applied to the prediction of the quality of experience of services.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Elman, J.L. (1990) Finding Structure in Time. Cognitive Science, 14, 179-211.
https://doi.org/10.1207/s15516709cog1402_1
[2] Ren, G., Cao, Y., Wen, S., Huang, T. and Zeng, Z. (2018) A Modified Elman Neural Network with a New Learning Rate Scheme. Neurocomputing, 286, 11-18.
https://doi.org/10.1016/j.neucom.2018.01.046
[3] Xu, L. and Zhang, Y. (2019) Quality Prediction Model Based on Novel Elman Neural Network Ensemble. Complexity, 2019, Article ID: 9852134.
https://doi.org/10.1155/2019/9852134
[4] Xia, T., Zhuo, P., Xiao, L., Du, S., Wang, D. and Xi, L. (2021) Multi-Stage Fault Diagnosis Framework for Rolling Bearing Based on OHF Elman AdaBoost-Bagging Algorithm. Neurocomputing.
[5] Gao, X., You, D. and Katayama, S. (2012) Seam Tracking Monitoring Based on Adaptive Kalman Filter Embedded Elman Neural Network during High-Power Fiber Laser Welding. IEEE Transactions on Industrial Electronics, 59, 4315-4325.
https://doi.org/10.1109/TIE.2012.2193854
[6] Trifonova, M. and Li-Shtereva, X. (2017) Analytical Mathematical Models for Determining the Probable Sliding Surface of the Working Slope. In: Mining and Geology Today, Proceedings of the International Symposium, Mining Institute, Belgrade, 179-182.
[7] Gao, X.Z., Gao, X.M. and Ovaska, S.J. (1996) A Modified Elman Neural Network Model with Application to Dynamical Systems Identification. 1996 IEEE International Conference on Systems, Man and Cybernetics. Information Intelligence and Systems, Vol. 2, Beijing, 1396-1381.
https://doi.org/10.1109/ICSMC.1996.571312
[8] Ding, S., Zhang, Y., Chen, J. and Jia, W. (2013) Research on Using Genetic Algorithms to Optimize Elman Neural Networks. Neural Computing and Applications, 23, 293-297.
https://doi.org/10.1007/s00521-012-0896-3
[9] Sun, W.Z. and Wang, J.S. (2017) Elman Neural Network Soft Sensor Model of Conversion Velocity in Polymerization Process Optimized by Chaos Whale Optimization Algorithm. IEEE Access, 5, 13062-13076.
https://doi.org/10.1109/ACCESS.2017.2723610
[10] Yang, L., Wang, F., Zhang, J. and Ren, W. (2019) Remaining Useful Life Prediction of Ultrasonic Motor Based on Elman Neural Network with Improved Particle Swarm Optimization. Measurement, 143, 27-38.
https://doi.org/10.1016/j.measurement.2019.05.013
[11] Shi, X.H., Liang, Y.C., Lee, H.P., Lin, W.Z., Xu, X. and Lim, L.P. (2004) Improved Elman Networks and Applications for Controlling Ultrasonic Motors. Applied Artificial Intelligence, 18, 603-629.
https://doi.org/10.1080/08839510490483279
[12] Yang, X., Liao, X., Evans, D.J. and Tang, Y. (2005) Existence and Stability of Periodic Solution in Impulsive Hopfield Neural Networks with Finite Distributed Delays. Physics Letters A, 343, 108-116.
https://doi.org/10.1016/j.physleta.2005.06.008
[13] Zhou, T., Liu, Y.H., Liu, Y.R. and Chen, A. (2007) Global Exponential Periodicity for Discrete-Time Analogues of BAM Neural Networks with Finite Distributed Delays. International Journal: Mathematical Manuscripts, 3.
[14] Zhang, Z. and Zheng, T. (2018) Global Asymptotic Stability of Periodic Solutions for Delayed Complex-Valued Cohen-Grossberg Neural Networks by Combining Coincidence Degree Theory with LMI Method. Neurocomputing, 289, 220-230.
https://doi.org/10.1016/j.neucom.2018.02.033
[15] Akça, H., Alassar, R., Covachev, V. and Yurtsever, H.A. (2007) Discrete-Time Impulsive Hopfield Neural Networks with Finite Distributed Delays. Computer Assisted Mechanics and Engineering Sciences, 14, 145-158.
[16] Akça, H., Covachev, V., Covacheva, Z. and Mohamad, S. (2009) Global Exponential Periodicity for the Discrete Analogue of an Impulsive Hopfield Neural Network with Finite Distributed Delays. Functional Differential Equations, 16, 53-72.
https://doi.org/10.1142/9789812837332_0058
[17] Akça, H., Al-Zahrani, E., Covachev, V. and Covacheva, Z. (2017) Existence of Periodic Solutions for the Discrete-Time Counterpart of a Neutral-Type Cellular Neural Network with Time-Varying Delays and Impulses. AIP Conference Proceedings of ICNAAM 2016 (Rhodes, Greece), 1863, Article ID: 140002.
https://doi.org/10.1063/1.4992309
[18] Covachev, V. and Covacheva, Z. (2018) Existence of Periodic Solutions for the Discrete-Time Counterpart of a Complex-Valued Hopfield Neural Network with Time-Varying Delays and Impulses. 2018 International Joint Conference on Neural Networks (IJCNN), Rio de Janeiro, 8-13 July 2018, 1-8.
https://doi.org/10.1109/IJCNN.2018.8489198
[19] Covachev, V. and Covacheva, Z. (2019) Existence and Global Exponential Stability of a Periodic Solution of a Hopfield-Type Neural Network with Distributed Delays and Impulses. In: Proceedings of the 10th International Conference on Nonlinear Analysis and Convex Analysis, Yokohama Publishers, Chitose, 25-39.
[20] Covachev, V. and Covacheva, Z. (2020) Existence of Periodic Solutions for a Modified Elman Neural Network. AIP Conference Proceedings on “New Trends of Differential Equations in Sciences (NTADES’20)”, Saints Constantine and Helena, 1-4 September 2020.
[21] Gaines, R.E. and Mawhin, J.L. (1977) Coincidence Degree and Nonlinear Differential Equations. Lecture Notes in Mathematics, Vol. 568, Springer, Berlin.
https://doi.org/10.1007/BFb0089537
[22] Berman, A. and Plemmons, R.J. (1994) Nonnegative Matrices in Mathematical Sciences. SIAM, Philadelphia.
https://doi.org/10.1137/1.9781611971262
[23] Fiedler, M. (1986) Special Matrices and Their Applications in Numerical Mathematics. Martinus Nijhoff, Dordrecht.
https://doi.org/10.1007/978-94-009-4335-3
