Event-Triggered Zero-Gradient-Sum Distributed Algorithm for Convex Optimization with Time-Varying Communication Delays and Switching Directed Topologies

Abstract

Distributed optimization algorithms are now widely used in many kinds of complex networks. To extend the theory of distributed optimization toward directed graphs, this paper studies the distributed convex optimization problem with time-varying delays and switching topologies under directed graph topologies. An event-triggered communication mechanism is adopted: communication between agents is governed by a trigger condition, and information is exchanged only when the condition is met. Compared with continuous communication, this greatly saves network resources and reduces communication cost. Using the Lyapunov-Krasovskii functional method and inequality analysis, a new sufficient condition is derived that guarantees the agents' states finally reach the optimal state, and an upper bound on the maximum allowable delay is given. In addition, Zeno behavior is proved not to occur during the operation of the algorithm. Finally, a simulation example is given to illustrate the correctness of the results in this paper.

Share and Cite:

Ye, L. (2022) Event-Triggered Zero-Gradient-Sum Distributed Algorithm for Convex Optimization with Time-Varying Communication Delays and Switching Directed Topologies. Journal of Applied Mathematics and Physics, 10, 1247-1265. doi: 10.4236/jamp.2022.104088.

1. Introduction

In today's networked information age, thanks to the rapid development of communication and sensing technology, the original point-to-point control systems have been reorganized into networked control systems composed of large numbers of interrelated subsystems. Networked control systems are divided into centralized and distributed structures. To deal with increasingly complex practical problems, distributed systems have attracted growing attention and have given rise to hot topics such as distributed synchronization [1] and distributed optimization [2]. Among them, the distributed convex optimization problem (DCOP) plays an important role in networked control systems, so it has received great attention from scholars. The core of this problem is the following optimization objective in a network with N nodes:

$$x^* \in \arg\min_{x \in \mathbb{R}^n} \sum_{i=1}^{N} f_i(x), \qquad (1)$$

where $f_i : \mathbb{R}^n \to \mathbb{R}$ is a local cost function assumed to be strongly convex, and $x^* \in \mathbb{R}^n$ is the minimizer of $\sum_{i=1}^{N} f_i(x)$. Optimization problem (1) has a wide range of application scenarios such as sensor scheduling [3] [4], source localization [5], distributed active power optimal control in power systems [6], parallel and distributed computation [7], distributed parameter estimation [8], distributed optimal resource allocation over networks [9], spectrum sensing for cognitive radio networks [10], distributed statistics and machine learning [11], emulation of swarms in biological networks [12], and distributed Lasso [13].
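As a concrete illustration of problem (1), consider local strongly convex quadratics; the sketch below (with hypothetical cost parameters, not taken from this paper) shows that the gradient sum vanishes at the global minimizer:

```python
import numpy as np

# Illustrative instance of problem (1): N agents, each holding a local
# strongly convex quadratic f_i(x) = 0.5 * a_i * (x - b_i)^2 with a_i > 0.
# The global minimizer of sum_i f_i is the a_i-weighted average of the b_i,
# obtained by setting the gradient sum a_i * (x - b_i) to zero.
rng = np.random.default_rng(0)
N = 8
a = rng.uniform(1.0, 5.0, N)   # local curvatures (convexity parameters)
b = rng.uniform(-3.0, 3.0, N)  # local minimizers

x_star = np.sum(a * b) / np.sum(a)  # closed-form global minimizer

# The gradient sum vanishes at x_star, as required by first-order optimality.
grad_sum = np.sum(a * (x_star - b))
print(abs(grad_sum) < 1e-9)  # True
```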

At present, there have been many research results on the DCOP. Among them, the works [14] [15] [16] [17] give consensus-based distributed optimization algorithms to solve problem (1). However, this kind of algorithm has an obvious defect: it uses a diminishing step size, which leads to slow convergence. For this reason, researchers have proposed distributed optimization algorithms based on the auxiliary-variables method [18] [19] [20] [21]. These algorithms adopt a fixed step size, which improves the convergence speed and accuracy, but also increases the cost of computation and communication.

To overcome the problems caused by diminishing step sizes and auxiliary variables at the same time, Lu and Tang proposed a new algorithm called the zero-gradient-sum (ZGS) algorithm in [22]. The main feature of the algorithm is that each agent is initialized at the minimizer of its own local objective, and in the subsequent process the sum of the gradients of all local objective functions always remains equal to 0. The advantage of the ZGS algorithm is its fast convergence speed, with guaranteed asymptotic or even exponential convergence. Therefore, researchers have done a lot of work to extend this result [23] [24] [25] [26]. In [23], the authors studied the distributed ZGS consensus problem with time-varying delays, a factor that must be considered in practical applications. In [24], Liu et al. studied the distributed ZGS consensus problem with time-varying topologies. Finite-time convergence of ZGS algorithms was studied in [25] and [26].

The event-triggered mechanism can greatly reduce communication cost, so event-triggered ZGS algorithms have also been widely studied [27] [28] [29]. The sampled-data-based distributed convex optimization problem with event-triggered communication was studied in [27]. In [28], the authors studied the event-triggered ZGS distributed optimization problem with time-varying topologies. In [29], Liu and Xie proved the convergence of the ZGS algorithm with time-varying delays based on an event-triggered mechanism.

All the above studies assume that the topology graph is undirected. In fact, due to the complexity of real situations, it is also meaningful to study ZGS in the directed-graph case. At present, there has been some research on the ZGS algorithm for directed graphs [30] [31] [32]. Among them, in [30], Guo and Chen gave sufficient conditions for the convergence of the ZGS algorithm with time-varying delays and switching topologies, a significant extension of the ZGS algorithm. In [31] and [32], the authors studied directed-graph ZGS algorithms without and with time delays, respectively. However, compared with the undirected case, research on directed graphs is still relatively scarce. As far as we know, no work has addressed an event-triggered ZGS optimization algorithm with time-varying delays and switching directed topologies. Considering that time delays always exist and the network topology may switch in practice, combined with the need to reduce network communication cost, this research is of significance.

Therefore, to generalize the continuous-time ZGS optimization consensus algorithm, we discuss the convergence of ZGS algorithm with time-varying delay and switching topologies based on event-triggered mechanism under directed networks. Compared with the above related literature, the main contributions of this paper can be summarized as follows:

1) Different from the previous ZGS optimization algorithm results [22] - [32], this paper takes the first step toward studying the ZGS optimization algorithm with time-varying delays and switching topologies based on an event-triggered mechanism under directed networks, which is more challenging and practical.

2) Compared with the work in [29], we consider the possibility of topology switching instead of a fixed topology, which is reasonable given the various uncertain disturbances in reality. What's more, the scope of our algorithm is extended from undirected graphs to the wider class of weight-balanced, strongly connected directed graphs. By a Lyapunov-Krasovskii-based approach, sufficient conditions on the maximum admissible time delays are derived.

3) Compared with the work in [30], we add an event-triggered mechanism, which better saves network resources and reduces communication cost. To the best of our knowledge, this is the first time an event-triggered mechanism is used in a ZGS algorithm with time-varying delays and switching directed topologies.

The rest of this paper is organized as follows. In Section 2, we give some preliminaries about graph theory and strongly convex functions. The distributed ZGS optimization consensus protocol and convergence analysis are derived in Section 3. Some simulation studies are performed in Section 4 to validate the effectiveness of our proposed ZGS optimization algorithm. Section 5 concludes this paper.

Notations: Let $\mathbb{R}$ and $\mathbb{Z}^+$ denote the set of real numbers and the set of positive integers, respectively. $\mathbb{R}^n$ and $\mathbb{R}^{n \times n}$ denote the set of $n \times 1$ real vectors and $n \times n$ real matrices, respectively. Let $\mathbf{1}_n$ and $\mathbf{0}_n$ denote, respectively, the $n \times 1$ column vectors of all ones and all zeros. $I_n \in \mathbb{R}^{n \times n}$ denotes the identity matrix. $A^T$ and $x^T$ represent the transposes of matrix $A$ and vector $x$, respectively. The Kronecker product of matrices $A = [a_{ij}] \in \mathbb{R}^{n \times m}$ and $B \in \mathbb{R}^{p \times q}$ is denoted $A \otimes B = [a_{ij}B] \in \mathbb{R}^{np \times mq}$. $\|x\|$ denotes the standard Euclidean norm of vector $x$, and $\inf(\cdot)$ denotes the greatest lower bound. For a continuously differentiable function $f : \mathbb{R}^n \to \mathbb{R}$, $\nabla f$ and $\nabla^2 f$ represent, respectively, the gradient and the Hessian matrix of $f$. For matrices $A$ and $B$, the matrix inequalities $A > B$ ($A \ge B$) and $A < B$ ($A \le B$) mean that $A - B$ and $B - A$ are positive (semi-)definite, respectively. Besides, if not explicitly stated, matrices and vectors are assumed to have compatible dimensions.

2. Problem Description and Preliminaries

We first introduce some basic notions of graph theory. In this paper, we use $G = (V, E, A)$ to represent a directed communication network, where $V = \{1, 2, \dots, N\}$ is a finite nonempty node set, $E \subseteq V \times V$ is the edge set of ordered pairs of nodes, and $A = [a_{ij}] \in \mathbb{R}^{N \times N}$ ($i, j \in V$) is the adjacency matrix. $(j, i) \in E$ means that there is an arc from node $j$ to node $i$. The entry $a_{ij}$ of the adjacency matrix $A$ is greater than zero if and only if $(j, i) \in E$; otherwise $a_{ij} = 0$. $N_i = \{j \in V \mid (j, i) \in E\}$ denotes the set of neighbors of the $i$th node. The in-degree of node $i$ is defined as $d_i = \sum_{j=1}^{N} a_{ij}$ and the in-degree matrix is $D = \mathrm{diag}\{d_1, \dots, d_N\}$. The Laplacian matrix associated with the graph $G$ is defined as $L = D - A$. For switching topologies, let $\Theta = \{G_1, G_2, \dots, G_l\} = \{(V, E, A^{\sigma(t)})\}$ be a finite set of directed graphs. Define the switching signal $\sigma(t) : [0, +\infty) \to \Upsilon = \{1, 2, \dots, l\}$, where $l \in \mathbb{Z}^+$ denotes the total number of possible graphs. For any time interval in which the $k$th topology is activated, we have $G_{\sigma(t)} = G_k \in \Theta$, and the corresponding Laplacian matrix is denoted $L_k$.
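The quantities above can be assembled numerically; the sketch below (using an illustrative directed cycle, not one of this paper's topologies) builds $L = D - A$ and checks the weight-balanced property $\mathbf{1}_N^T L = \mathbf{0}^T$:

```python
import numpy as np

# Building the in-degree Laplacian L = D - A for a small directed graph.
# Convention: a_ij > 0 iff there is an arc from node j to node i, so row i of
# A lists the in-neighbors of node i. The example is a directed 4-cycle.
A = np.array([[0., 1., 0., 0.],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.],
              [1., 0., 0., 0.]])

D = np.diag(A.sum(axis=1))  # in-degree matrix
L = D - A                   # Laplacian

# Weight-balanced: every row sum of L is zero (by construction) and every
# column sum is zero as well, i.e., 1^T L = 0.
print(np.allclose(np.ones(4) @ L, 0.0))  # True
```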

Next, the strongly convex functions will be introduced.

Definition 1. [22] A twice continuously differentiable function $f : \mathbb{R}^n \to \mathbb{R}$ is said to be locally strongly convex on a convex and compact set $S$ if there exists a constant $m > 0$ such that the following three equivalent conditions hold for any $x, y \in S$:

$$\begin{cases} f(y) - f(x) - \nabla f^T(x)(y - x) \ge \dfrac{m}{2}\|y - x\|^2, \\ (\nabla f(y) - \nabla f(x))^T(y - x) \ge m\|y - x\|^2, \\ \nabla^2 f(x) \ge m I_n, \end{cases} \qquad (2)$$

where $m$ is called the convexity parameter of $f$.
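The three conditions in (2) can be spot-checked numerically; the sketch below uses a sample scalar function $f(x) = x^4 + 2x^2$ on $S = [-2, 2]$ (an assumed example, where $m = 4$ works since $f''(x) = 12x^2 + 4 \ge 4$):

```python
import numpy as np

# Numerical spot-check of the three strong-convexity conditions in (2) for
# f(x) = x^4 + 2*x^2 on the compact set S = [-2, 2], with m = 4.
f = lambda x: x**4 + 2*x**2
grad = lambda x: 4*x**3 + 4*x
hess = lambda x: 12*x**2 + 4
m = 4.0

rng = np.random.default_rng(1)
xs = rng.uniform(-2, 2, 100)
ys = rng.uniform(-2, 2, 100)

c1 = np.all(f(ys) - f(xs) - grad(xs)*(ys - xs) >= m/2*(ys - xs)**2 - 1e-12)
c2 = np.all((grad(ys) - grad(xs))*(ys - xs) >= m*(ys - xs)**2 - 1e-12)
c3 = np.all(hess(xs) >= m - 1e-12)
print(c1, c2, c3)  # True True True
```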

If the function $f$ is strongly convex on any such set $S$, we have the following proposition:

Proposition 1. [22] [23] Define the subset $Q = \{x \in S \mid f(x) \le f(x(0))\}$, where $x(0)$ can be chosen such that $Q$ is closed. Combining this with (2), we can conclude that $Q$ is a compact set. Then, for all $x, y \in Q$, there exists a constant $M$ such that the following equivalent conditions hold:

$$\begin{cases} f(y) - f(x) - \nabla f^T(x)(y - x) \le \dfrac{M}{2}\|y - x\|^2, \\ (\nabla f(y) - \nabla f(x))^T(y - x) \le M\|y - x\|^2, \\ \nabla^2 f(x) \le M I_n. \end{cases} \qquad (3)$$

Finally, we list the important lemmas needed in this paper.

Lemma 1. [20] The following three statements are equivalent: i) $G$ is weight-balanced; ii) $\mathbf{1}_N^T L = \mathbf{0}^T$; iii) $L + L^T$ is positive semi-definite. Moreover, if $G$ is weight-balanced and strongly connected, then zero is a simple eigenvalue of $L + L^T$.

Lemma 2. [34] For any vectors $x, y \in \mathbb{R}^n$ and any positive definite matrix $H \in \mathbb{R}^{n \times n}$, the following inequality holds:

$$2x^T y \le x^T H x + y^T H^{-1} y.$$

Lemma 3. [35] If $W \in \mathbb{R}^{n \times n}$ is a positive definite constant matrix, the scalar $\tau > 0$, and the integrations of the vector function $\omega(r) : [t - \tau, t] \to \mathbb{R}^n$ concerned are well defined, then the following inequality is satisfied:

$$\left(\int_{t-\tau}^{t} \omega(r) dr\right)^T W \left(\int_{t-\tau}^{t} \omega(r) dr\right) \le \tau \int_{t-\tau}^{t} \omega^T(r) W \omega(r) dr. \qquad (4)$$
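A discrete Riemann-sum analogue of inequality (4) is easy to verify numerically; the vector function and grid below are illustrative choices:

```python
import numpy as np

# Discrete analogue of the Jensen-type inequality (4): integrals are replaced
# by Riemann sums over a uniform grid, with W = I and omega(r) = [sin r, cos 2r].
tau, n = 1.5, 1000
r = np.linspace(0.0, tau, n)
dr = tau / n
w = np.stack([np.sin(r), np.cos(2*r)], axis=1)   # omega(r) sampled on the grid

lhs_vec = w.sum(axis=0) * dr                     # approximation of int omega dr
lhs = lhs_vec @ lhs_vec                          # (int w)^T W (int w) with W = I
rhs = tau * np.sum(np.einsum('ij,ij->i', w, w)) * dr  # tau * int w^T W w dr
print(lhs <= rhs + 1e-9)  # True
```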

Lemma 4. [32] Assume that the graph $G$ is strongly connected and weight-balanced. Then the following inequality is true for any vector $x$ with appropriate dimension:

$$x^T L x \ge \frac{\lambda_2(L + L^T)}{2\lambda_{\max}(L^T L)} x^T L^T L x, \qquad (5)$$

where $\lambda_2(L + L^T)$ is the minimum nonzero eigenvalue of the matrix $L + L^T$, and $\lambda_{\max}(L^T L)$ denotes the maximum eigenvalue of the matrix $L^T L$.
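Inequality (5) can be checked numerically on a weight-balanced, strongly connected digraph; the directed 4-cycle below is an illustrative example:

```python
import numpy as np

# Numerical check of Lemma 4 on a weight-balanced, strongly connected
# digraph (a directed 4-cycle):
#   x^T L x >= lambda_2(L+L^T) / (2*lambda_max(L^T L)) * x^T L^T L x.
A = np.array([[0., 1., 0., 0.],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.],
              [1., 0., 0., 0.]])
L = np.diag(A.sum(axis=1)) - A

lam2 = np.sort(np.linalg.eigvalsh(L + L.T))[1]    # minimum nonzero eigenvalue
lam_max = np.linalg.eigvalsh(L.T @ L).max()       # largest eigenvalue of L^T L

rng = np.random.default_rng(2)
ok = True
for _ in range(200):
    x = rng.standard_normal(4)
    lhs = x @ L @ x
    rhs = lam2 / (2*lam_max) * (x @ L.T @ L @ x)
    ok &= lhs >= rhs - 1e-10
print(ok)  # True
```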

Lemma 5. [36] If a differentiable function $f(t)$ satisfies $f(t), \dot f(t) \in L_\infty$ and $f(t) \in L_p$ for some value of $p \in [1, \infty)$, then $f(t) \to 0$ as $t \to \infty$.

3. Main Results

In this section, we elaborate the event-triggered ZGS algorithm with communication delays under switching directed networks and analyze its convergence. Firstly, we need the following two basic assumptions:

Assumption 1. The switching topology $G_{\sigma(t)}$ is directed. Moreover, it is strongly connected and weight-balanced on every time interval in which the $k$th topology is activated.

Assumption 2. The cost function $f_i$ in (1) is twice continuously differentiable and strongly convex for $i = 1, \dots, N$. We assume there exists a convexity parameter $m_i > 0$ such that the inequalities in (2) are satisfied, and that $f_i$ has an invertible, locally Lipschitz Hessian matrix $\nabla^2 f_i(x)$.

Proposition 2. Under Assumption 2, there exists a unique $x^* \in \mathbb{R}^n$ such that for any $x \in \mathbb{R}^n$, $F(x^*) \le F(x)$ and $\nabla F(x^*) = \mathbf{0}$, where $F(x) = \sum_{i=1}^{N} f_i(x)$. Therefore, problem (1) is well-posed.

In order to make the algorithm more practical, we take the ubiquitous communication delays of practical applications into account. In this paper, we assume there exists a time-varying communication delay $\tau(t)$ between agents which satisfies $\tau(t) \in [0, d]$ and $\dot\tau(t) \le h$, $h \in [0, 1)$. In the actual optimization process, the channel between agents may be disconnected, or data packets may be lost, corrupted, or out of range due to network failure. At the same time, new communication links may appear between agents. For these reasons, we consider both time delays and switching networks in problem (1).

Since avoiding continuous communication can greatly reduce the consumption of network resources, we adopt an event-triggered communication mechanism in the algorithm, i.e., only when the predefined event-triggered condition is satisfied does agent $i$ sample its new state and broadcast it to its neighbours with transmission delay $\tau(t)$.

Let $\{t_k^i, k \in \mathbb{Z}^+\}$ denote the event-triggered instants of agent $i$, where $t_k^i \ge 0$ and $t_0^i = 0$. Let $\hat x_i$ denote the latest broadcast state of agent $i \in V$, that is,

$$\hat x_i(t) \triangleq x_i(t_k^i), \quad t \in [t_k^i, t_{k+1}^i);$$

thus, $\hat x_i(t)$ converts the discrete-time signal $x_i(t_k^i)$ into a continuous-time signal simply by holding it constant until the next event occurs. To determine the trigger instants, we first define the measurement error for agent $i$ as

$$e_i(t) = \hat x_i(t) - x_i(t), \quad t \in [t_k^i, t_{k+1}^i). \qquad (6)$$

The trigger instants for agent $i$ are then defined iteratively by

$$t_{k+1}^i = \inf\{t : t > t_k^i, E_i(t) \ge 0\}, \qquad (7)$$

where the triggering function E i ( t ) is defined as follows:

$$E_i(t) = \|e_i(t)\|^2 - \beta_i^2 \|\hat z_i(t)\|^2 - c e^{-2\alpha t}, \qquad (8)$$

for some $\beta_i, c, \alpha > 0$, and $\hat z_i(t) = \sum_{j=1}^{N} a_{ij}^{\sigma(t)}(\hat x_i(t) - \hat x_j(t))$, where $\hat x_j(t) = x_j(t_k^j)$ represents the latest received state from neighbour $j$. At each triggering instant $t_k^i$, $e_i(t)$ is reset to 0, so $E_i(t_k^i) < 0$. In this paper, we assume that each agent can obtain its neighbours' information at $t_k^i$.
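A minimal sketch of the triggering test (7)-(8) for a single agent is given below; the function name and the values $\beta_i = 0.002$, $c = 15$, $\alpha = 0.16$ are illustrative (the latter match the parameters used in the simulation section):

```python
import numpy as np

# Sketch of the event-trigger test (7)-(8) for one agent, using the threshold
# E_i(t) = ||e_i||^2 - beta_i^2 * ||z_hat_i||^2 - c*exp(-2*alpha*t).
# An event fires (agent i broadcasts a fresh state) exactly when E_i(t) >= 0.
def should_trigger(x_i, x_hat_i, z_hat_i, t, beta_i=0.002, c=15.0, alpha=0.16):
    e_i = x_hat_i - x_i                       # measurement error (6)
    E_i = (np.linalg.norm(e_i)**2
           - beta_i**2 * np.linalg.norm(z_hat_i)**2
           - c * np.exp(-2*alpha*t))
    return E_i >= 0.0

# Just after a trigger the error is zero, so no new event fires.
print(should_trigger(np.array([1.0]), np.array([1.0]), np.array([0.5]), t=0.0))   # False
# A large drift of the true state away from the broadcast state fires an event.
print(should_trigger(np.array([9.0]), np.array([1.0]), np.array([0.5]), t=50.0))  # True
```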

Now, we propose the following event-triggered ZGS optimisation algorithm with time delays and switching topologies under directed networks:

$$\begin{cases} \dot x_i(t) = \gamma\left(\nabla^2 f_i(x_i(t))\right)^{-1} \sum_{j=1}^{N} a_{ij}^{\sigma(t)}\left(\hat x_j(t - \tau(t)) - \hat x_i(t - \tau(t))\right), \\ x_i(0) = x_i^*, \quad i \in V, \end{cases} \qquad (9)$$

where $x_i(t) \in \mathbb{R}^n$ denotes the $i$th agent's estimate of the unknown minimizer $x^* \in \mathbb{R}^n$; $x_i(0) \in \mathbb{R}^n$ is the initial state; $x_i^* \in \mathbb{R}^n$ is a minimizer of the local objective function $f_i$ defined in (1); $a_{ij}^{\sigma(t)}$ is the connection weight corresponding to the graph $G_k$; $\sigma(t)$ is defined in Section 2; $\gamma$ is a positive gain constant used to adjust the convergence rate; and $\tau(t)$ is the time-varying communication delay between agents. Combining with the definition of $e_i(t)$ in (6), we can rewrite algorithm (9) as

$$\begin{cases} \dot x_i(t) = \gamma\left(\nabla^2 f_i(x_i(t))\right)^{-1} \sum_{j=1}^{N} a_{ij}^{\sigma(t)}\left(e_j(t - \tau(t)) + x_j(t - \tau(t)) - e_i(t - \tau(t)) - x_i(t - \tau(t))\right), \\ x_i(0) = x_i^*, \quad i \in V. \end{cases} \qquad (10)$$
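A forward-Euler discretization of one step of protocol (9) might look as follows for scalar states; `zgs_step`, `hess_inv`, and `x_hat_delayed` are illustrative names, and the two-agent example only checks the direction of motion:

```python
import numpy as np

# One forward-Euler step of protocol (9) for scalar states, as a sketch.
# hess_inv[i] plays the role of (nabla^2 f_i(x_i))^{-1}; x_hat_delayed holds
# the delayed broadcast states x_hat_j(t - tau(t)).
def zgs_step(x, x_hat_delayed, A_k, hess_inv, gamma=1.0, dt=1e-3):
    N = len(x)
    x_next = x.copy()
    for i in range(N):
        coupling = sum(A_k[i, j] * (x_hat_delayed[j] - x_hat_delayed[i])
                       for j in range(N))
        x_next[i] = x[i] + dt * gamma * hess_inv[i] * coupling
    return x_next

# Two agents on a balanced digraph drift toward each other's broadcast values.
A_k = np.array([[0., 1.], [1., 0.]])
x = np.array([0.0, 2.0])
x_new = zgs_step(x, x_hat_delayed=x.copy(), A_k=A_k, hess_inv=np.array([1.0, 1.0]))
print(x_new[0] > 0.0, x_new[1] < 2.0)  # True True: agents move toward each other
```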

Remark 1. Inspired by the work of Liu and Xie [29] and the work of Guo and Chen [30], we obtain protocol (9). For any weight-balanced and strongly connected graph, from (10) we can easily get

$$\frac{d}{dt} \sum_{i=1}^{N} \nabla f_i(x_i(t)) = -\gamma \sum_{i=1}^{N} \sum_{j=1}^{N} a_{ij}^{\sigma(t)}\left(\left(e_i(t - \tau(t)) - e_j(t - \tau(t))\right) + \left(x_i(t - \tau(t)) - x_j(t - \tau(t))\right)\right) = -\gamma\left(\mathbf{1}_N^T L_k \otimes I_n\right)\left(e(t - \tau(t)) + x(t - \tau(t))\right) = 0, \qquad (11)$$

where $e(t) = [e_1^T(t), \dots, e_N^T(t)]^T$ and $x(t) = [x_1^T(t), \dots, x_N^T(t)]^T$. From (11), we know that the gradient sum $\sum_{i=1}^{N} \nabla f_i(x_i(t))$ remains constant along the evolution of system (10). Furthermore, we have

$$\sum_{i=1}^{N} \nabla f_i(x_i(t)) = \sum_{i=1}^{N} \nabla f_i(x_i(0)) = \sum_{i=1}^{N} \nabla f_i(x_i^*) = \mathbf{0}, \quad \forall t > 0. \qquad (12)$$

Thus, algorithm (9) also satisfies the ZGS property.
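The invariant (12) can be checked numerically on local quadratics (an assumed example, using the undelayed, continuously-communicating version of (9)); starting each agent at its own minimizer, the gradient sum stays numerically zero under Euler integration:

```python
import numpy as np

# Verifying the ZGS invariant (12) on local quadratics f_i(x) = 0.5*a_i*(x-b_i)^2:
# starting each agent at its own minimizer b_i, the gradient sum
# sum_i a_i*(x_i - b_i) stays (numerically) zero along the undelayed dynamics.
rng = np.random.default_rng(3)
N = 4
a = rng.uniform(1.0, 3.0, N)          # Hessians of the local quadratics
b = rng.uniform(-2.0, 2.0, N)         # local minimizers = initial states
A_k = np.array([[0., 1., 0., 0.],     # weight-balanced directed cycle
                [0., 0., 1., 0.],
                [0., 0., 0., 1.],
                [1., 0., 0., 0.]])
gamma, dt = 1.0, 1e-3
x = b.copy()
for _ in range(2000):
    # x_dot_i = gamma * (1/a_i) * sum_j a_ij (x_j - x_i); Hessian inverse is 1/a_i
    x = x + dt * gamma / a * (A_k @ x - A_k.sum(axis=1) * x)
grad_sum = np.sum(a * (x - b))
print(abs(grad_sum) < 1e-9)  # True: the zero-gradient-sum property holds
```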

Let $\xi_i(t) = x_i(t) - x^*$ represent the error between the state of agent $i$ and the optimal value $x^*$. According to (10), we have

$$\dot\xi_i(t) = -\gamma\left(\nabla^2 f_i(x_i(t))\right)^{-1} \sum_{j=1}^{N} a_{ij}^{\sigma(t)}\left(\left(e_i(t - \tau(t)) - e_j(t - \tau(t))\right) + \left(\xi_i(t - \tau(t)) - \xi_j(t - \tau(t))\right)\right) = -\gamma\left(\nabla^2 f_i(x_i(t))\right)^{-1} \sum_{j=1}^{N} L_{ij}^{(k)}\left(e_j(t - \tau(t)) + \xi_j(t - \tau(t))\right), \qquad (13)$$

where $L_{ij}^{(k)}$ denotes the $(i, j)$ entry of the Laplacian matrix $L_k$.

Remark 2. Guo and Chen [30] also studied the ZGS algorithm with time-varying delays and switching topologies for directed graphs. Different from them, this paper adopts an event-triggered communication mechanism, which reduces the communication cost. At the same time, Zeno behavior is avoided.

Remark 3. Compared with the conclusion of Liu and Xie [29], we relax the condition from undirected graphs to weight-balanced directed graphs. Because the Laplacian matrix of a directed graph is not symmetric, $L^T$ and $L$ cannot be treated as the same in the proof. This brings new challenges.

Next, we analyze the distributed optimisation algorithm (9) based on common Lyapunov function theory.

Theorem 1. Suppose that Assumptions 1 and 2 are satisfied. If the inequality

$$d \le \frac{m^2(1-h)\left[\lambda_2(L_k + L_k^T)\left(1 - 2\hat\beta\lambda_{\max}(L_k^T L_k)\right) - 2\hat\beta\lambda_{\max}(L_k + L_k^T)\lambda_{\max}(L_k^T L_k)\right]}{4\varepsilon_k m^2(1-h)\lambda_2(L_k + L_k^T)\left(1 - 2\hat\beta\lambda_{\max}(L_k^T L_k)\right) + 8\gamma^2\lambda_{\max}(L_k^T L_k)} \qquad (14)$$

holds for $k = 1, 2, \dots, l$, where $\lambda_{\max}(L_k^T L_k)$ and $\lambda_{\max}(L_k + L_k^T)$ respectively denote the maximum eigenvalues of the matrices $L_k^T L_k$ and $L_k + L_k^T$, $\lambda_2(L_k + L_k^T)$ represents the minimum nonzero eigenvalue of the matrix $L_k + L_k^T$, $\varepsilon_k = \inf\{v : L_k L_k^T \le v(L_k + L_k^T)\} > 0$, $m = \min_{i \in V}(m_i)$, $m_i$ is the convexity parameter of the function $f_i$, and

$$\hat\beta = \max\{\beta_1, \dots, \beta_N\} < \frac{\lambda_2(L_k + L_k^T)}{2\lambda_{\max}(L_k^T L_k)\left(\lambda_2(L_k + L_k^T) + \lambda_{\max}(L_k + L_k^T)\right)},$$

then algorithm (10) with event-triggered condition (7)-(8) solves optimisation problem (1) and Zeno behaviour is avoided.
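Given a topology, the bound on $\hat\beta$ and the admissible-delay bound (14) can be evaluated numerically. The sketch below uses an illustrative directed 4-cycle and assumed values $m = 16$, $h = 0.5$, $\gamma = 50$ (not the paper's simulation setup); $\varepsilon_k$ is computed as the largest generalized eigenvalue of $L_k L_k^T$ against $L_k + L_k^T$ restricted to the subspace orthogonal to the ones vector, on which both matrices are nonsingular:

```python
import numpy as np

# Evaluating the beta_hat bound and the admissible-delay bound (14) for a
# given topology; graph and parameter values here are illustrative.
A_k = np.array([[0., 1., 0., 0.],
                [0., 0., 1., 0.],
                [0., 0., 0., 1.],
                [1., 0., 0., 0.]])
L = np.diag(A_k.sum(axis=1)) - A_k

S = L + L.T
lam2 = np.sort(np.linalg.eigvalsh(S))[1]         # minimum nonzero eigenvalue of L + L^T
lam_max_S = np.linalg.eigvalsh(S).max()          # lambda_max(L + L^T)
lam_max_LTL = np.linalg.eigvalsh(L.T @ L).max()  # lambda_max(L^T L)

# epsilon_k = inf{v : L L^T <= v (L + L^T)} on the subspace orthogonal to 1.
B = np.array([[1., -1., 0., 0.],
              [0., 1., -1., 0.],
              [0., 0., 1., -1.]]).T              # basis of the subspace 1-perp
eps_k = np.linalg.eigvals(
    np.linalg.solve(B.T @ S @ B, B.T @ (L @ L.T) @ B)).real.max()

beta_bound = lam2 / (2 * lam_max_LTL * (lam2 + lam_max_S))
beta_hat = 0.9 * beta_bound                      # any beta_hat below its bound
m, h, gamma = 16.0, 0.5, 50.0
q = 1.0 - 2.0 * beta_hat * lam_max_LTL           # common factor in (14)
num = m**2 * (1 - h) * (lam2 * q - 2 * beta_hat * lam_max_S * lam_max_LTL)
den = 4 * eps_k * m**2 * (1 - h) * lam2 * q + 8 * gamma**2 * lam_max_LTL
d_max = num / den
print(d_max > 0)  # True: a positive admissible delay bound exists
```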

Proof. In order to prove our conclusion, we choose the following Lyapunov-Krasovskii functional

$$V(t) = V_1(t) + V_2(t) + V_3(t), \qquad (15)$$

where

$$V_1(t) = 2\sum_{i=1}^{N}\left(f_i(x^*) - f_i(x_i) - \nabla f_i^T(x_i)(x^* - x_i)\right), \qquad (16)$$

$$V_2(t) = \gamma\int_{-d}^{0}\int_{t+s}^{t}\sum_{i=1}^{N}\left(\dot\xi_i^T(r)\dot\xi_i(r) + \dot e_i^T(r)\dot e_i(r)\right) dr\, ds, \qquad (17)$$

$$V_3(t) = \frac{2\gamma^3 d}{(1-h)m^2}\int_{t-\tau(t)}^{t}\sum_{i=1}^{N}\left(\sum_{j=1}^{N} L_{ij}^{(k)}\left(e_j(r) + \xi_j(r)\right)\right)^T\left(\sum_{j=1}^{N} L_{ij}^{(k)}\left(e_j(r) + \xi_j(r)\right)\right) dr. \qquad (18)$$

Firstly, from (2), we can get

$$V_1(t) \ge \sum_{i=1}^{N} m_i\|x^* - x_i\|^2. \qquad (19)$$

What's more, it is easy to see that $V_2(t) \ge 0$ and $V_3(t) \ge 0$ for any $t \ge 0$, so the Lyapunov functional above is well defined.

Taking the time derivative of V 1 ( t ) along the trajectory evolution of x ( t ) of system (10) gives

$$\begin{aligned} \dot V_1(t) &= 2\sum_{i=1}^{N}\xi_i^T(t)\nabla^2 f_i(x_i(t))\dot x_i(t) \\ &= 2\gamma\sum_{i=1}^{N}\xi_i^T(t)\sum_{j=1}^{N} a_{ij}^{\sigma(t)}\left(\left(e_j(t-\tau(t)) - e_i(t-\tau(t))\right) + \left(\xi_j(t-\tau(t)) - \xi_i(t-\tau(t))\right)\right) \\ &= -2\gamma\sum_{i=1}^{N}\sum_{j=1}^{N} L_{ij}^{(k)}\xi_i^T(t)\left(e_j(t-\tau(t)) + \xi_j(t-\tau(t))\right). \end{aligned} \qquad (20)$$

By the Newton-Leibniz formula, we have $e_j(t-\tau(t)) = e_j(t) - \int_{t-\tau(t)}^{t}\dot e_j(r)dr$ and $\xi_j(t-\tau(t)) = \xi_j(t) - \int_{t-\tau(t)}^{t}\dot\xi_j(r)dr$. Let $\xi(t) = [\xi_1^T(t), \dots, \xi_N^T(t)]^T$; using the Kronecker product, (20) can be rearranged as

$$\dot V_1(t) = -2\gamma\xi^T(t)(L_k \otimes I_n)e(t) - 2\gamma\xi^T(t)(L_k \otimes I_n)\xi(t) + 2\gamma\int_{t-\tau(t)}^{t}\xi^T(t)(L_k \otimes I_n)\dot e(r)dr + 2\gamma\int_{t-\tau(t)}^{t}\xi^T(t)(L_k \otimes I_n)\dot\xi(r)dr. \qquad (21)$$

Using Young’s inequality yields

$$\begin{aligned} \dot V_1(t) &\le -\gamma\xi^T(t)(L_k \otimes I_n)\xi(t) + \gamma e^T(t)(L_k \otimes I_n)e(t) + 2\gamma\tau(t)\xi^T(t)(L_k L_k^T \otimes I_n)\xi(t) \\ &\quad + \gamma\int_{t-\tau(t)}^{t}\dot e^T(r)\dot e(r)dr + \gamma\int_{t-\tau(t)}^{t}\dot\xi^T(r)\dot\xi(r)dr \\ &\le -\gamma\xi^T(t)(L_k \otimes I_n)\xi(t) + \gamma e^T(t)(L_k \otimes I_n)e(t) + 2\gamma d\,\xi^T(t)(L_k L_k^T \otimes I_n)\xi(t) \\ &\quad + \gamma\int_{t-d}^{t}\dot e^T(r)\dot e(r)dr + \gamma\int_{t-d}^{t}\dot\xi^T(r)\dot\xi(r)dr. \end{aligned} \qquad (22)$$

Taking the time derivative of V 2 ( t ) along the trajectory evolution of ξ ( t ) of system (13), we have

$$\begin{aligned} \dot V_2(t) &= \gamma\int_{-d}^{0}\sum_{i=1}^{N}\left(\dot\xi_i^T(t)\dot\xi_i(t) - \dot\xi_i^T(t+s)\dot\xi_i(t+s)\right)ds + \gamma\int_{-d}^{0}\sum_{i=1}^{N}\left(\dot e_i^T(t)\dot e_i(t) - \dot e_i^T(t+s)\dot e_i(t+s)\right)ds \\ &= \gamma d\sum_{i=1}^{N}\dot\xi_i^T(t)\dot\xi_i(t) - \gamma\int_{t-d}^{t}\sum_{i=1}^{N}\dot\xi_i^T(r)\dot\xi_i(r)dr + \gamma d\sum_{i=1}^{N}\dot e_i^T(t)\dot e_i(t) - \gamma\int_{t-d}^{t}\sum_{i=1}^{N}\dot e_i^T(r)\dot e_i(r)dr. \end{aligned} \qquad (23)$$

Since $\nabla^2 f_i(x_i(t)) \ge m I_n$, where $m = \min_{i \in V}(m_i)$, we know that $(\nabla^2 f_i(x_i(t)))^{-1} \le \frac{1}{m} I_n$ holds. Noting that $\dot e_i(t) = -\dot x_i(t) = -\dot\xi_i(t)$ between triggering instants, it follows from (13) that

$$\begin{aligned} \dot V_2(t) &\le \frac{2\gamma^3 d}{m^2}\sum_{i=1}^{N}\left(\sum_{j=1}^{N} L_{ij}^{(k)}\left(e_j(t-\tau(t)) + \xi_j(t-\tau(t))\right)\right)^T\left(\sum_{j=1}^{N} L_{ij}^{(k)}\left(e_j(t-\tau(t)) + \xi_j(t-\tau(t))\right)\right) \\ &\quad - \gamma\int_{t-d}^{t}\dot\xi^T(r)\dot\xi(r)dr - \gamma\int_{t-d}^{t}\dot e^T(r)\dot e(r)dr. \end{aligned} \qquad (24)$$

Taking the time derivative of V 3 ( t ) gives

$$\begin{aligned} \dot V_3(t) &= \frac{2\gamma^3 d}{(1-h)m^2}\sum_{i=1}^{N}\left(\sum_{j=1}^{N} L_{ij}^{(k)}\left(e_j(t) + \xi_j(t)\right)\right)^T\left(\sum_{j=1}^{N} L_{ij}^{(k)}\left(e_j(t) + \xi_j(t)\right)\right) \\ &\quad - \frac{2\gamma^3 d}{(1-h)m^2}\left(1 - \dot\tau(t)\right)\sum_{i=1}^{N}\left(\sum_{j=1}^{N} L_{ij}^{(k)}\left(e_j(t-\tau(t)) + \xi_j(t-\tau(t))\right)\right)^T\left(\sum_{j=1}^{N} L_{ij}^{(k)}\left(e_j(t-\tau(t)) + \xi_j(t-\tau(t))\right)\right). \end{aligned} \qquad (25)$$

Since $\dot\tau(t) \le h < 1$, we have $\frac{1 - \dot\tau(t)}{1 - h} \ge 1$. Then, we conclude that

$$\begin{aligned} \dot V_3(t) &\le \frac{2\gamma^3 d}{(1-h)m^2}\sum_{i=1}^{N}\left(\sum_{j=1}^{N} L_{ij}^{(k)}\left(e_j(t) + \xi_j(t)\right)\right)^T\left(\sum_{j=1}^{N} L_{ij}^{(k)}\left(e_j(t) + \xi_j(t)\right)\right) \\ &\quad - \frac{2\gamma^3 d}{m^2}\sum_{i=1}^{N}\left(\sum_{j=1}^{N} L_{ij}^{(k)}\left(e_j(t-\tau(t)) + \xi_j(t-\tau(t))\right)\right)^T\left(\sum_{j=1}^{N} L_{ij}^{(k)}\left(e_j(t-\tau(t)) + \xi_j(t-\tau(t))\right)\right). \end{aligned} \qquad (26)$$

Together with (22), (24), and (26), one can obtain that

$$\begin{aligned} \dot V(t) &= \dot V_1(t) + \dot V_2(t) + \dot V_3(t) \\ &\le -\gamma\xi^T(t)(L_k \otimes I_n)\xi(t) + \gamma e^T(t)(L_k \otimes I_n)e(t) + 2\gamma d\,\xi^T(t)(L_k L_k^T \otimes I_n)\xi(t) \\ &\quad + \frac{2\gamma^3 d}{(1-h)m^2}\left(e(t) + \xi(t)\right)^T(L_k^T L_k \otimes I_n)\left(e(t) + \xi(t)\right) \\ &\le -\gamma\xi^T(t)(L_k \otimes I_n)\xi(t) + \gamma e^T(t)(L_k \otimes I_n)e(t) + 2\gamma d\,\xi^T(t)(L_k L_k^T \otimes I_n)\xi(t) \\ &\quad + \frac{4\gamma^3 d}{(1-h)m^2} e^T(t)(L_k^T L_k \otimes I_n)e(t) + \frac{4\gamma^3 d}{(1-h)m^2}\xi^T(t)(L_k^T L_k \otimes I_n)\xi(t) \\ &\le -\gamma\xi^T(t)(L_k \otimes I_n)\xi(t) + 2\gamma d\,\varepsilon_k\xi^T(t)\left((L_k + L_k^T) \otimes I_n\right)\xi(t) + \frac{4\gamma^3 d}{(1-h)m^2}\xi^T(t)(L_k^T L_k \otimes I_n)\xi(t) \\ &\quad + \left(\frac{\gamma}{2}\lambda_{\max}(L_k + L_k^T) + \frac{4\gamma^3 d}{(1-h)m^2}\lambda_{\max}(L_k^T L_k)\right)e^T(t)e(t), \end{aligned} \qquad (27)$$

where we used $\sum_{i=1}^{N}\|\sum_{j=1}^{N} L_{ij}^{(k)}(e_j + \xi_j)\|^2 = (e + \xi)^T(L_k^T L_k \otimes I_n)(e + \xi)$, Lemma 2 for the cross terms, $\gamma e^T(L_k \otimes I_n)e = \frac{\gamma}{2}e^T((L_k + L_k^T) \otimes I_n)e$, and $L_k L_k^T \le \varepsilon_k(L_k + L_k^T)$.

Next, by Lemma 4, we can get that

$$\begin{aligned} \dot V(t) &\le -\gamma\xi^T(t)(L_k \otimes I_n)\xi(t) + 2\gamma d\,\varepsilon_k\xi^T(t)\left((L_k + L_k^T) \otimes I_n\right)\xi(t) + \frac{4\gamma^3 d}{(1-h)m^2}\cdot\frac{2\lambda_{\max}(L_k^T L_k)}{\lambda_2(L_k + L_k^T)}\xi^T(t)(L_k \otimes I_n)\xi(t) \\ &\quad + \left(\frac{\gamma}{2}\lambda_{\max}(L_k + L_k^T) + \frac{4\gamma^3 d}{(1-h)m^2}\lambda_{\max}(L_k^T L_k)\right)e^T(t)e(t) \\ &= -\left(\frac{\gamma}{2} - 2\gamma\varepsilon_k d - \frac{4\gamma^3 d\,\lambda_{\max}(L_k^T L_k)}{(1-h)m^2\lambda_2(L_k + L_k^T)}\right)\xi^T(t)\left((L_k + L_k^T) \otimes I_n\right)\xi(t) \\ &\quad + \left(\frac{\gamma}{2}\lambda_{\max}(L_k + L_k^T) + \frac{4\gamma^3 d}{(1-h)m^2}\lambda_{\max}(L_k^T L_k)\right)e^T(t)e(t). \end{aligned} \qquad (28)$$

Let $\hat\beta = \max\{\beta_1, \dots, \beta_N\}$. Based on the event-triggered condition (8), we can deduce

$$\begin{aligned} \|e(t)\|^2 &\le \hat\beta\|\hat z(t)\|^2 + c e^{-2\alpha t} = \hat\beta\left(x(t) + e(t)\right)^T(L_k^T L_k \otimes I_n)\left(x(t) + e(t)\right) + c e^{-2\alpha t} \\ &= \hat\beta\left(\xi(t) + e(t)\right)^T(L_k^T L_k \otimes I_n)\left(\xi(t) + e(t)\right) + c e^{-2\alpha t} \\ &= \hat\beta\xi^T(t)(L_k^T L_k \otimes I_n)\xi(t) + 2\hat\beta e^T(t)(L_k^T L_k \otimes I_n)\xi(t) + \hat\beta e^T(t)(L_k^T L_k \otimes I_n)e(t) + c e^{-2\alpha t} \\ &\le 2\hat\beta\xi^T(t)(L_k^T L_k \otimes I_n)\xi(t) + 2\hat\beta e^T(t)(L_k^T L_k \otimes I_n)e(t) + c e^{-2\alpha t} \\ &\le \frac{2\hat\beta\lambda_{\max}(L_k^T L_k)}{\lambda_2(L_k + L_k^T)}\xi^T(t)\left((L_k + L_k^T) \otimes I_n\right)\xi(t) + 2\hat\beta\lambda_{\max}(L_k^T L_k)\|e(t)\|^2 + c e^{-2\alpha t}. \end{aligned} \qquad (29)$$

Since $\hat\beta < \frac{\lambda_2(L_k + L_k^T)}{2\lambda_{\max}(L_k^T L_k)\left(\lambda_2(L_k + L_k^T) + \lambda_{\max}(L_k + L_k^T)\right)} < \frac{1}{2\lambda_{\max}(L_k^T L_k)}$, it follows from (29) that

$$\|e(t)\|^2 \le \frac{2\hat\beta\lambda_{\max}(L_k^T L_k)}{\lambda_2(L_k + L_k^T)\left(1 - 2\hat\beta\lambda_{\max}(L_k^T L_k)\right)}\xi^T(t)\left((L_k + L_k^T) \otimes I_n\right)\xi(t) + \frac{c}{1 - 2\hat\beta\lambda_{\max}(L_k^T L_k)} e^{-2\alpha t}. \qquad (30)$$

Substituting (30) into (28), we can obtain

$$\begin{aligned} \dot V(t) &\le -\left(\frac{\gamma}{2} - 2\gamma\varepsilon_k d - \frac{4\gamma^3 d\,\lambda_{\max}(L_k^T L_k)}{(1-h)m^2\lambda_2(L_k + L_k^T)} - \frac{\gamma\hat\beta\lambda_{\max}(L_k + L_k^T)\lambda_{\max}(L_k^T L_k)}{\lambda_2(L_k + L_k^T)\left(1 - 2\hat\beta\lambda_{\max}(L_k^T L_k)\right)} - \frac{8\gamma^3 d\,\hat\beta\lambda_{\max}^2(L_k^T L_k)}{m^2(1-h)\lambda_2(L_k + L_k^T)\left(1 - 2\hat\beta\lambda_{\max}(L_k^T L_k)\right)}\right) \\ &\qquad \times \xi^T(t)\left((L_k + L_k^T) \otimes I_n\right)\xi(t) \\ &\quad + \left(\frac{\gamma c\,\lambda_{\max}(L_k + L_k^T)}{2\left(1 - 2\hat\beta\lambda_{\max}(L_k^T L_k)\right)} + \frac{4c\gamma^3 d\,\lambda_{\max}(L_k^T L_k)}{m^2(1-h)\left(1 - 2\hat\beta\lambda_{\max}(L_k^T L_k)\right)}\right)e^{-2\alpha t}. \end{aligned} \qquad (31)$$

Let

$$\phi = \frac{\gamma}{2} - 2\gamma\varepsilon_k d - \frac{4\gamma^3 d\,\lambda_{\max}(L_k^T L_k)}{(1-h)m^2\lambda_2(L_k + L_k^T)} - \frac{\gamma\hat\beta\lambda_{\max}(L_k + L_k^T)\lambda_{\max}(L_k^T L_k)}{\lambda_2(L_k + L_k^T)\left(1 - 2\hat\beta\lambda_{\max}(L_k^T L_k)\right)} - \frac{8\gamma^3 d\,\hat\beta\lambda_{\max}^2(L_k^T L_k)}{m^2(1-h)\lambda_2(L_k + L_k^T)\left(1 - 2\hat\beta\lambda_{\max}(L_k^T L_k)\right)},$$

$$\psi = \frac{\gamma c\,\lambda_{\max}(L_k + L_k^T)}{2\left(1 - 2\hat\beta\lambda_{\max}(L_k^T L_k)\right)} + \frac{4c\gamma^3 d\,\lambda_{\max}(L_k^T L_k)}{m^2(1-h)\left(1 - 2\hat\beta\lambda_{\max}(L_k^T L_k)\right)}.$$

Because of condition (14) and the condition on $\hat\beta$ in Theorem 1, we have $\phi \ge 0$ and $\psi > 0$. Thus, $\dot V(t)$ can be simply expressed as

$$\dot V(t) \le -\phi\,\xi^T(t)\left((L_k + L_k^T) \otimes I_n\right)\xi(t) + \psi e^{-2\alpha t}. \qquad (32)$$

Next, according to Proposition 2 in the work of Chen and Ren [31], there exists a positive constant $\rho_k$ such that

$$0 \le m\sum_{i=1}^{N}\|x^* - x_i(t)\|^2 \le V_1(t) \le 2\rho_k x^T(t)\left((L_k + L_k^T) \otimes I_n\right)x(t) = 2\rho_k\xi^T(t)\left((L_k + L_k^T) \otimes I_n\right)\xi(t) \qquad (33)$$

holds over the compact set $\Theta_i^{(k)} = \left\{x(t) \in \mathbb{R}^n \,\middle|\, f_i(x^*) - f_i(x(t)) - \nabla f_i(x(t))^T(x^* - x(t)) \le V_1(x(0)) + mN\left(\frac{1}{8} + \frac{\|L_k\|^2}{32\varepsilon_k^2}\right)\right\}$, where $\varepsilon_k = \inf\{v \mid L_k L_k^T \le v(L_k + L_k^T)\}$. Consequently, combining (32) and (33), we obtain

$$\dot V(t) \le -\frac{\phi}{2\rho_k} V_1(t) + \psi e^{-2\alpha t}. \qquad (34)$$

Integrating both sides of (34) for any $t$ yields

$$V(t) - V(0) \le -\frac{\phi}{2\rho_k}\int_{0}^{t} V_1(s) ds + \frac{\psi}{2\alpha}, \qquad (35)$$

i.e.

$$V(t) + \frac{\phi}{2\rho_k}\int_{0}^{t} V_1(s) ds \le V(0) + \frac{\psi}{2\alpha}, \qquad (36)$$

which implies that $V(t)$ and $\frac{\phi}{2\rho_k}\int_{0}^{t} V_1(s)ds$ are both bounded. It follows from $V_1(t) \le V(t)$ that $V_1(t)$ is bounded. From (19), we get that $\sum_{i=1}^{N} m_i\|x^* - x_i(t)\|^2 = \sum_{i=1}^{N} m_i\|\xi_i(t)\|^2$ is bounded. Since $\sum_{i=1}^{N} m_i\|\xi_i(t)\|^2 \ge m\|\xi(t)\|^2$, we get that $\xi(t)$ is bounded. It then follows from (30) that $e(t)$ is bounded, and hence $\dot\xi(t)$ is bounded according to (13). Moreover, the boundedness of $\int_{0}^{t} V_1(s)ds$ together with (19) gives $\|\xi(t)\|^2 \in L_1$. By using Lemma 5, we get that $\xi(t) \to 0$ as $t \to \infty$, i.e., $x_i(t) \to x^*$ for all $i$, which implies that the distributed optimisation problem is solved by system (10).

In the following, we show that Zeno behaviour of the triggering times is excluded through the whole process for every $i \in V$, i.e., there exists a constant $\zeta > 0$ such that $t_{k+1}^i - t_k^i \ge \zeta$.

Note that when $t \in (t_k^i, t_{k+1}^i)$, $\dot e_i(t) = -\dot x_i(t) = -\dot\xi_i(t)$, and $\dot\xi_i(t)$ is bounded; thus, there exists a constant $\eta > 0$ such that $\|\dot e_i(t)\| \le \eta$. Combining this with $e_i(t_k^i) = 0$, we have

$$\|e_i(t)\| = \left\|\int_{t_k^i}^{t}\dot e_i(s) ds\right\| \le \int_{t_k^i}^{t}\eta\, ds = \eta\left(t - t_k^i\right). \qquad (37)$$

From the definition of the triggering time sequence, we know that $E_i(t) \ge 0$ at the next triggering instant $t_{k+1}^i$, i.e.,

$$\sqrt{\beta_i^2\|\hat z_i(t_{k+1}^i)\|^2 + c e^{-2\alpha t_{k+1}^i}} \le \|e_i(t_{k+1}^i)\| \le \eta\left(t_{k+1}^i - t_k^i\right). \qquad (38)$$

During the evolution of the system, whether $\|\hat z_i(t)\|^2 = 0$ or not, the left-hand side of (38) is always positive because of the term $c e^{-2\alpha t}$. So for every finite time $t_0$ there always exists a constant $\zeta(t_0) > 0$ such that $\eta(t_{k+1}^i - t_k^i) \ge \zeta(t_0)$, i.e., $t_{k+1}^i - t_k^i \ge \frac{\zeta(t_0)}{\eta} > 0$, which means that Zeno behaviour is excluded for all agents. This completes the proof.

4. Numerical Simulations

In this section, we show the effectiveness and feasibility of the theoretical results in Theorem 1. Here, we assume that there are eight nodes in the directed graph, the node states are scalars, and the local objective function of node $i$ is

$$f_i(x) = (x - i)^4 + 8i(x - i)^2, \qquad (39)$$

for $i = 1, 2, \dots, 8$. Obviously, each local objective function $f_i$ satisfies Assumption 2, and the convexity parameters can be taken as $m_i = 16$, $i = 1, 2, \dots, 8$, since $\nabla^2 f_i(x) = 12(x - i)^2 + 16i \ge 16$. Our

Figure 1. The directed switching topologies: (a) G 1 ; (b) G 2 ; (c) G 3 .

goal is to solve the global optimisation problem $\min_x F(x) = \sum_{i=1}^{8} f_i(x)$. In other words, we need to show that the states of all nodes eventually converge to the global optimal value $x^* \approx 5.1153$. We select the initial state of each node as $x_i(0) = x_i^* = i$, $i = 1, 2, \dots, 8$, which is the corresponding local optimal value.
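The quoted optimum can be reproduced by bisection on the gradient sum of $F$, which is strictly increasing:

```python
import numpy as np

# Locating the global optimum of F(x) = sum_{i=1}^{8} (x-i)^4 + 8*i*(x-i)^2 by
# bisection on its strictly increasing gradient; this reproduces x* = 5.1153.
def grad_F(x):
    i = np.arange(1, 9)
    return np.sum(4*(x - i)**3 + 16*i*(x - i))

lo, hi = 1.0, 8.0   # the gradient is negative at 1 and positive at 8
for _ in range(60):
    mid = 0.5*(lo + hi)
    if grad_F(mid) > 0:
        hi = mid
    else:
        lo = mid
x_star = 0.5*(lo + hi)
print(round(x_star, 4))  # 5.1153
```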

As shown in Figure 1, three directed switching topologies are considered, all of which are strongly connected and balanced. By calculation we obtain $\varepsilon_1 = 1$, $\varepsilon_2 = 1.791$, $\varepsilon_3 = 2.894$ and $\hat{\beta} < 0.00238$, so we choose the parameters $\beta_1 = 0.0023$, $\beta_2 = 0.0022$, $\beta_3 = 0.0021$, $\beta_4 = 0.0019$, $\beta_5 = 0.0018$, $\beta_6 = 0.002$, $\beta_7 = 0.0021$, $\beta_8 = 0.0017$. Moreover, we select $\gamma = 50$, $c = 15$, $\alpha = 0.16$, and take the time-varying delay as $\tau(t) = 0.000008 + 0.000008 \sin(t)$, which satisfies condition (14).

All simulation results are shown below; the sampling period is 0.1. From Figure 2, we can see that both the system state $x_i(t)$ and the most recently broadcast state $\hat{x}_i(t)$ eventually converge to the global optimum $x^*$, which confirms our conclusion. Figure 3 shows the switching signal $\sigma(t)$.

Figure 2. The trajectories of states of each node.

Figure 3. The switching signal σ ( t ) .

5. Conclusion

In this paper, the ZGS algorithm with time-varying delays and switching topologies is extended from undirected networks to balanced directed graphs. Combined with an event-triggered communication mechanism, a new convergence result is established: the agent states converge to the optimal state whenever the derived conditions are satisfied. In addition, the algorithm is free of Zeno behaviour. Finally, a simulation example verifies the effectiveness of the algorithm. In future work, we will extend the algorithm to constrained distributed optimization; since constraints arise frequently in practical applications, this is a topic of practical significance.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] Liu, X. and Qiu, X. (2021) Distributed Synchronization of Stochastic Complex Networks with Time-Varying Delays via Randomly Occurring Control. Applied Mathematics, 12, 803-817.
https://doi.org/10.4236/am.2021.129054
[2] Chen, X.B., Yan, K.X., Gao, Y., Xu, X.F., Yan, K. and Wang, J. (2020) Push-Pull Finite-Time Convergence Distributed Optimization Algorithm. American Journal of Computational Mathematics, 10, 118-146.
https://doi.org/10.4236/ajcm.2020.101008
[3] Li, C. and Elia, N. (2015) Stochastic Sensor Scheduling via Distributed Convex Optimization. Automatica, 58, 173-182.
https://doi.org/10.1016/j.automatica.2015.05.014
[4] Puccinelli, D. and Haenggi, M. (2005) Wireless Sensor Networks: Applications and Challenges of Ubiquitous Sensing. IEEE Circuits and Systems Magazine, 5, 19-31.
https://doi.org/10.1109/MCAS.2005.1507522
[5] Nedic, A. and Ozdaglar, A. (2009) Distributed Subgradient Methods for Multi-Agent Optimization. IEEE Transactions on Automatic Control, 54, 48-61.
https://doi.org/10.1109/TAC.2008.2009515
[6] Chen, G., Ren, J. and Feng, E. (2017) Distributed Finite-Time Economic Dispatch of a Network of Energy Resources. IEEE Transactions on Smart Grid, 8, 822-832.
[7] Bertsekas, D. and Tsitsiklis, J. (1997) Parallel and Distributed Computation: Numerical Methods. Athena Scientific, Nashua.
[8] Ram, S., Nedic, A. and Veeravalli, V. (2010) Distributed and Recursive Parameter Estimation in Parametrized Linear State-Space Models. IEEE Transactions on Automatic Control, 55, 488-492.
https://doi.org/10.1109/TAC.2009.2037460
[9] Madan, R. and Lall, S. (2006) Distributed Algorithms for Maximum Lifetime Routing in Wireless Sensor Networks. IEEE Transactions on Wireless Communications, 5, 2185-2193.
https://doi.org/10.1109/TWC.2006.1687734
[10] Bazerque, J. and Giannakis, G. (2010) Distributed Spectrum Sensing for Cognitive Radio Networks by Exploiting Sparsity. IEEE Transactions on Signal Processing, 58, 1847-1862.
https://doi.org/10.1109/TSP.2009.2038417
[11] Boyd, S., Parikh, N., Chu, E., Peleato, B. and Eckstein, J. (2011) Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Foundations and Trends in Machine Learning, 3, 1-122.
https://doi.org/10.1561/2200000016
[12] Zhao, X., Tu, S. and Sayed, A. (2012) Diffusion Adaptation over Networks under Imperfect Information Exchange and Non-Stationary Data. IEEE Transactions on Signal Processing, 60, 3460-3475.
https://doi.org/10.1109/TSP.2012.2192928
[13] Mateos, G., Bazerque, J. and Giannakis, G. (2010) Distributed Sparse Linear Regression. IEEE Transactions on Signal Processing, 58, 5262-5276.
https://doi.org/10.1109/TSP.2010.2055862
[14] Nedic, A., Ozdaglar, A. and Parrilo, P.A. (2010) Constrained Consensus and Optimization in Multi-Agent Networks. IEEE Transactions on Automatic Control, 55, 922-938.
https://doi.org/10.1109/TAC.2010.2041686
[15] Yuan, D., Xu, S., Zhang, B. and Rong, L. (2013) Distributed Primal-Dual Stochastic Subgradient Algorithms for Multi-Agent Optimization under Inequality Constraints. International Journal of Robust and Nonlinear Control, 23, 1846-1868.
https://doi.org/10.1002/rnc.2856
[16] Qiu, Z., Liu, S. and Xie, L. (2018) Necessary and Sufficient Conditions for Distributed Constrained Optimal Consensus under Bounded Input. International Journal of Robust and Nonlinear Control, 28, 2619-2635.
https://doi.org/10.1002/rnc.4040
[17] Rahili, S. and Ren, W. (2017) Distributed Continuous-Time Convex Optimization with Time-Varying Cost Functions. IEEE Transactions on Automatic Control, 62, 1590-1605.
https://doi.org/10.1109/TAC.2016.2593899
[18] Jakovetic, D., Xavier, J. and Moura, J.M. (2011) Cooperative Convex Optimization in Networked Systems: Augmented Lagrangian Algorithms with Directed Gossip Communication. IEEE Transactions on Signal Process, 59, 3889-3902.
https://doi.org/10.1109/TSP.2011.2146776
[19] Duchi, J.C., Agarwal, A. and Wainwright, M.J. (2012) Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling. IEEE Transactions on Automatic Control, 57, 592-606.
https://doi.org/10.1109/TAC.2011.2161027
[20] Gharesifard, B. and Cortés, J. (2014) Distributed Continuous-Time Convex Optimization on Weight-Balanced Digraphs. IEEE Transactions on Automatic Control, 59, 781-786.
https://doi.org/10.1109/TAC.2013.2278132
[21] Yuan, D., Xu, S. and Lu, J. (2015) Gradient-Free Method for Distributed Multi-Agent Optimization via Push-Sum Algorithms. International Journal of Robust and Nonlinear Control, 25, 1569-1580.
https://doi.org/10.1002/rnc.3164
[22] Lu, J. and Tang, C.Y. (2011) Zero-Gradient-Sum Algorithms for Distributed Convex Optimization: The Continuous-Time Case. IEEE Transactions on Automatic Control, 57, 2348-2354.
https://doi.org/10.1109/TAC.2012.2184199
[23] Yang, Z., Pan, X., Zhang, Q. and Chen, Z. (2020) Distributed Optimization for Multi-Agent Systems with Time Delay. IEEE Access, 8, 123019-123025.
https://doi.org/10.1109/ACCESS.2020.3007731
[24] Liu, J., Chen, W. and Dai, H. (2017) Distributed Zero-Gradient-Sum (ZGS) Consensus Optimisation over Networks with Time-Varying Topologies. International Journal of Systems Science, 48, 1836-1843.
https://doi.org/10.1080/00207721.2017.1288840
[25] Song, Y. and Chen, W. (2016) Finite-Time Convergent Distributed Consensus Optimisation over Networks. IET Control Theory & Applications, 10, 1314-1318.
https://doi.org/10.1049/iet-cta.2015.1051
[26] Wu, Z. and Li, Z. (2020) Finite-Time Distributed Convex Optimization with Zero-Gradient-Sum Algorithms. IFAC-PapersOnLine, 53, 2495-2500.
[27] Liu, J., Chen, W. and Dai, H. (2016) Sampled-Data Based Distributed Convex Optimization with Event-Triggered Communication. International Journal of Control, Automation and Systems, 14, 1421-1429.
https://doi.org/10.1007/s12555-015-0133-9
[28] Liu, J., Chen, W. and Dai, H. (2018) Event-Triggered Zero-Gradient-Sum Distributed Convex Optimisation over Networks with Time-Varying Topologies. International Journal of Control, 92, 2829-2841.
https://doi.org/10.1080/00207179.2018.1460693
[29] Liu, J. and Xie, J. (2021) Event-Triggered Zero-Gradient-Sum Distributed Optimisation Algorithm with Time-Varying Communication Delays. International Journal of Systems Science, 52, 110-125.
https://doi.org/10.1080/00207721.2020.1820622
[30] Guo, Z. and Chen, G. (2018) Distributed Zero-Gradient-Sum Algorithm for Convex Optimization with Time-Varying Communication Delays and Switching Networks. International Journal of Robust and Nonlinear Control, 28, 4900-4915.
https://doi.org/10.1002/rnc.4289
[31] Chen, W. and Ren, W. (2016) Event-Triggered Zero-Gradient-Sum Distributed Consensus Optimization over Directed Networks. Automatica, 65, 90-97.
https://doi.org/10.1016/j.automatica.2015.11.015
[32] Zhao, Z. and Chen, G. (2021) Event-Triggered Scheme for Zero-Gradient-Sum Optimisation under Directed Networks with Time Delay. International Journal of Systems Science, 52, 47-56.
https://doi.org/10.1080/00207721.2020.1819467
[33] Boyd, S. and Vandenberghe, L. (2004) Convex Optimization. Cambridge University Press, New York.
https://doi.org/10.1017/CBO9780511804441
[34] Dai, H., Jia, J., Yan, L., Wang, F. and Chen, W. (2019) Event-Triggered Exponential Synchronization of Complex Dynamical Networks with Cooperatively Directed Spanning Tree Topology. Neurocomputing, 330, 355-368.
https://doi.org/10.1016/j.neucom.2018.11.013
[35] Jia, Q. and Tang, W.K.S. (2011) Leader Following of Nonlinear Agents with Switching Connective Network and Coupling Delay. IEEE Transactions on Circuits and Systems I: Regular Papers, 58, 2508-2519.
https://doi.org/10.1109/TCSI.2011.2131230
[36] Wang, Y., Cao, J., Wang, H. and Alsaadi, F.E. (2019) Event-Triggered Consensus of Multi-Agent Systems with Nonlinear Dynamics and Communication Delay. Physica A: Statistical Mechanics and Its Applications, 522, 147-157.
https://doi.org/10.1016/j.physa.2019.01.124

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.