The Second-Order Differential Equation System with the Feedback Controls for Solving Convex Programming

Abstract

In this paper, we establish a second-order differential equation system with feedback controls for solving the problem of convex programming. Using the Lagrange function and the projection operator, equivalent operator equations for the convex programming problem are obtained under certain conditions. A second-order differential equation system with feedback controls is then constructed on the basis of these operator equations. We prove that any accumulation point of the trajectory of the second-order differential equation system with the feedback controls is a solution to the convex programming problem. Finally, two examples are solved by means of this differential equation system, and the numerical results are reported to verify the effectiveness of the second-order differential equation system with the feedback controls for solving the convex programming problem.

1. Introduction

We consider the problem of convex programming, which is to find a vector $x^*$ such that

$x^* \in \arg\min \{ f(x) : g(x) \le 0,\ x \in Q \},$ (1.1)

where $f : \mathbb{R}^n \to \mathbb{R}$ and $g : \mathbb{R}^n \to \mathbb{R}^m$ are two mappings, $Q$ is a closed convex set, and "$\arg\min$" denotes the set of minimum points.

The Lagrange function of the problem (1.1) is $L(x,p) = f(x) + \langle p, g(x) \rangle$, where $x \in Q \subseteq \mathbb{R}^n$, $p \in P \subseteq \mathbb{R}^m$ and $P$ is a closed convex set. Then we know that $L(x,p)$ is convex in $x$ and concave in $p$. In the general case, if $(x^*, p^*)$ is a solution to the problem (1.1), it satisfies the following inequalities

$L(x^*, p) \le L(x^*, p^*) \le L(x, p^*).$ (1.2)

More generally, the function $L(x,p)$ can be a saddle function.

Convex optimization problems have important applications in many fields. Recently, Wang, Hong and Kai [1] developed a novel smoothing function method for convex quadratic programming problems with mixed constraints, which have important applications in mechanics and engineering science. The problem is reformulated as a system of non-smooth equations, a smoothing function for this system is proposed, and conditions for the convergence of the resulting iteration algorithm are given. Asadi, Mansouri and Zangiabadi [2] presented a neighborhood-following primal-dual interior-point algorithm for solving symmetric cone convex quadratic programming problems, where the objective function is a convex quadratic function and the feasible set is the intersection of an affine subspace and a symmetric cone attached to a Euclidean Jordan algebra. Yuan, Zhang and Huang [3] proposed an arc-search interior-point algorithm for convex quadratic programming with a wide neighborhood of the central path, which searches for optimizers along ellipses that approximate the entire central path.

Antipin [4] considered the synthesis of control laws for nonlinear objects whose set of equilibrium states is defined by convex programming problems or degenerate saddle functions. Based on the projection operator, a first-order differential equation system with composite controls was established, and the trajectory of this system converges monotonically in norm to one of the equilibrium points. It is worth mentioning that the differential equation methods for solving minimization problems and variational inequalities studied by Antipin [5] - [11] differ from the traditional differential equation method and from neural networks: without using a Lyapunov function, but only applying the properties of the projection operator and the related functions, the stationarity of the equilibrium point of the differential equation can be proved, and thus the convergence to solutions of the primal problems can be obtained. However, Antipin's work on solving different types of optimization problems and variational inequality problems by differential equation methods consists of theoretical results only; no numerical results are given. Building on the research of the above scholars, this paper continues to use the differential equation method to solve a class of convex optimization problems. In addition to the theoretical convergence results, numerical examples are given to illustrate the effectiveness of the differential equation method.

Recently, inspired by the ideas of the above research results, Wang et al. [12] - [16] constructed different differential equation systems for solving variational inequalities. For example, Wang, Li and Zhang [12] considered a differential equation method for solving box constrained variational inequality problems and proved that the equilibrium solution of the differential equation system is locally asymptotically stable by verifying the locally asymptotic stability of the equilibrium positions of the associated differential inclusion problems. Wang, Chen and Sun [15] established a system of differential equations based on the projection operator for the variational inequality problem with a cyclically monotone mapping; using an important inequality for cyclically monotone mappings, any accumulation point of the trajectory of the differential equation system was proved to be a solution to the variational inequality problem. Wang, Chen and Sun [16] constructed a second-order differential equation system with a controlled process for solving the variational inequality with constraints and proved that any accumulation point of its trajectory is a solution to the variational inequality with constraints. Nazemi and Sabeghi [17] [18] applied neural network models to solve convex second-order cone constrained variational inequality problems. Kwelegano et al. [19] studied an approximate solution to the split equality variational inequality problem.

In the next section, based on the saddle-point inequalities (1.2) and the projection operator, the second-order differential equation system with the feedback controls is established for solving the convex programming problem (1.1). In Section 3 we prove that any accumulation point of the trajectory of the second-order differential equation system with the feedback controls is a solution to the convex programming problem. Finally, two examples are solved by using this differential equation system, and the numerical results are reported to verify the effectiveness of the second-order differential equation system with the feedback controls for solving the problem of convex programming (1.1).

2. Preliminaries

The projection operator onto a convex set is quite useful for establishing the second-order differential equation system. We first recall the following definitions.

Let $C \subseteq \mathbb{R}^n$ be a closed convex set. For every $x \in \mathbb{R}^n$, there is a unique $\hat{x} \in C$ such that

$\| x - \hat{x} \| = \min \{ \| x - y \| : y \in C \}.$ (2.1)

The point $\hat{x}$ is the projection of $x$ onto $C$, denoted by $\Pi_C(x)$. The projection operator $\Pi_C : \mathbb{R}^n \to C$ is well defined over $\mathbb{R}^n$ and is a nonexpansive mapping.
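For the sets used in this paper, the projection has a simple closed form. The following MATLAB sketch (the names Pi_Q and Pi_pos are ours, not the paper's) illustrates the projection onto a box and onto the nonnegative orthant, together with a numerical check of nonexpansiveness:

% Projection onto a box Q = {x : lb <= x <= ub} (componentwise clipping)
% and onto the nonnegative orthant R^m_+ (componentwise positive part).
Pi_Q   = @(x, lb, ub) min(max(x, lb), ub);
Pi_pos = @(p) max(p, 0);

% Nonexpansiveness check on two random points:
x = randn(4,1); y = randn(4,1);
lb = -10*ones(4,1); ub = 10*ones(4,1);
norm(Pi_Q(x,lb,ub) - Pi_Q(y,lb,ub)) <= norm(x - y)   % displays ans = 1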

Lemma 2.1. [20] Let $H$ be a real Hilbert space and $C \subseteq H$ a closed convex set. For a given $z \in H$, $u \in C$ satisfies the inequality

$\langle u - z,\ v - u \rangle \ge 0, \quad \forall v \in C,$ (2.2)

if and only if $u = \Pi_C(z)$.

Assuming that the function $L(x,p)$ is differentiable, it is easy to show, using Lemma 2.1, that $(x^*, p^*)$ is a saddle point in the sense of the inequalities (1.2) if and only if $(x^*, p^*)$ satisfies the following system.

$x^* = \Pi_Q ( x^* - \alpha L_x (x^*, p^*) ), \quad p^* = \Pi_P ( p^* + \alpha L_p (x^*, p^*) ),$ (2.3)

where $\Pi_Q(\cdot)$ and $\Pi_P(\cdot)$ are the projection operators onto the sets $Q$ and $P$, and $L_x(x,p)$ and $L_p(x,p)$ are the gradients of the function $L(x,p)$ with respect to the variables $x$ and $p$, respectively. In view of the linearity of the function in the variable $p$, we have $L_p(x,p) = g(x)$, and because the set $P$ coincides with the nonnegative orthant, i.e., $P = \mathbb{R}^m_+$, we rewrite the system (2.3) as follows.

$x^* = \Pi_Q ( x^* - \alpha L_x (x^*, p^*) ), \quad p^* = \Pi_+ ( p^* + \alpha g(x^*) ),$ (2.4)

where $\alpha > 0$, and $\Pi_+(\cdot)$ is the operator of projection onto $P = \mathbb{R}^m_+$.

Similar to Antipin [4], we establish the following second-order differential equation system with the feedback controls for solving the problem of convex programming (1.1).

$\mu_1 \frac{d^2 x}{dt^2} + \beta_1 \frac{dx}{dt} + x = \Pi_Q ( x - \alpha L_x(x, \bar{u}) ), \quad x(t_0) = x_0, \ \dot{x}(t_0) = \dot{x}_0,$ (2.5)

$\mu_2 \frac{d^2 p}{dt^2} + \beta_2 \frac{dp}{dt} + p = \Pi_+ ( p + \alpha g( x + \mu_1 \ddot{x} + \beta_1 \dot{x} ) ), \quad p(t_0) = p_0, \ \dot{p}(t_0) = \dot{p}_0,$ (2.6)

$\bar{u} = \Pi_+ ( p + \alpha g(x) ),$ (2.7)

where $\mu_1 > 0$, $\beta_1 > 0$, $\mu_2 > 0$, $\beta_2 > 0$ and $\alpha > 0$ are parameters. It is easy to see that the system (2.5)-(2.7) reduces to the system (23)-(25) in Antipin [4] when $\mu_1 = 0$, $\beta_1 = 1$, $\mu_2 = 0$, $\beta_2 = 1$.

For simplicity, we denote $\ddot{x} = \frac{d^2 x}{dt^2}$, $\dot{x} = \frac{dx}{dt}$, $\ddot{p} = \frac{d^2 p}{dt^2}$ and $\dot{p} = \frac{dp}{dt}$.
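A practical observation for implementation: by (2.5), the combination $x + \mu_1 \ddot{x} + \beta_1 \dot{x}$ equals the projection $\Pi_Q ( x - \alpha L_x(x, \bar{u}) )$, so the argument of $g$ in (2.6) can be evaluated without differentiating the trajectory. The following MATLAB sketch of the right-hand side for a standard ODE solver (saved as rhs.m) reflects this; it is a minimal sketch assuming $Q$ is a box, and the structures prob and par are our names for containers of the problem data $\nabla f$, $g$, $\nabla g$ and the parameters $\mu_1, \beta_1, \mu_2, \beta_2, \alpha$ and the box bounds:

function dy = rhs(~, y, prob, par)
% Right-hand side of the first-order reduction of (2.5)-(2.7).
% State y = [x; xdot; p; pdot]; Q is assumed to be the box [lb, ub].
n = prob.n; m = prob.m;
x  = y(1:n);           xd = y(n+1:2*n);
p  = y(2*n+1:2*n+m);   pd = y(2*n+m+1:end);
ubar = max(p + par.alpha*prob.g(x), 0);              % feedback control (2.7)
Lx   = prob.gradf(x) + prob.gradg(x)'*ubar;          % L_x(x, ubar)
xbar = min(max(x - par.alpha*Lx, par.lb), par.ub);   % Pi_Q(x - alpha*L_x)
xdd  = (xbar - x - par.beta1*xd)/par.mu1;            % solve (2.5) for xddot
% By (2.5), x + mu1*xddot + beta1*xdot = xbar, so (2.6) becomes:
pdd  = (max(p + par.alpha*prob.g(xbar), 0) - p - par.beta2*pd)/par.mu2;
dy = [xd; xdd; pd; pdd];
end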

Using Lemma 2.1, the above Equations (2.5)-(2.7) are transformed into the following variational inequalities (2.8)-(2.10), respectively.

$\langle \mu_1 \ddot{x} + \beta_1 \dot{x} + \alpha L_x(x, \bar{u}),\ z - x - \mu_1 \ddot{x} - \beta_1 \dot{x} \rangle \ge 0, \quad \forall z \in Q,$ (2.8)

$\langle \mu_2 \ddot{p} + \beta_2 \dot{p} - \alpha g( x + \mu_1 \ddot{x} + \beta_1 \dot{x} ),\ y - p - \mu_2 \ddot{p} - \beta_2 \dot{p} \rangle \ge 0, \quad \forall y \in \mathbb{R}^m_+,$ (2.9)

$\langle \bar{u} - p - \alpha g(x),\ u - \bar{u} \rangle \ge 0, \quad \forall u \in \mathbb{R}^m_+.$ (2.10)

In order to prove the convergence of the solution to problem (1.1) obtained by the second-order differential equation system with the feedback controls (2.5)-(2.7), it is necessary that the gradients satisfy the Lipschitz condition.

Thus, suppose that

$L(x+h, p) - L(x, p) - \langle L_x(x,p),\ h \rangle \le \frac{1}{2} L_1 \| h \|^2$ (2.11)

for all $x$ and $x + h$ from $Q$ and $p$ from $P$, where $L_1$ is a constant, and

$L(x, p+h) - L(x, p) - \langle L_p(x,p),\ h \rangle \ge - \frac{1}{2} L_2 \| h \|^2$ (2.12)

for all $p$ and $p + h$ from $P$ and $x$ from $Q$, where $L_2$ is a constant.

3. The Second-Order Differential Equation System

The following theorem shows that the equilibrium points of the second-order differential equation system with the feedback controls (2.5)-(2.7) are asymptotically stable.

Theorem 3.1. Assume that the set of solutions to problem (1.1) is not empty, the gradients $\nabla f(x)$ of the objective function and $\nabla g(x)$ of the functional constraints satisfy the Lipschitz condition on the closed convex set $Q$ with the constant $L_0$ and the vector constant $L$, respectively, and the map $g(x)$ satisfies the Lipschitz condition with the constant $|\nabla g|$. Assume further that the trajectory $\bar{u} = \Pi_+ ( p + \alpha g(x) )$ is bounded for all $t \ge t_0$ by the vector constant $C$, i.e., $\bar{u} \le C$, and that the parameters are chosen from the conditions

$0 < \alpha < \frac{\sqrt{M^2 + 16 |\nabla g|^2} - M}{4 |\nabla g|^2}, \quad \frac{1}{2K} < \beta_1 < \mu_1 < K \beta_1^2, \quad \frac{2}{3} < \beta_2 < \mu_2 < \frac{3}{4} \beta_2^2,$

where $M = L_0 + \langle L, C \rangle$ and $K = 1 - \frac{\alpha}{2} M - \alpha^2 |\nabla g|^2$. Then the trajectory of the second-order differential equation system with the feedback controls (2.5)-(2.7) converges monotonically in norm to one of the equilibrium points, i.e., $x(t) \to x^* \in X^*$ and $p(t) \to p^* \in P^*$ as $t \to \infty$, for all initial points $x_0$ and $p_0$.
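Before turning to the proof, note that the upper bound on $\alpha$ is exactly the positive root of $K(\alpha) = 0$, so every admissible $\alpha$ yields $K > 0$. A small MATLAB check, with purely illustrative values of $M$ and $|\nabla g|$ (denoted gn below; neither value is taken from the paper):

% Admissible step bound of Theorem 3.1 (illustrative values, our notation).
M = 10; gn = 2;                                  % M and |grad g|, hypothetical
alpha_max = (sqrt(M^2 + 16*gn^2) - M)/(4*gn^2);  % positive root of K(alpha)=0
alpha = 0.9*alpha_max;
K = 1 - (alpha/2)*M - alpha^2*gn^2               % positive for 0 < alpha < alpha_max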

Proof. Let $z = x^*$ in (2.8), which yields

$\langle \mu_1 \ddot{x} + \beta_1 \dot{x} + \alpha L_x(x, \bar{u}),\ x^* - x - \mu_1 \ddot{x} - \beta_1 \dot{x} \rangle \ge 0.$ (3.1)

Using the convexity of the function $L(x,p)$ in $x$ in the form of the inequality

$\langle L_x(x, \bar{u}),\ x^* - x \rangle \le L(x^*, \bar{u}) - L(x, \bar{u}),$ (3.2)

and adding and subtracting the term $\alpha L( x + \mu_1 \ddot{x} + \beta_1 \dot{x}, \bar{u} )$ in (3.1), we have

$\| \mu_1 \ddot{x} + \beta_1 \dot{x} \|^2 + \langle \mu_1 \ddot{x} + \beta_1 \dot{x},\ x - x^* \rangle + \alpha L(x, \bar{u}) - \alpha L(x^*, \bar{u}) + \alpha L( x + \mu_1 \ddot{x} + \beta_1 \dot{x}, \bar{u} ) - \alpha L( x + \mu_1 \ddot{x} + \beta_1 \dot{x}, \bar{u} ) + \alpha \langle L_x(x, \bar{u}),\ \mu_1 \ddot{x} + \beta_1 \dot{x} \rangle \le 0.$ (3.3)

Since the gradients $\nabla f(x)$ of the objective function and $\nabla g(x)$ of the functional constraints satisfy the Lipschitz condition on the closed convex set $Q$ with the constant $L_0$ and the vector constant $L$, and the trajectory $\bar{u} = \Pi_+ ( p + \alpha g(x) )$ is bounded for all $t \ge t_0$ by the vector constant $C$, i.e., $\bar{u} \le C$, we can estimate

$L( x + \mu_1 \ddot{x} + \beta_1 \dot{x}, \bar{u} ) - L(x, \bar{u}) - \langle L_x(x, \bar{u}),\ \mu_1 \ddot{x} + \beta_1 \dot{x} \rangle = f( x + \mu_1 \ddot{x} + \beta_1 \dot{x} ) + \langle \bar{u},\ g( x + \mu_1 \ddot{x} + \beta_1 \dot{x} ) \rangle - f(x) - \langle \bar{u},\ g(x) \rangle - \langle \nabla f(x),\ \mu_1 \ddot{x} + \beta_1 \dot{x} \rangle - \langle \nabla g^T(x) \bar{u},\ \mu_1 \ddot{x} + \beta_1 \dot{x} \rangle \le \frac{1}{2} ( L_0 + \langle L, C \rangle ) \| \mu_1 \ddot{x} + \beta_1 \dot{x} \|^2.$ (3.4)

It follows from the inequalities (1.2) that $L(x^*, \bar{u}) \le L(x^*, p^*) \le L( x + \mu_1 \ddot{x} + \beta_1 \dot{x}, p^* )$. Thus we have

$L( x + \mu_1 \ddot{x} + \beta_1 \dot{x}, \bar{u} ) - L(x^*, \bar{u}) \ge L( x + \mu_1 \ddot{x} + \beta_1 \dot{x}, \bar{u} ) - L( x + \mu_1 \ddot{x} + \beta_1 \dot{x}, p^* ) = \langle \bar{u} - p^*,\ g( x + \mu_1 \ddot{x} + \beta_1 \dot{x} ) \rangle.$ (3.5)

By using the above two inequalities, we can get the following inequality from the inequality (3.3):

$\left( 1 - \frac{\alpha}{2} ( L_0 + \langle L, C \rangle ) \right) \| \mu_1 \ddot{x} + \beta_1 \dot{x} \|^2 + \langle \mu_1 \ddot{x} + \beta_1 \dot{x},\ x - x^* \rangle + \alpha \left( L( x + \mu_1 \ddot{x} + \beta_1 \dot{x}, \bar{u} ) - L( x + \mu_1 \ddot{x} + \beta_1 \dot{x}, p^* ) \right) \le 0,$ (3.6)

which can be rewritten as

$\left( 1 - \frac{\alpha}{2} ( L_0 + \langle L, C \rangle ) \right) \| \mu_1 \ddot{x} + \beta_1 \dot{x} \|^2 + \langle \mu_1 \ddot{x} + \beta_1 \dot{x},\ x - x^* \rangle + \alpha \langle \bar{u} - p^*,\ g( x + \mu_1 \ddot{x} + \beta_1 \dot{x} ) \rangle \le 0.$ (3.7)

Let $y = p^*$ in (2.9); we get

$\langle \mu_2 \ddot{p} + \beta_2 \dot{p} - \alpha g( x + \mu_1 \ddot{x} + \beta_1 \dot{x} ),\ p^* - p - \mu_2 \ddot{p} - \beta_2 \dot{p} \rangle \ge 0,$ (3.8)

and let $u = p + \mu_2 \ddot{p} + \beta_2 \dot{p}$ in (2.10), which yields

$\langle \bar{u} - p - \alpha g(x),\ p + \mu_2 \ddot{p} + \beta_2 \dot{p} - \bar{u} \rangle \ge 0.$ (3.9)

Adding and subtracting $\alpha g( x + \mu_1 \ddot{x} + \beta_1 \dot{x} )$ in (3.9), it follows that

$\langle \bar{u} - p,\ p + \mu_2 \ddot{p} + \beta_2 \dot{p} - \bar{u} \rangle + \alpha \langle g( x + \mu_1 \ddot{x} + \beta_1 \dot{x} ) - g(x),\ p + \mu_2 \ddot{p} + \beta_2 \dot{p} - \bar{u} \rangle - \alpha \langle g( x + \mu_1 \ddot{x} + \beta_1 \dot{x} ),\ p + \mu_2 \ddot{p} + \beta_2 \dot{p} - \bar{u} \rangle \ge 0.$ (3.10)

Now, using (2.6), (2.7) and the nonexpansiveness of the projection operator, we consider the following relation:

$\| p + \mu_2 \ddot{p} + \beta_2 \dot{p} - \bar{u} \| = \| \Pi_+ ( p + \alpha g( x + \mu_1 \ddot{x} + \beta_1 \dot{x} ) ) - \Pi_+ ( p + \alpha g(x) ) \| \le \alpha \| g( x + \mu_1 \ddot{x} + \beta_1 \dot{x} ) - g(x) \| \le \alpha |\nabla g| \, \| \mu_1 \ddot{x} + \beta_1 \dot{x} \|.$ (3.11)

Using the relation (3.11) in (3.10), we obtain

$\langle \bar{u} - p,\ p + \mu_2 \ddot{p} + \beta_2 \dot{p} - \bar{u} \rangle + \alpha^2 |\nabla g|^2 \| \mu_1 \ddot{x} + \beta_1 \dot{x} \|^2 - \alpha \langle g( x + \mu_1 \ddot{x} + \beta_1 \dot{x} ),\ p + \mu_2 \ddot{p} + \beta_2 \dot{p} - \bar{u} \rangle \ge 0.$ (3.12)

Adding (3.8) and (3.12), we have

$\langle \mu_2 \ddot{p} + \beta_2 \dot{p},\ p^* - p - \mu_2 \ddot{p} - \beta_2 \dot{p} \rangle - \alpha \langle g( x + \mu_1 \ddot{x} + \beta_1 \dot{x} ),\ p^* - \bar{u} \rangle + \langle \bar{u} - p,\ p + \mu_2 \ddot{p} + \beta_2 \dot{p} - \bar{u} \rangle + \alpha^2 |\nabla g|^2 \| \mu_1 \ddot{x} + \beta_1 \dot{x} \|^2 \ge 0.$ (3.13)

Using the relations

$\| p_1 - p_2 \|^2 = \| p_1 - p_3 \|^2 + 2 \langle p_1 - p_3,\ p_3 - p_2 \rangle + \| p_3 - p_2 \|^2$ (3.14)

and

$\frac{1}{4} \| p_1 - p_2 \|^2 \le \frac{1}{2} \| p_1 - p_3 \|^2 + \frac{1}{2} \| p_3 - p_2 \|^2,$ (3.15)

the above inequality (3.13) can be transformed into the following:

$\frac{3}{4} \| \mu_2 \ddot{p} + \beta_2 \dot{p} \|^2 + \langle \mu_2 \ddot{p} + \beta_2 \dot{p},\ p - p^* \rangle - \alpha^2 |\nabla g|^2 \| \mu_1 \ddot{x} + \beta_1 \dot{x} \|^2 + \alpha \langle g( x + \mu_1 \ddot{x} + \beta_1 \dot{x} ),\ p^* - \bar{u} \rangle \le 0.$ (3.16)

Summing (3.7) and (3.16), we get

$\frac{3}{4} \| \mu_2 \ddot{p} + \beta_2 \dot{p} \|^2 + \langle \mu_2 \ddot{p} + \beta_2 \dot{p},\ p - p^* \rangle + \left( 1 - \frac{\alpha}{2} ( L_0 + \langle L, C \rangle ) - \alpha^2 |\nabla g|^2 \right) \| \mu_1 \ddot{x} + \beta_1 \dot{x} \|^2 + \langle \mu_1 \ddot{x} + \beta_1 \dot{x},\ x - x^* \rangle \le 0.$ (3.17)

Expanding the squared norms and the inner products in (3.17), we obtain

$\frac{3}{4} \mu_2^2 \| \ddot{p} \|^2 + \frac{3}{4} \beta_2^2 \| \dot{p} \|^2 + \frac{3}{2} \mu_2 \beta_2 \langle \ddot{p}, \dot{p} \rangle + K \mu_1^2 \| \ddot{x} \|^2 + K \beta_1^2 \| \dot{x} \|^2 + 2 K \mu_1 \beta_1 \langle \ddot{x}, \dot{x} \rangle + \mu_2 \langle \ddot{p},\ p - p^* \rangle + \beta_2 \langle \dot{p},\ p - p^* \rangle + \mu_1 \langle \ddot{x},\ x - x^* \rangle + \beta_1 \langle \dot{x},\ x - x^* \rangle \le 0,$ (3.18)

where $K = 1 - \frac{\alpha}{2} M - \alpha^2 |\nabla g|^2$ and $M = L_0 + \langle L, C \rangle$. We have $K > 0$ since $\alpha$ is chosen from $0 < \alpha < \frac{\sqrt{M^2 + 16 |\nabla g|^2} - M}{4 |\nabla g|^2}$.

According to the following relations

$\frac{1}{2} \frac{d^2}{dt^2} \| x - x^* \|^2 = \| \dot{x} \|^2 + \langle x - x^*,\ \ddot{x} \rangle, \quad \frac{1}{2} \frac{d}{dt} \| \dot{x} \|^2 = \langle \dot{x}, \ddot{x} \rangle, \quad \frac{1}{2} \frac{d}{dt} \| x - x^* \|^2 = \langle \dot{x},\ x - x^* \rangle,$ (3.19)

the inequality (3.18) can be transformed into the following:

$\frac{3}{4} \mu_2^2 \| \ddot{p} \|^2 + \left( \frac{3}{4} \beta_2^2 - \mu_2 \right) \| \dot{p} \|^2 + K \mu_1^2 \| \ddot{x} \|^2 + ( K \beta_1^2 - \mu_1 ) \| \dot{x} \|^2 + \frac{3}{4} \mu_2 \beta_2 \frac{d}{dt} \| \dot{p} \|^2 + K \mu_1 \beta_1 \frac{d}{dt} \| \dot{x} \|^2 + \frac{\mu_2}{2} \frac{d^2}{dt^2} \| p - p^* \|^2 + \frac{\beta_2}{2} \frac{d}{dt} \| p - p^* \|^2 + \frac{\mu_1}{2} \frac{d^2}{dt^2} \| x - x^* \|^2 + \frac{\beta_1}{2} \frac{d}{dt} \| x - x^* \|^2 \le 0.$ (3.20)

Let $\varphi(x) = \frac{1}{2} \| x - x^* \|^2$ and $\phi(p) = \frac{1}{2} \| p - p^* \|^2$; then the inequality (3.20) means that

$\mu_2 \frac{d^2}{dt^2} \phi(p) + \beta_2 \frac{d}{dt} \phi(p) + \mu_1 \frac{d^2}{dt^2} \varphi(x) + \beta_1 \frac{d}{dt} \varphi(x) + \frac{3}{4} \mu_2^2 \| \ddot{p} \|^2 + \left( \frac{3}{4} \beta_2^2 - \mu_2 \right) \| \dot{p} \|^2 + K \mu_1^2 \| \ddot{x} \|^2 + ( K \beta_1^2 - \mu_1 ) \| \dot{x} \|^2 + \frac{3}{4} \mu_2 \beta_2 \frac{d}{dt} \| \dot{p} \|^2 + K \mu_1 \beta_1 \frac{d}{dt} \| \dot{x} \|^2 \le 0.$ (3.21)

Integrating the inequality (3.21) from $t_0$ to $t$, we obtain

$\mu_1 \frac{d}{dt} \varphi(x) + \beta_1 \varphi(x) + K \mu_1^2 \int_{t_0}^{t} \| \ddot{x} \|^2 d\tau + ( K \beta_1^2 - \mu_1 ) \int_{t_0}^{t} \| \dot{x} \|^2 d\tau + K \mu_1 \beta_1 \| \dot{x} \|^2 + \mu_2 \frac{d}{dt} \phi(p) + \beta_2 \phi(p) + \frac{3}{4} \mu_2^2 \int_{t_0}^{t} \| \ddot{p} \|^2 d\tau + \left( \frac{3}{4} \beta_2^2 - \mu_2 \right) \int_{t_0}^{t} \| \dot{p} \|^2 d\tau + \frac{3}{4} \mu_2 \beta_2 \| \dot{p} \|^2 \le C_0,$ (3.22)

where $C_0 = \mu_2 \frac{d}{dt} \phi(p) \big|_{t_0} + \beta_2 \phi(p_0) + \mu_1 \frac{d}{dt} \varphi(x) \big|_{t_0} + \beta_1 \varphi(x_0) + \frac{3}{4} \mu_2 \beta_2 \| \dot{p}_0 \|^2 + K \mu_1 \beta_1 \| \dot{x}_0 \|^2$. It follows from $\frac{1}{2K} < \beta_1 < \mu_1 < K \beta_1^2$ and $\frac{2}{3} < \beta_2 < \mu_2 < \frac{3}{4} \beta_2^2$ that $K \beta_1^2 - \mu_1 > 0$ and $\frac{3}{4} \beta_2^2 - \mu_2 > 0$. Thus there exists a constant $C_1$ such that

$\mu_1 \frac{d}{dt} \varphi(x) + \beta_1 \varphi(x) \le C_1,$ (3.23)

which is equivalent to

$\mu_1 \exp \left( - \frac{\beta_1}{\mu_1} t \right) \frac{d}{dt} \left( \exp \left( \frac{\beta_1}{\mu_1} t \right) \varphi(x) \right) \le C_1.$ (3.24)

That is,

$\frac{d}{dt} \left( \exp \left( \frac{\beta_1}{\mu_1} t \right) \varphi(x) \right) \le \frac{C_1}{\mu_1} \exp \left( \frac{\beta_1}{\mu_1} t \right).$ (3.25)

By integrating (3.25), we have

$\exp \left( \frac{\beta_1}{\mu_1} t \right) \varphi(x) \le \frac{C_1}{\beta_1} \exp \left( \frac{\beta_1}{\mu_1} t \right) + C_2,$ (3.26)

where $C_2$ is a constant. We conclude that

$\varphi(x) \le \frac{C_1}{\beta_1} + C_2 \exp \left( - \frac{\beta_1}{\mu_1} t \right),$ (3.27)

which means that $\varphi(x)$ is bounded for all $t \ge t_0$. Similarly, $\phi(p)$ is bounded for all $t \ge t_0$.

The functions $\varphi(x)$ and $\phi(p)$ are strongly convex, and it is well known that each of their Lebesgue (level) sets is bounded. Thus the trajectories $x(t)$ and $p(t)$ are bounded; that is, there exist constants $C_3$ and $C_4$ such that

$\| x(t) - x^* \|^2 \le C_3, \quad \| p(t) - p^* \|^2 \le C_4.$ (3.28)

Now we claim that $\int_{t_0}^{\infty} \| \ddot{x} \|^2 d\tau < \infty$, $\int_{t_0}^{\infty} \| \dot{x} \|^2 d\tau < \infty$, $\int_{t_0}^{\infty} \| \ddot{p} \|^2 d\tau < \infty$ and $\int_{t_0}^{\infty} \| \dot{p} \|^2 d\tau < \infty$. We first show that $\| \dot{x} \|$ and $\| \dot{p} \|$ are bounded. It follows from the inequality (3.22) that

$\frac{d}{dt} \varphi(x) + \frac{\beta_1}{\mu_1} \varphi(x) + K \beta_1 \| \dot{x} \|^2 \le C_5,$ (3.29)

where $C_5$ is a constant. The above inequality means that

$\langle \dot{x},\ x - x^* \rangle + \frac{\beta_1}{\mu_1} \varphi(x) + K \beta_1 \| \dot{x} \|^2 \le C_5.$ (3.30)

Since $\langle \dot{x},\ x - x^* \rangle = - \frac{1}{2} \| \dot{x} \|^2 - \frac{1}{2} \| x - x^* \|^2 + \frac{1}{2} \| \dot{x} + x - x^* \|^2$, the above inequality implies that

$\left( K \beta_1 - \frac{1}{2} \right) \| \dot{x} \|^2 + \frac{1}{2} \left( \frac{\beta_1}{\mu_1} - 1 \right) \| x - x^* \|^2 \le C_6.$ (3.31)

It follows from $\frac{1}{2K} < \beta_1 < \mu_1 < K \beta_1^2$ that $K \beta_1 - \frac{1}{2} > 0$ and $\frac{\beta_1}{\mu_1} - 1 < 0$. We conclude that $\| \dot{x} \|^2$ is bounded, since

$\left( K \beta_1 - \frac{1}{2} \right) \| \dot{x} \|^2 \le \frac{1}{2} \left( 1 - \frac{\beta_1}{\mu_1} \right) \| x - x^* \|^2 + C_6 \le \frac{1}{2} \left( 1 - \frac{\beta_1}{\mu_1} \right) C_3 + C_6.$ (3.32)

It follows from

$\left| \frac{d}{dt} \varphi(x) \right| = | \langle \dot{x},\ x - x^* \rangle | \le \| \dot{x} \| \, \| x - x^* \|$ (3.33)

that $\frac{d}{dt} \varphi(x)$ is also bounded. In the same way, $\| \dot{p} \|^2$ and $\frac{d}{dt} \phi(p)$ are also bounded. Thus there exists a constant $C_7$ such that

$K \mu_1^2 \int_{t_0}^{t} \| \ddot{x} \|^2 d\tau + ( K \beta_1^2 - \mu_1 ) \int_{t_0}^{t} \| \dot{x} \|^2 d\tau + \frac{3}{4} \mu_2^2 \int_{t_0}^{t} \| \ddot{p} \|^2 d\tau + \left( \frac{3}{4} \beta_2^2 - \mu_2 \right) \int_{t_0}^{t} \| \dot{p} \|^2 d\tau \le C_7,$ (3.34)

which yields that the integrals $\int_{t_0}^{\infty} \| \ddot{x} \|^2 d\tau$, $\int_{t_0}^{\infty} \| \dot{x} \|^2 d\tau$, $\int_{t_0}^{\infty} \| \ddot{p} \|^2 d\tau$ and $\int_{t_0}^{\infty} \| \dot{p} \|^2 d\tau$ converge.

If there existed an $\varepsilon > 0$ such that $\| \ddot{x}(t) \| \ge \varepsilon$, $\| \ddot{p}(t) \| \ge \varepsilon$, $\| \dot{x}(t) \| \ge \varepsilon$ and $\| \dot{p}(t) \| \ge \varepsilon$ for all $t \ge t_0$, we would obtain a contradiction to the convergence of these integrals. Hence there exists a sequence of time moments $t_i \to \infty$ such that $\ddot{x}(t_i) \to 0$, $\ddot{p}(t_i) \to 0$, $\dot{x}(t_i) \to 0$ and $\dot{p}(t_i) \to 0$. Since $x(t)$ and $p(t)$ are bounded, $x(t_i)$ and $p(t_i)$ are bounded, so we can choose subsequences $x(t_{i_j})$ and $p(t_{i_j})$ and points $\bar{x}$ and $\bar{p}$ such that $x(t_{i_j}) \to \bar{x}$, $p(t_{i_j}) \to \bar{p}$, $\ddot{x}(t_{i_j}) \to 0$, $\ddot{p}(t_{i_j}) \to 0$, $\dot{x}(t_{i_j}) \to 0$ and $\dot{p}(t_{i_j}) \to 0$ as $j \to \infty$.

Let us consider the second-order differential equation system with the feedback controls (2.5)-(2.7), or the variational inequalities (2.8)-(2.10), along the moments $t_{i_j}$ and take the limit as $j \to \infty$. We have

$\bar{x} = \Pi_Q ( \bar{x} - \alpha L_x( \bar{x}, \bar{p} ) ), \quad \bar{p} = \Pi_+ ( \bar{p} + \alpha g( \bar{x} ) ),$ (3.35)

which, by (1.2) and (2.4), means that $( \bar{x}, \bar{p} )$ is a solution of problem (1.1). This completes the proof.

4. Numerical Results

In this section, we test two examples by the system (2.5)-(2.7). The transient behavior of the proposed second-order differential equation system with the feedback controls is demonstrated in each example. The numerical implementation is coded in Matlab R2019a running on a PC with an Intel i7 7700HQ 2.8 GHz CPU; the ordinary differential equation solver adopted is ode45, which uses a Runge-Kutta (4, 5) formula.

Example 4.1. Consider the nonlinear convex programming problem

$\min f(x) \quad \text{s.t.} \quad -10 \le x_i \le 10, \ i = 1,2,3,4,$ (4.1)

where $f(x) = 100 ( x_2 - x_1^2 )^2 + ( 1 - x_1 )^2 + 90 ( x_4 - x_3^2 )^2 + ( 1 - x_3 )^2 + 10.1 [ ( x_2 - 1 )^2 + ( x_4 - 1 )^2 ] + 19.8 ( x_2 - 1 )( x_4 - 1 )$, which has been discussed in Xiao and Harker [21]. Its optimal solution is $x^* = (1,1,1,1)^T$. For problem (4.1), $g(x) : \mathbb{R}^4 \to \mathbb{R}^8$ can be defined by

$g(x) = ( x_1 - 10,\ x_2 - 10,\ x_3 - 10,\ x_4 - 10,\ -x_1 - 10,\ -x_2 - 10,\ -x_3 - 10,\ -x_4 - 10 )^T,$

and $g(x) \le 0$.
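Using the right-hand side function rhs sketched in Section 2, Example 4.1 can be reproduced along the following lines. The gradient below is computed by hand from $f$; the parameter values are illustrative choices of ours and are neither reported in the paper nor verified against the bounds of Theorem 3.1:

% Problem data for Example 4.1 (gradient of f computed by hand).
prob.n = 4; prob.m = 8;
prob.gradf = @(x) [ -400*x(1)*(x(2)-x(1)^2) - 2*(1-x(1));
                     200*(x(2)-x(1)^2) + 20.2*(x(2)-1) + 19.8*(x(4)-1);
                    -360*x(3)*(x(4)-x(3)^2) - 2*(1-x(3));
                     180*(x(4)-x(3)^2) + 20.2*(x(4)-1) + 19.8*(x(2)-1) ];
prob.g     = @(x) [x - 10; -x - 10];   % g(x) <= 0 encodes -10 <= x_i <= 10
prob.gradg = @(x) [eye(4); -eye(4)];   % Jacobian of g
par.mu1 = 3; par.beta1 = 2; par.mu2 = 6; par.beta2 = 4;   % illustrative
par.alpha = 0.01;
par.lb = -10*ones(4,1); par.ub = 10*ones(4,1);
y0 = [randn(4,1); zeros(4,1); zeros(8,1); zeros(8,1)];    % random x0, rest zero
[t, Y] = ode45(@(t,y) rhs(t,y,prob,par), [0 200], y0);
plot(t, Y(:,1:4));                     % trajectories x_i(t), cf. Figure 1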

Figure 1 describes the convergence behavior of the trajectory $x(t)$ of the second-order differential equation system with the feedback controls (2.5)-(2.7) from a random initial point, which shows that the trajectories of the system (2.5)-(2.7) for solving problem (4.1) converge to the solution $x^* = (1,1,1,1)^T$.

Example 4.2. Consider the variational inequality with constraints: find $x^* \in \mathbb{R}^5_+$ such that

$\langle F(x^*),\ y - x^* \rangle \ge 0, \quad \forall y \in \mathbb{R}^5_+,$ (4.2)

where $F(x) = ( \arctan(x_1 - 1),\ \arctan(x_2 - 2),\ \arctan(x_3 - 3),\ \arctan(x_4 - 4),\ \arctan(x_5 - 5) )^T$, and its solution is $x^* = (1,2,3,4,5)^T$.

The problem can be transformed into the following nonlinear convex programming problem:

$\min f(x) \quad \text{s.t.} \quad x \in \mathbb{R}^5_+,$ (4.3)

where $F(x)$ is the gradient of $f(x)$, and $g(x) : \mathbb{R}^5 \to \mathbb{R}^5$ can be defined by

$g(x) = ( -x_1,\ -x_2,\ -x_3,\ -x_4,\ -x_5 )^T,$

and $g(x) \le 0$.
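Example 4.2 fits the same driver; here $Q = \mathbb{R}^5$ (no box), so the bounds passed to the projection are infinite and $\Pi_Q$ reduces to the identity, while $F$ plays the role of $\nabla f$. Again, the parameter values below are illustrative assumptions of ours:

% Problem data for Example 4.2: F(x) = gradient of f, g(x) = -x <= 0.
prob.n = 5; prob.m = 5;
prob.gradf = @(x) atan(x - (1:5)');    % componentwise arctan(x_i - i)
prob.g     = @(x) -x;
prob.gradg = @(x) -eye(5);
par.mu1 = 3; par.beta1 = 2; par.mu2 = 6; par.beta2 = 4;   % illustrative
par.alpha = 0.5;
par.lb = -inf(5,1); par.ub = inf(5,1); % Pi_Q = identity on R^5
y0 = [randn(5,1); zeros(5,1); zeros(5,1); zeros(5,1)];
[t, Y] = ode45(@(t,y) rhs(t,y,prob,par), [0 100], y0);
plot(t, Y(:,1:5));                     % expect x(t) -> (1,2,3,4,5)'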

For problem (4.2), Figure 2 describes the convergence behavior of the trajectory $x(t)$ of the second-order differential equation system with the feedback controls (2.5)-(2.7) from three random initial points, which shows that the trajectories of the system (2.5)-(2.7) for solving problem (4.3) converge to the solution $x^* = (1,2,3,4,5)^T$.

Figure 1. Transient behavior of $x(t)$ of the system (2.5)-(2.7) for solving problem (4.1).

Figure 2. Transient behavior of $x(t)$ of the system (2.5)-(2.7) for solving problem (4.3).

It can be seen from Figure 1 and Figure 2 that the trajectories of the second-order differential equation system with the feedback controls (2.5)-(2.7) converge to the solutions of the original problems, which further illustrates the effectiveness of the second-order differential equation system with the feedback controls for solving the convex programming problem.

5. Conclusion

In this paper, we establish a second-order differential equation system with the feedback controls based on the projection operator for solving the problem of convex programming (1.1). First, we obtain the saddle-point inequalities (1.2) by using the Lagrange function of problem (1.1). Inspired by Antipin [4], we investigate the properties of the saddle function and prove that any accumulation point of the trajectory of the second-order differential equation system with the feedback controls is a solution to the convex programming problem (1.1). Finally, we compute two examples by using this system, which shows the effectiveness of the second-order differential equation system with the feedback controls for solving the problem of convex programming.

Acknowledgements

Some of the results in this paper were presented at the Proceedings of the 11th World Congress on Intelligent Control and Automation, 2014; see https://ieeexplore.ieee.org/document/7052904. The research is supported by the National Natural Science Foundation of China under projects No. 11801381 and No. 11901422.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Wang, R., Hong, S., Kai, R., et al. (2014) Fixed-Point Iteration Method for Solving the Convex Quadratic Programming with Mixed Constraints. Applied Mathematics, 5, 256-262.
https://doi.org/10.4236/am.2014.52027
[2] Asadi, S., Mansouri, H. and Zangiabadi, M. (2019) A Primal-Dual Interior-Point Algorithm for Symmetric Cone Convex Quadratic Programming Based on the Commutative Class Directions. Applied Mathematics, 35, 359-373.
https://doi.org/10.1007/s10255-018-0789-z
[3] Yuan, B., Zhang, M. and Huang, Z. (2017) A Wide Neighborhood Arc-Search Interior-Point Algorithm for Convex Quadratic Programming. Wuhan University Journal of Natural Sciences, 22, 465-471.
https://doi.org/10.1007/s11859-017-1274-x
[4] Antipin, A.S. (2003) Feedback-Controlled Saddle Gradient Processes. Automation and Remote Control, 55, 311-320.
[5] Antipin, A.S. (2000) From Optima to Equilibria, Dynamics of Non-Homogeneous Systems. Proceedings of ISA RAS, 3, 35-64.
[6] Antipin, A.S. (2000) Solving Variational Inequalities with Coupling Constraints with the Use of Differential Equations. Differential Equations, 36, 1587-1596.
https://doi.org/10.1007/BF02757358
[7] Antipin, A.S. (2001) Differential Equations for Equilibrium Problems with Coupled Constraints. Nonlinear Analysis, 47, 1833-1844.
https://doi.org/10.1016/S0362-546X(01)00314-5
[8] Antipin, A.S. (2003) Minimization of Convex Functions on Convex Sets by Means of Differential Equations. Differential Equations, 30, 1365-1375.
[9] Antipin, A.S. (2003) Controlled Proximal Differential Systems for Saddle Problems. Differential Equations, 28, 1498-1510.
[10] Antipin, A.S. (1995) On Differential Prediction-Type Gradient Methods for Computing Fixed Points of Extremal Mappings. Differential Equations, 31, 1754-1763.
https://doi.org/10.1007/978-3-642-79459-9_3
[11] Antipin, A.S. (2003) On Finite Convergence of Processes to a Sharp Minimum and to a Smooth Minimum with a Sharp Derivative. Differential Equations, 30, 1703-1713.
[12] Wang, L., Li, Y. and Zhang, L. (2011) A Differential Equation Method for Solving Box Constrained Variational Inequality Problems. Journal of Industrial Management Optimization, 7, 183-198.
https://doi.org/10.3934/jimo.2011.7.183
[13] Wang, L. and Wang, S. (2014) A Second-Order Differential Equation Method for Equilibrium Programming with Constraints. Proceedings of the 11th World Congress on Intelligent Control and Automation, Shenyang, 29 June-4 July 2014, 1279-1284.
https://doi.org/10.1109/WCICA.2014.7052904
[14] Wang, L. and Wang, S. (2015) The Differential Equation Method for Variational Inequality with Constraints. ICIC Express Letters, 9, 2728-2794.
[15] Wang, L., Chen, X. and Sun, J. (2020) A Differential Equation Method for the Variational Inequality Problem with the Cyclically Monotone Mapping. Linear and Nonlinear Analysis, 6, 287-296.
[16] Wang, L., Chen, X. and Sun, J. (2021) The Second-Order Differential Equation System with the Controlled Process for Variational Inequality with Constraints. Complexity, 2021, Article ID: 9936370.
https://doi.org/10.1155/2021/9936370
[17] Nazemi, A. and Sabeghi, A. (2019) A Novel Gradient-Based Neural Network for Solving Convex Second-Order Cone Constrained Variational Inequality Problems. Journal of Computational and Applied Mathematics, 347, 343-356.
https://doi.org/10.1016/j.cam.2018.08.030
[18] Nazemi, A. and Sabeghi, A. (2020) A New Neural Network Framework for Solving Convex Second-Order Cone Constrained Variational Inequality Problems with an Application in Multi-Finger Robot Hands. Journal of Experimental and Theoretical Artificial Intelligence, 20, 181-203.
https://doi.org/10.1080/0952813X.2019.1647559
[19] Kwelegano, K., Zegeye, H. and Boikanyo, O.A. (2021) An Iterative Method for Split Equality Variational Inequality Problems for Non-Lipschitz Pseudomonotone Mappings. Rendiconti del Circolo Matematico di Palermo Series 2, 1-24.
https://doi.org/10.1007/s12215-021-00608-8
[20] Mosco, U. (1976) Implicit Variational Problems and Quasi-Variational Inequalities, Lecture Notes in Mathematics, Vol. 543, Springer-Verlag, Berlin.
https://doi.org/10.1007/BFb0079943
[21] Xiao, B. and Harker, P.T. (1994) A Nonsmooth Newton Method for Variational Inequalities, II: Numerical Results. Mathematical Programming, 65, 195-216.
https://doi.org/10.1007/BF01581696
