The Second-Order Differential Equation System with the Feedback Controls for Solving Convex Programming
1. Introduction
We consider the problem of convex programming, which is to find a vector $x^*$ such that
$$x^* \in \operatorname{Argmin}\{\, f(x) \mid g(x) \le 0,\ x \in Q \,\}, \tag{1.1}$$
where $f:\mathbb{R}^n \to \mathbb{R}^1$ and $g:\mathbb{R}^n \to \mathbb{R}^m$ are two mappings, $Q \subseteq \mathbb{R}^n$ is a convex closed set, and “Argmin” represents the set of minimum points.
The Lagrange function of the problem (1.1) is $L(x,p) = f(x) + \langle p, g(x)\rangle$, where $x \in Q$, $p \in P$, and $P$ is a convex closed set. Then we know that $L(x,p)$ is a function convex in $x$ and concave in $p$. In the general case, if $(x^*, p^*)$ is the solution to the problem (1.1), it satisfies the following inequalities
$$L(x^*, p) \le L(x^*, p^*) \le L(x, p^*), \quad \forall x \in Q,\ \forall p \in P. \tag{1.2}$$
More generally, the function $L(x,p)$ can be a saddle function.
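As a concrete illustration of the saddle point inequalities (1.2), the following sketch checks them numerically for a hypothetical one-dimensional instance; the problem data ($f(x)=x^2$, $g(x)=1-x$, $Q=\mathbb{R}$) and its saddle point $(1, 2)$ are assumptions chosen for illustration, not taken from the examples in Section 4.

```python
import numpy as np

# Hypothetical instance (an assumption for illustration): minimize
# f(x) = x^2 subject to g(x) = 1 - x <= 0, i.e. x >= 1, with Q = R.
# The Lagrange function is L(x, p) = f(x) + p * g(x) with p >= 0,
# and the saddle point is (x*, p*) = (1, 2).
def L(x, p):
    return x ** 2 + p * (1.0 - x)

x_star, p_star = 1.0, 2.0

# Check L(x*, p) <= L(x*, p*) <= L(x, p*) on sample grids, which is
# the form of the saddle point inequalities (1.2).
ps = np.linspace(0.0, 5.0, 101)     # multipliers from the orthant P
xs = np.linspace(-2.0, 4.0, 101)    # x ranges over Q = R
left_ok = all(L(x_star, p) <= L(x_star, p_star) + 1e-12 for p in ps)
right_ok = all(L(x_star, p_star) <= L(x, p_star) + 1e-12 for x in xs)
print(left_ok and right_ok)
```

Here $L(x^*, p) = 1$ for every $p$ (the constraint is active), and $L(x, p^*) = (x-1)^2 + 1 \ge 1$, so both inequalities hold with the left one as an equality.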
Convex optimization problems have important applications in many fields. Recently, Wang, Hong and Kai [1] developed a novel smoothing function method for convex quadratic programming problems with mixed constraints, which has important applications in mechanics and engineering science. The problem is reformulated as a system of non-smooth equations, and then a smoothing function for the system of non-smooth equations is proposed. Conditions for the convergence of this iteration algorithm are given. Asadi, Mansouri and Zangiabadi [2] presented a neighborhood-following primal-dual interior-point algorithm for solving symmetric cone convex quadratic programming problems, where the objective function is a convex quadratic function and the feasible set is the intersection of an affine subspace and a symmetric cone attached to a Euclidean Jordan algebra. Yuan, Zhang and Huang [3] proposed an arc-search interior-point algorithm for convex quadratic programming with a wide neighborhood of the central path, which searches for the optimizers along ellipses that approximate the entire central path.
Antipin [4] considered the synthesis of control laws for nonlinear objects whose set of equilibrium states is defined by problems of convex programming or degenerate saddle functions. Based on the projection operator, a first-order differential equation system with composite controls was established. Moreover, the trajectory of this system converges monotonically in norm to one of the equilibrium points. It is worth mentioning that the differential equation methods for solving minimization problems and variational inequalities studied by Antipin [5] - [11] differ from the traditional differential equation method and from neural networks. Without using a Lyapunov function, but only applying the properties of the projection operator and the related function, the stability of the equilibrium points of the differential equation can be proved. Thus the convergence of the solutions of the primal problems can be obtained. However, Antipin's work on solving different types of optimization problems and variational inequality problems by differential equation methods consists of theoretical results only; no numerical results are given. Building on the research of the above scholars, this paper continues to use the differential equation method to solve a class of convex optimization problems. In addition to the theoretical convergence results for the solutions of the variational inequalities, numerical examples are given to illustrate the effectiveness of the differential equation method.
Recently, inspired by the ideas of the above research results, Wang et al. [12] - [16] constructed different differential equation systems for solving differential variational inequalities. For example, Wang, Li and Zhang [12] considered the differential equation method for solving box constrained variational inequality problems and proved that the equilibrium solution to the differential equation system is locally asymptotically stable by verifying the locally asymptotical stability of the equilibrium positions of the differential inclusion problems. Wang, Chen and Sun [15] established a system of differential equations based on the projection operator for the variational inequality problem with a cyclically monotone mapping. Using an important inequality for the cyclically monotone mapping, any accumulation point of the trajectory of the differential equation system was proved to be a solution to the variational inequality problem. Wang, Chen and Sun [16] constructed a second-order differential equation system with a controlled process for solving the variational inequality with constraints and proved that any accumulation point of the trajectory of the second-order differential equation system is a solution to the variational inequality with constraints. Nazemi and Sabeghi [17] [18] applied a neural network model to solve convex second-order cone constrained variational inequality problems. Kwelegano et al. [19] studied an approximate solution to the split equality variational inequality problem.
In the next section, based on the saddle point inequalities (1.2) and the projection operator, the second-order differential equation system with the feedback controls will be established for solving the convex programming problem (1.1). In Section 3, we will prove that any accumulation point of the trajectory of the second-order differential equation system with the feedback controls is a solution to the convex programming problem. Finally, two examples are solved by using this differential equation system, and the numerical results are reported to verify the effectiveness of the second-order differential equation system with the feedback controls for solving the problem of convex programming (1.1).
2. Preliminaries
The projection operator onto a convex set is quite useful for establishing the second-order differential equation system. Now we recall the following definitions.
Let $C$ be a convex closed set. For every $x \in \mathbb{R}^n$, there is a unique point $\hat{x}$ in $C$ such that
$$\|x - \hat{x}\| = \min_{v \in C} \|x - v\|. \tag{2.1}$$
The point $\hat{x}$ is the projection of $x$ onto $C$, denoted by $\pi_C(x)$. The projection operator $\pi_C(\cdot)$ is well defined over $\mathbb{R}^n$ and it is a nonexpansive mapping.
Lemma 2.1. [20] Let $H$ be a real Hilbert space and $C \subseteq H$ be a closed convex set. For a given $x \in H$, the point $z \in C$ satisfies the inequality
$$\langle z - x, v - z \rangle \ge 0, \quad \forall v \in C, \tag{2.2}$$
if and only if $z = \pi_C(x)$.
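Lemma 2.1 and the nonexpansiveness of the projection can be checked numerically; the following is a minimal sketch, assuming the convex set is a box so that the projection has the closed form of a componentwise clip.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed convex set: the box C = [-1, 1]^5, for which the projection
# pi_C reduces to a componentwise clip.
lo, hi = -1.0, 1.0
def proj(x):
    return np.clip(x, lo, hi)

x = 3.0 * rng.normal(size=5)
z = proj(x)

# Lemma 2.1: z = pi_C(x) iff <z - x, v - z> >= 0 for every v in C;
# here we test the inequality at many random points v of C.
ok = all(np.dot(z - x, rng.uniform(lo, hi, size=5) - z) >= -1e-12
         for _ in range(1000))
print(ok)

# Nonexpansiveness: ||pi_C(x) - pi_C(y)|| <= ||x - y||.
y = 3.0 * rng.normal(size=5)
print(np.linalg.norm(proj(x) - proj(y)) <= np.linalg.norm(x - y))
```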
Assuming that the function $L(x,p)$ is differentiable, it is easy to show by using Lemma 2.1 that $(x^*, p^*)$ is the saddle point of the inequalities (1.2) if and only if it satisfies the following system:
$$x^* = \pi_Q\big(x^* - \alpha \nabla_x L(x^*, p^*)\big), \quad p^* = \pi_P\big(p^* + \alpha \nabla_p L(x^*, p^*)\big), \quad \alpha > 0, \tag{2.3}$$
where $\pi_Q(\cdot)$ and $\pi_P(\cdot)$ are the projections of the vectors on the sets $Q$ and $P$, and $\nabla_x L(x,p)$ and $\nabla_p L(x,p)$ are the vector gradients of the function $L(x,p)$ in the variables $x$ and $p$, respectively. Then, in view of the linearity of the function $L(x,p)$ in the variable $p$, we have $\nabla_p L(x,p) = g(x)$, and because the set $P$ coincides with the positive orthant, i.e. $P = \mathbb{R}^m_+$, we rewrite the system (2.3) as follows:
$$x^* = \pi_Q\big(x^* - \alpha \nabla_x L(x^*, p^*)\big), \quad p^* = \pi_+\big(p^* + \alpha g(x^*)\big), \tag{2.4}$$
where $\pi_+(\cdot)$ is the operator of projection on $\mathbb{R}^m_+$.
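A small sketch of the fixed-point characterization (2.4): the projection onto the positive orthant is a componentwise maximum with zero, and the residual of the system vanishes exactly at a saddle point. The instance ($f(x)=x^2$, $g(x)=1-x$, $Q=\mathbb{R}$, so that $\pi_Q$ is the identity) and the step parameter $\alpha$ are illustrative assumptions.

```python
import numpy as np

# Projection onto the positive orthant R^m_+ is a componentwise max with 0.
def proj_plus(p):
    return np.maximum(p, 0.0)

# Fixed-point residual of the system (2.4),
#     x = pi_Q(x - alpha * grad_x L(x, p)),   p = pi_+(p + alpha * g(x)),
# for the hypothetical instance f(x) = x^2, g(x) = 1 - x with Q = R,
# so pi_Q is the identity; the saddle point is (x*, p*) = (1, 2).
alpha = 0.5   # step parameter alpha > 0 (an assumption)

def residual(x, p):
    grad_x = 2.0 * x - p                       # grad_x L(x, p)
    rx = x - (x - alpha * grad_x)              # x-equation residual
    rp = p - proj_plus(p + alpha * (1.0 - x))  # p-equation residual
    return abs(rx) + abs(rp)

print(residual(1.0, 2.0))        # zero at the saddle point
print(residual(0.0, 0.0) > 0.0)  # nonzero away from it
```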
Similar to Antipin [4], we establish a second-order differential equation system with the feedback controls for solving the problem of convex programming (1.1):
(2.5)
(2.6)
(2.7)
where
and
are parameters. It is easy to see that the system (2.5)-(2.7) reduces to the system (23)-(25) in Antipin [4] when
.
For simplicity, we denote
,
,
and
.
Using Lemma 2.1, the above Equations (2.5)-(2.7) are transformed into the following variational inequalities (2.8)-(2.10), respectively.
(2.8)
(2.9)
(2.10)
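The right-hand sides of (2.5)-(2.7) are not reproduced explicitly above, so the following sketch only illustrates the standard reduction that precedes any numerical integration of such a system: a second-order equation $\mu\ddot{x} + \dot{x} = F(x)$ is rewritten as a first-order system in the stacked state $(x, \dot{x})$. The map $F$ and the parameter $\mu$ here are placeholder assumptions, not the actual right-hand side of (2.5)-(2.7).

```python
import numpy as np

# Standard reduction used before numerically integrating a second-order
# system: mu * x'' + x' = F(x) becomes a first-order system in the
# stacked state y = (x, v) with v = x'.
mu = 1.0
def F(x):
    return -x        # placeholder right-hand side: a pull toward the origin

def rhs(t, y):
    n = y.size // 2
    x, v = y[:n], y[n:]
    # y' = (x', v') = (v, (F(x) - v) / mu)
    return np.concatenate([v, (F(x) - v) / mu])

y0 = np.array([1.0, 0.0])   # x(0) = 1, x'(0) = 0
print(rhs(0.0, y0))         # first component is x', second is x''
```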
In order to prove the convergence of the solution to problem (1.1) by using the second-order differential equation system with the feedback controls (2.5)-(2.7), it is necessary that the gradients satisfy the Lipschitz condition.
Thus, suppose that
(2.11)
for all x and
from Q and p from P, where
is a constant and
(2.12)
for all p and
from P and x from Q, where
is a constant.
3. The Second-Order Differential Equation System
The following theorem shows that the equilibrium points of the second-order differential equations with the feedback controls (2.5)-(2.7) are asymptotically stable.
Theorem 3.1. Assume that the set of solutions to problem (1.1) is not empty, the gradients
of the objective function and
of the functional constraints on the convex closed set Q satisfy the Lipschitz condition with the constant
and the vector constant L, the map
satisfies the Lipschitz condition with the constant
, the trajectory
for all
is bounded by the vector constant C, i.e.,
, and the parameter
is chosen from the condition
,
and
where
,
then the trajectory of the second-order differential equations with the feedback controls (2.5)-(2.7) converges monotonically in norm to one of the equilibrium points, i.e.,
and
for all
and
.
Proof. Let
in (2.8), which yields that
(3.1)
Using the convexity of the function
in x in the form of the inequality
(3.2)
and adding
to (3.1), we have
(3.3)
Since the gradients
of the objective function and
of the functional constraints on the convex closed set Q satisfy the Lipschitz condition with the constant
and the vector constant L, and the trajectory
for all
is bounded by the vector constant C, i.e.,
, we can compute that
(3.4)
It follows from the inequalities (1.2) that
. Thus we have
(3.5)
Using the above two inequalities, we obtain the following inequality from (3.3):
(3.6)
which can be rewritten as
(3.7)
Let
in (2.9), which yields that
(3.8)
and let
in (2.10), which yields that
(3.9)
From (3.9), it is easy to show that
(3.10)
Now, we consider the following relation
(3.11)
Using the above relation, we rewrite the inequality (3.9) as follows.
(3.12)
Adding (3.8) and (3.12), we have
(3.13)
Using the relations
(3.14)
and
(3.15)
the above inequality (3.13) can be transformed into the following
(3.16)
Summing (3.7) and (3.16), we get that
(3.17)
Using the relations (3.14) and (3.15), the inequality (3.17) can be rewritten as follows.
(3.18)
where
and
. We have
since
is chosen from
.
According to the following relations
(3.19)
the inequality (3.18) can be transformed into the following
(3.20)
Let
and
, the inequality (3.20) means that
(3.21)
The inequality (3.21) can be integrated from $t_0$ to $t$ as follows.
(3.22)
where
. It follows from
and
that
and
. Thus there exists a constant
such that
(3.23)
which is equivalent to
(3.24)
That is,
(3.25)
By integrating (3.25), we have
(3.26)
where
is a constant. We conclude that
(3.27)
which means that
is bounded for all
. Similarly, we get
is bounded for all
.
The functions
and
are strongly convex, and it is well known that each of their Lebesgue sets is bounded. Thus the trajectories
and
are bounded. That is, there exist constants
and
such that
(3.28)
Now we claim that
,
,
and
. We first show that
and
are bounded. It follows from the inequality (3.22) that
(3.29)
where
is a constant. The above inequality means that
(3.30)
Due to
, the above inequality implies that
(3.31)
It follows from
that
and
. We conclude that
is bounded as follows.
(3.32)
that is,
is bounded. It follows from
(3.33)
that
also has a lower bound. In the same way,
and
are also bounded. Thus there exists a constant
such that
(3.34)
which yields that the integrals
,
,
and
, converge as
.
Assuming that there exists an
such that
,
,
, and
for all
, we obtain a contradiction to the convergence of the integrals. Hence, there exists a subsequence of time moments
such that
,
,
and
. Since
and
are bounded, we know that
and
are bounded. We choose the subsequences
and
of
and
, then there exist
and
such that
,
,
,
,
and
as
.
Let us consider the second-order differential equation system with the feedback controls (2.5)-(2.7) or the variational inequalities (2.8)-(2.10) for all
, and taking the limit as
, we have
(3.35)
which means that
is a solution of problem (1.1) by (1.2) and (2.4). This completes the proof.
4. Numerical Results
In this section, we test two examples with the system (2.5)-(2.7). The transient behaviors of the proposed second-order differential equation system with the feedback controls are demonstrated in each example. The numerical implementation is coded in Matlab R2019a running on a PC with an Intel i7-7700HQ 2.8 GHz CPU, and the ordinary differential equation solver adopted is ode45, which uses a Runge-Kutta (4, 5) formula.
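For readers without Matlab, a rough Python analog of this setup uses scipy's solve_ivp with the RK45 method, which is likewise based on a Runge-Kutta (4, 5) pair. Since the explicit form of (2.5)-(2.7) is not reproduced here, the dynamics below are an assumed Antipin-type second-order projected system for the illustrative instance $f(x)=x^2$, $g(x)=1-x$, $Q=\mathbb{R}$, whose saddle point is $(1, 2)$; this is a sketch under those assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed Antipin-type second-order projected dynamics (a sketch; the
# explicit system (2.5)-(2.7) is not reproduced in the text) for the
# illustrative instance f(x) = x^2, g(x) = 1 - x, Q = R:
#     mu * x'' + x' = -alpha * grad_x L(x, p)
#     mu * p'' + p' = max(p + alpha * (1 - x), 0) - p
# whose equilibrium is the saddle point (x*, p*) = (1, 2).
mu, alpha = 0.2, 0.5

def rhs(t, y):
    x, p, vx, vp = y
    ax = (-alpha * (2.0 * x - p) - vx) / mu
    ap = (max(p + alpha * (1.0 - x), 0.0) - p - vp) / mu
    return [vx, vp, ax, ap]

# RK45 is scipy's Runge-Kutta (4, 5) pair, the analog of Matlab's ode45.
sol = solve_ivp(rhs, (0.0, 50.0), [0.0, 0.0, 0.0, 0.0],
                method="RK45", rtol=1e-8, atol=1e-10)
x_T, p_T = sol.y[0, -1], sol.y[1, -1]
print(x_T, p_T)   # the trajectory approaches the saddle point (1, 2)
```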
Example 4.1. Consider the nonlinear convex programming problem
(4.1)
where
, which has been discussed in Xiao and Harker [21]. Its optimal solution is
. For problem (4.1),
can be defined by
and
.
Figure 1 describes the convergence behaviors of the trajectory
of the second-order differential equation system with the feedback controls (2.5)-(2.7) from a random initial point, which shows that the trajectories of the system (2.5)-(2.7) for solving problem (4.1) converge to the solution
.
Example 4.2. Consider the variational inequality with constraints problem
(4.2)
where
, and its solution is
.
The problem can be transformed into the following nonlinear convex programming problem
(4.3)
where
is the gradient of the
, and
can be defined by
and
.
For problem (4.2), Figure 2 describes the convergence behaviors of the trajectory
of the second-order differential equation system with the feedback controls (2.5)-(2.7) from three random initial points, which means that the trajectories of the system (2.5)-(2.7) for solving problem (4.3) converge to the solution
.
Figure 1. Transient behavior of
of the system (2.5)-(2.7) for solving problem (4.1).
Figure 2. Transient behavior of
of the system (2.5)-(2.7) for solving problem (4.3).
It can be seen from Figure 1 and Figure 2 that the trajectories of the second-order differential equation system with the feedback controls (2.5)-(2.7) converge to the solutions of the original problem, which further illustrates the effectiveness of the second-order differential equation system with the feedback controls for solving the convex programming problem.
5. Conclusion
In this paper, we establish a second-order differential equation system with the feedback controls based on the projection operator for solving the problem of convex programming (1.1). First, we obtain the saddle point inequalities (1.2) by using the Lagrange function of problem (1.1). Inspired by Antipin [4], we investigate the properties of the saddle functions and prove that the accumulation points of the trajectory of the second-order differential equation system with the feedback controls are solutions to the convex programming problem (1.1). Finally, we compute two examples by using the second-order differential equation system with the feedback controls, which show the effectiveness of the second-order differential equation system with the feedback controls for solving the problem of convex programming.
Acknowledgements
Some of the results in this paper were presented in the Proceedings of the 11th World Congress on Intelligent Control and Automation, 2014; see https://ieeexplore.ieee.org/document/7052904. The research is supported by the National Natural Science Foundation of China under projects No. 11801381 and No. 11901422.