A Spectral Projected Gradient-Newton Two Phase Method for Constrained Nonlinear Equations
1. Introduction
In this paper, we consider the constrained nonlinear semismooth equations problem: finding a vector $x \in \Omega$ such that
$H(x) = 0$, $x \in \Omega$, (1)
where $H : \mathbb{R}^n \to \mathbb{R}^n$ is a semismooth mapping and $\Omega \subseteq \mathbb{R}^n$ is a nonempty closed convex set. The notion of semismoothness was introduced for functionals by Mifflin [1] and extended to vector-valued functions by Qi and Sun [2].
Systems of constrained semismooth equations arise in various applications, for instance complementarity problems, box constrained variational inequality problems, the KKT systems of variational inequality problems, and so on. Solving the nonlinear equations (1) can be transformed into solving the following constrained optimization problem:
$\min f(x)$ s.t. $x \in \Omega$, (2)
where $f(x) = \frac{1}{2}\|H(x)\|^2$ is continuously differentiable and its gradient is denoted by $g(x) = \nabla f(x)$
. Many researchers have studied constrained optimization problems such as (2) and proposed many effective algorithms. For example, a new class of adaptive nonmonotone spectral gradient methods is given in reference [3], and an active set projected trust region algorithm in [4]. Optimization methods comprise first-order methods and second-order methods. Classical first-order algorithms include the gradient method, the subgradient method, the conjugate gradient method, etc. The main advantage of first-order methods is their small storage requirement, which makes them particularly suitable for large-scale problems. Their disadvantage is that their convergence rate is at most linear, which cannot meet high-precision requirements. Second-order methods, in contrast, converge fast: under certain conditions they achieve superlinear or even quadratic convergence. Their disadvantage is that they need a good initial point, sometimes even one close to a local optimum.
Motivated by this, in this paper we combine the advantages of first-order and second-order methods. We consider a two-stage combination algorithm to solve the optimization problem. First, we use a first-order method to obtain global convergence of the algorithm, and then we use the final point obtained by the first-order method as the new initial point of a second-order method to obtain a fast convergence rate. At the same time, we use a projection technique to handle the constraints.
2. Preliminaries
In this section, we present some definitions and theorems that are useful to our main result.
Suppose $H : \mathbb{R}^n \to \mathbb{R}^n$ is a locally Lipschitzian function. According to Rademacher's theorem, H is differentiable almost everywhere. Denote the set of points at which H is differentiable by $D_H$. We write $H'(x)$ for the usual $n \times n$ Jacobian matrix of partial derivatives whenever x is a point at which the necessary partial derivatives exist. Let $\partial H(x)$ be the generalized Jacobian defined by Clarke in [5]. Then
$\partial H(x) = \mathrm{co}\,\partial_B H(x)$, (3)
where $\mathrm{co}$ denotes the convex hull of a set and
$\partial_B H(x) = \{ V : V = \lim_{x_k \to x,\; x_k \in D_H} H'(x_k) \}$.
Definition 2.1 [2]: Suppose $H : \mathbb{R}^n \to \mathbb{R}^n$ is a locally Lipschitzian function. We say that H is semismooth at x if
$\lim_{V \in \partial H(x + t h'),\; h' \to h,\; t \downarrow 0} \{ V h' \}$ (4)
exists for any $h \in \mathbb{R}^n$.
Lemma 2.2 [2]: Suppose $H : \mathbb{R}^n \to \mathbb{R}^n$ is a locally Lipschitzian function. The following statements are equivalent:
1) H is semismooth at x;
2) For any $V \in \partial H(x + h)$, $h \to 0$,
$Vh - H'(x; h) = o(\|h\|)$, (5)
$H(x + h) - H(x) - Vh = o(\|h\|)$. (6)
Lemma 2.3 [2]: Suppose $H : \mathbb{R}^n \to \mathbb{R}^n$ is a locally Lipschitzian function. Then H is semismooth at x if each component of H is semismooth at x.
Definition 2.4 [2]: Suppose $H : \mathbb{R}^n \to \mathbb{R}^n$ is a locally Lipschitzian function. If for any $V \in \partial H(x + h)$, $h \to 0$,
$Vh - H'(x; h) = O(\|h\|^{1+p})$, (7)
where $0 < p \le 1$, then we say that H is p-order semismooth at x.
Definition 2.5 [2]: Suppose $H : \mathbb{R}^n \to \mathbb{R}^n$ is a locally Lipschitzian function. We say H is strongly BD-regular at x if all $V \in \partial_B H(x)$ are nonsingular.
Lemma 2.6 [6]: Suppose that $H : \mathbb{R}^n \to \mathbb{R}^n$ is locally Lipschitz continuous and H is BD-regular at x. Then there exist a neighborhood $N(x)$ of x and a constant K such that for any $y \in N(x)$ and $V \in \partial_B H(y)$, V is nonsingular and $\|V^{-1}\| \le K$.
Lemma 2.7 [6]: Suppose that $H : \mathbb{R}^n \to \mathbb{R}^n$ is locally Lipschitz continuous and H is BD-regular at a solution $x^*$ of $H(x) = 0$. If H is semismooth at $x^*$, then there exist a neighborhood $N(x^*)$ of $x^*$ and a constant $c > 0$ such that for any $x \in N(x^*)$,
$\|x - x^*\| \le c\,\|H(x)\|$. (8)
Lemma 2.8 [7]: The projection operator $P_\Omega$ satisfies:
1) For any $z \in \mathbb{R}^n$, $\langle P_\Omega(z) - z,\; x - P_\Omega(z) \rangle \ge 0$ for all $x \in \Omega$.
2) $\|P_\Omega(x) - P_\Omega(y)\| \le \|x - y\|$ for all $x, y \in \mathbb{R}^n$.
Lemma 2.9 [8]: Given $x \in \mathbb{R}^n$ and $d \in \mathbb{R}^n$, the function $\psi : (0, \infty) \to \mathbb{R}$ defined by
$\psi(\alpha) = \dfrac{\|P_\Omega(x + \alpha d) - x\|}{\alpha}$ (9)
is nonincreasing.
Lemma 2.9 actually implies that if $\bar{x}$ is a stationary point of (2), then
$P_\Omega(\bar{x} - \alpha g(\bar{x})) = \bar{x}$ for all $\alpha > 0$. (10)
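Both the projection operator and the stationarity measure $\|P_\Omega(x - g(x)) - x\|$ are straightforward to realize numerically. The sketch below assumes the common box-constrained case $\Omega = [l, u]$ (the paper allows a general closed convex set); the function names are ours, introduced only for illustration.

```python
import numpy as np

def proj_box(z, l, u):
    """Orthogonal projection of z onto the box {x : l <= x <= u}."""
    return np.minimum(np.maximum(z, l), u)

def stationarity_residual(x, g, l, u):
    """||P(x - g) - x||: zero exactly at constrained stationary points."""
    return np.linalg.norm(proj_box(x - g, l, u) - x)

# Numerical check of nonexpansiveness (Lemma 2.8, part 2):
# ||P(x) - P(y)|| <= ||x - y||.
rng = np.random.default_rng(0)
l, u = -np.ones(5), np.ones(5)
x, y = rng.normal(size=5), rng.normal(size=5)
assert np.linalg.norm(proj_box(x, l, u) - proj_box(y, l, u)) <= np.linalg.norm(x - y)
```

The residual returned by `stationarity_residual` is the quantity both phases of the method can use as a stopping test.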
3. Algorithm
In order to obtain global convergence of the algorithm, in the first stage we adopt the nonmonotone spectral projected gradient method, a first-order method. The line search procedure of Algorithm 3.1 will be called SPG1 and that of Algorithm 3.2 will be called SPG2 in the rest of the paper.
Given $z \in \mathbb{R}^n$, we define $P_\Omega(z)$ as the orthogonal projection of z on $\Omega$, and denote $g_k = g(x_k)$. The algorithms use an integer $M \ge 1$, a small parameter $\alpha_{\min} > 0$, a large parameter $\alpha_{\max} > \alpha_{\min}$, a sufficient decrease parameter $\gamma \in (0, 1)$, safeguarding parameters $0 < \sigma_1 < \sigma_2 < 1$, and, initially, a point $x_0 \in \Omega$ and a steplength $\alpha_0 \in [\alpha_{\min}, \alpha_{\max}]$.
Algorithm 3.1 [9] (SPG1)
Step 1. If $\|P_\Omega(x_k - g_k) - x_k\| = 0$, stop and output $x_k$.
Step 2. (Backtracking)
Step 2.1. Set $\lambda \leftarrow \alpha_k$.
Step 2.2. Set $x_+ = P_\Omega(x_k - \lambda g_k)$.
Step 2.3. If
$f(x_+) \le \max_{0 \le j \le \min\{k, M-1\}} f(x_{k-j}) + \gamma\,\langle x_+ - x_k, g_k \rangle$, (11)
then define $\lambda_k = \lambda$, $x_{k+1} = x_+$, $s_k = x_{k+1} - x_k$, $y_k = g_{k+1} - g_k$, and go to Step 3.
If (11) does not hold, choose $\lambda_{\mathrm{new}} \in [\sigma_1 \lambda, \sigma_2 \lambda]$, set $\lambda \leftarrow \lambda_{\mathrm{new}}$, and go to Step 2.2.
Step 3. Compute $b_k = \langle s_k, y_k \rangle$. If $b_k \le 0$, set $\alpha_{k+1} = \alpha_{\max}$; else compute $a_k = \langle s_k, s_k \rangle$, set $\alpha_{k+1} = \min\{\alpha_{\max}, \max\{\alpha_{\min}, a_k / b_k\}\}$, and go to Step 1.
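As an illustration, SPG1 can be sketched compactly in Python. This is a minimal sketch, not the authors' implementation: the parameter values are illustrative, and a fixed backtracking factor `sigma` replaces the choice $\lambda_{\mathrm{new}} \in [\sigma_1 \lambda, \sigma_2 \lambda]$.

```python
import numpy as np

def spg1(f, grad, proj, x0, M=10, gamma=1e-4, sigma=0.5,
         alpha_min=1e-10, alpha_max=1e10, tol=1e-8, max_iter=1000):
    """Sketch of the nonmonotone spectral projected gradient method SPG1."""
    x = proj(np.asarray(x0, dtype=float))
    alpha = 1.0                      # initial spectral steplength
    hist = [f(x)]                    # recent f-values for the nonmonotone rule
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(proj(x - g) - x) <= tol:   # Step 1: stationarity test
            break
        lam = alpha                                  # Step 2.1
        f_max = max(hist[-M:])                       # nonmonotone reference value
        while True:                                  # Steps 2.2-2.3
            x_new = proj(x - lam * g)
            if f(x_new) <= f_max + gamma * g @ (x_new - x) or lam < 1e-16:
                break
            lam *= sigma             # simplified backtracking factor
        s, y = x_new - x, grad(x_new) - g            # Step 3: spectral update
        b = s @ y
        alpha = alpha_max if b <= 0 else min(alpha_max, max(alpha_min, (s @ s) / b))
        x = x_new
        hist.append(f(x))
    return x
```

On the toy box-constrained quadratic $f(x) = \frac{1}{2}\|x - c\|^2$ over $[-1, 1]^2$ with $c = (2, 0.5)$, the sketch recovers the projected minimizer $(1, 0.5)$ in a few iterations.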
Algorithm 3.2 [9] (SPG2)
Step 2. (Backtracking)
Step 2.1. Compute $d_k = P_\Omega(x_k - \alpha_k g_k) - x_k$. Set $\lambda \leftarrow 1$.
Step 2.2. Set $x_+ = x_k + \lambda d_k$.
Step 2.3. If
$f(x_+) \le \max_{0 \le j \le \min\{k, M-1\}} f(x_{k-j}) + \gamma\,\lambda\,\langle d_k, g_k \rangle$, (12)
then define $x_{k+1} = x_+$, and go to Step 3.
If (12) does not hold, choose $\lambda_{\mathrm{new}} \in [\sigma_1 \lambda, \sigma_2 \lambda]$, set $\lambda \leftarrow \lambda_{\mathrm{new}}$, and go to Step 2.2.
(Steps 1 and 3 are the same as in Algorithm 3.1.)
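The difference from SPG1 is that SPG2 backtracks along the fixed direction $d_k = P_\Omega(x_k - \alpha_k g_k) - x_k$ instead of re-projecting every trial point, so each backtracking step costs no extra projection while every trial point stays feasible by convexity. A minimal sketch of just this backtracking step (the fixed factor `sigma` is again a simplifying assumption):

```python
import numpy as np

def spg2_backtrack(f, g, x, alpha, proj, fmax, gamma=1e-4, sigma=0.5):
    """SPG2-style backtracking sketch: one projection, then a line
    search along the fixed direction d = P(x - alpha*g) - x."""
    d = proj(x - alpha * g) - x            # Step 2.1: feasible direction
    lam = 1.0
    while True:
        x_new = x + lam * d                # Step 2.2: convex combination stays in Omega
        if f(x_new) <= fmax + gamma * lam * (g @ d) or lam < 1e-16:   # test (12)
            return x_new, lam
        lam *= sigma                       # simplified choice of lambda_new
```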
The output point of the first stage is used as the initial point of the next stage.
Algorithm 3.3 [10] (A projected semismooth asymptotic Newton method)
Step 0. Choose constants
, Let
,
.
Step 1. Choose
, compute
.
Step 2. If
is a stationary point, stop. Otherwise let
, (13)
where
, (14)
and go to step 3.
Step 3. If the linear system
$V_k d = -H(x_k)$ (15)
has a solution $d_k$, and
, (16)
then use the direction $d_k$. Otherwise, set
.
Step 4. Let
be the smallest nonnegative integer m satisfying
, (17)
where for any
,
, (18)
(19)
and
is an optimal solution to
, (20)
the optimal solution is
, (21)
let
,
.
Step 5. Let $k := k + 1$, and go to Step 1.
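The exact safeguards (13)-(21) of Algorithm 3.3 come from [10] and are not reproduced here. The following is only a minimal sketch of the core idea of the second phase: solve the semismooth Newton system (15) for an element of the B-Jacobian, project the trial point back onto the feasible set, and fall back to the negative gradient of $f$ when the system is unsolvable or does not give descent. The tolerances and the Armijo-type rule are illustrative assumptions, not the rule (17)-(21).

```python
import numpy as np

def projected_newton_sketch(H, jac, proj, x0, rho=0.5, sigma=1e-4,
                            tol=1e-10, max_iter=50):
    """Hedged sketch of a projected semismooth Newton phase for
    min f(x) = 0.5*||H(x)||^2 over a convex set."""
    x = proj(np.asarray(x0, dtype=float))
    for _ in range(max_iter):
        Hx = H(x)
        f = 0.5 * Hx @ Hx
        if f <= tol:
            break
        V = jac(x)                        # stand-in for V_k in the B-Jacobian of H
        g = V.T @ Hx                      # gradient of f
        try:
            d = np.linalg.solve(V, -Hx)   # Newton system (15): V d = -H(x)
        except np.linalg.LinAlgError:
            d = -g                        # fallback when (15) has no solution
        if g @ d >= 0:                    # fallback when d is not a descent direction
            d = -g
        lam = 1.0                         # illustrative Armijo backtracking
        while True:
            x_new = proj(x + lam * d)
            Hn = H(x_new)
            if 0.5 * Hn @ Hn <= f + sigma * g @ (x_new - x) or lam < 1e-12:
                break
            lam *= rho
        x = x_new
    return x
```

With $H(x) = (x_1^2 - 1,\; x_2 - 0.5)$ and the box $[0, 3]^2$, the sketch converges rapidly to the solution $(1, 0.5)$ from the starting point $(2, 2)$, illustrating the fast local phase.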
4. Convergence Analysis
Theorem 4.1 [9]: Algorithm SPG1 is well defined, and any accumulation point of the sequence $\{x_k\}$ that it generates is a constrained stationary point.
Theorem 4.2 [9]: Algorithm SPG2 is well defined, and any accumulation point of the sequence $\{x_k\}$ that it generates is a constrained stationary point.
Theorem 4.3 [10]: Let $\{x_k\}$ be a sequence generated by Algorithm 3.3. Then any accumulation point of $\{x_k\}$ is a stationary point of (2).
5. Application
Many practical problems can be solved by transforming them into constrained semismooth equations. Consider, for example, the mixed complementarity problem (MCP): given a continuously differentiable function $F : \mathbb{R}^n \to \mathbb{R}^n$ and bounds $l \le u$, find a vector $x \in [l, u]$ that satisfies, for each index $i$,
$x_i = l_i \Rightarrow F_i(x) \ge 0, \quad l_i < x_i < u_i \Rightarrow F_i(x) = 0, \quad x_i = u_i \Rightarrow F_i(x) \le 0$. (22)
The function $\Phi : \mathbb{R}^n \to \mathbb{R}^n$ with components $\Phi_i$ is defined by
(23)
where $a_+ := \max\{a, 0\}$ for any $a \in \mathbb{R}$, and $\varphi_\lambda$ is the penalized Fischer-Burmeister function introduced by Chen et al. [11], which has the form
$\varphi_\lambda(a, b) = \lambda\,\varphi_{FB}(a, b) - (1 - \lambda)\,a_+ b_+, \quad \lambda \in (0, 1)$. (24)
Here, $\varphi_{FB}$ is an NCP function, which is given by
$\varphi_{FB}(a, b) = \sqrt{a^2 + b^2} - a - b$. (25)
The mixed complementarity problem can thus be transformed into a semismooth system of equations by the above functions.
Let
(26)
Then MCP can be reformulated as the constrained system $\Phi(x) = 0$, $x \in \Omega$, with
(27)
and we can use the two phase method to solve this problem.
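For the NCP case ($l = 0$, $u = +\infty$), the scalar functions (24)-(25) can be coded directly; the componentwise MCP operator $\Phi$ in (23) is built from such scalar NCP functions. The sketch below uses an illustrative penalty weight $\lambda = 0.95$ and checks the defining NCP property that $\varphi_\lambda(a, b) = 0$ exactly when $a \ge 0$, $b \ge 0$, $ab = 0$; the sign convention for $\varphi_{FB}$ follows (25) above.

```python
import numpy as np

def phi_fb(a, b):
    """Fischer-Burmeister function (25): zero iff a >= 0, b >= 0, a*b = 0."""
    return np.sqrt(a * a + b * b) - a - b

def phi_pfb(a, b, lam=0.95):
    """Penalized Fischer-Burmeister function (24) of Chen et al. [11].
    lam in (0, 1) is an illustrative choice; a_+ denotes max(a, 0)."""
    return lam * phi_fb(a, b) - (1.0 - lam) * max(a, 0.0) * max(b, 0.0)
```

Since the penalty term $a_+ b_+$ vanishes on the complementarity set while $\varphi_{FB} < 0$ whenever $a_+ b_+ > 0$, the two terms cannot cancel, and $\varphi_\lambda$ inherits the NCP property from $\varphi_{FB}$.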
6. Conclusion
In this paper, we proposed a two-phase method for constrained semismooth equations. The framework also allows other first-order and second-order methods to be combined. An iteration complexity analysis of the first-order phase would be meaningful further work, which we intend to pursue.