An Implicit Smooth Conjugate Projection Gradient Algorithm for Optimization with Nonlinear Complementarity Constraints

This paper discusses a special class of mathematical programs with equilibrium constraints. First, by using a generalized complementarity function, the discussed problem is transformed into a family of general nonlinear optimization problems containing an additional variable μ. Furthermore, by combining the idea of a penalty function, an auxiliary problem with inequality constraints is presented. Then, by providing an explicit search direction, we establish a new conjugate projection gradient method for optimization with nonlinear complementarity constraints. Under suitable conditions, the proposed method is proved to be globally convergent with a superlinear convergence rate.


Introduction
Mathematical programs with equilibrium constraints (MPEC) include the bilevel programming problem as a special case and have extensive applications in practical areas such as traffic control, engineering design, and economic modeling. Many scholars have therefore studied this class of problems and made notable achievements (see [1]-[10]).
In this paper, we consider an important subclass of MPEC, the mathematical program with nonlinear complementarity constraints (MPCC). To eliminate the complementarity constraints, which cannot satisfy the standard constraint qualifications [11], we introduce a generalized nonlinear complementarity function φ(a, b, μ) with the property that

φ(a, b, μ) = 0 ⟺ a > 0, b > 0, ab = μ², if μ > 0;
φ(a, b, 0) = 0 ⟺ a ≥ 0, b ≥ 0, ab = 0.

Similar to [12], we define a penalty function by adding to the objective the sum of the absolute values of the componentwise complementarity residuals, weighted by a penalty parameter c > 0. Therefore, our approach consists of solving an auxiliary problem (1.4) with only inequality constraints.
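As a concrete illustration, the smoothed Fischer-Burmeister function is one common choice of such a generalized complementarity function; the sketch below assumes it, together with an l1-type penalty term, and is not necessarily the exact φ or penalty used in this paper:

```python
import math

def phi(a: float, b: float, mu: float) -> float:
    """Smoothed Fischer-Burmeister function, one common choice of
    generalized complementarity function (the paper's phi may differ).
    phi(a, b, mu) = 0 with mu > 0  iff  a > 0, b > 0, a*b = mu**2;
    phi(a, b, 0)  = 0              iff  a >= 0, b >= 0, a*b = 0."""
    return a + b - math.sqrt(a * a + b * b + 2.0 * mu * mu)

def penalty(F, G, z, mu: float, c: float) -> float:
    """An l1-type penalty term c * sum_i |phi(F_i(z), G_i(z), mu)|,
    a hedged reading of the componentwise sum weighted by c > 0."""
    return c * sum(abs(phi(fi, gi, mu)) for fi, gi in zip(F(z), G(z)))
```

For μ > 0 the equation φ = 0 is smooth, and as μ → 0 it recovers the exact complementarity condition.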

Preliminaries and Algorithm
For the sake of simplicity, we denote the gradient vectors of the relevant constraints, which are assumed to be linearly independent. The following definition and proposition can be referred to in [13].

Definition 2.1. Suppose that s* satisfies the so-called nondegeneracy condition: if there exist multipliers such that the corresponding K-T system holds, then s* is said to be a K-T point of (1.1).

Proposition 2.1. Suppose that s* satisfies the nondegeneracy condition (2.2).

Proof. (1) According to the properties of the function φ, the conclusion follows immediately from (1.3). (2) Let w* = G(s*) and μ* = 0; then, from (1), we see that (z*, 0) is a feasible point of (1.4). Moreover, it follows from Proposition 2.1 that there exists a multiplier vector satisfying the K-T system of (1.4); then, from (1.2), the conclusion is easy to see.

Firstly, for a given point (z, μ), by using the pivoting operation of Algorithm A, we obtain an approximate active set.

Lemma 2.1. For any iteration index k, Algorithm A terminates after finitely many inner iterations.
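The details of the pivoting operation in Algorithm A are not reproduced here; as a simple stand-in, an ε-active estimate of the inequality constraints illustrates what the approximate active set is meant to capture (the function name and the tolerance eps are illustrative assumptions, not the paper's construction):

```python
def eps_active_set(g_vals, eps: float = 1e-6):
    """A simple epsilon-active estimate J = { i : g_i(z) >= -eps } for
    inequality constraints g_i(z) <= 0.  The paper obtains its approximate
    active set via a pivoting operation (Algorithm A); this cheap estimate
    only illustrates which nearly-binding constraints such a set collects."""
    return [i for i, gi in enumerate(g_vals) if gi >= -eps]
```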
For the current point, we now give some notation and the explicit search direction used in this paper. According to the above analysis, the algorithm for the solution of problem (1.1) can be stated as follows.

Algorithm B:
Step 0. Choose a starting point (z^0, μ^0) and an initial symmetric positive definite matrix B_0; set k = 0.
Step 1. By means of Algorithm A, compute the approximate active set at the current point. (2) If the corresponding test is satisfied, go to Step 4; otherwise, repeat (1).
Step 4. Obtain the feasible descent direction q^k from (2.16), and compute the step size β_k as the first member of a decreasing sequence of trial steps that satisfies the line-search criterion.
Step 5. Set the new iterate (z^{k+1}, μ^{k+1}) accordingly, obtain B_{k+1} by updating the positive definite matrix B_k using some quasi-Newton formula, and set k = k + 1. Go back to Step 1.
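The exact acceptance test in Step 4 is not reproduced above; a standard Armijo-type backtracking rule is sketched below as an assumption, with the trial sequence {1, 1/2, 1/4, ...} and the acceptance parameter sigma chosen for illustration:

```python
def backtrack(f, z, d, f_z, slope, beta0=1.0, shrink=0.5, sigma=0.1, max_iter=50):
    """Armijo-style backtracking: return the first step beta in
    {beta0, beta0*shrink, beta0*shrink**2, ...} satisfying
    f(z + beta*d) <= f(z) + sigma*beta*slope, where slope < 0 is a
    directional-derivative estimate along d.  This standard rule is an
    assumption; the paper's Step 4 criterion may differ."""
    beta = beta0
    for _ in range(max_iter):
        trial = [zi + beta * di for zi, di in zip(z, d)]
        if f(trial) <= f_z + sigma * beta * slope:
            return beta, trial
        beta *= shrink
    return beta, [zi + beta * di for zi, di in zip(z, d)]
```

Since d is a descent direction (slope < 0), the loop terminates with a strictly smaller objective value for all sufficiently small beta.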
In the remainder of this section, we give some results showing that Algorithm B is well defined.

Global Convergence
In this section, we consider the global convergence of Algorithm B. Firstly, we show that s^k is an exact stationary point of (1.1) if Algorithm B terminates at the current iteration point. (2) If the termination condition holds, then, from the definition of the index set J_k, the K-T multiplier corresponding to the constraints with indices in I_1 \ J_k is 0. Thus, there exists a suitable multiplier vector. Note that the matrix A_k has full column rank and B_k is positive definite; thus the stated relation holds. By (2.14) and (3.1), the next identity follows. On the other hand, it is easy to verify that the remaining components vanish.
From the positive definiteness of B_k and (2.12), (2.13) and (2.14), we obtain an inequality which implies the desired relation. In view of the definition of the multipliers, since the vectors involved are linearly independent, the stated identity follows. In view of the definition of the penalty parameter c_k, from (3.4), a further relation holds. Combining this with (3.2) and (3.5), and then with (3.3) and (3.6), we can easily see that s^k satisfies the K-T system of (1.1).
Proof. According to the K-T system of (1.4) and the relationship between the indices i and j in (2.1), combining Proposition 2.1 and Proposition 2.2, we conclude that s^k is a K-T point of (1.1).
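The K-T systems invoked throughout can be checked numerically. The residual below is for a generic inequality-constrained problem min f(z) s.t. g(z) ≤ 0, offered only as an illustration, not the specific systems (1.1) or (1.4):

```python
import numpy as np

def kkt_residual(grad_f, jac_g, g_vals, lam):
    """Residual of the K-T system for min f(z) s.t. g(z) <= 0:
    stationarity  grad_f + jac_g^T lam,  primal feasibility  max(g, 0),
    dual feasibility  max(-lam, 0),  and complementarity  lam * g.
    Returns the largest violation; 0 means an exact K-T point."""
    stat = grad_f + jac_g.T @ lam
    return max(np.abs(stat).max(),
               np.maximum(g_vals, 0.0).max(),
               np.maximum(-lam, 0.0).max(),
               np.abs(lam * g_vals).max())
```

For example, min (z-1)^2 s.t. z <= 0 has the K-T point z = 0 with multiplier 2, at which the residual is zero.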
In the sequel, it is assumed that Algorithm B generates an infinite sequence {(z^k, μ^k)}. (2) Consider an accumulation point of this sequence. From H 3.1 and the fact that there are only finitely many choices for the sets J_k ⊆ I_1, we may assume that there exists a subsequence K along which the iterates, multipliers and index sets converge, where J is a constant set. Correspondingly, the following results hold: for k ∈ K′ large enough, from Algorithm A, the stated relations hold. Since there are only finitely many possible subsets of I_1, there must be an infinite subset K″ ⊆ K′ such that J_k = J′ for all k ∈ K″. Thus, it follows from (3.9) that a relation holds which contradicts condition H 2.3.
(2) Suppose, by contradiction, that there exists a subsequence {k_i} along which the asserted property fails. From the finite selectivity of J_k, we may assume without loss of generality that the index sets are constant along this subsequence, from which the corresponding limit relations follow. Let M be an integer large enough that the required inequality holds; this yields a contradiction, and the result is proved.
and the continuity of r_i imply that (4.5) holds. When r_i(z, μ) = 0, from (3.12), for k ∈ K large enough and β > 0 small enough, the required inequality holds. According to the analysis above, the result is true. Proof. From (2.17), (2.18), (2.20) and Lemma 2.2, we consider the following two cases: (1) suppose there exists an infinite subset of iterates generated by Step 3 and Step 5; (2) assume the iterates are generated by Step 4 and Step 5, and suppose, by contradiction, that the limit point is not a K-T point of (1.5). Then, from (3.12) and Lemma 3.3, we obtain a contradiction. Thus the claim holds: every K-T point (z*, μ*) of (1.5) must also be a K-T point of (1.4), where μ* = 0.
Proof. If (z*, μ*) is a K-T point of (1.5), then there exists a multiplier π* such that the corresponding K-T system holds. According to the definition of c*, and combining (3.2) with (3.16), the required relation follows from (3.14) and (3.17). Proof. According to Theorem 3.2 and (2.1), Proposition 2.1 and Proposition 2.2 imply that s* is a K-T point of (1.1).

Superlinear Convergence
We now discuss the convergence rate of Algorithm B and prove that the sequence {(z^k, μ^k)} generated by Algorithm B is one-step superlinearly convergent. For this purpose, we add some stronger regularity assumptions.
is the corresponding multiplier. Lemma 4.1. Under H 2.1-H 4.2, the following holds: for iterates generated by Step 3 and Step 5, the assertion follows from (2.17) and (2.18); for iterates generated by Step 4 and Step 5, it follows from (2.17), (2.20) and Lemma 2.2. Passing to the limit k → ∞ and using (3.13), we obtain μ^k → 0 as k → ∞. In order to obtain the superlinear convergence rate, we make the following assumption. On the other hand, we assert that L_k ⊆ I*. Otherwise, there exist an index t and an infinite subset K for which, for k large enough, the strict complementarity condition would be violated; meanwhile, from the definition of V_k, the opposite relation holds. So the claim holds.
Obviously, for k large enough, the stated inequality holds; thereby, there exists a constant bounding the relevant quantity. In addition, from Lemma 4.3, the result is true. In order to obtain the superlinear convergence rate, we make another assumption. Secondly, we prove that, for k large enough, (2.18) holds for the unit step size.

Conclusion
By means of a perturbation technique and a generalized complementarity function, using an implicit smoothing strategy, we equivalently transform the original problem into a family of general optimization problems. Based on the idea of penalty functions, the discussed problem is then transformed into an associated problem with only inequality constraints containing a parameter. Then, by providing an explicit search direction, a new variable metric conjugate projection gradient algorithm is established, which is proved to be globally convergent with a superlinear convergence rate.

(3) By means of the function φ, problem (1.1) is transformed equivalently into the following standard nonlinear optimization problem. The sequence is bounded, and by (2.16) there exists a constant ĉ > 0 such that ||q^k|| ≤ ĉ.

H 4.1. The sequence {(z^k, μ^k)} is bounded and converges to a point at which the second-order sufficiency condition and strict complementary slackness hold, where

Lemma 4.2. Proof. (1) If H 2.1-H 4.2 hold, then, for k large enough, the assertion follows. On one hand, by Lemma 3.2, for k large enough, there exists an index selected by Algorithm A. It then follows from H 4.1 that we reach a contradiction with the complementary slackness condition, which shows the claim.

(2) Lemma 4.3. According to the above, under H 2.1-H 4.2, for k large enough, the direction d^k and the corresponding multiplier satisfy the stated relations.

H 4.3.
The sequence of symmetric matrices { } k B satisfies
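The precise condition on {B_k} in H 4.3 is not reproduced above; in practice, Step 5 can use any safeguarded quasi-Newton formula that keeps B_k symmetric positive definite. Powell's damped BFGS update is one standard choice, sketched here as an assumption (the paper does not fix a particular formula):

```python
import numpy as np

def damped_bfgs(B, s, y, theta_min=0.2):
    """Powell-damped BFGS update that keeps B symmetric positive definite
    even when the curvature condition s^T y > 0 fails.  Assumes s != 0.
    s is the step difference, y the gradient difference."""
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    if sy < theta_min * sBs:                       # damp y toward B s
        theta = (1.0 - theta_min) * sBs / (sBs - sy)
        y = theta * y + (1.0 - theta) * Bs
        sy = s @ y
    return B - np.outer(Bs, Bs) / sBs + np.outer(y, y) / sy
```

When the curvature condition already holds, the damping step is skipped and the formula reduces to the classical BFGS update.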

4.2. Suppose the assumptions of this theorem hold. According to Lemma 4.3, Lemma 4.4 and Lemma 4.5, combining with Theorem 12.3.3 in [15], the following statement holds. Theorem 4.5. Algorithm B is superlinearly convergent. Moreover, for k large enough, Step 4 of Algorithm B is not executed, and λ_k = 1 for k large enough.