Global Convergence of a Modified Tri-Dimensional Filter Method

In this paper, a tri-dimensional filter method for nonlinear programming is proposed. We add a parameter to the traditional filter in order to relax the acceptance criterion for iterates. The global convergence properties of the proposed algorithm are proved under appropriate conditions.


Introduction
This paper is concerned with finding a solution of a nonlinear programming (NLP) problem of the form

min f(x)  subject to  c(x) >= 0,   (1)

where f: R^n -> R and c: R^n -> R^m are twice continuously differentiable. The Lagrangian function associated with problem (1) is

L(x, lambda) = f(x) - lambda^T c(x),

where lambda = (lambda_1, ..., lambda_m)^T in R^m is the multiplier vector. A point x* in the feasible set D is called a Karush-Kuhn-Tucker (KKT) point of problem (1) if there exists a lambda* in R^m such that

grad f(x*) - grad c(x*) lambda* = 0,  c(x*) >= 0,  lambda* >= 0,  (lambda*)^T c(x*) = 0.   (2)

Traditionally, this question has been answered by using penalty functions, but it is difficult to find a suitable penalty parameter. To avoid the pitfalls of penalty functions, filter methods for nonlinear programming were first proposed by Fletcher in a plenary talk at the SIAM Optimization Conference in Victoria in May 1996; the methods are described in [1]. Soon afterwards, a global convergence proof for the filter method was given in [2]. Because of their good global convergence and numerical results, filter methods have quickly become popular in other areas such as nonsmooth optimization and nonlinear equations [3] [4].
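As a concrete illustration of the KKT conditions (2), the following sketch checks the KKT residual of a candidate point for a small toy instance of problem (1). The objective, constraint, and candidate point below are hypothetical and chosen only for illustration; they are not from the paper.

```python
import numpy as np

# Toy instance of problem (1): minimize f subject to c(x) >= 0.
def f(x):
    return x[0] ** 2 + x[1] ** 2

def grad_f(x):
    return np.array([2.0 * x[0], 2.0 * x[1]])

def c(x):                      # single constraint: x0 + x1 - 1 >= 0
    return np.array([x[0] + x[1] - 1.0])

def jac_c(x):                  # rows are gradients of the constraints
    return np.array([[1.0, 1.0]])

def kkt_residual(x, lam):
    """Maximum violation of the KKT conditions (2) at (x, lam)."""
    stat = grad_f(x) - jac_c(x).T @ lam        # stationarity of the Lagrangian
    feas = np.minimum(c(x), 0.0)               # primal feasibility c(x) >= 0
    dual = np.minimum(lam, 0.0)                # dual feasibility lambda >= 0
    comp = lam * c(x)                          # complementarity
    return max(np.abs(stat).max(), np.abs(feas).max(),
               np.abs(dual).max(), np.abs(comp).max())

x_star = np.array([0.5, 0.5])  # minimizer of this toy problem
lam_star = np.array([1.0])     # matching multiplier
print(kkt_residual(x_star, lam_star))  # ~0 at a KKT point
```

At any point that is not a KKT point, at least one of the four residual terms is bounded away from zero.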
Motivated by the ideas of the filter methods above, a tri-dimensional filter method for nonlinear programming is proposed as the acceptance criterion used to judge whether to accept a trial step in our algorithm. It has the following advantages: 1) To enhance the flexibility of the filter, motivated by [5], we add a dimension by introducing a parameter that relaxes the acceptance criterion for iterates.
2) The Maratos effect, in which a step that makes good progress toward the solution may nevertheless be rejected, is avoided by using the tri-dimensional filter as the acceptance criterion.
3) The tri-dimensional filter method makes full use of the information gathered as the algorithm proceeds. This paper is divided into 4 sections. The next section introduces the concept of the modified tri-dimensional filter and the NCP function. In Section 3, a line search filter algorithm is given. The global convergence properties are proved in the last section.

NCP Function
Methods based on the Fischer-Burmeister NCP function are efficient, both in theoretical results and in computational experience. The Fischer-Burmeister function has a very simple structure:

psi(a, b) = sqrt(a^2 + b^2) - a - b.

It is known that psi is continuously differentiable everywhere except at the origin, but it is strongly semismooth at the origin. That is, if a != 0 or b != 0, then psi is continuously differentiable at (a, b) in R^2, with

grad psi(a, b) = ( a / sqrt(a^2 + b^2) - 1,  b / sqrt(a^2 + b^2) - 1 )^T;

if a = 0 and b = 0, then the generalized Jacobian of psi at (0, 0) is the set { (xi - 1, eta - 1) : xi^2 + eta^2 <= 1 }.
Clearly, the KKT optimality conditions (2) can be equivalently reformulated as the nonsmooth equations

Phi(x, lambda) = ( grad f(x) - grad c(x) lambda, psi(c_1(x), lambda_1), ..., psi(c_m(x), lambda_m) )^T = 0,

where e_i = (0, ..., 0, 1, 0, ..., 0)^T denotes the ith column of the unit matrix: its ith element is 1 and the other elements are 0.
Replacing the constraint violation function in the filter F of the Fletcher and Leyffer method, we use the measure based on Phi, where H_k is a positive definite matrix which may be modified by the BFGS update.
diag(xi_k) and diag(eta_k) denote the diagonal matrices whose jth diagonal elements are xi_k^j and eta_k^j, respectively.

Definition 1.2 [1]
A filter is a list of pairs (h_j, f_j) such that no pair dominates any other; a pair (h_j, f_j) dominates (h_k, f_k) if h_j <= h_k and f_j <= f_k. A point (h_k, f_k) is said to be acceptable for inclusion in the filter if it is not dominated by any point in the filter.
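A minimal sketch of the dominance test of Definition 1.2, using the standard Fletcher-Leyffer convention that a pair dominates another when it is no worse in both measures:

```python
def dominates(p, q):
    """Pair p = (h_p, f_p) dominates q = (h_q, f_q) iff p is no worse in both."""
    return p[0] <= q[0] and p[1] <= q[1]

def acceptable(point, filter_list):
    """A point may enter the filter iff no stored pair dominates it."""
    return not any(dominates(p, point) for p in filter_list)

F = [(0.5, 2.0), (0.1, 3.0)]       # a small illustrative filter
print(acceptable((0.3, 2.5), F))   # True: neither stored pair dominates it
print(acceptable((0.6, 2.5), F))   # False: dominated by (0.5, 2.0)
```

Note that the filter only stores mutually non-dominated pairs, so an accepted point may in turn remove dominated entries.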

Definition 1.3 NCP pair and NCP functions [6]
We call a pair (a, b) an NCP pair if a >= 0, b >= 0, and ab = 0; a function psi is called an NCP function if psi(a, b) = 0 holds exactly when (a, b) is an NCP pair. These notions are used in the following context. It is straightforward to see that the constraints of (1) are equivalent to the equation h(x) = 0.

Tri-Dimensional Filter
A two-dimensional filter is often used in traditional filter methods, where some information relevant to convergence, such as the positions of the iterates, is neglected. Therefore, we aim to enhance the flexibility of the filter. Motivated by [5], we adopt triples (h, f, delta), in which the parameter delta is used to relax the acceptance criterion for iterates. We denote the filter by F_k for each iteration k. A flexible exact penalty function was introduced to promote convergence in [7]: given a prescribed interval, the penalty parameter can be chosen as any number from it, which extends classical penalty function methods. We generalize this idea to the filter, which we call the tri-dimensional filter. Different from the original two-dimensional filter, we add a dimension by introducing a parameter.
We use triples (h_j, f_j, delta_j) as the elements of the filter, where delta_j is a non-negative parameter. Our strategy for setting delta_j depends on the region of the h-f-delta space into which s_k moves. Figure 1 shows the distinct regions defined by the current iterate.
If s_k moves into region I, in which the constraint violation grows by a factor of at least 1.1, we say that the algorithm does not make good improvement, since we do not want to accept points with larger constraint violation. Thus, we impose a stricter acceptance criterion by increasing delta_k; meanwhile, we do not permit delta_k to become larger than sigma_k. If s_k moves into region II, in which the constraint violation is reduced to at most 0.9 of its current value and the penalty function value also decreases, we say that the algorithm makes good improvement, since it reduces not only the constraint violation but also the penalty function value. So we may loosen the acceptance criterion in the hope of further improvement. We achieve this goal by reducing delta_k, setting delta_{k+1} = max(0, delta_k - max(0.001, 0.1 delta_k)).
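The region-based update of delta_k can be sketched as follows. The factors 1.1 and 0.9 and the decrease rule follow the region discussion above; the doubling rule for the increase is an illustrative assumption, since the paper's exact increase formula is not fully recoverable.

```python
def update_delta(h_new, h_k, delta_k, sigma_k):
    """Region-based update of the relaxation parameter delta.

    The increase rule (doubling, capped by sigma_k) is an illustrative
    stand-in for the paper's formula; the decrease rule reconstructs
    max(0, delta_k - max(0.001, 0.1 * delta_k)) from the text.
    """
    if h_new >= 1.1 * h_k:
        # Region I: constraint violation grew, so tighten the acceptance
        # criterion by enlarging delta, never letting it exceed sigma_k.
        return min(sigma_k, 2.0 * delta_k)
    if h_new <= 0.9 * h_k:
        # Region II: good improvement, so loosen the criterion by
        # shrinking delta by at least 0.001 or ten percent.
        return max(0.0, delta_k - max(0.001, 0.1 * delta_k))
    return delta_k  # otherwise leave delta unchanged
```

The cap by sigma_k keeps the relaxation parameter consistent with the penalty parameter used later in the algorithm.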

In our algorithm, the trial step s_k is accepted by the filter if, for every triple (h_j, f_j, delta_j) in the filter, the trial point improves on at least one of the three measures beyond an envelope set by a constant beta close to 1; beta sets an "envelope" around the border of the dominated part of the (h, f, delta)-space in which the trial step is rejected. Also, if h_j >= h_k, f_j >= f_k and delta_j >= delta_k for an entry j in the filter, then we say x_j is dominated by x_k.
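A sketch of the tri-dimensional acceptance test, under the assumption that a trial point is rejected exactly when some filter entry is no better in all three measures up to the envelope factor beta; the value beta = 0.99 is illustrative.

```python
def dominated(entry, trial, beta=0.99):
    """entry = (h_j, f_j, delta_j) dominates the trial (h, f, delta) when
    the trial improves none of the three measures beyond the envelope."""
    h_j, f_j, d_j = entry
    h, f, d = trial
    return h >= beta * h_j and f >= f_j and d >= d_j

def accept_trial(trial, filter_entries, beta=0.99):
    """Accept the trial step iff no filter entry dominates it."""
    return not any(dominated(e, trial, beta) for e in filter_entries)

F_k = [(1.0, 5.0, 0.1)]                  # a one-entry illustrative filter
print(accept_trial((0.5, 6.0, 0.2), F_k))  # True: violation clearly reduced
print(accept_trial((1.0, 5.0, 0.1), F_k))  # False: no measure improved
```

The envelope prevents the algorithm from accepting points that are only marginally better than existing filter entries.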

Description of the Algorithm
In this section we hope that the Lagrange multiplier lambda_k will converge to the Lagrange multiplier lambda* at the solution x*. From the KKT system of (1), a good estimate of the Lagrange multiplier is the least squares solution of grad c(x_k) lambda = grad f(x_k). In our algorithm, lambda_k is updated only after a trial step is accepted, and is set componentwise. Now, we consider how to update the penalty parameter. Let x* be a solution of (1) at which the LICQ and the second order sufficient conditions are satisfied. Then, when sigma > ||lambda*||, x* is a strict local minimizer of the penalty function, so we force the condition sigma_k >= ||lambda_k|| at each iteration. Also, since the penalty term aims to reduce the constraint violation, we double the penalty parameter if the constraint violation is not reduced by half, that is, if h(x_{k+1}) > h(x_k)/2. To summarize, we update the penalty parameter by combining these two rules. The improved algorithm is presented as follows.
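The two penalty-parameter rules described above (keep sigma_k above the multiplier norm, and double it when the constraint violation is not halved) can be sketched as follows; the safeguard margin is an assumed small constant, not a value from the paper.

```python
import numpy as np

def update_sigma(sigma_k, lam_next, h_next, h_k, margin=1e-4):
    """Penalty parameter update combining the two rules in the text.

    Rule 1: force sigma above the sup-norm of the current multiplier
            estimate (margin is an illustrative safeguard).
    Rule 2: double sigma when the constraint violation was not halved.
    """
    sigma = max(sigma_k, np.abs(lam_next).max() + margin)
    if h_next > 0.5 * h_k:        # violation not reduced by half
        sigma = 2.0 * sigma
    return sigma
```

Keeping sigma above the multiplier norm is what makes the corresponding penalty function exact at the solution x*.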

Algorithm
Step 0. Initialization: Give a starting point x_0 in R^n, parameters mu_0, lambda_0, and an initial positive definite matrix H_0. The search direction is obtained by solving a linear system in (s, lambda). Step 4. Acceptance criterion of the trial step.

The Convergence Properties
To present a proof of global convergence of the algorithm, in this section we always assume that the following conditions hold. A1: The level set is bounded, and eta_k^j != 0 for all j, so diag(eta_k) is nonsingular. Substituting (14) into (12), we obtain that x is a KKT point of problem (NLP). It is then straightforward to prove that the conclusion holds according to the above lemmas.

Figure 1. Distinct regions defined by the current iterate.
take x_k as a solution and stop. Step 2. Computation of the search direction.

Lemma 3.
Phi* is nonsingular, so the lemma holds. Lemma 2 holds (see [8], Lemma 2), and x_k is a KKT point of problem (NLP). Consider an infinite sequence of iterations on which the filter is updated. Suppose the theorem is not true; then there exist an epsilon > 0 and infinitely many members of an index set K such that either the sequence is bounded and, for sufficiently large k,