A Nonmonotone Filter Method for Minimax Problems

In this paper, we propose a modified trust-region filter algorithm for minimax problems, built on the framework of the SQP-filter method and combined with a nonmonotone technique. An SQP subproblem is solved to obtain a trial step, and a filter is used to judge the quality of the trial step, so that no penalty function is needed. The algorithm uses the Lagrangian function as a merit function together with a nonmonotone filter to improve its performance. Under mild conditions, we prove global convergence.


Introduction
Consider the following minimax problem:
$$ \min_{x\in\mathbb{R}^n}\ \max_{1\le i\le m} f_i(x), \qquad (1) $$
where each $f_i:\mathbb{R}^n\to\mathbb{R}$ is a twice continuously differentiable function.
Problem (1) can be transformed into the equivalent smooth problem
$$ \min_{(x,t)\in\mathbb{R}^{n+1}}\ t \quad \text{s.t.}\quad f_i(x)\le t,\ \ i=1,\dots,m. \qquad (2) $$
The minimax problem is one of the most important nondifferentiable optimization problems. It not only has broad applications in engineering design, electronic microcircuit programming, game theory, and so on, but is also closely related to nonlinear equations, multiobjective programming, nonlinear programming, etc. Several classes of methods exist for solving minimax problems, e.g., line search methods, SQP methods, trust-region methods, and active-set methods. C. Charalambous and A. R. Conn [1] proposed a line search method. A. Vardi [2] presented a trust-region method combined with active-set techniques. There are many other effective algorithms; see Z. B. Zhu [3], L. Gao [4], J. L. Zhou [5], and Y. Xue [6].
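As a concrete illustration of the reformulation, the following minimal sketch uses two hypothetical functions $f_1(x)=x^2$ and $f_2(x)=(x-2)^2$ (not taken from the paper). For fixed $x$, the smallest feasible $t$ in problem (2) is exactly $\max_i f_i(x)$, so minimizing $t$ over the constraints of (2) recovers problem (1):

```python
# Toy illustration of problem (1) and its epigraph reformulation (2).
# The functions f1, f2 are hypothetical examples, not from the paper.
def f1(x): return x * x
def f2(x): return (x - 2.0) ** 2

def F(x):
    # the nonsmooth objective of problem (1): max_i f_i(x)
    return max(f1(x), f2(x))

# For fixed x, the smallest t with f_i(x) <= t for all i is t = F(x),
# so minimizing t over the constraint set of (2) recovers problem (1).
xs = [i / 1000.0 for i in range(-1000, 3001)]
x_star = min(xs, key=F)  # grid minimizer: the kink where f1 = f2
```

The grid minimizer lies at the crossing point $x=1$, where $F$ is not differentiable; this kink is precisely why (1) is treated through the smooth problem (2).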
Recently, filter methods for nonlinear programming have found broad application and shown good numerical performance; see [7][8][9][10][11][12]. The major filter methods are of two kinds: line search and trust-region methods. R. Fletcher proposed the globally convergent SQP-filter trust-region method [9]; based on this idea, Huang [13] proposed a filter method for minimax problems. In [14], S. Ulbrich used the Lagrangian function in place of the objective function and gave a local superlinear convergence proof for the SQP-filter trust-region method.
The nonmonotone technique can improve the performance of an algorithm by relaxing the acceptance criteria for the trial step. Recently, Su [15] and Shen [16] presented nonmonotone filter methods for nonlinear programming. Motivated by their ideas, we present a modified filter method for minimax problems. The algorithm uses the Lagrangian function instead of the objective function itself as a merit function, combined with a nonmonotone filter technique to improve its performance.
Consider the SQP subproblem of problem (2):
$$ \min_{(s,\hat t)}\ \hat t+\tfrac12 s^{T}H_k s \quad \text{s.t.}\quad f_i(x_k)+\nabla f_i(x_k)^{T}s \le t_k+\hat t,\ \ i=1,\dots,m,\qquad \|s\|\le\Delta_k, \qquad (3) $$
where $H_k$ is a symmetric matrix approximating the Hessian of the Lagrangian in subproblem (3).
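To make the structure of the subproblem concrete, here is a minimal one-dimensional sketch (the function `solve_subproblem` and its grid search are illustrative devices, not the paper's method). For a fixed step $s$, the optimal $\hat t$ equals $\max_i\bigl(f_i+g_i s\bigr)-t_k$, so the subproblem collapses to a piecewise-quadratic minimization over the trust region:

```python
# Hedged sketch of subproblem (3) in one dimension: for fixed s, the
# optimal t-hat equals max_i (f_i + g_i * s) - t_k, so (3) reduces to
# a piecewise-quadratic 1-D minimization over the trust region |s| <= Delta.
def solve_subproblem(f, g, t_k, H, Delta, steps=4001):
    # f, g: lists of f_i(x_k) and derivatives f_i'(x_k); H: scalar Hessian model
    def phi(s):
        return max(fi + gi * s for fi, gi in zip(f, g)) - t_k + 0.5 * H * s * s
    grid = [-Delta + 2.0 * Delta * j / (steps - 1) for j in range(steps)]
    s = min(grid, key=phi)                      # crude grid "QP solve"
    t_hat = max(fi + gi * s for fi, gi in zip(f, g)) - t_k
    return s, t_hat
```

For example, linearizing the toy pair $f_1(x)=x^2$, $f_2(x)=(x-2)^2$ at $x_k=0$ gives $f=(0,4)$, $g=(0,-4)$, $t_k=4$; with $H=1$ and $\Delta=2$ the sketch returns the step $s=1$, i.e., it moves straight toward the kink of the true objective.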

Remark:
$H_k$ is updated by Powell's safeguarded BFGS update formula.
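Powell's safeguarded (damped) BFGS update replaces $y_k$ by a convex combination $\bar y_k=\theta y_k+(1-\theta)H_k s_k$ whenever the curvature condition $s_k^{T}y_k\ge 0.2\,s_k^{T}H_k s_k$ fails, which keeps $H_k$ positive definite. A minimal dense-matrix sketch in pure Python (illustrative only; the constant `delta=0.2` is Powell's usual choice):

```python
# Powell's damped BFGS update (hedged sketch): keeps H positive definite
# even when the plain curvature condition s^T y > 0 fails.
def dot(u, v): return sum(a * b for a, b in zip(u, v))
def matvec(H, v): return [dot(row, v) for row in H]

def damped_bfgs(H, s, y, delta=0.2):
    Hs = matvec(H, s)
    sHs = dot(s, Hs)
    sy = dot(s, y)
    # Powell's safeguard: blend y with H s when curvature is too small
    if sy >= delta * sHs:
        theta = 1.0
    else:
        theta = (1.0 - delta) * sHs / (sHs - sy)
    ybar = [theta * yi + (1.0 - theta) * hsi for yi, hsi in zip(y, Hs)]
    sybar = dot(s, ybar)
    n = len(s)
    # standard BFGS formula with y replaced by the damped ybar
    return [[H[i][j] - Hs[i] * Hs[j] / sHs + ybar[i] * ybar[j] / sybar
             for j in range(n)] for i in range(n)]
```

With $H=I$, $s=(1,0)$, and $y=(-1,0)$ (negative curvature), the damping activates and the updated matrix stays positive definite; with $y=(2,0)$ the safeguard is inactive and the plain BFGS update results.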
This paper is organized as follows. The new algorithm is described in Section 2. Basic assumptions and some important lemmas are given in Section 3. The analysis of global convergence is given in Sections 4 and 5.

Algorithm
Now we introduce some definitions about the filter used in this paper.
Definition 1 [14]: The Lagrangian function of problem (2) is
$$ l(x,t,\lambda)=t+\sum_{i=1}^{m}\lambda_i\bigl(f_i(x)-t\bigr), $$
and the constraint violation function is
$$ h(x,t)=\Bigl\|\bigl(\max\{f_i(x)-t,\,0\}\bigr)_{i=1}^{m}\Bigr\|. $$
For simplicity, we write $l_k=l(x_k,t_k,\lambda_k)$ and $h_k=h(x_k,t_k)$.

A filter is a list of pairs $(h_j,l_j)$ such that no pair dominates any other; we denote the filter set at iteration $k$ by $\mathcal{F}_k$. Similar to the definition of Fletcher and Leyffer [9], a point $(x_k+s,\,t_k+\hat t)$ is acceptable to $(h_j,l_j)$ if
$$ h\le(1-\gamma)h_j \quad\text{or}\quad l\le l_j-\gamma h $$
holds for some constant $\gamma\in(0,1)$. Here we use the nonmonotone filter idea of [14]: the trial point may also be accepted when it satisfies this test with respect to a reference value taken over the most recent filter entries rather than each entry individually. When $(h_k,l_k)$ is added to the filter, the filter set is updated by removing every pair that the new pair dominates, where $\lambda_k(s)$ is the Lagrange multiplier of subproblem (3).
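The filter mechanics of dominance, margin-based acceptance, and update can be sketched as follows; the margin constant `gamma` and the acceptance test follow Fletcher and Leyffer's monotone rule, and all names are illustrative:

```python
def dominates(p, q):
    # p dominates q if p is no worse in both the violation h and the merit l
    return p[0] <= q[0] and p[1] <= q[1]

def acceptable(pair, filter_set, gamma=0.01):
    # (h, l) is acceptable if, against every filter entry (h_j, l_j), it
    # improves at least one measure by a small margin (Fletcher-Leyffer rule)
    h, l = pair
    return all(h <= (1.0 - gamma) * hj or l <= lj - gamma * h
               for hj, lj in filter_set)

def add_to_filter(pair, filter_set):
    # drop entries dominated by the new pair, then append it
    return [q for q in filter_set if not dominates(pair, q)] + [pair]
```

A nonmonotone variant would apply the same test against a reference value over the most recent entries instead of against every entry, which relaxes the acceptance criterion for the trial step.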
In order to improve both feasibility and optimality: if the predicted reduction is sufficiently large relative to the constraint violation, then we require a sufficient actual reduction before accepting the step; otherwise we add $(h_k,l_k)$ to the filter set and update the filter set, and call the iteration an h-type iteration.
If the sufficient-reduction requirement fails, we also call it an h-type iteration. Now we describe the detailed algorithm below.

Step 0 (Initialization): Choose the initial point $(x_0,t_0)$, the initial trust-region radius $\Delta_0>0$, and the algorithm constants; set $k:=0$.
Step 1: Solve subproblem (3) to obtain a trial step $s$.
Step 2: If subproblem (3) is incompatible or the trial step is not acceptable, then add $(h_k,l_k)$ to the filter set and update the filter set. If $\|s\|\le\varepsilon$, stop; otherwise go to Step 3.

Well-Definedness of the Algorithm
Lemma 5: From the definition of $\Delta_k$ and the result of (10), when $\Delta_k$ is sufficiently small the subproblem must have a solution. From the results of Lemma 7 in [13], there exist a neighborhood of $(x^*,t^*)$ and some positive constants such that the required bounds hold for all points in this neighborhood. Next we prove that there exists $\bar\Delta>0$ such that, whenever (11) holds, conditions (12)-(14) also hold. If (11) holds, then from (15) we can deduce the required estimate; from the definition of the predicted reduction and a Taylor expansion, (13)-(14) follow. Thus the lemma is proven.

We consider the following two cases. Case 1: the acceptance conditions cannot hold, so the algorithm enters the restoration phase at Step 2 and terminates finitely. Case 2: by the result of Lemma 6, subproblem (3) must be compatible and (12)-(14) hold. From Lemma 5 and (14) we know that $(x_k+s,\,t_k+\hat t)$ is acceptable to the filter, so all the conditions for an f-type step are satisfied and the inner iteration terminates successfully.

Global Convergence
Lemma 8: Under the assumptions A1-A4, if the algorithm does not terminate finitely and infinitely many points are added to the filter set, then $\lim_{k\to\infty}h_k=0$.

Proof. We consider the following two cases. Case 1: from the proof of Lemma 3, Case 2, the claim holds. Case 2: (8) is true; since $l_k$ is bounded below, the accumulated reductions are finite, and the claim follows. Thus the lemma is proven.

Some assumptions are needed on the Lagrange multiplier estimates.

Theorem 1: Under the assumptions A1-A4, if the algorithm does not terminate finitely, then there must exist an accumulation point which is a KKT point.

Proof. We discuss it in two cases.

1) Case 1: there are infinitely many h-type iterations. From Lemma 8 we know $\lim_{k\to\infty}h_k=0$; from the update mechanism of the filter set, there must exist a subsequence along which the optimality measure also tends to zero. Without loss of generality, we can assume that this subsequence converges.

2) Case 2: there are only finitely many h-type iterations. That means for $k\in K$ sufficiently large, no filter entries are made. Since $\Delta_k$ is chosen by successive halving, it will eventually either (a) locate a value in the interval (15), or (b) locate a value to the right of this interval. Under case (a), it is obvious that $pred_k(s)>0$. From the optimality of $s$, we know $pred_k(s)$ is nondecreasing as $\Delta_k$ increases, so if (b) is true we still have $pred_k(s)>0$, which means an f-type iteration will occur, and this is a contradiction. Without loss of generality, we can assume that the resulting subsequence converges.