An Efficient Proximal Point Algorithm for Unweighted Max-Min Dispersion Problem *

In this paper, we first reformulate the max-min dispersion problem as a saddle-point problem. Specifically, we introduce an auxiliary problem whose optimal value gives an upper bound on that of the original problem. We then solve the saddle-point problem with an adaptive custom proximal point algorithm. Numerical results show that the proposed algorithm is efficient.


Introduction
Consider the following weighted max-min dispersion problem:

$$ \max_{x \in \chi}\ \min_{1 \le i \le m}\ \omega_i \|x - x^i\|^2, \qquad (1) $$

where $x^1, \dots, x^m \in \mathbb{R}^n$ are given points, $\omega_1, \dots, \omega_m > 0$ are given weights, and $\chi \subseteq \mathbb{R}^n$ is a closed set. The problem aims to find a point $x$ in the closed set $\chi$ that is furthest from the given points $x^1, \dots, x^m$ in a weighted max-min sense. It has wide applications in spatial management, facility location, and pattern recognition (see [1] [2] [3] [4] and references therein). In the equal-weight case, i.e., $\omega_i \equiv 1$, the problem is called the unweighted max-min dispersion problem. In the low-dimensional case $n \le 3$ with $\chi$ a polyhedral set, the problem is solvable in polynomial time [4] [7]. For $n \ge 4$, heuristic approaches have been proposed [1] [4].
In [5], the authors use optimal solutions of convex relaxations based on semidefinite programming (SDP) and second-order cone programming (SOCP) to construct an approximate solution of (1), and prove an approximation bound; this is the first nontrivial approximation bound for a convex relaxation of (1). Wang and Xia [6] then focus on the ball case $P_{\mathrm{ball}}$ and establish an approximation bound for their algorithm. In this paper, we focus on the equal-weight max-min dispersion problem, which we call the "max-min dispersion problem" for simplicity. We first model the max-min dispersion problem as a saddle point problem, and then adopt an adaptive custom proximal point algorithm to obtain an $\varepsilon$-approximation scheme¹.
The remainder of the paper is organized as follows. In Section 2, we reformulate the max-min dispersion problem as a saddle point problem. In Section 3, we propose a new adaptive custom proximal point algorithm to approximately solve the saddle point problem and establish its convergence analysis. Section 4 presents numerical comparisons between our proximal point algorithm and an SDP-based algorithm. Conclusions are drawn in Section 5.

Saddle Point Model
Without loss of generality, we drop the weight parameters $\omega_i$ from the objective function, since all the $\omega_i$ are equal. In the rest of this paper, we consider the problem

$$ \max_{x \in \chi}\ \min_{1 \le i \le m}\ \|x - x^i\|^2. \qquad (2) $$

Note that this problem has been proved to be NP-hard in general [5] [6].
Denote by $\Delta_m$ the unit simplex in $\mathbb{R}^m$, that is, $\Delta_m = \{\, y \in \mathbb{R}^m : y \ge 0,\ e^{\mathsf T} y = 1 \,\}$, with $e$ being the all-one vector. Since $\min_{1 \le i \le m} \|x - x^i\|^2 = \min_{y \in \Delta_m} \sum_{i=1}^m y_i \|x - x^i\|^2$, problem (2) is equivalent to the following saddle point problem:

$$ \max_{x \in \chi}\ \min_{y \in \Delta_m}\ \phi(x, y) := \sum_{i=1}^m y_i \|x - x^i\|^2. \qquad (3) $$

Note that $\phi(x, y)$ is convex in $x$ and concave (indeed linear) in $y$ separately, although the saddle point model (3) is neither convex nor concave as a whole.

¹We call a point $x$ an $\varepsilon$-approximation solution of (2) if $g(x) \ge g(x^*) - \varepsilon$, where $g(x) = \min_{y \in \Delta_m} \phi(x, y)$ and $x^*$ is an optimal solution of (2).
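The equivalence between (2) and (3) rests on the fact that a function linear in $y$ attains its minimum over the simplex at a vertex. The following is a quick numerical check of this (a Python/NumPy sketch of ours; the points and the candidate $x$ are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 6, 3
pts = rng.standard_normal((m, n))   # the given points x^1, ..., x^m
x = rng.standard_normal(n)          # a candidate point

d = np.sum((pts - x) ** 2, axis=1)  # d_i = ||x - x^i||^2

# g(x) computed directly as the unweighted min-dispersion objective
g_direct = d.min()

# g(x) computed through the simplex formulation: the minimum of
# sum_i y_i d_i over y in Delta_m sits at the vertex e_j, j = argmin_i d_i
y = np.zeros(m)
y[np.argmin(d)] = 1.0
g_simplex = y @ d

assert abs(g_direct - g_simplex) < 1e-12
```

Any interior $y \in \Delta_m$ only averages the distances, so it can never beat the vertex concentrated on the nearest point.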
Define $g(x) := \min_{y \in \Delta_m} \sum_{i=1}^m y_i \|x - x^i\|^2$, the inner minimum in (3), and let $(x^*, y^*)$ be an optimal saddle point of (3). Note that $x^*$ is necessarily a maximizer of $g$, so it suffices to find a point $x$ with $g(x) \ge g(x^*) - \varepsilon$, because such an $x$ is an $\varepsilon$-approximate solution to (3).
However, the objective of (3) is linear, hence not strongly concave, with respect to $y$. We therefore define a regularized saddle point problem

$$ \max_{x \in \chi}\ \min_{y \in \Delta_m}\ \phi_{\lambda,\gamma}(x, y), \qquad (4) $$

whose objective $\phi_{\lambda,\gamma}$ is $\lambda$-strongly concave in $y$ and $\gamma$-strongly convex in $x$.
Denote the optimal solution of (4) by $(x^*_{\lambda,\gamma}, y^*_{\lambda,\gamma})$. The relation between the optimal value of (3) and that of (4) can be characterized in the following lemma.

Lemma 1. The optimal value of (4) differs from the optimal value of (3) by at most a quantity that vanishes as the regularization parameters $\lambda$ and $\gamma$ tend to zero.

Adaptive Custom Proximal Point Algorithm
In this section, we adopt an adaptive custom proximal point (ACPP) algorithm to solve (4); each subproblem is quadratic and can therefore be solved approximately in a short time. From the optimality conditions of the problem and the convexity of the related functions, (4) can be solved through the following variational inequality: find $u^* \in \Omega$ such that

$$ (u - u^*)^{\mathsf T} F(u^*) \ge 0 \quad \text{for all } u \in \Omega, \qquad (5) $$

where $u = (x, y)$ collects the primal and dual variables and $F$ collects the corresponding partial gradients of the objective of (4). It is easy to verify that $F$ is monotone, so (5) is a monotone variational inequality and its solution set is nonempty.
We give the details of the ACPP method in Algorithm 1.

Algorithm 1. A1: ACPP algorithm for the unweighted max-min dispersion model.

Step 0: Choose the matrices $H$ and $M$ in (6), a starting point $u^0 \in \Omega$, and a tolerance; set $k = 0$.
Step 1: Solve the variational inequality (7) to obtain a predicted point $\tilde{u}^k$.
Step 2: Correct the predicted point to obtain the new iterate $u^{k+1}$.
Step 3: Update the step size $\alpha_k$ adaptively.
Step 4: If the stopping criterion is satisfied, stop; otherwise set $k = k + 1$ and go to Step 1.
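As an illustration of the prediction-and-update idea behind Algorithm 1, the following is a simplified sketch, not the paper's ACPP method: the inner minimization over $y$ is smoothed with an entropy term (playing a role analogous to the regularization in (4), though the paper's regularizer is quadratic), and $x$ is updated by projected gradient ascent on the unit ball. The step size `eta`, the smoothing weight `lam`, the iteration count, and the two-point instance are all illustrative choices of ours.

```python
import numpy as np

def dispersion_ascent(pts, x0, eta=0.1, lam=0.1, iters=300):
    """Heuristic saddle-point iteration for max_{||x||<=1} min_i ||x - x^i||^2.

    y-step: closed-form minimizer of sum_i y_i d_i + lam * sum_i y_i log y_i
            over the simplex (a softmin), smoothing the inner problem.
    x-step: gradient ascent on sum_i y_i ||x - x^i||^2, then projection
            back onto the unit ball.
    """
    x = x0.astype(float).copy()
    for _ in range(iters):
        d = np.sum((pts - x) ** 2, axis=1)
        w = np.exp(-(d - d.min()) / lam)   # softmin weights (shift-stabilized)
        y = w / w.sum()
        grad_x = 2.0 * (x - y @ pts)       # gradient of sum_i y_i ||x - x^i||^2
        x = x + eta * grad_x               # ascent: move away from weighted centroid
        nrm = np.linalg.norm(x)
        if nrm > 1.0:                      # project onto the unit ball
            x /= nrm
    return x

# Two anchor points on the horizontal axis; over the unit ball the
# dispersion optimum is near (0, +-1), with objective value 2.
pts = np.array([[1.0, 0.0], [-1.0, 0.0]])
x0 = np.array([0.2, 0.1])
x = dispersion_ascent(pts, x0)
g0 = np.sum((pts - x0) ** 2, axis=1).min()
g = np.sum((pts - x) ** 2, axis=1).min()
```

On this toy instance the iterate starts at $g(x^0) = 0.65$ and climbs toward the optimal value 2, illustrating how the smoothed inner problem steers the outer ascent.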

Convergence Analysis
We present a convergence theorem for A1 in this section. In order to prove the theorem, we first give some lemmas. Lemmas 2-4 below are standard results from [8] [9] [10].
Lemma 2. For the matrices $H$ and $M$ in (6) and any $s, t > 0$, the corresponding matrix inequality holds.

Lemma 3. Let $\tilde{u}$ be the solution of (7) and let $M$ be defined in (6). Then the associated descent inequality holds for every $u$.

Lemma 4. For $M$ and $H$ in (6), there exists a constant $c_0 > 0$ such that the corresponding contraction bound holds.

We can now state the convergence theorem for the ACPP algorithm.
Theorem 1. The ACPP algorithm is a contraction method for the saddle point problem (4), and the sequence it generates converges to a solution of (4).

Proof. By the lemmas above, there exists a constant $c_0 > 0$ such that the iterates satisfy a contraction inequality in the $H$-norm. On the basis of Lemma 4, the algorithm is therefore an $H$-norm contraction method for the saddle point problem (4), and convergence follows. ∎

Numerical Results
In this section, we report some simple numerical comparisons. All numerical tests are implemented in Matlab R2014a and run on a laptop with a 2.30 GHz processor and 4 GB RAM. We apply Algorithm 1 to our model and compare the ACPP algorithm with the SDP-based algorithm proposed in [6] for solving the ball case $P_{\mathrm{ball}}$. We note that with the weights set to $\omega_i = 1$ in [6], the two algorithms are directly comparable. We run experiments on 24 random instances of dimension $n = 5$, where the number of input points $m$ varies from 6 to 30, with the input points generated randomly. The column cvx_opt presents the optimal objective function values of the 20 instances of $P_{\mathrm{ball}}$ [6]. The next two columns present the statistical results over the 10 runs of the algorithm proposed in [6] and of our algorithm, respectively.
The subcolumns $v_{\max}$, $v_{\min}$ and $v_{\mathrm{ave}}$ give the best, the worst and the average objective function values found among the 10 tests, respectively. The results show that, compared with the SDP-based algorithm, our algorithm is competitive in most cases.
From the table, we can see that the solutions produced by our algorithm are very close to the exact solutions in the second column, and closer than those of the SDP-based algorithm.
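The experimental setup above can be reproduced along the following lines. The paper does not record the exact sampling distribution of the input points, so drawing them uniformly from the unit ball in the sketch below is our assumption, as are the helper names:

```python
import numpy as np

def random_instance(m, n=5, seed=0):
    """Sample m input points for one test instance of dimension n.

    Assumption: the sampling scheme is not specified in the paper, so the
    points are drawn uniformly from the unit ball here.
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((m, n))
    z /= np.linalg.norm(z, axis=1, keepdims=True)  # uniform on the unit sphere
    r = rng.random(m) ** (1.0 / n)                 # radii for a uniform ball
    return z * r[:, None]

def objective(x, pts):
    """Unweighted max-min dispersion objective g(x) = min_i ||x - x^i||^2."""
    return np.sum((pts - x) ** 2, axis=1).min()

# m varies from 6 to 30 in the experiments reported above
pts = random_instance(m=6)
val = objective(np.zeros(5), pts)
```

Running any candidate solver on such instances and recording the best, worst, and average of `objective` over repeated runs reproduces the $v_{\max}$, $v_{\min}$, $v_{\mathrm{ave}}$ statistics of the table.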

Conclusion
In this paper, we reformulated the max-min dispersion problem as a saddle point problem and then adopted an adaptive custom proximal point algorithm to obtain an approximation scheme. It can be proved that the proposed algorithm produces an $\varepsilon$-approximate solution to the max-min dispersion problem with equal weights. Numerical results show that the proposed algorithm is efficient.

S. Q. Tao, Advances in Pure Mathematics, DOI: 10.4236/apm.2018.84022