
In this paper, we first reformulate the max-min dispersion problem as a saddle-point problem. Specifically, we introduce an auxiliary problem whose optimal value gives an upper bound on that of the original problem. We then solve the saddle-point problem with an adaptive custom proximal point algorithm. Numerical results show that the proposed algorithm is efficient.

Consider the following weighted max-min dispersion problem:

$$\max_{x \in \chi} \Big\{ f(x) := \min_{i=1,\cdots,m} \omega_i \|x - x^i\|^2 \Big\}, \qquad (1)$$

where $\chi = \{ y \in \mathbb{R}^n \mid (y_1^2, \cdots, y_n^2, 1)^T \in K \}$, $K$ is a convex cone, $x^1, \cdots, x^m \in \mathbb{R}^n$ are $m$ given points, $\omega_i > 0$ for $i = 1, \cdots, m$, and $\|\cdot\|$ denotes the Euclidean norm. Let $\nu(P_\chi)$ denote the optimal value of problem (1). The problem aims to find a point $x$ in the closed set $\chi$ that is farthest, in a weighted max-min sense, from the given points $x^1, \cdots, x^m$. It has wide applications in spatial management, facility location, and pattern recognition (see [
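As a small illustration (not part of the original paper), the objective $f$ can be evaluated directly for a candidate point; the example points, weights, and candidate $x$ below are arbitrary:

```python
import numpy as np

def dispersion_objective(x, points, weights):
    """f(x) = min_i w_i * ||x - x^i||^2, the weighted max-min dispersion objective."""
    x = np.asarray(x, dtype=float)
    return min(w * np.sum((x - np.asarray(p)) ** 2)
               for p, w in zip(points, weights))

# Example: three given points in R^2 with equal weights.
points = [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0)]
weights = [1.0, 1.0, 1.0]
print(dispersion_objective((0.0, 0.0), points, weights))  # squared distance to the nearest point
```

Maximizing this value over $x \in \chi$ is what makes the problem hard: the objective is a minimum of convex quadratics, and we seek its maximum.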

Without loss of generality, we assume that $\nu(P_\chi) > 0$. The weighted max-min dispersion problem is NP-hard in general, even in the case of equal weights and $\chi = [-1,1]^n$ [

In the low-dimensional case $n \le 3$ with $\chi$ a polyhedral set, the problem is solvable in polynomial time [

In paper [

of $\big(1 - O(\sqrt{\ln(m)}\,\gamma^*)\big)^2$, where $\gamma^*$ depends on $\chi$. When $\chi = \{-1,1\}^n$ or $\chi = [-1,1]^n$, $\gamma^* = O(1/\sqrt{n})$. This is the first nontrivial approximation bound for a convex relaxation of (1). Wang and Xia [

In this paper, we focus on the equal-weight max-min dispersion problem, which we call the "max-min dispersion problem" for simplicity. First, we model the max-min dispersion problem as a saddle point problem; we then adopt an adaptive custom proximal point algorithm to obtain an ε-approximation scheme^{1}.

The remainder of the paper is organized as follows. In Section 2, we reformulate the max-min dispersion problem as a saddle point problem. In Section 3, we propose a new adaptive custom proximal point algorithm to approximately solve the saddle point problem and establish its convergence analysis. Section 4 presents numerical comparisons between our proximal point algorithm and the SDP-based algorithm. Conclusions are drawn in Section 5.

Without loss of generality, we drop the weight parameters $\omega_i$ from the objective function, since all of them are equal. In the remainder of this paper, we consider the problem:

$$\max_{x \in \chi} \Big\{ f(x) := \min_{i=1,\cdots,m} \|x - x^i\|^2 \Big\}. \qquad (2)$$

Note that this problem has been proved to be NP-hard in general [

Problem (2) is equivalent to the saddle point problem

$$\max_{x \in \chi} \min_{y \in \Delta_m} \sum_{i=1}^m y_i \|x - x^i\|^2, \qquad (3)$$

where $\Delta_m = \{ y \in \mathbb{R}^m \mid \sum_{i=1}^m y_i = 1,\ y_i \ge 0 \}$ is the standard simplex. Denote

$$\phi(x,y) = \sum_{i=1}^m y_i \|x - x^i\|^2 = \sum_{i=1}^m \big( y_i\|x\|^2 - 2 y_i (x^i)^T x + y_i \|x^i\|^2 \big) = -2(Ay)^T x - 2 y^T b + \Big(\sum_{i=1}^m y_i\Big) \|x\|^2 = -2(Ay)^T x - 2 y^T b + \gamma \|x\|^2,$$

where $A = [x^1, \cdots, x^m] \in \mathbb{R}^{n \times m}$, $b = \big( -\tfrac{1}{2}\|x^1\|^2, \cdots, -\tfrac{1}{2}\|x^m\|^2 \big)^T$, and $\gamma = \sum_{i=1}^m y_i = 1$. The function $\phi(x,y)$ is convex in $x$ and concave (in fact linear) in $y$ separately, although the saddle point problem itself is neither convex nor concave.
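The matrix form of $\phi$ derived above can be sanity-checked numerically. The following sketch (illustrative dimensions and random data, not from the paper) compares the direct sum with the expression $-2(Ay)^T x - 2y^T b + \gamma\|x\|^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 6
X = rng.standard_normal((n, m))       # columns are the given points x^1, ..., x^m
A = X                                 # A = [x^1, ..., x^m]
b = -0.5 * np.sum(X ** 2, axis=0)     # b_i = -||x^i||^2 / 2

x = rng.standard_normal(n)
y = rng.random(m); y /= y.sum()       # y in the simplex, so gamma = sum(y) = 1
gamma = y.sum()

phi_direct = sum(y[i] * np.sum((x - X[:, i]) ** 2) for i in range(m))
phi_matrix = -2 * (A @ y) @ x - 2 * y @ b + gamma * x @ x
print(abs(phi_direct - phi_matrix))   # agrees up to rounding error
```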

Define $g(x) = \min_{y \in \Delta_m} \phi(x,y)$, and let $(x^*, y^*)$ be the optimal saddle point of problem (3). Note that $x^*$ is necessarily a maximizer of $g(x)$, and

$$g(x^*) = \max_{x \in \chi} \min_{y \in \Delta_m} \sum_{i=1}^m y_i \|x - x^i\|^2.$$

Now it suffices to find a point $x$ such that $g(x) \ge g(x^*) - \varepsilon$, because such an $x$ is necessarily an $\varepsilon$-approximate solution to (3).
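Because $\phi(x, \cdot)$ is linear in $y$, its minimum over the simplex $\Delta_m$ is attained at a vertex, which is why $g(x)$ coincides with the original objective $\min_i \|x - x^i\|^2$. A quick numerical sketch of this fact (arbitrary example data, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 5
X = rng.standard_normal((n, m))      # columns: the given points x^i
x = rng.standard_normal(n)

sq_dists = np.sum((X - x[:, None]) ** 2, axis=0)   # ||x - x^i||^2 for each i

# A linear function of y is minimized over the simplex at a vertex e_i,
# so g(x) = min_{y in Delta_m} sum_i y_i ||x - x^i||^2 equals min_i ||x - x^i||^2.
g_x = sq_dists.min()

# Sanity check: random points of the simplex never beat the vertex minimum.
samples = rng.dirichlet(np.ones(m), size=200)      # 200 random y in Delta_m
print(bool((samples @ sq_dists >= g_x - 1e-12).all()))
```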

However, $\phi(x,y)$ is not strongly concave with respect to $y$. We therefore define the regularized saddle point problem

$$\max_{x \in \chi} \min_{y \in \Delta_m} \Big\{ \phi_\lambda(x,y) := -2 y^T A^T x - 2 y^T b + \gamma \|x\|^2 + \lambda \|y\| \Big\}. \qquad (4)$$

The regularization term $\lambda\|y\|$ strengthens the curvature of $\phi_\lambda(x,y)$ in $y$, while $\phi_\lambda(x,y)$ remains $\gamma$-strongly convex in $x$. Since the regularization increases the inner objective, the optimal value of (4) is an upper bound on that of (3).

Denote the optimal solution of (4) by $(x^\circ, y^\circ)$. The relation between the optimal values of (3) and (4) is characterized in the following lemma.

Lemma 1. $g(x^*) - g(x^\circ) \le \varepsilon/2$ whenever $\lambda \le \varepsilon/2$.

Proof. Denoting $\tilde{y} = \operatorname*{arg\,min}_{y \in \Delta_m} \phi(x^\circ, y)$, we have

$$g(x^\circ) = \phi(x^\circ, \tilde{y}) = \phi_\lambda(x^\circ, \tilde{y}) - \lambda\|\tilde{y}\| \ge \phi_\lambda(x^\circ, y^\circ) - \lambda\|\tilde{y}\| \ge \phi_\lambda(x^*, y^\circ) - \lambda\|\tilde{y}\| = \phi(x^*, y^\circ) + \lambda\|y^\circ\| - \lambda\|\tilde{y}\| \ge \phi(x^*, y^*) + \lambda\|y^\circ\| - \lambda\|\tilde{y}\| = g(x^*) + \lambda\|y^\circ\| - \lambda\|\tilde{y}\| \ge g(x^*) - \lambda\|\tilde{y}\|.$$

Since $\|y\| \le 1$ for every $y \in \Delta_m$, with the maximum $\|y\| = 1$ attained at the vertices (e.g. $y_1 = 1$, $y_2 = \cdots = y_m = 0$), we obtain $g(x^*) - g(x^\circ) \le \lambda\|\tilde{y}\| \le \lambda \le \varepsilon/2$. $\square$

In this section, we adopt an adaptive custom proximal point (ACPP) algorithm to solve (4); each of its subproblems is quadratic and can therefore be solved quickly. From the optimality conditions of (4) and the convexity of the functions involved, a solution $(x^\circ, y^\circ)$ of (4) is characterized by the following variational inequality: for all $(x, y)$,

$$\lambda(\|y\| - \|y^\circ\|) + \begin{pmatrix} x - x^\circ \\ y - y^\circ \end{pmatrix}^T \begin{pmatrix} -2A y^\circ + 2\gamma x^\circ \\ -2A^T x^\circ - 2b \end{pmatrix} \ge 0.$$

Denoting

$$u = \begin{pmatrix} x \\ y \end{pmatrix}, \quad u^\circ = \begin{pmatrix} x^\circ \\ y^\circ \end{pmatrix}, \quad F(u^\circ) = \begin{pmatrix} -2A y^\circ + 2\gamma x^\circ \\ -2A^T x^\circ - 2b \end{pmatrix}, \quad \Omega = \mathbb{R}^n \times \mathbb{R}^m,$$

the variational inequality reduces to: find $u^\circ \in \Omega$ such that

$$\lambda(\|y\| - \|y^\circ\|) + (u - u^\circ)^T F(u^\circ) \ge 0 \quad \text{for all } u \in \Omega. \qquad (5)$$

One can verify that $F$ is monotone, so (5) is a monotone variational inequality and its solution set is nonempty.

We denote

$$M = \begin{pmatrix} t I_n & A \\ \theta A^T & s I_m \end{pmatrix} = \begin{pmatrix} t I_n + \frac{1-\theta}{s} A A^T & A \\ A^T & s I_m \end{pmatrix} \begin{pmatrix} I_n & 0 \\ \frac{\theta - 1}{s} A^T & I_m \end{pmatrix} = H \tilde{M}, \qquad \theta \in [-1, 1]. \qquad (6)$$
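The block factorization in (6) can be verified numerically. The sketch below reads the blocks of $M$ as $tI_n$, $A$, $\theta A^T$, $sI_m$ (this block layout is our reading of the extraction-garbled original); the dimensions and parameter values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 4, 6
A = rng.standard_normal((n, m))
t, s, theta = 5.0, 3.0, 0.5

In, Im = np.eye(n), np.eye(m)
M = np.block([[t * In, A], [theta * A.T, s * Im]])
H = np.block([[t * In + (1 - theta) / s * A @ A.T, A], [A.T, s * Im]])
M_tilde = np.block([[In, np.zeros((n, m))], [(theta - 1) / s * A.T, Im]])

print(np.allclose(M, H @ M_tilde))  # the factorization M = H * M_tilde holds
```

Note that $H$ is symmetric by construction, which is what allows it to induce the norm $\|\cdot\|_H$ used below.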

We give the details of the ACPP method in Algorithm 1.

Algorithm 1. A1: ACPP algorithm for the unweighted max-min dispersion model.


Convergence Analysis

We present a convergence theorem for A1 in this section. To prove the theorem, we first collect some lemmas. Lemmas 2-4 below are standard results in [

Lemma 2. Let $H$ and $M$ be defined in (6), and assume $s > 0$, $t > 0$ and

$$t s > \frac{1}{4}(1 + \theta)^2 \lambda_{\max}(A^T A). \qquad (13)$$

Then both $H$ and $\frac{1}{2}(M + M^T)$ are positive definite.
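Condition (13) can be checked numerically for concrete parameters. In the sketch below the block layout of $M$ and $H$ follows (6); the random $A$ and the choices of $t$, $s$, $\theta$ are arbitrary, with $ts$ set 10% above the threshold in (13):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 4, 6
A = rng.standard_normal((n, m))
theta = 0.5
lam_max = np.linalg.eigvalsh(A.T @ A).max()

# Choose t and s so that t*s is strictly above the threshold in (13).
t = 2.0
s = 1.1 * (1 + theta) ** 2 * lam_max / (4 * t)

M = np.block([[t * np.eye(n), A], [theta * A.T, s * np.eye(m)]])
H = np.block([[t * np.eye(n) + (1 - theta) / s * A @ A.T, A],
              [A.T, s * np.eye(m)]])

sym_M = 0.5 * (M + M.T)
print(np.linalg.eigvalsh(H).min() > 0, np.linalg.eigvalsh(sym_M).min() > 0)
```

By a Schur-complement argument, $\frac12(M+M^T) \succ 0$ is exactly equivalent to (13), while $H \succ 0$ follows since $(1+\theta)^2/4 \ge \theta$ for all $\theta$.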

Lemma 3. Let $\tilde{u}^k$ be the solution of (7) and let $M$ be defined in (6). Then for any solution $u^\circ$,

$$(u^k - u^\circ)^T M (u^k - \tilde{u}^k) \ge (u^k - \tilde{u}^k)^T M (u^k - \tilde{u}^k).$$

Lemma 4. For $\tilde{M}$ and $H$ in (6), there exists a constant $c_0 > 0$ such that the sequence $\{u^k\}$ generated by (8) satisfies

$$\|u^{k+1} - u^\circ\|_H^2 \le \|u^k - u^\circ\|_H^2 - \gamma(2 - \gamma)\,\alpha_k^\circ\, c_0\, \|u^k - \tilde{u}^k\|^2.$$

Now we can state the convergence theorem for the ACPP algorithm.

Theorem 1. The ACPP algorithm is a contraction method for the saddle point problem (4), and the sequence $\{u^k = (x^k, y^k)\}$ generated by the algorithm converges to a solution of (4).

Proof. Since $\frac{1}{2}(M + M^T)$ is positive definite, there exists a constant $c_0 > 0$ such that $d^T M d \ge c_0 \|d\|^2$ for all $d \in \mathbb{R}^{n+m}$. Under this inequality, the step size $\alpha_k^\circ$ is bounded below:

$$\alpha_k^\circ = \frac{c_0 \|u^k - \tilde{u}^k\|^2}{\|\tilde{M}(u^k - \tilde{u}^k)\|_H^2} \ge \frac{c_0}{\|\tilde{M}^T H \tilde{M}\|}.$$

By Lemma 4, we then have

$$\|u^{k+1} - u^\circ\|_H^2 \le \|u^k - u^\circ\|_H^2 - \frac{\gamma(2 - \gamma)\, c_0^2}{\|\tilde{M}^T H \tilde{M}\|} \|u^k - \tilde{u}^k\|^2.$$

In particular, when $\gamma = 1$, the ACPP algorithm is an $H$-norm contraction method for the saddle point problem (4). $\square$

In this section, we present some simple numerical comparisons. All numerical tests are implemented in Matlab R2014a and run on a laptop with a 2.30 GHz processor and 4 GB of RAM. We now apply Algorithm 1 to our model (4); the resulting procedure is given in detail in Algorithm 2.

We present a numerical comparison between our ACPP algorithm and the SDP-based algorithm proposed in [

rand('state', 0); X = 2*rand(n, 450) - 1;

where rand() is the MATLAB function that produces pseudorandom numbers uniformly distributed in (0, 1), so the columns of X are points drawn uniformly from $[-1,1]^n$. We set $\varepsilon = 10^{-3}$, $\lambda = \varepsilon/2$, and report the numerical results in
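The paper generates its test instances in MATLAB. An equivalent generator in Python (an illustration only; NumPy's generator replaces MATLAB's legacy 'state' seeding, so the actual random numbers differ) might look like:

```python
import numpy as np

def make_instance(n, m=450, seed=0):
    """Sample m points uniformly from [-1, 1]^n, mirroring X = 2*rand(n, m) - 1."""
    rng = np.random.default_rng(seed)
    return 2.0 * rng.random((n, m)) - 1.0

X = make_instance(5)
print(X.shape, bool(X.min() >= -1.0 and X.max() <= 1.0))
```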

Algorithm 2. A2: ACPP algorithm for max-min dispersion problem.

| m | cvx_opt | SDP-based (v_max) | SDP-based (v_min) | SDP-based (v_ave) | our algorithm |
|---|---|---|---|---|---|
| 6 | 2.74 | 2.06 | 1.25 | 1.75 | 2.01 |
| 7 | 2.50 | 1.77 | 1.06 | 1.33 | 2.19 |
| 8 | 1.80 | 1.49 | 0.74 | 1.07 | 1.68 |
| 9 | 2.45 | 1.50 | 0.72 | 1.06 | 2.35 |
| 10 | 2.31 | 1.61 | 0.66 | 1.18 | 2.12 |
| 11 | 2.22 | 1.85 | 0.76 | 1.27 | 1.98 |
| 12 | 2.21 | 1.58 | 0.76 | 1.07 | 1.92 |
| 13 | 1.74 | 1.25 | 0.64 | 0.91 | 1.72 |
| 14 | 1.81 | 1.43 | 0.56 | 1.02 | 1.58 |
| 15 | 2.19 | 1.41 | 0.72 | 1.00 | 1.90 |
| 16 | 1.89 | 1.15 | 0.62 | 0.93 | 1.88 |
| 17 | 2.13 | 1.16 | 0.60 | 1.05 | 1.93 |
| 18 | 1.93 | 1.24 | 0.50 | 0.88 | 1.87 |
| 19 | 1.93 | 1.22 | 0.61 | 0.97 | 1.91 |
| 20 | 2.51 | 1.46 | 0.72 | 1.16 | 1.98 |
| 21 | 2.07 | 1.37 | 0.65 | 0.97 | 2.01 |
| 22 | 2.20 | 1.08 | 0.55 | 0.79 | 1.85 |
| 23 | 2.13 | 0.99 | 0.55 | 0.81 | 1.95 |
| 24 | 1.85 | 0.98 | 0.50 | 0.55 | 1.80 |
| 25 | 1.92 | 1.28 | 0.70 | 0.98 | 1.91 |
| 26 | 1.82 | 0.86 | 0.49 | 0.68 | 1.45 |
| 27 | 1.88 | 1.05 | 0.54 | 0.73 | 1.76 |
| 28 | 1.85 | 1.45 | 0.56 | 0.87 | 1.45 |
| 29 | 2.39 | 1.25 | 0.44 | 1.02 | 2.30 |
| 30 | 1.82 | 1.14 | 0.39 | 0.84 | 1.77 |

The column cvx_opt presents the optimal objective function values of the 20 instances of $P_{ball}$ [

From the table, we can see that the objective values produced by our algorithm are very close to the exact optimal values in the second column (cvx_opt), and are better than those of the SDP-based algorithm.

In this paper, we reformulate the max-min dispersion problem as a saddle point problem and then adopt an adaptive custom proximal point algorithm to obtain an approximation scheme. It can be proved that the proposed algorithm produces an ε-approximate solution to the max-min dispersion problem with equal weights. Numerical results show that the proposed algorithm is efficient.

Tao, S.Q. (2018) An Efficient Proximal Point Algorithm for Unweighted Max-Min Dispersion Problem. Advances in Pure Mathematics, 8, 400-407. https://doi.org/10.4236/apm.2018.84022