An Efficient Proximal Point Algorithm for Unweighted Max-Min Dispersion Problem*

In this paper, we first reformulate the max-min dispersion problem as a saddle-point problem. Specifically, we introduce an auxiliary regularized problem whose optimal value approximates that of the original problem to within a prescribed tolerance. We then solve the saddle-point problem by an adaptive custom proximal point algorithm. Numerical results show that the proposed algorithm is efficient.


Tao, S. (2018) An Efficient Proximal Point Algorithm for Unweighted Max-Min Dispersion Problem*. Advances in Pure Mathematics, 8, 400-407. doi: 10.4236/apm.2018.84022.

1. Introduction

Consider the following weighted max-min dispersion problem:

$\underset{x\in \chi }{\mathrm{max}}\left\{f\left(x\right):=\underset{i=1,\cdots ,m}{\mathrm{min}}{\omega }_{i}{‖x-{x}^{i}‖}^{2}\right\},$ (1)

where $\chi =\left\{y\in {ℝ}^{n}|{\left({y}_{1}^{2},\cdots ,{y}_{n}^{2},1\right)}^{\text{T}}\in \mathcal{K}\right\}$ , $\mathcal{K}$ is a convex cone, ${x}^{1},\cdots ,{x}^{m}\in {ℝ}^{n}$ are m given points, ${\omega }_{i}>0$ for $i=1,\cdots ,m$ , and $‖\text{ }\cdot \text{ }‖$ denotes the Euclidean norm. Let $\nu \left({P}_{\chi }\right)$ denote the optimal value of problem (1). The problem is to find a point x in the closed set χ that is farthest, in a weighted max-min sense, from the given points ${x}^{1},\cdots ,{x}^{m}$ in ${ℝ}^{n}$ . It has wide applications in spatial management, facility location, and pattern recognition (see  and references therein). In the equal-weight case, i.e., ${\omega }_{1}=\cdots ={\omega }_{m}$ , (1) has the geometric interpretation of finding the largest Euclidean sphere whose center lies in χ and which encloses none of the given points.

Without loss of generality, we assume that $\nu \left({P}_{\chi }\right)>0$ . The weighted max-min dispersion problem is known to be NP-hard in general, even in the case of equal weights and $\chi ={\left[-1,1\right]}^{n}$  or $\chi =\left\{x|‖x‖\le 1\right\}$  . We denote the two special cases by ${P}_{box}$ and ${P}_{ball}$ , which correspond to setting $\mathcal{K}=\left\{y\in {ℝ}^{n+1}|{y}_{j}\le {y}_{n+1},j=1,\cdots ,n\right\}$ and $\mathcal{K}=\left\{y\in {ℝ}^{n+1}|{y}_{1}+\cdots +{y}_{n}\le {y}_{n+1}\right\}$ respectively.
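The cone encodings of the two special cases can be spot-checked numerically. The following pure-Python sketch (with illustrative random samples, not part of the paper) verifies that, with $y=\left({x}_{1}^{2},\cdots ,{x}_{n}^{2},1\right)^{\text{T}}$, membership of x in ${\left[-1,1\right]}^{n}$ matches the constraint ${y}_{j}\le {y}_{n+1}$ and membership in the unit ball matches ${y}_{1}+\cdots +{y}_{n}\le {y}_{n+1}$:

```python
import random

# Spot-check the cone descriptions of the two special feasible sets: with
# y = (x_1^2, ..., x_n^2, 1)^T, membership of x in [-1,1]^n (resp. the unit
# ball) matches the stated cone constraints on y.

random.seed(4)
n, samples = 6, 0
for _ in range(1000):
    x = [random.uniform(-2, 2) for _ in range(n)]
    y = [xi ** 2 for xi in x] + [1.0]
    # P_box: x in [-1,1]^n  <=>  y_j <= y_{n+1} for j = 1, ..., n
    assert all(abs(xi) <= 1 for xi in x) == all(yj <= y[-1] for yj in y[:-1])
    # P_ball: ||x|| <= 1  <=>  y_1 + ... + y_n <= y_{n+1}
    assert (sum(xi ** 2 for xi in x) <= 1) == (sum(y[:-1]) <= y[-1])
    samples += 1
```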

In the low-dimensional cases of $n\le 3$ with χ a polyhedral set, this problem is solvable in polynomial time  . For larger n, heuristic approaches have been proposed  .

In  , the authors use an optimal solution of convex relaxations based on semidefinite programming (SDP) and second-order cone programming (SOCP) to construct an approximate solution of (1), and prove an approximation bound of $\frac{1-O\left(\sqrt{\mathrm{ln}\left(m\right){\gamma }^{*}}\right)}{2}$ , where ${\gamma }^{*}$ depends on χ. When $\chi ={\left\{-1,1\right\}}^{n}$ or $\chi ={\left[-1,1\right]}^{n}$ , ${\gamma }^{*}=O\left(\frac{1}{n}\right)$ . This is the first nontrivial approximation bound for a convex relaxation of (1). Wang and Xia  then focus on ${P}_{ball}$ and show that the approximation bound of their algorithm, based on a linear programming relaxation, is $\frac{1-O\left(\sqrt{\mathrm{ln}\left(m\right)/n}\right)}{2}$ .

In this paper, we focus on the equal-weight max-min dispersion problem, which we call the “max-min dispersion problem” for simplicity. We first model it as a saddle point problem, and then adopt an adaptive custom proximal point algorithm to obtain an ε-approximation scheme1.

The remainder of the paper is organized as follows. In Section 2, we reformulate the max-min dispersion problem as a saddle point problem. In Section 3, we propose a new adaptive custom proximal point algorithm to approximately solve the saddle point problem and establish its convergence analysis. Section 4 presents some numerical comparisons between our proximal point algorithm and an SDP-based algorithm. Conclusions are given in Section 5.

2. Saddle Point Reformulation

Without loss of generality, we drop the weight parameters ${\omega }_{i}$ from the objective function, since they are all equal. In the remainder of this paper, we consider the problem:

$\underset{x\in \chi }{\mathrm{max}}\left\{f\left(x\right):=\underset{i=1,\cdots ,m}{\mathrm{min}}{‖x-{x}^{i}‖}^{2}\right\}.$ (2)

Note that this problem has been proved to be NP-hard in general   . Let ${\Delta }_{m}$ denote the unit simplex in ${ℝ}^{m}$ , that is, ${\Delta }_{m}=\left\{x\in {ℝ}^{m}|x\ge 0,{e}^{\text{T}}x=1\right\}$ with e the all-ones vector. Then (2) is equivalent to the following saddle point problem:

$\underset{x\in \chi }{\mathrm{max}}\underset{y\in {\Delta }_{m}}{min}\underset{i=1}{\overset{m}{\sum }}\text{ }\text{ }{y}_{i}{‖x-{x}^{i}‖}^{2}.$ (3)
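The equivalence of (2) and (3) rests on the fact that, for fixed x, the inner minimization of the linear function $\sum {y}_{i}{‖x-{x}^{i}‖}^{2}$ over the simplex is attained at a vertex ${e}_{j}$ with $j=\mathrm{arg}\,\mathrm{min}_{i}{‖x-{x}^{i}‖}^{2}$. A small pure-Python check with illustrative random data (not the paper's code):

```python
import random

# For fixed x, min_{y in Δ_m} Σ_i y_i d_i with d_i = ||x - x^i||^2 is a linear
# program over the unit simplex, so its optimum is attained at a vertex e_j
# with j = argmin_i d_i.  Hence (3) collapses to (2).

def sq_dist(x, xi):
    return sum((a - b) ** 2 for a, b in zip(x, xi))

random.seed(0)
n, m = 5, 8
pts = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(m)]
x = [random.uniform(-1, 1) for _ in range(n)]

d = [sq_dist(x, xi) for xi in pts]
f_direct = min(d)                       # objective value of (2) at x

# Sampled feasible y in Δ_m can never push Σ y_i d_i below min_i d_i ...
best = float("inf")
for _ in range(2000):
    w = [random.random() for _ in range(m)]
    s = sum(w)
    y = [wi / s for wi in w]            # a point of the unit simplex
    best = min(best, sum(yi * di for yi, di in zip(y, d)))
assert best >= f_direct - 1e-12

# ... while the vertex y = e_j, j = argmin_i d_i, attains it exactly.
j = d.index(f_direct)
y_vertex = [1.0 if i == j else 0.0 for i in range(m)]
```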

Writing the objective of (3) as $\varphi \left(x,y\right)$ , we have

$\begin{array}{c}\varphi \left(x,y\right)=\underset{i=1}{\overset{m}{\sum }}\text{ }\text{ }{y}_{i}{‖x-{x}^{i}‖}^{2}=\underset{i=1}{\overset{m}{\sum }}\left({y}_{i}{‖x‖}^{2}-2{y}_{i}{\left({x}^{i}\right)}^{\text{T}}x+{y}_{i}{‖{x}^{i}‖}^{2}\right)\\ =-2{\left(Ay\right)}^{\text{T}}x-2{y}^{\text{T}}b+\underset{i=1}{\overset{m}{\sum }}\text{ }\text{ }{y}_{i}{‖x‖}^{2}\\ =-2{\left(Ay\right)}^{\text{T}}x-2{y}^{\text{T}}b+\gamma {‖x‖}^{2},\end{array}$

where $A=\left[{A}_{1},\cdots ,{A}_{m}\right]\in {ℝ}^{n×m}$ with ${A}_{i}={x}^{i}$ , $b={\left(-\frac{1}{2}{‖{x}^{1}‖}^{2},\cdots ,-\frac{1}{2}{‖{x}^{m}‖}^{2}\right)}^{\text{T}}$ , and $\gamma =\underset{i=1}{\overset{m}{\sum }}\text{ }\text{ }{y}_{i}=1$ on ${\Delta }_{m}$ . For fixed y, $\varphi \left(x,y\right)$ is convex in x, and for fixed x it is concave (indeed linear) in y, although the saddle point model itself is neither convex nor concave.
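The algebraic identity above can be verified numerically. The following sketch (illustrative random data, assuming $\gamma =\sum {y}_{i}=1$ on the simplex) checks that the quadratic form agrees with the direct sum of squared distances:

```python
import random

# Numerical check that  Σ_i y_i ||x - x^i||^2 = -2 (A y)^T x - 2 y^T b + ||x||^2
# when A = [x^1, ..., x^m], b_i = -||x^i||^2 / 2 and Σ_i y_i = 1.

random.seed(1)
n, m = 4, 6
X = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(m)]  # rows are x^i
x = [random.uniform(-1, 1) for _ in range(n)]
w = [random.random() for _ in range(m)]
y = [wi / sum(w) for wi in w]                     # y in the unit simplex

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Left-hand side: the weighted sum of squared distances.
lhs = sum(yi * sum((a - b) ** 2 for a, b in zip(x, xi))
          for yi, xi in zip(y, X))

# Right-hand side: the expanded quadratic form with gamma = Σ y_i = 1.
Ay = [sum(y[i] * X[i][j] for i in range(m)) for j in range(n)]   # A y
b = [-0.5 * dot(xi, xi) for xi in X]
rhs = -2 * dot(Ay, x) - 2 * dot(y, b) + dot(x, x)
```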

Define $g\left(x\right)=\underset{y\in {\Delta }_{m}}{\mathrm{min}}\varphi \left(x,y\right)$ , and let $\left({x}^{*},{y}^{*}\right)$ be the optimal saddle point of (3). Note that ${x}^{*}$ is necessarily a maximizer of $g\left(x\right)$ and $g\left({x}^{*}\right)=\underset{x\in \chi }{\mathrm{max}}\underset{y\in {\Delta }_{m}}{\mathrm{min}}\underset{i=1}{\overset{m}{\sum }}\text{ }\text{ }{y}_{i}{‖x-{x}^{i}‖}^{2}$ . It therefore suffices to find a point x such that $g\left(x\right)\ge g\left({x}^{*}\right)-\epsilon$ , because such an x is an ε-approximate solution to (3).

However, $\varphi \left(x,y\right)$ is not strongly concave with respect to y. We therefore define the regularized saddle point problem

$\underset{x\in \chi }{\mathrm{max}}\underset{y\in {\Delta }_{m}}{\mathrm{min}}\left\{{\varphi }_{\lambda }\left(x,y\right):=-2{y}^{\text{T}}{A}^{\text{T}}x-2{y}^{\text{T}}b+\gamma {‖x‖}^{2}-\lambda ‖y‖\right\}.$ (4)

Then ${\varphi }_{\lambda }\left(x,y\right)$ is concave in y and γ-strongly convex in x.

Denote the optimal solution of (4) by $\left({x}^{\circ },{y}^{\circ }\right)$ . The relation between the optimal value of (3) and that of (4) can be characterized in the following lemma.

Lemma 1. $g\left({x}^{*}\right)-g\left({x}^{\circ }\right)\le \epsilon /2$ if $\lambda \le \frac{\epsilon }{2}$ .

Proof. Denoting $\stackrel{˜}{y}=\mathrm{arg}\underset{y\in {\Delta }_{m}}{\mathrm{min}}\varphi \left({x}^{\circ },y\right)$ , we have

$\begin{array}{c}g\left({x}^{\circ }\right)=\varphi \left({x}^{\circ },\stackrel{˜}{y}\right)={\varphi }_{\lambda }\left({x}^{\circ },\stackrel{˜}{y}\right)+\lambda ‖\stackrel{˜}{y}‖\ge {\varphi }_{\lambda }\left({x}^{\circ },{y}^{\circ }\right)+\lambda ‖\stackrel{˜}{y}‖\\ \ge {\varphi }_{\lambda }\left({x}^{*},{y}^{\circ }\right)+\lambda ‖\stackrel{˜}{y}‖=\varphi \left({x}^{*},{y}^{\circ }\right)-\lambda ‖{y}^{\circ }‖+\lambda ‖\stackrel{˜}{y}‖\\ \ge \varphi \left({x}^{*},{y}^{*}\right)-\lambda ‖{y}^{\circ }‖+\lambda ‖\stackrel{˜}{y}‖=g\left({x}^{*}\right)-\lambda ‖{y}^{\circ }‖+\lambda ‖\stackrel{˜}{y}‖\\ \ge g\left({x}^{*}\right)-\lambda ‖{y}^{\circ }‖.\end{array}$

Since $‖y‖\le 1$ for every $y\in {\Delta }_{m}$ , with equality at the vertices (e.g. ${y}_{1}=1,{y}_{2}=\cdots ={y}_{m}=0$ ), we obtain $g\left({x}^{*}\right)-g\left({x}^{\circ }\right)\le \lambda ‖{y}^{\circ }‖\le \lambda \le \frac{\epsilon }{2}$ .
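The mechanism behind Lemma 1 can be illustrated numerically for a fixed x: since $\varphi \left(x,\cdot \right)$ is linear in y and the regularizer is concave, the inner minimum of ${\varphi }_{\lambda }$ over the simplex is attained at a vertex, where $‖y‖=1$, so the regularization perturbs the inner value by exactly λ. A hedged pure-Python sketch with illustrative data:

```python
import math, random

# For fixed x, φ(x, .) is linear in y and φ_λ = φ - λ||y|| is concave in y, so
# min over the simplex Δ_m is attained at a vertex e_i (where ||e_i|| = 1).
# Hence min_y φ_λ(x, y) = g(x) - λ: the inner value moves by at most λ <= ε/2.

random.seed(2)
n, m, lam = 3, 5, 1e-3
pts = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(m)]
x = [random.uniform(-1, 1) for _ in range(n)]
d = [sum((a - b) ** 2 for a, b in zip(x, xi)) for xi in pts]
g = min(d)                                     # g(x) = min_y φ(x, y)

def phi_lam(y):
    return (sum(yi * di for yi, di in zip(y, d))
            - lam * math.sqrt(sum(yi ** 2 for yi in y)))

# Sampled points of Δ_m never drop below g(x) - λ ...
for _ in range(2000):
    w = [random.random() for _ in range(m)]
    y = [wi / sum(w) for wi in w]
    assert phi_lam(y) >= g - lam - 1e-12

# ... and the vertex e_j, j = argmin_i d_i, attains g(x) - λ exactly.
j = d.index(g)
e_j = [1.0 if i == j else 0.0 for i in range(m)]
```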

3. Adaptive Custom Proximal Point Algorithm

In this section, we adopt an adaptive custom proximal point (ACPP) algorithm to solve (4); each subproblem is quadratic and can therefore be solved cheaply. By the optimality conditions of (4) and the convexity of the functions involved, solving (4) is equivalent to solving the following variational inequality in $\left({x}^{\circ },{y}^{\circ }\right)$ :

$‖y‖-‖{y}^{\circ }‖+{\left(\begin{array}{c}x-{x}^{\circ }\\ y-{y}^{\circ }\end{array}\right)}^{\text{T}}\left(\begin{array}{c}-2A{y}^{\circ }+2{x}^{\circ }\\ -2{A}^{\text{T}}{x}^{\circ }-2b\end{array}\right)\ge 0.$

We denote

$u=\left(\begin{array}{c}x\\ y\end{array}\right),\text{\hspace{0.17em}}{u}^{\circ }=\left(\begin{array}{c}{x}^{\circ }\\ {y}^{\circ }\end{array}\right),\text{\hspace{0.17em}}F\left({u}^{\circ }\right)=\left(\begin{array}{c}-2A{y}^{\circ }+2{x}^{\circ }\\ -2{A}^{\text{T}}{x}^{\circ }-2b\end{array}\right),\text{\hspace{0.17em}}\Omega =\chi ×{\Delta }_{m},$

so that the variational inequality reduces to: find ${u}^{\circ }\in \Omega$ such that, for all $u\in \Omega$ ,

$‖y‖-‖{y}^{\circ }‖+{\left(u-{u}^{\circ }\right)}^{\text{T}}F\left({u}^{\circ }\right)\ge 0.$ (5)

It is easy to verify that F is monotone, so (5) is a monotone variational inequality and its solution set is nonempty.

We denote

$\begin{array}{c}M=\left(\begin{array}{cc}t{I}_{n}& {A}^{\text{T}}\\ \theta A& s{I}_{m}\end{array}\right)=\left(\begin{array}{cc}t{I}_{n}+\left(1-\theta \right)\frac{1}{s}{A}^{\text{T}}A& {A}^{\text{T}}\\ A& s{I}_{m}\end{array}\right)\left(\begin{array}{cc}{I}_{n}& 0\\ \left(\theta -1\right)\frac{1}{s}A& {I}_{m}\end{array}\right)\\ =H\stackrel{˜}{M},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\theta \in \left[-1,1\right],\end{array}$ (6)

where H denotes the first (symmetric) factor and $\stackrel{˜}{M}$ the second.
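The factorization in (6) can be sanity-checked numerically. The sketch below (illustrative random A and parameter values, pure Python) builds M, the symmetric factor H with lower-left block A, and $\stackrel{˜}{M}$, and verifies that $H\stackrel{˜}{M}=M$, including the lower-left block $\theta A$:

```python
import random

# Verify the block factorization M = H * M̃ with
#   M  = [[t I, A^T], [θ A, s I]],
#   H  = [[t I + (1-θ)/s A^T A, A^T], [A, s I]]   (symmetric),
#   M̃ = [[I, 0], [(θ-1)/s A, I]].

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

random.seed(3)
n, m = 3, 4
t, s, theta = 2.0, 3.0, 0.5
A = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(m)]   # m x n
At = [list(c) for c in zip(*A)]                                      # n x m
AtA = matmul(At, A)

M = ([[t * (i == j) for j in range(n)] + At[i] for i in range(n)] +
     [[theta * A[i][j] for j in range(n)] + [s * (i == j) for j in range(m)]
      for i in range(m)])
H = ([[t * (i == j) + (1 - theta) / s * AtA[i][j] for j in range(n)] + At[i]
      for i in range(n)] +
     [A[i] + [s * (i == j) for j in range(m)] for i in range(m)])
Mtil = ([[float(i == j) for j in range(n)] + [0.0] * m for i in range(n)] +
        [[(theta - 1) / s * A[i][j] for j in range(n)] +
         [float(i == j) for j in range(m)] for i in range(m)])

HM = matmul(H, Mtil)
```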

We give the details of the ACPP method in Algorithm 1.

Algorithm 1. A1: ACPP algorithm for the unweighted max-min dispersion model.


Convergence Analysis

We present a convergence theorem for A1 in this section. In order to prove the theorem, we first collect some lemmas; Lemmas 2-4 are standard results in    .

Lemma 2. Let H and M be as in (6) with $s>0,t>0$ . If

$ts>\frac{1}{4}{\left(1+\theta \right)}^{2}{\lambda }_{\mathrm{max}}\left({A}^{\text{T}}A\right),$ (13)

then both H and $\frac{1}{2}\left(M+{M}^{\text{T}}\right)$ are positive definite.
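The condition of Lemma 2 can be illustrated numerically. The hedged sketch below (random data, pure Python; not part of the paper's proof) picks t = s with ts slightly above $\frac{1}{4}{\left(1+\theta \right)}^{2}{\lambda }_{\mathrm{max}}\left({A}^{\text{T}}A\right)$ and checks positive definiteness of H and $\frac{1}{2}\left(M+{M}^{\text{T}}\right)$ via a Cholesky factorization, which succeeds exactly for symmetric positive definite matrices:

```python
import math, random

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def lam_max(S, iters=1000):
    # Power iteration for the largest eigenvalue of a symmetric PSD matrix S.
    v = [1.0] * len(S)
    lam = 0.0
    for _ in range(iters):
        w = [sum(S[i][j] * v[j] for j in range(len(S))) for i in range(len(S))]
        lam = math.sqrt(sum(x * x for x in w))
        v = [x / lam for x in w]
    return lam

def is_pos_def(S, tol=1e-12):
    # Cholesky factorization; a non-positive pivot means S is not positive definite.
    k = len(S)
    L = [[0.0] * k for _ in range(k)]
    for i in range(k):
        for j in range(i + 1):
            acc = S[i][j] - sum(L[i][p] * L[j][p] for p in range(j))
            if i == j:
                if acc <= tol:
                    return False
                L[i][i] = math.sqrt(acc)
            else:
                L[i][j] = acc / L[j][j]
    return True

random.seed(5)
n, m, theta = 3, 4, 0.5
A = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(m)]   # m x n
At = [list(c) for c in zip(*A)]
AtA = matmul(At, A)

bound = 0.25 * (1 + theta) ** 2 * lam_max(AtA)
t = s = 1.1 * math.sqrt(bound)          # then t*s = 1.21*bound > bound, i.e. (13)

M = ([[t * (i == j) for j in range(n)] + At[i] for i in range(n)] +
     [[theta * A[i][j] for j in range(n)] + [s * (i == j) for j in range(m)]
      for i in range(m)])
H = ([[t * (i == j) + (1 - theta) / s * AtA[i][j] for j in range(n)] + At[i]
      for i in range(n)] +
     [A[i] + [s * (i == j) for j in range(m)] for i in range(m)])
M_sym = [[0.5 * (M[i][j] + M[j][i]) for j in range(n + m)] for i in range(n + m)]

pd_H, pd_M = is_pos_def(H), is_pos_def(M_sym)
```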

Lemma 3. Let ${\stackrel{˜}{u}}^{k}$ be the solution of (7) and let M be defined in (6). Then, for all ${u}^{\circ }$ , we have

${\left({u}^{k}-{u}^{\circ }\right)}^{\text{T}}M\left({u}^{k}-{\stackrel{˜}{u}}^{k}\right)\ge {\left({u}^{k}-{\stackrel{˜}{u}}^{k}\right)}^{\text{T}}M\left({u}^{k}-{\stackrel{˜}{u}}^{k}\right).$

Lemma 4. For $\stackrel{˜}{M}$ and H in (6), there exists a constant ${c}_{0}>0$ such that the sequence $\left\{{u}^{k}\right\}$ generated by (8) satisfies

${‖{u}^{k+1}-{u}^{\circ }‖}_{H}^{2}\le {‖{u}^{k}-{u}^{\circ }‖}_{H}^{2}-\gamma \left(2-\gamma \right){\alpha }_{k}^{\circ }{c}_{0}{‖{u}^{k}-{\stackrel{˜}{u}}^{k}‖}^{2}.$

We can now state the convergence theorem for the ACPP algorithm.

Theorem 1. The ACPP algorithm is a contraction algorithm for the saddle point problem (4), and the sequence $\left\{{u}^{k}=\left({x}^{k},{y}^{k}\right)\right\}$ generated by the algorithm converges to a solution of (4).

Proof. By Lemma 2, $\frac{1}{2}\left(M+{M}^{\text{T}}\right)$ is positive definite, so there exists a constant ${c}_{0}>0$ with ${d}^{\text{T}}Md\ge {c}_{0}{‖d‖}^{2}$ for all $d\in {ℝ}^{n+m}$ . With this inequality, ${\alpha }_{k}^{\circ }$ is bounded below:

${\alpha }_{k}^{\circ }=\frac{{c}_{0}{‖{u}^{k}-{\stackrel{˜}{u}}^{k}‖}^{2}}{{‖\stackrel{˜}{M}\left({u}^{k}-{\stackrel{˜}{u}}^{k}\right)‖}_{H}^{2}}\ge \frac{{c}_{0}}{‖{\stackrel{˜}{M}}^{\text{T}}H\stackrel{˜}{M}‖}.$

By Lemma 4, we have

${‖{u}^{k+1}-{u}^{\circ }‖}_{H}^{2}\le {‖{u}^{k}-{u}^{\circ }‖}_{H}^{2}-\gamma \left(2-\gamma \right)\frac{{c}_{0}^{2}}{‖{\stackrel{˜}{M}}^{\text{T}}H\stackrel{˜}{M}‖}{‖{u}^{k}-{\stackrel{˜}{u}}^{k}‖}^{2},$

so for $\gamma \in \left(0,2\right)$ , and in particular for $\gamma =1$ , the ACPP algorithm is an H-norm contraction algorithm for the saddle point problem (4).

4. Numerical Results

In this section, we report some simple numerical comparisons. All numerical tests are implemented in Matlab R2014a and run on a laptop with a 2.30 GHz processor and 4 GB RAM. We apply Algorithm 1 to our model (4); the resulting procedure is given in detail in Algorithm 2.

We present a numerical comparison between our ACPP algorithm and the SDP-based algorithm proposed in  for solving ${P}_{ball}$ ( $\chi =\left\{x|‖x‖\le 1\right\}$ ). Note that when the weights in  are ${\omega }_{i}=1$ , the two algorithms are directly comparable. We run experiments on 24 random instances of dimension $n=5$ , where the number of input points m varies from 6 to 30. All input points ${x}^{i}\left(i=1,\cdots ,m\right)$ with $m=6,\cdots ,30$ together form an $n×450$ matrix, which we generate with the following Matlab script:

rand('state', 0); X = 2*rand(n, 450) - 1;

where rand produces pseudorandom numbers uniformly distributed between 0 and 1. We set $\epsilon ={10}^{-3},\lambda =\epsilon /2$ , and report the numerical results in Table 1.
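For readers without Matlab, an equivalent instance generator might look as follows in Python. This is a hedged sketch: it draws from the same uniform distribution on $\left[-1,1\right]$, but the underlying random streams differ from Matlab's, so it will not reproduce the paper's exact instances:

```python
import random

def generate_instances(n=5, total=450, seed=0):
    """Mimic the Matlab call X = 2*rand(n, 450) - 1: an n x total matrix of
    points drawn uniformly from [-1, 1] (random streams differ from Matlab)."""
    rng = random.Random(seed)
    return [[2 * rng.random() - 1 for _ in range(total)] for _ in range(n)]

X = generate_instances()
```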

Algorithm 2. A2: ACPP algorithm for max-min dispersion problem.

Table 1. Numerical results for n = 5, m = 6 to 30.

The column cvx_opt reports the optimal objective function values of the 20 instances of ${P}_{ball}$  . The next two columns report statistics over 10 runs of the algorithm proposed in  and of our ACPP algorithm, respectively. The subcolumns ${v}_{\mathrm{max}},{v}_{\mathrm{min}}$ and ${v}_{\text{ave}}$ give the best, the worst and the average objective function values found over the 10 runs. The results show that, compared with the SDP-based algorithm, our algorithm is competitive in most cases.

From the table, we can see that the solutions produced by our algorithm are very close to the exact optimal values in the cvx_opt column, and are generally better than those of the SDP-based algorithm.

5. Conclusion

In this paper, we reformulate the max-min dispersion problem as a saddle point problem and then adopt an adaptive custom proximal point algorithm to obtain an approximation scheme. It can be proved that the proposed algorithm produces an ε-approximate solution to the max-min dispersion problem with equal weights. Numerical results show that the proposed algorithm is efficient.

NOTES

1We say $g\left(x\right)$ is an ε-approximation of $g\left({x}^{*}\right)$ if $g\left(x\right)\ge g\left({x}^{*}\right)-\epsilon$ .

Conflicts of Interest

The authors declare no conflicts of interest.

[1] Dasarathy, B. and White, L.J. (1980) A Maximin Location Problem. Operations Research, 28, 1385-1401. https://doi.org/10.1287/opre.28.6.1385
[2] Johnson, M.E., Moore, L.M. and Ylvisaker, D. (1990) Minimax and Maximin Distance Designs. Journal of Statistical Planning and Inference, 26, 131-148. https://doi.org/10.1016/0378-3758(90)90122-B
[3] Schaback, R. (1995) Multivariate Interpolation and Approximation by Translates of a Basis Function. World Scientific, Singapore, 491-514.
[4] White, D.J. (1996) A Heuristic Approach to a Weighted Maxmin Dispersion Problem. IMA Journal of Management Mathematics, 7, 219-231. https://doi.org/10.1093/imaman/7.3.219
[5] Haines, S., Loeppky, J., Tseng, P. and Wang, X. (2013) Convex Relaxations of the Weighted Maxmin Dispersion Problem. SIAM Journal on Optimization, 23, 2264-2294. https://doi.org/10.1137/120888880
[6] Wang, S. and Xia, Y. (2016) On the Ball-Constrained Weighted Maximin Dispersion Problem. SIAM Journal on Optimization, 26, 1565-1588. https://doi.org/10.1137/15M1047167
[7] Ravi, S.S., Rosenkrantz, D.J. and Tayi, G.K. (1994) Heuristic and Special Case Algorithms for Dispersion Problems. Operations Research, 42, 299-310. https://doi.org/10.1287/opre.42.2.299
[8] He, B. and Yuan, X. (2010) A Contraction Method with Implementable Proximal Regularization for Linearly Constrained Convex Programming. Optimization Online, 1-14.
[9] He, B., Fu, X. and Jiang, Z. (2009) A Proximal Point Algorithm Using a Linear Proximal Term. Journal of Optimization Theory and Applications, 141, 209-239. https://doi.org/10.1007/s10957-008-9493-0
[10] Martinet, B. (1970) Régularisation d'inéquations variationnelles par approximations successives. Revue Française d'Informatique et de Recherche Opérationnelle, 4, 154-159.