Journal of Applied Mathematics and Physics
Vol. 08, No. 01 (2020), Article ID: 97444, 13 pages
DOI: 10.4236/jamp.2020.81002

Sparse Solutions of Mixed Complementarity Problems

Peng Zhang, Zhensheng Yu

College of Science, University of Shanghai for Science and Technology, Shanghai, China

Copyright © 2020 by author(s) and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: November 28, 2019; Accepted: December 24, 2019; Published: December 27, 2019

ABSTRACT

In this paper, we consider an extragradient thresholding algorithm for finding the sparse solution of mixed complementarity problems (MCPs). We establish a relaxed $\ell_1$ regularized projection minimization model for the original problem and design an extragradient thresholding algorithm (ETA) to solve the regularized model. Furthermore, we prove that any cluster point of the sequence generated by ETA is a solution of the MCP. Finally, numerical experiments show that the ETA algorithm can effectively solve the $\ell_1$ regularized projection minimization model and obtain the sparse solution of the mixed complementarity problem.

Keywords:

Mixed Complementarity Problem, Sparse Solution, $\ell_1$ Regularized Projection Minimization Model, Extragradient Thresholding Algorithm

1. Introduction

Let $F: \mathbb{R}^n \to \mathbb{R}^n$ be a continuously differentiable function, and define the nonempty set

$$\Omega := \{ x \in \mathbb{R}^n \mid a \le x \le b \},$$

where $a \in \{\mathbb{R} \cup \{-\infty\}\}^n$, $b \in \{\mathbb{R} \cup \{+\infty\}\}^n$ and $a < b$ ($a_i < b_i$, $i = 1, 2, \dots, n$). The mixed complementarity problem is to find a vector $x \in \Omega$ such that

$$(y - x)^T F(x) \ge 0, \quad \forall y \in \Omega. \tag{1}$$

The mixed complementarity problem, also known as the box-constrained variational inequality problem, is denoted by MCP(a, b, F). In particular, if $\Omega := \mathbb{R}^n_+$, the mixed complementarity problem becomes the nonlinear complementarity problem (NCP): find a vector $x \ge 0$ such that

$$F(x) \ge 0, \quad F(x)^T x = 0.$$

Moreover, if $F(x) := Mx + q$, where $M \in \mathbb{R}^{n \times n}$ and $q \in \mathbb{R}^n$, the nonlinear complementarity problem reduces to the linear complementarity problem (LCP):

$$x \ge 0, \quad Mx + q \ge 0, \quad (Mx + q)^T x = 0.$$

The solution set of the mixed complementarity problem is denoted by SOL(F); throughout this paper, we assume $\mathrm{SOL}(F) \neq \emptyset$.

The MCP has wide applications in science and engineering [1] [2], and many results on its theory and algorithms have been developed (see, e.g., [3] [4] [5] [6] ).

In recent years, the problem of recovering an unknown sparse solution from some linear constraints has been an active topic with a range of applications including signal processing, machine learning, and computer vision [7], and there are many articles available on finding the sparse solutions of systems of linear equations [8] - [13] as well as of optimization problems [14] [15] [16].

In contrast with the fast development in sparse solutions of optimization and linear equations, there is little research available on the sparse solutions of complementarity problems. The sparse solution problem of linear complementarity was first studied by Chen and Xiang [17]; using the concept of the minimum $\ell_p$ ($0 < p < 1$) norm solution, they studied the characterization and computation of sparse solutions and minimum $\ell_p$-norm solutions of linear complementarity problems. Recently, several solution methods have been proposed for LCP and NCP, for example, the shrinkage-thresholding projection method [18], the half thresholding projection algorithm [19] and the extragradient thresholding method [20].

Following the line of research in [17] [18] [19] [20], in this paper we design an extragradient thresholding algorithm for the sparse solution of MCP, which can be seen as an extension of the sparse solution algorithm for NCP.

Due to the relationship between the MCP and the variational inequality, we seek a vector $x \in \Omega$ by solving the $\ell_0$-norm minimization problem:

$$\min \|x\|_0 \quad \text{s.t.} \quad (y - x)^T F(x) \ge 0, \tag{2}$$

for any $y \in \Omega$, where $\|x\|_0$ stands for the number of nonzero components of $x$; a solution of problem (2) is called a sparse solution of the MCP.

In essence, the minimization problem (2) is a sparse optimization problem with equilibrium constraints. Solutions are not easy to obtain because of the equilibrium constraints, even when the objective function is continuous.

To overcome the difficulty caused by the $\ell_0$ norm, many researchers have suggested relaxing the $\ell_0$ norm and considering the $\ell_1$ norm instead, see [21]. Motivated by this outstanding work, we apply $\ell_1$-norm minimization to find the sparse solution of the MCP, and we obtain the following minimization problem to approximate problem (2):

$$\min_{x \in \Omega} \|x\|_1 \quad \text{s.t.} \quad (y - x)^T F(x) \ge 0 \tag{3}$$

for any $y \in \Omega$, where $\|x\|_1 = \sum_{i=1}^n |x_i|$.

Given a vector $x \in \Omega$, let $P_\Omega(x)$ be the projection of $x$ onto $\Omega$; for convenience, we write $P_\Omega(x) = [x]_\Omega$. It is well known (see, e.g., [22]) that $x^*$ is a solution of problem (1) if and only if it satisfies the projection equation

$$W(x^*) := x^* - [x^* - F(x^*)]_\Omega = 0, \tag{4}$$

and therefore problem (3) is equivalent to the following optimization problem:

$$\min_{x \in \mathbb{R}^n} f(x) := \|x\|_1 \quad \text{s.t.} \quad x = [x - F(x)]_\Omega. \tag{5}$$

In order to simplify the objective function, we introduce a new variable $z \in \mathbb{R}^n$ and a regularization parameter $\lambda > 0$, and establish the corresponding regularized minimization problem:

$$\min_{x, z \in \mathbb{R}^n} f_\lambda(x, z) := \|x - z\|^2 + \lambda \|x\|_1 \quad \text{s.t.} \quad z = [x - F(x)]_\Omega. \tag{6}$$

We call (6) the $\ell_1$ regularized projection minimization problem.
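To make the pieces of (4)-(6) concrete, the following Python/NumPy sketch (our own illustration; the helper names proj_box, natural_residual and f_lambda are ours, not from the paper) evaluates the box projection, the natural residual $W(x)$, and the objective $f_\lambda$:

```python
import numpy as np

def proj_box(x, a, b):
    """Componentwise projection onto Omega = {x : a <= x <= b}.
    Entries of a and b may be -np.inf / +np.inf for unbounded coordinates."""
    return np.minimum(np.maximum(x, a), b)

def natural_residual(x, F, a, b):
    """W(x) = x - [x - F(x)]_Omega of Equation (4); x solves the MCP iff W(x) = 0."""
    return x - proj_box(x - F(x), a, b)

def f_lambda(x, z, lam):
    """Objective of the l1 regularized model (6): ||x - z||^2 + lam * ||x||_1."""
    return np.dot(x - z, x - z) + lam * np.abs(x).sum()
```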

This paper is organized as follows. In Section 2, we study the relation between the solutions of model (6) and those of problem (3), and we show theoretically that (6) is a good approximation of problem (3). In Section 3, we propose an extragradient thresholding algorithm (ETA) for (6) and analyze its convergence. Numerical results are given in Section 4, and conclusions are drawn in Section 5.

2. The $\ell_1$ Regularized Approximation

In this section, we study the relation between the solutions of model (6) and those of model (3). The following theorem shows that model (6) is a good approximation of problem (3).

Theorem 2.1. For any fixed $\lambda > 0$, the solution set of (6) is nonempty and bounded. Let $(x_{\lambda_k}, z_{\lambda_k})$ be a solution of (6) with $\lambda = \lambda_k$, where $\{\lambda_k\}$ is any positive sequence converging to 0. If $\mathrm{SOL}(F) \neq \emptyset$, then $\{(x_{\lambda_k}, z_{\lambda_k})\}$ has at least one accumulation point, and any accumulation point $\bar{x}$ of $\{x_{\lambda_k}\}$ is a solution of problem (3).

Proof. For any fixed $\lambda > 0$, we have

$$f_\lambda(x, z) \to +\infty \quad \text{as} \quad \|(x, z)\| \to \infty, \tag{7}$$

which means $f_\lambda(x, z)$ is coercive. On the other hand, it is clear that $f_\lambda(x, z) \ge 0$ for any $x \in \mathbb{R}^n$ and $z \in \mathbb{R}^n$. This together with (7) implies that the level set

$$L = \{ (x, z) \in \mathbb{R}^n \times \mathbb{R}^n \mid f_\lambda(x, z) \le f_\lambda(x^0, z^0) \text{ and } z = [x - F(x)]_\Omega \}$$

is nonempty and compact, where $x^0 \in \mathbb{R}^n$ and $z^0 = [x^0 - F(x^0)]_\Omega$ are given points. Since $f_\lambda(x, z)$ is continuous on $L$, it follows that the solution set of problem (6) is nonempty and bounded.

Now we prove the second part of the theorem. Let $\hat{x} \in \mathrm{SOL}(F)$ and $\hat{z} = [\hat{x} - F(\hat{x})]_\Omega$. From (5), we have $\hat{x} = \hat{z}$. Since $(x_{\lambda_k}, z_{\lambda_k})$ is a solution of (6) with $\lambda = \lambda_k$, where $z_{\lambda_k} = [x_{\lambda_k} - F(x_{\lambda_k})]_\Omega$, it follows that

$$\max\{ \|x_{\lambda_k} - z_{\lambda_k}\|^2, \lambda_k \|x_{\lambda_k}\|_1 \} \le \|x_{\lambda_k} - z_{\lambda_k}\|^2 + \lambda_k \|x_{\lambda_k}\|_1 \le \|\hat{x} - \hat{z}\|^2 + \lambda_k \|\hat{x}\|_1 = \lambda_k \|\hat{x}\|_1. \tag{8}$$

This implies that for any $\lambda_k > 0$,

$$\|x_{\lambda_k}\|_1 \le \|\hat{x}\|_1. \tag{9}$$

Hence the sequence $\{x_{\lambda_k}\}$ is bounded and has at least one cluster point, and so does $\{z_{\lambda_k}\}$, since $\|x_{\lambda_k} - z_{\lambda_k}\|^2 \le \lambda_k \|\hat{x}\|_1$.

Let $\bar{x}$ and $\bar{z}$ be cluster points of $\{x_{\lambda_k}\}$ and $\{z_{\lambda_k}\}$, respectively, where $z_{\lambda_k} = [x_{\lambda_k} - F(x_{\lambda_k})]_\Omega$. Then there exists a subsequence $\{\lambda_{k_j}\}$ of $\{\lambda_k\}$ such that

$$\lim_{j \to \infty} x_{\lambda_{k_j}} = \bar{x} \quad \text{and} \quad \lim_{j \to \infty} z_{\lambda_{k_j}} = \bar{z}.$$

Letting $j \to \infty$ in $z_{\lambda_{k_j}} = [x_{\lambda_{k_j}} - F(x_{\lambda_{k_j}})]_\Omega$, we obtain $\bar{z} = [\bar{x} - F(\bar{x})]_\Omega$. Letting $\lambda_{k_j} \to 0$ in

$$\|x_{\lambda_{k_j}} - z_{\lambda_{k_j}}\|^2 \le \lambda_{k_j} \|\hat{x}\|_1$$

yields $\bar{x} = \bar{z}$. Consequently, $\bar{x} = [\bar{x} - F(\bar{x})]_\Omega$, which implies $\bar{x} \in \mathrm{SOL}(F)$. Letting $j \to \infty$ in (9), we get $\|\bar{x}\|_1 \le \|\hat{x}\|_1$. Then, by the arbitrariness of $\hat{x} \in \mathrm{SOL}(F)$, we conclude that $\bar{x}$ is a solution of problem (3). This completes the proof. □

3. Algorithm and Convergence

In this section, we present the extragradient thresholding algorithm (ETA) for solving the $\ell_1$ regularized projection minimization problem (6) and give its convergence analysis.

First, we review some basic concepts about monotone operators and the properties of the projection operator, which can be found in [23].

Lemma 3.1. Let $P_K(\cdot)$ be the projection from $\mathbb{R}^n$ onto $K$, where $K$ is a nonempty closed convex subset of $\mathbb{R}^n$. Then:

(a) For any $y \in \mathbb{R}^n$,

$$(y - P_K[y])^T (P_K[y] - x) \ge 0, \quad \forall x \in K; \tag{10}$$

(b) for any $y, z \in \mathbb{R}^n$,

$$\|P_K[y] - P_K[z]\|^2 \le (y - z)^T (P_K[y] - P_K[z]). \tag{11}$$
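Both inequalities are easy to check numerically for the box projection; the sketch below (our own verification, not part of the paper) samples random points and tests (10) and (11):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
a, b = -np.ones(n), np.ones(n)               # K = [-1, 1]^n
proj = lambda v: np.minimum(np.maximum(v, a), b)

for _ in range(1000):
    y, z = 3 * rng.normal(size=n), 3 * rng.normal(size=n)
    x = proj(3 * rng.normal(size=n))         # an arbitrary point of K
    # (10): (y - P_K[y])^T (P_K[y] - x) >= 0 for all x in K
    assert (y - proj(y)) @ (proj(y) - x) >= -1e-12
    # (11): ||P_K[y] - P_K[z]||^2 <= (y - z)^T (P_K[y] - P_K[z])
    d = proj(y) - proj(z)
    assert d @ d <= (y - z) @ d + 1e-12
```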

Using Lemma 3.1, we can easily obtain the following properties.

Lemma 3.2. Define the residual function

$$H(\alpha) := P_K[x - \alpha d], \quad \alpha \ge 0, \; d \in \mathbb{R}^n. \tag{12}$$

Then the following statements are valid.

(a) For any $\alpha \ge 0$, $\frac{\|x - H(\alpha)\|}{\alpha}$ is non-increasing in $\alpha$;

(b) for any $\alpha > 0$, $d^T (x - H(\alpha)) \ge \frac{\|x - H(\alpha)\|^2}{\alpha}$;

(c) for any $z \in \mathbb{R}^n$ and $x \in K$, $\|P_K[z] - x\|^2 \le \|z - x\|^2 - \|P_K[z] - z\|^2$.

In this paper, we suppose the mapping $F: \mathbb{R}^n \to \mathbb{R}^n$ is co-coercive on the set $\Omega$, i.e., there exists a constant $c > 0$ such that

$$\langle F(x) - F(y), x - y \rangle \ge c \|F(x) - F(y)\|^2, \quad \forall x, y \in \Omega.$$

It is clear that a co-coercive mapping is monotone, namely,

$$\langle F(x) - F(y), x - y \rangle \ge 0, \quad \forall x, y \in \Omega.$$
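For intuition, an affine map $F(x) = Mx + q$ with symmetric positive semidefinite $M$ is co-coercive with constant $c = 1/\lambda_{\max}(M)$, since $(x - y)^T M (x - y) \ge \|M(x - y)\|^2 / \lambda_{\max}(M)$. A quick numerical check of this claim (our own sketch, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
Z = rng.normal(size=(n, 3))
M = Z @ Z.T                                  # symmetric positive semidefinite
c = 1.0 / np.linalg.eigvalsh(M)[-1]          # c = 1 / lambda_max(M)

for _ in range(1000):
    v = rng.normal(size=n)                   # v = x - y
    lhs = v @ (M @ v)                        # <F(x) - F(y), x - y>
    rhs = c * np.linalg.norm(M @ v) ** 2     # c * ||F(x) - F(y)||^2
    assert lhs >= rhs - 1e-9
```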

For a given $z^k \in \Omega$ and $\lambda_k > 0$, we consider the unconstrained minimization subproblem:

$$\min_{x \in \mathbb{R}^n} f_{\lambda_k}(x, z^k) := \|x - z^k\|^2 + \lambda_k \|x\|_1. \tag{13}$$

Evidently, the minimizer $x^*$ of model (13) satisfies the corresponding optimality condition

$$x^* = S_{\lambda_k}(z^k), \tag{14}$$

where the shrinkage operator $S_\lambda$ is defined by (see, e.g., [18])

$$(S_\lambda(z))_i = \begin{cases} z_i - \frac{\lambda}{2}, & z_i \ge \frac{\lambda}{2}; \\ 0, & -\frac{\lambda}{2} \le z_i \le \frac{\lambda}{2}; \\ z_i + \frac{\lambda}{2}, & z_i \le -\frac{\lambda}{2}. \end{cases} \tag{15}$$

This shows that a solution $x \in \mathbb{R}^n$ of the subproblem (13) can be expressed analytically by (14).
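In code, the shrinkage step (14)-(15) is the familiar soft-thresholding operation; a minimal vectorized sketch (the helper name shrink is ours):

```python
import numpy as np

def shrink(z, lam):
    """Soft-thresholding S_lambda(z) of Equation (15), applied componentwise:
    the unique minimizer of ||x - z||^2 + lam * ||x||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - lam / 2.0, 0.0)
```

For example, shrink(np.array([1.0, 0.05, -0.3]), 0.2) returns [0.9, 0.0, -0.2]: components inside the band $[-\lambda/2, \lambda/2]$ are zeroed out, which is what drives the iterates toward sparsity.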

In what follows, we construct the extragradient thresholding algorithm (ETA) to solve the $\ell_1$ regularized projection minimization problem (6).

Algorithm ETA

Step 0: Choose $z^0 \in \Omega$, $\lambda_0, \gamma > 0$, $\tau, l, \mu \in (0, 1)$, $\epsilon > 0$ and positive integers $n_{\max} > K_0 > 0$; set $k = 0$.

Step 1: Compute $x^k = S_{\lambda_k}(z^k)$ and $y^k = [x^k - \alpha_k F(x^k)]_\Omega$, where $\alpha_k = \gamma l^{m_k}$ with $m_k$ being the smallest nonnegative integer satisfying

$$\|F(x^k) - F(y^k)\| \le \mu \frac{\|x^k - y^k\|}{\alpha_k}. \tag{16}$$

Step 2: If $\|x^k - z^k\| \le \epsilon$ or the number of iterations is greater than $n_{\max}$, then return $z^k, x^k, y^k$ and stop. Otherwise, compute

$$z^{k+1} = [x^k - \alpha_k F(y^k)]_\Omega,$$

and update $\lambda_{k+1}$ by

$$\lambda_{k+1} = \begin{cases} \tau \lambda_k, & \text{if } k+1 \text{ is a multiple of } K_0, \\ \lambda_k, & \text{otherwise}. \end{cases}$$

Step 3: Set $k = k + 1$ and go to Step 1.
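A compact Python sketch of the whole procedure, reusing the hypothetical proj_box and shrink helpers above, might look as follows (a sketch under our own naming and defaults, not the authors' reference implementation):

```python
import numpy as np

def eta(F, a, b, z0, lam0=0.2, gamma=1.0, tau=0.75, l=0.1,
        mu=0.5, eps=1e-6, n_max=2000, K0=5):
    """Extragradient thresholding algorithm (ETA) for problem (6)."""
    z, lam = z0.copy(), lam0
    for k in range(n_max):
        x = shrink(z, lam)                          # Step 1: thresholding
        alpha = gamma
        y = proj_box(x - alpha * F(x), a, b)
        # Armijo-type search (16): reduce alpha until
        #   ||F(x) - F(y)|| <= mu * ||x - y|| / alpha
        while alpha * np.linalg.norm(F(x) - F(y)) > mu * np.linalg.norm(x - y) + 1e-16:
            alpha *= l
            y = proj_box(x - alpha * F(x), a, b)
        if np.linalg.norm(x - z) <= eps:            # Step 2: stopping test
            return x
        z = proj_box(x - alpha * F(y), a, b)        # extragradient step z^{k+1}
        if (k + 1) % K0 == 0:                       # shrink lambda every K0 iterations
            lam *= tau
    return x
```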

Define

$$x(\alpha) = [x - \alpha F(x)]_\Omega, \quad \alpha \ge 0,$$

and

$$e(x, \alpha) = x - x(\alpha), \quad r(x, \alpha) = \|e(x, \alpha)\|.$$

It is easy to see that $x^k$ is a solution of the MCP if and only if $e(x^k, \alpha) = 0$ for some $\alpha > 0$ (equivalently, for all $\alpha > 0$).

The following lemma plays an important role in the analysis of the global convergence of Algorithm ETA.

Lemma 3.3. Suppose the mapping $F$ is co-coercive and $\mathrm{SOL}(F) \neq \emptyset$. If $x^k$ generated by ETA is not a solution of MCP(F), then for any $\hat{x} \in \mathrm{SOL}(F)$, we have

$$\langle F(y^k), x^k - \hat{x} \rangle \ge \langle F(y^k), x^k - y^k \rangle \ge \frac{(1 - \mu) \|x^k - y^k\|^2}{\gamma}. \tag{17}$$

Proof. Since $\hat{x} \in \mathrm{SOL}(F)$ and $y^k \in \Omega$, it follows that $\langle F(\hat{x}), y^k - \hat{x} \rangle \ge 0$. By the monotonicity of $F$ (implied by co-coercivity), we have $\langle F(y^k), y^k - \hat{x} \rangle \ge 0$. Hence

$$\begin{aligned} \langle F(y^k), x^k - \hat{x} \rangle &= \langle F(y^k), x^k - y^k \rangle + \langle F(y^k), y^k - \hat{x} \rangle \ge \langle F(y^k), x^k - y^k \rangle \\ &= \langle F(x^k), x^k - y^k \rangle - \langle F(x^k) - F(y^k), x^k - y^k \rangle \\ &\ge \frac{1}{\alpha_k} \|x^k - y^k\|^2 - \frac{\mu}{\alpha_k} \|x^k - y^k\|^2 \ge \frac{1 - \mu}{\gamma} \|x^k - y^k\|^2, \end{aligned}$$

where the second inequality comes from Lemma 3.2(b) and (16), and the last inequality follows from $\alpha_k \le \gamma$. □

The following theorem gives the global convergence of the algorithm ETA.

Theorem 3.4. Suppose $F$ is co-coercive and $\mathrm{SOL}(F) \neq \emptyset$. If $\{x^k\}$ and $\{y^k\}$ are infinite sequences generated by the algorithm ETA, then

$$\lim_{k \to \infty} \|x^k - y^k\| = 0. \tag{18}$$

Furthermore, $\{x^k\}$ converges to a solution of the problem MCP(a, b, F).

Proof. Let $\hat{x} \in \mathrm{SOL}(F)$. By Lemma 3.2(c) and (17), we have

$$\begin{aligned} \|z^{k+1} - \hat{x}\|^2 &\le \|(x^k - \alpha_k F(y^k)) - \hat{x}\|^2 - \|z^{k+1} - x^k + \alpha_k F(y^k)\|^2 \\ &= \|x^k - \hat{x}\|^2 - 2 \alpha_k F(y^k)^T (x^k - \hat{x}) - 2 \alpha_k F(y^k)^T (z^{k+1} - x^k) - \|x^k - z^{k+1}\|^2 \\ &\le \|x^k - \hat{x}\|^2 - 2 \alpha_k F(y^k)^T (z^{k+1} - y^k) - \|x^k - z^{k+1}\|^2 \\ &= \|x^k - \hat{x}\|^2 - \|x^k - y^k\|^2 - \|z^{k+1} - y^k\|^2 + 2 (x^k - y^k - \alpha_k F(y^k))^T (z^{k+1} - y^k). \end{aligned} \tag{19}$$

Now consider the last term of (19). By Lemma 3.1(a), applied to the projection $y^k = [x^k - \alpha_k F(x^k)]_\Omega$ with $z^{k+1} \in \Omega$, we have

$$(y^k - x^k + \alpha_k F(x^k))^T (z^{k+1} - y^k) \ge 0.$$

It follows that

$$\begin{aligned} 2 (x^k - y^k - \alpha_k F(y^k))^T (z^{k+1} - y^k) &\le 2 (x^k - y^k - \alpha_k F(y^k))^T (z^{k+1} - y^k) + 2 (y^k - x^k + \alpha_k F(x^k))^T (z^{k+1} - y^k) \\ &= 2 \alpha_k (F(x^k) - F(y^k))^T (z^{k+1} - y^k) \\ &\le \alpha_k^2 \|F(x^k) - F(y^k)\|^2 + \|z^{k+1} - y^k\|^2. \end{aligned} \tag{20}$$

Substituting (20) into (19) and using (16), we deduce

$$\begin{aligned} \|z^{k+1} - \hat{x}\|^2 &\le \|x^k - \hat{x}\|^2 - \|x^k - y^k\|^2 - \|z^{k+1} - y^k\|^2 + \alpha_k^2 \|F(x^k) - F(y^k)\|^2 + \|z^{k+1} - y^k\|^2 \\ &\le \|x^k - \hat{x}\|^2 - \|x^k - y^k\|^2 + \mu^2 \|x^k - y^k\|^2 \\ &= \|x^k - \hat{x}\|^2 - (1 - \mu^2) \|x^k - y^k\|^2. \end{aligned} \tag{21}$$

According to the definition of the shrinkage operator (15), we know that

$$\|x^{k+1} - \hat{x}\| \le \|z^{k+1} - \hat{x}\|.$$

Hence $\{\|x^k - \hat{x}\|^2\}$ has the contraction property, which means $\{x^k\}$ is bounded, and

$$(1 - \mu^2) \sum_{k=0}^{\infty} \|x^k - y^k\|^2 \le \sum_{k=0}^{\infty} \left( \|x^k - \hat{x}\|^2 - \|x^{k+1} - \hat{x}\|^2 \right) < +\infty,$$

so (18) holds.

Since $\{x^k\}$ is bounded, it has at least one cluster point. Let $x^*$ be a cluster point of $\{x^k\}$ and let the subsequence $\{x^{k_i}\}$ converge to $x^*$. Next we show $x^* \in \mathrm{SOL}(F)$, considering two cases.

Case 1: assume there is a positive lower bound $\alpha_{\min}$ such that $\alpha_{k_i} \ge \alpha_{\min} > 0$. Then, by the inequality

$$\min\{1, \alpha\} \|e(x, 1)\| \le \|e(x, \alpha)\| \le \max\{1, \alpha\} \|e(x, 1)\|, \tag{22}$$

the continuity of $e(x, \alpha)$ in $x$, and (18), we get

$$\|e(x^*, 1)\| = \lim_{i \to \infty} \|e(x^{k_i}, 1)\| \le \lim_{i \to \infty} \frac{\|e(x^{k_i}, \alpha_{k_i})\|}{\min\{1, \alpha_{k_i}\}} \le \lim_{i \to \infty} \frac{\|e(x^{k_i}, \alpha_{k_i})\|}{\min\{1, \alpha_{\min}\}} = \lim_{i \to \infty} \frac{\|x^{k_i} - y^{k_i}\|}{\min\{1, \alpha_{\min}\}} = 0.$$

Case 2: assume $\alpha_{k_i} \to 0$. For sufficiently large $k_i$, by Lemma 3.2(a) and the Armijo search (16), which fails at the step size $\frac{1}{l} \alpha_{k_i}$, we get

$$\mu \|e(x^{k_i}, 1)\| \le \mu \frac{\|e(x^{k_i}, \frac{1}{l} \alpha_{k_i})\|}{\frac{1}{l} \alpha_{k_i}} < \|F(x^{k_i}) - F(x^{k_i}(\tfrac{1}{l} \alpha_{k_i}))\|.$$

Hence, by the continuity of $F$, we have

$$\mu \|e(x^*, 1)\| = \lim_{i \to \infty} \mu \|e(x^{k_i}, 1)\| \le \lim_{i \to \infty} \|F(x^{k_i}) - F(x^{k_i}(\tfrac{1}{l} \alpha_{k_i}))\| = 0. \tag{23}$$

In summary, $x^* \in \mathrm{SOL}(F)$. Replacing $\hat{x}$ by $x^*$ in (21), we have

$$\|x^{k+1} - x^*\|^2 \le \|z^{k+1} - x^*\|^2 \le \|x^k - x^*\|^2 - (1 - \mu^2) \|x^k - y^k\|^2.$$

Hence the whole sequence $\{x^k\}$ converges to the solution $x^*$. The proof is complete. □

4. Numerical Experiments

In this section, we present some numerical experiments to demonstrate the effectiveness of our ETA algorithm and to show that the algorithm can obtain the sparse solution of the MCP(a, b, F).

We implement the ETA algorithm on three examples. Each is run 100 times for different dimensions, and the average results are recorded. In each experiment, we set $z^0 = e$, $\gamma = 2c$, $l = 0.1$, $\mu = 1/c$, $n_{\max} = 2000$; the other related parameters are given in each test example.

4.1. Test for LCPs with Z-Matrix [18]

The test involves a Z-matrix, which has an important property: the LCP has a unique sparse solution when M is this kind of Z-matrix. Let us consider LCP(q, M) where

$$M = I_n - \frac{1}{n} e e^T = \begin{pmatrix} 1 - \frac{1}{n} & -\frac{1}{n} & \cdots & -\frac{1}{n} \\ -\frac{1}{n} & 1 - \frac{1}{n} & \cdots & -\frac{1}{n} \\ \vdots & \vdots & \ddots & \vdots \\ -\frac{1}{n} & -\frac{1}{n} & \cdots & 1 - \frac{1}{n} \end{pmatrix}$$

and $q = (\frac{1}{n} - 1, \frac{1}{n}, \dots, \frac{1}{n})^T$, where $I_n$ is the identity matrix of order $n$ and $e = (1, 1, \dots, 1)^T \in \mathbb{R}^n$. Such a matrix M is widely used in statistics. It is clear that M is a positive semidefinite Z-matrix. For any scalar $\alpha \ge 0$, the vector $x = \alpha e + e_1$ is a solution of LCP(q, M), since it satisfies

$$x \ge 0, \quad Mx + q = M e_1 + q = 0, \quad x^T (Mx + q) = 0.$$

Among all these solutions, the vector $\hat{x} = e_1 = (1, 0, \dots, 0)^T$ is the unique sparse solution.

We choose $z^0 = e$, $c = 1$, $\lambda_0 = 0.2$, $\gamma = 2c$, $\tau = 0.75$, $l = 0.1$, $\mu = 1/c$, $\epsilon = 10^{-6}$, $n_{\max} = 2000$ and $K_0 = 5$. We use the recovery error $\|x - \hat{x}\|$ to evaluate our algorithm. In addition, the average CPU time (in seconds), the average number of iterations, and the residual $\|x - z\|$ are also taken into account in judging the performance of the method.
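With the hypothetical helpers above, this experiment can be reproduced along the following lines (a sketch; the choice mu=0.9 is ours, since the paper's $\mu = 1/c$ equals 1 here while the algorithm requires $\mu \in (0, 1)$):

```python
import numpy as np

n = 500
M = np.eye(n) - np.ones((n, n)) / n             # M = I_n - (1/n) e e^T
q = np.full(n, 1.0 / n)
q[0] = 1.0 / n - 1.0                            # q = (1/n - 1, 1/n, ..., 1/n)^T
F = lambda x: M @ x + q

a, b = np.zeros(n), np.full(n, np.inf)          # Omega = R^n_+ turns the MCP into this LCP
x = eta(F, a, b, z0=np.ones(n), gamma=2.0, mu=0.9)
print(np.linalg.norm(x - np.eye(n)[0]))         # recovery error ||x - e_1||
```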

As indicated in Table 1, the ETA algorithm behaves very robustly: the average number of iterations is identically equal to 205 across dimensions, and the recovery error $\|x - \hat{x}\|$ and residual $\|x - z\|$ are essentially the same. In addition, the sparsity $\|x\|_0$ of the recovered solution $x$ is in all cases equal to 1, which means the recovery is successful.

4.2. Test for LCPs with Positive Semidefinite Matrices

In this subsection, we test ETA on randomly generated LCPs with positive semidefinite matrices. First, we state the way of constructing the LCPs and their solutions. Let a matrix $Z \in \mathbb{R}^{n \times r}$ ($r < n$) be generated from the standard normal distribution and set $M = Z Z^T$. Let the sparse vector $\hat{x}$ be produced by randomly choosing the $s = 0.01 n$ nonzero components whose values are also randomly generated from a standard normal distribution. After the matrix $M$ and the sparse vector $\hat{x}$ have been generated, a vector $q \in \mathbb{R}^n$ is constructed such that $\hat{x}$ is a solution of LCP(q, M). Then $\hat{x}$ can be regarded as a sparse solution of LCP(q, M). Namely,

$$\hat{x} \ge 0, \quad M \hat{x} + q \ge 0, \quad \hat{x}^T (M \hat{x} + q) = 0, \quad \text{and} \quad \|\hat{x}\|_0 = 0.01 n.$$

Table 1. ETA's computational results on LCPs with Z-matrices.

To be more specific, if $\hat{x}_i > 0$ we choose $q_i = -(M \hat{x})_i$; if $\hat{x}_i = 0$ we choose $q_i = |(M \hat{x})_i| - (M \hat{x})_i$. Let $M$ and $q$ be the input to our ETA algorithm, and take $z^0 = e$, $c = \max(\mathrm{svd}(M))$, $\lambda_0 = 0.2$, $\gamma = 2c$, $\tau = 0.75$, $l = 0.1$, $\mu = 1/c$, $\epsilon = 10^{-10}$, $n_{\max} = 2000$, $K_0 = \max(2, \lfloor 10000/n \rfloor)$, where $\lfloor 10000/n \rfloor$ denotes the largest integer not exceeding $10000/n$. Then ETA outputs a solution $x$. As before, the average number of iterations, the average CPU time (in seconds), and the residual $\|x - z\|$ are taken into account in evaluating our ETA algorithm.
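The random construction just described can be sketched as follows (our own illustration; we take absolute values of the nonzero entries so that $\hat{x} \ge 0$ is feasible for the LCP):

```python
import numpy as np

rng = np.random.default_rng(42)
n, r = 1000, 100
Z = rng.standard_normal((n, r))
M = Z @ Z.T                                     # positive semidefinite, rank r

s = max(1, int(0.01 * n))                       # sparsity level s = 0.01 n
xhat = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
xhat[support] = np.abs(rng.standard_normal(s))  # nonzero (positive) entries

Mx = M @ xhat
q = np.where(xhat > 0, -Mx, np.abs(Mx) - Mx)    # enforces complementarity at xhat
# Sanity check: xhat solves LCP(q, M) and is sparse.
assert (M @ xhat + q >= -1e-9).all() and abs(xhat @ (M @ xhat + q)) < 1e-6
```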

As shown in Table 2, the ETA algorithm performs quite efficiently. Furthermore, the sparsity $\|x\|_0$ of the recovered solution $x$ is in all cases equal to the sparsity $\|\hat{x}\|_0$, which means the recovery is exact.

4.3. Test for Co-Coercive Mixed Complementarity Problem

We now consider a co-coercive mixed complementarity problem (MCP) with

$$F(x) = D(x) + Mx + q, \tag{24}$$

where $D(x)$ and $Mx + q$ are the nonlinear part and the linear part of $F(x)$, respectively. We generate the linear part $Mx + q$ in a way similar to [24]. The matrix $M = A^T A + B$, where $A$ is an $n \times n$ matrix whose entries are randomly generated in the interval $(-5, 5)$, and the skew-symmetric matrix $B$ is generated in the same way. In $D(x)$, the nonlinear part of $F(x)$, the components are $D_j(x) = d_j \cdot \arctan(x_j)$, where $d_j$ is a random variable in $(-1, 0)$; see the similar example in [25]. The subsequent procedure of generating the sparse vector $\hat{x}$ and the vector $q \in \mathbb{R}^n$ such that

$$\hat{x} \in \Omega, \quad (y - \hat{x})^T F(\hat{x}) \ge 0, \; \forall y \in \Omega, \quad \text{and} \quad \|\hat{x}\|_0 = 0.01 n,$$

is similar to the procedure of Section 4.2. Let $M$ and $q$ be the input to our ETA algorithm and take $z^0 = e$, $c = 150 \log(n)$, $\lambda_0 = 0.2$, $\gamma = 2c$, $\tau = 0.75$, $l = 0.1$, $\mu = 1/c$, $\epsilon = 10^{-6}$, $n_{\max} = 2000$, $K_0 = \max(2, \lfloor 10000/n \rfloor)$, and $d = \mathrm{rand}(n, 1)$. Then ETA outputs a solution $x$. Similarly, the average number of iterations, the average residual $\|x - z\|$, the average sparsity $\|x\|_0$ of $x$, and the average CPU time (in seconds) are taken into account in evaluating our ETA algorithm.
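A sketch of this test mapping (24) in the same style (our own code; the interval for $d_j$ follows the construction above, and $q$ would subsequently be adjusted, as in Section 4.2, so that a chosen sparse $\hat{x}$ solves the MCP):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1000
A = rng.uniform(-5.0, 5.0, size=(n, n))
C = rng.uniform(-5.0, 5.0, size=(n, n))
B = (C - C.T) / 2.0                  # a random skew-symmetric matrix
M = A.T @ A + B                      # linear part: PSD term plus skew part
d = rng.uniform(-1.0, 0.0, size=n)   # d_j in (-1, 0)
q = rng.standard_normal(n)           # placeholder; adjusted afterwards as in Section 4.2

def F(x):
    """F(x) = D(x) + M x + q with D_j(x) = d_j * arctan(x_j), Equation (24)."""
    return d * np.arctan(x) + M @ x + q
```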

It is not difficult to see from Table 3 that the ETA algorithm also performs quite efficiently on such mixed complementarity problems. The sparsity $\|x\|_0$ of the recovered solutions $x$ is in all cases equal to the sparsity $\|\hat{x}\|_0$; that is, the recovery is exact.

Table 2. Results on randomly created LCPs with positive semidefinite matrices.

Table 3. Results on co-coercive mixed complementarity problems.

5. Conclusion

In this paper, we concentrate on finding sparse solutions of co-coercive mixed complementarity problems (MCPs). An $\ell_1$ regularized projection minimization model is proposed as a relaxation, and an extragradient thresholding algorithm (ETA) is designed for this regularized model. Furthermore, we analyze the convergence of the algorithm and show that any cluster point of the sequence generated by ETA is a sparse solution of the MCP. Preliminary numerical results indicate that the $\ell_1$ regularized model as well as the ETA algorithm is promising for finding the sparse solutions of MCPs.

Data Availability

Since data in the Network Vector Autoregression (NAR) is not public, we have not done empirical analysis.

Acknowledgements

This paper is supported by the Top Disciplines of Shanghai.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

Cite this paper

Zhang, P. and Yu, Z.S. (2020) Sparse Solutions of Mixed Complementarity Problems. Journal of Applied Mathematics and Physics, 8, 10-22. https://doi.org/10.4236/jamp.2020.81002

References

1. Ferris, M.C. and Pang, J.S. (1997) Engineering and Economic Applications of Complementarity Problems. SIAM Review, 39, 669-713. https://doi.org/10.1137/S0036144595285963

2. Harker, P.T. and Pang, J.S. (1990) Finite-Dimensional Variational Inequality and Nonlinear Complementarity Problems: A Survey of Theory, Algorithms and Applications. Mathematical Programming, 48, 161-220. https://doi.org/10.1007/BF01582255

3. Billups, S.C., Dirkse, S.P. and Ferris, M.C. (1997) A Comparison of Algorithms for Large-Scale Mixed Complementarity Problems. Computational Optimization and Applications, 7, 3-25. https://doi.org/10.1007/978-0-585-26778-4_2

4. Facchinei, F. and Pang, J.S. (2003) Finite-Dimensional Variational Inequalities and Complementarity Problems I. Springer Series in Operations Research, Springer, New York. https://doi.org/10.1007/b97544

5. Facchinei, F. and Pang, J.S. (2003) Finite-Dimensional Variational Inequalities and Complementarity Problems II. Springer Series in Operations Research, Springer, New York. https://doi.org/10.1007/b97544

6. Zhou, Z.Y. and Peng, Y.C. (2019) The Locally Chen-Harker-Kanzow-Smale Smoothing Functions for Mixed Complementarity Problems. Journal of Global Optimization, 74, 169-193. https://doi.org/10.1007/s10898-019-00739-4

7. Eldar, Y.C. and Kutyniok, G. (2012) Compressed Sensing: Theory and Applications. Cambridge University Press, Cambridge.

8. Bruckstein, A.M., Donoho, D.L. and Elad, M. (2009) From Sparse Solutions of Systems of Equations to Sparse Modeling of Signals and Images. SIAM Review, 51, 34-81. https://doi.org/10.1137/060657704

9. Candes, E.J., Romberg, J. and Tao, T. (2006) Robust Uncertainty Principles: Exact Signal Reconstruction from Highly Incomplete Frequency Information. IEEE Transactions on Information Theory, 52, 489-509. https://doi.org/10.1109/TIT.2005.862083

10. Candes, E. and Tao, T. (2005) Decoding by Linear Programming. IEEE Transactions on Information Theory, 51, 4203-4215. https://doi.org/10.1109/TIT.2005.858979

11. Foucart, S. and Rauhut, H. (2013) A Mathematical Introduction to Compressive Sensing. Springer, Basel. https://doi.org/10.1007/978-0-8176-4948-7

12. Ge, D., Jiang, X. and Ye, Y. (2011) A Note on the Complexity of $L_p$ Minimization. Mathematical Programming, 129, 285-299. https://doi.org/10.1007/s10107-011-0470-2

13. Natarajan, B. (1995) Sparse Approximate Solutions to Linear Systems. SIAM Journal on Computing, 24, 227-234. https://doi.org/10.1137/S0097539792240406

14. Chen, X., Xu, F. and Ye, Y. (2010) Lower Bound Theory of Nonzero Entries in Solutions of $\ell_2$-$\ell_p$ Minimization. SIAM Journal on Scientific Computing, 32, 2832-2852. https://doi.org/10.1137/090761471

15. Beck, A. and Eldar, Y. (2013) Sparsity Constrained Nonlinear Optimization: Optimality Conditions and Algorithms. SIAM Journal on Optimization, 23, 1480-1509. https://doi.org/10.1137/120869778

16. Lu, Z. (2014) Iterative Hard Thresholding Methods for $\ell_0$ Regularized Convex Cone Programming. Mathematical Programming, 147, 125-154. https://doi.org/10.1007/s10107-013-0714-4

17. Chen, X. and Xiang, S. (2016) Sparse Solutions of Linear Complementarity Problems. Mathematical Programming, 159, 539-556. https://doi.org/10.1007/s10107-015-0950-x

18. Shang, M. and Nie, C. (2014) A Shrinkage-Thresholding Projection Method for Sparsest Solutions of LCPs. Journal of Inequalities and Applications, 2014, Article No. 51. https://doi.org/10.1186/1029-242X-2014-51

19. Shang, M., Zhang, C., Peng, D. and Zhou, S. (2015) A Half Thresholding Projection Algorithm for Sparse Solutions of LCPs. Optimization Letters, 9, 1231-1245. https://doi.org/10.1007/s11590-014-0834-7

20. Shang, M., Zhou, S. and Xiu, N. (2015) Extragradient Thresholding Methods for Sparse Solutions of Co-Coercive NCPs. Journal of Inequalities and Applications, 2015, Article No. 34. https://doi.org/10.1186/s13660-015-0551-5

21. Shang, M., Zhang, C. and Xiu, N. (2014) Minimal Zero Norm Solutions of Linear Complementarity Problems. Journal of Optimization Theory and Applications, 163, 795-814. https://doi.org/10.1007/s10957-014-0549-z

22. Gabriel, S.A. and Moré, J.J. (1997) Smoothing of Mixed Complementarity Problems. In: Ferris, M.C. and Pang, J.S., Eds., Complementarity and Variational Problems: State of the Art, SIAM, Philadelphia.

23. Zarantonello, E.H. (1971) Projections on Convex Sets in Hilbert Space and Spectral Theory. In: Zarantonello, E.H., Ed., Contributions to Nonlinear Functional Analysis, Academic Press, New York.

24. Harker, P.T. and Pang, J.S. (1990) A Damped-Newton Method for the Linear Complementarity Problem. In: Allgower, E.L. and Georg, K., Eds., Computational Solution of Nonlinear Systems of Equations, Lectures in Applied Mathematics, Vol. 26, American Mathematical Society, Providence, 265-284.

25. Taji, K., Fukushima, M. and Ibaraki, T. (1993) A Globally Convergent Newton Method for Solving Strongly Monotone Variational Inequalities. Mathematical Programming, 58, 369-383. https://doi.org/10.1007/BF01581276