Sparse Solutions of Mixed Complementarity Problems

In this paper, we consider an extragradient thresholding algorithm for finding sparse solutions of mixed complementarity problems (MCPs). We establish a relaxed l1 regularized projection minimization model for the original problem and design an extragradient thresholding algorithm (ETA) to solve the regularized model. Furthermore, we prove that any cluster point of the sequence generated by ETA is a solution of MCP. Finally, numerical experiments show that the ETA algorithm can effectively solve the l1 regularized projection minimization model and obtain sparse solutions of mixed complementarity problems.


Introduction
Let F : R^n → R^n be a continuously differentiable mapping and let Ω = {x ∈ R^n : a ≤ x ≤ b} be a nonempty box. The mixed complementarity problem is to find a vector x ∈ Ω such that

    (y − x)^T F(x) ≥ 0  for all y ∈ Ω.    (1)

The mixed complementarity problem is also known as the box constrained variational inequality problem and is denoted by MCP(a, b, F). In particular, if a = 0 and b = +∞, MCP(a, b, F) reduces to the nonlinear complementarity problem (NCP). The solution set of the mixed complementarity problem is denoted by SOL(F); throughout this paper, we assume SOL(F) ≠ ∅. The MCP has wide applications in fields of science and engineering [1] [2], and many results on its theories and algorithms have been developed (see, e.g., [3] [4] [5] [6]).
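The variational inequality form (1) is equivalent to the familiar componentwise conditions of MCP(a, b, F). As an illustration only, the following small Python sketch checks these componentwise conditions at a given point; the callable F, the bound vectors a, b, and the tolerance are placeholders, not part of the paper.

```python
import numpy as np

def is_mcp_solution(x, a, b, F, tol=1e-8):
    """Componentwise check of the MCP(a, b, F) conditions:
    x_i = a_i  =>  F_i(x) >= 0;   a_i < x_i < b_i  =>  F_i(x) = 0;   x_i = b_i  =>  F_i(x) <= 0."""
    Fx = F(x)
    for xi, ai, bi, fi in zip(x, a, b, Fx):
        if abs(xi - ai) <= tol:        # lower bound active
            if fi < -tol:
                return False
        elif abs(xi - bi) <= tol:      # upper bound active
            if fi > tol:
                return False
        else:                          # interior component
            if abs(fi) > tol:
                return False
    return True
```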
In recent years, the problem of recovering an unknown sparse solution from linear constraints has been an active topic with a range of applications including signal processing, machine learning, and computer vision [7], and many articles are available on finding sparse solutions of systems of linear equations [8]-[13] as well as of optimization problems [14] [15] [16].
In contrast with the fast development of sparse solutions for optimization problems and linear equations, little research is available on sparse solutions of complementarity problems. The sparse solution problem of linear complementarity problems was first studied by Chen and Xiang [17]; related algorithmic work includes [18], the half thresholding projection algorithm [19], and the extragradient thresholding method [20].
Along the lines of [17] [18] [19] [20], in this paper we design an extragradient thresholding algorithm for the sparse solution of MCP, which can be seen as an extension of the sparse solution algorithm for NCP.
Due to the relationship between MCP and the variational inequality, we seek a sparse vector x ∈ Ω by solving the l0 norm minimization problem

    min ||x||_0  s.t.  x ∈ Ω,  (y − x)^T F(x) ≥ 0  for any y ∈ Ω,    (2)

where ||x||_0 stands for the number of nonzero components of x. A solution of problem (2) is called a sparse solution of MCP.
In essence, the minimization problem (2) is a sparse optimization problem with equilibrium constraints. Such problems are difficult to solve because of the equilibrium constraints even when the objective function is continuous, and here the l0 norm objective is in addition discontinuous and nonconvex.
To overcome the difficulty caused by the l0 norm, many researchers have suggested relaxing the l0 norm and considering the l1 norm instead; see [21]. Motivated by this work, we apply l1 norm minimization to find the sparse solution of MCP and obtain the following minimization problem to approximate problem (2):

    min ||x||_1  s.t.  x ∈ SOL(F).    (3)

It is well known (see, e.g., [22]) that x* is a solution point of problem (1) if and only if it satisfies the following projection equation:

    x* = P_Ω[x* − F(x*)],    (4)

where P_Ω denotes the Euclidean projection onto Ω. Therefore problem (3) is equivalent to the following optimization problem:

    min ||x||_1  s.t.  x = P_Ω[x − F(x)].    (5)

In order to simplify the objective function, we introduce a new variable z ∈ R^n and a regularization parameter λ > 0, which leads to model (6); we call (6) the l1 regularized projection minimization problem.
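To make the projection equation (4) concrete, here is a minimal NumPy sketch (the mapping F and the bounds a, b are placeholders) of the projection onto the box Ω and the associated natural residual, which vanishes exactly at solutions of problem (1).

```python
import numpy as np

def proj_box(v, a, b):
    """Euclidean projection P_Omega onto the box Omega = {x : a <= x <= b}."""
    return np.minimum(np.maximum(v, a), b)

def natural_residual(x, a, b, F):
    """Residual of the projection equation x = P_Omega[x - F(x)];
    x solves MCP(a, b, F) if and only if this residual is zero."""
    return x - proj_box(x - F(x), a, b)
```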
This paper is organized as follows. In Section 2, we study the relation between the solution of model (6) and that of problem (3), and we show theoretically that (6) is a good approximation of problem (3). In Section 3, we propose an extragradient thresholding algorithm (ETA) for (6) and analyze its convergence. Numerical results are given in Section 4, and conclusions are drawn in Section 5.

The l1 Regularized Approximation
In this section, we study the relation between the solution of model (6) and that of model (3). The following theorem shows that model (6) is a good approximation of problem (3): if {λ_k} is a positive sequence with λ_k → 0 and (x_{λ_k}, z_{λ_k}) solves model (6) with λ = λ_k, then {x_{λ_k}} is bounded and every cluster point of {x_{λ_k}} is a solution of problem (3).

Proof (outline). Comparing the objective value of model (6) at (x_{λ_k}, z_{λ_k}) with its value at any fixed x ∈ SOL(F) gives a bound that is uniform in k. This implies that for any λ_k > 0 the sequence {x_{λ_k}} is bounded and hence has at least one cluster point. Let x* and z* be cluster points of {x_{λ_k}} and {z_{λ_k}}, respectively, attained along a subsequence {k_j}. Letting k_j tend to ∞ in (9), and then using the arbitrariness of x ∈ SOL(F), we conclude that x* is a solution of problem (3). This completes the proof.

Algorithm and Convergence
In this section, we give the extragradient thresholding algorithm (ETA) to solve the l1 regularized projection minimization problem (6) and present the convergence analysis of ETA.
First, we review some basic concepts about the monotone operator and the properties of the projection operator which can be found in [23].
Lemma 3.1 ([23]). Let P_Ω denote the Euclidean projection onto Ω. Then the following statements are valid:
(a) for any x ∈ R^n and any y ∈ Ω, (P_Ω(x) − x)^T (y − P_Ω(x)) ≥ 0;
(b) for any x, y ∈ R^n, ||P_Ω(x) − P_Ω(y)|| ≤ ||x − y||.

In this paper, we suppose the mapping F : R^n → R^n is co-coercive on Ω with modulus c > 0, that is,

    (F(x) − F(y))^T (x − y) ≥ c ||F(x) − F(y)||^2  for all x, y ∈ Ω.

It is clear that a co-coercive mapping is monotone, namely, (F(x) − F(y))^T (x − y) ≥ 0 for all x, y ∈ Ω.

For a given z^k ∈ Ω and λ_k > 0, we consider the unconstrained minimization subproblem

    min_{x ∈ R^n}  (1/2)||x − z^k||^2 + λ_k ||x||_1.    (13)

Evidently, the minimizer x* of model (13) is

    x* = S_{λ_k}(z^k),    (14)

where the shrinkage operator S_λ is defined componentwise by (see, e.g., [18])

    (S_λ(z))_i = sign(z_i) · max{|z_i| − λ, 0},  i = 1, …, n.    (15)

This demonstrates that the solution x ∈ R^n of subproblem (13) can be expressed analytically by (14).
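The closed-form minimizer (14) of subproblem (13) is the familiar soft-thresholding map. The following is a minimal NumPy sketch of the shrinkage operator (15), given for illustration.

```python
import numpy as np

def shrink(z, lam):
    """Shrinkage (soft-thresholding) operator S_lambda, applied componentwise:
    (S_lam(z))_i = sign(z_i) * max(|z_i| - lam, 0).
    It is the unique minimizer of 0.5*||x - z||^2 + lam*||x||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

# Example: shrink(np.array([1.5, -0.2, 0.7]), 0.5) returns [1.0, 0.0, 0.2].
```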
In what follows, we construct the extragradient thresholding algorithm (ETA) to solve the 1 l regularized projection minimization problem (6).

Algorithm ETA
Step 0: Choose z^0 ∈ Ω, an initial regularization parameter λ_0 > 0, step sizes α_k with 0 < α_k ≤ γ, a tolerance ε > 0, and a positive integer k_max. Set k := 0.
Step 1: Compute the extragradient point y^k ∈ Ω from z^k, the new point z^{k+1} from y^k, and the thresholded iterate x^{k+1} = S_{λ_k}(z^{k+1}).
Step 2: If the stopping criterion is satisfied or k = k_max, stop and output x^{k+1}; otherwise update λ_{k+1}, set k := k + 1, and return to Step 1.

The next lemma provides a key estimate for the iterates generated by ETA.

Proof. Since x ∈ SOL(F) and y^k ∈ Ω, it follows that (y^k − x)^T F(x) ≥ 0. By the co-coercivity of the mapping F, we then obtain a bound in which the second inequality comes from Lemma 3.2(b) and (16), and the last inequality is based on α_k ≤ γ.

The following theorem gives the global convergence of the algorithm ETA.

Theorem. Suppose F is co-coercive on Ω and SOL(F) ≠ ∅. Then any cluster point of the sequence generated by Algorithm ETA is a solution of MCP(a, b, F).

Proof. Let x ∈ SOL(F). By Lemma 3.2(c) and (17), we obtain the estimate (19). Now consider the last term of (19): by Lemma 3.1(a), we obtain (20). Substituting (20) into (19) and using (16) yields (21). According to the definition of the shrinkage operator (15), the thresholding step can be bounded accordingly; substituting the resulting estimate into (21), we conclude that the sequence generated by ETA converges to the solution x*. The proof is thus complete.
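Since Algorithm ETA is stated above only in outline, the following Python sketch is an illustrative reconstruction of the iteration pattern suggested by the analysis: an extragradient (prediction and correction) step applied to the projection equation, followed by soft-thresholding with a gradually decreasing regularization parameter. The step size alpha, the update factor mu, and the stopping rule are assumptions made for illustration, not the paper's exact scheme; proj_box and shrink are the sketches given earlier.

```python
import numpy as np

def eta_sketch(F, a, b, z0, lam0=1e-2, alpha=0.5, mu=0.9, tol=1e-6, max_iter=1000):
    """Illustrative extragradient thresholding iteration (not the exact Algorithm ETA)."""
    z, lam = np.asarray(z0, dtype=float).copy(), lam0
    x = shrink(z, lam)
    for _ in range(max_iter):
        y = proj_box(z - alpha * F(z), a, b)      # prediction (extragradient) step
        z = proj_box(z - alpha * F(y), a, b)      # correction step
        x = shrink(z, lam)                        # l1 thresholding step
        if np.linalg.norm(x - z) <= tol:          # simple stopping rule on ||x - z||
            break
        lam = mu * lam                            # gradually reduce the regularization
    return x
```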

Numerical Experiments
In this section, we present some numerical experiments to demonstrate the effectiveness of our ETA algorithm and to show that the algorithm can obtain sparse solutions of MCP(a, b, F).
We simulate three examples to test the ETA algorithm. Each example is run 100 times for different dimensions, and the average results are recorded.
In each experiment, we set z^0 = e, and the other related parameters are given in the corresponding test example below.

Test for LCPs with Z-Matrix [18]
This test is associated with a Z-matrix, which has an important property: the LCP has a unique sparse solution when M is a Z-matrix of this kind. We consider LCP(q, M) with M a Z-matrix as in [18]. We use the recovery error to evaluate our algorithm; in addition, the average CPU time (in seconds), the average number of iterations, and the residual ||x − z|| are also taken into consideration when judging the performance of the method.
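The specific M and q used in this test are not reproduced here. The hedged sketch below only illustrates, for a trivially simple Z-matrix (the identity) and a prescribed 1-sparse solution, how an LCP(q, M) is cast as MCP(0, +∞, F) with F(x) = Mx + q and how the recovery error and sparsity would be measured; it reuses eta_sketch, proj_box, and shrink from the earlier sketches.

```python
import numpy as np

n = 100
M = np.eye(n)                              # identity matrix: a simple Z-matrix placeholder
x_true = np.zeros(n); x_true[0] = 1.0      # prescribed 1-sparse solution
q = -M @ x_true                            # then x_true solves LCP(q, M)

a, b = np.zeros(n), np.full(n, np.inf)     # LCP(q, M) viewed as MCP(0, +inf, F)
F = lambda x: M @ x + q

x = eta_sketch(F, a, b, z0=np.ones(n))     # z^0 = e, as in the experiments
rec_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
sparsity = int(np.count_nonzero(np.abs(x) > 1e-6))
print(f"recovery error = {rec_err:.2e}, sparsity = {sparsity}")
```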
As indicated in Table 1, the ETA algorithm behaves very robustly: the average number of iterations is identically equal to 205, and the recovery errors and residuals ||x − z|| are essentially the same across dimensions. In addition, the sparsity ||x||_0 of the recovered solution x is equal to 1 in all cases, which means the recovery is successful.

Test for LCPs with Positive Semidefinite Matrices
In this subsection, we test ETA on randomly generated LCPs with positive semidefinite matrices. The problems are constructed from a prescribed sparse vector x̂: to be more specific, the data are chosen componentwise according to whether x̂_i > 0, so that x̂ solves the resulting LCP. As before, the average number of iterations, the average CPU time (in seconds), and the residual ||x − z|| are also taken into consideration when evaluating our ETA algorithm.
As shown in Table 2, the ETA algorithm performs quite efficiently. Furthermore, the sparsity ||x||_0 of the recovered solution x is in all cases equal to the sparsity ||x̂||_0 of the prescribed solution x̂, which means the recovery is exact.

Test for Co-Coercive Mixed Complementarity Problem
We now consider a co-coercive mixed complementarity problem MCP(a, b, F), and ETA outputs a solution x. Similarly, the average number of iterations, the average residual ||x − z||, the average sparsity ||x||_0 of x, and the average CPU time (in seconds) are taken into consideration when evaluating our ETA algorithm.
It is not difficult to see from Table 3 that the ETA algorithm also performs quite efficiently on such mixed complementarity problems. The sparsity ||x||_0 of the recovered solution x is in all cases equal to the sparsity of the true sparse solution, that is, the recovery is exact.

Conclusion
In this paper, we concentrated on finding sparse solutions of co-coercive mixed complementarity problems (MCPs). We established an l1 regularized projection minimization model, designed an extragradient thresholding algorithm (ETA) to solve it, proved that any cluster point of the sequence generated by ETA is a solution of MCP, and verified the effectiveness of ETA through numerical experiments.
