1. Introduction
Define a continuously differentiable function $F : \mathbb{R}^n \to \mathbb{R}^n$ and a nonempty set $[a, b] = \{x \in \mathbb{R}^n : a \le x \le b\}$, where $a \in (\mathbb{R} \cup \{-\infty\})^n$, $b \in (\mathbb{R} \cup \{+\infty\})^n$ and $a < b$. The mixed complementarity problem is to find a vector $x^* \in [a, b]$ such that

$(x - x^*)^T F(x^*) \ge 0, \quad \forall x \in [a, b]. \qquad (1)$
The mixed complementarity problem, also known as the box constrained variational inequality problem, is denoted by MCP(a, b, F). In particular, if $a = 0$ and $b = +\infty$, the mixed complementarity problem becomes the nonlinear complementarity problem (NCP), which is to find a vector $x \in \mathbb{R}^n$ such that

$x \ge 0, \quad F(x) \ge 0, \quad x^T F(x) = 0.$

Moreover, if $F(x) = Mx + q$, where $M \in \mathbb{R}^{n \times n}$ and $q \in \mathbb{R}^n$, the nonlinear complementarity problem reduces to the linear complementarity problem (LCP):

$x \ge 0, \quad Mx + q \ge 0, \quad x^T (Mx + q) = 0.$

The set of solutions to the mixed complementarity problem is denoted by $\mathrm{SOL}(F)$. Throughout this paper, we assume $\mathrm{SOL}(F) \neq \emptyset$.
The MCP has wide applications in many fields of science and engineering [1] [2], and many results on its theory and algorithms have been developed (see, e.g., [3] [4] [5] [6]).
In recent years, the problem of recovering an unknown sparse solution from some linear constraints has been an active topic with a range of applications including signal processing, machine learning, and computer vision [7], and there are many articles available on solving for sparse solutions of systems of linear equations [8] - [13] as well as of optimization problems [14] [15] [16].
In contrast with the fast development in sparse solutions of optimization problems and linear equations, little research is available on sparse solutions of complementarity problems. The sparse solution problem of linear complementarity was first studied by Chen and Xiang [17]: using the concept of the minimum $l_0$ norm solution, they studied the characterization and computation of sparse solutions and minimum p-norm solutions of linear complementarity problems. Recently, several solution methods have been proposed for LCP and NCP, for example, the shrinkage-thresholding projection method [18], the half thresholding projection algorithm [19] and the extragradient thresholding method [20].

Along the lines of [17] [18] [19] [20], in this paper we design an extragradient thresholding algorithm for finding sparse solutions of MCP, which can be seen as an extension of the sparse solution algorithm for NCP.
Due to the relationship between MCP and the variational inequality, we seek a vector $x \in \mathrm{SOL}(F)$ by solving the $l_0$ norm minimization problem

$\min \|x\|_0 \quad \text{s.t.} \quad x \in [a, b], \ (y - x)^T F(x) \ge 0 \qquad (2)$

for any $y \in [a, b]$, where $\|x\|_0$ stands for the number of nonzero components of $x$. A solution of problem (2) is called a sparse solution of MCP.
In essence, the minimization problem (2) is a sparse optimization problem with equilibrium constraints. It is not easy to solve due to the equilibrium constraints, even when the objective function is continuous.
To overcome the difficulty caused by the $l_0$ norm, many researchers have suggested relaxing the $l_0$ norm and considering the $l_1$ norm instead, see [21]. Motivated by this outstanding work, we consider applying $l_1$ norm minimization to find sparse solutions of MCP, and we obtain the following minimization problem to approximate problem (2):

$\min \|x\|_1 \quad \text{s.t.} \quad x \in [a, b], \ (y - x)^T F(x) \ge 0 \qquad (3)$

for any $y \in [a, b]$, where $\|x\|_1 = \sum_{i=1}^n |x_i|$.
Given a vector $x \in \mathbb{R}^n$, let $P_{[a,b]}(x)$ be the projection of $x$ on $[a, b]$. It is well known (see, e.g., [22]) that $x^*$ is a solution point of problem (1) if and only if it satisfies the following projection equation:

$x = P_{[a,b]}(x - F(x)), \qquad (4)$

and therefore problem (3) is equivalent to the following optimization problem:

$\min \|x\|_1 \quad \text{s.t.} \quad x = P_{[a,b]}(x - F(x)). \qquad (5)$
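For concreteness, the box projection and the residual of the projection Equation (4) can be computed componentwise. The following minimal Python sketch (the helper names are ours, not the paper's) is reused by the later code examples:

```python
import numpy as np

def proj_box(x, a, b):
    """Componentwise projection P_[a,b](x); entries of a may be -inf
    and entries of b may be +inf."""
    return np.minimum(b, np.maximum(a, x))

def mcp_residual(x, F, a, b):
    """Norm of x - P_[a,b](x - F(x)); it vanishes exactly at the
    solutions of the projection Equation (4), i.e., of MCP(a, b, F)."""
    return np.linalg.norm(x - proj_box(x - F(x), a, b))
```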
In order to simplify the objective function, we introduce a new variable $y = P_{[a,b]}(x - F(x))$ and a regularization parameter $\lambda > 0$, and establish the corresponding regularized minimization problem as follows:

$\min_{x \in \mathbb{R}^n} f_\lambda(x) := \frac{1}{2}\|x - P_{[a,b]}(x - F(x))\|^2 + \lambda \|x\|_1. \qquad (6)$

We call (6) the $l_1$ regularized projection minimization problem.
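Under this reconstruction of (6), the objective $f_\lambda$ is straightforward to evaluate numerically; a minimal sketch, reusing proj_box from the sketch above (F is a callable):

```python
def f_lambda(x, F, a, b, lam):
    """Objective of (6): 0.5*||x - P_[a,b](x - F(x))||^2 + lam*||x||_1."""
    r = x - proj_box(x - F(x), a, b)
    return 0.5 * r.dot(r) + lam * np.abs(x).sum()
```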
This paper is organized as follows. In Section 2, we study the relation between the solutions of model (6) and those of problem (3), and we show theoretically that (6) is a good approximation of problem (3). In Section 3, we propose an extragradient thresholding algorithm (ETA) for (6) and analyze the convergence of this algorithm. Numerical results are given in Section 4, and conclusions are drawn in Section 5.
2. The l1 Regularized Approximation
In this section, we study the relation between the solution of model (6) and that of model (3). The following theorem shows that model (6) is a good approximation of problem (3).
Theorem 2.1. For any fixed $\lambda > 0$, the solution set of (6) is nonempty and bounded. Let $x^{\lambda_k}$ be a solution of (6) with $\lambda = \lambda_k$, where $\{\lambda_k\}$ is any positive sequence converging to 0. If $\mathrm{SOL}(F) \neq \emptyset$, then $\{x^{\lambda_k}\}$ has at least one accumulation point, and any accumulation point $\bar{x}$ of $\{x^{\lambda_k}\}$ is a solution of problem (3).
Proof. For any fixed $\lambda > 0$, since

$f_\lambda(x) \ge \lambda \|x\|_1 \to +\infty \ \text{as} \ \|x\| \to +\infty, \qquad (7)$

the function $f_\lambda$ is coercive. On the other hand, it is clear that for any $x \in \mathbb{R}^n$ and $\lambda > 0$, $f_\lambda(x) \ge 0$. This together with (7) implies that the level set

$L = \{x \in \mathbb{R}^n : f_\lambda(x) \le f_\lambda(x^0)\}$

is nonempty and compact, where $x^0 \in \mathbb{R}^n$ is a given point, which deduces that the solution set of problem (6) is nonempty and bounded since $f_\lambda$ is continuous on $L$.
Now we consider the proof of the second part of this theorem. Let $\tilde{x}$ be any solution of problem (3) and $\tilde{y} = P_{[a,b]}(\tilde{x} - F(\tilde{x}))$. From (5), we have $\tilde{x} = \tilde{y}$. Since $x^{\lambda_k}$ is a solution of (6) with $\lambda = \lambda_k$, where $y^{\lambda_k} = P_{[a,b]}(x^{\lambda_k} - F(x^{\lambda_k}))$, it follows that

$\frac{1}{2}\|x^{\lambda_k} - y^{\lambda_k}\|^2 + \lambda_k\|x^{\lambda_k}\|_1 \le \frac{1}{2}\|\tilde{x} - \tilde{y}\|^2 + \lambda_k\|\tilde{x}\|_1 = \lambda_k\|\tilde{x}\|_1. \qquad (8)$

This implies that for any $\lambda_k > 0$,

$\|x^{\lambda_k}\|_1 \le \|\tilde{x}\|_1 \quad \text{and} \quad \frac{1}{2}\|x^{\lambda_k} - y^{\lambda_k}\|^2 \le \lambda_k\|\tilde{x}\|_1. \qquad (9)$

Hence the sequence $\{x^{\lambda_k}\}$ is bounded and has at least one cluster point, and so does $\{y^{\lambda_k}\}$ due to $\|x^{\lambda_k} - y^{\lambda_k}\| \le \sqrt{2\lambda_k\|\tilde{x}\|_1}$.
Let $\bar{x}$ and $\bar{y}$ be any cluster points of $\{x^{\lambda_k}\}$ and $\{y^{\lambda_k}\}$, respectively. Then there exists a subsequence of $\{(x^{\lambda_k}, y^{\lambda_k})\}$, say $\{(x^{\lambda_{k_j}}, y^{\lambda_{k_j}})\}$, such that

$\lim_{j \to \infty} x^{\lambda_{k_j}} = \bar{x}, \quad \lim_{j \to \infty} y^{\lambda_{k_j}} = \bar{y}.$

We can obtain $\bar{y} = P_{[a,b]}(\bar{x} - F(\bar{x}))$ by letting $j \to \infty$ in $y^{\lambda_{k_j}} = P_{[a,b]}(x^{\lambda_{k_j}} - F(x^{\lambda_{k_j}}))$. Letting $j \to \infty$ in (9) yields $\|\bar{x} - \bar{y}\| = 0$. Consequently, $\bar{x} = \bar{y}$, which implies $\bar{x} = P_{[a,b]}(\bar{x} - F(\bar{x}))$, i.e., $\bar{x} \in \mathrm{SOL}(F)$. Letting $\lambda_k$ tend to 0 in (9), we get $\|\bar{x}\|_1 \le \|\tilde{x}\|_1$. Then by the arbitrariness of $\tilde{x}$, we know $\bar{x}$ is a solution of problem (3). This completes the proof. □
3. Algorithm and Convergence
In this section, we give the extragradient thresholding algorithm (ETA) to solve the $l_1$ regularized projection minimization problem (6) and provide the convergence analysis of ETA.
First, we review some basic concepts about monotone operators and the properties of the projection operator, which can be found in [23].
Lemma 3.1. Let $P_K$ be the projection from $\mathbb{R}^n$ onto $K$, where $K$ is a nonempty closed convex subset of $\mathbb{R}^n$. Then we have:

(a) For any $x \in \mathbb{R}^n$ and $z \in K$,

$\langle P_K(x) - x, z - P_K(x) \rangle \ge 0. \qquad (10)$

(b) For any $x, y \in \mathbb{R}^n$,

$\|P_K(x) - P_K(y)\| \le \|x - y\|. \qquad (11)$
Using Lemma 3.1, we can obtain the following properties easily.
Lemma 3.2. Define a residue function

$r(x, \alpha) := \|x - P_{[a,b]}(x - \alpha F(x))\|. \qquad (12)$

Then the following statements are valid.

(a) For any $x \in \mathbb{R}^n$, $r(x, \alpha)/\alpha$ is non-increasing in $\alpha > 0$;

(b) for any $x \in \mathbb{R}^n$, $r(x, \alpha)$ is non-decreasing in $\alpha > 0$;

(c) for any $\alpha > 0$, $x \in \mathrm{SOL}(F)$ if and only if $r(x, \alpha) = 0$.
In this paper, we suppose the mapping $F$ is co-coercive on the set $[a, b]$, i.e., there exists a constant $c > 0$ such that

$\langle F(x) - F(y), x - y \rangle \ge c\|F(x) - F(y)\|^2, \quad \forall x, y \in [a, b].$

It is clear that a co-coercive mapping is monotone, namely,

$\langle F(x) - F(y), x - y \rangle \ge 0, \quad \forall x, y \in [a, b].$
For a given $y \in \mathbb{R}^n$ and $\lambda > 0$, we consider an unconstrained minimization subproblem:

$\min_{x \in \mathbb{R}^n} \ \frac{1}{2}\|x - y\|^2 + \lambda\|x\|_1. \qquad (13)$

Evidently, the minimizer $x^*$ of the model (13) satisfies the corresponding optimality condition

$x^* = \mathbb{S}_\lambda(y), \qquad (14)$

where the shrinkage operator $\mathbb{S}_\lambda : \mathbb{R}^n \to \mathbb{R}^n$ is defined componentwise by (see, e.g., [18])

$(\mathbb{S}_\lambda(y))_i = \mathrm{sign}(y_i)\max\{|y_i| - \lambda, 0\}, \quad i = 1, \dots, n. \qquad (15)$

It demonstrates that the solution $x^*$ of the subproblem (13) can be analytically expressed by (14).
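As a quick illustration, the shrinkage operator (15) takes a few lines; a sketch continuing the Python snippets above (numpy imported as np, function name ours):

```python
def shrink(y, lam):
    """Soft-thresholding operator S_lam from (15):
    (S_lam(y))_i = sign(y_i) * max(|y_i| - lam, 0)."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

# By (14), shrink(y, lam) is the unique minimizer of subproblem (13):
#   min_x 0.5 * ||x - y||^2 + lam * ||x||_1
```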
In what follows, we construct the extragradient thresholding algorithm (ETA) to solve the $l_1$ regularized projection minimization problem (6).
Algorithm ETA

Step 0: Choose $x^0 \in \mathbb{R}^n$, $\epsilon > 0$, $\gamma > 0$, $\sigma, \rho \in (0, 1)$, a positive sequence $\{\lambda_k\}$ with $\sum_{k=0}^{\infty}\lambda_k < +\infty$, and a positive integer $k_{\max}$; set $k = 0$.

Step 1: Compute $z^k = P_{[a,b]}(x^k - \alpha_k F(x^k))$, where $\alpha_k = \gamma\rho^{m_k}$ with $m_k$ being the smallest nonnegative integer $m$ satisfying

$\alpha_k\|F(x^k) - F(z^k)\| \le \sigma\|x^k - z^k\|. \qquad (16)$

Step 2: If $\|x^k - z^k\| \le \epsilon$ or the number of iterations is greater than $k_{\max}$, then return $x^k$ and stop. Otherwise, compute $v^k = P_{[a,b]}(x^k - \alpha_k F(z^k))$ and update $x^{k+1}$ by

$x^{k+1} = \mathbb{S}_{\lambda_k}(v^k).$

Step 3: Let $k = k + 1$, then go to Step 1.
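One ETA iteration is thus an Armijo-type extragradient step followed by a shrinkage step. The sketch below assembles the steps using the helpers proj_box and shrink defined earlier; the default parameter values are our own illustrative choices, not the paper's:

```python
def eta(F, a, b, x0, lams, gamma=1.0, sigma=0.5, rho=0.5,
        eps=1e-6, kmax=1000):
    """Extragradient thresholding algorithm (ETA) for MCP(a, b, F).
    lams: positive thresholding sequence {lambda_k}, assumed summable,
    e.g. lams = [1e-2 * 0.9**k for k in range(kmax)]."""
    x = x0.copy()
    for k in range(kmax):
        # Step 1: backtrack on alpha until the Armijo rule (16) holds.
        alpha = gamma
        z = proj_box(x - alpha * F(x), a, b)
        while alpha * np.linalg.norm(F(x) - F(z)) > sigma * np.linalg.norm(x - z):
            alpha *= rho
            z = proj_box(x - alpha * F(x), a, b)
        # Step 2: stopping test, extragradient step and thresholding.
        if np.linalg.norm(x - z) <= eps:
            break
        v = proj_box(x - alpha * F(z), a, b)
        x = shrink(v, lams[k])
    return x
```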
Define

$e(x) := x - P_{[a,b]}(x - F(x))$ and $r(x) := \|e(x)\| = r(x, 1).$

It is easy to see that $x$ is a solution of MCP(a, b, F) if and only if $r(x) = 0$.
The following lemma plays an important role in the analysis of the global convergence of the Algorithm ETA.
Lemma 3.3. Suppose that the mapping F is co-coercive on $[a, b]$ and $\mathrm{SOL}(F) \neq \emptyset$. If $z^k$ generated by ETA is not a solution of MCP(a, b, F), then for any $x^* \in \mathrm{SOL}(F)$, we have

$\langle F(z^k), x^k - x^* \rangle \ge \frac{1 - \sigma}{\alpha_k}\|x^k - z^k\|^2 > 0. \qquad (17)$

Proof. Since $x^* \in \mathrm{SOL}(F)$ and $z^k \in [a, b]$, it follows that $\langle F(x^*), z^k - x^* \rangle \ge 0$. By the co-coercivity of the mapping F, we have $\langle F(z^k) - F(x^*), z^k - x^* \rangle \ge c\|F(z^k) - F(x^*)\|^2 \ge 0$, and hence $\langle F(z^k), z^k - x^* \rangle \ge 0$. Therefore

$\langle F(z^k), x^k - x^* \rangle \ge \langle F(z^k), x^k - z^k \rangle = \langle F(x^k), x^k - z^k \rangle + \langle F(z^k) - F(x^k), x^k - z^k \rangle \ge \frac{1}{\alpha_k}\|x^k - z^k\|^2 - \frac{\sigma}{\alpha_k}\|x^k - z^k\|^2 = \frac{1 - \sigma}{\alpha_k}\|x^k - z^k\|^2,$

where the second inequality comes from Lemma 3.1(a), the Cauchy-Schwarz inequality and (16), and the last quantity is positive based on $\|x^k - z^k\| = r(x^k, \alpha_k) > 0$. □
The following theorem gives the global convergence of the algorithm ETA.
Theorem 3.4. Suppose F is co-coercive on $[a, b]$ and $\sum_{k=0}^{\infty}\lambda_k < +\infty$. If $\{x^k\}$ and $\{z^k\}$ are infinite sequences generated by the algorithm ETA, then

$\lim_{k \to \infty}\|x^k - z^k\| = 0. \qquad (18)$

Further, $\{x^k\}$ converges to a solution of the problem MCP(a, b, F).
Proof. Let $x^* \in \mathrm{SOL}(F)$. Since $v^k = P_{[a,b]}(x^k - \alpha_k F(z^k))$, by Lemma 3.1(a) we have

$\|v^k - x^*\|^2 \le \|x^k - x^*\|^2 - \|x^k - v^k\|^2 + 2\alpha_k \langle F(z^k), x^* - v^k \rangle. \qquad (19)$

Now consider the last term of Equation (19). By the monotonicity of F and $x^* \in \mathrm{SOL}(F)$, we have $\langle F(z^k), x^* - z^k \rangle \le 0$. It follows that

$2\alpha_k \langle F(z^k), x^* - v^k \rangle \le 2\alpha_k \langle F(z^k), z^k - v^k \rangle. \qquad (20)$

Replacing (20) into (19), and by Lemma 3.1(a) and (16), we deduce

$\|v^k - x^*\|^2 \le \|x^k - x^*\|^2 - \|x^k - z^k\|^2 - \|z^k - v^k\|^2 + 2\sigma\|x^k - z^k\|\|z^k - v^k\| \le \|x^k - x^*\|^2 - (1 - \sigma^2)\|x^k - z^k\|^2. \qquad (21)$

According to the definition of the shrinkage operator (15), we know that

$\|x^{k+1} - v^k\| = \|\mathbb{S}_{\lambda_k}(v^k) - v^k\| \le \sqrt{n}\lambda_k.$

Hence

$\|x^{k+1} - x^*\| \le \|v^k - x^*\| + \sqrt{n}\lambda_k \le \|x^k - x^*\| + \sqrt{n}\lambda_k,$

so the iteration has contraction properties up to the summable perturbation $\{\sqrt{n}\lambda_k\}$, which means $\{x^k\}$ is bounded; summing (21) over k, we get that (18) holds.
Since $\{x^k\}$ is bounded, the sequence $\{x^k\}$ has at least one cluster point. Let $\hat{x}$ be a cluster point of $\{x^k\}$ and let the subsequence $\{x^{k_j}\}$ converge to $\hat{x}$. Next we will show $\hat{x} \in \mathrm{SOL}(F)$; we consider two cases.

Case 1: assume that there is a positive lower bound $\alpha_{\min}$ such that $\alpha_k \ge \alpha_{\min} > 0$ for all k. Then by the inequality

$r(x^{k_j}, \alpha_{\min}) \le r(x^{k_j}, \alpha_{k_j}) = \|x^{k_j} - z^{k_j}\|, \qquad (22)$

the continuity of $r(\cdot, \alpha_{\min})$ in x and (18), we get

$r(\hat{x}, \alpha_{\min}) = \lim_{j \to \infty} r(x^{k_j}, \alpha_{\min}) = 0.$

Case 2: assume $\liminf_{j \to \infty} \alpha_{k_j} = 0$. For large enough j, the step size $\alpha_{k_j}/\rho$ violates the Armijo search (16), that is,

$\sigma\|x^{k_j} - \bar{z}^{k_j}\| < \frac{\alpha_{k_j}}{\rho}\|F(x^{k_j}) - F(\bar{z}^{k_j})\|, \quad \bar{z}^{k_j} = P_{[a,b]}\Big(x^{k_j} - \frac{\alpha_{k_j}}{\rho}F(x^{k_j})\Big);$

since co-coercivity implies $\|F(x) - F(y)\| \le \frac{1}{c}\|x - y\|$, this forces $\|x^{k_j} - \bar{z}^{k_j}\| = 0$ for all large j. Hence, by Lemma 3.2(a), we have

$\lim_{j \to \infty} r(x^{k_j}, 1) = 0. \qquad (23)$

In summary, in both cases we get $r(\hat{x}, \alpha) = 0$ for some $\alpha > 0$, and thus $\hat{x} \in \mathrm{SOL}(F)$ by Lemma 3.2(c). Replacing $x^* = \hat{x}$ into (21), we have

$\|x^{k+1} - \hat{x}\| \le \|v^k - \hat{x}\| + \sqrt{n}\lambda_k \le \|x^k - \hat{x}\| + \sqrt{n}\lambda_k.$

Since $\sum_k \lambda_k < +\infty$ and a subsequence of $\{\|x^k - \hat{x}\|\}$ tends to 0, the whole sequence $\{x^k\}$ converges to the solution $\hat{x}$. The proof is thus complete. □
4. Numerical Experiments
In this section, we present some numerical experiments to demonstrate the effectiveness of our ETA algorithm and to show that the algorithm can obtain sparse solutions of MCP(a, b, F).
We will simulate three examples to implement the ETA algorithm. Each will be run 100 times for different dimensions, and the average results will be recorded. In each experiment, we fix the parameters $\gamma$, $\sigma$, $\rho$ and $\epsilon$; other related parameters will be given in the following test examples.
4.1. Test for LCPs with Z-Matrix [18]
The test is associated with the Z-matrix, which has an important property: the LCP has a unique sparse solution when M is a suitable Z-matrix. Let us consider LCP(q, M) where $M = I_n - \frac{1}{n}ee^T$ and $q = \frac{1}{n}e - e_1$; here $I_n$ is the identity matrix of order n, $e = (1, 1, \dots, 1)^T$ and $e_1 = (1, 0, \dots, 0)^T$. Such a matrix M is widely used in statistics. It is clear that M is a positive semidefinite Z-matrix. For any scalar $t \ge 0$, we know that the vector $x(t) = e_1 + te$ is a solution to LCP(q, M), since it satisfies

$x(t) \ge 0, \quad Mx(t) + q = 0, \quad x(t)^T(Mx(t) + q) = 0.$

Among all the solutions, the vector $x^* = e_1$ is the unique sparse solution.
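The following sketch builds this test problem as reconstructed above (M the centering matrix, $q = e/n - e_1$) and checks that $x(t) = e_1 + te$ solves LCP(q, M):

```python
def zmatrix_lcp(n):
    """Z-matrix test problem: M = I - (1/n) e e^T, q = e/n - e_1.
    For every t >= 0, x = e_1 + t*e solves LCP(q, M); x* = e_1 is
    the unique sparse solution (||x*||_0 = 1)."""
    e = np.ones(n)
    M = np.eye(n) - np.outer(e, e) / n
    q = e / n
    q[0] -= 1.0
    return M, q

M, q = zmatrix_lcp(100)
for t in (0.0, 1.0, 5.0):
    x = t * np.ones(100)
    x[0] += 1.0
    w = M @ x + q
    assert np.all(x >= 0) and np.allclose(w, 0.0) and abs(x @ w) < 1e-8
```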
We choose the dimension n, the initial point $x^0$, the thresholding sequence $\{\lambda_k\}$ and the parameters $\gamma$, $\sigma$, $\rho$, $\epsilon$ and $k_{\max}$ for this test. We will take advantage of the recovery error $\|x - x^*\|$ to evaluate our algorithm. Apart from that, the average CPU time (in seconds), the average number of iterations and the residual $\|x - P_{[a,b]}(x - F(x))\|$ will also be taken into consideration in judging the performance of the method.
As indicated in Table 1, the ETA algorithm behaves very robustly: the average number of iterations is identically equal to 205, and the recovery errors and residuals are basically similar across different dimensions. In addition, the sparsity $\|x\|_0$ of the recovered solution x is in all cases equal to 1, which means the recovery is successful.
4.2. Test for LCPs with Positive Semidefinite Matrices
In this subsection, we test ETA on randomly created LCPs with positive semidefinite matrices. First, we state the way of constructing the LCPs and their solutions. Let a matrix $A \in \mathbb{R}^{n \times n}$ be generated with the standard normal distribution and let $M = AA^T$. Let the sparse vector $x^*$ be produced by choosing randomly $s$ nonzero components whose values are also randomly generated from the standard normal distribution (taken in absolute value, so that $x^* \ge 0$). After the matrix M and the sparse vector $x^*$ have been generated, a vector q can be constructed such that $x^*$ is a solution of the LCP(q, M). Then $x^*$ can be regarded as a sparse solution of the LCP(q, M). Namely,

$x^* \ge 0, \quad Mx^* + q \ge 0, \quad (x^*)^T(Mx^* + q) = 0.$

To be more specific, if $x_i^* > 0$, then choose $q_i = -(Mx^*)_i$; if $x_i^* = 0$, then choose $q_i = -(Mx^*)_i + |z_i|$, where $z_i$ is drawn from the standard normal distribution.

Table 1. ETA's computational results on LCPs with Z-matrices.
Let M and q be the input to our ETA algorithm, and take the dimension n, the sparsity s of $x^*$, the initial point $x^0$, the thresholding sequence $\{\lambda_k\}$, the parameters $\gamma$, $\sigma$, $\rho$, $\epsilon$ and $k_{\max} = \lfloor 10000/n \rfloor$, where $\lfloor 10000/n \rfloor$ denotes the largest integer not exceeding 10000/n. Then ETA will output a solution x. Similarly, the average number of iterations, the average CPU time (in seconds) and the residual will also be taken into consideration in evaluating our ETA algorithm.
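A sketch of this random construction (our code, following the description above):

```python
def random_psd_lcp(n, s, rng=np.random.default_rng(0)):
    """LCP(q, M) with M = A A^T positive semidefinite and a known
    sparse solution x* having s nonzero components."""
    A = rng.standard_normal((n, n))
    M = A @ A.T
    xstar = np.zeros(n)
    support = rng.choice(n, size=s, replace=False)
    xstar[support] = np.abs(rng.standard_normal(s))
    q = -(M @ xstar)            # (M x* + q)_i = 0 where x*_i > 0
    zero = xstar == 0
    q[zero] += np.abs(rng.standard_normal(zero.sum()))  # (M x* + q)_i >= 0
    return M, q, xstar
```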
As manifested in Table 2, the ETA algorithm performs quite efficiently. Furthermore, the sparsity $\|x\|_0$ of the recovered solution x is in all cases equal to the sparsity $\|x^*\|_0$ of the true solution, which means the recovery is exact.
4.3. Test for Co-Coercive Mixed Complementarity Problem
We now consider a co-coercive mixed complementarity problem (MCP) with

$F(x) = D(x) + Mx + q, \qquad (24)$

where $D(x)$ and $Mx + q$ are the nonlinear part and the linear part of $F(x)$, respectively. We generate the linear part of $F(x)$ in a way similar to [24]. The matrix $M = AA^T + B$, where A is an $n \times n$ matrix whose entries are randomly generated in the interval $(-5, 5)$, and a skew-symmetric matrix B is generated in the same way. In $D(x)$, the nonlinear part of $F(x)$, the components are $D_i(x) = d_i \arctan(x_i)$, where $d_i$ is a random variable in $(0, 1)$; see the similar example in [25]. Then the subsequent part of generating the sparse vector $x^*$ and the vector q such that $x^*$ is a solution of the MCP is similar to the procedure of Section 4.2. Let M and q be the input to our ETA algorithm, and take the dimension n, the sparsity s of $x^*$, the initial point $x^0$, the thresholding sequence $\{\lambda_k\}$ and the parameters $\gamma$, $\sigma$, $\rho$, $\epsilon$ and $k_{\max}$ as in Section 4.2. Then ETA will output a solution x. Similarly, the average number of iterations, the average residual, the average sparsity $\|x\|_0$ of x, and the average CPU time (in seconds) will also be taken into consideration in evaluating our ETA algorithm.
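The map (24) can be assembled as follows (our code; the vector q is then chosen as in Section 4.2, with $Mx^*$ replaced by $D(x^*) + Mx^*$, so that a sparse $x^*$ solves the problem):

```python
def make_mcp_parts(n, rng=np.random.default_rng(0)):
    """Pieces of F(x) = D(x) + M x + q from (24): M = A A^T + B with
    entries of A in (-5, 5), B skew-symmetric, D_i(x) = d_i*arctan(x_i)."""
    A = rng.uniform(-5.0, 5.0, (n, n))
    C = rng.uniform(-5.0, 5.0, (n, n))
    B = (C - C.T) / 2.0          # skew-symmetric: B^T = -B
    M = A @ A.T + B
    d = rng.uniform(0.0, 1.0, n)
    return M, d

n = 500
M, d = make_mcp_parts(n)
# With q built as in Section 4.2, the test map is:
# F = lambda x: d * np.arctan(x) + M @ x + q
```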
It is not difficult to see from Table 3 that the ETA algorithm also performs quite efficiently on such mixed complementarity problems. The sparsity $\|x\|_0$ of the recovered solutions x is in all cases equal to the sparsity $\|x^*\|_0$; that is, the recovery is exact.
Table 2. Results on randomly created LCPs with positive semidefinite matrices.
Table 3. Results on co-coercive mixed complementarity problems.
5. Conclusion
In this paper, we concentrate on finding sparse solutions of co-coercive mixed complementarity problems (MCPs). An $l_1$ regularized projection minimization model is proposed for relaxation, and an extragradient thresholding algorithm (ETA) is then designed for this regularized model. Furthermore, we analyze the convergence of this algorithm and show that any cluster point of the sequence generated by ETA is a sparse solution of MCP. Preliminary numerical results indicate that the $l_1$ regularized model as well as the ETA algorithm is promising for finding sparse solutions of MCPs.
Data Availability
All data in our numerical experiments are randomly generated as described in Section 4; no external data sets were used.
Acknowledgements
This work is supported by the Top Disciplines of Shanghai.