An Efficient Projected Gradient Method for Convex Constrained Monotone Equations with Applications in Compressive Sensing

In this paper, a modified Polak-Ribière-Polyak (PRP) conjugate gradient projection method is proposed for solving large-scale nonlinear convex constrained monotone equations, based on the projection method of Solodov and Svaiter. The obtained method has low computational complexity and converges globally. Furthermore, the method is extended to sparse signal reconstruction in compressive sensing. Numerical experiments illustrate the efficiency of the given method and show that it is suitable for large-scale problems.


Introduction
This paper is dedicated to solving the following nonlinear convex constrained monotone equations:
$$F(x) = 0, \quad x \in \Omega, \qquad (1)$$
where $F : \mathbb{R}^n \to \mathbb{R}^n$ is a continuous nonlinear mapping and the feasible region $\Omega \subset \mathbb{R}^n$ is a nonempty closed convex set, e.g. an n-dimensional box. Numerical results gained in [6] [7] [8] [9] indicate that projected conjugate gradient type methods are efficient for solving problem (1). In Section 2, the modified PRP-type conjugate gradient projection method is proposed, and some preliminary properties are studied. The global convergence results are established in Section 3. The numerical experiments and the application of the obtained method to $\ell_1$-norm regularized compressive sensing problems are discussed in Section 4. Finally, a conclusion section closes the paper.

The Proposed Method and Corresponding Algorithm
We first introduce the projection operator onto the nonempty closed convex set $\Omega$:
$$P_\Omega[x] = \arg\min\{\|y - x\| : y \in \Omega\}.$$
Recall also the classical PRP conjugate gradient method for minimizing a continuously differentiable function $f$. It generates the iteration sequence $x_{k+1} = x_k + \alpha_k d_k$, where $x_k$ is the current iteration point, $\alpha_k > 0$ is a step-length, and $d_k$ is the search direction given by
$$d_k = \begin{cases} -g_k, & k = 0, \\ -g_k + \beta_k^{PRP} d_{k-1}, & k \ge 1, \end{cases} \qquad (5)$$
where $g_k = \nabla f(x_k)$ and
$$\beta_k^{PRP} = \frac{g_k^\top (g_k - g_{k-1})}{\|g_{k-1}\|^2}. \qquad (6)$$
Combining the projection technique of Solodov and Svaiter [4] with the PRP method formed by Equation (5) and Equation (6), the modified PRP formula, Equation (7), is defined in this paper. It should be noted that conjugate gradient methods with ideas similar to Equation (7) have been studied in the papers [12]-[19].
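As a concrete illustration of the two ingredients above, the following sketch shows the Euclidean projection onto a box-shaped $\Omega$ and the classical PRP direction update of Equations (5) and (6); the paper's modified formula (7) is not reproduced here, and the function names are ours.

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection P_Omega[x] onto the box Omega = [lo, hi]^n."""
    return np.clip(x, lo, hi)

def prp_direction(F_k, F_prev, d_prev):
    """Classical PRP search direction d_k = -F_k + beta_k * d_{k-1}, with
    beta_k = F_k^T (F_k - F_{k-1}) / ||F_{k-1}||^2 (Equations (5)-(6),
    written with the residual F in place of the gradient g)."""
    beta = F_k @ (F_k - F_prev) / (F_prev @ F_prev)
    return -F_k + beta * d_prev
```

For a box, the projection is a componentwise clip, which is why box-constrained instances of problem (1) remain cheap even in high dimensions.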
The corresponding modified PRP conjugate gradient projection algorithm for solving problem (1) is stated as follows.
Algorithm 1:
Step 0. Choose an initial point $x_0 \in \Omega$, select the required constants, and set $k := 0$.
Step 1. If $\|F(x_k)\| = 0$, stop; otherwise compute the search direction $d_k$ by Equation (7).
Step 2. Determine the step-length $\alpha_k$ by the line search Equation (8) and set the trial point $z_k = x_k + \alpha_k d_k$. The hyperplane $H_k = \{x \in \mathbb{R}^n : F(z_k)^\top (x - z_k) = 0\}$ strictly separates the current point $x_k$ from the solution set of the problem.
Step 3. Compute $x_{k+1}$ by projecting $x_k$ onto $H_k$ and then onto $\Omega$; set $k := k + 1$ and go to Step 1.
The above facts and Step 3 indicate that the next iterate $x_{k+1}$ is closer to the solution set than $x_k$.
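To make the structure of Steps 0-3 concrete, here is a minimal runnable sketch of a hyperplane projection method in the Solodov-Svaiter spirit. It uses the plain direction $d_k = -F(x_k)$ instead of the paper's modified PRP direction (7), and the constants `sigma`, `rho` and the stopping tolerance are illustrative choices, not the paper's.

```python
import numpy as np

def solve_monotone(F, x0, project, sigma=1e-4, rho=0.5, tol=1e-6, max_iter=1000):
    """Hyperplane projection method sketch for F(x) = 0, x in Omega.
    `project` is the Euclidean projection P_Omega; the direction here is
    d_k = -F(x_k), not the paper's modified PRP direction (7)."""
    x = project(np.asarray(x0, dtype=float))
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        d = -Fx
        # Backtracking line search: find alpha with
        # -F(x + alpha d)^T d >= sigma * alpha * ||d||^2.
        alpha = 1.0
        while -F(x + alpha * d) @ d < sigma * alpha * (d @ d):
            alpha *= rho
        z = x + alpha * d
        Fz = F(z)
        # Project x_k onto the separating hyperplane H_k, then onto Omega.
        lam = Fz @ (x - z) / (Fz @ Fz)
        x = project(x - lam * Fz)
    return x
```

On the toy monotone map $F(x) = x$ with $\Omega = \mathbb{R}^n$, the iterates contract geometrically toward the unique solution $x^* = 0$.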

Convergence Analysis
In this section, we discuss the convergence properties of the given method. Before that, some basic assumptions on problem (1) need to be given.
Assumption 1. The mapping $F$ is Lipschitz continuous on $\Omega$, and the solution set $S$ of problem (1) is nonempty.
Then, for all $k$, the bounds Equation (12) and Equation (13) on the direction $d_k$ hold. Proof: For $k = 0$, Equation (12) and Equation (13) follow from the direct application of the definition $d_0 = -F(x_0)$. For $k \ge 1$, the claim follows from Equation (7) and the definition of $\beta_k$, where the last inequality uses the Cauchy-Schwarz inequality. In the remaining part of this paper, we assume that $F(x_k) \ne 0$ for every $k$; otherwise, a solution of problem (1) has been found. From Equation (12) and Assumption 1 we obtain Equation (14), which shows that the line search procedure Equation (8) is well defined. Moreover, Equations (15) and (16) hold; in particular, Equation (15) implies Equation (17). Proof: Let $x^* \in S$ denote an arbitrary solution of problem (1). The monotonicity of $F$ and the line search Equation (8) give Equation (18). Then Equation (3), Equation (9) and Equation (18) imply Equation (19). From the Cauchy-Schwarz inequality, the line search Equation (8), the monotonicity of $F$ and Equation (18), it follows that Equation (23) holds. Based on Equation (23) and Assumption 1, a further bound follows; substituting this relationship into Equation (19) deduces Equation (24), which implies Equation (25). From the definition of $z_k$ and Equation (15), the corresponding estimate on the steps holds. Combining the definition of $\beta_k$, Equation (3), and the Cauchy-Schwarz inequality yields a bound which, together with Equation (15), proves Equation (27). From Equation (12) and Equation (27), the directions $d_k$ are bounded above; on the other hand, Equation (13), Equation (21) and the definition of $d_k$ bound them below. Finally, Equation (14), Equation (27) and Equation (28) give a positive lower bound on $\min_k \alpha_k \|d_k\|$, which contradicts Equation (17). Thus, Equation (26) holds.
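The separation property used throughout the analysis can be made explicit. The following short derivation is standard for hyperplane projection methods of Solodov-Svaiter type and fills in the step: for any $x^* \in S$ we have $F(x^*) = 0$, so monotonicity gives

```latex
% Separation of x_k from the solution set S by the hyperplane
% H_k = { x : F(z_k)^T (x - z_k) = 0 }  (notation as in Section 2).
F(z_k)^\top (z_k - x^*) \;\ge\; F(x^*)^\top (z_k - x^*) \;=\; 0
\quad\Longrightarrow\quad
F(z_k)^\top (x^* - z_k) \;\le\; 0,
```

while the line search Equation (8) guarantees $F(z_k)^\top (x_k - z_k) > 0$. Hence every solution lies in the half-space $\{x : F(z_k)^\top (x - z_k) \le 0\}$ and $x_k$ lies strictly on the other side, so projecting $x_k$ across $H_k$ moves it toward $S$.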

Numerical Experiments
The numerical performance of the proposed Algorithm 1 on large-scale nonlinear convex constrained monotone equations with various dimensions and different initial points is studied in this section. Furthermore, Algorithm 1 is extended to solve the $\ell_1$-norm regularized problems that decode a sparse signal in compressive sensing. The algorithm is coded in MATLAB R2015a and run on a PC with a Core i5 CPU and 4 GB of memory.

Experiments on Nonlinear Convex Constrained Monotone Equations
The testing problems are listed as follows.
Problem 1 is taken from Wang et al. [5], and Problem 2 is taken from [7]. Tables 1-4 indicate that the dimension of the problem has little effect on the number of iterations of the algorithm; the computing time, however, is relatively large in the high-dimensional cases. Moreover, we can see from the results of Tables 1-4 that Algorithm 1 is more competitive than the CGD algorithm, since Algorithm 1 obtains the solution for all the test data with fewer iterations and less CPU time. Thus the results of Tables 1-4 show that our method is very efficient.
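Since the componentwise definitions of the test maps did not survive in this copy, the following is only a representative monotone map of the kind used in such benchmarks (e.g. $F_i(x) = e^{x_i} - 1$), together with a numerical monotonicity spot-check; it is not the exact Problem 1 or Problem 2.

```python
import numpy as np

def F_exp(x):
    """Hypothetical stand-in test map: F_i(x) = exp(x_i) - 1.
    Each component is increasing in x_i alone, so F is monotone on R^n."""
    return np.exp(x) - 1.0

# Spot-check monotonicity: (F(a) - F(b))^T (a - b) >= 0 at random points.
rng = np.random.default_rng(0)
a, b = rng.standard_normal(5), rng.standard_normal(5)
gap = (F_exp(a) - F_exp(b)) @ (a - b)
```

Maps of this separable form are popular in the benchmarks cited above because the unique solution ($x^* = 0$ here) is known in closed form, which makes iteration counts easy to verify.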
The numerical performances of both methods are also evaluated by using the performance profile tool of Dolan and Moré [21]. Figure 1 shows the performance of the two methods; it is obvious that the proposed MPRP method is more efficient and robust than the CGD method.

Experiments on the $\ell_1$-Norm Regularization Problem
A combination of the $\ell_2$ and $\ell_1$ norms in the cost function often emerges in signal reconstruction, i.e.
$$\min_x \ \frac{1}{2}\|y - Ax\|_2^2 + \tau \|x\|_1, \qquad (28)$$
where $y$ is the observation, $A$ is a linear operator and $\tau > 0$ is a regularization parameter. Optimization problems of the form Equation (28) appear in several signal reconstruction tasks, such as sparse signal de-blurring [22], medical image reconstruction [23], compressed sensing [24], and super-resolution [25]. Iterative line search methods or fixed point iteration schemes are commonly used to solve problem (28). By using the technique proposed by Figueiredo et al. [26], we can reformulate problem (28) as a convex quadratic program. Let $x = u - v$ with $u \ge 0$, $v \ge 0$, so that $\|x\|_1 = \mathbf{1}^\top (u + v)$ (29). Furthermore, problem (28) can then be rewritten as a standard convex quadratic program over the nonnegative orthant (30), where, following [26], $\tau$ is forced to decrease as the iterations proceed. The experiment starts at the measurement image, i.e.
$x_0 = A^\top y$, and terminates when the relative change between successive iterates falls below a given tolerance. We compare the proposed MPRP method with the CGD method on this problem.
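The $u$-$v$ splitting behind the quadratic-program reformulation can be sketched as follows, assuming the standard construction of Figueiredo et al. [26]; the names `l1_to_qp` and `qp_objective` are ours. With $z = [u; v] \ge 0$, the quadratic $\frac{1}{2} z^\top B z + c^\top z$ equals the $\ell_1$-regularized least-squares objective up to the constant $\frac{1}{2}\|y\|^2$.

```python
import numpy as np

def l1_to_qp(A, y, tau):
    """Build the bound-constrained QP data for min 0.5||y - Ax||^2 + tau||x||_1
    under the splitting x = u - v, u >= 0, v >= 0, z = [u; v]."""
    AtA, Aty = A.T @ A, A.T @ y
    n = A.shape[1]
    B = np.block([[AtA, -AtA], [-AtA, AtA]])              # QP Hessian
    c = tau * np.ones(2 * n) + np.concatenate([-Aty, Aty])  # QP linear term
    return B, c

def qp_objective(B, c, z):
    """QP objective 0.5 z^T B z + c^T z (original objective minus 0.5||y||^2)."""
    return 0.5 * z @ B @ z + c @ z
```

Because $B$ is positive semidefinite and the only constraints are $z \ge 0$, the reformulated problem fits the convex-constrained framework of problem (1) via its optimality conditions.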
In both methods, the parameters are taken as $\xi = 10$ with the other constants as before, and the results are reported in Table 5. We report the number of iterations (Niter) and the CPU time (in seconds) required for the whole testing process. From Table 5, we can see that the MPRP method performs better than the CGD method: the new method's iteration counts and CPU times are much lower than those of the CGD method. To summarize, these experimental results show that the proposed MPRP algorithm works efficiently.

Conclusion
In this paper, we proposed a conjugate gradient projection algorithm for solving large-scale nonlinear convex constrained monotone equations, based on the well-known Polak-Ribière-Polyak conjugate gradient method, which is one of the most effective conjugate gradient methods for unconstrained optimization problems. The algorithm combines the CG technique with a projection scheme and is derivative-free, so it can be applied to large-scale non-smooth equations owing to its low storage requirement. Under some technical conditions, we established the global convergence of the method. Another contribution of this paper is the application of the given method to $\ell_1$-norm regularized problems in compressive sensing.