
In this paper, a modified Polak-Ribière-Polyak conjugate gradient projection method is proposed for solving large-scale nonlinear convex constrained monotone equations, based on the projection method of Solodov and Svaiter. The obtained method has low complexity and converges globally. Furthermore, the method is extended to sparse signal reconstruction in compressive sensing. Numerical experiments illustrate the efficiency of the proposed method and show that it is suitable for large-scale problems.

This paper is dedicated to solving the following nonlinear convex constrained monotone equations:

F ( x ) = 0 , x ∈ Ω , (1)

where F : R n → R n is a continuous nonlinear mapping and the feasible region Ω ⊂ R n is a nonempty closed convex set, e.g. an n-dimensional box Ω = { x ∈ R n : l ≤ x ≤ u } . The mapping F is monotone, which means that

〈 F ( x ) − F ( y ) , x − y 〉 ≥ 0 , ∀ x , y ∈ R n , (2)

where 〈 ⋅ , ⋅ 〉 denotes the inner product of vectors. Problem (1) emerges in many fields, such as economic equilibrium problems [ ].

In Section 2, the modified PRP-type conjugate gradient projection method is proposed and some preliminary properties are studied. The global convergence results are established in Section 3. Numerical experiments and the application of the obtained method to l 1 -norm regularized compressive sensing problems are discussed in Section 4. Finally, conclusions are given in the last section.

We first introduce the projection operator P Ω [ ⋅ ] , defined as the mapping from R n to Ω :

P Ω [ x ] = arg min { ‖ y − x ‖ | y ∈ Ω } , ∀ x ∈ R n ,

where ‖ ⋅ ‖ denotes the Euclidean norm of vectors and Ω is a nonempty closed convex subset of R n .
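
When Ω is an n-dimensional box { x : l ≤ x ≤ u } , as in the example above, the projection reduces to componentwise clipping. A minimal MATLAB sketch (the function name proj_box is our own choice for illustration):

```matlab
% Projection of x onto the box {x : l <= x <= u} by componentwise clipping.
% For Omega = R^n_+, take l = zeros(n,1) and u = inf(n,1).
function p = proj_box(x, l, u)
    p = min(max(x, l), u);
end
```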

The projection operator is non-expansive, namely, for any x , y ∈ R n , the following condition holds

‖ P Ω [ y ] − P Ω [ x ] ‖ ≤ ‖ x − y ‖ . (3)

Let us first review the Polak-Ribière-Polyak (PRP) [ ] conjugate gradient method for the unconstrained optimization problem

min { f ( x ) | x ∈ R n } , (4)

where f : R n → R is continuously differentiable. It generates the iteration sequence { x k } in the form

x k + 1 = x k + α k d k , (5)

where x k is the current iteration point, α k > 0 is a step-length, and d k is the search direction given by

$$d_k = \begin{cases} -g_k + \beta_{k-1}^{PRP} d_{k-1}, & \text{if } k > 0, \\ -g_k, & \text{if } k = 0, \end{cases} \qquad (6)$$

where $\beta_{k-1}^{PRP} = \dfrac{g_k^T y_{k-1}}{\| g_{k-1} \|^2}$ and $y_{k-1} = g_k - g_{k-1}$.

Combining the projection technique of Solodov and Svaiter [ ] with a modified PRP formula, we define the search direction as

$$d_k = \begin{cases} -g_k + \dfrac{g_k^T y_{k-1}\, d_{k-1} - d_{k-1}^T g_k\, y_{k-1}}{\max\{ 2\gamma \| d_{k-1} \| \| y_{k-1} \|,\ d_{k-1}^T y_{k-1},\ \| g_{k-1} \|^2 \}}, & \text{if } k > 0, \\ -g_k, & \text{if } k = 0, \end{cases} \qquad (7)$$

where y k − 1 = g k − g k − 1 and γ > 0 is a constant.

It should be noted that the proposed direction formula Equation (7) reduces to the PRP formula if the exact line search is used. Furthermore, the sufficient descent condition holds automatically for all k, since $d_k^T g(x_k) = -\| g(x_k) \|^2$. Several conjugate gradient methods based on ideas similar to Equation (7) have been studied in [ ].
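
Before stating the full algorithm, the following MATLAB sketch shows how the direction of Equation (7) can be computed once g k is replaced by the residual F k (as in Step 1 of Algorithm 1 below). The function name mprp_direction and the argument order are our own choices:

```matlab
% Search direction of Equation (7), with g_k replaced by the residual F_k.
% Fk   : current residual F(x_k)        Fkm1 : previous residual F(x_{k-1})
% dkm1 : previous direction d_{k-1}     gamma: positive constant in Equation (7)
function d = mprp_direction(Fk, Fkm1, dkm1, gamma)
    y = Fk - Fkm1;
    denom = max([2*gamma*norm(dkm1)*norm(y), dkm1'*y, norm(Fkm1)^2]);
    d = -Fk + ((Fk'*y)*dkm1 - (dkm1'*Fk)*y) / denom;
end
```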

The corresponding modified PRP conjugate gradient projection algorithm for solving problem (1) is stated as follows.

Algorithm 1:

Step 0 Choose an initial point x 0 ∈ Ω , select constants ρ ∈ ( 0 , 1 ) , γ > 0 , σ > 0 , ξ > 0 , ϵ ∈ ( 0 , 1 ) , and set d 0 = − F ( x 0 ) . Let k : = 0 .

Step 1 If ‖ F ( x k ) ‖ ≤ ϵ , stop. Otherwise compute search direction d k by Equation (7) with g k and g k − 1 replaced by F k and F k − 1 , respectively.

Step 2 Let z k = x k + α k d k , where α k = max { ξ ρ^i | i = 0 , 1 , ⋯ } such that

− 〈 F ( x k + α k d k ) , d k 〉 ≥ σ α k ‖ d k ‖ 2 . (8)

Step 3 If ‖ F ( z k ) ‖ ≤ ϵ , stop and let x k + 1 = z k . Otherwise compute the next iteration by

x k + 1 = P Ω [ x k − β k F ( z k ) ] , (9)

where

$$\beta_k = \frac{\langle F(z_k), x_k - z_k \rangle}{\| F(z_k) \|^2}. \qquad (10)$$

Step 4 Let k : = k + 1 , and go to Step 1.
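
The following MATLAB sketch outlines Algorithm 1 for a box-constrained feasible set. It is a minimal illustration rather than the authors' reference implementation: the helpers proj_box and mprp_direction are the sketches given above, F is a function handle returning F(x), and the parameter values are placeholders chosen only for concreteness.

```matlab
% Minimal sketch of Algorithm 1 (MPRP projection method) on Omega = {x : l <= x <= u}.
function [x, iter] = mprp_solve(F, x0, l, u)
    rho = 0.5; gamma = 1; sigma = 1e-4; xi = 1; epsil = 1e-6; maxit = 2000;  % illustrative values
    x = proj_box(x0, l, u);          % ensure the starting point is feasible
    Fx = F(x);  d = -Fx;             % Step 0: d_0 = -F(x_0)
    for iter = 0:maxit
        if norm(Fx) <= epsil, return; end                 % Step 1: stopping test
        alpha = xi;                                       % Step 2: backtracking line search (8)
        while -F(x + alpha*d)'*d < sigma*alpha*norm(d)^2
            alpha = rho*alpha;
        end
        z = x + alpha*d;  Fz = F(z);
        if norm(Fz) <= epsil, x = z; return; end          % Step 3: stopping test at z_k
        beta = (Fz'*(x - z)) / norm(Fz)^2;                % Equation (10)
        xnew = proj_box(x - beta*Fz, l, u);               % Equation (9)
        Fnew = F(xnew);
        d = mprp_direction(Fnew, Fx, d, gamma);           % direction (7) for the next iteration
        x = xnew;  Fx = Fnew;                             % Step 4: k := k + 1
    end
end
```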

Remark 1: In Algorithm 1, the step size α k determined by Equation (8) ensures that

〈 F ( z k ) , x k − z k 〉 > 0 ,

where z k = x k + α k d k and d k is the search direction. Moreover, for any x * such that F ( x * ) = 0 , the inequality

〈 F ( z k ) , x * − z k 〉 ≤ 0.

follows from the monotonicity of F ( x ) . This means that the hyperplane

H k = { x ∈ R n | 〈 F ( z k ) , x − z k 〉 = 0 }

strictly separates the current point x k from the solution set of the problem. The above facts and Step 3 indicate that the next iterate x k + 1 is obtained by first projecting x k onto the hyperplane H k and then projecting the result back onto the feasible set Ω .
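
To make this interpretation explicit, the projection of x k onto the hyperplane H k can be written in closed form (a standard computation, not an extra assumption of the paper), and it coincides with the trial point used in Equation (9):

$$P_{H_k}[x_k] = x_k - \frac{\langle F(z_k), x_k - z_k \rangle}{\| F(z_k) \|^2}\, F(z_k) = x_k - \beta_k F(z_k).$$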

In this section, we discuss the convergence properties of the given method. We first state some basic assumptions on problem (1).

Assumption 1: The mapping F is Lipschitz continuous on Ω with constant L > 0 , written F ∈ Lip ( Ω ) , i.e., for every x , y ∈ Ω ,

‖ F ( x ) − F ( y ) ‖ ≤ L ‖ x − y ‖ . (11)

Assumption 2: The solution set of problem (1), denoted by S, is nonempty and convex.

For conjugate gradient methods, the sufficient descent property is essential in the convergence analysis. The following lemma shows that the search directions { d k } generated by Algorithm 1 satisfy the sufficient descent condition independently of the line search.

Lemma 1: Let the sequences { x k } and { d k } be generated by Algorithm 1. Then, for all k ≥ 0 ,

F ( x k ) T d k = − ‖ F ( x k ) ‖ 2 , (12)

and

$$\| d_k \| \le \left( 1 + \frac{1}{\gamma} \right) \| F(x_k) \|. \qquad (13)$$

Proof: For k = 0 , Equation (12) and Equation (13) follow directly from d 0 = − F ( x 0 ) . For k ≥ 0 , using the definition of the search direction d k + 1 in Equation (7), it follows that

$$d_{k+1}^T F_{k+1} = -\| F_{k+1} \|^2 + \left[ \frac{F_{k+1}^T y_k\, d_k - d_k^T F_{k+1}\, y_k}{\max\{ 2\gamma \| d_k \| \| y_k \|,\ d_k^T y_k,\ \| F_k \|^2 \}} \right]^T F_{k+1} = -\| F_{k+1} \|^2,$$

similarly,

$$\begin{aligned} \| d_{k+1} \| &= \left\| -F_{k+1} + \frac{F_{k+1}^T y_k\, d_k - d_k^T F_{k+1}\, y_k}{\max\{ 2\gamma \| d_k \| \| y_k \|,\ d_k^T y_k,\ \| F_k \|^2 \}} \right\| \\ &\le \| F_{k+1} \| + \frac{\| F_{k+1} \| \| y_k \| \| d_k \| + \| d_k \| \| F_{k+1} \| \| y_k \|}{\max\{ 2\gamma \| d_k \| \| y_k \|,\ d_k^T y_k,\ \| F_k \|^2 \}} \le \left( 1 + \frac{1}{\gamma} \right) \| F_{k+1} \|, \end{aligned}$$

where the last inequality follows from the fact

$$\max\{ 2\gamma \| d_k \| \| y_k \|,\ d_k^T y_k,\ \| F_k \|^2 \} \ge 2\gamma \| d_k \| \| y_k \|.$$

In the remainder of this paper, we assume that F k ≠ 0 for all k ≥ 0 ; otherwise, a solution of problem (1) has already been found.

Lemma 2: Let the sequences { x k } and { z k } be generated by Algorithm 1. Suppose that Assumption 1 holds. Then there exists a positive number α k satisfying Equation (8) for all k ≥ 0 .

Proof: The line search ensures that if α k ≠ ξ , then $\alpha'_k = \rho^{-1} \alpha_k$ does not satisfy Equation (8), namely,

− 〈 F ( z ′ k ) , d k 〉 < σ α ′ k ‖ d k ‖ 2 ,

where z ′ k = x k + α ′ k d k . From Equation (12) and Assumption 1 we have

$$\| F_k \|^2 = -\langle F_k, d_k \rangle = \langle F(z'_k) - F(x_k), d_k \rangle - \langle F(z'_k), d_k \rangle \le L \alpha'_k \| d_k \|^2 + \sigma \alpha'_k \| d_k \|^2 \le \rho^{-1} \alpha_k (L + \sigma) \| d_k \|^2,$$

which means that

$$\alpha_k \ge \min\left\{ \xi,\ \frac{\rho}{L + \sigma} \frac{\| F_k \|^2}{\| d_k \|^2} \right\}. \qquad (14)$$

The bound in Equation (14) shows that the line search procedure in Equation (8) always terminates in a finite number of steps.

Lemma 3: Let the sequences { x k } and { z k } be generated by Algorithm 1. Suppose that Assumptions 1 and 2 hold. Then both { x k } and { z k } are bounded. Moreover, we have

lim k → ∞ ‖ x k − z k ‖ = 0 , (15)

and

lim k → ∞ ‖ x k + 1 − x k ‖ = 0. (16)

In particular, Equation (15) implies that

lim k → ∞ α k ‖ d k ‖ = 0. (17)

Proof: Let x * ∈ S denote an arbitrary solution of problem (1). The monotonicity of F and the line search Equation (8) yield

$$\langle F(z_k), x_k - x^* \rangle \ge \langle F(z_k), x_k - z_k \rangle \ge \sigma \alpha_k^2 \| d_k \|^2 \ge 0. \qquad (18)$$

Equation (3), Equation (9) and Equation (18) imply

$$\begin{aligned} \| x_{k+1} - x^* \|^2 &= \| P_\Omega[x_k - \beta_k F(z_k)] - x^* \|^2 \le \| x_k - \beta_k F(z_k) - x^* \|^2 \\ &= \| x_k - x^* \|^2 - 2\beta_k \langle F(z_k), x_k - x^* \rangle + \beta_k^2 \| F(z_k) \|^2 \\ &\le \| x_k - x^* \|^2 - 2\beta_k \langle F(z_k), x_k - z_k \rangle + \beta_k^2 \| F(z_k) \|^2 \\ &\le \| x_k - x^* \|^2 - \frac{\langle F(z_k), x_k - z_k \rangle^2}{\| F(z_k) \|^2} \le \| x_k - x^* \|^2 - \frac{\sigma^2 \| x_k - z_k \|^4}{\| F(z_k) \|^2}. \end{aligned} \qquad (19)$$

Equation (19) shows that the sequence { ‖ x k − x * ‖ } is decreasing, hence convergent, and that ‖ x k − x * ‖ ≤ ‖ x 0 − x * ‖ for all k; in particular, the sequence { x k } is bounded. Then, by Assumption 1, we have

‖ F ( x k ) ‖ = ‖ F ( x k ) − F ( x * ) ‖ ≤ L ‖ x k − x * ‖ ≤ L ‖ x 0 − x * ‖ . (20)

Letting M 1 = L ‖ x 0 − x * ‖ , it follows that

‖ F ( x k ) ‖ ≤ M 1 , ∀ k ≥ 0. (21)

From the Cauchy-Schwarz inequality, the line search Equation (8), the monotonicity of F and Equation (18), it follows that

$$0 < \sigma \| x_k - z_k \|^2 \le \langle F(z_k), x_k - z_k \rangle \le \langle F(x_k), x_k - z_k \rangle \le \| F(x_k) \| \| x_k - z_k \|,$$

and hence

$$\sigma \| x_k - z_k \| \le \| F(x_k) \| \le M_1, \qquad (22)$$

which shows that the sequence { z k } is bounded. Consequently, the sequence { ‖ z k − x * ‖ } is also bounded, i.e., there exist M 2 > 0 and k 0 ≥ 0 such that

‖ z k − x * ‖ ≤ M 2 , ∀ k ≥ k 0 . (23)

From Equation (23) and Assumption 1, it follows that

‖ F ( z k ) ‖ = ‖ F ( z k ) − F ( x * ) ‖ ≤ L ‖ z k − x * ‖ ≤ L M 2 . (24)

Substituting the above relation into Equation (19) and summing over k, we deduce

$$\frac{\sigma^2}{(L M_2)^2} \sum_{k=0}^{\infty} \| x_k - z_k \|^4 \le \sum_{k=0}^{\infty} \left( \| x_k - x^* \|^2 - \| x_{k+1} - x^* \|^2 \right) < \infty, \qquad (25)$$

which implies

lim k → ∞ ‖ x k − z k ‖ = 0.

From the definition of z k and Equation (15), it holds that

lim k → ∞ α k ‖ d k ‖ = 0.

Combining the definition of β k , Equation (3), and the Cauchy-Schwarz inequality, we have

$$\| x_{k+1} - x_k \| = \| P_\Omega[x_k - \beta_k F(z_k)] - x_k \| \le \| x_k - \beta_k F(z_k) - x_k \| = \frac{\langle F(z_k), x_k - z_k \rangle}{\| F(z_k) \|} \le \| x_k - z_k \|,$$

which, together with Equation (15), proves Equation (16).

Theorem 1: Let the sequences { x k } and { z k } be generated by Algorithm 1. Suppose that Assumptions 1 and 2 hold. Then

$$\liminf_{k \to \infty} \| F_k \| = 0. \qquad (26)$$

Proof: We prove this theorem by contradiction. Assume that Equation (26) does not hold; then there exists ε > 0 such that

‖ F k ‖ ≥ ε , ∀ k ≥ 0. (27)

From Equation (12) and Equation (27),

$$\| d_k \|^2 = \| d_k + F_k - F_k \|^2 = \| d_k + F_k \|^2 - 2 \langle d_k + F_k, F_k \rangle + \| F_k \|^2 \ge -2 \langle d_k, F_k \rangle - \| F_k \|^2 = \| F_k \|^2,$$

which implies

‖ d k ‖ ≥ ε , ∀ k ≥ 0. (28)

On the other hand, Equation (13), Equation (21) and the definition of d k imply that

$$\| d_k \| \le \left( 1 + \frac{1}{\gamma} \right) \| F_k \| \le \left( 1 + \frac{1}{\gamma} \right) M_1, \quad \forall k \ge 0.$$

Finally, from Equation (14), Equation (27) and Equation (28),

$$\alpha_k \| d_k \| \ge \min\left\{ \xi,\ \frac{\rho}{L + \sigma} \frac{\| F_k \|^2}{\| d_k \|^2} \right\} \| d_k \| \ge \min\left\{ \xi \varepsilon,\ \frac{\rho \varepsilon^2}{(L + \sigma)(1 + \gamma^{-1}) M_1} \right\},$$

which contradicts Equation (17). Thus, Equation (26) holds.

The numerical performance of the proposed Algorithm 1 on large-scale nonlinear convex constrained monotone equations with various dimensions and different initial points is studied in this section. Furthermore, Algorithm 1 is extended to solve the l 1 -norm regularized problem of decoding a sparse signal in compressive sensing. The algorithm is coded in MATLAB R2015a and run on a PC with a Core i5 CPU and 4 GB of memory.

The testing problems are listed as follows.

Problem 1. (Wang et al. [ ]) The mapping F is given by

$$F_i(x) = e^{x_i} - 1, \quad i = 1, 2, 3, \cdots, n,$$

and Ω = R^n_+ .

Problem 2. The example is taken from [ ], with F given by

$$F_i(x) = 2 x_i - \sin(x_i), \quad i = 1, 2, 3, \cdots, n,$$

and Ω = R^n_+ .

Problem 3. The example is taken from [ ], with the mapping given by

$$g_1(x) = x_1 - e^{\cos\left(\frac{x_1 + x_2}{n+1}\right)}, \qquad g_i(x) = x_i - e^{\cos\left(\frac{x_{i-1} + x_i + x_{i+1}}{n+1}\right)}, \quad i = 2, 3, \cdots, n-1, \qquad g_n(x) = x_n - e^{\cos\left(\frac{x_{n-1} + x_n}{n+1}\right)},$$

and Ω = R^n_+ .

Problem 4. The example is taken from [ ], with F given by

$$F_i(x) = x_i - \sin(|x_i - 1|), \quad i = 1, 2, 3, \cdots, n,$$

and Ω = { x ∈ R n | ∑ i = 1 n x i ≤ n , x i ≥ − 1 , i = 1 , 2 , ⋯ , n } .
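
For reproducibility, the four test mappings can be coded as vectorized MATLAB function handles. This is our own sketch of the definitions above; in particular, the readings of Problem 3 as e^{cos(·)} of the averaged neighbours and of Problem 4 as sin(|x_i − 1|) follow the formulas as listed here.

```matlab
% Vectorized residual mappings for Problems 1-4 (x is a column vector).
F1 = @(x) exp(x) - 1;                 % Problem 1, Omega = R^n_+
F2 = @(x) 2*x - sin(x);               % Problem 2, Omega = R^n_+
F3 = @(x) x - exp(cos((x + [0; x(1:end-1)] + [x(2:end); 0]) / (numel(x) + 1)));
                                      % Problem 3, Omega = R^n_+
F4 = @(x) x - sin(abs(x - 1));        % Problem 4
% The feasible set of Problem 4, {x : sum(x) <= n, x_i >= -1}, is the
% intersection of a halfspace and a box, so its projection is no longer a
% simple componentwise clipping.
```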

For convenience, MPRP denotes the proposed Algorithm 1. We compare the MPRP method with the CGD method [ ].

Numerical results are shown in Tables 1-4, in which Init (Dim), NI and NF denote the initial point (dimension), the number of iterations and the number of function evaluations, respectively; ‖ F ( x ) ‖ is the final Euclidean norm of the residual, and Time is the CPU time in seconds.

Tables 1-4 indicate that the dimension of the problem has little effect on the number of iterations, although the computing time grows for the high-dimensional cases. Moreover, we can see from Tables 1-4 that Algorithm 1 is more competitive than the CGD algorithm, since it solves the test problems with fewer iterations and less CPU time in most cases. The results of Tables 1-4 therefore show that our method is efficient.

The numerical performance of both methods is also evaluated using the performance profile tool of Dolan and Moré [ ].

Table 1. Numerical results of MPRP and CGD for Problem 1.

| Init (Dim) | MPRP NI/NF/‖F(x)‖ | MPRP Time | CGD NI/NF/‖F(x)‖ | CGD Time |
|---|---|---|---|---|
| x_{1}(10000) | 11/25/3.84830e-006 | 0.13 | 16/185/5.84301e-006 | 0.50 |
| x_{2}(10000) | 5/11/9.76367e-006 | 0.08 | 5/11/9.59824e-006 | 0.09 |
| x_{3}(10000) | 11/24/2.98167e-006 | 0.15 | 10/21/4.63815e-006 | 0.13 |
| x_{4}(10000) | 11/27/6.30895e-006 | 0.14 | 12/28/5.36760e-006 | 0.16 |
| x_{5}(10000) | 14/31/8.16510e-006 | 0.15 | 30/61/9.55151e-006 | 0.28 |
| x_{1}(50000) | 11/25/3.84897e-006 | 0.42 | 19/261/5.42831e-006 | 2.92 |
| x_{2}(50000) | 5/11/4.36715e-006 | 0.21 | 5/11/4.29309e-006 | 0.25 |
| x_{3}(50000) | 11/24/6.66722e-006 | 0.45 | 11/23/2.07424e-006 | 0.49 |
| x_{4}(50000) | 12/29/3.52681e-006 | 0.56 | 13/30/2.40047e-006 | 0.62 |
| x_{5}(50000) | 15/33/5.67816e-006 | 0.63 | 32/65/7.69276e-006 | 1.26 |
| x_{1}(100000) | 11/25/3.84906e-006 | 0.78 | 16/192/7.19276e-006 | 3.26 |
| x_{2}(100000) | 5/11/3.08810e-006 | 0.39 | 5/11/3.03573e-006 | 0.47 |
| x_{3}(100000) | 11/24/9.42888e-006 | 0.86 | 11/23/2.93342e-006 | 0.94 |
| x_{4}(100000) | 12/29/4.98767e-006 | 1.03 | 13/30/3.39477e-006 | 1.21 |
| x_{5}(100000) | 15/33/8.06362e-006 | 1.22 | 33/67/6.52318e-006 | 2.61 |

Table 2. Numerical results of MPRP and CGD for Problem 2.

| Init (Dim) | MPRP NI/NF/‖F(x)‖ | MPRP Time | CGD NI/NF/‖F(x)‖ | CGD Time |
|---|---|---|---|---|
| x_{1}(10000) | 10/21/5.34065e-006 | 0.09 | 17/141/7.20891e-006 | 0.25 |
| x_{2}(10000) | 5/11/3.20000e-006 | 0.07 | 5/11/9.60000e-006 | 0.07 |
| x_{3}(10000) | 10/21/3.73273e-006 | 0.09 | 11/23/4.43164e-006 | 0.11 |
| x_{4}(10000) | 9/20/5.61741e-006 | 0.11 | 11/23/4.83649e-006 | 0.10 |
| x_{5}(10000) | 13/27/3.34488e-006 | 0.11 | 31/63/9.40955e-006 | 0.21 |
| x_{1}(50000) | 10/21/5.34094e-006 | 0.21 | 16/125/9.89342e-006 | 0.61 |
| x_{2}(50000) | 4/9/7.15542e-006 | 0.12 | 5/11/4.29325e-006 | 0.14 |
| x_{3}(50000) | 10/21/8.34663e-006 | 0.21 | 11/23/9.90944e-006 | 0.24 |
| x_{4}(50000) | 10/22/2.51218e-006 | 0.22 | 12/25/2.16294e-006 | 0.28 |
| x_{5}(50000) | 13/27/7.50144e-006 | 0.27 | 19/67/3.81389e-006 | 0.47 |
| x_{1}(100000) | 10/21/5.34097e-006 | 0.12 | 16/125/8.77333e-006 | 1.17 |
| x_{2}(100000) | 4/9/5.05964e-006 | 0.06 | 5/11/3.03579e-006 | 0.22 |
| x_{3}(100000) | 11/23/2.36078e-006 | 0.14 | 12/25/2.80281e-006 | 0.51 |
| x_{4}(100000) | 10/22/3.55276e-006 | 0.13 | 12/25/3.05886e-006 | 0.48 |
| x_{5}(100000) | 14/29/2.96451e-006 | 0.18 | 19/95/7.73838e-006 | 1.03 |

Table 3. Numerical results of MPRP and CGD for Problem 3.

| Init (Dim) | MPRP NI/NF/‖F(x)‖ | MPRP Time | CGD NI/NF/‖F(x)‖ | CGD Time |
|---|---|---|---|---|
| x_{1}(10000) | 13/27/4.52180e-006 | 0.24 | 13/62/5.75833e-006 | 0.40 |
| x_{2}(10000) | 13/27/4.52370e-006 | 0.25 | 13/70/3.46424e-006 | 0.43 |
| x_{3}(10000) | 13/27/2.86185e-006 | 0.23 | 20/41/4.30230e-006 | 0.34 |
| x_{4}(10000) | 12/25/4.29188e-006 | 0.23 | 12/46/6.14493e-007 | 0.32 |
| x_{5}(10000) | 13/27/4.47547e-006 | 0.24 | 14/62/7.25107e-006 | 0.42 |
| x_{1}(50000) | 13/27/7.25565e-006 | 1.00 | 13/54/8.81290e-006 | 1.61 |
| x_{2}(50000) | 13/27/7.25860e-006 | 0.98 | 12/39/9.12282e-006 | 1.23 |
| x_{3}(50000) | 13/27/4.59974e-006 | 0.99 | 13/43/2.72069e-006 | 1.36 |
| x_{4}(50000) | 12/25/6.91496e-006 | 0.93 | 13/49/6.63736e-006 | 1.52 |
| x_{5}(50000) | 13/27/8.32203e-006 | 1.00 | 13/64/4.25399e-006 | 1.82 |
| x_{1}(100000) | 13/27/9.92424e-006 | 1.91 | 13/38/3.45225e-006 | 2.42 |
| x_{2}(100000) | 14/29/2.85975e-006 | 2.06 | 13/34/1.70785e-006 | 2.30 |
| x_{3}(100000) | 13/27/6.47855e-006 | 1.96 | 13/41/8.71814e-006 | 2.60 |
| x_{4}(100000) | 12/25/9.70603e-006 | 1.82 | 13/51/2.66088e-006 | 2.99 |
| x_{5}(100000) | 14/29/2.82988e-006 | 2.11 | 14/46/6.28570e-006 | 2.80 |

Table 4. Numerical results of MPRP and CGD for Problem 4.

| Init (Dim) | MPRP NI/NF/‖F(x)‖ | MPRP Time | CGD NI/NF/‖F(x)‖ | CGD Time |
|---|---|---|---|---|
| x_{1}(10000) | 16/49/3.81881e-006 | 0.17 | 17/83/4.26081e-006 | 0.21 |
| x_{2}(10000) | 11/34/4.33069e-006 | 0.11 | 18/54/5.36860e-006 | 0.17 |
| x_{3}(10000) | 11/34/2.49346e-006 | 0.11 | 19/56/7.93477e-006 | 0.17 |
| x_{4}(10000) | 11/32/8.70634e-006 | 0.10 | 20/58/9.36855e-006 | 0.17 |
| x_{5}(10000) | 13/40/3.26840e-006 | 0.12 | 31/91/6.47378e-006 | 0.23 |
| x_{1}(50000) | 16/49/9.28932e-006 | 0.37 | 18/66/4.06062e-006 | 0.52 |
| x_{2}(50000) | 11/34/9.68593e-006 | 0.26 | 19/57/4.81014e-006 | 0.49 |
| x_{3}(50000) | 11/34/5.57554e-006 | 0.25 | 20/59/7.11258e-006 | 0.50 |
| x_{4}(50000) | 12/35/4.06815e-006 | 0.28 | 21/61/8.39779e-006 | 0.51 |
| x_{5}(50000) | 13/40/7.33466e-006 | 0.36 | 43/127/9.90609e-006 | 0.95 |
| x_{1}(100000) | 17/52/4.28438e-006 | 0.25 | 19/59/1.32958e-006 | 0.95 |
| x_{2}(100000) | 12/37/2.86250e-006 | 0.18 | 19/57/6.80218e-006 | 0.98 |
| x_{3}(100000) | 11/34/7.88501e-006 | 0.18 | 21/62/4.03228e-006 | 0.98 |
| x_{4}(100000) | 12/35/5.75324e-006 | 0.17 | 22/64/4.76089e-006 | 1.09 |
| x_{5}(100000) | 14/43/2.89653e-006 | 0.20 | 18/52/7.84045e-006 | 0.89 |

A cost function combining the l 2 and l 1 norms often arises in signal reconstruction, i.e.,

$$\min_x\ \frac{1}{2} \| y - A x \|_2^2 + \lambda \| x \|_1, \qquad (29)$$

where ‖ . ‖ 2 is the Euclidean norm, and

$$\| x \|_1 = \sum_{j=1}^{n} | x_j |$$

is the l 1 norm, A ∈ R m × n is the system matrix, y ∈ R m is the observed data, x ∈ R n is the signal to be reconstructed, and λ is a positive regularization parameter.

Optimization problems of the form (29) appear in several signal reconstruction settings, such as sparse signal de-blurring [ ]. By splitting x = u − v with u ≥ 0 and v ≥ 0, problem (29) can be reformulated as

$$\min_{u,v}\ \frac{1}{2} \| y - A(u - v) \|_2^2 + \lambda e_n^T u + \lambda e_n^T v, \quad \text{s.t. } u \ge 0,\ v \ge 0. \qquad (30)$$
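
Here e_n denotes the n-dimensional vector of all ones. The splitting behind Equation (30) is the standard positive/negative decomposition used, e.g., in gradient projection methods for sparse reconstruction; written out,

$$u_i = \max\{x_i, 0\}, \quad v_i = \max\{-x_i, 0\}, \quad x = u - v, \quad |x_i| = u_i + v_i, \quad \| x \|_1 = e_n^T u + e_n^T v,$$

and conversely any feasible pair (u, v) satisfies e_n^T u + e_n^T v ≥ ‖ u − v ‖_1, so the two problems attain the same optimal value.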

Furthermore, problem (30) can be rewritten as a standard convex quadratic program:

$$\min_z\ \frac{1}{2} z^T B z + c^T z, \quad \text{s.t. } z \ge 0, \qquad (31)$$

where

$$z = \begin{pmatrix} u \\ v \end{pmatrix}, \quad c = \lambda e_{2n} + \begin{pmatrix} -b \\ b \end{pmatrix}, \quad b = A^T y, \quad B = \begin{pmatrix} A^T A & -A^T A \\ -A^T A & A^T A \end{pmatrix},$$

and B is a positive semi-definite matrix. Recently, problem (31) was reformulated by Xiao et al. [ ] as a linear variational inequality (LVI) problem, which is equivalent to the nonlinear equation

F ( z ) = min { z , B z + c } = 0 , (32)

where the minimum is taken componentwise and F ( z ) is Lipschitz continuous. This result indicates that problem (29) can be solved by the MPRP projection method.
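
Because B has the block structure above, F(z) can be evaluated using only matrix-vector products with A and A^T, which is what makes the approach practical for large n. A minimal MATLAB sketch under these definitions (the function name lcp_residual is ours):

```matlab
% Residual map F(z) = min(z, B*z + c) of Equation (32), evaluated without
% forming B explicitly: for z = [u; v], B*z = [w; -w] with w = A'*(A*(u - v)).
% A: m-by-n sensing matrix, y: observations, lambda > 0: regularization parameter.
function Fz = lcp_residual(z, A, y, lambda)
    n  = size(A, 2);
    b  = A' * y;
    c  = lambda * ones(2*n, 1) + [-b; b];
    w  = A' * (A * (z(1:n) - z(n+1:end)));
    Fz = min(z, [w; -w] + c);        % componentwise minimum
end
```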

In this part of the numerical experiments, a compressive sensing scenario is considered: a length-n sparse signal is reconstructed from m observations, where m ≪ n . The quality of the restoration is measured by the mean squared error (MSE) with respect to the original signal x ¯ , that is,

$$\mathrm{MSE} = \frac{1}{n} \| \bar{x} - x^* \|^2,$$

where x * is the restored signal. In the experiments, n = 2^{12} and m = 2^{10} , and the original signal contains 2^{6} randomly placed non-zero elements. A is a Gaussian matrix generated by MATLAB's randn ( m , n ) , and the measurement y contains noise:

y = A x ¯ + ω ,

where ω is Gaussian noise distributed as N ( 0 , 10^{−4} ) . The merit function is

$$f(x) = \frac{1}{2} \| y - A x \|_2^2 + \tau \| x \|_1,$$

where τ is forced to decrease by a continuation strategy as the iterations proceed. The experiment starts from the initial point x 0 = A T y and terminates when the relative change of the objective satisfies

$$\mathrm{Tol} = \frac{| f_k - f_{k-1} |}{| f_{k-1} |} < 10^{-5},$$

where f k is the function value at x k .
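
A sketch of this experimental setup in MATLAB is given below. The variable names are our own, the nonzero entries of the true signal are drawn from a standard normal distribution as an illustrative choice, and the solver call is a placeholder for whichever method (MPRP or CGD) is being tested.

```matlab
% Compressive sensing test setup: recover a sparse xbar from y = A*xbar + noise.
n = 2^12;  m = 2^10;  k = 2^6;                  % signal length, measurements, nonzeros
xbar = zeros(n, 1);
idx = randperm(n);
xbar(idx(1:k)) = randn(k, 1);                   % k randomly placed nonzero entries (illustrative values)
A = randn(m, n);                                % Gaussian sensing matrix
y = A * xbar + 1e-2 * randn(m, 1);              % additive Gaussian noise with variance 1e-4
x0 = A' * y;                                    % starting point of the iteration
% ... run MPRP (or CGD) on f(x) = 0.5*||y - A*x||^2 + tau*||x||_1, decreasing tau
%     by continuation, until abs(f_k - f_{k-1})/abs(f_{k-1}) < 1e-5 ...
% xstar = ...;                                  % recovered signal returned by the solver
% mse = norm(xbar - xstar)^2 / n;               % mean squared error reported in Table 5
```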

We compare the proposed MPRP method with the CGD method for this problem. In both methods, the parameters are taken as ξ = 10 , σ = 10^{−4} and ρ = 0.5 . The same initial point and the same continuation technique on the parameter τ are used in both methods.

Table 5. Numerical results of MPRP and CGD for 15 sparse signal reconstruction trials.

| MPRP MSE | MPRP Niter | MPRP CPU(s) | CGD MSE | CGD Niter | CGD CPU(s) |
|---|---|---|---|---|---|
| 9.152e-006 | 119 | 2.69 | 2.278e-005 | 227 | 6.73 |
| 1.562e-005 | 120 | 3.23 | 6.210e-005 | 172 | 4.72 |
| 6.780e-006 | 127 | 3.47 | 2.520e-005 | 209 | 5.83 |
| 8.236e-006 | 124 | 3.05 | 3.367e-005 | 236 | 7.48 |
| 1.446e-005 | 120 | 3.09 | 8.207e-005 | 167 | 4.64 |
| 9.091e-006 | 110 | 2.25 | 4.870e-005 | 221 | 6.09 |
| 8.346e-006 | 122 | 3.31 | 5.382e-005 | 174 | 5.47 |
| 8.669e-006 | 117 | 3.23 | 4.233e-005 | 216 | 5.91 |
| 6.977e-006 | 123 | 3.33 | 3.839e-005 | 210 | 5.78 |
| 8.973e-006 | 122 | 3.70 | 3.789e-005 | 225 | 6.88 |
| 1.050e-005 | 119 | 3.30 | 5.531e-005 | 208 | 5.86 |
| 1.204e-005 | 128 | 2.63 | 5.370e-005 | 204 | 5.27 |
| 6.265e-006 | 111 | 3.52 | 1.873e-005 | 202 | 6.66 |
| 8.977e-006 | 129 | 3.70 | 3.035e-005 | 222 | 6.28 |
| 7.975e-006 | 126 | 3.47 | 6.946e-005 | 172 | 4.78 |

Table 5 reports, for 15 independent trials, the MSE, the number of iterations (Niter) and the CPU time (in seconds) required for the whole testing process. From these results, MPRP attains a smaller MSE with fewer iterations and less CPU time than CGD in every trial, which shows that MPRP is also efficient for this class of problems.

In this paper, we proposed a conjugate gradient projection algorithm for solving large-scale nonlinear convex constrained monotone equations, based on the well-known Polak-Ribière-Polyak conjugate gradient method, one of the most effective conjugate gradient methods for unconstrained optimization. The algorithm combines the CG technique with a projection scheme and is derivative-free, so, thanks to its low storage requirement, it can be applied to large-scale non-smooth equations. Under some technical conditions, we established global convergence. Another contribution of this paper is the application of the proposed method to l 1 -norm regularized problems in compressive sensing.

This work was supported by the Scientific Research Project of Tianjin Education Commission (No. 2019KJ232).

The authors declare no conflicts of interest regarding the publication of this paper.

Hu, Y.P. and Wang, Y.J. (2020) An Efficient Projected Gradient Method for Convex Constrained Monotone Equations with Applications in Compressive Sensing. Journal of Applied Mathematics and Physics, 8, 983-998. https://doi.org/10.4236/jamp.2020.86077