
The generalized l_{1} greedy algorithm was recently introduced and used to reconstruct medical images in computerized tomography in the compressed sensing framework via total variation minimization. Experimental results showed that this algorithm is superior to the reweighted l_{1}-minimization and l_{1} greedy algorithms in reconstructing these medical images. In this paper the effectiveness of the generalized l_{1} greedy algorithm in finding random sparse signals from underdetermined linear systems is investigated. A series of numerical experiments demonstrate that the generalized l_{1} greedy algorithm is superior to the reweighted l_{1}-minimization and l_{1} greedy algorithms in the successful recovery of randomly generated Gaussian sparse signals from data generated by Gaussian random matrices. In particular, the generalized l_{1} greedy algorithm performs extraordinarily well in recovering random sparse signals with nonzero small entries. The stability of the generalized l_{1} greedy algorithm with respect to its parameters and the impact of noise on the recovery of Gaussian sparse signals are also studied.

In signal processing one often wants to reconstruct a signal from a highly incomplete set of linear measurements of the signal, that is, the number of measurements is much smaller than the dimension of the signal. More precisely, assuming A is an m × n matrix with m < n, one wants to reconstruct an unknown signal x_{0} in R^{n} from a set of m measurements b = Ax_{0}. This requires one to solve the system of linear equations

Ax = b (1)

to determine the solution that is exactly equal to x_{0}. Since system (1) is consistent and underdetermined, it has infinitely many solutions, making it difficult to find the correct solution x_{0}. In many actual applications, such as image reconstruction and decoding, however, the signal one wants to reconstruct is known to be sparse (or nearly sparse) in the sense that its coefficients in some orthonormal basis are mostly zero (or approximately zero). The theory of compressed sensing [1-5] reveals that signals that have sparse representations can be reconstructed with high precision from far fewer measurements than the dimension of the signal itself. In fact, if the columns of A are chosen from a suitable distribution and the signal is sufficiently sparse, then the signal can be exactly recovered by solving the following standard l_{1}-norm minimization problem:

min ||x||_{1} subject to Ax = b, (2)

where ||x||_{1} = |x_{1}| + ... + |x_{n}|. This optimization problem with a convex objective function can be solved efficiently and has broad applications [6-10]. But the iterative l_{1}-minimization method has a shortcoming in finding the sparsest solution: since the larger entries of x in each iteration skew the l_{1}-norm, they are more heavily penalized in the l_{1}-minimization process. To address this imbalance, weighted algorithms were introduced to reduce the influence of the larger entries. Two major algorithms designed for this purpose are the reweighted l_{1}-minimization and l_{1} greedy algorithms [11,12].
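Problem (2), and the weighted variants of it that appear later, can be solved with a generic linear-programming solver by introducing auxiliary variables t_{i} ≥ |x_{i}|. The following is a minimal sketch in Python; it is our own illustration (the function name, the SciPy solver, and the optional weight argument are our choices; the experiments in this paper use the l_{1}-MAGIC package instead):

```python
import numpy as np
from scipy.optimize import linprog

def l1_minimize(A, b, w=None):
    """Solve min ||diag(w) x||_1 subject to A x = b as a linear program.

    With auxiliary variables t_i >= |x_i| and stacked unknowns z = [x; t]:
        minimize   sum_i w_i t_i
        subject to A x = b  and  -t <= x <= t.
    """
    m, n = A.shape
    w = np.ones(n) if w is None else np.asarray(w, dtype=float)
    c = np.concatenate([np.zeros(n), w])            # objective acts on t only
    I = np.eye(n)
    A_ub = np.block([[I, -I], [-I, -I]])            # x - t <= 0, -x - t <= 0
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])         # A x = b
    bounds = [(None, None)] * n + [(0, None)] * n   # x free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
                  bounds=bounds, method="highs")
    return res.x[:n]
```

Any feasible sparse x_{0} upper-bounds the optimal l_{1}-norm, which gives a quick sanity check on the solver.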

Suppose that x^{0}, x^{1}, x^{2}, ... is the sequence of vectors generated by the iterative l_{1}-minimization process. In the k-th iteration of the reweighted l_{1}-minimization method [11], one solves the weighted l_{1}-minimization problem

min ||W^{k}x||_{1} subject to Ax = b,

where W^{k} = diag(w_{1}^{k}, ..., w_{n}^{k}) is a diagonal weight matrix with

w_{i}^{k} = 1/(|x_{i}^{k−1}| + ε), i = 1, ..., n, (3)

for a small parameter ε > 0.

Observe that the weights in (3) are roughly inversely proportional to the sizes of the entries of the previous iterate x^{k−1}. So the larger entries are weighted down to rectify their undue influence in the next iteration of the l_{1}-minimization process. Numerical experiments [11] showed that the reweighted l_{1}-minimization recovers random sparse signals with a much higher probability than the standard l_{1}-minimization in (2). The reweighted l_{1}-minimization algorithm has been extensively studied in recent years. The l_{q}-minimization problem, 0 < q ≤ 1, was discussed and implemented using the reweighted l_{1}-minimization scheme [13,14]. A two-step reweighted l_{1}-minimization was introduced to improve the recovery of sparse signals [15]. A reweighted l_{1}-minimization for a nonuniform sparsity model was proposed [16]. Reweighted l_{1}-minimization with noisy data was also rigorously analyzed [17], and properties of reweighted l_{1}-minimization for a special family of measurement matrices were studied [18].
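The weight update (3) is a one-line computation. The tiny sketch below (our own code and names) shows how large entries are damped and small entries amplified before the next l_{1} solve:

```python
import numpy as np

def reweighted_weights(x_prev, eps=0.1):
    # w_i = 1 / (|x_i^{k-1}| + eps): roughly inversely proportional to
    # the previous iterate, so large entries get small weights and
    # small entries get large weights in the next weighted l1 solve.
    return 1.0 / (np.abs(x_prev) + eps)

w = reweighted_weights(np.array([4.0, 0.0, -0.5]))
```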

In the l_{1} greedy algorithm [12], the weights in the k-th iteration are defined by

w_{i}^{k} = ω if |x_{i}^{k−1}| > τ_{k}, and w_{i}^{k} = 1 otherwise,

where ω is a small positive constant, τ_{k} is a threshold that decreases from iteration to iteration, x^{0} is generated by the standard l_{1}-minimization, and x^{k} solves the weighted problem min ||W^{k}x||_{1} subject to Ax = b. Numerical experiments showed that the l_{1} greedy algorithm outperforms both the unweighted and reweighted l_{1}-minimization algorithms in recovering random sparse signals [12,19].
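Following the description above, the l_{1} greedy weight update can be sketched as below (our own code; the symbol names and the default value of ω are illustrative, not the values used in the experiments):

```python
import numpy as np

def l1_greedy_weights(x_prev, tau, omega=1e-3):
    # Entries above the (gradually decreasing) threshold tau are treated
    # as likely support and receive the tiny weight omega; all remaining
    # entries keep the neutral weight 1.
    w = np.ones_like(x_prev)
    w[np.abs(x_prev) > tau] = omega
    return w

w = l1_greedy_weights(np.array([3.0, -0.2, 0.0, 1.4]), tau=1.0)
```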

A generalized l_{1} greedy algorithm in the compressed sensing framework was recently introduced by the authors [20]. It not only retains the small weights of the l_{1} greedy algorithm to counteract the influence of large entries but also assigns significantly large weights to the smallest nonzero entries to speed up the identification of nonzero entries. Moreover, in contrast to the l_{1} greedy algorithm, where the remaining entries are assigned a neutral weight, the remaining entries in the generalized l_{1} greedy algorithm receive weights roughly inversely proportional to their magnitudes, as in the reweighted l_{1}-minimization algorithm. Thus the generalized l_{1} greedy algorithm not only incorporates features of both the l_{1} greedy and reweighted l_{1}-minimization algorithms but also enhances the impact of the small entries in the l_{1}-minimization process.

Generalized l_{1} greedy algorithm

1) Generate x^{0} by the reweighted l_{1}-minimization;

2) For k = 1 to k_{max}:

a) Update the weight matrix W^{k} = diag(w_{1}^{k}, ..., w_{n}^{k}), where

w_{i}^{k} = ω if |x_{i}^{k−1}| > τ_{k}; w_{i}^{k} = M if 0 < |x_{i}^{k−1}| < θ; w_{i}^{k} = 1/(|x_{i}^{k−1}| + ε) otherwise.

Here ω is a small positive constant, M is a large positive constant, θ is a small threshold identifying the smallest nonzero entries, and the threshold τ_{k} = s·τ_{k−1} decreases geometrically with ratio s;

b) Solve the weighted l_{1}-minimization problem

x^{k} = arg min ||W^{k}x||_{1} subject to Ax = b;

c) Return x^{k} if a stopping criterion is met.
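The weight update in step 2a can be sketched as follows; this is a hypothetical implementation based on the description above (the symbols ω, M, θ, τ, ε and their default values are our illustrative choices, not the tuned values of the experiments):

```python
import numpy as np

def generalized_greedy_weights(x_prev, tau, theta, omega=1e-3, M=1e3, eps=0.1):
    a = np.abs(x_prev)
    w = 1.0 / (a + eps)           # default: reweighted-l1 style weights
    w[a > tau] = omega            # large entries: tiny weight, as in l1 greedy
    w[(a > 0) & (a < theta)] = M  # smallest nonzero entries: large weight
    return w

w = generalized_greedy_weights(np.array([5.0, 0.05, 0.0, 1.0]),
                               tau=2.0, theta=0.1)
```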

In [20] the generalized l_{1} greedy algorithm was applied to the problem of reconstructing essentially piecewise constant medical images in computerized tomography (CT) in the compressed sensing framework via total variation minimization. Tested with the Shepp-Logan phantom and a real cardiac CT image, the generalized l_{1} greedy algorithm was shown to perform better than the reweighted l_{1}-minimization and l_{1} greedy algorithms. In particular, it was observed that in the context of reconstructing these two images the generalized l_{1} greedy algorithm was superior to the others at distinguishing small gradients. However, to show that the generalized l_{1} greedy algorithm is truly superior to the other two algorithms at detecting small entries in general, we should compare the performance of the three algorithms in recovering random sparse signals. So in this paper, following [11,12], we present a series of rigorous numerical studies of the performance of the generalized l_{1} greedy algorithm in the general setting of Gaussian random matrices A and random sparse signals x. The rest of this paper is organized as follows. In Section 2, the relative frequencies of successful recovery of random Gaussian sparse signals for the reweighted l_{1}-minimization, l_{1} greedy, and generalized l_{1} greedy algorithms are compared. Section 3 presents the stability of the generalized l_{1} greedy algorithm with respect to its parameters. Section 4 studies the performance of the generalized l_{1} greedy algorithm on noisy data. Section 5 shows that the generalized l_{1} greedy algorithm is better at detecting smaller entries in the general setting than the other two algorithms. Section 6 concludes with a brief summary of the generalized l_{1} greedy algorithm and the results.

In our first experiment we want to determine how well each of the three algorithms can recover random Gaussian sparse signals from random Gaussian measurements b = Ax. Following the same approach taken in [11,12], we use the l_{1}-MAGIC software package developed by E. Candes and J. Romberg (available at www.l1-magic.org). We set m = 128 and n = 256. For each trial, a random m × n matrix A with i.i.d. Gaussian entries is selected and its columns are normalized. A random k-sparse signal x_{0} is also selected in such a way that the k nonzero positions are randomly distributed and the nonzero components follow the standard Gaussian distribution. We run 150 trials for each sparsity level k between 50 and 90. The total number of iterations (excluding the initial l_{1}-minimization step) for each of the three algorithms is set to 16. For the generalized l_{1} greedy algorithm we start with 4 iterations of the reweighted l_{1}-minimization. The criterion for successful recovery for all three algorithms is that the reconstruction x of x_{0} produced by the algorithm agrees with x_{0} componentwise to within a small prescribed tolerance. The parameters chosen for the three algorithms are listed as follows:

1) In the reweighted l_{1}-minimization:.

2) In the l_{1} greedy algorithm:;; others are the same as in 1).

3) In the generalized l_{1} greedy algorithm:; s = 0.8;; others are the same as in 2).

The settings in this section will be used throughout the paper unless changes are explicitly stated otherwise.
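The random problem instances used in these experiments can be generated as in the sketch below (our own code and names; the defaults mirror m = 128 and n = 256):

```python
import numpy as np

def random_instance(m=128, n=256, k=64, rng=None):
    # Gaussian matrix with normalized columns and a k-sparse signal whose
    # support is random and whose nonzero entries are standard Gaussian.
    rng = np.random.default_rng() if rng is None else rng
    A = rng.standard_normal((m, n))
    A /= np.linalg.norm(A, axis=0)      # normalize each column to unit norm
    x0 = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    x0[support] = rng.standard_normal(k)
    return A, x0, A @ x0

A, x0, b = random_instance(k=70, rng=np.random.default_rng(1))
```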

The output of this experiment shows that the relative frequency of successful recovery of the generalized l_{1} greedy algorithm is higher than that of both the reweighted l_{1}-minimization and l_{1} greedy algorithms. On average, the l_{1} greedy algorithm and the generalized l_{1} greedy algorithm recover about 14% and 18% more entries than the reweighted l_{1}-minimization method, respectively, for 50 ≤ k ≤ 90. Furthermore, on average, the generalized l_{1} greedy algorithm recovers about 6% more entries than the l_{1} greedy algorithm.

An empirical analysis of the reweighted l_{1}-minimization algorithm determined that the algorithm is robust with respect to its parameter ε (chosen from a suitable range) and that much of the improvement in recovery comes from the first few reweighting iterations [11]. We have carried out a similar analysis for the generalized l_{1} greedy algorithm, and our results indicate that the algorithm is stable with respect to each of its parameters (within a certain range of values). We illustrate this behavior with the parameter k_{max}, using the same settings as in Section 2 for the remaining parameters. The experiments show that the total number of iterations k_{max} of the generalized l_{1} greedy algorithm has minimal effect on the performance of the algorithm when k_{max} ≥ 10. So in practice only a few iterations are needed to achieve the best performance of the generalized l_{1} greedy algorithm.

In real-life applications measured data are often corrupted by a small amount of noise. Thus one needs to recover the original signal x_{0} from noisy data

b' = Ax_{0} + z,

where z is an unknown noise term. The signal-to-noise ratio (SNR) in dB is defined by

SNR = 20 log_{10}(||b||_{2}/||z||_{2}),

where b = Ax_{0} is the noise-free data. In this section we show how white Gaussian noise at SNR levels of 40 dB and 60 dB, respectively, affects the performance of the generalized l_{1} greedy algorithm. We also compare the performance of the reweighted l_{1}-minimization, l_{1} greedy, and generalized l_{1} greedy algorithms on noisy data with an SNR of 60 dB. Our results show that the generalized l_{1} greedy algorithm is very robust with respect to noise at an SNR level of 60 dB and fairly robust with respect to noise at an SNR level of 40 dB for 30 ≤ k ≤ 90. On noisy data the generalized l_{1} greedy algorithm again outperforms the other two algorithms. Moreover, for noisy data with an SNR of 60 dB, on average, the generalized l_{1} greedy algorithm recovers about 17% more entries than the reweighted l_{1}-minimization algorithm and about 5% more entries than the l_{1} greedy algorithm for 50 ≤ k ≤ 90.
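Noise at a prescribed SNR can be generated by scaling a white Gaussian vector, as in this sketch (our own code; add_noise and snr_db are illustrative names):

```python
import numpy as np

def snr_db(b, z):
    # SNR in dB of noise-free data b against noise z.
    return 20 * np.log10(np.linalg.norm(b) / np.linalg.norm(z))

def add_noise(b, target_snr_db, rng=None):
    # Scale white Gaussian noise so that snr_db(b, z) equals the target.
    rng = np.random.default_rng() if rng is None else rng
    z = rng.standard_normal(b.shape)
    z *= np.linalg.norm(b) / (np.linalg.norm(z) * 10 ** (target_snr_db / 20))
    return b + z, z

b = np.random.default_rng(3).standard_normal(128)
b_noisy, z = add_noise(b, 60, rng=np.random.default_rng(4))
```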

It is known that the l_{1} greedy algorithm outperforms the reweighted l_{1}-minimization algorithm in finding sparse signals [12,19]. However, the reweighted l_{1}-minimization algorithm was designed to help speed up the detection of small entries [11]. The generalized l_{1} greedy algorithm of [20] retains the strength of the l_{1} greedy algorithm while enhancing the power of the reweighted l_{1}-minimization algorithm in detecting small entries. In fact, the generalized l_{1} greedy algorithm appears to be superior to the other two algorithms in distinguishing small gradients in the task of reconstructing images via total variation minimization [20]. It is therefore natural to ask how the generalized l_{1} greedy algorithm would perform in recovering random sparse signals with a guaranteed percentage of small entries. More precisely, in our last experiment we want to determine the extent to which the ratio of very small entries in the sparse signals affects the probability of successful recovery by each of the algorithms under consideration. The entries of the sparse signal in each trial are obtained from a mixed Gaussian distribution as follows: a random 30% of the entries are generated using a Gaussian distribution with mean 0 and standard deviation 0.01, while the remaining 70% of the entries are generated using the standard Gaussian distribution. We need to fine-tune the generalized l_{1} greedy algorithm to make it most efficient at detecting the small entries in the range we set; experimental trials were used to select the settings that give the best performance. The values of the other parameters are left unchanged. We then run 150 trials for each sparsity level k, 50 ≤ k ≤ 105, with the same criterion for successful recovery as before. As one can see, the generalized l_{1} greedy algorithm vastly outperforms both the reweighted l_{1}-minimization and the l_{1} greedy algorithms in recovering sparse solutions containing a few nonzero small entries.
Moreover, on average, the generalized l_{1} greedy algorithm recovers 32% more entries than the reweighted l_{1}-minimization algorithm and 11% more entries than the l_{1} greedy algorithm for 50 ≤ k ≤ 105.
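The mixed Gaussian sparse signals of this experiment can be generated as follows (a sketch with our own names; 30% of the nonzeros come from N(0, 0.01²) and the rest from N(0, 1)):

```python
import numpy as np

def mixed_sparse_signal(n=256, k=80, small_frac=0.3, small_std=0.01, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    x0 = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    n_small = int(round(small_frac * k))            # nonzeros with std 0.01
    values = np.concatenate([small_std * rng.standard_normal(n_small),
                             rng.standard_normal(k - n_small)])
    x0[support] = values
    return x0

x0 = mixed_sparse_signal(k=80, rng=np.random.default_rng(2))
```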

Our statistical experiments indicate that the generalized l_{1} greedy algorithm outperforms the reweighted l_{1}-minimization and l_{1} greedy algorithms in recovering random sparse signals from random Gaussian measurements. In fact, the generalized l_{1} greedy algorithm consistently recovers more entries than the other two algorithms. Moreover, the performance of the algorithm is robust with respect to its parameters and to noisy data at different noise levels. Finally, the generalized l_{1} greedy algorithm performs extremely well in detecting small entries of unknown sparse signals, thereby dramatically speeding up their recovery via l_{1}-minimization. It is expected that more details of signals could be recovered by using the generalized l_{1} greedy algorithm without extra cost.

F. Arroyo was supported by a faculty research grant from Francis Marion University. X. Li and J. Zhu were partially supported by a Faculty Research Award and a COSM Interdisciplinary Pilot Fund from Georgia Southern University.