An Alternating Direction Nonmonotone Approximate Newton Algorithm for Inverse Problems

In this paper, an alternating direction nonmonotone approximate Newton algorithm (ADNAN) based on a nonmonotone line search is developed for solving inverse problems. It is shown that ADNAN converges to a solution of the inverse problem, and numerical results demonstrate the effectiveness of the proposed algorithm.


Introduction
We consider inverse problems that can be expressed in the form
$$\min_{x \in \mathbb{R}^n} \; \frac{1}{2}\|Ax - f\|^2 + \Phi(Bx), \qquad (1)$$
which arises in applications such as image reconstruction [4] or model reduction [5]. We assume that the functions in (1) are strictly convex, so the problem has a unique solution $x^*$.
In Hong-Chao Zhang's paper [6], the Alternating Direction Approximate Newton method (ADAN), built on the Alternating Direction Method of Multipliers (ADMM) that originated in [7], is used to solve (1). A Barzilai-Borwein (BB) approximation of the Hessian is employed so that each iteration can be performed cheaply. In many applications, the subproblems arising in ADMM are easy to solve, so ADMM iterations can be performed at a low computational cost. Moreover, combining Newton-type methods with ADMM has become a trend, see [6] [8] [9], since such methods may achieve a high convergence rate.
In the alternating direction nonmonotone approximate Newton (ADNAN) algorithm developed in this paper, we adopt a nonmonotone line search in place of the traditional Armijo line search used in ADAN, because nonmonotone schemes can improve the likelihood of finding a global optimum and can improve the convergence speed in cases where a monotone scheme is forced to creep along the bottom of a narrow curved valley [10].
In this setting, the first subproblem is an unconstrained minimization problem, which we solve with the alternating direction nonmonotone approximate Newton algorithm in order to accelerate convergence; the unconstrained minimizer is then projected (or scaled) into the box $\{w \in \mathbb{R}^m : l \le w \le u\}$, as sketched at the end of this introduction. The second subproblem is a bound-constrained optimization problem. The rest of the paper is organized as follows. In Section 2, we review the alternating direction approximate Newton method. In Section 3, we introduce the new algorithm, ADNAN. In Section 4, we introduce the gradient-based algorithm (GRAD) for the second subproblem. A preliminary convergence analysis for ADNAN and GRAD is given in Section 5. Numerical results presented in Section 6 demonstrate the effectiveness of ADNAN and GRAD.
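For concreteness, the projection onto the box above is componentwise clipping. A minimal MATLAB sketch follows; the bound vectors l and u are assumed given, and the function name is ours:

```matlab
% Project w componentwise onto the box {w : l <= w <= u}.
function w = box_project(w, l, u)
    w = min(max(w, l), u);
end
```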

Review of Alternating Direction Approximate Newton Algorithm
In this section, we briefly review the well-known Alternating Direction Approximate Newton (ADAN) method, which has been studied in the areas of convex programming and image reconstruction; see [4] [6] and the references therein.
We introduce a new variable $w$ to obtain the split formulation of (1):
$$\min_{x, w} \; \frac{1}{2}\|Ax - f\|^2 + \Phi(w) \quad \text{subject to} \quad w = Bx. \qquad (2)$$
The augmented Lagrangian function associated with (2) is
$$L_\beta(x, w, \lambda) = \frac{1}{2}\|Ax - f\|^2 + \Phi(w) + \langle \lambda, Bx - w \rangle + \frac{\beta}{2}\|Bx - w\|^2, \qquad (3)$$
where $\beta > 0$ is the penalty parameter and $\lambda \in \mathbb{R}^m$ is the Lagrange multiplier associated with the constraint $w = Bx$. In ADMM, each iteration minimizes $L_\beta$ over $x$ holding $w$ fixed, then minimizes over $w$ holding $x$ fixed, and finally updates the estimate of the multiplier $\lambda$.
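To make the loop just described concrete, the following is a minimal MATLAB sketch of ADMM for (2) under the illustrative assumption that $\Phi$ is the $\ell_1$-norm, so the $w$-update reduces to soft-thresholding; the function name and the direct solve in the $x$-update are our choices for illustration, not the method of [6]:

```matlab
function [x, w, lam] = admm_l1_sketch(A, B, f, beta, maxit)
% Minimal ADMM sketch for the split problem (2), assuming Phi(w) = ||w||_1.
    n = size(B, 2);  m = size(B, 1);
    x = zeros(n, 1);  w = zeros(m, 1);  lam = zeros(m, 1);
    for k = 1:maxit
        % x-update: minimize L_beta over x with w and lam held fixed.
        x = (A'*A + beta*(B'*B)) \ (A'*f + B'*(beta*w - lam));
        % w-update: proximal operator of the l1-norm (soft-thresholding).
        v = B*x + lam/beta;
        w = sign(v) .* max(abs(v) - 1/beta, 0);
        % Multiplier update for the constraint w = B*x.
        lam = lam + beta*(B*x - w);
    end
end
```

ADAN avoids the exact linear solve in the $x$-update by approximating $A^T A$, as reviewed next.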
More specifically, if $\lambda_k$ is the current approximation to the multiplier, then ADMM first computes
$$x_{k+1} = \arg\min_x \; L_\beta(x, w_k, \lambda_k), \qquad (4)$$
then minimizes over $w$ and updates the multiplier: $w_{k+1} = \arg\min_w L_\beta(x_{k+1}, w, \lambda_k)$ and $\lambda_{k+1} = \lambda_k + \beta(Bx_{k+1} - w_{k+1})$. And (4) can be written as follows:
$$x_{k+1} = \arg\min_x \; \phi(x), \qquad \phi(x) := \frac{1}{2}\|Ax - f\|^2 + \frac{\beta}{2}\left\|Bx - w_k + \frac{\lambda_k}{\beta}\right\|^2.$$
For any Hermitian positive definite matrix $M$, $\|x\|_M := \sqrt{\langle x, Mx \rangle}$ is a norm. The proximal version of (4) is
$$x_{k+1} = \arg\min_x \; \phi(x) + \frac{1}{2}\|x - x_k\|_M^2, \qquad M = \delta_k I - A^T A,$$
whose minimizer has the closed form
$$x_{k+1} = x_k - (\delta_k I + \beta B^T B)^{-1} g_k, \qquad g_k = \nabla\phi(x_k). \qquad (5)$$
Here, $\delta_k$ is the BB approximation
$$\delta_k = \max\left\{\frac{\|A(x_k - x_{k-1})\|^2}{\|x_k - x_{k-1}\|^2}, \; \delta_{\min}\right\},$$
and $\delta_{\min} > 0$ is a positive lower bound for $\delta_k$. Hence, the Hessian $A^T A$ is approximated by $\delta_k I$, and the update (5) can be accomplished relatively quickly. Since $\phi$ is quadratic, its Taylor expansion around $x_k$ is exact:
$$\phi(x_k + d) = \phi(x_k) + \langle g_k, d \rangle + \frac{1}{2}\langle d, (A^T A + \beta B^T B)\, d \rangle.$$
After replacing $A^T A$ by $\delta_k I$ in this expansion and solving for the minimizer, we would get exactly the same formula for the minimizer as that given in (5). When the search direction $d_k = -(\delta_k I + \beta B^T B)^{-1} g_k$ is determined, a suitable step size $\alpha_k$ along this direction should be found to determine the next iterate:
$$x_{k+1} = x_k + \alpha_k d_k.$$
The inner product between $d_k$ and the objective gradient at $x_k$ is
$$\langle g_k, d_k \rangle = -\langle d_k, (\delta_k I + \beta B^T B)\, d_k \rangle < 0 \quad \text{whenever } d_k \neq 0.$$
It follows that $d_k$ is a descent direction. To select $\alpha_k$, we adopt a nonmonotone line search method [9]. A step size chosen by an ordinary Armijo line search may fail to attain the faster speed achievable in unconstrained problems [12]. In contrast, nonmonotone schemes can not only improve the likelihood of finding a global optimum but also improve the convergence speed. Initialization: choose a starting guess $x_0$ and parameters $c \in (0,1)$, $\rho \in (0,1)$, $\mu > 0$, and an integer memory length $N \ge 1$. The step size $\alpha_k = \mu \rho^{h_k}$ is required to satisfy the (nonmonotone) Armijo condition
$$\phi(x_k + \alpha_k d_k) \le \max_{0 \le j \le \min(k, N-1)} \phi(x_{k-j}) + c\, \alpha_k \langle g_k, d_k \rangle, \qquad (11)$$
and $h_k$ is the smallest nonnegative integer such that (11) holds, so that $\alpha_k$ is the largest step size of this form with $\alpha_k \le \mu$.
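A minimal MATLAB sketch of the nonmonotone Armijo backtracking in (11) follows; the interface and parameter names are ours, and fprev is assumed to hold the objective values of the most recent $\min(k+1, N)$ iterates:

```matlab
function [alpha, fnew] = nonmonotone_armijo(phi, x, d, g, fprev, mu, rho, c)
% Find alpha = mu*rho^h, with h the smallest nonnegative integer such that
%   phi(x + alpha*d) <= max(fprev) + c*alpha*g'*d,   i.e., condition (11).
    fmax = max(fprev);   % nonmonotone reference value
    gd = g' * d;         % directional derivative; negative for descent d
    alpha = mu;
    fnew = phi(x + alpha*d);
    while fnew > fmax + c*alpha*gd
        alpha = rho * alpha;          % backtrack
        fnew = phi(x + alpha*d);
    end
end
```

Taking $N = 1$, so that fprev holds only $\phi(x_k)$, recovers the ordinary monotone Armijo rule.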

Alternating Direction Nonmonotone Approximate Newton Algorithm
Algorithm 1 produces the iterate $x$ at each iteration, and this update is combined with Algorithm 2. We use Algorithm 2 to solve the first subproblem of this paper, an unconstrained minimization problem, with ADNAN, and then project (or scale) the unconstrained minimizer into the box $\{w \in \mathbb{R}^m : l \le w \le u\}$. The iteration is as follows (a schematic sketch is given after the list):
Step 4: Update $x$ as generated by Algorithm 1.
Step 5: Update $w$ by minimizing $L_\beta$ over $w$ with $x$ held fixed (the bound-constrained subproblem).
Step 6: Update the multiplier: $\lambda_{k+1} = \lambda_k + \beta(Bx_{k+1} - w_{k+1})$.
Step 7: If a stopping criterion is satisfied, terminate the algorithm; otherwise set $k = k + 1$ and go to Step 1.
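The schematic MATLAB outer loop below illustrates Steps 4-7; adnan_xstep is a hypothetical placeholder for the $x$-update of Algorithm 1, the box-projection form of the $w$-update is exact only when $\Phi$ is the indicator function of the box, and tol and maxit are illustrative parameters:

```matlab
function [x, w, lam] = adnan_outer(x, w, lam, B, beta, l, u, tol, maxit)
% Schematic outer loop for Steps 4-7 of the combined algorithm.
for k = 1:maxit
    x = adnan_xstep(x, w, lam);           % Step 4: x-update (Algorithm 1)
    w = min(max(B*x + lam/beta, l), u);   % Step 5: box projection of w
    lam = lam + beta*(B*x - w);           % Step 6: multiplier update
    if norm(B*x - w) <= tol               % Step 7: stop on small residual
        break;
    end
end
end
```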

Convergence Analysis
In this section, we show the convergence of the proposed algorithms. Since the proofs for the two algorithms are almost the same, we only prove the convergence of Algorithm 2.

Parameter Settings
In Algorithm 2, the parameter $\beta$, the penalty in the augmented Lagrangian (3), is common to the two algorithms ADAN and ADNAN. Moreover, $\beta$ has a vital impact on the convergence speed. We choose $\beta$ large enough to ensure invertibility.
The search directions were generated by the L-BFGS method developed by Nocedal in [16] and by Liu and Nocedal in [1]. We choose the step size $\alpha_k$ to satisfy the Wolfe conditions with $m = 0.09$ and $\sigma = 0.9$. In addition, we timed how long it took ADNAN to reduce the objective error to within 1% of the optimal objective value. The algorithms are coded in MATLAB R2011b and run on a Dell 4510U with a 2.8 GHz Intel i7 processor.
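For reference, a small MATLAB predicate for the weak Wolfe conditions used above; the function name and the handles phi and grad are ours, with m the sufficient-decrease parameter and sigma the curvature parameter:

```matlab
function ok = wolfe_ok(phi, grad, x, d, alpha, m, sigma)
% Weak Wolfe conditions at step size alpha along direction d:
%   phi(x + alpha*d) <= phi(x) + m*alpha*grad(x)'*d   (sufficient decrease)
%   grad(x + alpha*d)'*d >= sigma*grad(x)'*d          (curvature)
    gd = grad(x)' * d;
    ok = (phi(x + alpha*d) <= phi(x) + m*alpha*gd) && ...
         (grad(x + alpha*d)' * d >= sigma*gd);
end
```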
In Algorithm 3, a 256-by-256 gray-scale image was considered, and the results are compared with the experiment of J. Zhang [8]. The experiments on image deblurring problems show that the GRAD algorithm is also effective in terms of the quality of the image resolution.

Experiments Results
This section compares the performance of ADNAN with that of ADAN. The main difference between the ADNAN algorithm and the ADAN algorithm is the computation of the step size $\alpha_k$. There seems to be a significant benefit from using a value of $\delta_k$ smaller than the largest eigenvalue of $A^T A$. The initial guesses for $x_1$, $w_1$, and $\lambda_1$ were zero for both algorithms. Figures 1-3 show the objective values and the objective error as functions of CPU time. Moreover, we compare the objective values and the objective error versus CPU time (s) under different $\Psi$ conditions. It is observed that ADNAN is slightly more stable than ADAN, although the two algorithms are competitive. ADNAN not only attains a smaller objective error but also converges faster (see Figure 3). In addition, the objective value of ADNAN becomes smaller than that of ADAN after a few iterations. On the whole, ADNAN is superior to ADAN.

Figure 4. Original image, blurry image, deblurred image.
Figure 5. Original image, blurry image, deblurred image.
Figure 6. Original image, blurry image, deblurred image.

Conclusions
According to Figures 1-3, we can conclude that the nonmonotone line search accelerates convergence; furthermore, ADNAN keeps the objective values more stable and decreases them faster over the iterations than ADAN does. On the other hand, the validity of GRAD is verified. Experimental results on the image deblurring problems in Figures 4-7 show that different constraints on $x$ can also yield effective deblurred images.