
In this paper, an alternating direction nonmonotone approximate Newton algorithm (ADNAN) based on a nonmonotone line search is developed for solving inverse problems. It is shown that ADNAN converges to a solution of the inverse problem, and numerical results demonstrate the effectiveness of the proposed algorithm.

We consider inverse problems that can be expressed in the form

where

In Hong-Chao Zhang’s paper [

In the alternating direction nonmonotone approximate Newton (ADNAN) algorithm developed in this paper, we replace the traditional Armijo line search used in ADAN with a nonmonotone line search, because nonmonotone schemes can improve the likelihood of finding a global optimum and can improve the convergence speed in cases where a monotone scheme is forced to creep along the bottom of a narrow curved valley in [

In the latter context, the first subproblem is an unconstrained minimization problem solved with the Alternating Direction Nonmonotone Approximate Newton algorithm; the purpose is to accelerate convergence. The unconstrained minimizer is then projected or scaled into the box

The rest of the paper is organized as follows. In Section 2, we review the alternating direction approximate Newton method. In Section 3, we introduce the new algorithm ADNAN. In Section 4, we introduce the gradient-based algorithm for the second subproblem. A preliminary convergence analysis for ADNAN and the gradient-based algorithm (GRAD) is given in Section 5. Numerical results presented in Section 6 demonstrate the effectiveness of ADNAN and GRAD.

In this section, we briefly review the well-known Alternating Direction Approximate Newton (ADAN) method, which has been studied in the areas of convex programming and image reconstruction; see [

We introduce a new variable w to obtain the split formulation of (1):

The augmented Lagrangian function associated with (2) is

where

Thus, (4) can be written as follows:

For any Hermitian matrix

In ADAN, the choice

Here,

and

Note that by substituting

The inner product between

It follows that

where
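As a concrete illustration of the splitting reviewed in this section, the following Python sketch applies a generic ADMM-style iteration to the augmented Lagrangian of a small l1-regularized least-squares problem. This is an illustrative sketch, not the paper's exact iteration: the choice B = I, the parameters mu and rho, and the random test data are all assumptions for demonstration.

```python
import numpy as np

def admm_l1(A, b, mu=0.1, rho=1.0, iters=200):
    """ADMM sketch for min_x 0.5*||A x - b||^2 + mu*||w||_1, s.t. w = x
    (i.e. B = I), via the augmented Lagrangian
    L(x, w, lam) = 0.5*||A x - b||^2 + mu*||w||_1
                   + lam^T (x - w) + (rho/2)*||x - w||^2."""
    n = A.shape[1]
    x, w, lam = np.zeros(n), np.zeros(n), np.zeros(n)
    M = A.T @ A + rho * np.eye(n)      # x-update reduces to a linear solve
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(M, Atb + rho * w - lam)
        v = x + lam / rho              # w-update: soft thresholding
        w = np.sign(v) * np.maximum(np.abs(v) - mu / rho, 0.0)
        lam = lam + rho * (x - w)      # multiplier (dual) update
    return x, w

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
x, w = admm_l1(A, b)
print(np.max(np.abs(x - w)))           # primal residual; small at convergence
```

Alternating over x, w, and the multiplier in this way is exactly the structure that ADAN accelerates by replacing the exact x-subproblem solve with an approximate Newton step.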

In this section, we adopt a nonmonotone line search method [

Initialization: Choose starting guess

Convergence test: If

Line search update: Set

or the (nonmonotone) Armijo conditions:

Cost update: Choose

Observe that C_k is a convex combination of the function values f(x_0), f(x_1), ..., f(x_k).

then

scheme with
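The nonmonotone line search and cost update described above can be sketched as follows. This is an illustrative Python implementation of a Zhang-Hager-style scheme; the parameter values eta, delta, and tau and the quadratic test function are assumptions for demonstration, not the paper's settings.

```python
import numpy as np

def nonmonotone_armijo(f, grad, x, d, C, Q, eta=0.85, delta=1e-4, tau=0.5):
    """One nonmonotone Armijo step: accept the first alpha with
    f(x + alpha*d) <= C + delta*alpha*g'd, where C is the nonmonotone
    reference value, then update (C, Q) via
        Q_new = eta*Q + 1,   C_new = (eta*Q*C + f_new) / Q_new,
    so C remains a convex combination of all function values seen so far."""
    g = grad(x)
    gd = float(g @ d)
    alpha = 1.0
    while f(x + alpha * d) > C + delta * alpha * gd:
        alpha *= tau                   # backtrack
    x_new = x + alpha * d
    f_new = f(x_new)
    Q_new = eta * Q + 1.0
    C_new = (eta * Q * C + f_new) / Q_new
    return x_new, C_new, Q_new

# Minimize f(x) = ||x||^2 with steepest-descent directions.
f = lambda x: float(x @ x)
grad = lambda x: 2.0 * x
x = np.array([3.0, -4.0])
C, Q = f(x), 1.0
for _ in range(50):
    x, C, Q = nonmonotone_armijo(f, grad, x, -grad(x), C, Q)
print(f(x))
```

Because the acceptance test compares against C rather than f(x_k), individual steps may increase the objective, which is what lets the iterates cut across a narrow curved valley instead of creeping along its bottom.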

Lemma 2.1 If

In Algorithm 1, we obtain the iterate x at each iteration, which is combined with Algorithm 2. We then use Algorithm 2 to solve the first subproblem of this paper, an unconstrained minimization problem, with ADNAN, and project or scale the unconstrained minimizer into the box

the iteration is as follows:

Later we give the existence and uniqueness result for (1).

The solution

with P being the projection map onto the box
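A minimal sketch of the projection map P onto a box, assuming hypothetical bounds lo and hi (componentwise clipping):

```python
import numpy as np

def project_box(x, lo, hi):
    """Projection P onto the box {x : lo <= x <= hi}, componentwise."""
    return np.minimum(np.maximum(x, lo), hi)

x = np.array([-2.0, 0.5, 3.0])
p = project_box(x, 0.0, 1.0)
print(p)  # components below 0 are raised to 0, above 1 lowered to 1
```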

Algorithm 2. Alternating Direction Nonmonotone Approximate Newton algorithm.

Parameter:

Step 1: If

Step 2: If

Step 3: If

Step 4: Update x which generated from Algorithm 1.

Step 5:

Step 6:

Step 7: If a stopping criterion is satisfied, terminate the algorithm; otherwise, set k = k + 1 and go to Step 1.

Lemma 3.1: We show that certain criteria are satisfied only a finite number of times, so

Uniformly in k, we have

Lemma 3.2: If

Next, we consider the second subproblem, which is a bound-constrained optimization problem of the form

The iteration is similar to (4), (5), and (6), as follows:

Compute the solution

Compute the solution

Set
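The gradient-based iteration for the bound-constrained subproblem can be sketched as a projected-gradient loop. This is a simplified stand-in for GRAD, not the paper's exact method; the fixed stepsize alpha and the quadratic test objective are illustrative assumptions.

```python
import numpy as np

def projected_gradient(grad, x0, lo, hi, alpha=0.1, iters=100):
    """Gradient-projection sketch for min f(x) s.t. lo <= x <= hi:
    x_{k+1} = P(x_k - alpha * grad f(x_k)), with P the box projection."""
    x = np.clip(x0, lo, hi)
    for _ in range(iters):
        x = np.clip(x - alpha * grad(x), lo, hi)
    return x

# Example: min ||x - c||^2 over the box [0, 1]^2 with c outside the box;
# the minimizer is the projection of c onto the box.
c = np.array([2.0, -1.0])
grad = lambda x: 2.0 * (x - c)
x = projected_gradient(grad, np.zeros(2), 0.0, 1.0)
print(x)
```

Since projection onto a box is componentwise clipping, each iteration costs only a gradient evaluation plus O(n) work, which is why a gradient-based method is attractive for this subproblem.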

In this section, we show the convergence of the proposed algorithms. The proofs for the two algorithms are almost identical, so we only prove the convergence of Algorithm 2.

Lemma 3.1: Let L be the function in (3). The vector

Lemma 3.2: Let L be the function in (3),

the sequence

Theorem 3.1: Let

then

Proof: From Lemmas 3.1 and 3.2, we obtain that

Since we have a unique minimizer in

In Algorithm 2, the parameters

The search directions were generated by the L-BFGS method developed by Nocedal in [

In addition, we timed how long it took ADNAN to reduce the objective error to within 1% of the optimal objective value. The algorithms are coded in MATLAB, version 2011b, and run on a Dell version 4510U with a 2.8 GHz Intel i7 processor.
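The L-BFGS direction computation mentioned above can be sketched with the standard two-loop recursion. This is an illustrative Python version (the paper's experiments were coded in MATLAB); the memory size, the diagonal quadratic test problem, and the Armijo parameters are assumptions for demonstration.

```python
import numpy as np

def two_loop(g, s_list, y_list):
    """L-BFGS two-loop recursion: returns H_k @ g, an approximation of the
    inverse Hessian applied to the gradient g, built from the stored pairs
    s_i = x_{i+1} - x_i and y_i = g_{i+1} - g_i."""
    q = g.astype(float).copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):  # newest to oldest
        rho = 1.0 / float(y @ s)
        a = rho * float(s @ q)
        alphas.append(a)
        q -= a * y
    if s_list:                               # initial scaling gamma = s'y / y'y
        s, y = s_list[-1], y_list[-1]
        q *= float(s @ y) / float(y @ y)
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):  # oldest first
        rho = 1.0 / float(y @ s)
        b = rho * float(y @ q)
        q += (a - b) * s
    return q                                 # search direction is -q

# Minimal driver on f(x) = 0.5 * x' D x with diagonal D, memory 5.
D = np.array([1.0, 10.0])
f = lambda x: 0.5 * float(x @ (D * x))
grad = lambda x: D * x
x = np.array([1.0, 1.0])
s_list, y_list = [], []
for _ in range(30):
    g = grad(x)
    if np.linalg.norm(g) < 1e-10:
        break
    d = -two_loop(g, s_list, y_list)
    alpha, gd = 1.0, float(g @ d)
    while f(x + alpha * d) > f(x) + 1e-4 * alpha * gd:  # Armijo backtracking
        alpha *= 0.5
    x_new = x + alpha * d
    s_list.append(x_new - x)
    y_list.append(grad(x_new) - grad(x))
    s_list, y_list = s_list[-5:], y_list[-5:]           # limited memory
    x = x_new
print(f(x))
```

Storing only a few (s, y) pairs keeps the per-iteration cost linear in the problem dimension, which matters for the large image-reconstruction problems considered here.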

In Algorithm 3, a 256-by-256 gray-scale image was considered, and the results are compared with the experiment by J. Zhang [

This section compares the performance of the ADNAN to ADAN. The main difference between the ADNAN algorithm and the ADAN algorithm is the computation of

The initial guess for

On the other hand, the experiment results about Algorithm 3 are as follows:

Figures 4-7. Original image, blurry image, and deblurred image for each test case.

According to Figures 1-3, we can conclude that the nonmonotone line search accelerates convergence; furthermore, compared with ADAN, ADNAN attains more stable objective values faster during the iterations.

On the other hand, the validity of GRAD is verified. Experimental results on image deblurring problems in Figures 4-7 show that different constraints on x can also yield effective deblurred images.

This work is supported by Innovation Program of Shanghai Municipal Education Commission (No. 14YZ094).

Zhang, Z.H., Yu, Z.S. and Gan, X.Y. (2016) An Alternating Direction Nonmonotone Approximate Newton Algorithm for Inverse Problems. Journal of Applied Mathematics and Physics, 4, 2069- 2078. http://dx.doi.org/10.4236/jamp.2016.411206