
In this paper, an approximate smoothing approach to the non-differentiable exact penalty function is proposed for the constrained optimization problem. A simple smoothed penalty algorithm is given, and its convergence is discussed. A practical algorithm for computing an approximate optimal solution is also given, together with computational experiments that demonstrate its efficiency.

Many problems in industrial design, management science and economics can be modeled as the following constrained optimization problem:

$$(P)\qquad \min f(x)\quad \text{s.t.}\quad g_j(x)\le 0,\ j=1,2,\cdots,m, \tag{1}$$

where $f,g_j:\Re^n\to\Re$, $j=1,2,\cdots,m$, are continuously differentiable functions. Let $F_0$ be the feasible solution set, that is, $F_0=\{x\in\Re^n \mid g_j(x)\le 0,\ j=1,2,\cdots,m\}$. Here we assume that $F_0$ is nonempty.

Penalty function methods based on various penalty functions have been proposed in the literature to solve problem (P). One of the most popular is the quadratic penalty function of the form

$$F_2(x,\rho)=f(x)+\rho\sum_{j=1}^{m}\max\{g_j(x),0\}^2, \tag{2}$$

where $\rho>0$ is a penalty parameter. Clearly, $F_2(x,\rho)$ is continuously differentiable, but it is not an exact penalty function. In Zangwill [ ], the $l_1$ exact penalty function was introduced:

$$F_1(x,\rho)=f(x)+\rho\sum_{j=1}^{m}\max\{g_j(x),0\}. \tag{3}$$

The corresponding penalty problem is

$$(P_\rho)\qquad \min F_1(x,\rho)\quad \text{s.t.}\quad x\in\Re^n. \tag{4}$$
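As a concrete illustration of the penalty construction above, the following sketch evaluates the $l_1$ exact penalty $F_1(x,\rho)$ of (3). The toy problem in it is our own hypothetical example, not one from this paper:

```python
# l1 exact penalty F1(x, rho) = f(x) + rho * sum_j max{g_j(x), 0},
# sketched for a hypothetical toy problem (not from the paper).

def F1(f, gs, x, rho):
    """Evaluate the l1 exact penalty function at x."""
    return f(x) + rho * sum(max(g(x), 0.0) for g in gs)

# Toy problem: min x1^2 + x2^2  s.t.  g(x) = 1 - x1 - x2 <= 0.
f = lambda x: x[0] ** 2 + x[1] ** 2
gs = [lambda x: 1.0 - x[0] - x[1]]

print(F1(f, gs, [0.5, 0.5], rho=10.0))  # feasible point: penalty term vanishes -> 0.5
print(F1(f, gs, [0.0, 0.0], rho=10.0))  # infeasible point: violation 1 penalized -> 10.0
```

At feasible points $F_1$ coincides with $f$, while infeasible points pay a term growing linearly in the violation, which is the source of both the exactness and the nondifferentiability discussed next.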

We say that $F_1(x,\rho)$ is an exact penalty function for Problem (P) partly because it satisfies one of the main characteristics of exactness: under some constraint qualifications, there exists a sufficiently large $\rho^*$ such that for each $\rho>\rho^*$, the optimal solutions of Problem $(P_\rho)$ are all feasible for Problem (P), and therefore they are all optimal solutions of (P) (Di Pillo [

The obvious difficulty with exact penalty functions is that they are nondifferentiable, which prevents the use of efficient minimization methods based on gradient-type or Newton-type algorithms, and may cause numerical instability in implementation. In practice, often only an approximately optimal solution to (P) is needed. Differentiable approximations to the exact penalty function have been obtained in different contexts, such as in Ben-Tal and Teboulle [

$$F_3(x,\rho)=f(x)+\rho\sum_{j=1}^{m}\max\{g_j(x),0\}. \tag{5}$$

In Wu et al. [

Moreover, smoothed penalty methods can be applied to large-scale optimization problems such as network-structured problems and minimax problems in [

In this paper, we consider another, simpler method for smoothing the exact penalty function $F_1(x,\rho)$, and construct the corresponding smoothed penalty problem. We show that our smooth penalty function approximates $F_1(x,\rho)$ well and has better smoothness. Based on this smooth penalty function, we give a simple smoothed penalty algorithm for (P) that differs from those in the existing literature in that its convergence can be established without assuming compactness of the feasible region of (P). We also give an approximate algorithm that converges under mild conditions.

The rest of this paper is organized as follows. In Section 2, we propose a method for smoothing the $l_1$ exact penalty function (3). The approximation function we give is convex and smooth. We give error estimates among the optimal objective function values of the smoothed penalty problem, the nonsmooth penalty problem, and the original constrained optimization problem. In Section 3, we present an algorithm for computing a solution to (P) based on our smooth penalty function and show its convergence. In particular, we give an approximate algorithm. Some computational aspects are discussed and some experimental results are given in Section 4.

Given any $\varepsilon>0$ and $\rho>0$, we define a function $P_{\rho}^{\varepsilon}(t)$:

$$P_{\rho}^{\varepsilon}(t)=\begin{cases}\dfrac{\varepsilon}{2}\,e^{\rho t/\varepsilon}, & \text{if } t\le 0;\\[4pt] \rho t+\dfrac{\varepsilon}{2}\,e^{-\rho t/\varepsilon}, & \text{if } t>0. \end{cases} \tag{6}$$

Let $P_{\rho}(t)=\rho\max\{t,0\}$ for any $\rho>0$. It is easy to show that $\lim_{\varepsilon\to 0}P_{\rho}^{\varepsilon}(t)=P_{\rho}(t)$.
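This limit, and the smoothness of $P_{\rho}^{\varepsilon}$, can be checked numerically. The following sketch implements (6); the parameter values are our own illustrative choices:

```python
import math

def P_smooth(t, rho, eps):
    """The smoothing function P_rho^eps(t) of (6)."""
    if t <= 0:
        return 0.5 * eps * math.exp(rho * t / eps)
    return rho * t + 0.5 * eps * math.exp(-rho * t / eps)

rho = 2.0
# Both branches meet at t = 0 with value eps/2 and slope rho/2, so
# P_rho^eps is continuously differentiable, unlike rho * max{t, 0}.
for eps in (1.0, 0.1, 0.001):
    print(eps, P_smooth(-1.0, rho, eps), P_smooth(1.0, rho, eps))
# As eps -> 0 the values approach P_rho(t) = rho * max{t, 0}:
# P(-1) tends to 0 and P(1) tends to 2.
```

The uniform gap between $P_{\rho}^{\varepsilon}$ and $P_{\rho}$ is at most $\varepsilon/2$ (attained at $t=0$), which is what makes the error estimates of this section possible.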

The function P ρ ε ( t ) is different from the function P ε ( t ) given in [

(I)

and

(II)

Property (II) follows immediately from (I).

Note that

where

The corresponding penalty problem to

Since

The following lemma is easy to prove.

Lemma 2.1 For any given

Two direct results of Lemma 2.1 are given as follows.

Theorem 2.1 Let

Theorem 2.2 Let

It follows from this conclusion that

Theorem 2.1 and Theorem 2.2 show that an approximate solution to (

Definition 2.1 A point

Under this definition, we get the following result.

Theorem 2.3 Let

Proof Since

Since

Then by Theorem 2.2, we get

Thus,

Therefore, by (10), we obtain that

This completes the proof.

By Theorem 2.3, if an approximate optimal solution of (

For

(*) There exists a

From the above conclusion, we can get the following result.

Theorem 2.4 For the constant

Proof Suppose, to the contrary, that the theorem does not hold. Then there exists a

Since

Because

On the other hand,

which contradicts (11).

Theorem 2.4 implies that any optimal solution of the smoothed penalty problem is an approximately feasible solution of (P).
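This approximate-feasibility behavior can be observed numerically. The sketch below minimizes the smoothed objective $f(x)+P_{\rho}^{\varepsilon}(g(x))$ by a simple grid search on a hypothetical one-dimensional problem of our own (not one from the paper), and shows the constraint violation of the minimizer shrinking with $\varepsilon$:

```python
import math

def P_smooth(t, rho, eps):
    """The smoothing function P_rho^eps(t) of (6)."""
    if t <= 0:
        return 0.5 * eps * math.exp(rho * t / eps)
    return rho * t + 0.5 * eps * math.exp(-rho * t / eps)

# Hypothetical problem: min f(x) = -8x  s.t.  g(x) = x <= 0, true optimum x = 0.
# With rho = 10 > 8 the l1 penalty is exact, but the smoothed minimizer sits
# slightly on the infeasible side, at x_eps = (eps / 10) * ln(2.5).
rho = 10.0
for eps in (1.0, 0.1, 0.01):
    grid = [i * 1e-5 - 0.5 for i in range(100001)]   # grid on [-0.5, 0.5]
    x_eps = min(grid, key=lambda x: -8.0 * x + P_smooth(x, rho, eps))
    print(eps, x_eps)   # the violation max{g(x_eps), 0} = x_eps is O(eps)
```

The minimizer is infeasible for every $\varepsilon>0$, but its violation decays linearly in $\varepsilon$, in line with the approximate feasibility asserted by Theorem 2.4.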

In this section, we give an algorithm based on the smoothed penalty function given in Section 2 to solve the nonlinear programming problem (P).

For

For Problem (P), let

Algorithm 3.1

Step 1. Given

Step 2. Take

Step 3. Let

Let
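The step details of Algorithm 3.1 are incomplete in this extraction, but the generic structure of such a smoothed penalty loop can be sketched as follows. Everything here is an illustrative assumption of ours: the inner gradient-descent solver, the parameter choices, the update rule $\varepsilon_{k+1}=0.1\,\varepsilon_k$, and the hypothetical one-dimensional test problem:

```python
import math

def dP_smooth(t, rho, eps):
    """Derivative of the smoothing function P_rho^eps(t) of (6)."""
    if t <= 0:
        return 0.5 * rho * math.exp(rho * t / eps)
    return rho - 0.5 * rho * math.exp(-rho * t / eps)

# Hypothetical problem: min f(x) = x  s.t.  g(x) = 1 - x <= 0 (optimum x = 1).
def inner_minimize(rho, eps, x0, iters=2000):
    # Plain gradient descent on x + P_rho^eps(1 - x); the step size is
    # scaled by eps / rho because the curvature grows like rho / eps.
    x, lr = x0, eps / (2.0 * rho)
    for _ in range(iters):
        x -= lr * (1.0 - dP_smooth(1.0 - x, rho, eps))
    return x

rho, eps, x = 10.0, 1.0, 0.0
for k in range(6):          # outer loop: solve, then tighten the smoothing
    x = inner_minimize(rho, eps, x)
    eps *= 0.1              # e.g. eps_{k+1} = 0.1 * eps_k
print(round(x, 4))          # converges toward the true optimum x = 1
```

Each outer iteration warm-starts the inner solver from the previous minimizer, which is why the sequence of unconstrained subproblems remains cheap even as $\varepsilon_k\to 0$.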

We now give a convergence result for this algorithm under some mild conditions. First, we give the following assumption.

(A1) For any

Under this assumption, we first obtain the following lemma.

Lemma 3.1 Suppose that (A1) holds. Let

Proof Suppose, to the contrary, that there exists a subset

Then there exists

This contradicts

Remark 3.1 From Lemma 3.1 we know that

Lemma 3.2 Suppose that (A1) holds. Let

Lemma 3.3 Suppose that (A1) holds. Let

From Lemma 3.2 and Lemma 3.3, we have the following theorem.

Theorem 3.1 Suppose that (A1) holds. Let

Before giving another conclusion, we need the following assumption.

(A2) The function

Theorem 3.2 Suppose that (A1) and (A2) hold. Let

Proof By Lemma 3.1, there exists

Therefore,

From Assumption (A2), we know that

On the other hand, by Lemma 3.3, when

Then

Therefore, from (15) and (16),

Therefore,

The above theorem differs from the conventional convergence results for penalty methods in the literature.

In the following we give an approximate smoothed penalty algorithm for Problem (P).

Algorithm 3.2

Step 1. Let

Step 2. Take

Step 3. If

if

if

Let

Remark 3.2 By the analysis of the error estimates in Section 2, we know that whenever the penalty parameter

In this section, we will discuss some computational aspects and give some numerical results.

We apply Algorithm 3.2 to the nonconvex nonlinear programming problem (P), for which we need to compute not a global optimal solution but only a local one. In this case, we can still obtain convergence by the following theorem.

For

Theorem 4.1 Suppose Algorithm 3.2 does not terminate after finitely many iterations and the sequence

Proof First we show that

Suppose to the contrary that

Then we get

which results in a contradiction since f is coercive.

We now show that any limit point of

Suppose to the contrary that

Note that

If

It follows from (18) that

which contradicts the assumption that

We now show that (17) holds.

Since for

that is,

For

then,

It follows from (19) and (20) that

Let

Then we have

When

For

Theorem 4.1 implies that the sequence generated by Algorithm 3.2 may converge to an FJ point [

Example 4.1 (Hock and Schittkowski [

The optimal solution to (P4.1) is given by

Example 4.2 (Hock and Schittkowski [

The optimal solution to (P4.2) is given by

Example 4.3 (Hock and Schittkowski [

| $k$ | $\rho_k$ | $\varepsilon_k$ | $f(x^k)$ | $x^k$ | error |
|---|---|---|---|---|---|
| 1 | 1 | 1 | −23.007125 | (4.008345, 2.852342, 2.012314) | 0.536164 |
| 2 | 1 | 0.1 | −22.664486 | (3.999987, 2.839677, 1.995346) | 5.305913E−002 |
| 3 | 1 | 0.01 | −22.630667 | (3.999131, 2.838421, 1.993677) | 5.315008E−003 |
| 4 | 1 | 0.001 | −22.627288 | (3.999045, 2.838296, 1.993510) | 5.459629E−004 |
| 5 | 1 | 0.0001 | −22.626939 | (3.999036, 2.838283, 1.993493) | 5.355349E−005 |

| $k$ | $\rho_k$ | $\varepsilon_k$ | $f(x^k)$ | $x^k$ | error |
|---|---|---|---|---|---|
| 1 | 4 | 1 | −43.803614 | (1.714448E−002, 0.988781, 2.003621, −0.941664) | −2.005386E−002 |
| 2 | 4 | 0.1 | −43.981817 | (−3.180834E−003, 0.994377, 2.006306, −0.986584) | −8.705388E−005 |
| 3 | 4 | 0.01 | −43.997686 | (−5.896973E−003, 0.994811, 2.005919, −0.993193) | 1.858380E−005 |
| 4 | 4 | 0.001 | −43.999246 | (−6.175823E−003, 0.994855, 2.005868, −0.993889) | 2.131278E−006 |

| $k$ | $\rho_k$ | $\varepsilon_k$ | $f(x^k)$ | $x^k$ | error |
|---|---|---|---|---|---|
| 1 | 1 | 1 | 658.071632 | (2.380946, 2.034365, −0.383740, 4.714288, −0.608364, 1.018706, 1.617855) | 21.195303 |
| 2 | 2 | 21.195303 | 681.505337 | (2.060105, 1.967376, −0.366454, 4.424977, −0.621624, 0.964168, 1.679359) | 1.279039 |
| 3 | 2 | 2.119530 | 680.823561 | (2.278954, 1.954036, −0.432778, 4.375586, −0.624030, 1.027213, 1.607750) | 0.154300 |
| 4 | 2 | 0.211953 | 680.651942 | (2.325791, 1.951432, −0.453430, 4.366594, −0.624490, 1.038047, 1.594178) | 1.593473E−002 |
| 5 | 2 | 2.119530E−002 | 680.633110 | (2.330656, 1.951086, −0.456864, 4.365882, −0.624528, 1.039155, 1.592410) | 1.602348E−003 |
| 6 | 2 | 2.119530E−003 | 680.631250 | (2.331044, 1.951023, −0.456929, 4.365892, −0.624526, 1.039303, 1.592087) | 1.601163E−004 |
| 7 | 2 | 2.119530E−004 | 680.631062 | (2.331083, 1.951017, −0.456935, 4.365893, −0.624526, 1.039318, 1.592054) | 1.813545E−005 |
| 8 | 2 | 2.119530E−005 | 680.631044 | (2.331087, 1.951017, −0.456936, 4.365893, −0.624526, 1.039319, 1.592051) | 3.983831E−006 |

The optimal solution to (P4.3) is given by

with the optimal objective function value 680.6300573. Let

From the above classical examples, we can see that our approximate algorithm successfully produces approximate optimal solutions to the corresponding problems. The convergence speed could be improved by using a Newton-type method in Step 2 of Algorithm 3.2, which we will investigate in future work.

This research is supported by the Natural Science Foundation of Shandong Province (ZR2015AL011).

The author declares no conflicts of interest regarding the publication of this paper.

Liu, B.Z. (2019) A Smoothing Penalty Function Method for the Constrained Optimization Problem. Open Journal of Optimization, 8, 113-126. https://doi.org/10.4236/ojop.2019.84010