Fixed-Point Iteration Method for Solving Convex Quadratic Programming with Mixed Constraints
Keywords: Fixed-Point Iteration; Convex Quadratic Programming Problem; Convergence; Smoothing Function
1. Introduction
Consider the following convex quadratic programming problem
![](https://www.scirp.org/html/htmlimages\5-7401944x\d6d719fd-9354-42d9-b144-52701720ad99.png)
(1)
where H is an n × n symmetric positive semi-definite partitioned matrix, A is an m × n partitioned matrix, and c and b are vectors in R^n and R^m, respectively; A1, A2 and b1, b2 are the corresponding blocks of A and b for the equality and inequality constraints.
As is well known, this problem models a broad spectrum of applications in computer science, operations research, and engineering, since nonlinear programming is ultimately solved by way of convex quadratic programming subproblems. The study of effective solvers has therefore been a long-standing research subject.
Several effective algorithms have been proposed for particular classes of convex quadratic programming problems; see [1-5]. These methods, however, are not polynomial algorithms, as shown in [6], where the theory and algorithms of quadratic programming are presented in full. Based on the KKT optimality conditions and the duality theory of quadratic programming, fixed-point iteration methods have been obtained for equality-constrained (see [7]) and inequality-constrained (see [8]) convex quadratic programming problems, respectively. Equality and inequality constraints can, of course, be converted into one another by introducing artificial and slack variables; the main drawback is that introducing these variables increases the problem size and the numerical difficulty. Motivated by the smoothing methods of [9-11] and the fixed-point methods of [7,8], this paper treats quadratic programming with mixed constraints and derives a fixed-point iteration method for it. Moreover, the coefficient matrix is not required to have full rank, and the proposed method performs efficiently. Thus the open question posed in [8] is resolved.
2. Scheme of Algorithm
According to duality theory, the dual problem of the primal problem (1) is
![](https://www.scirp.org/html/htmlimages\5-7401944x\d86a93cb-4d72-4ab2-aef8-4266ba343d94.png)
(2)
where the dual variables correspond to the equality and inequality constraints of (1).
We know that if x* is an optimal solution of (1), then there exists a multiplier vector y* such that the pair (x*, y*) is an optimal solution of (2).
Let us define the following solution set:
![](https://www.scirp.org/html/htmlimages\5-7401944x\00784267-ad5e-4040-a3d6-8ba5a44290f6.png)
that is, the set of pairs (x*, y*) in which x* is an optimal solution of (1) and y* is an optimal solution of (2). It is easy to verify that this set is closed.
Suppose that the feasible region of (1) is not empty. A necessary and sufficient condition for an optimal solution is:
1) Primal feasibility
![](https://www.scirp.org/html/htmlimages\5-7401944x\de15ceee-0152-461f-a047-cc06ea4a91d0.png)
![](https://www.scirp.org/html/htmlimages\5-7401944x\2654fbb7-10cc-4005-8f14-14b54160537b.png)
2) Dual feasibility
![](https://www.scirp.org/html/htmlimages\5-7401944x\b9279a34-1c3c-4c8a-8611-c38dc6e45672.png)
3) Complementary condition
![](https://www.scirp.org/html/htmlimages\5-7401944x\d02c678f-1ba5-4c77-b62f-a851a4d30834.png)
![](https://www.scirp.org/html/htmlimages\5-7401944x\7341ae07-1fdb-48d8-b7c2-1a2a120651a0.png)
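Since the displayed conditions above survive only as images, it may help to record the familiar form the three conditions take for a mixed-constraint problem min (1/2)xᵀHx + cᵀx subject to A1 x = b1, A2 x ≤ b2. The symbols H, A1, A2, b1, b2, y1, y2 below are assumed notation, not necessarily the paper's own:

```latex
\begin{aligned}
&\text{Primal feasibility:} && A_1 x = b_1, \qquad A_2 x \le b_2,\\
&\text{Dual feasibility:}   && Hx + c + A_1^{\top} y_1 + A_2^{\top} y_2 = 0, \qquad y_2 \ge 0,\\
&\text{Complementarity:}    && y_2^{\top}\,(b_2 - A_2 x) = 0 .
\end{aligned}
```

The complementarity condition can equivalently be written with the max-function, e.g. y2 = max(0, y2 + A2 x − b2) componentwise, which is the form that leads to a non-smooth scheme such as (3).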
These conditions can be rewritten as the following scheme
![](https://www.scirp.org/html/htmlimages\5-7401944x\d30a3f36-c9cb-464a-9d3b-a1f1b34850df.png)
(3)
where
![](https://www.scirp.org/html/htmlimages\5-7401944x\70ea25a1-59c7-4a92-9ce5-cd48dd08a36a.png)
![](https://www.scirp.org/html/htmlimages\5-7401944x\49c10c3e-4e1a-4066-9a1e-e65b03eb48f0.png)
More formally, the following fixed-point iteration problem is obtained
(4)
The component form is
(5)
where the notation follows (3).
Problem (5) is a non-smooth optimization problem, so Newton-type methods cannot be applied directly. In this paper we convert it into a smooth optimization problem.
To begin, we introduce a function φ mapping R into R with the following properties.
Property 1 Let φ be a real function such that:
1) φ is strictly convex and differentiable;
2)
;
3)
;
4) ![](https://www.scirp.org/html/htmlimages\5-7401944x\544295c3-5bd8-4084-ac2c-4b8316e8a0f0.png)
The assumption that φ is convex and differentiable means that the classical Newton-type algorithms can be applied directly. Assumption 2) means that 0 is not a sub-gradient of φ at 0. Hypotheses 3) and 4) indicate that the approximating function has the same asymptotic behavior as the max-function.
We consider the following function, with φ as described above:
(6)
For any μ > 0, it satisfies the following properties.
Theorem 1 Let f be the function given by (6). Then for any μ > 0, f(·, μ) is strictly convex and differentiable, and f(t, μ) > max(t, 0) for all t.
Proof According to Property 1 and (6), the convexity and differentiability are straightforward. Noting that
![](https://www.scirp.org/html/htmlimages\5-7401944x\76e8be7c-400c-4374-81e2-3d5b31b8d217.png)
property 2) implies the stated inequality.
Theorem 2 Let f be the function given by (6). Then f(·, μ) converges uniformly to the max-function as the adjustable parameter μ approaches 0.
Proof For fixed t, we can compute
![](https://www.scirp.org/html/htmlimages\5-7401944x\773db5f6-5a73-44ff-ab09-8fd7185b94f7.png)
In one case the limit is immediate; in the other, after a change of variable,
![](https://www.scirp.org/html/htmlimages\5-7401944x\02eb69ad-baa0-4923-9f62-e8a29604ea83.png)
Hence the bound holds for all t, which proves the uniform convergence.
Based on the discussion above, we adopt the following function in this paper:
![](https://www.scirp.org/html/htmlimages\5-7401944x\59013725-6bd4-4cb2-9795-bab756039032.png)
and
![](https://www.scirp.org/html/htmlimages\5-7401944x\3a68bf49-b3e8-4964-8428-fed8e4f40d09.png)
It is easy to check that this function satisfies properties 1) to 4), together with Theorems 1 and 2. Thus the max-function in (3) can be replaced by this smoothing function, and a fixed-point iteration based on the smoothing function is obtained.
Algorithm
Step 1 Set an initial point; choose the smoothing parameter, its reduction factor, and the tolerance; let k = 0;
Step 2 Apply the fixed-point method to solve
![](https://www.scirp.org/html/htmlimages\5-7401944x\a460b4ff-cf5b-428d-8343-575c85f97e04.png)
(7)
Step 3 Termination test: if the stopping criterion is satisfied, stop; otherwise, go to Step 4;
Step 4 Reduce the smoothing parameter, set k = k + 1, and go to Step 2.
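As a toy illustration of Steps 1-4 (not the paper's system (7)), the sketch below applies the same outer loop, an inner damped fixed-point solve followed by a reduction of the smoothing parameter, to the one-dimensional non-smooth equation x + max(0, x) = 1, whose solution is x = 1/2. The function names and the damping factor tau are assumptions for the sketch:

```python
import math

def smoothed_residual(x, mu):
    """Residual of x + max(0, x) - 1 = 0 with max(0, x) replaced by
    the smoothing f(x, mu) = mu * ln(1 + exp(x/mu))."""
    z = x / mu
    f = x + mu * math.log1p(math.exp(-z)) if z > 0 else mu * math.log1p(math.exp(z))
    return x + f - 1.0

def smoothing_fixed_point(x0=0.0, mu0=1.0, sigma=0.5, eps=1e-8, tau=0.5):
    """Outer smoothing loop: damped fixed-point iteration at the current
    mu (Step 2), a stopping test (Step 3), then mu <- sigma * mu (Step 4)."""
    x, mu = x0, mu0
    while mu >= eps:
        for _ in range(200):                 # Step 2: inner fixed-point solve
            step = tau * smoothed_residual(x, mu)
            x -= step
            if abs(step) < 1e-12:            # inner stopping test
                break
        mu *= sigma                          # Step 4: reduce the parameter
    return x
```

Each inner solve is warm-started from the previous one, which is what makes the continuation in μ cheap.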
The Jacobian matrix of (7) is
(8)
where
![](https://www.scirp.org/html/htmlimages\5-7401944x\8bbb5c43-6bd5-44d0-8967-7d1c3f5dadde.png)
3. Convergence Analyses
Now we discuss the convergence of the algorithm. We begin with two simple lemmas that form the basis of our theoretical analysis; since their proofs can be found in reference [12], we omit them here for reasons of space.
Lemma 1 [12] A necessary and sufficient condition for the fixed-point iteration method to be convergent is that the spectral radius of the iteration matrix B satisfies ρ(B) < 1, where the spectral radius is defined as ρ(B) = max_i |λ_i(B)| and λ_i(B) denotes an eigenvalue of the matrix B.
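Lemma 1 can be checked numerically on a small linear fixed-point iteration x ← Bx + c; the 2×2 helpers below (assumed names, pure Python) compute ρ(B) from the characteristic polynomial and run the iteration:

```python
def spectral_radius_2x2(B):
    """Spectral radius of a 2x2 matrix from lam^2 - tr*lam + det = 0."""
    (a, b), (c, d) = B
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4.0 * det
    if disc >= 0.0:
        r = disc ** 0.5
        return max(abs((tr + r) / 2.0), abs((tr - r) / 2.0))
    return abs(det) ** 0.5  # complex conjugate pair: |lam| = sqrt(det)

def fixed_point_iterate(B, c, x0, iters=200):
    """Iterate x <- B x + c; converges for every x0 iff rho(B) < 1."""
    x = list(x0)
    for _ in range(iters):
        x = [B[i][0] * x[0] + B[i][1] * x[1] + c[i] for i in range(2)]
    return x
```

For B = [[0.5, 0.1], [0.2, 0.3]] one finds ρ(B) ≈ 0.57 < 1, and the iterates approach the unique solution of (I − B)x = c regardless of the starting point.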
Lemma 2 [12] Suppose the matrix has eigenvalues that can be ordered from smallest to largest, and V is a diagonal matrix with nonnegative diagonal elements. Then the maximum modulus of the eigenvalues of the product matrix is bounded accordingly.
Theorem 3 Suppose the λ_i are the eigenvalues of the Jacobian matrix in (8). If one of the following conditions holds, then the parameter can be chosen so that the algorithm converges:
1) the real part of every eigenvalue λ_i is positive;
2) all of the eigenvalues λ_i of the matrix are real.
Proof Suppose the λ_i are the eigenvalues of the Jacobian matrix in (8), and denote the eigenvalues of the corresponding iteration matrix accordingly; these depend on the chosen parameter.
Note that
![](https://www.scirp.org/html/htmlimages\5-7401944x\ceff6327-155f-4e2f-8277-d2ab032aa816.png)
In one case the estimate holds for every positive value of the parameter; in the other, the corresponding bound follows.
1) If the real part of each eigenvalue is positive, we choose the parameter accordingly and obtain
![](https://www.scirp.org/html/htmlimages\5-7401944x\cccea2ed-e5c1-4bbd-b0c3-8b94324e63e0.png);
2) If all of the eigenvalues of the matrix are real, we choose the parameter accordingly and obtain the same bound.
Combining the cases above with Lemma 1, we conclude that the eigenvalues of the iteration matrix satisfy
![](https://www.scirp.org/html/htmlimages\5-7401944x\bf9b3eb4-b6d0-4f05-83db-9c25edab23e7.png)
According to the theorem above, the algorithm always converges if the parameter is chosen properly. This extends the recent results of [8].
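Theorem 3's parameter selection can be pictured with a small helper under assumed notation: take the iteration matrix to be I − αM with the eigenvalues of M given. For real positive eigenvalues, any α ∈ (0, 2/λmax) gives spectral radius below 1, and for complex eigenvalues with positive real part a small enough α works, since |1 − αλ|² = 1 − 2α Re λ + α²|λ|²:

```python
def shifted_radius(alpha, eigs):
    """Spectral radius of I - alpha*M, given the eigenvalues of M
    (real or complex); abs() handles both cases."""
    return max(abs(1.0 - alpha * lam) for lam in eigs)
```

For example, with real eigenvalues [0.5, 2.0, 5.0], alpha = 0.3 < 2/5 yields radius 0.85 < 1, while alpha = 0.5 yields |1 − 2.5| = 1.5 and the iteration diverges.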
4. Numerical Experiment and Conclusion
In this section we implement the algorithm in MATLAB 7.0 to demonstrate its behavior. Owing to the page limit, we do not report tests on larger randomly generated quadratic programming problems.
Example 1 Consider
![](https://www.scirp.org/html/htmlimages\5-7401944x\541d6494-a194-4c8e-999b-7c50dd405bdb.png)
![](https://www.scirp.org/html/htmlimages\5-7401944x\d8591673-e873-4403-8a8c-b42dfa456035.png)
The exact optimal solution is
, and the optimal value of the objective function is
.
Example 2 Consider
![](https://www.scirp.org/html/htmlimages\5-7401944x\40063135-7f18-407d-9103-da0bedf7c9d3.png)
![](https://www.scirp.org/html/htmlimages\5-7401944x\977331ed-9199-421b-adf9-11b342ac7b56.png)
The exact optimal solution is
, and the optimal value of the objective function is
.
The results are shown in Table 1.
From the table we can see that the algorithm is very simple, relying only on the classical fixed-point method, and its implementation is rather easy.
Example 3 Consider
![](https://www.scirp.org/html/htmlimages\5-7401944x\3a031a8b-729e-4c2c-8436-b2227ea1ebe2.png)
Table 1. Experiments and calculated data.
Table 2. Experiments and calculated data.
![](https://www.scirp.org/html/htmlimages\5-7401944x\c95e909c-26d0-464f-97f9-57b2a0277021.png)
And
![](https://www.scirp.org/html/htmlimages\5-7401944x\064afa58-3c8b-4b20-922a-531e122320d1.png)
![](https://www.scirp.org/html/htmlimages\5-7401944x\42dc71fe-2ed3-4409-849d-0b7bd8c884d5.png)
![](https://www.scirp.org/html/htmlimages\5-7401944x\9a869698-2597-4e52-bf0d-75c67bd64970.png)
![](https://www.scirp.org/html/htmlimages\5-7401944x\f3633ae9-9671-434c-9298-69d28c606570.png)
The exact optimal solution is
and the optimal value of the objective function is
.
Applying the proposed method, the iteration counts and computing times are shown in Table 2.
Acknowledgements
This work is supported by the Beijing Municipal Commission of Education Science and Technology Program (KM201310017006) and the Beijing URT Program (2012J00018).