A Continuous Approach to Binary Quadratic Problems

Abstract

This paper presents a continuous method for solving binary quadratic programming problems. First, the original problem is converted into an equivalent continuous optimization problem by means of an NCP (Nonlinear Complementarity Problem) function, and the resulting nonsmooth equations are then smoothed with the aggregate function. In this way the original combinatorial optimization problem is transformed into a general differentiable nonlinear programming problem, which can be solved by mature optimization techniques. Numerical experiments demonstrate the applicability, robustness, and solution quality of the approach, which can also be applied to large-scale problems.

Share and Cite:

Liu, Z., Yu, Z. and Wang, Y. (2018) A Continuous Approach to Binary Quadratic Problems. Journal of Applied Mathematics and Physics, 6, 1720-1732. doi: 10.4236/jamp.2018.68147.

1. Introduction

The binary quadratic programming (BQP) problem is a typical combinatorial optimization problem with a variety of applications in computer-aided design, traffic management, frequency allocation in cellular mobile communication, operations research, and engineering. Moreover, many constrained combinatorial optimization problems can be transformed into BQP form by suitable transformations, so BQP is representative of a broad class of important combinatorial optimization problems. Hammer [1] pointed out that any integer programming problem with a quadratic or linear objective function and linear constraints can be formulated as a BQP problem. Glover et al. [2] successfully used such a transformation to recast the quadratic knapsack problem as a BQP problem and solve it. Because of this flexibility, the BQP problem has a very wide range of practical applications. Owing to this wide application background and the difficulty caused by its NP (Non-Deterministic Polynomial) hard nature, it is of great academic value to study effective algorithms for solving such problems.

A binary quadratic programming problem can be written uniformly in the following general form:

$$\min\ f(x) = x^{T} Q x \quad \text{s.t.}\quad x_i \in \{l_i, u_i\},\ \ i = 1,2,\dots,n \qquad (1)$$

where Q is a real symmetric n × n matrix. The objective function may also contain an additional linear term, but it is omitted here because it has no substantial influence on the method introduced later. In the following discussion we will mainly focus on the so-called 0 - 1 quadratic programming problem:

$$\min\ f(x) = x^{T} Q x \quad \text{s.t.}\quad x_i \in \{0, 1\},\ \ i = 1,2,\dots,n \qquad (2)$$

However, the continuous method proposed later in this paper also applies to the general binary quadratic programming problem (1), which can easily be handled by a simple change of variables. It is well known that the number of feasible points of a 0 - 1 programming problem grows exponentially with the problem size, so within limited computer resources and time the traditional solution methods become impractical. Because of the wide application prospects and the difficulty of 0 - 1 programming, how to solve this kind of combinatorial optimization problem effectively has long been a focus for many scholars. The classical methods for solving 0 - 1 programming problems fall mainly into two types: exact algorithms and random search algorithms. Exact algorithms include the implicit enumeration method, branch and bound, cutting plane algorithms, dynamic programming, etc. Random search algorithms include genetic algorithms, simulated annealing, artificial neural network algorithms, etc. [3] [4] [5] [6] [7]. In recent years, however, continuous methods have become a new trend for 0 - 1 programming: their advantage is that they avoid the inherently combinatorial character of the problem and are no longer limited by the problem size. The idea is to transform the combinatorial optimization problem into an equivalent continuous optimization problem, thereby effectively avoiding the so-called combinatorial explosion, so that some large combinatorial optimization problems can be solved efficiently. Although this approach is still at an exploratory stage with little research in the area, we believe it is a noteworthy new research direction; interested readers are referred to the literature [8] - [13].
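
For very small instances, problem (2) can of course still be solved by direct enumeration, which also illustrates why an exhaustive approach breaks down as n grows (there are 2^n candidate vectors). The following sketch is purely illustrative and not part of the original paper; the matrix Q used is an arbitrary example.

```python
import itertools
import numpy as np

def bqp_brute_force(Q):
    """Solve min x^T Q x over x in {0,1}^n by enumeration (only viable for small n)."""
    n = Q.shape[0]
    best_x, best_val = None, np.inf
    for bits in itertools.product([0, 1], repeat=n):   # 2^n candidate vectors
        x = np.array(bits, dtype=float)
        val = x @ Q @ x
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

if __name__ == "__main__":
    Q = np.array([[1.0, -2.0, 0.5],
                  [-2.0, 1.0, -1.0],
                  [0.5, -1.0, -0.5]])   # small symmetric example matrix (hypothetical)
    x_opt, f_opt = bqp_brute_force(Q)
    print(x_opt, f_opt)
```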

In this article, we first turn the binary constraints of the 0 - 1 programming problem into equivalent complementarity constraints, that is, into a mathematical program with equilibrium constraints (MPEC); then an NCP function is used to transform the complementarity conditions into equivalent equations. Finally, the resulting nonsmooth equations are smoothed by the aggregate function, and we obtain a smooth nonlinear optimization problem.

2. Continuous Formulation of BQP

The simplest and most intuitive way to solve a 0 - 1 programming problem is to adopt the rounding technique: treat the 0 - 1 variables as continuous variables and replace the variable constraints $x_i \in \{0,1\},\ i = 1,2,\dots,n$ of the original problem with the following interval constraints:

$$0 \le x_i \le 1 \qquad (3)$$

A relaxed optimization problem is then obtained, and the components of the relaxed solution are rounded to the nearest discrete value. Although this method is easy to implement, it lacks a rigorous theoretical basis: it cannot guarantee that the solution obtained is a global optimum, or even a feasible solution.

Another method is the penalty function method. It is easy to see that $x_i \in \{0,1\},\ i = 1,2,\dots,n$ is equivalent to the complementarity condition:

$$x_i(1 - x_i) = 0, \quad i = 1,2,\dots,n \qquad (4)$$

Therefore, a penalty function can be constructed as follows:

$$P(\alpha, x) = \alpha \sum_{i=1}^{n} x_i(1 - x_i) \qquad (5)$$

where α > 0 is a sufficiently large penalty parameter. Adding this penalty function to the objective function yields an equivalent continuous optimization problem (here $\mathbf{1}$ denotes the all-ones vector):

$$\min\ f(x) + \alpha x^{T}(\mathbf{1} - x) \qquad (6)$$

Due to the strong concavity of $\alpha x^{T}(\mathbf{1} - x)$, the objective function in (6) is concave. The equivalence of (2) and (6) rests on the fact that a concave function attains its minimum at a vertex of the feasible region, and that $x^{T}(\mathbf{1} - x) = 0$ with $0 \le x \le \mathbf{1}$ holds exactly when every component $x_i$ is 0 or 1. Although a vertex of the feasible region is not necessarily a vertex of the unit hypercube, if α is sufficiently large the global minimum can only be attained where $x^{T}(\mathbf{1} - x) = 0$.

However, the specific value of α cannot be determined in advance. If it is too small, the penalty term does not play its role; if it is too large, the strong concavity of the function means that this penalty method cannot effectively find the optimal solution of the original problem, so it is not an effective method. This is pointed out in the literature [14], but only as a new viewpoint; no more effective method for the original problem is given there. Therefore it is not ideal to solve the original problem directly via (6). Ng [13] later constructed a logarithmic barrier term:

$$\Psi(x) = -\sum_{i=1}^{n}\big[\ln x_i + \ln(1 - x_i)\big] \qquad (7)$$

This barrier term can replace the box constraints $0 \le x_i \le 1,\ i = 1,2,\dots,n$ and also helps the iterates avoid being trapped in local minima, so that a global optimal solution of the original problem can be sought. The following unconstrained optimization problem is then obtained:

$$\min\ f(x) + \alpha x^{T}(\mathbf{1} - x) - \mu \sum_{i=1}^{n}\big[\ln x_i + \ln(1 - x_i)\big] \qquad (8)$$

where α > 0 is a penalty parameter and μ > 0 is a barrier parameter. In a sense, (8) is an improvement on (6), but in essence both replace the constraints $x_i \in \{0,1\},\ i = 1,2,\dots,n$ with penalty or logarithmic barrier terms.
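
To make the role of the two parameters concrete, the following sketch (not from the paper; the toy matrix Q and all parameter values are arbitrary) simply evaluates the penalized objective (6) and the penalty-barrier objective (8) at a given interior point:

```python
import numpy as np

def penalized_objective(x, Q, alpha):
    """Objective of (6): f(x) + alpha * x^T (1 - x)."""
    return x @ Q @ x + alpha * x @ (1.0 - x)

def penalty_barrier_objective(x, Q, alpha, mu):
    """Objective of (8): adds the logarithmic barrier term; requires 0 < x_i < 1."""
    barrier = -np.sum(np.log(x) + np.log(1.0 - x))
    return x @ Q @ x + alpha * x @ (1.0 - x) + mu * barrier

# illustrative values only
Q = np.array([[1.0, -2.0], [-2.0, 1.0]])
x = np.array([0.3, 0.8])
print(penalized_objective(x, Q, alpha=10.0))
print(penalty_barrier_objective(x, Q, alpha=10.0, mu=0.1))
```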

For the BQP problem, another continuous method is proposed in this paper. Consider the following two sets of constraints:

$$0 \le x_i \le 1, \quad i = 1,2,\dots,n, \qquad x_i(1 - x_i) = 0, \quad i = 1,2,\dots,n. \qquad (9)$$

Obviously, these two sets of constraints together are equivalent to the binary constraints. In fact, they can be combined into the complementarity constraints:

$$x_i \ge 0, \quad 1 - x_i \ge 0, \quad x_i(1 - x_i) = 0, \quad i = 1,2,\dots,n \qquad (10)$$

The advantage of constructing the constraints in this way is that the two sets of conditions in (9) act simultaneously, and each variable is subject to exactly one complementarity condition $x_i \ge 0,\ 1 - x_i \ge 0,\ x_i(1 - x_i) = 0$.

More importantly, the complementarity constraints (10) differ from general complementarity constraints: in the general case, each component is a function of the whole vector $x = (x_1, x_2, \dots, x_n)$, whereas for (10) the value of each $F_i(x) = x_i(1 - x_i)$ depends only on $x_i$ and has nothing to do with the other variables. In other words, the complementarity constraints in (10) are mutually independent.

By the above, problem (2) can be equivalently transformed into the following mathematical program with complementarity constraints:

$$\min\ f(x) = x^{T} Q x \quad \text{s.t.}\quad x_i \ge 0,\ \ 1 - x_i \ge 0,\ \ x_i(1 - x_i) = 0,\ \ i = 1,2,\dots,n \qquad (11)$$

The complementarity constraints above can be replaced further by means of an NCP function [15]. To describe this, we first recall the concept:

Definition (NCP function): A function $\phi: \mathbb{R}^2 \to \mathbb{R}$ is called an NCP function if $\phi(a, b) = 0 \iff a \ge 0,\ b \ge 0,\ ab = 0$.

Therefore, we can replace the complementarity constraints in (11) with a system of equations and finally obtain the following continuous optimization model:

$$\min\ f(x) = x^{T} Q x \quad \text{s.t.}\quad \phi(x_i, 1 - x_i) = 0,\ \ i = 1,2,\dots,n \qquad (12)$$

Thus, the global optimal solution of (12) is the exact solution of problem (2).

3. Smoothing Method for BQP

According to the definition given above, NCP functions can take various forms. Several commonly used NCP functions are:

$$\begin{aligned} \phi_{M}(a,b) &= \min(a, b) \\ \phi_{FB}(a,b) &= \sqrt{a^{2} + b^{2}} - (a + b) \\ \phi(a,b) &= -ab + \tfrac{1}{2}\min^{2}\{0,\ a + b\} \\ \phi(a,b) &= (a - b)^{2} - a|a| - b|b| \end{aligned} \qquad (13)$$
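
As a quick sanity check (not part of the paper), the sketch below implements the minimum and Fischer-Burmeister functions from (13) and verifies the defining property $\phi(a,b) = 0 \iff a \ge 0,\ b \ge 0,\ ab = 0$ on a few sample pairs:

```python
import math

def phi_min(a, b):
    """Minimum NCP function: phi_M(a, b) = min(a, b)."""
    return min(a, b)

def phi_fb(a, b):
    """Fischer-Burmeister NCP function: sqrt(a^2 + b^2) - (a + b)."""
    return math.sqrt(a * a + b * b) - (a + b)

samples = [(0.0, 2.0), (3.0, 0.0), (0.0, 0.0),    # complementary pairs -> phi = 0
           (1.0, 1.0), (-1.0, 2.0), (-0.5, -0.5)]  # non-complementary -> phi != 0
for a, b in samples:
    complementary = a >= 0 and b >= 0 and abs(a * b) < 1e-12
    print(f"a={a:5.1f} b={b:5.1f}  phi_M={phi_min(a, b):7.3f}  "
          f"phi_FB={phi_fb(a, b):7.3f}  complementary={complementary}")
```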

In what follows, we mainly work with the minimum function $\phi_{M}$.

It is easy to see that $\min\{a, b\} = -\max\{-a, -b\}$, and for the non-differentiable maximum function

$$f_{\max}(x) = \max_{1 \le i \le m} f_i(x) \qquad (14)$$

the aggregate function is often used to construct a smooth approximation [16] [17]. The so-called aggregate function takes the following form:

$$F_{\mu}(x) = \mu^{-1} \ln\left\{\sum_{i=1}^{m} \exp[\mu f_i(x)]\right\} \qquad (15)$$

where the smoothing parameter μ is sufficiently large and $f_i(x),\ i = 1,2,\dots,m$ are continuously differentiable (i.e., smooth) real-valued functions. The aggregate function has very good properties [18], as the following lemma shows:

Lemma 3.1 The function $F_{\mu}(x)$ defined by (15) satisfies the following conditions:

1) $f_{\max}(x) < F_{\mu}(x) \le f_{\max}(x) + \mu^{-1}\ln m$

2) $\lim_{\mu \to \infty} F_{\mu}(x) = f_{\max}(x)$

Proof 1) $F_{\mu}(x)$ can be rewritten as $F_{\mu}(x) = f_{\max}(x) + \mu^{-1}\ln \sum_{i=1}^{m} \exp\{\mu[f_i(x) - f_{\max}(x)]\}$. Because $f_i(x) \le f_{\max}(x)$, we have $0 < \exp\{\mu[f_i(x) - f_{\max}(x)]\} \le 1$, and there is at least one index for which $f_i(x) = f_{\max}(x)$, so the corresponding term satisfies $\exp\{\mu[f_i(x) - f_{\max}(x)]\} = 1$. Hence

$$0 < F_{\mu}(x) - f_{\max}(x) \le \mu^{-1}\ln m \qquad (16)$$

2) Letting $\mu \to \infty$ in (16) gives the conclusion.

Based on the above, the minimum function can be smoothed as follows [19]:

$$\phi_{M}(a,b) = \min(a,b) = -\max(-a,-b) \approx -F_{\mu}(-a,-b) = -\mu^{-1}\ln\big[\exp(-\mu a) + \exp(-\mu b)\big] \qquad (17)$$

where the smoothing parameter μ is sufficiently large; as shown by Lemma 3.1, when $\mu \to \infty$ the smooth approximation coincides with $\phi_{M}(a,b)$.
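
The sketch below (illustrative only, with arbitrary choices of μ) compares min(a, b) with the log-sum-exp smoothing of (17), written in a numerically stable way to avoid overflow for large μ:

```python
import numpy as np

def phi_smooth(a, b, mu):
    """Smooth approximation of min(a, b): -1/mu * ln(exp(-mu*a) + exp(-mu*b))."""
    # Stable evaluation: factor out the dominant (smaller) argument.
    m = np.minimum(a, b)
    return m - np.log(np.exp(-mu * (a - m)) + np.exp(-mu * (b - m))) / mu

for mu in [10.0, 100.0, 1000.0]:
    for a, b in [(0.0, 1.0), (0.3, 0.7), (0.5, 0.5)]:
        gap = min(a, b) - phi_smooth(a, b, mu)
        print(f"mu={mu:7.1f}  min={min(a, b):.3f}  smooth={phi_smooth(a, b, mu):.6f}  gap={gap:.6f}")
```

In this sketch the printed gap always stays between 0 and ln 2 / μ, in line with Lemma 3.2 below.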

For ease of exposition, write:

$$\begin{aligned} \phi_{M}(x_i, 1 - x_i) &= \min\{x_i,\ 1 - x_i\}, && i = 1,2,\dots,n \\ \phi_{\mu}(x_i, 1 - x_i) &= -\mu^{-1}\ln\big[\exp(-\mu x_i) + \exp(-\mu(1 - x_i))\big], && i = 1,2,\dots,n \end{aligned} \qquad (18)$$

At this point, problem (12) can be further transformed into an equivalent continuous and smooth nonlinearly constrained optimization problem:

$$\min\ f(x) = x^{T} Q x \quad \text{s.t.}\quad \phi_{\mu}(x_i, 1 - x_i) = 0,\ \ i = 1,2,\dots,n \qquad (19)$$

With this continuous formulation, a global optimal solution of the original problem (2) can be sought by solving the corresponding nonlinear constrained optimization problem (19). Many mature optimization algorithms exist for nonlinear equality constrained optimization; this paper mainly uses the multiplier penalty function method to solve (19). The multiplier method is an optimization algorithm proposed independently by Powell and Hestenes in 1969 for equality constrained optimization, and was later extended to inequality constrained optimization by Rockafellar in 1973. The basic idea is to start from the Lagrangian function of the original problem and add an appropriate penalty term, so as to transform the original problem into a sequence of unconstrained optimization subproblems.

For convenience, define:

$$\begin{aligned} \Phi(x) &= \big(\phi_{M}(x_1, 1 - x_1), \phi_{M}(x_2, 1 - x_2), \dots, \phi_{M}(x_n, 1 - x_n)\big)^{T} \\ \Phi(x, \mu) &= \big(\phi_{\mu}(x_1, 1 - x_1), \phi_{\mu}(x_2, 1 - x_2), \dots, \phi_{\mu}(x_n, 1 - x_n)\big)^{T} \end{aligned} \qquad (20)$$

The Lagrangian function of problem (19) is [20]:

$$L(x, \lambda) = f(x) + \lambda^{T}\Phi(x, \mu) \qquad (21)$$

where $\lambda = (\lambda_1, \lambda_2, \dots, \lambda_n)^{T}$ is the Lagrangian multiplier vector. Let $(x^{*}, \lambda^{*})$ be a KT pair of problem (19); then the optimality conditions give:

$$\nabla_{x} L(x^{*}, \lambda^{*}) = 0, \qquad \nabla_{\lambda} L(x^{*}, \lambda^{*}) = \Phi(x^{*}, \mu) = 0 \qquad (22)$$

In addition, it is not difficult to see that every x in the feasible domain satisfies:

$$L(x^{*}, \lambda^{*}) = f(x^{*}) \le f(x) = f(x) + (\lambda^{*})^{T}\Phi(x, \mu) = L(x, \lambda^{*}) \qquad (23)$$

The above relation shows that if the multiplier vector $\lambda^{*}$ were known, problem (19) could be equivalently transformed into:

$$\min\ L(x, \lambda^{*}) \quad \text{s.t.}\quad \phi_{\mu}(x_i, 1 - x_i) = 0,\ \ i = 1,2,\dots,n \qquad (24)$$

The exterior penalty function method is then applied to problem (24). The augmented objective function is

$$L_{\mu}(x, \lambda, \alpha) = L(x, \lambda) + \frac{\alpha}{2}\|\Phi(x, \mu)\|^{2} = f(x) + \lambda^{T}\Phi(x, \mu) + \frac{\alpha}{2}\|\Phi(x, \mu)\|^{2} \qquad (25)$$

In this way, we can fix $\lambda = \bar{\lambda}$ and find a minimizer of $L_{\mu}(x, \bar{\lambda}, \alpha)$, then adjust the value of λ appropriately and minimize again, repeating until the desired $x^{*}$ and $\lambda^{*}$ are obtained. Specifically, when the minimizer $x^{(k)}$ of $\min L_{\mu}(x, \lambda^{(k)}, \alpha)$ is computed in the k-th unconstrained subproblem, the necessary condition for an extremum gives:

$$\nabla_{x} L_{\mu}(x^{(k)}, \lambda^{(k)}, \alpha) = \nabla f(x^{(k)}) + \nabla\Phi(x^{(k)}, \mu)\big[\lambda^{(k)} + \alpha\,\Phi(x^{(k)}, \mu)\big] = 0 \qquad (26)$$

Moreover, a KT point $(x^{*}, \lambda^{*})$ of the original problem satisfies:

$$\nabla f(x^{*}) + \nabla\Phi(x^{*}, \mu)\,\lambda^{*} = 0, \qquad \Phi(x^{*}, \mu) = 0 \qquad (27)$$

In order that $\{x^{(k)}\} \to x^{*}$ and $\{\lambda^{(k)}\} \to \lambda^{*}$, comparing the two expressions above yields the updating formula for the multiplier sequence $\{\lambda^{(k)}\}$:

$$\lambda_i^{(k+1)} = \lambda_i^{(k)} + \alpha^{(k)}\,\phi_{\mu^{(k)}}\big(x_i^{(k)}, 1 - x_i^{(k)}\big), \quad i = 1,2,\dots,n \qquad (28)$$

It can be seen from Equation (28) that a necessary and sufficient condition for the convergence of $\{\lambda^{(k)}\}$ is $\{\phi_{\mu}(x^{(k)}, 1 - x^{(k)})\} \to 0$. We now prove that $\phi_{\mu}(x^{(k)}, 1 - x^{(k)}) = 0$ is also a necessary and sufficient condition for $(x^{(k)}, \lambda^{(k)})$ to be a KT point.

Theorem 3.1 Let $x^{(k)}$ be the minimum point of the unconstrained optimization problem:

$$\min\ L_{\mu}(x, \lambda^{(k)}, \alpha) = L(x, \lambda^{(k)}) + \frac{\alpha}{2}\|\Phi(x, \mu)\|^{2} \qquad (29)$$

Then a necessary and sufficient condition for $(x^{(k)}, \lambda^{(k)})$ to be a KT point of (19) is $\phi_{\mu}(x^{(k)}, 1 - x^{(k)}) = 0$.

Proof Necessity is obvious. To prove sufficiency, note that since $x^{(k)}$ is a minimizer of (29) and $\phi_{\mu}(x^{(k)}, 1 - x^{(k)}) = 0$, for any feasible point x we have:

$$f(x) = L_{\mu}(x, \lambda^{(k)}, \alpha) \ge L_{\mu}(x^{(k)}, \lambda^{(k)}, \alpha) = f(x^{(k)}) \qquad (30)$$

that is, $x^{(k)}$ is also a minimizer of (19). On the other hand, $x^{(k)}$ is also a stationary point of (29), so

$$\nabla_{x} L_{\mu}(x^{(k)}, \lambda^{(k)}, \alpha) = \nabla f(x^{(k)}) + \nabla\Phi(x^{(k)}, \mu)\,\lambda^{(k)} = 0 \qquad (31)$$

The above formula indicates that $\lambda^{(k)}$ is the Lagrangian multiplier vector associated with $x^{(k)}$, that is, $(x^{(k)}, \lambda^{(k)})$ is also a KT point of (19).

Based on the above discussion, before giving the detailed steps of the multiplier method for the equality constrained problem (19), we study some properties of $\phi_{\mu}(x_i, 1 - x_i)$ in the following lemmas.

Lemma 3.2 For each $i = 1,2,\dots,n$, the following formula holds:

$$\phi_{M}(x_i, 1 - x_i) - \mu^{-1}\ln 2 \le \phi_{\mu}(x_i, 1 - x_i) \le \phi_{M}(x_i, 1 - x_i) \qquad (32)$$

Proof According to the properties of aggregate function:

$$f_{\max}(-x_i, -1 + x_i) < F_{\mu}(-x_i, -1 + x_i) \le f_{\max}(-x_i, -1 + x_i) + \mu^{-1}\ln 2 \qquad (33)$$

thus $-f_{\max}(-x_i, -1 + x_i) - \mu^{-1}\ln 2 \le -F_{\mu}(-x_i, -1 + x_i) < -f_{\max}(-x_i, -1 + x_i)$. That is to say, $\min\{x_i, 1 - x_i\} - \mu^{-1}\ln 2 \le \phi_{\mu}(x_i, 1 - x_i) < \min\{x_i, 1 - x_i\}$. Therefore, $\phi_{M}(x_i, 1 - x_i) - \mu^{-1}\ln 2 \le \phi_{\mu}(x_i, 1 - x_i) \le \phi_{M}(x_i, 1 - x_i)$.

Lemma 3.3 For any $\mu > 0$, there is

$$\big|\phi_{M}(x_i, 1 - x_i) - \phi_{\mu}(x_i, 1 - x_i)\big| \le \mu^{-1}\ln 2, \qquad \|\Phi(x) - \Phi(x, \mu)\| \le \mu^{-1}\sqrt{n}\,\ln 2 \qquad (34)$$

Proof From Lemma 3.2 it is easy to obtain $0 \le \phi_{M}(x_i, 1 - x_i) - \phi_{\mu}(x_i, 1 - x_i) \le \mu^{-1}\ln 2$, so

$$\big|\phi_{M}(x_i, 1 - x_i) - \phi_{\mu}(x_i, 1 - x_i)\big| \le \mu^{-1}\ln 2, \quad i = 1,2,\dots,n. \qquad (35)$$

From the above equation we know:

$$0 \le \|\Phi(x) - \Phi(x, \mu)\| = \left\{\sum_{i=1}^{n}\big[\phi_{M}(x_i, 1 - x_i) - \phi_{\mu}(x_i, 1 - x_i)\big]^{2}\right\}^{\frac{1}{2}} \le \big[(\mu^{-1}\ln 2)^{2}\, n\big]^{\frac{1}{2}} = \mu^{-1}\sqrt{n}\,\ln 2 \qquad (36)$$

Lemma 3.4 For each $i = 1,2,\dots,n$, $\phi_{\mu}(x_i, 1 - x_i)$ is strictly convex on the interval $(-\infty, +\infty)$; when $\mu \ge 10^{2}$, $\phi_{\mu}^{2}(x_i, 1 - x_i)$ is convex on the interval $(-\infty, 0.4737) \cup (0.5263, +\infty)$.

Proof According to the definition of $\phi_{\mu}(x_i, 1 - x_i)$, it can be seen that:

$$\phi_{\mu}(x_i, 1 - x_i) = -\mu^{-1}\ln\big\{\exp(-\mu x_i) + \exp[-\mu(1 - x_i)]\big\}, \qquad \phi_{\mu}^{2}(x_i, 1 - x_i) = \Big(\mu^{-1}\ln\big\{\exp(-\mu x_i) + \exp[-\mu(1 - x_i)]\big\}\Big)^{2} \qquad (37)$$

let $r_1 = e^{-\mu x_i}\big/\big(e^{-\mu x_i} + e^{-\mu(1 - x_i)}\big)$ and $r_2 = e^{-\mu(1 - x_i)}\big/\big(e^{-\mu x_i} + e^{-\mu(1 - x_i)}\big)$. We then obtain

$$\phi_{\mu}''(x_i, 1 - x_i) = 4\mu r_1 r_2 > 0, \quad i = 1,2,\dots,n; \qquad \big(\phi_{\mu}^{2}(x_i, 1 - x_i)\big)'' = 8\mu r_1 r_2\left[\frac{(r_1 - r_2)^{2}}{4\mu r_1 r_2} + \phi_{\mu}(x_i, 1 - x_i)\right]. \qquad (38)$$

So $\phi_{\mu}(x_i, 1 - x_i)$ is strictly convex on the interval $(-\infty, +\infty)$; as for $\phi_{\mu}^{2}(x_i, 1 - x_i)$, when $\mu \ge 10^{2}$ every $x \notin (0.4737, 0.5263)$ satisfies $\big(\phi_{\mu}^{2}(x_i, 1 - x_i)\big)'' > 0$, so for $\mu \ge 10^{2}$, $\phi_{\mu}^{2}(x_i, 1 - x_i)$ is convex on the interval $(-\infty, 0.4737) \cup (0.5263, +\infty)$.

Lemma 3.5 For any sufficiently large $\mu > 0$ and $\alpha > 0$, in the region where x satisfies $(r_2 - r_1)^{2}/(4\mu r_1 r_2) + \phi_{\mu}(x_i, 1 - x_i) > 0$, the augmented Lagrangian function $L_{\mu}(x, \lambda, \alpha)$ defined by Equation (25) is convex.

Proof Let $l(x, \lambda) = x^{T} Q x + \lambda^{T}\Phi(x, \mu)$; then the Hessian matrix of $L_{\mu}(x, \lambda, \alpha)$ is

$$\nabla_{xx}^{2} L_{\mu}(x, \lambda, \alpha) = \nabla_{xx}^{2}\, l(x, \lambda) + \alpha B(x, \mu) \qquad (39)$$

where

$$B(x, \mu) = \mathrm{Diag}\Big[\big(\phi_{\mu}'(x_i, 1 - x_i)\big)^{2} + \phi_{\mu}(x_i, 1 - x_i)\,\phi_{\mu}''(x_i, 1 - x_i)\Big], \quad i = 1,2,\dots,n \qquad (40)$$

and

$$\big(\phi_{\mu}'(x_i, 1 - x_i)\big)^{2} + \phi_{\mu}(x_i, 1 - x_i)\,\phi_{\mu}''(x_i, 1 - x_i) = 4\mu r_1 r_2\left[\frac{(r_2 - r_1)^{2}}{4\mu r_1 r_2} + \phi_{\mu}(x_i, 1 - x_i)\right] \qquad (41)$$

Thus, as long as $(r_2 - r_1)^{2}/(4\mu r_1 r_2) + \phi_{\mu}(x_i, 1 - x_i) > 0$ is satisfied, the matrix $B(x, \mu)$ is positive definite, and therefore for sufficiently large $\alpha > 0$ the function $L_{\mu}(x, \lambda, \alpha)$ is convex in the region satisfying this condition. When $\mu > 10^{3}$, $\phi_{\mu}^{2}(x_i, 1 - x_i)$ is convex on the interval $(-\infty, 0.4962) \cup (0.5038, +\infty)$; when $\mu > 10^{4}$, the interval is $(-\infty, 0.4995) \cup (0.5005, +\infty)$.

4. Algorithm

According to Lemma 3.5, we can use the augmented Lagrangian penalty function to solve problem (19). For strictly monotonically increasing sequences $\{\mu_k\}$ and $\{\alpha_k\}$, let $\{x_k\}$ denote the solutions of the unconstrained problems $\min_{x \in \mathbb{R}^{n}} L_{\mu}(x, \lambda^{(k)}, \alpha^{(k)})$ and $\{\lambda_k\}$ the corresponding Lagrange multipliers, which are updated according to (28) in each iteration.

Based on the previous analysis, the basic algorithm for solving problem (19) is as follows [19]:

Algorithm 1

Step 1 Given parameters $\mu^{(0)} > 0$, $\alpha^{(0)} > 0$, $\varepsilon_1 > 0$, $\varepsilon_2 > 0$ and $\sigma_1 > 1$, $\sigma_2 > 1$, select a starting point $x^{(s,0)}$, $\lambda^{(0)}$ and set $t^{(0)} = \|\Phi(x^{(s,0)}, \mu^{(0)})\|$, $k = 0$;

Step 2 Using the BFGS algorithm, solve the unconstrained minimization problem $\min_{x \in \mathbb{R}^{n}} L_{\mu}(x, \lambda^{(k)}, \alpha^{(k)})$ with the starting point $x^{(s,k)}$, and denote by $x^{(k)}$ its optimal solution;

Step 3 If $\|\Phi(x^{(k)}, \mu^{(k)})\| \le \varepsilon_1$ and $|f(x^{(k)}) - f(x^{(s,k)})| \le \varepsilon_2$, set $x^{*} = x^{(k)}$ and go to Step 6; else go to Step 4;

Step 4 If $\|\Phi(x^{(k)}, \mu^{(k)})\| \le 0.1\, t^{(k)}$, go to Step 5; else update the parameters $\alpha^{(k+1)} = \sigma_1 \alpha^{(k)}$, $\mu^{(k+1)} = \sigma_2 \mu^{(k)}$, set $k = k + 1$, and go to Step 2;

Step 5 Set $t^{(k+1)} = \|\Phi(x^{(k)}, \mu^{(k)})\|$, $x^{(s,k+1)} = x^{(k)}$, update $\lambda^{(k+1)}$ according to Equation (28), set $k = k + 1$, and go to Step 2;

Step 6 Finish.

The algorithm requires the sequence of subproblem solutions $x^{(k)}$ to drive $\|\Phi(x^{(k)}, \mu^{(k)})\|$ toward 0 at a linear rate of at least 0.1. If this requirement is not met in an iteration, the penalty factor and the smoothing parameter are increased automatically.
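
A compact sketch of this outer loop is given below. It is only an illustration of Algorithm 1 under stated assumptions (scipy's BFGS routine is used for the inner subproblem in place of Algorithm 2, the termination test is simplified, and all parameter values and the toy matrix are arbitrary); it is not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def phi_smooth(x, mu):
    """Componentwise smoothed min{x_i, 1 - x_i}, evaluated stably (cf. (18))."""
    m = np.minimum(x, 1.0 - x)
    return m - np.log(np.exp(-mu * (x - m)) + np.exp(-mu * (1.0 - x - m))) / mu

def augmented_lagrangian(x, Q, lam, mu, alpha):
    """L_mu(x, lambda, alpha) of (25)."""
    phi = phi_smooth(x, mu)
    return x @ Q @ x + lam @ phi + 0.5 * alpha * phi @ phi

def continuous_bqp(Q, mu=100.0, alpha=10.0, sigma1=5.0, sigma2=2.0,
                   eps1=1e-6, max_outer=50):
    """Illustrative outer loop of Algorithm 1 (assumed parameter values)."""
    n = Q.shape[0]
    x = np.full(n, 0.5)                 # starting point
    lam = np.zeros(n)
    t = np.linalg.norm(phi_smooth(x, mu))
    for _ in range(max_outer):
        # Step 2: inner BFGS solve of the unconstrained subproblem
        res = minimize(augmented_lagrangian, x, args=(Q, lam, mu, alpha), method="BFGS")
        x = res.x
        viol = np.linalg.norm(phi_smooth(x, mu))
        if viol <= eps1:                # Step 3 (simplified): violation small enough
            break
        if viol <= 0.1 * t:             # Step 5: multiplier update (28)
            t = viol
            lam = lam + alpha * phi_smooth(x, mu)
        else:                           # Step 4: increase penalty and smoothing parameters
            alpha *= sigma1
            mu *= sigma2
    x_bin = np.round(np.clip(x, 0.0, 1.0))   # round the continuous point to a binary vector
    return x_bin, float(x_bin @ Q @ x_bin)

if __name__ == "__main__":
    Q = np.array([[1.0, -2.0, 0.5],
                  [-2.0, 1.0, -1.0],
                  [0.5, -1.0, -0.5]])   # same toy matrix as before (hypothetical)
    x_bin, val = continuous_bqp(Q)
    print(x_bin, val)
```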

We now describe the BFGS algorithm mentioned in Step 2. It is currently the most popular and effective quasi-Newton method; it was proposed independently by Broyden, Fletcher, Goldfarb, and Shanno in 1970, hence the name BFGS. The basic idea of a quasi-Newton method is to replace the Hessian matrix $G^{(l)} = \nabla^{2} f(x^{(l)})$ in the basic Newton step with an approximation $B^{(l)}$ satisfying the relation:

$$B^{(l+1)} s^{(l)} = y^{(l)} \qquad (42)$$

where the displacement is $s^{(l)} = x^{(s,l)} - x^{(s,l-1)}$ and the gradient difference is $y^{(l)} = g^{(l)} - g^{(l-1)}$; (42) is usually referred to as the quasi-Newton equation or quasi-Newton condition.

Algorithm 2 (BFGS)

Step 1 Given parameters $\delta \in (0, 1)$, $\sigma \in (0, 0.5)$, select a starting point $x^{(s,0)}$ and a positive definite matrix $B^{(0)}$ (usually set to $G(x^{(0)})$ or the identity matrix $I_n$), and set $l = 0$, $f(x) = L_{\mu}(x, \lambda^{(k)}, \alpha^{(k)})$;

Step 2 Calculate $g^{(l)} = \nabla f(x^{(s,l)})$; if $\|g^{(l)}\| \le \varepsilon$, go to Step 6 and output $x^{(s,l)}$ as the approximate minimum point; else go to Step 3;

Step 3 Solve the linear system $B^{(l)} d = -g^{(l)}$ to obtain the solution $d^{(l)}$;

Step 4 Let $m^{(l)}$ be the smallest non-negative integer m satisfying the following inequality:

$$f\big(x^{(s,l)} + \delta^{m} d^{(l)}\big) \le f\big(x^{(s,l)}\big) + \sigma\,\delta^{m}\big(g^{(l)}\big)^{T} d^{(l)}$$

Set $\alpha^{(l)} = \delta^{m^{(l)}}$ and $x^{(s,l+1)} = x^{(s,l)} + \alpha^{(l)} d^{(l)}$;

Step 5 Set $s^{(l)} = x^{(s,l)} - x^{(s,l-1)}$, $y^{(l)} = g^{(l)} - g^{(l-1)}$, and apply the correction formula:

$$B^{(l+1)} = \begin{cases} B^{(l)}, & (y^{(l)})^{T} s^{(l)} \le 0 \\[4pt] B^{(l)} - \dfrac{B^{(l)} s^{(l)} (s^{(l)})^{T} B^{(l)}}{(s^{(l)})^{T} B^{(l)} s^{(l)}} + \dfrac{y^{(l)} (y^{(l)})^{T}}{(y^{(l)})^{T} s^{(l)}}, & (y^{(l)})^{T} s^{(l)} > 0 \end{cases} \qquad (43)$$

to obtain $B^{(l+1)}$, set $l = l + 1$, and go to Step 2;

Step 6 Finish.
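
For illustration only (and not the authors' code), a bare-bones implementation of Algorithm 2 with the backtracking step rule of Step 4 and the safeguarded update (43) might look as follows; the test function is an arbitrary convex quadratic:

```python
import numpy as np

def bfgs(f, grad, x0, delta=0.5, sigma=0.1, eps=1e-6, max_iter=200):
    """Minimal BFGS sketch following Algorithm 2 (backtracking line search, update (43))."""
    x = x0.astype(float)
    B = np.eye(len(x0))                      # B^(0) = identity matrix
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:         # Step 2: gradient small enough
            break
        d = np.linalg.solve(B, -g)           # Step 3: solve B d = -g
        m = 0                                # Step 4: smallest m with sufficient decrease
        while m < 60 and f(x + delta**m * d) > f(x) + sigma * delta**m * g @ d:
            m += 1
        x_new = x + delta**m * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g          # Step 5: displacement and gradient difference
        if y @ s > 0:                        # safeguarded BFGS update (43)
            Bs = B @ s
            B = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)
        x, g = x_new, g_new
    return x

if __name__ == "__main__":
    A = np.array([[3.0, 0.5], [0.5, 2.0]])   # arbitrary positive definite test matrix
    f = lambda x: 0.5 * x @ A @ x
    grad = lambda x: A @ x
    print(bfgs(f, grad, np.array([1.0, -2.0])))
```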

5. Numerical Experiments

For BQP problems, the most widely used test set at present is J. E. Beasley's OR-Library (http://people.brunel.ac.uk/~%20mastjjb/jeb/orlib/bqpinfo.html). In this paper we carried out numerical experiments on part of the BQP problems in this library, and the results of our continuous algorithm are compared with the best known solutions, as shown in Table 1.

The first column, Mu, of Table 1 gives the problem number; the second column m denotes the number of variables; the third column gives the numerical result of this paper; the fourth gives the best known solution; and the fifth column reports our numerical result as a percentage of the best known solution.

Table 1. Comparison of numerical results.

It is not hard to see from Table 1 that the optimal values obtained by our algorithm are essentially close to the best known solutions. The comparison shows that, for general unconstrained 0 - 1 quadratic programming problems, the proposed continuous algorithm comes close to the known best solutions.

6. Conclusion

In this paper, we have reformulated the unconstrained BQP problem as an MPEC problem via the equivalent complementarity conditions on a binary vector. To seek a global minimizer of the resulting continuous optimization problem, we construct a global smoothing function and develop a continuation algorithm based on a sequence of unconstrained minimizations. The numerical results indicate that the proposed continuous approach is very promising, especially for large problems, in terms of both the quality of the optimal values obtained and the computational work involved. In addition, this approach can be extended to general nonlinear binary optimization problems, which we leave as a future research topic.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.


References

[1] Hammer, P. and Rudeanu, S. (1968) Boolean Methods in Operations Research and Related Areas. Springer-Verlag, Berlin.
https://doi.org/10.1007/978-3-642-85823-9
[2] Glover, F., Kochenberger, G., Alidaee, B. and Amini, M.M. (1999) Tabu Search with Critical Event Memory: An Enhanced Application for Binary Quadratic Programs. Kluwer Academic Publishers, Boston.
[3] Goffin, J.L. (1997) Solving Nonlinear Multicommodity Flow Problems by the Analytic Center Cutting Plane Method. Mathematical Programming, 76, 131-154.
https://doi.org/10.1007/BF02614381
[4] Christoph, H. and Rendl, F. (1998) Solving Quadratic (0,1)-Problems by Semidefinite Programs and Cutting Planes. Mathematical Programming, 82, 291-315.
https://doi.org/10.1007/BF01580072
[5] Wolkowicz, H. and Anjos, M.F. (2002) Semidefinite Programming for Discrete Optimization and Matrix Completion Problem. Discrete Applied Mathematics, 123, 513-577.
https://doi.org/10.1016/S0166-218X(01)00352-3
[6] Beasley, J.E. (1998) Heuristic Algorithms for the Unconstrained Binary Quadratic Programming Problem. Management School, Imperial College, London, 1-36.
[7] Glover, F. (2002) One-Pass Heuristics for Large-Scale Unconstrained Binary Quadratic Problems. European Journal of Operational Research, 137, 272-287.
https://doi.org/10.1016/S0377-2217(01)00209-0
[8] Warners, J.P. (1997) Potential Reduction Algorithms for Structured Combinatorial Optimization Problems. Operations Research Letters, 21, 55-64.
https://doi.org/10.1016/S0167-6377(97)00031-X
[9] Audet, C. (1997) Links between Linear Bilevel and Mixed 0 - 1 Programming Problems. Journal of Optimization Theory and Applications, 93, 273-300.
https://doi.org/10.1023/A:1022645805569
[10] Kiwiel, K.C. (2000) Bregman Proximal Relaxation of Large-Scale 0 - 1 Problems. Computational Optimization and Applications, 15, 33-44.
https://doi.org/10.1023/A:1008770914218
[11] Pardalos, P.M. (2000) Recent Developments and Trends in Global Optimization. Journal of Computational and Applied Mathematics, 124, 209-228.
https://doi.org/10.1016/S0377-0427(00)00425-8
[12] Pardalos, P.M. (1996) Continuous Approaches to Discrete Optimization Problems. Nonlinear Optimization and Applications. Plenum Publishing, New York, 313-328.
https://doi.org/10.1007/978-1-4899-0289-4_22
[13] Ng, K.M. (2002) A Continuation Approach for Solving Nonlinear Optimization Problems with Discrete Variables. Stanford University, Stanford.
[14] Horst, R., Pardalos, P.M. and Thoai, N.V. (1995) Introduction to Global Optimization. Kluwer Academic Publisher, Boston.
[15] Mangasarian, O.L. (1976) Equivalence of the Complementarity Problem to a System of Nonlinear Equations. SIAM Journal on Applied Mathematics, 31, 89-92.
https://doi.org/10.1137/0131009
[16] Li, X. (1991) An Aggregate Constrained Method for Nonlinear Programming. The Operations Research Society, 42, 1003-1010.
https://doi.org/10.1057/jors.1991.190
[17] Li, X. (1992) An Entropy-Based Aggregate Method for Minimax Optimization. Engineering Optimization, 18, 277-285.
https://doi.org/10.1080/03052159208941026
[18] Li, Y. and Li, X. (2009) Continuous Approaches to 0 - 1 Programming Problem with Applications. PhD Thesis, Dalian University of Technology, Dalian.
[19] Tao, T. and Li, X. (2006) Continuous Approaches to Discrete Optimum Design. PhD Thesis, Dalian University of Technology, Dalian.
[20] Ma, C. (2010) Optimization Method and Matlab Program Design. Science Press, Beijing.

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.