
The nonlinear multidimensional knapsack problem is defined as the minimization of a convex function subject to multiple linear constraints. Methods developed for general nonlinear programming problems are often applied to nonlinear multidimensional knapsack problems, but they are inefficient or limited because most of them do not exploit the characteristics of knapsack problems. In this paper, by establishing structural properties of the continuous separable nonlinear multidimensional knapsack problem, we develop a multi-tier binary solution method for solving continuous nonlinear multidimensional knapsack problems with general structure. Its computational complexity is polynomial in the number of variables. We present two examples to illustrate the general application of our method, and we report statistical results to show its effectiveness.

The nonlinear multidimensional knapsack problem is defined as minimizing a convex function subject to multiple linear constraints. The nonlinear knapsack problem is a class of nonlinear programs, and some methods designed for general nonlinear programming can be applied to nonlinear multidimensional knapsack problems. General nonlinear programming problems have been intensively studied over the last decades, and various methods have been developed, such as Newton's method [

Generally, it is much faster and more reliable to solve knapsack problems with specialized methods than with standard methods [

Most research on nonlinear knapsack problems has studied one-dimensional problems with continuous or integer variables, and the proposed methods cannot be directly extended to multidimensional problems. Some researchers have attempted to solve multidimensional problems with integer-valued variables. Morin and Marsten were the first to study nonlinear multidimensional knapsack problems, developing the imbedded state space approach [

This paper establishes structural properties of the continuous separable nonlinear multidimensional knapsack problem and develops a multi-tier binary solution method for solving a class of continuous nonlinear multidimensional knapsack problems with general structure. The computational complexity is polynomial in the number of variables. We present two examples to illustrate the application of our method, and we report a statistical study with randomly generated instances of different problem sizes to show its effectiveness.

The paper is organized as follows. Section 2 describes the nonlinear multidimensional knapsack problem. Section 3 studies the structural properties of the problem and develops the algorithm. Section 4 presents the illustrative examples and the statistical results. Finally, concluding remarks are given in Section 5. All proofs are collected in the Appendix.

The continuous separable nonlinear multidimensional knapsack problem studied in this paper is as follows (denoted as problem P):

Min f ( x ) = ∑ i = 1 N f i ( x i ) , (1)

Subject to

∑ i = 1 N c i , j x i ≤ C j , j = 1 , ⋯ , M , (2)

l i ≤ x i ≤ u i , i = 1 , ⋯ , N . (3)

The notation used in this paper is listed in

In problem P, all objective functions f i ( x i ) , i = 1 , ⋯ , N are convex and differentiable, the unit resource coefficients satisfy c i , j > 0 for all i = 1 , ⋯ , N , j = 1 , ⋯ , M , the resource capacities satisfy C j > 0 for all j = 1 , ⋯ , M , and the lower and upper bounds satisfy 0 ≤ l i < u i for all i = 1 , ⋯ , N .

Since the objective functions and the feasible domain in problem P are all convex, the optimality condition for problem P can be characterized using KKT conditions. Let λ = ( λ 1 , ⋯ , λ M ) , λ j ≥ 0 , j = 1 , ⋯ , M , be the Lagrange multiplier vector for the constraints given in Equation (2), and w = ( w 1 , ⋯ , w N ) , w i ≥ 0 , i = 1 , ⋯ , N , v = ( v 1 , ⋯ , v N ) , v i ≥ 0 , i = 1 , ⋯ , N be the Lagrange multiplier vectors for the constraints in Equation (3). Thus, the Lagrange function for problem P can be written as:

L ( x , λ , w , ν ) = ∑ i = 1 N f i ( x i ) − ∑ j = 1 M λ j ( C j − ∑ i = 1 N c i , j x i ) − ∑ i = 1 N w i ( x i − l i ) + ∑ i = 1 N v i ( x i − u i ) . (4)

| Notation | Definition |
|---|---|
| N | total number of variables |
| M | total number of resource constraints |
| i | variable index |
| j | resource index |
| x | decision variable vector x = ( x 1 , ⋯ , x N ) |
| f i ( x i ) | the objective function related to variable x i |
| g i ( x i ) | the derivative of f i ( x i ) , g i ( x i ) = d f i ( x i ) / d x i |
| k i ( x i ) | the derivative of g i ( x i ) , k i ( x i ) = d g i ( x i ) / d x i |
| h i ( ⋅ ) | the inverse function of g i ( x i ) , h i ( ⋅ ) = g i − 1 ( ⋅ ) |
| c i , j | coefficient of variable i in resource constraint j |
| C j | available amount of resource j |
| λ | Lagrange multiplier vector for the resource constraints |
| w | Lagrange multiplier vector for the lower-bound constraints |
| v | Lagrange multiplier vector for the upper-bound constraints |
| f ( ⋅ ) | the objective function vector f = ( f 1 , ⋯ , f N ) |

Let g i ( x i ) = d f i ( x i ) / d x i , i = 1 , ⋯ , N . The KKT conditions for problem P can be summarized as the following proposition.

Proposition 1: The KKT conditions for problem P are:

g i ( x i ) + ∑ j = 1 M λ j c i , j − w i + v i = 0 , i = 1 , ⋯ , N , (5)

∑ i = 1 N w i ( x i − l i ) + ∑ i = 1 N v i ( x i − u i ) = 0 , (6)

λ j ( ∑ i = 1 N c i , j x i − C j ) = 0 , j = 1 , ⋯ , M . (7)

Since f i ( x i ) is convex in x i , g i ( x i ) is an increasing function of x i . Let x ¯ i be the point that satisfies g i ( x ¯ i ) = 0 if g i ( 0 ) ≤ 0 and lim x i → + ∞ g i ( x i ) ≥ 0 . If g i ( 0 ) > 0 , we let x ¯ i = 0 . If lim x i → + ∞ g i ( x i ) < 0 , we set x ¯ i = + ∞ . Then x ¯ i is the minimizer of f i ( x i ) without any constraint. We summarize this as

x ¯ i = arg min { f i ( x i ) , 0 ≤ x i < + ∞ } =
  0 , if g i ( 0 ) > 0 ;
  arg { x i | g i ( x i ) = 0 } , if g i ( 0 ) ≤ 0 and lim x i → + ∞ g i ( x i ) ≥ 0 ;
  + ∞ , if lim x i → + ∞ g i ( x i ) < 0 . (8)

In this section, we first investigate the structural properties of the optimal solution to problem P. Then we develop a solution method based on the structural properties for solving problem P.

We denote by problem PR the knapsack relaxation problem from problem P, in which the constraints in Equation (2) are relaxed. This implies that we do not consider Equation (2) in problem PR. By analyzing the solution to problem PR, we can find the way to construct the solution to problem P. We let x ^ i ( i = 1 , ⋯ , N ) be the optimal solution to problem PR, then x ^ i ( i = 1 , ⋯ , N ) has the following property.

Proposition 2: The optimal solution to problem PR is x ^ i = min { max { x ¯ i , l i } , u i } .
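As a quick illustration, Proposition 2 says that problem PR decomposes by coordinate and is solved by clipping the unconstrained minimizer to its box. A minimal Python sketch, assuming quadratic objectives f i ( x i ) = a i ( x i − b i ) 2 (so x ¯ i = b i ); the data and function name are ours, for illustration only:

```python
# Proposition 2 sketch: the relaxed problem PR decomposes by coordinate, and
# its optimal solution clips the unconstrained minimizer x_bar_i to [l_i, u_i].

def solve_pr(x_bar, l, u):
    """x_hat_i = min{max{x_bar_i, l_i}, u_i} for every i (Proposition 2)."""
    return [min(max(xb, lo), hi) for xb, lo, hi in zip(x_bar, l, u)]

# Illustrative data: for f_i(x) = a_i*(x - b_i)**2 the unconstrained minimizer is b_i.
x_bar = [20, 18, 8]
l = [6.7, 1, 2]
u = [10, 20, 30]
print(solve_pr(x_bar, l, u))  # -> [10, 18, 8]
```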

If ∑ i = 1 N c i , j x ^ i ≤ C j holds for some j = 1 , ⋯ , M , then the corresponding constraint in problem P is inactive and can be removed from problem P. In the following, without loss of generality, we assume that ∑ i = 1 N c i , j x ^ i > C j for all j = 1 , ⋯ , M . The KKT condition in Equation (7) is met at either λ j = 0 or ∑ i = 1 N c i , j x i = C j . The condition λ j = 0 implies that there is enough of resource j at the optimal solution, and hence the j-th constraint is inactive; ∑ i = 1 N c i , j x i = C j means that the j-th constraint is active, and the knapsack space of the j-th constraint is fully utilized at the optimal solution.

We denote by x * the optimal solution to problem P and by λ * the corresponding Lagrange multiplier vector. Let x i ( λ ) be a solution of the KKT conditions in Equation (5) and Equation (6). Denoting h i ( ⋅ ) = g i − 1 ( ⋅ ) , we have the following proposition.

Proposition 3. (a) x i ( λ ) = min { max { h i ( − ∑ j = 1 M λ j c i , j ) , l i } , u i } , i = 1 , ⋯ , N .

(b) If ( x ( λ ) , λ ) satisfies λ j = 0 or ∑ i = 1 N c i , j x i = C j , j = 1 , ⋯ , M , then we have x * = x ( λ ) .
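Proposition 3(a) can be evaluated directly once the inverse derivative h i is available. The sketch below assumes quadratic objectives f i ( x i ) = a i ( x i − b i ) 2 , so that g i ( x i ) = 2 a i ( x i − b i ) and h i ( y ) = b i + y / ( 2 a i ) ; data and names are illustrative, not from the paper:

```python
# Proposition 3(a) sketch: x_i(lam) = min{max{h_i(-sum_j lam_j*c_ij), l_i}, u_i}.
# Illustrative quadratic case (our assumption): f_i(x) = a_i*(x - b_i)**2, so
# g_i(x) = 2*a_i*(x - b_i) and its inverse derivative is h_i(y) = b_i + y/(2*a_i).

def x_of_lambda(lam, a, b, c, l, u):
    x = []
    for i in range(len(a)):
        y = -sum(lam[j] * c[i][j] for j in range(len(lam)))
        h = b[i] + y / (2 * a[i])          # h_i(y) = g_i^{-1}(y)
        x.append(min(max(h, l[i]), u[i]))  # clip to the bounds [l_i, u_i]
    return x

# With lam = 0 this recovers the relaxed solution of Proposition 2.
print(x_of_lambda([0.0, 0.0], [12, 15], [20, 18],
                  [[50, 100], [50, 80]], [6.7, 1], [10, 20]))  # -> [10, 18.0]
```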

For any given λ M ≥ 0 , we let x ( λ M ) and λ 1 , ⋯ , λ M − 1 be the optimal solution of Equations (5) and (6) and λ j ( ∑ i = 1 N c i , j x i − C j ) = 0 , j = 1 , ⋯ , M − 1 . For ease of exposition, we denote problem P as P ( f , M ) , where f = ( f 1 , ⋯ , f N ) is the objective function vector. Problem P ( f ^ ( λ M ) , M − 1 ) with f ^ i ( λ M ) = f i + λ M c i , M x i , i = 1 , ⋯ , N , is an ( M − 1 ) -constraint problem with the objective functions f ^ i ( λ M ) and the first M − 1 knapsack constraints of problem P.

By analyzing the structural properties of x ( λ M ) and P ( f ^ ( λ M ) , M − 1 ) , we can prove the following proposition.

Proposition 4. (a) If ( x ( λ M ) , λ M ) satisfies λ M = 0 or ∑ i = 1 N c i , M x i ( λ M ) = C M , then we have x * = x ( λ M ) .

(b) x ( λ M ) is the optimal solution to problem P ( f ^ ( λ M ) , M − 1 ) with f ^ i ( λ M ) = f i + λ M c i , M x i , i = 1 , ⋯ , N .

From Proposition 4(a), we know that the optimal solution to problem P ( f , M ) is obtained in two possible cases: 1) λ M = 0 , which means that the constraint ∑ i = 1 N c i , M x i ( λ M ) ≤ C M is not binding and can be removed from problem P ( f , M ) ; therefore, x * can be obtained by solving problem P ( f , M − 1 ) , which has the same structure as problem P ( f , M ) ; 2) ∑ i = 1 N c i , M x i ( λ M ) = C M , which implies that ∑ i = 1 N c i , M x i ( λ M ) ≤ C M is an active constraint, and the optimal solution must be obtained at ∑ i = 1 N c i , M x i ( λ M ) = C M with λ M > 0 .

Problem P ( f , M ) can thus be solved by solving problem P ( f , M − 1 ) in the case of λ M = 0 , so in the following we study the case of λ M > 0 . Proposition 4(b) indicates that problem P ( f ^ ( λ M ) , M − 1 ) determines the optimal values of x ( λ M ) and λ j , j = 1 , ⋯ , M − 1 . For any λ M > 0 , the M − 1 remaining resource constraints may be active or inactive, and the N decision variables may take bound or non-bound values.

If λ j > 0 , constraint j is active, so we denote by J ( λ M ) = { j | λ j > 0 , j = 1 , ⋯ , M } the active constraint set for the given λ M . Note that J ( λ M ) includes at least one active constraint in the case of λ M > 0 .

From Equation (5), we know x i ( λ M ) > l i if − ∑ j = 1 M λ j c i , j > g i ( l i ) , and x i ( λ M ) < u i if − ∑ j = 1 M λ j c i , j < g i ( u i ) , i = 1 , ⋯ , N . For the given λ M , we define the non-bound variable set I ( λ M ) and the lower- and upper-bound variable sets I L ( λ M ) and I U ( λ M ) as

I ( λ M ) = { i | g i ( l i ) < − ∑ j = 1 M λ j c i , j < g i ( u i ) , i = 1 , ⋯ , N } , (9)

I L ( λ M ) = { i | − ∑ j = 1 M λ j c i , j ≤ g i ( l i ) , i = 1 , ⋯ , N } , (10)

I U ( λ M ) = { i | − ∑ j = 1 M λ j c i , j ≥ g i ( u i ) , i = 1 , ⋯ , N } . (11)

Let m = | J ( λ M ) | , n = | I ( λ M ) | , n L = | I L ( λ M ) | , and n U = | I U ( λ M ) | . For the given λ M > 0 , without changing the orders of indices j and i, we re-index the constraints in the active constraint set J ( λ M ) as j = 1 , ⋯ , m , and we re-index the variables in the non-bound variable set I ( λ M ) as i = 1 , ⋯ , n , and re-index the variables in I L ( λ M ) and I U ( λ M ) as i = 1 , ⋯ , n L , and i = 1 , ⋯ , n U , respectively. As a result, constraint M in the original problem is re-indexed as constraint m, and λ M is also restated as λ m .
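The partition of Equations (9)-(11) is easy to compute for a given multiplier vector. A sketch, again assuming quadratic objectives so that g i ( x ) = 2 a i ( x − b i ) (the function name and data are ours):

```python
# Sketch of the index sets in Equations (9)-(11): for a given multiplier vector
# lam, variable i is non-bound, at its lower bound, or at its upper bound,
# depending on where -sum_j lam_j*c_ij falls relative to g_i(l_i) and g_i(u_i).
# Illustrative quadratic case: g_i(x) = 2*a_i*(x - b_i).

def classify(lam, a, b, c, l, u):
    g = lambda i, x: 2 * a[i] * (x - b[i])
    I, IL, IU = [], [], []
    for i in range(len(a)):
        y = -sum(lam[j] * c[i][j] for j in range(len(lam)))
        if y <= g(i, l[i]):
            IL.append(i)          # Equation (10): x_i = l_i
        elif y >= g(i, u[i]):
            IU.append(i)          # Equation (11): x_i = u_i
        else:
            I.append(i)           # Equation (9): l_i < x_i < u_i

    return I, IL, IU

# One variable whose unconstrained minimizer b = 5 exceeds its upper bound 4:
print(classify([0.0], [1.0], [5.0], [[1.0]], [0.0], [4.0]))  # -> ([], [], [0])
```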

We define G j ( λ 1 , ⋯ , λ m ) ≡ ∑ i = 1 N c i , j x i ( λ ) − C j = 0 , j = 1 , ⋯ , m − 1 , and substitute

x i ( λ ) = min { max { h i ( − ∑ s = 1 m λ s c i , s ) , l i } , u i }

into G j ( λ 1 , ⋯ , λ m ) ; then we have

G j ( λ 1 , ⋯ , λ m ) ≡ ∑ i = 1 n c i , j h i ( − ∑ s = 1 m λ s c i , s ) − ( C j − ∑ i = 1 n L c i , j l i − ∑ i = 1 n U c i , j u i ) = 0 , (12)

Taking the derivative of Equation (12) with respect to λ m , we get

d G j ( λ 1 , ⋯ , λ m ) / d λ m = − ∑ i = 1 n ( c i , j / k i ( x i ( λ 1 , ⋯ , λ m ) ) ) ∑ s = 1 m c i , s ( d λ s / d λ m ) = − ∑ s = 1 m ∑ i = 1 n ( c i , j c i , s / k i ( x i ( λ 1 , ⋯ , λ m ) ) ) ( d λ s / d λ m ) = 0 , j = 1 , ⋯ , m − 1 , (13)

where k i ( x i ) = d g i ( x i ) / d x i .

Since f i ( x i ) , i = 1 , ⋯ , n are differentiable and convex, g i ( x i ) is increasing and k i ( x i ( λ 1 , ⋯ , λ m ) ) > 0 . Note that f ^ i ( λ M ) = f i + λ M c i , M x i has the same structure as f i ( x i ) . So we define

ρ i = 1 / k i ( x i ( λ 1 , ⋯ , λ m ) ) > 0 , i = 1 , ⋯ , n ,

and a j s = ∑ i = 1 n ρ i c i , j c i , s , j , s = 1 , ⋯ , m ; then Equation (13) can be rewritten in matrix form:

\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ a_{21} & a_{22} & \cdots & a_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ a_{(m-1)1} & a_{(m-1)2} & \cdots & a_{(m-1)m} \end{pmatrix} \begin{pmatrix} \mathrm{d}\lambda_{1}/\mathrm{d}\lambda_{m} \\ \vdots \\ \mathrm{d}\lambda_{m-1}/\mathrm{d}\lambda_{m} \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ \vdots \\ 0 \end{pmatrix} . (14)

In order to solve d λ j d λ m , j = 1 , ⋯ , m − 1 , from Equation (14), we further define

H_{m} = \begin{vmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ a_{21} & a_{22} & \cdots & a_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mm} \end{vmatrix} , (15)

and denote by H j ( m − 1 ) , j = 1 , ⋯ , m − 1 the ( m − 1 ) -dimensional determinant in which the j-th column of H m − 1 is replaced by ( a 1 m , a 2 m , ⋯ , a ( m − 1 ) m ) T . We have the following formula from Equation (14) and Equation (15):

d λ j d λ m = − H j ( m − 1 ) H m − 1 , j = 1 , ⋯ , m − 1 , m > 1 . (16)
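Equation (16) is just Cramer's rule applied to the linear system (14). A small self-contained sketch (the determinant routine and names are ours; a pure-Python Laplace expansion is fine for the small m considered here):

```python
# Sketch of Equation (16): solve the linear system (14) for dlam_j/dlam_m
# via Cramer's rule on the determinants H_{m-1} and H_j(m-1).

def det(A):
    """Determinant by Laplace expansion (adequate for small m)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** k * A[0][k] * det([row[:k] + row[k + 1:] for row in A[1:]])
               for k in range(n))

def dlam_dlam_m(a):
    """a is the m x m matrix [a_js]; returns dlam_j/dlam_m for j = 1..m-1."""
    m = len(a)
    H_m1 = det([row[:m - 1] for row in a[:m - 1]])       # H_{m-1}
    out = []
    for j in range(m - 1):
        Aj = [row[:m - 1] for row in a[:m - 1]]
        for r in range(m - 1):
            Aj[r][j] = a[r][m - 1]   # replace column j by (a_1m,...,a_(m-1)m)^T
        out.append(-det(Aj) / H_m1)  # Equation (16)
    return out

# m = 2 example: a11*dlam1/dlam2 + a12 = 0, so dlam1/dlam2 = -a12/a11
print(dlam_dlam_m([[2.0, 1.0], [1.0, 2.0]]))  # -> [-0.5]
```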

Notice that the above results have similar structures as the results in Zhang [

d ∑ i = 1 n c i , m x i ( λ m ) d λ m = − ∑ i = 1 n ρ i c i , m ( ∑ j = 1 m − 1 c i , j d λ j d λ m + c i , m ) = − ∑ i = 1 n ρ i c i , m ( − ∑ j = 1 m − 1 c i , j H j ( m − 1 ) H m − 1 + c i , m ) = − H m H m − 1 < 0 . (17)

Since constraint M in the original problem is re-indexed as constraint m, and λ M is restated as λ m , the sum ∑ i = 1 n c i , m x i ( λ m ) is equivalent to ∑ i = 1 N c i , M x i ( λ M ) in problem P with the original indices; thus ∑ i = 1 N c i , M x i ( λ M ) is decreasing in λ M .

Therefore, there are three possible cases: 1) When λ M = 0 , we get the optimal solution to problem P ( f , M ) by solving problem P ( f , M − 1 ) ; 2) If λ M > 0 and m = 1 , we obtain the optimal solution to problem P ( f , M ) by setting x i ( λ M ) = min { max { h i ( − λ M c i , M ) , l i } , u i } ; 3) When λ M > 0 and m > 1 , we can solve problem P ( f , M ) by studying problem P ( f ^ ( λ M ) , M − 1 ) , with f ^ i ( λ M ) = f i + λ M c i , M x i .

According to Proposition 3, we can find x * by searching for the optimal value of λ . Before presenting the solution method, we first study the bounds for λ . The lower bound for λ M is 0, and an upper bound is given in the following proposition.

Proposition 5. The upper bound of λ M is max ( 0 , max i = 1 , ⋯ , N { − g i ( l i ) / c i , M } ) .
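In code, the bound of Proposition 5 is a one-liner; here g_l stands for the values g i ( l i ) and c_M for the column ( c 1 , M , ⋯ , c N , M ) (the names are ours, for illustration):

```python
# Proposition 5 sketch: upper bound max(0, max_i{ -g_i(l_i) / c_{i,M} })
# for lambda_M, used to initialize the binary search interval.

def lambda_upper_bound(g_l, c_M):
    return max(0.0, max(-g / c for g, c in zip(g_l, c_M)))

print(lambda_upper_bound([-20.0, -4.0], [1.0, 2.0]))  # -> 20.0
```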

From Proposition 4, we get the optimal value of x * if the optimal solution x ( λ M ) to problem P ( f ^ ( λ M ) , M − 1 ) satisfies

λ M ( ∑ i = 1 N c i , M x i ( λ M ) − C M ) = 0 .

Since ∑ i = 1 N c i , M x i ( λ M ) is decreasing in λ M , the optimal solution can be found by applying the binary search over [ 0 , max ( 0 , max i = 1 , ⋯ , N { − g i ( l i ) / c i , M } ) ] . Since Problem P ( f ^ ( λ M ) , M − 1 ) has the same structure as problem P ( f , M ) , we can use a multi-tier binary search method to solve problem P. Main steps of the multi-tier binary search method are given in Algorithm 1.

Algorithm 1: SolveP ( f , M )

Step 1: If M = 0 , then let x i * = min { max { x ¯ i , l i } , u i } with x ¯ i given by Equation (8), stop;

Step 2: Let λ M L = 0 , λ M U = max ( 0 , max i = 1 , ⋯ , N { − g i ( l i ) / c i , M } ) ;

Step 3: Let λ M = ( λ M L + λ M U ) / 2 ;

Step 4: If λ M = 0 , then let x i * = SolveP ( f , M − 1 ) and λ M * = 0 , stop;

Step 5: If M = 1 , then let x i ( λ M ) = min { max { h i ( − λ M c i , M ) , l i } , u i } ;

If M > 1 , then let x i ( λ M ) = SolveP ( f ^ ( λ M ) , M − 1 )

Step 6: If ∑ i = 1 N c i , M x i ( λ M ) > C M , then let λ M L = λ M , go to Step 3;

If ∑ i = 1 N c i , M x i ( λ M ) < C M , then let λ M U = λ M , go to Step 3;

Step 7: Let x i * = x i ( λ M ) and λ M * = λ M , stop.
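The recursion can be written out as a runnable sketch. We specialize to quadratic objectives f i ( x i ) = a i ( x i − b i ) 2 so that h i is available in closed form, carry the dualized terms λ j c i , j as a linear shift of g i , and, as a small practical deviation from the listing above, test the λ M = 0 case once before entering the bisection loop; EPS plays the role of the error target ε, and all names are ours:

```python
# Sketch of Algorithm 1 (multi-tier binary search) for quadratic objectives
# f_i(x) = a_i*(x - b_i)**2, so g_i(x) = 2*a_i*(x - b_i), h_i(y) = b_i + y/(2*a_i).
# shift[i] accumulates sum of lam_j*c[i][j] over already-dualized constraints,
# i.e. the linear term of the adjusted objective f^_i.

EPS = 1e-10  # binary-search error target (the eps of the complexity analysis)

def solve_p(a, b, c, C, l, u, M, shift=None):
    n = len(a)
    shift = [0.0] * n if shift is None else shift

    def clip_stationary(sh):
        # Solve g_i(x) + sh_i = 0 coordinate-wise, clipped to [l_i, u_i]
        return [min(max(b[i] - sh[i] / (2 * a[i]), l[i]), u[i]) for i in range(n)]

    if M == 0:                    # Step 1: no knapsack constraints left
        return clip_stationary(shift)

    def x_of(lam):
        new_shift = [shift[i] + lam * c[i][M - 1] for i in range(n)]
        if M == 1:
            return clip_stationary(new_shift)               # Step 5, M = 1
        return solve_p(a, b, c, C, l, u, M - 1, new_shift)  # recursive call

    def use(x):
        return sum(c[i][M - 1] * x[i] for i in range(n))

    x0 = x_of(0.0)
    if use(x0) <= C[M - 1]:       # constraint M not binding: lam_M* = 0 (Step 4)
        return x0

    # Steps 2-7: bisection on lam_M over [0, upper bound of Proposition 5]
    g_l = [2 * a[i] * (l[i] - b[i]) + shift[i] for i in range(n)]
    lo, hi = 0.0, max(0.0, max(-g_l[i] / c[i][M - 1] for i in range(n)))
    while hi - lo > EPS:
        lam = (lo + hi) / 2
        if use(x_of(lam)) > C[M - 1]:
            lo = lam              # resource still over-used: raise the multiplier
        else:
            hi = lam
    return x_of((lo + hi) / 2)

# Tiny sanity check: two symmetric variables, one binding constraint x1 + x2 <= 10
x = solve_p([1, 1], [10, 10], [[1], [1]], [10], [0, 0], [20, 20], 1)
print([round(v, 6) for v in x])  # -> [5.0, 5.0]
```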

In the algorithm, Step 1 solves the unconstrained problem with bounded variables to obtain x * . If the constraints are active, we apply the binary search procedure (Steps 2 - 7) over the interval [ λ M L , λ M U ] to determine λ M * . The binary search terminates when either λ M = 0 or ∑ i = 1 N c i , M x i ( λ M ) = C M . If the constraint ∑ i = 1 N c i , M x i ( λ M ) ≤ C M is not binding, the iteration ends in Step 4 with λ M = 0 , and the optimal solution x i * is obtained by solving problem P ( f , M − 1 ) . If the constraint is active, the procedure stops at Step 7 with ∑ i = 1 N c i , M x i ( λ M ) = C M . Step 5 derives x i ( λ M ) for the given λ M > 0 : if M = 1 , problem P ( f ^ ( λ M ) , M − 1 ) has no knapsack constraint, and hence x i ( λ M ) = min { max { h i ( − λ M c i , M ) , l i } , u i } ; if M > 1 , problem P ( f ^ ( λ M ) , M − 1 ) , with f ^ i ( λ M ) = f i + λ M c i , M x i , has the same structure as problem P ( f , M ) , and the algorithm calls itself recursively to solve it.

The algorithm is recursive, with M nested tiers of binary search. The computational complexity of the M-tier binary search procedure is O ( ( log 2 ( 1 / ε ) ) M ) , where ε is the error target for the binary search. The computational complexity of the last recursive step is O ( N ) . Therefore, the proposed algorithm has computational complexity O ( ( log 2 ( 1 / ε ) ) M N ) , which is polynomial in the number of decision variables N.
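To make the bound concrete: each tier of bisection to accuracy ε contributes roughly log 2 ( 1 / ε ) iterations, the M tiers multiply, and the innermost evaluation costs O ( N ) . A throwaway helper (our own, not from the paper) for this back-of-the-envelope count:

```python
import math

# Rough operation count implied by O((log2(1/eps))**M * N): each of the M
# nested bisection tiers contributes a log2(1/eps) factor, and the innermost
# evaluation costs O(N).

def complexity_estimate(eps, M, N):
    return math.ceil(math.log2(1 / eps)) ** M * N

print(complexity_estimate(2 ** -10, 1, 1))  # -> 10
```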

The solution method developed in this paper can solve continuous nonlinear multidimensional knapsack problems with general structure, so many application problems with different objective functions summarized in Zhang and Hua, once extended to multiple constraints, can illustrate the application of our method [

In our numerical study, we first show the application of our method using two examples: a quadratic multidimensional knapsack (QMK) problem and the production planning problem presented in Bretthauer and Shetty [

The first illustrative example is a separable quadratic knapsack problem. We set the objective function as f i ( x i ) = a i ( x i − b i ) 2 , a i > 0 , i = 1 , ⋯ , N . It has two resource constraints: C_{1} = 12,000 and C_{2} = 10,000.

In the second example, we solve the production planning problem in Bretthauer and Shetty [

f i ( x i ) = h i + d i x i + e i / x i ,

i = 1 , ⋯ , N . There are three resource constraints: C_{1} = 200, C_{2} = 300, and C_{3} = 500. We use the same parameters used in Bretthauer and Shetty [

| i | a i | b i | c i , 1 | c i , 2 | l i | u i | x i * |
|---|---|---|---|---|---|---|---|
| 1 | 12 | 20 | 50 | 100 | 6.7 | 10 | 10.0000 |
| 2 | 15 | 18 | 50 | 80 | 1 | 20 | 13.4020 |
| 3 | 20 | 8 | 50 | 100 | 2 | 30 | 3.6894 |
| 4 | 10 | 28 | 150 | 100 | 2.5 | 40 | 19.3787 |
| 5 | 10 | 10 | 100 | 80 | 5 | 5.6 | 5.0000 |
| 6 | 20 | 30 | 100 | 80 | 3 | 20 | 20.0000 |
| 7 | 18 | 25 | 100 | 100 | 8 | 25 | 20.2104 |
| 8 | 15 | 30 | 100 | 88 | 3 | 20 | 20.0000 |
| λ * | 0.0092 | 1.7243 | | | | | |
| f * | 6795 | | | | | | |

information for this example is listed in

In this subsection, we present two numerical experiments to show the effectiveness of our method for solving problems with different scale and objective functions. In the first experiment, parameters of the QMK problems are all randomly generated. We use the notation z ~ U ( α , β ) to denote that z is uniformly generated over [ α , β ] . The parameters of QMK instances are generated as follows: a i ~ U ( 1 , 2 ) , b i ~ U ( 5 , 10 ) , c i , j ~ U ( 1 , 10 ) , l i ~ U ( 5 , 15 ) , u i ~ U ( 20 , 30 ) and C j ~ N × U ( 100000 , 200000 ) , for i = 1 , ⋯ , N ; j = 1 , ⋯ , M .
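The generation scheme can be written down directly; the sketch below follows the distributions stated above (the function and variable names are ours):

```python
import random

# Random QMK instance generator following the distributions in the text:
# a_i ~ U(1,2), b_i ~ U(5,10), c_ij ~ U(1,10), l_i ~ U(5,15), u_i ~ U(20,30),
# C_j ~ N * U(100000, 200000).

def generate_qmk(N, M, seed=0):
    rng = random.Random(seed)
    a = [rng.uniform(1, 2) for _ in range(N)]
    b = [rng.uniform(5, 10) for _ in range(N)]
    c = [[rng.uniform(1, 10) for _ in range(M)] for _ in range(N)]
    l = [rng.uniform(5, 15) for _ in range(N)]     # l_i < u_i by construction
    u = [rng.uniform(20, 30) for _ in range(N)]
    C = [N * rng.uniform(100000, 200000) for _ in range(M)]
    return a, b, c, l, u, C
```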

In this experiment, we consider four problem sizes: ( N , M ) = ( 10 , 4 ) , ( 100 , 2 ) , ( 100 , 3 ) , and ( 1000 , 2 ) . For each problem size, 50 test instances are randomly generated. The statistical results on the number of iterations and the computation time (in seconds) are reported in

In the second experiment, we solve the production planning problem with randomly generated parameters. The parameters of the instances are generated as follows: d i ~ U ( 30 , 50 ) , e i ~ U ( 100 , 200 ) , c i , j ~ U ( 10 , 50 ) , l i ~ U ( 1 , 5 ) , u i ~ U ( 20 , 30 ) and C j ~ N × U ( 100000 , 200000 ) , for i = 1 , ⋯ , N ; j = 1 , ⋯ , M .

In this experiment, we again consider the problem sizes ( N , M ) = ( 10 , 4 ) , ( 100 , 2 ) , ( 100 , 3 ) , and ( 1000 , 2 ) . For each problem size, we randomly generated 50 test instances. The statistical results on the number of iterations and the computation time (in seconds) are presented in

From

| i | h i | d i | e i | c i , 1 | c i , 2 | c i , 3 | l i | u i | x i * |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 10 | 30.2 | 83 | 10 | 1 | 11 | 1 | 20 | 1.6578 |
| 2 | 20 | 5 | 15 | 12 | 2 | 2 | 5 | 20 | 5.0000 |
| 3 | 14 | 42.5 | 63 | 1 | 2 | 4 | 2 | 25 | 2.0000 |
| 4 | 13 | 48 | 81 | 5 | 5 | 5 | 4.4 | 22 | 4.4000 |
| 5 | 4 | 42 | 65 | 3 | 1 | 6 | 2.3 | 25 | 2.3000 |
| 6 | 5 | 36 | 75 | 8 | 2 | 3 | 2.2 | 24 | 2.2000 |
| 7 | 13 | 41.4 | 94 | 5 | 2 | 2 | 1 | 24 | 1.5068 |
| 8 | 27 | 22.5 | 20 | 1 | 3 | 3 | 3.5 | 22 | 3.5000 |
| 9 | 40 | 31.6 | 12 | 2 | 5 | 5 | 1.6 | 30 | 1.6000 |
| 10 | 23 | 44 | 55.5 | 3 | 8 | 8 | 1.9 | 32 | 1.9000 |
| λ * | 0.0051 | 0.0064 | 0.0064 | | | | | | |
| f * | 1261.5 | | | | | | | | |

| | # of iterations | | | | Computation time (s) | | | |
|---|---|---|---|---|---|---|---|---|
| N, M | 10, 4 | 100, 2 | 100, 3 | 1000, 2 | 10, 4 | 100, 2 | 100, 3 | 1000, 2 |
| Mean | 15.30 | 16.82 | 16.92 | 17.86 | 1.1713 | 0.0073 | 0.1044 | 0.0225 |
| Std. dev. | 1.0926 | 0.7475 | 0.8041 | 0.3505 | 0.2811 | 0.0026 | 0.0365 | 0.0083 |
| 95% C.I. lower | 13 | 15 | 15 | 17 | 0.6012 | 0.0028 | 0.0405 | 0.0082 |
| 95% C.I. upper | 17 | 18 | 18 | 18 | 1.7729 | 0.0118 | 0.1575 | 0.0381 |

| | # of iterations | | | | Computation time (s) | | | |
|---|---|---|---|---|---|---|---|---|
| N, M | 10, 4 | 100, 2 | 100, 3 | 1000, 2 | 10, 4 | 100, 2 | 100, 3 | 1000, 2 |
| Mean | 22.46 | 23.82 | 23.92 | 24.02 | 9.2210 | 0.0292 | 0.6762 | 0.1025 |
| Std. dev. | 1.4458 | 0.3881 | 0.2740 | 0.1414 | 2.3702 | 0.0120 | 0.2538 | 0.0456 |
| 95% C.I. lower | 18 | 23 | 23 | 24 | 2.5809 | 0.0113 | 0.2713 | 0.0457 |
| 95% C.I. upper | 24 | 24 | 24 | 25 | 13.5177 | 0.0456 | 1.0566 | 0.1750 |

computation time is more sensitive to the number of resource constraints than to the number of variables. Since application problems often have many more variables than knapsack constraints, our algorithm is useful in practice.

In this paper, we study a class of continuous separable nonlinear multidimensional knapsack problems. By analyzing the structural properties of the optimal solution, we develop a multi-tier binary solution method. The proposed method has the following advantages: 1) it is applicable to nonlinear multidimensional knapsack problems with general structure; 2) its computational complexity is polynomial in the number of variables.

This research can be extended in several ways. One is to study non-separable multidimensional knapsack problems using a similar idea. Another is to develop exact solution methods or heuristics for integer multidimensional knapsack problems based on our method. Finally, the idea used in this study can be extended to other complex optimization problems with multiple constraints.

This work is supported by the National Natural Science Foundation of China (Grant No. 71672199).

Zhang, B., Lin, Z. and Wang, Y. (2018) A Class of Continuous Separable Nonlinear Multidimensional Knapsack Problems. American Journal of Operations Research, 8, 266-280. https://doi.org/10.4236/ajor.2018.84015

A.1 Proof of Proposition 2

By definition, 0 ≤ l i < u i for all i = 1 , ⋯ , N . The optimal solution to problem PR minimizes Equation (1) subject to Equation (3). If l i ≤ x ¯ i ≤ u i , the bound constraints are inactive, so we have x ^ i = x ¯ i . Since g i ( x i ) is increasing in x i and g i ( x ¯ i ) = 0 , we have g i ( x i ) ≥ 0 if x i > x ¯ i . If x ¯ i < l i < u i , then g i ( x i ) ≥ 0 for l i ≤ x i ≤ u i ; thus for any x i ∈ [ l i , u i ] , we have f i ( l i ) ≤ f i ( x i ) , and x ^ i = l i . If l i < u i < x ¯ i , we have x ^ i = u i ; the proof is similar to the case x ¯ i < l i .

A.2 Proof of Proposition 3

1) If g i ( l i ) ≤ − ∑ j = 1 M λ j c i , j ≤ g i ( u i ) , then we have l i ≤ h i ( − ∑ j = 1 M λ j c i , j ) ≤ u i , and w i = v i = 0 , which implies x i ( λ ) = h i ( − ∑ j = 1 M λ j c i , j ) . If − ∑ j = 1 M λ j c i , j < g i ( l i ) , then we have g i ( x i ) + ∑ j = 1 M λ j c i , j ≥ g i ( l i ) + ∑ j = 1 M λ j c i , j > 0 , and hence w i > 0 , x i ( λ ) = l i .

If − ∑ j = 1 M λ j c i , j > g i ( u i ) , we have g i ( x i ) + ∑ j = 1 M λ j c i , j ≤ g i ( u i ) + ∑ j = 1 M λ j c i , j < 0 , which means v i > 0 , and x i ( λ ) = u i . Therefore, we have

x i ( λ ) =
  l i , if − ∑ j = 1 M λ j c i , j < g i ( l i ) ;
  h i ( − ∑ j = 1 M λ j c i , j ) , if g i ( l i ) ≤ − ∑ j = 1 M λ j c i , j ≤ g i ( u i ) ;
  u i , if − ∑ j = 1 M λ j c i , j > g i ( u i ) . (A1)

2) λ j = 0 or ∑ i = 1 N c i , j x i ( λ ) = C j implies

λ j ( ∑ i = 1 N c i , j x i ( λ ) − C j ) = 0 , j = 1 , ⋯ , M .

Because x ( λ ) satisfies Equation (5) and Equation (6), x ( λ ) will satisfy all KKT conditions. Therefore, x * = x ( λ ) if ( x ( λ ) , λ ) satisfies λ j = 0 or ∑ i = 1 N c i , j x i = C j , j = 1 , ⋯ , M .

A.3 Proof of Proposition 4

1) λ M = 0 or ∑ i = 1 N c i , M x i ( λ M ) = C M implies λ M ( ∑ i = 1 N c i , M x i − C M ) = 0 . Since ( x ( λ M ) , λ M ) satisfies Equation (5) and Equation (6), it will satisfy all KKT conditions. Therefore, x * = x ( λ M ) if ( x ( λ M ) , λ M ) satisfies λ M = 0 or ∑ i = 1 N c i , M x i ( λ M ) = C M .

2) KKT conditions for problem P ( f ^ ( λ M ) , M − 1 ) are

d f ^ i ( x i ) d x i + ∑ j = 1 M − 1 λ j c i , j − w i + v i = 0 , i = 1 , ⋯ , N , (A2)

∑ i = 1 N w i ( x i − l i ) + ∑ i = 1 N v i ( x i − u i ) = 0 , (A3)

λ j ( ∑ i = 1 N c i , j x i − C j ) = 0 , j = 1 , ⋯ , M − 1 . (A4)

Notice that f ^ i ( x i ) is a parameter-adjusted version of f i ( x i ) , with f ^ i ( λ M ) = f i + λ M c i , M x i . The conditions in Equations (A2)-(A4) are the same as the KKT conditions given in Equations (5)-(7) without λ M ( ∑ i = 1 N c i , M x i − C M ) = 0 . Since x ( λ M ) is the solution of the KKT conditions in Equations (5)-(7) without λ M ( ∑ i = 1 N c i , M x i − C M ) = 0 , it must be the optimal solution to problem P ( f ^ ( λ M ) , M − 1 ) .

A.4 Proof of Proposition 5

Let λ ¯ M = max ( 0 , max i = 1 , ⋯ , N { − g i ( l i ) / c i , M } ) . If λ M * > λ ¯ M , then we have λ M * c i , M > − g i ( l i ) , i = 1 , ⋯ , N . From Equation (5), we have

w i * = g i ( x i * ) + ∑ j = 1 M λ j * c i , j + v i * ≥ g i ( l i ) + λ M * c i , M + v i * > 0 , i = 1 , ⋯ , N . (A5)

Since w i * > 0 , from Equation (6), we know x i * = l i and v i * = 0 . Thus, we have

λ M * ( ∑ i = 1 N c i , M x i * − C M ) = λ M * ( ∑ i = 1 N c i , M l i − C M ) ≠ 0 . (A6)

Equation (A6) violates the complementary slackness condition λ M ( ∑ i = 1 N c i , M x i − C M ) = 0 in Equation (7). Therefore, we must have λ M * ≤ λ ¯ M .