
In this thesis, we reformulate the original non-linear model for the LMRP as a linear mixed-integer program. First, we introduce a set of parameters that represent the non-linear part of the cost incurred at a facility for each possible number of assigned customers, together with a new set of decision variables indicating how many customers are assigned to each facility. The algorithms are tested on problems with 5 to 500 potential facilities with randomly generated locations, and the new method is then validated on actual data. Our work was motivated by the modeling approach used in the Maximum Expected Covering Location Problem (MEXCLP). We compare the new method against the Lagrangian relaxation method for solving the LMRP under the assumption of a constant customer demand rate and equal standard deviation of daily demand.

Our work was motivated by the modeling approach used in the Maximum Expected Covering Location Problem (MEXCLP). The MEXCLP was introduced by Mark S. Daskin in 1983 [

This model is a kind of covering problem: it decides the number of vehicles at each location in order to maximize the expected number of demands that can be covered, given that vehicles may be unavailable (in use). The model assumes that there is an equal probability that a vehicle is busy at any location. Because the objective function is the expected number of covered demands, the decision variables that choose the number of vehicles at each location appear in an exponential term. This makes the objective function non-linear, just like the LMRP. Daskin introduces a set of parameters to represent the increase in the expected coverage for each additional vehicle, as well as a set of binary decision variables indicating whether a customer is covered a specific number of times. By using the sum of all the benefits of adding a new vehicle to represent the expected coverage, he converts the problem into one that is linear and easy to solve. We apply the same idea to convert the LMRP into a linear mixed-integer programming problem and compare it with the Lagrangian method of Mark S. Daskin, Collette R. Coullard, and Zuo-Jun Max Shen to see whether it yields a more efficient method [

As we mentioned in the Introduction chapter, our approach for solving concave binary minimization problems is inspired by a reformulation strategy that is sometimes used to solve other binary optimization problems in which the objective function contains a non-linear function of the sum of the binary variables. The basic idea is to introduce auxiliary parameters and binary variables, use their product to represent the non-linear part, and thereby linearize the objective function.

One model that uses this approach is the maximum expected covering location problem (MEX-CLP) by Mark S. Daskin in 1983. The MEXCLP chooses locations of facilities that can sometimes be unavailable (e.g., because the ambulance located there is busy on another call). A demand node is covered by a facility if it is within a certain coverage radius of it. The goal of the MEXCLP is to locate at most P facilities to maximize the total expected coverage of the demand nodes.

The MEXCLP assumes that the probability that a facility is unavailable at any time is given by q. It also assumes that facility unavailabilities are independent, so if there are n facilities that cover a demand node, then the probability that all of them are unavailable is q^n. Since the number of covering facilities, n, is not known a priori, we express it in terms of the decision variables as ∑_{j∈J} a_{ij}X_j, where a_{ij} is a parameter that equals 1 if facility j covers demand node i and 0 otherwise. Then the model can be formulated as follows:

Parameters:

J set of potential facilities, indexed by j,

I set of customer nodes, indexed by i,

q the probability that a facility is unavailable at any time,

P the maximum number of facilities that can be chosen,

h_i the demand generated at node i,

a_{ij} = 1 if a facility at j can cover demands at customer node i, 0 otherwise

Decision Variables:

X_j the number of facilities to be built at j

Then the model can be formulated as follows:

Maximize ∑_{i∈I} h_i (1 − q^{∑_{j∈J} a_{ij}X_j})

subject to ∑_{j∈J} X_j ≤ P

X_j ∈ {0, 1}, ∀j ∈ J

In the original formulation, the probability that the demand of customer node i is covered is 1 − q^{∑_{j∈J} a_{ij}X_j}, which is a non-linear function of X_j. Instead of computing this probability directly, Daskin proposes adding up the benefits of each new facility. We now summarize his approach.

The marginal benefit of the (n+1)st covering facility is (1 − q^{n+1}) − (1 − q^n) = q^n(1 − q). We introduce a new variable Z_{ik} to record the number of times a node is covered: Z_{ik} = 1 if demand node i is covered k or more times, and 0 otherwise. The model can then be formulated as follows.

Maximize ∑_{k=1}^{P} ∑_{i∈I} h_i q^{k−1}(1 − q) Z_{ik}

subject to ∑_{j∈J} a_{ij}X_j − ∑_{k=1}^{P} Z_{ik} ≥ 0, ∀i ∈ I

X_j ∈ {0, 1}, ∀j ∈ J

Z_{ik} ∈ {0, 1}, ∀i ∈ I; k = 1, 2, ⋯, P.

In this model, we add up the benefits to replace the non-linear part of the objective function, which gives us a linear formulation. In what follows, we propose a similar method to reformulate the LMRP as a linear model.
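The correctness of this benefit-summing trick rests on a telescoping identity: the marginal benefits q^{k−1}(1 − q) of the first n covering facilities sum to exactly 1 − q^n. The short script below (with an illustrative busy probability, not a value from the thesis) verifies this numerically:

```python
# Numerical check of the identity behind Daskin's linearization:
# the expected coverage 1 - q**n equals the sum of the marginal
# benefits q**(k-1) * (1 - q) of the 1st, 2nd, ..., n-th covering
# facility. The value of q is illustrative only.

def expected_coverage(n: int, q: float) -> float:
    """Probability that at least one of n covering facilities is available."""
    return 1.0 - q ** n

def summed_benefits(n: int, q: float) -> float:
    """Telescoping sum of per-facility marginal coverage benefits."""
    return sum(q ** (k - 1) * (1.0 - q) for k in range(1, n + 1))

if __name__ == "__main__":
    q = 0.3  # assumed busy probability, for illustration only
    for n in range(0, 6):
        assert abs(expected_coverage(n, q) - summed_benefits(n, q)) < 1e-12
    print("identity verified for n = 0..5")
```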

The LMRP model is an extension of the UFLP that considers uncertain demand. Besides the fixed cost of opening locations and the variable transportation cost, it also includes the costs of cycle stock and safety stock. As a result, the LMRP is structured much like the UFLP model, with two extra non-linear terms in the objective function. Despite its concave objective function, the LMRP can be solved quite efficiently by Lagrangian relaxation, just like the UFLP, assuming that the ratio of the variance to the mean of daily demand is constant across retailers. We use the following notation:

Parameters:

I set of retailers, indexed by i,

J set of candidate DC sites, indexed by j,

u_i mean daily demand of retailer i, for each i ∈ I,

σ_i² variance of daily demand of retailer i, for each i ∈ I,

f_j fixed (daily) cost of locating a DC at candidate site j, for each j ∈ J,

K_j fixed cost for DC j to place an order from the supplier, including fixed components of both ordering and transportation costs, for each j ∈ J,

d_{ij} cost per unit to ship between retailer i and candidate DC site j, for each i ∈ I and j ∈ J,

θ a constant parameter that captures the safety stock costs at candidate sites.

Decision Variables:

X_j = 1 if we locate at candidate site j, 0 if not

Y_{ij} = 1 if demands at retailer i are assigned to a DC at candidate site j, 0 if not

Then the model is formulated as follows.

Minimize ∑_{j∈J} { f_j X_j + ∑_{i∈I} d_{ij}Y_{ij} + K_j √(∑_{i∈I} u_i Y_{ij}) + θ √(∑_{i∈I} σ_i² Y_{ij}) }

subject to ∑_{j∈J} Y_{ij} = 1, ∀i ∈ I

Y_{ij} ≤ X_j, ∀i ∈ I, ∀j ∈ J

X_j ∈ {0, 1}, ∀j ∈ J

Y_{ij} ∈ {0, 1}, ∀i ∈ I, ∀j ∈ J

To make the objective function linear, we introduce a new parameter γ_{jk} to represent the combined cycle and safety stock cost when k retailers are assigned to DC j, that is

γ_{jk} = K_j √(ku) + θ √(kσ²)

Also we introduce a new decision variable

Z_{jk} = 1 if exactly k retailers are assigned to DC j, 0 if not

To associate Z_{jk} with its meaning using linear constraints, we add the constraints

∑_{k=0}^{|I|} k Z_{jk} = ∑_{i∈I} Y_{ij}, ∀j ∈ J

∑_{k=0}^{|I|} Z_{jk} = 1, ∀j ∈ J

The second constraint says that only one of the Z_{jk} can equal 1 for each j, and the first ensures that the 1 appears at k = ∑_{i∈I} Y_{ij}, which is exactly how we defined Z_{jk}.
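As a quick sanity check of this encoding, the following brute-force script (a toy with a single DC and four retailers; the sizes and names are ours, not from the model) confirms that for every assignment vector Y the two linking constraints admit exactly one feasible Z, with the 1 in position k = ∑_{i∈I} Y_{ij}:

```python
# Brute-force check that the two linking constraints uniquely pin down
# Z for any fixed assignment to one DC: Z[k] = 1 exactly when k retailers
# are assigned. Toy size; illustrative only.
from itertools import product

def feasible_z(y: tuple) -> list:
    """All Z in {0,1}^{n+1} with sum(k*Z[k]) == sum(y) and sum(Z) == 1."""
    n = len(y)
    out = []
    for z in product((0, 1), repeat=n + 1):
        if sum(z) == 1 and sum(k * zk for k, zk in enumerate(z)) == sum(y):
            out.append(z)
    return out

if __name__ == "__main__":
    n = 4  # four retailers, single candidate DC
    for y in product((0, 1), repeat=n):
        zs = feasible_z(y)
        # exactly one feasible Z, with its 1 at index sum(y)
        assert len(zs) == 1 and zs[0][sum(y)] == 1
    print("linking constraints uniquely encode the assignment count")
```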

So the linear model is:

Minimize ∑_{j∈J} { f_j X_j + ∑_{i∈I} d_{ij}Y_{ij} + ∑_{k=0}^{|I|} γ_{jk}Z_{jk} } (2.1)

subject to ∑_{j∈J} Y_{ij} = 1, ∀i ∈ I (2.2)

Y_{ij} ≤ X_j, ∀i ∈ I, ∀j ∈ J (2.3)

∑_{k=0}^{|I|} k Z_{jk} = ∑_{i∈I} Y_{ij}, ∀j ∈ J (2.4)

∑_{k=0}^{|I|} Z_{jk} = 1, ∀j ∈ J (2.5)

X_j ∈ {0, 1}, ∀j ∈ J (2.6)

Y_{ij} ∈ {0, 1}, ∀i ∈ I, ∀j ∈ J (2.7)

Z_{jk} ∈ {0, 1}, ∀j ∈ J, ∀k = 0, ⋯, |I| (2.8)

Comparing the two formulations, we see that although the second is linear, it has many more constraints than the original. On the other hand, it can be solved by an off-the-shelf MIP solver and does not require Lagrangian relaxation as the original LMRP does. So it is hard to say, just by looking at the models, which will have the shorter computation time. We test randomly generated examples and compare the solution times of the two methods in Chapter 4.
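To illustrate why the two formulations are interchangeable, the sketch below enumerates every assignment on a tiny made-up instance (constant u and σ²; all numbers are assumed for illustration) and checks that the concave objective and its γ_{jk}-linearization attain the same optimal cost:

```python
# Brute-force check, on a toy instance with constant u and sigma^2, that the
# gamma_{jk}-linearization has the same optimal cost as the original concave
# objective. All data are invented for illustration.
from itertools import product
from math import sqrt, isclose

I, J = range(3), range(2)                  # 3 retailers, 2 candidate DCs
f = [10.0, 12.0]                           # fixed location costs (assumed)
d = [[1.0, 4.0], [2.0, 2.0], [5.0, 1.0]]   # d[i][j] shipping costs (assumed)
K = [3.0, 2.0]                             # fixed ordering costs (assumed)
theta, u, sig2 = 1.5, 4.0, 2.0             # safety-stock weight, mean, variance

def nonlinear_cost(assign):
    """assign[i] = chosen DC j for retailer i; every DC used is opened."""
    total = 0.0
    for j in J:
        n = sum(1 for i in I if assign[i] == j)
        if n == 0:
            continue
        total += f[j] + sum(d[i][j] for i in I if assign[i] == j)
        total += K[j] * sqrt(n * u) + theta * sqrt(n * sig2)
    return total

def linearized_cost(assign):
    """Same assignment, but stock costs read off the gamma_{jk} table."""
    gamma = [[K[j] * sqrt(k * u) + theta * sqrt(k * sig2)
              for k in range(len(I) + 1)] for j in J]
    total = 0.0
    for j in J:
        n = sum(1 for i in I if assign[i] == j)
        if n == 0:
            continue
        total += f[j] + sum(d[i][j] for i in I if assign[i] == j) + gamma[j][n]
    return total

if __name__ == "__main__":
    best_nl = min(nonlinear_cost(a) for a in product(J, repeat=len(I)))
    best_li = min(linearized_cost(a) for a in product(J, repeat=len(I)))
    assert isclose(best_nl, best_li)
    print("both formulations agree on the toy instance")
```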

Similar to the UFLP, we solve the LMRP by relaxing the assignment constraints (2.2) to obtain the following Lagrangian sub-problem:

Minimize ∑_{j∈J} { f_j X_j + ∑_{i∈I} d_{ij}Y_{ij} + K_j √(∑_{i∈I} u_i Y_{ij}) + θ √(∑_{i∈I} σ_i² Y_{ij}) } + ∑_{i∈I} λ_i (1 − ∑_{j∈J} Y_{ij})

= Minimize ∑_{j∈J} { f_j X_j + ∑_{i∈I} (d_{ij} − λ_i) Y_{ij} + K_j √(∑_{i∈I} u_i Y_{ij}) + θ √(∑_{i∈I} σ_i² Y_{ij}) } + ∑_{i∈I} λ_i

subject to Y_{ij} ≤ X_j, ∀i ∈ I, ∀j ∈ J

X_j ∈ {0, 1}, ∀j ∈ J

Y_{ij} ∈ {0, 1}, ∀i ∈ I, ∀j ∈ J

Although the sub-problem is a concave integer minimization problem, it can be solved relatively efficiently using a sorting method developed by Mark S. Daskin, Collette R. Coullard, and Zuo-Jun Max Shen in 2003. The algorithm relies on the assumption that the ratio of the demand variance to the demand mean is a constant for all retailers; that is, for all i ∈ I, σ_i²/u_i = γ ≥ 0. We can then collapse the two square-root terms into one and apply the sorting algorithm to solve the resulting sub-problem.
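The structural idea for a single candidate DC can be sketched as follows: with b_i = d_{ij} − λ_i and a single coefficient c ≥ 0 for the collapsed square-root term, an optimal retailer set is a prefix of the order sorted by b_i/u_i, so it suffices to evaluate |I| + 1 prefixes instead of 2^{|I|} subsets. The data below are invented and the exhaustive comparison is only for verification; this is a sketch of the idea, not the authors' exact implementation:

```python
# Sketch of the prefix-enumeration ("sorting") idea for one candidate DC's
# Lagrangian subproblem: minimize sum_i b_i*Y_i + c*sqrt(sum_i u_i*Y_i),
# where b_i = d_ij - lambda_i and c >= 0 is the collapsed square-root
# coefficient under the constant-ratio assumption. Data are illustrative.
from itertools import combinations
from math import sqrt, isclose

def best_prefix(b, u, c):
    """Evaluate every prefix of the b_i/u_i-sorted order; return min cost."""
    order = sorted(range(len(b)), key=lambda i: b[i] / u[i])
    best, cost_b, cost_u = 0.0, 0.0, 0.0   # the empty set costs 0
    for i in order:
        cost_b += b[i]
        cost_u += u[i]
        best = min(best, cost_b + c * sqrt(cost_u))
    return best

def best_brute(b, u, c):
    """Exhaustive minimum over all retailer subsets (for verification only)."""
    n = len(b)
    return min(sum(b[i] for i in s) + c * sqrt(sum(u[i] for i in s))
               for r in range(n + 1) for s in combinations(range(n), r))

if __name__ == "__main__":
    b = [-5.0, -1.0, 2.0, -3.0]   # d_ij - lambda_i (assumed values)
    u = [2.0, 1.0, 4.0, 3.0]      # mean demands (assumed values)
    c = 1.2                       # collapsed coefficient, assumed
    assert isclose(best_prefix(b, u, c), best_brute(b, u, c))
    print("prefix rule matches brute force on the toy instance")
```

Evaluating the prefixes costs O(|I| log |I|) per DC for the sort, which is why the sub-problem remains tractable despite its concave objective.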

The optimal objective function value of the Lagrangian sub-problem gives us a lower bound on the original problem's optimal value; we then need an upper bound. There are many ways to find a feasible solution for the upper bound; in this thesis, we use a simple algorithm to generate the solution from the sub-problem result, shown in the appendix.

Finally, we iteratively update λ to shrink the gap between the lower and upper bounds. Our stopping condition in the computational tests in this thesis is that the number of iterations exceeds 500 or the gap is at most 5 percent of the upper bound. We impose no explicit CPU-time limit, since the iteration limit effectively bounds the running time.
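The multiplier update can be sketched with a standard Polyak-style subgradient step; the step-size rule below is our assumption for illustration, since the thesis does not spell one out at this point:

```python
# A minimal subgradient step for updating the multipliers lambda_i of the
# relaxed assignment constraints. The Polyak-style step-size rule is an
# assumption, not necessarily the rule used in the thesis.

def subgradient_step(lam, Y, ub, lb, alpha=2.0):
    """One update: lam_i += t * (1 - sum_j Y[i][j]), t = alpha*(ub-lb)/||g||^2."""
    g = [1 - sum(row) for row in Y]   # subgradient of the dualized constraints
    norm2 = sum(gi * gi for gi in g)
    if norm2 == 0:                    # all assignment constraints satisfied
        return lam
    t = alpha * (ub - lb) / norm2
    return [li + t * gi for li, gi in zip(lam, g)]

if __name__ == "__main__":
    lam = [0.0, 0.0, 0.0]
    # toy subproblem solution: retailer 0 assigned twice, retailer 2 unassigned
    Y = [[1, 1], [1, 0], [0, 0]]
    new = subgradient_step(lam, Y, ub=100.0, lb=90.0)
    # over-assigned retailers get a lower multiplier, unassigned a higher one
    assert new[0] < lam[0] and new[1] == lam[1] and new[2] > lam[2]
    print("multiplier update moves lambda in the expected directions")
```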

We implemented the Lagrangian method in C++ and the linearization method in AMPL with CPLEX version 12.4.0.0.

Data Scale | Lagrangian Method (s) | Linearized Method (s)
---|---|---
5 | 0.039 | 0.264
10 | 0.055 | 0.373
15 | 0.105 | 0.422
20 | 0.081 | 0.447
25 | 0.096 | 0.472
30 | 0.171 | 0.567
35 | 0.133 | 0.623
40 | 0.109 | 0.729
45 | 0.153 | 0.815
50 | 0.175 | 0.873
55 | 0.169 | 1.151
60 | 0.213 | 1.626
65 | 1.948 | 1.559
70 | 0.172 | 1.597
75 | 0.225 | 2.142
80 | 0.240 | 2.458
85 | 0.289 | 3.121
90 | 0.246 | 5.722
95 | 20715 | 5.215
100 | 0.373 | 4.965
150 | 0.726 | 31.408
200 | 7.645 | 300.122
250 | 10.753 | 193.101
300 | 2.540 | 650.172
350 | 3.750 | 397.260
400 | 27.234 | 166.028
450 | 59.588 | 397.333
500 | 71.281 | 364.518

The solution time grows more quickly with the number of potential facilities for the linearized method than it does for the Lagrangian method. So for larger-scale problems (which are more practical) the Lagrangian method will have better performance.

In our experiments, CPLEX gets stuck on one specific data set when solving the linearized problem: it takes over 90 seconds, while the other samples of the same scale need only 2.91 seconds on average. When we use the Lagrangian method on the same problem, it also stops because the number of iterations exceeds 500. The Lagrangian method fails to finish in a small number of iterations because the Lagrangian relaxation's optimal value cannot reach the original problem's optimal value and the gap stays above 5 percent; why the gaps are large for this data set, and why CPLEX also gets stuck on it, remains unclear.

The gap between the upper bound and the lower bound is not large; it is 4.3 percent on average. However, there is a significant increase in the gap as the data scale grows.

Figures 2-4 and

Data Scale | Lagrangian Method (s) | Linearized Method (s)
---|---|---
5 | 0.078 | 0.420
10 | 0.086 | 0.828
15 | 0.125 | 0.623
20 | 0.192 | 0.803
25 | 0.202 | 0.733
30 | 0.233 | 0.826
35 | 0.187 | 1.022
40 | 0.162 | 1.483
45 | 0.329 | 1.891
50 | 0.245 | 1.623
55 | 0.227 | 1.92
60 | 0.297 | 2.427
65 | 6.935 | 2.895
70 | 0.502 | 2.798
75 | 0.643 | 4.052
80 | 0.582 | 6.304
85 | 0.636 | 8.471
90 | 0.721 | 12.567
95 | 9.028 | 15.465
100 | 1.873 | 13.981
150 | 2.234 | 90.385
200 | 21.43 | 639.293
250 | 28.388 | 293.233
300 | 8.449 | 1243.54
350 | 14.284 | 1539.455
400 | 53.293 | 324.144
450 | 19.116 | 567.342
500 | 20.101 | 597.369

Parameter | Generated from distribution
---|---
 | Uniform [0,500]
 | Uniform [0,15]
 | Uniform [0,25]
 | Inverse Normal
 | Uniform [0,1]
 | Uniform [0,2]

Based on the results, we can see that the gap between the lower and upper bounds is acceptable, even in the cases that stopped at the 500-iteration limit. Additionally, when the scale is not large, the Lagrangian method achieves a tiny gap, say 0.5 percent. We can therefore conclude that the Lagrangian method is reliable.

The linearization method becomes slower than the Lagrangian method on average when the data scale is large. From

For further research, we will test on larger-scale problems and real-world instances.

In real life, data such as distances, fixed costs of opening a new facility, and demands at different places are not always independent, so it is necessary to compare the computation time not only on random data sets but also on examples that come from more realistic instances.

Our data comes from "An inventory-location model: Formulation, solution algorithm and computational results," Annals of Operations Research, and we use two data sets. For the 88-node dataset, representing the 50 largest cities in the 1990 US census along with the 48 capitals of the continental US, minus duplicates, the mean demand was obtained by dividing the population data by 1000 and rounding the result to the nearest integer. Fixed facility location costs were obtained by dividing the original facility location costs by 100. For the 150-node dataset, representing the 150 largest cities in the continental US in the 1990 census,

j(i) | | | u | |
---|---|---|---|---|---
1 | 6.7899 | 20.1892 | −0.6551 | 0.1004 | 1.4484
2 | 8.3984 | 10.8181 | −0.6551 | 0.1004 | 1.4484
3 | 5.4865 | 24.0214 | −0.6551 | 0.1004 | 1.4484
4 | 3.2355 | 14.2425 | −0.6551 | 0.1004 | 1.4484
5 | 8.7225 | 10.2614 | −0.6551 | 0.1004 | 1.4484

 | 1 | 2 | 3 | 4 | 5
---|---|---|---|---|---
1 | 313.0018 | 427.8025 | 148.3325 | 269.3765 | 47.4767
2 | 307.9506 | 275.2097 | 240.9874 | 236.6587 | 39.4540
3 | 210.9449 | 216.0247 | 295.6569 | 200.8956 | 55.5366
4 | 228.2597 | 245.9648 | 386.1843 | 321.6904 | 185.5226
5 | 83.5940 | 316.4902 | 476.5017 | 329.6392 | 48.3012

the mean demand was obtained in the same manner. The fixed facility costs were all set to 100, one thousandth of the value in the dataset given by Mark S. Daskin, Collette R. Coullard, and Zuo-Jun Max Shen in 2002. These changes were made to allow us to deal with smaller numbers.

For the 88-node dataset, the solution time for the Lagrangian method is 0.203 s and it takes CPLEX 2.435 s. For the 150-node dataset, the solution time for the Lagrangian method is 0.539 s and it takes CPLEX 19.673 s.

We see that the solution times for both methods are a little smaller than the averages for the random samples, and the Lagrangian method is still much faster than the linearization method. So the randomness of the initial instances may not have much influence on the comparison of these two methods.

Our linearization of the LMRP requires a longer solution time on average than the Lagrangian method does. However, it performs better on some special instances.

For future research, we would like to determine under what conditions the linearization method will have a shorter solution time than the Lagrangian method does.
