
This work gives a systematic construction of two families of mixed-integer linear programming (MILP) formulations, graph-based and sequence-based, of the well-known scheduling problem

Many machine scheduling problems require setup times prior to the processing of jobs. It is common that the setup time of a job depends on the job previously processed on the same machine. For instance, in the package printing industry, the time taken to prepare the ink colors for a task depends on the colors used in the previous printing task. In this paper, we consider an identical parallel machine scheduling problem with sequence-dependent setup times and release dates. The problem's objective is to minimize the total completion time. This problem is denoted in the three-field notation [

The parallel machine scheduling problem with job setup times is widely studied in the literature. Readers can refer to the survey of Allahverdi [

As far as we know, Kurz and Askin [

For this reason, we extend the works of Nessah et al. [

● In Section 2, we introduce two MILP formulations: a graph-based formulation and a sequence-based formulation. We establish tight upper bounds for the completion times of the jobs for each formulation.

● In Section 3, we apply the test protocol introduced by Nessah et al. [

● In Section 4, we draw conclusions from the numerical tests and present four perspectives for future research.

We consider a problem in which n jobs must be scheduled on m identical machines, where m is strictly less than n (otherwise the problem is trivial). Each job j has a release date r j at which it becomes available for processing, and a sequence-dependent setup time s i j when it is immediately preceded by job i on the same machine. We make the following three assumptions:

● The setup of a job can be performed before the release date of that job.

● The first job processed on each machine needs no setup because it is preceded by no other job.

● All setup times must satisfy the triangle inequality s i j ≤ s i k + s k j for all jobs i, j, k. This assumption prevents the artificial insertion of a job k with p k = 0 between jobs i and j whenever s i k + s k j < s i j .
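As an illustration (not part of the paper's material), the triangle-inequality assumption can be checked on a candidate setup-time matrix as follows:

```python
# Illustrative check: does a setup-time matrix satisfy the triangle
# inequality s[i][j] <= s[i][k] + s[k][j] for all distinct jobs i, j, k?
def satisfies_triangle_inequality(s):
    """s is an n x n matrix of setup times (diagonal entries are ignored)."""
    n = len(s)
    return all(
        s[i][j] <= s[i][k] + s[k][j]
        for i in range(n) for j in range(n) for k in range(n)
        if i != j and k != i and k != j
    )

ok = [[0, 3, 4], [2, 0, 5], [3, 2, 0]]
bad = [[0, 9, 1], [2, 0, 5], [3, 1, 0]]   # s[0][1]=9 > s[0][2]+s[2][1]=2
print(satisfies_triangle_inequality(ok))   # True
print(satisfies_triangle_inequality(bad))  # False
```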

Sets and parameters

● J = { 1 , ⋯ , n } : set of jobs.

● M = { 1 , ⋯ , m } : set of machines.

● r j : release date of job j.

● p j : processing time of job j.

● s i j : setup time if job j follows job i on the same machine.

We introduce a dummy job, job 0, with processing time equal to zero ( p 0 = 0 ), available at the beginning of the scheduling horizon ( r 0 = 0 ). All setup times involving job 0 are also zero. To limit the number of machines used, we add a constraint ensuring that at most m jobs can follow job 0.
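To make the problem definition concrete, the following sketch (our own illustration, with hypothetical data) computes completion times for given per-machine job sequences under the stated assumptions: setups may be performed before release dates, and the first job on each machine needs no setup.

```python
# Sketch: completion times for a fixed assignment of job sequences to machines.
def completion_times(machines, r, p, s):
    """machines: list of job sequences, one per machine (jobs 0-indexed).
    Returns {job: completion time}, with C_j = max(r_j, C_prev + s_prev_j) + p_j."""
    C = {}
    for seq in machines:
        prev, t = None, 0
        for j in seq:
            # first job on a machine needs no setup; setups may precede r_j
            start = r[j] if prev is None else max(r[j], t + s[prev][j])
            t = start + p[j]
            C[j] = t
            prev = j
    return C

# Two machines, four jobs (hypothetical data):
r = [0, 2, 0, 5]
p = [3, 4, 2, 1]
s = [[0, 1, 1, 2],
     [1, 0, 1, 1],
     [1, 2, 0, 1],
     [2, 1, 1, 0]]
C = completion_times([[0, 1], [2, 3]], r, p, s)
print(C, sum(C.values()))   # {0: 3, 1: 8, 2: 2, 3: 6} 19
```

The total completion time of this schedule, 19, is the quantity the MILP formulations below minimize.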

Set

● J ′ = { 0 , ⋯ , n } : set of jobs including the dummy job 0.

Decision variables

● C j : the completion time of job j.

● x i j : the sequencing decision variable. x i j = 1 if job j is processed right after job i on the same machine (denoted i → j ); otherwise x i j = 0 .

Formulation GF

Minimize ∑ j ∈ J C j (1)

subject to

C j ≥ r j + p j , ∀ j ∈ J (2)

C j ≥ C i + s i j + p j − ( 1 − x i j ) ( C ¯ i + s i j − r j ) , ∀ i , j ∈ J (3)

∑ j ∈ J x 0 j ≤ m (4)

∑ i ∈ J ′ x i j = 1, ∀ j ∈ J (5)

∑ i ∈ J x j i ≤ 1, ∀ j ∈ J (6)

x j j = 0, ∀ j ∈ J (7)

C j ≥ 0 , ∀ j ∈ J (8)

x i j ∈ { 0,1 } , ∀ i ∈ J ′ , j ∈ J (9)

where C ¯ i is an upper bound of the completion time of job i.

Constraint (2) ensures that a job j starts after its release date ( r j ). Constraint (3) enforces C j ≥ C i + s i j + p j only when job i immediately precedes job j, i.e., x i j = 1 . Otherwise, the right-hand side reduces to C i − C ¯ i + p j + r j ≤ p j + r j ≤ C j because C i ≤ C ¯ i . By constraint (4), at most m jobs can follow job 0 because there are m available machines. Constraint (5) ensures that each job, except job 0, has exactly one predecessor. Constraint (6) limits each job, except job 0, to at most one follower. Constraint (7) forbids a job from following itself.
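The linking constraints (4)-(7) can be checked on a candidate assignment x with a small sketch (our own illustration; row/column 0 corresponds to the dummy job):

```python
# Illustrative feasibility check of the GF linking constraints (4)-(7),
# where x[i][j] = 1 means job j is processed right after job i.
def gf_links_feasible(x, m):
    n = len(x) - 1                                          # jobs 1..n
    if sum(x[0][j] for j in range(1, n + 1)) > m:           # (4) machines limit
        return False
    for j in range(1, n + 1):
        if sum(x[i][j] for i in range(n + 1)) != 1:         # (5) one predecessor
            return False
        if sum(x[j][i] for i in range(1, n + 1)) > 1:       # (6) <= one follower
            return False
        if x[j][j] != 0:                                    # (7) no self-loop
            return False
    return True

# Machine 1: 0 -> 1 -> 2, machine 2: 0 -> 3 (n = 3, m = 2):
x = [[0, 1, 0, 1],
     [0, 0, 1, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 0]]
print(gf_links_feasible(x, 2))   # True
print(gf_links_feasible(x, 1))   # False: two jobs follow job 0
```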

We develop an upper bound for the completion time of job j for the identical machine scheduling environment, based on the work of Nogueira et al. [

Theorem 1. There is an optimal schedule such that the number of jobs scheduled on any machine is less than or equal to n − m + 1 (Yalaoui and Chu [

Proposition 2. C ¯ j M = max { r j , max i ∈ J , i ≠ j ( r i ) + ∑ i = m n ρ [ i ] − ρ j } + p j is an upper bound on job j's completion time C j according to Theorem 1, where:

● ρ j = p j + s j · .

● ρ [ j ] is the j^{th} sorted element of ρ in a non-decreasing order.

● s j · = max k ∈ J , k ≠ j s j k .

Proof. First, consider a schedule on one machine where each job incurs its maximum setup time toward any following job: s j · = max k ∈ J , k ≠ j s j k .

Let j be the last job scheduled on this machine, and assume that job k precedes job j. Then C j = max { r j , C k + s k j } + p j ≤ max { r j , C k + s k · } + p j .

Furthermore, C k + s k · ≤ max i ∈ J , i ≠ j r i + ∑ i ∈ J , i ≠ j ρ i , hence C j ≤ max { r j , max i ∈ J , i ≠ j r i + ∑ i ∈ J , i ≠ j ρ i } + p j .

Second, consider a schedule on two machines. According to Theorem 1, there is at least one job, denoted l, scheduled on the second machine. The improved bound on job j's completion time is then:

C j ≤ max { r j , max i ∈ J , i ≠ j r i + ∑ i ∈ J , i ≠ j ρ i − ρ l } + p j ≤ max { r j , max i ∈ J , i ≠ j r i + ∑ i ∈ J , i ≠ j ρ i − ρ [ 1 ] } + p j (10)

We now consider the general case of m machines. Applying the same process yields a valid bound on the completion time of job j:

C j ≤ max { r j , max i ∈ J , i ≠ j ( r i ) + ∑ i ∈ J , i ≠ j ρ i − ∑ i = 1 m − 1 ρ [ i ] } + p j (11)

In addition, since ∑ i ∈ J , i ≠ j ρ i − ∑ i = 1 m − 1 ρ [ i ] = ∑ i = m n ρ [ i ] − ρ j , we can rewrite the upper bound as:

C j ≤ max { r j , max i ∈ J , i ≠ j ( r i ) + ∑ i = m n ρ [ i ] − ρ j } + p j = C ¯ j M (12)

□
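Proposition 2's bound can be computed directly; the sketch below is our own re-implementation on hypothetical data:

```python
# Sketch of Proposition 2's upper bound:
# C_bar_j = max(r_j, max_{i != j} r_i + sum_{i=m}^{n} rho_[i] - rho_j) + p_j,
# with rho_j = p_j + max_k s_jk and rho_[i] the i-th smallest rho (1-indexed).
def completion_upper_bounds(r, p, s, m):
    n = len(r)
    rho = [p[j] + max(s[j][k] for k in range(n) if k != j) for j in range(n)]
    tail = sum(sorted(rho)[m - 1:])       # sum_{i=m}^{n} rho_[i], 0-indexed slice
    bounds = []
    for j in range(n):
        r_max_other = max(r[i] for i in range(n) if i != j)
        bounds.append(max(r[j], r_max_other + tail - rho[j]) + p[j])
    return bounds

# Hypothetical 4-job, 2-machine instance:
r = [0, 2, 0, 5]
p = [3, 4, 2, 1]
s = [[0, 1, 1, 2],
     [1, 0, 1, 1],
     [1, 2, 0, 1],
     [2, 1, 1, 0]]
print(completion_upper_bounds(r, p, s, m=2))   # [17, 18, 17, 14]
```

These bounds are the big-M coefficients C̄ i used in constraint (3) of formulation GF.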

The intuition behind the sequence-based formulation is to build a global job sequence and cut it into per-machine subsequences. First, we use the variables ν j k to define the sequence: ν j k = 1 if job j occupies the k^{th} position of the scheduling sequence, otherwise ν j k = 0 . Second, we cut the sequence into machines using the variables θ : if θ k = 1 then the job at position k starts processing on a new machine.

We use the same example introduced previously, with 10 jobs and 3 machines, to illustrate the variable assignment for the sequence-based formulation. The values assigned to the variables θ are shown in

The sequencing of jobs on the machines is as follows:

● Machine 1: 5 → 4 → 7 → 9 .

● Machine 2: 8 → 6 .

● Machine 3: 2 → 1 → 10 → 3 .

The auxiliary variable β i j k captures the sequencing of two jobs i and j: β i j k = 1 if job j immediately follows job i and occupies the k^{th} position of the scheduling sequence. The completion time of the job assigned to position k is denoted y k .

k | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
---|---|---|---|---|---|---|---|---|---|---|
Job | 5 | 4 | 7 | 9 | 8 | 6 | 2 | 1 | 10 | 3 |
θ k | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 |
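The cutting of the sequence by θ above can be sketched in a few lines (illustrative code, not from the paper):

```python
# Sketch: cut the global job sequence into per-machine subsequences
# wherever theta_k = 1 (job at position k starts a new machine).
def cut_sequence(jobs_by_position, theta):
    machines = []
    for job, new_machine in zip(jobs_by_position, theta):
        if new_machine:
            machines.append([])
        machines[-1].append(job)
    return machines

jobs  = [5, 4, 7, 9, 8, 6, 2, 1, 10, 3]
theta = [1, 0, 0, 0, 1, 0, 1, 0, 0, 0]
print(cut_sequence(jobs, theta))
# [[5, 4, 7, 9], [8, 6], [2, 1, 10, 3]]
```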

Formulation SF

Set

● K = { 1, ⋯ , n } : set of job positions.

Variables

● y k : the completion time of the job at position k.

● ν j k : equal to 1 if job j occupies position k; 0 otherwise.

● θ k : equal to 1 if the job at position k starts processing on a new machine; 0 otherwise.

● β i j k : an auxiliary variable, equal to 1 if job j immediately follows job i and occupies the k^{th} position.

Minimize ∑ k ∈ K y k (13)

subject to

∑ k ∈ K ν j k = 1 ∀ j ∈ J (14)

∑ j ∈ J ν j k = 1 ∀ k ∈ K (15)

y k ≥ ∑ j ∈ J ν j k ( r j + p j ) ∀ k ∈ K (16)

β i j k ≥ ν i , k − 1 + ν j k − 1 ∀ i , j ∈ J : i ≠ j , ∀ k ∈ K (17)

y k ≥ y k − 1 + ∑ i ∈ J ∑ j ∈ J β i j k s i j + ∑ j ∈ J ν j k p j − θ k ( y ¯ k + s max − r min ) ∀ k ∈ K (18)

∑ k ∈ K θ k ≤ m (19)

y 0 = 0 ; θ 1 = 1 (20)

ν j k ∈ { 0,1 } ∀ j ∈ J , k ∈ K (21)

β i j k ∈ { 0,1 } ∀ i , j ∈ J , k ∈ K (22)

y k ≥ 0 ∀ k ∈ K (23)

where y ¯ k is an upper bound of the completion time of the job that takes the k^{th} position, s max = max i , j ∈ J s i j and r min = min j ∈ J r j .

Constraints (14) and (15) assign exactly one job to each position and exactly one position to each job. Constraint (16) ensures that the job assigned to the k^{th} position starts after its release date. Constraint (17) forces β i j k to be greater than or equal to 1 if job i is at position k − 1 and job j is at position k; otherwise β i j k is unconstrained. Constraint (18) computes the completion time of the job at the k^{th} position. Constraint (19) limits the number of machines to m. The remaining constraints initialize and bound the decision variables.
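One consistent reading of how the positions, θ, and constraints (16) and (18) determine the y k values can be sketched as follows (our own illustration, with hypothetical data; when θ k = 1 , only the release-date bound (16) remains active):

```python
# Sketch: completion times y_k of the jobs by position in formulation SF.
def sf_completion_times(jobs_by_position, theta, r, p, s):
    y = []
    for k, j in enumerate(jobs_by_position):
        if theta[k]:                       # first job on a new machine
            y.append(r[j] + p[j])
        else:                              # follows the job at position k - 1
            i = jobs_by_position[k - 1]
            y.append(max(r[j] + p[j], y[-1] + s[i][j] + p[j]))
    return y

# Hypothetical 4-job, 2-machine instance: positions [0, 1 | 2, 3]
r = [0, 2, 0, 5]
p = [3, 4, 2, 1]
s = [[0, 1, 1, 2],
     [1, 0, 1, 1],
     [1, 2, 0, 1],
     [2, 1, 1, 0]]
y = sf_completion_times([0, 1, 2, 3], [1, 0, 1, 0], r, p, s)
print(y, sum(y))   # [3, 8, 2, 6] 19
```

The objective (13) then sums these y k over all positions, which equals the total completion time over all jobs.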

It is important to note that the integrality constraint on the variable β i j k can be relaxed, i.e., β i j k ∈ [ 0,1 ] , without violating the integrality of the solution [

We introduce in the following proposition an upper bound for y k , adapted from the completion-time upper bound for single machine scheduling introduced by Nogueira et al. [

Proposition 3. y ¯ k = max j ∈ J ( p j + r j ) + ∑ j = max { n − k + 1 , m } n ρ [ j ] is a valid upper bound of the completion time y k of the job at the k^{th} position, for any schedule, with respect to Theorem 1, where:

● ρ j = p j + s j · .

● ρ [ j ] is the j^{th} sorted element of ρ in a non-decreasing order.

● s j · = max k ∈ J , k ≠ j s j k .

Proof. The completion time of the job at the first position cannot exceed max j ∈ J ( p j + r j ) . The job at the second position cannot complete after max j ∈ J ( p j + r j ) + ρ [ n ] , and so on, until we reach the job at position n − m + 2 . The completion time of this job is bounded the same way as that of the job at position n − m + 1 , because among positions 1 to n − m + 2 there is at least one position k ∈ { 1, ⋯ , n − m + 2 } with θ k = 1 (by Theorem 1). Since p k + r k ≤ max j ∈ J ( p j + r j ) and p k + s k · ≤ ∑ j = m n ρ [ j ] , we have y n − m + 2 ≤ y ¯ n − m + 2 = y ¯ n − m + 1 . In the same way, among positions 1 to n − m + τ there are at least τ > 0 positions 1 ≤ k ≤ n − m + τ with θ k = 1 .

Consequently, y ¯ n − m + 1 = y ¯ n − m + 2 = ⋯ = y ¯ n . □
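Proposition 3's bound can also be re-implemented as a sketch on hypothetical data (the formula is taken as stated; the 1-indexed limits of the text are converted to 0-indexed slices here):

```python
# Sketch of Proposition 3's position-wise upper bound:
# y_bar_k = max_j(p_j + r_j) + sum_{j = max(n-k+1, m)}^{n} rho_[j]  (1-indexed).
def position_upper_bounds(r, p, s, m):
    n = len(r)
    rho = sorted(p[j] + max(s[j][k] for k in range(n) if k != j)
                 for j in range(n))
    base = max(p[j] + r[j] for j in range(n))
    # 1-indexed lower limit max(n - k + 1, m) -> 0-indexed slice start
    return [base + sum(rho[max(n - k + 1, m) - 1:]) for k in range(1, n + 1)]

# Hypothetical 4-job, 2-machine instance:
r = [0, 2, 0, 5]
p = [3, 4, 2, 1]
s = [[0, 1, 1, 2],
     [1, 0, 1, 1],
     [1, 2, 0, 1],
     [2, 1, 1, 0]]
y_bar = position_upper_bounds(r, p, s, m=2)
print(y_bar)   # [11, 16, 20, 20]
# The bound is non-decreasing and constant from position n - m + 1 onward,
# as the proof concludes.
assert all(a <= b for a, b in zip(y_bar, y_bar[1:]))
```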

To conduct the numerical tests, we use the same benchmark introduced by [

The objective of the numerical tests is twofold. First, we observe the sensitivity of each solving method to the data structure. Second, we examine the impact of the data structure on the tractability of the problem.

The following subsection describes the random generation of the instances.

The number of jobs is n ∈ { 5,10,15,20,25,30,35,40 } and the number of machines is m ∈ { 2,3,5 } .

Each job's release date is generated randomly according to the uniform distribution over [ 0,50.5 × α × n / m ] . The jobs' arrival density is scaled by the parameter α.

Formulations | GF | SF |
---|---|---|
No. variables | O(n²) | O(n³) |
No. constraints | O(n²) | O(n³) |

The parameter α takes its value from the set { 0.6,0.8,1.5,2.0,3.0 } . When α = 0.6 , job arrivals are densest and earliest. On the contrary, when α = 3.0 , job arrivals are distributed more evenly along the scheduling horizon.

The processing times of jobs are uniformly generated in [ 1,100 ] . The setup time s i j is equal to a × min { p i , p j } , with a randomly generated in an interval [ A , B ] ∈ { [ 0.01,0.1 ] , [ 0.05,0.1 ] , [ 0.1,0.2 ] , [ 0.2,0.5 ] , [ 0.1,0.5 ] } .

For each data structure { n , m , α , [ A , B ] } , we generate 10 instances; hence, each problem size { n , m } has 250 generated instances. The total number of instances tested is 6000.
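The generation protocol can be sketched as follows (our own code; we read the release-date interval as [ 0, 50.5 × α × n / m ], and the function and parameter names are ours):

```python
import random

# Sketch of the benchmark instance generator described above.
def generate_instance(n, m, alpha, A, B, seed=0):
    rng = random.Random(seed)
    p = [rng.randint(1, 100) for _ in range(n)]                  # p_j ~ U(1, 100)
    r = [rng.uniform(0, 50.5 * alpha * n / m) for _ in range(n)] # r_j ~ U(0, 50.5*alpha*n/m)
    # s_ij = a * min(p_i, p_j), with a ~ U(A, B) drawn per pair
    s = [[rng.uniform(A, B) * min(p[i], p[j]) if i != j else 0.0
          for j in range(n)] for i in range(n)]
    return r, p, s

r, p, s = generate_instance(n=10, m=3, alpha=0.8, A=0.1, B=0.2, seed=42)
print(max(r) <= 50.5 * 0.8 * 10 / 3, all(1 <= pj <= 100 for pj in p))
```

Note that setup times built this way are not automatically guaranteed to satisfy the triangle inequality; a generator used in practice would also enforce that assumption.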

We compare CPLEX solving the MILP formulations against an exact method: the Branch-and-Bound algorithm developed by Nessah et al. [

The MILP formulations are solved by IBM ILOG CPLEX 12.8. The Branch-and-Bound algorithm is coded in C++. The computer runs Windows 7 Professional 64-bit (Dell Optiplex 9020, CPU Intel Core i5-4690 3.5 GHz and 8192 MB RAM). We report in this section the following KPIs (Key Performance Indicators):

● Time (in seconds): solving time, limited to 3600 seconds.

● LPGap (in percentage): the LP relaxation gap. For a solution obtained by solving the MILP formulations, LPGap is the gap between the best integral objective and the LP relaxation objective, i.e., with all integrality constraints relaxed. When the Branch-and-Bound algorithm solves the instance, LPGap is the gap between the best solution objective and the best lower bound.

● MLBGap (gap to the maximal lower bound, in percentage): the gap between the objective value found by each method and the best lower bound found by all methods. The MLBGap value can be considered the worst possible gap to the optimal solution of each tested instance. This KPI is used to compare the quality of the solutions found across methods.

● Opt: percentage of instances solved to optimality.
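A hedged sketch of how such relative gaps are typically computed (the paper does not spell out its exact formulas, so the convention below, with the objective in the denominator, is an assumption):

```python
# Hypothetical helper: relative gap between an objective value and a bound,
# in percent (assumed convention; not taken from the paper).
def gap_percent(objective, bound):
    return 100.0 * (objective - bound) / objective

# LPGap-style use: integral objective vs. LP-relaxation objective
print(round(gap_percent(1000.0, 980.0), 2))   # 2.0
# MLBGap-style use: a method's objective vs. the best lower bound of all methods
print(round(gap_percent(1040.0, 1000.0), 2))
```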

Param. | Description | Generation |
---|---|---|
n | Number of jobs | { 5, 10, 15, 20, 25, 30, 35, 40 } |
m | Number of machines | { 2, 3, 5 } |
α | Jobs' arrival density (JAD) scale parameter | { 0.6, 0.8, 1.5, 2.0, 3.0 } |
r j | Release date | U ( 0, 50.5 × α × n / m ) |
p j | Processing time | U ( 1, 100 ) |
[ A , B ] | Scale factor intervals | { [0.01, 0.1], [0.05, 0.1], [0.1, 0.2], [0.2, 0.5], [0.1, 0.5] } |
a | Setup times relative length (SRL) scale parameter | U ( A , B ) |
s i j | Sequence-dependent setup time | a × min { p i , p j } |

Impacting factor: Number of machines

Impacting factor: Jobs’ arrival density

From the data of

Impacting factor: Jobs’ setup times relative length

Factor | Value | GF | SF | BnB |
---|---|---|---|---|
m | 2 | 753.4 | 1056.2 | 208.4 |
 | 3 | 691.3 | 1208.9 | 756.1 |
 | 5 | 510.0 | 1246.1 | 1420.5 |
α | 0.6 | 1804.9 | 998.4 | 989.8 |
 | 0.8 | 1372.3 | 1227.9 | 909.7 |
 | 1.5 | 67.8 | 1441.3 | 816.7 |
 | 2.0 | 12.5 | 1276.2 | 786.9 |
 | 3.0 | 0.3 | 908.3 | 472.1 |
a | [0.01, 0.1] | 612.7 | 911.9 | 540.1 |
 | [0.05, 0.1] | 657.7 | 966.3 | 619.2 |
 | [0.1, 0.2] | 634.2 | 1220.9 | 757.0 |
 | [0.2, 0.5] | 682.2 | 1383.6 | 1076.0 |
 | [0.1, 0.5] | 671.1 | 1369.3 | 982.8 |

MILP and the most sensitive method to this factor is BnB. Regarding computational time, the jobs' setup times relative lengths have less impact than the other two factors.

Impacting factor: Number of machines

Impacting factor: Jobs’ arrival density

Impacting factor: Jobs’ setup relative length

In

Altogether, Branch-and-Bound has the tightest LPGap. The overall average gap is 0.6% and the standard deviation is 1.7%. GF has the largest LPGap

Factor | GF | SF | BnB |
---|---|---|---|
Number of machines (m) | −1.00 | 0.86 | 0.99 |
Arrival density scale parameter (α) | −0.84 | −0.27 | −0.97 |
Average setup times scale parameter ( a ¯ ) | 0.82 | 0.95 | 1.00 |

Factor | Value | GF | SF | BnB |
---|---|---|---|---|
m | 2 | 3.13% | 0.56% | 0.04% |
 | 3 | 2.47% | 0.71% | 0.20% |
 | 5 | 1.32% | 1.20% | 0.61% |
α | 0.6 | 7.96% | 1.49% | 0.64% |
 | 0.8 | 3.55% | 1.59% | 0.56% |
 | 1.5 | 0.02% | 0.69% | 0.12% |
 | 2.0 | 0.01% | 0.29% | 0.07% |
 | 3.0 | 0.00% | 0.07% | 0.01% |
a | [0.01, 0.1] | 2.04% | 0.39% | 0.14% |
 | [0.05, 0.1] | 2.19% | 0.53% | 0.20% |
 | [0.1, 0.2] | 2.16% | 0.88% | 0.23% |
 | [0.2, 0.5] | 2.67% | 1.25% | 0.47% |
 | [0.1, 0.5] | 2.47% | 1.07% | 0.37% |

when the release dates are dense, i.e., α ∈ { 0.6,0.8 } , while yielding excellent results at other values of α . This fact underlines our previous observation about the sensitivity of GF to the job availability distribution.

The quality of the solution found by each method/formulation can be evaluated by the gap between the best-known lower bound and the actual solution (MLBGap). This gap can be considered the maximum possible gap to the optimal solution.

Figures 9-11 show the impact of the scale factors on the MLBGap of the three paradigms. While having the worst LPGap, GF has the best MLBGap for all instance sizes and all scale factors. Inversely, the Branch-and-Bound yields the worst MLBGap while having the best LPGap. This observation again underlines the fact that the lower bound of the Branch-and-Bound calculated by the mSPRT heuristic [

The global results on MLBGap of the three methods according to the setup times length scale factors are shown in

For this KPI, we examine the tractability of an instance according to its structure, i.e., we quantify how easily an instance can be solved given a specific structure. First, we introduce graphs of the evolution of the percentage of instances solved to optimality as a function of the problem size, for each value of m, α and a. As with the KPIs introduced previously, these graphs give an idea of how well each solving method copes with instances of a specific structure. Second, we introduce a cross-table analysis combining two factors. This analysis provides insight into how easily an instance with a specific pair of configurations can be solved.

Factor | GF | SF | BnB |
---|---|---|---|
Number of machines (m) | −1.00 | 0.99 | 1.00 |
Arrival density scale parameter (α) | −0.78 | −0.95 | −0.90 |
Average setup times scale parameter ( a ¯ ) | 0.96 | 0.97 | 0.98 |

Factor | Value | GF | SF | BnB |
---|---|---|---|---|
m | 2 | 0.11% | 0.14% | 0.23% |
 | 3 | 0.33% | 0.44% | 0.95% |
 | 5 | 0.39% | 0.53% | 1.49% |
α | 0.6 | 0.48% | 0.48% | 1.26% |
 | 0.8 | 0.33% | 0.44% | 1.10% |
 | 1.5 | 0.02% | 0.12% | 0.21% |
 | 2.0 | 0.00% | 0.06% | 0.08% |
 | 3.0 | 0.00% | 0.01% | 0.02% |
a | [0.01, 0.1] | 0.05% | 0.07% | 0.28% |
 | [0.05, 0.1] | 0.08% | 0.11% | 0.38% |
 | [0.1, 0.2] | 0.15% | 0.20% | 0.49% |
 | [0.2, 0.5] | 0.31% | 0.40% | 0.83% |
 | [0.1, 0.5] | 0.24% | 0.33% | 0.69% |

Factor | GF | SF | BnB |
---|---|---|---|
Number of machines (m) | 0.87 | 0.89 | 0.96 |
Arrival density scale parameter (α) | −0.84 | −0.91 | −0.88 |
Average setup times scale parameter ( a ¯ ) | 0.99 | 1.00 | 0.99 |

Figures 12-14 show the percentages of instances that are solved optimally according to each scale factor.

Cross-

Cross-

Cross-

 | [A, B] | α = 0.6 | α = 0.8 | α = 1.5 | α = 2.0 | α = 3.0 |
---|---|---|---|---|---|---|
GF | [0.01, 0.1] | 52% | 67% | 100% | 100% | 100% |
 | [0.05, 0.1] | 49% | 65% | 99% | 100% | 100% |
 | [0.1, 0.2] | 53% | 65% | 98% | 100% | 100% |
 | [0.2, 0.5] | 49% | 65% | 97% | 99% | 100% |
 | [0.1, 0.5] | 53% | 61% | 99% | 99% | 100% |
SF | [0.01, 0.1] | 88% | 80% | 69% | 77% | 80% |
 | [0.05, 0.1] | 83% | 80% | 67% | 73% | 83% |
 | [0.1, 0.2] | 77% | 67% | 64% | 65% | 81% |
 | [0.2, 0.5] | 66% | 61% | 59% | 65% | 78% |
 | [0.1, 0.5] | 71% | 63% | 59% | 61% | 71% |
BnB | [0.01, 0.1] | 87% | 86% | 87% | 87% | 87% |
 | [0.05, 0.1] | 81% | 83% | 82% | 85% | 94% |
 | [0.1, 0.2] | 76% | 79% | 81% | 81% | 89% |
 | [0.2, 0.5] | 63% | 68% | 74% | 75% | 85% |
 | [0.1, 0.5] | 71% | 73% | 74% | 75% | 83% |
All | [0.01, 0.1] | 93% | 93% | 100% | 100% | 100% |
 | [0.05, 0.1] | 88% | 93% | 99% | 100% | 100% |
 | [0.1, 0.2] | 86% | 86% | 99% | 100% | 100% |
 | [0.2, 0.5] | 77% | 79% | 98% | 99% | 100% |
 | [0.1, 0.5] | 79% | 83% | 99% | 100% | 100% |

 | [A, B] | m = 2 | m = 3 | m = 5 |
---|---|---|---|---|
GF | [0.01, 0.1] | 77% | 77% | 83% |
 | [0.05, 0.1] | 76% | 77% | 83% |
 | [0.1, 0.2] | 76% | 79% | 81% |
 | [0.2, 0.5] | 76% | 75% | 82% |
 | [0.1, 0.5] | 74% | 76% | 83% |
SF | [0.01, 0.1] | 70% | 66% | 67% |
 | [0.05, 0.1] | 68% | 65% | 63% |
 | [0.1, 0.2] | 61% | 58% | 61% |
 | [0.2, 0.5] | 59% | 53% | 54% |
 | [0.1, 0.5] | 57% | 54% | 54% |
BnB | [0.01, 0.1] | 96% | 76% | 66% |
 | [0.05, 0.1] | 92% | 77% | 57% |
 | [0.1, 0.2] | 88% | 70% | 57% |
 | [0.2, 0.5] | 80% | 61% | 49% |
 | [0.1, 0.5] | 86% | 63% | 49% |
All | [0.01, 0.1] | 99% | 91% | 88% |
 | [0.05, 0.1] | 98% | 91% | 86% |
 | [0.1, 0.2] | 93% | 89% | 84% |
 | [0.2, 0.5] | 88% | 83% | 82% |
 | [0.1, 0.5] | 92% | 84% | 84% |

 | α | m = 2 | m = 3 | m = 5 |
---|---|---|---|---|
GF | 0.6 | 36% | 40% | 52% |
 | 0.8 | 48% | 52% | 61% |
 | 1.5 | 94% | 93% | 99% |
 | 2.0 | 100% | 99% | 100% |
 | 3.0 | 100% | 100% | 100% |
SF | 0.6 | 73% | 64% | 57% |
 | 0.8 | 64% | 60% | 52% |
 | 1.5 | 57% | 52% | 50% |
 | 2.0 | 60% | 54% | 58% |
 | 3.0 | 61% | 66% | 82% |
BnB | 0.6 | 85% | 69% | 43% |
 | 0.8 | 87% | 73% | 43% |
 | 1.5 | 91% | 70% | 50% |
 | 2.0 | 91% | 66% | 58% |
 | 3.0 | 88% | 69% | 85% |
All | 0.6 | 87% | 71% | 61% |
 | 0.8 | 87% | 74% | 64% |
 | 1.5 | 96% | 94% | 99% |
 | 2.0 | 100% | 99% | 100% |
 | 3.0 | 100% | 100% | 100% |

the release date density has more discriminating power. Combining all solving methods, more than 94% of the instances are solved to optimality with sparse job availability ( α ≥ 1.5 ). When α is less than 1.5, instances with fewer machines are easier to solve to optimality.

First, we present in this paper two MILP formulations to solve the parallel machine scheduling problem with job release dates and sequence-dependent setup times to minimize the total completion time. We also provide two upper bounds on the jobs' completion times, one for each formulation.

Second, through a thorough analysis of the numerical tests, we observe a considerable impact of the job arrival rate, and a non-negligible impact of the number of machines, on the quality of the solutions found and the time performance of the solving algorithms. The cross-table analysis at the end of the numerical tests section could establish a base for further classification methods.

We propose four perspectives for future research on this problem. First, we notice the difference between the LPGap and the actual gap of each instance; thus, a better estimation of the lower bound could be very helpful in reducing the computational effort. Second, we plan to test the performance of GF, SF, and Branch-and-Bound with other objective functions, such as makespan minimization and total weighted completion time minimization. Third, we notice that the MILP formulations are very sensitive to the upper bound of the completion time, so a tighter upper bound could be helpful. Finally, one can observe that the performance of the different methods depends on the input data characteristics (jobs' arrival density scale parameter and setup time scale factors). Further work is to develop a new method that exploits this observation.

This research is supported by Chaire Connected Innovation tenured by Prof. Farouk Yalaoui.

The authors declare no conflicts of interest regarding the publication of this paper.

Yalaoui, F. and Nguyen, N.Q. (2021) Identical Machine Scheduling Problem with Sequence-Dependent Setup Times: MILP Formulations Computational Study. American Journal of Operations Research, 11, 15-34. https://doi.org/10.4236/ajor.2021.111002