
Because perfect sample data are rarely available in real applications, the probability distribution of the random variables in a stochastic program usually carries only incomplete information. This paper therefore discusses a class of two-stage stochastic programming problems modeled with the maximum minimum expectation compensation criterion (MaxEMin) under a probability distribution having linear partial information (LPI). In view of the nondifferentiability of this kind of stochastic program, an improved complex algorithm is designed and analyzed. The algorithm effectively solves the nondifferentiable stochastic programming problem under LPI through variable polyhedron iteration. Numerical examples demonstrate the effectiveness of the proposed algorithm.

Stochastic programming with recourse, an important method for solving optimization problems with uncertain parameters, was first proposed by G. Dantzig, the founder of linear programming, who considered a two-stage stochastic programming problem with recourse when designing the optimal number of airline flights [

Generally, in studies of stochastic programming with recourse, the second-stage recourse function is determined by the expectation criterion on the premise that the probability distribution of the random variable is completely known, so that the stochastic program is equivalent to a deterministic mathematical program. In practical problems, however, owing to the lack of historical data and the limitations of statistical methods, the probability distribution of the random variables is not easily obtained in full, and often only partial information is available; the classical stochastic programming algorithms are then no longer applicable. To address this problem, linear partial information (LPI) theory was proposed in reference [

Once a stochastic programming problem is transformed into its equivalent deterministic problem, it can be regarded as a nonlinear programming problem and solved by nonlinear programming methods. With the continuous development of nonlinear programming, a large number of such methods have been applied to stochastic programming problems. In reference [

Aiming at stochastic programming under an uncertain probability distribution, this paper discusses a two-stage stochastic programming model based on the maximum minimum expectation criterion under LPI, building on the literature [

Let (Ω, 2^Ω, P) be a probability space, where Ω = {ω_1, ω_2, ⋯, ω_l} is a finite sample space, 2^Ω is the power set of Ω, and P = (p_1, p_2, ⋯, p_l)^T is the probability distribution on Ω, that is, p_i = Pr({ω = ω_i}), i = 1, ⋯, l, where Pr(θ) denotes the probability of event θ and P(Ω) = 1. In reference [

\[
\begin{cases}
\min\limits_{x \in \mathbb{R}^n} \; f(x) + g(y) \\
\text{s.t.} \;\; Cx \le b,
\end{cases} \tag{1}
\]

where,

\[
g(y) = E_P\big(\phi(x, \omega)\big), \qquad
\phi(x, \omega) = \max_{y \in \mathbb{R}^m} \; -\tfrac{1}{2} y^T H y + \big(\sigma(\omega) - x\big)^T y
\quad \text{s.t.} \;\; W y \le q \tag{2}
\]

Here, x ∈ R^n and y ∈ R^m are the decision variables of the first and second stages, H ∈ R^{m×m} is a symmetric positive definite matrix, σ(ω) ∈ R^m is a random vector on Ω, C ∈ R^{k×n}, b ∈ R^{k×1}, W ∈ R^{t×m} and q ∈ R^{t×1} are known coefficient matrices, f(x) is a convex function of the first-stage decision x, and g(y) is the second-stage recourse function.

Assuming that the random variables in the model are finitely discrete, the second stage compensation function g ( y ) can be expressed as

\[
g(y) = \sum_{i=1}^{l} p_i \, \phi(x, \omega_i) \tag{3}
\]
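The finite-sum recourse function above can be evaluated numerically once the inner quadratic program is solved for each scenario. The following is a minimal sketch, assuming SciPy's SLSQP solver for the inner maximization; the function names and the small test data are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.optimize import minimize

def phi(x, sigma, H, W, q):
    """Second-stage value phi(x, omega): max_y -0.5 y'Hy + (sigma - x)'y
    subject to W y <= q, solved by minimizing the negated objective."""
    m = H.shape[0]
    obj = lambda y: 0.5 * y @ H @ y - (sigma - x) @ y
    cons = {"type": "ineq", "fun": lambda y: q - W @ y}  # W y <= q
    res = minimize(obj, np.zeros(m), method="SLSQP", constraints=cons)
    return -res.fun

def g(x, scenarios, probs, H, W, q):
    """Expected recourse g = sum_i p_i * phi(x, omega_i), as in Eq. (3)."""
    return sum(p * phi(x, s, H, W, q) for p, s in zip(probs, scenarios))
```

For a fixed first-stage decision x, each scenario contributes one small concave quadratic program, so the expectation reduces to l independent solves.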

The establishment of the above model is based on the assumption that the probability distribution information of the random variables in the model is complete, that is, P = ( p 1 , p 2 , ⋯ , p l ) T is completely determined. But due to the limitations of historical data, such complete probability distribution information is not easy to obtain. Based on the literature [

Suppose that the probability distribution of the random variables is not completely known but has linear partial information, that is, it satisfies the following constraint:

\[
\varphi = \left\{ P = (p_1, \cdots, p_l)^T \in \mathbb{R}^l \;\middle|\; BP \le d, \; \sum_{i=1}^{l} p_i = 1, \; p_i \ge 0, \; i = 1, \cdots, l \right\} \tag{4}
\]

In the formula, B ∈ R^{m_1×l} is a known matrix and d ∈ R^{m_1} is a known vector.

From the above assumption it follows that the solution set φ determined by the linear partial information LPI(P) on the probability distribution P is a bounded convex polyhedron; every point of φ is an admissible probability distribution of the random variables in the model.

Since the probability distribution of the random variables has only linear partial information, simply using the expectation criterion to determine the second-stage recourse function is no longer applicable. The paper therefore extends the second-stage recourse function and, combining it with the maximum minimum expectation criterion of the expectation model, obtains the two-stage stochastic programming model with recourse under LPI discussed in this paper:

\[
\begin{cases}
\min\limits_{x \in \mathbb{R}^n} \; f(x) + \max\limits_{P \in \xi} \sum\limits_{i=1}^{l} p_i \, \phi(x, \omega_i) \\
\text{s.t.} \;\; Cx \le b,
\end{cases} \tag{5}
\]

where,

\[
\phi(x, \omega) = \max_{y \in \mathbb{R}^m} \; -\tfrac{1}{2} y^T H y + \big(\sigma(\omega) - x\big)^T y
\quad \text{s.t.} \;\; W y \le q \tag{6}
\]

\[
\xi = \left\{ P = (p_1, \cdots, p_l)^T \in \mathbb{R}^l \;\middle|\; BP \le d, \; \sum_{i=1}^{l} p_i = 1, \; p_i \ge 0, \; i = 1, \cdots, l \right\} \tag{7}
\]
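For a fixed x, the inner maximization over P in model (5) is a linear program over the polytope defined above, since the scenario values φ(x, ω_i) are constants. A minimal sketch, assuming SciPy's linprog is available; the function name and test data are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def worst_case_expectation(phi_vals, B, d):
    """max_P sum_i p_i phi_i over the LPI polytope
    {P : B P <= d, sum_i p_i = 1, p_i >= 0}.
    linprog minimizes, so the objective is negated."""
    l = len(phi_vals)
    res = linprog(c=-np.asarray(phi_vals),
                  A_ub=B, b_ub=d,
                  A_eq=np.ones((1, l)), b_eq=[1.0],
                  bounds=[(0, None)] * l)
    return -res.fun, res.x  # worst-case expectation and the worst-case P
```

With no binding LPI rows, all mass goes to the largest φ value; adding rows of B trims the polytope and lowers the worst case.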

Models (5)-(7) are the stochastic programming models with linear partial information probability distributions given in the paper. It can be seen that this model is a generalized form of the stochastic programming model in reference [

Because the second-stage recourse function max_{P∈ξ} ∑_{i=1}^{l} p_i φ(x, ω_i) is not differentiable, gradient information for the model does not exist, and gradient-based methods are not applicable. To solve the two-stage stochastic programming problem with an LPI probability distribution, the complex optimization algorithm, a direct optimization method, is introduced. By improving the complex method and adapting it to the solution of the model, a stochastic programming algorithm under an uncertain probability distribution is obtained. Several examples are then used to verify the effectiveness of the designed model and algorithm.

As a direct optimization algorithm, the complex method is simple and easy to implement, so it is widely used in engineering optimization problems [

The complex method is an optimization method that only needs to compare objective function values to determine the search direction. Its basic idea is as follows: first construct an initial complex shape in the feasible region; then, by comparing the objective function values of its vertices, find a new feasible point with an improved objective value and use it to replace the vertex with the worst value, forming a new complex shape. Repeating this process, the complex shape is continuously deformed, translated and shrunk, gradually approaching the optimum. When the objective function values of the vertices differ little, or the vertices are very close together, the vertex with the lowest objective function value can be taken as the optimum [

In n-dimensional space, a polyhedron composed of k ≥ n + 1 points is called a complex shape. Referring to the previous literature, there are two main methods to generate initial complex shape: manual definition of initial complex shape and random generation of initial complex shape. Considering the complexity of the stochastic programming model, the paper uses the second method. The following is the specific operation of randomly generating the initial complex shape:

1) Suppose that the vertices of the complex shape are n-dimensional, the number of vertices of the initial complex shape is determined to be k, and an initial vertex is selected manually in a given feasible region;

2) Suppose that the upper and lower bounds of the vertices are upb ∈ R^n and lob ∈ R^n respectively. The remaining k − 1 vertices are then generated using random numbers in [0, 1] by the rule x_i = lob + r_i (upb − lob), where r_i is a random number in the interval [0, 1], i = 2, ⋯, k;

3) Check whether the generated k vertices are in the feasible region: assuming that w vertices are in the feasible region and the remaining k − w vertices are not in the feasible region, the k − w vertices that are not in the feasible region can be translated into the feasible region by the following methods:

a) The geometric center of the w vertices in the feasible region is calculated and recorded as x_{gc} = (1/w) ∑_{i=1}^{w} x_i;

b) Denote the k − w vertices outside the feasible region by x_{out,j}, j = 1, ⋯, k − w. A vertex x′_{out,j} in the feasible region can then be found on the line between x_{gc} and x_{out,j}. The specific search is as follows:

\[
x'_{out,j} = x_{gc} + \rho \, (x_{out,j} - x_{gc}), \quad \rho \in (0, 1), \; j = 1, \cdots, k - w \tag{8}
\]

If the result x ′ o u t , j is not in the feasible region, the formula ρ = 0.5 ρ can be used to continuously reduce ρ until the vertex is translated into the feasible region. Through the above steps, we can get the initial complex shape that meets the conditions.
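The generation steps above can be sketched as follows. This is an illustrative implementation under the assumption that feasibility can be tested by a user-supplied predicate; all names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def initial_complex(x0, lob, upb, k, feasible, rho0=0.5):
    """Randomly generate a k-vertex initial complex: keep the user-chosen
    feasible vertex x0, draw the rest as lob + r*(upb - lob), and pull any
    infeasible vertex toward the centroid of the feasible ones (Eq. (8))."""
    verts = [np.asarray(x0, float)]
    verts += [lob + rng.random(len(lob)) * (upb - lob) for _ in range(k - 1)]
    inside = [v for v in verts if feasible(v)]
    xgc = np.mean(inside, axis=0)        # centroid of the feasible vertices
    fixed = []
    for v in verts:
        rho = rho0
        while not feasible(v):
            v = xgc + rho * (v - xgc)    # move toward the centroid
            rho *= 0.5                   # halve rho until feasible
        fixed.append(v)
    return np.array(fixed)
```

Since the centroid of feasible points in a convex feasible region is itself feasible, the halving loop always terminates.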

In the generated complex shape, record the worst point as x_h, the second-worst point as x_s, and the best point as x_l. The centroid of the remaining vertices after removing the worst point is x_c = (1/(k−1)) ∑_{i≠h} x_i. In the complex shape optimization, the vertex transformations applied to the polyhedron during the iteration are as follows:

1) Mapping method:

Transformation thought: We expect to find a better value in the opposite of the worst point x h , to replace x h .

Search direction: It searches along the direction from the worst point x h to the centroid x c , i.e. along the direction of x h → x c .

Step factor: Mapping factor α : α > 1 , representing the step size of the mapping.

Mapping iteration formula: x r = x c + α ( x c − x h ) , where x r is called mapping point.

Rule of judgement: If x r is in the feasible region and f ( x r ) < f ( x h ) , x r will be used instead of x h to form a new complex shape and carry out the next iteration.

2) Expansion method:

Transformation thought: Depending on the quality of the mapping point x_r obtained by the mapping method, we expect to find an even better vertex. If the function value of the mapping point is less than that of the best point x_l, i.e. f(x_r) < f(x_l), then the direction from x_c to x_r is the current optimal direction and the search can be expanded along it.

Expansion iteration formula: x e = x c + β ( x r − x c ) .

Expansion coefficient β : β ≥ 1 .

Rule of judgement:

a) if f ( x e ) < f ( x l ) , the expansion is successful, and x e replaces x h to form a new complex shape.

b) If f(x_e) ≥ f(x_l), the expansion fails, and x_r replaces x_h to form a new complex shape.

3) Shrinkage method:

Transformation thought: If f(x_h) < f(x_r) in the mapping method, the mapping step is too large; we set α = 0.5α and repeat the mapping. If this keeps failing until α < 10^{−5}, the current search direction is wrong, and the shrinkage method is used to look for a search direction inside the complex shape.

Shrinkage direction: The failure of the mapping method shows that its search direction x_h → x_c is not correct, so the complex shape is shrunk from the centroid x_c toward the worst point x_h, i.e. along the direction x_c → x_h.

Shrinkage coefficient: γ : 0 < γ < 1 .

Shrinkage formula: x k = x c − γ ( x c − x h ) .

Rule of judgement: If f ( x k ) < f ( x h ) , we use shrinkage point x k to replace the worst point x h to form a new polyhedron; If the shrinkage fails, we carry out the compression step.

4) Compression method:

Transformation thought: Shrinkage failure means that no good iteration point exists along the search direction determined by the worst point x_h and the centroid x_c. In this case, the complex shape is compressed toward the best point x_l, in order to obtain a better-performing complex shape.

Compression formula: x_i = x_l + δ(x_i − x_l), i = 1, ⋯, k; this formula replaces every point of the current complex shape except the best point.

Compression factor: δ : 0 < δ < 1
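The four vertex transformations above can each be summarized as a one-line update; a sketch with the paper's later parameter choices (α = 1.3, β = 1, γ = 0.7, δ = 0.5) used as illustrative defaults:

```python
import numpy as np

def reflect(xc, xh, alpha=1.3):
    """Mapping: x_r = x_c + alpha*(x_c - x_h), alpha > 1."""
    return xc + alpha * (xc - xh)

def expand(xc, xr, beta=1.0):
    """Expansion: x_e = x_c + beta*(x_r - x_c), beta >= 1."""
    return xc + beta * (xr - xc)

def shrink(xc, xh, gamma=0.7):
    """Shrinkage: x_k = x_c - gamma*(x_c - x_h), 0 < gamma < 1."""
    return xc - gamma * (xc - xh)

def compress(verts, l, delta=0.5):
    """Compression: pull every vertex toward the best point x_l;
    the best point itself maps to itself."""
    xl = verts[l]
    return np.array([xl + delta * (v - xl) for v in verts])
```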

The basic idea of the complex method is to change the complex shape step by step through continuous iteration, so that the complex shape is gradually compressed toward the optimal solution, and the iteration terminates [

\[
\frac{1}{k} \left\{ \sum_{i=1}^{k} \big[ f(x_i) - f(x_j) \big]^2 \right\} \le \varepsilon \tag{9}
\]

where x_j = (1/k) ∑_{i=1}^{k} x_i is the mean of the k vertices.

The specific steps of the complex method are as follows. Set the parameters α, β, γ, δ and the convergence parameter ε > 0, and determine the number of vertices of the complex shape: if the decision variable is n-dimensional, the number of vertices should be between n + 1 and 2n.

1) Generate the initial complex shape. The steps of generating the initial complex shape by using the random method given in this paper are used to get the initial complex shape satisfying the requirements;

2) Calculate the function value of each vertex in the current complex shape, and sort out the worst point x h , the secondary bad point x s , the best point x l , and calculate the centroid x c of the current complex shape;

3) According to the mapping coefficient α and the mapping formula, the mapping point x r is calculated:

a) If the mapping point x r is within the feasible region, step 4) is carried out;

b) If the mapping point x r is not in the feasible region, we reduce the mapping coefficient α , that is α = 0.5 α , and then repeat step 3);

4) Calculate the function value of the mapping point x r , and compare the function value of x r with the vertex of the current complex shape:

a) If f(x_r) < f(x_l), the expansion step is carried out: using the expansion formula, the expansion point x_e is obtained. If f(x_e) < f(x_r), we replace x_h with x_e to get a new polyhedron and carry out step 6); otherwise, we replace x_h with x_r to get a new polyhedron and carry out step 6);

b) If f ( x l ) < f ( x r ) < f ( x h ) , x r is used instead of x h to get a new polyhedron, and step 6) is carried out;

c) If f(x_r) ≥ f(x_h), compare the mapping coefficient α with 10^{−5}: if α > 10^{−5}, we reduce α by setting α = 0.5α and repeat step 3); otherwise, we carry out the contraction step of the complex method, using the contraction formula x_k = x_c − γ(x_c − x_h) to calculate the contraction point x_k, and then carry out step 5);

5) Compare the function values of the contraction point and the worst point x h : if f ( x k ) < f ( x h ) , we replace x h with x k to get a new polyhedron, and carry out step 6); otherwise, the compression step of the complex method is carried out to get a new complex shape. Then step 2) is carried out;

6) Judge whether the current complex shape meets the termination condition (1/k) ∑_{i=1}^{k} [f(x_i) − f(x_j)]² ≤ ε. If it does, we stop the iteration; the current best vertex is taken as the optimal solution and its function value as the optimal value. Otherwise, step 2) is carried out.
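Steps 1)-6) can be assembled into a single loop. The sketch below is illustrative, not the paper's exact implementation: it simplifies step 4c (on mapping failure it goes directly to shrinkage rather than retrying with smaller α), and its termination test uses the spread of vertex values around their mean value, a variant of criterion (9):

```python
import numpy as np

def complex_method(f, verts, feasible, alpha0=1.3, beta=1.0,
                   gamma=0.7, delta=0.5, eps=1e-6, max_iter=2000):
    """Complex method: reflect the worst vertex through the centroid of
    the others, then expand, shrink, or compress as needed."""
    verts = np.array(verts, float)
    k = len(verts)
    for _ in range(max_iter):
        fv = np.array([f(v) for v in verts])
        h, l = np.argmax(fv), np.argmin(fv)
        xc = (verts.sum(axis=0) - verts[h]) / (k - 1)  # centroid w/o worst
        alpha = alpha0
        while True:          # 3) halve alpha until the mapping point is feasible
            xr = xc + alpha * (xc - verts[h])
            if feasible(xr) or alpha < 1e-5:
                break
            alpha *= 0.5
        fr = f(xr)
        if fr < fv[l]:                       # 4a) try to expand
            xe = xc + beta * (xr - xc)
            verts[h] = xe if feasible(xe) and f(xe) < fr else xr
        elif fr < fv[h]:                     # 4b) accept the mapping point
            verts[h] = xr
        else:                                # 4c)-5) shrink, else compress
            xk = xc - gamma * (xc - verts[h])
            if feasible(xk) and f(xk) < fv[h]:
                verts[h] = xk
            else:
                verts = verts[l] + delta * (verts - verts[l])
        fv = np.array([f(v) for v in verts])
        if np.sum((fv - fv.mean()) ** 2) / k <= eps:   # 6) termination
            break
    l = np.argmin([f(v) for v in verts])
    return verts[l], f(verts[l])
```

On a smooth convex test objective the loop contracts the polyhedron around the minimizer using only function-value comparisons, which is exactly what makes it usable for the nondifferentiable objective of model (5).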

Through the concrete steps of the complex method, the nonlinear programming problem can be solved. The stochastic programming problem under LPI proposed in this paper can also be regarded as a nondifferentiable nonlinear programming problem. Therefore, the paper innovatively introduces the complex method into the solution of the model, which provides a feasible way for the stochastic programming algorithm under the uncertain probability distribution.

Combined with the two-stage stochastic programming model with recourse (5)-(7) given above, this paper applies the complex method to an instance with decision variable x ∈ R^6. At the same time, in view of differing probability distribution information of the random variables, the paper discusses the example under different levels of distribution information, so as to compare the two-stage stochastic programming model across these cases.

In model (5)-(7), the first-stage f(x) is a general convex function; in the example it is taken to be quadratic, with decision variable x ∈ R^6. As for the random variable in the second-stage recourse function, the number of scenarios is set to 7, i.e. l = 7, so P = (p_1, ⋯, p_7)^T ∈ R^7. The paper therefore considers the following stochastic programming problem:

\[
\begin{cases}
\min\limits_{x \in \mathbb{R}^6} \; \tfrac{1}{2} x^T A x + D^T x + \max\limits_{P \in \xi} \sum\limits_{i=1}^{7} p_i \, \phi(x, \omega_i) \\
\text{s.t.} \;\; Cx \le b,
\end{cases} \tag{10}
\]

\[
\phi(x, \omega_i) = \max_{y \in \mathbb{R}^6} \; -\tfrac{1}{2} y^T H y + \big(\sigma(\omega_i) - x\big)^T y
\quad \text{s.t.} \;\; W y \le q \tag{11}
\]

The parameter matrices used in the model are: A = diag(2, 2, 3, 1, 2, 1), a diagonal matrix; H ∈ R^{6×6}, the identity matrix; the other parameter matrices are as follows:

\[
D = (2, 3, 1, 4, 2, 1)^T; \quad
C = \begin{pmatrix} 3 & 1 & 0 & 2 & 1 & 3 \\ 1 & 1 & 2 & 0 & 1 & 2 \\ 2 & 3 & 1 & 4 & 0 & 3 \end{pmatrix}; \quad
b = \begin{pmatrix} 12 \\ 5 \\ 20 \end{pmatrix};
\]

\[
W = \begin{pmatrix} 1 & 0 & 2 & 1 & 1 & 3 \\ 2 & -1 & 0 & 3 & 1 & 2 \\ 3 & 2 & 1 & 0 & 1 & 1 \end{pmatrix}; \quad
q = \begin{pmatrix} 7 \\ 7 \\ 7 \end{pmatrix};
\]

In the paper, the values taken by the random vector are fixed, while the probabilities with which they occur are the uncertain information; that is, the occurrence probabilities are variable. To make the example more general, the scenario values σ(ω_i) = (σ_1(ω_i), σ_2(ω_i), σ_3(ω_i), σ_4(ω_i), σ_5(ω_i), σ_6(ω_i))^T, i = 1, ⋯, 7, are generated by uniform random numbers with lower bound (1, 2, 3, 4, 5, 6)^T and upper bound (6, 7, 8, 9, 10, 11)^T and then fixed. The value of σ(ω) is

\[
\sigma(\omega) = \begin{pmatrix}
3.0851 & 5.6016 & 3.0006 & 5.5117 & 5.7338 & 6.4617 \\
3.1800 & 2.1296 & 5.7483 & 6.1766 & 7.1018 & 7.6517 \\
3.7540 & 5.5407 & 4.4545 & 6.5541 & 9.4647 & 10.4815 \\
5.8351 & 4.7362 & 7.8634 & 7.5741 & 8.4886 & 7.0804 \\
2.1100 & 6.3537 & 4.0336 & 8.5931 & 7.4421 & 9.0587 \\
5.4643 & 3.6599 & 7.1061 & 4.2085 & 5.5383 & 8.9753 \\
1.3815 & 5.8996 & 5.1920 & 7.6173 & 9.8899 & 8.6925
\end{pmatrix}
\]

where row i is σ(ω_i)^T.
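The generation procedure just described (uniform draws between the stated bounds, fixed thereafter) can be reproduced as follows; the seed is arbitrary, so this sketch reproduces the procedure, not the specific values tabulated above:

```python
import numpy as np

# Draw the 7 scenario vectors sigma(omega_i) in R^6 once, then fix them.
# The seed is arbitrary; any draw within the bounds is a valid instance.
rng = np.random.default_rng(0)
lob = np.array([1., 2., 3., 4., 5., 6.])
upb = np.array([6., 7., 8., 9., 10., 11.])
sigma = lob + rng.random((7, 6)) * (upb - lob)  # row i = sigma(omega_i)^T
```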

Combined with the matrix parameters of the model given above, according to the completeness of the probability distribution information of the designed random variables, the paper analyzes and discusses the examples in three cases.

Case (1):

It is assumed that the probability distribution of random variables involved in the model does not have too much effective information, and only has the following linear partial information constraints:

\[
\xi = \left\{ P = (p_1, \cdots, p_7)^T \in \mathbb{R}^7 \;\middle|\; \sum_{i=1}^{7} p_i = 1, \; p_i \ge 0, \; i = 1, \cdots, 7 \right\}
\]

This means that the occurrence of the random scenarios is arbitrary and their exact probabilities cannot be known. For such a problem, the robust decision scheme designed in this paper finds the optimal decision under the maximal (worst-case) recourse function, so as to guarantee that the actual result will be no worse than the expected decision result.

In this paper, the first vertex of the initial complex shape is taken as x_0 = (0, 0, 0, 0, 0, 0)^T and the number of vertices is set to 12. As introduced in the vertex transformation methods, the mapping coefficient satisfies α > 1, the expansion coefficient β ≥ 1, the contraction coefficient 0 < γ < 1 and the compression coefficient 0 < δ < 1, and the smaller the convergence parameter ε, the higher the accuracy of the algorithm. The parameters of the complex method are therefore taken as α = 1.3, β = 1, γ = 0.7, δ = 0.5, ε = 10^{−6}. Running the program, the iterative process is shown in

As shown in the table, the algorithm stops after 448 iterations, yielding the optimal solution x = (−2.1646, 0.7194, −0.3065, −0.4003, 1.3779, −0.7288) and the optimal value W_1 = 62.2188 of the stochastic program. To illustrate the iterative process of the complex method more intuitively, the paper presents the graph of the optimal value against the number of iterations w, as shown in

Iteration times w | optimal solution x | optimal value W |
---|---|---|
1 | (−1.7197, 1.5025, 0.4358, 1.4467, 2.7328, −2.2133) | 71.8494 |
2 | (−1.7197, 1.5025, 0.4358, 1.4467, 2.7328, −2.2133) | 71.8494 |
… | … | … |
19 | (−1.1580, 2.5627, −0.4518, −0.0830, 1.2015, −2.5338) | 69.2828 |
20 | (−0.0872, 1.3478, −0.0326, 0.7095, 1.5657, −0.2709) | 67.9572 |
21 | (−0.0872, 1.3478, −0.0326, 0.7095, 1.5657, −0.2709) | 67.9572 |
… | … | … |
447 | (−2.1646, 0.7194, −0.3065, −0.4003, 1.3779, −0.7288) | 62.2188 |
448 | (−2.1646, 0.7194, −0.3065, −0.4003, 1.3779, −0.7288) | 62.2188 |

It can be seen that the optimal value of the model gradually decreases as the number of iterations grows, steadily approaching the optimal solution. The final optimal value converges to 62.2188, which shows that the complex method has good convergence for this stochastic program and that the designed algorithm is effective.

Case (2):

Compared with case (1), the probability distribution information of the random variables in case (2) is more complete: the distribution carries some linear constraint information. In addition to the constraints of case (1), case (2) supposes that the probability distribution satisfies the following linear constraints:

\[
\begin{cases}
p_1 + p_2 + p_3 \le \tfrac{1}{2} \\
p_4 + p_5 \le \tfrac{1}{3} \\
p_6 + p_7 \le \tfrac{1}{3} \\
\tfrac{1}{9} \le p_7 \le \tfrac{1}{5}
\end{cases}
\]

Let

\[
B = \begin{pmatrix}
1 & 1 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & -1
\end{pmatrix}, \qquad
d = \left( \tfrac{1}{2}, \; \tfrac{1}{3}, \; \tfrac{1}{3}, \; \tfrac{1}{5}, \; -\tfrac{1}{9} \right)^T,
\]

then the probability distribution of the random variables in case (2) has the following linear partial information:
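The correspondence between the four scalar constraints and the matrix form B P ≤ d can be checked programmatically; a small sketch with an illustrative membership test (the helper name is hypothetical):

```python
import numpy as np

# LPI constraints of case (2) written as B P <= d:
# p1+p2+p3 <= 1/2, p4+p5 <= 1/3, p6+p7 <= 1/3, p7 <= 1/5, -p7 <= -1/9
B = np.array([
    [1, 1, 1, 0, 0, 0, 0],
    [0, 0, 0, 1, 1, 0, 0],
    [0, 0, 0, 0, 0, 1, 1],
    [0, 0, 0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0, 0, -1],
], float)
d = np.array([1/2, 1/3, 1/3, 1/5, -1/9])

def in_lpi(P, tol=1e-9):
    """Check membership of P in the LPI set xi of case (2)."""
    return (np.all(B @ P <= d + tol)
            and abs(P.sum() - 1.0) <= tol
            and np.all(P >= -tol))
```

Note that the two-sided bound on p_7 becomes two rows of B, with the lower bound negated.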

\[
\xi = \left\{ P = (p_1, \cdots, p_7)^T \in \mathbb{R}^7 \;\middle|\; BP \le d, \; \sum_{i=1}^{7} p_i = 1, \; p_i \ge 0, \; i = 1, \cdots, 7 \right\}
\]

In this case, the other relevant parameters set in case (1) are kept unchanged. The complex method is used to solve the stochastic programming problem in case (2), and the robust decision scheme and result in case (2) are given. The results of the iterative process are shown in

The program terminates after 431 iterations, with optimal solution x = (−2.0086, 0.6482, −0.4208, −0.7191, 0.9701, −0.2265) and optimal value W_2 = 56.1144. The optimal value of case (2) is better than that of case (1), which shows that more complete probability distribution information yields a better decision result. The change of the optimal value with the number of iterations w is shown in

Case (3):

In order to compare the influence of the completeness of the probability distribution information on the decision result, the probabilities of the random variables in case (3) are set to fixed values. The other parameters of the stochastic programming model are kept consistent with cases (1) and (2), and the probability distribution is set as P = (3/25, 3/25, 1/5, 3/25, 3/25, 1/5, 3/25)^T; that is, the example reduces to the classical stochastic programming model. The result of case (3), obtained by the complex method under complete probability information, is shown in

The experimental results show that the program ends after 385 iterations, with optimal solution x = (−1.6394, 0.1992, −0.1810, −1.0080, 0.5954, −0.6059) and optimal value W_3 = 45.1761. The optimal value of case (3) is far less than those of cases (1) and (2), which again shows that when the probability distribution information of the random variables is complete, a better decision result can be obtained. The trend of the optimal value iteration in case (3) is shown in

In order to illustrate the practical significance of the stochastic programming model under an uncertain probability distribution, the paper substitutes the optimal solution of case (3) into the objective function of case (1); the difference in optimal values is 64.3512 − 62.2188 = 2.1324. Similarly, the

Iteration times w | optimal solution x | optimal value W |
---|---|---|
1 | (0, 0, 0, 0, 0, 0) | 63.7740 |
2 | (0, 0, 0, 0, 0, 0) | 63.7740 |
… | … | … |
29 | (0.1711, 0.3808, −0.1774, 0.7904, 0.3968, 0.2465) | 63.5492 |
30 | (0.1711, 0.3808, −0.1774, 0.7904, 0.3968, 0.2465) | 63.5492 |
31 | (−1.3699, 1.2283, −0.2804, 0.6253, −0.0063, −1.1811) | 60.6809 |
… | … | … |
431 | (−2.0086, 0.6482, −0.4208, −0.7191, 0.9701, −0.2265) | 56.1144 |
431 | (−2.0086, 0.6482, −0.4208, −0.7191, 0.9701, −0.2265) | 56.1144 |

Iteration times w | optimal solution x | optimal value W |
---|---|---|
1 | (0, 0, 0, 0, 0, 0) | 49.6901 |
2 | (0, 0, 0, 0, 0, 0) | 49.6901 |
… | … | … |
31 | (−0.5435, 0.3253, −0.5615, −0.0954, 1.3423, −0.8559) | 47.7653 |
32 | (−1.6089, 0.8900, −0.8656, −0.1203, 0.1785, −0.4645) | 47.6223 |
33 | (−2.1276, 0.3484, 0.2670, 0.0070, 1.2468, −0.6278) | 47.3861 |
… | … | … |
384 | (−1.6394, 0.1992, −0.1810, −1.0080, 0.5954, −0.6059) | 45.1761 |
385 | (−1.6394, 0.1992, −0.1810, −1.0080, 0.5954, −0.6059) | 45.1761 |

optimal solution of case (3) is substituted into case (2), and the difference in optimal values is 57.1422 − 56.1144 = 1.0278. The differences 2.1324 and 1.0278 are the losses incurred, when the probability distribution information of the random variables is inaccurate, by using the classical stochastic programming model. This fully demonstrates the practical significance of the stochastic programming model based on the maximum minimum expectation criterion under an uncertain probability distribution: the model can effectively reduce the decision loss in stochastic programming problems with incomplete probability distribution information.

Under the guidance of linear partial information theory, a stochastic programming model with an uncertain probability distribution was established based on the maximum minimum expectation criterion. In view of the nondifferentiability of the model, a solution method based on the complex method was designed. Finally, the algorithm was applied to several concrete examples, which demonstrate the value of the model in practical problems and the effectiveness of the designed solution algorithm.

The authors declare no conflicts of interest regarding the publication of this paper.

Luo, Y.P. and Ma, X.S. (2020) A Complex Algorithm for Solving a Kind of Stochastic Programming. Journal of Applied Mathematics and Physics, 8, 1016-1030. https://doi.org/10.4236/jamp.2020.86079