In this paper, we focus on a new approach based on new generalized hesitant fuzzy hybrid weighted aggregation operators, in which the evaluation information provided by decision makers is expressed in hesitant fuzzy elements (HFEs) and the information about the attribute weights and the aggregation-associated weight vector is unknown. More explicitly, some new generalized hesitant fuzzy hybrid weighted aggregation operators are proposed, such as the new generalized hesitant fuzzy hybrid weighted averaging (NGHFHWA) operator and the new generalized hesitant fuzzy hybrid weighted geometric (NGHFHWG) operator. Some desirable properties and the relationships between them are discussed. Then, a new algorithm for hesitant fuzzy multi-attribute decision making (HF-MADM) problems with unknown weight information is introduced. Further, a practical example is used to illustrate the detailed implementation process of the proposed approach. A sensitivity analysis of the decision results is carried out for different parameter values. Finally, comparative studies are given to verify the advantages of our method.

Hesitant fuzzy multiple attribute decision making (HF-MADM) can be characterized as a process of selecting or ranking a finite number of alternatives to obtain the best one(s), in which the alternative evaluations are expressed in HFEs by decision makers. It has been successfully applied in various areas, such as risk investment [

Aggregation operators are widely used in HF-MADM problems to compute the overall aggregated values of the alternatives. Xia et al. [

It is noted that the weight vector of these hesitant fuzzy aggregation operators plays an important part in decision making problems. An important issue related to the hesitant fuzzy aggregation operators is therefore to choose an optimal method to obtain their associated weights. For example, Xu and Zhang [

1) The new generalized hesitant fuzzy hybrid weighted aggregation operators satisfy several desirable properties, including idempotency and boundedness.

2) The new algorithm can deal with HF-MADM problems with unknown weight information. In particular, the aggregation-associated weight vector and the attribute weights are derived from the known HFEs themselves.

This paper is organized as follows. In Section 2, we review some basic concepts of HFSs, the distance measure of HFEs, the generalized hesitant fuzzy hybrid averaging (GHFHA) operator, the generalized hesitant fuzzy hybrid geometric (GHFHG) operator, the generalized hesitant fuzzy hybrid weighted averaging (GHFHWA) operator and the generalized hesitant fuzzy hybrid weighted geometric (GHFHWG) operator. Section 3 proposes some new generalized hesitant fuzzy hybrid weighted aggregation operators and investigates the properties and relationships of these operators. Section 4 presents a new algorithm that applies the proposed operators to MADM problems in which the aggregation-associated weight vector and the attribute weights are unknown. In Section 5, a practical example is presented to verify the effectiveness and practicality of our approach. In Section 6, comparative studies are given to clarify the advantages of our proposed method. Conclusions and future work are given in Section 7.

Definition 1. ( [

A = { 〈 x , h A ( x ) 〉 | x ∈ X }

where h A ( x ) is a set of values in [ 0,1 ] , denoting the possible membership degrees of the element x ∈ X to a set A. For convenience, we call h A ( x ) a hesitant fuzzy element (HFE), denoted by h.

In many decision making problems, the memberships of HFSs are nonempty and finite subsets of [ 0,1 ] , which are called typical hesitant fuzzy sets (THFSs) [

s ( h ) = 1 l ∑ i = 1 l γ ( i ) . (1)

The hesitant fuzzy order central polymerization degree function of h is defined as

p ( h ) = 1 − 1 l ∑ i = 1 l | γ ( i ) − s ( h ) | , (2)

where l is the number of elements in h. Based on the score function s ( h ) and the hesitant fuzzy order central polymerization degree function p ( h ) , a comparison scheme can be developed to rank any two HFEs h 1 and h 2 :

If s ( h 1 ) < s ( h 2 ) , then h 1 < h 2 ;

If s ( h 1 ) = s ( h 2 ) , then

If p ( h 1 ) < p ( h 2 ) , then h 1 < h 2 ;

If p ( h 1 ) = p ( h 2 ) , then h 1 = h 2 .

Definition 3. ( [

1) h 1 ⊕ h 2 = ∪ γ 1 ∈ h 1 , γ 2 ∈ h 2 { γ 1 + γ 2 − γ 1 γ 2 } ,

2) h 1 ⊗ h 2 = ∪ γ 1 ∈ h 1 , γ 2 ∈ h 2 { γ 1 γ 2 } ,

3) h k = ∪ γ ∈ h { γ k } ,

4) k h = ∪ γ ∈ h { 1 − ( 1 − γ ) k } .

The above operations are the basic operational laws of HFEs. However, they have some drawbacks. Note that if h 1 = h 2 = h , then h 1 ⊕ h 2 ≠ 2 h 1 and h 1 ⊕ h 2 ≠ 2 h 2 . For example, suppose that h 1 , h 2 are two HFEs with h 1 = h 2 = { 0.2 , 0.4 , 0.5 } . Then we get h 1 ⊕ h 2 = { 0.36 , 0.52 , 0.6 , 0.52 , 0.64 , 0.7 , 0.6 , 0.7 , 0.75 } and 2 h 1 = 2 h 2 = { 0.36 , 0.64 , 0.75 } , so h 1 ⊕ h 2 ≠ 2 h 1 and h 1 ⊕ h 2 ≠ 2 h 2 . Moreover, both the addition and multiplication operations of HFEs increase the number of elements in the derived HFE, which makes the calculation process complicated. Liao et al. [
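The drawback is easy to reproduce. A short sketch of the Definition 3 operations (function names are ours):

```python
def hfe_add(h1, h2):
    """h1 (+) h2 of Definition 3: all pairwise combinations."""
    return [g1 + g2 - g1 * g2 for g1 in h1 for g2 in h2]

def hfe_scale(k, h):
    """Scalar multiplication k*h of Definition 3."""
    return [1 - (1 - g) ** k for g in h]

h = [0.2, 0.4, 0.5]
print(sorted(round(v, 2) for v in hfe_add(h, h)))
# [0.36, 0.52, 0.52, 0.6, 0.6, 0.64, 0.7, 0.7, 0.75]  -- nine values
print([round(v, 2) for v in hfe_scale(2, h)])
# [0.36, 0.64, 0.75]  -- three values, so h (+) h != 2h
```

The nine-element result of h ⊕ h versus the three-element 2h also shows how repeated use of ⊕ inflates the size of the derived HFE.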

Definition 4. ( [

1) h 1 ⊕ ˙ h 2 = ∪ i = 1 l { γ 1 ( i ) + γ 2 ( i ) − γ 1 ( i ) γ 2 ( i ) } ,

2) h 1 ⊗ ˙ h 2 = ∪ i = 1 l { γ 1 ( i ) γ 2 ( i ) } ,

where l = max { l 1 , l 2 } . If l 1 < l 2 , an extension of h 1 should be considered optimistically by repeating its maximum element until it has the same length as h 2 .
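A sketch of the component-wise operations of Definition 4; we assume elements are arranged in ascending order before pairing, which the γ ( i ) indexing suggests (function names are ours):

```python
def extend(h, l):
    """Optimistic extension: repeat the maximum element until length l."""
    return sorted(h) + [max(h)] * (l - len(h))

def dot_add(h1, h2):
    """Component-wise addition of Definition 4 (ascending order assumed)."""
    l = max(len(h1), len(h2))
    return [x + y - x * y for x, y in zip(extend(h1, l), extend(h2, l))]

def dot_mul(h1, h2):
    """Component-wise multiplication of Definition 4."""
    l = max(len(h1), len(h2))
    return [x * y for x, y in zip(extend(h1, l), extend(h2, l))]

h = [0.2, 0.4, 0.5]
print([round(v, 2) for v in dot_add(h, h)])  # [0.36, 0.64, 0.75]
```

With these operations h ⊕˙ h equals 2h, so the drawback noted above disappears, and the result never grows beyond l elements.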

Definition 5. ( [

d ( h 1 , h 2 ) = 1 l ∑ i = 1 l | γ 1 ( i ) − γ 2 ( i ) | . (3)
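The distance of Definition 5, preceded by the optimistic extension of Definition 4 (ascending order assumed; the function name is ours):

```python
def distance(h1, h2):
    """Hesitant fuzzy distance of Definition 5, after optimistic extension
    to a common length (elements paired in ascending order)."""
    l = max(len(h1), len(h2))
    a = sorted(h1) + [max(h1)] * (l - len(h1))
    b = sorted(h2) + [max(h2)] * (l - len(h2))
    return sum(abs(x - y) for x, y in zip(a, b)) / l

# e.g. the G1 evaluations of A1 and A2 from the matrix in Section 5
print(round(distance([0.3, 0.5], [0.6, 0.7, 0.8]), 4))  # 0.2667
```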

Definition 6. ( [

1) the generalized hesitant fuzzy hybrid averaging (GHFHA) operator:

GHFHA ( h 1 , h 2 , ⋯ , h n ) = ( ⊕ j = 1 n ω j h ˙ σ ( j ) p ) 1 p = ∪ γ ˙ σ ( 1 ) ∈ h ˙ σ ( 1 ) , γ ˙ σ ( 2 ) ∈ h ˙ σ ( 2 ) , ⋯ , γ ˙ σ ( n ) ∈ h ˙ σ ( n ) { ( 1 − ∏ j = 1 n ( 1 − γ ˙ σ ( j ) p ) ω j ) 1 p } , (4)

where p > 0 , h ˙ σ ( j ) is the jth largest of h ˙ = n λ k h k ( k = 1 , 2 , ⋯ , n ) .

2) the generalized hesitant fuzzy hybrid geometric (GHFHG) operator:

GHFHG ( h 1 , h 2 , ⋯ , h n ) = 1 p ( ⊗ j = 1 n ( p h ¨ σ ( j ) ) ω j ) = ∪ γ ¨ σ ( 1 ) ∈ h ¨ σ ( 1 ) , γ ¨ σ ( 2 ) ∈ h ¨ σ ( 2 ) , ⋯ , γ ¨ σ ( n ) ∈ h ¨ σ ( n ) { 1 − ( 1 − ∏ j = 1 n ( 1 − ( 1 − γ ¨ σ ( j ) ) p ) ω j ) 1 p } , (5)

where p > 0 , h ¨ σ ( j ) is the jth largest of h ¨ k = h k n λ k ( k = 1 , 2 , ⋯ , n ) .

3) the generalized hesitant fuzzy hybrid weighted averaging (GHFHWA) operator:

GHFHWA ( h 1 , h 2 , ⋯ , h n ) = ( ⊕ j = 1 n λ j ω ε ( j ) h j p ∑ j = 1 n λ j ω ε ( j ) ) 1 p = ∪ γ 1 ∈ h 1 , γ 2 ∈ h 2 , ⋯ , γ n ∈ h n { ( 1 − ∏ j = 1 n ( 1 − γ j p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ) 1 p } , (6)

where ε : { 1,2, ⋯ , n } → { 1,2, ⋯ , n } is a permutation such that h j is the ε ( j ) th largest element of the collection of HFEs h j ( j = 1 , 2 , ⋯ , n ) , and p is a parameter such that p ∈ ( − ∞ , + ∞ ) .

4) the generalized hesitant fuzzy hybrid weighted geometric (GHFHWG) operator:

GHFHWG ( h 1 , h 2 , ⋯ , h n ) = ( ⊗ j = 1 n ( h j p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ) 1 p

= ∪ γ 1 ∈ h 1 , γ 2 ∈ h 2 , ⋯ , γ n ∈ h n { ( ∏ j = 1 n ( γ j ) p λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ) 1 p } , (7)

where ε : { 1,2, ⋯ , n } → { 1,2, ⋯ , n } is a permutation such that h j is the ε ( j ) th largest element of the collection of HFEs h j ( j = 1 , 2 , ⋯ , n ) , and p is a parameter such that p ∈ ( − ∞ , + ∞ ) .

In this section, we replace the operations ⊕ and ⊗ in Equation (6) and Equation (7) by ⊕ ˙ and ⊗ ˙ , respectively, and obtain the new generalized hesitant fuzzy hybrid weighted averaging (NGHFHWA) operator and the new generalized hesitant fuzzy hybrid weighted geometric (NGHFHWG) operator.

Definition 7. For a collection of HFEs h j = ∪ i = 1 l { γ j ( i ) } , l = max j { l j } , ( j = 1 , 2 , ⋯ , n ) , the following new generalized hesitant fuzzy hybrid weighted aggregation operators are defined by the mapping: H n → H with an associated weight vector ω = ( ω 1 , ω 2 , ⋯ , ω n ) T such that ω j ∈ [ 0,1 ] and ∑ j = 1 n ω j = 1 . Then

1) the NGHFHWA operator:

NGHFHWA ( h 1 , h 2 , ⋯ , h n ) = ( ⊕ ˙ j = 1 n λ j ω ε ( j ) h j p ∑ j = 1 n λ j ω ε ( j ) ) 1 p (8)

2) the NGHFHWG operator:

NGHFHWG ( h 1 , h 2 , ⋯ , h n ) = 1 p ( ⊗ ˙ j = 1 n ( p h j ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ) (9)

where ε : { 1,2, ⋯ , n } → { 1,2, ⋯ , n } is a permutation such that h j is the ε ( j ) th largest element of the collection of HFEs h j ( j = 1 , 2 , ⋯ , n ) , λ = ( λ 1 , λ 2 , ⋯ , λ n ) T is the weight vector of HFEs h j ( j = 1 , 2 , ⋯ , n ) with λ j ∈ [ 0,1 ] and ∑ j = 1 n λ j = 1 , p is a parameter such that p > 0 .

Notice that p is a positive parameter, since a negative multiple of an HFE h j has no meaning.

Especially, if p = 1 , then the NGHFHWA and NGHFHWG operators reduce to the new hesitant fuzzy hybrid weighted averaging (NHFHWA) operator and the new hesitant fuzzy hybrid weighted geometric (NHFHWG) operator, respectively:

NHFHWA ( h 1 , h 2 , ⋯ , h n ) = ⊕ ˙ j = 1 n λ j ω ε ( j ) h j ∑ j = 1 n λ j ω ε ( j )

NHFHWG ( h 1 , h 2 , ⋯ , h n ) = ⊗ ˙ j = 1 n h j λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j )

Remark 1. In some decision making problems, t HFEs may be equal in a collection of HFEs h j ( j = 1 , 2 , ⋯ , n ) , denoted by h i 1 = h i 2 = ⋯ = h i t . According to Definition 7, we get ε ( i 1 ) = ε ( i 2 ) = ⋯ = ε ( i t ) , and thus ω ε ( i 1 ) = ω ε ( i 2 ) = ⋯ = ω ε ( i t ) = a , but then ∑ j = 1 n ω ε ( j ) = b ≠ 1 in general. In order to ensure ∑ j = 1 n ω ε ( j ) = 1 , since h i 1 = h i 2 = ⋯ = h i t , we distribute 1 − b equally among ω ε ( i 1 ) , ω ε ( i 2 ) , ⋯ , ω ε ( i t ) , so that ω ε ( i 1 ) = ω ε ( i 2 ) = ⋯ = ω ε ( i t ) = a + ( 1 − b ) / t .

Theorem 1. For a collection of HFEs h j = ∪ i = 1 l { γ j ( i ) } , l = max j { l j } , ( j = 1 , 2 , ⋯ , n ) , the aggregated value by using the NGHFHWA operator or the NGHFHWG operator is also a HFE, and

NGHFHWA ( h 1 , h 2 , ⋯ , h n ) = ∪ i = 1 l { ( 1 − ∏ j = 1 n ( 1 − ( γ j ( i ) ) p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ) 1 p } , (10)

NGHFHWG ( h 1 , h 2 , ⋯ , h n ) = ∪ i = 1 l { 1 − ( 1 − ∏ j = 1 n ( 1 − ( 1 − γ j ( i ) ) p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ) 1 p } , (11)

where ω = ( ω 1 , ω 2 , ⋯ , ω n ) T is an associated weight vector with ω j ∈ [ 0,1 ] and ∑ j = 1 n ω j = 1 , ε : { 1,2, ⋯ , n } → { 1,2, ⋯ , n } is a permutation such that h j is the ε ( j ) th largest element of the collection of HFEs h j ( j = 1 , 2 , ⋯ , n ) , λ = ( λ 1 , λ 2 , ⋯ , λ n ) T is the weight vector of HFEs h j ( j = 1 , 2 , ⋯ , n ) with λ j ∈ [ 0,1 ] and ∑ j = 1 n λ j = 1 , p is a parameter such that p > 0 .
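A sketch of the closed forms (10) and (11). The permutation ε is passed in explicitly as 1-based positions, and the exponents λ j ω ε ( j ) / ∑ j λ j ω ε ( j ) are normalized inside the function (all names are ours):

```python
from math import prod

def nghfhwa(hfes, lam, omega, eps, p=1.0):
    """NGHFHWA operator via the closed form (10).

    hfes  : list of n HFEs, each already normalized to the same length l
    lam   : attribute weights lambda_j
    omega : associated weights omega_k
    eps   : eps[j] is epsilon(j), the 1-based rank position of h_j
    """
    n = len(hfes)
    w = [lam[j] * omega[eps[j] - 1] for j in range(n)]
    t = sum(w)
    w = [x / t for x in w]  # normalized exponents, summing to 1
    l = len(hfes[0])
    return [(1 - prod((1 - hfes[j][i] ** p) ** w[j] for j in range(n))) ** (1 / p)
            for i in range(l)]

def nghfhwg(hfes, lam, omega, eps, p=1.0):
    """NGHFHWG operator via the closed form (11), using the duality
    NGHFHWG(h_1,...,h_n) = 1 - NGHFHWA(1-h_1,...,1-h_n) element-wise."""
    comp = nghfhwa([[1 - g for g in h] for h in hfes], lam, omega, eps, p)
    return [1 - v for v in comp]

h = [0.2, 0.4, 0.5]
print(nghfhwa([h, h, h], [0.3, 0.3, 0.4], [0.2429, 0.5142, 0.2429], [1, 2, 3], p=2))
# ≈ [0.2, 0.4, 0.5]: idempotency (Theorem 2)
```

The duality used in `nghfhwg` is exactly the relationship between (10) and (11): replacing every γ by 1 − γ in (10) and complementing the result yields (11).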

In the following, we show that both the NGHFHWA operator and the NGHFHWG operator satisfy the properties of idempotency and boundedness, and other desirable properties.

Theorem 2. (Idempotency) If h j = h ( j = 1 , 2 , ⋯ , n ) , then

NGHFHWA ( h 1 , h 2 , ⋯ , h n ) = h ,

NGHFHWG ( h 1 , h 2 , ⋯ , h n ) = h .

Theorem 3. (Boundedness) For a collection of HFEs h j = ∪ i = 1 l { γ j ( i ) } , l = max j { l j } ( j = 1 , 2 , ⋯ , n ) , the following inequalities hold:

h − ≤ NGHFHWA ( h 1 , h 2 , ⋯ , h n ) ≤ h + , (12)

h − ≤ NGHFHWG ( h 1 , h 2 , ⋯ , h n ) ≤ h + , (13)

where h − = min 1 ≤ i ≤ l min 1 ≤ j ≤ n γ j ( i ) and h + = max 1 ≤ i ≤ l max 1 ≤ j ≤ n γ j ( i ) .

Lemma 1. ( [

Theorem 4. For a collection of HFEs h j = ∪ i = 1 l { γ j ( i ) } , l = max j { l j } ( j = 1 , 2 , ⋯ , n ) , we have

NHFHWG ( h 1 , h 2 , ⋯ , h n ) ≤ NGHFHWA ( h 1 , h 2 , ⋯ , h n ) ,

NGHFHWG ( h 1 , h 2 , ⋯ , h n ) ≤ NHFHWA ( h 1 , h 2 , ⋯ , h n ) .

Theorem 5. For a collection of HFEs h j = ∪ i = 1 l { γ j ( i ) } , l = max j { l j } , ( j = 1 , 2 , ⋯ , n ) , the NGHFHWA operator is monotonically increasing and the NGHFHWG operator is monotonically decreasing with respect to the parameter p.

Lemma 2. For a collection of HFEs h j = ∪ i = 1 l { γ j ( i ) } ( l = max j { l j } , j = 1 , 2 , ⋯ , n ) , p is a parameter such that p > 0 . Let

f ( p ) = ( 1 − ∏ j = 1 n ( 1 − ( γ j ( i ) ) p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ) 1 p ,

g ( p ) = ( 1 − ∏ j = 1 n ( 1 − ( 1 − γ j ( i ) ) p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ) 1 p .

Then

lim p → 0 f ( p ) = exp ( − ∏ j = 1 n ( − ln γ j ( i ) ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ) (14)

lim p → 0 g ( p ) = exp ( − ∏ j = 1 n ( − ln ( 1 − γ j ( i ) ) ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ) (15)

where ω = ( ω 1 , ω 2 , ⋯ , ω n ) T is an associated weight vector with ω j ∈ [ 0,1 ] and ∑ j = 1 n ω j = 1 , ε : { 1,2, ⋯ , n } → { 1,2, ⋯ , n } is a permutation such that h j is the ε ( j ) th largest element of the collection of HFEs h j ( j = 1 , 2 , ⋯ , n ) , λ = ( λ 1 , λ 2 , ⋯ , λ n ) T is the weight vector of HFEs h j ( j = 1 , 2 , ⋯ , n ) with λ j ∈ [ 0,1 ] and ∑ j = 1 n λ j = 1 .
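Writing the exponent as w j = λ j ω ε ( j ) / ∑ j λ j ω ε ( j ) and making the signs explicit, our reading of the limit in Equation (14) is exp ( − ∏ j ( − ln γ j ( i ) ) w j ) , which can be checked numerically (function names are ours):

```python
from math import exp, log, prod

def f(p, gammas, w):
    """f(p) of Lemma 2 for one index i; w are the normalized exponents
    lambda_j * omega_eps(j) / sum_j lambda_j * omega_eps(j), summing to 1."""
    return (1 - prod((1 - g ** p) ** wj for g, wj in zip(gammas, w))) ** (1 / p)

def f_limit(gammas, w):
    """Closed-form limit of f(p) as p -> 0: exp(-prod_j (-ln gamma_j)^{w_j})."""
    return exp(-prod((-log(g)) ** wj for g, wj in zip(gammas, w)))

gammas, w = [0.5, 0.8], [0.5, 0.5]
print(round(f(1e-4, gammas, w), 4), round(f_limit(gammas, w), 4))  # both ≈ 0.6748
```

The same substitution γ → 1 − γ gives the corresponding limit of g ( p ) in Equation (15).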

Theorem 6. For a collection of HFEs h j = ∪ i = 1 l { γ j ( i ) } , l = max j { l j } ( j = 1 , 2 , ⋯ , n ) , we have

NGHFHWG ( h 1 , h 2 , ⋯ , h n ) ≤ NGHFHWA ( h 1 , h 2 , ⋯ , h n ) .

The proofs of Theorems 1-6 and Lemma 2 can be found in the Appendix.

Consider that decision makers intend to evaluate a collection of alternatives A = { A 1 , A 2 , ⋯ , A m } with respect to the attributes G = { G 1 , G 2 , ⋯ , G n } . Suppose that h i j is an attribute value given by the decision makers, which is a HFE for alternative A i with respect to attribute G j . All h i j ( i = 1 , 2 , ⋯ , m ; j = 1 , 2 , ⋯ , n ) form the hesitant fuzzy decision matrix H = ( h i j ) m × n , h i j = ∪ t = 1 l i j { γ i j ( t ) } . We assume that all the decision makers are optimistic: optimists anticipate desirable outcomes, so shorter HFEs are extended by repeating their maximum membership degree. We thus obtain a normalized decision matrix H ˜ = ( h ˜ i j ) m × n , h ˜ i j = ∪ t = 1 l { γ ˜ i j ( t ) } , l = max i , j { l i j } ( i = 1 , 2 , ⋯ , m ; j = 1 , 2 , ⋯ , n ) . The weight vector λ = ( λ 1 , λ 2 , ⋯ , λ n ) T gives the importance degree of each attribute, with λ j ∈ [ 0,1 ] and ∑ j = 1 n λ j = 1 . Meanwhile, ω = ( ω 1 , ω 2 , ⋯ , ω n ) T is the aggregation-associated weight vector, with ω j ∈ [ 0,1 ] and ∑ j = 1 n ω j = 1 .

In the following, we first determine the weights of attributes and the aggregation-associated weight vector by optimal methods, then give an algorithm for MADM problems based on new generalized hybrid weighted aggregation operators under unknown weight information.

The attribute weight vector plays an important role in MADM: it not only represents the relative importance of the attributes but also reflects the preferences of the decision makers. In order to get the optimal weight vector λ = ( λ 1 , λ 2 , ⋯ , λ n ) T of the attributes under completely unknown weight information, we extend the maximizing deviation method [

For the nonlinear programming model (M-1):

(M-1) { max D ( λ ) = ∑ j = 1 n ∑ i = 1 m ∑ k = 1 m λ j ( 1 l ∑ t = 1 l | γ ˜ i j ( t ) − γ ˜ k j ( t ) | ) s .t . λ j ≥ 0 , j = 1 , 2 , ⋯ , n , ∑ j = 1 n λ j 2 = 1

As the calculation in [

λ j = Y j ∑ j = 1 n Y j , j = 1 , 2 , ⋯ , n , (16)

where Y j = ∑ i = 1 m ∑ k = 1 m ( 1 l ∑ t = 1 l | γ ˜ i j ( t ) − γ ˜ k j ( t ) | ) , j = 1 , 2 , ⋯ , n .
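Equation (16) can be sketched directly; matrix[i][j] holds the normalized HFE of alternative i under attribute j (names are ours):

```python
def attribute_weights(matrix):
    """Attribute weights by the maximizing deviation method, Eq. (16).

    matrix[i][j] is the normalized HFE (a list of equal length l) of
    alternative i under attribute j; returns lambda_1..lambda_n.
    """
    m, n = len(matrix), len(matrix[0])
    l = len(matrix[0][0])
    # Y_j: total deviation over all pairs of alternatives under attribute j
    Y = [sum(sum(abs(matrix[i][j][t] - matrix[k][j][t]) for t in range(l)) / l
             for i in range(m) for k in range(m))
         for j in range(n)]
    total = sum(Y)
    return [y / total for y in Y]
```

Applied to the normalized decision matrix of Section 5, this returns λ = ( 0.2389 , 0.1327 , 0.2743 , 0.1416 , 0.0885 , 0.1239 ) T , matching Step 3 there.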

Determining the aggregation-associated weight vector ω = ( ω 1 , ω 2 , ⋯ , ω n ) T by a proper method is also an important step in applying the new generalized hesitant fuzzy hybrid weighted aggregation operators. The normal distribution based method was introduced by Xu [

Algorithm:

Step 1. Obtain a normalized decision matrix H ˜ = ( h ˜ i j ) m × n from H = ( h i j ) m × n .

Step 2. Determine ω = ( ω 1 , ω 2 , ⋯ , ω n ) T according to the value of n in

Step 3. Utilize Equation (16) to obtain the weights of attributes λ = ( λ 1 , λ 2 , ⋯ , λ n ) T .

Step 4. Utilize the new generalized hybrid weighted aggregation operators, such as NGHFHWA, NGHFHWG, to synthesize h ˜ i j into overall h ˜ i ( i = 1 , 2 , ⋯ , m ) for alternatives A i ( i = 1 , 2 , ⋯ , m ) .

Step 5. Calculate the scores s ( h ˜ i ) of the overall hesitant fuzzy values h ˜ i by Equation (1). If any two scores of alternatives are the same, calculate their p ( h ˜ i ) functions according to Equation (2) ( i = 1 , 2 , ⋯ , m ) .

Step 6. Rank all the alternatives A i ( i = 1 , 2 , ⋯ , m ) in accordance with s ( h ˜ i ) and p ( h ˜ i ) ( i = 1 , 2 , ⋯ , m ) .

| n | the aggregation-associated weight vector ω = ( ω 1 , ω 2 , ⋯ , ω n ) T |
|---|---|
| n = 2 | ω = ( 0.5,0.5 ) T |
| n = 3 | ω = ( 0.2429,0.5142,0.2429 ) T |
| n = 4 | ω = ( 0.1550,0.3450,0.3450,0.1550 ) T |
| n = 5 | ω = ( 0.1117,0.2365,0.3036,0.2365,0.1117 ) T |
| n = 6 | ω = ( 0.0865,0.1716,0.2419,0.2419,0.1716,0.0865 ) T |
| n = 7 | ω = ( 0.0702,0.1311,0.1907,0.2161,0.1907,0.1311,0.0702 ) T |
| n = 8 | ω = ( 0.0588,0.1042,0.1525,0.1845,0.1845,0.1525,0.1042,0.0588 ) T |
| n = 9 | ω = ( 0.0506,0.0855,0.1243,0.1557,0.1678,0.1557,0.1243,0.0855,0.0506 ) T |
| n = 10 | ω = ( 0.0443,0.0719,0.1034,0.1317,0.1487,0.1487,0.1317,0.1034,0.0719,0.0443 ) T |
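The table can be reproduced under a common reading of the normal distribution based method (this reading is our assumption): ω j is proportional to a normal density over the positions j = 1 , ⋯ , n with mean μ n = ( n + 1 ) / 2 and variance σ n 2 = ( 1 / n ) ∑ j ( j − μ n ) 2 .

```python
from math import exp

def normal_weights(n):
    """Aggregation-associated weights from a normal density centred at the
    middle position: mu = (n+1)/2, sigma^2 = (1/n) * sum_j (j - mu)^2."""
    mu = (n + 1) / 2
    var = sum((j - mu) ** 2 for j in range(1, n + 1)) / n
    raw = [exp(-(j - mu) ** 2 / (2 * var)) for j in range(1, n + 1)]
    s = sum(raw)
    return [r / s for r in raw]

print([round(w, 4) for w in normal_weights(3)])  # [0.2429, 0.5142, 0.2429]
```

For n = 6 the computed weights agree with the corresponding table row to within the table's four-decimal rounding.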

With the development of internet technology, more and more people tend to use smartphones to get information rather than reading print newspapers. Newspapers, as a traditional industry, must expand their business through new media to keep pace with the times. As a government procurement function department, the Public Resource Trading Center decided to purchase a WeChat live broadcasting system for the Haimen Daily newspaper. The aim of our example is to help government decision makers select a proper supplier according to the following six attributes: 1) G 1 is the price; 2) G 2 is the quality; 3) G 3 is the technology; 4) G 4 is the green development degree; 5) G 5 is the reputation and 6) G 6 is the after-sales service. It is assumed that four suppliers A i ( i = 1 , 2 , 3 , 4 ) are participating in the tender according to the tender request. In real world applications, decision makers may find it hard to decide which supplier should be selected due to their limited knowledge, and hesitant fuzzy sets can represent such evaluation data. The evaluation values of the four suppliers with respect to the six attributes are shown in the hesitant fuzzy decision matrix H = ( h i j ) 4 × 6 (see

In what follows, we utilize the developed method to select the most desirable supplier.

Step 1. The normalized decision matrix H ˜ = ( h ˜ i j ) 4 × 6 from H = ( h i j ) 4 × 6 is shown in

Step 2. According to

Step 3. Utilizing Equation (16), we obtain the weights of the attributes λ = ( 0.2389,0.1327,0.2743,0.1416,0.0885,0.1239 ) T .

Step 4. Utilize the NGHFHWA operator to obtain HFEs h ˜ i for the suppliers A 1 , A 2 , A 3 , A 4 . We take A 2 as an example.

| | G 1 | G 2 | G 3 | G 4 | G 5 | G 6 |
|---|---|---|---|---|---|---|
| A 1 | { 0.3,0.5 } | { 0.6,0.7 } | { 0.4,0.6 } | { 0.7,0.8,0.9 } | { 0.4,0.5 } | { 0.5,0.6 } |
| A 2 | { 0.6,0.7,0.8 } | { 0.5,0.6,0.8 } | { 0.5,0.7 } | { 0.6,0.7 } | { 0.4,0.5,0.6 } | { 0.4,0.5,0.6 } |
| A 3 | { 0.4,0.5,0.6 } | { 0.6,0.7,0.8 } | { 0.5,0.6,0.7 } | { 0.7,0.8 } | { 0.4,0.6 } | { 0.3,0.5 } |
| A 4 | { 0.5,0.6,0.7 } | { 0.5,0.6 } | { 0.8,0.9 } | { 0.7,0.9 } | { 0.4,0.6,0.7 } | { 0.5,0.6 } |

| | G 1 | G 2 | G 3 | G 4 | G 5 | G 6 |
|---|---|---|---|---|---|---|
| A 1 | { 0.3,0.5,0.5 } | { 0.6,0.7,0.7 } | { 0.4,0.6,0.6 } | { 0.7,0.8,0.9 } | { 0.4,0.5,0.5 } | { 0.5,0.6,0.6 } |
| A 2 | { 0.6,0.7,0.8 } | { 0.5,0.6,0.8 } | { 0.5,0.7,0.7 } | { 0.6,0.7,0.7 } | { 0.4,0.5,0.6 } | { 0.4,0.5,0.6 } |
| A 3 | { 0.4,0.5,0.6 } | { 0.6,0.7,0.8 } | { 0.5,0.6,0.7 } | { 0.7,0.8,0.8 } | { 0.4,0.6,0.6 } | { 0.3,0.5,0.5 } |
| A 4 | { 0.5,0.6,0.7 } | { 0.5,0.6,0.6 } | { 0.8,0.9,0.9 } | { 0.7,0.9,0.9 } | { 0.4,0.6,0.7 } | { 0.5,0.6,0.6 } |

h ˜ 2 = NGHFHWA ( h ˜ 21 , h ˜ 22 , h ˜ 23 , h ˜ 24 , h ˜ 25 , h ˜ 26 ) = NGHFHWA ( { 0.6,0.7,0.8 } , { 0.5,0.6,0.8 } , { 0.5,0.7,0.7 } , { 0.6,0.7,0.7 } , { 0.4,0.5,0.6 } , { 0.4,0.5,0.6 } ) .

According to Equation (1), we get s ( h ˜ 21 ) = ( 0.6 + 0.7 + 0.8 ) / 3 = 0.7 , s ( h ˜ 22 ) = 0.6333 , s ( h ˜ 23 ) = 0.6333 , s ( h ˜ 24 ) = 0.6667 , s ( h ˜ 25 ) = 0.5 , s ( h ˜ 26 ) = 0.5 . Notice s ( h ˜ 22 ) = s ( h ˜ 23 ) ; by Equation (2), we obtain

p ( h ˜ 22 ) = 1 − 1 3 ( | 0.5 − 0.6333 | + | 0.6 − 0.6333 | + | 0.8 − 0.6333 | ) = 0.8889 ,

p ( h ˜ 23 ) = 1 − 1 3 ( | 0.5 − 0.6333 | + | 0.7 − 0.6333 | + | 0.7 − 0.6333 | ) = 0.9111 .

Hence, h ˜ 21 > h ˜ 24 > h ˜ 23 > h ˜ 22 > h ˜ 25 = h ˜ 26 . Thus, ε ( 21 ) = 1 , ε ( 24 ) = 2 , ε ( 23 ) = 3 , ε ( 22 ) = 4 , ε ( 25 ) = ε ( 26 ) = 5 . By Remark 1, we get ω ε ( 2 ) = ( 0.0865,0.2419,0.2419,0.1716,0.1290,0.1290 ) T . Therefore,

λ 1 ω ε ( 21 ) ∑ j = 1 6 λ j ω ε ( 2 j ) = 0.1210 , λ 2 ω ε ( 22 ) ∑ j = 1 6 λ j ω ε ( 2 j ) = 0.1879 , λ 3 ω ε ( 23 ) ∑ j = 1 6 λ j ω ε ( 2 j ) = 0.3884 ,

λ 4 ω ε ( 24 ) ∑ j = 1 6 λ j ω ε ( 2 j ) = 0.1422 , λ 5 ω ε ( 25 ) ∑ j = 1 6 λ j ω ε ( 2 j ) = 0.0668 , λ 6 ω ε ( 26 ) ∑ j = 1 6 λ j ω ε ( 2 j ) = 0.0936 .

According to Equation (10) with p = 1 , we can calculate that

h ˜ 2 = NGHFHWA ( h ˜ 21 , h ˜ 22 , h ˜ 23 , h ˜ 24 , h ˜ 25 , h ˜ 26 ) = ∪ i = 1 3 { ( 1 − ( 1 − ( γ ˜ 21 ( i ) ) p ) 0.1210 ( 1 − ( γ ˜ 22 ( i ) ) p ) 0.1879 ( 1 − ( γ ˜ 23 ( i ) ) p ) 0.3884 × ( 1 − ( γ ˜ 24 ( i ) ) p ) 0.1422 ( 1 − ( γ ˜ 25 ( i ) ) p ) 0.0668 ( 1 − ( γ ˜ 26 ( i ) ) p ) 0.0936 ) 1 p } = { 0.5145,0.6563,0.7228 } .

Similarly, we can calculate h ˜ 1 = { 0.4677,0.6165,0.6355 } , h ˜ 3 = { 0.4894,0.6079,0.6837 } , h ˜ 4 = { 0.5893,0.7318,0.7605 } by using the NGHFHWA operator for alternatives A 1 , A 3 , A 4 , respectively.
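Step 4 can be checked in a few lines; the HFEs and weights below are copied from Steps 1-3 and Remark 1 (variable names are ours):

```python
from math import prod

# Normalized HFEs of supplier A2 under G1..G6 (Section 5, Step 1)
h2 = [[0.6, 0.7, 0.8], [0.5, 0.6, 0.8], [0.5, 0.7, 0.7],
      [0.6, 0.7, 0.7], [0.4, 0.5, 0.6], [0.4, 0.5, 0.6]]
lam   = [0.2389, 0.1327, 0.2743, 0.1416, 0.0885, 0.1239]  # attribute weights (Step 3)
omega = [0.0865, 0.2419, 0.2419, 0.1716, 0.1290, 0.1290]  # omega_eps(2j) via Remark 1

# Fused, normalized exponents lambda_j * omega_eps(j) / sum
w = [l * o for l, o in zip(lam, omega)]
t = sum(w)
w = [x / t for x in w]  # ≈ (0.1210, 0.1879, 0.3884, 0.1422, 0.0668, 0.0936)

p = 1
h2_agg = [round((1 - prod((1 - h2[j][i] ** p) ** w[j] for j in range(6))) ** (1 / p), 4)
          for i in range(3)]
print(h2_agg)                     # [0.5145, 0.6563, 0.7228]
print(round(sum(h2_agg) / 3, 4))  # the score s(h2) = 0.6312 of Step 5
```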

Step 5. Calculate the scores s ( h ˜ i ) of h ˜ i for A i ( i = 1 , 2 , 3 , 4 ) :

s ( h ˜ 1 ) = 0.5732 , s ( h ˜ 2 ) = 0.6312 , s ( h ˜ 3 ) = 0.5937 , s ( h ˜ 4 ) = 0.6939 .

Step 6. Rank alternatives A i ( i = 1 , 2 , 3 , 4 ) in accordance with s ( h ˜ i ) : h ˜ 4 > h ˜ 2 > h ˜ 3 > h ˜ 1 , thus A 4 is the best supplier.

In this subsection, the influence of parameter p on the ranking results is investigated and discussed. The detailed results are shown in

From

| | Parameter | s ( h ˜ 1 ) | s ( h ˜ 2 ) | s ( h ˜ 3 ) | s ( h ˜ 4 ) | Ranking |
|---|---|---|---|---|---|---|
| NGHFHWA | p = 1 | 0.5732 | 0.6312 | 0.5937 | 0.6939 | A 4 ≻ A 2 ≻ A 3 ≻ A 1 |
| | p = 2 | 0.5799 | 0.6335 | 0.5985 | 0.7020 | A 4 ≻ A 2 ≻ A 3 ≻ A 1 |
| | p = 5 | 0.6045 | 0.6409 | 0.6154 | 0.7286 | A 4 ≻ A 2 ≻ A 3 ≻ A 1 |
| | p = 10 | 0.6480 | 0.6527 | 0.6452 | 0.7673 | A 4 ≻ A 2 ≻ A 1 ≻ A 3 |
| | p = 20 | 0.7056 | 0.6691 | 0.6881 | 0.8080 | A 4 ≻ A 1 ≻ A 3 ≻ A 2 |
| | p = 30 | 0.7339 | 0.6781 | 0.7111 | 0.8260 | A 4 ≻ A 1 ≻ A 3 ≻ A 2 |
| NGHFHWG | p = 1 | 0.5504 | 0.6207 | 0.5758 | 0.6489 | A 4 ≻ A 2 ≻ A 3 ≻ A 1 |
| | p = 2 | 0.5436 | 0.6157 | 0.5692 | 0.6327 | A 4 ≻ A 2 ≻ A 3 ≻ A 1 |
| | p = 5 | 0.5261 | 0.5978 | 0.5495 | 0.6044 | A 4 ≻ A 2 ≻ A 3 ≻ A 1 |
| | p = 10 | 0.5048 | 0.5716 | 0.5279 | 0.5856 | A 4 ≻ A 2 ≻ A 3 ≻ A 1 |
| | p = 20 | 0.4789 | 0.5427 | 0.4921 | 0.5687 | A 4 ≻ A 2 ≻ A 3 ≻ A 1 |
| | p = 30 | 0.4654 | 0.5295 | 0.4750 | 0.5590 | A 4 ≻ A 2 ≻ A 3 ≻ A 1 |

The scores obtained by the NGHFHWA operator are increasing with respect to p, while those obtained by the NGHFHWG operator are decreasing. On the other hand, the ranking orders obtained by the NGHFHWA operator change somewhat as p increases. For a more detailed investigation,

i) If p ∈ ( 0,8.9 ] , the ranking of the four alternatives is A 4 ≻ A 2 ≻ A 3 ≻ A 1 .

ii) If p ∈ ( 8.9,10.9 ] , the ranking of the four alternatives is A 4 ≻ A 2 ≻ A 1 ≻ A 3 .

iii) If p ∈ ( 10.9,12.3 ] , the ranking of the four alternatives is A 4 ≻ A 1 ≻ A 2 ≻ A 3 .

iv) If p ∈ ( 12.3,30 ] , the ranking of the four alternatives is A 4 ≻ A 1 ≻ A 3 ≻ A 2 .

In summary, we conclude that the selection of values for the parameter p mainly depends on the decision makers’ risk preferences. Pessimists anticipate undesirable outcomes and may choose small values of the parameter p, while optimistic experts may choose large values. For the computational simplicity of HF-MADM problems, the decision makers can select p = 1 (or 2), which is simple and straightforward.

In subsection 5.1, we utilize the proposed method to solve the example successfully, which demonstrates the applicability of our method. In addition, we also analyze the impact of the parameter p on the ranking results in subsection 5.2. The sensitivity analysis illustrates the high flexibility of our approach. In order to further demonstrate the advantages of the algorithm, we use GHFHWA, GHFHWG [

associated weight vector ω = ( 0.09,0.17,0.24,0.24,0.17,0.09 ) T and attribute weights λ = ( 0.15,0.25,0.14,0.16,0.20,0.10 ) T are given by decision makers.

1) Compared with the approach based on GHFHWA, GHFHWG operators [

We begin our comparison by employing the method based on GHFHWA, GHFHWG operators [

Step 1’. Based on the hesitant fuzzy decision matrix H = ( h i j ) 4 × 6 (

h 2 = GHFHWA ( h 21 , h 22 , h 23 , h 24 , h 25 , h 26 ) = GHFHWA ( { 0.6,0.7,0.8 } , { 0.5,0.6,0.8 } , { 0.5,0.7 } , { 0.6,0.7 } , { 0.4,0.5,0.6 } , { 0.4,0.5,0.6 } ) .

Since s ( h 21 ) = 0.7 , s ( h 22 ) = 0.6333 , s ( h 23 ) = 0.6 , s ( h 24 ) = 0.65 , s ( h 25 ) = 0.5 , s ( h 26 ) = 0.5 , then h 21 > h 24 > h 22 > h 23 > h 25 = h 26 . Thus by Remark 1, we get ω ε ( 2 ) = ( 0.09,0.24,0.24,0.17,0.13,0.13 ) T . Therefore,

λ 1 ω ε ( 21 ) ∑ j = 1 6 λ j ω ε ( 2 j ) = 0.0779 , λ 2 ω ε ( 22 ) ∑ j = 1 6 λ j ω ε ( 2 j ) = 0.3462 , λ 3 ω ε ( 23 ) ∑ j = 1 6 λ j ω ε ( 2 j ) = 0.1939 ,

λ 4 ω ε ( 24 ) ∑ j = 1 6 λ j ω ε ( 2 j ) = 0.1570 , λ 5 ω ε ( 25 ) ∑ j = 1 6 λ j ω ε ( 2 j ) = 0.1500 , λ 6 ω ε ( 26 ) ∑ j = 1 6 λ j ω ε ( 2 j ) = 0.0750 .

According to Equation (6) with p = 1 , we can calculate that

h 2 = GHFHWA ( h 21 , h 22 , h 23 , h 24 , h 25 , h 26 ) = ∪ γ 2 j ∈ h 2 j { ( 1 − ( 1 − γ 21 p ) 0.0779 ( 1 − γ 22 p ) 0.3462 ( 1 − γ 23 p ) 0.1939 × ( 1 − γ 24 p ) 0.1570 ( 1 − γ 25 p ) 0.1500 ( 1 − γ 26 p ) 0.0750 ) 1 p } ,

We do not list all of the values in h 2 here, since h 2 contains 324 values. Similarly, we can calculate h 1 , h 3 , h 4 by using the GHFHWA operator for alternatives A 1 , A 3 , A 4 , respectively. Notice that h 1 , h 3 and h 4 contain 96, 216 and 144 values, respectively.
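The element counts quoted here (96, 324, 216 and 144) are simply the products of the numbers of membership values per attribute in the original matrix H, since the classical (Definition 3) operators enumerate every combination of values:

```python
from math import prod

# Numbers of membership values per attribute G1..G6 in the original matrix H
lengths = {
    "A1": [2, 2, 2, 3, 2, 2],
    "A2": [3, 3, 2, 2, 3, 3],
    "A3": [3, 3, 3, 2, 2, 2],
    "A4": [3, 2, 2, 2, 3, 2],
}
# One aggregated value per combination of membership values
sizes = {a: prod(ls) for a, ls in lengths.items()}
print(sizes)  # {'A1': 96, 'A2': 324, 'A3': 216, 'A4': 144}
```

By contrast, the component-wise NGHFHWA and NGHFHWG operators always return at most l = 3 values here.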

Step 3’. Calculate the scores s ( h i ) ( i = 1 , 2 , 3 , 4 ) of h i ( i = 1 , 2 , 3 , 4 ) :

s ( h 1 ) = 0.5786 , s ( h 2 ) = 0.6201 , s ( h 3 ) = 0.6067 , s ( h 4 ) = 0.6619 .

Step 4’. Rank all of the alternatives A i ( i = 1 , 2 , 3 , 4 ) in accordance with s ( h i ) : h 4 > h 2 > h 3 > h 1 , thus A 4 is the most desirable supplier.

If the parameter p changes, taking the GHFHWA operator as an example, the trends of the scores for the alternatives can be obtained, as shown in

i) If p ∈ ( 0,6.5 ] , the ranking of the four alternatives is A 4 ≻ A 2 ≻ A 3 ≻ A 1 .

ii) If p ∈ ( 6.5,9.6 ] , the ranking of the four alternatives is A 4 ≻ A 3 ≻ A 2 ≻ A 1 .

iii) If p ∈ ( 9.6,11.1 ] , the ranking of the four alternatives is A 4 ≻ A 3 ≻ A 1 ≻ A 2 .

iv) If p ∈ ( 11.1,30 ] , the ranking of the four alternatives is A 4 ≻ A 1 ≻ A 3 ≻ A 2 .

Comparing with our proposed method based on NGHFHWA and NGHFHWG operators in this article, we can conclude that:

i) Our new generalized hybrid weighted aggregation operators have the property of idempotency, which is one of the most important properties for aggregation operators;

ii) The dimensions of the overall hesitant fuzzy values aggregated by our new generalized hybrid weighted aggregation operators are much smaller than those of the GHFHWA and GHFHWG operators. Hence the computation of our proposed method is simpler than that of [

2) Compared with the approach based on GHFHA and GHFHG operators [

In the following, we utilize Xia and Xu’s GHFHA and GHFHG operator to get the ranking result of our example, in which weight vectors ω , λ are given by decision makers.

Step 1’’. Based on the hesitant fuzzy decision matrix H = ( h i j ) 4 × 6 (

h ˙ 21 = ( 6 × 0.15 ) h 21 = { 1 − ( 1 − 0.6 ) 0.9 , 1 − ( 1 − 0.7 ) 0.9 , 1 − ( 1 − 0.8 ) 0.9 } = { 0.5616 , 0.6616 , 0.7651 } .

Similarly, h ˙ 22 = { 0.6464,0.7470,0.9106 } , h ˙ 23 = { 0.4414,0.6363 } , h ˙ 24 = { 0.5851,0.6852 } , h ˙ 25 = { 0.4583,0.5647,0.6670 } , h ˙ 26 = { 0.2640,0.3402,0.4229 } .

Next, calculate the scores s ( h ˙ 2 j ) ( j = 1 , 2 , 3 , 4 , 5 , 6 ) : s ( h ˙ 21 ) = 0.6628 , s ( h ˙ 22 ) = 0.7680 , s ( h ˙ 23 ) = 0.5388 , s ( h ˙ 24 ) = 0.6351 , s ( h ˙ 25 ) = 0.5633 , s ( h ˙ 26 ) = 0.3424 , then h ˙ 22 > h ˙ 21 > h ˙ 24 > h ˙ 25 > h ˙ 23 > h ˙ 26 . Hence, we have h ˙ σ ( 2 j ) as follows:

h ˙ σ ( 21 ) = h ˙ 22 = { 0.6464,0.7470,0.9106 } , h ˙ σ ( 22 ) = h ˙ 21 = { 0.5616 , 0.6616 , 0.7651 } ,

h ˙ σ ( 23 ) = h ˙ 24 = { 0.5851,0.6852 } , h ˙ σ ( 24 ) = h ˙ 25 = { 0.4583,0.5647,0.6670 } ,

h ˙ σ ( 25 ) = h ˙ 23 = { 0.4414,0.6363 } , h ˙ σ ( 26 ) = h ˙ 26 = { 0.2640,0.3402,0.4229 } .

Step 2’’. According to Equation (4), we can use the GHFHA operator to aggregate all HFEs h ˙ σ ( i j ) ( j = 1 , 2 , 3 , 4 , 5 , 6 ) into collective HFEs h ˙ i ( i = 1 , 2 , 3 , 4 ) . Take h ˙ 2 as an example and choose p = 1 .

h ˙ 2 = GHFHA ( h ˙ σ ( 21 ) , h ˙ σ ( 22 ) , h ˙ σ ( 23 ) , h ˙ σ ( 24 ) , h ˙ σ ( 25 ) , h ˙ σ ( 26 ) ) = ∪ γ ˙ σ ( 2 j ) ∈ h ˙ σ ( 2 j ) { ( 1 − ( 1 − γ ˙ σ ( 21 ) p ) 0.09 ( 1 − γ ˙ σ ( 22 ) p ) 0.17 ( 1 − γ ˙ σ ( 23 ) p ) 0.24 × ( 1 − γ ˙ σ ( 24 ) p ) 0.24 ( 1 − γ ˙ σ ( 25 ) p ) 0.17 ( 1 − γ ˙ σ ( 26 ) p ) 0.09 ) 1 p } ,

We do not list all of the values in h ˙ 2 here, since h ˙ 2 contains 324 values. Similarly, we can calculate h ˙ 1 , h ˙ 3 , h ˙ 4 by using the GHFHA operator for alternatives A 1 , A 3 , A 4 , respectively.

Step 3’’. Calculate the scores s ( h ˙ i ) ( i = 1 , 2 , 3 , 4 ) :

s ( h ˙ 1 ) = 0.5764 , s ( h ˙ 2 ) = 0.6139 , s ( h ˙ 3 ) = 0.6625 , s ( h ˙ 4 ) = 0.7081 .

Step 4’’. Rank the alternatives A i ( i = 1 , 2 , 3 , 4 ) in accordance with s ( h ˙ i ) : h ˙ 4 > h ˙ 3 > h ˙ 2 > h ˙ 1 , thus A 4 is the most desirable supplier.

The ranking result obtained by the GHFHA operator is different from that of our method. Notice that the weight vectors ω and λ in [

By the above comparison, we can see that the weight vectors given by decision makers may lead to different ranking results. In the following

From

In this paper, we first proposed some new generalized hesitant fuzzy hybrid weighted aggregation operators, such as the NGHFHWA and NGHFHWG operators, and investigated some of their properties. Then, we applied the proposed operators to deal with HF-MADM problems in which the aggregation-associated weight vector and the attribute weights are unknown. Furthermore, an illustrative example was given to show the effectiveness and validity of our proposed decision making method. By comparing with Liao and Xu’s method [

1) the new generalized hesitant fuzzy hybrid weighted aggregation operators satisfy idempotency;

| the aggregation operator | scores | the ranking result |
|---|---|---|
| the NGHFHWA operator ( p = 1 ) | s 1 = 0.5732 , s 2 = 0.6312 , s 3 = 0.5937 , s 4 = 0.6939 | A 4 ≻ A 2 ≻ A 3 ≻ A 1 |
| the GHFHWA operator ( p = 1 ) | s 1 = 0.5560 , s 2 = 0.6232 , s 3 = 0.5881 , s 4 = 0.6890 | A 4 ≻ A 2 ≻ A 3 ≻ A 1 |
| the GHFHA operator ( p = 1 ) | s 1 = 0.5678 , s 2 = 0.6181 , s 3 = 0.6009 , s 4 = 0.6833 | A 4 ≻ A 2 ≻ A 3 ≻ A 1 |

2) the calculation procedure is simpler than that of the other methods;

3) the aggregation-associated weight vector and the attribute weights are calculated from the evaluation information itself.

In the future, the application of these operators with different hesitant fuzzy decision making or group decision making methods will be developed, such as TOPSIS, VIKOR, ELECTRE, PROMETHEE. In addition, we can also extend our method to hesitant fuzzy linguistic set, Pythagorean hesitant fuzzy set, and so on.

This work is supported by the National Natural Science Foundation of China (No. 11571175), Natural Science Foundation of Higher Education of Jiangsu Province (No. 18KJB110024), High Training Funded for Professional Leaders of Higher Vocational Colleges in Jiangsu Province (No. 2018GRFX038), Science and Technology Research Project of Nantong Shipping College (No. HYKY/2018A03).

The authors declare no conflicts of interest regarding the publication of this paper.

Jiang, S.Q., He, W. and Cheng, Q.Q. (2020) A New Hesitant Fuzzy Multiple Attribute Decision Making Method with Unknown Weight Information. Advances in Pure Mathematics, 10, 405-431. https://doi.org/10.4236/apm.2020.107025

A.1. Proof of Theorem 1

Proof We first prove that

⊕ ˙ j = 1 n λ j ω ε ( j ) h j p = ∪ i = 1 l { 1 − ∏ j = 1 n ( 1 − ( γ j ( i ) ) p ) λ j ω ε ( j ) } (17)

by using mathematical induction on n.

For n = 2 , we show that

⊕ ˙ j = 1 2 λ j ω ε ( j ) h j p = ∪ i = 1 l { 1 − ( 1 − ( γ 1 ( i ) ) p ) λ 1 ω ε ( 1 ) ⋅ ( 1 − ( γ 2 ( i ) ) p ) λ 2 ω ε ( 2 ) } . (18)

Since λ j ω ε ( j ) h j p = ∪ i = 1 l { 1 − ( 1 − ( γ j ( i ) ) p ) λ j ω ε ( j ) } ( j = 1 , 2 ) , we have

⊕ ˙ j = 1 2 λ j ω ε ( j ) h j p = ∪ i = 1 l { 1 − ( 1 − ( γ 1 ( i ) ) p ) λ 1 ω ε ( 1 ) + 1 − ( 1 − ( γ 2 ( i ) ) p ) λ 2 ω ε ( 2 ) − ( 1 − ( 1 − ( γ 1 ( i ) ) p ) λ 1 ω ε ( 1 ) ) ( 1 − ( 1 − ( γ 2 ( i ) ) p ) λ 2 ω ε ( 2 ) ) } = ∪ i = 1 l { 1 − ( 1 − ( γ 1 ( i ) ) p ) λ 1 ω ε ( 1 ) ⋅ ( 1 − ( γ 2 ( i ) ) p ) λ 2 ω ε ( 2 ) } ,

which means Equation (18) holds.

Suppose Equation (17) holds for n = k , i.e.,

⊕ ˙ j = 1 k λ j ω ε ( j ) h j p = ∪ i = 1 l { 1 − ∏ j = 1 k ( 1 − ( γ j ( i ) ) p ) λ j ω ε ( j ) } .

Then, for n = k + 1 , based on Definition 7, we can deduce that

⊕ ˙ j = 1 k + 1 λ j ω ε ( j ) h j p = ( ⊕ ˙ j = 1 k λ j ω ε ( j ) h j p ) ⊕ ˙ ( λ k + 1 ω ε ( k + 1 ) h k + 1 p ) = ∪ i = 1 l { 1 − ∏ j = 1 k ( 1 − ( γ j ( i ) ) p ) λ j ω ε ( j ) + 1 − ( 1 − ( γ k + 1 ( i ) ) p ) λ k + 1 ω ε ( k + 1 ) − ( 1 − ∏ j = 1 k ( 1 − ( γ j ( i ) ) p ) λ j ω ε ( j ) ) ( 1 − ( 1 − ( γ k + 1 ( i ) ) p ) λ k + 1 ω ε ( k + 1 ) ) } = ∪ i = 1 l { 1 − ∏ j = 1 k + 1 ( 1 − ( γ j ( i ) ) p ) λ j ω ε ( j ) } ,

i.e., Equation (17) holds for n = k + 1 . Hence, Equation (17) holds for all n. Furthermore, using Definition 3, we have

⊕ ˙ j = 1 n λ j ω ε ( j ) h j ∑ j = 1 n λ j ω ε ( j ) = ∪ i = 1 l { 1 − ( 1 − ( 1 − ∏ j = 1 n ( 1 − ( γ j ( i ) ) p ) λ j ω ε ( j ) ) ) 1 ∑ j = 1 n λ j ω ε ( j ) } = ∪ i = 1 l { 1 − ∏ j = 1 n ( 1 − ( γ j ( i ) ) p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) } .

Therefore,

NGHFHWA ( h 1 , h 2 , ⋯ , h n ) = ( ⊕ ˙ j = 1 n λ j ω ε ( j ) h j p ∑ j = 1 n λ j ω ε ( j ) ) 1 p = ∪ i = 1 l { ( 1 − ∏ j = 1 n ( 1 − ( γ j ( i ) ) p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ) 1 p } ,

i.e., Equation (10) holds. The proof of Equation (11) is similar.
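As a concrete check of Equation (17) and the closed form of Equation (10), the following Python sketch verifies numerically that iterating ⊕̇ reproduces the product formula. The HFEs and the weight products λ_jω_{ε(j)} below are hypothetical illustration data, not the paper's example; the operations follow the operational laws used in the proof.

```python
import math
from functools import reduce

# Operational laws on HFEs of equal length (Definition 3), written on
# aligned membership lists:
def hf_add(a, b):            # a (+) b : gamma_a + gamma_b - gamma_a*gamma_b
    return [x + y - x * y for x, y in zip(a, b)]

def hf_scale(lam, h):        # lam * h : 1 - (1 - gamma)^lam
    return [1 - (1 - g) ** lam for g in h]

def hf_power(h, p):          # h^p : gamma^p
    return [g ** p for g in h]

hfes = [[0.3, 0.5], [0.6, 0.7], [0.2, 0.9]]  # hypothetical HFEs, l = 2
coeff = [0.5, 0.3, 0.2]                      # hypothetical lambda_j*omega_eps(j)
p = 2.0

# Left-hand side of Eq. (17): iterated (+) of lam_j*omega_eps(j) * h_j^p
lhs = reduce(hf_add, (hf_scale(c, hf_power(h, p)) for c, h in zip(coeff, hfes)))

# Right-hand side: 1 - prod_j (1 - gamma_j^p)^(lam_j*omega_eps(j))
rhs = [1 - math.prod((1 - h[i] ** p) ** c for c, h in zip(coeff, hfes))
       for i in range(2)]

assert all(abs(x - y) < 1e-12 for x, y in zip(lhs, rhs))
```

Because ⊕̇ is associative here, the order of accumulation in `reduce` does not affect the result beyond floating-point rounding.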

A.2. Proof of Theorem 2

Proof Suppose h 1 = h 2 = ⋯ = h n = h = { γ ( 1 ) , γ ( 2 ) , ⋯ , γ ( l ) } . Then we can get

NGHFHWA ( h 1 , h 2 , ⋯ , h n ) = ∪ i = 1 l { ( 1 − ∏ j = 1 n ( 1 − ( γ j ( i ) ) p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ) 1 p } = ∪ i = 1 l { ( 1 − ∏ j = 1 n ( 1 − ( γ ( i ) ) p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ) 1 p } = ∪ i = 1 l { ( 1 − ( 1 − ( γ ( i ) ) p ) ∑ j = 1 n λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ) 1 p } = ∪ i = 1 l { ( 1 − ( 1 − ( γ ( i ) ) p ) ) 1 p } = ∪ i = 1 l { γ ( i ) } = h .

The proof of NGHFHWG ( h 1 , h 2 , ⋯ , h n ) = h is similar.

A.3. Proof of Theorem 3

Proof Let h ˙ = NGHFHWA ( h 1 , h 2 , ⋯ , h n ) , γ ˙ ( i ) = ( 1 − ∏ j = 1 n ( 1 − ( γ j ( i ) ) p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ) 1 p ( i = 1 , 2 , ⋯ , l ) .

For any i = 1 , 2 , ⋯ , l ; j = 1 , 2 , ⋯ , n , we have h − ≤ γ j ( i ) ≤ h + . Since y = x a ( a > 0 ) is a monotonically increasing function for x > 0 , we get

∏ j = 1 n ( 1 − ( h − ) p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ≥ ∏ j = 1 n ( 1 − ( γ j ( i ) ) p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ≥ ∏ j = 1 n ( 1 − ( h + ) p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ,

i.e.,

1 − ( h − ) p ≥ ∏ j = 1 n ( 1 − ( γ j ( i ) ) p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ≥ 1 − ( h + ) p .

Then we get

h − ≤ ( 1 − ∏ j = 1 n ( 1 − ( γ j ( i ) ) p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ) 1 p ≤ h + .

i.e.,

h − ≤ γ ˙ ( i ) ≤ h + , i = 1 , 2 , ⋯ , l .

According to Definition 2, we have

h − = 1 l ∑ i = 1 l h − ≤ 1 l ∑ i = 1 l γ ˙ ( i ) ≤ 1 l ∑ i = 1 l h + = h + ,

i.e.,

s ( h − ) ≤ s ( h ˙ ) ≤ s ( h + ) .

Thus

h − ≤ NGHFHWA ( h 1 , h 2 , ⋯ , h n ) ≤ h + ,

which completes the proof of Equation (12). Similarly, we can prove Equation (13).
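The boundedness property of Theorem 3 can be checked numerically. A minimal sketch, assuming randomly generated HFEs and normalized weight products (hypothetical data, not the paper's example):

```python
import math
import random

random.seed(1)
n, l, p = 4, 3, 2.0
hfes = [sorted(random.random() for _ in range(l)) for _ in range(n)]
raw = [random.random() for _ in range(n)]
e = [x / sum(raw) for x in raw]   # lambda_j*omega_eps(j)/sum(...), sums to 1

# gamma_dot^(i) from the proof: the aggregated values of Eq. (10)
agg = [(1 - math.prod((1 - hfes[j][i] ** p) ** e[j] for j in range(n))) ** (1 / p)
       for i in range(l)]

h_minus = min(min(h) for h in hfes)   # h^- : smallest membership value
h_plus = max(max(h) for h in hfes)    # h^+ : largest membership value
assert all(h_minus <= g <= h_plus for g in agg)
```

Since every aggregated value lies in [h−, h+], the score (the arithmetic mean of Definition 2) does as well, which is exactly the chain of inequalities in the proof.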

A.4. Proof of Theorem 4

Proof For any γ j ( i ) ∈ h j , by Lemma 1, we have

∏ j = 1 n ( γ j ( i ) ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) = ( ∏ j = 1 n ( ( γ j ( i ) ) p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ) 1 p ≤ ( ∑ j = 1 n λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ( γ j ( i ) ) p ) 1 p = ( 1 − ∑ j = 1 n λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ( 1 − ( γ j ( i ) ) p ) ) 1 p ≤ ( 1 − ∏ j = 1 n ( 1 − ( γ j ( i ) ) p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ) 1 p .

By Definition 2, we can conclude that s ( NHFHWG ( h 1 , h 2 , ⋯ , h n ) ) ≤ s ( NGHFHWA ( h 1 , h 2 , ⋯ , h n ) ) , which implies that NHFHWG ( h 1 , h 2 , ⋯ , h n ) ≤ NGHFHWA ( h 1 , h 2 , ⋯ , h n ) . Similarly, we can prove NGHFHWG ( h 1 , h 2 , ⋯ , h n ) ≤ NHFHWA ( h 1 , h 2 , ⋯ , h n ) .

A.5. Proof of Theorem 5

Proof By Theorem 1, we have

NGHFHWA ( h 1 , h 2 , ⋯ , h n ) = ∪ i = 1 l { ( 1 − ∏ j = 1 n ( 1 − ( γ j ( i ) ) p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ) 1 p } .

For any γ j ( i ) ∈ h j , i = 1 , 2 , ⋯ , l ; j = 1 , 2 , ⋯ , n , let f ( p ) = ( 1 − ∏ j = 1 n ( 1 − ( γ j ( i ) ) p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ) 1 p . In order to prove that f ( p ) is monotonically increasing with respect to the parameter p, we calculate the derivative of f ( p ) with respect to p as follows:

f ′ ( p ) = [ e 1 p ln ( 1 − ∏ j = 1 n ( 1 − ( γ j ( i ) ) p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ) ] ′ = f ( p ) p 2 [ ∏ j = 1 n ( 1 − ( γ j ( i ) ) p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) 1 − ∏ j = 1 n ( 1 − ( γ j ( i ) ) p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ( γ j ( i ) ) p ln ( γ j ( i ) ) p 1 − ( γ j ( i ) ) p − ln ( 1 − ∏ j = 1 n ( 1 − ( γ j ( i ) ) p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ) ]

= f ( p ) p 2 ∏ j = 1 n ( 1 − ( γ j ( i ) ) p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) 1 − ∏ j = 1 n ( 1 − ( γ j ( i ) ) p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) [ ∑ j = 1 n λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ( γ j ( i ) ) p ln ( γ j ( i ) ) p 1 − ( γ j ( i ) ) p − ( 1 − ∏ j = 1 n ( 1 − ( γ j ( i ) ) p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ) ln ( 1 − ∏ j = 1 n ( 1 − ( γ j ( i ) ) p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ) ∏ j = 1 n ( 1 − ( γ j ( i ) ) p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ] = f ( p ) p 2 1 − x 0 x 0 [ ∑ j = 1 n λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) g ( x j ) − g ( x 0 ) ]

where g ( x ) = x ln x 1 − x , x j = ( γ j ( i ) ) p , x 0 = 1 − ∏ j = 1 n ( 1 − ( γ j ( i ) ) p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) .

Next, we take the first and second derivatives of g ( x ) :

g ′ ( x ) = [ x ln x 1 − x ] ′ = ( 1 + ln x ) ( 1 − x ) + x ln x ( 1 − x ) 2 = 1 − x + ln x ( 1 − x ) 2 ,

g ″ ( x ) = [ 1 − x + ln x ( 1 − x ) 2 ] ′ = ( 1 x − 1 ) ( 1 − x ) 2 + ( 1 − x + ln x ) ⋅ 2 ( 1 − x ) ( 1 − x ) 4 = 1 − x 2 + 2 x ln x x ( 1 − x ) 3 .

Let h ( x ) = 1 − x 2 + 2 x ln x , x ∈ ( 0,1 ] ; then h ( 1 ) = 0 . We also calculate the first and second derivatives of h ( x ) : h ′ ( x ) = 2 − 2 x + 2 ln x , so h ′ ( 1 ) = 0 , and

h ″ ( x ) = 2 ( 1 − x ) x . When x ∈ ( 0,1 ) , we have h ″ ( x ) > 0 . Hence, h ′ ( x ) is monotonically increasing, i.e., h ′ ( x ) < h ′ ( 1 ) = 0 for any x ∈ ( 0,1 ) , which implies

that h ( x ) is monotonically decreasing. Therefore, h ( x ) > h ( 1 ) = 0 and g ″ ( x ) > 0 for any x ∈ ( 0,1 ) .

Because g ″ ( x ) > 0 for any x ∈ ( 0,1 ) , g ′ ( x ) is monotonically increasing in ( 0,1 ) , i.e.,

g ′ ( x ) < lim x → 1 1 − x + ln x ( 1 − x ) 2 = lim x → 1 [ 1 − x + ln x ] ′ [ ( 1 − x ) 2 ] ′ = lim x → 1 − 1 + 1 x 2 ( x − 1 ) = lim x → 1 − 1 2 x = − 1 2 .
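The bound g′(x) < −1/2 on (0, 1) can be spot-checked numerically; a small sketch using the expression for g′(x) derived above:

```python
import math

def gprime(x):
    # g'(x) = (1 - x + ln x) / (1 - x)^2, as derived in the proof
    return (1 - x + math.log(x)) / (1 - x) ** 2

# the bound g'(x) < -1/2 holds across (0, 1), approaching -1/2 as x -> 1
for x in (0.01, 0.1, 0.3, 0.5, 0.7, 0.9, 0.99):
    assert gprime(x) < -0.5
```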

Because g ″ ( x ) > 0 for any x ∈ ( 0,1 ) , g ( x ) is strictly convex, and the inequality g ( x j ) > g ( x 0 ) + ( x j − x 0 ) ⋅ g ′ ( x 0 ) holds for all x 0 , x j > 0 and x 0 ≠ x j . Therefore, we have

∑ j = 1 n λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) g ( x j ) > ∑ j = 1 n λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) g ( x 0 ) + ∑ j = 1 n λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ( x j − x 0 ) ⋅ g ′ ( x 0 ) = g ( x 0 ) + g ′ ( x 0 ) [ ∑ j = 1 n λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) x j − x 0 ] = g ( x 0 ) + g ′ ( x 0 ) [ ∑ j = 1 n λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ( γ j ( i ) ) p − 1 + ∏ j = 1 n ( 1 − ( γ j ( i ) ) p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ] .

By Lemma 1, we get

∏ j = 1 n ( 1 − ( γ j ( i ) ) p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ≤ ∑ j = 1 n λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ( 1 − ( γ j ( i ) ) p ) = 1 − ∑ j = 1 n λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ( γ j ( i ) ) p .

Since g ′ ( x 0 ) < − 1 2 < 0 , we have

g ′ ( x 0 ) [ ∑ j = 1 n λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ( γ j ( i ) ) p − 1 + ∏ j = 1 n ( 1 − ( γ j ( i ) ) p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ] > 0 .

Hence, ∑ j = 1 n λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) g ( x j ) − g ( x 0 ) > 0 , thus f ′ ( p ) > 0 , which indicates

that f ( p ) is monotonically increasing with respect to the parameter p. Therefore, the NGHFHWA operator is monotonically increasing with respect to the parameter p. Similarly, the NGHFHWG operator is monotonically decreasing with respect to the parameter p.
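The monotonicity of f(p) in p can be illustrated numerically. A sketch with hypothetical values γ_j^{(i)} and normalized exponents λ_jω_{ε(j)}/∑λ_jω_{ε(j)}:

```python
import math

gammas = [0.3, 0.55, 0.8]   # hypothetical gamma_j^(i)
w = [0.5, 0.3, 0.2]         # lambda_j*omega_eps(j)/sum(...), sums to 1

def f(p):
    # f(p) from the proof of Theorem 5
    return (1 - math.prod((1 - g ** p) ** wj
                          for g, wj in zip(gammas, w))) ** (1 / p)

vals = [f(p) for p in (0.5, 1, 2, 5, 10, 50)]
assert all(a < b for a, b in zip(vals, vals[1:]))   # increasing in p
```

As p grows, f(p) moves from a quasi-geometric mean toward the maximum of the γ_j^{(i)}, matching the proven monotonicity.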

A.6. Proof of Lemma 2

Proof By L’Hôpital’s rule, we have

lim p → 0 f ( p ) = e lim p → 0 ln ( 1 − ∏ j = 1 n ( 1 − ( γ j ( i ) ) p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ) p

in which,

lim p → 0 ln ( 1 − ∏ j = 1 n ( 1 − ( γ j ( i ) ) p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ) p = lim p → 0 [ ln ( 1 − ∏ j = 1 n ( 1 − ( γ j ( i ) ) p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ) ] ′ [ p ] ′ = lim p → 0 1 1 − ∏ j = 1 n ( 1 − ( γ j ( i ) ) p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ∑ j = 1 n ∏ j = 1 n ( 1 − ( γ j ( i ) ) p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) 1 − ( γ j ( i ) ) p λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ( γ j ( i ) ) p ln γ j ( i ) = lim p → 0 ∏ j = 1 n ( 1 − ( γ j ( i ) ) p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ( γ j ( i ) ) p ln γ j ( i ) 1 − ( γ j ( i ) ) p = lim p → 0 ∑ j = 1 n λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ∏ k = 1 n ( 1 − ( γ k ( i ) ) p 1 − ( γ j ( i ) ) p ) λ k ω ε ( k ) ∑ j = 1 n λ j ω ε ( j ) ln γ j ( i )

= ∑ j = 1 n λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ∏ k = 1 n ( lim p → 0 1 − ( γ k ( i ) ) p 1 − ( γ j ( i ) ) p ) λ k ω ε ( k ) ∑ j = 1 n λ j ω ε ( j ) ln γ j ( i ) = ∑ j = 1 n λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ∏ k = 1 n ( lim p → 0 − ( γ k ( i ) ) p ln γ k ( i ) − ( γ j ( i ) ) p ln γ j ( i ) ) λ k ω ε ( k ) ∑ j = 1 n λ j ω ε ( j ) ln γ j ( i ) = ∑ j = 1 n λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ∏ k = 1 n ( ln γ k ( i ) ln γ j ( i ) ) λ k ω ε ( k ) ∑ j = 1 n λ j ω ε ( j ) ln γ j ( i ) = ∑ j = 1 n λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ∏ k = 1 n ( ln γ k ( i ) ) λ k ω ε ( k ) ∑ j = 1 n λ j ω ε ( j ) = ∏ k = 1 n ( ln γ k ( i ) ) λ k ω ε ( k ) ∑ j = 1 n λ j ω ε ( j )

Thus

lim p → 0 ( 1 − ∏ j = 1 n ( 1 − ( γ j ( i ) ) p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ) 1 p = e ∏ j = 1 n ( ln γ j ( i ) ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j )

which completes the proof of Equation (14). Similarly, we can prove Equation (15).
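The limit in Lemma 2 can be checked numerically. Reading the exponential of ∏(ln γ_j^{(i)})^{…} as e raised to −∏(−ln γ_j^{(i)})^{…}, as in the proof of Theorem 6, a small sketch with hypothetical data:

```python
import math

gammas = [0.3, 0.55, 0.8]   # hypothetical gamma_j^(i)
w = [0.5, 0.3, 0.2]         # lambda_j*omega_eps(j)/sum(...), sums to 1

def f(p):
    # f(p) from the proof of Theorem 5, evaluated near p = 0
    return (1 - math.prod((1 - g ** p) ** wj
                          for g, wj in zip(gammas, w))) ** (1 / p)

# Right-hand side of Eq. (14), with prod (ln gamma)^w read as
# -(prod (-ln gamma)^w)
limit = math.exp(-math.prod((-math.log(g)) ** wj
                            for g, wj in zip(gammas, w)))
assert abs(f(1e-6) - limit) < 1e-4
```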

A.7. Proof of Theorem 6

Proof Let f ( p ) = ( 1 − ∏ j = 1 n ( 1 − ( γ j ( i ) ) p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ) 1 p , g ( p ) = ( 1 − ∏ j = 1 n ( 1 − ( 1 − γ j ( i ) ) p ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ) 1 p .

First, we will prove f ( p ) ≥ 1 − g ( p ) , i.e., f ( p ) + g ( p ) ≥ 1 . According to Theorem 5, f ( p ) and g ( p ) are monotonically increasing with respect to

the parameter p. Thus f ( p ) + g ( p ) ≥ lim p → 0 f ( p ) + lim p → 0 g ( p ) . By Lemma 2, we have

f ( p ) + g ( p ) ≥ e ∏ j = 1 n ( ln γ j ( i ) ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) + e ∏ j = 1 n ( ln ( 1 − γ j ( i ) ) ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) = e ( − 1 ) ∏ j = 1 n ( − ln γ j ( i ) ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) + e ( − 1 ) ∏ j = 1 n ( − ln ( 1 − γ j ( i ) ) ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) = 1 e ∏ j = 1 n ( − ln γ j ( i ) ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) + 1 e ∏ j = 1 n ( − ln ( 1 − γ j ( i ) ) ) λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) .

By Lemma 1, we get

f ( p ) + g ( p ) ≥ 1 e ∑ j = 1 n λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ( − ln γ j ( i ) ) + 1 e ∑ j = 1 n λ j ω ε ( j ) ∑ j = 1 n λ j ω ε ( j ) ( − ln ( 1 − γ j ( i ) ) ) , which, when γ 1 ( i ) = γ 2 ( i ) = ⋯ = γ n ( i ) , equals 1 e − ln γ j ( i ) + 1 e − ln ( 1 − γ j ( i ) ) = γ j ( i ) + ( 1 − γ j ( i ) ) = 1

By Definition 2, we get

s ( NGHFHWG ( h 1 , h 2 , ⋯ , h n ) ) ≤ s ( NGHFHWA ( h 1 , h 2 , ⋯ , h n ) ) ,

i.e.,

NGHFHWG ( h 1 , h 2 , ⋯ , h n ) ≤ NGHFHWA ( h 1 , h 2 , ⋯ , h n ) .

This completes the proof.
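The ordering NGHFHWG ≤ NGHFHWA of Theorem 6 can be illustrated by comparing scores on a hypothetical example. The sketch below forms the NGHFHWG values through the duality γ → 1 − γ, consistent with the g(p) used in the proof:

```python
import math

w = [0.4, 0.35, 0.25]                         # lambda_j*omega_eps(j)/sum(...)
hfes = [[0.3, 0.5], [0.6, 0.7], [0.2, 0.9]]   # hypothetical HFEs, l = 2

def score(p, geometric=False):
    # score (Definition 2) of NGHFHWA (geometric=False) or NGHFHWG (True),
    # using the duality gamma -> 1 - gamma for the geometric operator
    out = []
    for i in range(2):
        gs = [h[i] for h in hfes]
        if geometric:
            gs = [1 - g for g in gs]
        v = (1 - math.prod((1 - g ** p) ** wj
                           for g, wj in zip(gs, w))) ** (1 / p)
        out.append(1 - v if geometric else v)
    return sum(out) / 2

for q in (0.5, 1, 2, 5):
    assert score(q, geometric=True) <= score(q, geometric=False)
```

The gap widens as p grows, since by Theorem 5 the NGHFHWA scores increase in p while the NGHFHWG scores decrease.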