
Fuzzy regression provides additional approaches for dealing with imprecise or vague problems. Traditional fuzzy regression is built on triangular fuzzy numbers, which can be represented by trapezoidal fuzzy numbers. The independent variables, their coefficients, and the dependent variable in the regression model may all be fuzzy numbers, and *T*_{W}, the shape-preserving operator, is the only *T*-norm that induces a shape-preserving multiplication of *LL*-type fuzzy numbers. In this paper, we therefore propose a new fuzzy regression model based on *LL*-type trapezoidal fuzzy numbers and *T*_{W}. First, we introduce the basic fuzzy set theory, the basic arithmetic propositions of the shape-preserving operator, and a new distance measure between trapezoidal fuzzy numbers. Second, we develop the specific algorithm for the *FIFCFO* model (fuzzy input-fuzzy coefficient-fuzzy output model) and introduce three goodness-of-fit criteria: Error Index, Similarity Measure, and Distance Criterion. Third, we use a design set and two reference sets to compare our proposed model with the reference models and evaluate them under these three criteria. Finally, we conclude that the proposed model is reasonable and has better prediction accuracy, though it is less robust than the reference models according to the three goodness-of-fit criteria. Thus, the traditional fuzzy regression model can be extended to the proposed new model.

Fuzzy regression, one of the most popular methods of modeling and prediction, is an important statistical tool for evaluating the functional relationship between a set of explanatory variables and an explained variable (Montgomery and Peck, 2006).

Therefore, fuzzy set theory, introduced by Zadeh (1965), offers a natural framework for modeling such imprecise data.

Both of the above approaches to fuzzy regression are widely used in ordinary fuzzy linear regression, but both are sensitive to outliers. In such cases, least absolute deviation (LAD), rather than least squares deviation (LSD), is preferred as a robust method. In particular, when outliers occur in the response variable, the LAD estimator is more robust than the LSD estimator (Stahel and Weisberg, 1991).
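As a quick, self-contained illustration of this robustness point (a toy crisp example of our own, not from the paper), compare the least-squares and least-absolute-deviation slope estimates for a no-intercept line when one response value is an outlier:

```python
# Toy demo: LSD vs. LAD slope estimates for y = b*x with one outlier.

def lsd_slope(xs, ys):
    # closed-form least-squares slope for the no-intercept model
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def lad_slope(xs, ys, grid=None):
    # crude grid search minimizing the sum of absolute deviations
    if grid is None:
        grid = [i / 100 for i in range(-500, 501)]
    return min(grid, key=lambda b: sum(abs(y - b * x) for x, y in zip(xs, ys)))

xs = [1, 2, 3, 4, 5]
ys = [2 * x for x in xs]
ys[-1] = 50  # outlier in the response variable

print(lsd_slope(xs, ys))  # pulled toward the outlier (about 5.64)
print(lad_slope(xs, ys))  # stays at the true slope 2.0
```

The LSD estimate is dragged far from the true slope by a single corrupted response, while the LAD estimate is unaffected, which is exactly the robustness property cited above.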

In the development of fuzzy linear regression models, a subtler problem emerged: the usual multiplication changes the shape of fuzzy numbers in some cases, as noted by Hojati et al. (2005).

However, traditional fuzzy regression is still based on triangular fuzzy numbers, or allows only some of the inputs, coefficients, and output to be fuzzy. Considering that trapezoidal fuzzy numbers, which can represent other types of fuzzy numbers, play an important role in fuzzy number theory, we build our model on *LL*-type trapezoidal fuzzy numbers.

The structure of this paper is as follows. In Section 2, we introduce some basic notions and prove the arithmetic properties of the shape-preserving operator *T*_{W}.

For the sake of rigor and clarity, this section introduces the basic fuzzy set theory and the basic arithmetic propositions of the shape-preserving operator used in this paper. Throughout, R denotes the set of all real numbers and FN denotes the set of all fuzzy numbers on R.

Definition 1. (Zadeh, 1965) Let *u*: R → [0, 1] be a fuzzy set on R satisfying:

1) Regularity: there exists *x*_{0} ∈ R such that *u*(*x*_{0}) = 1;

2) Bounded closed interval: for every α ∈ (0, 1], the α-cut [*u*]_{α} = {*x* ∈ R: *u*(*x*) ≥ α} is a bounded closed interval.

Then we call *u* a fuzzy number.

Definition 2. (Hu, 2010)

Definition 3. (Hu, 2010)

where

1)

2)

3)

4)

Here,

Suppose

Definition 4. (Hu, 2010) A mapping *T*: [0, 1] × [0, 1] → [0, 1] is called a triangular norm (*T*-norm) if, for all *x*, *y*, *z* ∈ [0, 1], it satisfies:

1) commutative law: *T*(*x*, *y*) = *T*(*y*, *x*);

2) associative law: *T*(*x*, *T*(*y*, *z*)) = *T*(*T*(*x*, *y*), *z*);

3) monotonicity: *T*(*x*, *y*) ≤ *T*(*x*, *z*) whenever *y* ≤ *z*;

4) boundary condition: *T*(*x*, 1) = *x*.

Then we use *T* to denote a *T*-norm on [0, 1].

Proposition 1. (Hu, 2010)

where

Definition 5. (Hu, 2010) The drastic product *T*_{W} is the *T*-norm defined by *T*_{W}(*x*, *y*) = *y* if *x* = 1, *T*_{W}(*x*, *y*) = *x* if *y* = 1, and *T*_{W}(*x*, *y*) = 0 otherwise.

Hence, we use *T*_{W} to denote the drastic product, the shape-preserving operator used throughout this paper.
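To make the operator concrete, here is a small Python sketch (our own illustration; the paper's computations were done in MATLAB) of the drastic product and the *T*_{W}-based addition of trapezoidal fuzzy numbers written as (a, b, l, r), i.e. core [a, b] with left and right spreads l and r. Under the standard sup-*T*_{W} extension, addition is shape preserving: cores add and spreads combine by maximum rather than by summation.

```python
# Sketch (our notation, not verbatim from the paper): the drastic
# product T_W and the T_W-based addition of trapezoidal fuzzy numbers.

def t_w(x, y):
    # drastic product: T_W(x, 1) = x, T_W(1, y) = y, else 0
    if x == 1.0:
        return y
    if y == 1.0:
        return x
    return 0.0

def t_w_add(u, v):
    # T_W-based (sup-T_W convolution) addition: cores add,
    # spreads take the maximum rather than the sum (this is the
    # shape-preserving effect, unlike the usual min-based extension)
    a, b, l, r = u
    c, d, m, n = v
    return (a + c, b + d, max(l, m), max(r, n))

u = (2.0, 3.0, 0.5, 0.2)
v = (1.0, 1.5, 0.3, 0.4)
print(t_w_add(u, v))  # (3.0, 4.5, 0.5, 0.4)
```

Note how the result stays trapezoidal with the same shape functions, which is precisely why *T*_{W} is preferred for *LL*-type arithmetic here.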

Proposition 2. Let

1)

2)

3)

Proposition 3. Let

so we can get

Proof. Let

1) For

2) For

3) For

It follows that

Remark. Compare with Propositions 1-3 in Wang (2016).

Proposition 4.

Proof. From Proposition 3, we can get that

Now, we give

Let

Proposition 5. Let

Proposition 6. Let

Definition 6. (Xu and Li, 2001) Set

where

Theorem 1. Set

where

Proof. For

so,

further, we can get

Hence, we complete the proof of Theorem 1.

In the following discussion, we set

In this section, we consider a sample of *n* observations, denoted by

Now, we define set

We determine each estimated value

Finally, we draw the conclusion:

Considering computational efficiency, we design the following specific steps. The whole process is implemented in MATLAB.

Step 1: Calculate

Step 2: Determine set

Step 3: Compare the sign of

Based on the above, we can obtain the least-squares regression of

where,

Let

The other cases can be calculated similarly.
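The numerical details of Steps 1-3 are abbreviated in this excerpt. As a rough, hypothetical stand-in (not the paper's actual algorithm), the skeleton below estimates coefficients by crisp least squares on the core midpoints; the sign of the estimated coefficient then determines which case of the spread formulas applies, in the spirit of Step 3:

```python
# Hypothetical fitting skeleton: crisp least squares on core midpoints
# of trapezoidal data (a, b, l, r); the resulting coefficient sign
# selects the applicable case of the T_W spread formulas.

def ls_fit(xs, ys):
    # closed-form least squares for y = a + b*x on crisp values
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def mid(t):
    # midpoint of the core [a, b] of a trapezoidal number (a, b, l, r)
    return (t[0] + t[1]) / 2

# toy data shaped like the paper's tables: (a, b, l, r) tuples
xs = [(1.0, 2.0, 0.2, 0.2), (2.0, 3.0, 0.3, 0.3), (3.0, 4.0, 0.2, 0.4)]
ys = [(2.5, 3.5, 0.3, 0.3), (3.5, 4.5, 0.4, 0.4), (4.5, 5.5, 0.3, 0.5)]

a, b = ls_fit([mid(x) for x in xs], [mid(y) for y in ys])
print(a, b)  # b > 0 here; its sign decides which spread case applies
```

This is only a scaffold: the paper's actual steps additionally fit the spread components under the *T*_{W} arithmetic, case by case.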

For the fuzzy linear regression model (14), let

1) Error Index (Kim and Bishu, 1998)

2) Similarity Measure (Rezaei et al., 2006)

3) Distance Criterion

Inspired by Chen and Hsueh (2007), we define the Distance Criterion as follows.

Each index has its own pros and cons. In general, smaller values of the Error Index and the Distance Criterion, and larger values of the Similarity Measure, indicate a better fit.
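The formal definitions of the three criteria are given in the cited works; the sketch below uses simplified, component-wise stand-ins (our own assumptions, not the paper's exact formulas) just to show how each criterion scores an observed against a fitted trapezoidal output:

```python
# Illustrative only: simplified versions of the three criteria for
# trapezoidal numbers (a, b, l, r). The paper's Error Index, Similarity
# Measure, and Distance Criterion are defined via membership functions
# and may weight components differently.

def distance(u, v):
    # Euclidean distance over the four components (an assumption,
    # standing in for the Xu-Li type distance of Definition 6)
    return sum((p - q) ** 2 for p, q in zip(u, v)) ** 0.5

def error_index(y_obs, y_fit):
    # total absolute component error, scaled by the observed magnitude
    num = sum(abs(p - q) for p, q in zip(y_obs, y_fit))
    den = sum(abs(p) for p in y_obs)
    return num / den if den else num

def similarity(y_obs, y_fit):
    # overlap of the cores [a, b]: |intersection| / |union|
    (a1, b1, _, _), (a2, b2, _, _) = y_obs, y_fit
    inter = max(0.0, min(b1, b2) - max(a1, a2))
    union = max(b1, b2) - min(a1, a2)
    return inter / union if union else 1.0

y_obs = (4.30, 4.40, 0.30, 0.40)  # first observation of the design set
y_fit = (4.20, 4.50, 0.35, 0.40)  # hypothetical fitted value
print(error_index(y_obs, y_fit))
print(similarity(y_obs, y_fit))
print(distance(y_obs, y_fit))
```

Summing each score over the sample gives the "Sum of" columns reported in the comparison tables below.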

Example 1. The source sample data were generated randomly in MATLAB. First, we consider the model:

where

From

Example 2. The source sample data comes from

i | x | y |
---|---|---|
1 | (2.7342, 3.0370, 0.5068, 0.6493) | (−0.2622, 4.0825, 0.6835, 1.5185) |
2 | (2.1042, 3.9744, 0.3281, 0.7629) | (−0.9008, 5.9466, 0.5261, 1.9872) |
3 | (2.7926, 3.7264, 0.7535, 0.5757) | (−0.2184, 5.4421, 0.7535, 1.8632) |
4 | (2.7827, 3.1480, 0.8360, 0.6319) | (−0.2115, 4.3029, 0.8360, 1.5740) |
5 | (2.5324, 3.1479, 0.2537, 0.2782) | (−0.4656, 4.2993, 0.6331, 1.5739) |
6 | (2.2534, 3.7048, 0.5344, 0.8398) | (−0.7496, 5.4072, 0.5633, 1.8524) |
7 | (2.0710, 3.3810, 0.4352, 0.4268) | (−0.9361, 4.7564, 0.5177, 1.6905) |
8 | (2.6258, 3.0764, 0.1577, 0.6316) | (−0.3719, 4.1577, 0.6565, 1.5382) |
9 | (2.0247, 3.4108, 0.6005, 0.8335) | (−0.9709, 4.8273, 0.6005, 1.7054) |
10 | (2.0620, 3.1430, 0.9375, 0.2702) | (−0.9306, 4.2934, 0.9375, 1.5715) |
11 | (2.1296, 3.7989, 0.1078, 0.4008) | (−0.8828, 5.5940, 0.5324, 1.8995) |
12 | (2.4506, 3.9302, 0.9000, 0.5543) | (−0.5421, 5.8741, 0.9000, 1.9651) |
13 | (2.6723, 3.0047, 0.5505, 0.4439) | (−0.3177, 4.0214, 0.6681, 1.5024) |
14 | (2.8561, 3.6500, 0.4274, 0.0904) | (−0.1567, 5.2887, 0.7140, 1.8250) |
15 | (2.4984, 3.6785, 0.1524, 0.7444) | (−0.5114, 5.3499, 0.6246, 1.8393) |
16 | (2.0488, 3.2536, 0.2475, 0.0326) | (−0.9579, 4.5015, 0.5122, 1.6268) |
17 | (2.3138, 3.8432, 0.4474, 0.4297) | (−0.6842, 5.6912, 0.5785, 1.9216) |
18 | (2.6416, 3.2940, 0.5328, 0.0373) | (−0.3679, 4.5792, 0.6604, 1.6470) |
19 | (2.7864, 3.0269, 0.3547, 0.9758) | (−0.2210, 4.0525, 0.6966, 1.9516) |
20 | (2.2892, 3.0933, 0.7731, 0.5223) | (−0.7074, 4.1906, 0.7731, 1.5467) |
21 | (2.4979, 3.7979, 0.8817, 0.9096) | (−0.4932, 5.6112, 0.8817, 1.8989) |
22 | (2.8184, 3.7114, 0.7341, 0.3832) | (−0.1934, 5.4187, 0.7341, 1.8557) |
23 | (2.5951, 3.7834, 0.4064, 0.8845) | (−0.4112, 5.5614, 0.6488, 1.8917) |
24 | (2.5364, 3.6239, 0.6042, 0.2550) | (−0.4520, 5.2606, 0.6341, 1.8120) |
25 | (2.3309, 3.8254, 0.6411, 0.9090) | (−0.6721, 5.6505, 0.6411, 1.9127) |
26 | (2.4117, 3.0350, 0.1275, 0.8946) | (−0.5863, 4.0741, 0.6029, 1.7891) |
27 | (2.7940, 3.4055, 0.4962, 0.3985) | (−0.2158, 4.8057, 0.6985, 1.7027) |
28 | (2.3432, 3.2497, 0.3105, 0.6250) | (−0.6466, 4.5151, 0.5858, 1.6248) |
29 | (2.4626, 3.4809, 0.5786, 0.5676) | (−0.5319, 4.9685, 0.6157, 1.7404) |
30 | (2.3678, 3.8808, 0.9436, 0.8945) | (−0.6274, 5.7675, 0.9436, 1.9404) |
31 | (2.6796, 3.2807, 0.4269, 0.2142) | (−0.3241, 4.5581, 0.6699, 1.6403) |
32 | (2.5678, 3.5991, 0.0331, 0.0039) | (−0.4311, 5.2096, 0.6419, 1.7996) |
33 | (2.6518, 3.0262, 0.9294, 0.8806) | (−0.3449, 4.0569, 0.9294, 1.7612) |
34 | (2.4911, 3.1552, 0.9250, 0.2351) | (−0.5033, 4.3215, 0.9250, 1.5776) |
35 | (2.3985, 3.8339, 0.3583, 0.2449) | (−0.6072, 5.6628, 0.5996, 1.9170) |
36 | (2.4775, 3.1949, 0.2600, 0.6409) | (−0.5151, 4.3977, 0.6194, 1.5974) |
37 | (2.0666, 3.8298, 0.7869, 0.3045) | (−0.9204, 5.6728, 0.7869, 1.9149) |
38 | (2.4110, 3.3381, 0.5116, 0.8256) | (−0.5957, 4.6724, 0.6028, 1.6690) |
39 | (2.9691, 3.6711, 0.5625, 0.8837) | (−0.0226, 5.3533, 0.7423, 1.8356) |
40 | (2.7807, 3.0524, 0.6848, 0.9454) | (−0.2268, 4.0990, 0.6952, 1.8907) |
41 | (2.7290, 3.7343, 0.0924, 0.3908) | (−0.2659, 5.4758, 0.6823, 1.8672) |
42 | (2.7657, 3.4995, 0.8726, 0.8013) | (−0.2445, 4.9945, 0.8726, 1.7497) |
43 | (2.7566, 3.9433, 0.9429, 0.1571) | (−0.2564, 5.8819, 0.9429, 1.9716) |
44 | (2.8433, 3.2898, 0.0966, 0.6252) | (−0.1618, 4.5774, 0.7108, 1.6449) |
45 | (2.7702, 3.3766, 0.8459, 0.6990) | (−0.2269, 4.7568, 0.8459, 1.6883) |
46 | (2.9787, 3.1138, 0.9094, 0.0859) | (−0.0286, 4.2232, 0.9094, 1.5569) |
47 | (2.1114, 3.9649, 0.0113, 0.5312) | (−0.8998, 5.9267, 0.5278, 1.9824) |
48 | (2.3961, 3.4325, 0.5237, 0.8886) | (−0.5973, 4.8722, 0.5990, 1.7771) |
49 | (2.4921, 3.0846, 0.6503, 0.2637) | (−0.5003, 4.1778, 0.6503, 1.5423) |
50 | (2.2581, 3.7167, 0.3851, 0.2348) | (−0.7506, 5.4280, 0.5645, 1.8583) |

Model | Sum of Error Index | Sum of Similarity Measure | Sum of Distance Criterion |
---|---|---|---|
… | 0.0933 | 49.9068 | 0.5426 |
… | 3.9547 | 46.2495 | 9.8448 |
… | 65.2061 | 21.6200 | 145.6092 |

i | x | y |
---|---|---|
1 | (0.45, 0.55, 0.045, 0.045) | (4.30, 4.40, 0.30, 0.40) |
2 | (0.90, 1.10, 0.090, 0.090) | (3.75, 4.25, 0.25, 0.25) |
3 | (1.35, 1.65, 0.135, 0.135) | (5.10, 5.40, 0.30, 0.40) |
4 | (1.80, 2.20, 0.180, 0.180) | (5.25, 5.75, 0.25, 0.25) |
5 | (2.25, 2.75, 0.225, 0.225) | (5.70, 6.00, 0.30, 0.50) |
6 | (2.70, 3.30, 0.270, 0.270) | (7.00, 8.00, 0.50, 0.50) |
7 | (3.15, 3.85, 0.315, 0.315) | (6.50, 7.00, 0.25, 0.50) |
8 | (3.60, 4.40, 0.360, 0.360) | (6.25, 6.75, 0.25, 0.25) |
9 | (4.05, 4.95, 0.405, 0.405) | (6.90, 7.65, 0.25, 0.25) |
10 | (4.50, 5.50, 0.450, 0.450) | (8.25, 8.75, 0.25, 0.25) |
11 | (4.95, 6.05, 0.495, 0.495) | (8.00, 8.50, 0.25, 0.50) |
12 | (5.40, 6.60, 0.540, 0.540) | (7.50, 8.50, 0.50, 0.50) |
13 | (5.85, 7.15, 0.585, 0.585) | (8.50, 9.50, 0.50, 0.50) |
14 | (6.30, 7.70, 0.630, 0.630) | (10.25, 10.75, 0.25, 0.25) |
15 | (6.75, 8.25, 0.675, 0.675) | (9.25, 10.40, 0.55, 0.60) |
16 | (7.20, 8.80, 0.720, 0.720) | (9.25, 9.75, 0.25, 0.25) |

Model | Sum of Error Index | Sum of Similarity Measure | Sum of Distance Criterion |
---|---|---|---|
… | 13.6770 | 8.6117 | 12.8646 |
… | 15.0234 | 7.3054 | 15.8693 |
… | 14.3705 | 7.4931 | 15.2849 |

(Figure: the horizontal axis represents the value of the independent variable; the vertical axis represents the values of the components of the trapezoidal fuzzy number.)

From

Example 3. The source sample data comes from

i | *x*_{1} | *x*_{2} | *x*_{3} | y |
---|---|---|---|---|
1 | (9.975, 11.025, 0.525, 0.525) | (8.360, 9.240, 0.440, 0.440) | (14.820, 16.380, 0.780, 0.780) | (6, 7, 1, 1) |
2 | (8.455, 9.345, 0.445, 0.445) | (8.360, 9.240, 0.440, 0.440) | (14.820, 16.380, 0.780, 0.780) | (8, 8, 1, 2) |
3 | (9.880, 10.920, 0.520, 0.520) | (8.360, 9.240, 0.440, 0.440) | (15.865, 17.535, 0.835, 0.835) | (6, 7, 1, 1) |
4 | (11.875, 13.125, 0.625, 0.625) | (13.015, 14.385, 0.685, 0.685) | (21.090, 23.310, 1.110, 1.110) | (5, 5, 1, 1) |
5 | (8.550, 9.450, 0.450, 0.450) | (7.790, 8.610, 0.410, 0.410) | (14.820, 16.380, 0.780, 0.780) | (2, 3, 1, 1) |
6 | (10.165, 11.235, 0.535, 0.535) | (8.455, 9.345, 0.445, 0.445) | (15.105, 16.695, 0.795, 0.795) | (5, 5, 1, 1) |
7 | (14.820, 16.380, 0.780, 0.780) | (9.975, 11.025, 0.525, 0.525) | (14.820, 16.380, 0.780, 0.780) | (4, 5, 1, 1) |
8 | (9.120, 10.080, 0.480, 0.480) | (7.505, 8.295, 0.395, 0.395) | (14.155, 15.645, 0.745, 0.745) | (2, 3, 1, 1) |
9 | (9.120, 10.080, 0.480, 0.480) | (6.840, 7.560, 0.360, 0.360) | (12.635, 13.965, 0.665, 0.665) | (5, 5, 1, 1) |
10 | (10.450, 11.550, 0.550, 0.550) | (6.935, 7.665, 0.365, 0.365) | (14.155, 15.645, 0.745, 0.745) | (7, 8, 1, 1) |
11 | (10.735, 11.865, 0.565, 0.565) | (7.695, 8.505, 0.405, 0.405) | (13.015, 14.385, 0.685, 0.685) | (4, 5, 1, 1) |
12 | (10.260, 11.340, 0.540, 0.540) | (8.265, 9.135, 0.435, 0.435) | (14.630, 16.170, 0.770, 0.770) | (6, 7, 1, 1) |
13 | (10.735, 11.865, 0.565, 0.565) | (8.170, 9.030, 0.430, 0.430) | (14.725, 16.275, 0.775, 0.775) | (6, 7, 1, 1) |
14 | (9.215, 10.185, 0.485, 0.485) | (8.075, 8.925, 0.425, 0.425) | (15.105, 16.695, 0.795, 0.795) | (5, 5, 1, 1) |
15 | (9.595, 10.605, 0.505, 0.505) | (5.415, 5.985, 0.285, 0.285) | (11.305, 12.495, 0.595, 0.595) | (7, 8, 1, 1) |
16 | (10.925, 12.075, 0.575, 0.575) | (13.965, 15.435, 0.735, 0.735) | (19.000, 21.000, 1.000, 1.000) | (2, 3, 1, 1) |
17 | (11.875, 13.125, 0.625, 0.625) | (14.725, 16.275, 0.775, 0.775) | (19.950, 22.050, 1.050, 1.050) | (2, 3, 1, 1) |
18 | (9.500, 10.500, 0.500, 0.500) | (9.405, 10.395, 0.495, 0.495) | (15.390, 17.010, 0.810, 0.810) | (4, 5, 1, 1) |
19 | (14.250, 15.750, 0.750, 0.750) | (8.360, 9.240, 0.440, 0.440) | (11.400, 12.600, 0.600, 0.600) | (4, 5, 1, 1) |
20 | (8.075, 8.925, 0.425, 0.425) | (5.700, 6.300, 0.300, 0.300) | (14.820, 16.380, 0.780, 0.780) | (7, 8, 1, 1) |
21 | (9.215, 10.185, 0.485, 0.485) | (7.030, 7.770, 0.370, 0.370) | (16.435, 18.165, 0.865, 0.865) | (7, 8, 1, 1) |
22 | (13.965, 15.435, 0.735, 0.735) | (6.270, 6.930, 0.330, 0.330) | (15.010, 16.590, 0.790, 0.790) | (8, 8, 1, 2) |
23 | (11.685, 12.915, 0.615, 0.615) | (8.360, 9.240, 0.440, 0.440) | (19.665, 21.735, 1.035, 1.035) | (8, 8, 1, 2) |
24 | (8.740, 9.660, 0.460, 0.460) | (5.510, 6.090, 0.290, 0.290) | (16.340, 18.060, 0.860, 0.860) | (8, 8, 1, 2) |

Model | Sum of Error Index | Sum of Similarity Measure | Sum of Distance Criterion |
---|---|---|---|
… | 22.1635 | 10.5348 | 29.5841 |
… | 25.6571 | 10.3127 | 5.4190 |
… | 34.7905 | 5.5778 | 26.5890 |

From

In this study, we took advantage of the drastic product and classical LSD, and used them to build a new fuzzy regression model for *LL*-type trapezoidal fuzzy numbers.

Although the experimental results show that our proposed model performs better, computational complexity remains a potential problem, even though the optimized program mitigates it to some extent: the larger the sample size or the number of variables, the more complex the computation. In future research, we will study how to perform better when the sample size is large or when there are outliers in the sample sets, and we will apply the method to non-linear fuzzy regression analysis.

The authors appreciate the helpful comments of the referees on this manuscript.

Sun, J. and Lu, Q.J. (2017) Regression Analysis of a Kind of Trapezoidal Fuzzy Numbers Based on a Shape Preserving Operator. Journal of Data Analysis and Information Processing, 5, 96-114. https://doi.org/10.4236/jdaip.2017.53008