The scope of this paper is wind speed forecasting. Wind speed depends on temperature, wind direction, relative humidity, precipitation and air pressure, which together make forecasting a complex problem, and neural network performance is strongly influenced by a proper choice of hidden layer neuron units. This paper proposes new criteria for determining the appropriate number of hidden layer neuron units and a novel hybrid method for enhanced wind speed forecasting. The two main contributions are: 1) both overfitting and underfitting are avoided through the proposed criteria-based estimation of hidden layer neuron units; 2) the ELMAN neural network is optimized with a Modified Grey Wolf Optimizer (MGWO). The performance and effectiveness of the proposed hybrid method (ELMAN-MGWO) are confirmed by comparison with the Grey Wolf Optimizer (GWO), Adaptive Gbest-guided Gravitational Search Algorithm (GGSA), Artificial Bee Colony (ABC), Ant Colony Optimization (ACO), Cuckoo Search (CS), Particle Swarm Optimization (PSO), Evolution Strategy (ES) and Genetic Algorithm (GA), while the effectiveness and precision of the proposed criteria are verified against other existing selection criteria. Three real-time wind data sets are used to analyze the performance of the proposed approach. Simulation results demonstrate that ELMAN-MGWO achieves mean square errors (AVG ± STD) of 4.1379e-11 ± 1.0567e-15, 6.3073e-11 ± 3.5708e-15 and 7.5840e-11 ± 1.1613e-14 on the three real-time data sets, respectively. Hence, the proposed hybrid method is more precise and robust than existing methods and enhances wind speed forecasting.

In recent years, wind energy has drawn much attention because of its special features: it is inexhaustible, pollution-free, freely available and renewable. Owing to the irregular and uncertain characteristics of wind speed, wind speed forecasting plays a major role in wind farm and power system planning, scheduling, control and integration. Hence, many researchers have focused on accurate wind speed forecasting in the literature: “Rough set theory and principal component analysis techniques incorporated into an ELMAN neural network for short-term wind speed forecasting were presented by Yin, et al. [

An artificial neural network is an information processing structure built from interlinked elementary processing units (neurons). Feed-forward and feedback networks are the two general types of artificial neural network. The feedback neural network has a profound effect on the ability to model nonlinear dynamic phenomena and on learning capacity. Determining the number of hidden layer units in an artificial neural network is a crucial and challenging task: a random choice of hidden layer units causes overfitting or underfitting. Previous work on determining the hidden layer neuron units includes: “Arai [

Optimizers are able to solve complex optimization problems and are used to approximate the global optimum in order to improve performance. Many researchers have introduced new heuristic algorithms; some of the most familiar in the literature are: “Genetic Algorithm (GA) proposed by Holland [

This paper analyzes the performance of the ELMAN neural network under the 131 proposed criteria for proper determination of hidden layer neuron units. The proposed new criteria estimate the proper number of hidden neuron units in the ELMAN neural network, yielding better forecasting results than existing criteria, and a novel Modified Grey Wolf Optimizer (MGWO) is proposed to optimize the ELMAN neural network so as to improve wind speed forecasting accuracy over earlier methods.

“The ELMAN neural network (ENN) is a partial recurrent network model first introduced by Elman in 1990 [

The input layer of the proposed ELMAN neural network based wind speed forecasting model consists of six input neurons: temperature (

Input vector,

Output vector,

Weight vectors of input to the hidden vector,

Weight vectors of recurrent link layer vector,

Weight vectors between context layer and input vector,

ELMAN network inputs,

ELMAN network output,

Input of recurrent link layer,

Let,

In the suggested ELMAN neural network, each layer performs independent calculations on the information it receives, passes the results to the succeeding layer and finally determines the network output; this can be observed from the

The neural network design process plays a vital role in network performance. The proposed ELMAN network design parameters, including dimensions and epochs, are shown in

The determination of hidden layer neuron units in a neural network is a tough task; an improper choice of hidden

ELMAN Neural Network Parameter | Value
---|---
Input neurons | 6
Number of hidden layers | 1
Output neurons | 1
Number of epochs | 2000
Threshold | 1

layer neuron units causes either underfitting or overfitting. Several researchers have suggested approaches for selecting the hidden layer neuron units in neural networks, which can be categorized into pruning and constructive approaches. “In the pruning approach the neural network starts larger than the usual size and superfluous neurons and weights are removed until the minimum is found, whereas in the constructive approach the network begins smaller than the usual size and additional hidden units are then added, as specified by Jin-Yan Li, et al., Mao and Guang-Bin Huang [

1) The proposed 131 criteria are functions of the number of input neurons and are justified based on the convergence theorem.

2) Both overfitting and underfitting problems are avoided.

3) The exact number of hidden layer neuron units is estimated with minimal computational complexity.

All considered criteria were tested on the ELMAN neural network, and training and testing performance was measured by statistical error. Implementation begins by applying the proposed criteria to the ELMAN neural network; the network is then trained and the mean square error (MSE) is calculated. Network performance is assessed by the mean square error; Equation (10) gives the MSE formula. The criterion with the lowest MSE is the best criterion for determining the hidden layer neuron units in the ELMAN neural network.

The error evaluation measure is defined as follows:

MSE = (1/N) Σ_{i=1}^{N} (Y_i − Ŷ_i)²  (10)

where Y_i is the actual wind speed, Ŷ_i is the forecast wind speed and N is the number of data samples.
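The MSE computation and lowest-MSE criterion selection described above can be sketched in Python; the `train_and_eval` callback below is a hypothetical placeholder standing in for the ELMAN training procedure, which is not reproduced here, and the synthetic error profile is purely illustrative.

```python
import numpy as np

def mse(actual, forecast):
    """Mean square error between actual and forecast wind speeds (Equation (10))."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return float(np.mean((actual - forecast) ** 2))

def select_best_criterion(hidden_unit_candidates, train_and_eval):
    """Return the hidden-unit count whose trained network gives the lowest MSE.

    train_and_eval(h) is a hypothetical callback: it trains the network with
    h hidden units and returns the resulting test MSE.
    """
    errors = {h: train_and_eval(h) for h in hidden_unit_candidates}
    best = min(errors, key=errors.get)
    return best, errors

# Synthetic error profile whose minimum lies at 32 hidden units,
# mirroring the best count reported in the paper's experiments.
best, errs = select_best_criterion(range(1, 132), lambda h: (h - 32) ** 2 + 1e-5)
```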

“GWO was recently introduced by Mirjalili, et al. [

The favorable area of the search space is attained by the wolves

where i is the current epoch,

Coefficient vectors are enumerated as follows:

The rest of the candidates (

where,

The relative distance between the present solution and

The steps involved in the MGWO algorithm are as follows:

Step 1: Randomly initialize a population of wolves based on the upper and lower bounds of the variables.

Step 2: Compute the fitness value of each wolf.

Step 3: Store the first three best wolves as

Step 4: Update the rest of the search agents (

Step 5: Update the variables u, H and G.

Step 6: If the stopping criterion is not satisfied, go to Step 2.

Step 7: Record
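The steps above can be sketched on a toy problem. The sketch below follows the canonical grey wolf update of Mirjalili et al. (coefficient vectors A and C, control parameter linearly decreased from 2 to 0); the paper's specific modification via u, H and G is not reproduced, and the sphere objective is purely illustrative.

```python
import numpy as np

def gwo(fitness, dim, n_wolves=30, max_iter=50, lb=-10.0, ub=10.0, seed=0):
    """Canonical grey wolf update: wolves move toward alpha, beta and delta."""
    rng = np.random.default_rng(seed)
    wolves = rng.uniform(lb, ub, (n_wolves, dim))  # Step 1: random initialization
    for it in range(max_iter):
        scores = np.array([fitness(w) for w in wolves])   # Step 2: fitness
        order = np.argsort(scores)
        # Step 3: the three best wolves guide the rest of the pack.
        leaders = [wolves[order[k]].copy() for k in range(3)]
        a = 2.0 - 2.0 * it / max_iter  # control parameter, linearly decreased 2 -> 0
        for i in range(n_wolves):      # Step 4: update remaining search agents
            new_pos = np.zeros(dim)
            for leader in leaders:
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2 * a * r1 - a     # coefficient vector A
                C = 2 * r2             # coefficient vector C
                D = np.abs(C * leader - wolves[i])
                new_pos += leader - A * D
            wolves[i] = np.clip(new_pos / 3.0, lb, ub)
    scores = np.array([fitness(w) for w in wolves])
    best = wolves[int(np.argmin(scores))]  # Step 7: record the best solution
    return best, float(scores.min())

# Toy objective: sphere function, whose minimum 0 lies at the origin.
best, val = gwo(lambda x: float(np.sum(x ** 2)), dim=2)
```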

A flow chart of the proposed novel Modified Grey Wolf Optimizer (MGWO) algorithm is shown in

Optimization techniques help achieve the highest forecasting accuracy. Therefore, the novel Modified Grey Wolf Optimizer (MGWO) is applied to optimize the ELMAN neural network for the wind speed forecasting application.

Three real-time observation sets are used for this analysis. Data set 1 is collected from the National Oceanic and

Atmospheric Administration, United States, from January 1994 to December 2014. Data set 2 and Data set 3 were obtained from Suzlon Energy Private Limited from January 2010 to December 2014, at different wind mill heights. Each data set contains 100,000 data samples. The neural network based wind speed forecasting approach involves design, training and testing stages. Exact modeling of a neural network is a difficult task. The input attributes gathered for the ELMAN neural network are real-time data; a scaling procedure is therefore used to avoid training issues in which large-valued input data diminish the impact of small-valued input data. The min-max scaling method is used to scale the real-time data into the range 0 to 1; scaling is computed according to Equation (20). The scaling process improves numerical computational accuracy, and thus the accuracy of the ELMAN neural network model is also enhanced.

Scaled input: x' = (x − x_min)/(x_max − x_min)  (20)

where x is the raw input value and x_min and x_max are the minimum and maximum values of the corresponding input attribute.
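The min-max scaling of Equation (20) can be sketched as follows; this is a minimal illustration in which the attribute-wise minima and maxima are assumed to be taken over the available data.

```python
import numpy as np

def min_max_scale(x):
    """Scale each input attribute into [0, 1]: x' = (x - x_min) / (x_max - x_min)."""
    x = np.asarray(x, dtype=float)
    x_min = x.min(axis=0)
    x_max = x.max(axis=0)
    return (x - x_min) / (x_max - x_min)

scaled = min_max_scale([2.0, 4.0, 6.0])  # -> [0.0, 0.5, 1.0]
```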

The proposed ELMAN neural network is evaluated on three real-time data sets; each data set consists of 100,000 data samples, divided into training and testing sets. 70% of the data (70,000 samples) are used for the network training phase and 30% (30,000 samples) for the testing phase. The proposed neural network design and all algorithms are implemented in MATLAB and simulated on an Acer computer with a Pentium (R) Dual Core processor running at 2.30 GHz with 2 GB of RAM.
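The 70/30 division can be sketched as a simple chronological split (an illustrative sketch: no shuffling, so the time order of the wind series is preserved; the sample counts follow the description above).

```python
def train_test_split_sequential(data, train_fraction=0.7):
    """Split a time series chronologically into training and testing segments."""
    cut = int(len(data) * train_fraction)
    return data[:cut], data[cut:]

samples = list(range(100_000))  # placeholder for 100,000 wind records
train, test = train_test_split_sequential(samples)
```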

The proper number of hidden layer neuron units must be determined for each specific problem. Existing strategies use a trial-and-error rule for deciding the hidden layer neuron units of neural networks: the network begins with fewer hidden layer neuron units than usual and additional hidden neurons are included in the

Justification for the Chosen Criteria

Justification for the determination of hidden layer neuron units proceeds from the convergence theorem discussed in the Appendix. Lemma 1 confirms the convergence of the chosen criteria.

Lemma 1:

The sequence

The sequence tends to a finite limit l if there exists a constant

Proof:

The proof of Lemma 1 is as follows.

According to the convergence theorem, the selected value (or sequence) converges to a finite limit.
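As a hypothetical illustration of such convergence (the sequence below is not one of the paper's criteria, merely an example with the same behavior), consider a_n = (5n + 2)/(n + 1) = 5 − 3/(n + 1), whose terms are bounded and tend to the finite limit 5.

```python
def a(n):
    """Illustrative convergent sequence: a_n = 5 - 3/(n + 1), with limit 5."""
    return (5 * n + 2) / (n + 1)

# Successive terms approach the limit 5 from below.
terms = [a(n) for n in (1, 10, 100, 1_000_000)]
```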

Hidden Neuron Units | Examined Criteria for Determination of Hidden Neuron Units | Data Set 1 MSE | Data Set 2 MSE | Data Set 3 MSE |
---|---|---|---|---|

33 | 7.5783e−04 | 7.5783e−04 | 7.5783e−04 | |

130 | 3.3291e−04 | 3.4342e−04 | 3.9400e−04 | |

12 | 9.5408e−04 | 1.1030e−03 | 1.1380e−03 | |

74 | 8.0641e−04 | 8.4120e−04 | 8.6321e−04 | |

119 | 0.0016 | 0.0019 | 0.0020 | |

23 | 0.0011 | 0.0012 | 0.0013 | |

89 | 6.7827e−04 | 6.9402e−04 | 7.0476e−04 | |

62 | 8.8395e−04 | 8.9911e−04 | 9.1059e−04 | |

35 | 0.0018 | 0.0021 | 0.0022 | |

124 | 3.1087e−04 | 3.4500e−04 | 3.6651e−04 | |

50 | 5.0931e−04 | 5.3125e−04 | 5.5697e−04 | |

116 | 3.7581e−04 | 3.9733e−04 | 4.1111e−04 | |

85 | 4.6975e−04 | 4.8956e−04 | 4.9577e−04 | |

28 | 4.7696e−04 | 4.9023e−04 | 5.0789e−04 | |

106 | 5.9167e−04 | 6.0045e−04 | 6.1072e−04 | |

57 | 4.7917e−04 | 4.8232e−04 | 5.2004e−04 | |

70 | 7.1026e−04 | 7.5088e−04 | 7.7720e−04 | |

92 | 5.0730e−04 | 5.6589e−04 | 5.8301e−04 | |

111 | 0.0010 | 0.0011 | 0.0012 | |

8 | 0.0013 | 0.0016 | 0.0019 | |

46 | 6.4050e−04 | 6.7652e−04 | 6.9137e−04 | |

128 | 0.0016 | 0.0018 | 0.0019 | |

76 | 7.1067e−04 | 7.3284e−04 | 7.6471e−04 | |

131 | 0.0011 | 0.0013 | 0.0015 | |

60 | 0.0013 | 0.0015 | 0.0016 | |

51 | 9.1476e−04 | 9.4678e−04 | 9.7125e−04 | |

26 | 0.0016 | 0.0018 | 0.0019 | |

102 | 7.2283e−04 | 7.5392e−04 | 7.7487e−04 | |

11 | 0.0013 | 0.0014 | 0.0016 | |

94 | 0.0012 | 0.0014 | 0.0015 | |

36 | 0.0012 | 0.0015 | 0.0018 | |

87 | 7.6306e−04 | 7.8540e−04 | 7.9121e−04 | |

55 | 4.2964e−04 | 4.6008e−04 | 4.8257e−04 | |

24 | 0.0023 | 0.0026 | 0.0029 | |

80 | 0.0017 | 0.0018 | 0.0019 | |

108 | 2.9367e−04 | 3.1955e−04 | 3.4276e−04 | |

16 | 6.1773e−04 | 6.9457e−04 | 7.0121e−04 | |

72 | 6.7910e−04 | 6.9122e−04 | 7.0011e−04 | |

121 | 8.6702e−04 | 8.9780e−04 | 9.2144e−04 | |

7 | 6.8433e−04 | 6.9510e−04 | 7.0281e−04 | |

49 | 9.4473e−04 | 9.7568e−04 | 9.9045e−04 | |

68 | 9.1865e−04 | 9.7403e−04 | 9.9947e−04 |

110 | 0.0011 | 0.0012 | 0.0013 | |

14 | 0.0013 | 0.0016 | 0.0018 | |

84 | 4.5795e−04 | 4.7541e−04 | 4.9123e−04 | |

47 | 0.0013 | 0.0015 | 0.0017 | |

52 | 5.6372e−04 | 5.8620e−04 | 6.1781e−04 | |

129 | 6.9995e−04 | 7.2241e−04 | 7.5599e−04 | |

67 | 8.4212e−04 | 8.5627e−04 | 8.7560e−04 | |

120 | 0.0017 | 0.0018 | 0.0021 | |

34 | 0.0011 | 0.0012 | 0.0013 | |

65 | 0.0010 | 0.0011 | 0.0013 | |

41 | 9.7246e−04 | 9.8904e−04 | 1.0031e−03 | |

2 | 3.1075e−04 | 3.4312e−04 | 3.5422e−04 | |

105 | 0.0010 | 0.0012 | 0.0013 | |

78 | 7.4818e−04 | 7.6578e−04 | 7.8117e−04 | |

43 | 0.0014 | 0.0017 | 0.0019 | |

27 | 4.3249e−04 | 4.6417e−04 | 4.9710e−04 | |

112 | 4.1333e−04 | 4.3266e−04 | 4.6782e−04 | |

56 | 0.0011 | 0.0013 | 0.0014 | |

81 | 2.4019e−04 | 2.6387e−04 | 2.7023e−04 | |

9 | 6.7128e−04 | 6.8430e−04 | 6.9281e−04 | |

73 | 0.0010 | 0.0012 | 0.0013 | |

38 | 0.0020 | 0.0021 | 0.0023 | |

123 | 5.0271e−04 | 5.4437e−04 | 5.7759e−04 | |

63 | 7.8566e−04 | 7.9623e−04 | 8.0146e−04 | |

25 | 0.0024 | 0.0027 | 0.0030 | |

100 | 8.1476e−04 | 8.2761e−04 | 8.5690e−04 | |

96 | 0.0032 | 0.0035 | 0.0036 | |

18 | 9.7457e−04 | 1.0232e−03 | 1.3615e−03 | |

40 | 0.0022 | 0.0025 | 0.0027 | |

117 | 7.9870e−04 | 8.0045e−04 | 8.2458e−04 | |

3 | 0.0020 | 0.0024 | 0.0026 | |

69 | 5.0433e−04 | 5.3025e−04 | 5.6781e−04 | |

88 | 0.0013 | 0.0015 | 0.0017 | |

104 | 4.8877e−04 | 4.9761e−04 | 5.0780e−04 | |

20 | 6.2597e−04 | 6.5780e−04 | 6.9541e−04 | |

77 | 5.1210e−04 | 5.4209e−04 | 5.8122e−04 | |

54 | 3.9739e−04 | 4.2351e−04 | 4.5028e−04 | |

127 | 2.9093e−04 | 3.0314e−04 | 3.2410e−04 | |

10 | 0.0011 | 0.0012 | 0.0013 | |

58 | 0.0014 | 0.0016 | 0.0017 | |

114 | 9.2745e−04 | 9.5492e−04 | 9.7560e−04 | |

39 | 0.0014 | 0.0016 | 0.0018 | |

122 | 3.5541e−04 | 3.8012e−04 | 3.9956e−04 | |

61 | 0.0014 | 0.0017 | 0.0018 |

13 | 1.9842e−04 | 2.5013e−04 | 2.8917e−04 | |

45 | 5.7558e−04 | 5.8689e−04 | 6.1358e−04 | |

29 | 0.0013 | 0.0015 | 0.0017 | |

109 | 2.6082e−04 | 2.7920e−04 | 2.9215e−04 | |

6 | 0.0018 | 0.0020 | 0.0021 | |

53 | 0.0015 | 0.0017 | 0.0018 | |

118 | 0.0011 | 0.0012 | 0.0013 | |

37 | 0.0015 | 0.0017 | 0.0019 | |

126 | 7.9942e−04 | 8.0923e−04 | 8.2031e−04 | |

59 | 8.3566e−04 | 8.6168e−04 | 8.9610e−04 | |

17 | 0.0015 | 0.0019 | 0.0020 | |

83 | 6.8054e−04 | 6.8994e−04 | 6.9461e−04 | |

101 | 4.0597e−04 | 4.3111e−04 | 4.6787e−04 | |

93 | 5.6662e−04 | 5.7554e−04 | 5.8530e−04 | |

66 | 8.2555e−04 | 8.6592e−04 | 8.8509e−04 | |

42 | 4.8333e−04 | 4.9612e−04 | 5.1211e−04 | |

75 | 4.1790e−04 | 4.5239e−04 | 4.7854e−04 | |

30 | 7.7837e−04 | 7.9846e−04 | 8.3071e−04 | |

95 | 0.0015 | 0.0016 | 0.0017 | |

79 | 0.0012 | 0.0013 | 0.0015 | |

86 | 2.2093e−04 | 2.5033e−04 | 2.7311e−04 | |

22 | 5.5179e−04 | 5.9641e−04 | 6.0247e−04 | |

64 | 0.0025 | 0.0026 | 0.0028 | |

4 | 0.0012 | 0.0015 | 0.0018 | |

97 | 6.4348e−04 | 6.6046e−04 | 6.8751e−04 | |

82 | 5.0672e−04 | 5.5223e−04 | 5.8548e−04 | |

113 | 1.9020e−04 | 1.9941e−04 | 2.0102e−04 | |

90 | 0.0010 | 0.0011 | 0.0012 | |

48 | 0.0010 | 0.0012 | 0.0013 | |

71 | 0.0011 | 0.0012 | 0.0013 | |

19 | 8.5313e−04 | 8.9130e−04 | 9.3103e−04 | |

21 | 6.1903e−04 | 6.6278e−04 | 6.8543e−04 | |

103 | 9.4000e−04 | 9.6203e−04 | 9.9055e−04 | |

32 | 2.9538e−05 | 3.6592e−05 | 4.2505e−05 | |

5 | 0.0020 | 0.0023 | 0.0025 | |

91 | 0.0012 | 0.0014 | 0.0015 | |

107 | 6.4686e−04 | 6.7332e−04 | 6.8697e−04 | |

44 | 5.0210e−04 | 5.4579e−04 | 5.8435e−04 | |

125 | 4.1907e−04 | 4.3321e−04 | 4.5548e−04 | |

15 | 4.3777e−04 | 4.9050e−04 | 5.4307e−04 | |

98 | 8.8330e−04 | 8.9654e−04 | 9.1113e−04 | |

31 | 0.0027 | 0.0029 | 0.0031 | |

1 | 9.4851e−04 | 9.7128e−04 | 9.8429e−04 | |

115 | 0.0011 | 0.0012 | 0.0013 | |

99 | 7.2897e−04 | 7.4500e−04 | 7.7854e−04 |

Here 5 is the limit value of the chosen sequence as

Experimental results confirm that the chosen criteria

An analysis of the effectiveness of the proposed criteria, compared with existing criteria for estimating the hidden layer neuron units of the ELMAN neural network for wind speed forecasting, is presented in

S. No. | Various Methodologies | Year | Hidden Layer Neuron Units | Data set1 MSE | Data set2 MSE | Data set3 MSE |
---|---|---|---|---|---|---|

1 | Arai, M. Method | 1993 | 0.0025 | 0.0026 | 0.0028 | |

2 | Jin-Yan Li et al. Method | 1995 | 0.0012 | 0.0015 | 0.0018 | |

3 | Kanellopoulos, I. & Wilkinson, G. G. Method | 1997 | 9.5408e−04 | 1.1030e−03 | 1.1380e−03 | |

4 | Tamura, S. & Tateishi, M. Method | 1997 | 0.0020 | 0.0023 | 0.0025 | |

5 | Osamu Fujita Method | 1998 | 5.1210e−04 | 5.4209e−04 | 5.8122e−04 | |

6 | Zhaozhi Zhang et al. Method | 2003 | 6.7128e−04 | 6.8430e−04 | 6.9281e−04 | |

7 | Jinchuan Ke & Xinzhe Liu Method | 2008 | 7.9870e−04 | 8.0045e−04 | 8.2458e−04 | |

8 | Shuxiang Xu & Ling Chen Method | 2008 | 0.0013 | 0.0015 | 0.0017 | |

9 | Stephen Trenn Approach | 2008 | 0.0020 | 0.0024 | 0.0026 | |

10 | Katsunari Shibata & Yusuke Ikeda Method | 2009 | 3.1075e−04 | 3.4312e−04 | 3.5422e−04 | |

11 | David Hunter et al. Method | 2012 | 7.8566e−04 | 7.9623e−04 | 8.0146e−04 | |

12 | Gnana Sheela, K. & Deepa, S. N. Method | 2013 | 0.0020 | 0.0023 | 0.0025 | |

13 | Gue Qian & Hao Yong Method | 2013 | 0.0012 | 0.0015 | 0.0018 | |

14 | Madhiarasan, M. & Deepa, S. N. Method | 2016 | 5.0210e−04 | 5.4579e−04 | 5.8435e−04 | |

15 | Madhiarasan, M. & Deepa, S. N. Method | 2016 | 1.9842e−04 | 2.5013e−04 | 2.8917e−04 | |

16 | Proposed Method | 2.9538e−05 | 3.6592e−05 | 4.2505e−05 |

The ELMAN neural network based on the chosen new criteria possesses 6 input neurons, a single hidden layer with 32 hidden layer neuron units and a single output neuron, and is optimized by the proposed Modified Grey Wolf Optimizer.

To substantiate the proposed algorithm, its results are compared with some existing metaheuristic algorithms. The same wind speed forecasting task and objective function are used for the ELMAN neural network with each metaheuristic algorithm in

Algorithms | Parameters | Parametric Values
---|---|---
MGWO | Population size; control parameter (linearly decreased); maximum number of generations | 100; from 2 to 0; 300
GWO | Population size; control parameter (linearly decreased); maximum number of generations | 100; from 2 to 0; 300
GGSA | Population size | 100
CS | Population size; discovery probability; maximum number of generations | 100; 0.3; 300
ABC | Population size; limit; maximum number of generations | 100; 20; 300
ACO | Population size; initial pheromone ( | 100; 1e−07; 30; 1; 0.8; 0.5; 1; 5; 300
PSO | Population size; acceleration coefficient; maximum number of generations | 100; 1.5; 300
ES | Population size | 100; 10; 1; 300
GA | Type; selection; crossover; mutation; population size; maximum number of generations | Real coded; roulette wheel; single point (probability = 1); uniform (probability = 0.1); 100; 300

Three different wind data sets are solved over 20 runs with each algorithm in order to generate the statistical results, which are depicted in

In this paper, a novel hybrid method (ELMAN-MGWO) is proposed for wind speed forecasting. First, the proper number of hidden layer neuron units is estimated based on the proposed new criteria, which are validated by the convergence theorem. Based on the least mean square error

Data Sets | Algorithms | MSE AVG ± STD
---|---|---
Data set 1 | GA-ELMAN | 8.1107e−07 ± 3.0232e−03
Data set 1 | ES-ELMAN | 2.7001e−07 ± 3.1831e−04
Data set 1 | PSO-ELMAN | 3.1518e−08 ± 7.8981e−07
Data set 1 | ACO-ELMAN | 5.8667e−07 ± 7.1113e−03
Data set 1 | ABC-ELMAN | 7.7032e−09 ± 3.7518e−06
Data set 1 | CS-ELMAN | 6.7807e−09 ± 9.0901e−07
Data set 1 | GGSA-ELMAN | 7.0563e−09 ± 9.9493e−06
Data set 1 | GWO-ELMAN | 2.2192e−10 ± 3.8892e−10
Data set 1 | MGWO-ELMAN | 4.1379e−11 ± 1.0567e−15
Data set 2 | GA-ELMAN | 8.8420e−07 ± 3.4782e−03
Data set 2 | ES-ELMAN | 3.0084e−07 ± 7.8193e−04
Data set 2 | PSO-ELMAN | 4.5770e−08 ± 6.7010e−06
Data set 2 | ACO-ELMAN | 6.8042e−07 ± 5.4825e−03
Data set 2 | ABC-ELMAN | 8.5455e−09 ± 7.7512e−07
Data set 2 | CS-ELMAN | 8.8008e−09 ± 2.9377e−08
Data set 2 | GGSA-ELMAN | 7.7010e−09 ± 8.1040e−06
Data set 2 | GWO-ELMAN | 3.4050e−10 ± 2.0001e−10
Data set 2 | MGWO-ELMAN | 6.3073e−11 ± 3.5708e−15
Data set 3 | GA-ELMAN | 9.9740e−07 ± 2.8472e−03
Data set 3 | ES-ELMAN | 4.7001e−07 ± 4.2318e−03
Data set 3 | PSO-ELMAN | 5.1348e−08 ± 2.5301e−06
Data set 3 | ACO-ELMAN | 7.4800e−07 ± 5.5248e−03
Data set 3 | ABC-ELMAN | 9.1277e−09 ± 4.5542e−07
Data set 3 | CS-ELMAN | 9.8992e−09 ± 3.9105e−08
Data set 3 | GGSA-ELMAN | 8.5474e−09 ± 8.4050e−06
Data set 3 | GWO-ELMAN | 4.6181e−10 ± 2.6725e−10
Data set 3 | MGWO-ELMAN | 7.5840e−11 ± 1.1613e−14

The authors gratefully acknowledge National Oceanic and Atmospheric Administration, United States and Suzlon Energy Private Limited for the provision of data resources. Mr. M. Madhiarasan was supported by the Rajiv Gandhi National Fellowship (F1-17.1/2015-16/RGNF-2015-17-SC-TAM-682/(SA-III/Website)).

M. Madhiarasan, S. N. Deepa (2016) ELMAN Neural Network with Modified Grey Wolf Optimizer for Enhanced Wind Speed Forecasting. Circuits and Systems, 7, 2975-2995. doi: 10.4236/cs.2016.710255

The various criteria consider “n”, the number of input parameters. All considered criteria are satisfied based on the convergence theorem. Some explanations are given below. “A sequence is called a convergent sequence when its limit is finite, while it is called a divergent sequence when its limit does not gravitate to a finite number, as stated by Dass [

The convergence theorem characteristics are given as follows:

1) A necessary condition for a convergent sequence is that it has a finite limit and is bounded.

2) An oscillatory sequence may not possess a distinctive limit.

A stable network is one in which no change takes place in the state of the network, irrespective of its operation. That the neural network always converges to a stable state is a formidable network property. In real-time optimization problems convergence plays a major role: the threat of falling into local minima is overcome by means of convergence. Due to discontinuities in the network model, the sequence of convergence can be infinite. Convergence properties are used to model real-time neural optimization solvers.

The convergence of the considered sequences is discussed as follows.

Consider the first sequence

Applying the convergence theorem

Consequently, the terms of the sequence have a finite limit and are bounded, so the examined sequence is convergent.

Consider the second sequence

Applying the convergence theorem

Consequently, the terms of the sequence have a finite limit and are bounded, so the examined sequence is convergent.

Mr. M. Madhiarasan completed his B.E. (EEE) in 2010 at Jaya Engineering College, Thiruninravur, and his M.E. (Electrical Drives & Embedded Control) at Anna University, Regional Centre, Coimbatore, in 2013. He is currently pursuing his Ph.D. at Anna University, Tamil Nadu, India. His research areas include neural networks, modeling and simulation, renewable energy systems and soft computing.

Dr. S. N. Deepa has completed her B.E (EEE) in the year 1999 from Government College of Technology, Coimbatore, M.E.(Control Systems) from PSG College of Technology in the year 2004 and Ph.D.(Electrical Engineering) in the year 2008 from PSG College of Technology under Anna University, TamilNadu, India. Her Research areas include Linear and Non-linear control system design and analysis, Modeling and simulation, Soft Computing and Adaptive Control Systems.
