An Improved Extreme Learning Algorithm Applied to Microgrid Data

As more and more microgrid projects are completed and put into operation, load data are becoming increasingly multidimensional and massive, which calls for effective classification of these data. Most traditional processing methods classify grid data with neural networks. However, as microgrids develop, the traditional neural network algorithm can hardly meet the requirements of classifying and operating on massive microgrid data. In this paper, the back-propagation neural network (BPNN) algorithm is parallelized on the basis of the traditional back-propagation algorithm. Multiple algorithms are applied to data learning, for example the combined application of the extreme learning algorithm with evolutionary algorithms such as the simulated annealing algorithm and the artificial fish swarm algorithm. The input variables of the BPNN are optimized during network training. After a fitness evaluation function is added, an improved combined back-propagation neural network algorithm is obtained, and the candidate that best fits the real-time grid data is selected by root mean square error. This result can provide data support and a theoretical basis for load management, microgrid optimization, energy storage management and electricity price modeling of microgrids.


Introduction
In recent years, microgrids combined with new energy technologies have been increasingly applied in daily life, and power load information has begun to show a growing share of new energy sources, diversified power loads, a huge volume of power information and inaccurate load classification. It is therefore all the more necessary to use the existing shallow data samples to classify different loads efficiently and accurately, in order to find the best power consumption mode for various users and the best operating point for microgrid operation. Power modeling for demand response and energy storage management is essential.
At present, decision-making and computation in the field of power load have advanced from original expert analysis and decision-making, to neural network applications, and then to improved artificial intelligence algorithms that combine neural networks with multiple clustering methods. The fields involved have also expanded to high-voltage fault diagnosis, power load prediction and classification, power management decision-making and others. In the power load classification literature, He Min used the Hadoop platform to classify power load characteristics [1]; Qiu Wenyuan used power characteristic indicators to classify power loads [2]; and Huang Qiyuan divided microgrid load data uniformly based on a fuzzy clustering algorithm [3]. It should be noted, however, that each of the above classification methods is best suited to a particular power-energy setting. Because the use of new energy in a microgrid is unstable and leads to variable load conditions, a single method generally cannot handle all microgrid operating conditions systematically [4]. As a result, in many cases a fixed power algorithm yields inaccurate real-time grid models and large errors, which affects power decision-making and thereby the overall operation and decision-making of the microgrid.
In this paper, an improved extreme learning algorithm for massive microgrid data is proposed. It aims to use artificial intelligence to classify different real-time power loads of the microgrid in parallel: the shallow microgrid data are partitioned and their features combined into a higher-level data structure, and the clustered data are fed to the extreme learning algorithm for parallel computation and classification. Finally, the most suitable real-time microgrid data classification decision is found among the parallel candidates by fitness-based selection.

Overview of the Improved VELM Theoretical Model
The ELM (Extreme Learning Machine) is a fast learning method developed in recent years for single-hidden-layer feedforward neural networks (SLFNs) [5]. Like other neural-network-based algorithms, it has the characteristic of randomly selecting the parameters of the SLFN hidden-layer nodes and the corresponding node weights, with the output weights then determined analytically for supervised learning problems. In concrete practical problems, because the hidden-layer nodes are random, the whole algorithm runs extremely fast [6]. For the massive data of a microgrid it has strong computational advantages and good network generalization. In microgrid research it is often regarded as an extension of neural network algorithms, or as an improvement on the back-propagation neural network.
In the original microgrid data processing, all the weights and other important parameters of the neural network model had to be set and adjusted manually [7]. Different parameters are strongly correlated, so the demands on manually set parameters are very high. The gradient descent algorithm, the simulated annealing algorithm and the artificial fish swarm algorithm are often used to determine the optimal solution [8]. In practice, however, when these algorithms are used alone they often get trapped in local optima, and their learning speed often cannot keep up. These two problems cannot be solved by the traditional algorithms and greatly limit their general applicability [9].
To solve these problems, and in particular to learn multiple heterogeneous input vectors accurately with a feedforward neural network in a short time, Huang and co-workers proposed the ELM algorithm. ELM randomly generates the hidden-layer nodes and corresponding node parameters of the feedforward network during its operation, transforming the original solving process from a nonlinear model into a linear one, and generates the output weights with the least squares method. Compared with other gradient-descent-style methods, this can effectively improve computation speed without interfering with the overall operation of the algorithm, which is a huge advantage when dealing with massive data.
Based on the ELM algorithm, this paper proposes an optimized-decision extreme learning machine (VELM) algorithm, which combines ELM with the simulated annealing algorithm and the artificial fish swarm algorithm to ensure both the speed and the adaptability of the computation. After the data are analyzed, a fitness evaluation function is applied to improve the structure and computing performance of the whole algorithm.

ELM Algorithm
In the ELM algorithm, the overall microgrid data set is first divided into N sample pairs (A_j, T_j) according to collection category, where A_j contains all the data of one category of the microgrid data (such as the voltage collected at different times, or the different currents collected at all times) [10]. After this classification, the ELM model of a feedforward neural network with activation function g and X hidden-layer nodes can be written as formula (1).
In the formula, $\beta_i$ is the output weight, $b_i$ is the bias of the $i$-th hidden-layer unit of the feedforward network, $g(\cdot)$ is the activation function of the neural network, $W_i$ is the input weight vector, $A_j$ is the transpose of the $j$-th input data vector, and $W_i \cdot A_j$ is the inner product of $W_i$ and $A_j$, so that

$$f(A_j) = \sum_{i=1}^{X} \beta_i \, g(W_i \cdot A_j + b_i) = t_j, \qquad j = 1, \dots, N. \quad (1)$$

The goal of the algorithm is to make the error of the training samples of the feedforward network as close to 0 as possible, i.e. $\sum_{j=1}^{N} \| f(A_j) - t_j \| = 0$. In matrix form the above formula reads

$$H\beta = T,$$

where $H$ is the output matrix of the hidden nodes, $\beta$ is the output weight matrix and $T$ is the expected output matrix.

ELM decision method: in the ELM algorithm one needs to obtain the $\beta_i$, $b_i$, $W_i$ of the feedforward network that minimize $\| H\beta - T \|$, which is equivalent to minimizing the loss function $E = \| H\beta - T \|^2$. To improve the stability and generalization ability of the network, the output results usually need to be corrected: given the correction coefficient $\eta$, the least squares solution for the output weights is

$$\beta^{*} = \left( H^{T} H + \eta I \right)^{-1} H^{T} T,$$

and the corresponding output function of the ELM is $f(A) = h(A)\,\beta^{*}$, where $h(A)$ is the hidden-layer output row for input $A$. Compared with gradient-based variants of feedforward training (such as the gradient descent algorithm), once the input weights $W_i$ and biases $b_i$ are randomly determined, the output matrix $H$ is uniquely determined, which gives the ELM a certain degree of advancement [11].
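The training procedure above can be sketched in a few lines of NumPy. This is a minimal illustration rather than the paper's implementation: it assumes a sigmoid activation g and the regularized least-squares solution for the output weights, and the function names are invented for the example.

```python
import numpy as np

def elm_train(A, T, n_hidden=50, eta=1e-3, rng=None):
    """Basic ELM training: random hidden layer, least-squares output weights.

    A   : (n_samples, n_features) input matrix
    T   : (n_samples, n_outputs) expected output matrix
    eta : correction (regularization) coefficient, as in
          beta* = (H^T H + eta I)^(-1) H^T T
    """
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((A.shape[1], n_hidden))  # random input weights W_i
    b = rng.standard_normal(n_hidden)                # random biases b_i
    H = 1.0 / (1.0 + np.exp(-(A @ W + b)))           # sigmoid activation g
    # Regularized least-squares solution for the output weight matrix beta
    beta = np.linalg.solve(H.T @ H + eta * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def elm_predict(A, W, b, beta):
    """Output function f(A) = h(A) beta* for new inputs."""
    H = 1.0 / (1.0 + np.exp(-(A @ W + b)))
    return H @ beta
```

Because W and b are fixed once drawn, training reduces to one linear solve, which is what gives ELM its speed advantage over gradient descent.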

Add Data Characteristics
The initial algorithm inputs are the shallow data themselves, whose complexity is poor. A slight error in data monitoring will be gradually amplified in the model and affect the operation of the overall model. Therefore, feature engineering must be performed first to clean all the data.
Feature selection is then performed on the data. First, data binning is used to preprocess the data and reduce the impact of minor observation errors. One-hot encoding is then used to turn the categorical data into features of the same length. Through feature scaling and standardization, values measured on different scales are adjusted to a common scale, and the independent variables or data feature ranges are standardized [12].
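As a concrete illustration of these three preprocessing steps, the following sketch (NumPy only, with hypothetical helper names) performs equal-width binning, one-hot encoding and standardization:

```python
import numpy as np

def bin_signal(x, n_bins=10):
    """Equal-width binning to smooth minor observation errors:
    each value is replaced by the midpoint of its bin."""
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, n_bins - 1)
    mids = (edges[:-1] + edges[1:]) / 2
    return mids[idx]

def one_hot(labels):
    """Encode category labels as fixed-length 0/1 feature vectors."""
    classes = np.unique(labels)
    return (labels[:, None] == classes[None, :]).astype(float)

def standardize(X):
    """Zero-mean, unit-variance scaling so that features measured on
    different scales share a common scale."""
    return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
```

In practice voltage, current and similar channels would each pass through `bin_signal` and `standardize`, while categorical fields (e.g. load type) go through `one_hot`.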
Next, the model is built from the feature information obtained through feature engineering. It is a multi-objective evaluation model combining voltage, current, electric energy, temperature and their corresponding rates of change. We build the multi-objective evaluation model on Pareto-optimal solutions using the NSGA-II algorithm, a fast non-dominated multi-objective method with an elite-retention strategy, to evaluate the efficiency and quality of the whole system. After data processing, the ELM algorithm, the simulated annealing algorithm and the artificial fish swarm algorithm are used to optimize the feature parameters; each algorithm yields one evaluation value for the system. After running the various algorithms, the evaluation values are compared, the most appropriate algorithm is selected to run the system again and optimize the model parameters, and the optimal parameters of the system are obtained and updated.
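The Pareto-dominance test at the heart of NSGA-II's fast non-dominated sorting can be sketched as follows. This is only the first-front computation for a set of objective vectors to be minimized (for example, prediction error and runtime), not the full NSGA-II with crowding distance and elitism:

```python
import numpy as np

def pareto_front(F):
    """Return a boolean mask of the non-dominated rows of F
    (all objectives are to be minimized).

    A point dominates another if it is no worse in every objective
    and strictly better in at least one; the first NSGA-II front
    consists of the points that no other point dominates.
    """
    n = F.shape[0]
    nd = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                nd[i] = False
                break
    return nd
```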

VELM Method and Its Fitness Optimization
Based on the analysis of the above algorithm, when A_j and the correction coefficient η are put into the original ELM algorithm, they always affect the final result. We therefore improve the original ELM algorithm by adding fitness selection rules. After the data are processed through the combination of the simulated annealing and artificial fish swarm algorithms, optimal-fitness selection is added, and the algorithm most suitable for combination with VELM at the current stage is selected [13].
VELM determination method: in the improved VELM algorithm, in order to reduce the impact of the input variables and the correction coefficient on the final result, the candidate solutions produced by the different optimization algorithms are compared through the fitness evaluation function, and the best one is retained.
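The fitness-based preference described here can be sketched generically: each candidate optimizer (e.g. the simulated annealing or fish swarm variant) produces a solution, a fitness function such as validation RMSE scores it, and the best candidate is retained. The function names and the choice of RMSE as fitness are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rmse(pred, target):
    """Root mean square error, used here as the fitness value."""
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(target)) ** 2)))

def select_by_fitness(candidates, evaluate):
    """Run each candidate optimizer and keep the one with the best
    (lowest) fitness.

    candidates : dict mapping a name to a zero-argument optimizer
                 function returning a solution
    evaluate   : maps a solution to a scalar fitness (lower is better)
    """
    scored = {name: evaluate(opt()) for name, opt in candidates.items()}
    best = min(scored, key=scored.get)
    return best, scored
```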

Combination of VELM Algorithm and Simulated Annealing Algorithm
This section is based on the combination of the simulated annealing algorithm and the ELM algorithm. The generation and acceptance of new solutions in simulated annealing follows four steps: generate a new solution from the current one, compute the corresponding objective function difference, accept or reject the new solution according to the Metropolis criterion, and lower the temperature. The simulated annealing algorithm is independent of the initial value: the solution obtained is unrelated to the initial solution state S (the starting point of the iteration), and the algorithm converges asymptotically. Due to the parallelism of simulated annealing, characteristic parameters such as current, voltage, power, temperature and their corresponding rates of change can be simulated and optimized to obtain the corresponding optimal values.
Step 5: Compare different algorithms based on the fitness-based extreme learning machine (VELM): use the voting-mechanism extreme learning machine to classify all the data, then test with the different optimization algorithms, and compare this algorithm with other algorithms that use the basic methods [14].
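A generic simulated annealing loop matching the four-step description above (propose a neighbor, evaluate the energy difference, accept by the Metropolis criterion, cool down) might look like this sketch. In practice the energy function would be the VELM evaluation value; here it is left abstract, and all names are illustrative:

```python
import math
import random

def simulated_annealing(energy, neighbor, s0, t0=1.0, cooling=0.95,
                        steps=500, seed=0):
    """Minimize `energy` by simulated annealing.

    Improvements are always accepted; worse moves are accepted with
    probability exp(-dE / T); the temperature T cools geometrically.
    For long enough runs the result is largely independent of s0.
    """
    rng = random.Random(seed)
    s, e = s0, energy(s0)
    best_s, best_e = s, e
    t = t0
    for _ in range(steps):
        s_new = neighbor(s, rng)          # step 1: generate a new solution
        e_new = energy(s_new)             # step 2: objective difference
        # step 3: Metropolis acceptance criterion
        if e_new < e or rng.random() < math.exp(-(e_new - e) / max(t, 1e-12)):
            s, e = s_new, e_new
            if e < best_e:
                best_s, best_e = s, e
        t *= cooling                      # step 4: lower the temperature
    return best_s, best_e
```

For example, minimizing a one-dimensional quadratic with uniform random neighbor steps converges to the vicinity of its minimum.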

Combination of VELM Algorithm and Artificial Fish Swarm Algorithm
Research in this section is based on the combination of artificial fish swarm algorithm and VELM algorithm.
The generation and acceptance of new solutions in the artificial fish swarm algorithm can be divided into four steps, as follows. Step 1: Combine the artificial fish swarm algorithm with the VELM algorithm to initialize the population. Using processed data such as voltage and current, the values are randomly distributed over the experimental area, and four behaviour modes are set: foraging, tail-chasing (following), swarming and random behaviour. The step size is also set randomly by the VELM algorithm, which reduces computing time and increases the generalization of the algorithm.
Step 2: Use the VELM algorithm to calculate the objective function difference corresponding to the new solution; that is, compute the evaluation value S' of the multi-objective optimization model for the parameter to be optimized (such as the current I1), compare it with the actual value, calculate the relative error, and judge whether the search has fallen into a local-optimum trap.
Step 4: Compare different algorithms based on the fitness-based extreme learning machine (VELM): use the voting-mechanism extreme learning machine to classify all the data, then test with the different optimization algorithms, and compare this algorithm with other algorithms that use the basic methods [15].
The flow chart of these steps is shown in Figure 1.
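A heavily simplified artificial fish swarm sketch, keeping only the following and foraging/random behaviours described above (swarming is omitted for brevity), could look as follows; the parameter names and ranges are illustrative assumptions:

```python
import numpy as np

def afsa_minimize(f, dim=2, n_fish=20, visual=1.0, step=0.3,
                  iters=200, seed=0):
    """Simplified artificial fish swarm minimizing f.

    Each fish tries to follow the best position it can see within its
    visual range; if none is better, it forages by trying a random step
    and keeping it only when it improves the objective.
    """
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, size=(n_fish, dim))   # random initial population
    for _ in range(iters):
        vals = np.array([f(x) for x in X])
        for i in range(n_fish):
            d = np.linalg.norm(X - X[i], axis=1)
            nbrs = np.where((d < visual) & (d > 0))[0]
            if nbrs.size and vals[nbrs].min() < vals[i]:
                # following: step toward the best visible neighbor
                j = nbrs[np.argmin(vals[nbrs])]
                direction = X[j] - X[i]
                X[i] = X[i] + step * direction / (np.linalg.norm(direction) + 1e-12)
            else:
                # foraging / random behaviour: keep a random step if it improves
                trial = X[i] + step * rng.uniform(-1, 1, dim)
                if f(trial) < vals[i]:
                    X[i] = trial
    vals = np.array([f(x) for x in X])
    return X[np.argmin(vals)], float(vals.min())
```

In the VELM setting, `f` would be the relative error of the multi-objective evaluation model rather than the toy objective used below.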

Experimental Analysis
To test the improved algorithm, we used the power, voltage and current data of a typical microgrid collected over 40 days. To ensure that the different shallow data contribute consistently to the results, data binning was first used to preprocess the data and reduce the impact of minor observation errors; one-hot encoding was then used to turn the categorical data into features of the same length. Finally, the three sets of data were combined into a deep learning sample, which was divided into four parts.
Then, in the neural network computation, the deep learning algorithm combining VELM with the simulated annealing and artificial fish swarm algorithms was run on the data, and comparison charts of predicted and actual values were obtained. The overall sample was tested twice; because the implicit function and hidden nodes are randomly determined, the two test results differ slightly even on the same sample. In the first test one sample point was mispredicted, but the overall accuracy was still significantly improved. The figures show that on the same set of data, under the VELM algorithm, one of the first 30 results failed to be predicted in the first run, while in the second run all thirty results were predicted successfully; the overall accuracy was 98.3%. Both operating speed and accuracy are significantly improved compared with the original ELM algorithm.

Conclusion
This paper proposes a new VELM algorithm for computing on massive microgrid data. Its core idea is to determine the implicit function and hidden nodes randomly and to merge the result with the simulated annealing and artificial fish swarm algorithms through adaptive, fitness-based selection. Experiments on the combined data set show that the improved extreme learning algorithm identifies complex patterns in the data set better, and is more effective, than the existing extreme learning algorithm. The load classification of the massive microgrid data was carried out and the classification task was effectively completed, indicating that the VELM algorithm has broad application prospects in the field of electric power big data. Future studies on VELM will continue to overcome the shortcomings of this algorithm and apply it to different fields.