Mutual Information-Based Modified Randomized Weights Neural Networks

Randomized weights neural networks achieve fast learning speed and good generalization performance with a single-hidden-layer structure. Input weights of the hidden layer are produced randomly. By employing a certain activation function, outputs of the hidden layer are calculated with some randomization. Output weights are computed with the pseudo-inverse. Mutual information can measure the mutual dependence of two variables quantitatively based on probability theory. In this paper, the hidden layer outputs that relate closely to the prediction variable are selected with a simple mutual information-based feature selection method. The hidden nodes with high mutual information values are retained as a new hidden layer; thus, the size of the hidden layer is reduced. The new hidden layer's output weights are learned with the pseudo-inverse method. The proposed method is compared with the original randomized algorithm on the concrete compressive strength benchmark dataset.


Introduction
Machine learning (ML)-based data analysis has been a hot focus in different disciplines. The most used learning prediction model construction methods are back-propagation neural networks (BPNN) and support vector machines (SVM) [1]. However, BPNN suffers from local optima, uncontrolled convergence speed and over-fitting problems. Although SVM can address the small-sample modeling problem with good generalization, the quadratic programming (QP) and large kernel matrix problems are difficult to overcome for large learning datasets. A special single-layer feed-forward network (SLFN)-based learning algorithm, i.e., randomized weights neural networks, was proposed to overcome the shortcomings caused by gradient-based learning algorithms [2] [3]. Its characteristics include: 1) input weights of the hidden layer are chosen randomly; 2) the hidden layer neurons need not be adjusted; and 3) output weights are analytically computed with the pseudo-inverse or least-squares method. The commonly used pseudo-inverse-based output weight calculation has two advantages: a) the optimal solution to the least-squares problem is obtained; and b) the optimal output weight matrix has minimal norm. Therefore, this randomized weights neural networks algorithm has fast learning speed and has been successfully applied [4] [5]. Thus, the pseudo-inverse-based randomized algorithm avoids the local minima problem with good testing performance and fast training time [6]. However, how to control and estimate the randomization of the input weights is an open issue. Studies show that a small norm of the weights is more important than the node number for obtaining good generalization performance in feed-forward networks [7]. The norms of the hidden weights generated by deep learning are small [8]. Therefore, a randomized algorithm for nonlinear system identification with a deep learning modification was proposed, which regards deep learning as a pre-training technique to obtain the hidden layers' input weights [9]. Thus, small norms of the input weights and output weights are obtained by combining the deep learning and least-squares approaches. However, long training time is needed. An effective and simple randomization control and estimation method still needs to be addressed.
Mutual information (MI) can measure the mutual dependence of two variables quantitatively based on probability theory and information theory. Thus, it has been used widely in feature selection. MI is more comprehensive than other common feature selection methods for selecting optimal input variables [10]. However, the popular MI-based feature selection method requires a large computational cost [11]. A simple MI-based feature selection method is used in [12] [13]. For randomized weights neural networks, if we cannot control the randomization of the input weights effectively or simply, can we instead control the hidden layer's outputs? That is to say, we can select only those hidden layer outputs that relate closely to the prediction variable to calculate the output weights with the pseudo-inverse method.
Motivated by the above problems, a modified randomized weights neural network based on MI is proposed in this paper. At first, the input variables and the randomly chosen input weights are fed into a certain activation function to produce the outputs of the hidden layer. Then, MI values between these hidden layer outputs and the predicted variable are calculated, and the outputs with MI values higher than a pre-set threshold are selected. At last, the pseudo-inverse method is used to compute the weights between the selected hidden layer outputs and the predicted variable. Therefore, the randomization of the input weights is controlled to some degree. Simulation on the concrete compressive strength benchmark dataset is used to validate the proposed method.

Randomized Weights Neural Networks
Suppose that an SLFN with $L$ hidden nodes can be represented as:

$$f(\mathbf{x}) = \sum_{i=1}^{L} \beta_i g_i(\mathbf{a}_i \cdot \mathbf{x} + b_i) = \mathbf{h}(\mathbf{x})\boldsymbol{\beta}, \qquad (1)$$

where $g_i$ denotes the activation function of the $i$th hidden node, $\mathbf{a}_i$ is the input weight vector connecting the input layer to the $i$th hidden node, $b_i$ is the bias of the $i$th hidden node, $\beta_i$ is the output weight connecting the $i$th hidden node to the output layer, and $\mathbf{h}(\mathbf{x})$ is the mapping output of the hidden layer, which can be denoted as

$$\mathbf{h}(\mathbf{x}) = [g(\mathbf{a}_1 \cdot \mathbf{x} + b_1), \ldots, g(\mathbf{a}_L \cdot \mathbf{x} + b_L)].$$

Theoretically, SLFNs are able to approximate any continuous target function with enough hidden layer nodes using the randomized input weights. Given a training set $\{(\mathbf{x}_j, t_j)\}$, $j \in [1, N]$, Equation (1) can be rewritten in matrix form as

$$\mathbf{H}\boldsymbol{\beta} = \mathbf{T},$$

where $\mathbf{H}$ is the $N \times L$ hidden layer output matrix with rows $\mathbf{h}(\mathbf{x}_j)$ and $\mathbf{T}$ is the target vector. The solution can be analytically determined by the expression below:

$$\hat{\boldsymbol{\beta}} = \mathbf{H}^{+}\mathbf{T},$$

where $\mathbf{H}^{+}$ is the Moore-Penrose generalized inverse of the matrix $\mathbf{H}$. The reason for using the Moore-Penrose generalized inverse is that $\mathbf{H}$ may be singular and/or not square.
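For concreteness, the training procedure above (random input weights, a fixed hidden layer, and pseudo-inverse output weights) can be sketched in Python. The function names, the sigmoid activation and the uniform weight range are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def train_rwnn(X, T, L=50, seed=None):
    """Fit a single-hidden-layer network with random input weights.

    The input weights A and biases b are drawn once and never adjusted;
    only the output weights beta are computed, via the Moore-Penrose
    pseudo-inverse (np.linalg.pinv), i.e. the minimal-norm LS solution.
    """
    rng = np.random.default_rng(seed)
    A = rng.uniform(-1.0, 1.0, size=(X.shape[1], L))  # random input weights
    b = rng.uniform(-1.0, 1.0, size=L)                # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ A + b)))            # sigmoid hidden outputs
    beta = np.linalg.pinv(H) @ T                      # output weights
    return A, b, beta

def predict_rwnn(X, A, b, beta):
    return (1.0 / (1.0 + np.exp(-(X @ A + b)))) @ beta

# toy usage: fit a noise-free linear target
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
T = X[:, 0] - 2.0 * X[:, 1]
A, b, beta = train_rwnn(X, T, L=50, seed=1)
rmse = np.sqrt(np.mean((predict_rwnn(X, A, b, beta) - T) ** 2))
```

Because only `beta` is fitted, training reduces to one matrix factorization, which is the source of the fast learning speed noted above.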

Mutual Information
Information entropy can quantify the uncertainty of random variables and scale the amount of information shared by these variables. Thus, it has been widely used in many fields. The entropy can be represented as

$$H(X) = -\sum_{x} p(x) \log p(x),$$

where $p(x)$ is the marginal probability density. Mutual information (MI) can measure the mutual dependence of two variables, which is defined as

$$I(X;Y) = \sum_{x}\sum_{y} p(x,y) \log \frac{p(x,y)}{p(x)p(y)} = H(Y) - H(Y|X),$$

where $p(x,y)$ is the joint probability density and $H(Y|X)$ is the conditional entropy of $Y$ when $X$ is known, which is calculated as

$$H(Y|X) = -\sum_{x}\sum_{y} p(x,y) \log p(y|x).$$

For continuous random variables, the entropy is $H(X) = -\int p(x) \log p(x)\,dx$.
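The definitions above can be estimated from samples by discretizing the variables; a minimal histogram-based sketch follows. The bin count and function name are assumptions for illustration (the paper does not specify its density estimator).

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of I(X;Y) = sum_{x,y} p(x,y) log[p(x,y)/(p(x)p(y))]."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()                  # joint probabilities p(x,y)
    px = pxy.sum(axis=1, keepdims=True)    # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)    # marginal p(y)
    nz = pxy > 0                           # skip empty cells to avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
mi_dep = mutual_information(x, x + 0.1 * rng.normal(size=5000))  # dependent pair
mi_ind = mutual_information(x, rng.normal(size=5000))            # independent pair
```

A strongly dependent pair yields a much larger MI value than an independent pair, which is exactly the property the feature selection below relies on.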

Simple Feature Selection Based on Mutual Information
The mutual information feature selection (MIFS) algorithm can be described as follows: calculate the MI value between each input feature and the output variable, select the input features with the larger MI values while penalizing the features that have large MI values with the already selected features, and obtain the best input feature subset with a greedy search [14]. This method is time-consuming when selecting features from high-dimensional data.
A simpler MI-based method is: 1) calculate the MI value between each input feature and the output variable; 2) set a threshold value of MI based on prior knowledge; 3) select the features with MI values higher than the threshold. How to select the optimal threshold value remains an open question.
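The three steps above can be sketched as a small screening routine; the embedded histogram MI estimator and all names are illustrative assumptions.

```python
import numpy as np

def hist_mi(x, y, bins=16):
    # Simple histogram estimate of I(X;Y); illustrative only.
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def select_by_mi(F, y, threshold):
    """Score every column of F against y and keep the columns whose
    MI value reaches the pre-set threshold (steps 1-3)."""
    mi = np.array([hist_mi(F[:, j], y) for j in range(F.shape[1])])
    keep = np.flatnonzero(mi >= threshold)
    return keep, mi

# toy usage: two informative columns and one pure-noise column
rng = np.random.default_rng(0)
y = rng.normal(size=4000)
F = np.column_stack([y + 0.2 * rng.normal(size=4000),   # informative
                     rng.normal(size=4000),             # noise
                     -y + 0.2 * rng.normal(size=4000)]) # informative
keep, mi = select_by_mi(F, y, threshold=0.5)
```

Unlike MIFS, each column is scored once against the output only, so the cost is linear in the number of features.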

MI Based on Modified Randomized Weights Neural Networks
The proposed MI-based modified randomized weights neural network model is shown in Figure 1.
As shown in Figure 1, after the mapping outputs of the hidden layer nodes are obtained, and given the pre-set threshold value $\theta_{MI}$, a hidden layer output $\mathbf{h}_i$ is selected if its MI value with the predicted variable satisfies $I(\mathbf{h}_i; \mathbf{T}) \ge \theta_{MI}$. We denote the matrix of the selected hidden layer outputs as $\mathbf{H}_{sel}$, where $L_{sel}$ is the number of selected hidden layer outputs. Therefore, $\mathbf{H}_{sel}$ has less randomization than the original $\mathbf{H}$. The output weights are again computed with the Moore-Penrose method:

$$\hat{\boldsymbol{\beta}}_{sel} = \mathbf{H}_{sel}^{+}\mathbf{T}.$$

Considering the problem of selecting the learning parameters, the MI-based randomized weights algorithm can be represented as an optimization problem: minimize the prediction error with respect to the learning parameters $(L, \theta_{MI})$.
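Putting the pieces together, a minimal end-to-end sketch of the proposed model (random hidden layer, MI screening of the hidden outputs against the target, pseudo-inverse on the reduced layer) might look as follows. The sigmoid activation, the histogram MI estimator and all names are assumptions for illustration, not the paper's code.

```python
import numpy as np

def mi_rwnn(X, T, L=300, theta_mi=0.05, bins=16, seed=None):
    """MI-based modified randomized weights network (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(-1, 1, size=(X.shape[1], L))
    b = rng.uniform(-1, 1, size=L)
    H = 1.0 / (1.0 + np.exp(-(X @ A + b)))       # all L hidden outputs

    def hist_mi(x, y):
        # simple histogram MI estimate, as in the previous sections
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = pxy / pxy.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    mi = np.array([hist_mi(H[:, j], T) for j in range(L)])
    sel = np.flatnonzero(mi >= theta_mi)         # nodes passing theta_MI
    H_sel = H[:, sel]                            # reduced hidden layer
    beta_sel = np.linalg.pinv(H_sel) @ T         # output weights on H_sel
    return A[:, sel], b[sel], beta_sel, sel

# toy usage on a smooth nonlinear target
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 3))
T = np.sin(np.pi * X[:, 0]) + X[:, 1]
A_s, b_s, beta_s, sel = mi_rwnn(X, T, L=100, theta_mi=0.02, seed=2)
pred = (1.0 / (1.0 + np.exp(-(X @ A_s + b_s)))) @ beta_s
rmse = np.sqrt(np.mean((pred - T) ** 2))
```

Only the surviving columns of the hidden layer enter the pseudo-inverse, so the final model is both smaller and less dependent on unlucky random draws.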
Some intelligent optimization methods can be used to address this problem.

The concrete compressive strength benchmark dataset was provided by Chung Hua University [15]. This dataset contains 1030 samples; each sample has nine columns. The first seven columns are the input parameters, namely the cement, blast furnace slag, fly ash, water, superplasticizer, coarse aggregate and fine aggregate contents per cubic meter of concrete. The eighth column is the curing age in days, and the last column is the concrete compressive strength.

Application on Modeling Concrete Compressive Strength
Given that L = 300, the MI values between hidden layer's outputs and predicted variable are shown in Figure 2.
Figure 2 shows that the maximum MI value is almost 10 times that of the minimum value. Thus, the hidden layer's outputs are not stable, and it is necessary to select the outputs with high MI values.
The original randomized weights algorithm and the MI-based modified version are compared with different numbers of hidden nodes and different MI threshold values. In order to average out the randomization of the initial weights, the mean root mean square errors (MRMSEs) over 100 repeated runs are used to estimate the models' prediction accuracy. Statistical results are shown in Table 1. Table 1 shows that: 1) the maximum MI values between the hidden layer outputs and the predicted variable increase with the number of hidden nodes; 2) the smallest prediction errors for the different learning parameters (L, θ_MI) occur at L = 30 - 40, so this may be the best range for this benchmark dataset; 3) the biggest prediction errors occur at about L = 200, which may be related to the Moore-Penrose method; 4) the prediction performance is not much improved by the modified approach at L = 40, but for the other values of L the prediction performance can be improved considerably with a suitable MI threshold value. Therefore, the large prediction errors at L = 200 can be avoided with the MI-based modified approach. Thus, the proposed method is more robust than the original randomized weights algorithm.
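The repeated-runs protocol behind the MRMSE figures can be sketched as a small harness; the helper below re-fits a random-weights network for each seed and is illustrative, not the paper's experimental code.

```python
import numpy as np

def fit_predict_rwnn(Xtr, Ttr, Xte, L, seed):
    # one random-weights fit: sigmoid hidden layer + pseudo-inverse
    rng = np.random.default_rng(seed)
    A = rng.uniform(-1, 1, size=(Xtr.shape[1], L))
    b = rng.uniform(-1, 1, size=L)
    h = lambda X: 1.0 / (1.0 + np.exp(-(X @ A + b)))
    beta = np.linalg.pinv(h(Xtr)) @ Ttr
    return h(Xte) @ beta

def mrmse(X, T, L, repeats=100):
    """Mean RMSE over `repeats` re-drawn random initializations, so that a
    single lucky or unlucky weight draw does not dominate the comparison."""
    errs = [np.sqrt(np.mean((fit_predict_rwnn(X, T, X, L, r) - T) ** 2))
            for r in range(repeats)]
    return float(np.mean(errs))

# toy usage on a linear target (training error only, for brevity)
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(300, 4))
T = X @ np.array([1.0, -1.0, 0.5, 0.0])
err = mrmse(X, T, L=40, repeats=20)
```

The same harness, run over a grid of (L, θ_MI) values, yields a table of the kind summarized in Table 1.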

Conclusion
This paper proposes a new mutual information-based randomized weights neural network. Input weights of the hidden layer are produced randomly, as in the normal randomized algorithm. Not all outputs of the hidden layer are used to compute the output weights: a simple mutual information-based feature selection method is used to select the hidden layer outputs, and the selected outputs are used to compute the output weights with the pseudo-inverse method. The concrete compressive strength benchmark dataset is used to validate this method. Future research will address theoretical analysis and validate this idea on more benchmark datasets.

The concrete compressive strength data were obtained in the experimental studies of the group led by I.-C. Yeh in Taiwan.

Figure 1. MI-based modified randomized weights neural networks model.

Figure 2. MI values between the hidden layer's outputs and the predicted variable.
The relations between $\mathbf{H}^{+}$ and $\mathbf{H}$ include: $\mathbf{H}\mathbf{H}^{+}\mathbf{H} = \mathbf{H}$, $\mathbf{H}^{+}\mathbf{H}\mathbf{H}^{+} = \mathbf{H}^{+}$, $(\mathbf{H}\mathbf{H}^{+})^{\mathrm{T}} = \mathbf{H}\mathbf{H}^{+}$, and $(\mathbf{H}^{+}\mathbf{H})^{\mathrm{T}} = \mathbf{H}^{+}\mathbf{H}$.

Table 1. Statistical results (MRMSEs) of different learning parameters over 100 repeated runs.