Mutual Information-Based Modified Randomized Weights Neural Networks
1. Introduction
Machine learning (ML)-based data analysis has been a hot topic in many disciplines. The most widely used methods for constructing learning prediction models are back-propagation neural networks (BPNN) and support vector machines (SVM) [1]. However, BPNN suffers from local optima, uncontrolled convergence speed and over-fitting problems. Although SVM can address small-sample modeling problems with good generalization, its quadratic programming (QP) and large kernel matrix problems are difficult to overcome for big learning datasets. A special single-layer feed-forward network (SLFN)-based learning algorithm, i.e., randomized weights neural networks, was proposed to overcome the shortcomings caused by gradient-based learning algorithms [2] [3]. Its characteristics include: 1) the input weights of the hidden layer are chosen randomly; 2) the hidden layer neurons need not be adjusted; and 3) the output weights are computed analytically using the pseudo-inverse or least-squares method. The commonly used pseudo-inverse-based output weight calculation has two advantages: a) the optimal solution to the least-squares problem is obtained; and b) the optimal output weight matrix has minimal norm. Therefore, this randomized weights neural networks algorithm has a fast learning speed and has been applied successfully [4] [5]. Thus, the pseudo-inverse-based randomized algorithm solves the local minima problem with good testing performance and fast training time [6]. However, how to control and estimate the randomization of the input weights is an open issue. Studies show that a small norm of the weights is more important than the number of nodes for obtaining good generalization performance in feed-forward networks [7]. The norms of the hidden weights generated by deep learning are small [8]. Therefore, a randomized algorithm for nonlinear system identification with deep learning modification was proposed, which regards deep learning as a pre-training technique to obtain the hidden layers' input weights [9]. Thus, small norms of the input weights and output weights are obtained by combining the deep learning and least-squares approaches. However, a long training time is needed. An effective and simple randomization control and estimation method still needs to be addressed.
Mutual information (MI) can quantitatively measure the mutual dependence of two variables based on probability theory and information theory. Thus, it has been used widely in feature selection. MI is more comprehensive than other common feature selection methods for selecting optimal input variables [10]. However, the popular MI-based feature selection methods require a large amount of computation [11]. A simple MI-based feature selection method is used in [12] [13]. For randomized weights neural networks, if the randomization of the input weights cannot be controlled effectively or simply, can we instead control the hidden layer's outputs? That is to say, we can select only those hidden layer outputs that relate most closely to the predicted variable and use them to calculate the output weights with the pseudo-inverse method.
Motivated by the above problems, a modified randomized weights neural network based on MI is proposed in this paper. First, the input variables and the randomly chosen input weights are fed into a certain activation function to produce the outputs of the hidden layer. Then, the MI values between these hidden layer outputs and the predicted variable are calculated, and the outputs with MI values higher than a pre-set threshold are selected. Finally, the pseudo-inverse method is used to compute the weights between these selected hidden layer outputs and the predicted variable. Therefore, the randomization of the input weights is controlled to some degree. A simulation based on the concrete compressive strength benchmark dataset is used to validate the proposed method.
2. Randomized Weights Neural Networks
Suppose that SLFNs with $L$ hidden nodes can be represented as:

$$f_L(\mathbf{x}) = \sum_{i=1}^{L} \beta_i G(\mathbf{a}_i, b_i, \mathbf{x}) \tag{1}$$
where

$$G(\mathbf{a}_i, b_i, \mathbf{x}) = g(\mathbf{a}_i \cdot \mathbf{x} + b_i) \tag{2}$$

denotes the activation function of the $i$-th hidden node, $\mathbf{a}_i$ is the input weight vector connecting the input layer to the $i$-th hidden node, $b_i$ is the bias of the $i$-th hidden node, $\beta_i$ is the output weight connecting the $i$-th hidden node to the output layer, and $\mathbf{h}(\mathbf{x})$ is the mapping output of the hidden layer, which can be denoted as

$$\mathbf{h}(\mathbf{x}) = \left[ G(\mathbf{a}_1, b_1, \mathbf{x}), \cdots, G(\mathbf{a}_L, b_L, \mathbf{x}) \right] \tag{3}$$
Then, Equation (1) can be rewritten as:

$$\mathbf{H} \boldsymbol{\beta} = \mathbf{T} \tag{4}$$

where

$$\mathbf{H} = \begin{bmatrix} \mathbf{h}(\mathbf{x}_1) \\ \vdots \\ \mathbf{h}(\mathbf{x}_N) \end{bmatrix} = \begin{bmatrix} G(\mathbf{a}_1, b_1, \mathbf{x}_1) & \cdots & G(\mathbf{a}_L, b_L, \mathbf{x}_1) \\ \vdots & \ddots & \vdots \\ G(\mathbf{a}_1, b_1, \mathbf{x}_N) & \cdots & G(\mathbf{a}_L, b_L, \mathbf{x}_N) \end{bmatrix} \tag{5}$$

$$\boldsymbol{\beta} = \left[ \beta_1^{\mathrm{T}}, \cdots, \beta_L^{\mathrm{T}} \right]^{\mathrm{T}} \tag{6}$$

$$\mathbf{T} = \left[ \mathbf{t}_1^{\mathrm{T}}, \cdots, \mathbf{t}_N^{\mathrm{T}} \right]^{\mathrm{T}} \tag{7}$$
Theoretically, SLFNs are able to approximate any continuous target function with enough hidden layer nodes using randomized input weights. Given a training set $\left\{ (\mathbf{x}_i, \mathbf{t}_i) \mid \mathbf{x}_i \in \mathbb{R}^n, \mathbf{t}_i \in \mathbb{R}^m, i = 1, \cdots, N \right\}$, the randomized weights neural networks aim to reach the smallest training error and the smallest norm of the output weights jointly:

$$\min_{\boldsymbol{\beta}} \left\{ \left\| \mathbf{H} \boldsymbol{\beta} - \mathbf{T} \right\|, \left\| \boldsymbol{\beta} \right\| \right\} \tag{8}$$

The solution can be analytically determined by the expression below:

$$\hat{\boldsymbol{\beta}} = \mathbf{H}^{\dagger} \mathbf{T} \tag{9}$$
where $\mathbf{H}^{\dagger}$ is the Moore-Penrose generalized inverse of the matrix $\mathbf{H}$. The reason for using the Moore-Penrose generalized inverse is that $\mathbf{H}$ may be singular and/or not square. The relations between $\mathbf{H}^{\dagger}$ and $\mathbf{H}$ include: $\mathbf{H} \mathbf{H}^{\dagger} \mathbf{H} = \mathbf{H}$, $\mathbf{H}^{\dagger} \mathbf{H} \mathbf{H}^{\dagger} = \mathbf{H}^{\dagger}$, $\left( \mathbf{H} \mathbf{H}^{\dagger} \right)^{\mathrm{T}} = \mathbf{H} \mathbf{H}^{\dagger}$ and $\left( \mathbf{H}^{\dagger} \mathbf{H} \right)^{\mathrm{T}} = \mathbf{H}^{\dagger} \mathbf{H}$.
In particular, when $\mathbf{H}$ has full column rank,

$$\mathbf{H}^{\dagger} = \left( \mathbf{H}^{\mathrm{T}} \mathbf{H} \right)^{-1} \mathbf{H}^{\mathrm{T}} \tag{10}$$

and when $\mathbf{H}$ has full row rank,

$$\mathbf{H}^{\dagger} = \mathbf{H}^{\mathrm{T}} \left( \mathbf{H} \mathbf{H}^{\mathrm{T}} \right)^{-1} \tag{11}$$
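As a concrete illustration of Equations (1)-(9), the following minimal NumPy sketch trains and applies such a network; the sigmoid activation and the uniform $[-1, 1]$ initialization of the input weights and biases are illustrative assumptions, not choices prescribed by the algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_rwnn(X, T, L):
    """Randomized weights training: random (a_i, b_i), pseudo-inverse for beta."""
    n = X.shape[1]
    A = rng.uniform(-1.0, 1.0, size=(n, L))   # random input weights a_i
    b = rng.uniform(-1.0, 1.0, size=L)        # random biases b_i
    H = 1.0 / (1.0 + np.exp(-(X @ A + b)))    # hidden layer output matrix, Eq. (5)
    beta = np.linalg.pinv(H) @ T              # Moore-Penrose solution, Eq. (9)
    return A, b, beta

def predict_rwnn(X, A, b, beta):
    """Evaluate Eq. (1) on new inputs with the frozen random hidden layer."""
    H = 1.0 / (1.0 + np.exp(-(X @ A + b)))
    return H @ beta
```

Note that `np.linalg.pinv` computes the Moore-Penrose inverse via the SVD, so it covers the singular and non-square cases of $\mathbf{H}$ discussed above.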
3. Mutual Information Based Feature Selection
3.1. Mutual Information
Information entropy can quantify the uncertainty of random variables and scale the amount of information shared by these variables. Thus, it has been widely used in many fields. The entropy can be represented as:

$$H(X) = -\sum_{x \in X} p(x) \log p(x) \tag{12}$$

where $p(x)$ is the marginal probability distribution of $X$.
Mutual information (MI) can measure the mutual dependence of two variables, which is defined as:

$$I(X;Y) = \sum_{x \in X} \sum_{y \in Y} p(x, y) \log \frac{p(x, y)}{p(x) p(y)} = H(Y) - H(Y \mid X) \tag{13}$$

where $p(x, y)$ is the joint probability distribution, and $H(Y \mid X)$ is the conditional entropy of $Y$ when $X$ is known, which is calculated as

$$H(Y \mid X) = -\sum_{x \in X} p(x) \sum_{y \in Y} p(y \mid x) \log p(y \mid x) \tag{14}$$
For the continuous random variables,

$$H(X) = -\int p(x) \log p(x) \, \mathrm{d}x \tag{15}$$

$$H(Y \mid X) = -\iint p(x, y) \log p(y \mid x) \, \mathrm{d}x \, \mathrm{d}y \tag{16}$$

$$I(X;Y) = \iint p(x, y) \log \frac{p(x, y)}{p(x) p(y)} \, \mathrm{d}x \, \mathrm{d}y \tag{17}$$
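The paper does not fix a particular MI estimator; one common and simple choice is a histogram (discretization) estimate of Equation (17). A minimal sketch for two one-dimensional variables is given below, where the bin count of 16 is an arbitrary assumption.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of I(X;Y), Eq. (17), for 1-D samples x and y."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                      # joint probabilities p(x, y)
    px = pxy.sum(axis=1, keepdims=True)   # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)   # marginal p(y)
    mask = pxy > 0                        # empty cells contribute zero
    return float(np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])))
```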
3.2. Simple Feature Selection Based on Mutual Information
The mutual information feature selection (MIFS) algorithm can be described as follows: calculate the MI values between each input feature and the output variable, select the input features with the larger MI values while penalizing candidate features that have large MI values with the already selected features, and obtain the best input feature subset using a greedy search [14]. This method is time-consuming when selecting features from high-dimensional data.
A simpler MI-based method is: 1) calculate the MI values between each input feature and the output variable; 2) set a pre-defined MI threshold based on prior knowledge; 3) select the features with MI values higher than the threshold, as sketched below. How to select the optimal threshold value is an open question.
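A sketch of this simple threshold rule, reusing the `mutual_information` estimator above (the threshold itself remains a user-supplied assumption):

```python
import numpy as np

def select_by_mi(X, y, threshold, bins=16):
    """Keep the columns of X whose estimated MI with y reaches the threshold."""
    mi = np.array([mutual_information(X[:, j], y, bins) for j in range(X.shape[1])])
    return np.flatnonzero(mi >= threshold), mi   # selected column indices, all MI values
```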
4. MI Based Modified Randomized Weights Neural Networks
The proposed MI based modified randomized weights neural networks model is shown in Figure 1.
As shown in Figure 1, after obtaining the mapping outputs of the hidden layer nodes $h_1(\mathbf{x}), \cdots, h_L(\mathbf{x})$, the MI values between these outputs and the predicted variable are calculated with:

$$\xi_i = I\left( h_i(\mathbf{x}); \mathbf{t} \right), \quad i = 1, \cdots, L \tag{18}$$

Given a pre-set threshold value $\theta$, the following rule is used to select the hidden layer's outputs:

$$\xi_i \geq \theta \tag{19}$$
We denote the hidden layer's outputs with $\xi_i \geq \theta$ as:

$$\mathbf{H}_s = \left[ h_{s_1}(\mathbf{x}), \cdots, h_{s_{L_s}}(\mathbf{x}) \right] \tag{20}$$

where $L_s$ is the number of the selected hidden layer outputs. Therefore, $\mathbf{H}_s$ has less randomization than the original $\mathbf{H}$. The output weights are again computed using the Moore-Penrose method with:

$$\hat{\boldsymbol{\beta}}_s = \mathbf{H}_s^{\dagger} \mathbf{T} \tag{21}$$
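The following sketch puts Equations (18)-(21) together, reusing `mutual_information` from Section 3.1; as before, the sigmoid activation and the uniform initialization are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng()

def train_mi_rwnn(X, t, L, theta, bins=16):
    """MI based modified training: keep only hidden outputs with MI >= theta."""
    n = X.shape[1]
    A = rng.uniform(-1.0, 1.0, size=(n, L))
    b = rng.uniform(-1.0, 1.0, size=L)
    H = 1.0 / (1.0 + np.exp(-(X @ A + b)))          # all L hidden outputs
    xi = np.array([mutual_information(H[:, i], t, bins)
                   for i in range(L)])              # Eq. (18)
    keep = np.flatnonzero(xi >= theta)              # Eq. (19)
    beta_s = np.linalg.pinv(H[:, keep]) @ t         # Eq. (21) on H_s
    return A, b, keep, beta_s

def predict_mi_rwnn(X, A, b, keep, beta_s):
    """Predict using only the selected hidden layer outputs."""
    H = 1.0 / (1.0 + np.exp(-(X @ A + b)))
    return H[:, keep] @ beta_s
```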
Considering the problem of selecting the learning parameters, the MI based randomized weights algorithm can be represented as the following optimization problem:

$$\left( L^*, \theta^* \right) = \arg \min_{L,\, \theta} E\left( L, \theta \right) \tag{22}$$

where $E(L, \theta)$ denotes the prediction error of the model trained with $L$ hidden nodes and threshold $\theta$.
Some intelligent optimization methods can be used to address this problem.
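For instance, since the search space over $(L, \theta)$ is small, even a naive grid search can approximate Equation (22); the candidate grids are supplied by the caller, and an intelligent optimizer would simply replace the double loop below.

```python
import numpy as np
from itertools import product

def grid_search(X_tr, t_tr, X_te, t_te, Ls, thetas):
    """Exhaustively evaluate (L, theta) pairs and return the best by test RMSE."""
    best = (None, None, np.inf)
    for L, theta in product(Ls, thetas):
        A, b, keep, beta_s = train_mi_rwnn(X_tr, t_tr, L, theta)
        if keep.size == 0:                 # threshold too high: nothing selected
            continue
        pred = predict_mi_rwnn(X_te, A, b, keep, beta_s)
        rmse = np.sqrt(np.mean((pred - t_te) ** 2))
        if rmse < best[2]:
            best = (L, theta, rmse)
    return best                            # (L*, theta*, smallest RMSE)
```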
5. Application on Modeling Concrete Compressive Strength
Figure 1. MI based modified randomized weights neural networks model.

The concrete compressive strength data were obtained from the experimental studies of the group led by I.C. Yeh at Chung Hua University, Taiwan [15]. This dataset contains 1030 samples, and each sample has nine columns. The first seven columns are the input parameters, namely the per-cubic-meter contents of the various ingredients of the concrete placement: cement, blast furnace slag, fly ash, water, superplasticizer, coarse aggregate and fine aggregate. The eighth column is the curing age in days, and the last column is the concrete compressive strength.
Given that $L$ = 300, the MI values between the hidden layer's outputs and the predicted variable are shown in Figure 2.
Figure 2 shows that the maximum MI value is almost 10 times the minimum value. Thus, the hidden layer's outputs are not stable, and it is necessary to select the outputs with high MI values.
The original randomized weights algorithm and the MI based modified version are compared with different numbers of hidden nodes and different MI pre-set threshold values. In order to average out the randomization of the initial weights, the mean root mean square errors (MRMSEs) over 100 repeated runs are used to estimate the models' prediction accuracy. The statistical results are shown in Table 1.
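A minimal sketch of this evaluation protocol, which averages the test RMSE of `train_mi_rwnn` over repeated draws of the random input weights:

```python
import numpy as np

def mrmse(X_tr, t_tr, X_te, t_te, L, theta, repeats=100):
    """Mean RMSE over repeated runs, averaging out the random initial weights."""
    errs = []
    for _ in range(repeats):
        A, b, keep, beta_s = train_mi_rwnn(X_tr, t_tr, L, theta)
        if keep.size == 0:
            continue
        pred = predict_mi_rwnn(X_te, A, b, keep, beta_s)
        errs.append(np.sqrt(np.mean((pred - t_te) ** 2)))
    return float(np.mean(errs))
```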
Figure 2. MI values between hidden layer’s outputs and predicted variable.
Table 1. Statistical results (MRMSEs) of different learning parameters over 100 repeated runs.
Table 1 shows that: 1) the maximum MI values between the hidden layer's outputs and the predicted variable increase with the number of hidden nodes for the different learning parameters; 2) all of the smallest prediction errors with different learning parameters ($L$, $\theta$) occur at $L$ = 30 - 40, so this may be the best range for this benchmark dataset; 3) the biggest prediction errors occur at about $L$ = 200, and the reason may be related to the Moore-Penrose method; 4) the prediction performance is not much improved by the modified approach at $L$ = 40; however, for the other $L$ values, the prediction performance can be improved considerably with a suitable MI pre-set threshold value. Therefore, the large prediction error problem at $L$ = 200 can be avoided with the MI based modified approach. Thus, the proposed method has better robustness than the original randomized weights algorithm.
6. Conclusion
This paper proposes a new mutual information based randomized weights neural network. The input weights of the hidden layer are produced randomly, as in the normal randomized algorithm, but not all the outputs of the hidden layer are used to compute the output weights: a simple mutual information based feature selection method is used to select the hidden layer's outputs, and these selected outputs are used to compute the output weights with the pseudo-inverse method. The concrete compressive strength benchmark dataset is used to validate this method. Further research will address theoretical analysis and validate this idea on more benchmark datasets.
Acknowledgements
The research was sponsored by the China Postdoctoral Science Foundation (2013M532118, 2015T81082), the National Natural Science Foundation of China (61573364, 61273177), the State Key Laboratory of Synthetical Automation for Process Industries, and the China National 863 Projects (2015AA043802).