Unsupervised Feature Selection Based on Low-Rank Regularized Self-Representation

Wenyuan Li^{}, Lai Wei^{*}

College of Information Engineering, Shanghai Maritime University, Shanghai, China.

**DOI: **10.4236/oalib.1106274


Feature selection aims to find a concise set of features with good generalization ability by removing redundant, irrelevant, and noisy features. Recently, the regularized self-representation (RSR) method was proposed for unsupervised feature selection; it minimizes the L2,1 norms of the residual matrix and of the self-representation coefficient matrix. In this paper, we find that minimizing the L2,1 norm of the self-representation coefficient matrix alone cannot effectively extract strongly correlated features. Therefore, by adding a constraint minimizing the nuclear norm of the self-representation coefficient matrix, we propose a new unsupervised feature selection method, low-rank regularized self-representation (LRRSR), which can effectively discover the overall structure of the data. Experiments show that the proposed algorithm performs better on clustering tasks than RSR and other related algorithms.


Share and Cite:

Li, W. and Wei, L. (2020) Unsupervised Feature Selection Based on Low-Rank Regularized Self-Representation. *Open Access Library Journal*, **7**, 1-12. doi: 10.4236/oalib.1106274.

1. Introduction

In recent years, scientific research has faced more and more high-dimensional data. The high dimensionality of data greatly increases the time and space complexity of data processing and severely degrades the performance of algorithms designed for low-dimensional spaces; this is the so-called "curse of dimensionality" [1] problem. Methods that extract representative features from high-dimensional data were proposed to alleviate this problem [2] [3] [4] : remove irrelevant or redundant information from the original high-dimensional data, find a mapping from the complex data source to a low-dimensional space, and use this mapping to extract the representative features of the high-dimensional data, which then replace the original data in subsequent processing [5] [6] [7] [8] . Many novel feature selection methods have been proposed in recent years, such as FSFS (feature selection using feature similarity) [9] , LS (Laplacian score) [10] , MRFS (feature selection via minimum redundancy) [11] , and RSR (regularized self-representation) [12] .

Feature selection methods fall mainly into three categories: filter, wrapper [13] and embedding [14] [15] [16] [17] [18] . In the filter approach, the initial features are first filtered, and the filtered features are then used to train the model [19] . The Relief algorithm [20] , first proposed by Kira, is a feature weighting algorithm and one of the best-known filter methods. It computes the weight of each feature from the correlation between the feature and the category, and selects the features whose weights exceed a certain threshold as the feature selection result. The wrapper approach uses the clustering or classification performance of a learner as the evaluation criterion [14] , so as to select the subset of features most beneficial to that learner. Since the selected features must be evaluated by a classifier, the time complexity of the wrapper approach is much higher than that of the filter approach. In addition, the performance of both filter and wrapper methods is affected by the search strategy. The embedding approach, currently the most widely used, integrates feature selection with learner training: both are completed in the same optimization process, i.e., feature selection is performed automatically during training [19] . The recently proposed RSR method is one of the better-performing embedding methods.

The RSR method assumes that any feature of a sample can be linearly represented by the other, related features, and that the weights of this linear representation reflect the importance of each feature in representing the others. That is, in the RSR method, if a feature is important, it participates in the reconstruction of most other features and therefore generates a row of larger self-representation coefficients during self-representation reconstruction [12] , and vice versa. For the entire attribute set, all self-representation coefficient vectors form a reconstruction coefficient matrix. The RSR method makes the rows of this matrix sparse by minimizing its ${L}_{\mathrm{2,1}}$ norm, thereby facilitating the selection of features of higher importance. Experiments have demonstrated the effectiveness of RSR in comparison with other unsupervised feature selection methods.

Nevertheless, we find that minimizing the ${L}_{\mathrm{2,1}}$ norm of the self-representation coefficient matrix alone cannot effectively extract strongly correlated features. For example, if there are two important features with a strong correlation, the RSR method cannot effectively extract both simultaneously. To solve this problem, this paper proposes a new unsupervised feature selection method, low-rank regularized self-representation (LRRSR), which adds a constraint minimizing the nuclear norm of the self-representation coefficient matrix, and gives an optimization procedure for the LRRSR algorithm. Experiments on several UCI datasets and standard face datasets confirm the effectiveness of LRRSR in clustering tasks.

2. Regularized Self-Representation

As an unsupervised feature selection method, regularized self-representation (RSR) aims to select a set of representative features that can effectively reconstruct the remaining features. More important features participate in the reconstruction of more of the other features and correspondingly generate self-representation coefficients with larger values. Therefore, the RSR method selects the more important features by finding the row vectors with larger self-representation coefficients.

Let $X\in {\mathbb{R}}^{n\times m}$ be a data matrix containing n samples, each of which has m features. Let $X=XW+E$ , where W is a self-representation coefficient matrix and E is a residual matrix. RSR selects features by minimizing ${\Vert E\Vert}_{\mathrm{2,1}}$ and ${\Vert W\Vert}_{\mathrm{2,1}}$ , which leads to the following problem:

$\stackrel{^}{W}=\mathrm{arg}\underset{W}{\mathrm{min}}{\Vert E\Vert}_{2,1}+\lambda {\Vert W\Vert}_{2,1}\text{\hspace{1em}}\text{s}\text{.t}\text{.},X=XW+E.$ (1)

It can be seen from the formula that RSR learns the optimal W by solving this minimization problem.

For the residual matrix E, a common way to minimize the residual is to make the Frobenius norm of the residual term, ${\Vert E\Vert}_{F}$ , as small as possible. However, there may be outliers among the samples, and the Frobenius norm is sensitive to outliers. RSR therefore minimizes the ${L}_{\mathrm{2,1}}$ norm of E, ${\Vert E\Vert}_{\mathrm{2,1}}$ , which makes E row-sparse; this both reduces the reconstruction error in the feature selection process and enhances robustness to abnormal attributes. For the self-representation coefficient matrix $W=\left[{w}_{1}\mathrm{;}{w}_{2}\mathrm{;}\cdots \mathrm{;}{w}_{m}\right]$ , the magnitude of ${w}_{i}$ reflects the importance of the i^{th} feature of the data. RSR uses ${\Vert {w}_{i}\Vert}_{2}$ as a weight to measure the importance of the i^{th} feature, and selects the features of higher importance by minimizing ${\Vert W\Vert}_{\mathrm{2,1}}$ .
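As a minimal sketch of this scoring idea (assuming a coefficient matrix W has already been learned; the function names are ours, not the paper's), the row norms of W rank the features:

```python
import numpy as np

def feature_scores(W):
    """Score each feature by the L2 norm of its row in the
    self-representation coefficient matrix W (m x m)."""
    return np.linalg.norm(W, axis=1)

def select_features(W, k):
    """Return the indices of the k features with the largest row norms."""
    return np.argsort(feature_scores(W))[::-1][:k]

# Toy example: feature 0 participates heavily in reconstructing the others,
# so its row norm dominates and it is selected first.
W = np.array([[0.0, 0.9, 0.8],
              [0.1, 0.0, 0.0],
              [0.0, 0.1, 0.0]])
print(select_features(W, 1))  # → [0]
```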

3. Low-Rank Regularized Self-Representation

As mentioned in the previous section, the RSR algorithm can learn the more important features of the data. However, real-world data often contain a great deal of redundant information, and the correlations among features are relatively strong. RSR ignores the correlations among features when minimizing the residual. Therefore, on the basis of the RSR model, this paper adds a low-rank constraint on the self-representation coefficient matrix to capture the correlations in the sample data, and thus more effectively discover the overall structural information of the samples.

Given a data matrix X, $X=\left[{x}_{1}\mathrm{;}{x}_{2}\mathrm{;}\cdots \mathrm{;}{x}_{n}\right]\in {\mathbb{R}}^{n\times m}$ , that is, the data set has n samples in total, and each sample has m features. The low-rank regularized self-representation (LRRSR) model can be described as the following minimization problem:

$\stackrel{^}{Z}=\mathrm{arg}\underset{\text{Z}}{\mathrm{min}}{\Vert E\Vert}_{2,1}+\lambda {\Vert Z\Vert}_{2,1}+\beta {\Vert Z\Vert}_{\ast}\text{\hspace{1em}}\text{s}\text{.t}.,X=XZ+E.$ (2)

Here, $Z\in {\mathbb{R}}^{m\times m}$ is the self-representation coefficient matrix of the data, $Z=\left[{z}_{1}\mathrm{;}{z}_{2}\mathrm{;}\cdots \mathrm{;}{z}_{m}\right]$ , where ${z}_{i}$ is the i^{th} self-representation vector of Z; E is the residual matrix, $E=X-XZ=\left[{e}_{1};{e}_{2};\cdots ;{e}_{n}\right]$ , where ${e}_{i}\in {\mathbb{R}}^{1\times m}$ is the residual vector of the i^{th} sample; ${\Vert \cdot \Vert}_{2,1}$ denotes the ${L}_{\mathrm{2,1}}$ norm and ${\Vert \cdot \Vert}_{\ast}$ the nuclear norm. We discover the overall structure of the feature space by minimizing the nuclear norm. $\lambda$ and $\beta$ are empirical parameters.

As shown in formula (2), solving for the optimal self-representation coefficient matrix is transformed into minimizing the sum of the residual term and the regularization constraints on the coefficients. Minimizing ${\Vert E\Vert}_{\mathrm{2,1}}$ keeps the residuals as small as possible while training Z. The value of ${\Vert {z}_{i}\Vert}_{2}$ directly reflects the importance of the i^{th} feature in the self-representation process, so minimizing ${\Vert Z\Vert}_{2,1}$ effectively extracts important features. The rank of a matrix measures its global structure and contains the overall structural information of the matrix; therefore, by adding a low-rank constraint on the self-representation coefficient matrix Z, i.e., minimizing ${\Vert Z\Vert}_{\ast}$ , we can better discover the overall structural information of the data and the correlations between its features, and thus effectively extract strongly correlated features.
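The nuclear norm used here is the sum of the singular values of Z, the standard convex surrogate for the rank. A small numpy illustration (our own, not from the paper) of why perfectly correlated representation patterns give rank one:

```python
import numpy as np

def nuclear_norm(Z):
    # Sum of singular values; a convex surrogate for rank(Z).
    return np.linalg.norm(Z, ord='nuc')

# A rank-1 coefficient matrix: every feature is represented
# by the same pattern, scaled.
z = np.array([1.0, 2.0, 3.0])
Z = np.outer(z, z)
print(np.linalg.matrix_rank(Z))  # → 1
# For a rank-1 matrix the nuclear norm equals its single singular value.
print(nuclear_norm(Z))  # → 14.0 (= ||z||^2)
```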

4. Algorithm Optimization and Solution

In this section, we describe how to solve the LRRSR optimization problem. First, we convert formula (2) into the following equivalent problem:

$\underset{Z,E,W,J}{\mathrm{min}}{\Vert E\Vert}_{2,1}+\lambda {\Vert W\Vert}_{2,1}+\beta {\Vert J\Vert}_{\ast}\text{\hspace{1em}}\text{s}\text{.t}\text{.},X=XZ+E,\mathrm{}Z=J,\mathrm{}Z=W.$ (3)

Then, through the augmented Lagrange method (ALM), formula (3) can be transformed into the following problem:

$\begin{array}{l}\underset{Z,E,W,J,{Y}_{1},{Y}_{2},{Y}_{3}}{\mathrm{min}}{\Vert E\Vert}_{2,1}+\lambda {\Vert W\Vert}_{2,1}+\beta {\Vert J\Vert}_{\ast}+tr\left[{Y}_{1}^{\text{T}}\left(X-XZ-E\right)\right]\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+tr\left[{Y}_{2}^{\text{T}}\left(Z-J\right)\right]+tr\left[{Y}_{3}^{\text{T}}\left(Z-W\right)\right]\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{\mu}{2}\left({\Vert X-XZ-E\Vert}_{F}^{2}+{\Vert Z-J\Vert}_{F}^{2}+{\Vert Z-W\Vert}_{F}^{2}\right).\end{array}$ (4)

Here, $\mu >0$ is the penalty parameter, and ${Y}_{1}\mathrm{,}{Y}_{2}\mathrm{,}{Y}_{3}$ are Lagrange multipliers. The unknowns can be solved by iterative updating: in each iteration, $J\mathrm{,}W\mathrm{,}Z\mathrm{,}E$ are updated in turn, each with the other variables fixed. The specific update process can be summarized as follows:

1) Fix other unknowns and update J. Ignoring the terms unrelated to J in formula (4), then at the ${\left(k+1\right)}^{\text{th}}$ iteration, J can be expressed as

$\begin{array}{c}{J}_{k+1}=\mathrm{arg}\underset{{J}_{k}}{\mathrm{min}}\beta {\Vert {J}_{k}\Vert}_{\ast}+\langle {Y}_{2}^{k},{Z}_{k}-{J}_{k}\rangle +\frac{{\mu}_{k}}{2}\left({\Vert {Z}_{k}-{J}_{k}\Vert}_{F}^{2}\right)\\ =\mathrm{arg}\mathrm{}\underset{{J}_{k}}{\mathrm{min}}\frac{\beta}{{\mu}_{k}}{\Vert {J}_{k}\Vert}_{\ast}+\frac{1}{2}{\Vert {J}_{k}-\left({Z}_{k}+\frac{{Y}_{2}^{k}}{{\mu}_{k}}\right)\Vert}_{F}^{2}.\end{array}$ (5)

Then, ${J}_{k+1}={U}_{k}{\Theta}_{\beta /{\mu}_{k}}\left({S}_{k}\right){V}_{k}^{\text{T}}$ , where ${U}_{k}{S}_{k}{V}_{k}^{\text{T}}$ is the singular value decomposition of $\left({Z}_{k}+{Y}_{2}^{k}/{\mu}_{k}\right)$ and ${\Theta}_{\beta /{\mu}_{k}}$ is the singular value thresholding operator with threshold $\beta /{\mu}_{k}$ [21] .
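A minimal numpy sketch of the singular value thresholding operator used in this step (the function name `svt` is ours):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of tau * ||.||_*.
    Soft-thresholds the singular values of M by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# The J-update above is then: J_next = svt(Z + Y2 / mu, beta / mu)
```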

2) Fix other unknowns and update W. Ignoring the terms unrelated to W in formula (4), then at the ${\left(k+1\right)}^{\text{th}}$ iteration, W can be expressed as

$\begin{array}{c}{W}_{k+1}=\mathrm{arg}\underset{{W}_{k}}{\mathrm{min}}\lambda {\Vert {W}_{k}\Vert}_{2,1}+\langle {Y}_{3}^{k},{Z}_{k}-{W}_{k}\rangle +\frac{{\mu}_{k}}{2}\left({\Vert {Z}_{k}-{W}_{k}\Vert}_{F}^{2}\right)\\ =\mathrm{arg}\underset{{W}_{k}}{\mathrm{min}}\frac{\lambda}{{\mu}_{k}}{\Vert {W}_{k}\Vert}_{2,1}+\frac{1}{2}{\Vert {W}_{k}-\left({Z}_{k}+\frac{{Y}_{3}^{k}}{{\mu}_{k}}\right)\Vert}_{F}^{2}.\end{array}$ (6)

Here we can solve the above problem by the following Theorem 1 [22] .

Theorem 1. For a given matrix $Q=\left[{q}_{1}\mathrm{,}{q}_{2}\mathrm{,}\cdots \mathrm{,}{q}_{i}\mathrm{,}\cdots \right]$ , consider the following problem:

$\stackrel{^}{W}=\mathrm{arg}\underset{W}{\mathrm{min}}\lambda {\Vert W\Vert}_{2,1}+\frac{1}{2}{\Vert W-Q\Vert}_{F}^{2}.$ (7)

Then the i^{th} column of the optimal solution W can be expressed as:

$W\left(:,i\right)=\left\{\begin{array}{ll}\frac{{\Vert {q}_{i}\Vert}_{2}-\lambda}{{\Vert {q}_{i}\Vert}_{2}}{q}_{i}, & \text{if}\text{\hspace{0.17em}}\lambda <{\Vert {q}_{i}\Vert}_{2},\\ 0, & \text{otherwise}.\end{array}\right.$ (8)
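Theorem 1 can be sketched in numpy as a column-wise shrinkage operator (the helper name `l21_prox` is ours):

```python
import numpy as np

def l21_prox(Q, lam):
    """Column-wise solution of min_W lam*||W||_{2,1} + 0.5*||W - Q||_F^2
    (Theorem 1): shrink each column of Q toward zero by lam; columns
    whose norm does not exceed lam are set to zero."""
    W = np.zeros_like(Q)
    norms = np.linalg.norm(Q, axis=0)
    keep = norms > lam
    W[:, keep] = Q[:, keep] * (norms[keep] - lam) / norms[keep]
    return W
```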

3) Fix other unknowns and update Z. Ignoring the terms unrelated to Z in formula (4), then at the ${\left(k+1\right)}^{\text{th}}$ iteration, Z can be expressed as

$\begin{array}{c}{Z}_{k+1}=\mathrm{arg}\underset{{Z}_{k}}{\mathrm{min}}\langle {Y}_{1}^{k},X-X{Z}_{k}-{E}_{k}\rangle +\langle {Y}_{2}^{k},{Z}_{k}-{J}_{k+1}\rangle +\langle {Y}_{3}^{k},{Z}_{k}-{W}_{k+1}\rangle \\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{{\mu}_{k}}{2}\left({\Vert X-X{Z}_{k}-{E}_{k}\Vert}_{F}^{2}+{\Vert {Z}_{k}-{J}_{k+1}\Vert}_{F}^{2}+{\Vert {Z}_{k}-{W}_{k+1}\Vert}_{F}^{2}\right)\\ =\mathrm{arg}\underset{{Z}_{k}}{\mathrm{min}}{\Vert X-X{Z}_{k}-{E}_{k}+\frac{{Y}_{1}^{k}}{{\mu}_{k}}\Vert}_{F}^{2}+{\Vert {J}_{k+1}-\left({Z}_{k}+\frac{{Y}_{2}^{k}}{{\mu}_{k}}\right)\Vert}_{F}^{2}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+{\Vert {W}_{k+1}-\left({Z}_{k}+\frac{{Y}_{3}^{k}}{{\mu}_{k}}\right)\Vert}_{F}^{2}.\end{array}$ (9)

Then,

${Z}_{k+1}={\left(2I+{X}^{\text{T}}X\right)}^{-1}\left({X}^{\text{T}}X-{X}^{\text{T}}{E}_{k}+{X}^{\text{T}}\frac{{Y}_{1}^{k}}{{\mu}_{k}}+{J}_{k+1}-\frac{{Y}_{2}^{k}}{{\mu}_{k}}+{W}_{k+1}-\frac{{Y}_{3}^{k}}{{\mu}_{k}}\right)\mathrm{.}$ (10)
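The closed-form update (10) amounts to solving one linear system per iteration; a sketch, with all variable and function names ours:

```python
import numpy as np

def update_z(X, E, J, W, Y1, Y2, Y3, mu):
    """Closed-form Z-update of Eq. (10). X, E, Y1 are n x m;
    J, W, Y2, Y3 are m x m; mu is the penalty parameter."""
    m = X.shape[1]
    lhs = 2.0 * np.eye(m) + X.T @ X
    rhs = (X.T @ X - X.T @ E + X.T @ Y1 / mu
           + J - Y2 / mu + W - Y3 / mu)
    # Solving the system is preferable to forming the explicit inverse.
    return np.linalg.solve(lhs, rhs)
```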

4) Fix other unknowns and update E. Ignoring the terms unrelated to E in formula (4), then at the ${\left(k+1\right)}^{\text{th}}$ iteration, E can be expressed as

$\begin{array}{c}{E}_{k+1}=\mathrm{arg}\underset{{E}_{k}}{\mathrm{min}}{\Vert {E}_{k}\Vert}_{2,1}+\langle {Y}_{1}^{k},X-X{Z}_{k+1}-{E}_{k}\rangle +\frac{{\mu}_{k}}{2}\left({\Vert X-X{Z}_{k+1}-{E}_{k}\Vert}_{F}^{2}\right)\\ =\mathrm{arg}\underset{{E}_{k}}{\mathrm{min}}\frac{1}{{\mu}_{k}}{\Vert {E}_{k}\Vert}_{2,1}+\frac{1}{2}{\Vert {E}_{k}-\left(X-X{Z}_{k+1}+\frac{{Y}_{1}^{k}}{{\mu}_{k}}\right)\Vert}_{F}^{2}.\end{array}$ (11)

As with problem (6), the above problem is solved by Theorem 1.

5) Fix the other unknowns and update the multipliers and penalty parameter. The updates at the ${\left(k+1\right)}^{\text{th}}$ iteration are summarized as follows:

$\{\begin{array}{l}{Y}_{1}^{k+1}={Y}_{1}^{k}+{\mu}_{k}\left(X-X{Z}_{k+1}-{E}_{k+1}\right)\\ {Y}_{2}^{k+1}={Y}_{2}^{k}+{\mu}_{k}\left({Z}_{k+1}-{J}_{k+1}\right)\\ {Y}_{3}^{k+1}={Y}_{3}^{k}+{\mu}_{k}\left({Z}_{k+1}-{W}_{k+1}\right)\\ {\mu}_{k+1}=\mathrm{min}\left({\mu}_{\mathrm{max}},\rho {\mu}_{k}\right)\end{array}$ (12)

Here ${\mu}_{\mathrm{max}}$ and $\rho $ are two positive parameters, and k represents the number of iterations.

The complete LRRSR algorithm iterates the updates (5), (6), (10) and (11), together with the multiplier and parameter updates (12), until convergence.
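Putting the updates together, a compact (unoptimized) sketch of the full iteration follows; the hyperparameter defaults are illustrative, not the paper's tuned values:

```python
import numpy as np

def lrrsr(X, lam=5.0, beta=1.0, mu=1e-2, rho=1.1, mu_max=1e6, n_iter=200):
    """ALM iterations for LRRSR, following Eqs. (5)-(12)."""
    n, m = X.shape
    Z = np.zeros((m, m)); E = np.zeros((n, m))
    Y1 = np.zeros((n, m)); Y2 = np.zeros((m, m)); Y3 = np.zeros((m, m))

    def svt(M, tau):  # proximal operator of tau * ||.||_*
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    def l21_prox(Q, tau):  # column-wise shrinkage (Theorem 1)
        norms = np.linalg.norm(Q, axis=0)
        scale = np.maximum(norms - tau, 0.0) / np.maximum(norms, 1e-12)
        return Q * scale

    for _ in range(n_iter):
        J = svt(Z + Y2 / mu, beta / mu)                        # Eq. (5)
        W = l21_prox(Z + Y3 / mu, lam / mu)                    # Eq. (6)
        lhs = 2.0 * np.eye(m) + X.T @ X                        # Eq. (10)
        rhs = (X.T @ X - X.T @ E + X.T @ Y1 / mu
               + J - Y2 / mu + W - Y3 / mu)
        Z = np.linalg.solve(lhs, rhs)
        E = l21_prox(X - X @ Z + Y1 / mu, 1.0 / mu)            # Eq. (11)
        Y1 += mu * (X - X @ Z - E)                             # Eq. (12)
        Y2 += mu * (Z - J)
        Y3 += mu * (Z - W)
        mu = min(mu_max, rho * mu)
    return Z
```

Features are then ranked by the row norms of the returned Z.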

5. Experiments

We perform experiments on two UCI datasets and three face datasets to compare the performance of the proposed LRRSR model with other unsupervised feature selection methods on clustering tasks. In the comparison experiments, since the algorithms LS, UDFS and MCFS need to construct a nearest-neighbor graph, the number of nearest neighbors k is empirically set to 5.

5.1. Experiments on the UCI Datasets

We compare the performance of the proposed LRRSR and RSR on the clustering task using the Zoo and Dermatology datasets. The statistics of the two datasets are shown in Table 1.

In the comparison experiments on the UCI datasets, we compare the clustering accuracy of the two methods under different numbers of selected features. To better show the performance difference between the two methods, we select $\left(\mathrm{5,10}\right)$ and $\left(\mathrm{2,15}\right)$ from all the experimental results as the ranges of the number of selected features for the Zoo and Dermatology datasets, respectively. The experimental results are shown in Figure 1. The clustering accuracy of LRRSR is higher than that of RSR in most cases; therefore, the LRRSR proposed in this paper performs better than the RSR model in clustering tasks.
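For reproduction purposes, one way to run such an evaluation (our own code, not the paper's) is to keep the features with the largest row norms of the learned coefficient matrix, run k-means on the reduced data, and compute best-match clustering accuracy via the Hungarian algorithm:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import KMeans

def clustering_accuracy(y_true, y_pred):
    """Best-match clustering accuracy: match predicted clusters to
    true classes with the Hungarian algorithm, then count agreements."""
    k = max(y_true.max(), y_pred.max()) + 1
    cost = np.zeros((k, k), dtype=int)
    for t, p in zip(y_true, y_pred):
        cost[t, p] += 1
    row, col = linear_sum_assignment(-cost)
    return cost[row, col].sum() / len(y_true)

def evaluate(X, y, Z, n_features, n_clusters, seed=0):
    """Keep the n_features with largest row norms of Z, then k-means."""
    idx = np.argsort(np.linalg.norm(Z, axis=1))[::-1][:n_features]
    pred = KMeans(n_clusters=n_clusters, n_init=10,
                  random_state=seed).fit_predict(X[:, idx])
    return clustering_accuracy(y, pred)
```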

5.2. Experiments on the Face Datasets

5.2.1. Introduction to Experimental Datasets

In this section, we use three commonly used standard face databases for experiments, namely the ORL, YALEB and AR face databases.

The ORL face database contains 400 face images of 40 people (10 images per person) taken under different expressions, lighting conditions and shooting angles. We used the first 100 images, i.e., 10 people with 10 images each, and normalized the images to $32\times 32$ . Some samples are shown in Figure 2.

Table 1. Some attributes of the zoo dataset and the dermatology dataset.

Figure 1. Experimental results of two different methods on the zoo dataset (left) and the dermatology dataset (right).

Figure 2. Some examples of the images of one class in the ORL face database.

The YALEB face database contains 38 people, with 64 face images per person, taken under different lighting conditions and fixed shooting angles. We selected the images of 10 people for the experiments; for each person, we selected 6 images taken at the same shooting angle and normalized the images to $32\times 32$ . Some samples are shown in Figure 3.

The AR face database contains 14 people and 100 images for each person. We used a total of 140 images of the first 10 people for experiments and normalized the images to $60\times 43$ . Some samples are shown in Figure 4.

5.2.2. Experimental Analysis and Results

We compare the feature weight maps learned by the proposed LRRSR and RSR, as shown in Figure 5. As can be seen from the figure, the feature weight map learned by LRRSR is smoother and more similar to real face features than RSR. The reason is that LRRSR analyzes the correlation between sample data based on RSR, which can better extract the overall structure information. This result also verifies the effectiveness of our proposed algorithm.


In the comparison experiments on the face databases, we compare the proposed method with RSR, LS, UDFS and MCFS. The experiments compare the clustering accuracy of the different algorithms under different numbers of selected features, which we set to $\left\{\mathrm{60,70,80,90,100,110}\right\}$ . LS and MCFS need to build a neighbor graph, and we set the number of neighbors to 5; for RSR, we set the parameter $\lambda =10$ . For the proposed LRRSR, in the experiments on the ORL and YALEB face databases, the parameters are set to $\lambda =5$ and $\beta =1$ , and the comparison results are plotted in Figure 6 and Figure 7, respectively. In the experiment on the AR face database, $\lambda =11$ and $\beta =1$ are used as experimental parameters, and the comparison results are shown in Figure 8. The experiments on the three face databases show that LRRSR achieves higher clustering accuracy than the other compared algorithms in most cases. Among the compared methods, LS and MCFS are designed to preserve the similarity of the original feature space as much as possible; UDFS introduces discriminative information; and MCFS, RSR and LRRSR use regression models for feature selection. Compared with the other algorithms, LRRSR retains similarity information in the original feature space while also exploring the overall structure of the data and the correlations between features as much as possible, so it can extract features that better describe the original data. The experimental results show that the features learned by the proposed LRRSR are more effective in clustering tasks.

Figure 3. Some examples of the images of one class in the YALEB face database.

Figure 4. Some examples of the images of one class in the AR face database.

Figure 5. Feature weight maps learned on the AR face database (two pictures on the left) and the ORL face database (two pictures on the right).

Figure 6. Experimental results of various methods on the ORL face database.

Figure 7. Experimental results of various methods on the YALEB face database.

Figure 8. Experimental results of various methods on the AR face database.

6. Conclusion

In this paper, we propose a new unsupervised feature selection method, low-rank regularized self-representation (LRRSR). Regularized self-representation (RSR) selects features by minimizing the ${L}_{\mathrm{2,1}}$ norms of the residual matrix and the self-representation coefficient matrix. Since minimizing the ${L}_{\mathrm{2,1}}$ norm of the self-representation coefficient matrix cannot effectively extract strongly correlated features, LRRSR adds a low-rank constraint on the self-representation coefficient matrix to better capture the correlation information of the data. In LRRSR, minimizing the ${L}_{\mathrm{2,1}}$ norm of the residual matrix keeps the residual small and provides robustness to outliers; minimizing the ${L}_{\mathrm{2,1}}$ norm of the self-representation coefficient matrix makes its rows sparse, so that important features are better extracted; and minimizing the nuclear norm of the self-representation coefficient matrix discovers the overall structure of the sample data and the correlation information among data features, so that strongly correlated features can be extracted effectively. Experiments show that LRRSR performs better in clustering tasks than RSR and other related algorithms.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

[1] | Bishop, C.M. (2006) Pattern Recognition and Machine Learning. Springer, Berlin, 738. |

[2] |
Zhu, X., Zhang, S., Jin, Z., Zhang, Z. and Xu, Z. (2011) Missing Value Estimation for Mixed-Attribute Data Sets. IEEE Transactions on Knowledge and Data Engineering, 23, 110-121. https://doi.org/10.1109/TKDE.2010.99 |

[3] | Zheng, M., Bu, J., Chen, C., Wang, C., Zhang, L., Qiu, G. and Cai, D. (2011) Graph Regularized Sparse Coding for Image Representation. IEEE Transactions on Image Processing, 20, 1327-1336. https://doi.org/10.1109/TIP.2010.2090535 |

[4] |
Ma, Z., Yang, Y., Sebe, N. and Hauptmann, A.G. (2014) Knowledge Adaptation with Partially Shared Features for Event Detection Using Few Exemplars. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36, 1789-1802.
https://doi.org/10.1109/TPAMI.2014.2306419 |

[5] | Cai, D., Zhang, C. and He, X. (2010) Unsupervised Feature Selection for Multi-Cluster Data. In: Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD Press, Washington DC, 333-342. https://doi.org/10.1145/1835804.1835848 |

[6] |
Gao, S., Tsang, I.W.-H., Chia, L.-T. and Zhao, P. (2010) Local Features Are Not Lonely—Laplacian Sparse Coding for Image Classification. In: The Twenty-Third IEEE Conference on Computer Vision and Pattern Recognition, IEEE Press, San Francisco, 3555-3561. https://doi.org/10.1109/CVPR.2010.5539943 |

[7] | Yang, Y., Zhuang, Y.-T., Wu, F. and Pan, Y.-H. (2008) Harmonizing Hierarchical Manifolds for Multimedia Document Semantics Understanding and Cross-Media Retrieval. IEEE Transactions on Multimedia, 10, 437-446. https://doi.org/10.1109/TMM.2008.917359 |

[8] |
Li, J., Cheng, K., Wang, S., Morstatter, F., Trevino, R.P., Tang, J. and Liu, H. (2018) Feature Selection: A Data Perspective. ACM Computing Surveys, 50, 94.
https://doi.org/10.1145/3136625 |

[9] | Mitra, P., Murthy, C.A. and Pal, S.K. (2002) Unsupervised Feature Selection Using Feature Similarity. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24, 301-312. https://doi.org/10.1109/34.990133 |

[10] | He, X., Cai, D. and Niyogi, P. (2005) Laplacian Score for Feature Selection. In: International Conference on Neural Information Processing Systems, MIT Press, Cambridge, 507-514. |

[11] | Zhao, Z., Wang, L. and Liu, H. (2010) Efficient Spectral Feature Selection with Minimum Redundancy. Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence, AAAI Press, Atlanta, 673-678. |

[12] |
Zhu, P., Zuo, W., Zhang, L., Hu, Q. and Shiu, S.C.K. (2015) Unsupervised Feature Selection by Regularized Self-Representation. Pattern Recognition, 48, 438-446.
https://doi.org/10.1016/j.patcog.2014.08.006 |

[13] | Kohavi, R. and John, G.H. (1997) Wrappers for Feature Subset Selection. Artificial Intelligence, 97, 273-324. https://doi.org/10.1016/S0004-3702(97)00043-X |

[14] | Kotsiantis, S.B. (2011) Feature Selection for Machine Learning Classification Problems: A Recent Overview. Artificial Intelligence Review, 42, 1-20. https://doi.org/10.1007/s10462-011-9230-1 |

[15] | Zhang, S., Zhou, H., Jiang, F. and Li, X. (2015) Robust Visual Tracking Using Structurally Random Projection and Weighted Least Squares. IEEE Transactions on Circuits and Systems for Video Technology, 25, 1749-1760. https://doi.org/10.1109/TCSVT.2015.2406194 |

[16] |
Wang, W., Cai, D., Wang, L., Huang, Q., Xu, X. and Li, X. (2016) Synthesized Computational Aesthetic Evaluation of Photos. Neurocomputing, 172, 244-252.
https://doi.org/10.1016/j.neucom.2014.12.106 |

[17] | Abualigah, L.M. and Khader, A.T. (2017) Unsupervised Text Feature Selection Technique Based on Hybrid Particle Swarm Optimization Algorithm with Genetic Operators for the Text Clustering. The Journal of Supercomputing, 73, 4773-4795. https://doi.org/10.1007/s11227-017-2046-2 |

[18] |
Han, K., Wang, Y., Zhang, C., Li, C. and Xu, C. (2018) Autoencoder Inspired Unsupervised Feature Selection. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, 15-20 April 2018, 2941-2945.
https://doi.org/10.1109/ICASSP.2018.8462261 |

[19] | Zhou, Z. (2016) Machine Learning. Tsinghua University Press, Beijing, 247-263. |

[20] | Kira, K. and Rendell, L.A. (1992) The Feature Selection Problem: Traditional Methods and a New Algorithm. In: Proceedings of the 10th National Conference on Artificial Intelligence, AAAI Press, Atlanta, 129-134. |

[21] | Cai, J.-F., Candes, E.J. and Shen, Z. (2010) A Singular Value Thresholding Algorithm for Matrix Completion. SIAM Journal on Optimization, 20, 1956-1982. https://doi.org/10.1137/080738970 |

[22] | Liu, G., Lin, Z., Yan, S., Sun, J., Yu, Y. and Ma, Y. (2012) Robust Recovery of Subspace Structures by Low-Rank Representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35, 171-184. https://doi.org/10.1109/TPAMI.2012.88 |


Copyright © 2021 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.