Random Subspace Learning Approach to High-Dimensional Outliers Detection

Abstract

We introduce and develop a novel approach to outlier detection based on an adaptation of random subspace learning. Our proposed method handles both high-dimension low-sample size and traditional low-dimension high-sample size datasets. Essentially, we avoid the computational bottleneck of techniques like the Minimum Covariance Determinant (MCD) by computing the needed determinants and associated measures in much lower dimensional subspaces. Both the theoretical and the computational development of our approach reveal that it is computationally more efficient than the regularized methods in the high-dimension low-sample size setting, and that it often competes favorably with existing methods as far as the percentage of correctly detected outliers is concerned.

Share and Cite:

Liu, B. and Fokoué, E. (2015) Random Subspace Learning Approach to High-Dimensional Outliers Detection. Open Journal of Statistics, 5, 618-630. doi: 10.4236/ojs.2015.56063.

1. Introduction

We are given a dataset D = {x_1, x_2, …, x_n}, where each x_i ∈ R^p, and where the special scenario in which p > n is referred to as the high dimensional low sample size (HDLSS) setting. It is assumed that the basic distribution of the x_i's is multivariate Gaussian, so that the density of x is given by φ(x; μ, Σ), with

\phi(x; \mu, \Sigma) = \frac{1}{(2\pi)^{p/2}\,|\Sigma|^{1/2}} \exp\!\left\{-\frac{1}{2}(x-\mu)^{\top}\Sigma^{-1}(x-\mu)\right\}. (1)

It is also further assumed that the dataset is contaminated, with a proportion ε, where 0 < ε < 1/2, of observations that are outliers, so that under the ε-contamination regime, the probability density function of x is given by

f(x) = (1-\varepsilon)\,\phi(x; \mu, \Sigma) + \varepsilon\,\phi(x; \mu + \delta, \kappa\Sigma), (2)

where δ represents the contamination of the location parameter, while κ captures the level of contamination of the scatter matrix. Given a dataset with the above characteristics, the goal of all outlier detection techniques and methods is to select and isolate as many outliers as possible, so as to perform robust statistical procedures that are not adversely affected by those outliers. In such scenarios, where the multivariate Gaussian is the assumed basic underlying distribution, the classical Mahalanobis distance is the default measure of the proximity of the observations, namely

D(x_i) = \sqrt{(x_i - \hat{\mu})^{\top}\,\hat{\Sigma}^{-1}\,(x_i - \hat{\mu})}. (3)
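For concreteness, the following minimal R sketch computes the classical (non-robust) Mahalanobis distances of Equation (3) with base R; the toy data, sample size and dimension are illustrative choices, not settings from the paper.

```r
set.seed(1)
X         <- matrix(rnorm(200 * 5), ncol = 5)  # toy data: n = 200 observations, p = 5 variables
mu.hat    <- colMeans(X)                       # classical location estimate
Sigma.hat <- cov(X)                            # classical scatter estimate
d2 <- mahalanobis(X, center = mu.hat, cov = Sigma.hat)  # squared Mahalanobis distances
d  <- sqrt(d2)                                 # distances as in Equation (3)
```

Because both estimates above are non-robust, a handful of outliers can inflate Sigma.hat and thereby mask themselves, which is precisely what motivates MCD-type estimators below.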

Experimenters often address and tackle the outlier detection task in such situations using either the so-called Minimum Covariance Determinant (MCD) algorithm [1] or some extension or adaptation thereof. The MCD is described as follows:

Minimum Covariance Determinant (MCD)

Step 1. Select h observations and form the subset H;
Step 2. Compute the empirical mean μ̂_H and covariance Σ̂_H of the subset;
Step 3. Compute the Mahalanobis distances of all n observations based on μ̂_H and Σ̂_H;
Step 4. Select the h observations having the smallest Mahalanobis distances;
Step 5. Update H and repeat Steps 2 to 5 until det(Σ̂_H) no longer decreases.

The MCD algorithm can be formulated as an optimization problem, namely

\hat{H} = \underset{H \subset \{1,\ldots,n\},\ |H| = h}{\arg\min}\ \det\big(\hat{\Sigma}_{H}\big). (4)
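A rough R sketch of the concentration idea behind this optimization (in the spirit of the C-steps of FAST-MCD [2]) is given below; it is an illustrative simplification rather than the reference implementation, and the default h follows the usual ⌊(n + p + 1)/2⌋ rule.

```r
naive.mcd <- function(X, h = floor((nrow(X) + ncol(X) + 1) / 2), n.iter = 50) {
  n <- nrow(X)
  H <- sample(n, h)                          # Step 1: a random h-subset
  for (it in seq_len(n.iter)) {
    mu  <- colMeans(X[H, , drop = FALSE])    # Step 2: subset mean
    Sig <- cov(X[H, , drop = FALSE])         #         subset covariance
    d2  <- mahalanobis(X, mu, Sig)           # Step 3: distances of all points
    H.new <- order(d2)[1:h]                  # Step 4: the h smallest distances
    if (setequal(H, H.new)) break            # Step 5: stop when the subset stabilizes
    H <- H.new
  }
  list(center = mu, cov = Sig, subset = H, det = det(Sig))
}
```

In practice FAST-MCD restarts such concentration steps from many initial subsets and keeps the one with the smallest determinant; note that every iteration still requires a p × p covariance matrix, its determinant and Mahalanobis distances, which is the computational bottleneck discussed next.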

The seminal MCD algorithm proposed by [1] turned out to be rather slow and did not scale well as a function of the sample size n. That limitation of MCD led its author to the creation of the so-called FAST-MCD [2], focused on solving the outlier detection problem in a more computationally efficient way. Since the algorithm only needs to select a limited number h of observations in each loop, its complexity can be reduced when the sample size n is large, since only a small fraction of the data is used. However, it must be noted that the bulk of the computations in MCD has to do with the estimation of determinants and Mahalanobis distances, both requiring a complexity of O(p^3), where p is the dimensionality of the input space as defined earlier. Therefore, it becomes crucial to find out how MCD fares when n is large and p is also large, and even in the now quite ubiquitous scenario where n is small but p is very large, indeed much larger than n. This p larger than n scenario, referred to as high dimension low sample size (HDLSS), is very common nowadays in application domains such as gene expression datasets from RNA-sequencing and microarrays, audio processing and image processing, just to name a few. As noted before, with the MCD algorithm, h observations have to be selected to compute the robust estimator. Unfortunately, when p > n, neither the inverse nor a meaningful determinant of the covariance matrix can be computed. As we will show later, the complexity of matrix inversion and determinant computation renders MCD untenable for p as moderate as 500. Therefore, it is natural, in the presence of HDLSS datasets, to contemplate at least some intermediate dimensionality reduction step prior to performing the outlier detection task. Several algorithms have been proposed, among which PCOut by [3], Regularized MCD (R-MCD) by [4] and other ideas by [5]-[8]. When instability in the data makes the computation of the covariance determinant problematic in p dimensions, regularized MCD may be used, with an objective function of the general form

\hat{H} = \underset{H \subset \{1,\ldots,n\},\ |H| = h}{\arg\min}\ \Big\{\log\det\big(\hat{\Sigma}_{H}\big) + \lambda\,\mathrm{pen}\big(\hat{\Sigma}_{H}\big)\Big\}, (5)

where λ is the so-called regularizer or tuning parameter, chosen to stabilize the procedure, and pen(·) denotes the penalty adopted by [4]. However, it turns out that even the above regularized MCD cannot be contemplated when p > n, since det(Σ̂_H) is always zero in such cases. That added difficulty is addressed by solving

\hat{H} = \underset{H \subset \{1,\ldots,n\},\ |H| = h}{\arg\min}\ \det\big(\tilde{\Sigma}_{H}\big), (6)

where the regularized covariance matrix takes a ridge-type form such as

\tilde{\Sigma}_{H} = \hat{\Sigma}_{H} + \lambda\, I_{p}. (7)
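The small R sketch below illustrates the effect of such a ridge-type regularization of the covariance when n < p; the function name and the value of λ are ours and purely illustrative.

```r
reg.cov <- function(X, lambda = 0.1) {
  cov(X) + lambda * diag(ncol(X))              # ridge-regularized covariance, cf. Equation (7)
}
X.small <- matrix(rnorm(50 * 200), nrow = 50)  # n = 50 observations in p = 200 dimensions
# cov(X.small) is rank-deficient, so its determinant is (numerically) zero,
# whereas the regularized version has a finite, nonzero log-determinant:
determinant(reg.cov(X.small, lambda = 0.5), logarithm = TRUE)$modulus
```

Note, however, that obtaining this determinant (or the corresponding inverse for Mahalanobis distances) still costs on the order of p^3 operations, which is exactly the bottleneck discussed in the next paragraph.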

For many HDLSS datasets, however, the dimensionality p of the input space is very large, with values in the thousands or tens of thousands being quite common. As a result, even the above direct regularization is computationally intractable, because when p is large, the complexity of the needed matrix inversion and determinant calculation makes the problem computationally untenable. The fastest matrix inversion algorithms, like those of [9] and [10], are theoretically around O(p^{2.376}) and O(p^{2.373}) respectively, and are so complicated that there is virtually no useful implementation of any of them. In short, the regularization approach to MCD-like algorithms is impractical and unusable for HDLSS datasets, even for values of p around a few hundred. Another approach to outlier detection in the HDLSS context has revolved around extensions and adaptations of principal component analysis (PCA). Classical PCA seeks to project high dimensional vectors onto a lower dimensional orthogonal space while maximizing the variance. By reducing the dimensionality of the original data, one seeks to create a new data representation that evades the curse of dimensionality. However, PCA, in its generic form, is not robust, for the obvious reason that it is built by a series of transformations of means and covariance matrices whose generic estimators are notoriously non-robust. It is therefore of interest to seek to perform PCA in a way that does not suffer from the presence of outliers in the data, and thereby identify the outlying observations as a byproduct of such a PCA. Many authors have worked on the robustification of PCA, among them [11], who proposed ROBPCA, a robust PCA method that essentially robustifies PCA by combining MCD with the famous projection pursuit technique ([12] [13]). Interestingly, if, instead of reducing the dimensionality based on robust estimators, one first applies PCA to the whole data, then the outliers may surprisingly lie along several directions where they are exposed more clearly and distinctly. Such an insight appears to have motivated the creation of the so-called PCOut algorithm proposed by [3]. PCOut uses PCA as part of its preprocessing step after the original data have been scaled by the Median Absolute Deviation (MAD). In fact, in PCOut, each attribute is transformed as follows

x^{*}_{ij} = \frac{x_{ij} - \mathrm{med}(x_{j})}{\mathrm{MAD}(x_{j})}, \qquad i = 1,\ldots,n, \ \ j = 1,\ldots,p, (8)

where MAD(x_j) denotes the median absolute deviation of the j-th attribute and med(x_j) its median. Then, PCA can be performed on the re-scaled data X^{*}, namely

\mathrm{Cov}\big(X^{*}\big) = V\,\Lambda\,V^{\top}, \qquad Z = X^{*}\,V. (9)
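A hedged R sketch of this preprocessing (robust scaling by column medians and MADs followed by ordinary PCA) is shown below; it mimics the spirit of the first stage of PCOut, while the actual implementation compared against later is the one in the rrcovHD package.

```r
robust.scale <- function(X) {
  med <- apply(X, 2, median)                 # column medians, med(x_j)
  md  <- apply(X, 2, mad)                    # column MADs, MAD(x_j)
  sweep(sweep(X, 2, med, "-"), 2, md, "/")   # (x_ij - med(x_j)) / MAD(x_j), Equation (8)
}
X      <- matrix(rnorm(200 * 5), ncol = 5)   # toy data again
X.star <- robust.scale(X)
pc     <- prcomp(X.star, center = FALSE)     # PCA of the re-scaled data; scores in pc$x
```

The scores in pc$x (possibly truncated to the components retaining 99% of the variance) then play the role of Z in Equation (9).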

The principal component scores may then be used for the purpose of outlier detection. In fact, it also turns out that the principal components may be re-scaled and truncated to achieve a much lower dimension while retaining 99% of the variance. Unlike MCD, this PCA-based re-scaled method is not only practical but also performs better with high-dimensional datasets, where up to 99% of simulated outliers are detected. A higher false positive rate is reported in low-dimensional cases, however, and less than half of the outliers were identified in some of those scenarios. It is clear by now that with HDLSS datasets, some form of dimensionality reduction is needed prior to performing outlier detection. Unlike the authors just mentioned, who all resorted to some extension or adaptation of principal component analysis wherein dimensionality reduction is based on transformational projection, we herein propose an approach where dimensionality reduction is not only stochastic but also selection-based rather than projection-based. The rest of this paper is organized as follows: in Section 2, we present a detailed description of our proposed approach, along with all the needed theoretical and conceptual justifications. In the interest of completeness, we close that section with a general description of a nonparametric machine learning kernel method for novelty detection known as the one-class support vector machine, which under suitable conditions is an alternative to the outlier detection approach proposed in this paper. Section 3 contains our extensive computational demonstrations on various scenarios. We specifically present comparisons of the predictive/detection performances between our RSSL based approach and the PCA based methods discussed earlier. We mainly use simulated data here, with simulations seeking to assess the impact of various aspects of the data, such as the dimensionality p of the input space, the contamination rate ε and other aspects like the magnitude of the contamination of the scatter matrix. We conclude with Section 4, in which we provide a thorough discussion of our results along with various pointers to our current and future work on this rather compelling theme of outlier detection.

2. Random Subspace Learning Approach to Outlier Detection

2.1. Rationale for Random Subspace Learning

We herein propose a technique that combines the concept underlying Random Subspace Learning (RSSL) by [14] with some of the key ideas behind the minimum covariance determinant (MCD) to achieve a computationally efficient, scalable, intuitively appealing and highly accurate outlier detection method for both HDLSS and LDHSS datasets. With our proposed method, the computation of the robust estimators of both the location and the scatter matrix can be achieved by tracing the optimal subspaces directly. Besides, we demonstrate via practical examples that our RSSL based method is computationally very efficient, specifically because it turns out that, unlike the other methods mentioned earlier, our method does not require the computationally expensive calculation of determinants and Mahalanobis distances at each step. Moreover, whenever such calculations are needed, they are all performed in very low dimensional spaces, further emphasizing the computational strength of our approach. The original MCD algorithm formulates the outlier detection problem as the problem of finding the smallest determinant of covariance computed from a sequence of different subsets of the original dataset. Each subset contains h observations. More precisely, if H* is the subset of D whose observations yield the estimated covariance matrix with the smallest (minimum) determinant out of all the m subsets considered, then we must have

\det\big(\hat{\Sigma}_{H^{*}}\big) = \min_{b \in \{1,\ldots,m\}} \det\big(\hat{\Sigma}_{H_{b}}\big), (10)

where m is the number of iterations needed for the MCD algorithm to converge, and H* is the subset of D that produces the estimated covariance matrix with the smallest determinant. The MCD estimates of the location vector and scatter matrix parameters are then given by

\hat{\mu}_{\mathrm{MCD}} = \frac{1}{h}\sum_{i \in H^{*}} x_{i}, \qquad \hat{\Sigma}_{\mathrm{MCD}} = \frac{1}{h}\sum_{i \in H^{*}} \big(x_{i} - \hat{\mu}_{\mathrm{MCD}}\big)\big(x_{i} - \hat{\mu}_{\mathrm{MCD}}\big)^{\top}. (11)

The number h of observations in each subset is required to satisfy h ≥ (n + p + 1)/2. It turns out that h = ⌊(n + p + 1)/2⌋ reaches the highest possible breakdown value according to [15]. It is obvious that this highest breakdown point cannot be achieved in the HDLSS context, since p > n in such a context and the requirement on h then exceeds the sample size. It is therefore intuitively appealing to contemplate a subspace of the input space, and to define/construct such a subspace in such a way that its dimensionality d is small enough, with d < n and indeed d ≪ n, to allow the seamless computation of the needed distances.

2.2. Description of Random Subspace Learning for Outlier Detection

Random Subspace Learning in its generic form is designed for precisely this kind of procedure. In a nutshell, RSSL combines instance-bagging (bootstrap, i.e. sampling observations with replacement) with attribute-bagging (sampling indices of attributes without replacement), to allow efficient ensemble learning in high dimensional spaces. Random Subspace Learning (attribute bagging) proceeds very much like traditional bagging, with the added crucial step consisting of selecting a subset of the variables from the input space for training, rather than building each base learner using all the p original variables.

Random Subspace Learning (RSSL): Attribute-bagging step

Step 1. Randomly draw d, the number of variables to consider;
Step 2. Draw without replacement the indices of d variables out of the original p variables;

Step 3. Perform learning/estimation in the d-dimensional subspace.

This attribute-bagging step is the main ingredient of our outlier detection approach in high dimensional spaces.
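In R, the attribute-bagging draw itself is a one-liner; the sketch below is purely illustrative, with n, p and d chosen arbitrarily.

```r
set.seed(3)
n <- 50; p <- 1000
X <- matrix(rnorm(n * p), ncol = p)    # toy HDLSS data
d <- 25                                # subspace size, much smaller than n and p (illustrative)
vars  <- sample(p, d)                  # Step 2: indices of d variables, drawn without replacement
X.sub <- X[, vars, drop = FALSE]       # Step 3: learning/estimation proceeds in this d-dimensional subspace
```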

Random Subspace Outlier

Step 1. Draw n observations with replacement from D to form the bootstrap sample D^(b);
Step 2. Start for b = 1 to B do:
Draw without replacement from {1, …, p} a subset of d variable indices;
Drop the unselected variables from D^(b) so that D^(b) is d-dimensional;
Build the b-th determinant of covariance det(Σ̂^(b));
End for
Step 3. Sort the ensemble of determinants {det(Σ̂^(1)), …, det(Σ̂^(B))};
Step 4. Form D* = D^(b*), where b* indexes the smallest determinant;
Step 5. Compute μ̂_RSSL and Σ̂_RSSL based on D*.

We can then build the robust distance as

RD(x_{i}) = \sqrt{\big(x_{i} - \hat{\mu}_{\mathrm{RSSL}}\big)^{\top}\,\hat{\Sigma}_{\mathrm{RSSL}}^{-1}\,\big(x_{i} - \hat{\mu}_{\mathrm{RSSL}}\big)}. (12)
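The following R function sketches our reading of the above procedure for the n > p case; the choice d = ⌊√p⌋, the redrawing of the bootstrap sample inside the loop, and the 97.5% chi-squared cut-off are assumptions made here for illustration rather than prescriptions from the algorithm.

```r
rssl.outlier <- function(X, B = 200, d = max(2, floor(sqrt(ncol(X)))), alpha = 0.975) {
  n <- nrow(X); p <- ncol(X)
  best <- list(det = Inf)
  for (b in seq_len(B)) {
    rows <- sample(n, n, replace = TRUE)              # instance bagging (Step 1)
    vars <- sample(p, d)                              # attribute bagging (Step 2)
    Db   <- det(cov(X[rows, vars, drop = FALSE]))     # b-th covariance determinant
    if (Db < best$det) best <- list(det = Db, rows = rows, vars = vars)
  }
  mu.hat    <- colMeans(X[best$rows, best$vars, drop = FALSE])   # Step 5
  Sigma.hat <- cov(X[best$rows, best$vars, drop = FALSE])
  rd2 <- mahalanobis(X[, best$vars, drop = FALSE], mu.hat, Sigma.hat)
  which(rd2 > qchisq(alpha, df = d))                  # flagged outliers via the chi-squared cut-off
}
```

Keeping only the subsample and subspace with the smallest determinant mirrors Step 4; the robust distances are then computed for all n observations inside that selected subspace.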

The RSSL outlier detection algorithm computes a determinant of covariance for each subsample, with each subsample residing in a subspace spanned by the d randomly selected variables, where d is usually selected to be much smaller than both n and p. A total of B subsets are generated, and their low-dimensional covariance matrices are formed along with the corresponding determinants. The best subsample, meaning the one with the smallest covariance determinant, is then singled out. It turns out that in the LDHSS context, our RSSL outlier detection algorithm always robustly yields the robust estimators μ̂_RSSL and Σ̂_RSSL needed to compute the Mahalanobis distance for all the observations. The outliers can then be selected using the typical cut-off built on the classical chi-squared quantiles. In the HDLSS context, in order to handle the curse of dimensionality, we need to involve a new variable selection procedure to adjust our framework and concurrently stabilize the detection. The modified version of our RSSL outlier detection algorithm in the HDLSS setting is then given below.

Random Subspace Learning for Outlier Detection when p > n

Step 1. Draw n observations with replacement from D to form the bootstrap sample D^(b);
Step 2. Start for b = 1 to B do:
Draw without replacement from {1, …, p} a subset of d variable indices;
Drop the unselected variables from D^(b) so that D^(b) is d-dimensional;
Build the b-th determinant of covariance det(Σ̂^(b));
End for
Step 3. Sort the ensemble of determinants {det(Σ̂^(1)), …, det(Σ̂^(B))};
Step 4. Keep the k samples with the smallest determinants, chosen before the elbow of the sorted determinants, to form the ensemble E_k, where k < B;
Step 5. Start for j = 2 to d do:
Select the j most frequently appearing variables in E_k and compute the corresponding determinant of covariance;
End for
Step 6. Form D* from the retained observations and the selected variables;
Step 7. Compute μ̂_RSSL and Σ̂_RSSL based on D*.

We can build the robust distance in the same way:

RD(x_{i}) = \sqrt{\big(x_{i} - \hat{\mu}_{\mathrm{RSSL}}\big)^{\top}\,\hat{\Sigma}_{\mathrm{RSSL}}^{-1}\,\big(x_{i} - \hat{\mu}_{\mathrm{RSSL}}\big)}.

Rather than selecting only the single subsample with the smallest determinant of covariance, we select a certain number of subsamples and achieve the variable selection through a sort of voting process, as sketched below. The most frequently appearing variables are elected to build an optimal subspace that allows us to compute our robust estimators. The simulation results and other details will be discussed later.
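A minimal R sketch of this voting step is given below; B = 450 echoes the value used in Figure 2, while d and the choice k = 0.5·B (within the 30% to 80% range discussed later) are illustrative assumptions.

```r
rssl.vote <- function(X, B = 450, d = 20, k = floor(0.5 * B)) {
  n <- nrow(X); p <- ncol(X)
  dets     <- numeric(B)
  var.sets <- vector("list", B)
  for (b in seq_len(B)) {
    rows <- sample(n, n, replace = TRUE)                   # bootstrap sample
    vars <- sample(p, d)                                   # random d-dimensional subspace
    dets[b]       <- det(cov(X[rows, vars, drop = FALSE])) # b-th covariance determinant
    var.sets[[b]] <- vars
  }
  keep <- order(dets)[1:k]                                 # k subsamples with the smallest determinants
  freq <- table(unlist(var.sets[keep]))                    # how often each variable was "elected"
  as.integer(names(sort(freq, decreasing = TRUE)))         # variables, most frequent first
}
```

The leading entries of this ranking are the variables used to build the optimal subspace on which μ̂_RSSL and Σ̂_RSSL are finally computed.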

2.3. Justification of Random Subspace Learning for Outlier Detection

Conjecture 1. Let D = {x_1, …, x_n} be the dataset under consideration. Assume that a proportion ε of the observations in D are outliers. If ε is not too large, then with high probability, the proposed RSSL outlier detection algorithm will efficiently and correctly identify a set of data that contains very few of the outliers.

Sketch 1. Let x_i be a random observation in the original dataset D. Let D^(b) denote the b-th bootstrapped sample from D. Let π_b represent the proportion of observations of D that are also present in D^(b). It is easy to prove that

\Pr\!\big[x_{i} \in D^{(b)}\big] = 1 - \left(1 - \frac{1}{n}\right)^{\!n}. (13)

In other words, if \bar{D}^{(b)} denotes the set of observations from D not present in D^(b), we must have

\Pr\!\big[x_{i} \in \bar{D}^{(b)}\big] = \left(1 - \frac{1}{n}\right)^{\!n}, (14)

and (1 - 1/n)^n is known to converge to e^{-1} ≈ 0.368 as n goes to infinity. Therefore, for each given bootstrapped sample, there is a probability close to e^{-1} that any given outlier will not corrupt the estimation of the location vector and scatter matrix parameters. Since the outliers, like all other observations, have an asymptotic probability of e^{-1} of not affecting the bootstrapped estimator that we build, over a large enough re-sampling process (large B) there will be many bootstrapped samples with very few outliers, leading to a sequence of small covariance determinants as desired, provided ε is not too large. It is therefore reasonable to deduce that, by averaging this exclusion of outliers over many replications, robust estimators will naturally be generated by the RSSL algorithm.
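A quick numerical check of Equations (13) and (14) in R confirms how fast (1 − 1/n)^n approaches its limit e^{-1}:

```r
n <- c(10, 50, 100, 1000, 10000)
cbind(n,
      prob.excluded = (1 - 1/n)^n,   # probability a fixed observation is absent from a bootstrap sample
      limit         = exp(-1))       # asymptotic value, roughly 0.368
```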

2.4. Alternatives to Parametric Outlier Detection Methods

The assumption of multivariate Gaussianity of the x_i's is obviously limiting, as it could happen that the data do not follow a Gaussian distribution. Outside of the realm where the location and scatter matrix play a central role, other methods have been proposed, especially in the field of machine learning, and specifically with similarity measures known as kernels. One such method is known as the One-Class Support Vector Machine (OCSVM), proposed by [16] to solve the so-called novelty detection problem. It is important to emphasize right away that novelty detection, although similar in spirit to outlier detection, can be quite different when it comes to the way the algorithms are trained. The OCSVM approach to novelty detection is interesting to mention here because, despite some conceptual differences from the covariance methods explored earlier, it is formidable at handling HDLSS data thanks to the power of kernels. Let Φ denote the feature map associated with a kernel k, so that k(x, x') = ⟨Φ(x), Φ(x')⟩. The one-class SVM novelty detector solves

\min_{w,\ \xi,\ \rho}\ \ \frac{1}{2}\|w\|^{2} + \frac{1}{\nu n}\sum_{i=1}^{n}\xi_{i} - \rho, (15)

subject to

\langle w, \Phi(x_{i})\rangle \geq \rho - \xi_{i}, \qquad \xi_{i} \geq 0, \qquad i = 1, \ldots, n. (16)

Using the kernel trick, we get the decision function

f(x) = \mathrm{sign}\big(\langle w, \Phi(x)\rangle - \rho\big) = \mathrm{sign}\Big(\sum_{i=1}^{n}\alpha_{i}\,k(x_{i}, x) - \rho\Big), (17)

so that any x with f(x) = -1 is declared an outlier. The α_i's and ρ are determined by solving the quadratic programming problem formulated above. The parameter ν controls the proportion of outliers detected. One of the most common kernels is the so-called RBF kernel defined by

k(x, x') = \exp\!\left(-\frac{\|x - x'\|^{2}}{2\sigma^{2}}\right). (18)

OCSVM has been extensively studied and applied by many researchers, among which [17]-[19], and was later enhanced by [20]. OCSVM is often applied to semi-supervised learning tasks where training focuses on all the positive examples (non-outliers), and the detection of anomalies is then performed by searching for points that fall geometrically outside of the estimated/learned decision boundary of the good (non-outlying) training instances. It is a concrete and quite popular algorithm for solving one-class problems in fields like digit recognition and document categorization. However, it is crucial to note that OCSVM cannot be used with many other real life datasets for which outliers are not well-defined and/or for which there are no clearly identified all-positive training examples available, such as the gene expression data mentioned before.
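For reference, a short R sketch of OCSVM novelty detection with the RBF kernel of Equation (18) is given below; it relies on the svm() function of the e1071 package (one possible implementation, not the only one), and the values of ν, γ and the toy data are illustrative.

```r
library(e1071)                                            # assumed installed; provides svm()
set.seed(2)
X.train <- matrix(rnorm(200 * 10), ncol = 10)             # "clean" training data (all positives)
fit <- svm(X.train, type = "one-classification",
           kernel = "radial", nu = 0.05, gamma = 1 / 10)  # nu bounds the fraction flagged as outliers
X.new <- rbind(matrix(rnorm(20 * 10), ncol = 10),         # 20 ordinary points
               matrix(rnorm(5 * 10, mean = 6), ncol = 10))# 5 strongly shifted points
pred <- predict(fit, X.new)                               # TRUE = inlier, FALSE = novelty/outlier
which(!pred)                                              # indices declared outliers
```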

3. Computational Demonstrations

3.1. Setup of Computational Demonstration and Initial Results

In this section, we conduct a simulation study to assess the performance of our algorithm based on various important aspects of the data, and we also provide a comparison of the predictive/detection performance of our method against existing approaches. All our simulated data are generated according to the ε-contaminated multivariate Gaussian introduced via Equation (1) and Equation (2). In order to assess the effect of the covariance between the attributes, we use an AR-type covariance matrix of the following form

\Sigma = (1-\rho)\, I_{p} + \rho\, \mathbf{1}_{p}\mathbf{1}_{p}^{\top}, (19)

where I_p is the p-dimensional identity matrix, while 1_p is the p-dimensional vector of ones. For the remaining parameters, we consider three different levels of contamination, ranging from mild to strong. The dimensionality p is increased over a grid of values in the low-dimensional case and over a much larger grid in the high-dimensional case, with the number of observations fixed at 1500 and 100, respectively. We compare our algorithm to the existing PCA-based algorithms PCOut and PCDist, both of which are available in R within the package rrcovHD.
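A data-generation sketch in R for such ε-contaminated Gaussian data is shown below; the particular values of ρ, ε, the mean shift δ and the scatter inflation κ are illustrative placeholders, not the exact settings of our simulations.

```r
library(MASS)                                              # for mvrnorm()
gen.data <- function(n = 100, p = 50, eps = 0.1, rho = 0.3, delta = 5, kappa = 4) {
  Sigma <- (1 - rho) * diag(p) + rho * matrix(1, p, p)     # covariance structure of Equation (19)
  n.out <- ceiling(eps * n)                                # number of contaminated observations
  good  <- mvrnorm(n - n.out, mu = rep(0, p), Sigma = Sigma)
  bad   <- mvrnorm(n.out, mu = rep(delta, p), Sigma = kappa * Sigma)  # shifted and inflated, cf. Equation (2)
  list(X = rbind(good, bad), y = c(rep(0, n - n.out), rep(1, n.out))) # y = 1 marks an outlier
}
sim <- gen.data(n = 100, p = 200, eps = 0.1)               # a small HDLSS configuration
```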

As can be seen in Figure 1, the overwhelming majority of samples lead to determinants that are small, as evidenced by the heavy right skewness with concentration around zero. This further confirms our conjecture that, as long as the contamination rate ε is not too large, which is a rather reasonable and easily realized assumption, we should isolate samples with few or no outliers.

Figure 1. (left) Histogram of the distribution of the determinants from the bootstrap samples when n = 100, p = 3000; (right) histogram of the log-determinants for all the bootstrap samples.

Since each bootstrapped sample selected has a small chance of being affected by the outliers, we can select the dimensionality that maximizes this benefit. In our HDLSS simulations, the determinants computed over all the randomly selected subspaces are dominated by small values, which speaks to the robustness of the resulting estimators. Figure 1 patently shows this dominance of small determinant values, which in this case are the determinants of all bootstrapped samples based on our simulated data. A distinguishable elbow is presented in Figure 2. The next crucial step lies in selecting a certain number of bootstrap samples, say k, to build an optimal subspace. Since most of the determinants are close to each other, this is a non-trivial problem, which means that k needs to be carefully chosen to avoid going beyond the elbow. However, it is important to notice that if k is too small, then the variable selection in later steps of the algorithm becomes a random pick, because there is no opportunity for each variable to appear in the ensemble. Here, we choose k to correspond to roughly the first 30% to 80% of the B bootstrap samples, ordered by ascending determinant. This choice is based on our empirical experimentations. It is not too difficult to infer the asymptotic normal distribution of the frequencies of all variables in the retained ensemble, as we can observe in Figure 2. Thus, the most frequently appearing variables, located on the left tail, can be adopted/kept to build our robust estimator. Once the selection of k is made, the frequencies of the variables appearing in this ensemble can be obtained/computed for variable selection. The 2 to m most frequently appearing variables are included to compute the determinants in Figure 2; m is usually small, since we assume from the start that the true dimensionality of the data is indeed small. Here, for instance, we choose m = 20 for the purposes of our computational demonstration. A sharp maximum indicates the number of dimensions to retain from that sorted ensemble. Thus, combining the bootstrapped observations having the smallest determinant with the subspace that generates the largest determinant, we can successfully compute the robust estimators μ̂_RSSL and Σ̂_RSSL. Theoretically, then, we are in the presence of a minimax formulation of our outlier detection problem, namely

\det\big(\hat{\Sigma}_{b^{*},\,V^{*}}\big) = \min_{b \in \{1,\ldots,B\}}\ \max_{V}\ \det\big(\hat{\Sigma}^{(b)}_{V}\big). (20)

By Equation (20), it should be understood that we need to isolate the precious subsample that achieves the smallest overall covariance determinant, while concurrently identifying the subspace that yields the highest value of that covariance determinant among all the possible subspaces considered.

3.2. Further Results and Computational Comparisons

As indicated in our introductory section, we use the Mahalanobis distance as our measure of proximity. Since we are operating under the assumption of multivariate normality, we use the traditional chi-squared distribution quantiles as our cut-off, with typical levels such as 95% and 90%. As usual, all observations with distances larger

Figure 2. (left) Tail of the sorted determinants in the high-dimensional case, where B = 450; k can be selected before reaching the elbow. (right) The concave shape observed by computing the determinants of covariance from 2 to m dimensions.

than the chosen cut-off are classified as outliers. The data for the simulation study are generated under settings representing both easy and hard situations for the RSSL algorithm to detect the outliers, with ε as the rate of contamination. Throughout, we use R replications for each combination of parameters for each algorithm, and we use the average test error AVE as our measure of predictive/detection performance. Specifically,

\mathrm{AVE} = \frac{1}{R}\sum_{r=1}^{R}\left\{\frac{1}{n}\sum_{i=1}^{n}\ell\big(y_{i},\, \hat{y}_{i}^{(r)}\big)\right\}, (21)

where ŷ_i^(r) is the predicted label of the test set observation i yielded by f in the r-th replication. The loss function used here is the basic zero-one loss defined by

\ell\big(y_{i},\, \hat{y}_{i}^{(r)}\big) = \mathbb{1}\big(y_{i} \neq \hat{y}_{i}^{(r)}\big) = \begin{cases} 1 & \text{if } y_{i} \neq \hat{y}_{i}^{(r)}, \\ 0 & \text{otherwise}. \end{cases} (22)
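In R, this performance measure amounts to averaging zero-one losses over replications, as in the small helper below; the runs list structure is our own convention, not part of the procedure itself.

```r
zero.one <- function(y, y.hat) as.numeric(y != y.hat)            # the loss of Equation (22)
avg.error <- function(runs) {                                    # runs: a list, each element holding y and y.hat
  mean(sapply(runs, function(r) mean(zero.one(r$y, r$y.hat))))   # Equation (21)
}
# e.g. avg.error(list(list(y = c(0, 1, 1), y.hat = c(0, 1, 0))))  # returns 1/3
```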

It will be seen later that our proposed method produces predictively accurate outlier detection results, typically competing favorably against other techniques, and usually outperforming them. First, however, we show in Figure 3 the detection performance of our algorithm based on two randomly selected subspaces. The outliers detected by our algorithm are identified by red triangles and contained in the red contour, while the black circles are the normal data.

The improvement offered by our random subspace learning algorithm on low dimensional data, with dimensionality p much smaller than a relatively large sample size n, is demonstrated in Figure 4 in comparison to PCOut and PCDist. Given a relatively easy task, namely large values of the shift δ and inflation κ so that the outliers are scattered widely and shifted far from the normal observations, RSSL with cut-off quantiles of 95% and 90% performs consistently very well, typically outperforming the competition. When the rate of contamination increases in this scenario, almost 100% accuracy can be achieved with the RSSL based algorithm. When the outliers are spread more narrowly and closer to the mean, the predictive accuracy of our random subspace based algorithm is slightly less powerful but still very strong, namely with a predictive detection rate close to 96% to 99%. In high dimensional settings, namely with large p and low sample size, RSSL also performs reasonably well, as shown in Figure 5. With the chi-squared cut-off, 96% to 98% of the outliers are detected consistently across all simulated high dimensions in the easier settings. Under more difficult conditions, a decent proportion of the outliers can still be detected, with accuracy around 92% to 96%. Based on the properties of robust PCA based algorithms, the situation that we define as "easy" for RSSL algorithms is actually "harder" for PCOut and PCDist. The principal component space is selected based on the visibility of the outliers, and, especially for PCOut, the components with nonzero robust kurtosis are assigned higher weights by

Figure 3. The outliers detected in two randomly selected two-dimensional subspaces are marked as red triangles; selection is based on the robust distance cut-off.

Figure 4. The average error and standard deviation in the low-dimensional simulations, under the easier outlier configuration (left column) and the harder one (right column).

Figure 5. The average error and standard deviation in the high-dimensional simulations, under the easier outlier configuration (left column) and the harder one (right column).

the absolute value of their kurtosis coefficients. This method is shown to yield good performance when dealing with a small shift of the mean and a small inflation of the covariance matrix. However, if the outliers lie at larger values of δ and κ, where many directions could be chosen, it is more difficult for PCA to find the dimensions that make the outliers "stick out". Conversely, with small values of δ and κ, the most obvious directions are emphasized by PCA, but there is less chance for algorithms like RSSL to obtain the most sensible subspace on which to build robust estimators. So in Figure 5, in that setting the accuracy is reduced to around 92%, but in all other high-dimensional settings the performance of RSSL is consistent with PCOut and equally stable.

4. Conclusion

We have presented what we can rightfully claim to be a computationally efficient, scalable, intuitively appealing and highly predictively accurate outlier detection method for both HDLSS and LDHSS datasets. As an adaptation of both random subspace learning and the minimum covariance determinant, our proposed approach can be readily used on a vast number of real life examples where both of its component building blocks have been successfully applied. The particular appeal of the random subspace learning aspect of our method comes in handy for many outlier detection tasks on high dimension low sample size datasets, like DNA microarray gene expression datasets, for which the MCD approach has proved computationally untenable. As our computational demonstrations section above reveals, our proposed approach competes favorably with other existing methods, sometimes outperforming them predictively despite its straightforwardness and relatively simple implementation. Specifically, our proposed method is shown to be very competitive for both low dimensional and high dimensional outlier detection, and it is computationally very efficient. We are currently seeking out interesting real life datasets on which to apply our method. We also plan to extend our method beyond settings where the underlying distribution is Gaussian.

Acknowledgements

Ernest Fokoué wishes to express his heartfelt gratitude and infinite thanks to Our Lady of Perpetual Help for her ever-present support and guidance, especially for the uninterrupted flow of inspiration received through her most powerful intercession.

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Rousseeuw, P.J. (1984) Least Median of Squares Regression. Journal of the American Statistical Association, 79, 871-880.
http://dx.doi.org/10.1080/01621459.1984.10477105
[2] Rousseeuw, P. and Van Driessen, K. (1999) A Fast Algorithm for the Minimum Covariance Determinant Estimator. Technometrics, 41, 212-223.
http://dx.doi.org/10.1080/00401706.1999.10485670
[3] Filzmoser, P., Maronna, R. and Werner, M. (2008) Outlier Identification in High Dimensions. Computational Statistics & Data Analysis, 52, 1694-1711.
http://dx.doi.org/10.1016/j.csda.2007.05.018
[4] Fritsch, V., Varoquaux, G., Thyreau, B., Poline, J.-B. and Thirion, B. (2011) Detecting Outlying Subjects in High-Dimensional Neuroimaging Datasets with Regularized Minimum Covariance Determinant. In: Fichtinger, G., Martel, A. and Peters, T., Eds., Medical Image Computing and Computer-Assisted Intervention MICCAI 2011, Springer, Berlin Heidelberg, 264-271.
[5] Angiulli, F. and Pizzuti, C. (2002) Fast Outlier Detection in High Dimensional Spaces. In: Tapio, E., Heikki, M. and Hannu, T., Eds., Principles of Data Mining and Knowledge Discovery, Springer, Rende, 15-27.
http://dx.doi.org/10.1007/3-540-45681-3_2
[6] Aggarwal, C. and Yu, S. (2005) An Effective and Efficient Algorithm for High-Dimensional Outlier Detection. The VLDB Journal, 14, 211-221.
http://dx.doi.org/10.1007/s00778-004-0125-5
[7] Ghoting, A., Parthasarathy, S. and Otey, M.E. (2008) Fast Mining of Distance-Based Outliers in High-Dimensional Datasets. Data Mining and Knowledge Discovery, 16, 349-364.
http://dx.doi.org/10.1007/s10618-008-0093-2
[8] Kriegel, H.-P., Kröger, P., Schubert, E. and Zimek, A. (2009) Outlier Detection in Axis-Parallel Subspaces of High Dimensional Data. In: Advances in Knowledge Discovery and Data Mining, Springer, München, 831-838.
http://dx.doi.org/10.1007/978-3-642-01307-2_86
[9] Coppersmith, D. and Winograd, S. (1990) Matrix Multiplication via Arithmetic Progressions. Journal of Symbolic Computation, 9, 251-280.
http://dx.doi.org/10.1016/S0747-7171(08)80013-2
[10] Le Gall, F. (2014) Powers of Tensors and Fast Matrix Multiplication. Proceedings of the 39th International Symposium on Symbolic and Algebraic Computation, New York, 23-25 July 2014. http://dx.doi.org/10.1145/2608628.2608664
[11] Hubert, M. and Engelen, S. (2004) Robust PCA and Classification in Biosciences. Bioinformatics, 20, 1728-1736.
http://dx.doi.org/10.1093/bioinformatics/bth158
[12] Croux, C. and Ruiz-Gazen, A. (1996) A Fast Algorithm for Robust Principal Components Based on Projection Pursuit. In: Prat, A., Ed., COMPSTAT, Springer, Heidelberg, 211-216.
http://dx.doi.org/10.1007/978-3-642-46992-3_22
[13] Li, G.Y. and Chen, Z.L. (1985) Projection-Pursuit Approach to Robust Dispersion Matrices and Principal Components: Primary Theory. Journal of the American Statistical Association, 80, 759-766.
http://dx.doi.org/10.1080/01621459.1985.10478181
[14] Ho, T.K. (1998) The Random Subspace Method for Constructing Decision Forests. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20, 832-844.
http://dx.doi.org/10.1109/34.709601
[15] Lopuhaa, H.P. and Rousseeuw, P.J. (1991) Breakdown Points of Affine Equivariant Estimators of Multivariate Location and Covariance. The Annals of Statistics, 19, 229-248.
http://dx.doi.org/10.1214/aos/1176347978
[16] Schölkopf, B., Platt, J.C., Shawe-Taylor, J., Smola, A.J. and Williamson, R.C. (1999) Estimating the Support of a High-Dimensional Distribution. Neural Computation, 13, 1443-1471.
[17] Hubert, M., Rousseeuw, P.J. and VandenBranden, K. (2005) Robpca: A New Approach to Robust Principal Component Analysis. Technometrics, 47, 64-79.
http://dx.doi.org/10.1198/004017004000000563
[18] Manevitz, L.M. and Yousef, M. (2002) One-Class SVMs for Document Classification. The Journal of Machine Learning Research, 2, 139-154.
[19] Zhang, R., Zhang, S., Muthuraman, S. and Jiang, J. (2007) One Class Support Vector Machine for Anomaly Detection in the Communication. Proceedings of the 5th Conference on Applied Electromagnetics, Wireless and Optical Communications, ELECTROSCIENCE’07, 14-16 December 2007, Tenerife, World Scientific and Engineering Academy and Society (WSEAS), Stevens Point, Wisconsin, 31-37.
[20] Amer, M., Goldstein, M. and Abdennadher, S. (2013) Enhancing One-Class Support Vector Machines for Unsupervised Anomaly Detection. Proceedings of the ACM SIGKDD Workshop on Outlier Detection and Description, ODD’13, ACM, New York, 2013, 8-15.
http://dx.doi.org/10.1145/2500853.2500857
