Fast Tensor Principal Component Analysis via Proximal Alternating Direction Method with Vectorized Technique

Abstract

This paper studies the problem of tensor principal component analysis (PCA). Usually, tensor PCA is cast as a low-rank matrix completion problem via a matrix factorization technique, and the nuclear norm is used as a convex approximation of the rank operator under mild conditions. However, most nuclear norm minimization approaches are based on SVD operations. Given a matrix $X \in \mathbb{R}^{m \times n}$, the time complexity of an SVD is $O(mn^2)$, which brings prohibitive computational cost in large-scale problems. In this paper, an efficient and scalable algorithm for tensor principal component analysis is proposed, called Linearized Alternating Direction Method with Vectorized Technique for Tensor Principal Component Analysis (LADMVTPCA). Different from traditional matrix factorization methods, LADMVTPCA uses a vectorized technique to formulate the tensor as an outer product of vectors, which greatly improves computational efficiency compared with matrix factorization methods. In the experimental part, synthetic tensor data of different orders are used to empirically evaluate the proposed algorithm LADMVTPCA. Results show that LADMVTPCA outperforms the matrix factorization based method.


1. Introduction

A tensor is a multidimensional array. For example, a first-order tensor is a vector, a second-order tensor is a matrix, and tensors of order three or higher are called higher-order tensors. Principal component analysis (PCA) finds a few linear combinations of the original variables that capture most of the variance in the data. PCA plays an important role in dimension reduction, data analysis, and related research areas [1]. Although PCA and the eigenvalue problem for matrices have been well studied in the literature, little work has been done on tensor PCA.

Tensor PCA is of great importance in practice and has many applications, such as computer vision [2], social network analysis [3], diffusion Magnetic Resonance Imaging (MRI) [4], and so on. Similar to the matrix case, finding the PCs amounts to finding the rank-one terms that capture the most variance of a tensor $\mathcal{F}$, which can be specifically formulated as [5]:

$$\min_{x_1, x_2, \ldots, x_m} \ \left\| \mathcal{F} - \lambda\, x_1 \otimes x_2 \otimes \cdots \otimes x_m \right\|_F^2, \quad \text{s.t.} \ \lambda \in \mathbb{R}, \ \|x_i\| = 1, \ i = 1, 2, \ldots, m,$$ (1)

which is equivalent to

$$\min_{x_1, x_2, \ldots, x_m} \ -\,\mathcal{F} \bullet (x_1 \otimes x_2 \otimes \cdots \otimes x_m), \quad \text{s.t.} \ \|x_i\| = 1, \ i = 1, 2, \ldots, m,$$ (2)

where $\bullet$ denotes the inner product between two tensors; the inner product of two tensors $\mathcal{F}_1, \mathcal{F}_2 \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_m}$ is defined as

$$\mathcal{F}_1 \bullet \mathcal{F}_2 = \sum_{i_1, i_2, \ldots, i_m} (\mathcal{F}_1)_{i_1 i_2 \cdots i_m} (\mathcal{F}_2)_{i_1 i_2 \cdots i_m}.$$ (3)

The symbol $\otimes$ denotes the outer product between vectors, i.e. [6] [7]:

$$(x_1 \otimes x_2 \otimes \cdots \otimes x_m)_{i_1 i_2 \cdots i_m} = \prod_{k=1}^{m} (x_k)_{i_k}.$$ (4)
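As a small illustration (ours, not part of the original development), the inner product (3) and the outer product (4) can be written in a few lines of numpy:

```python
import numpy as np

rng = np.random.default_rng(0)

# third-order example: F1, F2 in R^{3 x 4 x 5}
F1 = rng.standard_normal((3, 4, 5))
F2 = rng.standard_normal((3, 4, 5))

# inner product (3): sum of element-wise products over all indices
inner = np.sum(F1 * F2)

# outer product (4): (x1 ⊗ x2 ⊗ x3)_{ijk} = (x1)_i (x2)_j (x3)_k
x1, x2, x3 = rng.standard_normal(3), rng.standard_normal(4), rng.standard_normal(5)
outer = np.einsum('i,j,k->ijk', x1, x2, x3)

# the quantity appearing in problem (2): F • (x1 ⊗ x2 ⊗ x3)
obj = np.sum(F1 * outer)
```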

The solution of problem (2) is called the leading PC. Once the leading PC is found, the other PCs can be computed sequentially via the so-called deflation technique. For example, the second PC can be obtained in the following way: 1) compute the leading PC of the tensor; 2) subtract this leading PC from the original tensor; 3) compute the leading PC of the residual tensor, which is taken as the second PC of the original tensor. The remaining PCs can be obtained in a similar way [8] [9]. The deflation procedure is presented in Algorithm 1; a code sketch of this deflation loop is also given after Table 1. Although the theoretical analysis of the deflation procedure for matrices is well established (see [10] and the references therein for more details), the tensor counterpart has not been completely studied. However, the deflation process does provide an efficient heuristic for computing multiple principal components of a tensor. The running times of a full SVD and of computing only the leading PC for matrices of different sizes are compared in Table 1. As the size of the matrix increases, the computational cost of the leading PC becomes far smaller than that of the full SVD.

Table 1. The comparison of running time (seconds) between full SVD and leading PC.

Therefore, although greedy atom decomposition methods need more iterations to reach convergence, their total computational cost is much lower than that of SVD-based matrix completion methods. Thus, in the rest of this paper, we focus on finding the leading PC of a tensor.
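The following is a minimal sketch of the deflation loop described above, not a verbatim transcription of Algorithm 1: it assumes a routine leading_pc (supplied by the caller, e.g. the method developed later in this paper) that returns the leading eigenvalue and a unit vector of its argument.

```python
import numpy as np

def deflation(F, num_components, leading_pc):
    """Compute several PCs of a tensor by repeated deflation.

    `leading_pc` is assumed to return (lam, x) with ||x||_2 = 1 such that
    lam * x ⊗ x ⊗ ... ⊗ x is the best rank-one approximation of its argument.
    """
    order = F.ndim
    components = []
    residual = F.copy()
    for _ in range(num_components):
        lam, x = leading_pc(residual)
        rank1 = x
        for _ in range(order - 1):           # build x ⊗ x ⊗ ... ⊗ x
            rank1 = np.multiply.outer(rank1, x)
        residual = residual - lam * rank1    # subtract the leading PC
        components.append((lam, x))
    return components
```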

If $\mathcal{F}$ is a super-symmetric tensor, problem (2) can be reduced to [9]

$$\min_{x} \ -\,\mathcal{F} \bullet (x \otimes x \otimes \cdots \otimes x), \quad \text{s.t.} \ \|x\| = 1.$$ (5)

In fact, the algorithm for the super-symmetric tensor PCA problem can be extended to tensors that are not super-symmetric [5]. Therefore, this paper focuses on the PCA of super-symmetric tensors. Problem (5) is NP-hard and is known as the maximum Z-eigenvalue problem. Note that a variety of eigenvalues and eigenvectors of a real symmetric tensor were introduced by Lim [11] and Qi [9] independently in 2005. Since then, various methods have been proposed to find the Z-eigenvalues, which, however, may correspond only to local optima.
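To make the preceding remark concrete, the sketch below shows a plain higher-order power iteration for a Z-eigenpair of a symmetric 4th-order tensor, in the spirit of [7] [8]. It is our illustrative sketch rather than the method proposed in this paper, and it generally converges only to a local solution of (5); in practice a shift is often added to guarantee convergence [8].

```python
import numpy as np

def z_eigenpair_power(F, max_iter=200, tol=1e-10, seed=0):
    """Plain power iteration x <- F(., x, x, x)/||F(., x, x, x)|| (4th-order case)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(F.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(max_iter):
        g = np.einsum('ijkl,j,k,l->i', F, x, x, x)    # contraction F(., x, x, x)
        x_new = g / np.linalg.norm(g)
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    lam = np.einsum('ijkl,i,j,k,l->', F, x, x, x, x)  # Z-eigenvalue estimate
    return lam, x
```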

Another line of research, represented by CANDECOMP (canonical decomposition) and PARAFAC (parallel factors), imposes a rank-one constraint on the tensor to realize the tensor decomposition:

$$\min \ -\,\mathcal{F} \bullet \mathcal{X}, \quad \text{s.t.} \quad \sum_{k \in K(n,m)} \frac{m!}{\prod_{j=1}^{n} k_j!}\, \mathcal{X}_{1^{k_1} 2^{k_2} \cdots n^{k_n}} = 1, \quad \mathrm{rank}(\mathcal{X}) = 1, \quad \mathcal{X} \in S^{n^m},$$ (6)

where $\mathcal{X} = x \otimes x \otimes \cdots \otimes x$ and $K(n,m) = \{ k = (k_1, \ldots, k_n) \in \mathbb{Z}_+^{n} \mid \sum_{j=1}^{n} k_j = m \}$. The constraints $\mathrm{rank}(\mathcal{X}) = 1$ and $\mathcal{X} \in S^{n^m}$ indicate that $\mathcal{X}$ should be super-symmetric, and the first equality constraint is due to the fact that $\sum_{k \in K(n,m)} \frac{m!}{\prod_{j=1}^{n} k_j!}\, \mathcal{X}_{1^{k_1} 2^{k_2} \cdots n^{k_n}} = \|x\|^{m} = 1$.

The difficulty of problem (6) lies in handling the rank constraint $\mathrm{rank}(\mathcal{X}) = 1$. Not only is the rank function itself hard to deal with, but even determining the rank of a given tensor is NP-hard in general. One way to cope with this difficulty is to convert the tensor optimization problem into a matrix optimization problem. [5] proved that the tensor is rank-one if and only if its embedded matrix is rank-one. The tensor PCA problem can thus be solved by means of matrix optimization under a rank-one constraint. For low-rank matrix optimization, a nuclear norm penalty is often adopted to enforce a low-rank solution. However, most nuclear norm minimization approaches are based on SVD operations. Given a matrix $X \in \mathbb{R}^{m \times n}$, the time complexity of an SVD is $O(mn^2)$, which brings prohibitive computational cost in large problems (refer to Table 1).
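To make the cost gap concrete, the following small timing sketch (ours; the numbers it prints depend on hardware and are not those of Table 1) compares a full SVD with a simple power iteration that extracts only the leading singular pair:

```python
import time
import numpy as np

def leading_singular_pair(A, iters=100):
    """Leading singular triplet of A via power iteration on A^T A."""
    v = np.random.default_rng(0).standard_normal(A.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = A.T @ (A @ v)
        v /= np.linalg.norm(v)
    u = A @ v
    sigma = np.linalg.norm(u)
    return sigma, u / sigma, v

A = np.random.default_rng(1).standard_normal((2000, 2000))

t0 = time.time(); np.linalg.svd(A); t_full = time.time() - t0        # full SVD, O(mn^2)
t0 = time.time(); leading_singular_pair(A); t_lead = time.time() - t0  # leading pair only

print(f"full SVD: {t_full:.2f}s, leading pair only: {t_lead:.2f}s")
```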

To avoid the matrix SVD operation, we reformulate problem (5) using a vectorized technique and consider the following optimization problem:

$$\min_{x_1, x_2, \ldots, x_m} \ -\,\mathrm{Vec}(\mathcal{F})^{\top} \mathrm{Vec}(x_1 \otimes x_2 \otimes \cdots \otimes x_m), \quad \text{s.t.} \ \|x_i\| = 1, \ i = 1, 2, \ldots, m, \quad x_1 = x_2 = \cdots = x_m,$$ (7)

where $\mathrm{Vec}(\mathcal{F})$ is the vectorized form of the tensor $\mathcal{F}$. A related stream of algorithms for solving problem (7) is the family of ADM-type algorithms [12] [13], which have recently been shown to be effective for some non-convex optimization problems [14] [15]. However, the results of [14] rely on a number of assumptions that must be carefully justified. Besides, the subproblems in ADM are easily solvable only when the linear mappings in the constraints are identities. To address this issue, [16] proposed a linearized ADM (LADM) method, which linearizes the quadratic penalty term and adds a proximal term when solving the subproblems. In this paper, we adopt the LADM algorithm for solving the optimization problem (7).
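The vectorized technique rests on the observation that the vectorization of an outer product of vectors is simply a Kronecker product of those vectors (in row-major ordering), so the objective in (7) reduces to a dot product of two long vectors. The following numpy check is included purely as an illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
x, y, z, w = (rng.standard_normal(n) for n in (3, 4, 5, 6))

T = np.einsum('i,j,k,l->ijkl', x, y, z, w)        # x ⊗ y ⊗ z ⊗ w
vec_T = T.reshape(-1)                             # Vec(x ⊗ y ⊗ z ⊗ w), row-major order

kron = np.kron(np.kron(np.kron(x, y), z), w)      # Kronecker product of the vectors
assert np.allclose(vec_T, kron)

F = rng.standard_normal((3, 4, 5, 6))
obj = F.reshape(-1) @ vec_T                       # Vec(F)^T Vec(x ⊗ y ⊗ z ⊗ w)
```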

The rest of this paper is organized as follows. Section 2 first gives a brief review of the LADM algorithm and then describes in detail how LADM with the vectorized technique is used to solve the tensor principal component problem. Section 3 presents the experiments, in which synthetic tensor data of different orders are used to empirically evaluate the proposed algorithm LADMVTPCA. The last section gives concluding remarks.

2. Linearized Alternating Direction Method (LADM)

In this section, we first review the Linearized Alternating Direction Method of Multipliers (LADM) [16], and then present the linearized ADM for solving the tensor principal component problem.

2.1. Algorithm

Consider the convex optimization problem

$$\min_{x, y} \ l(x) + r(y), \quad \text{s.t.} \ Ax - By = 0,$$ (8)

where $x \in \mathbb{R}^{d}$, $y \in \mathbb{R}^{l}$, $A \in \mathbb{R}^{p \times d}$, and $B \in \mathbb{R}^{p \times l}$. It is well known that problem (8) can be solved by the standard ADMM, whose typical iteration is written as

$$x^{k+1} := \arg\min_{x} L_{\beta}(x, y^{k}, \lambda^{k}),$$ (9)

$$\lambda^{k+1} := \lambda^{k} - \beta (A x^{k+1} - B y^{k}),$$ (10)

$$y^{k+1} := \arg\min_{y} L_{\beta}(x^{k+1}, y, \lambda^{k+1}),$$ (11)

where the augmented Lagrangian function $L_{\beta}(x, y, \lambda)$ is defined as

$$L_{\beta}(x, y, \lambda) = l(x) + r(y) - \langle \lambda, Ax - By \rangle + \frac{\beta}{2} \| Ax - By \|^2.$$ (12)

The penalty parameter $\beta > 0$ is a constant dual step size. The inefficiency of ADMM in some settings motivates the linearized ADMM algorithm [16], obtained by linearizing $l(x)$ in the $x$-subproblem. Specifically, it considers the modified augmented Lagrangian function:

$$\bar{L}_{\beta}(x, \hat{x}, y, \lambda) = l(\hat{x}) + \langle \nabla l(\hat{x}), x - \hat{x} \rangle + r(y) - \langle \lambda, Ax - By \rangle + \frac{\beta}{2} \| Ax - By \|^2.$$ (13)

Then the LADM algorithm solves problem (8) by generating a sequence $\{ x^{k+1}, \lambda^{k+1}, y^{k+1} \}$ as follows:

$$x^{k+1} := \arg\min_{x} \bar{L}_{\beta}(x, x^{k}, y^{k}, \lambda^{k}),$$ (14)

$$\lambda^{k+1} := \lambda^{k} - \beta (A x^{k+1} - B y^{k}),$$ (15)

$$y^{k+1} := \arg\min_{y} \bar{L}_{\beta}(x^{k+1}, x^{k}, y, \lambda^{k+1}).$$ (16)

The framework of linearized ADMM is given in Algorithm 2.
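Since Algorithm 2 is not reproduced here, the sketch below illustrates the update order (14)-(16) on a simple instance of problem (8): $l(x) = \frac{1}{2}\|Cx - d\|^2$ (linearized at $x^k$), $r(y) = \mu \|y\|_1$, and $A = B = I$, so that the $y$-subproblem is soft-thresholding. The specific choices of $l$, $r$, $C$, $d$ and the step sizes are ours, for illustration only.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ladm_sketch(C, d, mu, beta=1.0, iters=500):
    """LADM iterations (14)-(16) for min 1/2||Cx - d||^2 + mu||y||_1, s.t. x - y = 0."""
    n = C.shape[1]
    x = np.zeros(n); y = np.zeros(n); lam = np.zeros(n)
    for _ in range(iters):
        grad = C.T @ (C @ x - d)                        # gradient of l at x^k
        x = y - (grad - lam) / beta                     # (14): x-update with linearized l
        lam = lam - beta * (x - y)                      # (15): dual update
        y = soft_threshold(x - lam / beta, mu / beta)   # (16): y-subproblem
    return x, y
```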

2.2. Linearized ADM with Vectorized Technique

In the rest of this section, we present the linearized ADM method for solving the leading principal component problem. Without loss of generality, we consider a 4th-order tensor. For the leading principal component problem with the rank-one constraint, we formulate the original problem as problem (17):

$$\min_{x, y, z, w} \ -\,\mathrm{Vec}(\mathcal{F})^{\top} \mathrm{Vec}(x \otimes y \otimes z \otimes w), \quad \text{s.t.} \ x = y, \ y = z, \ z = w, \ \|x\|_2^2 = \|y\|_2^2 = \|z\|_2^2 = \|w\|_2^2 = 1,$$ (17)

where $x, y, z, w$ are variables updated from iteration to iteration, the constraints $x = y$, $y = z$, $z = w$ force the variables to stay close to each other and eventually coincide, and the constraints $\|x\|_2^2 = \|y\|_2^2 = \|z\|_2^2 = \|w\|_2^2 = 1$ make the solution robust to scaling. After moving the equality constraints into the loss function, problem (17) is equivalently reformulated as

$$\min_{x, y, z, w} \ -\,\mathrm{Vec}(\mathcal{F})^{\top} \mathrm{Vec}(x \otimes y \otimes z \otimes w) + \langle \lambda_1, x - y \rangle + \langle \lambda_2, y - z \rangle + \langle \lambda_3, z - w \rangle + \langle \lambda_4, w - x \rangle + \frac{\rho}{2} \left( \|x - y\|_2^2 + \|y - z\|_2^2 + \|z - w\|_2^2 + \|w - x\|_2^2 \right), \quad \text{s.t.} \ x, y, z, w \in S,$$ (18)

where $S = \{ x \mid \|x\|_2 = 1 \}$ is the unit sphere and $\rho$ is a parameter that balances the objective loss and the smoothing terms. It should be noted that tensors of other orders can be handled in a similar way. The augmented Lagrangian function $L(x, y, z, w, \lambda_1, \lambda_2, \lambda_3, \lambda_4, \rho)$ is defined as

$$L(x, y, z, w, \rho) := -\,\mathrm{Vec}(\mathcal{F})^{\top} \mathrm{Vec}(x \otimes y \otimes z \otimes w) + \langle \lambda_1, x - y \rangle + \langle \lambda_2, y - z \rangle + \langle \lambda_3, z - w \rangle + \langle \lambda_4, w - x \rangle + \frac{\rho}{2} \left( \|x - y\|_2^2 + \|y - z\|_2^2 + \|z - w\|_2^2 + \|w - x\|_2^2 \right),$$ (19)

where $x, y, z, w \in S$. In the $k$-th iteration, we denote the variables by $x^{k}, y^{k}, z^{k}, w^{k}$, and $\rho^{k}$. Given $x^{k}, y^{k}, z^{k}, w^{k}, \rho^{k}$, the algorithm iterates as follows:

$$x^{k+1} := \arg\min_{x \in S} L(x, y^{k}, z^{k}, w^{k}, \rho^{k}),$$ (20a)

$$y^{k+1} := \arg\min_{y \in S} L(x^{k+1}, y, z^{k}, w^{k}, \rho^{k}),$$ (20b)

$$z^{k+1} := \arg\min_{z \in S} L(x^{k+1}, y^{k+1}, z, w^{k}, \rho^{k}),$$ (20c)

$$w^{k+1} := \arg\min_{w \in S} L(x^{k+1}, y^{k+1}, z^{k+1}, w, \rho^{k}),$$ (20d)

update $\lambda_1, \lambda_2, \lambda_3, \lambda_4$, (20e)

$$\rho^{k+1} := \begin{cases} \alpha \rho^{k}, & \text{if the two parts are balanced}, \\ \rho^{k} / \alpha, & \text{otherwise}. \end{cases}$$ (20f)

In the following, we show how to solve these subproblems in a linearized way with the vectorized technique. After all the subproblems have been solved, we summarize the full procedure for solving (18) in Algorithm 3. In order to reach the saddle point quickly and improve the quality of the solution, we adjust the parameter $\rho$ adaptively to balance the decrease speed of the two parts.

In the traditional way, $x^{k+1}$ is obtained by minimizing $L$ with respect to the variable $x$ while $y, z, w, \rho$ are fixed at $y^{k}, z^{k}, w^{k}, \rho^{k}$ respectively, so the subproblem reads

$$x^{k+1} := \arg\min_{x \in S} \ -\,\mathrm{Vec}(\mathcal{F})^{\top} \mathrm{Vec}(x \otimes y^{k} \otimes z^{k} \otimes w^{k}) + \langle \lambda_1, x - y^{k} \rangle + \langle \lambda_2, y^{k} - z^{k} \rangle + \langle \lambda_3, z^{k} - w^{k} \rangle + \langle \lambda_4, w^{k} - x \rangle + \frac{\rho}{2} \left( \|x - y^{k}\|_2^2 + \|y^{k} - z^{k}\|_2^2 + \|z^{k} - w^{k}\|_2^2 + \|w^{k} - x\|_2^2 \right).$$ (21)

We slightly modify the above LADM algorithm by imposing a proximal term $\frac{\delta}{2} \| x - x^{k} \|^2$ on the subproblem of $x$ and update $x^{k+1}$ via

$$x^{k+1} := \arg\min_{x \in S} \ -\,\mathrm{Vec}(\mathcal{F})^{\top} \mathrm{Vec}(x \otimes y^{k} \otimes z^{k} \otimes w^{k}) + \langle \lambda_1, x - y^{k} \rangle + \langle \lambda_2, y^{k} - z^{k} \rangle + \langle \lambda_3, z^{k} - w^{k} \rangle + \langle \lambda_4, w^{k} - x \rangle + \frac{\rho}{2} \left( \|x - y^{k}\|_2^2 + \|y^{k} - z^{k}\|_2^2 + \|z^{k} - w^{k}\|_2^2 + \|w^{k} - x\|_2^2 \right) + \frac{\delta}{2} \| x - x^{k} \|^2.$$ (22)

The minimum is attained when the derivative of $L$ with respect to $x$ is set to zero; using this condition, we obtain an equation whose solution gives the minimizer of the subproblem:

$$0 = -\,\frac{\partial}{\partial x} \left[ \mathrm{Vec}(\mathcal{F})^{\top} \mathrm{Vec}(x \otimes y^{k} \otimes z^{k} \otimes w^{k}) \right] + \lambda_1 - \lambda_4 + \delta (x - x^{k}) + \rho^{k} (2x - y^{k} - w^{k}),$$ (23)

where $\mathrm{Vec}(\mathcal{F})$ is the vectorized form of the tensor $\mathcal{F}$. After solving the above equation, we obtain $x^{k+1}$. For the sake of parsimony, we only present the solution of the first subproblem here; the other subproblems are handled analogously.
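A minimal sketch of this x-update is given below. It assumes that (23) is solved in closed form for $x$ and that the result is then renormalized onto the unit sphere $S$; the paper itself does not spell out the projection step, so that detail, as well as the helper name x_update, is ours.

```python
import numpy as np

def x_update(F, x_k, y_k, z_k, w_k, lam1, lam4, rho, delta):
    """One possible x-subproblem update for problem (18), 4th-order case."""
    # gradient of Vec(F)^T Vec(x ⊗ y ⊗ z ⊗ w) w.r.t. x: contract F with y, z, w
    g = np.einsum('ijkl,j,k,l->i', F, y_k, z_k, w_k)
    # stationarity (23): (delta + 2*rho) x = g - lam1 + lam4 + delta*x^k + rho*(y^k + w^k)
    x = (g - lam1 + lam4 + delta * x_k + rho * (y_k + w_k)) / (delta + 2.0 * rho)
    return x / np.linalg.norm(x)   # renormalize onto S = {x : ||x||_2 = 1} (assumed step)
```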

3. Numerical Experiments

In this section, we report numerical experiments and results of Algorithm 3 for solving the tensor leading PC problem (18). As ADMPCA [5] is one of the fastest matrix factorization based methods and outperforms the SVD-based methods, we mainly focus on comparing our approach with the ADMPCA proposed in [5]. The MATLAB code of ADMPCA was downloaded from the homepage of Professor Shiqian Ma [5].

We apply our approach to synthetic datasets. The data are generated with uniformly distributed eigenvectors $v_i$ and eigenvalues $\lambda_i$, and the tensor is formed by the summation $\mathcal{F} = \sum_{i=1}^{n} \lambda_i\, v_i \otimes v_i \otimes \cdots \otimes v_i$, in which the rank of the tensor is controlled by the number of eigenvectors $n$, the order of the tensor is controlled by the number of copies of $v_i$ in the outer product, and the dimension of the tensor is controlled by the dimension of the vectors $v_i$.
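A small sketch of this data generation step is given below for the 4th-order case; the paper describes the distributions only as uniform, so the specific ranges used here are assumptions.

```python
import numpy as np

def synthetic_tensor(dim, rank, order=4, seed=0):
    """Generate F = sum_i lambda_i * v_i ⊗ v_i ⊗ ... ⊗ v_i with random lambda_i, v_i."""
    rng = np.random.default_rng(seed)
    F = np.zeros((dim,) * order)
    for _ in range(rank):
        lam = rng.uniform(0.0, 1.0)                 # eigenvalue (assumed range)
        v = rng.uniform(-1.0, 1.0, size=dim)        # eigenvector entries (assumed range)
        v /= np.linalg.norm(v)
        rank1 = v
        for _ in range(order - 1):                  # build v ⊗ v ⊗ ... ⊗ v
            rank1 = np.multiply.outer(rank1, v)
        F += lam * rank1
    return F

F = synthetic_tensor(dim=8, rank=3)                 # order-4, dimension-8 tensor
```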

We compare LADMVTPCA with ADMPCA for solving problem (18). In Table 2, we report the results on randomly created tensors with order 4 and dimension $n = 4, 8, 16, 32$. "objDiff" denotes the relative difference of the solutions obtained by ADMPCA and LADMVTPCA,

$$\text{objDiff} = \frac{\left\| \mathcal{F}_{\mathrm{LADMVTPCA}} - \mathcal{F}_{\mathrm{ADMPCA}} \right\|_F}{\max\{ 1, \left\| \mathcal{F}_{\mathrm{ADMPCA}} \right\|_F \}},$$

"objVal" denotes the relative difference of the obtained eigenvectors,

$$\text{objVal} = \frac{\left\| v_{\mathrm{LADMVTPCA}} - v_{\mathrm{ADMPCA}} \right\|_F}{\max\{ 1, \left\| v_{\mathrm{ADMPCA}} \right\|_F \}},$$

and "Time" columns

Table 2. Results of different algorithms for solving randomly generated tensor’s leading principal component.

denote the CPU times (in seconds) of ADMPCA and LADMVTPCA, respectively. From Table 2 we can see that LADMVTPCA produced solutions comparable to those of ADMPCA; however, LADMVTPCA was much faster than ADMPCA, especially for large-scale problems, i.e. $n = 16, 32$. Note that when $n = 16$, LADMVTPCA was about 4000 times faster than ADMPCA.

4. Conclusion

Tensor PCA is an emerging area of research with many important applications in image processing, data analysis, statistical learning, and bioinformatics. In this paper, we propose a new efficient and scalable algorithm for tensor principal component analysis called LADMVTPCA. A vectorized technique is introduced into the processing procedure, and the linearized alternating direction method is used to solve the optimization problem. LADMVTPCA provides an efficient way to compute the leading PC. We empirically evaluate the proposed algorithm on synthetic tensor data of different orders. Results show that LADMVTPCA has a much lower computational cost than the matrix factorization based method; especially for large-scale problems, the matrix factorization based method is far more time-consuming than ours.

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Kolda, T.G. and Bader, B.W. (2009) Tensor Decompositions and Applications. SIAM Review, 51, 455-500.
https://doi.org/10.1137/07070111X
[2] Wang, H. and Ahuja, N. (2004) Compact Representation of Multidimensional Data Using Tensor Rank-One Decomposition. Proceedings of the 17th International Conference on Pattern Recognition, ICPR 2004, Cambridge, 26-26 August 2004, 104-107.
[3] Huang, F., Niranjan, U., Hakeem, M.U. and Anandkumar, A. (2013) Fast Detection of Overlapping Communities via Online Tensor Methods. arXiv:1309.0787
[4] Qi, L., Yu, G. and Wu, E.X. (2010) Higher Order Positive Semidefinite Diffusion Tensor Imaging. SIAM Journal on Imaging Sciences, 3, 416-433.
https://doi.org/10.1137/090755138
[5] Jiang, B., Ma, S. and Zhang, S. (2014) Tensor Principal Component Analysis via Convex Optimization. Mathematical Programming, 1-35.
[6] Hitchcock, F.L. (1927) The Expression of a Tensor or a Polyadic as a Sum of Products. Journal of Mathematics and Physics, 6, 164-189.
[7] Kofidis, E. and Regalia, P.A. (2002) On the Best Rank-1 Approximation of Higher-Order Supersymmetric Tensors. SIAM Journal on Matrix Analysis and Applications, 23, 863-884.
https://doi.org/10.1137/S0895479801387413
[8] Kolda, T.G. and Mayo, J.R. (2011) Shifted Power Method for Computing Tensor Eigenpairs. SIAM Journal on Matrix Analysis and Applications, 32, 1095-1124.
https://doi.org/10.1137/100801482
[9] Qi, L. (2005) Eigenvalues of a Real Supersymmetric Tensor. Journal of Symbolic Computation, 40, 1302-1324.
https://doi.org/10.1016/j.jsc.2005.05.007
[10] Mackey, L.W. (2009) Deflation Methods for Sparse PCA. Advances in Neural Information Processing Systems, 21, 1017-1024.
[11] Lim, L.H. (2005) Singular Values and Eigenvalues of Tensors: A Variational Approach. IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing, Puerto Vallarta, Mexico, 13-15 December 2005, 129-132.
[12] Zhong, L.W. and Kwok, J.T. (2013) Fast Stochastic Alternating Direction Method of Multipliers. Proceedings of the 30th International Conference on Machine Learning, Atlanta, Georgia, 2013.
[13] Zhao, P.l., Yang, J.W., Zhang, T. and Li, P. (2013) Adaptive Stochastic Alternating Direction Method of Multipliers. arXiv:1312.4564
[14] Magnússon, S., Weeraddana, P.C., Rabbat, M.G. and Fischione, C. (2014) On the Convergence of Alternating Direction Lagrangian Methods for Nonconvex Structured Optimization Problems. arXiv:1409.8033 [math.OC]
[15] Hong, M.Y., Luo, Z.-Q. and Razaviyayn, M. (2015) Convergence Analysis of Alternating Direction Method of Multipliers for a Family of Nonconvex Problems. 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brisbane, 19-24 April 2015, 3836-3840.
https://doi.org/10.1109/ICASSP.2015.7178689
[16] Yang, J. and Yuan, X. (2013) Linearized Augmented Lagrangian and Alternating Direction Methods for Nuclear Norm Minimization. Mathematics of Computation, 82, 301-329.
