Fast Tensor Principal Component Analysis via Proximal Alternating Direction Method with Vectorized Technique

This paper studies the problem of tensor principal component analysis (PCA). The tensor PCA problem is usually viewed as a low-rank matrix completion problem via a matrix factorization technique, with the nuclear norm used as a convex approximation of the rank operator under mild conditions. However, most nuclear norm minimization approaches are based on SVD operations. Given a matrix X ∈ R^{m×n} with m ≥ n, the time complexity of an SVD is O(mn²), which brings prohibitive computational cost in large-scale problems. In this paper, an efficient and scalable algorithm for tensor PCA is proposed, called the Linearized Alternating Direction Method with Vectorized Technique for Tensor Principal Component Analysis (LADMVTPCA). Different from traditional matrix factorization methods, LADMVTPCA uses a vectorized technique to formulate the tensor as an outer product of vectors, which greatly improves computational efficiency compared with matrix factorization methods. In the experimental part, synthetic tensor data of different orders are used to empirically evaluate the proposed algorithm. The results show that LADMVTPCA outperforms the matrix factorization based method.


Introduction
A tensor is a multidimensional array. For example, a first-order tensor is a vector, a second-order tensor is a matrix, and tensors of order three or higher are called higher-order tensors. Principal component analysis (PCA) finds a few linear combinations of the original variables that capture most of the variance. PCA plays an important role in dimension reduction and related data analysis research areas [1]. Although PCA and the eigenvalue problem for matrices have been well studied in the literature, little work has been done on tensor PCA.
Tensor PCA is of great importance in practice and has many applications, such as computer vision [2], social network analysis [3], diffusion Magnetic Resonance Imaging (MRI) [4], and so on. Similar to the matrix case, the problem of finding the leading PC amounts to capturing the most variance of an m-th order tensor T, which can be formulated as [5]:

max ⟨T, x_1 ⊗ x_2 ⊗ ... ⊗ x_m⟩  s.t. ‖x_i‖ = 1, i = 1, ..., m,   (2)

where ⟨·,·⟩ denotes the inner product between two tensors,

⟨A, B⟩ = Σ_{i_1,...,i_m} A_{i_1...i_m} B_{i_1...i_m},

and ⊗ denotes the outer product between vectors [6] [7]:

(x_1 ⊗ x_2 ⊗ ... ⊗ x_m)_{i_1...i_m} = (x_1)_{i_1} (x_2)_{i_2} ... (x_m)_{i_m}.

The solution of (2) is called the leading PC. Once the leading PC is found, the other PCs can be computed sequentially via the so-called deflation technique.
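The inner and outer products above are easy to check numerically. The following sketch (our own illustration, using NumPy's `einsum`; the variable names are not from the paper) verifies that contracting T against the factor vectors equals the elementwise inner product with the outer-product tensor:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
T = rng.standard_normal((n, n, n))              # a 3rd-order tensor
x, y, z = (rng.standard_normal(n) for _ in range(3))

# Outer product x ⊗ y ⊗ z: entries (x)_i (y)_j (z)_k.
outer = np.einsum('i,j,k->ijk', x, y, z)

# Inner product <T, x ⊗ y ⊗ z>, computed two equivalent ways.
ip_direct = np.sum(T * outer)                   # elementwise sum
ip_contracted = np.einsum('ijk,i,j,k->', T, x, y, z)  # full contraction

assert np.isclose(ip_direct, ip_contracted)
```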
For example, the second PC can be obtained in the following way: 1) compute the leading PC of the tensor; 2) subtract the leading PC component from the original tensor; 3) compute the leading PC of the residual tensor, which is taken as the second PC of the original tensor. The remaining PCs can be obtained in a similar way [8] [9]. The deflation procedure is presented in Algorithm 1. Although the theoretical analysis of the deflation procedure for matrices is well established (see [10] and the references therein for more details), the tensor counterpart has not been completely studied. However, the deflation process does provide an efficient heuristic for computing multiple principal components of a tensor. The time consumption of the full SVD versus the leading PC at different scales is shown in Table 1. As the size of the matrix increases, the computational cost of the leading PC becomes far less than that of the full SVD. Therefore, although greedy atom decomposition methods need more iterations to reach convergence, their total computational cost is much lower than that of SVD based matrix completion methods. Thus, in the rest of this paper, we focus on finding the leading PC of a tensor. If T is a super-symmetric tensor, problem (2) can be reduced to [9]

max ⟨T, x ⊗ x ⊗ ... ⊗ x⟩  s.t. ‖x‖ = 1.   (5)

In fact, algorithms for the super-symmetric tensor PCA problem can be extended to tensors that are not super-symmetric [5]; therefore, this paper focuses on the PCA of super-symmetric tensors. Problem (5) is NP-hard and is known as the maximum Z-eigenvalue problem. Note that a variety of eigenvalues and eigenvectors of a real symmetric tensor were introduced independently by Lim [11] and Qi [9] in 2005. Since then, various methods have been proposed to find the Z-eigenvalues, which, however, may correspond only to local optima.
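The deflation steps above can be sketched as follows. This is our own minimal illustration for a 4th-order super-symmetric tensor; as a stand-in for the leading-PC solver we use a higher-order power method (a common heuristic), whereas the paper's own solver is LADMVTPCA:

```python
import numpy as np

def leading_pc(T, iters=200, seed=0):
    """Leading PC of a super-symmetric 4th-order tensor via a
    higher-order power method (illustrative stand-in solver)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(T.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        # Contract T with x in three modes: g_i = T_{ijkl} x_j x_k x_l.
        g = np.einsum('ijkl,j,k,l->i', T, x, x, x)
        nrm = np.linalg.norm(g)
        if nrm == 0:
            break
        x = g / nrm
    lam = np.einsum('ijkl,i,j,k,l->', T, x, x, x, x)
    return lam, x

def deflate(T, num_pcs):
    """Deflation (Algorithm 1): repeatedly extract the leading PC and
    subtract its rank-one component from the tensor."""
    pcs = []
    for _ in range(num_pcs):
        lam, x = leading_pc(T)
        pcs.append((lam, x))
        T = T - lam * np.einsum('i,j,k,l->ijkl', x, x, x, x)
    return pcs
```

On an exactly rank-one tensor the power method recovers the planted component in one step, so `deflate` leaves a (numerically) zero residual.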
Another research line, exemplified by CANDECOMP (canonical decomposition) and PARAFAC (parallel factors), imposes a rank-one constraint on a tensor variable to realize the decomposition:

max ⟨T, X⟩  s.t. rank(X) = 1, ⟨X, X⟩ = 1, X super-symmetric,   (6)

where the last condition indicates that X should be super-symmetric. The first equality constraint is due to the fact that ⟨x ⊗ ... ⊗ x, x ⊗ ... ⊗ x⟩ = ‖x‖^{2m} = 1 when ‖x‖ = 1. The difficulty of problem (6) lies in dealing with the rank constraint rank(X) = 1. Not only is the rank function itself difficult to deal with, but determining the rank of a specific given tensor is already a difficult task, which is NP-hard in general. One way to deal with this difficulty is to convert the tensor optimization problem into a matrix optimization problem. [5] proved that if the tensor is rank-one, then the embedded matrix must be rank-one too, and vice versa. The tensor PCA problem can thus be solved by means of matrix optimization under a rank-one constraint. For low-rank matrix optimization problems, a nuclear norm penalty is often adopted to enforce a low-rank solution. However, most nuclear norm minimization approaches are based on SVD operations.
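The rank-one equivalence between a tensor and its embedded matrix can be checked numerically. In the following sketch (our own illustration; the grouping of modes (1,2) and (3,4) is one natural choice of embedding, consistent with [5] for 4th-order tensors), a rank-one super-symmetric tensor is reshaped into an n² × n² matrix, which is then verified to have rank one:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
x = rng.standard_normal(n)

# Rank-one super-symmetric 4th-order tensor T = x ⊗ x ⊗ x ⊗ x.
T = np.einsum('i,j,k,l->ijkl', x, x, x, x)

# Embed T into an n^2 x n^2 matrix by grouping modes (1,2) and (3,4).
M = T.reshape(n * n, n * n)

# M equals vec(x x^T) vec(x x^T)^T, hence it has rank one.
assert np.linalg.matrix_rank(M) == 1
```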

Given a matrix X ∈ R^{m×n} with m ≥ n, the time complexity of the SVD operation is O(mn²), which brings prohibitive computational cost in large-scale problems (refer to Table 1).
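The gap between the full SVD and a leading-component computation can be illustrated as follows (our own timing sketch, not Table 1 itself; a dominant rank-one component is planted so that the power method converges quickly):

```python
import time
import numpy as np

def leading_singular_pair(X, iters=100, seed=0):
    """Leading singular value/vector of X via power iteration on X^T X.
    Each iteration costs O(mn), versus O(mn^2) for a full SVD."""
    v = np.random.default_rng(seed).standard_normal(X.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = X.T @ (X @ v)
        v /= np.linalg.norm(v)
    return np.linalg.norm(X @ v), v

rng = np.random.default_rng(0)
m, n = 2000, 500
# Plant a strong rank-one spike so the leading singular value is well separated.
X = rng.standard_normal((m, n)) + 200.0 * np.outer(
    rng.standard_normal(m) / np.sqrt(m), rng.standard_normal(n) / np.sqrt(n))

t0 = time.perf_counter()
U, s, Vt = np.linalg.svd(X, full_matrices=False)
t_svd = time.perf_counter() - t0

t0 = time.perf_counter()
sigma, v = leading_singular_pair(X)
t_pow = time.perf_counter() - t0

print(f"full SVD: {t_svd:.3f}s  top sigma: {s[0]:.2f}")
print(f"power it: {t_pow:.3f}s  top sigma: {sigma:.2f}")
```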
To avoid the matrix SVD operation, we reformulate problem (5) with a vectorized technique and consider the following optimization problem:

min ‖Vec(T) − Vec(x ⊗ x ⊗ ... ⊗ x)‖²  s.t. ‖x‖ = 1,   (7)

where Vec(T) is the vectorized form of the tensor T. A related stream of algorithms for solving problem (7) is the ADM-type algorithms [12] [13]. Such algorithms have recently been shown to be effective for some non-convex optimization problems [14] [15]. However, the results of [14] rely on a number of strong assumptions that must be carefully justified. Besides, the subproblems in ADM are easily solvable only when the linear mappings in the constraints are identities. To address this issue, [16] proposed a linearized ADM (LADM) method, which linearizes the quadratic penalty term and adds a proximal term when solving the subproblems.
In this paper, we adopt the LADM algorithm to solve the optimization problem (7). The rest of this paper is organized as follows. In Section 2, a brief review of the LADM algorithm is given first, followed by a detailed description of using LADM with the vectorized technique to solve the tensor principal component problem. Section 3 is the experimental part, in which synthetic tensor data of different orders are used to empirically evaluate the proposed algorithm LADMVTPCA. The last section gives concluding remarks.

Linearized Alternating Direction Method (LADM)
In this section, we first review the Linearized Alternating Direction Method of Multipliers (LADM) [16], and then present the linearized ADM for solving the tensor principal component problem. Consider the two-block problem

min f(x) + g(y)  s.t. Ax + By = b.   (8)

Problem (8) can be solved by the standard ADMM, whose typical iteration is

x^{k+1} := argmin_x L_β(x, y^k, λ^k),
y^{k+1} := argmin_y L_β(x^{k+1}, y, λ^k),
λ^{k+1} := λ^k − β(Ax^{k+1} + By^{k+1} − b),

where the augmented Lagrangian function is

L_β(x, y, λ) = f(x) + g(y) − ⟨λ, Ax + By − b⟩ + (β/2)‖Ax + By − b‖².

The penalty parameter β > 0 is a constant dual step size. The potential inefficiency of the ADMM subproblems inspires the linearized ADMM algorithm [16], obtained by linearizing the quadratic term in the x-subproblem. Specifically, it considers a modified augmented Lagrangian in which the quadratic penalty (β/2)‖Ax + By − b‖² is replaced by its linearization at x^k plus a proximal term (τ/2)‖x − x^k‖². The LADM algorithm then solves problem (8) by generating the same sequence of updates with this modified x-subproblem.
The framework of linearized ADMM is given in Algorithm 2.

Algorithm 2 LADM
Choose the parameter β such that Equation (9) is satisfied; initialize the iteration counter k ← 0 and a bounded starting point (x⁰, y⁰, λ⁰).
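To make the LADM iteration concrete, the following is a minimal sketch on a toy two-block instance of problem (8) (our own example, not the paper's problem): min ‖x‖₁ + (μ/2)‖y − b‖² s.t. Ax − y = 0. The quadratic penalty in the x-subproblem is linearized at x^k and a proximal term is added, so the x-update reduces to a single soft-thresholding step; the step-size condition τ ≥ β‖A‖₂² is assumed:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ladm_toy(A, b, mu=10.0, beta=1.0, iters=5000):
    """Linearized ADM on: min ||x||_1 + (mu/2)||y - b||^2  s.t. Ax - y = 0."""
    m, n = A.shape
    x, y, lam = np.zeros(n), np.zeros(m), np.zeros(m)
    tau = 1.01 * beta * np.linalg.norm(A, 2) ** 2  # tau >= beta*||A||_2^2
    for _ in range(iters):
        # Linearized x-update: gradient of the smooth terms at x^k,
        # then a proximal (soft-thresholding) step with weight 1/tau.
        g = A.T @ (beta * (A @ x - y) - lam)
        x = soft_threshold(x - g / tau, 1.0 / tau)
        # Exact y-update (unconstrained quadratic in y).
        y = (mu * b - lam + beta * (A @ x)) / (mu + beta)
        # Multiplier update with constant dual step size beta.
        lam = lam - beta * (A @ x - y)
    return x, y
```

Because the x-subproblem is linearized, no linear system involving AᵀA is ever solved; this is exactly the feature that makes LADM attractive when the linear mapping in the constraint is not the identity.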

Linearized ADM with Vectorized Technique
In the remainder of this section, we present the linearized ADM method for solving the leading principal component problem. Without loss of generality, we consider a 4th-order tensor. For the leading principal component problem with a rank-one constraint, we formulate the original problem as problem (17).
min_{x,y,z,w} ‖Vec(T) − Vec(x ⊗ y ⊗ z ⊗ w)‖²  s.t. x = y, y = z, z = w, x ∈ B,   (17)

where x, y, z, w are variables updated from iteration to iteration, B is the unit ball, and ρ is a parameter that balances the objective loss and the smoothing terms. It should be noted that tensors of other orders can be handled in a similar way. The augmented Lagrangian function of (17) is

L(x, y, z, w, λ₁, λ₂, λ₃) = ‖Vec(T) − Vec(x ⊗ y ⊗ z ⊗ w)‖² − ⟨λ₁, x − y⟩ − ⟨λ₂, y − z⟩ − ⟨λ₃, z − w⟩ + (ρ/2)(‖x − y‖² + ‖y − z‖² + ‖z − w‖²),   (18)

where x, y, z, w ∈ R^n. In the k-th iteration, we denote the variables by x^k, y^k, z^k, w^k and ρ_k. Given x^k, y^k, z^k, w^k, ρ_k, the method iterates as follows:

x^{k+1} := argmin_x L(x, y^k, z^k, w^k, λ^k),
y^{k+1} := argmin_y L(x^{k+1}, y, z^k, w^k, λ^k),
z^{k+1} := argmin_z L(x^{k+1}, y^{k+1}, z, w^k, λ^k),
w^{k+1} := argmin_w L(x^{k+1}, y^{k+1}, z^{k+1}, w, λ^k),

followed by updates of the multipliers λ₁, λ₂, λ₃ and of ρ.
In the following part, we show how to solve these subproblems in a linearized way with the vectorized technique. After all the subproblems are solved, we summarize the full procedure for solving (18) in Algorithm 3. In order to reach the saddle point faster and improve the quality of the solution, we adjust the parameter ρ adaptively to balance the decrease speeds of the objective loss and the smoothing terms.
In the traditional way, x^{k+1} is obtained by minimizing L with respect to the variable x while y^k, z^k, w^k, ρ_k are fixed. We slightly modify this scheme, in the spirit of LADM, by imposing a proximal term (τ/2)‖x − x^k‖². The minimum is attained where the derivative of L with respect to x equals zero; this condition yields a linear equation whose solution is the x-subproblem update, where Vec(T) is the vectorized form of the tensor T. Solving this equation gives x^{k+1}. For parsimony, we present only the solution of the first subproblem here; the other subproblems are handled analogously.
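The closed form of the x-update cannot be fully recovered from the text, but its core computation can be sketched. Assuming the x-block objective consists of the vectorized least-squares term plus the proximal term (the multiplier and penalty terms of (18) contribute further linear and quadratic pieces of the same shape), the key identities are ⟨Vec(T), Vec(x ⊗ y ⊗ z ⊗ w)⟩ = ⟨g, x⟩ with g_i = Σ_{jkl} T_{ijkl} y_j z_k w_l, and ‖x ⊗ y ⊗ z ⊗ w‖² = ‖x‖²‖y‖²‖z‖²‖w‖², so setting the derivative to zero gives a scalar-scaled linear equation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, tau = 5, 1.0
T = rng.standard_normal((n, n, n, n))
y, z, w, xk = (rng.standard_normal(n) for _ in range(4))

# Contraction g_i = sum_{jkl} T_{ijkl} y_j z_k w_l: the vectorized
# least-squares term reduces to a linear term <g, x> in x.
g = np.einsum('ijkl,j,k,l->i', T, y, z, w)
c = np.dot(y, y) * np.dot(z, z) * np.dot(w, w)   # ||y||^2 ||z||^2 ||w||^2

# Zero derivative of f(x) = ||Vec(T) - Vec(x⊗y⊗z⊗w)||^2
#                          + (tau/2)||x - x^k||^2:
x_new = (2 * g + tau * xk) / (2 * c + tau)

# The gradient vanishes at x_new, confirming the closed form.
grad = -2 * g + 2 * c * x_new + tau * (x_new - xk)
assert np.linalg.norm(grad) < 1e-10
```

The point of the vectorized technique is visible here: the subproblem never forms or factors a matrix, only a tensor-vector contraction and a scalar division are needed.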

Numerical Experiments
In this section, we report the numerical experiments and results of Algorithm 3 for solving the tensor leading PC problem (18). Since ADMPCA [5] is one of the fastest matrix factorization based methods and outperforms SVD based methods, we mainly compare our approach with the ADMPCA proposed in [5]. The MATLAB code of ADMPCA was downloaded from Professor Shiqian Ma's homepage [5].
We apply our approach to synthetic datasets. The data are generated with uniformly distributed eigenvectors v_i and eigenvalues λ_i, and the tensor is generated through the summation T = Σ_i λ_i v_i ⊗ v_i ⊗ ... ⊗ v_i, in which the rank of the tensor is controlled by the number of eigenvectors, the order of the tensor is controlled by the number of factors v_i appearing in each outer product, and the dimension of the tensor is controlled by the dimension of the vectors v_i.
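A generator of this form can be sketched as follows (our reading of the description above; the exact distributions used in the paper's experiments are an assumption):

```python
import numpy as np
from functools import reduce

def synth_tensor(n=8, rank=3, order=4, seed=0):
    """Generate T = sum_i lambda_i * v_i ⊗ ... ⊗ v_i (order copies),
    with uniformly drawn unit-norm vectors v_i and positive
    eigenvalues lambda_i (assumed distributions)."""
    rng = np.random.default_rng(seed)
    T = np.zeros((n,) * order)
    lams, vecs = [], []
    for _ in range(rank):
        v = rng.uniform(-1.0, 1.0, n)
        v /= np.linalg.norm(v)                    # unit-norm factor
        lam = rng.uniform(1.0, 10.0)
        # order-fold outer product v ⊗ v ⊗ ... ⊗ v
        T += lam * reduce(np.multiply.outer, [v] * order)
        lams.append(lam)
        vecs.append(v)
    return T, np.array(lams), np.array(vecs)
```

By construction the result is super-symmetric, and its (1,2)-(3,4) matrix embedding has rank at most `rank`.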

Conclusion
Tensor PCA is an emerging area of research with many important applications in image processing, data analysis, statistical learning, and bioinformatics. In this paper, we propose a new efficient and scalable algorithm for tensor principal component analysis called LADMVTPCA. A vectorized technique is introduced in the processing procedure, and a linearized alternating direction method is used to solve the optimization problem. LADMVTPCA provides an efficient way to compute the leading PC. We empirically evaluate the proposed algorithm on synthetic tensor data of different orders. The results show that LADMVTPCA has a much lower computational cost than the matrix factorization based method; especially for large-scale problems, the matrix factorization based method is much more time-consuming than ours.
It is well known that the unit-ball constraints make the solution robust to scaling. After adding the constraints into the loss function, problem (17) is equivalently reformulated as problem (18). We report results on randomly created tensors with order 4 and dimension n = 4, 8, 16, 32. "objDiff" is used to denote the relative difference of the objective values obtained by ADMPCA and LADMVTPCA, and a second measure denotes the relative difference of the computed eigenvectors.

Table 2 .
Results of different algorithms for solving a randomly generated tensor's leading principal component; the CPU times (in seconds) of ADMPCA and LADMVTPCA are reported. From Table 2 we can see that LADMVTPCA produced solutions comparable to those of ADMPCA; however, LADMVTPCA was much faster than ADMPCA, especially for large-scale problems, i.e., n = 16, 32. Note that when n = 16, LADMVTPCA was about 4000 times faster than ADMPCA.