Fast Tensor Principal Component Analysis via Proximal Alternating Direction Method with Vectorized Technique

Abstract

This paper studies the tensor principal component analysis (PCA) problem. Tensor PCA is usually cast as a low-rank matrix completion problem via a matrix factorization technique, with the nuclear norm used as a convex surrogate of the rank operator under mild conditions. However, most nuclear norm minimization approaches rely on SVD operations. Given an m × n matrix, the time complexity of an SVD is O(mn^2), which is prohibitive in large-scale problems. In this paper, an efficient and scalable algorithm for tensor PCA is proposed, called Linearized Alternating Direction Method with Vectorized technique for Tensor Principal Component Analysis (LADMVTPCA). Unlike traditional matrix factorization methods, LADMVTPCA uses a vectorized technique to formulate the tensor as an outer product of vectors, which greatly improves computational efficiency compared with matrix factorization methods. In the experimental part, synthetic tensor data of different orders are used to empirically evaluate the proposed algorithm LADMVTPCA. Results show that LADMVTPCA outperforms the matrix factorization based method.

Share and Cite:

Fan, H. , Kuang, G. and Qiao, L. (2017) Fast Tensor Principal Component Analysis via Proximal Alternating Direction Method with Vectorized Technique. Applied Mathematics, 8, 77-86. doi: 10.4236/am.2017.81007.

1. Introduction

A tensor is a multidimensional array. For example, a first-order tensor is a vector, a second-order tensor is a matrix, and tensors of order three or higher are called higher-order tensors. Principal component analysis (PCA) finds a few linear combinations of the original variables and plays an important role in dimension reduction and related data analysis research areas [1] . Although PCA and the eigenvalue problem for matrices have been well studied in the literature, little work has been done on tensor PCA.

Tensor PCA is of great importance in practice and has many applications, such as computer vision [2] , social network analysis [3] , diffusion Magnetic Resonance Imaging (MRI) [4] and so on. Similar to the matrix case, finding the PCs amounts to finding the rank-one term that captures the most variance of a tensor $\mathcal{F}$ , which can be formulated as [5] :

$\begin{array}{l}\underset{\lambda ,{x}^{1},{x}^{2},\cdots ,{x}^{m}}{\mathrm{min}}{‖\mathcal{F}-\lambda {x}^{1}\otimes {x}^{2}\otimes \cdots \otimes {x}^{m}‖}_{F}^{2},\\ s.t.\text{}\lambda \in ℝ,\text{}‖{x}^{i}‖=1,i=1,2,\cdots ,m,\end{array}$ (1)

which is equivalent to

$\begin{array}{l}\underset{{x}^{1},{x}^{2},\cdots ,{x}^{m}}{\mathrm{min}}-\mathcal{F}\cdot \left({x}^{1}\otimes {x}^{2}\otimes \cdots \otimes {x}^{m}\right),\\ s.t.\text{}‖{x}^{i}‖=1,i=1,2,\cdots ,m,\end{array}$ (2)

where $\cdot$ denotes the inner product between two tensors, and the inner product of two tensors ${\mathcal{F}}_{1},{\mathcal{F}}_{2}\in {ℝ}^{{n}_{1}×{n}_{2}×\cdots ×{n}_{m}}$ is denoted as

${\mathcal{F}}_{1}\cdot {\mathcal{F}}_{2}={\sum }_{{i}_{1},{i}_{2},\cdots ,{i}_{m}}{\left({\mathcal{F}}_{1}\right)}_{{i}_{1}{i}_{2}\cdots {i}_{m}}{\left({\mathcal{F}}_{2}\right)}_{{i}_{1}{i}_{2}\cdots {i}_{m}}.$ (3)

Here $\otimes$ denotes the outer product between vectors, i.e. [6] [7] :

${\left({x}^{1}\otimes {x}^{2}\otimes \cdots \otimes {x}^{m}\right)}_{{i}_{1}{i}_{2}\cdots {i}_{m}}={{\prod }_{k=1}^{m}\left({x}^{k}\right)}_{{i}_{k}}.$ (4)
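As a quick illustration of definitions (3) and (4), the inner and outer products can be computed directly in NumPy; this snippet is our own illustration, not code from the paper:

```python
import numpy as np

def outer_product(vectors):
    """x^1 ⊗ x^2 ⊗ ... ⊗ x^m as an m-way array, entry-wise as in Eq. (4)."""
    t = vectors[0]
    for v in vectors[1:]:
        t = np.tensordot(t, v, axes=0)  # append one new mode per factor
    return t

def tensor_inner(f1, f2):
    """Entry-wise inner product of two same-shaped tensors, Eq. (3)."""
    return float(np.sum(f1 * f2))

# sanity check: <x ⊗ y, x ⊗ y> = ||x||^2 * ||y||^2
x, y = np.array([1.0, 2.0]), np.array([3.0, 4.0])
t = outer_product([x, y])
assert t.shape == (2, 2)
assert np.isclose(tensor_inner(t, t), (x @ x) * (y @ y))
```

The same functions extend to any order, e.g. `outer_product([x, y, x])` yields a third-order tensor.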

The above solution is called the leading PC. Once the leading PC is found, the other PCs can be computed sequentially via the so-called deflation technique. For example, the second PC can be obtained as follows: 1) compute the leading PC of the tensor; 2) subtract this leading PC from the original tensor; 3) compute the leading PC of the residual tensor, which is taken as the second PC of the original tensor. The remaining PCs can be obtained in a similar way [8] [9] . The deflation procedure is presented in Algorithm 1. Although the theoretical analysis of the deflation procedure for matrices is well established (see [10] and the references therein for more details), its tensor counterpart has not been completely studied. However, the deflation process does provide an efficient heuristic for computing multiple principal components of a tensor. The running times of full SVD and leading-PC computation at different scales are compared in Table 1. As the size of the matrix increases, the computational cost of the leading PC becomes far smaller than that of the full SVD. Therefore,

Table 1. The comparison of running time (seconds) between full SVD and leading PC.

although more iterations are needed for greedy atom decomposition methods to reach convergence, their total computational cost is much lower than that of SVD-based matrix completion methods. Thus, in the rest of this paper, we focus on finding the leading PC of a tensor.
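The three deflation steps above can be sketched as follows. Since the paper's LADM solver is only introduced later, a plain higher-order power iteration (a standard heuristic for this problem, not the paper's method) serves as a stand-in leading-PC solver:

```python
import numpy as np

def rank_one(x, m):
    """x ⊗ x ⊗ ... ⊗ x with m factors."""
    t = x
    for _ in range(m - 1):
        t = np.tensordot(t, x, axes=0)
    return t

def leading_pc(F, iters=100, seed=0):
    """Approximate leading PC of a symmetric tensor by power iteration."""
    m, n = F.ndim, F.shape[0]
    x = np.random.default_rng(seed).standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(iters):
        g = F
        for _ in range(m - 1):                 # contract m-1 modes with x
            g = np.tensordot(g, x, axes=([0], [0]))
        x = g / np.linalg.norm(g)
    lam = float(np.tensordot(F, rank_one(x, m), axes=m))  # λ = F · x⊗...⊗x
    return lam, x

def deflate(F, num_pcs):
    """Steps 1)-3): find the leading PC, subtract it, recurse on the rest."""
    pcs = []
    for _ in range(num_pcs):
        lam, x = leading_pc(F)
        pcs.append((lam, x))
        F = F - lam * rank_one(x, F.ndim)      # remove recovered component
    return pcs

# demo on a tensor with two orthogonal rank-one components
e1, e2 = np.eye(3)[0], np.eye(3)[1]
F = 3.0 * rank_one(e1, 3) + 1.0 * rank_one(e2, 3)
(l1, v1), (l2, v2) = deflate(F, 2)             # recovers l1 ≈ 3, l2 ≈ 1
```

On this orthogonally decomposable example deflation recovers both components; for general tensors it remains the heuristic discussed above.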

If $\mathcal{F}$ is a super-symmetric tensor, problem (2) can be reduced to [9]

$\begin{array}{l}\underset{x}{\mathrm{min}}-\mathcal{F}\cdot \left(x\otimes x\otimes \cdots \otimes x\right),\\ s.t.\text{}‖x‖=1.\end{array}$ (5)

In fact, algorithms for the super-symmetric tensor PCA problem can be extended to tensors that are not super-symmetric [5] . Therefore, this paper focuses on the PCA of super-symmetric tensors. Problem (5) is NP-hard and is known as the maximum Z-eigenvalue problem. Note that a variety of eigenvalues and eigenvectors of a real symmetric tensor were introduced independently by Lim [11] and Qi [9] in 2005. Since then, various methods have been proposed to find the Z-eigenvalues, which, however, may correspond only to local optima.

Another line of research, exemplified by CANDECOMP (canonical decomposition) and PARAFAC (parallel factors), imposes a rank-one constraint on the tensor to realize the tensor decomposition:

$\begin{array}{l}\mathrm{min}-\mathcal{F}\cdot \mathcal{X},\\ s.t.\text{}{\sum }_{k\in \mathbb{K}\left(n,m\right)}\frac{m!}{{\prod }_{j=1}^{n}{k}_{j}!}{\mathcal{X}}_{{1}^{{k}_{1}}{2}^{{k}_{2}}\cdots {n}^{{k}_{n}}}=1,\\ \text{rank}\left(\mathcal{X}\right)=1,\text{}\mathcal{X}\in {\mathcal{S}}^{{n}^{m}},\end{array}$ (6)

where $\mathcal{X}=x\otimes x\otimes \cdots \otimes x$ and $\mathbb{K}\left(n,m\right)=\left\{k=\left({k}_{1},\cdots ,{k}_{n}\right)\in {ℤ}_{+}^{n}|{\sum }_{j=1}^{n}{k}_{j}=m\right\}$ .

The constraint $\text{rank}\left(\mathcal{X}\right)=1,\text{}\mathcal{X}\in {\mathcal{S}}^{{n}^{m}}$ indicates that $\mathcal{X}$ should be super-symmetric. The first equality constraint is due to the fact that ${\sum }_{k\in \mathbb{K}\left(n,m\right)}\frac{m!}{{\prod }_{j=1}^{n}{k}_{j}!}{\mathcal{X}}_{{1}^{{k}_{1}}{2}^{{k}_{2}}\cdots {n}^{{k}_{n}}}={‖x‖}^{m}=1$ .

The difficulty of problem (6) lies in handling the rank constraint $\text{rank}\left(\mathcal{X}\right)=1$ . Not only is the rank function itself difficult to handle, but even determining the rank of a given tensor is already a hard task, NP-hard in general. One way to deal with this difficulty is to convert the tensor optimization problem into a matrix optimization problem. [5] proved that if the tensor is rank-one, then the embedded matrix must be rank-one too, and vice versa. The tensor PCA problem can thus be solved by means of matrix optimization under a rank-one constraint. For low-rank matrix optimization problems, a nuclear norm penalty is often adopted to enforce a low-rank solution. However, most nuclear norm minimization approaches are based on SVD operations. Given a matrix $X\in {ℝ}^{m×n}$ , the time complexity of an SVD operation is $O\left(m{n}^{2}\right)$ , which brings prohibitive computational cost in large-scale problems (see Table 1).

To avoid the matrix SVD operation, we reformulate problem (5) with a vectorized technique and consider the following optimization problem:

$\begin{array}{l}\underset{{x}^{1},{x}^{2},\cdots ,{x}^{m}}{\mathrm{min}}-\text{Vec}\left(\mathcal{F}\right)\cdot \text{Vec}\left({x}^{1}\otimes {x}^{2}\otimes \cdots \otimes {x}^{m}\right),\\ s.t.\text{}‖{x}^{i}‖=1,i=1,2,\cdots ,m,\\ {x}^{1}={x}^{2}=\cdots ={x}^{m},\end{array}$ (7)

where $\text{Vec}\left(\mathcal{F}\right)$ is the vectorized form of the tensor $\mathcal{F}$ . A related stream of algorithms for solving problem (7) is the ADM-type algorithms [12] [13] , which have recently been shown to be effective for some non-convex optimization problems [14] [15] . However, the results of [14] require several strong assumptions. Besides, the subproblems in ADM are easily solvable only when the linear mappings in the constraints are identities. To address this issue, [16] proposed a linearized ADM (LADM) method that linearizes the quadratic penalty term and adds a proximal term when solving the subproblems. In this paper, we adopt the LADM algorithm to solve the optimization problem (7).
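The objective of (7) is the same number as in (2), just computed on flattened arrays. The following check (with arbitrary random data, our own illustration) also shows that it can be evaluated by contracting $\mathcal{F}$ with one $x^i$ at a time, without materializing the full outer-product tensor:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 3                                   # illustrative sizes
F = rng.standard_normal((n,) * m)
xs = [rng.standard_normal(n) for _ in range(m)]
xs = [x / np.linalg.norm(x) for x in xs]      # enforce ||x^i|| = 1

# Vec(F) . Vec(x^1 ⊗ ... ⊗ x^m), forming the outer product explicitly
T = xs[0]
for v in xs[1:]:
    T = np.tensordot(T, v, axes=0)
obj_vec = float(F.ravel() @ T.ravel())

# the same value by contracting F with one x^i at a time
g = F
for v in xs:
    g = np.tensordot(g, v, axes=([0], [0]))
obj_contract = float(g)

assert np.isclose(obj_vec, obj_contract)
```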

The rest of this paper is organized as follows. In Section 2, a brief review of the LADM algorithm is first given, followed by a detailed description of using LADM with the vectorized technique to solve the tensor principal component problem. Section 3 presents the experiments, in which synthetic tensor data of different orders are used to empirically evaluate the proposed algorithm LADMVTPCA. The last section gives concluding remarks.

2. Linearized Alternating Direction Method (LADM)

In this section, we first review the Linearized Alternating Direction Method of Multipliers (LADM) [16] , and then present the linearized ADM for solving the tensor principal component problem.

2.1. Algorithm

Consider the convex optimization problem

$\begin{array}{l}\underset{x,y}{\mathrm{min}}l\left(x\right)+r\left(y\right),\\ s.t.\text{}Ax-By=0.\end{array}$ (8)

where $x\in {ℝ}^{d}$ , $y\in {ℝ}^{l}$ , $A\in {ℝ}^{p×d}$ , and $B\in {ℝ}^{p×l}$ . It is well known that problem (8) can be solved by the standard ADMM, whose typical iteration is written as

${x}^{k+1}:=\underset{x}{\mathrm{arg}\mathrm{min}}{\mathcal{L}}_{\beta }\left(x,{y}^{k},{\lambda }^{k}\right),$ (9)

${\lambda }^{k+1}:={\lambda }^{k}-\beta \left(A{x}^{k+1}-B{y}^{k}\right),$ (10)

${y}^{k+1}:=\underset{y}{\mathrm{arg}\mathrm{min}}{\mathcal{L}}_{\beta }\left({x}^{k+1},y,{\lambda }^{k+1}\right),$ (11)

where the augmented Lagrangian function ${\mathcal{L}}_{\beta }\left(x,y,\lambda \right)$ is defined as

${\mathcal{L}}_{\beta }\left(x,y,\lambda \right)=l\left(x\right)+r\left(y\right)-〈\lambda ,Ax-By〉+\frac{\beta }{2}{‖Ax-By‖}^{2}.$ (12)

The penalty parameter $\beta >0$ is a constant dual step-size. The inefficiency of the $x$ -subproblem in ADMM motivates the linearized ADMM algorithm [16] , which linearizes $l\left(x\right)$ in the $x$ -subproblem. Specifically, it considers a modified augmented Lagrangian function:

$\begin{array}{l}{\stackrel{¯}{\mathcal{L}}}_{\beta }\left(x,\stackrel{^}{x},y,\lambda \right)=l\left(\stackrel{^}{x}\right)+〈\nabla l\left(\stackrel{^}{x}\right),x-\stackrel{^}{x}〉+r\left(y\right)\\ \text{}-〈\lambda ,Ax-By〉+\frac{\beta }{2}{‖Ax-By‖}^{2}.\end{array}$ (13)

Then the LADM algorithm solves problem (8) by generating a sequence $\left\{{x}^{k+1},{\lambda }^{k+1},{y}^{k+1}\right\}$ as follows:

${x}^{k+1}:=\underset{x}{\mathrm{arg}\mathrm{min}}{\stackrel{¯}{\mathcal{L}}}_{\beta }\left(x,{x}^{k},{y}^{k},{\lambda }^{k}\right),$ (14)

${\lambda }^{k+1}:={\lambda }^{k}-\beta \left(A{x}^{k+1}-B{y}^{k}\right),$ (15)

${y}^{k+1}:=\underset{y}{\mathrm{arg}\mathrm{min}}{\stackrel{¯}{\mathcal{L}}}_{\beta }\left({x}^{k+1},{x}^{k},y,{\lambda }^{k+1}\right).$ (16)

The framework of linearized ADMM is given in Algorithm 2.
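To make the scheme (14)-(16) concrete, here is a toy instance with $l(x)=\frac{1}{2}‖x-a‖^2$ , $r(y)=\frac{1}{2}‖y-b‖^2$ and $A=B=I$ , whose solution is $x=y=(a+b)/2$ . A small proximal term $\frac{\delta}{2}‖x-x^k‖^2$ is added to the $x$ -step for stability (the same device the paper uses later for its subproblems); all numerical values are illustrative choices of ours:

```python
import numpy as np

a, b = np.array([0.0, 2.0]), np.array([2.0, 4.0])   # arbitrary problem data
beta, delta = 1.0, 1.0                              # illustrative parameters
x, y, lam = np.zeros(2), np.zeros(2), np.zeros(2)

for _ in range(200):
    # (14): x-step with l linearized at x^k, plus a proximal term
    grad_l = x - a                                  # ∇l(x^k)
    x = (beta * y + delta * x + lam - grad_l) / (beta + delta)
    # (15): multiplier step
    lam = lam - beta * (x - y)
    # (16): y-step, exact minimization of the quadratic in y
    y = (b - lam + beta * x) / (1.0 + beta)

assert np.allclose(x, (a + b) / 2, atol=1e-6)       # x = y = (a + b)/2
assert np.allclose(x, y, atol=1e-6)
```

Each update is the closed-form minimizer of the corresponding quadratic subproblem; on this instance the iterates converge geometrically to the consensus solution.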

2.2. Linearized ADM with Vectorized Technique

In the rest of this section, we present the linearized ADM method for solving the leading principal component problem. Without loss of generality, we consider a 4th-order tensor. For the leading principal component problem with rank-one constraint, we formulate the original problem as problem (17):

$\begin{array}{l}\underset{x,y,z,w}{\mathrm{min}}-\text{Vec}\left(\mathcal{F}\right)\cdot \text{Vec}\left(x\otimes y\otimes z\otimes w\right),\\ s.t.\text{}x=y,y=z,z=w,\\ {‖x‖}_{2}=1,\text{}{‖y‖}_{2}=1,\text{}{‖z‖}_{2}=1,\text{}{‖w‖}_{2}=1,\end{array}$ (17)

where $x,y,z,w$ are the variables, updated from iteration to iteration; the constraints $x=y,y=z,z=w$ force the variables to coincide at convergence, and the unit-norm constraints ${‖x‖}_{2}=1,\text{}{‖y‖}_{2}=1,\text{}{‖z‖}_{2}=1,\text{}{‖w‖}_{2}=1$ remove the scaling ambiguity of the solution. After moving the equality constraints into the loss function, problem (17) is equivalently reformulated as

$\begin{array}{l}\underset{x,y,z,w}{\mathrm{min}}-\text{Vec}\left(\mathcal{F}\right)\cdot \text{Vec}\left(x\otimes y\otimes z\otimes w\right)+〈{\lambda }_{1},x-y〉\\ \text{}+〈{\lambda }_{2},y-z〉+〈{\lambda }_{3},z-w〉+〈{\lambda }_{4},w-x〉\\ \text{}+\frac{\rho }{2}\left({‖x-y‖}_{2}^{2}+{‖y-z‖}_{2}^{2}+{‖z-w‖}_{2}^{2}+{‖w-x‖}_{2}^{2}\right),\\ s.t.\text{}x,y,z,w\in \mathcal{S},\end{array}$ (18)

where $\mathcal{S}=\left\{x|{‖x‖}_{2}=1\right\}$ is the unit sphere and $\rho$ is a parameter that balances the objective loss and the smoothing terms. It should be noted that tensors of other orders can be handled in a similar way. The augmented Lagrangian function $\mathcal{L}\left(x,y,z,w,{\lambda }_{1},{\lambda }_{2},{\lambda }_{3},{\lambda }_{4},\rho \right)$ is defined as

$\begin{array}{l}\mathcal{L}\left(x,y,z,w,{\lambda }_{1},{\lambda }_{2},{\lambda }_{3},{\lambda }_{4},\rho \right)\hfill \\ :=-\text{Vec}\left(\mathcal{F}\right)\cdot \text{Vec}\left(x\otimes y\otimes z\otimes w\right)+〈{\lambda }_{1},x-y〉\hfill \\ \text{}+〈{\lambda }_{2},y-z〉+〈{\lambda }_{3},z-w〉+〈{\lambda }_{4},w-x〉\hfill \\ \text{}+\frac{\rho }{2}\left({‖x-y‖}_{2}^{2}+{‖y-z‖}_{2}^{2}+{‖z-w‖}_{2}^{2}+{‖w-x‖}_{2}^{2}\right),\hfill \end{array}$ (19)

where $x,y,z,w\in \mathcal{S}$ . In the $k$ -th iteration, we denote the variables by ${x}^{k},{y}^{k},{z}^{k},{w}^{k}$ and the penalty parameter by ${\rho }^{k}$ . Given ${x}^{k},{y}^{k},{z}^{k},{w}^{k},{\rho }^{k}$ , the algorithm iterates as follows:

${x}^{k+1}:=\underset{x\in \mathcal{S}}{\mathrm{arg}\mathrm{min}}\mathcal{L}\left(x,{y}^{k},{z}^{k},{w}^{k},{\rho }^{k}\right),$ (20a)

${y}^{k+1}:=\underset{y\in \mathcal{S}}{\mathrm{arg}\mathrm{min}}\mathcal{L}\left({x}^{k+1},y,{z}^{k},{w}^{k},{\rho }^{k}\right),$ (20b)

${z}^{k+1}:=\underset{z\in \mathcal{S}}{\mathrm{arg}\mathrm{min}}\mathcal{L}\left({x}^{k+1},{y}^{k+1},z,{w}^{k},{\rho }^{k}\right),$ (20c)

${w}^{k+1}:=\underset{w\in \mathcal{S}}{\mathrm{arg}\mathrm{min}}\mathcal{L}\left({x}^{k+1},{y}^{k+1},{z}^{k+1},w,{\rho }^{k}\right),$ (20d)

$\text{Update}\text{}{\lambda }_{1},{\lambda }_{2},{\lambda }_{3},{\lambda }_{4},$ (20e)

${\rho }^{k+1}:=\left\{\begin{array}{ll}\alpha {\rho }^{k},\hfill & \text{balanced case},\hfill \\ {\rho }^{k}/\alpha ,\hfill & \text{otherwise}\text{.}\hfill \end{array}$ (20f)

In the following, we show how to solve these subproblems in a linearized way with the vectorized technique. After all the subproblems are solved, we give the framework that summarizes our algorithm for solving (18) in Algorithm 3. In order to reach the saddle point quickly and improve the quality of the solution, we adjust the parameter $\rho$ adaptively to balance the decrease speed of the two parts of the objective.

In the traditional way, ${x}^{k+1}$ is obtained by minimizing $\mathcal{L}$ with respect to the variable $x$ while $y,z,w,\rho$ are fixed at ${y}^{k},{z}^{k},{w}^{k},{\rho }^{k}$ , respectively; the resulting subproblem is

$\begin{array}{c}{x}^{k+1}:=\underset{x\in \mathcal{S}}{\mathrm{arg}\mathrm{min}}-\text{Vec}\left(\mathcal{F}\right)\cdot \text{Vec}\left(x\otimes {y}^{k}\otimes {z}^{k}\otimes {w}^{k}\right)+〈{\lambda }_{1},x-{y}^{k}〉\\ \text{}+〈{\lambda }_{2},{y}^{k}-{z}^{k}〉+〈{\lambda }_{3},{z}^{k}-{w}^{k}〉+〈{\lambda }_{4},{w}^{k}-x〉\\ \text{}+\frac{\rho }{2}\ast \left({‖x-{y}^{k}‖}_{2}^{2}+{‖{y}^{k}-{z}^{k}‖}_{2}^{2}+{‖{z}^{k}-{w}^{k}‖}_{2}^{2}+{‖{w}^{k}-x‖}_{2}^{2}\right).\end{array}$ (21)

We slightly modify the above LADM algorithm by imposing a proximal term $\frac{\delta }{2}{‖x-{x}^{k}‖}^{2}$ on the subproblem of $x$ and update ${x}^{k+1}$ via

$\begin{array}{c}{x}^{k+1}:=\underset{x\in \mathcal{S}}{\mathrm{arg}\mathrm{min}}-\text{Vec}\left(\mathcal{F}\right)\cdot \text{Vec}\left(x\otimes {y}^{k}\otimes {z}^{k}\otimes {w}^{k}\right)+〈{\lambda }_{1},x-{y}^{k}〉\\ \text{}+〈{\lambda }_{2},{y}^{k}-{z}^{k}〉+〈{\lambda }_{3},{z}^{k}-{w}^{k}〉+〈{\lambda }_{4},{w}^{k}-x〉\\ \text{}+\frac{\rho }{2}\ast \left({‖x-{y}^{k}‖}_{2}^{2}+{‖{y}^{k}-{z}^{k}‖}_{2}^{2}+{‖{z}^{k}-{w}^{k}‖}_{2}^{2}+{‖{w}^{k}-x‖}_{2}^{2}\right)\\ \text{}+\frac{\delta }{2}{‖x-{x}^{k}‖}^{2}.\end{array}$ (22)

The minimum is attained when the derivative of $\mathcal{L}$ with respect to $x$ is set to zero; using this condition, we obtain an equation that characterizes the solution of the subproblem:

$\begin{array}{l}0=-{g}^{k}+{\lambda }_{1}-{\lambda }_{4}+\delta \left(x-{x}^{k}\right)+{\rho }^{k}\left(2x-{y}^{k}-{w}^{k}\right),\hfill \end{array}$ (23)

where ${g}^{k}$ is the vector whose $i$ -th entry is ${\left({g}^{k}\right)}_{i}=\mathcal{F}\cdot \left({e}_{i}\otimes {y}^{k}\otimes {z}^{k}\otimes {w}^{k}\right)$ , with ${e}_{i}$ the $i$ -th standard basis vector, i.e. the tensor $\mathcal{F}$ contracted with ${y}^{k},{z}^{k},{w}^{k}$ along its last three modes. Solving this equation gives ${x}^{k+1}$ . For brevity, we present only the solution of the first subproblem here; the $y$ , $z$ and $w$ subproblems are handled in the same way.
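Setting the gradient of (22) to zero yields a closed-form unconstrained update, which can then be renormalized onto $\mathcal{S}$ . The sketch below is our reading of this step for a 4th-order tensor, with the contraction computed by `einsum`; the renormalization is a common heuristic for the spherical constraint, not spelled out in the text:

```python
import numpy as np

def x_update(F, xk, y, z, w, lam1, lam4, rho, delta):
    """One linearized/proximal x-step for a 4th-order tensor (Eq. (22)):
    solve the stationarity condition, then renormalize onto the sphere."""
    # g_i = F · (e_i ⊗ y ⊗ z ⊗ w): contract F with y, z, w
    g = np.einsum('ijkl,j,k,l->i', F, y, z, w)
    # stationarity: -g + lam1 - lam4 + delta*(x - xk) + rho*(2x - y - w) = 0
    x = (g - lam1 + lam4 + delta * xk + rho * (y + w)) / (2.0 * rho + delta)
    return x / np.linalg.norm(x)

# illustrative data: a random 4th-order tensor and random unit iterates
rng = np.random.default_rng(2)
n = 5
F = rng.standard_normal((n, n, n, n))
unit = lambda v: v / np.linalg.norm(v)
xk, y, z, w = (unit(rng.standard_normal(n)) for _ in range(4))
lam1, lam4 = rng.standard_normal(n), rng.standard_normal(n)
rho, delta = 1.0, 0.5

x_new = x_update(F, xk, y, z, w, lam1, lam4, rho, delta)
assert np.isclose(np.linalg.norm(x_new), 1.0)

# the pre-normalization solution satisfies the stationarity equation exactly
g = np.einsum('ijkl,j,k,l->i', F, y, z, w)
x_un = (g - lam1 + lam4 + delta * xk + rho * (y + w)) / (2 * rho + delta)
resid = -g + lam1 - lam4 + delta * (x_un - xk) + rho * (2 * x_un - y - w)
assert np.allclose(resid, 0.0, atol=1e-12)
```

The $y$ , $z$ and $w$ steps follow the same pattern with their own multiplier pairs and neighboring variables.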

3. Numerical Experiments

In this section, we report numerical experiments and results of Algorithm 3 for solving the tensor leading PC problem (18). Since ADMPCA [5] is one of the fastest methods based on matrix factorization and outperforms the SVD-based methods, we mainly compare our approach with ADMPCA proposed in [5] . The MATLAB code of ADMPCA was downloaded from Professor Shiqian Ma's homepage [5] .

We apply our approach to synthetic datasets. The data are generated with uniformly distributed eigenvectors ${v}_{i}$ and eigenvalues ${\lambda }_{i}$ , and the tensor is generated through the summation $\mathcal{F}={\sum }_{i=1}^{n}{\lambda }_{i}\ast {v}_{i}\otimes {v}_{i}\otimes \cdots \otimes {v}_{i}$ , in which the rank of the tensor is controlled by the number of eigenvectors $n$ , the order of the tensor by the number of factors ${v}_{i}$ in the outer product, and the dimension of the tensor by the dimension of the vectors ${v}_{i}$ .
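The generation procedure can be sketched as follows; the exact distributions and parameter ranges are not specified in the text beyond "uniform", so the ranges used here are our illustrative choices:

```python
import numpy as np

def synth_tensor(rank, order, dim, seed=0):
    """F = sum_{i=1}^{rank} lambda_i * v_i ⊗ v_i ⊗ ... ⊗ v_i (order factors)."""
    rng = np.random.default_rng(seed)
    F = np.zeros((dim,) * order)
    for _ in range(rank):
        lam = rng.uniform(0.5, 2.0)            # illustrative eigenvalue range
        v = rng.uniform(-1.0, 1.0, size=dim)
        v /= np.linalg.norm(v)                 # unit eigenvector
        t = v
        for _ in range(order - 1):
            t = np.tensordot(t, v, axes=0)
        F += lam * t
    return F

F = synth_tensor(rank=2, order=4, dim=8)
assert F.shape == (8, 8, 8, 8)
# sanity: F is super-symmetric (invariant under mode permutations)
assert np.allclose(F, F.transpose(1, 0, 2, 3))
assert np.allclose(F, F.transpose(3, 2, 1, 0))
```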

We compare LADMVTPCA with ADMPCA for solving problem (18). In Table 2, we report results on randomly created tensors with $\text{order}=4$ and $\text{dimension}=4,8,16,32$ . “objDiff” denotes the relative difference of the solutions obtained by ADMPCA and LADMVTPCA,

$\text{objDiff}=\frac{{‖{\mathcal{F}}_{\text{LADMVTPCA}}-{\mathcal{F}}_{\text{ADMPCA}}‖}_{F}}{\mathrm{max}\left\{1,{‖{\mathcal{F}}_{\text{ADMPCA}}‖}_{F}\right\}}$ . “objVal” denotes the relative difference of the leading eigenvectors, $\text{objVal}=\frac{{‖{v}_{\text{LADMVTPCA}}-{v}_{\text{ADMPCA}}‖}_{F}}{\mathrm{max}\left\{1,{‖{v}_{\text{ADMPCA}}‖}_{F}\right\}}$ . “Time”

Table 2. Results of different algorithms for solving randomly generated tensor’s leading principal component.

denotes the CPU time (in seconds) of ADMPCA and LADMVTPCA, respectively. From Table 2 we can see that LADMVTPCA produced solutions comparable to those of ADMPCA; however, LADMVTPCA was much faster, especially for large-scale problems, i.e., n = 16, 32. Note that when n = 16, LADMVTPCA was about 4000 times faster than ADMPCA.
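The two relative-difference metrics above can be sketched as one helper (our wording of the formulas):

```python
import numpy as np

def rel_diff(a, b):
    """||a - b||_F / max(1, ||b||_F), as in the objDiff / objVal metrics."""
    return float(np.linalg.norm(a - b) / max(1.0, np.linalg.norm(b)))

# identical solutions give 0; the max(1, .) clamp keeps the ratio stable
# when the reference norm is small
v_ref = np.array([0.6, 0.8])                   # a unit-norm reference vector
assert rel_diff(v_ref, v_ref) == 0.0
assert np.isclose(rel_diff(np.array([1.0, 0.0]), np.zeros(2)), 1.0)
```

`np.linalg.norm` with default arguments returns the Frobenius (2-) norm of the flattened array, so the same helper applies to both tensors and eigenvectors.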

4. Conclusion

Tensor PCA is an emerging research area with many important applications in image processing, data analysis, statistical learning, and bioinformatics. In this paper, we propose a new efficient and scalable algorithm for tensor principal component analysis, called LADMVTPCA. A vectorized technique is introduced in the processing procedure, and a linearized alternating direction method is used to solve the optimization problem. LADMVTPCA provides an efficient way to compute the leading PC. We empirically evaluate the proposed algorithm on synthetic tensor data of different orders. The results show that LADMVTPCA has a much lower computational cost than the matrix factorization based method; especially for large-scale problems, the matrix factorization based method is far more time-consuming than ours.

Conflicts of Interest

The authors declare no conflicts of interest.

[1] Kolda, T.G. and Bader, B.W. (2009) Tensor Decompositions and Applications. SIAM Review, 51, 455-500. https://doi.org/10.1137/07070111X
[2] Wang, H. and Ahuja, N. (2004) Compact Representation of Multidimensional Data Using Tensor Rank-One Decomposition. Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004), Cambridge, 23-26 August 2004, 104-107.
[3] Huang, F., Niranjan, U., Hakeem, M.U. and Anandkumar, A. (2013) Fast Detection of Overlapping Communities via Online Tensor Methods. arXiv:1309.0787
[4] Qi, L., Yu, G. and Wu, E.X. (2010) Higher Order Positive Semidefinite Diffusion Tensor Imaging. SIAM Journal on Imaging Sciences, 3, 416-433. https://doi.org/10.1137/090755138
[5] Jiang, B., Ma, S. and Zhang, S. (2014) Tensor Principal Component Analysis via Convex Optimization. Mathematical Programming, 1-35.
[6] Hitchcock, F.L. (1927) The Expression of a Tensor or a Polyadic as a Sum of Products. Journal of Mathematics and Physics, 6, 164-189.
[7] Kofidis, E. and Regalia, P.A. (2002) On the Best Rank-1 Approximation of Higher-Order Supersymmetric Tensors. SIAM Journal on Matrix Analysis and Applications, 23, 863-884. https://doi.org/10.1137/S0895479801387413
[8] Kolda, T.G. and Mayo, J.R. (2011) Shifted Power Method for Computing Tensor Eigenpairs. SIAM Journal on Matrix Analysis and Applications, 32, 1095-1124. https://doi.org/10.1137/100801482
[9] Qi, L. (2005) Eigenvalues of a Real Supersymmetric Tensor. Journal of Symbolic Computation, 40, 1302-1324. https://doi.org/10.1016/j.jsc.2005.05.007
[10] Mackey, L.W. (2009) Deflation Methods for Sparse PCA. Advances in Neural Information Processing Systems, 21, 1017-1024.
[11] Lim, L.H. (2005) Singular Values and Eigenvalues of Tensors: A Variational Approach. IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing, Puerto Vallarta, Mexico, 13-15 December 2005, 129-132.
[12] Zhong, L.W. and Kwok, J.T. (2013) Fast Stochastic Alternating Direction Method of Multipliers. Proceedings of the 30th International Conference on Machine Learning, Atlanta, 2013.
[13] Zhao, P., Yang, J., Zhang, T. and Li, P. (2013) Adaptive Stochastic Alternating Direction Method of Multipliers. arXiv:1312.4564
[14] Magnússon, S., Weeraddana, P.C., Rabbat, M.G. and Fischione, C. (2014) On the Convergence of Alternating Direction Lagrangian Methods for Nonconvex Structured Optimization Problems. arXiv:1409.8033
[15] Hong, M.Y., Luo, Z.-Q. and Razaviyayn, M. (2015) Convergence Analysis of Alternating Direction Method of Multipliers for a Family of Nonconvex Problems. 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brisbane, 19-24 April 2015, 3836-3840. https://doi.org/10.1109/ICASSP.2015.7178689
[16] Yang, J. and Yuan, X. (2013) Linearized Augmented Lagrangian and Alternating Direction Methods for Nuclear Norm Minimization. Mathematics of Computation, 82, 301-329.