On Decompositions of Real Polynomials Using Mathematical Programming Methods

We present a procedure that computes an SOS (sum of squares) decomposition of a given real polynomial in n variables, if such a decomposition exists. For the case of real polynomials in n non-commutative variables we extend this procedure to obtain a sum of hermitian squares (SOHS) decomposition whenever one exists. This extended procedure is the main scientific contribution of the paper.


Introduction
Operations research, especially mathematical programming as a subfield, very often makes progress by using results from other ("pure") areas of mathematics, such as analysis, algebra, probability, etc. The contrary rarely happens, i.e., that new methods developed by the mathematical programming community inspire further research in pure mathematics.
One of the main breakthroughs in operations research in recent decades was the development of theoretically and practically efficient interior-point methods for convex optimization problems. These methods were initially developed for linear programming (LP) problems and later extended to many other convex optimization problems. The first non-trivial class of optimization problems to which the interior-point methods were extended was the class of linear programs over the cone of positive semidefinite matrices or over the second-order cone (see Section 2 and [1]).
Once we became able to solve semidefinite programs efficiently (i.e., we can find an $\varepsilon$-optimal solution in time which is a polynomial function of $\log(1/\varepsilon)$ and the size of the input data), this class of problems and the methods to solve them (called semidefinite programming) became a very important tool in optimization, control and many other areas of applied mathematics and engineering; see e.g. [1].
Computing the global minimum of a given real polynomial over a set defined by polynomial inequalities (we call such a set a semi-algebraic set) is an NP-hard problem, since it includes linear binary problems. Several authors, in particular Lasserre [2][3][4], Parrilo [5], Parrilo and Sturmfels [6] and Schweighofer [7,8], have shown how to solve such a problem approximately by a sequence of semidefinite programs.
The main results needed to construct such a sequence are the fact that if a given polynomial is a sum of squares (SOS), then it is non-negative, and the dual theory of moments [2]. A precise and comprehensive overview of the results on this topic has been given by M. Laurent in [9].
Checking whether a given polynomial is non-negative is an NP-hard problem, but checking whether a given polynomial is SOS can be done efficiently, in theory and in practice, by semidefinite programming. This topic has been well explored in recent years and there are currently software packages available to detect SOS polynomials and to do polynomial optimization. Readers interested in solving sums of squares problems for commuting polynomials are referred to one of the excellent existing packages SOSTOOLS [10,11], GloptiPoly [12], YALMIP [13,14], and SparsePOP [15].
Compared to the commutative case, the problem of writing a non-commutative polynomial as a sum of hermitian squares (SOHS) is much less explored, both theoretically and practically. However, several results show that this area is interesting and important.
Helton [16] proved that a given real polynomial in non-commuting (NC) variables is SOHS if and only if it yields a positive semidefinite (PSD) matrix after substituting the variables by symmetric real matrices of the same size. For a beautiful exposition, we refer the reader to [17].
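Helton's criterion is easy to probe numerically. The following sketch is our own illustration (not part of the software discussed later): it takes the SOHS polynomial $p = x_1^2 + x_1x_2 + x_2x_1 + x_2^2 = (x_1+x_2)^*(x_1+x_2)$, substitutes random symmetric matrices, and checks that the resulting matrix is always PSD, as the theorem predicts.

```python
import numpy as np

rng = np.random.default_rng(0)

def p(X1, X2):
    # p = x1^2 + x1*x2 + x2*x1 + x2^2 = (x1 + x2)^*(x1 + x2), an SOHS polynomial
    return X1 @ X1 + X1 @ X2 + X2 @ X1 + X2 @ X2

for _ in range(100):
    A = rng.standard_normal((4, 4))
    B = rng.standard_normal((4, 4))
    X1, X2 = (A + A.T) / 2, (B + B.T) / 2   # random symmetric matrices
    eigvals = np.linalg.eigvalsh(p(X1, X2))
    assert eigvals.min() >= -1e-9           # p(X1, X2) is PSD, as Helton's theorem predicts
print("p(X1, X2) was PSD in all trials")
```

Note that this only illustrates the "only if" direction on one polynomial; the full theorem requires PSD-ness for symmetric matrices of every size.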
Together with coworkers, Helton pursued this line of research on NC polynomials further, studied positivity and convexity of NC polynomials and gave applications to control theory, optimization, systems engineering, etc.; see [18] for a nice survey of these beginnings of free semialgebraic geometry. The first author in [19] connected sums of hermitian squares of NC polynomials to an old open problem of Connes on von Neumann algebras and, somewhat related, found applications to mathematical physics [20]. Many of these results were obtained with the aid of computer programs written in an ad-hoc manner.
Despite the fast rise of free semialgebraic geometry, it seems that [21] is the first publication about a theoretical algorithm and publicly available software for computing (or determining the existence of) sums of hermitian squares (SOHS) decompositions of NC polynomials.
In this paper we illustrate the procedures mentioned above on a number of well-chosen examples.

Notation
In the paper we use standard notation from optimization and algebra. By $X \succeq 0$ we denote that $X$ is positive semidefinite (i.e., $X$ is symmetric and has only non-negative eigenvalues). The scalar product of two matrices $X$ and $Y$ of the same size is $\langle X, Y\rangle = \operatorname{trace}(X^T Y)$. By $I$ we denote the identity matrix.

The set of real polynomials in $n$ (commutative) variables (algebraists call this set the algebra of real polynomials) is denoted by $\mathbb{R}[x] = \mathbb{R}[x_1,\dots,x_n]$. Its elements are therefore polynomials, which can be written as $p(x) = \sum_i c_i x^{\alpha_i}$, where $c_i \in \mathbb{R}$ are the coefficients and $\alpha_i \in \mathbb{N}_0^n$ are the exponent vectors, which define the monomials $x^{\alpha_i} = x_1^{\alpha_{i,1}} \cdots x_n^{\alpha_{i,n}}$. The degree of the monomial $x^{\alpha_i}$ is $|\alpha_i| = \sum_j \alpha_{i,j}$ and the degree of the polynomial $p$ is the maximum degree of its monomials. If all monomials in $p$ have the same degree $d$, we say that $p$ is a $d$-form. Note that the coefficients $c_i$ and the exponent vectors $\alpha_i$ determine the polynomial $p$ uniquely.

The set (free algebra) of non-commutative polynomials in the variables $X_1,\dots,X_n$ is denoted by $\mathbb{R}\langle X\rangle$. Monomials such as $X_1X_2$ and $X_2X_1$, which are equal in the commutative case, are no longer equal in the non-commutative case; instead we keep working with monomials, i.e., with words in the letters $X_1,\dots,X_n$. The involution $*$ reverses words, and a polynomial $p \in \mathbb{R}\langle X\rangle$ is symmetric if $p^* = p$.

Semidefinite Programming
Semidefinite programming consists of problems where we are interested in the optimum of a linear function subject to linear constraints and the additional constraint that the variables form a positive semidefinite matrix. More precisely, given symmetric matrices $C, A_1, \dots, A_m$ and scalars $b_1, \dots, b_m$, the semidefinite program in primal standard form (PSDP) is
$$\inf_X \ \langle C, X\rangle \quad \text{s.t.} \quad \langle A_i, X\rangle = b_i,\ i = 1,\dots,m, \quad X \succeq 0.$$
The conic dual problem to PSDP is the semidefinite program in the standard dual form (DSDP)
$$\sup_{y,\,Z} \ b^T y \quad \text{s.t.} \quad \sum_{i=1}^m y_i A_i + Z = C, \quad Z \succeq 0.$$
As we already mentioned, the importance of semidefinite programming was spurred by the development of efficient methods, which can find an $\varepsilon$-optimal solution in time polynomial in $n$, $m$ and $\log(1/\varepsilon)$, where $n$ is the order of the matrix variables $Z$ and $X$. There exist several freeware packages which also find such solutions in practice. If the problem is of medium size (i.e., $n \le 1000$ and $m \le 10{,}000$), these packages are based on interior-point methods, while packages for larger semidefinite programs use some variant of first-order methods (see the web page [22] with a comprehensive list of state-of-the-art SDP solvers and also [23]).
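As a concrete toy illustration of the PSDP/DSDP pair above, the following numpy sketch verifies weak duality for hand-picked feasible points (the data $C$, $A_1$, $b_1$ are invented for the example): for any primal-feasible $X$ and dual-feasible $(y, Z)$ we have $\langle C, X\rangle - b^T y = \langle Z, X\rangle \ge 0$.

```python
import numpy as np

# Illustrative data: minimize <C, X> s.t. <A1, X> = b1, X PSD
C = np.array([[2.0, 1.0], [1.0, 3.0]])
A1 = np.eye(2)                       # constraint: trace(X) = 2
b1 = 2.0

X = np.eye(2)                        # primal feasible: PSD and trace(X) = 2
y = np.array([1.0])                  # dual feasible if Z = C - y1*A1 is PSD
Z = C - y[0] * A1                    # Z = [[1, 1], [1, 2]], positive definite

assert np.linalg.eigvalsh(X).min() >= 0 and np.linalg.eigvalsh(Z).min() >= 0
gap = np.trace(C @ X) - b1 * y[0]    # duality gap <C, X> - b^T y
assert abs(gap - np.trace(Z @ X)) < 1e-12 and gap >= 0
print("weak duality gap:", gap)      # here the gap is 3.0
```

An SDP solver closes this gap by optimizing over $X$ and $(y, Z)$ simultaneously; the sketch only checks the inequality that makes the bound valid.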

SOS Decompositions of Commutative Polynomials by SDP
An SOS decomposition appears naturally when we are interested in finding the global optimum of a given polynomial. Indeed, the problem
$$p^* = \inf_{x \in \mathbb{R}^n} p(x) \tag{1}$$
can be rewritten as
$$p^* = \sup\{\lambda \mid p(x) - \lambda \ge 0 \ \text{for all } x \in \mathbb{R}^n\}.$$
Both problems are in general very difficult (i.e., NP-hard), therefore we are forced to relax them in order to obtain a tractable problem. One of the possibilities is to use the SOS decomposition.
A polynomial $p \in \mathbb{R}[x]$ has a sum of squares decomposition if there exist polynomials $q_1, \dots, q_k \in \mathbb{R}[x]$ such that $p = \sum_i q_i^2$. If the polynomial $p$ can be written as a sum of squares, then clearly $p \ge 0$, while the converse is in general not true. This leads to the following lower bound for the problem (1):
$$p^{\mathrm{sos}} = \sup\{\lambda \mid p - \lambda \ \text{is SOS}\} \le p^*.$$
The inequality above might be strict, as the following example shows.
Example 3. The polynomial
$$M(x_1, x_2, x_3) = x_1^4 x_2^2 + x_1^2 x_2^4 + x_3^6 - 3\,x_1^2 x_2^2 x_3^2$$
is well known as the Motzkin form and is defined by four coefficients and four exponent vectors. It has been shown (see [24,25]) that $M(x_1, x_2, x_3) \ge 0$ for all $x \in \mathbb{R}^3$, while $M$ has no SOS decomposition.
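The non-negativity of the Motzkin form follows from the AM-GM inequality applied to its three positive terms; random sampling is consistent with this, as the following quick sketch (our own illustration) shows. Sampling of course proves nothing about the missing SOS decomposition, which is what the SDP approach below detects.

```python
import numpy as np

def motzkin(x1, x2, x3):
    # M(x1, x2, x3) = x1^4 x2^2 + x1^2 x2^4 + x3^6 - 3 x1^2 x2^2 x3^2
    return x1**4 * x2**2 + x1**2 * x2**4 + x3**6 - 3 * x1**2 * x2**2 * x3**2

rng = np.random.default_rng(1)
pts = rng.uniform(-2, 2, size=(10_000, 3))
vals = motzkin(pts[:, 0], pts[:, 1], pts[:, 2])

# Non-negativity follows from AM-GM on x1^4 x2^2, x1^2 x2^4, x3^6;
# equality holds e.g. at (1, 1, 1).
assert vals.min() >= -1e-9
assert motzkin(1.0, 1.0, 1.0) == 0.0
```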
There exist only a few classes of polynomials for which a polynomial from the class is non-negative if and only if it has an SOS decomposition. By a classical result of Hilbert, this is true for all forms in two variables, for all quadratic forms (2-forms), and for 4-forms in three variables. Obviously the Motzkin form from above is not in any of these classes.
Testing whether a given polynomial has an SOS decomposition can be done efficiently by semidefinite programming, as follows from the following theorem; for the proof see e.g. the proof of the non-commutative version in [21].

Theorem 1. A polynomial $p \in \mathbb{R}[x]$ of degree $2d$ has an SOS decomposition if and only if $p = U^T Q U$ for some $Q \succeq 0$, where $U$ is the vector of all monomials of degree at most $d$.

Example 4. Consider the polynomial $p(x_1, x_2) = 2x_1^4 + 2x_1^3 x_2 - x_1^2 x_2^2 + 5x_2^4$ with the monomial vector $U = (x_1^2,\ x_2^2,\ x_1 x_2)^T$. For the matrix $Q$ we have linear constraints: its components must coincide with the coefficients of $p$. We have in $p$ the monomial $x_1^4$ with coefficient 2, therefore it must hold that $q_{1,1} = 2$, and similarly $q_{2,2} = 5$. The monomial $x_1^3 x_2$ has coefficient 2 in $p$ and can be obtained only as $x_1^2 \cdot x_1 x_2$, hence we also have the constraint $q_{1,3} + q_{3,1} = 2$. The monomial $x_1^2 x_2^2$ can be obtained as a product of $x_1^2$ and $x_2^2$ or as a square of $x_1 x_2$, hence $q_{1,2} + q_{2,1} + q_{3,3} = -1$. The monomial $x_1 x_2^3$ does not appear in $p$, hence $q_{2,3} + q_{3,2} = 0$, and since $Q$ is symmetric, $q_{2,3} = q_{3,2} = 0$. If we put things together, we are looking for a positive semidefinite matrix $Q$ with components satisfying the constraints above. Note that the objective function is not important here, i.e., we have a semidefinite feasibility problem. We can solve it by hand: by setting $q_{3,3} = 5$ we obtain $q_{1,2} = q_{2,1} = -3$. The matrix is now completely defined:

$$Q = \begin{pmatrix} 2 & -3 & 1 \\ -3 & 5 & 0 \\ 1 & 0 & 5 \end{pmatrix}$$

We have the following SOS decomposition:
$$p = \tfrac{1}{2}\,(2x_1^2 - 3x_2^2 + x_1 x_2)^2 + \tfrac{1}{2}\,(x_2^2 + 3x_1 x_2)^2.$$

Remark 2. Even though we have an SDP feasibility problem, we usually use the objective function $\operatorname{trace}(Q)$, since it is widely accepted as an efficient heuristic to obtain a feasible solution with lowest rank. This is important since the rank of $Q$ is exactly the number of squares in the SOS decomposition.
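The certificate from Example 4 can be double-checked numerically. The sketch below (our own verification, not part of the paper's software) confirms that $Q$ is PSD of rank 2 and that $U^T Q U$ agrees with $p$, and with the two-square decomposition, at random points.

```python
import numpy as np

Q = np.array([[2.0, -3.0, 1.0],
              [-3.0, 5.0, 0.0],
              [1.0, 0.0, 5.0]])

assert np.linalg.eigvalsh(Q).min() >= -1e-9   # Q is positive semidefinite
assert np.linalg.matrix_rank(Q) == 2          # two squares in the SOS decomposition

rng = np.random.default_rng(0)
for x1, x2 in rng.standard_normal((50, 2)):
    U = np.array([x1**2, x2**2, x1 * x2])     # monomial vector of Example 4
    p = 2*x1**4 + 2*x1**3*x2 - x1**2*x2**2 + 5*x2**4
    sos = 0.5 * (2*x1**2 - 3*x2**2 + x1*x2)**2 + 0.5 * (x2**2 + 3*x1*x2)**2
    assert abs(U @ Q @ U - p) < 1e-8          # U^T Q U reproduces p
    assert abs(sos - p) < 1e-8                # the two-square decomposition is correct
print("certificate verified")
```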
In general we must include in the vector $U$ all monomials of degree $d$ if the polynomial is a $2d$-form (and all monomials of degree at most $d$ otherwise), so $U$ has up to $\binom{n+d}{d}$ components. If $p$ is sparse (has only a few monomials), it is possible to considerably decrease the number of monomials in $U$. We can use a result, first formulated in [26], that characterizes the monomials that can appear in a sum of squares representation. Define the Newton polytope $N(p)$ of a given polynomial $p$ of degree $2d$ as the convex hull of the exponent vectors $\alpha$ which appear in $p$. Then it can be shown that the only monomials $x^\alpha$ that can appear in a sum of squares representation of $p$ are those integer lattice points $\alpha$ for which $2\alpha$ lies in $N(p)$.

Example 5. If $p$ is the polynomial from Example 4, then $N(p)$ is the convex hull of the exponent vectors $(4,0)$, $(3,1)$, $(2,2)$, $(0,4)$, which all lie on the segment from $(4,0)$ to $(0,4)$. The integer points $\alpha$ with $2\alpha \in N(p)$ are exactly $(2,0)$, $(1,1)$, $(0,2)$; hence the Newton polytope defines exactly the monomials $x_1^2$, $x_1 x_2$, $x_2^2$ which appear in the SOS decomposition. Since $p$ is a 4-form in two variables, it also follows from the above that $p$ has an SOS decomposition if and only if it is non-negative.
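The Newton polytope reduction is straightforward to implement: membership $2\alpha \in N(p)$ can be tested with a small linear program. The sketch below (our own illustration using scipy, with the exponent data of Example 4) works even when the hull is degenerate, as it is here.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Exponent vectors of p from Example 4: 2x1^4 + 2x1^3 x2 - x1^2 x2^2 + 5x2^4
exponents = np.array([(4, 0), (3, 1), (2, 2), (0, 4)])

def in_convex_hull(point, points):
    # point is in conv(points) iff some lambda >= 0 with sum(lambda) = 1
    # satisfies lambda @ points = point; this is an LP feasibility problem.
    n = len(points)
    A_eq = np.vstack([points.T, np.ones(n)])
    b_eq = np.append(point, 1.0)
    res = linprog(np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.success

d = 2
candidates = [a for a in itertools.product(range(d + 1), repeat=2)
              if sum(a) <= d and in_convex_hull(2 * np.array(a), exponents)]
print(candidates)   # exponents of monomials that may appear in an SOS decomposition
```

For this data the candidates are exactly $(2,0)$, $(1,1)$, $(0,2)$, i.e., the monomials $x_1^2$, $x_1x_2$, $x_2^2$ of Example 5.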
The package SOSTOOLS and some other packages for SOS decompositions are essentially based on the Newton polytope algorithm.

SOHS Decompositions for Non-Commutative Polynomials
A non-commutative polynomial $p \in \mathbb{R}\langle X\rangle$ has a sum-of-hermitian-squares (SOHS) decomposition if there exist polynomials $q_1, \dots, q_k \in \mathbb{R}\langle X\rangle$ such that $p = \sum_i q_i^* q_i$. One can prove a result similar to Theorem 1: $p \in \mathbb{R}\langle X\rangle$ of degree $2d$ has an SOHS decomposition if and only if
$$p = U^* Q U \tag{2}$$
for some $Q \succeq 0$, where $U$ is the vector of all monomials (words) of degree at most $d$.
If we consider all possible words of degree at most $d$, the vector $U$ has $1 + n + n^2 + \cdots + n^d = (n^{d+1}-1)/(n-1)$ components. This is much larger compared to the commutative case. Nevertheless, we also have many criteria which tell us when we can leave out some of the monomials from $U$. We demonstrate the procedure to find an SOHS decomposition on the following example.
Consider a symmetric polynomial $p \in \mathbb{R}\langle x_1, x_2\rangle$ of degree 4. Since $p$ has degree 4 in two variables, there are $1 + 2 + 4 = 7$ words which might appear in the vector $U$ from the SOHS decomposition:
$$U = (1,\ x_1,\ x_2,\ x_1^2,\ x_1 x_2,\ x_2 x_1,\ x_2^2)^T.$$
Due to the structure of $p$ we can cross out some components of $U$: for any word from $U$ of (maximum) degree 2, its hermitian square must appear in $p$. If this is not the case for some word, we can eliminate it from $U$. Suppose that for our $p$ this allows us to cross out $x_1^2$ and $x_2^2$ from $U$, so that
$$U = (1,\ x_1,\ x_2,\ x_1 x_2,\ x_2 x_1)^T.$$
We then have $p = U^* Q U$ if and only if $Q$ satisfies linear equations matching the entries of $Q$ to the coefficients of $p$. Besides these constraints we also get a bunch of homogeneous equations: if $x_2^2$ does not appear in $p$, $Q$ must satisfy $q_{3,3} = 0$; if we also miss $x_1^2 x_2$ and $x_2^2 x_1$ in $p$, then $q_{2,4} = q_{3,5} = q_{4,2} = q_{5,3} = 0$. We obtain 13 equations in total, hence we have to solve an SDP of order 5 with 13 linear constraints.
By using the objective function $\operatorname{trace}(Q)$, as in Remark 2, we obtain a feasible solution $Q$ of low rank and can read off an SOHS decomposition of $p$ from it.
The previous example clearly shows the main steps of the general algorithm to obtain the SOHS decomposition, presented in Figure 1.
In the general case the SDP can be much larger in the non-commutative setting, but we also have many more possibilities to exploit the fact that the polynomials are non-commutative in order to reduce the size of the SDP. In particular: $p$ must be symmetric, i.e., $p^* = p$; at least one of the monomials of highest and of lowest degree must be symmetric; and for every variable $x_i$, the monomial in which $x_i$ has highest/lowest degree must be symmetric. The first observation is very important: it leads to the so-called Newton chip method [21], where we obtain the vector $U$ simply by taking all possible right chips (tails) of all symmetric monomials in $p$ which have length between half of the minimum and half of the maximum degree in $p$; see also the paper [21].
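The Newton chip method as just described can be sketched as follows. This is a simplified illustration, not the implementation from [21]: words are tuples of variable indices, $p$ is given by its support, and we assume the minimum and maximum degrees of $p$ are even.

```python
def right_chips(word):
    # all right chips (tails) of a word, e.g. (1, 2, 2) -> (), (2,), (2, 2), (1, 2, 2)
    return [word[i:] for i in range(len(word) + 1)]

def newton_chips(support):
    # Candidate words for U: right chips of the symmetric words in the support
    # of p whose length lies between half the minimum and half the maximum degree.
    degrees = [len(w) for w in support]
    lo, hi = min(degrees) // 2, max(degrees) // 2
    chips = set()
    for w in support:
        if w == w[::-1]:                      # w is symmetric: w* = w (reversal)
            chips.update(c for c in right_chips(w) if lo <= len(c) <= hi)
    return sorted(chips, key=lambda c: (len(c), c))

# p = x1^4 + x1 x2^2 x1 = (x1^2)^*(x1^2) + (x2 x1)^*(x2 x1)
support = [(1, 1, 1, 1), (1, 2, 2, 1)]
print(newton_chips(support))   # [(1, 1), (2, 1)], i.e. U = (x1^2, x2 x1)
```

For this $p$ the method shrinks $U$ from the 7 generic words of degree at most 2 down to just $x_1^2$ and $x_2 x_1$, which are exactly the hermitian-square factors.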
We have also written a Matlab-based software package NCSOStools, which is a non-commutative version of the SOSTOOLS package. It includes an implementation of the algorithm above and also implementations of the most important operations in $\mathbb{R}\langle X\rangle$. It is freely available from http://ncsostools.fis.unm.si; see also the detailed description of the commands in [28]. Since all constraints in the resulting SDP are mutually orthogonal, we see strong potential in the boundary point method [23], which performs very well on such SDP problems, especially for large-scale SDPs.

Conclusions
In this paper we demonstrated the power of mathematical programming in other, more pure areas of mathematics. Semidefinite programming turned out to be a very strong tool for approximating optimal values of polynomials, since it gives SOS or SOHS decompositions of real polynomials when they exist. The next step of the research will be publishing the NCSOStools package, which will contain an implementation of the algorithm for finding SOHS decompositions and also implementations of the most important operations in $\mathbb{R}\langle X\rangle$. A very challenging task for future research is a procedure to extract an exact rational solution of the SDP from the numerical solution $Q$ obtained by an SDP solver. This is important since in practice SDP solvers return only approximately feasible solutions, which are sufficient for most purposes. But if we want a correct proof of an SOS or SOHS decomposition, we need an exact rational solution. This is in general a very difficult (NP-hard) problem and not much research has been done in this direction.
Figure 1. The general algorithm for computing an SOHS decomposition of $p \in \mathbb{R}\langle X\rangle$:
1) Construct the vector of possible monomials $U$.
2) Construct linear equations on the matrix $Q$.
3) Solve the SDP in the variable $Q$.
4) If the SDP is infeasible, then $p$ has no SOHS decomposition; otherwise, extract an SOHS decomposition of $p$ from a solution $Q$.