Revisiting the Evaluation of a Multidimensional Gaussian Integral

The evaluation of Gaussian functional integrals is essential in applications to statistical physics and in the general calculation of path integrals of stochastic processes. In this work, we present an elementary extension of a usual result of the literature, as well as an alternative new derivation.

Cite this paper

Mondaini, R. and Albuquerque Neto, S. (2017) Revisiting the Evaluation of a Multidimensional Gaussian Integral. Journal of Applied Mathematics and Physics, 5, 449-452. doi: 10.4236/jamp.2017.52039.

1. Introduction

In the present work, we apply theorems of Linear Algebra to derive and extend a usual result of the literature on the evaluation of multidimensional Gaussian integrals of the form [1]:

${\int }_{-\infty }^{\infty }{e}^{-{x}^{\text{T}}Ax}\text{d}{x}_{1}\cdots \text{d}{x}_{n}$

where ${x}^{\text{T}}$ is the transpose of the column vector $x\in {\mathbb{R}}^{n}$ and ${x}^{\text{T}}Ax$ is a real quadratic form in $n$ variables. In order to guarantee the convergence of the integral, the quadratic form must be positive definite: for every non-zero $x$,

${x}^{\text{T}}Ax>0$ (1)

We can also write $A$ as a sum of its symmetric and skew-symmetric components, $A=\left(\frac{A+{A}^{\text{T}}}{2}\right)+\left(\frac{A-{A}^{\text{T}}}{2}\right)$ and we have

${x}^{\text{T}}Ax={x}^{\text{T}}\left(\frac{A+{A}^{\text{T}}}{2}\right)x>0$ (2)

since ${x}^{\text{T}}\left(\frac{A-{A}^{\text{T}}}{2}\right)x\equiv 0$ .
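This decomposition is easy to verify numerically. The following minimal sketch (using NumPy; the matrix, vector, and seed are arbitrary illustrative choices of ours) checks that the skew-symmetric part contributes nothing to the quadratic form:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))   # a generic, non-symmetric real matrix
x = rng.standard_normal(n)

S = (A + A.T) / 2                 # symmetric part of A
K = (A - A.T) / 2                 # skew-symmetric part of A

# x^T K x vanishes identically, so only S contributes to the quadratic form
assert abs(x @ K @ x) < 1e-12
assert np.isclose(x @ A @ x, x @ S @ x)
```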

2. Application of the Spectral Theorem of Linear Algebra

From the Spectral Theorem of Linear Algebra [2] , a real matrix will be diagonalized by an orthogonal transformation if and only if this matrix is symmetric.

We then apply an orthogonal transformation to the quadratic form ${x}^{\text{T}}\left(\frac{A+{A}^{\text{T}}}{2}\right)x$ :

$x=\theta y\text{ };\text{}{x}^{\text{T}}={y}^{\text{T}}{\theta }^{\text{T}};\text{}{\theta }^{\text{T}}\theta =\mathbb{1}$ (3)

where the columns of the matrix $\theta$ are the orthonormal eigenvectors of the matrix $\left(\frac{A+{A}^{\text{T}}}{2}\right)$ .

We then have

${\theta }^{\text{T}}\left(\frac{A+{A}^{\text{T}}}{2}\right)\theta ={\left(\frac{A+{A}^{\text{T}}}{2}\right)}_{d}$ (4)

where ${\left(\frac{A+{A}^{\text{T}}}{2}\right)}_{d}$ is the corresponding diagonal form.

From Equation (3) and Equation (4) we have:

$\mathrm{det}\left(\frac{A+{A}^{\text{T}}}{2}\right)=\mathrm{det}{\left(\frac{A+{A}^{\text{T}}}{2}\right)}_{d}={\lambda }_{1}^{{n}_{1}}{\lambda }_{2}^{{n}_{2}}\cdots {\lambda }_{l}^{{n}_{l}}$ (5)

where ${\lambda }_{1},\cdots ,{\lambda }_{l}$ are the eigenvalues and ${n}_{1},\cdots ,{n}_{l}$ their algebraic multiplicities [2] with

${n}_{1}+{n}_{2}+\cdots +{n}_{l}=n$ (6)
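Equations (4) and (5) can be illustrated with `numpy.linalg.eigh`, which returns the eigenvalues and an orthonormal eigenvector matrix of a symmetric matrix (a sketch; the test matrix is an arbitrary choice of ours):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
S = (A + A.T) / 2                 # the symmetric part to be diagonalized

lam, theta = np.linalg.eigh(S)    # eigenvalues and orthonormal eigenvectors
D = theta.T @ S @ theta           # the diagonal form of Equation (4)

assert np.allclose(D, np.diag(lam), atol=1e-10)
# Equation (5): the determinant equals the product of the eigenvalues
assert np.isclose(np.linalg.det(S), np.prod(lam))
```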

The transformation of the volume element is

$\text{d}{x}_{1}\cdots \text{d}{x}_{n}=\mathrm{det}\theta \text{ }\text{d}{y}_{1}\cdots \text{d}{y}_{n}$ (7)

and we can choose

$\mathrm{det}\theta =1$ (8)

from Equation (3) and the adequate organization of the orthonormal eigenvectors as the columns of the matrix $\theta$ .
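Since the columns of $\theta$ are orthonormal, $\mathrm{det}\theta =\pm 1$, and reversing the sign of a single eigenvector switches the sign of the determinant without spoiling Equations (3) and (4). A sketch of this choice (test matrix and seed are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
S = rng.standard_normal((n, n))
S = (S + S.T) / 2                 # a symmetric test matrix

lam, theta = np.linalg.eigh(S)
if np.linalg.det(theta) < 0:      # det(theta) is -1; flip one eigenvector
    theta[:, 0] *= -1

assert np.isclose(np.linalg.det(theta), 1.0)
assert np.allclose(theta.T @ theta, np.eye(n))   # still orthogonal
assert np.allclose(S @ theta, theta * lam)       # columns still eigenvectors
```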

The quadratic form can then be written as

$\begin{array}{c}{x}^{\text{T}}Ax={y}^{\text{T}}{\left(\frac{A+{A}^{\text{T}}}{2}\right)}_{d}y\\ ={\lambda }_{1}\underset{j=1}{\overset{{n}_{1}}{\sum }}{y}_{j}^{2}+\cdots +{\lambda }_{l}\underset{j=n-{n}_{l}+1}{\overset{n}{\sum }}{y}_{j}^{2}\end{array}$ (9)

From Equation (8) and Equation (9), the multidimensional integral becomes

$\begin{array}{c}{\int }_{-\infty }^{\infty }{e}^{-{x}^{\text{T}}Ax}\text{d}{x}_{1}\cdots \text{d}{x}_{n}={\int }_{-\infty }^{\infty }{e}^{-{y}^{\text{T}}{\left(\frac{A+{A}^{\text{T}}}{2}\right)}_{d}y}\text{d}{y}_{1}\cdots \text{d}{y}_{n}\\ =\underset{j=1}{\overset{{n}_{1}}{\prod }}{\int }_{-\infty }^{\infty }{e}^{-{\lambda }_{1}{y}_{j}^{2}}\text{d}{y}_{j}\cdots \underset{j=n-{n}_{l}+1}{\overset{n}{\prod }}{\int }_{-\infty }^{\infty }{e}^{-{\lambda }_{l}{y}_{j}^{2}}\text{d}{y}_{j}\end{array}$ (10)

where each unidimensional integral is given by

${\int }_{-\infty }^{\infty }{e}^{-{\lambda }_{k}{y}_{j}^{2}}\text{d}{y}_{j}={\left(\frac{\text{π}}{{\lambda }_{k}}\right)}^{1/2},\text{}k=1,\cdots ,l.$ (11)

We finally write, from Equations (5), (10) and (11),

${\int }_{-\infty }^{\infty }{e}^{-{x}^{\text{T}}Ax}\text{d}{x}_{1}\cdots \text{d}{x}_{n}={\left(\frac{{\text{π}}^{n}}{{\lambda }_{1}^{{n}_{1}}{\lambda }_{2}^{{n}_{2}}\cdots {\lambda }_{l}^{{n}_{l}}}\right)}^{1/2}={\left(\frac{{\text{π}}^{n}}{\mathrm{det}\left(\frac{A+{A}^{\text{T}}}{2}\right)}\right)}^{1/2}$ (12)

and we see from Equation (12) that the original matrix $A$ need not be diagonalizable [1]. The usual result of the literature follows if $A={A}^{\text{T}}$, i.e., if $A$ is itself a symmetric matrix.
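For $n=2$, the result (12) can be checked by direct numerical integration (a sketch using `scipy.integrate.dblquad`; the non-symmetric test matrix is our own choice, with positive definite symmetric part):

```python
import numpy as np
from scipy.integrate import dblquad

A = np.array([[2.0, 1.0],
              [-0.5, 3.0]])       # a non-symmetric test matrix
S = (A + A.T) / 2
assert np.all(np.linalg.eigvalsh(S) > 0)   # convergence condition (1)

integrand = lambda y, x: np.exp(-np.array([x, y]) @ A @ np.array([x, y]))
val, _ = dblquad(integrand, -np.inf, np.inf,
                 lambda x: -np.inf, lambda x: np.inf)

# compare with (pi^n / det((A + A^T)/2))^(1/2), Equation (12)
assert np.isclose(val, np.sqrt(np.pi**2 / np.linalg.det(S)), rtol=1e-6)
```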

3. Application of Sylvester’s Criterion Theorem

We now present an alternative derivation of the result obtained above. We will show that there is no need to apply an orthogonal transformation to diagonalize a quadratic form in order to derive formula (12).

Let us write the ${\mathbb{R}}^{n}$ vectors:

$x=\underset{j=1}{\overset{n}{\sum }}{x}_{j}{\stackrel{^}{e}}_{j},\text{}{b}_{j}=\underset{k=1}{\overset{n}{\sum }}{b}_{jk}{\stackrel{^}{e}}_{k}$ (13)

where ${\stackrel{^}{e}}_{j},\text{}j=1,\cdots ,n$ is an orthonormal basis,

${\stackrel{^}{e}}_{j}\cdot {\stackrel{^}{e}}_{k}={\delta }_{jk}$ (14)

We now define the matrices

${B}_{j\times j}=\left(\begin{array}{cccc}{b}_{1}\cdot {\stackrel{^}{e}}_{1}& \cdots & {b}_{1}\cdot {\stackrel{^}{e}}_{j-1}& {b}_{1}\cdot {\stackrel{^}{e}}_{j}\\ \vdots & \ddots & \vdots & \vdots \\ {b}_{j}\cdot {\stackrel{^}{e}}_{1}& \cdots & {b}_{j}\cdot {\stackrel{^}{e}}_{j-1}& {b}_{j}\cdot {\stackrel{^}{e}}_{j}\end{array}\right)=\left(\begin{array}{cccc}{b}_{11}& \cdots & {b}_{1,j-1}& {b}_{1j}\\ \vdots & \ddots & \vdots & \vdots \\ {b}_{j1}& \cdots & {b}_{j,j-1}& {b}_{jj}\end{array}\right)$ (15)

${B}_{j\times j}^{x}=\left(\begin{array}{cccc}{b}_{1}\cdot {\stackrel{^}{e}}_{1}& \cdots & {b}_{1}\cdot {\stackrel{^}{e}}_{j-1}& {b}_{1}\cdot x\\ \vdots & \ddots & \vdots & \vdots \\ {b}_{j}\cdot {\stackrel{^}{e}}_{1}& \cdots & {b}_{j}\cdot {\stackrel{^}{e}}_{j-1}& {b}_{j}\cdot x\end{array}\right)=\left(\begin{array}{cccc}{b}_{11}& \cdots & {b}_{1,j-1}& {b}_{1}\cdot x\\ \vdots & \ddots & \vdots & \vdots \\ {b}_{j1}& \cdots & {b}_{j,j-1}& {b}_{j}\cdot x\end{array}\right)$ (16)

Expanding the last column of ${B}_{j\times j}^{x}$ by means of the expansion (13) of $x$, the first $\left(j-1\right)$ terms produce null determinants, since each duplicates one of the first $\left(j-1\right)$ columns. The ${j}^{\text{th}}$ term gives $\mathrm{det}{B}_{j\times j}$ times ${x}_{j}$. Each subsequent ${k}^{\text{th}}$ term, $k=j+1,\cdots ,n$, gives ${x}_{k}$ times the determinant of a matrix ${B}_{j\times j}^{k}$ obtained by replacing the ${j}^{\text{th}}$ column of ${B}_{j\times j}$ by the column whose elements are ${b}_{1k},\cdots ,{b}_{jk}$. We can then write,

$\mathrm{det}{B}_{j\times j}^{x}={x}_{j}\mathrm{det}{B}_{j\times j}+\underset{k=j+1}{\overset{n}{\sum }}{x}_{k}\mathrm{det}{B}_{j\times j}^{k}$ (17)
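Identity (17) follows from the linearity of the determinant in its last column; a brute-force numerical check (a NumPy sketch; the helper names `det_Bx` and `det_Bk` are our own):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
B = rng.standard_normal((n, n))
B = (B + B.T) / 2                  # a symmetric test matrix
x = rng.standard_normal(n)

def det_Bx(j):
    """det of B_{j x j} with its last column replaced by (b_1.x, ..., b_j.x)."""
    M = B[:j, :j].copy()
    M[:, j - 1] = (B @ x)[:j]
    return np.linalg.det(M)

def det_Bk(j, k):
    """det of B_{j x j} with its j-th column replaced by b_{1k}, ..., b_{jk}."""
    M = B[:j, :j].copy()
    M[:, j - 1] = B[:j, k]
    return np.linalg.det(M)

for j in range(1, n + 1):
    lhs = det_Bx(j)
    rhs = x[j - 1] * np.linalg.det(B[:j, :j]) + sum(
        x[k] * det_Bk(j, k) for k in range(j, n))
    assert np.isclose(lhs, rhs)    # Equation (17)
```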

It should be noted that if ${B}_{n\times n}=B$ is a symmetric matrix, such as $B=\frac{A+{A}^{\text{T}}}{2}$ for arbitrary $A$, the quadratic form ${x}^{\text{T}}Bx$ can be written as

${x}^{\text{T}}Ax\equiv {x}^{\text{T}}Bx=\underset{j=1}{\overset{n}{\sum }}\frac{{\left(\mathrm{det}{B}_{j\times j}^{x}\right)}^{2}}{\mathrm{det}{B}_{\left(j-1\right)\times \left(j-1\right)}\text{ }\mathrm{det}{B}_{j\times j}}$ (18)

where

$\mathrm{det}{B}_{0\times 0}=1\text{ },\text{}\mathrm{det}{B}_{1\times 1}={b}_{11}\text{ },\text{}\mathrm{det}{B}_{1\times 1}^{x}={b}_{1}\cdot x$

From Equation (17), we can write Equation (18) as

${x}^{\text{T}}Ax=\underset{j=1}{\overset{n}{\sum }}\frac{\mathrm{det}{B}_{j\times j}}{\mathrm{det}{B}_{\left(j-1\right)\times \left(j-1\right)}}{\left({x}_{j}+\frac{1}{\mathrm{det}{B}_{j\times j}}\underset{k=j+1}{\overset{n}{\sum }}{x}_{k}\mathrm{det}{B}_{j\times j}^{k}\right)}^{2}$ (19)
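Equation (19) is a completion of squares of the quadratic form; the following sketch verifies it for a random positive definite $B$ (the construction of $B$ and the helper names are our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
M = rng.standard_normal((n, n))
B = M @ M.T + n * np.eye(n)        # symmetric positive definite by construction
x = rng.standard_normal(n)

def minor(j):                      # D_j = det of upper-left j x j block, D_0 = 1
    return 1.0 if j == 0 else np.linalg.det(B[:j, :j])

def det_Bk(j, k):                  # B_{j x j} with j-th column replaced by b_{1k}..b_{jk}
    Mjk = B[:j, :j].copy()
    Mjk[:, j - 1] = B[:j, k]
    return np.linalg.det(Mjk)

total = 0.0
for j in range(1, n + 1):
    shift = sum(x[k] * det_Bk(j, k) for k in range(j, n)) / minor(j)
    total += minor(j) / minor(j - 1) * (x[j - 1] + shift) ** 2

assert np.isclose(total, x @ B @ x)   # Equation (19)
```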

From Sylvester’s Criterion [3], the quadratic form ${x}^{\text{T}}Bx$ is positive definite if and only if all leading principal minors $\left(\mathrm{det}{B}_{j\times j},\text{ }j=1,\cdots ,n\right)$ of the symmetric matrix $B$ are positive; in particular, every coefficient $\frac{\mathrm{det}{B}_{j\times j}}{\mathrm{det}{B}_{\left(j-1\right)\times \left(j-1\right)}}$ in Equation (19) is then positive. We should note [4] that for each variable ${x}_{j}$:

${\int }_{-\infty }^{\infty }{e}^{-\frac{\mathrm{det}{B}_{j\times j}}{\mathrm{det}{B}_{\left(j-1\right)\times \left(j-1\right)}}{\left({x}_{j}+\frac{1}{\mathrm{det}{B}_{j\times j}}\underset{k=j+1}{\overset{n}{\sum }}{x}_{k}\mathrm{det}{B}_{j\times j}^{k}\right)}^{2}}\text{d}{x}_{j}={\left(\frac{\text{π}}{\frac{\mathrm{det}{B}_{j\times j}}{\mathrm{det}{B}_{\left(j-1\right)\times \left(j-1\right)}}}\right)}^{1/2}$ (20)

since the other variables ${x}_{j+1},\cdots ,{x}_{n}$, which appear only through the term $\frac{1}{\mathrm{det}{B}_{j\times j}}\underset{k=j+1}{\overset{n}{\sum }}{x}_{k}\mathrm{det}{B}_{j\times j}^{k}$, do not contribute to unidimensional integrals of the form

${\int }_{-\infty }^{\infty }{e}^{-\alpha {\left({x}_{j}+f\left({x}_{j+1},\cdots ,{x}_{n}\right)\right)}^{2}}\text{d}{x}_{j}={\left(\frac{\text{π}}{\alpha }\right)}^{1/2}$

where $\alpha$ is a positive real constant and $f$ is a generic function of its arguments.
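Both ingredients above, namely the positivity of the leading principal minors under Sylvester’s Criterion and the shift invariance of the one-dimensional Gaussian integral, can be checked numerically (a sketch using NumPy and SciPy; the test matrix and constants are arbitrary choices of ours):

```python
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(5)
n = 5
M = rng.standard_normal((n, n))
B = M @ M.T + 0.1 * np.eye(n)     # symmetric positive definite by construction

# Sylvester's Criterion: all leading principal minors are positive
minors = [np.linalg.det(B[:j, :j]) for j in range(1, n + 1)]
assert all(m > 0 for m in minors)
assert np.all(np.linalg.eigvalsh(B) > 0)

# shift invariance: a finite shift does not change the Gaussian integral
alpha, shift = 2.5, 1.7
val, _ = quad(lambda t: np.exp(-alpha * (t + shift) ** 2), -np.inf, np.inf)
assert np.isclose(val, np.sqrt(np.pi / alpha))
```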

We then have from Equation (19) and Equation (20):

$\begin{array}{c}{\int }_{-\infty }^{\infty }{e}^{-{x}^{\text{T}}Ax}\text{d}{x}_{1}\cdots \text{d}{x}_{n}=\underset{j=1}{\overset{n}{\prod }}{\left(\frac{\text{π}}{\frac{\mathrm{det}{B}_{j\times j}}{\mathrm{det}{B}_{\left(j-1\right)\times \left(j-1\right)}}}\right)}^{1/2}={\left(\frac{{\text{π}}^{n}}{\mathrm{det}B}\right)}^{1/2}\\ ={\left(\frac{{\text{π}}^{n}}{\mathrm{det}\left(\frac{A+{A}^{\text{T}}}{2}\right)}\right)}^{1/2}\text{ }\mathrm{q.e.d.}\end{array}$ (21)
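As a final consistency check, the product of minor ratios in Equation (21) telescopes to $\mathrm{det}B$ (a NumPy sketch with an arbitrary positive definite $B$ of our own choosing):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 4
M = rng.standard_normal((n, n))
B = M @ M.T + np.eye(n)           # plays the role of (A + A^T)/2

# D[j] = det of the upper-left j x j block; D[0] = 1
D = [1.0] + [np.linalg.det(B[:j, :j]) for j in range(1, n + 1)]
product = np.prod([np.sqrt(np.pi * D[j - 1] / D[j]) for j in range(1, n + 1)])

# the product telescopes to (pi^n / det B)^(1/2), Equation (21)
assert np.isclose(product, np.sqrt(np.pi**n / np.linalg.det(B)))
```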

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Chaichian, M. and Demichev, A. (2001) Path Integrals in Physics. Vol. 1, Stochastic Processes and Quantum Mechanics, IOP Publishing, Bristol.
[2] Lay, D.C. (2012) Linear Algebra and Its Applications. Addison-Wesley, Boston.
[3] Gilbert, G.T. (1991) Positive Definite Matrices and Sylvester’s Criterion. American Mathematical Monthly, 98, 44-46.
[4] Rumer, Yu.B. and Ryvkin, M.Sh. (1980) Thermodynamics, Statistical Physics and Kinetics. Mir Publishers, Moscow.