Helmholtz Decomposition of Vector Fields Using an Optimal Preconditioned Conjugate Gradient Algorithm
1. Introduction
Several problems and applications require a deep knowledge of a vector field over a region. An important tool in this context is the Helmholtz decomposition theorem (HDT), which guarantees, under certain hypotheses [1] [2], a unique decomposition of a vector field u defined in a domain Ω into its solenoidal component u_s and its irrotational component ∇p. That is,

u = u_s + ∇p. (1)
This decomposition is important, for example, because properties like incompressibility and vorticity can be studied directly through these components.
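These defining properties can be checked numerically. The sketch below is a hypothetical illustration (the fields are ours, not the paper's): a divergence-free field u_s = (-y, x) and a curl-free field ∇(x² + y²) = (2x, 2y), tested with finite differences.

```python
import numpy as np

# Hypothetical illustration (not from the paper): a divergence-free field
# u_s = (-y, x) and a curl-free field grad(x^2 + y^2) = (2x, 2y).
n = 64
x = np.linspace(-1.0, 1.0, n)
y = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, y, indexing="ij")

# Solenoidal component: its divergence should vanish.
us_x, us_y = -Y, X
div_us = np.gradient(us_x, x, axis=0) + np.gradient(us_y, y, axis=1)

# Irrotational component: its (scalar) curl should vanish.
ui_x, ui_y = 2 * X, 2 * Y
curl_ui = np.gradient(ui_y, x, axis=0) - np.gradient(ui_x, y, axis=1)

print(np.abs(div_us).max(), np.abs(curl_ui).max())  # both ~ 0
```

Since both fields are linear, the second-order finite differences reproduce the zero divergence and zero curl essentially exactly.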
The application areas of the HD include (but are not restricted to) electromagnetism, linear elasticity, fluid mechanics [3], astrophysics [4], geophysics [5], computer vision and robotics [6] [7] [8], and image processing [9]. The HDT also establishes that the Helmholtz decomposition components can be obtained in terms of a vector potential (the solenoidal component) and a scalar potential (the irrotational component), solutions of certain Poisson equations with different kinds of boundary conditions, one of them being a Neumann condition on the boundary. These elliptic equations cannot be solved explicitly, so it is necessary to apply numerical methods to solve them. The most common mesh-dependent methods are finite element methods (FEM) [6] [10] [11], finite difference methods (FDM) [9] [12], Galerkin formulations using finite differences [13], and methods in the Fourier and wavelet domains [14] [15] [16]. Among the meshless methods, we can mention Smoothed Particle Hydrodynamics (SPH) [17] [18] [19] and statistical learning using matrix-valued kernels [20] [21] [22].
The problem we consider in this work is the numerical decomposition, in a given domain, of a vector field into its solenoidal and irrotational components under a more general boundary condition: the tangential condition is satisfied only on a section of the boundary where the solenoidal vector field is tangential to it. This kind of boundary condition appears, for example, when we want to adjust vector fields obtained from experimental or incomplete data. Let u be given and defined in a bounded Lipschitz domain Ω with boundary Γ. We want to solve numerically the following model:

(2)

(3)

(4)

(5)

where p is a scalar potential and n denotes the unit outward vector normal to the boundary Γ.
We apply a methodology based on the solution of saddle-point problems to simultaneously find the pair of unknowns (and consequently the complete decomposition) by using an optimal preconditioned conjugate gradient algorithm combined with a mixed finite element method to solve elliptic problems. The components of the associated stiffness matrix are approximated by a trapezoidal rule that produces a diagonal matrix, so that the stiffness system is easy to solve. This property, together with the use of an optimal preconditioner, results in a very fast algorithm.
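The diagonalizing effect of a trapezoidal-type (vertex) quadrature rule can be illustrated on a single linear (P1) triangle: exact integration of products of basis functions gives a full matrix, while vertex quadrature lumps it into a diagonal matrix with the same row sums. This is our own minimal sketch of the lumping mechanism, shown on the element mass matrix; it is not the paper's actual matrix assembly.

```python
import numpy as np

def p1_mass_matrices(area):
    """Consistent vs. vertex-quadrature (trapezoidal-type) mass matrix
    for linear basis functions on a single triangle of given area."""
    # Exact integration of phi_i * phi_j over the triangle.
    consistent = (area / 12.0) * np.array([[2.0, 1.0, 1.0],
                                           [1.0, 2.0, 1.0],
                                           [1.0, 1.0, 2.0]])
    # Vertex quadrature: each vertex carries weight area/3, and
    # phi_i(vertex_j) = delta_ij, so the matrix becomes diagonal.
    lumped = (area / 3.0) * np.eye(3)
    return consistent, lumped

M, D = p1_mass_matrices(area=0.5)
# Lumping preserves the row sums (total mass) of the consistent matrix.
print(np.allclose(M.sum(axis=1), np.diag(D)))  # True
```

A diagonal matrix makes the corresponding linear system trivial to solve, which is the property exploited at each iteration of the algorithm.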
The structure of this article is as follows: In Section 2, we consider the mathematical formulation of the problem and we describe the classical elliptic problems for scalar potentials allowing the decomposition. In Section 3, we describe the variational version of the problem. After reformulating the problem, we describe a preconditioned conjugate gradient (PCG-algorithm), where a mixed finite element method is used to solve the elliptic subproblems at each iteration. In Section 4, we present and discuss the numerical results, and finally, in Section 5, we give some concluding remarks.
2. Mathematical Formulation of the Helmholtz Decomposition
The Helmholtz decomposition theorem [2] establishes that if Ω is a bounded Lipschitz domain with boundary Γ, then each vector field u defined on Ω has a unique Helmholtz decomposition
(6)
if we know that the solenoidal component has vanishing normal trace on the boundary; that is, the two components belong to the spaces defined by

(7)

(8)

and

(9)

These spaces are orthogonal with respect to the L² inner product.
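This L² orthogonality can be checked numerically with a hypothetical pair of fields (our own illustration): a solenoidal field with vanishing normal trace, built from a stream function that is zero on the boundary, against a gradient field.

```python
import numpy as np

# Hypothetical check of L2 orthogonality on the unit square:
# u_s = (dpsi/dy, -dpsi/dx) with psi = sin(pi x) sin(pi y), so psi = 0 on
# the boundary and u_s has zero normal trace, against grad(phi) with
# phi = x^2 + y^2.  Their L2 inner product should vanish.
n = 200
h = 1.0 / n
x = (np.arange(n) + 0.5) * h          # midpoint grid
X, Y = np.meshgrid(x, x, indexing="ij")

us_x = np.pi * np.sin(np.pi * X) * np.cos(np.pi * Y)    # dpsi/dy
us_y = -np.pi * np.cos(np.pi * X) * np.sin(np.pi * Y)   # -dpsi/dx
gp_x, gp_y = 2 * X, 2 * Y                               # grad(x^2 + y^2)

inner = h * h * np.sum(us_x * gp_x + us_y * gp_y)       # midpoint quadrature
print(abs(inner))  # ~ 0 (up to quadrature and rounding error)
```

Integration by parts shows the exact inner product is zero because the stream function vanishes on the boundary; the midpoint rule reproduces this up to discretization error.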
If u and its boundary data on Γ are given, the irrotational vector field ∇p can be calculated by obtaining the scalar field p as the solution of the Poisson problem with a Neumann boundary condition

Δp = ∇·u in Ω, ∂p/∂n = u·n on Γ, (10)

where ∂p/∂n denotes the derivative of p in the direction of the outward normal n.
If we first solve the elliptic problem (10) to find p, then we calculate ∇p and finally we obtain the solenoidal component as u_s = u − ∇p.
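This sequential route (solve the Poisson problem for p, take its gradient, subtract) can be sketched on a doubly periodic grid, where the Laplacian diagonalizes under the FFT. Note the assumptions: periodic rather than Neumann boundary conditions, a uniform grid, and our own function name; this only illustrates the structure of the elliptic approach.

```python
import numpy as np

def helmholtz_split_periodic(ux, uy, L=2.0 * np.pi):
    """Split u into (irrotational, solenoidal) parts on a periodic grid
    by solving  laplacian(p) = div(u)  in Fourier space, then
    subtracting grad(p) from u."""
    n = ux.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # integer wavenumbers when L = 2*pi
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0          # avoid 0/0; the mean of p is arbitrary anyway

    ux_h, uy_h = np.fft.fft2(ux), np.fft.fft2(uy)
    p_h = (1j * kx * ux_h + 1j * ky * uy_h) / (-k2)   # -k^2 p_h = i k . u_h
    gpx = np.real(np.fft.ifft2(1j * kx * p_h))        # grad p: irrotational part
    gpy = np.real(np.fft.ifft2(1j * ky * p_h))
    return (gpx, gpy), (ux - gpx, uy - gpy)

# Synthetic test field: u = u_s + grad(phi) with
# u_s = (sin x cos y, -cos x sin y) (divergence-free) and phi = cos x + cos y.
n = 64
xs = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(xs, xs, indexing="ij")
ux = np.sin(X) * np.cos(Y) - np.sin(X)
uy = -np.cos(X) * np.sin(Y) - np.sin(Y)
(gpx, gpy), (sx, sy) = helmholtz_split_periodic(ux, uy)
```

For these smooth periodic fields, the Fourier derivatives are exact, so the two components are recovered to machine precision.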
In order to describe our methodology for finding the Helmholtz decomposition, we consider a given vector field u satisfying (2)-(5). From (2), passing to a weak (variational) formulation over the space of vector fields defined as

(11)

and considering condition (3), we conclude that the pair formed by p and the solenoidal component satisfies

(12)

The next section is devoted to finding the pair satisfying (12).
3. A Preconditioned Conjugate Gradient Algorithm
Following [23], assume that the pair is the solution of problem (12) with

(13)

It then follows that p satisfies the operator equation

(14)

where A is the operator defined by

(15)

with the auxiliary field obtained as the solution of

(16)

Note that the subscript s in (13) does not have the same meaning as in (16). The properties of operator A (linear, self-adjoint and strongly elliptic [23]) allow us to solve Equation (14) by using a conjugate gradient algorithm. Moreover, we can use the optimal preconditioner defined by

(17)

where the auxiliary function solves the problem

(18)

(19)

(20)

with

(21)
Next we enumerate the three stages of the preconditioned conjugate gradient algorithm to solve the equation A p = f:

1) Initialization: p^0 given, compute the initial gradient g^0 = A p^0 - f, the preconditioned gradient z^0 = B^{-1} g^0, and set the first descent direction w^0 = z^0.

2) Descent: For n >= 0, assuming we know p^n, g^n, z^n and w^n, find p^{n+1} by

p^{n+1} = p^n - ρ_n w^n, g^{n+1} = g^n - ρ_n A w^n, with ρ_n = (g^n, z^n) / (A w^n, w^n).

3) Test of convergence and new conjugate direction:

If (g^{n+1}, z^{n+1}) / (g^0, z^0) <= ε², with z^{n+1} = B^{-1} g^{n+1}, take p = p^{n+1} and stop.

Else, set

w^{n+1} = z^{n+1} + γ_n w^n, with γ_n = (g^{n+1}, z^{n+1}) / (g^n, z^n),

do n = n + 1 and return to 2).
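The three stages above can be sketched in matrix form. This is a generic sketch, assuming a symmetric positive definite matrix A and a simple Jacobi (diagonal) preconditioner in place of the operators A and B of (15) and (17); the operator versions require the elliptic solves of the detailed algorithm.

```python
import numpy as np

def pcg(A, b, B_solve, x0, tol=1e-9, max_it=500):
    """Preconditioned conjugate gradient for A x = b (A symmetric positive
    definite).  B_solve(g) applies the inverse of the preconditioner.
    Mirrors the three stages: initialization, descent, and test of
    convergence with a new conjugate direction."""
    x = x0.copy()
    g = A @ x - b                  # initial gradient
    z = B_solve(g)                 # preconditioned gradient
    w = z.copy()                   # first descent direction
    gz = g @ z
    for it in range(max_it):
        Aw = A @ w
        rho = gz / (w @ Aw)        # step length
        x -= rho * w               # descent step
        g -= rho * Aw              # gradient update
        z = B_solve(g)
        gz_new = g @ z
        if np.sqrt(abs(gz_new)) <= tol:       # convergence test
            return x, it + 1
        w = z + (gz_new / gz) * w  # new conjugate direction
        gz = gz_new
    return x, max_it

# Usage on a 1-D Laplacian with a Jacobi (diagonal) preconditioner.
n = 20
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x, its = pcg(A, b, lambda g: g / np.diag(A), np.zeros(n))
```

For this small model matrix the Jacobi preconditioner is only illustrative; the point of the paper's operator B is precisely that a good preconditioner keeps the iteration count essentially mesh-independent.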
Using Equations (15)-(16), which define operator A, and Equations (17)-(18), which define operator B, the detailed preconditioned conjugate gradient algorithm (PCG) to obtain p and the solenoidal component (and hence the complete decomposition) is as follows:
Initialization
1) Given the initial iterate p^0, solve the associated elliptic subproblem of type (16).
2) Compute the initial gradient g^0.
3) Solve the preconditioning problem of type (18).
4) Set the initial descent direction.
Descent
For n >= 0, assuming the quantities from the previous iteration are known, compute the updated iterates using the following steps:
5) Solve the elliptic subproblem corresponding to the application of operator A.
6) Compute the step length ρ_n.
7) Update the iterate p^{n+1} and the gradient g^{n+1}.
8) Solve the preconditioning problem for the new gradient.
9) Set the resulting preconditioned gradient.
Test of convergence and new descent direction
If the stopping criterion is satisfied, then set the final values of p and the solenoidal component and stop. Otherwise, do the following:
10) Compute γ_n.
11) Set the new conjugate direction w^{n+1}.
12) Do n = n + 1 and return to 5).
Note that in this algorithm, the solenoidal component and p are computed simultaneously.
To approximate the functions belonging to the spaces involved, we make use of the Bercovier-Pironneau finite element approximation, which employs two triangulations of the domain: a coarse mesh and a fine mesh, the latter obtained from the coarse triangulation through a regular subdivision of each triangle, as shown in Figure 1.
Vector-valued functions, such as the solenoidal component, are approximated on the fine mesh, and scalar functions, such as p, are approximated on the coarse mesh. Since the solenoidal component is obtained on the fine mesh, its resolution is the same as that obtained when solving the elliptic problems (10) on that mesh.
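The regular subdivision of Figure 1 is easy to reproduce: each coarse triangle ABC is split by its edge midpoints P, Q, R into the four fine triangles AQP, PRC, PQR and QBR. A minimal sketch (vertex coordinates and helper names are ours):

```python
import numpy as np

def refine_triangle(a, b, c):
    """Regular (red) subdivision of triangle abc into four triangles
    using the edge midpoints, as in the coarse/fine mesh pair of
    Figure 1: three corner triangles plus the central one."""
    a, b, c = map(np.asarray, (a, b, c))
    p, q, r = (a + c) / 2, (a + b) / 2, (b + c) / 2   # edge midpoints
    return [(a, q, p), (p, r, c), (p, q, r), (q, b, r)]

def area(t):
    (ax, ay), (bx, by), (cx, cy) = t
    return 0.5 * abs((bx - ax) * (cy - ay) - (cx - ax) * (by - ay))

tris = refine_triangle((0, 0), (1, 0), (0, 1))
print(len(tris), [area(t) for t in tris])  # 4 triangles, each 1/4 of the parent area
```

Applying this subdivision to every coarse triangle yields the fine mesh on which the vector unknowns are approximated.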
To measure the global difference between the exact solenoidal field and the computed solenoidal field, we take the relative error

(22)

Also, we compute the L²-norm of the divergence of the computed solenoidal field, which we denote by ndiv in the next section.
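Both quantities can be approximated by discrete L² norms when the fields are sampled on a uniform grid. This sketch uses simple finite differences and our own function names; the paper works in a finite element setting instead.

```python
import numpy as np

def l2_norm(fx, fy, h):
    """Discrete L2 norm of a vector field sampled on a uniform grid."""
    return np.sqrt(h * h * np.sum(fx**2 + fy**2))

def relative_error(ex, ey, ax, ay, h):
    """Relative L2 error between exact and computed solenoidal fields,
    in the spirit of (22)."""
    return l2_norm(ex - ax, ey - ay, h) / l2_norm(ex, ey, h)

def ndiv(fx, fy, h):
    """Discrete L2 norm of the divergence (the quantity 'ndiv')."""
    d = np.gradient(fx, h, axis=0) + np.gradient(fy, h, axis=1)
    return np.sqrt(h * h * np.sum(d**2))

# Demo on the divergence-free field (-y, x) and a small perturbation of it.
xs = np.linspace(-1.0, 1.0, 101)
h = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs, indexing="ij")
err = relative_error(-Y, X, -Y + 0.01, X, h)
```

The perturbed field gives a small but nonzero relative error, while the exact field has a discrete divergence norm at machine-precision level.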
4. Numerical Results
To show the performance of the PCG-algorithm, we chose two synthetic vector fields. In Example 1, we consider a vector field satisfying the boundary condition
(23)
In Example 2, we consider a vector field satisfying the boundary condition

(24)

but only on a section of the boundary.
Figure 1. Element of the coarse mesh: triangle ABC. Elements of the fine mesh: triangles AQP, PRC, PQR and QBR.
All numerical calculations were done on a Toshiba Portege R705 PC: Windows 7, 64 bits, Intel Core i3 processor, 2.27 GHz, 3 GB RAM.
Example 1. We consider the 2D vector field
(25)
for which the solenoidal and irrotational components are
(26)
(27)
In this case,

(28)

so the boundary condition is satisfied on the whole boundary. To solve the elliptic problems in the PCG algorithm, we simultaneously use a coarse triangular mesh (blue in Figure 2) to approximate the scalar potential p and a fine triangular mesh (red in Figure 2) to approximate the solenoidal component.
In Figure 3, we show the exact vector field and its solenoidal and irrotational components.
We want to see how well we can approximate the solenoidal component and the scalar potential p by applying the PCG algorithm. A summary of the numerical results is shown in Table 1, where we show the relative error of the solenoidal component, the L²-norm of its divergence (ndiv), and the number of iterations needed to reach convergence, up to the given tolerance, for several mesh sizes.
Figure 2. An example of a 5 × 5 coarse mesh and a 9 × 9 refined mesh. The scalar potential p is approximated on the coarse mesh (blue). The solenoidal component is approximated on the fine mesh (red). On each blue triangle there are four red triangles.
Figure 3. Exact vector fields for Example 1. Left: the given field. Middle: the solenoidal component. Right: the irrotational component.
Table 1. Numerical results for Example 1.
In Figure 4, we show the streamlines for the exact (left) and approximated (right) solenoidal vector fields. In Figure 5, we show the exact (left) and approximated (right) scalar potential p.
It is important to mention that the error bounds in Table 1 were obtained using a relaxed tolerance value in the stopping criterion of the conjugate gradient algorithm; the results can be improved by using a more restrictive tolerance, as shown in the next example.
Example 2. Here we consider the 2D vector field
(29)
for which the solenoidal and irrotational components are
(30)
(31)
In this example,

(32)

so that the section of interest consists of the top and the vertical parts of the boundary. Our formulation implies that the tangential condition holds on this section, but this is only possible on the top part of the boundary; since we do not want (and are not able) to impose additional boundary conditions on p, we impose exact boundary conditions on the vertical boundary. Table 2 shows the numerical results obtained for this test problem.
For comparison purposes, we have repeated the calculations of Example 2, but using a more restrictive stopping tolerance for the conjugate gradient
Figure 4. Streamlines for the exact (left) and approximated (right) solenoidal vector fields for Example 1, on a velocity mesh of 65 × 65 (a coarse mesh of 33 × 33).
Figure 5. Exact (left) and approximated (right) p for Example 1, in a coarse mesh of 33 × 33 (a velocity mesh of 65 × 65).
Table 2. Numerical results for Example 2.
method, with and without preconditioning. We also solved a problem equivalent to (10), but with the exact (not diagonal) stiffness matrix. A summary of the numerical results for the three algorithms is shown in Table 3, where we report the relative solenoidal error, the average divergence (mdiv), and the number of iterations needed to reach convergence, up to the given tolerance (in the CG
Table 3. Numerical results for the comparison in Example 2.
and PCG algorithms) for different mesh sizes. This table also contains the CPU time (in seconds) spent in each numerical experiment; in particular, the last column gives the CPU time spent solving the elliptic problem equivalent to (10). For this elliptic problem, the results for the relative error and mdiv were not as good as those of the CG algorithms.
It is clear that the PCG-algorithm performs much better than the other algorithms. Another nice feature of the PCG-algorithm in this example is that the number of iterations is independent of the mesh size (7 iterations in each case). Also, the relative error between the computed and the exact solenoidal vector fields is of the same order for the CG and PCG algorithms, with the largest difference occurring on the top boundary. For this example, we observe a loss of accuracy in the mean divergence when the PCG-algorithm is employed. Nevertheless, the mean divergence obtained with the PCG-algorithm is still very accurate from a practical point of view, since we are using polynomials of degree 1 to approximate the unknowns.
5. Conclusions
We have applied an optimal preconditioner for the conjugate gradient algorithm to solve the operator equation associated with the saddle-point formulation of the Helmholtz decomposition problem. Numerical results show that the new preconditioned conjugate gradient algorithm, and its stable approximation by a mixed finite element discretization, preserve the good properties of the non-preconditioned conjugate gradient algorithm, but it is much faster and produces much better results than the methods based on the elliptic problem (10). Some additional nice properties of this algorithm are:
● It enforces mass conservation very accurately in the computed vector fields, considering that the approximation is second order.
● The number of iterations is reduced from several hundred to at most 7 in the examples considered in this article. Also, the number of iterations of the PCG-algorithm is almost independent of mesh refinement, while the number of iterations of the non-preconditioned algorithm doubles with each mesh refinement in most cases. Therefore, there is a substantial reduction in computational time in both examples. However, this behavior must be corroborated in 3-D problems with adaptive meshes in complex domains, and with millions of degrees of freedom, in order to get a better idea of the performance of the method in realistic scenarios.
● It is not necessary to impose boundary conditions on the multiplier p, as is done with the elliptic approaches. Furthermore, post-processing the scalar potential to find the vector field from the multiplier is not required, since the multiplier and the solenoidal vector field are found simultaneously by the algorithm.
The application of the methodology presented here to the more realistic three-dimensional case is an extension of the present work.
Acknowledgements
Sincere thanks to the members of JAMP for their high-quality professional attitude.