Solution of Nonlinear Stochastic Langevin’s Equation Using WHEP, Picard and HPM Methods
It is appropriate to assume that the nonlinear term in an SPDE affecting the phenomenon under study is small enough that its intensity can be controlled by a small parameter, say ε [12]. In the WHE approach, no randomness is directly involved in the computations.
The application of WHE aims to find a truncated series expansion of the solution process of the differential equation. The truncated series consists of two major parts: the Gaussian part, comprising the first two terms, and the remainder of the series, which constitutes the non-Gaussian part. In nonlinear cases, difficulties arise in solving the resulting set of deterministic integro-differential equations obtained after the direct application of WHE. The WHEP technique was introduced in [6], using a perturbation method to solve perturbed nonlinear equations.
The WHE was developed by Norbert Wiener [13]. Wiener constructed an orthonormal random base for expanding homogeneous chaos depending on white noise and used it to study problems in statistical mechanics. Cameron and Martin [14] developed a more explicit and intuitive formulation of the Wiener-Hermite expansion, a technique now known as the Wiener-Chaos expansion (WCE). Their development was based on an explicit discretization of the white noise process through its Fourier expansion, which was missing in Wiener’s original formalism. Since Cameron and Martin’s work, WHE has become a useful tool in stochastic analysis involving white noise (Brownian motion) [14]. Another formulation was suggested and applied by Meecham and his co-workers [15,16], who developed a theory of turbulence involving a truncated WHE of the velocity field. This technique was also used by M. El-Tawil and his co-workers [6-10].
The main goal of this paper is to apply the WHEP technique to the nonlinear stochastic Langevin’s equation and compare the solution with those of the Picard and HPM techniques. In Section 2, the problem is introduced; in Section 3, the application of the WHEP technique is considered; the Picard approximation technique is applied in Section 4; Section 5 describes the application of the HPM method; Section 6 presents the results of each method; and Section 7 compares the different techniques.
2. Problem Formulation
In this paper, the nonlinear stochastic Langevin’s equation with external stochastic excitation in the form of white noise is considered. It formulates a general relation between the velocity x(t;ω) of a unit-mass particle and the non-homogeneous random forces F(t;ω) of a fluid medium with phenomenological coefficient λ, such that [8,17]:

$$\frac{dx(t;\omega)}{dt} + \lambda x(t;\omega) = F(t;\omega) \tag{1}$$

The variable x(t;ω) is a random output of a probability triple (Ω, B, P), where Ω is a sample space, B is a σ-algebra associated with Ω, and P is a probability measure. For simplicity, ω will be dropped later on.
The following assumptions are taken for simplicity:
・ Constant phenomenological coefficient: λ(t) = λ, a constant.
・ Deterministic initial condition: x(0) = x₀.
・ Nonlinearity of second order: εx²(t), with nonlinearity strength ε.
・ Medium forces in terms of white noise n(t;ω), enveloped by a deterministic function ρ(t) and scaled by a scalar amplitude σ, i.e. F(t;ω) = σρ(t)n(t;ω).
Under the above hypotheses, the model becomes:

$$\frac{dx(t)}{dt} + \lambda x(t) + \varepsilon x^2(t) = \sigma\,\rho(t)\,n(t), \qquad x(0) = x_0 \tag{2}$$
Theorem: The solution of Equation (2), if it exists, is a power series in ε when using the Picard approximation technique, i.e.

$$x(t;\omega) = \sum_{i=0}^{\infty} \varepsilon^i x_i(t;\omega).$$

The theorem can be proved by mathematical induction together with the Picard iterative technique, as in [7].
The deterministic solution of Equation (2) at σ = 0 [8] is:

$$x(t) = \frac{\lambda x_0 e^{-\lambda t}}{\lambda + \varepsilon x_0\left(1 - e^{-\lambda t}\right)} \tag{3}$$

which can be expanded as follows:

$$x(t) = x_0 e^{-\lambda t} \sum_{n=0}^{\infty} (-1)^n \left(\frac{\varepsilon x_0}{\lambda}\left(1 - e^{-\lambda t}\right)\right)^n \tag{4}$$

The solution is a series of successive additions and subtractions of the ascending powers of the quantity (εx₀/λ)(1 − e^{−λt}), resulting in oscillatory behavior around the exact solution; the motion within a medium with high frictional forces vanishes rapidly compared to a medium with lower frictional forces [8].
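The oscillatory convergence described above can be demonstrated numerically: partial sums of expansion (4) alternate around the closed form (3), and the error shrinks geometrically while the quantity (εx₀/λ)(1 − e^{−λt}) stays below one in magnitude. The parameter values in this sketch are illustrative.

```python
import numpy as np

def x_exact(t, x0=1.0, lam=2.0, eps=0.5):
    """Closed-form deterministic solution, Equation (3)."""
    return lam * x0 * np.exp(-lam * t) / (lam + eps * x0 * (1.0 - np.exp(-lam * t)))

def x_series(t, n_terms, x0=1.0, lam=2.0, eps=0.5):
    """Partial sum of the alternating expansion, Equation (4)."""
    q = (eps * x0 / lam) * (1.0 - np.exp(-lam * t))  # expansion quantity
    return x0 * np.exp(-lam * t) * sum((-q) ** n for n in range(n_terms))

t = 1.0
errors = [abs(x_series(t, n) - x_exact(t)) for n in (1, 2, 4, 8)]
```

The remainder after n terms is x₀e^{−λt} qⁿ/(1 + q) with q = (εx₀/λ)(1 − e^{−λt}), so each extra term reduces the error by the factor q.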
3. WHEP Technique
Due to the completeness of the Wiener-Hermite set [15], any arbitrary stochastic process x(t;ω) can be expanded as:

$$x(t;\omega) = x^{(0)}(t) + \sum_{k=1}^{\infty} \int_{\mathbb{R}^k} x^{(k)}(t; t_1, \ldots, t_k)\, H^{(k)}(t_1, \ldots, t_k)\, dt_1 \cdots dt_k \tag{5}$$

where x^(k)(t; t₁, …, t_k) is a deterministic kernel and the integral is a k-dimensional integral over the disposable variables t₁, …, t_k. The functional H^(k)(t₁, …, t_k) is the kth-order Wiener-Hermite time-independent functional. The Wiener-Hermite functionals form a complete set, with H^(0) = 1 and H^(1)(t₁) = n(t₁), the white noise. The WHE method can be used to solve Equation (2): a set of deterministic integro-differential equations in the deterministic kernels is obtained [7].
To obtain approximate solutions for these deterministic kernels, one can use a perturbation technique when the system is perturbed by a small parameter ε. Expanding the kernels as a power series in ε, another set of simpler iterative equations in the kernel series components is obtained. The truncated expansion, where NC refers to the number of corrections and m is the order, is convergent under the condition given in [7]. The statistical properties of the solution can then be calculated as [7]:

(6)
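For reference, in the standard WHE setting with symmetrized kernels the leading statistical measures take the following form; this is a sketch of Equation (6) based on the standard theory [7], with the factor k! arising from the covariance of the kth-order functionals, not transcribed from the source:

```latex
\mathbb{E}\,[x(t)] = x^{(0)}(t), \qquad
\operatorname{Var}[x(t)] = \sum_{k=1}^{\infty} k! \int_0^t \!\!\cdots\! \int_0^t
  \left[x^{(k)}(t; t_1, \ldots, t_k)\right]^2 dt_1 \cdots dt_k .
```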
Applying the WHEP algorithm [18] yields the deterministic set of differential equations. The envelope function ρ(t) is chosen either as a constant function (pure white noise) or as a decaying exponential function (decaying white noise). The initial conditions are assumed deterministic, as follows:
For NC = 1: We have the following:
(7)
For NC = 2: We have the above equations in addition to:
(8)
For NC = 3: We have the above equations in addition to:
(9)
4. Picard Approximation
Picard’s method is based on generating recursive convergent approximations to the solution until a certain level of accuracy is reached, defined according to the nature of the problem or to the measure in the scope of the analysis [19,20].
In the case of the nonlinear Langevin equation, the Picard algorithm converges up to the exact instant of time at which the solution explodes. There is a relation between the number of approximations and the parameter values: the greater the number of approximations, the closer we are to the exact solution.
First, an arbitrary initial approximation x₀(t) should be suggested, and the recursive equation will be [21]:

$$x_{n+1}(t) = x_0 - \int_0^t \left[\lambda x_n(s) + \varepsilon x_n^2(s)\right] ds + \sigma \int_0^t \rho(s)\, dW(s) \tag{10}$$

which follows from integrating Equation (2). Let us first discuss the convergence of the model when the nonlinear term is dropped (ε = 0); the equation is then a linear model in terms of Brownian motion:

(11)

(12)

(13)
We shall choose two arbitrary initial approximations in the Picard iterations to simulate the Langevin model; these are referred to below as the first and second cases of initial approximation.
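A minimal numerical sketch of the Picard recursion (10) for the deterministic case σ = 0, starting from the constant initial approximation x₀(t) = x₀ (an illustrative choice, not one of the paper’s two cases) and using trapezoidal quadrature on a uniform grid:

```python
import numpy as np

def picard(x0=1.0, lam=2.0, eps=0.5, T=0.5, n_grid=2001, n_iter=12):
    """Iterate x_{n+1}(t) = x0 - int_0^t [lam*x_n(s) + eps*x_n(s)**2] ds,
    the sigma = 0 version of the Picard recursion for Equation (2)."""
    t = np.linspace(0.0, T, n_grid)
    dt = t[1] - t[0]
    x = np.full_like(t, x0)  # constant initial approximation
    for _ in range(n_iter):
        f = -(lam * x + eps * x**2)
        # cumulative trapezoidal integral of f from 0 to each grid point
        integral = np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * dt)))
        x = x0 + integral
    return t, x

t, x = picard()
# closed-form deterministic solution (3) with the same illustrative parameters
exact = 2.0 * np.exp(-2.0 * t) / (2.0 + 0.5 * (1.0 - np.exp(-2.0 * t)))
```

On [0, 0.5] with these values the iterates converge quickly to the closed-form solution (3); near the explosion time of the solution the number of required iterations grows, as discussed above.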
5. Homotopy Perturbation Method (HPM)
The HPM has proved to be an effective, simple, and convenient method for solving nonlinear PDE problems [22,23]. In HPM, a parameter p ∈ [0,1] is embedded in a homotopy function v(r,p) which satisfies:

$$H(v,p) = (1-p)\left[R(v) - R(u_0)\right] + p\left[A(v) - f(r)\right] = 0 \tag{14}$$

where u₀ is an initial approximation to the solution of the equation:

$$A(u) - f(r) = 0, \qquad r \in \Omega \tag{15}$$

with the boundary conditions:

$$B\!\left(u, \frac{\partial u}{\partial n}\right) = 0, \qquad r \in \Gamma \tag{16}$$

in which A is a nonlinear differential operator which can be decomposed into a linear operator R and a nonlinear operator N, B is a boundary operator, and f(r) is a known analytical function. The homotopy introduces a continuously deformed solution: the case p = 0 gives the trivially solvable equation R(v) − R(u₀) = 0, while p = 1 recovers the original equation. The basic assumption of the HPM is that the solution of the original equation can be expanded as a power series in p:

$$v = v_0 + p\,v_1 + p^2 v_2 + \cdots \tag{17}$$
Now, setting p = 1, the approximate solution is obtained as:

$$u = \lim_{p \to 1} v = v_0 + v_1 + v_2 + \cdots \tag{18}$$
The rate of convergence of HPM depends greatly on the initial approximation u₀, which is considered the main disadvantage of HPM [9-11]. The idea of the embedded parameter can be utilized to solve nonlinear problems by embedding this parameter in the problem and then forcing it to unity in the obtained approximate solution, provided convergence can be assured.
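To make the embedding concrete before it is applied to Equation (2), the following symbolic sketch runs HPM on the deterministic (σ = 0) equation ẋ + λx + εx² = 0, x(0) = x₀, embedding only the nonlinear term: ẋ + λx + pεx² = 0. This splitting and the zeroth-order choice x₀e^{−λt} are assumptions for illustration, not necessarily the choices made in Equations (19)-(30).

```python
import sympy as sp

t = sp.symbols('t')
lam, eps = sp.symbols('lam eps', positive=True)
x0 = sp.Symbol('x0')

def hpm_orders(n_orders=3):
    """Collect powers of p in x' + lam*x + p*eps*x**2 = 0 with x = sum p^k x_k.
    Order p^0 gives x_0' + lam*x_0 = 0; order p^k (k >= 1) gives
    x_k' + lam*x_k + eps * sum_{i+j=k-1} x_i x_j = 0 with x_k(0) = 0."""
    orders = [x0 * sp.exp(-lam * t)]  # zeroth order, satisfies x(0) = x0
    for k in range(1, n_orders):
        rhs = -eps * sum(orders[i] * orders[k - 1 - i] for i in range(k))
        xk = sp.Function(f'x{k}')
        sol = sp.dsolve(sp.Eq(xk(t).diff(t) + lam * xk(t), rhs),
                        xk(t), ics={xk(0): 0})
        orders.append(sp.expand(sol.rhs))
    return orders

orders = hpm_orders(3)
approx = sum(orders)  # HPM approximation after setting p = 1
```

With this splitting the resulting orders reproduce, term by term, the ε-power series of the deterministic solution; for example, the first order is (εx₀²/λ)(e^{−2λt} − e^{−λt}).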
HPM is a simple technique which extends the applicability of perturbation methods from small-parameter applications to general ones. Applying the HPM technique to Equation (2) with:
(19)
(20)
(21)
(22)
The homotopy function takes the following form:
(23)
Equivalently:
(24)
Using Equation (17) in Equation (24) and equating like powers of p on both sides of the equation, one can get the following:
(25)
(26)
(27)
(28)
(29)
We have many choices in guessing the initial approximation together with its initial conditions, which greatly affect the subsequent approximations. We shall consider the choice of

(30)

which satisfies the initial condition. The statistical measures of the different orders can be obtained as:
1st order:
(31)
2nd order:
(32)
3rd order:
(33)
4th order:
(34)
6. Results
6.1. Results of WHEP Technique
Figures 1-8 show the fourth-order mean and variance for the first, second, and third corrections. As shown in Figure 1, for ε = 0.5 the solution fluctuates toward positive and negative infinity (the first and third corrections toward negative, the second toward positive); we can guess that the fourth correction will diverge to positive infinity, a guess proved later in this paper. In Figure 2, however, for ε = 0.5, the three corrections converge over an initial interval, after which the second and third corrections deviate from the first one.
Figure 1. Fourth order mean of first, second and third corrections. Comparison between the different corrections for ε = 0.5 of pure white noise.
Figure 2. Fourth order variance of first, second and third corrections. Comparison between the different corrections for ε = 0.5 of pure white noise.
Figure 3. Fourth order mean of first, second and third corrections. Comparison between the different corrections for ε = 0.01 of pure white noise.
Figure 4. Fourth order variance of first, second and third corrections. Comparison between the different corrections for ε = 0.01 of pure white noise.
Figure 5. Fourth order mean of first, second and third corrections. Comparison between the different corrections for ε = 0.5 of decaying white noise.
Figure 6. Fourth order variance of first, second and third corrections. Comparison between the different corrections for ε = 0.5 of decaying white noise.
Figure 7. Fourth order mean of first, second and third corrections. Comparison between the different corrections for ε = 0.01 of decaying white noise.
Figure 8. Fourth order variance of first, second and third corrections. Comparison between the different corrections for ε = 0.01 of decaying white noise.
For small values of the nonlinearity strength ε, the divergence of the solution, which was proved in [8] using the growth condition, occurs at a later time: in Figures 3, 4, 7 and 8 it occurs after t = 3, while in Figures 1, 2, 5 and 6, for ε = 0.5, the divergence occurs much earlier.
There is no obvious difference in mean and variance between the pure and the decaying white-noise cases in Figures 1, 2, 5 and 6. Figures 2 and 6 show the convergence of all corrections for ε = 0.5 up to a certain instant of time, after which the divergence of the solution begins; this convergence suggests that with higher corrections or orders we can obtain an accurate solution and determine the exact instant at which the divergence occurs.
In Figures 6 and 8, for the decaying white-noise case, a decrease in the magnitude of the variance occurs; we can conclude that the decaying envelope ρ(t) somehow works toward eliminating the effect of the unknown pole which causes the explosion of the solution.
Decreasing the nonlinearity strength (Figure 8 versus Figure 6) makes the variance increase with time until a certain instant, after which the explosion begins.
To examine the effect of increasing the order on the accuracy of the solution, the third correction is fixed as a test case, as follows.
Figure 9 shows the mean for the different models, illustrating that decreasing ε makes the divergence very slow and delays it to a much later instant of time; but the question is, at what value of this parameter does the system no longer explode? From the study, decreasing the nonlinearity strength requires a long time interval to reveal the behavior of the solution: for both ε = 0.5 and ε = 0.01 the mean decreases with time until it reaches zero and then continues its trajectory toward negative infinity. The effect of ρ is likewise to delay the explosion, but the question is which of these parameters is stronger. From the previous models, the nonlinearity strength has the stronger effect on the mean and variance, but this does not diminish the strong effect of ρ on the variance, as noted before.

Figure 9. Third correction mean of first, second, third and fourth orders. Comparison between the different orders for ε = 0.5, ε = 0.01. Case of pure and decaying white noise.
Figure 10 shows the coincidence of all orders, as for the mean; but for the variance, the first order deviates from the others as time increases, with the interval of agreement depending on the parameter values. All the variance plots have the same trajectory: the variance increases to its peak, then decreases until a certain time, after which the deviation occurs.
6.2. Results of Picard Approximation
Figures 11-14 show that the Picard approximation gives good results for small ε. The two cases of arbitrary initial approximations yield almost the same results in mean and in variance until the instant of time at which the deviation occurs, as shown in Figures 15-18; this means that ε needs to be decreased, and then there is convergence between the two cases of initial approximation in the results of the Picard approximation.
6.3. Results of HPM Method
Figures 5 and 6 are semi-similar to Figures 19 and 20, in which the simulations of the orders in HPM look like the simulations of the corrections in the WHEP technique, and the fourth order here matches our earlier guess that the fourth correction diverges to positive infinity.
Figures 7 and 8 are semi-similar to Figures 21 and 22, which shows that increasing the order of the solution gives an accurate location of the point at which the solution explodes, and closer convergence to the exact solution of this model.
Figure 10. Third correction variance of first, second, third and fourth orders. Comparison between the different orders for ε = 0.5, ε = 0.01. Case of pure and decaying white noise.
Figure 11. Mean of last iteration (n) for ε = 0.5; 0.01 of first case of initial approximation.
Figure 12. Variance of last iteration (n) for ε = 0.5; 0.01 of first case of initial approximation.
Figure 13. Mean of last iteration (n) for ε = 0.5; 0.01 of second case of initial approximation.
Figure 14. Variance of last iteration (n) for ε = 0.5; 0.01 of second case of initial approximation.
Figure 15. Comparison between Picard mean using the first and second cases of initial approximations for ε = 0.5.
Figure 16. Comparison between Picard variance using the first and second cases of initial approximations for ε = 0.5.
Figure 17. Comparison between Picard mean using the first and second cases of initial approximations for ε = 0.01.
Changing ε makes a difference in the results: in Figure 23, increasing ε beyond 1 causes no problem in the solution, but decreasing this value gives good results. Changing ρ makes no change in the mean, but in the variance it delays the explosion and gives a good solution, especially when this parameter is decreased.

Figure 18. Comparison between Picard variance using the first and second cases of initial approximations for ε = 0.01.

Figure 19. Comparison of the first, second, third and fourth orders mean for ε = 0.5 and λ = 2.

Figure 20. Comparison of the first, second, third and fourth orders variance for ε = 0.5 and λ = 2.

Figure 21. Comparison of the first, second, third and fourth orders mean for ε = 0.01 and λ = 2.

Figure 22. Comparison of the first, second, third and fourth orders variance for ε = 0.01 and λ = 2.
6.4. Comparison between Different Methods
The comparison between the methods depends greatly on the nonlinearity strength ε: decreasing this value gives more convergence. Figures 24 and 25 show the convergence between the three methods for the first case of the Picard initial approximation; the convergence for the second case is weaker, because of the larger white-noise term in the second case. We can deduce that the first case of initial approximation makes the Picard approximation technique converge to the WHEP and HPM techniques. It is worth noting in Figures 24 and 25 that "Picard n = 6" means using the Picard iterate up to the 6th approximation, "WHEP 4, 3" means using the WHEP technique with 4th order and 3rd correction, and "HPM 4" means using the 4th order of the HPM method.
7. Conclusion
Picard’s method reaches the solution in a numerical way, from which the mathematical formulas are deduced at the cost of much effort; of the two cases of initial approximations, the first was more convergent to the WHEP and HPM techniques than the second. The WHEP technique seems to be efficient because of its corrections, in spite of being analytically lengthy. The HPM is easier in computation, although calculating the variance at higher orders is complicated, though not more so than the WHEP, which needs extensive analytical calculations and formulations. HPM, as expected, depends highly on the initial guess. WHEP and HPM converge to each other more closely than any other pair of compared methods, and every order in HPM has the same trajectory as the corresponding correction in WHEP: the first order in HPM looks like the first correction in WHEP, and so on up to the fourth.

Figure 23. The fourth order variance for different ε and λ = 2.

Figure 24. Comparison between Picard, WHEP and HPM fourth order mean for first case of Picard.

Figure 25. Comparison between Picard, WHEP and HPM fourth order variance for first case of Picard.
References