Mean Square Numerical Methods for Initial Value Random Differential Equations

Randomness may exist in the initial value or in the differential operator or both. In [1,2], the authors discussed the general order conditions and gave a global convergence proof for stochastic Runge-Kutta methods applied to stochastic ordinary differential equations (SODEs) of Stratonovich type. In [3,4], the authors discussed the random Euler method and the conditions for its mean square convergence. In [5], the authors considered a very simple adaptive algorithm based on controlling only the drift component of a time step. Platen [6] discussed discrete time strong and weak approximation methods that are suitable for different applications. Other numerical methods are discussed in [7-12]. In this paper the random Euler and random Runge-Kutta of the second order methods are used to obtain an approximate solution for Equation (1.1). This paper is organized as follows. In Section 2, some important preliminaries are discussed. In Section 3, the existence and uniqueness of the solution of the random differential initial value problem are discussed, together with the convergence of the random Euler and random Runge-Kutta of the second order methods. In Section 4, the statistical properties of the exact and numerical solutions are studied. Section 5 presents the solution of some numerical examples of first order random differential equations using the random Euler and random Runge-Kutta of the second order methods, showing the convergence of the numerical solutions to the exact ones (if possible). The general conclusions are presented in the final section.


Introduction
Random differential equations (RDEs) are differential equations involving random inputs. In recent years, increasing interest in the numerical solution of RDEs has led to the progressive development of several numerical methods. This paper studies the following random differential initial value problem (RIVP):

X'(t) = f(X(t), t),  t ∈ T,  X(t0) = X0,  (1.1)

where X0 is a second order random variable and f(X(t), t) is a second order stochastic process defined on the same probability space. Randomness may exist in the initial value, in the differential operator, or both.

Mean Square Calculus [13]
Definition 1. Consider the class of real random variables X1, X2, ..., Xn whose second moments E[Xi^2] are finite. In this case, they are called "second order random variables" (2-r.v.'s).
Definition 2. The linear vector space of second order random variables, with inner product ⟨X, Y⟩ = E[XY], norm ||X|| = (E[X^2])^(1/2) and distance d(X, Y) = ||X − Y||, is called an L2-space. For a second order stochastic process X(t), ||X(t)|| = (E[X(t)^2])^(1/2).

The Convergence in Mean Square
A sequence of r.v.'s {Xn} converges in mean square (m.s.) to a random variable X if lim_{n→∞} ||Xn − X|| = 0; we then write l.i.m. Xn = X, where l.i.m. denotes the limit in the mean square sense.
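As a quick numerical illustration of this definition (an illustrative construction, not part of the paper), the following Python sketch estimates the mean square distance E[(Xn − X)^2] by Monte Carlo for the sequence Xn = X + Z/n, for which the distance is exactly 1/n^2 and tends to zero:

```python
import numpy as np

rng = np.random.default_rng(0)

def ms_distance(samples_a, samples_b):
    """Monte Carlo estimate of the mean square distance E[(A - B)^2]."""
    return np.mean((samples_a - samples_b) ** 2)

# X_n = X + Z/n converges to X in mean square: E[(X_n - X)^2] = 1/n^2 -> 0.
x = rng.standard_normal(200_000)
z = rng.standard_normal(200_000)
for n in (1, 10, 100):
    print(n, ms_distance(x + z / n, x))  # estimates 1/n^2
```

The printed estimates shrink like 1/n^2, which is exactly the statement lim ||Xn − X|| = 0 for this sequence.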

Mean-Square Differentiability
The random process X(t) is m.s. differentiable at t if the limit l.i.m._{h→0} (X(t + h) − X(t))/h exists; the limit is denoted by X'(t) and is called the m.s. derivative of X(t).

Existence and Uniqueness
Let us have the random initial value problem

X'(t) = f(X(t), t),  t ∈ T,  X(t0) = X0,  (3.1)

where X(t) is a second order random process. This equation is equivalent to the integral equation

X(t) = X0 + ∫_{t0}^{t} f(X(s), s) ds.

Theorem (3.1.1)
If in the random initial value problem (3.1) the right-hand side function f is continuous and satisfies a mean square (m.s.) Lipschitz condition with respect to X,

||f(X, t) − f(Y, t)|| ≤ C ||X − Y||, where C is a constant,

or, more generally,

||f(X, t) − f(Y, t)|| ≤ c(t) ||X − Y||, where c(t) is a continuous function (because in every finite interval c(t) ≤ constant),

then the solution of Equation (3.1) exists and is unique.

The proof
The existence can be proved by using successive approximations. Let

X^0(t) = X0,  (3.5)

and for n ≥ 1

X^n(t) = X0 + ∫_{t0}^{t} f(X^{n−1}(s), s) ds.

Using the m.s. Lipschitz condition, the differences of consecutive approximations satisfy

||X^n(t) − X^{n−1}(t)|| ≤ M C^{n−1} (t − t0)^n / n!,  where M = sup_{s ∈ T} ||f(X0, s)||,

so the sequence {X^n(t)} is m.s. Cauchy and converges in mean square to a solution of the integral equation; uniqueness follows from the same Lipschitz estimate applied to the difference of two solutions. Here X0 is a random variable, and the unknown X(t) as well as the right-hand side f(X, t) are stochastic processes defined on the same probability space.
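The successive approximations above can be carried out numerically, pathwise. The sketch below is an illustrative construction: the problem X' = A X with random A and X0 (and their distributions) is assumed only for demonstration. It iterates X^{n+1}(t) = X0 + ∫ f(X^n(s), s) ds on a grid with the trapezoidal rule and compares the result with the exact path X0 e^{A t}:

```python
import numpy as np

rng = np.random.default_rng(1)

def picard(f, x0, t, iterations=8):
    """Successive approximations X^{n+1}(t) = X0 + integral of f(X^n(s), s)
    from t[0] to t, computed pathwise on the grid t with the trapezoidal rule."""
    x = np.full_like(t, x0)  # X^0(t) = X0
    for _ in range(iterations):
        g = f(x, t)
        # cumulative trapezoidal integral of g over the grid
        integral = np.concatenate(([0.0], np.cumsum((g[1:] + g[:-1]) / 2 * np.diff(t))))
        x = x0 + integral
    return x

# One realization of X' = A X, X(0) = X0, with random A and X0 (illustrative
# distributions); the exact path is X0 * exp(A t).
a, x0 = rng.normal(0.5, 0.1), rng.normal(1.0, 0.1)
t = np.linspace(0.0, 1.0, 201)
approx = picard(lambda x, s: a * x, x0, t)
exact = x0 * np.exp(a * t)
print(np.max(np.abs(approx - exact)))  # small after a few iterations
```

A handful of iterations already reproduces the exponential path to within the quadrature error of the grid, mirroring the factorial decay M C^{n−1}(t − t0)^n / n! in the proof.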

Definitions [6,7]
Let g be an m.s. bounded function and let h > 0. The "m.s. modulus of continuity of g" is the function

ω(g, h) = sup_{t, t* ∈ T, |t − t*| ≤ h} ||g(t) − g(t*)||.

The function g is said to be m.s. uniformly continuous in T if

lim_{h→0} ω(g, h) = 0.

In the problem (3.22), the convergence depends on the right-hand side f(X(t), t). Applying the previous definition to g(t) = f(X(t), t), we say that f is "randomly bounded uniformly continuous" in S if

lim_{h→0} ω(f(X(t), t), h) = 0

uniformly for the second order processes X(t) with values in S.
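The m.s. modulus of continuity can be estimated numerically. In the sketch below (an illustrative process, not from the paper) we take g(t) = A sin(t) with A ~ N(0, 1), so that ||g(t) − g(t*)|| = |sin t − sin t*| and ω(g, h) = 2 sin(h/2) ≤ h, i.e. g is m.s. uniformly continuous:

```python
import numpy as np

rng = np.random.default_rng(6)

# g(t) = A sin(t), A ~ N(0, 1): ||g(t) - g(t*)|| = |sin t - sin t*|,
# hence omega(g, h) = 2 sin(h/2) <= h, and omega(g, h) -> 0 as h -> 0.
a = rng.standard_normal(50_000)
grid = np.linspace(0.0, 2 * np.pi, 200)

def ms_norm(samples):
    """Mean square norm ||X|| = sqrt(E[X^2]), estimated by Monte Carlo."""
    return np.sqrt(np.mean(samples ** 2))

def ms_modulus(h):
    """Estimate of omega(g, h): sup over the grid of ||g(t + h) - g(t)||."""
    return max(ms_norm(a * (np.sin(t + h) - np.sin(t))) for t in grid)

for h in (0.5, 0.05, 0.005):
    print(h, ms_modulus(h))  # decreases to 0 as h -> 0
```

The estimates track 2 sin(h/2) and vanish with h, which is the defining property of m.s. uniform continuity.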

Random Mean Value Theorem for Stochastic Processes
The aim of this section is to establish a relationship between the increment X(t + h) − X(t) of a stochastic process and its m.s. derivative. This result will be used in the next section to prove the convergence of the random Euler method.

Theorem (3.3.1). Let X(t) be an m.s. continuous second order stochastic process on [t, t + h] ⊆ T. Then there exists ξ ∈ [t, t + h] such that

∫_t^{t+h} X(s) ds = h X(ξ).  (3.26)

The proof
Since X(t) is m.s. continuous, the integral process ∫_t^{t+h} X(s) ds is well defined, and the correlation function Γ(u, v) = E[X(u) X(v)] is well defined and is a deterministic continuous function on T × T. For each fixed r, the function s ↦ Γ(s, r) is continuous, and by the classic mean value theorem for integrals it follows that ∫_t^{t+h} Γ(s, r) ds = h Γ(ξ, r). We must prove that for the value ξ satisfying (3.26) one gets

|| ∫_t^{t+h} X(s) ds − h X(ξ) ||^2 = 0,

and since the squared norm expands into integrals of the correlation function Γ, substituting the mean value relation for each of the resulting terms shows that the expression vanishes, which proves (3.26).

The proof
The result is a direct consequence of Lemma (3.3.2).

The proof of (3.30). Let X(t) be m.s. differentiable on T with m.s. continuous derivative X'(t). Then for t, t + h ∈ T there exists ξ ∈ [t, t + h] such that

X(t + h) − X(t) = ∫_t^{t+h} X'(s) ds = h X'(ξ),  (3.30)

by applying Theorem (3.3.1) to the process X'(t).

The Convergence of Random Euler Scheme
In this section we are interested in the mean square convergence, in the fixed station sense, of the random Euler method defined by

X_{n+1} = X_n + h f(X_n, t_n),  t_n = t0 + nh,  n = 0, 1, 2, ...,  X_0 = X(t0),

where X_n is the approximation of X(t_n) and f satisfies the following conditions:
C1: f(X(t), t) is randomly bounded uniformly continuous;
C2: f satisfies the m.s. Lipschitz condition ||f(X, t) − f(Y, t)|| ≤ k(t) ||X − Y||.

(The estimates of the previous sections were written with t0 as the starting point of problem (3.22); here they are applied starting from t_n, since each Euler step depends only on the previous value.) By Theorem (3.3.1) applied on [t_n, t_{n+1}],

X(t_{n+1}) = X(t_n) + h X'(ξ_n) = X(t_n) + h f(X(ξ_n), ξ_n),  ξ_n ∈ [t_n, t_{n+1}].

Writing e_n = ||X(t_n) − X_n|| and subtracting the scheme,

e_{n+1} ≤ e_n + h ||f(X(ξ_n), ξ_n) − f(X(t_n), t_n)|| + h ||f(X(t_n), t_n) − f(X_n, t_n)||.

Under hypotheses C1, C2, and noting that the two points X(ξ_n) and X(t_n) satisfy ||X(ξ_n) − X(t_n)|| ≤ M h, where M bounds ||f|| along the solution (*), the middle term is at most ω(h) + M h k(t) with ω(h) → 0 as h → 0 by C1, while the last term is at most h k(t) e_n by C2. Then we have

e_{n+1} ≤ (1 + h k(t)) e_n + h (ω(h) + M h k(t)).

Since e_0 = 0, iterating this recursion (a geometric sequence) and using (1 + h k(t))^n ≤ e^{k(t)(t_n − t0)} gives

e_n ≤ (ω(h) + M h k(t)) (e^{k(t)(t_n − t0)} − 1) / k(t).

The first term of the bound, ω(h), tends to zero by C1, and the second term, M h k(t), tends to zero directly; the exponential factor remains bounded on T. The remaining limit computation (taking the logarithm of the two sides and using L'Hospital's rule, taking into account that e_0 = 0) confirms that the bound vanishes as h → 0. Therefore e_n → 0 as h → 0, i.e. the random Euler scheme is m.s. convergent.
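A direct implementation makes the convergence visible. In the sketch below, the test problem X' = −A X, X(0) = X0 and the coefficient distributions are illustrative assumptions; the random Euler scheme is applied pathwise to a large sample of realizations and the mean square error ||X(t_n) − X_n|| at t = 1 is estimated for several step sizes:

```python
import numpy as np

rng = np.random.default_rng(2)

def random_euler(f, x0, t0, t_end, h):
    """Random Euler scheme X_{n+1} = X_n + h f(X_n, t_n),
    applied pathwise to a vector of realizations x0."""
    x = np.asarray(x0, dtype=float).copy()
    n_steps = int(round((t_end - t0) / h))
    for n in range(n_steps):
        x += h * f(x, t0 + n * h)
    return x

# Illustrative test problem: X' = -A X, X(0) = X0, with independent
# A ~ U(0.5, 1.5) and X0 ~ N(1, 0.2^2); exact solution X0 * exp(-A t).
m = 100_000
a = rng.uniform(0.5, 1.5, m)
x0 = rng.normal(1.0, 0.2, m)
exact = x0 * np.exp(-a * 1.0)

for h in (0.1, 0.05, 0.025):
    approx = random_euler(lambda x, t: -a * x, x0, 0.0, 1.0, h)
    print(h, np.sqrt(np.mean((approx - exact) ** 2)))  # error is O(h)
```

Halving h roughly halves the estimated mean square error, matching the first order bound e_n ≤ (ω(h) + M h k)(e^{k(t_n − t0)} − 1)/k.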

The Convergence of Runge-Kutta of Second Order Scheme for Random Differential Equations in Mean Square Sense
In this section we are interested in the mean square convergence, in the fixed station sense, of the random Runge-Kutta method of the second order defined by

X_{n+1} = X_n + (h/2)(k_1 + k_2),  k_1 = f(X_n, t_n),  k_2 = f(X_n + h k_1, t_n + h),

where X_n is the approximation of X(t_n) and f satisfies the conditions C1 and C2 of the previous section. (As before, the estimates of problem (3.22), written with t0 as the starting point, are applied here starting from t_n, since each step depends only on the previous value.) Proceeding as for the random Euler scheme, the exact solution is expanded by Theorem (3.3.1), the scheme is subtracted, and the norm of the two sides is taken. Using the Lipschitz constant from C2 for each of the two stages, and noting that the two points X(ξ_n) and X(t_n) again satisfy ||X(ξ_n) − X(t_n)|| ≤ M h (*), the error e_n = ||X(t_n) − X_n|| satisfies a recursion of the form

e_{n+1} ≤ (1 + h k(t) + (h^2/2) k(t)^2) e_n + h (ω(h) + M h k(t)),

whose iteration is again a geometric sequence. Since e_0 = 0, summing the geometric sequence and bounding the growth factor by an exponential gives

e_n ≤ (ω(h) + M h k(t)) (e^{(k(t) + (h/2) k(t)^2)(t_n − t0)} − 1) / (k(t) + (h/2) k(t)^2).

As h → 0 the first factor tends to zero (the term ω(h) by C1, and the term M h k(t) directly), while the exponential factor remains bounded on T; taking the logarithm of the two sides and applying L'Hospital's rule, taking into account that e_0 = 0, shows that the remaining limit vanishes as well. Therefore e_n → 0 as h → 0, i.e. the random Runge-Kutta scheme of the second order is m.s. convergent.
Copyright © 2011 SciRes. OJDM
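The second order scheme can be implemented in the same pathwise fashion. In the sketch below, the test problem X' = −A X and the coefficient distributions are illustrative assumptions; halving h now reduces the estimated mean square error by roughly a factor of four, consistent with second order accuracy:

```python
import numpy as np

rng = np.random.default_rng(3)

def random_rk2(f, x0, t0, t_end, h):
    """Random second order Runge-Kutta (Heun) scheme, applied pathwise:
    k1 = f(X_n, t_n), k2 = f(X_n + h k1, t_n + h),
    X_{n+1} = X_n + (h/2)(k1 + k2)."""
    x = np.asarray(x0, dtype=float).copy()
    n_steps = int(round((t_end - t0) / h))
    for n in range(n_steps):
        t = t0 + n * h
        k1 = f(x, t)
        k2 = f(x + h * k1, t + h)
        x += (h / 2.0) * (k1 + k2)
    return x

# Illustrative test problem: X' = -A X, X(0) = X0, exact X0 * exp(-A t).
m = 100_000
a = rng.uniform(0.5, 1.5, m)
x0 = rng.normal(1.0, 0.2, m)
exact = x0 * np.exp(-a * 1.0)

for h in (0.1, 0.05):
    approx = random_rk2(lambda x, t: -a * x, x0, 0.0, 1.0, h)
    print(h, np.sqrt(np.mean((approx - exact) ** 2)))
```

For the same step sizes the errors are far smaller than those of the Euler scheme, which is the practical payoff of the extra stage.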

Some Results
Theorem 4.1. If the sequence {X_n} converges in mean square to X, then Var[X_n] → Var[X].
Definition 4.1 [13]. "The convergence in mean square implies the convergence of the expectations, E[X_n] → E[X]." Then we have Var[X_n] = E[X_n^2] − (E[X_n])^2 → E[X^2] − (E[X])^2 = Var[X], so the mean and the variance of the numerical solution converge to those of the exact solution.
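The statement can be checked numerically. In the sketch below (an illustrative construction), X_n = X + Z/n converges to X in mean square, and the sample mean and variance of X_n converge to those of X; here E[X] = 2, Var[X] = 1, and Var[X_n] = 1 + 1/n^2:

```python
import numpy as np

rng = np.random.default_rng(4)

# X_n = X + Z/n with X ~ N(2, 1), Z ~ N(0, 1) independent:
# E[X_n] = 2 for all n, Var[X_n] = 1 + 1/n^2 -> 1 = Var[X].
x = rng.normal(2.0, 1.0, 500_000)
z = rng.standard_normal(500_000)
for n in (1, 10, 100):
    xn = x + z / n
    print(n, xn.mean(), xn.var())  # mean -> 2, variance -> 1
```

The sample moments approach E[X] = 2 and Var[X] = 1, as Theorem 4.1 predicts for an m.s. convergent sequence.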

Numerical Examples
We consider first order differential equations with a random term and a random initial condition.
Example (5.1)
A. Using the random Euler method: applying the scheme step by step yields the general numerical solution, which can be written in closed form. Taking the limit as h → 0, the numerical solution converges in mean square to the exact one, and Theorem (4.1) can be verified: the mean and the variance of the numerical solution converge to the mean and the variance of the exact solution.
B. Using the random Runge-Kutta method of the second order: the general solution is obtained in the same way and can also be written in closed form; taking the limit shows the same mean square convergence and the same convergence of the mean and the variance.
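As a worked counterpart to Example (5.1), the following Python sketch treats a hypothetical stand-in problem: the equation X' = −X + B, X(0) = X0 and the distributions of B and X0 are illustrative assumptions. Both schemes are applied pathwise, and the mean square error together with the sample mean and variance at t = 1 are compared with the exact solution X(t) = B + (X0 − B) e^{−t}:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical example: X' = -X + B, X(0) = X0, with independent
# B ~ N(1, 0.5^2) and X0 ~ N(0, 1); exact solution X(t) = B + (X0 - B) e^{-t}.
m = 200_000
b = rng.normal(1.0, 0.5, m)
x0 = rng.normal(0.0, 1.0, m)
t_end = 1.0
exact = b + (x0 - b) * np.exp(-t_end)

def step_euler(x, h):
    return x + h * (-x + b)

def step_rk2(x, h):
    k1 = -x + b
    k2 = -(x + h * k1) + b
    return x + (h / 2.0) * (k1 + k2)

results = {}
for name, step in (("euler", step_euler), ("rk2", step_rk2)):
    h, x = 0.01, x0.copy()
    for _ in range(int(round(t_end / h))):
        x = step(x, h)
    results[name] = np.sqrt(np.mean((x - exact) ** 2))
    print(name, results[name], x.mean(), x.var())  # compare with exact moments
```

Both schemes reproduce the exact mean E[X(1)] = 1 − e^{−1} and the exact variance to sampling accuracy, with the second order scheme giving a much smaller mean square error, in line with Theorem (4.1) and the convergence results of Section 3.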

Conclusions
The initial value first order random differential equations can be solved numerically using the random Euler and random Runge-Kutta methods in the mean square sense. The existence and uniqueness of the solution have been proved. The convergence of the presented numerical techniques has been proven in the mean square sense. The results of the paper have been illustrated through some examples.
