
In this article, we consider the construction of an SVIR (Susceptible, Vaccinated, Infected, Recovered) stochastic compartmental model of measles. We prove that the deterministic solution is asymptotically the mean of the stochastic solution in the case of small population size. The choice of this model takes into account the random fluctuations inherent in the epidemiological characteristics of rural populations of Niger, notably a high prevalence of measles in children under 5 coupled with very low immunization coverage.

Measles is caused by a virus belonging to the morbillivirus group. It may infect other primates, but is largely specialized to its human host. It is transmitted by direct contact with an infected person or through the air [

A fundamental concept arising from the measles transmission process is the basic reproduction number R_{0}, defined as the average number of secondary infections produced when one infected individual is introduced into a host population where everyone is susceptible [ ]. R_{0} is a threshold parameter for the spread of measles: if R_{0} < 1, the disease will eventually disappear from the population, while if R_{0} > 1, the disease can spread as an epidemic in the absence of health interventions. In a small, isolated population, a measles epidemic cannot persist [

The models most used for infectious diseases are the compartmental models, originally introduced by Kermack and McKendrick, and their variants [

In our stochastic SVIR model, we consider the effective reproduction number R_{p}, characterizing the vaccination effort to control the spread of the disease, where p is the proportion of newborns vaccinated and immunized. In the total absence of vaccination against measles (p = 0), we recover the basic reproduction number R_{0} [

The rest of the paper is organized as follows: Section 2 describes in detail the deterministic SVIR model and the equilibrium points of its system of differential equations. In Section 3, we formulate our stochastic SVIR model by means of the Kolmogorov forward equations, more precisely through a system of differential equations for the mathematical expectations of the numbers of susceptible, infected and immune (recovered and vaccinated) individuals. Section 4 is devoted to the study of the asymptotic behavior of our stochastic model, followed by numerical simulations in the fifth section. Finally, in the last section, we discuss our stochastic approach and draw conclusions.

In what follows, S(t), I(t) and R(t) denote respectively the numbers of susceptible, infected and immunized (vaccinated susceptible and recovered) individuals at time t.

In this model, new susceptibles (newborns) are introduced at a constant rate n. A fraction pn of newborns has acquired immunity by vaccination; the remaining fraction (1 − p)n stays susceptible. In addition, we assume that:

· The natural death rate is δ in each compartment.

· Infectious patients recover at rate γ.

· Infectious patients have an additional death rate μ due to measles.

· We consider the standard incidence f(I, S) = βSI, where β is the disease transmission coefficient: the average probability of an adequate contact (a contact sufficient for transmission) between an infected and a susceptible individual per unit of time.

In

The dynamics of a well-mixed population can be described by the differential equations:

$$\begin{cases} \dfrac{dS}{dt} = n(1-p) - \beta S I - \delta S \\[4pt] \dfrac{dI}{dt} = \beta S I - (\delta + \mu + \gamma) I \\[4pt] \dfrac{dR}{dt} = n p + \gamma I - \delta R \end{cases} \qquad (1)$$

Remark. 1) In the disease-free case, the system (1) admits an equilibrium point $(S_0^*, I_0^*, R_0^*)$ with

$$S_0^* = \frac{(1-p)n}{\delta}, \qquad I_0^* = 0 \qquad \text{and} \qquad R_0^* = \frac{np}{\delta} \qquad (2)$$

Setting $R_0 = \frac{\beta n}{\delta(\delta+\mu+\gamma)}$ and $R_p = (1-p)R_0$, this equilibrium point is asymptotically stable when $R_p < 1$ [

2) If $R_p > 1$, an endemic equilibrium point $(S_e^*, I_e^*, R_e^*)$ appears, which is asymptotically stable [

$$S_e^* = \frac{\delta+\mu+\gamma}{\beta}, \qquad I_e^* = \frac{(R_p - 1)\delta}{\beta} \qquad \text{and} \qquad R_e^* = \frac{np\beta + \gamma\delta(R_p - 1)}{\delta\beta} \qquad (3)$$
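As a sanity check, the equilibrium formulas (2) and (3) can be verified numerically by substituting them into the right-hand side of system (1). The following Python sketch is our own illustration (not part of the original analysis); the parameter values are the ones used later in the simulation section.

```python
# Check that the disease-free equilibrium (2) and the endemic
# equilibrium (3) annul the right-hand side of system (1).
# Parameter values are taken from the simulation section.
beta, delta, mu, gamma = 0.69, 0.25, 0.02, 0.5
n, p = 3.5, 0.51

def rhs(S, I, R):
    """Right-hand side of the deterministic SVIR system (1)."""
    dS = n * (1 - p) - beta * S * I - delta * S
    dI = beta * S * I - (delta + mu + gamma) * I
    dR = n * p + gamma * I - delta * R
    return dS, dI, dR

R0 = beta * n / (delta * (delta + mu + gamma))   # basic reproduction number
Rp = (1 - p) * R0                                # effective reproduction number

# Disease-free equilibrium (2)
dfe = ((1 - p) * n / delta, 0.0, n * p / delta)
# Endemic equilibrium (3), which exists when Rp > 1
endemic = ((delta + mu + gamma) / beta,
           (Rp - 1) * delta / beta,
           (n * p * beta + gamma * delta * (Rp - 1)) / (delta * beta))
```

With these values, Rp ≈ 6.15 > 1, so both equilibria exist, and each triple makes all three derivatives vanish.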

Let $(X_t)_{t\ge0} = (S(t), I(t))_{t\ge0}$ be a continuous-time homogeneous Markov chain on the denumerable state space $\mathbb{N}^2 = \{0,1,2,\cdots\}^2$. First, assume that $\Delta t$ can be chosen sufficiently small that at most one change of state occurs during the time interval $\Delta t$: either a new infection, a birth, a death, or a recovery. From the state $\{X_t = (s,i)\}$, only the following states are accessible:

( s , i ) ; ( s + 1, i ) ; ( s , i − 1 ) ; ( s − 1, i ) ; ( s − 1, i + 1 ) .

corresponding to the possible transitions starting from the state ( s , i ) . (See

Let V ( s , i ) be the set of neighbors of state ( s , i ) :

V ( s , i ) = { ( s + 1, i ) ; ( s − 1, i + 1 ) ; ( s − 1, i ) ; ( s , i − 1 ) }

Setting τ ( s , i ) = n ( 1 − p ) + β i s + δ s + ( μ + δ + γ ) i , the transition rates are defined by:

$$\tau_{(s,i),(k,l)} = \begin{cases} n(1-p) & (k,l) = (s+1, i),\ s \ge 0,\ i \ge 0 \\ \beta i s & (k,l) = (s-1, i+1),\ s \ge 1,\ i \ge 0 \\ \delta s & (k,l) = (s-1, i),\ s \ge 1,\ i \ge 0 \\ (\mu+\delta+\gamma)\, i & (k,l) = (s, i-1),\ s \ge 0,\ i \ge 1 \end{cases} \qquad (4)$$
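The rates in (4) lend themselves directly to exact simulation of the chain by Gillespie's algorithm, the standard approach behind Monte Carlo simulation of such jump processes. The sketch below is our own illustrative implementation (the paper's simulations were done in MATLAB), with parameter defaults borrowed from the simulation section:

```python
import random

def gillespie_svir(s, i, t_max, beta=0.69, delta=0.25, mu=0.02,
                   gamma=0.5, n=3.5, p=0.51, seed=0):
    """Simulate (S(t), I(t)) with the transition rates (4) and
    return the state reached at time t_max."""
    rng = random.Random(seed)
    t = 0.0
    while True:
        # Rates out of the current state (s, i), in the order of (4).
        rates = (n * (1 - p),               # (s, i) -> (s+1, i): birth of a susceptible
                 beta * s * i,              # (s, i) -> (s-1, i+1): new infection
                 delta * s,                 # (s, i) -> (s-1, i): death of a susceptible
                 (mu + delta + gamma) * i)  # (s, i) -> (s, i-1): recovery or death
        total = sum(rates)                  # this is tau(s, i); always > 0 since n(1-p) > 0
        t += rng.expovariate(total)         # exponential holding time
        if t > t_max:
            return s, i
        u = rng.random() * total            # pick a transition proportionally to its rate
        if u < rates[0]:
            s += 1
        elif u < rates[0] + rates[1]:
            s, i = s - 1, i + 1
        elif u < rates[0] + rates[1] + rates[2]:
            s -= 1
        else:
            i -= 1
```

Averaging I(t) over many such runs gives the quantity denoted mI in the simulation section.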

The transition probabilities of X t = ( S ( t ) , I ( t ) ) are defined by

$$P_{(s,i),(k,l)}(\Delta t) = \mathbb{P}\{X_{t+\Delta t} = (k,l) \mid X_t = (s,i)\}$$

We have, for all $s \ge 0$:

$$P_{(s,i),(k,l)}(\Delta t) = \begin{cases} \tau_{(s,i),(k,l)}\,\Delta t + o(\Delta t) & \text{if } (k,l) \in V(s,i) \\ 1 - \tau(s,i)\,\Delta t + o(\Delta t) & \text{if } (k,l) = (s,i) \end{cases} \quad \text{for all } i > 0, \qquad \text{and} \qquad P_{(s,0),(s,0)}(\Delta t) = 1 \qquad (5)$$

The distribution of $X_t$ is given by $P_{s,i}(t) = 0$ if $s < 0$ or $i < 0$, and $P_{s,i}(t) = \mathbb{P}\{X_t = (s,i)\}$ if $s \ge 0$, $i \ge 0$. The marginal distributions are therefore:

$$\mathbb{P}\{I(t) = i\} = \sum_{s \ge 0} P_{s,i}(t) \qquad \text{and} \qquad \mathbb{P}\{S(t) = s\} = \sum_{i \ge 0} P_{s,i}(t)$$

From Equation (5), we obtain the Kolmogorov forward equations: for all $s \ge 0$ and $i \ge 0$,

$$\frac{dP_{s,i}}{dt} = n(1-p)\left[P_{s-1,i} - P_{s,i}\right] + \beta\left[(s+1)(i-1)P_{s+1,i-1} - s i P_{s,i}\right] + (\mu+\gamma+\delta)\left[(i+1)P_{s,i+1} - i P_{s,i}\right] + \delta\left[(s+1)P_{s+1,i} - s P_{s,i}\right] \qquad (6)$$

Hence the system of differential equations satisfied by the mathematical expectations:

$$\begin{cases} \dfrac{d\bar S}{dt} = (1-p)n - \beta \bar S \bar I - \delta \bar S - \beta\,\mathrm{cov}_{SI} \\[4pt] \dfrac{d\bar I}{dt} = \beta \bar S \bar I - (\mu+\delta+\gamma)\bar I + \beta\,\mathrm{cov}_{SI} \\[4pt] \dfrac{d\bar R}{dt} = np + \gamma \bar I - \delta \bar R \end{cases} \qquad (7)$$

where

$$\bar S(t) = \sum_{s=0}^{+\infty}\sum_{i=0}^{+\infty} s\, P_{s,i}(t), \qquad \bar I(t) = \sum_{s=0}^{+\infty}\sum_{i=0}^{+\infty} i\, P_{s,i}(t),$$

$$\mathrm{cov}_{SI}(t) = \sum_{s=0}^{+\infty}\sum_{i=0}^{+\infty} s i\, P_{s,i}(t) - \bar S(t)\bar I(t) \qquad \text{and} \qquad \bar R(t) = \sum_{r=0}^{+\infty} r\,\mathbb{P}\{R(t) = r\}$$
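The passage from (6) to (7) can be checked numerically: for any distribution with finite support, summing (6) over all states gives 0 (probability is conserved), and weighting the sum by s or i reproduces the first two equations of (7). The following sketch is our own illustration, using the simulation section's parameters and an arbitrary random distribution:

```python
import random

# Check two consequences of the forward equation (6) on a distribution
# with finite support {0..N}^2: total probability is conserved, and the
# first moments obey the first two equations of system (7).
beta, delta, mu, gamma, n, p = 0.69, 0.25, 0.02, 0.5, 3.5, 0.51
N = 12
rng = random.Random(2)

# Random probability distribution on {0..N}^2, padded with zeros so that
# the shifted indices appearing in (6) stay inside the grid.
P = [[0.0] * (N + 3) for _ in range(N + 3)]
for s in range(N + 1):
    for i in range(N + 1):
        P[s][i] = rng.random()
total = sum(map(sum, P))
for s in range(N + 1):
    for i in range(N + 1):
        P[s][i] /= total

def dP(s, i):
    """Right-hand side of the Kolmogorov forward equation (6)."""
    inflow_birth = P[s - 1][i] if s > 0 else 0.0
    inflow_infec = (s + 1) * (i - 1) * P[s + 1][i - 1] if i > 0 else 0.0
    return (n * (1 - p) * (inflow_birth - P[s][i])
            + beta * (inflow_infec - s * i * P[s][i])
            + (mu + gamma + delta) * ((i + 1) * P[s][i + 1] - i * P[s][i])
            + delta * ((s + 1) * P[s + 1][i] - s * P[s][i]))

# All mass created or destroyed by (6) lives on {0..N+1}^2 here.
grid = [(s, i) for s in range(N + 2) for i in range(N + 2)]
d_total = sum(dP(s, i) for s, i in grid)           # should be 0
d_Sbar = sum(s * dP(s, i) for s, i in grid)        # d(E[S])/dt
d_Ibar = sum(i * dP(s, i) for s, i in grid)        # d(E[I])/dt
Sbar = sum(s * P[s][i] for s, i in grid)
Ibar = sum(i * P[s][i] for s, i in grid)
ESI = sum(s * i * P[s][i] for s, i in grid)        # = Sbar*Ibar + cov_SI
```

The moment derivatives computed this way match the first two lines of (7) with E[SI] = S̄Ī + cov_SI.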

In this section, we establish that extinction of the epidemic occurs almost surely, independently of the value of R_{p}, even though this is not a priori guaranteed in infinite dimension.

Let us consider the embedded process $(Y_k)_{k\in\mathbb{N}}$ of $(X_t)_{t\ge0}$: the discrete Markov chain given by the sequence of values taken by $(X_t)_{t\ge0}$ at its transition times.

Setting $\Delta Y_k = Y_{k+1} - Y_k$, we have:

$$\mathbb{P}\{\Delta Y_k = (e_1, e_2) \mid Y_k = (s,i)\} = \begin{cases} \dfrac{n(1-p)}{\tau(s,i)} & \text{if } (e_1,e_2) = (1,0) \\[4pt] \dfrac{\beta s i}{\tau(s,i)} & \text{if } (e_1,e_2) = (-1,1) \\[4pt] \dfrac{\delta s}{\tau(s,i)} & \text{if } (e_1,e_2) = (-1,0) \\[4pt] \dfrac{(\mu+\gamma+\delta)\, i}{\tau(s,i)} & \text{if } (e_1,e_2) = (0,-1) \end{cases} \qquad (8)$$

Recall that $\tau(s,i) = n(1-p) + \beta s i + \delta s + (\mu+\gamma+\delta)\, i$.

To establish our results, we need the following proposition [

Proposition 1. Set $s_0 = \max\left(\frac{n(1-p)}{\delta}, \frac{\mu+\gamma+\delta}{\beta}\right)$, $D_0 = \{(s,i),\ s > s_0,\ i > 0\}$, $D_1 = \{(s,0),\ s > s_0\}$, $D_2 = \{(s_0,i),\ i > 0\}$, and let $d(s,i) = E[\Delta Y_k \mid Y_k = (s,i)]$ be the drift vector, with $d_j(s,i) = d(s,i)$ for $(s,i) \in D_j$, $0 \le j \le 2$. Then

$$d(s,i) = \left(\frac{n(1-p) - \beta s i - \delta s}{\tau(s,i)},\ \frac{\beta s i - (\mu+\gamma+\delta)\, i}{\tau(s,i)}\right) \qquad (9)$$
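On the region $s > s_0$, $i > 0$, the signs of the drift components follow directly from the definition of $s_0$: the first component is negative because $s > n(1-p)/\delta$, and the second is positive because $s > (\mu+\gamma+\delta)/\beta$. A small numerical check, our own sketch using the simulation section's parameter values:

```python
# Numerical check of the signs of the drift vector (9) for s > s0, i > 0.
beta, delta, mu, gamma, n, p = 0.69, 0.25, 0.02, 0.5, 3.5, 0.51

def drift(s, i):
    """Drift vector d(s, i) of the embedded chain, Equation (9)."""
    tau = n * (1 - p) + beta * s * i + delta * s + (mu + gamma + delta) * i
    return ((n * (1 - p) - beta * s * i - delta * s) / tau,
            (beta * s * i - (mu + gamma + delta) * i) / tau)

# s0 = max(n(1-p)/delta, (mu+gamma+delta)/beta); here 6.86 vs about 1.12
s0 = max(n * (1 - p) / delta, (mu + gamma + delta) / beta)
```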

Lemma 2. For all $j \in \{0,1,2\}$, we set $d_j = d(s,i)$ for $(s,i) \in D_j$. We denote by $\psi = \widehat{(n_1, d_0)}$ the angle between $n_1$ and $d_0$, by $\psi_1 = \widehat{(n_1, d_1)}$ the angle between $n_1$ and $d_1$, and by $\psi_2 = \widehat{(n_2, d_2)}$ the angle between $n_2$ and $d_2$, where $n_1 = (0,1)$ and $n_2 = (1,0)$.

We have the following results:

1) $0 < \psi < \psi_1 = \frac{\pi}{2} < \psi_2 \le \pi$.

2) If $R_p \le 1$: $s_0 = \frac{\mu+\gamma+\delta}{\beta}$, $\psi_2 = \pi$ and $\frac{\pi}{4} < \psi < \psi_1 = \frac{\pi}{2}$.

3) If $R_p > 1$: $s_0 = \frac{n(1-p)}{\delta}$ and $\frac{\pi}{2} < \psi_2 < \pi$.

Proof: See Appendix.

Definition 4.1. Let $\phi(r,\theta) = r^\alpha \cos(\alpha\theta - \theta_1)$, where $\alpha = \frac{2(\theta_1 + \theta_2)}{\pi}$, for all reals $r \ge 0$ and $\theta \in [0, \frac{\pi}{2}]$, with

· $\theta_1 \in \left]0, \frac{\pi}{4}\right[$ and $\theta_2 \in \left]\frac{\pi}{2} - \theta_1, \frac{\pi}{2}\right[$, in the case $R_p \le 1$;

· $\theta_1 \in \left]-\frac{\pi}{2}, \inf\left(\psi - \frac{\pi}{2}, -\frac{\pi}{4}\right)\right[$ and $\theta_2 \in \left]-\theta_1, \frac{\pi}{2}\right[$, in the case $R_p > 1$,

where $\psi = \widehat{(n_1, d_0)}$ is the angle between $n_1$ and $d_0$ introduced in Lemma 2.

We say that $\phi$ is the Lyapunov function intervening in the study of the recurrence or transience of $X_t$.

Remark. If R p ≤ 1 , we obtain 1 < α < 2 , whereas if R p > 1 , 0 < α < 1 .
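This remark can be checked by sampling $\theta_1$ and $\theta_2$ from the ranges in Definition 4.1. In the case $R_p > 1$ the range for $\theta_1$ depends on the angle $\psi$; in the sketch below (our own illustration) we fix the value $\psi = \pi/3$, which is an assumption on our part, merely consistent with $0 < \psi < \pi/2$ from Lemma 2:

```python
import math
import random

def alpha(theta1, theta2):
    """Exponent of the Lyapunov function in Definition 4.1."""
    return 2 * (theta1 + theta2) / math.pi

rng = random.Random(1)
psi = math.pi / 3   # illustrative value of psi (our assumption, not from the paper)

alphas_sub = []     # case Rp <= 1: theta1 in (0, pi/4), theta2 in (pi/2 - theta1, pi/2)
alphas_super = []   # case Rp > 1: theta1 in (-pi/2, min(psi - pi/2, -pi/4)), theta2 in (-theta1, pi/2)
for _ in range(1000):
    t1 = rng.uniform(0.0, math.pi / 4)
    t2 = rng.uniform(math.pi / 2 - t1, math.pi / 2)
    alphas_sub.append(alpha(t1, t2))

    t1 = rng.uniform(-math.pi / 2, min(psi - math.pi / 2, -math.pi / 4))
    t2 = rng.uniform(-t1, math.pi / 2)
    alphas_super.append(alpha(t1, t2))
```

Every sampled exponent falls in (1, 2) in the first case and in (0, 1) in the second, as the remark states.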

Lemma 3. Let $\phi$ be the Lyapunov function. For all reals $r \ge 0$ and $\theta \in [0, \frac{\pi}{2}]$, we have the following results:

1) $\nabla\phi(r,\theta)\cdot d_0 < 0$, $\nabla\phi(r,0)\cdot d_1 < 0$ and $\nabla\phi(r,\pi/2)\cdot d_2 < 0$.

2) There are real constants $C_0$ and $C_1$ such that, uniformly in $\theta$, we have:

a) $\limsup_{r \to +\infty} r^{1-\alpha}\,\nabla\phi(r,\theta)\cdot d_0 \le C_0 < 0$

b) $\limsup_{r \to +\infty} r^{2-\alpha}\,|D_{lj}\phi(r,\theta)| \le C_1$ and c) $\limsup_{r \to +\infty} \phi(r,\theta) = +\infty$

3) $\limsup_{r \to +\infty} r^{1-\alpha}\,\nabla\phi(r,0)\cdot d_1 \le C_0 < 0$ and $\limsup_{r \to +\infty} r^{1-\alpha}\,\nabla\phi(r,\pi/2)\cdot d_2 \le C_0 < 0$

Here $D_{lj}\phi(r,\theta)$ denotes the second partial derivative of $\phi$ with respect to $x_l$ ($l = 1,2$) and $x_j$ ($j = 1,2$), where $r$ and $\theta$ are the polar coordinates of $x = (x_1, x_2)$.

Proof: See Appendix.

Remark. Let $x = (x_1, x_2) \in D_j$, $0 \le j \le 2$, and $A_j(x) = (\Delta Y_k \mid Y_k = x)$; we obtain

$$d_j(x) = E[A_j(x)] \qquad \text{and} \qquad A_j(x) \in \{(1,0), (-1,1), (-1,0), (0,-1)\}$$

On $\{Y_k = (s,i)\}$ we have $\mathbb{P}[\|\Delta Y_k\|^2 = 2] = \frac{\beta s i}{\tau(s,i)} = 1 - \mathbb{P}[\|\Delta Y_k\|^2 = 1]$ and

$$E[\|A_0(s,i)\|^2] = 1 + \frac{\beta s i}{\tau(s,i)} < 2, \qquad E[\|A_1(s,i)\|^2] = 1 < 2, \qquad E[\|A_2(s_0,i)\|^2] = 1 + \frac{\beta s_0 i}{\tau(s_0,i)} < 2$$
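The second-moment identity of this remark can be confirmed by enumerating the jump distribution (8); this is our own illustrative check, with the simulation section's parameters:

```python
# Enumerate the jump distribution (8) and check that
# E[||Delta Y_k||^2 | Y_k = (s, i)] = 1 + beta*s*i/tau(s, i) < 2.
beta, delta, mu, gamma, n, p = 0.69, 0.25, 0.02, 0.5, 3.5, 0.51

def tau(s, i):
    return n * (1 - p) + beta * s * i + delta * s + (mu + gamma + delta) * i

def jump_second_moment(s, i):
    """E[||Delta Y_k||^2 | Y_k = (s, i)] computed from the probabilities (8)."""
    t = tau(s, i)
    jumps = {(1, 0): n * (1 - p) / t,      # birth
             (-1, 1): beta * s * i / t,    # infection: the only jump of squared norm 2
             (-1, 0): delta * s / t,       # death of a susceptible
             (0, -1): (mu + gamma + delta) * i / t}  # recovery or death
    assert abs(sum(jumps.values()) - 1.0) < 1e-12    # (8) is a probability distribution
    return sum((e1 * e1 + e2 * e2) * q for (e1, e2), q in jumps.items())
```

The bound is strict because $\tau(s,i) > \beta s i$ whenever $n(1-p) > 0$.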

An immediate consequence of Lemma 3 is the following:

Lemma 4. Let $x = (x_1, x_2)$ and $y = (y_1, y_2)$ be two vectors of the plane, with $\|x\| = \sqrt{x_1^2 + x_2^2}$ and $x\cdot y = x_1 y_1 + x_2 y_2$. Then there exist $\varepsilon > 0$ and $K > 0$ such that:

1) if $R_p > 1$, then $E[\phi(Y_{k+1}) - \phi(Y_k) \mid Y_k = x] \le 0$ for all $\|x\| \ge K$;

2) if $R_p \le 1$, then $E[\phi(Y_{k+1}) - \phi(Y_k) \mid Y_k = x] \le -\varepsilon$ for all $\|x\| \ge K$.

Proof: See Appendix.

Lemma 5. Let $(Y_k)_{k\in\mathbb{N}}$ be the embedded process of $(X_t)_{t\ge0}$, the discrete Markov chain given by the sequence of values taken by $(X_t)_{t\ge0}$ at its transition times. Then:

1) if $R_p \le 1$, the Markov chain $(Y_k)_{k\in\mathbb{N}}$ is positive recurrent;

2) if $R_p > 1$, the Markov chain $(Y_k)_{k\in\mathbb{N}}$ is null recurrent.

Proof: See Appendix.

We can state now our main results:

Theorem 6. Let $T_0 = \inf\{t \ge 0,\ I(t) = 0\}$, with $\inf\emptyset = +\infty$. Then, for all $i \in \mathbb{N}^*$, $\mathbb{P}_i[T_0 < +\infty] = 1$ and $\lim_{t\to+\infty}\mathbb{P}_i[I(t) = 0] = 1$.

Proof: This result is a consequence of Lemma 5 and of the properties of recurrent Markov chains with a nonempty absorbing set of states (see [

Theorem 7. Let $T_0 = \inf\{t \ge 0,\ I(t) = 0\}$ with $\inf\emptyset = +\infty$, and let $\left(S_0^* = \frac{(1-p)n}{\delta},\ I_0^* = 0,\ R_0^* = \frac{np}{\delta}\right)$.

If $R_p \le 1$, then (1) $E[T_0] < +\infty$ and (2) $\lim_{t\to+\infty}(\bar S(t), \bar I(t), \bar R(t)) = (S_0^*, I_0^*, R_0^*)$.

Proof: The first result reflects the positive recurrence obtained in Lemma 5. The second assertion follows from the fact that the Markov chain is absorbing and that, once in the absorbing state, the correlation between $S(t)$ and $I(t)$ is identically zero. Therefore, asymptotically, the deterministic equations and the mathematical-expectation equations have the same equilibrium points. □

Theorem 8. Let $T_0 = \inf\{t \ge 0,\ I(t) = 0\}$ with $\inf\emptyset = +\infty$, and let $\left(S_e^* = \frac{\delta+\mu+\gamma}{\beta},\ I_e^* = \frac{(R_p-1)\delta}{\beta},\ R_e^* = \frac{np\beta + \gamma\delta(R_p-1)}{\delta\beta}\right)$.

If $R_p > 1$, then (1) $E[T_0] = +\infty$ and (2) $\lim_{t\to+\infty}(\bar S(t), \bar I(t), \bar R(t)) = (S_e^*, I_e^*, R_e^*)$.

Proof: The first assertion is proved by observing that there are asymptotically two distinct equilibrium points, and necessarily $E[T_0] = +\infty$ in the case $R_p > 1$; otherwise the two equilibrium points would coincide, by uniqueness of the stationary measure.

The proof of the second assertion is similar to that of the second assertion of Theorem 7. □

In what follows, we denote by Ī and dI the numerical solutions of Equations (7) and (1), respectively. The average of the simulated realizations of the number of infected I(t) is denoted by mI. We used MATLAB for the Monte Carlo simulations and R for the graphics.

Consider an initial population of S_0 = 100 susceptibles and an initial number of I_0 = 2 infected, with the following parameter values:

β = 0.69 ; δ = 0.25 ; μ = 0.02 ; γ = 0.5 ; n = 3.5 ; p = 0.51 ; R p = 6.15

In

For the considered parameter values, the endemic equilibrium value is I_e* ≈ 1.8650. The simulations gave the following values:

Ī(26) ≈ 1.8648, dI(26) ≈ 1.8650, mI(26) ≈ 0 (see

Ī(52) ≈ 1.8650, dI(52) ≈ 1.8650, mI(52) ≈ 0 (see
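The reported deterministic values can be reproduced by integrating system (1) directly. The fixed-step RK4 sketch below is our own (the authors used MATLAB); the step size and the initial value R(0) = 0 are our choices, since the paper specifies only S_0 and I_0:

```python
def simulate_deterministic(S0=100.0, I0=2.0, Rec0=0.0, t_max=52.0, dt=0.002,
                           beta=0.69, delta=0.25, mu=0.02, gamma=0.5,
                           n=3.5, p=0.51):
    """Fixed-step RK4 integration of the deterministic system (1).
    Rec0 is the assumed initial number of immunized individuals."""
    def f(y):
        S, I, R = y
        return (n * (1 - p) - beta * S * I - delta * S,
                beta * S * I - (delta + mu + gamma) * I,
                n * p + gamma * I - delta * R)

    y = (S0, I0, Rec0)
    for _ in range(int(round(t_max / dt))):
        k1 = f(y)
        k2 = f(tuple(v + 0.5 * dt * k for v, k in zip(y, k1)))
        k3 = f(tuple(v + 0.5 * dt * k for v, k in zip(y, k2)))
        k4 = f(tuple(v + dt * k for v, k in zip(y, k3)))
        y = tuple(v + dt / 6.0 * (a + 2 * b + 2 * c + d)
                  for v, a, b, c, d in zip(y, k1, k2, k3, k4))
    return y
```

With these parameters the endemic equilibrium is asymptotically stable, so by t = 52 the trajectory should have settled near (S_e*, I_e*, R_e*) of Equation (3).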

In

S_0 = 100; I_0 = 2; β = 0.69; δ = 0.25; μ = 0.02; γ = 0.5; n = 3.5; p = 0.51; t ∈ [0, 26] and R_p = 6.15

S_0 = 100; I_0 = 2; β = 0.69; δ = 0.25; μ = 0.02; γ = 0.5; n = 3.5; p = 0.51; t ∈ [0, 26]; R_p = 6.15

In

This paper presents a stochastic SVIR compartmental model of measles. A comparison of our stochastic model with the corresponding deterministic model indicates that the deterministic solution is asymptotically the mean of the stochastic solution. It is well known that mI, obtained by random sampling (Monte Carlo methods) before extinction, is an estimate of Ī. Our results show that the three trajectories Ī, dI and mI asymptotically coincide: the deterministic solution is the mean of the stochastic solution.

In addition, unlike the deterministic approach, we show that the epidemic goes extinct with probability 1, independently of the threshold R_p. More precisely, if R_p ≤ 1 extinction occurs in a time of finite mean, and if R_p > 1 the disease eventually disappears in a time of infinite mean.

One peculiarity of our model is that the population size is not constant and can become quite large. In this setting, extinction of the process is not guaranteed, unlike the constant-population case. This led us to focus on the probability of absorption of the process.

On the other hand, when R_0 > 1, it is well known for the constant-population SIR model [

To understand the dynamics of the system before absorption, a commonly used measure is the quasi-stationary distribution [

If the set of transient states is finite and irreducible, it is well known that the quasi-stationary distribution exists [

The emergence of epidemics often reveals complex dynamic relationships between susceptible individuals, pathogens and their environments; these relationships give rise to seasonal epidemic cycles that vary over time [

The authors declare no conflicts of interest regarding the publication of this paper.

Seydou, M. and Moussa Tessa, O. (2021) A Stochastic SVIR Model for Measles. Applied Mathematics, 12, 209-223. https://doi.org/10.4236/am.2021.123013

Proof of Lemma 2:

The lemma is a consequence of the definition of $R_p$ and of the expressions for $d_0$, $d_1$ and $d_2$:

$$d_0 = \left(\frac{n(1-p) - \beta s i - \delta s}{n(1-p) + \beta s i + \delta s + (\mu+\gamma+\delta) i},\ \frac{\beta s i - (\mu+\gamma+\delta) i}{n(1-p) + \beta s i + \delta s + (\mu+\gamma+\delta) i}\right)$$

$$d_1 = \left(\frac{n(1-p) - \delta s}{n(1-p) + \delta s},\ 0\right)$$

$$d_2 = \left(\frac{n(1-p) - \beta s_0 i - \delta s_0}{n(1-p) + \beta s_0 i + \delta s_0 + (\mu+\gamma+\delta) i},\ \frac{\beta s_0 i - (\mu+\gamma+\delta) i}{n(1-p) + \beta s_0 i + \delta s_0 + (\mu+\gamma+\delta) i}\right) \qquad (10)$$

We can easily determine the signs of the components of the $d_j$; indeed, recalling that

$$s_0 = \max\left(\frac{n(1-p)}{\delta}, \frac{\mu+\gamma+\delta}{\beta}\right) \qquad \text{and} \qquad R_p = \frac{\beta n (1-p)}{\delta(\mu+\gamma+\delta)},$$

1) $d_{0x} < 0 < d_{0y}$; $d_{1y} = 0$; $d_{2x} < 0 \le d_{2y}$.

2) If $R_p \le 1$ we have: $0 < d_{0y} < -d_{0x}$; $d_{1y} = 0$; $d_{2x} < 0 = d_{2y}$.

3) If $R_p > 1$ we have: $0 < d_{0y} < -d_{0x}$; $d_{1y} = 0$; $d_{2x} < 0 < d_{2y}$. □

Proof of Lemma 3:

In polar coordinates, we have $\nabla\phi(r,\theta) = \alpha r^{\alpha-1}\left(\cos(\alpha\theta - \theta_1), -\sin(\alpha\theta - \theta_1)\right)$. To establish the result, we distinguish the two cases $R_p \le 1$ and $R_p > 1$.

· If $R_p \le 1$, the angle between $d_0$ and $\nabla\phi(r,\theta)$ is $a(\theta) = \theta_1 - \alpha\theta - \left(\psi + \frac{\pi}{2} - \theta\right)$, so that $-\theta_2 - \psi - \frac{\pi}{2} \le a(\theta) \le (\theta_1 - \psi) - \frac{\pi}{2}$. The angle between $d_1$ and $\nabla\phi(r,0) = \alpha r^{\alpha-1}(\cos\theta_1, \sin\theta_1)$ is equal to $a_1 = \theta_1 - \pi$. Furthermore, we show that the angle between $d_2$ and $\nabla\phi(r,\pi/2) = \alpha r^{\alpha-1}(\cos\theta_2, -\sin\theta_2)$ is equal to $a_2 = -\theta_2 - \pi$.

The choice of $\theta_1$ and $\theta_2$ allows us to have:

$$-\frac{5\pi}{4} < \theta_1 - \pi < -\frac{3\pi}{4}, \qquad -\frac{3\pi}{2} < -\theta_2 - \pi < \theta_1 - \frac{3\pi}{2} < -\frac{\pi}{2}$$

and for any $\theta$, $0 < \theta < \frac{\pi}{2}$, we obtain $-\frac{5\pi}{4} < a(\theta) < \theta_1 - \frac{3\pi}{4} < -\frac{\pi}{2}$. As a result, we have the inequalities $\cos a_1 < -\frac{\sqrt{2}}{2}$, $\cos a_2 < \cos\left(\theta_1 - \frac{3\pi}{2}\right) < 0$ and $\cos(a(\theta)) < \cos\left(\theta_1 - \frac{3\pi}{4}\right) < 0$.

· If $R_p > 1$, we have instead $\theta_1 - \psi - \frac{\pi}{2} \le a(\theta) \le \theta_1 - \psi$ and $a_1 = \theta_1 - \pi$. For $d_2$, we find $a_2 = -\theta_2 - \psi_2$. The choice of $\theta_1$ and $\theta_2$ leads to

$$-\frac{3\pi}{2} < \theta_1 - \pi < -\pi \qquad \text{and} \qquad -\frac{3\pi}{2} < -\theta_2 - \psi_2 < -\theta_2 - \frac{\pi}{2} < -\frac{\pi}{2}$$

and, for any $\theta$, $0 < \theta < \frac{\pi}{2}$, $-\frac{3\pi}{2} < \theta_1 - \psi - \frac{\pi}{2} \le a(\theta) \le \theta_1 - \psi < -\frac{\pi}{2}$. Thus

$$\cos a_1 < -\frac{\sqrt{2}}{2}, \qquad \cos a_2 < \cos\left(-\theta_2 - \frac{\pi}{2}\right) < 0 \qquad \text{and} \qquad \cos(a(\theta)) < \cos(\theta_1 - \psi) < 0.$$

Finally, for any value of $R_p$ and for any $0 \le \theta \le \pi/2$, we deduce that

$$\nabla\phi(r,\theta)\cdot d_0 < 0, \qquad \nabla\phi(r,0)\cdot d_1 < 0 \qquad \text{and} \qquad \nabla\phi(r,\pi/2)\cdot d_2 < 0. \qquad (11)$$

Assertions 1), 2a) and 3) then follow.

To establish assertion 2b), we consider the partial derivatives of $\phi$ with respect to $x_1$ and $x_2$:

$$D_1\phi = \cos\theta\,\phi_r - \frac{\sin\theta}{r}\,\phi_\theta = \alpha r^{\alpha-1}\cos((\alpha-1)\theta - \theta_1), \qquad D_2\phi = \sin\theta\,\phi_r + \frac{\cos\theta}{r}\,\phi_\theta = -\alpha r^{\alpha-1}\sin((\alpha-1)\theta - \theta_1) \qquad (12)$$

where $\phi_r$ and $\phi_\theta$ are the partial derivatives of $\phi$ with respect to $r$ and $\theta$. For the matrix of second partial derivatives of $\phi$ we obtain

$$D_{lj}\phi = \alpha(\alpha-1)\, r^{\alpha-2}\begin{pmatrix} \cos((\alpha-2)\theta - \theta_1) & -\sin((\alpha-2)\theta - \theta_1) \\ -\sin((\alpha-2)\theta - \theta_1) & -\cos((\alpha-2)\theta - \theta_1) \end{pmatrix} \qquad (13)$$

Assertion 2c) follows from the definition of $\phi$ and the fact that $\cos(\alpha\theta - \theta_1) > 0$; indeed, for any $\theta$, $0 \le \theta \le \frac{\pi}{2}$, we have $-\frac{\pi}{2} < \alpha\theta - \theta_1 < \frac{\pi}{2}$.

This proves Lemma 3. □

Proof of Lemma 4:

The proof is analogous to that of Theorem 3 of [

$$\phi(x+h) - \phi(x) = \nabla\phi(x)\cdot h + R(x,h)$$

where $h = (h_1, h_2)$ and $R(x,h) = \frac{1}{2}\sum_{l,j=1,2} D_{lj}\phi(x + \eta h)\, h_l h_j$ is the Taylor remainder, with $0 < \eta < 1$.

For $l \in \{0,1,2\}$, replacing $h$ by $A_l(x)$, we get:

$$E[\phi(Y_{k+1}) - \phi(Y_k) \mid Y_k = x \in D_l] = \nabla\phi(x)\cdot d_l + E[R(x, A_l(x))]$$

Applying Lemma 3 and the preceding remark, we have

$$E[\phi(Y_{k+1}) - \phi(Y_k) \mid Y_k = x \in D_l] = \nabla\phi(x)\cdot d_l + O(\|x\|^{\alpha-2})$$

· If $R_p \le 1$, we have $1 < \alpha < 2$ and $\limsup_{\|x\|\to\infty} \|x\|^{1-\alpha}\,\nabla\phi(x)\cdot d_l \le C_0 < 0$; therefore there exist $\varepsilon > 0$ and $K > 0$ such that $E[\phi(Y_{k+1}) - \phi(Y_k) \mid Y_k = x] \le -\varepsilon$ for all $\|x\| \ge K$.

· If $R_p > 1$, it turns out that $0 < \alpha < 1$: both the gradient term and the remainder tend to 0, so a bound $\le -\varepsilon$ is no longer available; however, the negative gradient term, of order $\|x\|^{\alpha-1}$, still dominates the remainder, of order $\|x\|^{\alpha-2}$, and we conclude that $E[\phi(Y_{k+1}) - \phi(Y_k) \mid Y_k = x] \le 0$ for all $\|x\| \ge K$.

This completes the proof. □

Proof of Lemma 5:

Let us show recurrence in the case $R_p > 1$.

We set $B = \{x,\ \|x\| \le K\}$, $T = \inf\{k \ge 0,\ Y_k \in B\}$ and $Z_k = \phi(Y_k)\, I_{\{T > k\}}$, where $I_A$ denotes the indicator function of $A$.

Let $(F_k)_{k\in\mathbb{N}}$ be the filtration associated with $(Y_k)_{k\in\mathbb{N}}$. Knowing that $I_{\{T>k+1\}} \le I_{\{T>k\}}$, we can write:

$$E[Z_{k+1} \mid F_k] \le E[\phi(Y_{k+1})\, I_{\{T>k\}} \mid F_k] = I_{\{T>k\}}\, E[\phi(Y_{k+1}) \mid F_k] \le I_{\{T>k\}}\, \phi(Y_k) = Z_k.$$

In this chain of inequalities, the last one is obtained from the first assertion of Lemma 4 (on $\{T > k\}$ we have $\|Y_k\| > K$). Consequently, $(Z_k)_{k\in\mathbb{N}}$ is a nonnegative supermartingale and therefore converges almost surely to a finite limit.

On the other hand, because the Markov chain $(Y_k)_{k\in\mathbb{N}}$ is irreducible, we have $\mathbb{P}[\limsup_{k\to+\infty}\|Y_k\| = \infty] = 1$. On $\{T = +\infty\}$ it would then follow that $\limsup_{k\to+\infty} Z_k = \limsup_{k\to+\infty} \phi(Y_k) = +\infty$, a contradiction; thus $\mathbb{P}[T = +\infty] = 0$. In other words, the finite set $B$ is visited an infinite number of times by the Markov chain $(Y_k)_{k\in\mathbb{N}}$, which corresponds to recurrence. Finally, the positive recurrence assertion of the lemma is a consequence of the second assertion of Lemma 4 and of Foster's positive recurrence criterion [