Itô Formula for Integral Processes Related to Space-Time Lévy Noise

In this article, we give a new proof of the Itô formula for some integral processes related to the space-time Lévy noise introduced in [1] and [2] as an alternative to the Gaussian white noise perturbing an SPDE. We discuss two applications of this result, which are useful in the study of SPDEs driven by a space-time Lévy noise with finite variance: a maximal inequality for the p-th moment of the stochastic integral, and the Itô representation theorem leading to a chaos expansion similar to the Gaussian case.


Introduction
Random processes indexed by sets in the space-time domain are useful objects in stochastic analysis, since they can be viewed as mathematical models for the noise perturbing a stochastic partial differential equation (SPDE). In recent years, a lot of effort has been dedicated to studying the behaviour of the solutions of basic equations (like the heat or wave equations) driven by a Gaussian white noise. This type of noise was introduced by Walsh in [10] and is defined as a zero-mean Gaussian process W = {W(B); B ∈ B_b(R_+ × R^d)} with covariance E[W(A)W(B)] = |A ∩ B|, where |·| denotes the Lebesgue measure and B_b(R_+ × R^d) is the class of bounded Borel sets in R_+ × R^d. In the recent articles [2] and [3], a new process has been introduced as an alternative to the Gaussian white noise perturbing an SPDE, which has a structure similar to that of a Lévy process. We briefly recall the definition of this process below.
Let N be a Poisson random measure (PRM) on E = R_+ × R^d × R_0 of intensity µ = dt dx ν(dz), where R_0 = R\{0} and ν is a Lévy measure on R, i.e. ∫_{R_0} (1 ∧ z²) ν(dz) < ∞. We denote by Ñ = N − µ the compensated measure. For B ∈ B_b(R_+ × R^d), let

Z(B) = ∫_{B × {|z| ≤ 1}} z Ñ(dt, dx, dz) + ∫_{B × {|z| > 1}} z N(dt, dx, dz) + a|B|

for some a ∈ R. It was shown in [3] that Z is an "independently scattered random measure" (in the sense of [7]) with characteristic function

E[e^{iuZ(B)}] = exp{ |B| ( iua + ∫_{R_0} (e^{iuz} − 1 − iuz 1_{|z| ≤ 1}) ν(dz) ) }, u ∈ R.
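The Lévy–Khintchine form of this characteristic function can be checked numerically in a toy finite-activity case. The following sketch is our own example (not from the paper): it takes ν = λδ_{z0} with z0 ∈ (0, 1], so that Z(B) = a|B| + z0(P − λ|B|) with P Poisson of mean λ|B|, and compares the characteristic function computed directly from the Poisson distribution with the closed-form exponent.

```python
import math, cmath

# Hedged toy check: nu = lam * delta_{z0}, z0 in (0,1].  Then
#   Z(B) = a|B| + z0 (P - lam|B|),  P ~ Poisson(lam|B|),
# and E[e^{iuZ(B)}] should equal exp{|B|(iua + lam(e^{iu z0} - 1 - iu z0))}.
def char_fun_direct(u, a, lam, z0, vol, n_terms=80):
    """E[e^{iuZ(B)}] computed directly by summing over the Poisson pmf."""
    mean = lam * vol
    pmf = math.exp(-mean)                         # P(P = 0)
    total = pmf * cmath.exp(1j * u * (a * vol - z0 * mean))
    for k in range(1, n_terms):
        pmf *= mean / k                           # P(P = k) from P(P = k - 1)
        total += pmf * cmath.exp(1j * u * (a * vol + z0 * (k - mean)))
    return total

def char_fun_levy(u, a, lam, z0, vol):
    """Lévy-Khintchine formula specialised to nu = lam * delta_{z0}."""
    psi = 1j * u * a + lam * (cmath.exp(1j * u * z0) - 1 - 1j * u * z0)
    return cmath.exp(vol * psi)

a, lam, z0, vol, u = 0.3, 2.0, 0.5, 1.5, 1.7
assert abs(char_fun_direct(u, a, lam, z0, vol) - char_fun_levy(u, a, lam, z0, vol)) < 1e-10
```

The agreement is exact up to the truncation of the Poisson series, since both sides reduce to exp(iua|B| + λ|B|(e^{iuz0} − 1 − iuz0)).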
(In particular, Z can be an α-stable random measure with α ∈ (0, 2), as in Definition 3.3.1 of [9].) One can define the stochastic integral of a process X = {X(t, x); t ≥ 0, x ∈ R^d} with respect to Z, and for certain integrands X,

∫_0^t ∫_{R^d} X(s, x) Z(ds, dx) = ∫_0^t ∫_{R^d} ∫_{|z| > 1} X(s, x) z N(ds, dx, dz) + ∫_0^t ∫_{R^d} ∫_{|z| ≤ 1} X(s, x) z Ñ(ds, dx, dz).
The stochastic integral with respect to N (or Ñ) can be defined using classical methods (see e.g. [1]). We briefly review this definition here.
Assume that N is defined on a probability space (Ω, F, P). On this space, we consider the filtration

F_t = σ({N([0, s] × A × Γ); 0 ≤ s ≤ t, A ∈ B_b(R^d), Γ ∈ B_b(R_0)}), t ≥ 0,

where B_b(R^d) is the class of bounded Borel sets in R^d and B_b(R_0) is the class of Borel sets in R_0 which are bounded away from 0.
An elementary process on Ω × R_+ × R^d × R_0 is a process of the form H(ω, t, x, z) = Y(ω) 1_{(a,b]}(t) 1_A(x) 1_Γ(z), where 0 ≤ a < b, Y is a bounded F_a-measurable random variable, A ∈ B_b(R^d) and Γ ∈ B_b(R_0). For any predictable process H such that

E ∫_0^T ∫_{R^d} ∫_{R_0} |H(s, x, z)|² ν(dz) dx ds < ∞ for all T > 0,  (1)

we can define the stochastic integral of H with respect to Ñ, and the process {∫_0^t ∫_{R^d} ∫_{R_0} H(s, x, z) Ñ(ds, dx, dz)}_{t ≥ 0} is a square-integrable martingale which satisfies the isometry

E|∫_0^t ∫_{R^d} ∫_{R_0} H(s, x, z) Ñ(ds, dx, dz)|² = E ∫_0^t ∫_{R^d} ∫_{R_0} |H(s, x, z)|² ν(dz) dx ds.  (2)

On the other hand, for any predictable process K such that ∫_0^t ∫_{R^d} ∫_{R_0} |K(s, x, z)| N(ds, dx, dz) < ∞ a.s. for all t > 0, we can define the integral of K with respect to N, and this integral satisfies

∫_0^t ∫_{R^d} ∫_{R_0} K(s, x, z) N(ds, dx, dz) = Σ_{i: T_i ≤ t} K(T_i, X_i, Z_i),  (3)

where (T_i, X_i, Z_i)_{i ≥ 1} are the points of N. In this article, we work with processes whose trajectories are right-continuous with left limits. If x is a right-continuous function with left limits, we denote by x(t−) = lim_{s↑t} x(s) the left limit at time t and by ∆x(t) = x(t) − x(t−) the jump size at time t. We will prove the following result.
Theorem 1.1 (Itô Formula I). Let Y = {Y(t)}_{t ≥ 0} be a process defined by

Y(t) = ∫_0^t G(s) ds + ∫_0^t ∫_{R^d} ∫_{|z| > 1} K(s, x, z) N(ds, dx, dz) + ∫_0^t ∫_{R^d} ∫_{|z| ≤ 1} H(s, x, z) Ñ(ds, dx, dz),  (4)

where G, K and H are predictable processes which satisfy, for all t > 0 and T > 0,

∫_0^t |G(s)| ds < ∞ a.s.,  (5)
∫_0^t ∫_{R^d} ∫_{|z| > 1} |K(s, x, z)| N(ds, dx, dz) < ∞ a.s.,  (6)
E ∫_0^T ∫_{R^d} ∫_{|z| ≤ 1} |H(s, x, z)|² ν(dz) dx ds < ∞.  (7)

Then there exists a modification of Y (denoted also by Y) whose sample paths are right-continuous with left limits, such that for any function f ∈ C²(R) and for any t > 0, with probability 1,

f(Y(t)) − f(Y(0)) = ∫_0^t f′(Y(s)) G(s) ds + ∫_0^t ∫_{R^d} ∫_{|z| > 1} [f(Y(s−) + K(s, x, z)) − f(Y(s−))] N(ds, dx, dz) + ∫_0^t ∫_{R^d} ∫_{|z| ≤ 1} [f(Y(s−) + H(s, x, z)) − f(Y(s−))] Ñ(ds, dx, dz) + ∫_0^t ∫_{R^d} ∫_{|z| ≤ 1} [f(Y(s) + H(s, x, z)) − f(Y(s)) − H(s, x, z) f′(Y(s))] ν(dz) dx ds.  (8)

Note that since the first two terms on the right-hand side of (4) are processes of finite variation and the last term is a square-integrable martingale, Y is a semimartingale. Therefore, the Itô formula given by Theorem 1.1 can also be derived from the corresponding result for a general semimartingale, assuming that Y has sample paths which are right-continuous with left limits (see e.g. Theorem 2.5 of [6]).
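In the finite-activity case the Itô formula reduces to a drift integral plus a telescoping sum over the jumps, and this can be verified exactly on a deterministic toy path. The following sketch is our own example (constant drift g, fixed jump times and sizes, f(x) = x²), not the paper's construction:

```python
# Hedged deterministic check: for Y(t) = g*t + sum of jumps, the Itô formula
# with no small jumps reads
#   f(Y(t)) - f(Y(0)) = int_0^t f'(Y(s)) g ds + sum_{s<=t} [f(Y(s)) - f(Y(s-))].
# We verify this exactly for f(x) = x^2 on a piecewise-linear path.
g = 0.7                                           # constant drift G(s) = g
jumps = [(0.4, 1.3), (1.1, -0.5), (1.9, 2.0)]     # (jump time T_i, jump size)
t = 2.5

def Y(s, left=False):
    """Right-continuous path value; left=True gives the left limit Y(s-)."""
    total = g * s
    for (ti, ji) in jumps:
        if ti < s or (ti == s and not left):
            total += ji
    return total

lhs = Y(t) ** 2 - 0.0                             # f(Y(t)) - f(Y(0)), f(x) = x^2

# Drift term: on each interval of linearity [u, v],
#   int_u^v 2 Y(s) g ds = g * (Y(u+) + Y(v-)) * (v - u)   (exact, Y linear there)
times = [0.0] + [ti for (ti, _) in jumps if ti <= t] + [t]
drift = 0.0
for u, v in zip(times[:-1], times[1:]):
    drift += g * (Y(u) + Y(v, left=True)) * (v - u)

jump_term = sum(Y(ti) ** 2 - Y(ti, left=True) ** 2 for (ti, _) in jumps if ti <= t)
assert abs(lhs - (drift + jump_term)) < 1e-9
```

The identity holds exactly here because f(Y(s))² evolves by the chain rule between jumps and by the telescoping differences at the jumps, which is precisely the structure of Lemma 3.1 below.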
The goal of the present article is to give an alternative proof of this result which contains the explicit construction of the modification of Y for which the Itô formula holds.
We will also give the proof of the following variant of the Itô formula, which will be useful for the applications related to the (finite-variance) Lévy white noise discussed in Section 4.
Theorem 1.2 (Itô Formula II). Let Y = {Y(t)}_{t ≥ 0} be a process defined by

Y(t) = ∫_0^t G(s) ds + ∫_0^t ∫_{R^d} ∫_{R_0} H(s, x, z) Ñ(ds, dx, dz),  (9)

where G and H are predictable processes which satisfy (5), respectively (1). Then there exists a càdlàg modification of Y (denoted also by Y) such that for any function f ∈ C²(R) and for any t > 0, with probability 1,

f(Y(t)) − f(Y(0)) = ∫_0^t f′(Y(s)) G(s) ds + ∫_0^t ∫_{R^d} ∫_{R_0} [f(Y(s−) + H(s, x, z)) − f(Y(s−))] Ñ(ds, dx, dz) + ∫_0^t ∫_{R^d} ∫_{R_0} [f(Y(s) + H(s, x, z)) − f(Y(s)) − H(s, x, z) f′(Y(s))] ν(dz) dx ds.

The method that we use for proving Theorems 1.1 and 1.2 is similar to the one described in Section 4.4.2 of [1] in the case of classical Lévy processes, the difference being that in our case N is a PRM on R_+ × R^d × R_0 instead of R_+ × R_0. This method relies on a double "interlacing" technique, which consists in first approximating the set {|z| ≤ 1} of small jumps by sets of the form {ε_n < |z| ≤ 1} with ε_n ↓ 0 (in the case when H and K vanish outside a bounded Borel set B ⊂ R^d), and then approximating the spatial domain R^d by regions of the form [−a_n, a_n]^d with a_n ↑ ∞. This approximation method is described in Section 2. Section 3 is dedicated to the proofs of Theorems 1.1 and 1.2. Finally, in Section 4 we discuss two applications of Theorem 1.2 in the case of the (finite-variance) Lévy white noise introduced in [2].
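The reason the first interlacing step converges is that the L²-mass of the discarded small jumps vanishes as ε_n ↓ 0. As a hedged side computation (the measure is our own example, not imposed by the paper), take ν(dz) = |z|^{−1−α} dz with α ∈ (0, 2); then the discarded mass at level ε has the closed form below, which a simple quadrature confirms:

```python
# Hedged sketch: for nu(dz) = |z|^{-1-alpha} dz, alpha in (0, 2),
#   int_{|z| <= eps} z^2 nu(dz) = 2 eps^{2-alpha} / (2 - alpha)  ->  0  as eps -> 0,
# which is what makes the approximation over {eps_n < |z| <= 1} converge in L^2.
def small_jump_mass(eps, alpha, n=100000):
    """Midpoint-rule approximation of 2 * int_0^eps z^{1-alpha} dz."""
    h = eps / n
    return 2.0 * sum(((k - 0.5) * h) ** (1.0 - alpha) * h for k in range(1, n + 1))

alpha, eps = 0.5, 0.1
exact = 2.0 * eps ** (2.0 - alpha) / (2.0 - alpha)
assert abs(small_jump_mass(eps, alpha) - exact) < 1e-6
assert small_jump_mass(1e-3, alpha) < small_jump_mass(1e-1, alpha)
```

Note that ∫_{|z| ≤ 1} z² ν(dz) is finite for every α ∈ (0, 2) even though ν itself is infinite near 0, which is exactly why the small jumps must be integrated against Ñ rather than N.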

Approximation by right-continuous processes with left limits
In this section, we show that the Lévy-type integral processes given by (4) and (9) have modifications which are right-continuous with left limits, constructed by approximation. These modifications will play an important role in the proof of Itô's formula. Since the process Y_c(t) = ∫_0^t G(s) ds is continuous, we assume that G = 0.
We consider first processes of the form (4). We start by examining the case when both integrands H and K vanish outside a set B ∈ B_b(R^d). Since the process {∫_0^t ∫_B ∫_{|z| > 1} K(s, x, z) N(ds, dx, dz); t ≥ 0} is clearly càdlàg (the integral being a sum with finitely many terms), we need to consider only the integral process which depends on H.
Note that if H vanishes a.e. on Ω × [0, T] × B × {z ∈ R_0; |z| ≤ ε} for some T > 0 and ε ∈ (0, 1), then

∫_0^t ∫_B ∫_{|z| ≤ 1} H(s, x, z) Ñ(ds, dx, dz) = ∫_0^t ∫_B ∫_{ε < |z| ≤ 1} H(s, x, z) N(ds, dx, dz) − ∫_0^t ∫_B ∫_{ε < |z| ≤ 1} H(s, x, z) ν(dz) dx ds

is a process whose sample paths are right-continuous with left limits (the first term is a sum with finitely many terms and the second term is continuous). Therefore, we will suppose that H satisfies the following assumption:

Assumption A. It is not possible to find T > 0 and ε ∈ (0, 1) such that H = 0 a.e. on Ω × [0, T] × B × {z ∈ R_0; |z| ≤ ε}, with respect to the measure P × µ.
Lemma 2.1. Let Y = {Y(t)}_{t ≥ 0} be a process defined by

Y(t) = ∫_0^t ∫_B ∫_{|z| ≤ 1} H(s, x, z) Ñ(ds, dx, dz),

where B ∈ B_b(R^d) and H is a predictable process which satisfies Assumption A and

E ∫_0^T ∫_B ∫_{|z| ≤ 1} |H(s, x, z)|² ν(dz) dx ds < ∞ for all T > 0.  (10)

Then, there exists a càdlàg modification Ỹ = {Ỹ(t)}_{t ≥ 0} of Y such that for any T > 0,

sup_{t ≤ T} |Ỹ(t) − Y_n(t)| → 0 a.s., where Y_n(t) = ∫_0^t ∫_B ∫_{ε_n < |z| ≤ 1} H(s, x, z) Ñ(ds, dx, dz),

for some sequence (ε_n)_n (depending on T) such that ε_n ↓ 0.
Proof: We use the same argument as in the proof of Theorem 4.3.4 of [1]. Fix T > 0 and let ε_n = inf{ε > 0; I(ε) ≤ 8^{−n}}, where I(ε) = E ∫_0^T ∫_B ∫_{|z| ≤ ε} |H(s, x, z)|² ν(dz) dx ds, and let Y_n be as in the statement. Note that Y_n is a càdlàg martingale. By Doob's submartingale inequality and relation (2), for m < n,

E[sup_{t ≤ T} |Y_n(t) − Y_m(t)|²] ≤ 4 E ∫_0^T ∫_B ∫_{ε_n < |z| ≤ ε_m} |H(s, x, z)|² ν(dz) dx ds ≤ 4 · 8^{−m}.

By Chebyshev's inequality, P(sup_{t ≤ T} |Y_n(t) − Y_m(t)| > 2^{−m}) ≤ 4 · 2^{−m}, and the conclusion follows by the Borel–Cantelli lemma.

We consider now the case when at least one of the integrands H and K does not vanish outside a set B ∈ B_b(R^d). More precisely, we introduce the following assumptions:

Assumption B. It is not possible to find T > 0 and B ∈ B_b(R^d) such that H = 0 a.e. on Ω × [0, T] × (R^d \ B) × {|z| ≤ 1}, with respect to the measure P × µ.
Assumption B′. It is not possible to find T > 0 and B ∈ B_b(R^d) such that K = 0 a.e. on Ω × [0, T] × (R^d \ B) × {|z| > 1}, with respect to the measure P × µ.
We consider bounded Borel sets in R^d of the form K_a = [−a, a]^d, a > 0.

Theorem 2.2 (Interlacing I). Let Y = {Y(t)}_{t ≥ 0} be a process defined by (4) with G = 0, where H and K are predictable processes which satisfy conditions (7), respectively (6), such that either H satisfies Assumption B, or K satisfies Assumption B′. Then, there exists a càdlàg modification Ỹ = {Ỹ(t)}_{t ≥ 0} of Y such that for any T > 0,

sup_{t ≤ T} |Ỹ(t) − Ỹ_n(t)| → 0 a.s.,  (11)

where Ỹ_n is a càdlàg modification of the process Y_n defined by

Y_n(t) = ∫_0^t ∫_{E_n} ∫_{|z| ≤ 1} H(s, x, z) Ñ(ds, dx, dz) + ∫_0^t ∫_{E_n} ∫_{|z| > 1} K(s, x, z) N(ds, dx, dz),

with E_n = K_{a_n} for some sequence (a_n)_n (depending on T) such that a_n ↑ ∞.
Note that (a_n)_n is non-decreasing and a_n ↑ ∞. (If a_n ↑ a* < ∞, then I(a*) ≤ I(a_n) ≤ 8^{−n} for all n, and hence I(a*) = 0, which contradicts Assumption B or B′.) Let Y_n be the process given in the statement of the theorem with E_n = K_{a_n}. We denote by Y_n^{(1)}(t) and Y_n^{(2)}(t) the two integrals which compose Y_n(t), depending on H, respectively K.
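The selection a_n = inf{a > 0; I(a) ≤ 8^{−n}} for a non-increasing tail functional I can be illustrated on a toy example. The sketch below is ours (the functional I(a) = e^{−a} is a stand-in, not from the paper); it computes each a_n by bisection and checks that the sequence grows without bound, as the argument above requires when I is strictly positive:

```python
import math

# Hedged toy illustration: pick a_n = inf{a > 0 : I(a) <= 8^{-n}} for a
# non-increasing, strictly positive tail functional I.  Since I never
# vanishes (the analogue of Assumptions B / B'), a_n must increase to infinity.
def I(a):
    return math.exp(-a)                  # stand-in tail functional, always > 0

def a_n(n, hi=1e6, tol=1e-10):
    """Bisection for inf{a : I(a) <= 8^{-n}} on [0, hi]."""
    lo, target = 0.0, 8.0 ** (-n)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if I(mid) <= target:
            hi = mid
        else:
            lo = mid
    return hi

seq = [a_n(n) for n in range(1, 6)]
assert all(x < y for x, y in zip(seq, seq[1:]))                     # increasing
assert all(abs(x - n * math.log(8)) < 1e-6 for n, x in zip(range(1, 6), seq))
```

For this particular I the infimum is available in closed form, a_n = n log 8, which the bisection reproduces; the geometric thresholds 8^{−n} are what later feed the Borel–Cantelli argument.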
We consider next processes of the form (9) with G = 0. Note that if H vanishes a.e. outside a set B ∈ B_b(R^d), then

∫_0^t ∫_B ∫_{R_0} H(s, x, z) Ñ(ds, dx, dz) = ∫_0^t ∫_B ∫_{|z| ≤ 1} H(s, x, z) Ñ(ds, dx, dz) + ∫_0^t ∫_B ∫_{|z| > 1} H(s, x, z) N(ds, dx, dz) − ∫_0^t ∫_B ∫_{|z| > 1} H(s, x, z) ν(dz) dx ds,

where the first term has a càdlàg modification given by Lemma 2.1, the second term is càdlàg, and the third term is continuous. Therefore, we will suppose that H satisfies the following assumption:

Assumption C. It is not possible to find T > 0 and B ∈ B_b(R^d) such that H = 0 a.e. on Ω × [0, T] × (R^d \ B) × R_0, with respect to the measure P × µ.

Theorem 2.3 (Interlacing II).
Let Y be a process given by (9) with G = 0, where H is a predictable process which satisfies (1) and Assumption C. Then, there exists a càdlàg modification Ỹ = {Ỹ(t)}_{t ≥ 0} of Y such that (11) holds, where Ỹ_n is a càdlàg modification of the process Y_n defined by

Y_n(t) = ∫_0^t ∫_{E_n} ∫_{R_0} H(s, x, z) Ñ(ds, dx, dz),

with E_n = K_{a_n} for some sequence (a_n)_n (depending on T) such that a_n ↑ ∞.
Proof: We proceed as in the proof of Theorem 2.2. Fix T > 0. Let a_n = inf{a > 0; I(a) ≤ 8^{−n}}, where I(a) = E ∫_0^T ∫_{R^d \ K_a} ∫_{R_0} |H(s, x, z)|² ν(dz) dx ds. By Assumption C, a_n ↑ ∞. We write Y_n(t) as the sum of two integrals, corresponding to the regions {|z| ≤ 1} and {|z| > 1}. We denote these integrals by Y_n^{(1)}(t) and Y_n^{(2)}(t), the first of which has a càdlàg modification given by Lemma 2.1.
and the conclusion follows as in the proof of Lemma 2.1.

Proof of Itô Formula
In this section, we give the proofs of Theorem 1.1 and Theorem 1.2.
We start with the simpler case when there are no small jumps (the analogue of Lemma 4.4.6 of [1]).
Lemma 3.1. Let Y = {Y(t)}_{t ≥ 0} be a process of the form

Y(t) = ∫_0^t G(s) ds + ∫_0^t ∫_B ∫_{|z| > ε} K(s, x, z) N(ds, dx, dz) =: Y_c(t) + Y_d(t),

where G is a predictable process which satisfies (5), B ∈ B_b(R^d), ε > 0 and K is a predictable process. Then, for any function f ∈ C¹(R) and for any t > 0,

f(Y(t)) − f(Y(0)) = ∫_0^t f′(Y(s)) G(s) ds + ∫_0^t ∫_B ∫_{|z| > ε} [f(Y(s−) + K(s, x, z)) − f(Y(s−))] N(ds, dx, dz).
Proof: Let Γ = {z ∈ R_0; |z| > ε}.

Case 1: G = 0. Then Y = Y_d is a step function, f(Y(t)) − f(Y(0)) telescopes over the jump times, and the conclusion follows since N has points (T_i, X_i, Z_i) in R_+ × B × Γ, only finitely many of which fall in [0, t] × B × Γ.
Case 2: G is arbitrary. The map t → Y_d(t) is a step function which has a jump of size K(T_i, X_i, Z_i) at time T_i. Since Y_c is continuous, the jump times and the jump sizes of Y coincide with those of Y_d, i.e. ∆Y(T_i) = ∆Y_d(T_i) = K(T_i, X_i, Z_i). We use the decomposition f(Y(t)) − f(Y(0)) = A + B, where A and B are defined as follows: if T_{n−1} ≤ t < T_n, we let

Note that
It remains to prove the corresponding identity for B. For this, we assume that T_{n−1} ≤ t < T_n and write B as a sum over the intervals between consecutive jump times. So it suffices to prove that (13) and (14) hold for all i = 1, …, n − 1. We first prove (13). Fix i ∈ {1, …, n − 1}. For any s ∈ (T_i, T_{i+1}), Y(s) = Y_c(s) + Y_d(T_i), where for the last equality we used the fact that Y_d is constant on [T_i, T_{i+1}). Next, we prove (14). Note that if t = T_{n−1}, both terms are zero. So, we assume that t > T_{n−1}. For any s ∈ (T_{n−1}, t), Y(s) = Y_c(s) + Y_d(T_{n−1}) := g(s) and g′(s) = Y_c′(s) = G(s). Arguing as above, where for the last equality we used the fact that Y_d is constant on [T_{n−1}, t], we obtain (14). This concludes the proof of (14).
Proof of Theorem 1.1: We fix t > 0. We assume that f′ and f″ are bounded (otherwise, we use a standard truncation argument). If H vanishes a.e. on Ω × [0, T] × B × {z ∈ R_0; |z| ≤ ε} for some T > 0 and ε ∈ (0, 1), the conclusion follows from Lemma 3.1. Therefore, we suppose that H satisfies Assumption A. By Lemma 2.1, there exists a càdlàg modification of Y (denoted also by Y) such that

sup_{s ≤ t} |Y(s) − Y_n(s)| → 0 a.s.,  (15)

where the process {Y_n(s)}_{s ∈ [0, t]} is defined by

Y_n(s) = ∫_0^s G(r) dr + ∫_0^s ∫_B ∫_{|z| > 1} K(r, x, z) N(dr, dx, dz) + ∫_0^s ∫_B ∫_{ε_n < |z| ≤ 1} H(r, x, z) Ñ(dr, dx, dz),

(ε_n)_n being the sequence given by Lemma 2.1 with T = t. Consequently, f(Y_n(s)) → f(Y(s)) uniformly on [0, t] a.s. Note that

Y_n(s) = ∫_0^s G̃(r) dr + ∫_0^s ∫_B ∫_{|z| > ε_n} K̃(r, x, z) N(dr, dx, dz),

where G̃(s) = G(s) − ∫_B ∫_{ε_n < |z| ≤ 1} H(s, x, z) ν(dz) dx and K̃(s, x, z) = K(s, x, z) 1_{{|z| > 1}} + H(s, x, z) 1_{{ε_n < |z| ≤ 1}}. By the Cauchy–Schwarz inequality, G̃ satisfies (5) (since B is a bounded set and H satisfies (10)). We apply Lemma 3.1 to Y_n. After using the definitions of G̃ and K̃, as well as adding and subtracting the compensator term, we obtain that:

f(Y_n(t)) − f(Y_n(0)) = T_{1,n} + T_{2,n} + T_{3,n} + T_{4,n}.  (17)

We denote by T_1, T_2, T_3, respectively T_4 the four terms on the right-hand side of (8). The conclusion will follow by taking the limit as n → ∞ in (17). The left-hand side converges to f(Y(t)) − f(Y(0)), by (15).
We treat separately the four terms on the right-hand side. By the dominated convergence theorem, T_{1,n} → T_1 a.s. Since T_{2,n} is a sum with a finite number of terms, using (15) and the continuity of f, we see that T_{2,n} → T_2 a.s. For the third term, we write T_{3,n} − T_3 = A_n + B_n; by (15) and the continuity of f and f′, the corresponding integrands converge to 0. By the dominated convergence theorem, A_n → 0 and B_n → 0. To justify the application of this theorem, we use Taylor's formula of the first order and the fact that f″ is bounded. This proves that T_{3,n} → T_3 a.s. The fourth term is treated similarly, using again Taylor's formula and the boundedness of f″. The conclusion follows letting n → ∞.
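The domination used above rests on the first-order Taylor bound |f(y + h) − f(y) − f′(y)h| ≤ (1/2) sup|f″| · h², which makes the compensated small-jump integrands dominated by a constant multiple of z², integrable under ν near 0. A quick hedged check of this standard estimate (our own example, f = sin, so |f″| ≤ 1):

```python
import math

# Hedged numerical check of the standard first-order Taylor bound
#   |f(y + h) - f(y) - f'(y) h| <= (1/2) sup|f''| h^2,
# illustrated with f = sin, f' = cos, sup|f''| = 1.
f, fp, sup_f2 = math.sin, math.cos, 1.0
for y in [-2.0, 0.0, 1.3]:
    for h in [1e-1, 1e-2, 1e-3, -1e-2]:
        remainder = abs(f(y + h) - f(y) - fp(y) * h)
        assert remainder <= 0.5 * sup_f2 * h * h + 1e-15
```

With h replaced by H(s, x, z), the bound gives a dominating function C·|H(s, x, z)|², which is P × µ-integrable by (7), so dominated convergence applies uniformly in n.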
Proof of Theorem 1.2: We assume that f′ and f″ are bounded. We fix t > 0.
We add and subtract the appropriate approximating terms, (E_n)_n being the sequence given by Theorem 2.3 with T = t. We write the Itô formula for the process Y_n (using Case 1) and we let n → ∞.

Applications
In this section, we assume that the Lévy measure ν satisfies the condition

v := ∫_{R_0} z² ν(dz) < ∞.

As in [2], we consider the process L = {L_t(B); t ≥ 0, B ∈ B_b(R^d)} given by

L_t(B) = ∫_0^t ∫_B ∫_{R_0} z Ñ(ds, dx, dz).

For any predictable process X = {X(t, x); t ≥ 0, x ∈ R^d} such that E ∫_0^t ∫_{R^d} |X(s, x)|² dx ds < ∞ for all t > 0, we can define the stochastic integral of X with respect to L, and this integral satisfies:

∫_0^t ∫_{R^d} X(s, x) L(ds, dx) = ∫_0^t ∫_{R^d} ∫_{R_0} X(s, x) z Ñ(ds, dx, dz).

By (2), this integral has the following isometry property:

E|∫_0^t ∫_{R^d} X(s, x) L(ds, dx)|² = v E ∫_0^t ∫_{R^d} |X(s, x)|² dx ds.

When used as a noise process perturbing an SPDE, L behaves very similarly to the Gaussian white noise. For this reason, L was called a Lévy white noise in [2].
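The isometry can be checked by simulation in a finite-activity case. The sketch below is a hedged Monte Carlo example of ours (ν = λδ_{z0}, deterministic integrand X(t, x) = t + x on the unit box), not the paper's setting; with this choice ∫z²ν(dz) = λz0² and ∫∫X² dx dt = 7/6, so the empirical second moment of ∫X dL should be close to λz0² · 7/6:

```python
import math, random

# Hedged Monte Carlo check of E|int X dL|^2 = v * int int |X|^2 dx dt for
# nu = lam * delta_{z0} (compensated compound Poisson noise) and
# X(t, x) = t + x on the unit box [0,1] x [0,1].
lam, z0 = 3.0, 0.8
v = lam * z0 ** 2                       # int z^2 nu(dz)
int_X2 = 7.0 / 6.0                      # int_0^1 int_0^1 (t + x)^2 dx dt

def poisson(mean, rng):
    """Knuth's method: multiply uniforms until the product drops below e^{-mean}."""
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(0)
n_rep, acc = 200000, 0.0
for _ in range(n_rep):
    s = 0.0
    for _ in range(poisson(lam, rng)):          # PRM points in the unit box
        s += rng.random() + rng.random()        # X(t, x) = t + x at a point
    val = z0 * (s - lam * 1.0)                  # compensated: lam * int X dt dx = lam
    acc += val * val
emp = acc / n_rep                               # empirical E|int X dL|^2
assert abs(emp - v * int_X2) / (v * int_X2) < 0.05
```

With 200000 replications the standard error of the variance estimate is well below the 5% tolerance, so the assertion is a comfortable check rather than a sharp statistical test.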
Hence E(M_h(t)) = 1 for all t ≥ 0, where

M_h(t) = exp( i ∫_0^t ∫_{R^d} h(s, x) L(ds, dx) − ∫_0^t ∫_{R^d} ∫_{R_0} (e^{ih(s,x)z} − 1 − ih(s, x)z) ν(dz) dx ds ).

The following result is the analogue of Lemma 5.3.3 of [1].
Proof: We apply Theorem 1.2 to the function f(x) = e^{ix} and the process Y(t) = ∫_0^t ∫_{R^d} h(s, x) L(ds, dx). Hence, H(s, x, z) = h(s, x)z and G = 0, and the last term of the Itô formula becomes

∫_0^t ∫_{R^d} ∫_{R_0} (e^{iY(s) + ih(s,x)z} − e^{iY(s)} − izh(s, x)e^{iY(s)}) ν(dz) dx ds.

Since the sum of the last two integrals is 0, the conclusion follows.
We fix T > 0. We let F_T^L be the σ-field generated by the noise L up to time T. We denote by L²_C(Ω, F_T^L, P) the space of C-valued square-integrable random variables which are measurable with respect to F_T^L.
Lemma 4.4. The linear span of the set {M_h(T); h ∈ L²([0, T] × R^d)} is dense in L²_C(Ω, F_T^L, P).

Proof: The proof is similar to that of Lemma 5.3.4 of [1]. We omit the details.

The multiple (and iterated) integrals with respect to Ñ can be defined similarly to the Gaussian white-noise case (see e.g. Section 5.4 of [1]).
More precisely, we consider the Hilbert space H = L²(U, U, µ), where U = [0, T] × R^d × R_0, U = B([0, T]) × B(R^d) × B(R_0) and µ = dt dx ν(dz). For any integer n ≥ 1, we consider the n-th tensor product space H^{⊗n} = L²(U^n, U^n, µ^n). The n-th multiple integral I_n(f) with respect to Ñ can be constructed for any function f ∈ H^{⊗n}, and this integral has the isometry property E|I_n(f)|² = n! ||f||²_{H^{⊗n}} for symmetric f. Moreover, if n ≠ m, then E[I_n(f)I_m(g)] = 0 for all f ∈ H^{⊗n} and g ∈ H^{⊗m}.
We have the following result.

Theorem 4.6 (Chaos Expansion). For any F ∈ L²(Ω, F_T^L, P), there exist some symmetric functions f_n ∈ H^{⊗n}, n ≥ 1, such that

F = E(F) + Σ_{n ≥ 1} I_n(f_n).

In particular, E|F|² = |E(F)|² + Σ_{n ≥ 1} n! ||f_n||²_{H^{⊗n}}.

Proof: We use the same argument as in the classical case, when N is a PRM on R_+ × R_0 and L(t) = ∫_0^t ∫_{R_0} z Ñ(ds, dz), t ≥ 0, is a square-integrable Lévy process (see Theorem 5.4.6 of [1] or Theorem 10.2 of [5]). By Theorem 4.5, there exists a predictable process ψ_1 satisfying (1) such that

F = E(F) + ∫_0^T ∫_{R^d} ∫_{R_0} ψ_1(t, x, z) Ñ(dt, dx, dz).

We substitute this into (23) and iterate the procedure. We omit the details.