A Mean-Field Stochastic Maximum Principle for Optimal Control of Forward-Backward Stochastic Differential Equations with Jumps via Malliavin Calculus

Abstract

This paper considers a mean-field type stochastic control problem where the dynamics are governed by a coupled system of forward and backward stochastic differential equations (SDEs) driven by Lévy processes, and where the information available to the controller is possibly less than the overall information. All the system coefficients and the objective performance functional are allowed to be random, possibly non-Markovian. Malliavin calculus is employed to derive a maximum principle for the optimal control of such a system in which the adjoint process is explicitly expressed.

Citation:

Zhou, Q. and Ren, Y. (2018) A Mean-Field Stochastic Maximum Principle for Optimal Control of Forward-Backward Stochastic Differential Equations with Jumps via Malliavin Calculus. Journal of Applied Mathematics and Physics, 6, 138-154. doi: 10.4236/jamp.2018.61014.

1. Introduction

In contrast to the stochastic control problems (e.g. [1] [2] ) studied in the complete information case (and, in [1], in the Brownian motion case only), the performance functional we investigate involves the mean of functionals of the state variables (hence the name mean-field). Problems of this type occur in many applications; for example, in a continuous-time Markowitz mean-variance portfolio selection model the variance term involves a quadratic function of the expectation. The inclusion of this mean term introduces some major technical difficulties, among them a time inconsistency that leads to the failure of the dynamic programming approach. Recently, there has been increasing interest in the study of this type of stochastic control problem; see for example [3] [4] and [5] .

On the other hand, since we allow the coefficients (b, σ, γ, g, f and h₂ below) to be stochastic processes, and also because our control must be adapted to the partial information, this problem is not of Markovian type and hence cannot be solved by dynamic programming even if the mean term were not present. We instead investigate the maximum principle, and derive an explicit form for the adjoint process. The approach we employ is Malliavin calculus, which enables us to express the duality involved via the Malliavin derivative. Our paper is related to the recent papers [6] and [7] . In [6] , the authors consider a mean-field type stochastic control problem where the dynamics are governed by a controlled forward SDE with jumps and the information available to the controller is possibly less than the overall information; Malliavin calculus is employed to derive a maximum principle in which the adjoint process is explicitly expressed. [7] presents various versions of the maximum principle for optimal control (not of mean-field type) of forward-backward stochastic differential equations with jumps, together with a Malliavin calculus approach that allows them to handle non-Markovian systems. The motivation of [7] is risk minimization via g-expectation.

This paper can be considered a continuation of [6] and [7] . We consider a mean-field type stochastic control problem where the dynamics are governed by a coupled forward-backward stochastic differential equation driven by Lévy processes and the information available to the controller is possibly less than the overall information. All the system coefficients and the objective performance functional are allowed to be random, possibly non-Markovian. Malliavin calculus will be employed to derive a maximum principle for the optimal control of such a system in which the adjoint process is explicitly expressed.

As in the paper [6] , we emphasize that our problem should be distinguished from the partial observation control problem, where it is assumed that the controls are based on the noisy observation of the state process. For the latter type of problems, there is a rich literature (see, e.g. [1] [8] [9] [10] [11] [12] ). Note that the methods and results in the partial observation case do not apply to our situation. On the other hand, there are several existing works on stochastic maximum principle (either completely or partially observed) where adjoint processes are explicitly expressed (see, e.g. [8] [10] [12] [13] ). However, these works all essentially employ stochastic flow technique, over which the Malliavin calculus has the advantage in terms of numerical computations (see, e.g. [14] ).

We now state our problem.

Suppose the state process $(A(t), X(t)) = (A^{(u)}(t,\omega), X^{(u)}(t,\omega))$, $t \in [0,T]$, $\omega \in \Omega$, of our system is described by the following coupled forward-backward system of SDEs.

Forward system in the controlled process A ( t ) :

\[
\begin{cases}
dA(t) = b(t, A(t), u(t))\,dt + \sigma(t, A(t), u(t))\,dB(t) + \displaystyle\int_{\mathbb{R}_0} \gamma(t, A(t), u(t), z)\,\tilde{N}(dt, dz), & t \in [0,T],\\[1mm]
A(0) = a.
\end{cases} \tag{1.1}
\]

Backward system in the unknown processes X ( t ) , Y ( t ) , K ( t , z ) :

\[
\begin{cases}
dX(t) = -g(t, A(t), X(t), Y(t), u(t))\,dt + Y(t)\,dB(t) + \displaystyle\int_{\mathbb{R}_0} K(t,z)\,\tilde{N}(dt, dz), & t \in [0,T],\\[1mm]
X(T) = cA(T), \quad \text{where } c \neq 0 \text{ is a given constant.}
\end{cases} \tag{1.2}
\]

Here $\mathbb{R}_0 = \mathbb{R}\setminus\{0\}$, and $B(t) = B(t,\omega)$ and $\eta(t) = \eta(t,\omega)$, given by

\[
\eta(t) = \int_0^t \int_{\mathbb{R}_0} z\,\tilde{N}(ds, dz), \quad t \geq 0,\ \omega \in \Omega, \tag{1.3}
\]

are a one-dimensional Brownian motion (see [15], Theorem 13.5) and an independent pure jump Lévy martingale, respectively, on a given filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t\geq 0}, P)$. Thus

\[
\tilde{N}(dt, dz) := N(dt, dz) - \nu(dz)\,dt \tag{1.4}
\]

is the compensated jump measure of $\eta(\cdot)$, where $N(dt,dz)$ is the jump measure and $\nu(dz)$ is the Lévy measure of the Lévy process $\eta(\cdot)$. The process $u(t)$ is our control process, assumed to be $\mathcal{F}_t$-adapted and to take values in a given open convex set $U \subset \mathbb{R}$. The coefficients $b: [0,T]\times\mathbb{R}\times U\times\Omega \to \mathbb{R}$, $\sigma: [0,T]\times\mathbb{R}\times U\times\Omega \to \mathbb{R}$, $\gamma: [0,T]\times\mathbb{R}\times U\times\mathbb{R}_0\times\Omega \to \mathbb{R}$ and $g: [0,T]\times\mathbb{R}\times\mathbb{R}\times\mathbb{R}\times U\times\Omega \to \mathbb{R}$ are given $\mathcal{F}_t$-predictable processes.

Let T > 0 be a given constant. For simplicity, we assume that

\[
\int_{\mathbb{R}_0} z^2\,\nu(dz) < \infty. \tag{1.5}
\]
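For intuition, the forward dynamics (1.1) can be discretized by an Euler-Maruyama scheme. The sketch below is a minimal, hypothetical illustration: it assumes the toy Lévy measure $\nu = \lambda\,\delta_1$ (unit jumps arriving at a constant rate), which trivially satisfies (1.5), so that the compensator of the jump term is $\lambda\,\gamma(t, A, u, 1)\,dt$; all function names are ours, not the paper's.

```python
import numpy as np

def simulate_forward_sde(b, sigma, gamma, u, a0, T=1.0, n_steps=1000,
                         jump_rate=1.0, rng=None):
    """Euler-Maruyama scheme for the controlled jump-diffusion (1.1),
        dA = b dt + sigma dB + int gamma(., z) N~(dt, dz),
    under the simplifying assumption nu = jump_rate * delta_1 (unit jumps),
    so the compensator of the jump part is jump_rate * gamma(t, A, u, 1) dt."""
    rng = rng or np.random.default_rng(0)
    dt = T / n_steps
    A = np.empty(n_steps + 1)
    A[0] = a0
    for i in range(n_steps):
        t = i * dt
        a, ut = A[i], u(t)
        dB = rng.normal(0.0, np.sqrt(dt))        # Brownian increment
        n_jumps = rng.poisson(jump_rate * dt)    # number of jumps in (t, t+dt]
        A[i + 1] = (a + b(t, a, ut) * dt
                    + sigma(t, a, ut) * dB
                    + n_jumps * gamma(t, a, ut, 1.0)          # actual jumps
                    - jump_rate * gamma(t, a, ut, 1.0) * dt)  # compensator
    return A

# Deterministic sanity check: with sigma = gamma = 0 and b(t,a,u) = 0.05*a,
# the scheme reduces to Euler's method for dA = 0.05 A dt, so A(1) ~ e^{0.05}.
A = simulate_forward_sde(lambda t, a, u: 0.05 * a,
                         lambda t, a, u: 0.0,
                         lambda t, a, u, z: 0.0,
                         lambda t: 0.0, a0=1.0)
```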

Suppose in addition that we are given a subfiltration

\[
\mathcal{E}_t \subseteq \mathcal{F}_t, \quad t \in [0,T],
\]

representing the information available to the controller at time t and satisfying the usual conditions. For example, we could have

\[
\mathcal{E}_t = \mathcal{F}_{(t-\delta)^+}, \quad t \in [0,T], \text{ where } \delta > 0 \text{ is a constant},
\]

meaning that the controller gets delayed information compared to $\mathcal{F}_t$.

Let $\mathcal{A} = \mathcal{A}_{\mathcal{E}}$ denote a given family of controls, contained in the set of $\mathcal{E}_t$-predictable controls $u(\cdot)$ such that the system (1.1)-(1.2) has a unique strong solution. If $u \in \mathcal{A}_{\mathcal{E}}$, then we call $u$ an admissible control. Let $U$ be a given convex set such that $u(t) \in U$ for all $t \in [0,T]$ a.s., for all $u \in \mathcal{A}_{\mathcal{E}}$.

Suppose we are given a performance functional of the form

\[
J(u) = E\Big[\int_0^T f\big(t, A(t), E[f_0(A(t))], X(t), E[h_0(X(t))], Y(t), K(t,\cdot), u(t), \omega\big)\,dt + h_1(X(0)) + h_2\big(A(T), E[g_0(A(T))], \omega\big)\Big], \quad u \in \mathcal{A}_{\mathcal{E}}, \tag{1.6}
\]

where $E$ denotes expectation with respect to $P$; $f_0: \mathbb{R}\to\mathbb{R}$, $h_0: \mathbb{R}\to\mathbb{R}$ and $g_0: \mathbb{R}\to\mathbb{R}$ are given functions such that $E[|f_0(A(t))|] < \infty$, $E[|h_0(X(t))|] < \infty$ for all $t$ and $E[|g_0(A(T))|] < \infty$; $f: [0,T]\times\mathbb{R}^6\times U\times\Omega \to \mathbb{R}$ and $h_2: \mathbb{R}\times\mathbb{R}\times\Omega \to \mathbb{R}$ are given $\mathcal{F}_t$-predictable processes; and $h_1: \mathbb{R}\to\mathbb{R}$ is a given function with

\[
E\Big[\int_0^T \big|f\big(t, A(t), E[f_0(A(t))], X(t), E[h_0(X(t))], Y(t), K(t,\cdot), u(t)\big)\big|\,dt + |h_1(X(0))| + \big|h_2\big(A(T), E[g_0(A(T))]\big)\big|\Big] < \infty \quad \text{for all } u \in \mathcal{A}_{\mathcal{E}}. \tag{1.7}
\]

The control problem we consider is the following:

Problem 1.1 (Partial information optimal control). Find $\Phi_{\mathcal{E}} \in \mathbb{R}$ and $u^* \in \mathcal{A}_{\mathcal{E}}$ (if it exists) such that

\[
\Phi_{\mathcal{E}} = \sup_{u \in \mathcal{A}_{\mathcal{E}}} J(u) = J(u^*). \tag{1.8}
\]

2. A Brief Review of Malliavin Calculus for Lévy Processes

In this section, for the reader's convenience, we recall the basic definitions and properties of Malliavin calculus for the Brownian motion $B(\cdot)$ and the jump measure $N(ds, dz)$ that are needed in this paper.

Let $L^2(\mathcal{F}_T, P)$ be the space of all $\mathbb{R}$-valued, $\mathcal{F}_T$-measurable, square-integrable random variables. Let $L^2(\lambda^n)$ be the space of deterministic real functions $f$ such that

\[
\|f\|_{L^2(\lambda^n)} = \Big(\int_{[0,T]^n} f^2(t_1, t_2, \dots, t_n)\,dt_1\,dt_2\cdots dt_n\Big)^{1/2} < \infty, \tag{2.1}
\]

where λ ( d t ) denotes the Lebesgue measure on [ 0, T ] .

Let $L^2((\lambda\times\nu)^n)$ be the space of deterministic real functions $f$ such that

\[
\|f\|_{L^2((\lambda\times\nu)^n)} = \Big(\int_{([0,T]\times\mathbb{R}_0)^n} f^2(t_1, z_1, t_2, z_2, \dots, t_n, z_n)\,dt_1\,\nu(dz_1)\,dt_2\,\nu(dz_2)\cdots dt_n\,\nu(dz_n)\Big)^{1/2} < \infty. \tag{2.2}
\]

The space $L^2(\lambda\times P)$ is defined similarly.

A general reference for this presentation is [16] [17] and [18] . See also the book [19] .

2.1. Malliavin Calculus for B ( ⋅ )

A natural starting point is the Wiener-Itô chaos expansion theorem (see [18], Theorem 1.1.2), which states that any $F \in L^2(\mathcal{F}_T, P)$ can be written as

\[
F = \sum_{n=0}^{\infty} I_n(f_n) \tag{2.3}
\]

for a unique sequence of symmetric deterministic functions $f_n \in L^2(\lambda^n)$, where $\lambda$ is the Lebesgue measure on $[0,T]$ and

\[
I_n(f_n) = n! \int_0^T \int_0^{t_n} \cdots \int_0^{t_2} f_n(t_1, \dots, t_n)\,dB(t_1)\cdots dB(t_n) \tag{2.4}
\]

(the $n$-times iterated integral of $f_n$ with respect to $B(\cdot)$) for $n = 1, 2, \dots$, while $I_0(f_0) = f_0$ when $f_0$ is a constant.

Moreover, we have the isometry

\[
E[F^2] = \|F\|^2_{L^2(P)} = \sum_{n=0}^{\infty} n!\,\|f_n\|^2_{L^2(\lambda^n)}. \tag{2.5}
\]
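The isometry (2.5) can be verified by Monte Carlo in a simple concrete case. Take the hypothetical example $F = B(T)^2$, whose chaos expansion is $F = I_0(T) + I_2(1)$ (that is, $f_0 = T$, $f_2 \equiv 1$, all other $f_n = 0$); then (2.5) predicts $E[F^2] = 0!\,T^2 + 2!\,\|1\|^2_{L^2([0,T]^2)} = 3T^2$.

```python
import numpy as np

# Monte Carlo check of the isometry (2.5) for F = B(T)^2 = I_0(T) + I_2(1):
# E[F^2] = E[B(T)^4] should equal 0!*T^2 + 2!*T^2 = 3*T^2.
rng = np.random.default_rng(42)
T = 2.0
BT = rng.normal(0.0, np.sqrt(T), size=1_000_000)   # samples of B(T) ~ N(0, T)
lhs = np.mean(BT ** 4)      # Monte Carlo estimate of E[F^2]
rhs = 3 * T ** 2            # prediction of the isometry: sum_n n! ||f_n||^2
assert abs(lhs - rhs) / rhs < 0.05
```

This is of course just the Gaussian moment identity $E[B(T)^4] = 3T^2$, but it illustrates how the chaos coefficients $f_n$ control the $L^2$ norm of $F$.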

Definition 2.1 (Malliavin derivative $D_t$). Let $\mathbb{D}_{1,2}^{(B)}$ be the space of all $F \in L^2(\mathcal{F}_T, P)$ such that its chaos expansion (2.3) satisfies

\[
\|F\|^2_{\mathbb{D}_{1,2}^{(B)}} := \sum_{n=1}^{\infty} n\,n!\,\|f_n\|^2_{L^2(\lambda^n)} < \infty. \tag{2.6}
\]

For $F \in \mathbb{D}_{1,2}^{(B)}$ and $t \in [0,T]$, we define the Malliavin derivative of $F$ at $t$ (with respect to $B(\cdot)$), $D_t F$, by

\[
D_t F = \sum_{n=1}^{\infty} n\,I_{n-1}(f_n(\cdot, t)), \tag{2.7}
\]

where the notation $I_{n-1}(f_n(\cdot, t))$ means that we apply the $(n-1)$-times iterated integral to the first $n-1$ variables $t_1, \dots, t_{n-1}$ of $f_n(t_1, t_2, \dots, t_n)$ and keep the last variable $t_n = t$ as a parameter.

One can easily check that

\[
E\Big[\int_0^T (D_t F)^2\,dt\Big] = \sum_{n=1}^{\infty} n\,n!\,\|f_n\|^2_{L^2(\lambda^n)} = \|F\|^2_{\mathbb{D}_{1,2}^{(B)}}, \tag{2.8}
\]

so $(t, \omega) \mapsto D_t F(\omega)$ belongs to $L^2(\lambda\times P)$.

Some other basic properties of the Malliavin derivative D t are the following:

1) Chain rule ( [18] , page 29)

Suppose $F_1, \dots, F_m \in \mathbb{D}_{1,2}^{(B)}$ and that $\psi: \mathbb{R}^m \to \mathbb{R}$ is $C^1$ with bounded partial derivatives. Then $\psi(F_1, \dots, F_m) \in \mathbb{D}_{1,2}^{(B)}$ and

\[
D_t\,\psi(F_1, \dots, F_m) = \sum_{i=1}^m \frac{\partial \psi}{\partial x_i}(F_1, \dots, F_m)\,D_t F_i. \tag{2.9}
\]

2) Integration by parts/duality formula ( [18] , page 35)

Suppose $h(t)$ is $\mathcal{F}_t$-adapted with $E\big[\int_0^T h^2(t)\,dt\big] < \infty$ and let $F \in \mathbb{D}_{1,2}^{(B)}$. Then

\[
E\Big[F \int_0^T h(t)\,dB(t)\Big] = E\Big[\int_0^T h(t)\,D_t F\,dt\Big]. \tag{2.10}
\]
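The duality formula (2.10) can also be checked numerically in a toy case. With the hypothetical choices $F = B(T)^3$ and $h \equiv 1$, the chain rule (2.9) gives $D_t F = 3B(T)^2$, so both sides of (2.10) equal $E[B(T)^4] = 3T^2$:

```python
import numpy as np

# Monte Carlo check of the duality formula (2.10) for F = B(T)^3, h = 1:
#   LHS = E[F * int_0^T dB] = E[B(T)^3 * B(T)] = E[B(T)^4] = 3 T^2,
#   RHS = E[int_0^T D_t F dt] = 3 T * E[B(T)^2]            = 3 T^2.
rng = np.random.default_rng(0)
T = 1.0
BT = rng.normal(0.0, np.sqrt(T), size=2_000_000)   # samples of B(T)
lhs = np.mean(BT ** 3 * BT)        # E[F * stochastic integral of h]
rhs = np.mean(3 * T * BT ** 2)     # E[int_0^T h(t) D_t F dt]
assert abs(lhs - 3 * T ** 2) < 0.05
assert abs(rhs - 3 * T ** 2) < 0.05
```

This duality is exactly the mechanism used in Section 3 to move the variational process $\alpha(t)$ off the martingale terms and obtain an explicit adjoint.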

2.2. Malliavin Calculus for N ˜ ( ⋅ )

The construction of a stochastic derivative/Malliavin derivative in the pure jump martingale case follows the same lines as in the Brownian motion case. In this case, the corresponding Wiener-Itô chaos expansion theorem states that any $F \in L^2(\mathcal{F}_T, P)$ (where in this case $\mathcal{F}_t = \mathcal{F}_t^{\tilde N}$ is the $\sigma$-algebra generated by $\eta(s) := \int_0^s \int_{\mathbb{R}_0} z\,\tilde N(dr, dz)$, $0 \leq s \leq t$) can be written as

\[
F = \sum_{n=0}^{\infty} I_n(f_n), \quad f_n \in \hat{L}^2((\lambda\times\nu)^n), \tag{2.11}
\]

where $\hat{L}^2((\lambda\times\nu)^n)$ is the space of functions $f_n(t_1, z_1, \dots, t_n, z_n)$, $t_i \in [0,T]$, $z_i \in \mathbb{R}_0$, such that $f_n \in L^2((\lambda\times\nu)^n)$ and $f_n$ is symmetric with respect to the pairs of variables $(t_1, z_1), \dots, (t_n, z_n)$.

It is important to note that in this case the n-times iterated integral I n ( f n ) is taken with respect to N ˜ ( d t , d z ) and not with respect to d η ( t ) . Thus, we define

\[
I_n(f_n) = n! \int_0^T\!\!\int_{\mathbb{R}_0} \int_0^{t_n}\!\!\int_{\mathbb{R}_0} \cdots \int_0^{t_2}\!\!\int_{\mathbb{R}_0} f_n(t_1, z_1, \dots, t_n, z_n)\,\tilde N(dt_1, dz_1)\cdots \tilde N(dt_n, dz_n) \tag{2.12}
\]

for $f_n \in \hat{L}^2((\lambda\times\nu)^n)$.

Then Itô isometry for stochastic integrals with respect to N ˜ ( d t , d z ) gives the following isometry for the chaos expansion:

\[
\|F\|^2_{L^2(P)} = \sum_{n=0}^{\infty} n!\,\|f_n\|^2_{L^2((\lambda\times\nu)^n)}. \tag{2.13}
\]

As in the Brownian motion case, we use the chaos expansion to define the Malliavin derivative. Note that in this case there are two parameters $t, z$, where $t$ represents time and $z \in \mathbb{R}_0$ represents a generic jump size.

Definition 2.2 (Malliavin derivative $D_{t,z}$) ( [16] [17] ). Let $\mathbb{D}_{1,2}^{(\tilde N)}$ be the space of all $F \in L^2(\mathcal{F}_T, P)$ such that its chaos expansion (2.11) satisfies

\[
\|F\|^2_{\mathbb{D}_{1,2}^{(\tilde N)}} = \sum_{n=1}^{\infty} n\,n!\,\|f_n\|^2_{L^2((\lambda\times\nu)^n)} < \infty. \tag{2.14}
\]

For $F \in \mathbb{D}_{1,2}^{(\tilde N)}$, we define the Malliavin derivative of $F$ at $(t,z)$ (with respect to $\tilde N(\cdot)$), $D_{t,z}F$, by

\[
D_{t,z} F = \sum_{n=1}^{\infty} n\,I_{n-1}(f_n(\cdot, t, z)), \tag{2.15}
\]

where $I_{n-1}(f_n(\cdot, t, z))$ means that we perform the $(n-1)$-times iterated integral with respect to $\tilde N$ on the first $n-1$ variable pairs $(t_1, z_1), \dots, (t_{n-1}, z_{n-1})$, keeping $(t_n, z_n) = (t, z)$ as a parameter.

In this case we get the isometry

\[
E\Big[\int_0^T \int_{\mathbb{R}_0} (D_{t,z} F)^2\,\nu(dz)\,dt\Big] = \sum_{n=1}^{\infty} n\,n!\,\|f_n\|^2_{L^2((\lambda\times\nu)^n)} = \|F\|^2_{\mathbb{D}_{1,2}^{(\tilde N)}} \tag{2.16}
\]

(compare with (2.8)).

The properties of D t , z corresponding to the properties (2.9) and (2.10) of D t are the following:

1) Chain rule ( [17] [20] )

Suppose $F_1, \dots, F_m \in \mathbb{D}_{1,2}^{(\tilde N)}$ and that $\phi: \mathbb{R}^m \to \mathbb{R}$ is continuous and bounded. Then $\phi(F_1, \dots, F_m) \in \mathbb{D}_{1,2}^{(\tilde N)}$ and

\[
D_{t,z}\,\phi(F_1, \dots, F_m) = \phi(F_1 + D_{t,z}F_1, \dots, F_m + D_{t,z}F_m) - \phi(F_1, \dots, F_m). \tag{2.17}
\]

2) Integration by parts/duality formula ( [17] )

Suppose $\Psi(t,z)$ is $\mathcal{F}_t$-adapted with $E\big[\int_0^T \int_{\mathbb{R}_0} \Psi^2(t,z)\,\nu(dz)\,dt\big] < \infty$ and let $F \in \mathbb{D}_{1,2}^{(\tilde N)}$. Then

\[
E\Big[F \int_0^T \int_{\mathbb{R}_0} \Psi(t,z)\,\tilde N(dt, dz)\Big] = E\Big[\int_0^T \int_{\mathbb{R}_0} \Psi(t,z)\,D_{t,z}F\,\nu(dz)\,dt\Big]. \tag{2.18}
\]
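The jump chain rule (2.17) and duality (2.18) can be checked together in a toy setting. Assume (our hypothetical choice) the Lévy measure $\nu = \lambda\,\delta_1$, so $\eta(T) = N(T) - \lambda T$ with $N(T) \sim \mathrm{Poisson}(\lambda T)$ and $D_{t,z}\,\eta(T) = z = 1$. For $\phi(x) = x^2$ and $\Psi \equiv 1$, (2.17) gives $D_{t,z}\,\phi(\eta(T)) = (\eta(T)+1)^2 - \eta(T)^2 = 2\eta(T) + 1$, and (2.18) predicts $E[\eta(T)^2\,\eta(T)] = \lambda T\,E[2\eta(T)+1] = \lambda T$, matching the third central moment of a Poisson variable:

```python
import numpy as np

# Joint Monte Carlo check of (2.17) and (2.18) for nu = lam * delta_1:
#   F = eta(T) = N(T) - lam*T,  D_{t,z} F = 1,  phi(x) = x^2,  Psi = 1.
#   LHS = E[phi(F) * eta(T)] = E[F^3]            (3rd central Poisson moment)
#   RHS = lam*T * E[D phi(F)] = lam*T * E[2F+1]  (duality + chain rule)
# Both should equal lam*T.
rng = np.random.default_rng(1)
lam, T = 2.0, 1.0
N = rng.poisson(lam * T, size=2_000_000)
F = N - lam * T                          # samples of eta(T)
lhs = np.mean(F ** 2 * F)                # E[phi(F) * int Psi dN~]
rhs = lam * T * np.mean(2 * F + 1)       # E[int int Psi * D phi(F) nu(dz) dt]
assert abs(lhs - lam * T) < 0.05
assert abs(rhs - lam * T) < 0.05
```

Note how the jump derivative is a difference operator rather than a differential one; this is why (2.17) needs no smoothness of $\phi$.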

We let $\mathbb{D}_{1,2}$ denote the set of all random variables which are Malliavin differentiable with respect to both $B(\cdot)$ and $\tilde N(\cdot, \cdot)$.

3. The Stochastic Maximum Principle

We now return to Problem 1.1 given in the introduction. We make the following assumptions:

Assumptions 3.1. (3.1) The functions $b(t,a,u,\omega): [0,T]\times\mathbb{R}\times U\times\Omega \to \mathbb{R}$, $\sigma(t,a,u,\omega): [0,T]\times\mathbb{R}\times U\times\Omega \to \mathbb{R}$, $\gamma(t,a,u,z,\omega): [0,T]\times\mathbb{R}\times U\times\mathbb{R}_0\times\Omega \to \mathbb{R}$, $g(t,a,x,y,u,\omega): [0,T]\times\mathbb{R}^3\times U\times\Omega \to \mathbb{R}$, $f(t,a,a_0,x,x_0,y,k,u,\omega): [0,T]\times\mathbb{R}^6\times U\times\Omega \to \mathbb{R}$, $f_0(a_0): \mathbb{R}\to\mathbb{R}$, $h_0(x_0): \mathbb{R}\to\mathbb{R}$, $g_0(x_0): \mathbb{R}\to\mathbb{R}$, $h_1(x_0): \mathbb{R}\to\mathbb{R}$, $h_2(a,a_0,\omega): \mathbb{R}\times\mathbb{R}\times\Omega \to \mathbb{R}$ are all continuously differentiable ($C^1$) with respect to the arguments (if depending on them) $x$, $x_0$, $a$, $a_0$ and $u \in U$ for each $t \in [0,T]$ and a.a. $\omega \in \Omega$.

(3.2) For all $t, r \in [0,T]$, $t \leq r$, and all bounded $\mathcal{E}_t$-measurable random variables $\theta = \theta(\omega)$, the control

\[
\beta_\theta(s) = \theta(\omega)\,\chi_{(t,r]}(s), \quad s \in [0,T],
\]

belongs to $\mathcal{A}_{\mathcal{E}}$.

(3.3) For all $u, \beta \in \mathcal{A}_{\mathcal{E}}$ with $\beta$ bounded, there exists $\delta > 0$ such that $u + y\beta \in \mathcal{A}_{\mathcal{E}}$ for all $y \in (-\delta, \delta)$.

Furthermore, if we define

\[
\begin{aligned}
\tilde f_1(t) &= \tilde f_1\big(t, A(t), E[f_0(A(t))], X(t), E[h_0(X(t))], Y(t), K(t,\cdot), u(t)\big)\\
&:= f_a\big(t, A(t), E[f_0(A(t))], X(t), E[h_0(X(t))], Y(t), K(t,\cdot), u(t)\big)\\
&\quad + E\big[f_{a_0}\big(t, A(t), E[f_0(A(t))], X(t), E[h_0(X(t))], Y(t), K(t,\cdot), u(t)\big)\big]\, f_0'(A(t)),
\end{aligned} \tag{3.1}
\]

\[
\begin{aligned}
\tilde f_2(t) &= \tilde f_2\big(t, A(t), E[f_0(A(t))], X(t), E[h_0(X(t))], Y(t), K(t,\cdot), u(t)\big)\\
&:= f_x\big(t, A(t), E[f_0(A(t))], X(t), E[h_0(X(t))], Y(t), K(t,\cdot), u(t)\big)\\
&\quad + E\big[f_{x_0}\big(t, A(t), E[f_0(A(t))], X(t), E[h_0(X(t))], Y(t), K(t,\cdot), u(t)\big)\big]\, h_0'(X(t)),
\end{aligned} \tag{3.2}
\]

\[
\tilde h\big(A(T), E[g_0(A(T))]\big) := \frac{\partial h_2}{\partial a}\big(A(T), E[g_0(A(T))]\big) + E\Big[\frac{\partial h_2}{\partial a_0}\big(A(T), E[g_0(A(T))]\big)\Big]\, g_0'(A(T)), \tag{3.3}
\]

then the family

\[
\Big\{\tilde f_1\big(t, A^{u+y\beta}(t), E[f_0(A^{u+y\beta}(t))], X^{u+y\beta}(t), E[h_0(X^{u+y\beta}(t))], Y^{u+y\beta}(t), K^{u+y\beta}(t,\cdot), u(t)+y\beta(t)\big)\,\frac{d}{dy}A^{u+y\beta}(t) + f_u\big(t, A^{u+y\beta}(t), E[f_0(A^{u+y\beta}(t))], X^{u+y\beta}(t), E[h_0(X^{u+y\beta}(t))], Y^{u+y\beta}(t), K^{u+y\beta}(t,\cdot), u(t)+y\beta(t)\big)\,\beta(t)\Big\}_{y \in (-\delta,\delta)} \tag{3.4}
\]

and

\[
\Big\{\tilde f_2\big(t, A^{u+y\beta}(t), E[f_0(A^{u+y\beta}(t))], X^{u+y\beta}(t), E[h_0(X^{u+y\beta}(t))], Y^{u+y\beta}(t), K^{u+y\beta}(t,\cdot), u(t)+y\beta(t)\big)\,\frac{d}{dy}X^{u+y\beta}(t) + f_u\big(t, A^{u+y\beta}(t), E[f_0(A^{u+y\beta}(t))], X^{u+y\beta}(t), E[h_0(X^{u+y\beta}(t))], Y^{u+y\beta}(t), K^{u+y\beta}(t,\cdot), u(t)+y\beta(t)\big)\,\beta(t)\Big\}_{y \in (-\delta,\delta)} \tag{3.5}
\]

are λ × P -uniformly integrable and the family

\[
\Big\{\tilde h\big(A^{u+y\beta}(T), E[g_0(A^{u+y\beta}(T))]\big)\,\frac{d}{dy}A^{u+y\beta}(T)\Big\}_{y \in (-\delta,\delta)} \tag{3.6}
\]

is P-uniformly integrable.

(3.4) For all $u, \beta \in \mathcal{A}_{\mathcal{E}}$, with $\beta$ bounded, the processes $\alpha(t) = \frac{d}{dy}A^{u+y\beta}(t)\big|_{y=0}$, $\xi(t) = \frac{d}{dy}X^{u+y\beta}(t)\big|_{y=0}$, $\eta(t) = \frac{d}{dy}Y^{u+y\beta}(t)\big|_{y=0}$ and $\zeta(t,z) = \frac{d}{dy}K^{u+y\beta}(t,z)\big|_{y=0}$ exist and satisfy the equations

\[
d\alpha(t) = \{b_a(t)\,\alpha(t) + b_u(t)\,\beta(t)\}\,dt + \{\sigma_a(t)\,\alpha(t) + \sigma_u(t)\,\beta(t)\}\,dB(t) + \int_{\mathbb{R}_0} \{\gamma_a(t,z)\,\alpha(t) + \gamma_u(t,z)\,\beta(t)\}\,\tilde N(dt,dz), \tag{3.7}
\]

\[
d\xi(t) = -\{g_a(t)\,\alpha(t) + g_x(t)\,\xi(t) + g_y(t)\,\eta(t) + g_u(t)\,\beta(t)\}\,dt + \eta(t)\,dB(t) + \int_{\mathbb{R}_0} \zeta(t,z)\,\tilde N(dt,dz), \tag{3.8}
\]

where we use the simplified notation

\[
b_a(t) = \frac{\partial b}{\partial a}(t, A(t), u(t)), \quad \text{etc.} \tag{3.9}
\]

(3.5) For all $u \in \mathcal{A}_{\mathcal{E}}$, with the definitions (3.1), (3.2) and (3.3), the process

\[
G(t,s) := \exp\Big(\int_t^s \Big\{b_a(r) - \frac{1}{2}\,\sigma_a^2(r)\Big\}\,dr + \int_t^s \sigma_a(r)\,dB(r) + \int_t^s\!\int_{\mathbb{R}_0} \ln\big(1 + \gamma_a(r,z)\big)\,\tilde N(dr,dz) + \int_t^s\!\int_{\mathbb{R}_0} \big[\ln\big(1 + \gamma_a(r,z)\big) - \gamma_a(r,z)\big]\,\nu(dz)\,dr\Big), \quad s > t, \tag{3.10}
\]

exists, and we define the adjoint processes $p(t)$, $q(t)$, $r(t,z)$, $\lambda(t)$ as follows:

\[
p(t) := \kappa(t) + \int_t^T \frac{\partial H_0}{\partial a}(s)\,G(t,s)\,ds, \tag{3.11}
\]

\[
q(t) := D_t\,p(t), \tag{3.12}
\]

\[
r(t,z) := D_{t,z}\,p(t), \tag{3.13}
\]

with

\[
\kappa(t) := \tilde h\big(A(T), E[g_0(A(T))]\big) + c\,\lambda(T) + \int_t^T \tilde f_1(s)\,ds, \tag{3.14}
\]

\[
H_0(s, a, x, u) := \kappa(s)\,b(s, a, u) + D_s\kappa(s)\,\sigma(s, a, u) + \int_{\mathbb{R}_0} D_{s,z}\kappa(s)\,\gamma(s, a, u, z)\,\nu(dz) + g(s, a, x, u)\,\lambda(s). \tag{3.15}
\]

The above processes all exist for $0 \leq t \leq s \leq T$, $z \in \mathbb{R}_0$. Here and in the following, we use the shorthand notation $H_0(s) = H_0(s, A(s), X(s), u(s))$.
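The kernel $G(t,s)$ in (3.10) is a Doléans-Dade stochastic exponential, and a quick Monte Carlo sanity check is possible in a constant-coefficient toy case. Assume (hypothetically) $b_a \equiv \mu$, $\sigma_a \equiv \bar\sigma$, $\gamma_a \equiv \bar\gamma$ and the toy Lévy measure $\nu = \bar\lambda\,\delta_1$; then (3.10) collapses to $G(0,T) = \exp\big((\mu - \bar\sigma^2/2 - \bar\lambda\bar\gamma)T + \bar\sigma B(T)\big)(1+\bar\gamma)^{N(T)}$, and since the $dB$ and $\tilde N$ parts are martingale contributions, $E[G(0,T)] = e^{\mu T}$:

```python
import numpy as np

# Monte Carlo check that the Doléans-Dade exponential (3.10), with constant
# coefficients mu, sig, gam and toy Lévy measure nu = lam * delta_1, satisfies
# E[G(0,T)] = exp(mu*T).  (These constant coefficients are our assumption.)
rng = np.random.default_rng(7)
mu, sig, gam, lam, T = 0.1, 0.3, 0.5, 1.0, 1.0
n = 2_000_000
BT = rng.normal(0.0, np.sqrt(T), size=n)   # B(T)
NT = rng.poisson(lam * T, size=n)          # N(T), number of jumps on [0, T]
G = np.exp((mu - 0.5 * sig ** 2 - lam * gam) * T + sig * BT) * (1 + gam) ** NT
assert abs(np.mean(G) - np.exp(mu * T)) < 0.01
```

This is consistent with the role of $G(t,s)$ as the transition factor of the linear first-variation equation (3.7): in expectation, only the drift coefficient $b_a$ survives.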

We now define the Hamiltonian for this problem:

\[
H: [0,T]\times\mathbb{R}^3\times L^2(\nu)\times U\times\mathbb{R}^3\times L^2(\nu)\times\Omega \to \mathbb{R}
\]

is defined by

\[
\begin{aligned}
H(t, a, x, y, k, u, \lambda, p, q, r(\cdot), \omega) &= f\big(t, a, E[f_0(A(t))], x, E[h_0(X(t))], y, k, u, \omega\big) + g(t, a, x, y, u, \omega)\,\lambda\\
&\quad + b(t, a, u, \omega)\,p + \sigma(t, a, u, \omega)\,q + \int_{\mathbb{R}_0} \gamma(t, a, u, z, \omega)\,r(z)\,\nu(dz).
\end{aligned} \tag{3.16}
\]

The process $\lambda(t)$ is given by the forward equation

\[
\begin{cases}
d\lambda(t) = H_x\big(t, A(t), X(t), Y(t), K(t,\cdot), u(t), \lambda(t), p(t), q(t), r(t,\cdot)\big)\,dt\\
\qquad\qquad + E\big[f_{x_0}\big(t, A(t), E[f_0(A(t))], X(t), E[h_0(X(t))], Y(t), K(t,\cdot), u(t)\big)\big]\,h_0'(X(t))\,dt\\
\qquad\qquad + H_y\big(t, A(t), X(t), Y(t), K(t,\cdot), u(t), \lambda(t), p(t), q(t), r(t,\cdot)\big)\,dB(t)\\
\qquad\qquad + \displaystyle\int_{\mathbb{R}_0} \nabla_k H\big(t, A(t), X(t), Y(t), K(t,\cdot), u(t), \lambda(t), p(t), q(t), r(t,\cdot)\big)\,\tilde N(dt,dz),\\[1mm]
\lambda(0) = h_1'(X(0)) \ \Big(= \dfrac{dh_1}{dx}(X(0))\Big),
\end{cases} \tag{3.17}
\]

for $t \in [0,T]$.

We can now formulate our stochastic maximum principle:

Theorem 3.1 (Partial information equivalence principle). Suppose $u \in \mathcal{A}_{\mathcal{E}}$ with corresponding solutions $A(t)$, $X(t)$, $Y(t)$, $K(t,z)$, $\lambda(t)$ of (1.1), (1.2) and (3.17). Assume that the random variables

\[
F(T) := \tilde h\big(A(T), E[g_0(A(T))]\big) + c\,\lambda(T), \qquad \Phi(t,s) := \frac{\partial H_0}{\partial a}(s)\,G(t,s)
\]

and $\tilde f_1(t)$ belong to $\mathbb{D}_{1,2}$ for all $0 \leq t \leq s \leq T$, and that

\[
E\Big[\int_0^T \Big\{\sigma_a^2(s)\,\alpha^2(s) + \sigma_u^2(s) + \int_{\mathbb{R}_0} \big\{\gamma_a^2(s,z)\,\alpha^2(s) + \gamma_u^2(s,z)\big\}\,\nu(dz)\Big\}\,ds\Big] < \infty, \tag{3.18}
\]

\[
E\Big[\int_0^T\!\int_0^T \Big\{\big(D_s \tilde f_1(t)\big)^2 + \int_{\mathbb{R}_0} \big(D_{s,z}\tilde f_1(t)\big)^2\,\nu(dz)\Big\}\,ds\,dt\Big] < \infty, \tag{3.19}
\]

\[
E\Big[\int_0^T\!\int_0^T \Big\{\big(D_r \Phi(t,s)\big)^2 + \int_{\mathbb{R}_0} \big(D_{r,z}\Phi(t,s)\big)^2\,\nu(dz)\Big\}\,dr\,ds\Big] < \infty. \tag{3.20}
\]

Then the following are equivalent:

i) $\dfrac{d}{dy}J(u + y\beta)\Big|_{y=0} = 0$ for all bounded $\beta \in \mathcal{A}_{\mathcal{E}}$.

ii) $E\Big[\dfrac{\partial}{\partial u} H\big(t, A(t), X(t), Y(t), K(t,\cdot), u, \lambda(t), p(t), q(t), r(t,\cdot)\big)\Big|_{u=u(t)}\,\Big|\,\mathcal{E}_t\Big] = 0$ for a.a. $(t, \omega) \in [0,T]\times\Omega$.

Proof. (i) ⇒ (ii): Assume that (i) holds and note that

\[
\alpha(0) = \frac{d}{dy}A^{u+y\beta}(0)\Big|_{y=0} = 0 \tag{3.21}
\]

and

\[
\alpha(T) = \frac{d}{dy}A^{u+y\beta}(T)\Big|_{y=0} = \frac{1}{c}\,\frac{d}{dy}X^{u+y\beta}(T)\Big|_{y=0} = \frac{1}{c}\,\xi(T). \tag{3.22}
\]

Then

\[
\begin{aligned}
0 &= \frac{d}{dy}J(u+y\beta)\Big|_{y=0}\\
&= E\Big[\int_0^T \Big\{f_a(t)\,\alpha(t) + f_{a_0}(t)\,E[f_0'(A(t))\,\alpha(t)] + f_x(t)\,\xi(t) + f_{x_0}(t)\,E[h_0'(X(t))\,\xi(t)] + f_y(t)\,\eta(t)\\
&\qquad + \int_{\mathbb{R}_0} \nabla_k f(t,z)\,\zeta(t,z)\,\nu(dz) + f_u(t)\,\beta(t)\Big\}\,dt + h_1'(X(0))\,\xi(0) + \frac{\partial h_2}{\partial a}\big(A(T), E[g_0(A(T))]\big)\,\alpha(T)\\
&\qquad + \frac{\partial h_2}{\partial a_0}\big(A(T), E[g_0(A(T))]\big)\,E[g_0'(A(T))\,\alpha(T)]\Big]\\
&= E\Big[\int_0^T \Big\{f_a(t)\,\alpha(t) + E[f_{a_0}(t)]\,f_0'(A(t))\,\alpha(t) + f_x(t)\,\xi(t) + E[f_{x_0}(t)]\,h_0'(X(t))\,\xi(t) + f_y(t)\,\eta(t)\\
&\qquad + \int_{\mathbb{R}_0} \nabla_k f(t,z)\,\zeta(t,z)\,\nu(dz) + f_u(t)\,\beta(t)\Big\}\,dt + h_1'(X(0))\,\xi(0) + \frac{\partial h_2}{\partial a}\big(A(T), E[g_0(A(T))]\big)\,\alpha(T)\\
&\qquad + E\Big[\frac{\partial h_2}{\partial a_0}\big(A(T), E[g_0(A(T))]\big)\Big]\,g_0'(A(T))\,\alpha(T)\Big]\\
&= E\Big[\int_0^T \Big\{\tilde f_1(t)\,\alpha(t) + \tilde f_2(t)\,\xi(t) + f_y(t)\,\eta(t) + \int_{\mathbb{R}_0} \nabla_k f(t,z)\,\zeta(t,z)\,\nu(dz) + f_u(t)\,\beta(t)\Big\}\,dt\\
&\qquad + h_1'(X(0))\,\xi(0) + \tilde h\big(A(T), E[g_0(A(T))]\big)\,\alpha(T) + c\,\lambda(T)\,\alpha(T) - c\,\lambda(T)\,\alpha(T)\Big].
\end{aligned} \tag{3.23}
\]

By the duality formulas (2.10) and (2.18), and with $F(T) = \tilde h\big(A(T), E[g_0(A(T))]\big) + c\,\lambda(T)$, we get

\[
\begin{aligned}
E[F(T)\,\alpha(T)] &= E\Big[F(T)\Big(\int_0^T \{b_a(t)\,\alpha(t) + b_u(t)\,\beta(t)\}\,dt + \int_0^T \{\sigma_a(t)\,\alpha(t) + \sigma_u(t)\,\beta(t)\}\,dB(t)\\
&\qquad + \int_0^T\!\int_{\mathbb{R}_0} \{\gamma_a(t,z)\,\alpha(t) + \gamma_u(t,z)\,\beta(t)\}\,\tilde N(dt,dz)\Big)\Big]\\
&= E\Big[\int_0^T \Big\{F(T)\,[b_a(t)\,\alpha(t) + b_u(t)\,\beta(t)] + D_t F(T)\,[\sigma_a(t)\,\alpha(t) + \sigma_u(t)\,\beta(t)]\\
&\qquad + \int_{\mathbb{R}_0} D_{t,z}F(T)\,[\gamma_a(t,z)\,\alpha(t) + \gamma_u(t,z)\,\beta(t)]\,\nu(dz)\Big\}\,dt\Big].
\end{aligned} \tag{3.24}
\]

Similarly, using the Fubini theorem in the last equality below, we have

\[
\begin{aligned}
E\Big[\int_0^T \tilde f_1(t)\,\alpha(t)\,dt\Big] &= E\Big[\int_0^T \tilde f_1(t)\Big(\int_0^t \{b_a(s)\,\alpha(s) + b_u(s)\,\beta(s)\}\,ds + \int_0^t \{\sigma_a(s)\,\alpha(s) + \sigma_u(s)\,\beta(s)\}\,dB(s)\\
&\qquad + \int_0^t\!\int_{\mathbb{R}_0} \{\gamma_a(s,z)\,\alpha(s) + \gamma_u(s,z)\,\beta(s)\}\,\tilde N(ds,dz)\Big)\,dt\Big]\\
&= E\Big[\int_0^T \Big(\int_0^t \Big\{\tilde f_1(t)\,[b_a(s)\,\alpha(s) + b_u(s)\,\beta(s)] + D_s\tilde f_1(t)\,[\sigma_a(s)\,\alpha(s) + \sigma_u(s)\,\beta(s)]\\
&\qquad + \int_{\mathbb{R}_0} D_{s,z}\tilde f_1(t)\,[\gamma_a(s,z)\,\alpha(s) + \gamma_u(s,z)\,\beta(s)]\,\nu(dz)\Big\}\,ds\Big)\,dt\Big]\\
&= E\Big[\int_0^T \Big\{\Big(\int_s^T \tilde f_1(t)\,dt\Big)[b_a(s)\,\alpha(s) + b_u(s)\,\beta(s)] + \Big(\int_s^T D_s\tilde f_1(t)\,dt\Big)[\sigma_a(s)\,\alpha(s) + \sigma_u(s)\,\beta(s)]\\
&\qquad + \int_{\mathbb{R}_0} \Big(\int_s^T D_{s,z}\tilde f_1(t)\,dt\Big)[\gamma_a(s,z)\,\alpha(s) + \gamma_u(s,z)\,\beta(s)]\,\nu(dz)\Big\}\,ds\Big].
\end{aligned} \tag{3.25}
\]

Changing the notation $s \leftrightarrow t$, this becomes

\[
\begin{aligned}
&= E\Big[\int_0^T \Big\{\Big(\int_t^T \tilde f_1(s)\,ds\Big)[b_a(t)\,\alpha(t) + b_u(t)\,\beta(t)] + \Big(\int_t^T D_t\tilde f_1(s)\,ds\Big)[\sigma_a(t)\,\alpha(t) + \sigma_u(t)\,\beta(t)]\\
&\qquad + \int_{\mathbb{R}_0} \Big(\int_t^T D_{t,z}\tilde f_1(s)\,ds\Big)[\gamma_a(t,z)\,\alpha(t) + \gamma_u(t,z)\,\beta(t)]\,\nu(dz)\Big\}\,dt\Big].
\end{aligned} \tag{3.26}
\]

Combining (3.24) and (3.26) and using (3.14), we get

\[
\begin{aligned}
&E\Big[\int_0^T \{\tilde f_1(t)\,\alpha(t) + f_u(t)\,\beta(t)\}\,dt + \tilde h\big(A(T), E[g_0(A(T))]\big)\,\alpha(T)\Big]\\
&= E\Big[\int_0^T \Big\{\kappa(t)\,[b_a(t)\,\alpha(t) + b_u(t)\,\beta(t)] + D_t\kappa(t)\,[\sigma_a(t)\,\alpha(t) + \sigma_u(t)\,\beta(t)]\\
&\qquad + \int_{\mathbb{R}_0} D_{t,z}\kappa(t)\,[\gamma_a(t,z)\,\alpha(t) + \gamma_u(t,z)\,\beta(t)]\,\nu(dz) + f_u(t)\,\beta(t)\Big\}\,dt\Big] - E[\lambda(T)\,\xi(T)],
\end{aligned} \tag{3.27}
\]

using that $c\,\alpha(T) = \xi(T)$.

Then, by the Itô formula and (3.17),

\[
\begin{aligned}
E[h_1'(X(0))\,\xi(0)] &= E[\lambda(0)\,\xi(0)]\\
&= E\Big[\lambda(T)\,\xi(T) - \int_0^T \lambda(t)\,d\xi(t) - \int_0^T \xi(t)\,d\lambda(t) - \int_0^T H_y(t)\,\eta(t)\,dt - \int_0^T\!\int_{\mathbb{R}_0} \nabla_k H(t,z)\,\zeta(t,z)\,\nu(dz)\,dt\Big]\\
&= E\Big[\lambda(T)\,\xi(T) + \int_0^T \lambda(t)\,\{g_a(t)\,\alpha(t) + g_x(t)\,\xi(t) + g_y(t)\,\eta(t) + g_u(t)\,\beta(t)\}\,dt - \int_0^T \xi(t)\,H_x(t)\,dt\\
&\qquad - \int_0^T \xi(t)\,E[f_{x_0}(t)]\,h_0'(X(t))\,dt - \int_0^T \eta(t)\,H_y(t)\,dt - \int_0^T\!\int_{\mathbb{R}_0} \nabla_k H(t,z)\,\zeta(t,z)\,\nu(dz)\,dt\Big].
\end{aligned} \tag{3.28}
\]

Now by (3.16) we have

\[
H_x(t) = f_x(t) + g_x(t)\,\lambda(t), \qquad H_y(t) = f_y(t) + g_y(t)\,\lambda(t), \qquad \nabla_k H(t,z) = \nabla_k f(t,z). \tag{3.29}
\]

Hence, we conclude that

\[
E\Big[\int_0^T \tilde f_2(t)\,\xi(t)\,dt + h_1'(X(0))\,\xi(0)\Big] = E\Big[\lambda(T)\,\xi(T) + \int_0^T \Big\{\lambda(t)\,[g_a(t)\,\alpha(t) + g_u(t)\,\beta(t)] - f_y(t)\,\eta(t) - \int_{\mathbb{R}_0} \nabla_k f(t,z)\,\zeta(t,z)\,\nu(dz)\Big\}\,dt\Big]. \tag{3.30}
\]

Combining (3.23), (3.27) and (3.30), we get

\[
\begin{aligned}
0 &= \frac{d}{dy}J(u+y\beta)\Big|_{y=0}\\
&= E\Big[\int_0^T \Big\{\kappa(t)\,[b_a(t)\,\alpha(t) + b_u(t)\,\beta(t)] + D_t\kappa(t)\,[\sigma_a(t)\,\alpha(t) + \sigma_u(t)\,\beta(t)]\\
&\qquad + \int_{\mathbb{R}_0} D_{t,z}\kappa(t)\,[\gamma_a(t,z)\,\alpha(t) + \gamma_u(t,z)\,\beta(t)]\,\nu(dz) + f_u(t)\,\beta(t) + \lambda(t)\,[g_a(t)\,\alpha(t) + g_u(t)\,\beta(t)]\Big\}\,dt\Big]\\
&= E\Big[\int_0^T \Big\{\Big[\kappa(t)\,b_a(t) + D_t\kappa(t)\,\sigma_a(t) + \int_{\mathbb{R}_0} D_{t,z}\kappa(t)\,\gamma_a(t,z)\,\nu(dz) + \lambda(t)\,g_a(t)\Big]\,\alpha(t)\\
&\qquad + \Big[\kappa(t)\,b_u(t) + D_t\kappa(t)\,\sigma_u(t) + \int_{\mathbb{R}_0} D_{t,z}\kappa(t)\,\gamma_u(t,z)\,\nu(dz) + f_u(t) + \lambda(t)\,g_u(t)\Big]\,\beta(t)\Big\}\,dt\Big].
\end{aligned} \tag{3.31}
\]

This holds for all $\beta \in \mathcal{A}_{\mathcal{E}}$. In particular, if we apply it to

\[
\beta_\theta = \beta_\theta(s) = \theta(\omega)\,\chi_{(t, t+h]}(s),
\]

where $\theta(\omega)$ is $\mathcal{E}_t$-measurable and $0 \leq t \leq t+h \leq T$, we get, by (3.7),

\[
\alpha^{(\beta_\theta)}(s) = 0 \quad \text{for } 0 \leq s \leq t,
\]

and (3.31) can be written as

\[
L_1(h) + L_2(h) = 0, \tag{3.32}
\]

where

\[
L_1(h) = E\Big[\int_t^T \Big\{\kappa(s)\,b_a(s) + D_s\kappa(s)\,\sigma_a(s) + \int_{\mathbb{R}_0} D_{s,z}\kappa(s)\,\gamma_a(s,z)\,\nu(dz) + \lambda(s)\,g_a(s)\Big\}\,\alpha(s)\,ds\Big] \tag{3.33}
\]

and

\[
L_2(h) = E\Big[\theta \int_t^{t+h} \Big\{\kappa(s)\,b_u(s) + D_s\kappa(s)\,\sigma_u(s) + \int_{\mathbb{R}_0} D_{s,z}\kappa(s)\,\gamma_u(s,z)\,\nu(dz) + f_u(s) + \lambda(s)\,g_u(s)\Big\}\,ds\Big]. \tag{3.34}
\]

Note that with $\alpha(s) = \alpha^{(\beta_\theta)}(s)$ we have, for $s \geq t+h$,

\[
d\alpha(s) = \alpha(s)\Big\{b_a(s)\,ds + \sigma_a(s)\,dB(s) + \int_{\mathbb{R}_0} \gamma_a(s,z)\,\tilde N(ds,dz)\Big\}. \tag{3.35}
\]

Hence, by the Itô formula,

\[
\alpha(s) = \alpha(t+h)\,G(t+h, s), \quad s \geq t+h, \tag{3.36}
\]

where G is defined in (3.10). Note that G ( t , s ) does not depend on h. Then

\[
L_1(h) = E\Big[\int_t^T \frac{\partial H_0}{\partial a}(s)\,\alpha(s)\,ds\Big], \tag{3.37}
\]

where H 0 is defined in (3.15). Differentiating with respect to h at h = 0 gives

\[
L_1'(0) = \frac{d}{dh}E\Big[\int_t^{t+h} \frac{\partial H_0}{\partial a}(s)\,\alpha(s)\,ds\Big]_{h=0} + \frac{d}{dh}E\Big[\int_{t+h}^T \frac{\partial H_0}{\partial a}(s)\,\alpha(s)\,ds\Big]_{h=0}. \tag{3.38}
\]

Since $\alpha(t) = 0$, we see that

\[
\frac{d}{dh}E\Big[\int_t^{t+h} \frac{\partial H_0}{\partial a}(s)\,\alpha(s)\,ds\Big]_{h=0} = 0. \tag{3.39}
\]

Therefore, by (3.36),

\[
L_1'(0) = \frac{d}{dh}E\Big[\int_{t+h}^T \frac{\partial H_0}{\partial a}(s)\,\alpha(t+h)\,G(t+h, s)\,ds\Big]_{h=0} = \int_t^T \frac{d}{dh}E\Big[\frac{\partial H_0}{\partial a}(s)\,\alpha(t+h)\,G(t+h, s)\Big]_{h=0} ds = \int_t^T \frac{d}{dh}E\Big[\frac{\partial H_0}{\partial a}(s)\,G(t, s)\,\alpha(t+h)\Big]_{h=0} ds. \tag{3.40}
\]

By (3.7) we have

\[
\alpha(t+h) = \theta\int_t^{t+h} \Big\{b_u(r)\,dr + \sigma_u(r)\,dB(r) + \int_{\mathbb{R}_0} \gamma_u(r,z)\,\tilde N(dr,dz)\Big\} + \int_t^{t+h} \alpha(r)\Big\{b_a(r)\,dr + \sigma_a(r)\,dB(r) + \int_{\mathbb{R}_0} \gamma_a(r,z)\,\tilde N(dr,dz)\Big\}. \tag{3.41}
\]

Therefore, by (3.40) and (3.41)

\[
L_1'(0) = \Gamma_1 + \Gamma_2, \tag{3.42}
\]

where

\[
\Gamma_1 = \int_t^T \frac{d}{dh}E\Big[\frac{\partial H_0}{\partial a}(s)\,G(t,s)\,\theta\int_t^{t+h}\Big\{b_u(r)\,dr + \sigma_u(r)\,dB(r) + \int_{\mathbb{R}_0} \gamma_u(r,z)\,\tilde N(dr,dz)\Big\}\Big]_{h=0} ds \tag{3.43}
\]

and

\[
\Gamma_2 = \int_t^T \frac{d}{dh}E\Big[\frac{\partial H_0}{\partial a}(s)\,G(t,s)\int_t^{t+h}\alpha(r)\Big\{b_a(r)\,dr + \sigma_a(r)\,dB(r) + \int_{\mathbb{R}_0} \gamma_a(r,z)\,\tilde N(dr,dz)\Big\}\Big]_{h=0} ds. \tag{3.44}
\]

Recall that $\Phi(t,s) = \frac{\partial H_0}{\partial a}(s)\,G(t,s)$. By the duality formulas (2.10) and (2.18), we have

\[
\begin{aligned}
\Gamma_1 &= \int_t^T \frac{d}{dh}E\Big[\theta\int_t^{t+h}\Big\{b_u(r)\,\Phi(t,s) + \sigma_u(r)\,D_r\Phi(t,s) + \int_{\mathbb{R}_0} \gamma_u(r,z)\,D_{r,z}\Phi(t,s)\,\nu(dz)\Big\}\,dr\Big]_{h=0} ds\\
&= \int_t^T E\Big[\theta\Big\{b_u(t)\,\Phi(t,s) + \sigma_u(t)\,D_t\Phi(t,s) + \int_{\mathbb{R}_0} \gamma_u(t,z)\,D_{t,z}\Phi(t,s)\,\nu(dz)\Big\}\Big]\,ds.
\end{aligned} \tag{3.45}
\]

Since $\alpha(t) = 0$, we see that

\[
\Gamma_2 = 0. \tag{3.46}
\]

We conclude from (3.42)-(3.46) that

\[
L_1'(0) = \Gamma_1. \tag{3.47}
\]

Moreover, we see directly that

\[
L_2'(0) = E\Big[\theta\Big\{\kappa(t)\,b_u(t) + D_t\kappa(t)\,\sigma_u(t) + \int_{\mathbb{R}_0} D_{t,z}\kappa(t)\,\gamma_u(t,z)\,\nu(dz) + f_u(t) + \lambda(t)\,g_u(t)\Big\}\Big]. \tag{3.48}
\]

By differentiating (3.32) with respect to h at h = 0 , we thus obtain the equation

\[
E\Big[\theta\Big\{\Big(\kappa(t) + \int_t^T \Phi(t,s)\,ds\Big)b_u(t) + D_t\Big(\kappa(t) + \int_t^T \Phi(t,s)\,ds\Big)\sigma_u(t) + \int_{\mathbb{R}_0} D_{t,z}\Big(\kappa(t) + \int_t^T \Phi(t,s)\,ds\Big)\gamma_u(t,z)\,\nu(dz) + f_u(t) + \lambda(t)\,g_u(t)\Big\}\Big] = 0. \tag{3.49}
\]

Using (3.11), Equation (3.49) can be written as

\[
\begin{aligned}
E\Big[\theta\,\frac{\partial}{\partial u}\Big\{&f\big(t, A(t), E[f_0(A(t))], X(t), E[h_0(X(t))], Y(t), K(t,\cdot), u\big) + p(t)\,b(t, A(t), u) + \lambda(t)\,g(t, A(t), X(t), Y(t), u)\\
&+ D_t p(t)\,\sigma(t, A(t), u) + \int_{\mathbb{R}_0} D_{t,z}p(t)\,\gamma(t, A(t), u, z)\,\nu(dz)\Big\}\Big|_{u=u(t)}\Big] = 0.
\end{aligned} \tag{3.50}
\]

Since this holds for all $\mathcal{E}_t$-measurable $\theta$, we conclude that

\[
E\Big[\frac{\partial}{\partial u}H\big(t, A(t), X(t), Y(t), K(t,\cdot), u, \lambda(t), p(t), q(t), r(t,\cdot)\big)\Big|_{u=u(t)}\,\Big|\,\mathcal{E}_t\Big] = 0. \tag{3.51}
\]

(ii) ⇒ (i): Conversely, suppose (3.51) holds for some $u \in \mathcal{A}_{\mathcal{E}}$. Then we can reverse the argument to show that (3.32) holds for all $\beta = \beta_\theta$, and hence for all linear combinations of such $\beta_\theta$. Since all bounded $\beta \in \mathcal{A}_{\mathcal{E}}$ can be approximated by such linear combinations, it follows that (3.32) holds for all bounded $\beta \in \mathcal{A}_{\mathcal{E}}$. Hence, by reversing the remaining part of the argument above, we conclude that (ii) ⇒ (i).

4. Conclusion

In this paper, we consider a mean-field type stochastic control problem where the dynamics are governed by a coupled forward-backward stochastic differential equation driven by Lévy processes and the information available to the controller is possibly less than the overall information. All the system coefficients and the objective performance functional are allowed to be random, possibly non-Markovian. Malliavin calculus is employed to derive a maximum principle for the optimal control of such a system in which the adjoint process is explicitly expressed.

Acknowledgements

The work was partially done while the first author was visiting the University of Kansas. She would like to thank Professor David Nualart and Professor Yaozhong Hu for providing a stimulating working environment.

Fund

The work of Qing Zhou is supported by the National Natural Science Foundation of China (No. 11471051 and 11371362). The work of Yong Ren is supported by the National Natural Science Foundation of China (No. 11371029).

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Bensoussan, A. (1992) Stochastic Control of Partially Observable Systems. Cambridge University Press, Cambridge.
https://doi.org/10.1017/CBO9780511526503
[2] Shi, J. and Wu, Z. (2007) Maximum Principle for Fully Coupled Stochastic Control System with Random Jumps. Proceedings of the 26th Chinese Control Conference, Zhangjiajie, 375-380.
[3] Andersson, D. and Djehiche, B. (2011) A Maximum Principle for SDEs of Mean-Field Type. Applied Mathematics & Optimization, 58, 76-82.
[4] Jourdain, B., Meleard, S. and Woyczynski, W.A. (2008) Non-Linear SDEs Driven by Lévy Processes and Related PDEs. ALEA—Latin American Journal of Probability and Mathematical Statistics, 4, 1-29.
[5] Lasry, J.-M. and Lions, P.-L. (2007) Mean Field Games. Japanese Journal of Mathematics, 2, 229-260.
https://doi.org/10.1007/s11537-007-0657-8
[6] Meyer-Brandis, T., Øksendal, B. and Zhou, X.Y. (2012) A Mean-Field Stochastic Maximum Principle via Malliavin Calculus. Stochastics: An International Journal of Probability and Stochastic Processes: formerly Stochastics and Stochastics Reports, 84, 643-666.
[7] Øksendal, B. and Sulem, A. (2009) Maximum Principles for Optimal Control of Forward-Backward Stochastic Differential Equations with Jumps. SIAM Journal on Control and Optimization, 48, 2945-2976.
https://doi.org/10.1137/080739781
[8] Baras, J.S., Elliott, R.J. and Kohlmann, M. (1989) The Partially Observed Stochastic Minimum Principle. SIAM Journal on Control and Optimization, 27, 1279-1292.
https://doi.org/10.1137/0327065
[9] Karatzas, I. and Xue, X. (1991) A Note on Utility Maximization under Partial Observations. Mathematical Finance, 1, 57-70.
https://doi.org/10.1111/j.1467-9965.1991.tb00009.x
[10] Lakner, P. (1998) Optimal Trading Strategy for an Investor: The Case of Partial Information. Stochastic Processes and Their Applications, 76, 77-97.
https://doi.org/10.1016/S0304-4149(98)00032-5
[11] Pham, H. and Quenez, M.-C. (2001) Optimal Portfolio in Partially Observed Stochastic Volatility Models. Annals of Applied Probability, 11, 210-238.
https://doi.org/10.1214/aoap/998926991
[12] Tang, S. (1998) The Maximum Principle for Partially Observed Optimal Control of Stochastic Differential Equations. SIAM Journal on Control and Optimization, 36, 1596-1617. https://doi.org/10.1137/S0363012996313100
[13] Elliott, R.J. and Kohlmann, M. (1994) The Second Order Minimum Principle and Adjoint Processes. Stochastics and Stochastic Reports, 46, 25-39.
https://doi.org/10.1080/17442509408833867
[14] Fournié, E., Lasry, J.-M., Lebuchoux, J. and Lions, P.-L. (2001) Applications of Malliavin Calculus to Monte-Carlo Methods in Finance. II. Finance and Stochastics, 5, 201-236.
https://doi.org/10.1007/PL00013529
[15] Kallenberg, O. (2002) Foundations of Modern Probability. 2nd Edition, Springer Series in Statistics, Springer, New York.
https://doi.org/10.1007/978-1-4757-4015-8
[16] Benth, F.E., Di Nunno, G., Løkka, A., Øksendal, B. and Proske, F. (2003) Explicit Representation of the Minimal Variance Portfolio in Markets Driven by Lévy Processes. Mathematical Finance, 13, 55-72.
https://doi.org/10.1111/1467-9965.t01-1-00005
[17] Di Nunno, G., Meyer-Brandis, T., Øksendal, B. and Proske, F. (2005) Malliavin Calculus and Anticipative Itô Formula for Lévy Processes. Infinite Dimensional Analysis, Quantum Probability and Related Topics, 8, 235-258.
[18] Nualart, D. (2006) Malliavin Calculus and Related Topics. 2nd Edition, Springer, Berlin, Heidelberg.
[19] Di Nunno, G., Øksendal, B. and Proske, F. (2009) Malliavin Calculus for Lévy processes and Application to Finance. Universitext, Springer, Berlin, Heidelberg.
https://doi.org/10.1007/978-3-540-78572-9
[20] Itô, Y. (1988) Generalized Poisson Functionals. Probability Theory and Related Fields, 77, 1-28.
https://doi.org/10.1007/BF01848128
