A Mean-Field Stochastic Maximum Principle for Optimal Control of Forward-Backward Stochastic Differential Equations with Jumps via Malliavin Calculus

This paper considers a mean-field type stochastic control problem where the dynamics is governed by a forward and backward stochastic differential equation (SDE) driven by Lévy processes and the information available to the controller is possibly less than the overall information. All the system coefficients and the objective performance functional are allowed to be random, possibly non-Markovian. Malliavin calculus is employed to derive a maximum principle for the optimal control of such a system where the adjoint process is explicitly expressed.


Introduction
In contrast to stochastic control problems (e.g. [1] [2]) studied in the complete information case (and, in [1], for the Brownian motion case only), the performance functional that we investigate involves the mean of functionals of the state variables (hence the name mean-field). Problems of this type occur in many applications; for example, in a continuous-time Markowitz mean-variance portfolio selection model, the variance term involves a quadratic function of the expectation. The inclusion of this mean term introduces some major technical difficulties, among them the time inconsistency that leads to the failure of the dynamic programming approach. Recently, there has been increasing interest in the study of this type of stochastic control problem; see for example [3] [4] and [5].
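To make the time inconsistency concrete, a minimal mean-variance criterion can be written as follows (an illustrative sketch; the weight γ₀ and the terminal wealth X^u(T) are our notation, not part of the model below):

```latex
% Mean-variance criterion: the variance term is a quadratic
% function of the expectation of the terminal state X^u(T).
J(u) = \mathbb{E}\big[X^{u}(T)\big] - \gamma_0 \,\mathrm{Var}\big(X^{u}(T)\big),
\qquad
\mathrm{Var}\big(X^{u}(T)\big)
  = \mathbb{E}\big[(X^{u}(T))^{2}\big] - \big(\mathbb{E}[X^{u}(T)]\big)^{2}.
```

Because of the squared expectation (𝔼[X^u(T)])², the criterion is not of the form 𝔼[h(X^u(T))], so the tower property underlying the dynamic programming principle fails.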
On the other hand, since we allow the coefficients (b, σ, γ, g, f and h₂ below) to be stochastic processes, and also because our control must be adapted to the partial information, this problem is not of Markovian type and hence could not be solved by dynamic programming even if the mean term were not present. We instead investigate the maximum principle, and will derive an explicit form for the adjoint processes. The approach we employ is Malliavin calculus, which enables us to express the duality involved via the Malliavin derivative. Our paper is related to the recent papers [6] and [7]. In [6], the authors consider a mean-field type stochastic control problem where the dynamics is governed by a controlled forward SDE with jumps and the information available to the controller is possibly less than the overall information. Malliavin calculus is employed to derive a maximum principle for the optimal control of such a system where the adjoint process is explicitly expressed. [7] presents various versions of the maximum principle for optimal control (not of mean-field type) of forward-backward stochastic differential equations with jumps, together with a Malliavin calculus approach which allows one to handle non-Markovian systems. The motivation of [7] is risk minimization via g-expectation.
This paper can be considered as a continuation of [6] and [7]. We consider a mean-field type stochastic control problem where the dynamics is governed by a forward-backward stochastic differential equation (SDE) driven by Lévy processes and the information available to the controller is possibly less than the overall information. All the system coefficients and the objective performance functional are allowed to be random, possibly non-Markovian. Malliavin calculus will be employed to derive a maximum principle for the optimal control of such a system where the adjoint process is explicitly expressed.
As in the paper [6], we emphasize that our problem should be distinguished from the partial observation control problem, where it is assumed that the controls are based on the noisy observation of the state process. For the latter type of problems, there is a rich literature (see, e.g. [1] [8] [9] [10] [11] [12]).
Note that the methods and results in the partial observation case do not apply to our situation. On the other hand, there are several existing works on the stochastic maximum principle (either completely or partially observed) where the adjoint processes are explicitly expressed (see, e.g. [8] [10] [12] [13]). However, these works all essentially employ the stochastic flow technique, over which Malliavin calculus has an advantage in terms of numerical computations (see, e.g. [14]).

We now state our problem as follows:
Suppose the state process A(t) = A(t, ω), t ∈ [0, T], ω ∈ Ω, of our system is described by the following coupled forward-backward system of SDEs.
Forward system in the controlled process A(t):

dA(t) = b(t, A(t), u(t)) dt + σ(t, A(t), u(t)) dB(t) + ∫_{ℝ₀} γ(t, A(t), u(t), z) Ñ(dt, dz), t ∈ [0, T],    (1.1)
A(0) = a.

Backward system in the unknown processes X(t), Y(t), K(t, z):

dX(t) = −g(t, A(t), X(t), Y(t), K(t, ·), u(t)) dt + Y(t) dB(t) + ∫_{ℝ₀} K(t, z) Ñ(dt, dz), t ∈ [0, T],    (1.2)
X(T) = h₂(A(T)),

where a is a given constant.
Here B(t) = B(t, ω) and η(t) = η(t, ω), (t, ω) ∈ [0, ∞) × Ω, are a 1-dimensional Brownian motion (see [15] Theorem 13.5) and an independent pure jump Lévy martingale, respectively, on a given filtered probability space (Ω, ℱ, {ℱ_t}_{t≥0}, P). Ñ(dt, dz) = N(dt, dz) − ν(dz) dt is the compensated jump measure of η(·), where ν is the Lévy measure of η(·). Let T > 0 be a given constant. For simplicity, we assume that ∫_{ℝ₀} z² ν(dz) < ∞. Suppose in addition that we are given a subfiltration ℰ_t ⊆ ℱ_t, t ∈ [0, T], representing the information available to the controller at time t and satisfying the usual conditions. For example, we could have ℰ_t = ℱ_{(t−δ)⁺} for some fixed δ > 0, meaning that the controller gets a delayed information compared to ℱ_t. Let 𝒜_ℰ denote a given family of controls, contained in the set of ℰ_t-predictable controls u(·) such that the system (1.1)-(1.2) has a unique strong solution. If u ∈ 𝒜_ℰ, then we call u an admissible control. Let U ⊂ ℝ be a given convex set such that u(t) ∈ U for all t ∈ [0, T] and all u ∈ 𝒜_ℰ. Suppose we are given a performance functional of the form

J(u) = E[ ∫₀ᵀ f(t, A(t), E[f₀(A(t))], X(t), E[h₀(X(t))], Y(t), K(t, ·), u(t)) dt + h₁(X(0)) ], u ∈ 𝒜_ℰ,

where E denotes expectation with respect to P, f, f₀ and h₀ are given ℱ_t-predictable processes and h₁ is a given function with E[|h₁(X(0))|] < ∞ for all u ∈ 𝒜_ℰ.
The control problem we consider is the following:

Problem 1.1 (Partial information optimal control). Find Φ_ℰ ∈ ℝ and u* ∈ 𝒜_ℰ such that

Φ_ℰ := sup_{u ∈ 𝒜_ℰ} J(u) = J(u*).

A Brief Review of Malliavin Calculus for Lévy Processes
In this section, we recall the basic definitions and properties of Malliavin calculus for Lévy processes. Let L²(λⁿ) be the space of deterministic real functions f such that

‖f‖²_{L²(λⁿ)} = ∫_{[0,T]ⁿ} f²(t₁, …, t_n) dt₁ ⋯ dt_n < ∞,

where λ denotes the Lebesgue measure on [0, T], and let L²((λ × ν)ⁿ) be the space of deterministic real functions f such that

‖f‖²_{L²((λ×ν)ⁿ)} = ∫_{([0,T]×ℝ₀)ⁿ} f²(t₁, z₁, …, t_n, z_n) dt₁ ν(dz₁) ⋯ dt_n ν(dz_n) < ∞.
The space L²(λ × P) is defined similarly.

Malliavin Calculus for B(⋅)
A natural starting point is the Wiener-Itô chaos expansion theorem (see [18] Theorem 1.1.2), which states that any F ∈ L²(ℱ_T, P) can be written as

F = Σ_{n=0}^∞ I_n(f_n)

for a unique sequence of symmetric deterministic functions f_n ∈ L²(λⁿ), where I_n(f_n) denotes the n-fold iterated Itô integral of f_n with respect to B(⋅). Moreover, we have the isometry

E[F²] = Σ_{n=0}^∞ n! ‖f_n‖²_{L²(λⁿ)}.

For F ∈ 𝔻^{(B)}_{1,2}, i.e. such that its chaos expansion satisfies Σ_{n=1}^∞ n n! ‖f_n‖²_{L²(λⁿ)} < ∞, we define the Malliavin derivative of F at t (with respect to B(⋅)) by

D_t F = Σ_{n=1}^∞ n I_{n−1}(f_n(⋅, t)),

where the notation I_{n−1}(f_n(⋅, t)) means that we apply the (n−1)-fold iterated integral to the first n−1 variables t₁, …, t_{n−1} of f_n and keep the last variable t_n = t as a parameter.
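As a worked illustration of these definitions (a standard textbook example, not taken from the model above), take F = B(T)²:

```latex
% Chaos expansion of F = B(T)^2: the two-fold iterated integral of the
% constant kernel f_2 \equiv 1 is
%   I_2(1) = 2\int_0^T \int_0^{t_2} dB(t_1)\, dB(t_2) = B(T)^2 - T,
% so
F = B(T)^2 = T + I_2(1), \qquad f_0 = T,\quad f_1 = 0,\quad f_2 \equiv 1.
% The definition D_t F = \sum_{n\ge 1} n\, I_{n-1}(f_n(\cdot, t)) then gives
D_t F = 2\, I_1(1) = 2\int_0^T dB(s) = 2B(T), \qquad 0 \le t \le T.
```

This agrees with the Brownian chain rule D_t φ(B(T)) = φ′(B(T)) 1_{[0,T]}(t) for φ(x) = x².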

Malliavin Calculus for Ñ(⋅,⋅)
The construction of a stochastic derivative/Malliavin derivative in the pure jump martingale case follows the same lines as in the Brownian motion case. In this case, the corresponding Wiener-Itô chaos expansion theorem states that any F ∈ L²(ℱ_T, P) (where in this case ℱ_T is the σ-algebra generated by η(s) = ∫₀ˢ ∫_{ℝ₀} z Ñ(dr, dz), 0 ≤ s ≤ T) can be written as

F = Σ_{n=0}^∞ I_n(f_n),

where f_n ∈ L²((λ × ν)ⁿ) and f_n is symmetric with respect to the pairs of variables (t₁, z₁), …, (t_n, z_n). It is important to note that in this case the n-fold iterated integral I_n(f_n) is taken with respect to Ñ(dt, dz) and not with respect to dη(t). The Itô isometry for stochastic integrals with respect to Ñ(dt, dz) then gives the following isometry for the chaos expansion:

E[F²] = Σ_{n=0}^∞ n! ‖f_n‖²_{L²((λ×ν)ⁿ)}.

As in the Brownian motion case, we use the chaos expansion to define the Malliavin derivative. Note that in this case there are two parameters t, z, where t represents time and z ≠ 0 the generic jump size. For F ∈ 𝔻^{(Ñ)}_{1,2}, i.e. such that its chaos expansion satisfies Σ_{n=1}^∞ n n! ‖f_n‖²_{L²((λ×ν)ⁿ)} < ∞, we define the Malliavin derivative of F at (t, z) (with respect to Ñ(⋅,⋅)) by

D_{t,z} F = Σ_{n=1}^∞ n I_{n−1}(f_n(⋅, t, z)).
Here I_{n−1}(f_n(⋅, t, z)) means that we perform the (n−1)-fold iterated integral with respect to the first n−1 pairs of variables (t₁, z₁), …, (t_{n−1}, z_{n−1}) and keep (t_n, z_n) = (t, z) as a parameter. In this case we get the isometry

E[ ∫₀ᵀ ∫_{ℝ₀} (D_{t,z} F)² ν(dz) dt ] = Σ_{n=1}^∞ n n! ‖f_n‖²_{L²((λ×ν)ⁿ)}.
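For a concrete illustration (again a standard example, not part of the model), consider the Lévy martingale itself:

```latex
% F = \eta(T) = \int_0^T \int_{\mathbb{R}_0} z\,\tilde N(dt, dz)
% is the first-order chaos I_1(f_1) with kernel f_1(t, z) = z, so
D_{t,z}\,\eta(T) = f_1(t, z) = z,
\qquad 0 \le t \le T,\ z \in \mathbb{R}_0.
```

The isometry is easily checked here: E[∫₀ᵀ ∫_{ℝ₀} (D_{t,z} η(T))² ν(dz) dt] = T ∫_{ℝ₀} z² ν(dz) = E[η(T)²].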
The properties of D_{t,z}

We shall use the following chain rule. Suppose φ : ℝ → ℝ is continuous and bounded and F ∈ 𝔻^{(Ñ)}_{1,2}. Then φ(F) ∈ 𝔻^{(Ñ)}_{1,2} and

D_{t,z} φ(F) = φ(F + D_{t,z} F) − φ(F).

We let 𝔻_{1,2} denote the set of all random variables which are Malliavin differentiable with respect to both B(⋅) and Ñ(⋅,⋅).
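To see how the jump chain rule differs from the Brownian one, here is an illustrative computation (our choice of φ and F; we ignore the boundedness assumption on φ for the illustration):

```latex
% Take \varphi(x) = x^2 and F = \eta(T), for which D_{t,z} F = z.
% The jump chain rule is a finite increment, not a derivative:
D_{t,z}\,\eta(T)^2 = \big(\eta(T) + z\big)^2 - \eta(T)^2
                   = 2z\,\eta(T) + z^2.
% A naive Brownian-type rule \varphi'(F)\, D_{t,z}F would give only
% 2z\,\eta(T) and miss the second-order term z^2.
```

The extra term z² reflects the fact that a jump of size z changes F by a finite amount rather than infinitesimally.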

The Stochastic Maximum Principle
We now return to Problem 1.1 given in the introduction. We make the following assumptions: the coefficients b, σ, γ, g, f and the functions h₁, h₂ are continuously differentiable with respect to their state and control arguments, and the associated derivative and Malliavin-derivative processes satisfy the integrability conditions (3.1)-(3.3), where we use the simplified notation (3.5). For all u ∈ 𝒜_ℰ, with definitions (3.1), (3.2) and (3.3), the adjoint processes p(t), q(t) and r(t, z) are well defined and expressed explicitly in terms of Malliavin derivatives; the above processes all exist for 0 ≤ t ≤ s ≤ T, z ∈ ℝ₀. Above and in the following, we use the shorthand notation ∂H/∂x, ∂H/∂y and ∇_k H for the partial derivatives of the Hamiltonian with respect to x, y and k. We now define the Hamiltonian for this problem:

H(t, a, x, y, k, u, λ, p, q, r)
= f(t, a, E[f₀(A(t))], x, E[h₀(X(t))], y, k, u) + λ g(t, a, x, y, k, u) + p b(t, a, u) + q σ(t, a, u) + ∫_{ℝ₀} r(t, z) γ(t, a, u, z) ν(dz).
The process ( ) t λ is given by the forward equation

dλ(t) = ∂H/∂x(t, A(t), X(t), Y(t), K(t, ⋅), u(t), λ(t), p(t), q(t), r(t, ⋅)) dt
+ ∂H/∂y(t, A(t), X(t), Y(t), K(t, ⋅), u(t), λ(t), p(t), q(t), r(t, ⋅)) dB(t)
+ ∫_{ℝ₀} ∇_k H(t, A(t), X(t), Y(t), K(t, ⋅), u(t), λ(t), p(t), q(t), r(t, ⋅)) Ñ(dt, dz),    (3.17)
λ(0) = h₁′(X(0)).
We can now formulate our stochastic maximum principle:

Theorem 3.1 (Partial information equivalence principle). Suppose u ∈ 𝒜_ℰ with corresponding solutions A(t), X(t), Y(t), K(t, z) and λ(t) of (1.1), (1.2) and (3.17). Assume that the random variables and processes involved are Malliavin differentiable and satisfy the integrability conditions (3.1)-(3.3). Then the following two statements are equivalent:

(i) u is a critical point of J, in the sense that (d/ds) J(u + sβ)|_{s=0} = 0 for all bounded β ∈ 𝒜_ℰ;

(ii) E[ ∂H/∂u(t, A(t), X(t), Y(t), K(t, ⋅), v, λ(t), p(t), q(t), r(t, ⋅)) | ℰ_t ]_{v = u(t)} = 0 for a.a. (t, ω).
Proof. By the duality formula (2.10),

E[ F ∫₀ᵀ φ(t) dB(t) ] = E[ ∫₀ᵀ φ(t) D_t F dt ],

and similarly, using the Fubini theorem in the last equality, we obtain the corresponding identity for the integrals with respect to Ñ(dt, dz). Then, by the Itô formula and (3.17), the terms coming from the backward system cancel against those coming from λ(t). Hence, we conclude that

(d/ds) J(u + sβ)|_{s=0} = E[ ∫₀ᵀ E[ ∂H/∂u(t) | ℰ_t ] β(t) dt ].

This holds for all β ∈ 𝒜_ℰ. In particular, if we apply this to β of the form β(s) = α 1_{(t, t+h]}(s), where α is a bounded ℰ_t-measurable random variable, we obtain the equivalence of (i) and (ii). □
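As a sanity check on the Brownian duality formula used above (a standard example, not part of the proof):

```latex
% Take F = B(T), so D_t F = 1 for 0 \le t \le T, and \varphi \equiv 1.
% Left-hand side of the duality formula:
\mathbb{E}\Big[F \int_0^T \varphi(t)\, dB(t)\Big] = \mathbb{E}\big[B(T)^2\big] = T.
% Right-hand side:
\mathbb{E}\Big[\int_0^T \varphi(t)\, D_t F\, dt\Big] = \int_0^T 1\, dt = T.
```

Both sides agree, as the duality formula requires.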

Journal of Applied Mathematics and Physics