
Global existence of classical solutions to the relativistic Vlasov-Maxwell system for sufficiently regular initial data is a long-standing open problem. The aim of this project is to present in detail the results of a 1986 paper by Robert Glassey and Walter Strauss, in which a sufficient condition for the global existence of a smooth solution to the relativistic Vlasov-Maxwell system is derived. In the following, the resulting theorem is proved for sufficiently smooth, compactly supported initial data. A small-data global existence result is presented as well.

Definition 1.1. A plasma is one of the four fundamental states of matter: a completely ionized gas.

For this work, we assume the following. The plasma is:

• at high temperature;

• at low density;

• collisionless (i.e. the effect of collisions between particles, as well as of external forces, is negligible).

That the plasma is at high temperature means that

$$T > \frac{e^2}{\bar{\gamma}} \cong e^2 N^{1/3}$$

where $N$ is the total number of charges per unit volume, and $\bar{\gamma} = N^{-1/3}$ is the mean distance between the particles.

Definition 1.2. The distance at which the Coulomb field of a charge in the plasma is screened is called the Debye length, denoted by $a$ and defined by:

$$a^{-2} = \frac{4\pi}{T} \sum_\alpha N_\alpha Z_\alpha^2 e^2$$

If we consider only one type of ion, with $Z = 1$, then $a = \left(\frac{T}{4\pi N e^2}\right)^{1/2}$. From $T > e^2 N^{1/3}$ we have $\frac{e^2 N^{1/3}}{T} < 1$, i.e. $\frac{e^2 N^{1/3}}{4\pi N e^2 a^2} < 1$, or $\frac{\bar{\gamma}^2}{4\pi a^2} < 1$. We can interpret this inequality as saying that the mean distance between particles is small compared with the Debye length.

Generally speaking, a plasma is collisionless when the effective collision frequency $\nu$ is smaller than $\omega$, the frequency of variation of $E$ and $B$. In this case

$$\frac{\partial f}{\partial t} > \text{collision term}.$$

The Vlasov-Maxwell system is a kinetic field model for a collisionless plasma, that is, a gas of charged particles which is sufficiently hot and dilute that collision effects can be ignored. Hence the particles are assumed to interact only through electromagnetic forces. In this work we assume that the plasma is composed of $n$ different species of particles (e.g., ions, electrons) with corresponding masses $m_\alpha$ and charges $e_\alpha$. According to statistical physics, each species is described by a distribution function

$$f_\alpha = f_\alpha(t, x, p) \ge 0$$

which is the probability density of finding a particle of species $\alpha$ at time $t > 0$ at position $x$ with momentum $p$. In the Vlasov-Maxwell system the motion of the particles is governed by Vlasov's equation:

$$\partial_t f_\alpha + v_\alpha \cdot \nabla_x f_\alpha + e_\alpha \left( E + \frac{v_\alpha}{c} \times B \right) \cdot \nabla_p f_\alpha = 0 \quad (1.1)$$

where $v_\alpha$ is the relativistic velocity of a particle of species $\alpha$, $c$ is the speed of light, $E$ and $B$ are the electric and magnetic fields respectively, and $p$ is the momentum. Here

$$v_\alpha = \frac{p}{\sqrt{m_\alpha^2 + |p|^2/c^2}}$$

is the relativistic velocity, where $m_\alpha$ is the mass of a particle of species $\alpha$.

From this we can observe that $|v_\alpha| < c$ (hence the system is relativistic).
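To see the bound explicitly, one can compute the modulus of $v_\alpha$ directly from its definition:

```latex
% Worked check that the relativistic speed stays below c:
|v_\alpha|
  = \frac{|p|}{\sqrt{m_\alpha^2 + |p|^2 / c^2}}
  = \frac{c \, |p|}{\sqrt{m_\alpha^2 c^2 + |p|^2}}
  < \frac{c \, |p|}{\sqrt{|p|^2}}
  = c ,
```

and the inequality is strict for every finite momentum $p$, since $m_\alpha > 0$.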

The electric field $E(t,x)$ and the magnetic field $B(t,x)$ satisfy the following Maxwell equations.

$$\partial_t E = c\, \nabla_x \times B - j; \qquad \nabla_x \cdot E = \rho \quad (1.2)$$

$$\partial_t B = -\nabla_x \times E; \qquad \nabla \cdot B = 0 \quad (1.3)$$

where $\rho$ and $j$ are the densities of charge and current respectively; they are computed by:

$$\rho = 4\pi \int \sum_\alpha e_\alpha f_\alpha \, dp, \quad \text{and} \quad j = 4\pi \int \sum_\alpha e_\alpha f_\alpha v_\alpha \, dp \quad (1.4)$$

The coupled system of Equations (1.1), (1.2), (1.3) and (1.4) is what we call the Vlasov-Maxwell system:

$$\begin{cases} \partial_t f_\alpha + v_\alpha \cdot \nabla_x f_\alpha + e_\alpha \left( E + \frac{v_\alpha}{c} \times B \right) \cdot \nabla_p f_\alpha = 0 \\ \rho = 4\pi \int \sum_\alpha e_\alpha f_\alpha \, dp, \qquad j = 4\pi \int \sum_\alpha e_\alpha f_\alpha v_\alpha \, dp \\ \partial_t E = c\, \nabla \times B - j; \qquad \nabla \cdot E = \rho \\ \partial_t B = -\nabla \times E; \qquad \nabla \cdot B = 0 \end{cases} \quad (1.5)$$

In this system the Vlasov equation governs the motion of the particles, while the interaction of the particles is described by the Maxwell equations. The aim of this work is to derive a sufficient condition for the global existence of a smooth solution to the system (1.5) with initial data $f_{\alpha 0}(x,p)$, $E_0(x)$ and $B_0(x)$, which are assumed to satisfy

$$\nabla_x \cdot E_0 = \rho_0, \qquad \nabla_x \cdot B_0 = 0 \quad \text{and} \quad \int \rho_0 \, dx = 0$$

Throughout this work, partial derivatives with respect to $x_i$ ($i = 1, 2, 3$) will be denoted by $\partial_{x_i}$, while any derivative of order $k$ with respect to $t$, $x$ or $p$ will be denoted by $D^k$; that is, $Df = \partial_t f$ or $\partial_{x_i} f$, $D^2 f = \partial_t^a \partial_{x_i}^b \partial_{x_j}^c f$ with $a + b + c = 2$, and so on, with the convention $D^0 f = f$.

Now let us provide a short review of the classical Cauchy problem for the Vlasov-Maxwell system.

Robert T. Glassey and Walter A. Strauss [

Organization of the project: In chapter one, we state some definitions and terms related to the Vlasov-Maxwell system, present the solutions of inhomogeneous wave equations with initial conditions in $\mathbb{R}^3$, and state and prove Gronwall's lemma. In chapter two, the main theorem is stated; we give representations of the fields and use boundedness arguments to prove the existence and uniqueness of the solution. To prove the theorem, we use an iterative scheme: we construct sequences, show by means of the field representations that these sequences are bounded in $C^1$, and finally show that they are Cauchy sequences in $C^1$. In the last chapter, the main theorem is re-stated with a changed hypothesis (small initial data), to show the reader that there is at least one case in which the sufficient condition of the main theorem holds true. In that chapter, only the theorem is stated and the main steps of its proof are described.

Theorem 1.3. Kirchhoff's Formula [

$$\begin{cases} u_{tt} - \Delta u = 0 & \text{in } \mathbb{R}^n \times (0, \infty) \\ u = g, \; u_t = h & \text{on } \mathbb{R}^n \times \{t = 0\} \end{cases} \quad (1.6)$$

Kirchhoff's formula states that an explicit formula for $u$ in terms of $g$ and $h$ in three dimensions is:

$$u(x,t) = \frac{1}{4\pi t^2} \int_{\partial B(x,t)} \big( t\, h(y) + g(y) + Dg(y) \cdot (y - x) \big) \, dS(y) \qquad (x \in \mathbb{R}^3,\; t > 0)$$

where $\partial B(x,t)$ is the sphere in $\mathbb{R}^3$ centered at $x$ with radius $t > 0$, and the prefactor $\frac{1}{4\pi t^2}$ averages over this sphere.

Consider the initial value problem for the non-homogeneous wave equation

$$\begin{cases} u_{tt} - \Delta u = f & \text{in } \mathbb{R}^n \times (0, \infty) \\ u = 0, \; u_t = 0 & \text{on } \mathbb{R}^n \times \{t = 0\} \end{cases}$$

We define $u = u(x,t;s)$ to be the solution of

$$\begin{cases} u_{tt}(\cdot\,;s) - \Delta u(\cdot\,;s) = 0 & \text{in } \mathbb{R}^n \times (s, \infty) \\ u(\cdot\,;s) = 0, \; u_t(\cdot\,;s) = f(\cdot,s) & \text{on } \mathbb{R}^n \times \{t = s\} \end{cases}$$

Now set

$$u(x,t) = \int_0^t u(x,t;s) \, ds \qquad (x \in \mathbb{R}^n,\; t \ge 0) \quad (1.7)$$

Duhamel's principle asserts that this is a solution of

$$\begin{cases} u_{tt} - \Delta u = f & \text{in } \mathbb{R}^n \times (0, \infty) \\ u = 0, \; u_t = 0 & \text{on } \mathbb{R}^n \times \{t = 0\} \end{cases}$$
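Duhamel's principle can be verified by a short computation, differentiating (1.7) under the integral sign and using the initial conditions of $u(\cdot\,;s)$ at $t = s$:

```latex
% Differentiate u(x,t) = \int_0^t u(x,t;s) \, ds in t, using u(x,t;t) = 0:
u_t(x,t) = u(x,t;t) + \int_0^t u_t(x,t;s) \, ds = \int_0^t u_t(x,t;s) \, ds .
% Differentiate once more, using u_t(x,t;t) = f(x,t):
u_{tt}(x,t) = f(x,t) + \int_0^t u_{tt}(x,t;s) \, ds
            = f(x,t) + \Delta \int_0^t u(x,t;s) \, ds
            = f(x,t) + \Delta u(x,t) ,
% so u_{tt} - \Delta u = f, and u(x,0) = 0, u_t(x,0) = 0 hold by construction.
```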

To find $u(x,t)$ explicitly, let us consider the case $n = 3$. For $n = 3$, Kirchhoff's formula implies

$$u(x,t;s) = \frac{1}{4\pi(t-s)} \int_{\partial B(x,t-s)} f(y,s) \, dS$$

so that

$$u(x,t) = \int_0^t \frac{1}{4\pi(t-s)} \int_{\partial B(x,t-s)} f(y,s) \, dS \, ds = \frac{1}{4\pi} \int_0^t \int_{\partial B(x,t-s)} \frac{f(y,s)}{t-s} \, dS \, ds = \frac{1}{4\pi} \int_0^t \int_{\partial B(x,r)} \frac{f(y,t-r)}{r} \, dS \, dr$$

Therefore

$$u(x,t) = \frac{1}{4\pi} \int_{B(x,t)} \frac{f(y, t - |y-x|)}{|y-x|} \, dy \qquad (x \in \mathbb{R}^3,\; t \ge 0)$$

If the initial data are not zero,

$$u(x,t) = u_0(x,t) + \frac{1}{4\pi} \int_{B(x,t)} \frac{f(y, t - |y-x|)}{|y-x|} \, dy \qquad (x \in \mathbb{R}^3,\; t \ge 0) \quad (1.8)$$

where $u_0(x,t)$ is the $C^2$ solution of the homogeneous equation $u_{tt} - \Delta u = 0$ in $\mathbb{R}^n \times (0, \infty)$.

In estimating some norm of a solution of a partial differential equation, we are often led to a differential inequality for the norm, from which we want to deduce an inequality for the norm itself. Gronwall's inequality allows one to do this. Roughly speaking, it states that a solution of a differential inequality is bounded by the solution of the corresponding differential equality. There are both linear and nonlinear versions of Gronwall's inequality. We state here only the simplest version of the linear inequality, which is the one we are going to use.

Lemma 1.4. Gronwall’s lemma [

If $f : \mathbb{R}_+ \to \mathbb{R}_+$ is continuous and bounded above on each closed interval $[0,T]$ and satisfies

$$f(T) \le a(T) + \int_0^T b(t) f(t) \, dt$$

for an increasing function $a(t)$ and a positive (integrable) function $b(t)$, then

$$f(T) \le a(T) \exp\left( \int_0^T b(t) \, dt \right) \quad (1.9)$$

In particular, if $a(T) = 0$, then

$$f(T) = 0$$

Proof: Consider the function

$$v(t) = e^{-\int_0^t b(s)\,ds} \int_0^t b(s) f(s) \, ds$$

Differentiating with respect to $t$ and applying the hypothesis of the lemma, we have:

$$\begin{aligned} \frac{dv(t)}{dt} &= -b(t)\, e^{-\int_0^t b(s)\,ds} \int_0^t b(s) f(s)\,ds + b(t) f(t)\, e^{-\int_0^t b(s)\,ds} \\ &\le -b(t)\, e^{-\int_0^t b(s)\,ds} \int_0^t b(s) f(s)\,ds + a(t) b(t)\, e^{-\int_0^t b(s)\,ds} + b(t)\, e^{-\int_0^t b(s)\,ds} \int_0^t b(s) f(s)\,ds \\ &= a(t) b(t)\, e^{-\int_0^t b(s)\,ds} \end{aligned}$$

Integrating both sides and using the monotonicity of $a$ gives

$$e^{-\int_0^T b(t)\,dt} \int_0^T b(t) f(t)\,dt = v(T) \le \int_0^T a(t) b(t)\, e^{-\int_0^t b(s)\,ds}\,dt \le a(T) \int_0^T b(t)\, e^{-\int_0^t b(s)\,ds}\,dt = a(T) \left[ 1 - e^{-\int_0^T b(s)\,ds} \right]$$

Then, using the hypothesis and the above bound, we have

$$f(T) \le a(T) + \int_0^T b(t) f(t)\,dt \le a(T) + e^{\int_0^T b(t)\,dt}\, a(T) \left[ 1 - e^{-\int_0^T b(s)\,ds} \right] = a(T) \exp\left( \int_0^T b(t)\,dt \right)$$

Lemma 1.5. Partial Integration in $\mathbb{R}^n$. [

When functions $f(x) \in C_0^1(\mathbb{R}^n)$ and $g(x) \in C^1(\mathbb{R}^n)$ are given, we have

$$\int_{\mathbb{R}^n} g(x) \frac{\partial}{\partial x_i}(f(x)) \, dx = -\int_{\mathbb{R}^n} f(x) \frac{\partial}{\partial x_i}(g(x)) \, dx, \qquad \text{for } i = 1, 2, \cdots, n \quad (1.10)$$

Proof: On account of the property $f(x) \in C_0^1(\mathbb{R}^n)$, we can find a radius $r > 0$ such that $f(x) = 0$, and hence $f(x) g(x) = 0$, for all $x \in \mathbb{R}^n$ with $|x_j| \ge r$ for at least one index $j \in \{1, 2, 3, \cdots, n\}$. Hence the fundamental theorem of calculus yields:

$$\int_{\mathbb{R}^n} \frac{\partial}{\partial x_i}(f(x) g(x)) \, dx = \int_{-r}^{+r} \cdots \int_{-r}^{+r} \left[ \int_{-r}^{+r} \frac{\partial}{\partial x_i}(f(x) g(x)) \, dx_i \right] dx_1 \, dx_2 \cdots dx_{i-1} \, dx_{i+1} \cdots dx_n = 0$$

where $dx_1 \, dx_2 \cdots dx_{i-1} \, dx_{i+1} \cdots dx_n = dS_{x_i}$. Therefore,

$$0 = \int_{\mathbb{R}^n} \frac{\partial}{\partial x_i}(f(x) g(x)) \, dx = \int_{\mathbb{R}^n} g(x) \frac{\partial}{\partial x_i}(f(x)) \, dx + \int_{\mathbb{R}^n} f(x) \frac{\partial}{\partial x_i}(g(x)) \, dx$$

hence the result.

Lemma 1.6. Let $u(t)$ be a continuous function of $t$ and $\|u(t)\| = \sup_{0 \le s \le t} \|u(s)\|_\infty$.

If $\|u(t)\| \le c_0 + c_1 \|u(t)\|^2$, then $\|u(t)\|$ is bounded, provided that either $c_0$ or $c_1$ is sufficiently small.
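The reason behind Lemma 1.6 can be read off from the quadratic inequality itself (a sketch, assuming the smallness condition $4 c_0 c_1 < 1$): the polynomial $c_1 z^2 - z + c_0$ then has two positive roots, and a continuous $\|u(t)\|$ that starts below the smaller root can never cross the forbidden interval between them.

```latex
% Roots of c_1 z^2 - z + c_0 = 0 (boundary of the region allowed by the inequality):
z_\pm = \frac{1 \pm \sqrt{1 - 4 c_0 c_1}}{2 c_1}, \qquad 4 c_0 c_1 < 1 .
% The set \{ z \ge 0 : z \le c_0 + c_1 z^2 \} splits into [0, z_-] \cup [z_+, \infty),
% and values in (z_-, z_+) are forbidden. Note that
z_- = \frac{2 c_0}{1 + \sqrt{1 - 4 c_0 c_1}} \le 2 c_0 .
% So if \| u(t) \| starts in [0, z_-], continuity forces it to remain there,
% giving the bound \| u(t) \| \le z_- \le 2 c_0 for all t.
```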

In this chapter, we are going to establish the existence and uniqueness of global smooth solutions for the system (1.5) under a sufficient condition. To derive the sufficient condition, we consider the case of only one species of particles; at the end we extend the result to the case of a plasma composed of many species. Setting $c = 1$, $e_\alpha = 1$, $m_\alpha = 1$ and dropping the $4\pi$ factor, the system (1.5) reduces to:

$$\begin{cases} \partial_t f + v \cdot \nabla_x f + (E + v \times B) \cdot \nabla_p f = 0 \\ \rho = \int f \, dp, \qquad j = \int f v \, dp \\ \partial_t E = \nabla \times B - j; \qquad \nabla \cdot E = \rho \\ \partial_t B = -\nabla \times E; \qquad \nabla \cdot B = 0 \end{cases} \quad (2.1)$$

We denote the Lorentz force term $E + v \times B$ by $K$. That is,

$$K = E + v \times B$$

Theorem 2.1. [

$$f_\alpha(t, x, p) = 0 \quad \text{for } |p| > \beta(t)$$

Then there exists a unique C 1 solution for all t.

To prove this theorem, we use representations of the fields and their derivatives. The characteristic equations of the system (1.1) are:

$$\dot{x} = v, \qquad \dot{p} = K, \qquad \dot{f} = 0 \quad (2.2)$$

Hence the solution of this system is the pair

$$\big( X(s,t,x,p),\; P(s,t,x,p) \big)$$

such that at $s = t$, $X = x$ and $P = p$. Therefore

$$f(t,x,p) = f_0\big( X(0,t,x,p),\; P(0,t,x,p) \big) \ge 0$$

Since

$$0 \le f_0 \le \max(f_0) < \infty$$

$f$ remains bounded.
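That $f$ is constant along the characteristics, which justifies this representation, is a one-line computation with the chain rule:

```latex
% f is constant along the characteristics of the Vlasov equation:
\frac{d}{ds} f\big( s, X(s), P(s) \big)
  = \partial_t f + \dot{X} \cdot \nabla_x f + \dot{P} \cdot \nabla_p f
  = \partial_t f + v \cdot \nabla_x f + K \cdot \nabla_p f
  = 0 ,
```

where the last equality is exactly the Vlasov equation in (2.1).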

For the next two sections, the reader can consult the material [

Theorem 2.2. Assume that the function $\beta(t)$ is as in Theorem 2.1 above. Let

$$S = \partial_t + \sum_{k=1}^3 v_k \partial_{x_k}$$

Then for $i = 1, 2, 3$, $E_i$ and $B_i$ are represented by:

$$4\pi E_i(t,x) = (E_i)_0(t,x) + E_T^i(t,x) + E_S^i(t,x)$$

$$4\pi B_i(t,x) = (B_i)_0(t,x) + B_T^i(t,x) + B_S^i(t,x)$$

where

$$E_T^i(t,x) = -\iint_{|y-x| \le t} \frac{(w_i + v_i)(1 - |v|^2)}{(1 + v \cdot w)^2} \, f(t - |y-x|, y, p) \, dp \, \frac{dy}{|y-x|^2}$$

$$E_S^i(t,x) = -\iint_{|y-x| \le t} \frac{(w_i + v_i)}{(1 + v \cdot w)} \, (Sf)(t - |y-x|, y, p) \, dp \, \frac{dy}{|y-x|}$$

with

$$w = \frac{y - x}{|y - x|}$$

Replacing $w_i + v_i$ in each expression above by $(w \times v)_i$, we obtain a similar representation of $B_i(t,x)$.

Proof:

By the chain rule,

$$\partial_{y_i}\big( f(t - |y-x|, y, p) \big) = \partial_{y_i} f - w_i \partial_t f$$

Let us denote this operator by $T_i$; that is, $T_i f = \partial_{y_i} f - w_i \partial_t f$.

Here, $T_i$ is the tangential derivative along the surface of a backward characteristic cone.

Now let us express the usual operators $\partial_t$ and $\partial_i$ in terms of $T_i$ and $S$. From $T_i f = \partial_{y_i} f - w_i \partial_t f$ and $S = \partial_t + \sum_{k=1}^3 v_k \partial_{x_k}$, we get

$$\partial_t = \frac{S - v \cdot T}{1 + v \cdot w}$$

and, writing $v \cdot T = v_j T_j$,

$$\partial_i = T_i + \frac{w_i}{1 + v \cdot w}(S - v \cdot T) = \frac{w_i S}{1 + v \cdot w} + T_i - \frac{w_i \, v \cdot T}{1 + v \cdot w} = \frac{w_i S}{1 + v \cdot w} + \left( \delta_{ij} - \frac{w_i v_j}{1 + v \cdot w} \right) T_j \quad (2.3)$$

For the relativistic Vlasov-Maxwell system, the fields satisfy the inhomogeneous wave equation:

$$(\partial_t^2 - \Delta) E_i = -(\nabla \rho + \partial_t j)_i = -(\partial_i \rho + \partial_t j_i)$$

But from

$$\rho = \int f \, dp \qquad \text{and} \qquad j_i = \int f v_i \, dp$$

we have

$$(\partial_t^2 - \Delta) E_i = -(\partial_i \rho + \partial_t j_i) = -\int (\partial_i f + v_i \partial_t f) \, dp \quad (2.4)$$

Using (2.3) above,

$$\begin{aligned} \partial_i + v_i \partial_t &= \frac{w_i S}{1 + v \cdot w} + \left( \delta_{ij} - \frac{w_i v_j}{1 + v \cdot w} \right) T_j + v_i \left( \frac{S - v \cdot T}{1 + v \cdot w} \right) \\ &= \frac{(w_i + v_i) S}{1 + v \cdot w} + \left( \delta_{ij} - \frac{w_i v_j}{1 + v \cdot w} \right) T_j - \frac{v_i v_j}{1 + v \cdot w} T_j \\ &= \frac{(w_i + v_i) S}{1 + v \cdot w} + \left( \delta_{ij} - \frac{(w_i + v_i) v_j}{1 + v \cdot w} \right) T_j \end{aligned} \quad (2.5)$$

Hence, substituting (2.5) into (2.4), we have

$$(\partial_t^2 - \Delta) E_i = -\int \left[ \frac{(w_i + v_i)\, Sf}{1 + v \cdot w} + \left( \delta_{ij} - \frac{(w_i + v_i) v_j}{1 + v \cdot w} \right) T_j f \right] dp \quad (2.6)$$

By applying Equation (1.8) from chapter one, we have

$$E_i(t,x) = (E_i)_0(t,x) - \frac{1}{4\pi} \int_{|y-x| \le t} \int \frac{(w_i + v_i)\, Sf}{1 + v \cdot w}(t - |y-x|, y, p) \, dp \, \frac{dy}{|y-x|} - \frac{1}{4\pi} \int_{|y-x| \le t} \int \left( \delta_{ij} - \frac{(w_i + v_i) v_j}{1 + v \cdot w} \right) (T_j f)(t - |x-y|, y, p) \, dp \, \frac{dy}{|y-x|}$$

This implies

$$4\pi E_i(t,x) = (E_i)_0(t,x) - \int_{|y-x| \le t} \int \frac{(w_i + v_i)\, Sf}{1 + v \cdot w}(t - |y-x|, y, p) \, dp \, \frac{dy}{|y-x|} - \int_{|y-x| \le t} \int \left( \delta_{ij} - \frac{(w_i + v_i) v_j}{1 + v \cdot w} \right) (T_j f)(t - |x-y|, y, p) \, dp \, \frac{dy}{|y-x|} \quad (2.7)$$

where $(E_i)_0(t,x)$ is the solution of the homogeneous wave equation.

From this we can easily see that the second term is $E_S^i$. Let

$$a_j = \delta_{ij} - \frac{(w_i + v_i) v_j}{1 + v \cdot w} \qquad \text{and} \qquad \gamma = |y - x|$$

Hence, we can re-write the last integral as:

$$-\int_{|y-x| \le t} \int \frac{a_j}{\gamma} \frac{\partial}{\partial y_j} \big[ f(t - |x-y|, y, p) \big] \, dy \, dp$$

Now let us integrate the last term by parts in $y$. Applying Lemma 1.5 of chapter one (integration by parts), this expression reduces to

$$-\int_{|y-x| = t} \int \frac{w_j a_j}{\gamma} f(0, y, p) \, dS_y \, dp + \int_{|y-x| \le t} \int f \frac{\partial}{\partial y_j}\left( \frac{a_j}{\gamma} \right) dy \, dp$$

where $dS_y = dy_1 \, dy_2$ if $j = 3$, $dy_2 \, dy_3$ if $j = 1$, or $dy_1 \, dy_3$ if $j = 2$.

But

$$-\int_{|y-x| = t} \int \frac{w_j a_j}{\gamma} f(0, y, p) \, dS_y \, dp$$

is part of $(E_i)_0(t,x)$; hence the above integral reduces to:

$$\int_{|y-x| \le t} \int f \frac{\partial}{\partial y_j}\left( \frac{a_j}{\gamma} \right) dy \, dp \quad (2.8)$$

But

$$\frac{\partial}{\partial y_j}\left( \frac{a_j}{\gamma} \right) = \frac{(w_i + v_i)(|v|^2 - 1)}{\gamma^2 (1 + v \cdot w)^2} \quad (2.9)$$

see the computation of this expression in the appendix of [

$$\int_{|y-x| \le t} \int \frac{(w_i + v_i)(|v|^2 - 1)}{(1 + v \cdot w)^2} f(t - |x-y|, y, p) \, dp \, \frac{dy}{|y-x|^2} \quad (2.10)$$

Therefore, substituting (2.10) and the $E_S^i$ term into (2.7),

$$4\pi E_i(t,x) = (E_i)_0(t,x) + E_T^i(t,x) + E_S^i(t,x)$$

Similarly, using the inhomogeneous wave equation for the field $B$,

$$(\partial_t^2 - \Delta) B_1 = \int \big( \partial_2 (v_3 f) - \partial_3 (v_2 f) \big) \, dp$$

and following the same steps, we have

$$4\pi B_i(t,x) = (B_i)_0(t,x) + B_T^i(t,x) + B_S^i(t,x)$$

This proves Theorem 2.2.

Proof of uniqueness in Theorem 2.1.

To do this, let $(f^{(1)}, E^{(1)}, B^{(1)})$ and $(f^{(2)}, E^{(2)}, B^{(2)})$ be two classical solutions of (2.1) with the same Cauchy data. Define

$$f = f^{(1)} - f^{(2)}, \quad E = E^{(1)} - E^{(2)}, \quad B = B^{(1)} - B^{(2)}, \quad K = K^{(1)} - K^{(2)}, \qquad \text{where } K^{(i)} = E^{(i)} + v \times B^{(i)}$$

From the Vlasov equations satisfied by $f^{(1)}$ and $f^{(2)}$,

$$\partial_t f^{(i)} + v \cdot \nabla_x f^{(i)} + K^{(i)} \cdot \nabla_p f^{(i)} = 0$$

we have

$$(\partial_t + v \cdot \nabla_x) f = Sf = \nabla_p \cdot \big[ -K^{(1)} f^{(1)} + K^{(2)} f^{(2)} \big] = \nabla_p \cdot \big[ -K f^{(1)} - K^{(2)} f \big]$$

Using Theorem 2.2 above, we get

$$4\pi E(t,x) = E_T(t,x) + E_S(t,x) \quad (2.11)$$

where

$$E_T(t,x) = -\int_{|y-x| \le t} \int \frac{(w + v)(1 - |v|^2)}{(1 + v \cdot w)^2} \, f(t - |y-x|, y, p) \, dp \, \frac{dy}{|y-x|^2}$$

$$E_S(t,x) = -\iint_{|y-x| \le t} \nabla_p \left[ \frac{w + v}{1 + v \cdot w} \right] \cdot \big( -K f^{(1)} - K^{(2)} f \big) \, dp \, \frac{dy}{|y-x|}$$

Here, in the $E_S$ term, using the fact that $Sf$ is a pure $p$-divergence, we have integrated by parts in $p$. Similarly we can represent the field $B$ as

$$4\pi B(t,x) = B_T(t,x) + B_S(t,x) \quad (2.12)$$

Since $f$ has compact support in $p$, the expression $1 + v \cdot w$ is bounded away from zero. Since the fields are bounded (by hypothesis), adding Equations (2.11) and (2.12), estimating using the support property for $t \le T$, and using Equation (1.8) of chapter one, we have

$$|E(t)|_0 + |B(t)|_0 \le C_T \int_0^t \Big( |f(\xi)|_0 + \big[ |E^{(2)}(\xi)|_0 + |B^{(2)}(\xi)|_0 \big] |f(\xi)|_0 + \big( |E(\xi)|_0 + |B(\xi)|_0 \big) |f^{(1)}(\xi)|_0 \Big) \, d\xi$$

where $|\cdot|_0$ denotes the maximum norm.

Now the boundedness of $E^{(2)}$, $B^{(2)}$ and $f^{(1)}$ implies that

$$|E(t)|_0 + |B(t)|_0 \le C_T \int_0^t \big( |f(\xi)|_0 + |E(\xi)|_0 + |B(\xi)|_0 \big) \, d\xi \quad (2.13)$$

Again, from

$$(\partial_t + v \cdot \nabla_x) f = \nabla_p \cdot \big[ -K f^{(1)} - K^{(2)} f \big],$$

i.e.

$$\partial_t f + v \cdot \nabla_x f + K^{(1)} \cdot \nabla_p f = -K \cdot \nabla_p f^{(2)},$$

the characteristics of this equation are the solutions of:

$$\dot{x} = v, \qquad \dot{p} = K^{(1)}, \qquad \dot{f} = -K \cdot \nabla_p f^{(2)}$$

Thus, from

$$\dot{f} = -K \cdot \nabla_p f^{(2)},$$

writing $f$ as a line integral along such a characteristic curve, we have

$$|f(t)|_0 \le C \int_0^t |K \cdot \nabla_p f^{(2)}(\xi)|_0 \, d\xi$$

Since $\nabla_p f^{(2)}$ is bounded, we can write this as

$$|f(t)|_0 \le C_T \int_0^t |K(\xi)|_0 \, d\xi \le C_T \int_0^t \big( |E(\xi)|_0 + |B(\xi)|_0 \big) \, d\xi \quad (2.14)$$

Now adding (2.13) and (2.14), we have

$$|E(t)|_0 + |B(t)|_0 + |f(t)|_0 \le C_T \int_0^t \big( |E(\xi)|_0 + |B(\xi)|_0 + |f(\xi)|_0 \big) \, d\xi$$

Applying Gronwall's lemma (with $a \equiv 0$), we have $E = B = f = 0$; hence the solution is unique. This proves the uniqueness part of Theorem 2.1.

Theorem 2.3. Assume that $\beta(t)$ exists as in the hypothesis of Theorem 2.1. Then the derivatives of the fields can be represented as:

$$\partial_k E_i = (\partial_k E_i)_0 + \iint_{|y-x| \le t} a(w,v)\, f \, dp \, \frac{dy}{|y-x|^3} + \int_{|w|=1} \int d(w,v)\, f(t,x,p) \, dw \, dp + \iint_{|y-x| \le t} b(w,v)\, (Sf) \, dp \, \frac{dy}{|y-x|^2} + \iint_{|y-x| \le t} c(w,v)\, S^2 f \, dp \, \frac{dy}{|y-x|}$$

Note that $f$, $Sf$, $S^2 f$ without explicit arguments are evaluated at $(t - |x-y|, y, p)$, and $|y-x| = \gamma$. Apart from the factor $1 + v \cdot w$, the functions $a(w,v)$, $b(w,v)$, $c(w,v)$, $d(w,v)$ are $C^\infty$. Moreover,

$$\int_{|w|=1} a(w,v) \, dw = 0$$

In a similar way, we can represent $\partial_k B_i$.

Proof:

Applying $\frac{\partial}{\partial x_k}$ to the field representation in Theorem 2.2, we have

$$\begin{aligned} 4\pi \partial_k E_i &= (\partial_k E_i)_0 - \iint_{|y-x| \le t} \frac{(w_i + v_i)(1 - |v|^2)}{(1 + v \cdot w)^2} \, \partial_k f(t - |y-x|, y, p) \, dp \, \frac{dy}{|y-x|^2} - \iint_{|y-x| \le t} \frac{(w_i + v_i)}{(1 + v \cdot w)} \, \partial_k (Sf)(t - |y-x|, y, p) \, dp \, \frac{dy}{|y-x|} \\ &= (\partial_k E_i)_0 - \iint_{|y-x| \le t} \frac{(w_i + v_i)(1 - |v|^2)}{(1 + v \cdot w)^2} \left[ \left( \delta_{jk} - \frac{w_k v_j}{1 + v \cdot w} \right) T_j + \frac{w_k S}{1 + v \cdot w} \right] f \, dp \, \frac{dy}{|y-x|^2} \\ &\quad - \iint_{|y-x| \le t} \frac{(w_i + v_i)}{(1 + v \cdot w)} \left[ \left( \delta_{jk} - \frac{w_k v_j}{1 + v \cdot w} \right) T_j + \frac{w_k S}{1 + v \cdot w} \right] (Sf) \, dp \, \frac{dy}{|y-x|} \end{aligned}$$

Now, using the fact that $T_j$ is a perfect $y_j$ derivative, integration by parts in $y$ shows that the last integral is equal to:

$$\iint_{|y-x| \le t} \partial_j \left[ \frac{(w_i + v_i)}{(1 + v \cdot w)} \left( \delta_{jk} - \frac{w_k v_j}{1 + v \cdot w} \right) \frac{1}{\gamma} \right] (Sf)(t - |y-x|, y, p) \, dp \, dy - \iint_{|y-x| \le t} \frac{(w_i + v_i) w_k}{(1 + v \cdot w)^2} \frac{1}{\gamma} \, S^2 f(t - |y-x|, y, p) \, dp \, dy - \frac{1}{t} \int_{\gamma = t} \int \frac{w_k (w_i + v_i)}{(1 + v \cdot w)^2} \, Sf(0, y, p) \, dp \, dS_y$$

Here the last expression is part of $(\partial_k E_i)_0$. Hence $c(w,v)$ is the term multiplying $S^2 f$, and $b(w,v)$ is the term multiplying $\frac{Sf}{\gamma^2}$, which can be read off easily. Now let us determine $d(w,v)$ and $a(w,v)$. The most singular term is the $T_j$ term appearing in the first expression; it is:

$$E_{TT} = -\lim_{\epsilon \to 0} \iint_{\epsilon \le \gamma \le t} \frac{(w_i + v_i)(1 - |v|^2)}{\gamma^2 (1 + v \cdot w)^2} \left( \delta_{jk} - \frac{w_k v_j}{1 + v \cdot w} \right) T_j f(t - |y-x|, y, p) \, dp \, dy$$

Simplifying this we get:

$$\begin{aligned} E_{TT} &= \iint_{\gamma = t} \frac{w_j (w_i + v_i)(1 - |v|^2)}{t^2 (1 + v \cdot w)^2} \left( \delta_{jk} - \frac{w_k v_j}{1 + v \cdot w} \right) f(0, y, p) \, dp \, dS_y \\ &\quad - \iint_{\gamma = \epsilon} \frac{w_j (w_i + v_i)(1 - |v|^2)}{\epsilon^2 (1 + v \cdot w)^2} \left( \delta_{jk} - \frac{w_k v_j}{1 + v \cdot w} \right) f(t - \epsilon, y, p) \, dp \, dS_y \\ &\quad - \iint_{\epsilon < |y-x| < t} \partial_j \left[ \frac{(w_i + v_i)(1 - |v|^2)}{\gamma^2 (1 + v \cdot w)^2} \left( \delta_{jk} - \frac{w_k v_j}{1 + v \cdot w} \right) \right] f(t - |y-x|, y, p) \, dp \, dy \end{aligned}$$

Since the first term depends only on the initial data, it is part of $(\partial_k E_i)_0$. The second term simplifies (as $\epsilon \to 0$) to:

$$-\int_{|w|=1} \int \frac{w_j (w_i + v_i)(1 - |v|^2)}{(1 + v \cdot w)^2} \left( \delta_{jk} - \frac{w_k v_j}{1 + v \cdot w} \right) f(t, x, p) \, dw \, dp$$

Hence $d(w,v)$ is determined. Now, from the last term of $E_{TT}$,

$$\partial_j \left[ \frac{(w_i + v_i)(1 - |v|^2)}{\gamma^2 (1 + v \cdot w)^2} \left( \delta_{jk} - \frac{w_k v_j}{1 + v \cdot w} \right) \right] = \frac{-3(w_i + v_i)\big[ w_k (1 - |v|^2) + v_k (1 + v \cdot w) \big] + (1 + v \cdot w)^2 \delta_{ik}}{\gamma^3 (1 + v \cdot w)^4 (1 + |p|^2)}$$

(see an elementary computation of this in the appendix of [

To verify that

$$\int_{|w|=1} a(w,v) \, dw = 0,$$

write $a(w,v)$ as

$$a(w,v) = \frac{-3(w_i + v_i) w_k}{(1 + |p|^2)^2 (1 + v \cdot w)^4} - \frac{3 v_k (w_i + v_i)}{\sqrt{1 + |p|^2}\,(1 + v \cdot w)^3} + \frac{\delta_{ik}}{\big[ \sqrt{1 + |p|^2} + p \cdot w \big]^2}$$

because

$$\big( \sqrt{1 + |p|^2} + p \cdot w \big)^2 = (1 + |p|^2) \left( 1 + \frac{p \cdot w}{\sqrt{1 + |p|^2}} \right)^2 = (1 + |p|^2)(1 + v \cdot w)^2$$

Hence

$$\int_{|w|=1} a(w,v) \, dw = \int_{|w|=1} \frac{-3(w_i + v_i) w_k}{(1 + |p|^2)^2 (1 + v \cdot w)^4} \, dw - \int_{|w|=1} \frac{3 v_k (w_i + v_i)}{\sqrt{1 + |p|^2}\,(1 + v \cdot w)^3} \, dw + \int_{|w|=1} \frac{\delta_{ik}}{\big[ \sqrt{1 + |p|^2} + p \cdot w \big]^2} \, dw$$

First we compute the third term:

$$\int_{|w|=1} \frac{\delta_{ik}}{\big[ \sqrt{1 + |p|^2} + p \cdot w \big]^2} \, dw = \frac{\delta_{ik}}{1 + |p|^2} \int_{|w|=1} \frac{dw}{(1 + v \cdot w)^2} = \frac{\delta_{ik}}{1 + |p|^2} \int_0^\pi \int_0^{2\pi} \frac{\sin\phi}{(1 + |v| \cos\phi)^2} \, d\theta \, d\phi = \frac{2\pi \delta_{ik}}{1 + |p|^2} \int_0^\pi \frac{\sin\phi \, d\phi}{(1 + |v| \cos\phi)^2} = 4\pi \delta_{ik} \quad (2.15)$$
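The remaining $\phi$-integral in (2.15) is elementary; substituting $u = \cos\phi$ and using $1 - |v|^2 = (1 + |p|^2)^{-1}$:

```latex
% Elementary computation of the angular integral used in (2.15):
\int_0^\pi \frac{\sin\phi \, d\phi}{(1 + |v| \cos\phi)^2}
  = \int_{-1}^{1} \frac{du}{(1 + |v| u)^2}
  = \frac{1}{|v|} \left[ \frac{1}{1 - |v|} - \frac{1}{1 + |v|} \right]
  = \frac{2}{1 - |v|^2}
  = 2 \, (1 + |p|^2) ,
```

so the third term equals $\frac{2\pi \delta_{ik}}{1 + |p|^2} \cdot 2(1 + |p|^2) = 4\pi \delta_{ik}$, as claimed.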

The integrand of the first term simplifies as:

$$\frac{-3 w_k (w_i + v_i)}{(1 + |p|^2)^2 (1 + v \cdot w)^4} = \frac{-3 w_k (w_i + v_i)}{\big[ \sqrt{1 + |p|^2} + p \cdot w \big]^4} = \frac{\partial}{\partial p_i} \left[ \frac{(w_k + v_k) - v_k}{\big( \sqrt{1 + |p|^2} + p \cdot w \big)^3} \right] = -\frac{\partial}{\partial p_i} \left[ \frac{\partial}{\partial p_k} \left( \frac{1}{2 \big( \sqrt{1 + |p|^2} + p \cdot w \big)^2} \right) + \frac{v_k}{\big( \sqrt{1 + |p|^2} + p \cdot w \big)^3} \right]$$

Hence,

$$\int_{|w|=1} \frac{-3(w_i + v_i) w_k}{(1 + |p|^2)^2 (1 + v \cdot w)^4} \, dw = -\int_{|w|=1} \frac{\partial}{\partial p_i} \left[ \frac{\partial}{\partial p_k} \left( \frac{1}{2 \big( \sqrt{1 + |p|^2} + p \cdot w \big)^2} \right) + \frac{v_k}{\big( \sqrt{1 + |p|^2} + p \cdot w \big)^3} \right] dw = -\frac{1}{2} \frac{\partial^2}{\partial p_i \partial p_k} \int_{|w|=1} \frac{dw}{\big( \sqrt{1 + |p|^2} + p \cdot w \big)^2} - \frac{\partial}{\partial p_i} \int_{|w|=1} \frac{v_k \, dw}{\big( \sqrt{1 + |p|^2} + p \cdot w \big)^3} = -\frac{1}{2} \frac{\partial^2}{\partial p_i \partial p_k} (4\pi) - \frac{\partial}{\partial p_i} (4\pi p_k) = -4\pi \delta_{ik} \quad (2.16)$$

The second integral becomes:

$$\int_{|w|=1} \frac{-3 v_k (w_i + v_i)}{\sqrt{1 + |p|^2}\,(1 + v \cdot w)^3} \, dw = \sqrt{1 + |p|^2} \int_{|w|=1} \frac{-3 (w_i + v_i) p_k}{\big( \sqrt{1 + |p|^2} + p \cdot w \big)^3} \, dw = \sqrt{1 + |p|^2}\; p_k \int_{|w|=1} \frac{3}{2} \frac{\partial}{\partial p_i} \left( \frac{1}{\big( \sqrt{1 + |p|^2} + p \cdot w \big)^2} \right) dw = \sqrt{1 + |p|^2}\; \frac{3}{2} p_k \frac{\partial}{\partial p_i} \int_{|w|=1} \frac{dw}{\big( \sqrt{1 + |p|^2} + p \cdot w \big)^2} = \sqrt{1 + |p|^2}\; p_k\, \frac{3}{2} \frac{\partial}{\partial p_i} (4\pi) = 0 \quad (2.17)$$

Adding Equations (2.15), (2.16) and (2.17), we get

$$\int_{|w|=1} a(w,v) \, dw = 0$$

The same result holds for the magnetic field; see [

To estimate the particle density, take

$$f(0, x, p) = f_0(x, p) \in C_0^2$$

with $\mathrm{supp}(f_0) \subset \{ |x| \le k,\; |p| \le k \}$. The characteristics of the Vlasov equation are solutions of the ODEs:

$$\dot{x} = v, \qquad \dot{p} = K, \qquad \dot{f} = 0$$

Hence

$$f(t, x, p) = f_0\big( X(0,t,x,p),\; P(0,t,x,p) \big)$$

and since $0 \le f_0 \le \max(f_0) < \infty$, also $0 \le f \le \max(f_0)$; that is, $f$ is non-negative and bounded.

Now we claim that $f(t,x,p) = 0$ if $|x| > t + k$. From the ODEs above,

$$|X(0,t,x,p) - x| = \left| \int_0^t V(s,t,x,p) \, ds \right| \le t$$

Hence, if $|x| > k + t$,

$$|x| - |X(0,t,x,p)| \le |X(0,t,x,p) - x| \le t, \qquad \text{which implies} \qquad |X(0,t,x,p)| \ge |x| - t > k + t - t = k$$

Hence $\rho(t,x) = j(t,x) = 0$ for $|x| > t + k$, since

$$\rho = \int f \, dp, \qquad j = \int f v \, dp \qquad \text{and} \qquad f_0(x, p) = 0 \text{ for } |x| > k$$

Now let us estimate the derivatives of the particle density. Let $D = \frac{\partial}{\partial x_j}$ for any $j$. Then we have

$$(\partial_t + v \cdot \nabla_x + K \cdot \nabla_p)(Df) = -DK \cdot \nabla_p f \quad (2.18)$$

From $\frac{d}{ds}(Df) = -DK \cdot \nabla_p f$ along the characteristics, we get

$$\frac{d}{ds} \Big[ Df\big( s, X(s,t,x,p), P(s,t,x,p) \big) \Big] = -DK \cdot \nabla_p f\big( s, X(s,t,x,p), P(s,t,x,p) \big)$$

Integrating both sides, we have

$$|Df(t,x,p)| \le \big| Df\big( 0, X(0,t,x,p), P(0,t,x,p) \big) \big| + \int_0^t \big| DK \cdot \nabla_p f\big( s, X(s,t,x,p), P(s,t,x,p) \big) \big| \, ds \quad (2.19)$$

Now let us define the following norms [

$$|B(t)|_0 = \sup_x |B(t,x)|, \qquad \|B\|_0 = \sup_{0 \le t \le T} |B(t)|_0,$$

$$|B(t)|_1 = \sum_{k=1}^3 \sup_x |\partial_{x_k} B(t,x)| + \sup_x |\partial_t B(t,x)|,$$

$$\|B\|_1 = \sup_{0 \le t \le T} |B(t)|_1, \qquad |f(t)|_0 = \sup_{x,p} f(t,x,p),$$

$$|f(t)|_1 = \sup_x \sup_p \left( |\partial_t f| + \sum_{k=1}^3 \big( |\partial_{x_k} f| + |\partial_{p_k} f| \big) \right),$$

$$\|f\|_0 = \sup_{0 \le t \le T} |f(t)|_0 \qquad \text{and} \qquad \|f\|_1 = \sup_{0 \le t \le T} |f(t)|_1$$

Analogous norms are defined for the electric field $E$.

Now, by applying the norms above, the expression (2.19) can be reduced to

$$|Df(t)|_0 \le C_0 + C \int_0^t \big( |E(\xi)|_1 + |B(\xi)|_1 \big) |f(\xi)|_1 \, d\xi \quad (2.20)$$

Taking $D = \frac{\partial}{\partial p_j}$, we obtain a similar bound, since

$$[\partial_t + v \cdot \nabla_x + K \cdot \nabla_p](Df) = -\frac{\partial v}{\partial p_j} \cdot \nabla_x f - \left( \frac{\partial v}{\partial p_j} \times B \right) \cdot \nabla_p f$$

Therefore, again by applying the norm properties above, we have

$$|f(t)|_1 \le C_0 + C_T \int_0^t \big[ 1 + |E(\xi)|_1 + |B(\xi)|_1 \big] |f(\xi)|_1 \, d\xi \qquad \text{for } t \le T \quad (2.21)$$

We have already proved in Theorem 2.2 that the fields can be represented as:

$$E = (\text{data term}) + E_T + E_S$$

$$B = (\text{data term}) + B_T + B_S$$

By hypothesis we have $|p| \le \beta(t) \le \beta_T$, say, on the support of $f$ for $0 \le t \le T$. Now

$$|v \cdot w| \le \frac{\beta_T}{\sqrt{\beta_T^2 + 1}} < 1. \qquad \text{Therefore} \quad \left| \frac{w_i + v_i}{(1 + |p|^2)(1 + v \cdot w)^2} \right| \le C_T$$

Hence

$$|E_T(t,x)| \le C_T \int_{|y-x| \le t} \int_{|p| \le \beta_T} f(t - |x-y|, y, p) \, dp \, \frac{dy}{\gamma^2}$$

Since

$$\int_{|p| \le \beta_T} f \, dp \le \|f\|_0 \int_{|p| \le \beta_T} dp \le C \|f\|_0 \beta_T^3 \qquad \text{and} \qquad \int_{|y-x| < t} \frac{dy}{|y-x|^2} = Ct,$$

we have that

$$|E_T(t,x)| \le C_T \beta_T^3 \, t \, \|f\|_0 \quad (2.22)$$

Similarly, for $E_S$, we use $Sf = -K \cdot \nabla_p f = -\nabla_p \cdot (Kf)$. Integrating by parts in $p$, we get:

$$E_S = -\iint \frac{w_i + v_i}{1 + v \cdot w} (Sf) \, dp \, \frac{dy}{|y-x|} = \int_{|y-x| \le t} \int \nabla_p \left[ \frac{w_i + v_i}{1 + v \cdot w} \right] \cdot (E + v \times B) f \, dp \, \frac{dy}{|y-x|} \quad (2.23)$$

By the support hypothesis, the $p$-gradient factor is bounded (say by $C_T$). Hence,

$$|E_S(t,x)| \le C_T \int_{\gamma \le t} \int_{|p| \le \beta_T} \big( |E(t)|_0 + |B(t)|_0 \big) |f(t)|_0 \, dp \, \frac{dy}{\gamma}$$

Therefore,

$$|E(t)|_0 \le C_T + C_T \int_0^t \big( |E(\xi)|_0 + |B(\xi)|_0 \big) \, d\xi \quad (2.24)$$

A similar estimate holds for $B$ (see [

$$|B(t)|_0 \le C_T + C_T \int_0^t \big( |E(\xi)|_0 + |B(\xi)|_0 \big) \, d\xi \quad (2.25)$$

Adding Equations (2.24) and (2.25), we have:

$$|E(t)|_0 + |B(t)|_0 \le C_T + C_T \int_0^t \big( |E(\xi)|_0 + |B(\xi)|_0 \big) \, d\xi$$

Applying Gronwall's lemma, we obtain

$$|E(t)|_0 + |B(t)|_0 \le C_T \quad (2.26)$$

Theorem 2.4. [

$$\log^* s = \begin{cases} s, & s \le 1 \\ 1 + \ln s, & s \ge 1 \end{cases}$$

then

$$|E(t)|_1 + |B(t)|_1 \le C_T \left[ 1 + \log^* \left( \sup_{0 \le \xi \le t} |f(\xi)|_1 \right) \right] \qquad \text{for } t \le T \quad (2.27)$$

Proof: We can express $\partial_k E_i$ in the form of Theorem 2.3 above as:

$$\partial_k E_i = (\text{data term}) + \partial_k E_{TT}^i - \partial_k E_{TS}^i + \partial_k E_{ST}^i - \partial_k E_{SS}^i + O(1)$$

where

$$\partial_k E_{TT}^i = \iint_{|y-x| \le t} \partial_j \left[ \frac{(w_i + v_i)}{(1 + |p|^2)(1 + v \cdot w)^2} \left( \delta_{jk} - \frac{w_k v_j}{1 + v \cdot w} \right) \frac{1}{\gamma^2} \right] f \, dp \, dy$$

$$\partial_k E_{TS}^i = \iint_{|y-x| \le t} \frac{(w_i + v_i) w_k}{(1 + |p|^2)(1 + v \cdot w)^3 \, \gamma^2} \, Sf \, dp \, dy$$

$$\partial_k E_{ST}^i = \iint_{|y-x| \le t} \partial_j \left[ \frac{(w_i + v_i)}{1 + v \cdot w} \left( \delta_{jk} - \frac{w_k v_j}{1 + v \cdot w} \right) \frac{1}{\gamma} \right] Sf \, dp \, dy$$

$$\partial_k E_{SS}^i = \iint_{|y-x| \le t} \frac{(w_i + v_i) w_k}{(1 + v \cdot w)^2 \, \gamma} \, S^2 f \, dp \, dy$$

Here the first term (the data term), which is $(\partial_k E_i)_0$, is bounded, since it depends only on the $C^2$ initial data and their derivatives. For the second term, $\partial_k E_{TT}^i$, a direct bound would leave a singularity $|x - y|^{-3}$ which is logarithmically divergent. Thus we must use the fact that the kernel has zero average.

For simplicity write $y = x + (t - \xi) w$.

The most singular term is the $E_{TT}$ term. Hence

$$\partial_k E_{TT}^i = \int_0^t \frac{1}{t - \xi} \iint a(w,v)\, f(\xi, x + w(t - \xi), p) \, dp \, dw \, d\xi$$

Here $w$ is integrated over the unit sphere $S^2$ and $p$ over $\mathbb{R}^3$. We break the $\xi$ integral into two pieces, over $[0, t - c]$ and over $[t - c, t]$. Since the support of $f$ is bounded in $p$, the kernel $a(w,v)$ is bounded for $|p| \le \beta_T$. Hence, for any $c$ with $0 < c < t$,

$$\int_0^{t-c} \frac{1}{t - \xi} \iint a(w,v)\, f(\xi, x + w(t - \xi), p) \, dp \, dw \, d\xi \le C_T \|f\|_0 \int_0^{t-c} \frac{d\xi}{t - \xi} \le C_T \ln\left( \frac{t}{c} \right) \quad (2.28)$$

Now the integral over $[t - c, t]$ is equal to

$$\int_{t-c}^t \iint a(w,v) \big[ f(\xi, x + w(t - \xi), p) - f(\xi, x, p) \big] \, dw \, dp \, \frac{d\xi}{t - \xi},$$

because $\int_{|w|=1} a(w,v) \, dw = 0$.

Therefore

$$\left| \int_{t-c}^t \iint a(w,v) \big[ f(\xi, x + w(t - \xi), p) - f(\xi, x, p) \big] \, dw \, dp \, \frac{d\xi}{t - \xi} \right| \le C \|\nabla_x f\|_0 \int_{t-c}^t \int_{|w|=1} \int_{|p| \le \beta_T} dp \, dw \, d\xi \le C_T \, c \, \|\nabla_x f\|_0 \quad (2.29)$$

Hence, from (2.28) and (2.29), we have:

$$|\partial_k E_{TT}^i(t,x)| \le C_T \left[ \ln\left( \frac{t}{c} \right) + c \|\nabla_x f\|_0 \right]$$

Taking $\frac{1}{c} = \|\nabla_x f\|_0$, we get

$$|\partial_k E_{TT}^i(t,x)| \le C_T \big[ 1 + \log^*(\|\nabla_x f\|_0) \big] \quad (2.30)$$

For the $Sf$ term, let us integrate by parts in $p$:

$$\int_{\gamma \le t} \int b(w,v)\, Sf \, dp \, \frac{dy}{\gamma^2} = -\int_{\gamma \le t} \int b(w,v)\, \nabla_p \cdot (Kf) \, dp \, \frac{dy}{\gamma^2} = \int_{\gamma \le t} \int \nabla_p \big( b(w,v) \big) \cdot K f \, dp \, \frac{dy}{\gamma^2} \le C \int_0^t \big( |E(\xi)|_0 + |B(\xi)|_0 \big) |f(\xi)|_0 \, d\xi \le C_T$$

That is,

$$\int_{\gamma \le t} \int b(w,v)\, Sf \, dp \, \frac{dy}{\gamma^2} \le C_T \quad (2.31)$$

For the $S^2$ term, we write

$$S^2 f = -S[\nabla_p \cdot (Kf)] = -(\partial_t + v \cdot \nabla_x)\, \partial_{p_k}(f K_k) = -\partial_{p_k} \big[ (\partial_t + v \cdot \nabla_x)(f K_k) \big] + (\partial_{p_k} v_j)\, \partial_{x_j}(f K_k) = -\nabla_p \cdot S(Kf) + \frac{\delta_{jk} - v_j v_k}{\sqrt{1 + |p|^2}} \big( f \, \partial_{x_j} K_k + K_k \, \partial_{x_j} f \big)$$

But we have $c(w,v) = -\frac{w_k (w_i + v_i)}{(1 + v \cdot w)^2}$.

Thus,

$$\partial_k E_{SS}^i = \int_{\gamma \le t} \int c(w,v)\, S^2 f \, dp \, \frac{dy}{\gamma} = \int_{\gamma \le t} \int \nabla_p c(w,v) \cdot S(Kf) \, dp \, \frac{dy}{\gamma} + \int_{\gamma \le t} \int c(w,v) \frac{\delta_{jk} - v_j v_k}{\sqrt{1 + |p|^2}} \big( f \, \partial_{x_j} K_k + K_k \, \partial_{x_j} f \big) \, dp \, \frac{dy}{\gamma}$$

Let

$$m_{jk}(w,v) = c(w,v) \frac{\delta_{jk} - v_j v_k}{\sqrt{1 + |p|^2}},$$

which is bounded; the $y$-integrals are over the ball $|y - x| \le t$.

$$S(Kf) = K\, Sf + f\, SK = -K\, \nabla_p \cdot (Kf) + f\, SK \quad (2.32)$$

This implies

$$\partial_k E_{SS}^i = \iint_{\gamma \le t} f\, K \cdot \nabla_p \big[ \nabla_p c \cdot K \big] \, dp \, \frac{dy}{\gamma} + \iint_{\gamma \le t} \nabla_p c \cdot f\, SK \, dp \, \frac{dy}{\gamma} + \iint_{\gamma \le t} m_{jk}\, f \, \partial_{x_j} K_k \, dp \, \frac{dy}{\gamma} + \iint_{\gamma \le t} m_{jk}\, K_k \, \partial_{x_j} f \, dp \, \frac{dy}{\gamma} = I + II + III + IV, \text{ respectively} \quad (2.33)$$

Because $|\nabla_p K| \le |B|$,

$$|I| \le C_T \int_0^t \big( |E(\xi)|_0 + |B(\xi)|_0 \big)^2 |f(\xi)|_0 \, d\xi \le C_T$$

$$|II| \le C_T \int_0^t |f(\xi)|_0 \big( |E(\xi)|_1 + |B(\xi)|_1 \big) \, d\xi \quad (2.34)$$

and $III$ satisfies the same bound as $II$. Now split $\partial_{x_j} f$ in $IV$. That is:

$$IV = \int_{\gamma \le t} \int m_{jk}\, K_k \left[ -\frac{w_j \, \nabla_p \cdot (Kf)}{1 + v \cdot w} + \left( \delta_{jb} - \frac{w_j v_b}{1 + v \cdot w} \right) T_b f \right] dp \, \frac{dy}{\gamma} = IV_1 + IV_2$$

For $IV_1$, we integrate by parts in $p$; the resulting kernel is bounded in $v$, and $|\nabla_p K| \le |B|$, hence we have

$$|IV_1| \le C_T \int_0^t \big( |E(\xi)|_0 + |B(\xi)|_0 \big)^2 |f(\xi)|_0 \, d\xi \le C_T \quad (2.35)$$

For $IV_2$, recall from Section 2.1 that $T_j$ is a perfect $y$ derivative, so we can integrate by parts in $y$. Since $\nabla_y w = O(|y-x|^{-1})$, the resulting kernel in $IV_2$ is bounded by $C|x - y|^{-2}$. Therefore

$$|IV_2| \le C_T \int_0^t |f(\xi)|_0 \big( |E(\xi)|_1 + |B(\xi)|_1 \big) \, d\xi \quad (2.36)$$

Combining these results, we get:

\[
|\partial_k E^{i}_{SS}|_0 \le C_T\int_0^t \bigl[\bigl(|E(\xi)|_0+|B(\xi)|_0\bigr)^{2} + |E(\xi)|_1 + |B(\xi)|_1\bigr]\,|f(\xi)|_0\,d\xi \qquad (2.37)
\]

Now adding 2.30, 2.31 and 2.37, we get:

\[
|\partial_k E^{i}| \le C\Bigl[1 + \log^{*}\Bigl(\sup_{0\le\xi\le t}|f(\xi)|_1\Bigr) + \int_0^t \bigl(|E(\xi)|_1+|B(\xi)|_1\bigr)\,d\xi\Bigr]
\]

To get the same result for B, we repeat the same process (see [ ]). Hence

\[
|E(t)|_1 + |B(t)|_1 \le C_T\Bigl[1 + \log^{*}\Bigl(\sup_{0\le\xi\le t}|f(\xi)|_1\Bigr) + \int_0^t |K(\xi)|_1\,d\xi\Bigr]
\]

Now applying Gronwall’s lemma, we have

\[
|E(t)|_1 + |B(t)|_1 \le C\Bigl[1 + \log^{*}\Bigl(\sup_{0\le\xi\le t}|f(\xi)|_1\Bigr)\Bigr] \quad\text{for } 0\le t\le T
\]

This proves theorem 2.5.

Now, inserting this bound on | K ( ξ ) | 1 into 2.21, we get:

\[
|f(t)|_1 \le c + C_T\int_0^t \bigl[1+|K(\xi)|_1\bigr]\,|f(\xi)|_1\,d\xi
\le c + C_T\int_0^t \Bigl[1+\log^{*}\Bigl(\sup_{0\le s\le \xi}|f(s)|_1\Bigr)\Bigr]\,|f(\xi)|_1\,d\xi
\]

Now let sup 0 ≤ s ≤ t | f ( s ) | 1 = ρ ( t ) , then

\[
\rho(t) \le c + C_T\int_0^t \bigl[1+\log^{*}\rho(\xi)\bigr]\,\rho(\xi)\,d\xi = \lambda(t)
\]

Therefore,

\[
\int_{\lambda(0)}^{\lambda(t)} \frac{d\lambda}{\lambda\,(1+\log^{*}\lambda)} \le C\,t
\]
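To see why this forces a bound (a sketch we add for completeness, taking log * λ = max { 1, 1 + ln λ } and assuming c ≥ 1 , which we may by enlarging c): λ is increasing, λ ( 0 ) = c , and λ ′ ( t ) ≤ C T [ 1 + log * λ ( t ) ] λ ( t ) , where 1 + log * λ = 2 + ln λ for λ ≥ 1 . Hence

\[
\frac{d}{dt}\,\ln\bigl(2+\ln\lambda(t)\bigr) = \frac{\lambda'(t)}{\lambda(t)\,\bigl(2+\ln\lambda(t)\bigr)} \le C_T ,
\]

and integrating,

\[
\lambda(t) \le \exp\Bigl[\bigl(2+\ln c\bigr)\,e^{C_T t} - 2\Bigr] < \infty ,
\]

a finite (double-exponential) bound on every interval [ 0, T ] .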

Since the left-hand side diverges as λ ( t ) → ∞ , λ ( t ) is bounded on [ 0, T ] ; hence | f ( t ) | 1 is bounded, and therefore | K ( t ) | 1 is also bounded. Using these estimates, we now prove the existence of the solutions in theorem 2.1.

From the hypothesis we have smooth initial data f 0 ( x , p ) ∈ C 0 2 , E 0 ( x ) , B 0 ( x ) ∈ C 3 . Take E 1 ( x ) and B 1 ( x ) in C 2 . We now recursively define the iterates f ( n ) ( t , x , p ) , E ( n ) ( t , x ) , B ( n ) ( t , x ) as follows.

Define f ( 0 ) ( t , x , p ) = f 0 ( x , p ) , E ( 0 ) ( x ) = E 0 ( x ) and B ( 0 ) ( x ) = B 0 ( x ) .

Given the ( n − 1 ) th iterate, we define f ( n ) as the solution of

∂ t f ( n ) + v ⋅ ∇ x f ( n ) + [ E ( n − 1 ) + v × B ( n − 1 ) ] ⋅ ∇ p f ( n ) = 0 (2.38)

f ( n ) ( 0, x , p ) = f 0 ( x , p ) (2.39)

which is a linear equation (for a single unknown) of the form

∂ t f + c ( t , x , p ) ⋅ ∇ ( x , p ) f = 0

and with initial condition f 0 , where c and f 0 are C 2 functions. Since E ( n − 1 ) and B ( n − 1 ) are C 2 , f ( n ) is also a C 2 function.

The characteristics of 2.38 are the solutions of:

x ˙ = v , p ˙ = E ( n − 1 ) + v × B ( n − 1 ) , f ˙ = 0

Hence f ( n ) is constant along the characteristics, so f ( n ) has compact support in p; therefore

ρ ( n ) = ∫ f ( n ) d p and j ( n ) = ∫ v f ( n ) d p

are C 2 -functions.
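The characteristic system 2.53 is just an ODE in ( x , p ) . The following minimal numerical sketch (with made-up bounded fields E and B that are not Maxwell solutions, and a single species with m = e = 1 ) illustrates the two facts used here: | v | < 1 gives finite propagation speed, and bounded fields keep the momentum support compact on a finite time interval:

```python
import numpy as np

def vel(p):
    # relativistic velocity v = p / sqrt(1 + |p|^2); note |v| < 1 always
    return p / np.sqrt(1.0 + p @ p)

def rhs(t, y, E, B):
    # characteristic system of 2.38: x' = v(p), p' = E(t,x) + v x B(t,x)
    x, p = y[:3], y[3:]
    v = vel(p)
    return np.concatenate([v, E(t, x) + np.cross(v, B(t, x))])

def follow_characteristic(E, B, x0, p0, T, n=4000):
    # classic fourth-order Runge-Kutta along the characteristic, s in [0, T]
    y = np.concatenate([np.asarray(x0, float), np.asarray(p0, float)])
    h = T / n
    for i in range(n):
        s = i * h
        k1 = rhs(s, y, E, B)
        k2 = rhs(s + h / 2, y + h / 2 * k1, E, B)
        k3 = rhs(s + h / 2, y + h / 2 * k2, E, B)
        k4 = rhs(s + h, y + h * k3, E, B)
        y = y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y[:3], y[3:]

# illustrative bounded fields: |E| <= 0.1 and |B| <= 0.2 everywhere
E = lambda t, x: np.array([0.1, 0.0, 0.0]) * np.exp(-(x @ x))
B = lambda t, x: np.array([0.0, 0.0, 0.2])

x0, p0, T = np.zeros(3), np.array([0.5, 0.0, 0.0]), 10.0
x1, p1 = follow_characteristic(E, B, x0, p0, T)
# |x(T) - x(0)| <= T       (finite propagation speed, since |v| < 1)
# |p(T)| <= |p(0)| + T * (sup|E| + sup|B|)   (momentum stays in a compact set)
```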

Now given f ( n ) , and hence ρ ( n ) and j ( n ) , we define E ( n ) and B ( n ) as solutions of the system

\[
(\partial_t^{2} - \Delta)E^{(n)} = -\nabla_x\rho^{(n)} - \partial_t j^{(n)} ,\qquad
(\partial_t^{2} - \Delta)B^{(n)} = \nabla_x\times j^{(n)} \qquad (2.40)
\]

with initial data E 0 ( x ) , B 0 ( x ) , B 1 ( x ) , E 1 ( x ) .

Lemma 2.5. [ ] The iterates f ( n ) , E ( n ) and B ( n ) are C 2 functions.

Proof: From 2.40, since ρ ( n ) and j ( n ) are in C 1 , the solutions given in 2.7 are C 1 . We now proceed by induction on n to show the solutions are C 2 . From the representation theorem 2.2,

4 π E ( n ) ( t , x ) = E 0 ( t , x ) + E T ( n ) ( t , x ) + E S ( n ) ( t , x )

where E 0 ( t , x ) is the solution of the homogeneous wave equation with the same Cauchy data, and

\[
E_S^{(n)}(t,x) = -\iint_{|y-x|\le t} \frac{w+v}{1+v\cdot w}\,(Sf^{(n)})(t-|y-x|,y,p)\,\frac{dp\,dy}{|y-x|}
\]

\[
E_T^{(n)}(t,x) = -\iint_{|y-x|\le t} \frac{(w+v)(1-|v|^{2})}{(1+v\cdot w)^{2}}\,f^{(n)}(t-|y-x|,y,p)\,\frac{dp\,dy}{|y-x|^{2}}
\]

Here E 0 ( t , x ) is C 2 by 1.8. For E S ( n ) ( t , x ) , note that

S f ( n ) = − ∇ p ⋅ [ ( E ( n − 1 ) + v × B ( n − 1 ) ) f ( n ) ]

Substituting this into E S ( n ) ( t , x ) , we can integrate by parts in p. By the induction hypothesis E ( n − 1 ) and B ( n − 1 ) are C 2 , hence E ( n ) is C 2 . Similarly B ( n ) is a C 2 function. This proves lemma 2.5.

We now claim that the estimates 2.21 and 2.26 hold uniformly in n for f ( n ) , E ( n ) and B ( n ) . To show this we follow the same process as for f , E and B , the only difference being the superscripts ( n − 1 ) and ( n ) . Thus,

| f ( n ) ( ξ ) | 0 ≤ C (2.41)

and, the expression analogous to 2.21 is;

\[
|f^{(n)}(t)|_1 \le C + C_T\int_0^t \bigl(1+|E^{(n-1)}(\xi)|_1+|B^{(n-1)}(\xi)|_1\bigr)\,|f^{(n)}(\xi)|_1\,d\xi \qquad (2.42)
\]

and the analogue of 2.26 is;

\[
|E^{(n)}(t)|_0 + |B^{(n)}(t)|_0 \le C + C\int_0^t \bigl(|E^{(n-1)}(\xi)|_0+|B^{(n-1)}(\xi)|_0\bigr)\,d\xi \qquad (2.43)
\]

for 0 ≤ t ≤ T with constants C depending on T. Now iterating 2.43, we have

\[
|E^{(n)}(t)|_0 + |B^{(n)}(t)|_0 \le C\Bigl(1 + Ct + \frac{C^{2}t^{2}}{2!} + \cdots + \frac{C^{n}t^{n}}{n!}\Bigr) \le C\,e^{Ct} \qquad (2.44)
\]
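The iteration behind 2.44 can be checked numerically: starting from the constant bound and repeatedly applying the map a ↦ C + C ∫ 0 t a ( s ) d s produces exactly the partial sums of C e C t . A small sketch (the values C = T = 1 and the grid size are arbitrary choices of ours):

```python
import numpy as np

# Iterate the integral inequality a(t) <= C + C * \int_0^t a(s) ds, as in 2.43 -> 2.44.
C, T, N = 1.0, 1.0, 2001
t = np.linspace(0.0, T, N)
h = T / (N - 1)
a = np.full(N, C)                        # zeroth bound: a_0(t) = C
for _ in range(30):
    # cumulative trapezoidal integral of a over [0, t]
    cumint = np.concatenate([[0.0], np.cumsum((a[1:] + a[:-1]) * h / 2.0)])
    a = C + C * cumint
# after n iterations: a(t) = C * (1 + C t + ... + (C t)^n / n!), close to C * e^{C t}
```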

This tells us that the fields E ( n ) , B ( n ) and f ( n ) are pointwise bounded uniformly in n. Now applying Gronwall’s lemma to 2.42, we get:

\[
|f^{(n)}(t)|_1 \le C\exp\int_0^t C\,\bigl(|E^{(n-1)}(\xi)|_1+|B^{(n-1)}(\xi)|_1\bigr)\,d\xi \qquad (2.45)
\]

Now an analogue of the result of theorem 2.5 is:

\[
|E^{(n)}(t)|_1 + |B^{(n)}(t)|_1 \le C_T + C_T\log^{*}\Bigl(\sup_{0\le\xi\le t}|f^{(n)}(\xi)|_1\Bigr) + C_T\int_0^t \bigl(|E^{(n-1)}(\xi)|_1+|B^{(n-1)}(\xi)|_1\bigr)\,d\xi \qquad (2.46)
\]

Substituting 2.45 into 2.46, we conclude that

\[
|E^{(n)}(t)|_1 + |B^{(n)}(t)|_1 \le C_T + C_T\int_0^t \bigl(|E^{(n-1)}(\xi)|_1+|B^{(n-1)}(\xi)|_1\bigr)\,d\xi \qquad (2.47)
\]

since log * s ≤ max { 1, 1 + ln s } . Iterating 2.47 as in 2.44 above, we get a uniform bound for

‖ E ( n ) ‖ 1 + ‖ B ( n ) ‖ 1 (2.48)

and then from 2.45 we obtain a uniform bound for ‖ f ( n ) ‖ 1 , for all n and 0 ≤ t ≤ T .

From these estimates and a compactness property, it is now easy to pass to the limit. But to get an optimal result, let us show that the sequences are Cauchy in the C 1 norm.

In the rest of this proof, to show that the sequences are Cauchy, we use the material of ( [ ] ).

To show the sequences are Cauchy, let us fix two indices m and n. For j = 0, 1, let

d j m n = | E ( m ) ( t ) − E ( n ) ( t ) | j + | B ( m ) ( t ) − B ( n ) ( t ) | j

f j ( m n ) ( t ) = | f ( m ) ( t ) − f ( n ) ( t ) | j

In the same way as the derivation of 2.13 for E ( n ) , B ( n ) , we have

d 0 m n ( t ) ≤ C ∫ 0 t ( d 0 m − 1, n − 1 ( ξ ) + f 0 m n ( ξ ) ) d ξ (2.49)

And the term analogous to 2.14 is;

f 0 m n ( t ) ≤ C ∫ 0 t d 0 m − 1, n − 1 ( ξ ) d ξ (2.50)

using the bounds already known, with C depending on T and 0 ≤ t ≤ T . Now substituting 2.50 to 2.49, we obtain

\[
d_0^{mn}(t) \le C_1\int_0^t d_0^{m-1,n-1}(\xi)\,d\xi \qquad (2.51)
\]

with a new constant C 1 .

Now iterating 2.51, we have;

\begin{align*}
d_0^{mn}(t) &\le C_1^{2}\int_0^t (t-\xi)\,d_0^{m-2,n-2}(\xi)\,d\xi \le \cdots \le C_1^{k}\int_0^t \frac{(t-\xi)^{k-1}}{(k-1)!}\,d_0^{m-k,n-k}(\xi)\,d\xi\\
&\le \frac{a\,C_1^{k}\,t^{k}}{k!} \quad\text{for } m\ge k,\ n\ge k
\end{align*}

where

d 0 m n ( t ) ≤ | E ( m ) ( t ) | 0 + | E ( n ) ( t ) | 0 + | B ( m ) ( t ) | 0 + | B ( n ) ( t ) | 0 ≤ a

Therefore, E ( n ) and B ( n ) are Cauchy sequences in the C 0 norm, and from 2.50 f ( n ) is a Cauchy sequence in C 0 norm. Hence they converge uniformly in C 0 .

Now let us claim that f ( n ) , E ( n ) and B ( n ) are Cauchy in C 1 norm.

To prove this, we split ∂ E ( n ) and ∂ B ( n ) as in theorem 2.5 and subtract the resulting expressions. Writing the TT term as in 2.28-2.30 and estimating it, we have

| ∂ E T T ( n ) ( t ) − ∂ E T T ( m ) ( t ) | 0 ≤ C ∫ 0 t | f ( n ) ( ξ ) − f ( m ) ( ξ ) | 1 d ξ

Similarly TS and ST terms are written as in 2.31 and then estimated as:

C ∫ 0 t ( d 0 m − 1, n − 1 ( ξ ) + f 0 m n ( ξ ) ) d ξ

For the SS term, we break it up into several pieces as in 2.33. Following the same procedure and using the known bound in C 1 , we conclude that

| ∂ E S S ( n ) ( t ) − ∂ E S S ( m ) ( t ) | 0 ≤ C ∫ 0 t ( d 1 m − 1, n − 1 ( ξ ) + f 0 m n ( ξ ) ) d ξ

Therefore,

d 1 m n ( t ) ≤ C ∫ 0 t ( d 1 m − 1, n − 1 ( ξ ) + f 1 m n ( ξ ) ) d ξ (2.52)

Let us now estimate f 1 m n . To do this, recall the characteristic equations

x ˙ n = v n , p ˙ n = K ( n − 1 ) ( s , x n ) (2.53)

where

K ( n − 1 ) = E ( n − 1 ) + v n × B ( n − 1 )

is evaluated at time s.

Let x n ( s ) , p n ( s ) be the solutions of 2.53 with the initial values x , p at s = t respectively. From x ˙ n = v n we have

| d d s ( x n − x m ) | ≤ | v n − v m | (2.54)

because the derivative of the real function p ↦ p ( 1 + p 2 ) − 1 / 2 is bounded by unity. From p ˙ n = K ( n − 1 ) ( s , x n ) , we have:

\begin{align*}
\Bigl|\frac{d}{ds}(p_n - p_m)\Bigr| &= \bigl|K^{(n-1)}(x_n) - K^{(m-1)}(x_m)\bigr|\\
&\le \bigl|K^{(n-1)}(x_n) - K^{(n-1)}(x_m)\bigr| + \bigl|K^{(n-1)}(x_m) - K^{(m-1)}(x_m)\bigr|\\
&\le C\,|x_n - x_m| + \bigl|K^{(n-1)}(x_m) - K^{(m-1)}(x_m)\bigr|
\end{align*}

this is because each K ( n ) has uniformly bounded C 1 norm.

Now by the known bounds, we have

| ( K ( n − 1 ) − K ( m − 1 ) ) ( x m ) | ≤ δ n m

say, where δ n m → 0 as n , m → ∞ uniformly on [ 0, T ] . Therefore,

\[
|x_n - x_m| + |p_n - p_m| \le C_T\Bigl[\delta_{nm} + C\int_0^t \bigl(|v_n - v_m| + |x_n - x_m|\bigr)\,ds\Bigr] \qquad (2.55)
\]

Hence by Gronwall’s lemma, the sequences x n ( s ) , p n ( s ) converge uniformly on 0 ≤ s ≤ T ; the convergence is also uniform in the parameters t , x , p , for 0 ≤ t ≤ T , x ∈ ℝ 3 , p ∈ ℝ 3 . To estimate f 1 m n ( t ) , we differentiate 2.38 with respect to x, which gives

( ∂ t + v ⋅ ∇ x + K ( n − 1 ) ( x n ) ⋅ ∇ p ) ∂ f n = − ∂ K ( n − 1 ) ⋅ ∇ p f ( n )

After integrating this along the characteristics, we have

∂ f ( n ) ( t , x , p ) = ∂ f 0 ( x n ( 0 ) , p n ( 0 ) ) − ∫ 0 t [ ∂ K ( n − 1 ) ( x n ) ⋅ ∇ p f ( n ) ( s , x n , p n ) ] d s

∂ f ( m ) ( t , x , p ) = ∂ f 0 ( x m ( 0 ) , p m ( 0 ) ) − ∫ 0 t [ ∂ K ( m − 1 ) ( x m ) ⋅ ∇ p f ( m ) ( s , x m , p m ) ] d s

Subtracting the second from the first and estimating, we get:

| ( ∂ f ( n ) − ∂ f ( m ) ) ( t , x , p ) | ≤ | ∂ f 0 ( x n ( 0 ) , p n ( 0 ) ) − ∂ f 0 ( x m ( 0 ) , p m ( 0 ) ) | + ∫ 0 t | ∂ K ( n − 1 ) ( x n ) ⋅ ∇ p f ( n ) ( s , x n , p n ) − ∂ K ( m − 1 ) ( x m ) ⋅ ∇ p f ( m ) ( s , x m , p m ) | d s

The first term goes to zero as n , m → ∞ , from the hypothesis on f 0 . Hence, we can re-write the expression as:

\begin{align*}
\bigl|(\partial f^{(n)} - \partial f^{(m)})(t,x,p)\bigr| \le{}& \epsilon_{nm}
+ \int_0^t \bigl|\partial K^{(n-1)}(x_n)\cdot\nabla_p f^{(n)}(s,x_n,p_n) - \partial K^{(n-1)}(x_m)\cdot\nabla_p f^{(n)}(s,x_m,p_m)\bigr|\,ds\\
&+ \int_0^t \bigl|\bigl(\partial K^{(n-1)}(x_m) - \partial K^{(m-1)}(x_m)\bigr)\cdot\nabla_p f^{(n)}(s,x_m,p_m)\bigr|\,ds\\
&+ \int_0^t \bigl|\partial K^{(m-1)}(x_m)\cdot\bigl(\nabla_p f^{(n)}(s,x_m,p_m) - \nabla_p f^{(m)}(s,x_m,p_m)\bigr)\bigr|\,ds
\end{align*}

for x , p ∈ ℝ 3 and 0 ≤ s ≤ T , 0 ≤ t ≤ T , where ϵ n m → 0 uniformly as m , n → ∞ .

From 2.55 and the known bounds in C 1 , the first integral goes to zero uniformly on [ 0, T ] . The second integrand is dominated by C d 1 m − 1, n − 1 ( s ) , and similarly the last integrand is dominated by C f 1 m n ( s ) . As in 2.21, the p-derivative of the difference can be estimated in terms of the x-derivatives. Hence

f 1 ( m n ) ( t ) ≤ ϵ ′ n m + C ∫ 0 t [ d 1 ( m − 1, n − 1 ) ( s ) + f 1 ( m n ) ( s ) ] d s (2.56)

The ϵ ′ n m converges to zero uniformly on [ 0, T ] as m , n → ∞ . Define,

H n m ( t ) = ∫ 0 t f 1 m n ( s ) d s

then using 2.56 we get

\[
\dot H_{nm}(t) - c\,H_{nm}(t) \le \epsilon'_{nm} + c\int_0^t d_1^{m-1,n-1}(s)\,ds
\]

Therefore,

H n m ( t ) ≤ ϵ ′ n m ∫ 0 t e c ( t − s ) d s + c ∫ 0 t e c ( t − s ) ∫ 0 s d 1 m − 1 , n − 1 ( ξ ) d ξ d s

Substituting this into 2.56, we get

f 1 m n ( t ) ≤ ϵ ″ n m + C ∫ 0 t d 1 m − 1, n − 1 ( s ) d s (2.57)

where C depends on T, and ϵ ″ n m → 0 uniformly on [ 0, T ] as n , m → ∞ . Now substituting 2.57 into 2.52, we get the inequality

d 1 m n ( t ) ≤ δ n m + c ∫ 0 t d 1 m − 1, n − 1 ( s ) d s (2.58)

with a different constant c depending on T; here δ n m tends to zero uniformly on [ 0, T ] as n , m → ∞ . Iterating 2.58 we get:

\[
d_1^{mn}(t) \le \delta_{nm}\Bigl(1 + ct + \frac{c^{2}t^{2}}{2!} + \cdots + \frac{c^{k-1}t^{k-1}}{(k-1)!}\Bigr) + \frac{c^{k}}{(k-1)!}\int_0^t (t-\xi)^{k-1}\,d_1^{m-k,n-k}(\xi)\,d\xi
\]

If u is an upper bound for the C 1 norms of the fields, we thus have

\[
d_1^{mn}(t) \le \delta_{nm}\,e^{ct} + \frac{u\,c^{k}T^{k}}{k!} , \qquad 0\le t\le T,\ m,n\ge k
\]

Therefore E ( n ) , B ( n ) and, by 2.57, f ( n ) are Cauchy sequences in the C 1 norm. Since C 1 is complete, E ( n ) , B ( n ) and f ( n ) converge uniformly for t ∈ [ 0, T ] , x , p ∈ ℝ 3 , together with all their first derivatives. Let E , B and f be the limits of E ( n ) , B ( n ) and f ( n ) respectively. Then ( E , B , f ) is the unique solution of the system 2.1 in the simplified case of a single species.
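The closing mechanism in these iterations is the factorial: c k T k / k ! → 0 as k → ∞ for any fixed c and T, however large. A quick computation confirms this (the values of c and T below are arbitrary):

```python
import math

# remainder coefficient c^k T^k / k! from the iterated inequality 2.58
c, T = 5.0, 2.0
terms = [(c * T) ** k / math.factorial(k) for k in range(60)]
# the terms grow while k < c*T = 10 and then decay super-exponentially to 0
```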

To generalize to n species, only a small modification is needed. The operator S now depends on α , i.e.

S α = ∂ t + v α ⋅ ∇ x

In this case each f α remains bounded. In the representations of the fields and their derivatives, ρ and j are written as

ρ = 4 π ∫ ∑ α e α f α d p , j = 4 π ∫ ∑ α v α e α f α d p

where e α is the charge of the particles of species α . Again, f α ( n ) − f α ( m ) is estimated for each α separately. Hence, with these simple modifications, the conclusion extends to the case of several species.

In the previous chapter, we saw that a sufficient condition for the existence of a global C 1 solution to the relativistic Vlasov-Maxwell equations was the existence of a continuous function β ( t ) such that any solution f α (or any iterative approximation f α ( n ) ) vanishes for | p | > β ( t ) . In this chapter, we verify this sufficient condition under a smallness assumption on the data, stated in the theorem below.

Theorem 3.1. [ ] Assume the initial data satisfy the constraints

∇ ⋅ E 0 = ρ 0 = 4 π ∫ ℝ 3 ∑ α e α f α 0 d p , ∇ ⋅ B 0 = 0 , ∫ ℝ 3 ρ 0 d x = 0 (3.1)

If the data satisfy

\[
\sum_\alpha \|f_{\alpha 0}\|_{C^{2}} + \|E_0\|_{C^{3}} + \|B_0\|_{C^{3}} \le \epsilon_0 ; \qquad (3.2)
\]

then

there exists a unique solution ( f α , E , B ) of 1.5 for all x ∈ ℝ 3 and p ∈ ℝ 3 and all times t ≥ 0 , with f α , E , B ∈ C 1 having initial data f α 0 , E 0 , B 0 such that

f α ( t , x , p ) = 0 for | p | > β for all α , t and x (3.3)

E ( t , x ) = B ( t , x ) = 0 for | x | > c t + k .

For every ϵ > 0 , there exists ϵ 0 > 0 such that if 3.2 holds, then

\[
|E(t,x)| + |B(t,x)| \le \frac{\epsilon}{(t+1)\,(ct-|x|+2k)} \;, \qquad \forall\, t\ge 0,\ x\in\mathbb{R}^{3} \qquad (3.4)
\]

To prove this theorem, the key step is to show that the paths of the particles spread out with time. Since the paths of the particles are given by the equations

x ˙ = v α , p ˙ = e α ( E + c − 1 v α × B ) (3.5)

the particles would move approximately in straight lines if E and B are small. Thus we need to prove that the electromagnetic field decays as t → ∞ . Hence, to prove this theorem, we introduce a weighted L ∞ norm for the field, as introduced in [ ].

The main structure of the proof is as established in the last chapter. For uniqueness we use the same steps as in chapter two, and for existence the following construction is used. For given functions E ( 0 ) ( t , x ) and B ( 0 ) ( t , x ) , we define E ( n ) ( t , x ) , B ( n ) ( t , x ) and f α ( n ) ( t , x , p ) inductively as follows. Given the ( n − 1 ) th iterate, we define f α ( n ) as the solution of the linear equation

∂ t f α ( n ) + v α ⋅ ∇ x f α ( n ) + e α ( E ( n − 1 ) + v α × B ( n − 1 ) ) ⋅ ∇ p f α ( n ) = 0

with initial condition

f α ( n ) ( 0, x , p ) = f α 0 ( x , p ) (3.6)

By setting c = 1 , we define

\[
\rho^{(n)} = 4\pi\sum_\alpha e_\alpha\int f_\alpha^{(n)}\,dp ,\qquad j^{(n)} = 4\pi\sum_\alpha e_\alpha\int v_\alpha f_\alpha^{(n)}\,dp \qquad (3.7)
\]

Finally we define E ( n ) , B ( n ) as the solutions of the Maxwell’s equations,

∂ t E ( n ) = ∇ × B ( n ) − j ( n ) , ∇ ⋅ E ( n ) = ρ ( n ) , ∂ t B ( n ) = − ∇ × E ( n ) , ∇ ⋅ B ( n ) = 0

with data E n ( 0 , x ) = E 0 ( x ) , B n ( 0 , x ) = B 0 ( x ) .

Hence, from theorem 2.1, we deduce that if there exists β > 0 , independent of t , x , α and n , such that

f α ( n ) ( t , x , p ) = 0 for | p | > β (3.8)

then ( f α ( n ) , E ( n ) , B ( n ) ) converges to a C 1 solution ( f α , E , B ) of the system 2.1. So to prove theorem 3.1, it is enough to show 3.8 under a smallness condition.

Let F ( t , x ) = ( E ( t , x ) , B ( t , x ) ) . Define the norms

‖ F ‖ 0 = sup x , t ( t + | x | + 2 k ) ( t − | x | + 2 k ) [ | E ( t , x ) | + | B ( t , x ) | ]

‖ F ‖ 1 = sup x , t ( t + | x | + 2 k ) ( t − | x | + 2 k ) 2 ln ( t + | x | + 2 k ) [ | ∇ x E ( t , x ) | + | ∇ x B ( t , x ) | ]

and ‖ F ‖ = ‖ F ‖ 0 + ‖ F ‖ 1 .

Given ϵ > 0 , let

F = { F : F is C 1 , F = 0 for | x | > t + k , ‖ F ‖ ≤ ϵ }

Given F ∈ F , we define the characteristics as the solutions X = X α ( s , t , x , p ) , P = P α ( s , t , x , p ) of 3.5, that is

\[
\frac{\partial X}{\partial s} = v = \frac{p}{\sqrt{m_\alpha^{2}+|p|^{2}}} \qquad (3.9)
\]

\[
\frac{\partial P}{\partial s} = e_\alpha\bigl(E(s,X) + v\times B(s,X)\bigr) \qquad (3.10)
\]

such that the initial conditions are X α ( t , t , x , p ) = x and P α ( t , t , x , p ) = p .

Now if we define

f α ( t , x , p ) = f α 0 ( X α ( 0 , t , x , p ) , P α ( 0 , t , x , p ) ) (3.11)

then f α ( t , x , p ) is the solution of the Vlasov equation,

∂ t f α + v α ⋅ ∇ x f α + e α ( E + v × B ) ⋅ ∇ p f α = 0

with initial condition f α 0 ( x , p ) .

Now let F * = ( E * , B * ) be the solution of the Maxwell’s equations

∂ t E * = ∇ × B * − j , ∇ ⋅ E * = ρ , ∂ t B * = − ∇ × E * , ∇ ⋅ B * = 0

with initial conditions E * ( 0, x ) = E 0 ( x ) , B * ( 0, x ) = B 0 ( x ) . Therefore, the iteration process may be summarized as F ( n ) = ( F ( n − 1 ) ) * . Hence, we begin the process by defining F ( 0 ) = 0 (that is, E ( 0 ) ( t , x ) = B ( 0 ) ( t , x ) = 0 ).

Here the characteristics are curves defined by the solutions of the equations 3.9 and 3.10. Because E and B are C 1 , the solutions exist as C 1 functions of s , t , x , p for some time 0 ≤ t ≤ T * , 0 ≤ s ≤ T * . Hence, since the characteristics exist, we define

u ( t ) = sup { | P α ( s , 0 , x , p ) | : | x | ≤ k , | p | ≤ k , 0 ≤ s ≤ t , 1 ≤ α ≤ N }

“which is the largest momentum up to time t emanating from the support of f α 0 ” [ ].

Now let us drop the dependence on the species through the parameter α . Therefore, by the definitions above, we have

X ( t , 0 , X ( 0 , t , x , p ) , P ( 0 , t , x , p ) ) = x (3.12)

P ( t , 0 , X ( 0 , t , x , p ) , P ( 0 , t , x , p ) ) = p (3.13)

Now by setting y = X ( 0 , t , x , p ) and z = P ( 0 , t , x , p ) , Equations (3.12) and (3.13) give x = X ( t , 0 , y , z ) and p = P ( t , 0 , y , z ) . Similarly by uniqueness, we have

X ( s , 0 , X ( 0, t , x , p ) , P ( 0, t , x , p ) ) = X ( s , t , x , p ) (3.14)

P ( s , 0 , X ( 0, t , x , p ) , P ( 0, t , x , p ) ) = P ( s , t , x , p ) (3.15)

Now since, f is constant on the characteristics, we have

f ( t , x , p ) = f 0 ( X ( 0, t , x , p ) , P ( 0, t , x , p ) )

Therefore,

supp ( f ) = { ( x , p ) ∈ ℝ 3 × ℝ 3 : f ( t , x , p ) ≠ 0 } = { ( x , p ) ∈ ℝ 3 × ℝ 3 : f 0 ( X ( 0, t , x , p ) , P ( 0, t , x , p ) ) ≠ 0 } = { ( X ( t ,0, y , z ) , P ( t ,0, y , z ) ) : f 0 ( y , z ) ≠ 0 }

This explains the extent of the p-support of f and the definition of u ( t ) .

Lemma 3.2. [ ] If f α ( t , x , p ) ≠ 0 , then for 0 ≤ s ≤ t ,

\[
s - |X_\alpha(s,t,x,p)| + 2k \ \ge\ \frac{k+s}{2\,\bigl(1+u^{2}(t)\bigr)}
\]

Proof: If f α ( t , x , p ) ≠ 0 , then f α ( t , x , p ) = f α 0 ( X α ( 0, t , x , p ) , P α ( 0, t , x , p ) ) . Let y 1 = X α ( 0, t , x , p ) and z 1 = P α ( 0, t , x , p ) .

Hence, from the support property of f α 0 , | y 1 | = | X α ( 0, t , x , p ) | ≤ k and | z 1 | = | P α ( 0, t , x , p ) | ≤ k . Now from the definition of u ( t ) and from Equation (3.14), | P α ( s , t , x , p ) | ≤ u ( t ) . This implies | p | = | P α ( t ,0, y 1 , z 1 ) | ≤ u ( t ) and

| X α ( s , t , x , p ) | ≤ | X α ( 0, t , x , p ) | + ∫ 0 s | v α ( ξ , t , x , p ) | d ξ ≤ k + s u ^ ( t ) (3.16)

where, assuming m α = e α = 1 , $\hat u(t) = u(t)/\sqrt{1+u^{2}(t)} < 1$ . But

\[
1 - \hat u(t) = 1 - \frac{u(t)}{\sqrt{1+u^{2}(t)}} = \frac{\sqrt{1+u^{2}(t)}-u(t)}{\sqrt{1+u^{2}(t)}} = \frac{1}{\sqrt{u^{2}+1}\,\bigl(\sqrt{u^{2}+1}+u\bigr)} \ \ge\ \frac{1}{2\,(u^{2}+1)}
\]

hence $\hat u(t) \le 1 - \dfrac{1}{2(u^{2}+1)}$ .
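This elementary inequality, 1 − u / √(1 + u²) ≥ 1 / ( 2 ( u² + 1 ) ) , can be sanity-checked on a grid (a numerical illustration only, not part of the proof):

```python
import numpy as np

u = np.linspace(0.0, 100.0, 200001)
lhs = 1.0 - u / np.sqrt(1.0 + u ** 2)    # 1 - u_hat
rhs = 1.0 / (2.0 * (1.0 + u ** 2))       # claimed lower bound
# lhs >= rhs holds at every grid point
```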

Now substituting this in to the expression 3.16, we have

\[
|X_\alpha(s,t,x,p)| \ \le\ k + s - \frac{s}{2\,(u^{2}+1)}
\]

That is, s − | X α ( s , t , x , p ) | + 2 k ≥ ( k + s ) / ( 2 ( 1 + u 2 ( t ) ) ) . This proves the lemma.

Lemma 3.3. [ ] If ‖ F ‖ 0 is sufficiently small, then u ( t ) is bounded uniformly in t, and there exists β , depending only on k , ϵ and ϵ 0 , such that f α ( t , x , p ) = 0 for | p | > β .

Proof: For t ≥ 0 , write X ( s ) = X α ( s , t , x , p ) and P ( s ) = P α ( s , t , x , p ) . Then

\begin{align*}
|P_\alpha(0,t,x,p) - p| &\le \int_0^t \bigl|E(s,X(s)) + v(s)\times B(s,X(s))\bigr|\,ds\\
&\le \|F\|_0 \int_0^t (s+|X(s)|+2k)^{-1}\,(s-|X(s)|+2k)^{-1}\,ds \quad\text{(by definition of } \|F\|_0\text{)}\\
&\le \|F\|_0 \int_0^t (k+s)^{-2}\,\bigl(2+2u^{2}(t)\bigr)\,ds \quad\text{(by lemma 3.2)}\\
&\le \frac{2\,\|F\|_0\,\bigl(1+u^{2}(t)\bigr)}{k}
\end{align*}

Hence, u ( t ) ≤ k + 2 ‖ F ‖ 0 ( 1 + u 2 ( t ) ) / k .

Now if ‖ F ‖ 0 is sufficiently small, by lemma (1.6) in chapter one, u ( t ) is a bounded function of t.
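The lemma (1.6) argument can be sketched as follows (our reconstruction, with δ standing for the small coefficient that multiplies 1 + u 2 ( t ) in the bound above): a continuous u ≥ 0 with u ( 0 ) ≤ k satisfying u ≤ k + δ ( 1 + u 2 ) obeys

\[
\delta\,u(t)^{2} - u(t) + (k+\delta) \ge 0
\quad\Longleftrightarrow\quad
u(t) \le u_{-} \ \text{ or } \ u(t) \ge u_{+} ,
\]

where, provided $4\delta(k+\delta) < 1$,

\[
u_{\pm} = \frac{1 \pm \sqrt{1-4\delta(k+\delta)}}{2\delta} ,
\qquad
u_{-} = \frac{2(k+\delta)}{1+\sqrt{1-4\delta(k+\delta)}} \le 2(k+\delta) .
\]

Since u ( 0 ) ≤ k < u − and u is continuous, u ( t ) can never cross the forbidden interval ( u − , u + ) ; hence u ( t ) ≤ u − ≤ 2 ( k + δ ) for all t.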

Now if f α ( t , x , p ) ≠ 0 for some ( x , t , α , p ) , then

| y | = | X α ( 0 , t , x , p ) | ≤ k and | z | = | P α ( 0 , t , x , p ) | ≤ k

since f α ( t , x , p ) = f α 0 ( X α ( 0, t , x , p ) , P α ( 0, t , x , p ) ) . Hence, by definition of u ( t ) , we have | p | = | P α ( t ,0, y , z ) | ≤ u ( t ) ≤ β . This proves the lemma.

Theorem 3.4. [ ] If F ∈ F and ϵ is small enough, then f α ( t , x , p ) = 0 for | p | > β , where β depends only on k , ϵ and ϵ 0 .

Proof: From lemma 3.3 above, if f α ( t , x , p ) ≠ 0 for some ( x , t , α , p ) , then | p | ≤ β , where β depends only on k , ϵ , ϵ 0 . This implies f α ( t , x , p ) = 0 for | p | > β . This proves the theorem.

Theorem 3.5. If F ∈ F and ϵ is small enough, then F * ∈ F .

Proof: See [ ].

Proof of Theorem 3.1. Define the sequences f α ( n ) , F ( n ) as above. Since F ( 0 ) ∈ F , by theorem 3.5 above, F ( n ) ∈ F for all n. By theorem 3.4 above, f α ( n ) = 0 for | p | > β . Therefore, by the results of chapter two, f α ( n ) , F ( n ) and their first derivatives converge pointwise to f and F. This implies F ∈ F , and therefore 3.4 holds.

Hence, ( f , F ) is the solution of the RVM equations. This proves the theorem.

Therefore, under a smallness condition on the data we obtain the same conclusion as under the sufficient condition used in chapter two to get a global smooth C 1 solution. We thus conclude that the sufficient condition of theorem 2.1 holds for small initial data.

In this project, we have seen that if f α 0 ( x , p ) in C 2 and E 0 ( x ) , B 0 ( x ) in C 3 are initial data for the Vlasov-Maxwell equations, and if there exists a continuous function β ( t ) such that f α ( t , x , p ) = 0 for | p | ≥ β ( t ) for all x and α , then there exists a unique C 1 solution ( f α , E , B ) of the VMEs for all t. We have also seen in chapter three that the same result as in chapter two can be obtained if the sufficient condition is replaced by a smallness assumption on the initial data, under which we verified that the sufficient condition holds.

The result that we obtained is for the relativistic Vlasov-Maxwell system. In the small data case, however, the same technique yields the result for the non-relativistic VMEs, except that v α is replaced by p / m α . In chapter two, in the decompositions of the field, it was necessary that the term 1 + v ⋅ w be bounded away from zero; in the non-relativistic case the corresponding expression is 1 + p ⋅ w , so singularities may occur on a larger set of momenta. Therefore, smooth global existence in the non-relativistic case seems problematic.

Future Work

In this paper, existence and uniqueness of global smooth solutions for the Vlasov-Maxwell equations with initial data f 0 ∈ C 2 , E 0 , B 0 ∈ C 3 is shown. In the future, the same problem will be treated by taking initial data f 0 ∈ C 3 , E 0 , B 0 ∈ C 3 .

After an intensive period of time, today is the day I am writing this note of thanks as the finishing touch on my article. It has been a period of intense learning for me, and writing this article has had a big impact on me. I would like to reflect on the institution and the people who have supported and helped me so much throughout this period. I would first like to thank Jimma University for facilitating materials such as printing paper and internet access. Next, I would like to express great appreciation to my colleagues (Jimma University mathematics department staff members) for their wonderful collaboration. You supported me greatly and were always willing to help.

Petros, L.D. (2018) Existence and Uniqueness of Global Smooth Solutions for Vlasov Maxwell Equations. Advances in Pure Mathematics, 8, 45-76. https://doi.org/10.4236/apm.2018.81005