When a differential field *K* having *n* commuting derivations is given together with two finitely generated differential extensions *L* and *M* of *K*, an important problem in differential algebra is to exhibit a common differential extension *N* in order to define the new differential extensions *L* ∩ *M* and the smallest differential field (*L*, *M*) ⊂ *N* containing both *L* and *M*. Such a result allows one to generalize the use of complex numbers in classical algebra. Having now two finitely generated differential modules *L* and *M* over the non-commutative ring *D* = *K*[*d*_{1}, ..., *d*_{n}] = *K*[*d*] of differential operators with coefficients in *K*, we may similarly look for a differential module *N* containing both *L* and *M* in order to define *L* ∩ *M* and *L* + *M*. This is *exactly* the situation met in linear or non-linear OD or PD control theory by selecting the inputs and the outputs among the control variables. However, in many recent books and papers, we have shown that controllability is a *built-in* property of a control system, not depending on the choice of inputs and outputs. The purpose of this paper is thus to revisit control theory by showing the specific importance of the two previous problems and the part played by *N* in both cases for the parametrization of the control system. An important tool will be the study of *differential correspondences*, a modern name for what was called the *Bäcklund problem* during the last century, namely the elimination theory for groups of variables among systems of linear or nonlinear OD or PD equations. The main difficulty is to revisit *differential homological algebra* by using noncommutative localization as a way to generalize the symbolic calculus in the style of Heaviside and Mikusinski. Finally, when *M* is a *D*-module, this paper uses for the first time the fact that the system *R* = *hom*_{K}(*M*, *K*) is a *D*-module for the Spencer operator acting on sections, thus avoiding behaviours, trajectories and signal spaces in a purely formal way, contrary to a few recent works on this difficult subject.

The story started in 1970 at Princeton University when the author of this paper was a visiting student of D.C. Spencer and his colleague J. Wheeler from the nearby physics department set up a $1000 challenge for proving that Einstein equations could be parametrized by potential-like functions like Maxwell equations. It is only in 1995 that he found the negative solution of this challenge, for which Wheeler only paid back one dollar (!) because the relativistic community was (and still is!) convinced about the existence of such a parametrization. Accordingly, such a result can only be found today in books of control theory ( [

In 1990, U. Oberst (Innsbruck University) succeeded in applying these new tools to control theory, only studying linear systems of ordinary differential (OD) or partial differential (PD) equations with constant coefficients ( [

A possibility to escape from such a situation was to publish as fast as possible a book presenting for the first time in a self-contained way the non-commutative aspect of double duality for the study of systems having coefficients in a differential field K ( [

In the second section, we shall study the linear framework and in the third section, we shall study the nonlinear framework, separating in each situation the differential geometric approach from the differential algebraic approach and providing various motivating examples. Many of the results are given without proofs, which can be found in the many books ( [

The author thanks the anonymous referee who pointed out to him the necessity of recalling the link existing between this modern formal approach and the classical computational approach of O. Heaviside (1889) or J. Mikusinski (1949), based on purely algebraic localization techniques in place of the Laplace transform. A few tricky examples will be presented in subsection 2.2 in order to illustrate these difficult questions.

If X is a manifold of dimension n with local coordinates ( x ) = ( x 1 , ⋯ , x n ) , we denote as usual by T = T ( X ) the tangent bundle of X, by T * = T * ( X ) the cotangent bundle, by ∧ r T * the bundle of r-forms and by S q T * the bundle of q-symmetric tensors. More generally, let E , F , ⋯ be vector bundles over X with local coordinates ( x i , y k ) , ( x i , z l ) , ⋯ for i = 1 , ⋯ , n , k = 1 , ⋯ , m , l = 1 , ⋯ , p simply denoted by ( x , y ) , ( x , z ) , projection π : E → X : ( x , y ) → ( x ) and changes of coordinates x ¯ = φ ( x ) , y ¯ = A ( x ) y . We shall denote by E * the vector bundle obtained by inverting the matrix A of the changes of coordinates, exactly like T * is obtained from T. We denote by ξ : X → E : ( x ) → ( x , y = ξ ( x ) ) a (local) section of E. Under a change of coordinates, a section transforms like ξ ¯ ( φ ( x ) ) = A ( x ) ξ ( x ) and the changes of the derivatives can also be obtained with more work. We shall denote by J q ( E ) the q-jet bundle of E with local coordinates ( x i , y k , y i k , y i j k , ⋯ ) = ( x , y q ) called jet coordinates and sections ξ q : ( x ) → ( x , ξ k ( x ) , ξ i k ( x ) , ξ i j k ( x ) , ⋯ ) = ( x , ξ q ( x ) ) transforming like the sections j q ( ξ ) : ( x ) → ( x , ξ k ( x ) , ∂ i ξ k ( x ) , ∂ i j ξ k ( x ) , ⋯ ) = ( x , j q ( ξ ) ( x ) ) where both ξ q and j q ( ξ ) are over the section ξ of E. For any q ≥ 0 , J q ( E ) is a vector bundle over X with projection π q while J q + r ( E ) is a vector bundle over J q ( E ) with projection π q q + r , ∀ r ≥ 0 .

DEFINITION 2.1.1: A linear system of order q on E is a vector sub-bundle R q ⊂ J q ( E ) and a solution of R q is a section ξ of E such that j q ( ξ ) is a section of R q .

Let μ = ( μ 1 , ⋯ , μ n ) be a multi-index with length | μ | = μ 1 + ⋯ + μ n , class i if μ 1 = ⋯ = μ i − 1 = 0 , μ i ≠ 0 and μ + 1 i = ( μ 1 , ⋯ , μ i − 1 , μ i + 1 , μ i + 1 , ⋯ , μ n ) . We set y q = { y μ k | 1 ≤ k ≤ m , 0 ≤ | μ | ≤ q } with y μ k = y k when | μ | = 0 . There is a natural way to distinguish the section ξ q from the section j q ( ξ ) by introducing the Spencer operator d : J q + 1 ( E ) → T * ⊗ J q ( E ) with components ( d ξ q + 1 ) μ , i k ( x ) = ∂ i ξ μ k ( x ) − ξ μ + 1 i k ( x ) . The kernel of d consists of sections such that ξ q + 1 = j 1 ( ξ q ) = ⋯ = j q + 1 ( ξ ) . Finally, if R q ⊂ J q ( E ) is a system of order q on E locally defined by linear equations Φ τ ( x , y q ) ≡ a k τ μ ( x ) y μ k = 0 , the r-prolongation R q + r = ρ r ( R q ) = J r ( R q ) ∩ J q + r ( E ) ⊂ J r ( J q ( E ) ) is locally defined when r = 1 by the linear equations Φ τ ( x , y q ) = 0 , d i Φ τ ( x , y q + 1 ) ≡ a k τ μ ( x ) y μ + 1 i k + ∂ i a k τ μ ( x ) y μ k = 0 and has symbol g q + r = R q + r ∩ S q + r T * ⊗ E ⊂ J q + r ( E ) if one looks at the top order terms. If ξ q + 1 ∈ R q + 1 is over ξ q ∈ R q , differentiating the identity a k τ μ ( x ) ξ μ k ( x ) ≡ 0 with respect to x i and subtracting the identity a k τ μ ( x ) ξ μ + 1 i k ( x ) + ∂ i a k τ μ ( x ) ξ μ k ( x ) ≡ 0 , we obtain the identity a k τ μ ( x ) ( ∂ i ξ μ k ( x ) − ξ μ + 1 i k ( x ) ) ≡ 0 and thus the restriction d : R q + 1 → T * ⊗ R q . More generally, we have the restriction:

d : ∧ s T * ⊗ R q + 1 → ∧ s + 1 T * ⊗ R q : ( ξ μ , I k ( x ) d x I ) → ( ( ∂ i ξ μ , I k ( x ) − ξ μ + 1 i , I k ( x ) ) d x i ∧ d x I ) (1)

using standard multi-index notation for exterior forms, namely I = { i 1 < i 2 < ⋯ < i r } , d x I = d x i 1 ∧ ⋯ ∧ d x i r ∈ ∧ r T * for a finite basis, and one can easily check that d ∘ d = 0 . The restriction of − d to the symbol is called the Spencer map δ : ∧ s T * ⊗ g q + 1 → ∧ s + 1 T * ⊗ g q and δ ∘ δ = 0 similarly, leading to the purely algebraic δ -cohomology H q + r s ( g q ) ( [
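To make the Spencer operator concrete, here is a minimal sympy sketch (our own illustration, not taken from the paper) for n = 2 and m = 1: it checks that d vanishes on the holonomic jet j_1(ξ) of a sample function ξ, while a section of J_1(E) that is not of this form produces a nonzero obstruction. The sample function ξ is an arbitrary choice.

```python
# Minimal check of the Spencer operator (d xi_{q+1})^k_{mu,i} = d_i xi^k_mu - xi^k_{mu+1_i}
# for n = 2, m = 1, q = 0: it vanishes exactly on holonomic jets j_1(xi).
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
xi = sp.sin(x1) * sp.exp(x2)          # a sample section xi of E (arbitrary choice)

# first-order jets of xi obtained by true differentiation: the section j_1(xi)
jet1 = {(): xi, (1,): sp.diff(xi, x1), (2,): sp.diff(xi, x2)}

def spencer(section, i):
    """Component (d section)_{mu,i} = d_i section_mu - section_{mu+1_i} for mu = ()."""
    var = x1 if i == 1 else x2
    return sp.simplify(sp.diff(section[()], var) - section[(i,)])

# on the holonomic section j_1(xi) the Spencer operator vanishes identically
holonomic = [spencer(jet1, i) for i in (1, 2)]

# a section of J_1(E) NOT of the form j_1(xi): replace the jet xi_1 by 0
nonholonomic = dict(jet1)
nonholonomic[(1,)] = sp.Integer(0)
obstruction = spencer(nonholonomic, 1)   # = d_1 xi, generically nonzero
```

The dictionary encoding of jets by multi-indices is an implementation convenience only.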

DEFINITION 2.1.2: A system R q is said to be formally integrable when all the equations of order q + r are obtained by r prolongations only, ∀ r ≥ 0 or, equivalently, when the projections π q + r q + r + s : R q + r + s → R q + r ( s ) ⊆ R q + r are epimorphisms ∀ r , s ≥ 0 .

Finding an intrinsic test has been achieved by D.C. Spencer in 1965 ( [

THEOREM 2.1.3: R q is formally integrable (involutive) if π q q + 1 : R q + 1 → R q is an epimorphism and g q is 2-acyclic with H q + r 2 ( g q ) = 0 , ∀ r ≥ 0 (involutive with H q + r s ( g q ) = 0 , ∀ r ≥ 0 , ∀ s = 1 , ⋯ , n ). When R q is involutive, there exist n integers α q 1 ≥ ... ≥ α q n = α ≥ 0 called characters and we have

d i m ( g q + r ) = ∑ i = 1 n ( r + i − 1 ) ! / ( r ! ( i − 1 ) ! ) α q i , in particular d i m ( g q ) = α q 1 + ⋯ + α q n and d i m ( g q + 1 ) = α q 1 + 2 α q 2 + ⋯ + n α q n .

REMARK 2.1.4: As long as the Prolongation/Projection (PP) procedure has not been achieved in order to get an involutive system R q + r ( s ) for r , s large enough, nothing can be said about the CC. Fine examples can be found in [

[ ξ q , η q ] = { ξ q + 1 , η q + 1 } + i ( ξ ) d η q + 1 − i ( η ) d ξ q + 1 ∈ J q ( T ) , ∀ ξ q , η q ∈ J q ( T )

where i ( ) is the interior product, which is easily seen not to depend on the respective lifts at order q + 1 . A linear system R q ⊂ J q ( T ) is then called a Lie algebroid if [ R q , R q ] ⊂ R q and it can be proved that R q + r ( s ) is again a Lie algebroid, independently of any formal integrability condition as can be seen on the Lie algebroid preserving the 1-form x 2 d x 1 − x 1 d x 2 ∈ T * (See [

R 1 ( 3 ) ⊂ R 1 ( 2 ) ⊂ R 1 ( 1 ) = R 1 ⊂ J 1 ( T ) with dimensions 2 < 4 < 10 = 10 < 20

according to ( [

When R q is involutive, the linear differential operator D : E → j q J q ( E ) → Φ J q ( E ) / R q = F 0 of order q is said to be involutive. Introducing the set of solutions Θ ⊆ E and the Janet bundles:

F r = ∧ r T * ⊗ J q ( E ) / ( ∧ r T * ⊗ R q + δ ( S q + 1 T * ⊗ E ) ) (2)

we obtain the canonical linear Janet sequence (Introduced in [

0 → Θ → E → D F 0 → D 1 F 1 → D 2 ⋯ → D n F n → 0 (3)

where each other operator, induced by the Spencer operator, is first order involutive and generates the compatibility conditions (CC) of the preceding one. Similarly, introducing the Spencer bundles:

C r = ∧ r T * ⊗ R q / δ ( ∧ r − 1 T * ⊗ g q + 1 ) (4)

we obtain the canonical linear Spencer sequence also induced by the Spencer operator:

0 → Θ → j q C 0 → D 1 C 1 → D 2 ⋯ → D n C n → 0 (5)

In the case of analytic systems, the following theorem providing the Cartan-Kähler (CK) data is well known though its link with involution is rarely quoted because it is usually presented within the framework of exterior calculus ( [

THEOREM 2.1.5 (Cartan-Kähler): If R q ⊂ J q ( E ) is a linear involutive and analytic system of order q on E, there exists one analytic solution y k = f k ( x ) and only one such that:

1) ( x 0 , ∂ μ f k ( x 0 ) ) with 0 ≤ | μ | ≤ q − 1 is a point of R q − 1 = π q − 1 q ( R q ) ⊂ J q − 1 ( E ) .

2) For i = 1 , ⋯ , n the α q i parametric derivatives ∂ μ f k ( x ) of class i are equal for x i + 1 = x 0 i + 1 , ⋯ , x n = x 0 n to α q i given analytic functions of x 1 , ⋯ , x i .

The monomorphism 0 → J q + 1 ( E ) → J 1 ( J q ( E ) ) allows us to identify R q + 1 with its image R ¯ 1 in J 1 ( R q ) and we just need to set R q = E ¯ in order to obtain the first order system (Spencer form) R ¯ 1 ⊂ J 1 ( E ¯ ) which is also involutive and analytic while π 0 1 : R ¯ 1 → E ¯ is an epimorphism. Studying the respective symbols, we may identify g q + r and g ¯ r while g ¯ 1 is involutive. Looking at the Janet board of multiplicative variables we have α ¯ 1 i + β ¯ 1 i = m ¯ = d i m ( E ¯ ) and:

α ¯ 1 i = α q i + ⋯ + α q n = α q + 1 i ⇒ α q i = α ¯ 1 i − α ¯ 1 i + 1 = β ¯ 1 i + 1 − β ¯ 1 i

We obtain therefore:

COROLLARY 2.1.6: If R 1 ⊂ J 1 ( E ) is a first order linear involutive and analytic system such that π 0 1 : R 1 → E is an epimorphism, then there exists one analytic solution y k = f k ( x ) and only one, such that:

1) f 1 ( x ) , ⋯ , f β 1 1 ( x ) are equal to β 1 1 given constants when x = x 0 .

2) f β 1 i + 1 ( x ) , ⋯ , f β 1 i + 1 ( x ) are equal to β 1 i + 1 − β 1 i given analytic functions of x 1 , ⋯ , x i when x i + 1 = x 0 i + 1 , ⋯ , x n = x 0 n .

3) f β 1 n + 1 ( x ) , ⋯ , f m ( x ) are m − β 1 n given analytic functions of x 1 , ⋯ , x n .

If A is an associative ring with unit 1 ∈ A , a subset S ⊂ A is called a multiplicative subset if 1 ∈ S , 0 ∉ S and s t ∈ S , ∀ s , t ∈ S . In the commutative case, these conditions are sufficient to localize A at S by constructing the new ring of fractions S − 1 A over A. For simplicity, we shall suppose that A is an integral domain (no divisor of zero) and we shall choose S = A − { 0 } in order to introduce the field of fractions Q ( A ) = S − 1 A = A S − 1 . The idea is to exhibit new quantities

written a / s with the standard rules:

b ( a / s ) = a b / s , a / s + b / t = ( a t + b s ) / ( s t ) , ( a / s ) ( b / t ) = a b / ( s t ) , ∀ a , b ∈ A , ∀ s , t ∈ S

The same definition can be used for any module M over A in order to introduce the module of fractions S − 1 M over S − 1 A with the rules:

( a / s ) ( x / t ) = a x / ( s t ) , x / s + y / t = ( t x + s y ) / ( s t ) , ∀ x , y ∈ M

DEFINITION 2.2.1: t ( M ) = t S ( M ) = { x ∈ M | ∃ s ∈ S , s x = 0 } is called the torsion submodule of M over S and we have the exact sequence

0 → t ( M ) → M → θ S − 1 M of modules over A where the morphism θ on the right is x → x / 1 = s x / s and we have S − 1 M = S − 1 A ⊗ A M .

In the non-commutative case considered through all this paper, we shall meet four problems:

· How to compare s − 1 a with a s − 1 ?

· How to decide when we shall say that s − 1 a = t − 1 b ?

· How to multiply s − 1 a by t − 1 b ?

· How to find a common denominator for s − 1 a + t − 1 b ?

LEMMA 2.2.2: If there exists a (left) localization of a noetherian A with respect to S, then we must have the (left) “Ore condition” S a ∩ A s ≠ ∅ . It follows that A s ∩ A t ∩ S ≠ ∅ and two fractions can be multiplied or brought to the same denominator. Finally, t ( M ) is a submodule of M.

Proof: Roughly, any right fraction a s − 1 can be written as a left fraction t − 1 b , that is we must have t a = b s . Now, if we have two fractions s − 1 a and t − 1 b , we can find u , v ∈ A such that u s = v t ∈ S . Hence, we obtain s − 1 a = ( u s ) − 1 ( u a ) and t − 1 b = ( v t ) − 1 v b = ( u s ) − 1 v b . As for the multiplication of fractions, we have ( s − 1 a ) ( t − 1 b ) = s − 1 ( a t − 1 ) b = s − 1 ( u − 1 c ) b = ( u s ) − 1 ( c b ) .

Finally, given x , y ∈ t ( M ) , we can find s , t ∈ S such that s x = 0 , t y = 0 . We may thus find u , v ∈ A such that u s = v t ∈ S and we get u s ( x + y ) = u s x + v t y = 0 ⇒ x + y ∈ t ( M ) . Also, we can use t a = b s in order to obtain t ( a x ) = ( t a ) x = ( b s ) x = b ( s x ) = 0 ⇒ a x ∈ t ( M ) .

□

Let K be a differential field with n commuting derivations ( ∂ 1 , ⋯ , ∂ n ) and consider the ring D = K [ d 1 , ⋯ , d n ] = K [ d ] of differential operators with coefficients in K with n commuting formal derivatives satisfying d i a = a d i + ∂ i a in the operator sense. If P = a μ d μ ∈ D = K [ d ] , the highest value of | μ | with a μ ≠ 0 is called the order of the operator P and the ring D with multiplication ( P , Q ) → P ∘ Q = P Q is filtered by the order q of the operators. We have the filtration 0 ⊂ K = D 0 ⊂ D 1 ⊂ ⋯ ⊂ D q ⊂ ⋯ ⊂ D ∞ = D . As an algebra, D is generated by K = D 0 and T = D 1 / D 0 with D 1 = K ⊕ T if we identify an element ξ = ξ i d i ∈ T with the vector field ξ = ξ i ( x ) ∂ i of differential geometry, but with ξ i ∈ K now. It follows that D = D D D is a bimodule over itself, being at the same time a left D-module by the composition P → Q P and a right D-module by the composition P → P Q . We define the adjoint functor a d : D → D o p : P = a μ d μ → a d ( P ) = ( − 1 ) | μ | d μ a μ and we have a d ( a d ( P ) ) = P together with a d ( P Q ) = a d ( Q ) a d ( P ) , ∀ P , Q ∈ D . Such a definition can be extended to any matrix of operators by using the transposed matrix of adjoint operators (See [
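The two properties ad(ad(P)) = P and ad(PQ) = ad(Q)ad(P) can be checked by machine in the simplest case n = 1. The following sympy sketch is our own illustration; the encoding of an operator as a dict {order: coefficient} over K = ℚ(x) is an implementation choice, with composition driven by the rule d ∘ a = a d + ∂a.

```python
# Ordinary differential operators P = sum a_k d^k over K = Q(x), stored as {k: a_k}.
import sympy as sp

x = sp.symbols('x')

def compose(P, Q):
    """Composition P o Q, using the Leibniz rule d^k o b = sum_j C(k,j) b^(j) d^(k-j)."""
    R = {}
    for k, a in P.items():
        for l, b in Q.items():
            for j in range(k + 1):
                c = sp.binomial(k, j) * sp.diff(b, x, j) * a
                R[k - j + l] = sp.simplify(R.get(k - j + l, 0) + c)
    return {k: v for k, v in R.items() if v != 0}

def adjoint(P):
    """Formal adjoint ad(P) = sum (-1)^k d^k o a_k."""
    R = {}
    for k, a in P.items():
        for l, c in compose({k: sp.Integer(1)}, {0: a}).items():
            R[l] = sp.simplify(R.get(l, 0) + (-1)**k * c)
    return {k: v for k, v in R.items() if v != 0}

def equal(P, Q):
    keys = set(P) | set(Q)
    return all(sp.simplify(P.get(k, 0) - Q.get(k, 0)) == 0 for k in keys)

P = {1: sp.Integer(1), 0: x}                  # P = d + x
Q = {2: sp.Integer(1), 1: x, 0: x + 1}        # Q = d^2 + x d + x + 1
```

For instance ad(d + x) = −d + x, and applying ad twice returns d + x.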

PROPOSITION 2.2.3: D is an Ore domain with S − 1 D = D S − 1 when S = D − { 0 } .

Proof: Let U ∈ S and P ∈ D be given. In order to prove the Ore property for D, we must find V ∈ S and Q ∈ D such that V P = Q U . Considering the system P y = u , U y = v , it defines a differential module M over D with the finite presentation D 2 → D → M → 0 . Now, as we have only one unknown and D = D y in this sequence, then M is a torsion module and r k D ( M ) = 0 . From the additivity property of the differential ranks, if there were no compatibility condition (CC) (see the example below), then the first morphism on the left would be a monomorphism, a result leading to the contradiction 2 − 1 + 0 = 0 . Accordingly, we can find operators V and Q such that P U − 1 = V − 1 Q . Conversely, if now V and Q are given, using the adjoint functor and the fact that a d ( a d ( P ) ) = P , ∀ P ∈ D , we may obtain a d ( V ) and a d ( Q ) such that a d ( P ) a d ( V ) = a d ( U ) a d ( Q ) as before and thus V = a d ( a d ( V ) ) and Q = a d ( a d ( Q ) ) such that V P = Q U , a result showing that V − 1 Q = P U − 1 .

□

REMARK 2.2.4: As pointed out at the end of the Introduction, we now explain the link existing between localization and the operational calculus of Heaviside and Mikusinski (HM), following the fine reference ( [

Having in mind the sudden switch on of an electrical circuit, the key object of the HM theory is surely the Heaviside step function h which is equal to 0 for t < 0 and to 1 for t > 0 with a jump by 1 at t = 0 . The idea is to use it in order to endow the commutative ring A of real or complex valued continuous functions defined on the interval [ 0, ∞ ) with the structure of an integral domain (no divisor of zero). Denoting by { f ( t ) } or simply f a function, and by f ( t ) the value of f at t, we may define an addition { ( f + g ) ( t ) } = { f ( t ) + g ( t ) } and a specific multiplication through the convolution:

f g = f ∗ g = { ∫ 0 t f ( t − u ) g ( u ) d u } = g ∗ f

Now, having in mind the following symbolic computation:

y ′ = d y = h ↔ y = d^{−1} h = ∫ 0 t h ( u ) d u = t h ( t ) ⇒ d^{−n} h = ( t^n / n ! ) h ( t ) .

we may even consider for any constant parameter a the formula:

( d / ( d − a ) ) h = ( 1 / ( 1 − a d^{−1} ) ) h = ∑ r = 0 ∞ a^r d^{−r} h = ( ∑ r = 0 ∞ a^r t^r / r ! ) h = e^{a t} h

a result recalling what is well known for the Laplace transform.

In a more mathematical way, f → h f may be considered as the integration operator and we have successively:

h f = h ∗ f = ∫ 0 t f ( u ) d u = F ( t )

h 2 f = h 2 ∗ f = h ∗ ( h ∗ f ) = ∫ 0 t ( ∫ 0 u f ( s ) d s ) d u = ∫ 0 t F ( u ) d ( u − t )

⇒ h 2 f = ∫ 0 t d ( ( u − t ) F ( u ) ) + ∫ 0 t ( t − u ) d F ( u ) = ∫ 0 t ( t − u ) f ( u ) d u = t ∗ f

a result leading us to introduce the formula h^n ∗ f = { t^{n − 1} / ( n − 1 ) ! } ∗ f for n = 1 , 2 , ⋯ by induction.
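As a sanity check of the computation above, the following sympy sketch (our own illustration; the sample f = e^t is an arbitrary choice) verifies h ∗ (h ∗ f) = {t} ∗ f by direct integration:

```python
# Check h^2 * f = {t} * f in the convolution ring, for the sample f(t) = exp(t).
import sympy as sp

t, u = sp.symbols('t u', positive=True)

def conv(f, g):
    """Convolution (f * g)(t) = integral_0^t f(t - u) g(u) du on [0, oo)."""
    return sp.simplify(sp.integrate(f.subs(t, t - u) * g.subs(t, u), (u, 0, t)))

f = sp.exp(t)                 # sample continuous function
h = sp.Integer(1)             # the Heaviside function restricted to t > 0

h2f = conv(h, conv(h, f))     # h^2 * f = h * (h * f)
tf = conv(t, f)               # {t} * f
```

Both sides evaluate to e^t − t − 1, as predicted by the double integration above.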

Introducing as in algebra the multiplicatively closed subset S = { 1 , h , h^2 , h^3 , ⋯ } , we may introduce the ring of convolution quotients S − 1 A = A h with standard notations from algebra. The element s = 1 / h = h / h^2 is the differentiation operator because, if f has a continuous derivative f ′ , we have:

h f ′ = h ∗ f ′ = ∫ 0 t f ′ ( u ) d u = { f ( t ) − f ( 0 ) } = f − [ f ( 0 ) ] h ⇒ f ′ = s f − [ f ( 0 ) ]

where we have set [ a ] = { a } h ∈ A h for any pure real or complex number, as a way to introduce the boundary conditions. More generally, we have:

f^{( n )} = s^n f − s^{n − 1} [ f ( 0 ) ] − s^{n − 2} [ f ′ ( 0 ) ] − ⋯ − [ f^{( n − 1 )} ( 0 ) ]

Also, as we have ( s − [ a ] ) { e^{a t} } = s { e^{a t} } − { a e^{a t} } = ( { a e^{a t} } + [ 1 ] ) − { a e^{a t} } = 1 , we get:

( s − [ a ] )^n = ( 1 − [ a ] h )^n / h^n ⇒ 1 / ( s − a )^n = { ( t^{n − 1} / ( n − 1 ) ! ) e^{a t} } = { e^{a t} }^n

that can be obtained from the inversion of ( s − a ) { e^{a t} } = 1 by induction on n, starting from the case 1 / ( s − a )^2 = { t e^{a t} } when n = 2 , which the reader may check directly with some effort, as for n = 1 .
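The case n = 2 can indeed be checked directly: since {e^{at}} = 1/(s − a) and multiplication in A_h is convolution, {e^{at}}² is the convolution square, and a short sympy computation (our own illustration) confirms it equals {t e^{at}}:

```python
# Check {exp(at)} * {exp(at)} = {t exp(at)}, i.e. 1/(s - a)^2 = {t exp(at)}.
import sympy as sp

t, u = sp.symbols('t u', positive=True)
a = sp.symbols('a')

# convolution square of {exp(a t)}: integral_0^t e^{a(t-u)} e^{a u} du
integrand = sp.simplify(sp.exp(a * (t - u)) * sp.exp(a * u))   # = exp(a t)
lhs = sp.integrate(integrand, (u, 0, t))
rhs = t * sp.exp(a * t)
```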

We can finally integrate any linear OD equation:

a_n y^{( n )} + a_{n − 1} y^{( n − 1 )} + ⋯ + a_0 y = f ∈ A , a_n ≠ 0

with a unique solution whenever y ( 0 ) , y ′ ( 0 ) , ⋯ , y ( n − 1 ) ( 0 ) are n given numbers.

It must be noticed that such an elegant approach, found by K. Yosida and S. Okamoto in 1980, allows one to prove directly that A is an integral domain for the convolution, without any reference to Titchmarsh’s theorem used by J. Mikusinski in 1950. As we shall prove below, we do believe that such an explicit procedure of integration, though of course useful for electrical circuits, does not at all allow one to study the fine structure of the underlying differential modules defined by the corresponding systems (torsion submodules, extension modules, resolutions, ...).

Let us consider as in ( [

L x ˙ 2 + R 2 x 2 = u , R 1 C x ˙ 1 + x 1 = u , C x ˙ 1 + x 2 = y = − ( 1 / R 1 ) x 1 + x 2 + ( 1 / R 1 ) u .

Such a system can be set up at once in the standard matrix form x ˙ = A x + B u , y = C x + D u but we shall avoid the corresponding Kalman criterion, which could not be used if R 1 , R 2 , L or C should depend on time. The first two OD equations define a differential module N (See section 2.3) over the differential field K = ℚ ( R 1 , R 2 , L , C ) while the elimination of ( x 1 , x 2 ) provides the input submodule D u = L ⊂ N and the output submodule D y = M ⊂ N with ( L , M ) ⊆ N . However, taking into account Remark 2.1.4, which has never been used in control theory, in particular for electrical circuits, we have to distinguish carefully between two cases (See [

· If R 1 R 2 C ≠ L , we have a single second order CC for ( u , y ) and the system is observable, that is we have indeed the strict equality ( L , M ) = N (Exercise: We let the reader check this fact with R 1 = C = L = 1 , R 2 = 2 and get y ¨ + 3 y ˙ + 2 y − u ¨ − 3 u ˙ − u = 0 which is controllable).

· If R 1 R 2 C = L , we have only a single first order equation L y ˙ + R 2 y − R 2 C u ˙ − u = 0 which is controllable if and only if R 1 ≠ R 2 and we have the strict inclusion ( L , M ) ⊂ N (Exercise: Choose R 1 = R 2 = L = C = 1 and get y ˙ + y − u ˙ − u = 0 which is not controllable because z = y − u is a torsion element with z ˙ + z = 0 ).
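For the second case, a minimal sympy check (our own illustration; the general solution y = u + c e^{−t} of the input-output equation is an ansatz we introduce) confirms that z = y − u is killed by d + 1 whatever the input u, i.e. z is a torsion element:

```python
# With R1 = R2 = L = C = 1 the input-output equation is y' + y - u' - u = 0;
# we check that z = y - u then satisfies the autonomous equation z' + z = 0.
import sympy as sp

t = sp.symbols('t')
c = sp.symbols('c')
u = sp.Function('u')(t)

# general solution of y' + y = u' + u driven by an arbitrary input u
y = u + c * sp.exp(-t)
io = sp.simplify(sp.diff(y, t) + y - sp.diff(u, t) - u)   # input-output relation

z = y - u
residue = sp.simplify(sp.diff(z, t) + z)                  # z' + z
```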

Though it is already quite difficult to find such examples, there is an even more striking fact. Indeed, if we consider only the first two equations for ( x 1 , x 2 , u ) , we have a formally surjective first order operator D defined over K. Taking into account the intrinsic definition of controllability which is superseding Kalman’s (again because it allows one to treat time-depending coefficients as well), we let the reader check that the corresponding system is controllable if and only if the first order operator a d ( D ) is injective by applying again Remark 2.1.4 (See [

The following example of coupled pendula will prove that this result, still not acknowledged today by engineers, is not evident at all ( [

d 2 x + l 1 d 2 θ 1 + g θ 1 = 0 , d 2 x + l 2 d 2 θ 2 + g θ 2 = 0

where d = d / d t is the standard time derivative. Any reader can check experimentally that the system is controllable, that is the angles can reach any prescribed (small) values in a finite time when starting from equilibrium, if and only if l 1 ≠ l 2 and, in this case, we have the following 4^{th} order injective parametrization:

− l 1 l 2 d^4 ϕ − g ( l 1 + l 2 ) d^2 ϕ − g^2 ϕ = x , l 2 d^4 ϕ + g d^2 ϕ = θ 1 , l 1 d^4 ϕ + g d^2 ϕ = θ 2

⇒ ( l 2 − l 1 ) g^2 ϕ = ( l 1 − l 2 ) x + ( l 1 )^2 θ 1 − ( l 2 )^2 θ 2

Of course, if l 1 = l 2 = l , the system cannot be controllable because, setting θ = θ 1 − θ 2 , we obtain by subtraction l d^2 θ + g θ = 0 and thus θ ( 0 ) = 0 , d θ ( 0 ) = 0 ⇒ θ ( t ) = 0 .
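The claimed parametrization can be verified mechanically; here is a minimal sympy sketch (our own illustration; ϕ is an arbitrary function of t) substituting it into both pendulum equations:

```python
# Check that the 4th-order parametrization satisfies both pendulum equations identically.
import sympy as sp

t = sp.symbols('t')
l1, l2, g = sp.symbols('l1 l2 g', positive=True)
phi = sp.Function('phi')(t)
d = lambda f, k=1: sp.diff(f, t, k)      # d = d/dt

# the parametrization by the potential phi
x   = -l1*l2*d(phi, 4) - g*(l1 + l2)*d(phi, 2) - g**2*phi
th1 = l2*d(phi, 4) + g*d(phi, 2)
th2 = l1*d(phi, 4) + g*d(phi, 2)

# both pendulum equations must vanish identically in phi
eq1 = sp.simplify(d(x, 2) + l1*d(th1, 2) + g*th1)
eq2 = sp.simplify(d(x, 2) + l2*d(th2, 2) + g*th2)
```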

Using the differential double duality test ( [

For justifying our comment, we finally provide the following elementary example of a noncommutative localization with n = 1 , m = 1 , K = ℚ ( x ) , where we set d = d / d x for simplicity. Let us consider the two linear operators P = d + x and Q = d^2 + x d + x + 1 in D = K [ d ] and look, as in the previous proposition, for a least common left multiple A P = B Q of P and Q in D. Repeating the same procedure, let us consider the linear inhomogeneous system y_x + x y = u , y_{x x} + x y_x + x y + y = v . Prolonging once the first OD equation, we obtain y_{x x} + x y_x + y = u_x and thus x y = v − u_x by subtraction, a fact showing that the given system is not formally integrable. Dividing by x, we get y = ( 1 / x ) v − ( 1 / x ) u_x and, substituting in the first OD equation, we obtain the second order CC u_{x x} + ( x − 1 / x ) u_x + x u = v_x + ( x − 1 / x ) v leading to the common 3^{rd}-order multiple:

x d^3 + ( 2 x^2 − 1 ) d^2 + ( x^3 + x^2 + x ) d + ( x^3 + x^2 − 1 )

a result thus leading to A = d^2 + ( x − 1 / x ) d + x , B = d + ( x − 1 / x ) . As n = 1 , D is a principal ideal domain and we do not even need to substitute the value of y in the second OD equation as a way to find a new 4^{th}-order CC that could easily be seen to be differentially dependent on the 3^{rd}-order one already obtained. Denoting by M the D-module defined by the given system, we have thus obtained the following formally exact free resolution:

0 → D → D^2 → D →^{p} M → 0

where p is the canonical residual projection. Accordingly, this resolution is quite far from being “strictly” exact because the first operator on the right is not formally integrable (See [
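As a check on the least common left multiple computed above, the following sympy sketch (our own {order: coefficient} encoding of operators over K = ℚ(x), with composition driven by d ∘ a = a d + ∂a) composes A with P and B with Q and verifies that the results agree:

```python
# Verify A P = B Q for P = d + x, Q = d^2 + x d + x + 1,
# A = d^2 + (x - 1/x) d + x, B = d + (x - 1/x).
import sympy as sp

x = sp.symbols('x')

def compose(P, Q):
    """Composition of operators stored as {order: coefficient in Q(x)}."""
    R = {}
    for k, a in P.items():
        for l, b in Q.items():
            for j in range(k + 1):                 # Leibniz: d^k o b
                c = sp.binomial(k, j) * sp.diff(b, x, j) * a
                R[k - j + l] = sp.simplify(R.get(k - j + l, 0) + c)
    return {k: v for k, v in R.items() if v != 0}

P = {1: sp.Integer(1), 0: x}
Q = {2: sp.Integer(1), 1: x, 0: x + 1}
A = {2: sp.Integer(1), 1: x - 1/x, 0: x}
B = {1: sp.Integer(1), 0: x - 1/x}

AP, BQ = compose(A, P), compose(B, Q)
diff_ops = {k: sp.simplify(AP.get(k, 0) - BQ.get(k, 0)) for k in set(AP) | set(BQ)}
```

Multiplying the common value by x recovers the displayed 3rd-order multiple.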

EXAMPLE 2.2.5: With m = 1 , n = 2 , q = 1 , K = ℚ ( x 1 , x 2 ) let us consider the two first order operators U = d 2 ∈ S , P = d 1 + x 2 . Considering the formal system d 1 y + x 2 y = u , d 2 y = v , we obtain y = d 2 u − d 1 v − x 2 v and thus the involutive system with jet notations:

y_2 = v        1 2
y_1 + x^2 y = u        1 •
y = u_2 − v_1 − x^2 v        • •

Among the three CC that should exist, only two are non-trivial and provide the new second order (care!) involutive system:

A ≡ u_{22} − v_{12} − x^2 v_2 − 2 v = 0        1 2
B ≡ u_{12} + x^2 u_2 − u − v_{11} − 2 x^2 v_1 − ( x^2 )^2 v = 0        1 •

with the unexpected single first order CC C ≡ d 2 B − d 1 A − x 2 A = 0 . We obtain therefore the two operator identities:

d_{22} ( d_1 + x^2 ) = ( d_{12} + x^2 d_2 + 2 ) d_2 , ( d_{12} + x^2 d_2 − 1 ) ( d_1 + x^2 ) = ( d_{11} + 2 x^2 d_1 + ( x^2 )^2 ) d_2

leading again to the two unexpected localizations:

( d_1 + x^2 ) ( d_2 )^{−1} = ( d_{22} )^{−1} ( d_{12} + x^2 d_2 + 2 ) = ( d_{12} + x^2 d_2 − 1 )^{−1} ( d_{11} + 2 x^2 d_1 + ( x^2 )^2 )

Taking the adjoint operators, we get in particular ( d_1 − x^2 ) d_{22} = d_2 ( d_{12} − x^2 d_2 + 1 ) .
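Both displayed operator identities can be verified by letting the operators act on an arbitrary smooth function; a minimal sympy sketch (our own illustration, with d1, d2 realized as partial derivatives):

```python
# Check the two operator identities of this example on an arbitrary f(x1, x2).
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = sp.Function('f')(x1, x2)
d1 = lambda g: sp.diff(g, x1)
d2 = lambda g: sp.diff(g, x2)

Pf = d1(f) + x2*f                                # (d1 + x2) f

# first identity: d22 (d1 + x2) = (d12 + x2 d2 + 2) d2
id1 = sp.simplify(d2(d2(Pf)) - (d1(d2(d2(f))) + x2*d2(d2(f)) + 2*d2(f)))

# second identity: (d12 + x2 d2 - 1)(d1 + x2) = (d11 + 2 x2 d1 + (x2)^2) d2
lhs = d1(d2(Pf)) + x2*d2(Pf) - Pf
rhs = d1(d1(d2(f))) + 2*x2*d1(d2(f)) + x2**2*d2(f)
id2 = sp.simplify(lhs - rhs)
```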

To complete this example and explain why such methods, to our knowledge, have never been used for applications, it just remains to explain the equality of these two fractions in this framework. Indeed, we easily obtain the unique operator identity ( d_1 + x^2 ) d_{22} = d_2 ( d_{12} + x^2 d_2 − 1 ) provided by the last CC d_2 B = ( d_1 + x^2 ) A for u. Reducing to the same denominator can be done if we use the operator identity:

( d_1 + x^2 ) ( d_{12} + x^2 d_2 + 2 ) = d_2 ( d_{11} + 2 x^2 d_1 + ( x^2 )^2 )

produced by the same last CC d 2 B = ( d 1 + x 2 ) A for v. We conclude this example exhibiting the corresponding long exact sequence of differential modules:

0 → D → D 2 → D 2 → D → M → 0

where we have successively from left to right: D = D C , D 2 = D A + D B , D 2 = D u + D v , D = D y with Euler-Poincaré characteristic 1 − 2 + 2 − 1 = r k D ( M ) = 0 because m = 1 .

Accordingly, if y = ( y 1 , ⋯ , y m ) are differential indeterminates, then D acts on y k by setting d i y k = y i k → d μ y k = y μ k with d i y μ k = y μ + 1 i k and y 0 k = y k . We may therefore use the jet coordinates in a formal way as in the previous section. Therefore, if a system of OD/PD equations is written in the form Φ τ ≡ a k τ μ y μ k = 0 with coefficients a ∈ K , we may introduce the free left differential module D y = D y 1 + ⋯ + D y m ≃ D m and consider the differential module of equations I = D Φ ⊂ D y , together with the residual left differential module M = D y / D Φ or D-module, and we may set M = D M ∈ m o d ( D ) if we want to specify the action of the ring of differential operators. We may introduce the formal prolongation with respect to d i by setting d i Φ τ ≡ a k τ μ y μ + 1 i k + ( ∂ i a k τ μ ) y μ k in order to induce maps d i : M → M : y ¯ μ k → y ¯ μ + 1 i k by residue with respect to I if we denote the residue D y → M : y k → y ¯ k by a bar as in algebraic geometry. However, for simplicity, we shall not write down the bar when the background indicates clearly whether we are in D y or in M. As a byproduct, the differential modules we shall consider will always be finitely generated ( k = 1 , ⋯ , m < ∞ ) and finitely presented ( τ = 1 , ⋯ , p < ∞ ). Equivalently, introducing the matrix of operators D = ( a k τ μ d μ ) with m columns and p rows, we may introduce the morphism D p → D D m : ( P τ ) → ( P τ Φ τ ) over D by acting with D on the left of these row vectors while acting with D on the right of these row vectors by composition of operators, with i m ( D ) = I . The presentation of M is defined by the

exact cokernel sequence D p → D D m → p M → 0 . We notice that the presentation

only depends on K , D and Φ or D , that is to say never refers to the concept of (explicit local or formal) solutions. It follows from its definition that M can be endowed with a quotient filtration obtained from that of D m which is defined by the order of the jet coordinates y q in D q y . We have therefore the inductive limit 0 ⊆ M 0 ⊆ M 1 ⊆ ⋯ ⊆ M q ⊆ ⋯ ⊆ M ∞ = M with d i M q ⊆ M q + 1 and M = D M q for q ≫ 0 with prolongations D r M q ⊆ M q + r , ∀ q , r ≥ 0 .

An exact sequence of morphisms finishing at M is said to be a resolution of M. If the differential modules involved apart from M are free, that is isomorphic to a certain power of D, we shall say that we have a free resolution of M. In the general situation of a sequence M ′ → f M → g M ″ of modules which may not be exact, we may define the coboundary, cocycle and cohomology at M by setting respectively B = i m ( f ) ⊆ Z = k e r ( g ) ⇒ H = Z / B and apply the above result to the various short exact sequences like 0 → Z → M → i m ( g ) → 0 or 0 → B → Z → H → 0 . The deleted complex is obtained by replacing M by 0. Applying h o m D ( • , D ) , we obtain a sequence that may not be exact. The corresponding cohomology modules, called extension modules e x t D i ( M , D ) = e x t i ( M ) , are torsion modules for i ≥ 1 , do not depend on the resolution of M and only depend on K , D and M ( [

Having in mind that K is a left D-module with the action ( D , K ) → K : ( P , a ) → P ( a ) defined by ( b , a ) → b a = a b , ( d i , a ) → d i ( a ) = ∂ i a and that D is a bimodule over itself for the composition law of operators, we have only two possible constructions, depending only on K , D and M:

DEFINITION 2.2.6: We may define the right (care !) differential module h o m D ( M , D ) , using the bimodule structure of D D D and setting ( f P ) ( m ) = f ( m ) P while checking that:

( ( f P ) ( Q ) ) ( m ) = ( ( f P ) ( m ) ) Q = ( f ( m ) P ) Q = f ( m ) P Q = ( f P Q ) ( m )

REMARK 2.2.7: When M admits the finite presentation D p → D D m → p M → 0 , applying h o m D ( • , D ) we obtain the following long exact sequence of right (care!) differential modules:

0 ← N D ← D D p ← D D m ← h o m D ( M , D ) ← 0

and the so-called Malgrange isomorphism just amounts to the fact that e x t 0 ( M ) = h o m D ( M , D ) . We are immediately facing one of the most delicate problems of this section when dealing with applications and/or effective computations, a problem not solved in the corresponding literature which has been almost entirely using fields of constants ( [

THEOREM 2.2.8: There exists an isomorphism N D → N = D N = h o m K ( ∧ n T * , N D ) with inverse M = D M → M D = ∧ n T * ⊗ K M .

Proof: First of all, we prove that ∧ n T * has a natural right module structure over D by introducing the basic volume n-form α = d x 1 ∧ ⋯ ∧ d x n = d x and defining α ⋅ P = a d ( P ) ( 1 ) d x , ∀ P ∈ D . We have α ⋅ ξ = − ∂ i ξ i d x = − d i v ( ξ ) d x = − L ( ξ ) α , ∀ ξ = ξ i d i ∈ T where L is the classical Lie derivative on forms and obtain therefore:

α·(aξ) = (a dx)·ξ = −∂_i(aξ^i)dx = −a∂_iξ^i dx − ξ(a)dx = −a∂_iξ^i dx − α·(ξ(a))

α·(ξa) = (α·ξ)a = −a∂_iξ^i dx ⇒ α·(ξa) = α·(aξ + ξ(a))

From well known properties of the Lie derivative, we have also:

α·(ξη − ηξ) = −(L(ξ)L(η) − L(η)L(ξ))α = −L([ξ, η])α

Now, using the adjoint map ad : D → D : P → ad(P), we may introduce the adjoint functor ad : mod(D) → mod(D^op) : M → ad(M) with m·P = ad(P)m, ∀m ∈ M, ∀P ∈ D, and we have m·(PQ) = ad(PQ)m = ad(Q)ad(P)m = (m·P)·Q, ∀P, Q ∈ D.

It remains to introduce the K-linear isomorphism ad(M) ≃ M_D : m → m⊗α with:

(m⊗α)a = am⊗α = m⊗aα, (m⊗α)ξ = −ξm⊗α + m⊗α·ξ, ∀m ∈ M, ∀ξ ∈ T

and check that (m⊗α)(ξa) = (m⊗α)(aξ + ξ(a)). These definitions are coherent because, when d is any d_i, we have div(d) = 0 and thus (m⊗α)d = −dm⊗α, a result leading to the formula:

(m⊗α)P = (m⊗α)(a^μ d_μ) = (a^μ m⊗α)d_μ = (−1)^|μ| d_μ a^μ m⊗α = ad(P)m⊗α

The isomorphism ad(D) ≃ D_D : P → ad(P) is also right D-linear because we have successively:

P·Q → ad(P·Q) = ad(ad(Q)P) = ad(P)ad(ad(Q)) = ad(P)Q

These unexpected results explain why the formal adjoint cannot be avoided in non-commutative localization and is so important for applications ranging from control theory ( [

□
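For n = 1 these sign conventions are easy to test on a machine. The following sympy sketch (the helper names apply_op and adjoint are ours, not from the text) computes ad(P) = Σ(−1)^k d^k ∘ a_k as a coefficient list and checks that ad is an involution and that α·ξ = ad(ξ)(1)dx = −div(ξ)dx:

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')(x)

def apply_op(coeffs, f):
    # Apply P = sum_k a_k(x) d^k to an expression f, with coeffs[k] = a_k.
    return sp.expand(sum(a * sp.diff(f, x, k) for k, a in enumerate(coeffs)))

def adjoint(coeffs):
    # Formal adjoint ad(P) : f -> sum_k (-1)^k d^k(a_k f), as a coefficient list.
    q = len(coeffs) - 1
    expr = sp.expand(sum((-1) ** k * sp.diff(coeffs[k] * u, x, k) for k in range(q + 1)))
    return [sp.simplify(expr.coeff(sp.Derivative(u, (x, k)) if k else u)) for k in range(q + 1)]

P = [sp.Integer(1), x]                 # P = x d + 1, coefficients listed from order 0 up
assert adjoint(P) == [0, -x]           # ad(P) = -x d
assert adjoint(adjoint(P)) == P        # ad is an involution: ad(ad(P)) = P
# alpha·xi = ad(xi)(1) dx = -div(xi) dx: for xi = x^2 d, ad(xi)(1) = -2x
assert apply_op(adjoint([sp.Integer(0), x**2]), sp.Integer(1)) == -2*x
```

The same bookkeeping extends to n > 1 by replacing the single derivation with commuting d_1, ⋯, d_n.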

DEFINITION 2.2.9: We define the system R = hom_K(M, K) and set R_q = hom_K(M_q, K) as the system of order q. We have the projective limit R = R_∞ → ⋯ → R_q → ⋯ → R_1 → R_0. It follows that f_q ∈ R_q : y^k_μ → f^k_μ ∈ K with a^{τμ}_k f^k_μ = 0 defines a section at order q and we may set f_∞ = f ∈ R for a section of R. For an arbitrary differential field K, such a definition has nothing to do with the concept of a formal power series solution (care).

Similarly to the preceding definition, we may define the left (care!) differential module hom_K(D, K), using again the bimodule structure of D and setting (Qf)(P) = f(PQ), ∀P, Q ∈ D, in particular with (ξf)(P) = f(Pξ), ∀ξ ∈ T, ∀P ∈ D. However, we should have (af)(P) = f(Pa) ≠ f(aP) = a(f(P)), ∀a ∈ K, unless K is a field of constants, as in most of the literature ( [

PROPOSITION 2.2.10: When M is a left D-module, then R is also a left D-module.

Proof: As D is generated by K and T, as we already said, let us define:

(af)(m) = af(m) = f(am), ∀a ∈ K, ∀m ∈ M

(ξf)(m) = ξf(m) − f(ξm), ∀ξ = a^i d_i ∈ T, ∀m ∈ M

In the operator sense, it is easy to check that d_i a = a d_i + ∂_i a and that ξη − ηξ = [ξ, η] is the standard bracket of vector fields. Using simply ∂ in place of any ∂_i and d in place of any d_i, we have:

((da)f)(m) = (d(af))(m) = d(a f(m)) − a f(dm) = (∂a)f(m) + a d(f(m)) − a f(dm) = (a(df))(m) + (∂a)f(m) = ((a d + ∂a)f)(m)

We finally get (d_if)^k_μ = (d_if)(y^k_μ) = ∂_if^k_μ − f^k_{μ+1_i} and thus recover exactly the Spencer operator of the previous section, though this is not evident at all. We also get (d_id_jf)^k_μ = ∂_{ij}f^k_μ − ∂_if^k_{μ+1_j} − ∂_jf^k_{μ+1_i} + f^k_{μ+1_i+1_j} ⇒ d_id_j = d_jd_i, ∀i, j = 1, ⋯, n, and thus d_iR_{q+1} ⊆ R_q ⇒ d_iR ⊂ R induces a well defined operator R → T*⊗R : f → dx^i ⊗ d_if. This operator was first introduced, up to sign, by F.S. Macaulay as early as 1916 but this is still not acknowledged ( [

□
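The commutation d_i d_j = d_j d_i can be checked on any truncated section; a minimal sympy sketch with n = 2 (the dictionary encoding multi-index → K-value is our own convention) follows:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

def spencer(f, i, max_order):
    # (d_i f)_mu = ∂_i f_mu − f_{mu+1_i}, computed for all mu with |mu| < max_order
    g = {}
    for mu, val in f.items():
        if sum(mu) >= max_order:
            continue
        nu = list(mu); nu[i] += 1
        g[mu] = sp.expand(sp.diff(val, (x1, x2)[i]) - f[tuple(nu)])
    return g

# a section truncated at order 2 (n = 2, m = 1) with components in K = Q(x1, x2)
f = {(0, 0): x1*x2, (1, 0): x1**2, (0, 1): x2**3,
     (2, 0): x1 + x2, (1, 1): x1*x2**2, (0, 2): x1/2}

# d_1 d_2 f = d_2 d_1 f on the part where both sides are defined (order 0 here)
assert spencer(spencer(f, 1, 2), 0, 1) == spencer(spencer(f, 0, 2), 1, 1)
```

Note that the components f_μ need not be derivatives of one another, which is exactly why a section is not a formal power series solution.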

PROPOSITION 2.2.11: When M and N are left D-modules, then M⊗_K N is also a left D-module.

Proof: As before, we may define:

a(m⊗n) = am⊗n = m⊗an, ∀a ∈ K, ∀m ∈ M, ∀n ∈ N

ξ(m⊗n) = ξm⊗n + m⊗ξn, ∀ξ ∈ T, ∀m ∈ M, ∀n ∈ N

and let the reader finish as an exercise.

□

COROLLARY 2.2.12: The two structures of left D-modules obtained in these two propositions are coherent with the following adjoint isomorphism existing for any triple L, M, N ∈ mod(D):

hom_D(M⊗_K N, L) →^φ hom_D(M, hom_K(N, L))

Proof: Whenever f ∈ hom_D(M⊗_K N, L), we may define φ(f) = g by (g(m))(n) = f(m⊗n) ∈ L and we have successively, for any ξ ∈ T:

(ξ(g(m)))(n) = ξ((g(m))(n)) − (g(m))(ξn) = ξ(f(m⊗n)) − f(m⊗ξn) = f(ξ(m⊗n)) − f(m⊗ξn)

that is (ξ(g(m)))(n) = f(ξm⊗n) = (g(ξm))(n) and thus ξ(g(m)) = g(ξm), ∀m ∈ M.

The inverse morphism can be studied similarly.

□

COROLLARY 2.2.13: R = hom_K(M, K) ≃ hom_D(M, hom_K(D, K)).

Proof: As K is a field, thus a commutative ring, we have the isomorphism of left D-modules M⊗_K N ≃ N⊗_K M : m⊗n → n⊗m, ∀m ∈ M, ∀n ∈ N, and we may exchange M and N. As K is a left differential module for the rule (P, a) → P(a), ∀P ∈ D, ∀a ∈ K, we obtain:

hom_D(M, hom_K(D, K)) ≃ hom_D(M⊗_K D, K) ≃ hom_D(D⊗_K M, K) ≃ hom_D(D, hom_K(M, K)) ≃ hom_K(M, K)

□

COROLLARY 2.2.14: The differential module hom_K(D, K) is an injective differential module.

Proof: When 0 → M′ → M → M″ → 0 is a short exact sequence of modules and N is any module, we only have the exact sequence 0 → hom(M″, N) → hom(M, N) → hom(M′, N) obtained by composition of morphisms. In the present situation, using the previous corollaries, we have the following commutative and exact diagram because K is a field (See [

          0                                   0
          ↓                                   ↓
hom_D(M, hom_K(D, K))  →  hom_D(M′, hom_K(D, K))
          ↓                                   ↓
    hom_K(M, K)        →     hom_K(M′, K)  →  0
          ↓                                   ↓
          0                                   0

Chasing in this diagram, we deduce that the upper morphism is an epimorphism and that hom_K(D, K) is an injective module, because hom(•, hom_K(D, K)) transforms a short exact sequence into a short exact sequence. The reader may compare such an approach with the one used in ( [

□

DEFINITION 2.2.15: With any differential module M we shall associate the graded module G = gr(M) over the polynomial ring gr(D) ≃ K[χ] by setting G = ⊕_{q=0}^∞ G_q with G_q = M_q/M_{q−1}, and we get g_q = G_q^*, where the symbol g_q is defined by the short exact sequences:

0 → M_{q−1} → M_q → G_q → 0 ⇔ 0 → g_q → R_q → R_{q−1} → 0

We have the short exact sequences 0 → D_{q−1} → D_q → S_qT → 0 leading to gr_q(D) ≃ S_qT, and we may set as usual T* = hom_K(T, K) in a way coherent with differential geometry.

The two following definitions, which are well known in commutative algebra, are also valid (with more work) in the case of differential modules (See [

DEFINITION 2.2.16: The set of elements t(M) = {m ∈ M | ∃ 0 ≠ P ∈ D, Pm = 0} ⊆ M is a differential module called the torsion submodule of M. More generally, a module M is called a torsion module if t(M) = M and a torsion-free module if t(M) = 0. In the short exact sequence 0 → t(M) → M → M′ → 0, the module M′ is torsion-free. Its defining module of equations I′ is obtained by adding to I a representative basis of t(M) set to zero and we thus have I ⊆ I′.

DEFINITION 2.2.17: A differential module F is said to be free if F ≃ D^r for some integer r > 0 and we shall define rk_D(F) = r. If F is the biggest free differential module contained in M, then M/F is a torsion differential module and hom_D(M/F, D) = 0. In that case, we shall define the differential rank of M to be rk_D(M) = rk_D(F) = r. Accordingly, if M is defined by a linear involutive operator of order q, then rk_D(M) = α^n_q.

PROPOSITION 2.2.18: If 0 → M′ → M → M″ → 0 is a short exact sequence of differential modules and maps or operators, we have rk_D(M) = rk_D(M′) + rk_D(M″).

REMARK 2.2.19: We emphasize once more that the left D-module hom_K(D, K) used in the literature ( [

(af)(P) = f(Pa), (ξf)(P) = f(Pξ), ∀a ∈ K, ∀ξ ∈ T, ∀P ∈ D

As hom_K(M, K) ≃ hom_D(M, hom_K(D, K)), these formulas are not coherent at all with those of Proposition 2.2.10, namely:

(af)(m) = a(f(m)) = f(am), (ξf)(m) = ξ(f(m)) − f(ξm), ∀a ∈ K, ∀ξ ∈ T, ∀m ∈ M

unless K is a field of constants, in particular because, when M = D, then Pa ≠ aP in general ( [

In order to conclude this section, we may say that the main difficulty met when passing from the differential framework to the algebraic framework is the “inversion” of arrows. Indeed, with dim(E) = m, dim(F) = p, when an operator D : E → F is injective, that is when we have the exact sequence 0 → E →^D F, as in the case of the operator 0 → E →^{j_q} J_q(E), then, on the contrary, using differential modules, we have the epimorphism D^p →^D D^m → 0. The case of a formally surjective operator, like the div operator, described by the exact sequence E →^D F → 0, now provides the exact sequence of differential modules 0 → D^p →^D D^m → M → 0 because D has no CC.

We are now ready for using the results of the second section on the Cartan-Kähler theorem. For such a purpose, separating the parametric jets, which can be chosen arbitrarily, from the principal jets, which can be obtained by using the fact that the given OD or PD equations have coefficients in a differential field K, we may write the solved equations in the symbolic form y_{pri} − c^{par}_{pri} y_{par} = 0 with c ∈ K and an implicit (finite) summation, in order to obtain for the sections f_{pri} − c^{par}_{pri} f_{par} = 0. Using the language of Macaulay, it follows that the so-called modular equations are E ≡ f_{pri} a^{pri} + f_{par} a^{par} = 0, with possibly an infinite number of terms in the implicit summations. Substituting, we get at once E ≡ f_{par}(a^{par} + c^{par}_{pri} a^{pri}) = 0. Ordering the y_{par} as we already did and using a basis {(1,0,⋯), (0,1,0,⋯), (0,0,1,0,⋯), ⋯} for the f_{par}, we may select the parametric modular equations E^{par} ≡ a^{par} + c^{par}_{pri} a^{pri} = 0.

When k is a field of constants and a polynomial P = a_μχ^μ ∈ k[χ] of degree q is multiplied by a monomial χ^ν with |ν| = r, we get χ^νP = a_μχ^{μ+ν}. Hence, if 0 ≤ |μ| ≤ q, the “shifted” polynomial thus obtained is such that r ≤ |μ+ν| ≤ q+r, and the difference between the maximum degree and the minimum degree of the monomials involved is always equal to q and thus fixed. When n = 1, one can exhibit a series made only of 0s and 1s, like f = (1,0,1,0,0,1,0,0,0,1,0,0,⋯), with “zero zones” of successive increasing lengths 1, 2, 3, 4, ⋯ separated by a 1, in such a way that the contraction with the shifted polynomial is the leading term of the given polynomial, and extend this procedure to arbitrary n.
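This mechanism can be sketched in a few lines of Python (the helper contract and the chosen sample coefficients are ours): shifting P of degree 2 by χ³ makes its window of monomials occupy positions 3, 4, 5, where only position 5 carries a 1, so only the leading coefficient survives the contraction.

```python
from fractions import Fraction

# 1s at positions 0, 2, 5, 9, 14, ...: zero zones of successive lengths 1, 2, 3, 4, ...
ones = {0, 2, 5, 9, 14, 20}
f = [1 if k in ones else 0 for k in range(25)]

def contract(coeffs, series):
    # contraction of P = sum_k a_k chi^k with a series: sum_k a_k * f_k
    return sum(a * series[k] for k, a in enumerate(coeffs))

a0, a1, a2 = Fraction(7), Fraction(-3), Fraction(5)
P = [a0, a1, a2]                     # a constant-coefficient polynomial of degree q = 2
shifted = [0, 0, 0] + P              # chi^3 * P occupies positions 3, 4, 5
assert contract(shifted, f) == a2    # only the leading term lands on a 1 (position 5)
assert contract([0]*7 + P, f) == a2  # chi^7 * P occupies 7, 8, 9: same conclusion
```

Because the window of length q+1 is fixed, every sufficiently large zero zone admits such a placement, which is exactly why the zone lengths are taken strictly increasing.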

Replacing χ_i by d_i and degree by order, we may use the results of section 2 in order to split the CK-data into m formal power series of 0 (constants), 1, ⋯, n variables that we shall call series of type i for i = 0, 1, ⋯, n. However, as the following elementary example will show, the shifting procedure cannot be applied to the variable coefficient case, namely when K is used in place of k. Indeed, with n = 1, d_x = d, K = ℚ(x) and P = d² − x³, if we contract d³P = d⁵ − x³d³ − d² with the series f = (1,0,1,0,0,1,0,⋯) already defined when n = 1, we get 1 − 1 = 0 though P does not kill f, because d²f = (1,0,0,1,0,0,⋯) ⇒ Pf = (1 − x³, ⋯) and the contraction of P with f is 1 − x³ ≠ 0.

WE SHALL ESCAPE FROM THIS DIFFICULTY BY MEANS OF A TRICK BASED ON A SYSTEMATIC USE OF THE SPENCER OPERATOR (Compare to [

The idea will be to shift the series to the left (decreasing ordering), up to sign, instead of shifting the operator to the right (increasing ordering). For this, we notice that we want the contraction of P = a^μd_μ, where |μ| ≤ q = ord(P), with f to be zero, that is a^μf_μ = 0 ⇒ (∂_ia^μ)f_μ + a^μ(∂_if_μ) = 0, ∀i = 1, ⋯, n. But d_iP = a^μd_{μ+1_i} + (∂_ia^μ)d_μ must also contract to zero with f, that is a^μf_{μ+1_i} + (∂_ia^μ)f_μ = 0. Subtracting, we obtain therefore the condition a^μ(∂_if_μ − f_{μ+1_i}) = 0, that is P must also contract to zero with the shift d_if, or even d_νf, of f when f is made with 0 and 1 only. Applying this computation to the above example, we get −df = (0,1,0,0,1,0,⋯) ⇒ d²f = (1,0,0,1,0,⋯) ⇒ −d³f = (0,0,1,0,⋯), and the contraction with P provides the leading coefficient 1 ≠ 0 of P, like the contraction of d³P with d⁴f = (0,1,0,0,0,1,0,⋯); that is, the same series can be used but in a quite different framework. Also, in the finite dimensional case existing when the symbol g_q of R_q is of finite type, that is when g_{q+r} = 0 for a certain integer r ≥ 0, applying the δ-sequence inductively to g_{q+n+i} for i = r−1, ⋯, 0 as in ( [
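Under the same encoding, the following sympy sketch (the names spencer_shift and contract are ours) replays the computation just described for P = d² − x³: the contraction of P with f itself is 1 − x³, while the contraction with the left-shifted series −d³f recovers the leading coefficient 1.

```python
import sympy as sp

x = sp.symbols('x')

# f = (1,0,1,0,0,1,0,0,0,1,...) with entries viewed in K = Q(x)
ones = {0, 2, 5, 9, 14, 20}
f = [sp.Integer(1) if k in ones else sp.Integer(0) for k in range(25)]

def spencer_shift(series):
    # (df)_k = ∂f_k − f_{k+1}: for a 0/1 series this is minus the left shift
    return [sp.diff(v, x) - series[k + 1] for k, v in enumerate(series[:-1])]

def contract(coeffs, series):
    return sp.expand(sum(a * series[k] for k, a in enumerate(coeffs)))

P = [-x**3, sp.Integer(0), sp.Integer(1)]   # P = d^2 - x^3, ord(P) = 2

assert contract(P, f) == 1 - x**3           # P does not kill f
g = f
for _ in range(3):                          # three Spencer shifts: g = d^3 f
    g = spencer_shift(g)
assert contract(P, [-v for v in g]) == 1    # contraction with -d^3 f = leading coefficient
```

The shift acts on the series rather than on the operator, so the variable coefficients of P are never differentiated, which is the whole point of the trick.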

THEOREM 2.2.20: If M is a differential module over D = K[d] defined by a first order involutive system in the m unknowns y^1, ⋯, y^m with no zero order equation, the differential module R = hom_K(M, K) may be generated over D by a finite basis of sections containing m generators.

In the general situation, counting the number of CK data, we have α^1_q + ⋯ + α^n_q = dim(g_q) and dim(R_q) = dim(g_q) + dim(R_{q−1}). We obtain therefore the following result, which is coherent with the number of unknowns in the Spencer form R_{q+1} ⊂ J_1(R_q).

COROLLARY 2.2.21: If M is a differential module over D = K[d] defined by an involutive system R_q ⊂ J_q(E), the differential module R = hom_K(M, K) may be generated over D by a finite basis of sections containing dim(R_q) generators.

EXAMPLE 2.2.22: Among the most interesting examples with m = 1, n = 3, K = ℚ, we present the following second order system provided by Macaulay in 1916 ( [

y_33 = 0, y_23 − y_11 = 0, y_22 = 0

It is easy to check that g_2 with dim(g_2) = 3 is not involutive, that g_3 with dim(g_3) = 1 is 2-acyclic because y_123 − y_111 = 0, and that g_4 = 0 is trivially involutive. Accordingly, only the system R_4 is involutive, with the 8 = 2³ parametric jets (y, y_1, y_2, y_3, y_11, y_12, y_13, y_111); all three characters vanish, as do all the jets of order ≥ 4, because the system is homogeneous.

Let us consider the following 8 sections with all other components equal to zero:

section | y  y_1  y_2  y_3  y_11  y_12  y_13  y_23  y_111  y_123
f^1     | 1   0    0    0    0     0     0     0     0      0
f^2     | 0   1    0    0    0     0     0     0     0      0
f^3     | 0   0    1    0    0     0     0     0     0      0
f^4     | 0   0    0    1    0     0     0     0     0      0
f^5     | 0   0    0    0    1     0     0     1     0      0
f^6     | 0   0    0    0    0     1     0     0     0      0
f^7     | 0   0    0    0    0     0     1     0     0      0
f^8     | 0   0    0    0    0     0     0     0     1      1

Taking into account the two PD equations y_23 − y_11 = 0 and y_123 − y_111 = 0, we obtain successively:

d_1f^8 = −f^5, d_1f^7 = −f^4, d_3f^7 = −f^2, ⋯, d_1f^2 = −f^1

Or, equivalently, working with the corresponding modular equations:

E^8 ≡ a^{111} + a^{123} = 0 ⇒ −d_1E^8 ≡ E^5 ≡ a^{11} + a^{23} = 0

and so on. It follows that all the sections can be generated by the single section f^8 and all the modular equations can be generated by the single modular equation E^8 = 0, a result absolutely not evident at first sight but coherent with the fact that the radical of the annihilator of M is the maximal ideal m = (d_1, d_2, d_3).
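Over K = ℚ the components of a section are constants, so the Spencer operator reduces to (d_if)_μ = −f_{μ+1_i}. A minimal Python sketch (the dictionary encoding of the board above is our own) verifies the relations just listed:

```python
# sections of Macaulay's system y_33 = 0, y_23 - y_11 = 0, y_22 = 0 over K = Q,
# encoded as dicts {multi-index: value} keeping only the nonzero components
f = {
    1: {(0, 0, 0): 1}, 2: {(1, 0, 0): 1}, 3: {(0, 1, 0): 1}, 4: {(0, 0, 1): 1},
    5: {(2, 0, 0): 1, (0, 1, 1): 1},      # y_11 and y_23
    6: {(1, 1, 0): 1}, 7: {(1, 0, 1): 1},
    8: {(3, 0, 0): 1, (1, 1, 1): 1},      # y_111 and y_123
}

def spencer(sec, i):
    # constant components over Q, hence (d_i f)_mu = -f_{mu+1_i}
    out = {}
    for mu, v in sec.items():
        if mu[i] > 0:
            nu = list(mu); nu[i] -= 1
            out[tuple(nu)] = -v
    return out

assert spencer(f[8], 0) == {(2, 0, 0): -1, (0, 1, 1): -1}   # d_1 f^8 = -f^5
assert spencer(f[7], 0) == {(0, 0, 1): -1}                  # d_1 f^7 = -f^4
assert spencer(f[7], 2) == {(1, 0, 0): -1}                  # d_3 f^7 = -f^2
assert spencer(f[2], 0) == {(0, 0, 0): -1}                  # d_1 f^2 = -f^1
```

Iterating spencer on f[8] in this way reaches every other section of the board, which is the computational content of the claim that f^8 generates R.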

Finally, with m = 1, n = 3 but K = ℚ(x_1, x_2, x_3), an even more striking example has been provided by M. Janet in 1920 ( [

y_33 − x_2 y_11 = 0, y_22 = 0

In this case, dim(R) = 12 < ∞ but R can be generated by the single modular equation:

E ≡ a^{12333} + x_2 a^{1333} + a^{1113} = 0

because all the jets of order > 5 vanish (See [

EXAMPLE 2.2.23: If K = ℚ(x) and n = 1, m = 1, let us consider the third order OD equation Φ ≡ y_xxx − y_x = 0 for which we may exhibit the basis of sections:

{f^1 = (1,0,0,0,0,⋯), f^2 = (0,1,0,1,0,1,⋯), f^3 = (0,0,1,0,1,0,⋯)}

With d = d_x and ∂ = ∂_x, we obtain df^1 = 0, df^2 = −f^1 − f^3, df^3 = −f^2 and check that all the sections can be generated by a single one, namely f^3, which describes the power series of the solution y = f(x) = ch(x) − 1. We have indeed ∂f = sh(x) and ∂²f(x) = ch(x).
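The claim that f^3 is the jet series of y = ch(x) − 1 and the relations between the three sections can be verified directly; a small sympy sketch (our encoding of sections as truncated lists of constants):

```python
import sympy as sp

x = sp.symbols('x')

# Taylor coefficients ∂^k f(0) of f(x) = cosh(x) - 1 up to order 7
jet = [sp.diff(sp.cosh(x) - 1, x, k).subs(x, 0) for k in range(8)]
assert jet == [0, 0, 1, 0, 1, 0, 1, 0]     # this is the section f^3 = (0,0,1,0,1,0,...)
assert all(jet[k + 3] == jet[k + 1] for k in range(5))   # y_xxx - y_x = 0 at every order

# Spencer operator on constant sections: (df)_k = -f_{k+1}
f1, f2, f3 = [1, 0, 0, 0, 0, 0, 0, 0], [0, 1, 0, 1, 0, 1, 0, 1], jet
d = lambda s: [-s[k + 1] for k in range(len(s) - 1)]
assert d(f1) == [0] * 7                                   # d f^1 = 0
assert d(f2) == [-(a + b) for a, b in zip(f1, f3)][:7]    # d f^2 = -f^1 - f^3
assert d(f3) == [-v for v in f2][:7]                      # d f^3 = -f^2
```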

With now m = 2, let us consider the differential module defined by the system y^1_xx − y^1 = 0, y^2_x = 0. Setting y = y^1 − y^2, we successively get:

y_x = y^1_x, y_xx = y^1_xx = y^1, y_xxx = y^1_x ⇒ y^1 = y_xx, y^1_x = y_x, y^1_xx = y_xx, y^1_xxx = y_x, ⋯ ⇒ y^2 = y_xx − y, y^2_x = 0, ⋯

and a differential isomorphism with the module defined by the new system y_xxx − y_x = 0. The sections of the second system are easily seen to be generated by the single section f^3 = (0,0,1,0,⋯), a result leading to the only generating section f^1 = 1, f^1_x = 0, f^1_xx = 1, ⋯, f^2 = 1, f^2_x = 0, ⋯ of the initial system, but these sections do not describe solutions because ∂f^1 − f^1_x = 0 and ∂f^2 − f^2_x = 0 while ∂f^1_x − f^1_xx = −1 ≠ 0. We do not know any reference in computer algebra dealing with sections.

EXAMPLE 2.2.24: With n = 1, m = 1, q = 2, K = ℚ(x), let us consider the second order OD equation Φ ≡ y_xx − xy = 0. We successively obtain by prolongation:

y_xxx − xy_x − y = 0, y_xxxx − 2y_x − x²y = 0, y_xxxxx − x²y_x − 4xy = 0, y_xxxxxx − 6xy_x − (x³+4)y = 0

and so on. We obtain the corresponding board describing the maps ρ_r(Φ):

order | y        y_x   y_xx  y_xxx  y_xxxx  y_xxxxx  y_xxxxxx  ⋯
2     | −x       0     1     0      0       0        0         ⋯
3     | −1       −x    0     1      0       0        0         ⋯
4     | −x²      −2    0     0      1       0        0         ⋯
5     | −4x      −x²   0     0      0       1        0         ⋯
6     | −(x³+4)  −6x   0     0      0       0        1         ⋯

Let us define the sections f′ and f″ by the following board, where d = d_x:

section | y   y_x  y_xx  y_xxx  y_xxxx  y_xxxxx  y_xxxxxx  ⋯
f′      | 1   0    x     1      x²      4x       x³+4      ⋯
f″      | 0   1    0     x      2       x²       6x        ⋯
df′     | 0   −x   0     −x²    −2x     −x³      −6x²      ⋯
df″     | −1  0    −x    −1     −x²     −4x      −x³−4     ⋯

in order to obtain df′ = −xf″, df″ = −f′. Though this is not evident at first sight, the two boards are orthogonal over K, in the sense that each row of one board contracts to zero with each row of the other, though only the rows of the first board contain a finite number of nonzero elements. It is absolutely essential to notice that the sections f′ and f″ have nothing to do with solutions, because df′ ≠ 0, df″ ≠ 0 on one side and also because d²f′ − xf′ = −f″ = (1/x)df′ ≠ 0, even though d²f″ − xf″ = 0, on the other side. As a byproduct, f′ or f″ can be chosen separately as unique generating section of the inverse system over K (care) and we may write, for example:

f′ → E′ ≡ a_0 + xa_xx + a_xxx + x²a_xxxx + ⋯ = 0, f″ → E″ ≡ a_x + xa_xxx + 2a_xxxx + ⋯ = 0
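All the computations of this example can be replayed mechanically; in the sympy sketch below (the lists A, B holding the two columns of the boards are our own encoding), the board is rebuilt by prolongation and the relations df′ = −xf″, df″ = −f′ are verified:

```python
import sympy as sp

x = sp.symbols('x')

# y_xx = x y: write y^(k) = A_k(x) y + B_k(x) y_x by repeated prolongation
A = [sp.Integer(1), sp.Integer(0), x]
B = [sp.Integer(0), sp.Integer(1), sp.Integer(0)]
for k in range(2, 7):
    # d(A_k y + B_k y_x) = (A_k' + x B_k) y + (A_k + B_k') y_x, using y_xx = x y
    A.append(sp.expand(sp.diff(A[k], x) + x * B[k]))
    B.append(sp.expand(A[k] + sp.diff(B[k], x)))

# row 6 of the board: y_xxxxxx - 6x y_x - (x^3 + 4) y = 0
assert A[6] == x**3 + 4 and B[6] == 6*x

# the rows f' = (A_k) and f'' = (B_k) satisfy df' = -x f'' and df'' = -f'
d = lambda s: [sp.expand(sp.diff(s[k], x) - s[k + 1]) for k in range(len(s) - 1)]
assert d(A) == [sp.expand(-x * b) for b in B[:7]]
assert d(B) == [-a for a in A[:7]]
```

Note that the Spencer operator here differentiates the variable coefficients, so the orthogonality of the two boards is a statement over K = ℚ(x) and not over ℚ.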

EXAMPLE 2.2.25: With n = 1, m = 2, q = 2, K = ℚ(x), let us consider again the second order system y^1_xx − y^1 = 0, y^2_x = 0. Setting z^1 = y^1, z^2 = y^1_x, z^3 = y^2, we obtain the first order involutive system:

z^1_x − z^2 = 0   x
z^2_x − z^1 = 0   x
z^3_x = 0         x

where the right column recalls that x is multiplicative for each equation in the corresponding Janet board.

It follows that the CK data for z = g(x) are the three constants g^1(0), g^2(0), g^3(0). Using the given equations and their solved prolongations, like z^1_xx − z^1 = 0, z^2_xx − z^2 = 0 and so on, we have the finite basis (care!):

     | z^1  z^2  z^3 | z^1_x  z^2_x  z^3_x | z^1_xx  z^2_xx  z^3_xx | ⋯
g^1  |  1    0    0  |   0      1      0   |    1       0       0   | ⋯
g^2  |  0    1    0  |   1      0      0   |    0       1       0   | ⋯
g^3  |  0    0    1  |   0      0      0   |    0       0       0   | ⋯

As dg^1 = −g^2, dg^2 = −g^1, dg^3 = 0, a basis with only two generators may be {g^2, g^3}. However:

h = g^1 − g^3, dh = −g^2, d²h = g^1 ⇔ g^1 = d²h, g^2 = −dh, g^3 = d²h − h

and we obtain the unique generator h.

The most striking aspect of the application of module/system theory to linear control theory is that it comes from rather unexpected chases in commutative and exact diagrams looking rather abstract at first sight. As more details and examples can be found in book form ( [

PROPOSITION 2.3.1: One has the short exact sequence of (differential) modules:

0 → (L + M) → N → (N/L)/(M/(L∩M)) → 0

Proof: Using elementary classical homological algebra, one obtains the following commutative and exact diagram:

    0            0             0
    ↓            ↓             ↓
0 → L∩M     →    L    →    L/(L∩M)        → 0
    ↓      ↘     ↓             ↓
0 →  M      →    N    →       N/M         → 0
    ↓            ↓      ↘      ↓
0 → M/(L∩M) →   N/L   → (N/L)/(M/(L∩M))   → 0
    ↓            ↓             ↓
    0            0             0

At first, the lower southeast arrow, being the composition of two epimorphisms, is an epimorphism. A circular chase finally proves that any element of N killed by this southeast arrow is the sum of an element of L and an element of M, achieving the proof. It is important to notice the symmetric part played by L and M in N.

□

In the general situation, we obtain from the left upper commutative square the useful formulas:

rk_D(N/(L∩M)) = rk_D(L/(L∩M)) + rk_D(N/L) = rk_D(M/(L∩M)) + rk_D(N/M)

in a coherent way with the following corollary:

COROLLARY 2.3.2: If L + M = N, then one has L/(L∩M) ≃ N/M and M/(L∩M) ≃ N/L.

THEOREM 2.3.3: There is a bijective correspondence between the intermediate differential modules (L∩M) ⊂ L′ ⊂ L and the intermediate differential modules M ⊂ N′ ⊂ N, defined by the rules:

L′ → (L′ + M) = N′ ⊂ N, N′ → (L∩N′) = L′ ⊂ L

Proof: We have the following commutative diagram of injections:

 M   →  N′  →  N
 ↑      ↑      ↑
L∩M  →  L′  →  L

Let us start with L′, construct N′ = L′ + M and obtain L″ = L∩N′. First of all, we get L′ ⊂ L and L′ ⊂ N′ ⇒ L′ ⊆ L″. Now, using the left commutative square, we obtain from the previous proposition N′/L′ ≃ M/(L∩M). Similarly, using the right commutative square, we obtain N′/L″ ≃ N/L and thus an isomorphism N′/L′ ≃ N′/L″. However, we have the following commutative and exact diagram:

    0          0          0
    ↓          ↓          ↓
0 → L′    =    L′    →    0
    ↓          ↓          ↓
0 → L″    →    N′    →  N′/L″ → 0
    ↓          ↓          ∥
0 → L″/L′ →  N′/L′   →  N′/L″ → 0
    ↓          ↓          ↓
    0          0          0

and thus L″/L′ = 0, that is L′ = L″.

Finally, starting with N′, we should obtain L′ = N′∩L, then define N″ = L′ + M ⊆ N′ and conclude as before that N′ = N″. Replacing specializations by injections while chasing in the following commutative and exact diagram:

0 → N′ → N → N″ → 0
    ↑    ↑    ↑
0 → L′ → L → L″ → 0
    ↑    ↑    ↑
    0    0    0

we have thus been able to deal only with submodules of N. In particular, if N is torsion-free, that is t(N) = 0, the interest of this approach is that all the submodules involved are torsion-free.

□

EXAMPLE 2.3.4: (See [

If X is a manifold with local coordinates (x^i) for i = 1, ⋯, n = dim(X), let us consider the fibered manifold E over X with dim_X(E) = m, that is a manifold with local coordinates (x^i, y^k) for i = 1, ⋯, n and k = 1, ⋯, m, simply denoted by (x, y), projection π : E → X : (x, y) → (x) and changes of local coordinates x̄ = φ(x), ȳ = ψ(x, y). If E and F are two fibered manifolds over X with respective local coordinates (x, y) and (x, z), we denote by E×_X F the fibered product of E and F over X as the new fibered manifold over X with local coordinates (x, y, z). We denote by f : X → E : (x) → (x, y = f(x)) a global section of E, that is a map such that π∘f = id_X, but local sections over an open set U ⊂ X may also be considered when needed. Under a change of coordinates, a section transforms like f̄(φ(x)) = ψ(x, f(x)) and, differentiating with respect to x^i, we may introduce new coordinates (x^i, y^k, y^k_i) transforming like:

ȳ^l_r ∂_iφ^r(x) = (∂ψ^l/∂x^i)(x, y) + (∂ψ^l/∂y^k)(x, y) y^k_i

We shall denote by J_q(E) the q-jet bundle of E with local coordinates (x^i, y^k, y^k_i, y^k_ij, ⋯) = (x, y_q) called jet coordinates, and sections f_q : (x) → (x, f^k(x), f^k_i(x), f^k_ij(x), ⋯) = (x, f_q(x)) transforming like the sections j_q(f) : (x) → (x, f^k(x), ∂_if^k(x), ∂_ijf^k(x), ⋯) = (x, j_q(f)(x)), where both f_q and j_q(f) are over the section f of E. It will be useful to introduce a multi-index μ = (μ_1, ⋯, μ_n) with length |μ| = μ_1 + ⋯ + μ_n and to set μ+1_i = (μ_1, ⋯, μ_{i−1}, μ_i + 1, μ_{i+1}, ⋯, μ_n). Also, a jet coordinate y^k_μ is said to be of class i if μ_1 = ⋯ = μ_{i−1} = 0, μ_i ≠ 0. As the background will always be clear enough, we shall use the same notation for a vector bundle or a fibered manifold and their sets of sections. We finally notice that J_q(E) is a fibered manifold over X with projection π_q while J_{q+r}(E) is a fibered manifold over J_q(E) with projection π^{q+r}_q, ∀r ≥ 0.

DEFINITION 3.1.1: A (nonlinear) system of order q on E is a fibered submanifold R_q ⊂ J_q(E) and a global or local solution of R_q is a section f of E over X or U ⊂ X such that j_q(f) is a section of R_q over X or U ⊂ X.

DEFINITION 3.1.2: When the changes of coordinates have the linear form x̄ = φ(x), ȳ = A(x)y, we say that E is a vector bundle over X. Vector bundles will be denoted by capital letters E, F, ⋯ and will have sections denoted by ξ, η, ⋯. In particular, we shall denote as usual by T = T(X) the tangent bundle of X, by T* = T*(X) the cotangent bundle, by ∧^rT* the bundle of r-forms and by S_qT* the bundle of q-symmetric covariant tensors. When the changes of coordinates have the form x̄ = φ(x), ȳ = A(x)y + B(x), we say that E is an affine bundle over X and we define the associated vector bundle E over X by the local coordinates (x, v) changing like x̄ = φ(x), v̄ = A(x)v.

DEFINITION 3.1.3: If the tangent bundle T(E) has local coordinates (x, y, u, v) changing like ū^j = ∂_iφ^j(x)u^i, v̄^l = (∂ψ^l/∂x^i)(x, y)u^i + (∂ψ^l/∂y^k)(x, y)v^k, we may introduce the vertical bundle V(E) ⊂ T(E) as a vector bundle over E with local coordinates (x, y, v), obtained by setting u = 0, with changes v̄^l = (∂ψ^l/∂y^k)(x, y)v^k. Of course, when E is an affine bundle over X with associated vector bundle E over X, we have V(E) = E×_X E. We have the short exact sequence of vector bundles over E:

0 → V(E) → T(E) →^{T(π)} E×_X T → 0

Accordingly, in variational calculus, the couple (f, δf) made by a section f of E and its variation δf is nothing other than a section of V(E), while δf has no reason at all to be “small”.

For a later use, if E is a fibered manifold over X and f is a section of E, we denote by E = f^{−1}(V(E)) the reciprocal image of V(E) by f, as the vector bundle over X obtained when replacing (x, y, v) by (x, f(x), v) in each chart, along with the following commutative diagram:

E  →  V(E)
↓       ↓
X  →^f  E

where δf : X → E is a section of the left projection and π : E → X is the projection of E.

A similar construction may also be done for any affine bundle over E . When the background is clear enough, with a slight abuse of language, we shall sometimes set E = V ( E ) as a vector bundle over E and call “vertical machinery” such a useful systematic notation.

Looking at the transition rules of J q ( E ) , we deduce easily the following results:

PROPOSITION 3.1.4: J_q(E) is an affine bundle over J_{q−1}(E) modeled on S_qT* ⊗ E, but we shall not specify the tensor product in general.

PROPOSITION 3.1.5: There is a canonical isomorphism V(J_q(E)) ≃ J_q(V(E)) = J_q(E) of vector bundles over J_q(E), given by setting v^k_μ = v^k_{,μ} at any order, and a short exact sequence:

0 → S_qT*⊗E → J_q(E) →^{π^q_{q−1}} J_{q−1}(E) → 0

of vector bundles over J_q(E), allowing us to establish a link with the formal theory of linear systems.

PROPOSITION 3.1.6: There is an exact sequence:

0 → E →^{j_{q+1}} J_{q+1}(E) →^d T*⊗J_q(E)

where df_{q+1} = j_1(f_q) − f_{q+1} is over f_q, with components (df_{q+1})^k_{μ,i} = ∂_if^k_μ − f^k_{μ+1_i}; it is called the (nonlinear) Spencer operator.

DEFINITION 3.1.7: If R_q ⊂ J_q(E) is a system of order q on E, then R_{q+1} = ρ_1(R_q) = J_1(R_q)∩J_{q+1}(E) ⊂ J_1(J_q(E)) is called the first prolongation of R_q and we may define similarly the subsets R_{q+r}. In actual practice, if the system is defined by PDE Φ^τ(x, y_q) = 0, the first prolongation is defined by adding the PDE d_iΦ^τ ≡ ∂_iΦ^τ + y^k_{μ+1_i}∂Φ^τ/∂y^k_μ = 0. Accordingly, f_q ∈ R_q ⇔ Φ^τ(x, f_q(x)) = 0 and f_{q+1} ∈ R_{q+1} ⇔ ∂_iΦ^τ + f^k_{μ+1_i}(x)∂Φ^τ/∂y^k_μ = 0 as identities on X or at least over an open subset U ⊂ X. Differentiating the first relation with respect to x^i and subtracting the second, we finally obtain:

(∂_if^k_μ(x) − f^k_{μ+1_i}(x))∂Φ^τ/∂y^k_μ = 0 ⇒ df_{q+1} ∈ T*⊗R_q

and the Spencer operator restricts to d : R_{q+1} → T*⊗R_q. We set R^{(1)}_{q+r} = π^{q+r+1}_{q+r}(R_{q+r+1}).

DEFINITION 3.1.8: The symbol of R_q is the family g_q = R_q∩S_qT*⊗E of vector spaces over R_q. The symbol g_{q+r} of R_{q+r} only depends on g_q by a direct prolongation procedure. We may define the vector bundle F_0 over R_q by the short exact sequence 0 → R_q → J_q(E) → F_0 → 0 and we have the induced exact sequence 0 → g_q → S_qT*⊗E → F_0.

Setting a^{τμ}_k(x, y_q) = ∂Φ^τ/∂y^k_μ(x, y_q) whenever |μ| = q and (x, y_q) ∈ R_q, we obtain:

g_q = {v^k_μ ∈ S_qT*⊗E | a^{τμ}_k(x, y_q)v^k_μ = 0}, |μ| = q, (x, y_q) ∈ R_q

⇒ g_{q+r} = ρ_r(g_q) = {v^k_{μ+ν} ∈ S_{q+r}T*⊗E | a^{τμ}_k(x, y_q)v^k_{μ+ν} = 0}, |μ| = q, |ν| = r, (x, y_q) ∈ R_q

In general, neither g_q nor g_{q+r} are vector bundles over R_q.

On ∧^sT* we may introduce the usual bases {dx^I = dx^{i_1}∧⋯∧dx^{i_s}} where we have set I = (i_1 < ⋯ < i_s). In a purely algebraic setting, one has:

PROPOSITION 3.1.9: There exists a map δ : ∧^sT*⊗S_{q+1}T*⊗E → ∧^{s+1}T*⊗S_qT*⊗E which restricts to δ : ∧^sT*⊗g_{q+1} → ∧^{s+1}T*⊗g_q and δ² = δ∘δ = 0.

Proof: Let us introduce the family of s-forms ω = {ω^k_μ = v^k_{μ,I}dx^I} and set (δω)^k_μ = dx^i∧ω^k_{μ+1_i}. We obtain at once (δ²ω)^k_μ = dx^i∧dx^j∧ω^k_{μ+1_i+1_j} = 0 and a^{τμ}_k(δω)^k_μ = dx^i∧(a^{τμ}_kω^k_{μ+1_i}) = 0.

□
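The proof is a pure sign computation and can be verified mechanically; in the sketch below (the component encoding {(μ, I): v_{μ,I}} for a family of s-forms, with n = 3 and m = 1, is our own convention), δ² = 0 follows from the antisymmetry of the wedge signs:

```python
import sympy as sp

n = 3   # number of base variables

def delta(omega):
    # omega is a family {(mu, I): value} with mu a multi-index and I a strictly
    # increasing tuple of form indices; (δω)_mu = dx^i ∧ ω_{mu+1_i}, so an entry
    # at (nu, I) contributes sign * value at (nu - 1_i, I ∪ {i})
    out = {}
    for (nu, I), v in omega.items():
        for i in range(n):
            if nu[i] == 0 or i in I:
                continue
            mu = list(nu); mu[i] -= 1
            J = tuple(sorted(I + (i,)))
            sign = (-1) ** J.index(i)      # sign of wedging dx^i into dx^J
            key = (tuple(mu), J)
            out[key] = out.get(key, 0) + sign * v
    return {k: v for k, v in out.items() if v != 0}

a, b, c = sp.symbols('a b c')
# an element of S_3 T* ⊗ E viewed as a 0-form family (s = 0)
omega = {((3, 0, 0), ()): a, ((1, 1, 1), ()): b, ((0, 2, 1), ()): c}
assert delta(delta(omega)) == {}   # δ∘δ = 0
```

Restricting the same map to families satisfying the symbol equations gives the δ-sequence whose cohomology is studied in the next definition.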

The kernel of each δ in the first case is equal to the image of the preceding δ but this may no longer be true in the restricted case and we set:

DEFINITION 3.1.10: Let B^s_{q+r}(g_q) ⊆ Z^s_{q+r}(g_q) and H^s_{q+r}(g_q) = Z^s_{q+r}(g_q)/B^s_{q+r}(g_q), with H^1(g_q) = H^1_q(g_q), be the coboundary space im(δ), cocycle space ker(δ) and cohomology space at ∧^sT*⊗g_{q+r} of the restricted δ-sequence, which only depend on g_q and may not be vector bundles. The symbol g_q is said to be s-acyclic if H^1_{q+r} = ⋯ = H^s_{q+r} = 0, ∀r ≥ 0, involutive if it is n-acyclic, and of finite type if g_{q+r} = 0, thus trivially involutive, for r large enough. In particular, if g_q is involutive and of finite type, then g_q = 0. Finally, S_qT*⊗E is involutive for any q ≥ 0 if we set S_0T*⊗E = E.

We have (See [

PROPOSITION 3.1.11: If g_q is 2-acyclic and g_{q+1} is a vector bundle over R_q, then g_{q+r} is a vector bundle over R_q, ∀r ≥ 1.

LEMMA 3.1.12: If g_q is involutive and g_{q+1} is a vector bundle over R_q, then g_q is also a vector bundle over R_q. In this case, changing the local coordinates linearly if necessary, we may look at the maximum number β of equations that can be solved with respect to v^k_{n⋯n}, and the intrinsic number α = m − β indicates the number of y that can be given arbitrarily.

We notice that R_{q+r+1} = ρ_r(R_{q+1}) and R_{q+r} = ρ_r(R_q) in the following commutative diagram, where the horizontal maps are the projections π^{q+r+1}_{q+1} and π^{q+r}_q and the upper vertical maps are π^{q+r+1}_{q+r} and π^{q+1}_q:

R_{q+r+1}     →   R_{q+1}
    ↓                ↓
R^{(1)}_{q+r} →   R^{(1)}_q
    ∩                ∩
R_{q+r}       →   R_q

but we only have in general R^{(1)}_{q+r} ⊆ ρ_r(R^{(1)}_q). We finally obtain the following crucial theorem and its corollary (Compare to [

THEOREM 3.1.13: Let R_q ⊂ J_q(E) be a system of order q on E such that R_{q+1} is a fibered submanifold of J_{q+1}(E). If g_q is 2-acyclic and g_{q+1} is a vector bundle over R_q, then we have R^{(1)}_{q+r} = ρ_r(R^{(1)}_q) for all r ≥ 0.

DEFINITION 3.1.14: A system R q ⊂ J q ( E ) is said to be formally integrable at the order q + r if π q + r q + r + s : R q + r + s → R q + r is an epimorphism of fibered manifolds for all s ≥ 1 , formally integrable if π q + r q + r + 1 is an epimorphism of fibered manifolds ∀ r ≥ 0 and involutive if it is formally integrable with an involutive symbol g q . We have the following useful test ( [

COROLLARY 3.1.15: Let R q ⊂ J q ( E ) be a system of order q on E such that R q + 1 is a fibered submanifold of J q + 1 ( E ) . If g q is 2-acyclic (involutive) and if the map π q q + 1 : R q + 1 → R q is an epimorphism of fibered manifolds, then R q is formally integrable (involutive).

This is all that is needed in order to study nonlinear systems of ordinary differential (OD) or partial differential (PD) equations, using calligraphic letters like E for the nonlinear framework and capital letters like E = V(E) for the linear or vertically linearized framework.

Let A, B, ⋯ be commutative unitary rings or even integral domains with fields of quotients Q(A) = K, Q(B) = L, ⋯ containing a field k as a subring or subfield, for example a polynomial ring k[x] in many indeterminates with coefficients in k and the corresponding field k(x) of rational functions. The ideas that led Erich Kähler to the next definitions in 1930 are of two kinds ( [

· The derivative of a polynomial with respect to any one of the indeterminates is a polynomial while the derivative of a rational function is a rational function, a reason sufficient for believing that the concept of derivation could be useful in algebra.

· The variational and linearization procedures presented in the last section and used for many applications to physics should be extended to differential algebra in order to obtain the algebraic counterpart of definition 3.1.3 and proposition 3.1.5, replacing X by a ring A or a field K.

DEFINITION 3.2.1: A derivation from A to an A-module M over k is a map δ : A → M such that δ ( a + b ) = δ a + δ b , δ ( a b ) = a δ b + b δ a with δ | k = 0 and the set of such maps is denoted by d e r k ( A , M ) with 1 × 1 = 1 ⇒ δ 1 = 0 . When M = A , we simply set d e r k ( A , A ) = d e r k ( A ) .
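As a sanity check, the usual derivative on a polynomial ring satisfies all the axioms of Definition 3.2.1. The following sympy sketch (our own illustration, not part of the text) verifies additivity, the Leibniz rule and δ|_{k} = 0 for A = ℚ[x] over k = ℚ:

```python
# Illustrative check (sympy): d/dx on A = Q[x] is a derivation over k = Q
# in the sense of Definition 3.2.1.
from sympy import symbols, diff, Rational, expand

x = symbols('x')
delta = lambda p: diff(p, x)          # candidate derivation A -> A

a = 3*x**2 + Rational(1, 2)*x
b = x**3 - 7

assert delta(a + b) == delta(a) + delta(b)                  # additivity
assert expand(delta(a*b) - (a*delta(b) + b*delta(a))) == 0  # Leibniz rule
assert delta(Rational(5, 3)) == 0                           # delta | k = 0, hence delta(1) = 0
```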

PROPOSITION 3.2.2: There exists an A-module, denoted by Ω_{A/k} and called the module of Kähler differentials of A over k, together with a derivation d ∈ der_{k}(A, Ω_{A/k}), such that, given any A-module M and any derivation δ ∈ der_{k}(A, M), there is a unique morphism f ∈ hom_{A}(Ω_{A/k}, M) with δ = f ∘ d in the following commutative diagram:

$$\begin{array}{rcl}
A & \stackrel{d}{\longrightarrow} & \Omega_{A/k} \\
 & \delta \searrow & \big\downarrow f \\
 & & M
\end{array}$$

The element d a ∈ Ω A / k is called the differential of a and d e r k ( A , M ) = h o m A ( Ω A / k , M ) .

Proof: Let F be the free A-module on the symbols da, a ∈ A, and let N be the submodule of F generated by dα, d(a + b) − da − db, d(ab) − adb − bda for α ∈ k, a, b ∈ A. We set Ω_{A/k} = F/N and the derivation d = d_{A/k}: A → Ω_{A/k}: a → da is the universal derivation, allowing us to define f: Ω_{A/k} → M by (f ∘ d)(a) = f(da) = δa.

Another way is to take into account the limit procedure that is classically used in analysis, namely (x + h) − x = h ⇒ (x + h)^{2} − x^{2} = 2xh + h^{2} ⇒ dx^{2} = 2xdx as h → 0, in order to avoid the square quantity h^{2} and so on. For this, let us denote by I the kernel of the map A ⊗_{k} A → A: a ⊗ b → ab and define Ω_{A/k} = I/I^{2} while setting da = 1 ⊗ a − a ⊗ 1, ∀a ∈ A. Using the bimodule structure of A ⊗_{k} A while identifying a ∈ A with a ⊗ 1 ∈ A ⊗_{k} A, it follows that d is indeed a derivation from A to the A-module Ω_{A/k} as we have successively:

$$\begin{array}{rcl}
a\,db + b\,da & = & a(1 \otimes b - b \otimes 1) + b(1 \otimes a - a \otimes 1) \\
 & = & (a \otimes 1)(1 \otimes b - b \otimes 1) + (b \otimes 1)(1 \otimes a - a \otimes 1) \\
 & = & (1 \otimes a)(1 \otimes b - b \otimes 1) + (b \otimes 1)(1 \otimes a - a \otimes 1) - (1 \otimes a - a \otimes 1)(1 \otimes b - b \otimes 1) \\
 & = & (1 \otimes ab - ab \otimes 1) \quad mod(I^{2}) \\
 & = & d(ab) \quad mod(I^{2})
\end{array}$$

and thus:

$$\begin{array}{rcl}
da^{2} & = & 1 \otimes a^{2} - a^{2} \otimes 1 = (1 \otimes a + a \otimes 1)(1 \otimes a - a \otimes 1) \\
 & = & (a \otimes 1)(1 \otimes a - a \otimes 1) + (1 \otimes a)(1 \otimes a - a \otimes 1) \\
 & = & 2(a \otimes 1)(1 \otimes a - a \otimes 1) + (1 \otimes a - a \otimes 1)(1 \otimes a - a \otimes 1) \\
 & = & 2(a \otimes 1)(1 \otimes a - a \otimes 1) \quad mod(I^{2}) \;=\; 2a\,da \quad mod(I^{2})
\end{array}$$

□

Among the elementary properties of the Kähler differentials, we notice that, if f: A → B is a k-algebra homomorphism and M is a B-module, then M becomes an A-module under the rule f(a) = b ⇒ am = f(a)m, ∀a ∈ A, b ∈ B, m ∈ M and we have the exact sequence of B-modules:

0 → d e r A ( B , M ) → d e r k ( B , M ) → d e r k ( A , M )

because δ ∈ d e r k ( B , M ) ⇒ δ ∘ f ∈ d e r k ( A , M ) and thus d e r k ( A , M ) ≃ h o m B ( B ⊗ A Ω A / k , M ) .

The proof of the following two propositions is classical and can be found in ( [

PROPOSITION 3.2.3: (First fundamental exact sequence) We have the exact sequence of B-modules:

B ⊗ A Ω A / k → Ω B / k → Ω B / A → 0

where the first map is a monomorphism when f is a monomorphism. In particular, if k ⊂ K ⊂ L is a chain of field extensions, then one has the short exact sequence of vector spaces over L:

0 → L ⊗ K Ω K / k → Ω L / k → Ω L / K → 0

PROPOSITION 3.2.4: (Second fundamental exact sequence) If we have the short exact sequence 0 → a → A → f B → 0 , then we have the exact sequence of B-modules:

a / a 2 → B ⊗ A Ω A / k → Ω B / k → 0

EXAMPLE 3.2.5: Let x , y be two indeterminates over the field k = ℚ and consider the case A = k [ x , y ] , B = A / ( x 2 , y 2 ) . Then the ideal a = ( x 2 , y 2 ) ⊂ A is not prime because r a d ( a ) = p = ( x , y ) ∈ s p e c ( A ) . The image of x 3 is 1 ⊗ 3 x 2 d x in B ⊗ A Ω A / k , that is 3 x 2 ⊗ d x = 0 because x 2 ∈ a . A similar comment applies to y 3 and it is easy to see that the kernel of the map a / a 2 → B ⊗ A Ω A / k is of the form a x 3 + b y 3 , ∀ a , b ∈ A .

Finally, if S is a multiplicatively closed subset of A, we may use the morphism θ_{S}: A → S^{−1}A of Proposition 2.2.3 in order to study the behaviour of derivations and differentials under localization. As any δ ∈ der_{k}(A, M) induces a unique derivation δ ∈ der_{k}(S^{−1}A, S^{−1}M) through the known formula δ(a/s) = (sδa − aδs)/s^{2}, it follows that the morphism der_{k}(S^{−1}A, S^{−1}M) → der_{k}(A, S^{−1}M) given by δ → δ ∘ θ_{S} is an epimorphism. We obtain the short exact sequence:

0 → d e r A ( S − 1 A , S − 1 M ) → d e r k ( S − 1 A , S − 1 M ) → d e r k ( A , S − 1 M ) → 0

and thus the short exact sequence:

0 → S − 1 A ⊗ A Ω A / k → Ω S − 1 A / k → Ω S − 1 A / A → 0

Taking into account the previous standard formula, it follows that Ω S − 1 A / A = 0 and we obtain:

PROPOSITION 3.2.6: There is an isomorphism S − 1 A ⊗ A Ω A / k ≃ Ω S − 1 A / k .

EXAMPLE 3.2.7: We now present in an independent manner a few OD or PD cases showing the difficulties met when studying differential ideals and ask the reader to revisit them later on while reading the main Theorems. As only a few results will be proved, the interested reader may look at [

· OD1: If k = ℚ, y is a differential indeterminate and d_{x} is a formal derivation, we may set d_{x}y = y_{x}, d_{x}y_{x} = y_{xx} and so on in order to introduce the differential ring A = k[y, y_{x}, y_{xx}, ⋯] = k{y}. We consider the differential ideal a ⊂ A generated by the differential polynomial P = y_{x}^{2} − 4y. We have d_{x}P = 2y_{x}(y_{xx} − 2) and a cannot be a prime differential ideal, and so on. After no fewer than 4 differentiations, we let the reader discover that (y_{xxx})^{5} ∈ a ⇒ y_{xxx} ∈ rad(a) and thus a is neither prime nor perfect (that is, equal to its radical), but rad(a) is perfect as it is the intersection of the prime differential ideal generated by y with the prime differential ideal generated by y_{x}^{2} − 4y and y_{xx} − 2, both containing y_{xxx}.
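The first differentiation in OD1 can be checked mechanically. The sketch below (our own illustration, not part of the text) implements the formal derivation d_{x} on k{y}, truncated at order 3, and verifies d_{x}P = 2y_{x}(y_{xx} − 2):

```python
# Sketch (sympy): the formal derivation d_x on k{y}, truncated at order 3,
# applied to P = y_x^2 - 4y from example OD1.
from sympy import symbols

y, yx, yxx, yxxx = symbols('y yx yxx yxxx')

def dx(p):
    # d_x = y_x d/dy + y_xx d/dy_x + y_xxx d/dy_xx (no explicit x-dependence)
    return yx*p.diff(y) + yxx*p.diff(yx) + yxxx*p.diff(yxx)

P = yx**2 - 4*y
assert (dx(P) - 2*yx*(yxx - 2)).expand() == 0
```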

· OD2: With the same notations, let us consider the differential ideal a ⊂ A generated by the differential polynomial P = y_{x}^{2} − 4y^{3}. We have d_{x}P = 2y_{x}(y_{xx} − 6y^{2}) and a cannot be a prime differential ideal. Hence, we must have either y_{x} = 0 ⇒ y = 0 or y_{xx} − 6y^{2} = 0 and so on. After 3 differentiations we obtain (y_{xx} − 6y^{2})^{4} ∈ a ⇒ y_{xx} − 6y^{2} ∈ rad(a) and thus a is neither prime nor perfect as before, but rad(a) is the prime differential ideal generated by y_{x}^{2} − 4y^{3} and y_{xx} − 6y^{2}.

· PD1: If k = ℚ as before, y is a differential indeterminate and (d_{1}, d_{2}) are two formal derivations, let us consider the differential ideal a generated by P_{1} = y_{22} − (1/2)(y_{11})^{2} and P_{2} = y_{12} − y_{11} in k{y}. Using crossed derivatives and differentiating twice, we get (y_{111})^{3} ∈ a ⇒ y_{111} ∈ rad(a) and thus a is again neither prime nor perfect, but rad(a) is a perfect differential ideal and even a prime differential ideal p because we obtain easily from the last subsection that the residual differential ring k{y}/p ≃ k[y, y_{1}, y_{2}, y_{11}] is a differential integral domain. Its quotient field is thus the differential field K = Q(k{y}/p) ≃ k(y, y_{1}, y_{2}, y_{11}) with the rules:

$$d_{1}y = y_{1}, \; d_{1}y_{1} = y_{11}, \; d_{1}y_{11} = 0, \qquad d_{2}y = y_{2}, \; d_{2}y_{1} = y_{11}, \; d_{2}y_{11} = 0$$

as a way to avoid “looking for solutions”. The formal linearization is the linear system R 2 ⊂ J 2 ( E ) obtained in the last section where it was defined over R 2 , but not over K, by the two linear second order PDE:

$$\eta_{22} - y_{11}\eta_{11} = 0, \quad \eta_{12} - \eta_{11} = 0 \;\Rightarrow\; (y_{11} - 1)\eta_{111} = 0$$

changing slightly the notations with η = δy and keeping the letter v only when looking at the symbols. It is at this point that the problem starts because R_{2} is indeed a fibered manifold with arbitrary parametric jets (y, y_{1}, y_{2}, y_{11}) but R_{3} = ρ_{1}(R_{2}) is no longer a fibered manifold because the dimension of its symbol changes when y_{11} = 1. We understand therefore that there should be a close link between formal integrability and the search for prime differential ideals or differential fields. The solution of this problem was provided as early as 1983 for studying the “Differential Galois Theory” in ( [
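The rules listed for PD1 can be checked for consistency: the two derivations they define on the residual ring commute. In the sympy sketch below (our own illustration), the additional rules d_{1}y_{2} = y_{12} = y_{11} and d_{2}y_{2} = y_{22} = (1/2)(y_{11})^{2} are our own completion, derived from P_{2} and P_{1}:

```python
# Sketch (sympy): the derivations d_1, d_2 on the residual ring
# k[y, y1, y2, y11] of example PD1.  The rules for y2 are derived from
# P2 = y12 - y11 and P1 = y22 - (1/2) y11^2 (our completion of the text).
from sympy import symbols, Rational, expand

y, y1, y2, y11 = symbols('y y1 y2 y11')
rules1 = {y: y1, y1: y11, y2: y11, y11: 0}                    # d_1 on generators
rules2 = {y: y2, y1: y11, y2: Rational(1, 2)*y11**2, y11: 0}  # d_2 on generators

def derive(p, rules):
    return sum(rules[g]*p.diff(g) for g in (y, y1, y2, y11))

# the two derivations commute on every generator, hence on the whole ring
for g in (y, y1, y2, y11):
    assert expand(derive(derive(g, rules1), rules2)
                  - derive(derive(g, rules2), rules1)) == 0
```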

· PD2: With the same notations, let us consider the differential ideal generated by the differential polynomials P_{1} = y_{22} − (1/3)(y_{11})^{3} and P_{2} = y_{12} − (1/2)(y_{11})^{2} in k{y}. We get:

$$P_{1}, P_{2} \in a \;\Rightarrow\; d_{2}P_{2} - d_{1}P_{1} + y_{11}d_{1}P_{2} = 0 \;\Rightarrow\; R_{2} \text{ involutive}$$

with dim(g_{q}) = 1, ∀q ≥ 1. As the symbol g_{2} is involutive, there is an infinite number of parametric jets (y, y_{1}, y_{2}, y_{11}, y_{111}, ⋯) and thus k{y}/a ≃ k[y, y_{1}, y_{2}, y_{11}, y_{111}, ⋯] is a differential integral domain with d_{2}y_{2} = y_{22} = (1/3)(y_{11})^{3}, d_{2}y_{11} = y_{112} = y_{11}y_{111}, ⋯. It follows that a = p is a prime differential ideal with rad(p) = p. The second order linearized system:

$$\eta_{22} - (y_{11})^{2}\eta_{11} = 0, \quad \eta_{12} - y_{11}\eta_{11} = 0$$

is now well defined over the differential field K = Q ( k { y } / p ) and is involutive.
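The key identity d_{2}P_{2} − d_{1}P_{1} + y_{11}d_{1}P_{2} = 0 of PD2 can be verified mechanically. The following sympy sketch (our own illustration, not part of the text) uses formal derivations truncated at order 3:

```python
# Sketch (sympy): verification of d2 P2 - d1 P1 + y11 d1 P2 = 0 for
# P1 = y22 - (1/3) y11^3, P2 = y12 - (1/2) y11^2 (example PD2).
from sympy import symbols, Rational, expand

y, y1, y2, y11, y12, y22 = symbols('y y1 y2 y11 y12 y22')
y111, y112, y122, y222 = symbols('y111 y112 y122 y222')

jets = (y, y1, y2, y11, y12, y22)
d1 = {y: y1, y1: y11, y2: y12, y11: y111, y12: y112, y22: y122}
d2 = {y: y2, y1: y12, y2: y22, y11: y112, y12: y122, y22: y222}

def derive(p, rules):
    return sum(rules[g]*p.diff(g) for g in jets)

P1 = y22 - Rational(1, 3)*y11**3
P2 = y12 - Rational(1, 2)*y11**2

assert expand(derive(P2, d2) - derive(P1, d1) + y11*derive(P2, d1)) == 0
```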

DEFINITION 3.2.8: A differential ring is a ring A with a finite number of commuting derivations ( ∂ 1 , ⋯ , ∂ n ) that can be extended to derivations of the ring of quotients Q ( A ) as we already saw. We shall suppose from now on that A is even an integral domain and introduce the differential field K = Q ( A ) . For example, if x 1 , ⋯ , x n are indeterminates over ℚ , then ℚ [ x ] = ℚ [ x 1 , ⋯ , x n ] is a differential ring with quotient differential field ℚ ( x ) .

If K is a differential field as above and (y^{1}, ⋯, y^{m}) are indeterminates over K, we transform the polynomial ring K{y} = lim_{q→∞} K[y_{q}] into a differential ring by introducing as usual the formal derivations d_{i} = ∂_{i} + y_{μ+1_{i}}^{k}∂/∂y_{μ}^{k} and we shall set K⟨y⟩ = Q(K{y}).

DEFINITION 3.2.9: We say that a ⊂ K { y } is a differential ideal if it is stable by the d i , that is if d i a ∈ a , ∀ a ∈ a , ∀ i = 1 , ⋯ , n . We shall also introduce the radical r a d ( a ) = { a ∈ A | ∃ r , a r ∈ a } ⊇ a and say that a is a perfect (or radical) differential ideal if r a d ( a ) = a . If S is any subset of A, we shall denote by { S } the differential ideal generated by S and introduce the (non-differential) ideal ρ r ( S ) = { d ν a | a ∈ S ,0 ≤ | ν | ≤ r } in A.

LEMMA 3.2.10: If a ⊂ A is a differential ideal, then rad(a) is a differential ideal containing a.

Proof: If d is one of the derivations, we have a^{r−1}da = (1/r)da^{r} ∈ {a^{r}} and thus:

$$(r - 1)a^{r-2}(da)^{2} + a^{r-1}d^{2}a \in \{a^{r}\} \;\Rightarrow\; a^{r-2}(da)^{3} \in \{a^{r}\}, \;\cdots\; \Rightarrow\; (da)^{2r-1} \in \{a^{r}\}$$

□
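For the case r = 2, the conclusion (da)^{3} ∈ {a^{2}} follows from an explicit identity that can be checked with sympy, writing a = a(x) and d = d/dx (our own illustration, not part of the text):

```python
# Sketch (sympy): the case r = 2 of Lemma 3.2.10 for a = a(x), d = d/dx:
# (da)^3 = (1/2) da * d^2(a^2) - (1/2) d(a^2) * d^2 a,
# so (da)^3 lies in the differential ideal generated by a^2.
from sympy import Function, symbols, diff, simplify, Rational

x = symbols('x')
a = Function('a')(x)

lhs = diff(a, x)**3
rhs = (Rational(1, 2)*diff(a, x)*diff(a**2, x, 2)
       - Rational(1, 2)*diff(a**2, x)*diff(a, x, 2))
assert simplify(lhs - rhs) == 0
```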

LEMMA 3.2.11: If a ⊂ K { y } , we set a q = a ∩ K [ y q ] with a 0 = a ∩ K [ y ] and a ∞ = a . We have in general ρ r ( a q ) ⊆ a q + r and the problem will be to know when we may have equality.

We shall say that L = Q(k{y}/p) is a finitely generated differential extension of K and we may define the evaluation epimorphism K{y} → K{η} ⊂ L with kernel p by calling η or ȳ the residue of y modulo p. If we study such a differential extension L/K, by analogy with Section 2, we shall say that R_{q} or g_{q} is a vector bundle over R_{q} if one can find a certain number of maximal-rank determinants D_{α} that cannot all vanish at a generic solution of p_{q} defined by differential polynomials P_{τ}, that is to say, according to the Hilbert Theorem of Zeros, we may find polynomials A_{α}, B_{τ} ∈ K[y_{q}] such that ∑_{α}A_{α}D_{α} + ∑_{τ}B_{τ}P_{τ} = 1. The following Lemma will be used in the next important Theorem:

LEMMA 3.2.12: If p is a prime differential ideal of K { y } , then, for q sufficiently large, there is a polynomial D ∈ K [ y q ] such that D ∉ p q and:

$$D\,p_{q+r} \subset rad(\rho_{r}(p_{q})) \subset p_{q+r}, \quad \forall r \geq 0$$

THEOREM 3.2.13: (Primality test) Let p q ⊂ K [ y q ] and p q + 1 ⊂ K [ y q + 1 ] be prime ideals such that p q + 1 = ρ 1 ( p q ) and p q + 1 ∩ K [ y q ] = p q . If the symbol g q of the algebraic variety R q defined by p q is 2-acyclic and if its first prolongation g q + 1 is a vector bundle over R q , then p = ρ ∞ ( p q ) is a prime differential ideal with p ∩ K [ y q + r ] = ρ r ( p q ) , ∀ r ≥ 0 .

COROLLARY 3.2.14: Every perfect differential ideal of K{y} can be expressed in a unique way as the non-redundant intersection of a finite number of prime differential ideals.

COROLLARY 3.2.15: (Differential basis) If r is a perfect differential ideal of K { y } , then we have r = r a d ( ρ ∞ ( r q ) ) for q sufficiently large.

EXAMPLE 3.2.16: As K{y} is a polynomial ring with an infinite number of variables, it is not noetherian and an ideal may not have a finite basis. With K = ℚ, n = 1 and d = d_{x}, then a = {yy_{x}, y_{x}y_{xx}, y_{xx}y_{xxx}, ⋯} ⇒ (y_{x})^{2} + yy_{xx} ∈ a ⇒ rad(a) = {y_{x}} is a prime differential ideal.
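The membership (y_{x})^{2} + yy_{xx} ∈ a is immediate since this element is the derivative of the generator yy_{x}; the following one-line sympy check (our own illustration) confirms the identity:

```python
# Sketch (sympy): in Example 3.2.16, (y_x)^2 + y y_xx = d(y y_x) belongs to
# the differential ideal a generated by y y_x.
from sympy import Function, symbols, diff, simplify

x = symbols('x')
y = Function('y')(x)
assert simplify(diff(y*diff(y, x), x) - (diff(y, x)**2 + y*diff(y, x, 2))) == 0
```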

PROPOSITION 3.2.17: If ζ is differentially algebraic over K 〈 η 〉 and η is differentially algebraic over K, then ζ is differentially algebraic over K. Setting ξ = ζ − η , it follows that, if L/K is a differential extension and ξ , η ∈ L are both differentially algebraic over K, then ξ + η , ξ η and d i ξ are differentially algebraic over K.

If L = Q(K{y}/p), M = Q(K{z}/q) and N = Q(K{y, z}/r) are such that p = r ∩ K{y} and q = r ∩ K{z}, we have the two towers K ⊂ L ⊂ N and K ⊂ M ⊂ N of differential extensions and we may therefore define the new tower K ⊆ L ∩ M ⊆ ⟨L, M⟩ ⊆ N. However, if only L/K and M/K are known and we look for such an N containing both L and M, we may use the universal property of tensor products and deduce the existence of a differential morphism L ⊗_{K} M → N by setting d(a ⊗ b) = (d_{L}a) ⊗ b + a ⊗ (d_{M}b) whenever d_{L}|_{K} = d_{M}|_{K} = ∂. Looking for an abstract composite differential field therefore amounts to looking for a prime differential ideal in L ⊗_{K} M which is a direct sum of integral domains (See [

DEFINITION 3.2.18: A differential extension L of a differential field K is said to be differentially algebraic over K if every element of L is differentially algebraic over K. The set of such elements is an intermediate differential field K_{0} ⊆ L, called the differential algebraic closure of K in L. If L/K is a differential extension, one can always find a maximal subset S of elements of L that are differentially transcendental over K and such that L is differentially algebraic over K⟨S⟩. Such a set is called a differential transcendence basis and the number of elements of S is called the differential transcendence degree of L/K.

THEOREM 3.2.19: If L/K is a finitely generated differential extension, then any intermediate differential field K ′ between K and L is also finitely generated over K.

THEOREM 3.2.20: The number of elements in a differential basis of L/K does not depend on the generators of L/K and its value is difftrd(L/K) = α. Moreover, if K ⊂ L ⊂ M are differential fields, then difftrd(M/K) = difftrd(M/L) + difftrd(L/K).

Comparing the differential geometric approach to nonlinear algebraic systems with the differential algebraic approach just presented while setting δ y q = η q , we obtain:

COROLLARY 3.2.21: When L/K is a finitely generated differential extension, then Ω L / K is a differential module over the differential ring L ⊗ K K [ d ] = L [ d ] of differential operators with coefficients in L. The linearized “system” R = h o m L ( Ω L / K , L ) is thus a (left) differential module for the Spencer operator like in the linear framework.

It is not evident to grasp these results in order to apply them to control theory or mathematical physics for two reasons. The first is that the formal theory of nonlinear systems has not been accepted by differential geometers because of the homological background based on the so-called “vertical machinery” and the systematic use of the Spencer δ -cohomology. The recent study of the Schwarzschild and Kerr metrics (Compare [

As we have already explained in ( [

When X is a manifold of dimension n, let us consider two fibered manifolds over X, namely E with local coordinates ( x , y ) and F with local coordinates ( x , z ) . The fibered product E × X F is a fibered manifold over X with local coordinates ( x , y , z ) and we have the canonical identification:

J q ( E × X F ) = J q ( E ) × X J q ( F )

with local coordinates ( x , y q , z q ) .

For most applications, we shall suppose that E = X × Y and F = X × Z .

DEFINITION 3.3.1: Let R q ⊂ J q ( E × X F ) be a nonlinear system of order q on E × X F called a differential correspondence between ( y , z ) . When r → ∞ , we may consider the resolvent systems P q + r ⊂ J q + r ( E ) for y and Q q + r ⊂ J q + r ( F ) for z, induced by the canonical projections of E × X F onto E and F respectively.

Roughly, finding P amounts to eliminating z while finding Q amounts to eliminating y, and we shall only consider the first problem as the second is similar.

· In the linear case, pushing y on the left and z on the right, we are left with the search of the CC for y or the CC for z that may be quite difficult. One of the best examples has been provided by M. Janet with the second order system (See [

$$y_{33} - x^{2}y_{11} = z^{1}, \qquad y_{22} = z^{2}$$

where y = f ( x ) can be given arbitrarily for getting z = g ( x ) while z = g ( x ) must satisfy one CC of order 3 and one CC of order 6.

· In the nonlinear case, we have ( [

THEOREM OF THE RESOLVENT SYSTEMS 3.3.2: In general, one may find two integers r , s ≥ 0 such that R q + r ( s ) is formally integrable (involutive) with formally integrable (involutive) projections P q + r ( s ) ⊂ J q + r ( E ) and Q q + r ( s ) ⊂ J q + r ( F ) . Moreover, r and s can be (tentatively) found by a finite algorithm preserving the symmetry existing between E and F .

Proof: First of all, we know that, in general, one can find the two integers r , s ≥ 0 in such a way that R q + r ( s ) is formally integrable (involutive). Hence, using the commutative and exact diagram:

$$\begin{array}{ccccc}
R_{q+r+s} & \longrightarrow & P_{q+r+s} & \longrightarrow & 0 \\
\big\downarrow & & \big\downarrow & & \\
R_{q+r}^{(s)} & \longrightarrow & P_{q+r}^{(s)} & \longrightarrow & 0 \\
\cap & & \cap & & \\
R_{q+r} & \longrightarrow & P_{q+r} & \longrightarrow & 0
\end{array}$$

we may suppose, without any loss of generality, that R q is formally integrable (involutive).

Now, chasing in the commutative diagram:

$$\begin{array}{ccc}
J_{s}(R_{q+r}) & \longrightarrow & J_{s}(J_{q+r}(E \times_{X} F)) \\
\nearrow \quad \big\downarrow & & \nearrow \quad \big\downarrow \\
R_{q+r+s} \;\longrightarrow\; J_{q+r+s}(E \times_{X} F) & & \\
J_{s}(P_{q+r}) & \longrightarrow & J_{s}(J_{q+r}(E)) \\
\nearrow & & \nearrow \\
P_{q+r+s} \;\longrightarrow\; J_{q+r+s}(E) & &
\end{array}$$

we obtain therefore P_{q+r+s} ⊆ ρ_{s}(P_{q+r}) = J_{s}(P_{q+r}) ∩ J_{q+r+s}(E) ⊂ J_{s}(J_{q+r}(E)), ∀r, s ≥ 0.

Then, chasing in the commutative diagram:

$$\begin{array}{ccccc}
R_{q+r+s} & \longrightarrow & P_{q+r+s} & \longrightarrow & 0 \\
\big\downarrow & & \big\downarrow & & \\
R_{q+r} & \longrightarrow & P_{q+r} & \longrightarrow & 0 \\
 & & \big\downarrow & & \\
 & & 0 & &
\end{array}$$

we notice that π_{q+r}^{q+r+s}: P_{q+r+s} → P_{q+r} is an epimorphism ∀r, s ≥ 0.

Finally, chasing in the commutative and exact diagram:

$$\begin{array}{ccccc}
0 \;\longrightarrow & P_{q+r+s} & \longrightarrow & \rho_{s}(P_{q+r}) & \longrightarrow \; 0 \\
 & \big\downarrow & & \big\downarrow & \\
0 \;\longrightarrow & P_{q+r} & = & P_{q+r} & \longrightarrow \; 0 \\
 & \big\downarrow & & & \\
 & 0 & & &
\end{array}$$

we deduce that each P_{q+r} is formally integrable at order q + r, ∀r ≥ 0, though not always formally integrable as we shall see on examples.

Looking at the symbol h of P, we have h_{q+r+s} ⊆ ρ_{s}(h_{q+r}) over P_{q+r+s}. According to standard noetherian arguments, such a situation stabilizes for r and s large enough, but such an approach is not constructive in general.

For this reason, we shall prefer a different approach which is closer to the one met in the case of linear differential correspondences. For this, if z = g(x) is an arbitrary section of F, we shall consider the new system for y defined by A_{q} = j_{q}(g)^{−1}(R_{q}) over K⟨g⟩. Such a system, which is in general neither involutive nor even formally integrable as we shall see on examples, may even fail to be compatible as it may not provide a fibered manifold, but this way may give information on the order of the OD or PD equations that should be satisfied by z. A similar procedure could be used by setting y = f(x) and introducing B_{q} = j_{q}(f)^{−1}(R_{q}) in order to obtain a system for z over K⟨f⟩.

□

Let us now turn to the differential algebraic counterpart.

DEFINITION 3.3.3: If K is a differential field and we have a differential algebraic correspondence defined by a prime differential ideal r ⊂ K { y , z } , we may define the resolvent system for y by the resolvent differential ideal p = r ∩ K { y } and the resolvent system for z by the resolvent differential ideal q = r ∩ K { z } .

LEMMA 3.3.4: The resolvent ideal for y is the prime differential resolvent ideal p = r ∩ K { y } for which one can find a differential basis. Similarly, the prime differential resolvent ideal for z is q = r ∩ K { z } .

Proof: We have the commutative and exact diagram:

$$\begin{array}{ccccccc}
 & 0 & & 0 & & 0 & \\
 & \big\downarrow & & \big\downarrow & & \big\downarrow & \\
0 \longrightarrow & p & \longrightarrow & K\{y\} & \longrightarrow & A & \longrightarrow 0 \\
 & \big\downarrow & & \big\downarrow & & \big\downarrow & \\
0 \longrightarrow & r & \longrightarrow & K\{y,z\} & \longrightarrow & B & \longrightarrow 0
\end{array}$$

First of all, B is an integral domain because r is a prime differential ideal. It follows from a chase that the induced morphism A → B is a monomorphism and A ≃ i m ( A ) ⊂ B is thus also an integral domain, a result showing that p is a prime differential ideal. It is essential to notice that projections of ideals cannot be used in the nonlinear framework. Hence, the idea is to reduce the study of differential algebraic correspondences to the study of purely algebraic correspondences.

□

We end this last section with a few basic motivating examples showing the importance of the non-commutative localization of integral domains for explicit computations and applications. We hope therefore that these examples could be used as test examples for future applications of computer algebra (Compare to [

EXAMPLE 3.3.5: With m = 1 , n = 2 , q = 2 , K = ℚ ( x 1 , x 2 ) while using local coordinates ( x 1 , x 2 , y ) for the fibered manifold E let us consider anew the nice example presented by J. Johnson in ( [

$$P_{1} \equiv y_{22} - x^{2}y = 0, \qquad P_{2} \equiv y_{12} - (y)^{2} = 0$$

We let the reader prove successively as an exercise that:

R_{2}^{(1)} adds 2yy_{2} − x^{2}y_{1} = 0.

R_{2}^{(2)} adds 2(y_{2})^{2} − y_{1} + x^{2}(y)^{2} = 0.

R_{2}^{(3)} adds 6x^{2}yy_{2} = 0 and thus y_{1} = 0.

R_{2}^{(4)} adds (y)^{2} = 0.

Accordingly, the prime ideal p_{2} ⊂ K[y, y_{1}, y_{2}, y_{11}, y_{12}, y_{22}] generated by the two given differential polynomials (P_{1}, P_{2}) is such that y ∈ rad(ρ_{4}(p_{2})) ⇒ rad(ρ_{∞}(p_{2})) = (y), a result not evident at first sight and leading to the trivial differential extension L = K. The linearization procedure is even less evident. Indeed, starting with the linearized second order system:

$$\delta P_{1} \equiv \eta_{22} - x^{2}\eta = 0, \qquad \delta P_{2} \equiv \eta_{12} - 2y\eta = 0$$

we let the reader prove that we successively get:

R_{2}^{(1)} adds 2yη_{2} − x^{2}η_{1} + 2y_{2}η = 0.

R_{2}^{(2)} adds 4y_{2}η_{2} − η_{1} + 2x^{2}yη = 0.

R_{2}^{(3)} adds η_{1} = 0.

R_{2}^{(4)} adds yη = 0, but one cannot conclude.

Such an example is proving that, in general, one must start from a formally integrable or even involutive system in order to be able to define the module of Kähler differentials for the differential extension L/K.

EXAMPLE 3.3.6: (Burgers) With n = 2 , m = 2 , local coordinates ( x 1 , x 2 , y , z ) and differential field K = ℚ , let us consider the algebraic first order involutive system R 1 defined by two differential algebraic PD equations:

$$\left\{\begin{array}{l}
z_{2} - y = 0 \\
y_{2} - z_{1} + (y)^{2} = 0
\end{array}\right. \qquad
\begin{array}{cc} 1 & 2 \\ 1 & 2 \end{array}$$

These two differential polynomials generate a prime differential ideal r ⊂ K { y , z } and provide thus a differential extension N/K. Indeed, K [ y , y 1 , y 2 ; z , z 1 , z 2 ] / r 1 ≃ K [ y , y 1 ; z , z 1 ] is an integral domain and r 1 is a prime ideal. Then, using one prolongation, we may introduce the following second order system R 2 = ρ 1 ( R 1 ) :

$$\left\{\begin{array}{l}
z_{22} - y_{2} = 0 \\
y_{22} + 2yy_{2} - y_{1} = 0 \\
z_{12} - y_{1} = 0 \\
y_{12} - z_{11} + 2yy_{1} = 0 \\
z_{2} - y = 0 \\
y_{2} - z_{1} + (y)^{2} = 0
\end{array}\right. \qquad
\begin{array}{cc}
1 & 2 \\ 1 & 2 \\ 1 & \bullet \\ 1 & \bullet \\ \bullet & \bullet \\ \bullet & \bullet
\end{array}$$

and use the Janet tabular to prove that it is a nonlinear involutive system. It follows that K[y, ⋯, y_{22}; z, ⋯, z_{22}]/r_{2} ≃ K[y, y_{1}, y_{11}; z, z_{1}, z_{11}] is an integral domain and r_{2} is a prime ideal. Thanks to Theorem 3.2.13, we finally obtain K{y, z}/r ≃ K[y, y_{1}, y_{11}, ⋯; z, z_{1}, z_{11}, ⋯] which is also an integral domain. It follows that p = r ∩ K{y} and q = r ∩ K{z} are prime differential ideals.

Taking now any section y = f ( x ) , we obtain the system B 1 = j 1 ( f ) − 1 ( R 1 ) for z:

$$\left\{\begin{array}{l}
z_{2} - f(x) = 0 \\
z_{1} - \partial_{2}f(x) - (f(x))^{2} = 0
\end{array}\right.$$

and its first prolongation B 2 = j 2 ( f ) − 1 ( R 2 ) for z:

$$\left\{\begin{array}{l}
z_{22} - \partial_{2}f(x) = 0 \\
z_{12} - \partial_{1}f(x) = 0 \\
z_{11} - \partial_{12}f(x) - 2f(x)\partial_{1}f(x) = 0 \\
z_{2} - f(x) = 0 \\
z_{1} - \partial_{2}f(x) - (f(x))^{2} = 0 \\
\partial_{22}f(x) + 2f(x)\partial_{2}f(x) - \partial_{1}f(x) = 0
\end{array}\right.$$

First of all, this is a fibered manifold if and only if f is solution of the second order system P 2 defined by the single second order PD equation:

$$y_{22} + 2yy_{2} - y_{1} = 0$$

which is the resolvent system for y generating the prime differential ideal p ⊂ K { y } allowing to define a differential extension L = Q ( K { y } / p ) of K and we have p = r ∩ K { y } ⇒ L ⊂ N .

We are thus left with the first order (nonlinear) system for z:

$$\left\{\begin{array}{l}
z_{2} - f(x) = 0 \\
z_{1} - \partial_{2}f(x) - (f(x))^{2} = 0
\end{array}\right. \qquad
\begin{array}{cc} 1 & 2 \\ 1 & 2 \end{array}$$

which is easily seen to be involutive for any solution y = f ( x ) of P 2 .

Taking finally any section z = g ( x ) , we obtain the system A 1 = j 1 ( g ) − 1 ( R 1 ) :

$$\left\{\begin{array}{l}
y_{2} + (y)^{2} - \partial_{1}g(x) = 0 \\
y - \partial_{2}g(x) = 0
\end{array}\right.$$

and the projection A_{1}^{(1)} of its first prolongation A_{2} = j_{2}(g)^{−1}(R_{2}):

$$\left\{\begin{array}{l}
y_{2} + (y)^{2} - \partial_{1}g = 0 \\
y_{1} - \partial_{12}g = 0 \\
y - \partial_{2}g = 0 \\
\partial_{22}g(x) + (\partial_{2}g(x))^{2} - \partial_{1}g(x) = 0
\end{array}\right.$$

is compatible if and only if g is a solution of the second order system Q_{2} defined by the single second order PD equation obtained after substitution of y = ∂_{2}g(x):

$$z_{22} + (z_{2})^{2} - z_{1} = 0$$

which is the resolvent system for z generating the prime differential ideal q = r ∩ K { z } allowing to define a differential extension M = Q ( K { z } / q ) of K and we have q = r ∩ K { z } ⇒ M ⊂ N .

We are thus left with the only zero order (linear) equation for y, namely:

$$y - \partial_{2}g(x) = 0$$

for any solution z = g ( x ) of Q 2 . The differential correspondence that must be used is thus R 2 .

Both L, M and N are differential algebraic extensions of K of zero differential transcendence degree.
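The link between the two resolvent systems of this example can be verified mechanically: the residual of P_{2} at y = ∂_{2}g is exactly the x^{2}-derivative of the residual of Q_{2} at z = g. The following sympy sketch is our own illustration, not part of the text:

```python
# Sketch (sympy): in the Burgers correspondence, if z = g(x) solves
# Q2 (z22 + (z2)^2 - z1 = 0), then y = d2 g solves P2 (y22 + 2y y2 - y1 = 0).
from sympy import Function, symbols, diff, simplify

x1, x2 = symbols('x1 x2')
g = Function('g')(x1, x2)

Q = diff(g, x2, 2) + diff(g, x2)**2 - diff(g, x1)   # residual of Q2 at z = g
y = diff(g, x2)                                     # zero order equation y = d2 g
P = diff(y, x2, 2) + 2*y*diff(y, x2) - diff(y, x1)  # residual of P2 at this y

# the residual of P2 is exactly the x2-derivative of the residual of Q2
assert simplify(P - diff(Q, x2)) == 0
```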

EXAMPLE 3.3.7: (Korteweg-de Vries) With the same notations, we let the reader provide the details of the following similar example with the second order nonlinear differential correspondence R_{2}:

$$z_{2} + (z)^{2} + 2y = 0, \qquad y_{22} + 2y(z)^{2} + 4(y)^{2} - 2zy_{2} - \frac{1}{2}z_{1} = 0$$

by exhibiting the nonlinear formally integrable involutive system R_{2}^{(1)}:

$$\left\{\begin{array}{l}
z_{22} + 2zz_{2} + 2y_{2} = 0 \\
y_{22} + 2y(z)^{2} + 4(y)^{2} - 2zy_{2} - \frac{1}{2}z_{1} = 0 \\
z_{12} + 2zz_{1} + 2y_{1} = 0 \\
z_{2} + (z)^{2} + 2y = 0
\end{array}\right. \qquad
\begin{array}{cc} 1 & 2 \\ 1 & 2 \\ 1 & \bullet \\ \bullet & \bullet \end{array}$$

for ( y , z ) such that R 3 ( 1 ) = ρ 1 ( R 2 ( 1 ) ) according to Theorem 3.1.13, with characters α 2 1 = 3 , α 2 2 = 0 (Compare to [

Taking now any section y = f ( x ) , we obtain the system B 2 = j 2 ( f ) − 1 ( R 2 ( 1 ) ) for z which is the first prolongation of the first order (nonlinear) system B 1 defined by:

$$\left\{\begin{array}{l}
z_{2} + (z)^{2} + 2f(x) = 0 \\
z_{1} + 4\partial_{2}f(x)\,z - 4f(x)(z)^{2} - 8(f(x))^{2} - 2\partial_{22}f(x) = 0
\end{array}\right. \qquad
\begin{array}{cc} 1 & 2 \\ 1 & \bullet \end{array}$$

Using crossed derivatives and tedious but elementary substitutions, this system is involutive if and only if y = f ( x ) is a solution of the third order involutive resolvent system P 3 ( 1 ) for y:

$$y_{222} + y_{1} + 12yy_{2} = 0$$

Similarly, taking now any section z = g ( x ) , we obtain the system A 2 = j 2 ( g ) − 1 ( R 2 ( 1 ) ) defined by:

$$\left\{\begin{array}{l}
2y_{22} + 4y(g(x))^{2} + 8(y)^{2} - 4g(x)y_{2} - \partial_{1}g(x) = 0 \\
2y_{2} + \partial_{22}g(x) + 2g(x)\partial_{2}g(x) = 0 \\
2y_{1} + \partial_{12}g(x) + 2g(x)\partial_{1}g(x) = 0 \\
2y + \partial_{2}g(x) + (g(x))^{2} = 0
\end{array}\right.$$

Differentiating the second equation with respect to x^{2} and subtracting the first while using the other equations, we discover that this system is compatible if and only if z = g(x) is a solution of the third order involutive resolvent system Q_{3}^{(1)} for z:

$$z_{222} + z_{1} - 6(z)^{2}z_{2} = 0$$

It follows that we are left with a single zero order equation for y, namely:

$$2y + \partial_{2}g(x) + (g(x))^{2} = 0$$

for any solution z = g(x) of Q_{3}^{(1)}. The differential correspondence that must be used is thus R_{3}^{(1)}.
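The zero order equation 2y + ∂_{2}g(x) + (g(x))^{2} = 0 is a Miura-type transformation between the two resolvent systems: the residual of P_{3}^{(1)} at y = −(∂_{2}g + g^{2})/2 factors through the residual of Q_{3}^{(1)}. The following sympy sketch (our own illustration, not part of the text) verifies the factorization E = −(1/2)(∂_{2} + 2g)R:

```python
# Sketch (sympy): in the KdV correspondence, with y = -(d2 g + g^2)/2,
# the residual E of P3(1) equals -(1/2)(d2 + 2g) applied to the residual R
# of Q3(1), so every solution of Q3(1) yields a solution of P3(1).
from sympy import Function, symbols, diff, simplify, Rational

x1, x2 = symbols('x1 x2')
g = Function('g')(x1, x2)

R = diff(g, x2, 3) + diff(g, x1) - 6*g**2*diff(g, x2)  # residual of Q3(1) at z = g
y = -Rational(1, 2)*(diff(g, x2) + g**2)               # from 2y + d2 g + g^2 = 0
E = diff(y, x2, 3) + diff(y, x1) + 12*y*diff(y, x2)    # residual of P3(1)

assert simplify(E + Rational(1, 2)*(diff(R, x2) + 2*g*R)) == 0
```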

EXAMPLE 3.3.8: With n = 1, m = 2, K = ℚ, let us consider the single input/single output (SISO) nonlinear control system P ≡ y^{1}y_{x}^{2} − y_{x}^{1} − a = 0 with a constant parameter a ∈ K. The differential ideal p ⊂ K{y^{1}, y^{2}} = K{y} generated by P is prime because K{y}/p = K[y^{1}; y^{2}, y_{x}^{2}, ⋯] is an integral domain and we set as usual L = Q(K{y}/p). The corresponding linearized system is y^{1}η_{x}^{2} + y_{x}^{2}η^{1} − η_{x}^{1} = 0. Multiplying by a test function λ and integrating by parts, the adjoint operator is:

$$\left\{\begin{array}{l}
\eta^{1} \;\rightarrow\; d\lambda + y_{x}^{2}\lambda = \mu_{1} \\
\eta^{2} \;\rightarrow\; -y^{1}d\lambda - y_{x}^{1}\lambda = \mu_{2}
\end{array}\right.$$

Multiplying the first OD equation by y^{1}, the second by 1 and adding them, we get aλ = y^{1}μ_{1} + μ_{2}. As L[d] is a principal ideal domain, it follows that M = Ω_{L/K} is a torsion-free and thus free differential module over L[d] if and only if this operator is injective ( [

If a = 0 , then t ( M ) is generated by ω = y 1 η 2 − η 1 = y 1 δ y 2 − δ y 1 satisfying:

$$d\omega = y^{1}\delta y_{x}^{2} + y_{x}^{1}\delta y^{2} - \delta y_{x}^{1} = \delta(y^{1}y_{x}^{2}) + y_{x}^{2}\omega - \delta y_{x}^{1} = y_{x}^{2}\omega \;\Rightarrow\; d\omega - y_{x}^{2}\omega = 0$$

As δω = δy^{1} ∧ δy^{2}, we obtain ω ∧ δω = 0 and one can thus use the analytic Frobenius theorem with integrating factor y^{1} in order to get ω = y^{1}δ(y^{2} − log(y^{1})).

If a ≠ 0 , say a = 1 , we have λ = y 1 μ 1 + μ 2 and obtain the only CC:

$$y^{1}d\mu_{1} + 2y_{x}^{1}\mu_{1} + d\mu_{2} + y_{x}^{2}\mu_{2} = 0$$

Multiplying by a test function ξ and integrating by parts, we obtain the parametrization:

$$-y^{1}d\xi + y_{x}^{1}\xi = \eta^{1}, \qquad -d\xi + y_{x}^{2}\xi = \eta^{2}$$

which is injective with potential ξ = y 1 η 2 − η 1 .
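That this parametrization satisfies the linearized system identically can be checked with sympy: the defect equals ξ times the x-derivative of y^{1}y_{x}^{2} − y_{x}^{1} = a, which vanishes because a is constant. The sketch below is our own illustration, not part of the text:

```python
# Sketch (sympy): the parametrization of Example 3.3.8 (a = 1) satisfies
# the linearized equation y1*d(eta2) + d(y2)*eta1 - d(eta1) = 0 on solutions,
# since the defect is xi times the x-derivative of y1*y2_x - y1_x = a.
from sympy import Function, symbols, diff, simplify

x = symbols('x')
y1, y2, xi = Function('y1')(x), Function('y2')(x), Function('xi')(x)

eta1 = -y1*diff(xi, x) + diff(y1, x)*xi
eta2 = -diff(xi, x) + diff(y2, x)*xi

defect = y1*diff(eta2, x) + diff(y2, x)*eta1 - diff(eta1, x)
assert simplify(defect - xi*diff(y1*diff(y2, x) - diff(y1, x), x)) == 0
```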

EXAMPLE 3.3.9: With n = 1 , m = 3 , K = ℚ , let us consider the first order nonlinear system ( [

$$P \equiv 2y_{x}^{3} + (y_{x}^{2})^{2} - (y_{x}^{1})^{2} = 0$$

The differential ideal p is prime because K{y}/p = K[y^{1}, y_{x}^{1}, ⋯; y^{2}, y_{x}^{2}, ⋯; y^{3}] is an integral domain and we define as usual the differential extension L = Q(K{y}/p).

Setting $\delta y = \eta$ and dividing by $2$, the linearized system becomes:

$$ \eta_x^3 + y_x^2\eta_x^2 - y_x^1\eta_x^1 = 0 $$

Multiplying as usual by the Lagrange multiplier $\lambda$ and integrating by parts, we get the adjoint operator with $d = d_x$:

$$\begin{cases} \eta^1 \;\rightarrow\; y_x^1 d\lambda + y_{xx}^1\lambda = \mu^1 \\ \eta^2 \;\rightarrow\; -y_x^2 d\lambda - y_{xx}^2\lambda = \mu^2 \\ \eta^3 \;\rightarrow\; -d\lambda = \mu^3 \end{cases}$$

which is injective with the two CC:

$$ \frac{\mu^1 + y_x^1\mu^3}{y_{xx}^1} + \frac{\mu^2 - y_x^2\mu^3}{y_{xx}^2} = 0, \qquad d\left(\frac{\mu^1 + y_x^1\mu^3}{y_{xx}^1}\right) + \mu^3 = 0 $$

It follows that $M = \Omega_{L/K}$ is a torsion-free differential module over $L[d]$, which is thus also free because any torsion-free module over a principal ideal ring is free ([ ]). Multiplying the two CC by test functions $\xi^1$ and $\xi^2$ respectively and integrating by parts, we obtain the parametrization:

$$\begin{cases} \mu^1 \;\rightarrow\; -\dfrac{1}{y_{xx}^1}\, d\xi^2 + \dfrac{1}{y_{xx}^1}\,\xi^1 = \eta^1 \\ \mu^2 \;\rightarrow\; \dfrac{1}{y_{xx}^2}\,\xi^1 = \eta^2 \\ \mu^3 \;\rightarrow\; -\dfrac{y_x^1}{y_{xx}^1}\, d\xi^2 + \left(\dfrac{y_x^1}{y_{xx}^1} - \dfrac{y_x^2}{y_{xx}^2}\right)\xi^1 + \xi^2 = \eta^3 \end{cases}$$

This parametrization is injective because $\xi^1 = y_{xx}^2\eta^2$, $\xi^2 = \eta^3 + y_x^2\eta^2 - y_x^1\eta^1$. Hence, we can replace any solution $(\eta^1, \eta^2, \eta^3)$ of the linearized system by an arbitrary couple $(\xi^1, \xi^2)$, a result not evident at first sight.

Finally, considering the two parametrizing vertical 1-forms:

$$ \omega^1 = \frac{1}{y_{xx}^2}\,\xi^1 = \eta^2 = \delta y^2, \qquad \omega^2 = \delta y^3 + y_x^2\,\delta y^2 - y_x^1\,\delta y^1 $$

we have:

$$ \delta\omega^1 = 0 \;\Rightarrow\; \omega^1\wedge\omega^2\wedge\delta\omega^1 = 0, \qquad \omega^1\wedge\omega^2\wedge\delta\omega^2 = \delta y^1\wedge\delta y^2\wedge\delta y^3\wedge\delta y_x^1 \neq 0 $$

so that we cannot use the Frobenius theorem in order to integrate this vertical exterior system.

According to what has been said, the linear and the nonlinear systems are both controllable. In particular, if the nonlinear system were not controllable, there would exist at least one autonomous element in $L$, constrained by at least one OD equation, and the linearization of such an element would produce a torsion element in $M$. The striking feature of this example is that one can prove that $L$ is a purely differentially transcendental extension of the ground field $K$. Indeed, we may rewrite the system as:

$$ d\left(2y^3 - (y^1 - y^2)\,d(y^1 + y^2)\right) + (y^1 - y^2)\,d^2(y^1 + y^2) = 0 $$

Setting:

$$ z^1 = 2y^3 - (y^1 - y^2)(y_x^1 + y_x^2), \quad z^2 = y^1 + y^2 \;\Rightarrow\; -\frac{z_x^1}{z_{xx}^2} = y^1 - y^2 $$

we obtain the second order nonlinear parametrization:

$$ 2y^1 = -\frac{z_x^1}{z_{xx}^2} + z^2, \qquad 2y^2 = \frac{z_x^1}{z_{xx}^2} + z^2, \qquad 2y^3 = -\frac{z_x^1 z_x^2}{z_{xx}^2} + z^1 $$

and thus $L = K\langle z^1, z^2\rangle$. Hence, introducing $\zeta^1 = \delta z^1$, $\zeta^2 = \delta z^2$, we can similarly replace any solution $(\eta^1, \eta^2, \eta^3)$ of the linearized system by an arbitrary couple $(\zeta^1, \zeta^2)$, a result even less evident at first sight. The new parametrization is also injective but, contrary to the previous situation, we now have $\bar\omega = \zeta = \delta z \Rightarrow \delta\bar\omega = 0$. As a (difficult) exercise of formal integrability, we let the reader prove that the second order $2 \times 2$ operator matrix $\xi \rightarrow \zeta$ is an isomorphism (See [ ]).
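The second order parametrization can likewise be tested on an explicit solution. The sketch below is our own; the sample solution $y^1 = x^2$, $y^2 = x$, $y^3 = 2x^3/3 - x/2$ is an assumption, chosen to satisfy $P = 0$. It verifies the three formulas with exact rational arithmetic:

```python
# Sanity check (not in the original paper) of the second order parametrization
#   2y1 = -z1_x/z2_xx + z2,  2y2 = z1_x/z2_xx + z2,  2y3 = -z1_x*z2_x/z2_xx + z1
# on the sample solution y1 = x^2, y2 = x, y3 = 2x^3/3 - x/2 of
# 2*y3' + (y2')^2 - (y1')^2 = 0, using exact rational arithmetic.
from fractions import Fraction as F

def check(x):
    x = F(x)
    y1, y2, y3 = x**2, x, F(2, 3)*x**3 - x/2
    y1x, y2x, y3x = 2*x, F(1), 2*x**2 - F(1, 2)
    assert 2*y3x + y2x**2 - y1x**2 == 0          # the system P = 0 holds
    z1 = 2*y3 - (y1 - y2)*(y1x + y2x)            # = -2x^3/3 + x^2
    z2 = y1 + y2
    z1x = -2*x**2 + 2*x                          # derivatives computed by hand
    z2x, z2xx = 2*x + 1, F(2)
    assert 2*y1 == -z1x/z2xx + z2
    assert 2*y2 == z1x/z2xx + z2
    assert 2*y3 == -z1x*z2x/z2xx + z1
    return True

assert all(check(x) for x in (1, 2, -3, F(1, 2)))
print("round trip checked")
```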

EXAMPLE 3.3.10: With $n = 2$, $m = 3$, $q = 1$, $K = \mathbb{Q}$, let us consider the differential ideal $\mathfrak{p}$ generated by the two differential polynomials:

$$ P^1 \equiv y_2^1 - y^3 y_1^1 = 0, \qquad P^2 \equiv y_2^2 - y^3 y_1^2 = 0 $$

The corresponding system is easily seen to be involutive and we have:

$$ K\{y^1, y^2, y^3\}/\mathfrak{p} = K\{y^3\}[y^1, y_1^1, \cdots; y^2, y_1^2, \cdots] $$

as an integral domain; $\mathfrak{p}$ is thus a prime differential ideal, allowing to define the differential field $L = Q(K\{y\}/\mathfrak{p})$ with differential transcendence basis $K\langle y^3\rangle \subset L$.

The linearized system $\mathcal{D}_1\eta = 0$ is:

$$ d_2\eta^1 - y^3 d_1\eta^1 - y_1^1\eta^3 = 0, \qquad d_2\eta^2 - y^3 d_1\eta^2 - y_1^2\eta^3 = 0 $$

Multiplying the first by $\lambda^1$, the second by $\lambda^2$ and integrating by parts, we obtain the adjoint operator $ad(\mathcal{D}_1)\lambda = \mu$:

$$\begin{cases} \eta^1 \;\rightarrow\; -d_2\lambda^1 + y^3 d_1\lambda^1 + y_1^3\lambda^1 = \mu^1 \\ \eta^2 \;\rightarrow\; -d_2\lambda^2 + y^3 d_1\lambda^2 + y_1^3\lambda^2 = \mu^2 \\ \eta^3 \;\rightarrow\; -y_1^1\lambda^1 - y_1^2\lambda^2 = \mu^3 \end{cases}$$

However, this operator is not involutive because it is not even formally integrable. Nevertheless, adding the first order PD equation obtained by prolonging the zero order equation with respect to $x^1$, we obtain:

$$\begin{cases} \eta^1 \;\rightarrow\; -d_2\lambda^1 + y^3 d_1\lambda^1 + y_1^3\lambda^1 = \mu^1 \\ \eta^2 \;\rightarrow\; -d_2\lambda^2 + y^3 d_1\lambda^2 + y_1^3\lambda^2 = \mu^2 \\ \phantom{\eta^3 \;\rightarrow\;} -y_1^1 d_1\lambda^1 - y_1^2 d_1\lambda^2 - y_{11}^1\lambda^1 - y_{11}^2\lambda^2 = d_1\mu^3 \\ \eta^3 \;\rightarrow\; -y_1^1\lambda^1 - y_1^2\lambda^2 = \mu^3 \end{cases}$$

We obtain the unique generating first order CC $ad(\mathcal{D})\mu = 0$, namely:

$$ d_2\mu^3 - y^3 d_1\mu^3 - y_1^1\mu^1 - y_1^2\mu^2 - 2 y_1^3\mu^3 = 0 $$

Multiplying this CC by the test function $\xi$ and integrating by parts, we get $\mathcal{D}\xi = \eta$ over $L$:

$$\begin{cases} \mu^1 \;\rightarrow\; -y_1^1\xi = \eta^1 \\ \mu^2 \;\rightarrow\; -y_1^2\xi = \eta^2 \\ \mu^3 \;\rightarrow\; -d_2\xi + y^3 d_1\xi - y_1^3\xi = \eta^3 \end{cases}$$

Substituting, we check the two CC described by $\mathcal{D}_1\eta = 0$ plus an additional zero order CC providing the torsion element $\omega = y_1^1\,\delta y^2 - y_1^2\,\delta y^1$ generating $ext^1(M)$, as we have indeed $d_2\omega - y^3 d_1\omega - y_1^3\omega = 0$. We let the reader check that $\omega\wedge\delta\omega \neq 0$. It follows that $\omega$ cannot admit any integrating factor according to the Frobenius theorem.
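For completeness, here is the routine verification of the torsion relation (added for the reader's convenience), using the prolongations $y_{12}^k = y_1^3 y_1^k + y^3 y_{11}^k$ and the linearized relations $\delta y_2^k = y^3\,\delta y_1^k + y_1^k\,\delta y^3$ of $P^k = 0$ for $k = 1, 2$:

```latex
d_2\omega
 = y_{12}^1\,\delta y^2 + y_1^1\,\delta y_2^2 - y_{12}^2\,\delta y^1 - y_1^2\,\delta y_2^1
 = y_1^3\left(y_1^1\,\delta y^2 - y_1^2\,\delta y^1\right)
 + y^3\left(y_{11}^1\,\delta y^2 + y_1^1\,\delta y_1^2
          - y_{11}^2\,\delta y^1 - y_1^2\,\delta y_1^1\right)
 = y_1^3\,\omega + y^3 d_1\omega
```

the terms in $\delta y^3$ cancelling pairwise.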

Eliminating $y^3$, we are left with $(y^1, y^2)$ and the nonlinear system $y_1^1 y_2^2 - y_2^1 y_1^2 = 0$ with the same conclusions as before. On the contrary, we let the reader check that the module of Kähler differentials is torsion-free for the nonlinear system $y_1^1 y_2^2 - y_2^1 y_1^2 = 1$.

The author of this paper prepared his PhD thesis under Prof. A. Lichnerowicz and collaborated with him until his death in 1998; Lichnerowicz became more and more convinced that the variational origin of mathematical physics (elasticity, electromagnetism, general relativity) through the corresponding Euler-Lagrange equations was a kind of “screen” hiding a more important concept describing the “duality” existing between “fields” and “inductions”. After the author discovered in 1995 the impossibility of parametrizing the Einstein operator, thereby answering negatively the challenge proposed by J. Wheeler in 1970, he also noticed that, in control theory, “a control system is controllable if and only if it is parametrizable” and that the “screen” is precisely the “differential double duality” involved in differential homological algebra through the use of the “extension modules” (See Zbl 1079.93001). Hence it remained to study the use of the formal adjoint in the commutative or even noncommutative situation met when linearizing nonlinear systems of OD or PD equations. Among the most useful examples, one has the following differential sequence from Riemannian geometry, indicating below the fiber dimensions of the vector bundles involved, with $F_0 = S_2T^*$, together with the orders of the differential operators:

$$ 0 \rightarrow \Theta \rightarrow T \xrightarrow[1]{\;Killing\;} F_0 \xrightarrow[2]{\;Riemann\;} F_1 \xrightarrow[1]{\;Bianchi\;} F_2 \rightarrow \cdots $$

$$ 0 \rightarrow \Theta \rightarrow n \rightarrow n(n+1)/2 \rightarrow n^2(n^2-1)/12 \rightarrow n^2(n^2-1)(n-2)/24 \rightarrow \cdots $$

where $\Theta$ is the sheaf of Killing vector fields for the Euclidean metric when $n = 3$ in elasticity or the Minkowski metric when $n = 4$ in general relativity. Defining the adjoint operators:

$$ Cauchy = ad(Killing), \qquad Beltrami = ad(Riemann), \qquad Lanczos = ad(Bianchi) $$

one discovers that Lanczos was in fact dreaming of constructing the adjoint differential sequence:

$$ 0 \leftarrow ad(T) \xleftarrow[1]{\;Cauchy\;} ad(F_0) \xleftarrow[2]{\;Beltrami\;} ad(F_1) \xleftarrow[1]{\;Lanczos\;} ad(F_2) \leftarrow \cdots $$

where $ad(E) = \wedge^n T^* \otimes E^*$ for any vector bundle $E$, $E^*$ being obtained from $E$ by inverting the transition rules when changing local coordinates. Accordingly, the only true problem left was to prove that each operator is indeed parametrized by the preceding one in both sequences, a highly non-evident fact ([ ]). One has also the differential sequence:

$$ T^* \xrightarrow[1]{\;d\;} \wedge^2 T^* \xrightarrow[1]{\;d\;} \wedge^3 T^* $$

$$ n \rightarrow n(n-1)/2 \rightarrow n(n-1)(n-2)/6 $$

with $dA = F \Rightarrow dF = 0$ for electromagnetism when $n = 4$, where $A$ is the EM potential and $F$ the EM field, describing the first Maxwell operator and its parametrization by the EM potential.
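As a quick arithmetic check of the fiber dimensions quoted in both sequences (a sketch added here, not in the original):

```python
# Check the fiber dimension formulas of the Killing resolution and of the
# Poincare (de Rham) part used for electromagnetism.

def killing_dims(n):
    # fiber dimensions of T, F0, F1, F2
    return (n, n*(n+1)//2, n*n*(n*n-1)//12, n*n*(n*n-1)*(n-2)//24)

def poincare_dims(n):
    # fiber dimensions of T*, Lambda^2 T*, Lambda^3 T*
    return (n, n*(n-1)//2, n*(n-1)*(n-2)//6)

assert killing_dims(4) == (4, 10, 20, 20)   # general relativity, n = 4
assert killing_dims(3) == (3, 6, 6, 3)      # elasticity, n = 3
assert poincare_dims(4) == (4, 6, 4)        # electromagnetism, n = 4
print("dimension counts checked")
```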

The adjoint sequence:

$$ ad(T^*) \xleftarrow{\;ad(d)\;} ad(\wedge^2 T^*) \xleftarrow{\;ad(d)\;} ad(\wedge^3 T^*) $$

which is, because $n = 4$, locally isomorphic through Hodge duality to the Poincaré sequence:

$$ \wedge^3 T^* \xleftarrow{\;d\;} \wedge^2 T^* \xleftarrow{\;d\;} T^* $$

is used for the EM induction and the second Maxwell operator, together with its parametrization by the so-called EM pseudopotential. In both situations, there is no need to appeal to variational calculus, which is only used for exhibiting the respective constitutive laws. Meanwhile, we point out the usefulness of the formal adjoint in the study of differential duality, used as a substitute for the operational/symbolic calculus in the classical sense of Heaviside and Mikusinski. We finally hope that the many tricky examples presented in this paper will open new domains of research in mathematical linear or nonlinear control theory and will be used later on as test examples for computer algebra.

The author declares no conflicts of interest regarding the publication of this paper.

Pommaret, J.-F. (2021) Differential Correspondences and Control Theory. Advances in Pure Mathematics, 11, 835-882. https://doi.org/10.4236/apm.2021.1111056