A-Equation and Its Connections to Nonlinear Integrable System

Abstract

A novel approach to inverse spectral theory for Schrödinger operators on a half-line was introduced by Barry Simon and has been actively studied in the recent literature. The remarkable discovery is a new object, the A-function, together with the integro-differential equation (called the A-equation) that it satisfies. The inverse problem of reconstructing the potential is then directly connected to finding solutions of the A-equation. In this work, we present a large class of exact solutions to the A-equation and reveal a connection to a class of arbitrarily large systems of nonlinear ordinary differential equations. This nonlinear system turns out to be C-integrable in the sense of F. Calogero. An integration scheme is proposed and the approach is illustrated in several examples.


1. Introduction

Several years ago, Barry Simon investigated a new approach to inverse spectral theory for the half-line Schrödinger operator $-\frac{d^2}{dx^2} + q(x)$ in $L^2(0,\infty)$ in [3].

The A-function, introduced in [1] [2] [3], is related to the Weyl-Titchmarsh function by the following relation:

$$m(x, -\kappa^2) = -\kappa - \int_0^\infty A(\alpha, x)\,e^{-2\alpha\kappa}\,d\alpha \qquad (1)$$

where $A(\,\cdot\,, x) \in L^1(0, a)$ for all $a$.

In [3], the key discovery is that $A(\alpha, x)$ satisfies the following integro-differential equation:

$$\frac{\partial A}{\partial x}(\alpha, x) = \frac{\partial A}{\partial \alpha}(\alpha, x) + \int_0^\alpha A(\beta, x)\,A(\alpha - \beta, x)\,d\beta. \qquad (2)$$

Given the fact that

$$\lim_{\alpha \downarrow 0} A(\alpha, x) = q(x) \qquad (3)$$

(at least in the $L^1$ sense), $q(x)$ can be determined directly from $A(\alpha, x)$. And $A(\alpha, x)$ can be calculated from $A(\alpha, 0)$ (which is essentially the inverse Laplace transform of the data) by solving an equation which does not involve $q(x)$. Thus the inverse problem of determining $q$ from $m$ becomes the problem of solving the integro-differential Equation (2). Properties of (2) are discussed in [4] [5] [6] [7]. To construct numerical solvers for this integro-differential equation, one needs sets of exact analytic solutions to test against.
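To make the last point concrete, here is a minimal sketch (ours, not from the original paper) of the most naive marching scheme suggested by (2): since the equation transports data along the characteristics $\alpha + x = \text{const}$, one can step $A(\alpha, x + h) \approx A(\alpha + h, x) + h\,(A * A)(\alpha, x)$, where $*$ denotes convolution in $\alpha$. The function name and discretization choices below are our own assumptions.

```python
import numpy as np

def march_A(A0, h, steps):
    """Naive first-order marching for the A-equation (2).

    A0    : samples of A(alpha, 0) on the grid alpha = 0, h, 2h, ...
    h     : grid spacing, used for both alpha and x, so the shift
            A(alpha, x + h) ~ A(alpha + h, x) follows the characteristic exactly
    steps : number of x-steps; the usable alpha-range shrinks by one
            sample per step because of the shift.
    Returns the list of arrays A(., k*h), k = 0, ..., steps.
    """
    out = [np.asarray(A0, dtype=float)]
    for _ in range(steps):
        A = out[-1]
        n = len(A)
        # rectangle rule for the convolution (A*A)(alpha) = int_0^alpha A(b) A(alpha-b) db
        conv = np.convolve(A, A)[:n] * h
        # shift along the characteristic, then add the nonlinear term
        out.append(A[1:] + h * conv[:-1])
    return out

# By (3), the potential is then read off as q(k*h) = A_k[0]:
# q = [Ak[0] for Ak in march_A(A0, h, steps)]
```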

In this paper, we study a larger class of analytic solutions of (2), which is of the form

$$A(\alpha, x) = \sum_{j=1}^n f_j(x)\,e^{-2\alpha\gamma_j(x)}. \qquad (4)$$

This ansatz is motivated by the explicit example in [1], where $A(\alpha, 0)$ is calculated for Bargmann potentials using inverse scattering theory (which is valid only under restrictive assumptions). Our aim is to determine the behavior of such solutions for all $(\alpha, x)$, and to do so using only (2).

Substituting (4) in (2), we find that $f_j(x) = -2\gamma_j'(x)$, and that the $\gamma_j$ satisfy the nonlinear equations:

$$\gamma_j'' = -2\gamma_j\gamma_j' + \sum_{l\neq j}\frac{2\gamma_j'\gamma_l'}{\gamma_j - \gamma_l}, \quad 1 \le j \le n. \qquad (5)$$

We then give a method for solving (5) explicitly in Section 3. The idea is to introduce new variables $c_j$, the elementary symmetric functions of the $\gamma_j$ ($j = 1, \ldots, n$), that is, $c_j = \sum_{i_1 < i_2 < \cdots < i_j}\gamma_{i_1}\gamma_{i_2}\cdots\gamma_{i_j}$. Via this “change of variables”, (5) yields a new nonlinear system:

$$2c_j' = 2c_1'c_{j-1} + c_{j-1}'', \quad 1 \le j \le n+1, \qquad c_0 = 1, \quad c_j = 0 \text{ when } j > n. \qquad (6)$$
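As a quick independent check of this change of variables (Proposition 4 below makes it precise), one can verify symbolically, for a small $n$, that any solution of (5) produces a solution of (6). The following sympy sketch (ours, not from the paper) does this for $n = 3$:

```python
import itertools
import sympy as sp

x = sp.symbols('x')
n = 3
g = [sp.Function(f'g{j}')(x) for j in range(n)]

def rhs(j):
    # right-hand side of system (5) for gamma_j''
    s = -2*g[j]*sp.diff(g[j], x)
    for l in range(n):
        if l != j:
            s += 2*sp.diff(g[j], x)*sp.diff(g[l], x)/(g[j] - g[l])
    return s

# elementary symmetric functions c_0 = 1, c_1, ..., c_n, and c_{n+1} = 0
c = [sp.S(1)]
c += [sum(sp.prod([g[i] for i in I]) for I in itertools.combinations(range(n), j))
      for j in range(1, n + 1)]
c += [sp.S(0)]

second = {sp.diff(g[j], x, 2): rhs(j) for j in range(n)}

# residuals of (6): 2 c_j' - 2 c_1' c_{j-1} - c_{j-1}''  for j = 1, ..., n+1
for j in range(1, n + 2):
    res = 2*sp.diff(c[j], x) - 2*sp.diff(c[1], x)*c[j-1] - sp.diff(c[j-1], x, 2)
    print(j, sp.simplify(res.subs(second)))   # prints 0 for every j
```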

This nonlinear system turns out to be solvable. Calogero proved that a certain family of n-body problems is solvable (in a 2004 J. Math. Phys. paper), and his model includes system (6); the method we use in this paper is different from his approach. Our method also reveals an insightful connection to scattering problems. In Section 3, we first find n constants of motion for the system (6), which allow us to reduce it to a first order nonlinear system. Explicitly, we will prove

Theorem 1. (i) Suppose that, for $x$ in an open interval $I$, the $c_j$ are solutions of the second order nonlinear system (6). Then on $I$ the $c_j$ solve the first order system

$$\sum_{k=0}^{n+1}(-1)^{j-k}\,c_{2j-k}\left(c_k - c_{k-1}'\right) = \mu_j \qquad (7)$$

for $j = 0, \ldots, n$ (with the conventions $c_{-1}' = c_0' = 0$ and $c_l = 0$ for $l > n$). Here the $\mu_j$ ($j \neq 0$) are constants and $\mu_0 = 1$.

(ii) Conversely, if the $c_j(x)$ are solutions of (7) with $c_n(x) \neq 0$ and $\gamma_j^2(x) \neq \gamma_l^2(x)$ ($j \neq l$) for $x \in I$, then (6) holds.

The latter is then solved by finding a nonlinear analogue of the method of integrating factors (Theorem 13).

We note that the $\gamma_j$ are zeros of the polynomial whose coefficients are (up to sign) the $c_j$. Calogero pointed out in [8] [9] that some nonlinear systems can be linearized by the nonlinear mapping between the coefficients of a polynomial and its zeros, and are thus integrable. The novelty in this paper is that the nonlinear mapping from $\gamma_j$ to $c_j$ relates the system (5) to a solvable yet still nonlinear system. Interestingly, a system similar to (6) arises ([10] [11]) if one seeks potentials for which the large frequency WKB series is finite and yields solutions of the corresponding Schrödinger equations (with no error).

Section 4 shows how we obtain analytic examples of (2) by following this systematic procedure.

2. The γ Equation

As described in the introduction, we relate a large class of exact solutions of the A-Equation (2) to the second order nonlinear system (5).

Without loss of generality, we assume $\gamma_i \neq \gamma_j$ for all $i \neq j$. The following proposition then follows by direct calculation.

Proposition 2. If $A(\alpha, x)$ is of the form (4) and satisfies (2), then $f_j(x) = -2\gamma_j'(x)$, and the $\gamma_j(x)$ satisfy (5). Conversely, if the $\gamma_j(x)$ satisfy (5), then the function $A(\alpha, x) = -2\sum_{j=1}^n\gamma_j'\,e^{-2\alpha\gamma_j}$ solves (2).

Our goal is to solve (5) explicitly. To begin with, we need some notation. Let $\delta_l^j$ be the $l$-th elementary symmetric function of the $\gamma_k^2$, $k \neq j$:

$$\delta_l^j = \sum_{\substack{i_1 < \cdots < i_l\\ i_\nu \neq j}}\gamma_{i_1}^2\gamma_{i_2}^2\cdots\gamma_{i_l}^2. \qquad (8)$$

Lemma 3. If $\gamma_1^2, \ldots, \gamma_n^2$ are distinct, and $\delta_l^j$ ($0 \le l \le n-1$) are the elementary symmetric functions of the $\gamma_k^2$, $k \neq j$, $1 \le k \le n$, then the matrix

$$\begin{pmatrix} \delta_0^1 & \delta_0^2 & \cdots & \delta_0^n\\ \delta_1^1 & \delta_1^2 & \cdots & \delta_1^n\\ \vdots & \vdots & & \vdots\\ \delta_{n-1}^1 & \delta_{n-1}^2 & \cdots & \delta_{n-1}^n \end{pmatrix} \qquad (9)$$

is invertible.

Proof. Suppose the matrix is not invertible; then there exists a non-zero vector $(a_1, a_2, \ldots, a_n)^T$ such that

$$\sum_{j=1}^n\delta_{l-1}^j\,a_j = 0 \quad \forall\, l. \qquad (10)$$

Then

$$\sum_{j=1}^na_j\prod_{m\neq j}\left(z - \gamma_m^2\right) = \sum_{l=1}^n(-1)^{l-1}\left(\sum_{j=1}^n\delta_{l-1}^j\,a_j\right)z^{n-l} = 0 \quad \forall\, z. \qquad (11)$$

Evaluating the above at $z = \gamma_{j_0}^2$ ($j_0 = 1, \ldots, n$):

$$a_{j_0}\prod_{m\neq j_0}\left(\gamma_{j_0}^2 - \gamma_m^2\right) = 0. \qquad (12)$$

We assumed the $\gamma_k^2$ are distinct, so $a_{j_0} = 0$ for all $1 \le j_0 \le n$. This contradicts our assumption, and proves that the given matrix must be invertible.
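In fact, the determinant of the matrix (9) factors as a Vandermonde-like product of the differences $\gamma_i^2 - \gamma_j^2$, which is another way to see the lemma. A short sympy sketch (our own illustration) exhibits this for $n = 3$:

```python
import itertools
import sympy as sp

n = 3
g = sp.symbols(f'g1:{n + 1}')          # gamma_1, ..., gamma_n

def elem_sym(l, vals):
    # l-th elementary symmetric function of vals (equals 1 for l = 0)
    return sum(sp.prod(c) for c in itertools.combinations(vals, l)) if l else sp.S(1)

# matrix (9): column j collects the symmetric functions of the squares with k != j
M = sp.Matrix(n, n, lambda l, j: elem_sym(l, [g[k]**2 for k in range(n) if k != j]))

# the determinant factors into a product of (g_i^2 - g_j^2) terms,
# hence it is nonzero whenever the squares are distinct
print(sp.factor(M.det()))
```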

3. A Transformed System and Explicit Solutions

3.1. Non-Linear Integrable Equation

To solve Equation (5) explicitly, we construct a nonlinear mapping from $\gamma$ to new dependent variables $c$. We take $c_j$ to be the $j$-th elementary symmetric function of the $\gamma$'s,

$$c_j = \sum_{i_1 < \cdots < i_j}\gamma_{i_1}\cdots\gamma_{i_j}. \qquad (13)$$

For convenience, we define $c_0 \equiv 1$ and $c_k \equiv 0$ for $k < 0$ or $k > n$.

Proposition 4. If $\{\gamma_j\}$ satisfy Equation (5), then $\{c_j\}$, as defined by (13), satisfy the system:

$$2c_j' = 2c_1'c_{j-1} + c_{j-1}'', \quad 1 \le j \le n+1. \qquad (14)$$

$$c_0 = 1, \quad c_j = 0 \text{ when } j > n. \qquad (15)$$

Proof. It follows by direct calculation that, for every $1 \le j \le n$,

$$2c_j' - 2c_1'c_{j-1} - c_{j-1}'' = \sum_{i_1<\cdots<i_{j-1}}\ \sum_{1\le m<t\le j-1}\frac{2\gamma_{i_m}'\gamma_{i_t}'}{\gamma_{i_m} - \gamma_{i_t}}\left(\gamma_{i_m} - \gamma_{i_t}\right)\prod_{s\neq m,t}\gamma_{i_s} - \sum_{i_1<\cdots<i_{j-1}}\ \sum_{1\le m<t\le j-1}2\gamma_{i_m}'\gamma_{i_t}'\prod_{s\neq m,t}\gamma_{i_s} = 0. \qquad (16)$$

And for $j = n+1$, we have

$$2c_1'c_n + c_n'' = 2\sum_{i=1}^n\gamma_i'\,c_n + \sum_{i=1}^n\left(\frac{\gamma_i''}{\gamma_i} + \sum_{j\neq i}\frac{\gamma_i'\gamma_j'}{\gamma_i\gamma_j}\right)c_n = c_n\sum_{i=1}^n\sum_{j\neq i}\frac{\gamma_i'\gamma_j'\left(\gamma_i + \gamma_j\right)}{\left(\gamma_i - \gamma_j\right)\gamma_i\gamma_j} = 0, \qquad (17)$$

where (5) is used in the second equality, and the last double sum vanishes because its summand is antisymmetric in $i$ and $j$.

Conversely, we have

Proposition 5. If the $c_j$ satisfy the system (14), and $\gamma_1, \ldots, \gamma_n$ are the distinct roots of the polynomial with coefficients $(-1)^jc_j$, then the $\gamma_j$ satisfy the system (5).

Proof. As in the previous calculations, for every $1 < j \le n+1$ we have

$$2c_j' - 2c_1'c_{j-1} - c_{j-1}'' = -\sum_{i_1<\cdots<i_{j-1}}\ \sum_{\nu=1}^{j-1}\left(\prod_{s\neq\nu}\gamma_{i_s}\right)\left(\gamma_{i_\nu}'' + 2\gamma_{i_\nu}\gamma_{i_\nu}' - \sum_{l\neq i_\nu}\frac{2\gamma_{i_\nu}'\gamma_l'}{\gamma_{i_\nu} - \gamma_l}\right) = -\sum_{m=1}^n\left(\gamma_m'' + 2\gamma_m\gamma_m' - \sum_{l\neq m}\frac{2\gamma_m'\gamma_l'}{\gamma_m - \gamma_l}\right)\sum_{\substack{i_1<\cdots<i_{j-2}\\ i_\nu\neq m}}\gamma_{i_1}\cdots\gamma_{i_{j-2}} = 0. \qquad (18)$$

By assumption, $\gamma_i \neq \gamma_j$ for $i \neq j$. The proof of Lemma 3 then shows that the matrix $\left(\delta_{j-2}^m\right)$ ($j = 2, \ldots, n+1$, $m = 1, \ldots, n$), where

$$\delta_j^m = \sum_{\substack{i_1<\cdots<i_j\\ i_\nu\neq m}}\gamma_{i_1}\cdots\gamma_{i_j},$$

is invertible. Thus

$$\gamma_m'' + 2\gamma_m\gamma_m' - \sum_{l\neq m}\frac{2\gamma_m'\gamma_l'}{\gamma_m - \gamma_l} = 0 \quad \text{for } m = 1, \ldots, n;$$

and the $\gamma_j$ satisfy (5).

3.2. Second Order Nonlinear System to First Order Nonlinear System

We have identified n constants of motion for the system (14). This will allow us to reduce the second order system to a first order system.

Proposition 6. The nonlinear system (14) has the following constants of motion:

$$\sum_{k\ge 0}(-1)^k\left\{\begin{vmatrix} c_{j-k-1} & c_{j+k}\\ c_{j-k-1}' & c_{j+k}' \end{vmatrix} - \begin{vmatrix} c_{j-k-1} & c_{j+k}\\ c_{j-k} & c_{j+k+1} \end{vmatrix}\right\} = \text{Const} \qquad (19)$$

for all $0 \le j \le n$. Here $c_j = 0$ when $j > n$ or $j < 0$.

Proof. Since $2c_j' = 2c_1'c_{j-1} + c_{j-1}''$ for $1 \le j \le n+1$, we can write

$$c_{j-1}'' = 2c_j' - 2c_1'c_{j-1}, \qquad (20)$$

$$c_j'' = 2c_{j+1}' - 2c_1'c_j. \qquad (21)$$

Multiplying the first of these equations by $c_j$ and the second equation by $c_{j-1}$, and subtracting, we have

$$c_jc_{j-1}'' - c_{j-1}c_j'' = 2c_jc_j' - 2c_{j+1}'c_{j-1},$$

so that

$$2c_{j+1}'c_{j-1} = \left(c_{j-1}c_j' - c_{j-1}'c_j + c_j^2\right)'. \qquad (22)$$

Similarly, from

$$c_{j+1}'' = 2c_{j+2}' - 2c_1'c_{j+1}$$

and

$$c_{j-2}'' = 2c_{j-1}' - 2c_1'c_{j-2},$$

we find

$$c_{j-2}c_{j+1}'' - c_{j+1}c_{j-2}'' = 2c_{j-2}c_{j+2}' - 2c_{j+1}c_{j-1}'.$$

Using this equation we obtain (compare with (22))

$$2c_{j+1}'c_{j-1} = -2c_{j+2}'c_{j-2} + \left(2c_{j+1}c_{j-1} + c_{j-2}c_{j+1}' - c_{j-2}'c_{j+1}\right)'.$$

It follows by induction that

$$2c_{j+1}'c_{j-1} = \left[\sum_{k=1}^{n-j}(-1)^{k-1}\,2c_{j+k}c_{j-k} + \sum_{k=1}^{n-j}(-1)^{k-1}\left(c_{j-k-1}c_{j+k}' - c_{j-k-1}'c_{j+k}\right)\right]'.$$

This identity, together with (22), shows that

$$\sum_{k=0}^{n-j}(-1)^{k}\left(c_{j-k-1}c_{j+k}' - c_{j-k-1}'c_{j+k}\right) + c_j^2 - \sum_{k=1}^{n-j}(-1)^{k-1}\,2c_{j+k}c_{j-k} = \text{Const},$$

for $0 \le j \le n$.

We can also write (19) as

$$\sum_{k=0}^{n+1}(-1)^{j-k}\,c_{2j-k}\left(c_k - c_{k-1}'\right) = \mu_j \qquad (23)$$

for $j = 0, \ldots, n$, with the conventions $c_{-1}' = c_0' = 0$ and $c_l = 0$ for $l > n$. Here the $\mu_j$ ($j \neq 0$) are constants and $\mu_0 = 1$.
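The constancy of the $\mu_j$ is easy to test numerically. The sketch below (ours; the initial data are arbitrary sample values) integrates the $n = 2$ case of (14), namely $c_1'' = 2c_2' - 2c_1'c_1$ and $c_2'' = -2c_1'c_2$, and evaluates the two nontrivial constants of motion from (23), $\mu_1 = c_1' + c_1^2 - 2c_2$ and $\mu_2 = c_1c_2' - c_1'c_2 + c_2^2$, along the trajectory:

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(x, y):
    # y = (c1, c2, c1', c2'); the n = 2 case of system (14)
    c1, c2, d1, d2 = y
    return [d1, d2, 2*d2 - 2*d1*c1, -2*d1*c2]

sol = solve_ivp(f, (0.0, 1.0), [0.3, -1.0, 0.1, 0.5],
                rtol=1e-10, atol=1e-12, dense_output=True)

for x in np.linspace(0.0, 1.0, 5):
    c1, c2, d1, d2 = sol.sol(x)
    mu1 = d1 + c1**2 - 2*c2        # j = 1 instance of (23)
    mu2 = c1*d2 - d1*c2 + c2**2    # j = 2 instance of (23)
    print(f"x={x:.2f}  mu1={mu1:.10f}  mu2={mu2:.10f}")  # both stay constant
```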

Theorem 1 asserts the equivalence of the first order system (23) and the second order system (14).

Proof of Theorem 1.

(i) This follows directly from Proposition 6.

(ii) Let $T_j = c_j'' - 2c_{j+1}' + 2c_1'c_j$. Differentiating (23), for $1 \le j \le n$,

$$0 = \left(\sum_{k=0}^{n+1}(-1)^{j-k}\,c_{2j-k}\left(c_k - c_{k-1}'\right)\right)' = \sum_{k=j-\min\{j-1,\,n-j\}}^{j+\min\{j-1,\,n-j\}}(-1)^{j-k}\,c_{2j-k-1}\,T_k.$$

If we write the above equations in matrix form, we have

$$\begin{pmatrix} c_0 & 0 & \cdots & 0 & 0\\ -c_2 & c_1 & \cdots & 0 & 0\\ c_4 & -c_3 & \cdots & 0 & 0\\ \vdots & \vdots & & \vdots & \vdots\\ (-1)^{\frac{n}{2}-1}c_n & (-1)^{\frac{n}{2}}c_{n-1} & \cdots & (-1)^{\frac{n}{2}-1}c_2 & (-1)^{\frac{n}{2}}c_1\\ 0 & 0 & \cdots & c_{n-2} & -c_{n-3}\\ 0 & 0 & \cdots & c_n & -c_{n-1} \end{pmatrix}_{n\times n}\begin{pmatrix} T_1\\ T_2\\ T_3\\ \vdots\\ T_{n-1}\\ T_n \end{pmatrix} = 0 \qquad (24)$$

The coefficient matrix of Equation (24) is a Sylvester resultant matrix. A well-known theorem from linear algebra expresses the determinant of a Sylvester resultant matrix as the resultant of the two polynomials

$$a(s) = c_1s^{\frac{n}{2}-1} - c_3s^{\frac{n}{2}-2} + \cdots + (-1)^{\frac{n}{2}-1}c_{n-1},$$

$$b(s) = c_0s^{\frac{n}{2}} - c_2s^{\frac{n}{2}-1} + c_4s^{\frac{n}{2}-2} - \cdots + (-1)^{\frac{n}{2}}c_n.$$

The coefficient matrix of (24) is nonsingular if and only if $a(s)$ and $b(s)$ are coprime for $x \in I$. Let $Q_1(s)$ be the polynomial

$$Q_1(s) = \sum_{l=0}^n(-1)^lc_ls^{n-l}. \qquad (25)$$

We observe that $(-1)^{n/2}\left(b(-s^2) + s\,a(-s^2)\right) = Q_1(s)$, and since $c_j$ is the $j$-th elementary symmetric function of the $\gamma$'s, we have

$$(-1)^{n/2}\left(b(-s^2) + s\,a(-s^2)\right) = Q_1(s) = \prod_{j=1}^n\left(s - \gamma_j\right). \qquad (26)$$

If $a(s)$ and $b(s)$ are not coprime, they have a common root $s_0$, with $s_0 \neq 0$. Let $s_1, s_2$ be the two distinct square roots of $-s_0$. Substituting $s = s_1$ and $s = s_2$ in (26) yields $Q_1(s_1) = 0$ and $Q_1(s_2) = 0$, respectively. Thus there exist $\gamma_{i_0}, \gamma_{j_0}$ such that $\gamma_{i_0} = s_1$, $\gamma_{j_0} = s_2$.

Since $\gamma_1^2, \ldots, \gamma_n^2$ are assumed distinct, we obtain $s_1^2 \neq s_2^2$, which contradicts the fact that $s_1^2 = s_2^2 = -s_0$.

Therefore $a(s)$ and $b(s)$ must be coprime for $x \in I$, and (24) has only the trivial solution:

$$T_j = c_j'' - 2c_{j+1}' + 2c_1'c_j = 0 \quad \text{for } j = 1, \ldots, n.$$

Thus the $c_j$ solve the second order system (14).

3.3. Method of Integrating Factor

We have reduced the second order non-linear system (14) to the first order non-linear system (23). To solve the latter system explicitly, we begin by writing it in matrix form. Let

$$S = \begin{pmatrix} c_0 & 0 & 0 & 0 & \cdots & 0\\ -c_2 & c_1 & -c_0 & 0 & \cdots & 0\\ c_4 & -c_3 & c_2 & -c_1 & \cdots & 0\\ \vdots & \vdots & \vdots & & \ddots & \vdots\\ 0 & 0 & \cdots & 0 & c_n & -c_{n-1} \end{pmatrix}_{(n+1)\times(n+2)}, \quad \text{i.e. } S_{jk} = (-1)^{j-k}c_{2j-k} \ \ (0 \le j \le n,\ 0 \le k \le n+1). \qquad (27)$$

We will assume from now on that $n$ is even; when $n$ is odd one obtains similar results. Equation (23) can then be written as a matrix equation:

$$S\begin{pmatrix} c_0\\ c_1\\ c_2 - c_1'\\ \vdots\\ c_{k+1} - c_k'\\ \vdots\\ -c_n' \end{pmatrix} = \begin{pmatrix} \mu_0\\ \mu_1\\ \mu_2\\ \vdots\\ \mu_k\\ \vdots\\ \mu_n \end{pmatrix}. \qquad (28)$$

We will show that the nonlinear system (28) can be solved explicitly.

Let

$$C = \begin{pmatrix} c_0\\ c_1\\ \vdots\\ c_n \end{pmatrix}, \qquad C' = \begin{pmatrix} 0\\ c_1'\\ \vdots\\ c_n' \end{pmatrix}. \qquad (29)$$

The nonlinear system (28) can be written as

$$S(c)\,J_2C - S(c)\,J_1C' = \mu, \qquad (30)$$

where

$$J_1 = \begin{pmatrix} 0 & 0 & \cdots & 0\\ 1 & 0 & \cdots & 0\\ 0 & 1 & \cdots & 0\\ \vdots & & \ddots & \vdots\\ 0 & 0 & \cdots & 1 \end{pmatrix}_{(n+2)\times(n+1)}, \qquad (31)$$

$$J_2 = \begin{pmatrix} 1 & 0 & \cdots & 0\\ 0 & 1 & \cdots & 0\\ \vdots & & \ddots & \vdots\\ 0 & 0 & \cdots & 1\\ 0 & 0 & \cdots & 0 \end{pmatrix}_{(n+2)\times(n+1)}. \qquad (32)$$

Our goal is to find an integrating factor $M$ such that, after multiplication on the left by $M$, (30) takes the form

$$MS(c)\,J_2C - MS(c)\,J_1C' = \left(NC\right)' = M\mu = 0. \qquad (33)$$

Thus we would like to find an $n\times(n+1)$ matrix $M$ and an $n\times(n+1)$ matrix $N$ such that

$$MS(c)\,J_1 = -N; \qquad (34)$$

$$MS(c)\,J_2 = N'; \qquad (35)$$

$$M\mu = 0. \qquad (36)$$

This leads to

$$MS(c) = \left(N' \mid B_{n+2}\right) = \left(B_1 \mid -N\right), \qquad (37)$$

where $B_1$ is the first column vector of $MS(c)$ and $B_{n+2}$ is the last column vector of $MS(c)$.

For any $n\times(n+1)$ matrix $N = (a_{ij})$ which satisfies (37), with $B_1 = (\beta_{11}, \beta_{21}, \ldots, \beta_{n1})^T$ and $B_{n+2} = (\beta_{1,n+2}, \beta_{2,n+2}, \ldots, \beta_{n,n+2})^T$, comparing the two expressions for $MS(c)$ in (37) column by column, we must have $a_{uv}' = -a_{u,v-1}$ and $a_{u1}' = \beta_{u1}$, for $u = 1, \ldots, n$ and $v = 2, \ldots, n+1$.

Let $f_u = a_{u,n+1}$; then these conditions show that $N$ must be of the form

$$N = \begin{pmatrix} f_1^{(n)} & -f_1^{(n-1)} & f_1^{(n-2)} & \cdots & f_1\\ f_2^{(n)} & -f_2^{(n-1)} & f_2^{(n-2)} & \cdots & f_2\\ \vdots & \vdots & \vdots & & \vdots\\ f_n^{(n)} & -f_n^{(n-1)} & f_n^{(n-2)} & \cdots & f_n \end{pmatrix}, \qquad (38)$$

and moreover $\beta_{u1} = f_u^{(n+1)}$, for $u = 1, \ldots, n$.

To find $M$, we now rewrite (34) and (36) in matrix form,

$$\begin{pmatrix} S(c)^T\\ \mu^T \end{pmatrix}M^T = \begin{pmatrix} B_1^T\\ -N^T\\ 0 \end{pmatrix}. \qquad (39)$$

Thus each of the $n$ rows of $M$ solves an over-determined linear system, consisting of $n+3$ equations in $n+1$ unknowns.

Studying the structure of the matrix $S(c)$, we notice the following algebraic identity.

Lemma 7. $S(c)\,J_1C = 0$.

As an immediate corollary, we have the following.

Lemma 8. Given a nontrivial solution $\{c_i\}$ ($i = 1, \ldots, n$) of (28), the overdetermined system (39) is solvable only if $N$ satisfies $NC = 0$. Moreover, we then also have $\left(NC\right)' = 0$.

For each overdetermined system

$$S(c)^TM_i^T = \begin{pmatrix} f_i^{(n+1)}\\ -f_i^{(n)}\\ f_i^{(n-1)}\\ \vdots\\ -f_i \end{pmatrix}, \quad i = 1, \ldots, n, \qquad (40)$$

where $M_i^T$ is the $i$-th column vector of $M^T$, Lemma 7 and Lemma 8 show that the rank of the augmented matrix is less than $n+2$. The over-determined system thus has at most $n+1$ linearly independent equations.

Let $S_1(c)^T$ denote the sub-matrix of $S(c)^T$ obtained by deleting the first row, the $n$-th row, the $(n+1)$-th row, the first column, and the last column of $S(c)^T$. We observe that $S_1(c)^T$ is also a Sylvester resultant matrix. The determinant of a Sylvester resultant matrix is the resultant of the two polynomials

$$a(s) = c_1s^{\frac{n}{2}-1} - c_3s^{\frac{n}{2}-2} + \cdots + (-1)^{\frac{n}{2}-1}c_{n-1},$$

$$b(s) = c_0s^{\frac{n}{2}} - c_2s^{\frac{n}{2}-1} + c_4s^{\frac{n}{2}-2} - \cdots + (-1)^{\frac{n}{2}}c_n.$$

Two cases need to be considered here. (1) $a(s)$ and $b(s)$ are coprime: then $S_1(c)^T$ is nonsingular, the augmented matrix of (40) has the same rank $n+1$ as the corresponding coefficient matrix, and (40) is solvable. (2) $a(s)$ and $b(s)$ are not coprime: we then use the following result of Laidacker [12]: if $d(s)$ is the greatest common divisor of the two polynomials $a(s)$ and $b(s)$, then the rank of the Sylvester resultant matrix is $n - \deg\left(d(s)\right) - 1$. Thus the rank of the augmented matrix should also be $n + 1 - \deg\left(d(s)\right)$ if (40) is to be solvable.
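The coprimality criterion driving this case distinction is computable: the resultant of $a$ and $b$ (the determinant of their Sylvester matrix) vanishes exactly when the two polynomials share a root. A small sympy illustration for $n = 4$ (our own sketch; the symbols stand for the coefficients $c_j$ at a fixed $x$):

```python
import sympy as sp

s = sp.symbols('s')
c0, c1, c2, c3, c4 = sp.symbols('c0:5')   # coefficients c_0, ..., c_4 (n = 4)

a = c1*s - c3                 # a(s) = c1 s^{n/2-1} - c3
b = c0*s**2 - c2*s + c4       # b(s) = c0 s^{n/2} - c2 s^{n/2-1} + c4

# resultant(a, b) = det(Sylvester matrix); it is zero iff a, b are not coprime
print(sp.expand(sp.resultant(a, b, s)))   # c0*c3**2 - c1*c2*c3 + c1**2*c4
```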

We will need one more algebraic fact about the Sylvester matrix.

Lemma 9. Suppose that $c_0t_n - c_1t_{n-1}t_1 + c_2t_{n-2} - c_3t_{n-3}t_1 + \cdots + c_n = 0$, and that $a(t_2)$ and $b(t_2)$ do not vanish simultaneously. Then the following algebraic system is always solvable:

$$S_1(c)^T\tilde M_1 = \begin{pmatrix} c_1 & -c_3 & \cdots & (-1)^{\frac{n}{2}-1}c_{n-1} & 0 & \cdots & 0\\ c_0 & -c_2 & \cdots & (-1)^{\frac{n}{2}-1}c_{n-2} & (-1)^{\frac{n}{2}}c_n & \cdots & 0\\ \vdots & & \ddots & & & & \vdots\\ 0 & \cdots & 0 & (-1)^{\frac{n}{2}}c_1 & (-1)^{\frac{n}{2}-1}c_3 & \cdots & c_{n-1} \end{pmatrix}\begin{pmatrix} m_n\\ m_{n-1}\\ \vdots\\ m_2 \end{pmatrix} = \begin{pmatrix} t_n\\ t_{n-1}\\ \vdots\\ t_2 \end{pmatrix}.$$

Proof. Let $\eta_j(s)$ be the polynomial whose coefficients form the $j$-th row of $S_1(c)^T$, and let $e_i(s)$ be the polynomial whose coefficients form the $i$-th row of $S_1(c)^T$ in echelon form. Note that each $e_i(s)$ is a linear combination of the polynomials $\eta_j(s)$. The algebraic system is solvable if and only if each zero row of $S_1(c)^T$ in echelon form corresponds to a zero row of the augmented matrix in echelon form.

Suppose the $i$-th row of $S_1(c)^T$ in echelon form is zero, that is, $e_i(s) = 0 = k_0\eta_1(s) + k_1\eta_2(s) + \cdots + k_{n-2}\eta_{n-1}(s)$. From the structure of $S_1(c)^T$,

$$e_i(s) = \left(k_0s^{\frac{n}{2}-1} - k_2s^{\frac{n}{2}-2} + \cdots + (-1)^{\frac{n}{2}-1}k_{n-2}\right)a(s) + \left(k_1s^{\frac{n}{2}-2} - k_3s^{\frac{n}{2}-3} + \cdots + (-1)^{\frac{n}{2}}k_{n-3}\right)b(s) = 0.$$

Let $d(s)$ be the greatest common divisor of the two polynomials $a(s)$ and $b(s)$, and write $a(s) = d_a(s)\,d(s)$, $b(s) = d_b(s)\,d(s)$, where $\gcd\left(d_a(s), d_b(s)\right) = 1$. The above shows that there exists a polynomial $d_0(s)$ such that

$$k_0s^{\frac{n}{2}-1} - k_2s^{\frac{n}{2}-2} + \cdots + (-1)^{\frac{n}{2}-1}k_{n-2} = d_0(s)\,d_b(s),$$

$$k_1s^{\frac{n}{2}-2} - k_3s^{\frac{n}{2}-3} + \cdots + (-1)^{\frac{n}{2}}k_{n-3} = -d_0(s)\,d_a(s).$$

We need to prove

$$\left(k_0t_n + k_2t_{n-2} + \cdots + k_{n-2}t_2\right) - \left(k_1t_{n-1} + k_3t_{n-3} + \cdots + k_{n-3}t_3\right)t_1 = 0.$$

Using the above identities, the left side of the equation can be written as

$$(-1)^{\frac{n}{2}-1}t_2\,d_0(t_2)\left(d_a(t_2) + t_1d_b(t_2)\right) = (-1)^{\frac{n}{2}-1}t_2\,d_0(t_2)\,\frac{a(t_2) + b(t_2)\,t_1}{d(t_2)} = 0.$$

The left side of this equation vanishes, since the condition in the hypothesis can be rewritten as $a(t_2) + b(t_2)\,t_1 = 0$.

Inspired by the fact that $NC = 0$ and by Lemma 9, we prove the existence of $N$ by constructing $f_j$ such that $f_j'' = \text{Const}\cdot f_j$. To be more specific, let

$$f_j'' = \kappa_j^2f_j \quad \text{for } j = 1, \ldots, n, \qquad (41)$$

where the $\kappa_j^2$ are distinct, non-zero constants.

Lemma 8 also shows that (39) is solvable only if $N$ satisfies $NC = 0$.

To calculate $NC$, we first introduce some new notation. For each $\kappa_i$, we define $O_i$ and $E_i$ as follows:

$$O_i = \kappa_i^nc_0 + \kappa_i^{n-2}c_2 + \cdots + c_n, \qquad (42)$$

$$E_i = \kappa_i^{n-1}c_1 + \kappa_i^{n-3}c_3 + \cdots + \kappa_ic_{n-1}. \qquad (43)$$

Then, using $f_j^{(n-k)} = \kappa_j^{n-k}f_j$ for even $k$ and $f_j^{(n-k)} = \kappa_j^{n-k-1}f_j'$ for odd $k$, we have

$$NC = \begin{pmatrix} \kappa_1^nc_0f_1 - \kappa_1^{n-2}c_1f_1' + \kappa_1^{n-2}c_2f_1 - \cdots + c_nf_1\\ \vdots\\ \kappa_n^nc_0f_n - \kappa_n^{n-2}c_1f_n' + \kappa_n^{n-2}c_2f_n - \cdots + c_nf_n \end{pmatrix} = \begin{pmatrix} O_1f_1 - \frac{1}{\kappa_1}E_1f_1'\\ O_2f_2 - \frac{1}{\kappa_2}E_2f_2'\\ \vdots\\ O_nf_n - \frac{1}{\kappa_n}E_nf_n' \end{pmatrix}. \qquad (44)$$

$NC = 0$ only if $O_jf_j - \frac{1}{\kappa_j}E_jf_j' = 0$ for $j = 1, \ldots, n$, i.e., $f_j' = \kappa_j\frac{O_j}{E_j}f_j$.

The following proposition shows that this first order condition is compatible with (41).

Proposition 10. If (28) is satisfied, $O_j$ and $E_j$ are defined as above, and the $\kappa_j^2$ are roots of $\Psi(z) = \sum_{l=0}^n(-1)^l\mu_lz^{n-l}$, then

$$\frac{1}{\kappa_j}\left(O_j'E_j - O_jE_j'\right) + O_j^2 = E_j^2. \qquad (45)$$

Proof. Straightforward calculation.

Recall that our objective is to construct an integrating factor reducing (23) to an algebraic system. As described above, let

$$f_j(x) = e^{\int\kappa_j\frac{O_j}{E_j}\,dx} \quad \text{for } j = 1, \ldots, n. \qquad (46)$$

Thus

$$f_j' = \kappa_j\frac{O_j}{E_j}f_j, \qquad (47)$$

$$f_j'' = \kappa_j\left(\frac{O_j}{E_j}\right)'f_j + \kappa_j^2\left(\frac{O_j}{E_j}\right)^2f_j = \kappa_j^2f_j, \qquad (48)$$

where the last equality uses (45).

We assume $E_j \neq 0$ in this transformation.

Lemma 11. $f_j^{(l)} = \kappa_j^l\frac{O_j}{E_j}f_j$ if $l$ is odd, and $f_j^{(l)} = \kappa_j^lf_j$ if $l$ is even.

$N$ can then be rewritten as

$$\begin{pmatrix} \kappa_1^nf_1 & -\kappa_1^{n-1}\frac{O_1}{E_1}f_1 & \kappa_1^{n-2}f_1 & \cdots & f_1\\ \kappa_2^nf_2 & -\kappa_2^{n-1}\frac{O_2}{E_2}f_2 & \kappa_2^{n-2}f_2 & \cdots & f_2\\ \vdots & \vdots & \vdots & & \vdots\\ \kappa_n^nf_n & -\kappa_n^{n-1}\frac{O_n}{E_n}f_n & \kappa_n^{n-2}f_n & \cdots & f_n \end{pmatrix}. \qquad (49)$$

With $f_j$ as given above, we can define a matrix $M_1$,

$$M_1 = \begin{pmatrix} \kappa_1^{2n}f_1 & -\kappa_1^{2n-2}f_1 & \cdots & (-1)^nf_1\\ \kappa_2^{2n}f_2 & -\kappa_2^{2n-2}f_2 & \cdots & (-1)^nf_2\\ \vdots & \vdots & & \vdots\\ \kappa_n^{2n}f_n & -\kappa_n^{2n-2}f_n & \cdots & (-1)^nf_n \end{pmatrix}. \qquad (50)$$

The following proposition shows that $M = \operatorname{diag}\left(\frac{\kappa_1}{E_1}, \ldots, \frac{\kappa_n}{E_n}\right)M_1$ is a solution of (39), i.e. $M$ is an integrating factor for (30).

Proposition 12. Let $M = \operatorname{diag}\left(\frac{\kappa_1}{E_1}, \ldots, \frac{\kappa_n}{E_n}\right)M_1$, let $f_j$ be given by (46), and let the $\kappa_j^2$ ($j = 1, \ldots, n$) be the distinct roots of $\Psi(z) = \sum_{l=0}^n(-1)^l\mu_lz^{n-l}$. Then $M$ solves (34), (35), (36).

Proof. Straightforward calculation shows

$$M_1S(c)\,J_2 = \begin{pmatrix} \kappa_1^nO_1f_1 & -\kappa_1^{n-1}E_1f_1 & \kappa_1^{n-2}O_1f_1 & \cdots & O_1f_1\\ \kappa_2^nO_2f_2 & -\kappa_2^{n-1}E_2f_2 & \kappa_2^{n-2}O_2f_2 & \cdots & O_2f_2\\ \vdots & \vdots & \vdots & & \vdots\\ \kappa_n^nO_nf_n & -\kappa_n^{n-1}E_nf_n & \kappa_n^{n-2}O_nf_n & \cdots & O_nf_n \end{pmatrix},$$

thus $MS(c)\,J_2 = N'$. A similar calculation proves $MS(c)\,J_1 = -N$. At the same time, we have

$$M\mu = \operatorname{diag}\left(\frac{\kappa_j}{E_j}\right)\begin{pmatrix} \sum_{l=0}^n(-1)^l\mu_l\left(\kappa_1^2\right)^{n-l}f_1\\ \vdots\\ \sum_{l=0}^n(-1)^l\mu_l\left(\kappa_n^2\right)^{n-l}f_n \end{pmatrix} = \operatorname{diag}\left(\frac{\kappa_j}{E_j}\right)\begin{pmatrix} \Psi\left(\kappa_1^2\right)f_1\\ \vdots\\ \Psi\left(\kappa_n^2\right)f_n \end{pmatrix} = 0.$$

Theorem 13. If $S\left(c_k - c_{k-1}'\right) = \left(\mu_j\right)$, where $\left(c_k - c_{k-1}'\right)$ denotes the vector in (28), then $M_1S\left(c_k - c_{k-1}'\right) = 0$. Conversely, if $M_1S\left(c_k - c_{k-1}'\right) = 0$ and $f_j \neq 0$, then $S\left(c_k - c_{k-1}'\right) = \left(\mu_j\right)$; moreover, the $c_k$ satisfy the linear system

$$NC = \left(\sum_{k=0}^n(-1)^kf_j^{(n-k)}c_k\right)_j = 0, \quad 0 < j \le n. \qquad (51)$$

The system (28) is integrable and equivalent to this linear system.

Remark 14. From Equation (48), $f_j$ can be written down directly as $f_j = a_je^{\kappa_jx} + b_je^{-\kappa_jx}$ for constants $a_j, b_j$. The algebraic system (51) then leads to the solution $c_k$.

Proof. Multiplying Equation (30) by $M_1$ and using Proposition 12 yields $M_1S\left(c_k - c_{k-1}'\right) = 0$. Further, multiplying both sides by $\operatorname{diag}\left(\frac{\kappa_j}{E_j}\right)$, we have

$$MS\left(c_k - c_{k-1}'\right) = \left(NC\right)' = 0. \qquad (52)$$

Thus $\sum_{k=0}^n(-1)^kf_j^{(n-k)}c_k = \text{Const}$. The fact that (by Lemma 7)

$$MS\,J_1C = -NC = -\sum_{k=0}^n(-1)^kf_j^{(n-k)}c_k = 0$$

forces the constant to be zero.

Conversely, suppose $M_1S\left(c_k - c_{k-1}'\right) = 0$. Since $M_1$ is an $n\times(n+1)$ matrix, the $\kappa_j^2$ are assumed distinct, and $f_j \neq 0$, the dimension of the kernel of $M_1$ is 1. The vector $\mu$ which satisfies $M_1\mu = 0$ and $\mu_0 = 1$ is therefore unique. Since the first entry of $S\left(c_k - c_{k-1}'\right)$ is $c_0\cdot c_0 = 1$, it follows that $S\left(c_k - c_{k-1}'\right) = \left(\mu_j\right)$.

This proves that (28) is integrable and provides a procedure to obtain explicit solutions from the linear system (51).

4. Exact Analytic Examples

In this section we illustrate the procedure of Section 2 and Section 3 with a simple exact analytic example.

We explicitly discuss the case $n = 2$ in (4). Then $A(\alpha, x) = -2\gamma_1'(x)e^{-2\alpha\gamma_1(x)} - 2\gamma_2'(x)e^{-2\alpha\gamma_2(x)}$. We construct the nonlinear mapping from $\gamma$ to $c$,

$$c_1 = \gamma_1 + \gamma_2, \qquad c_2 = \gamma_1\gamma_2. \qquad (53)$$

Then $c_1, c_2$ satisfy (14). To solve for $c$, we reduce the second order system (14) to the first order system (30):

$$\begin{pmatrix} c_0 & 0 & 0 & 0\\ -c_2 & c_1 & -c_0 & 0\\ 0 & 0 & c_2 & -c_1 \end{pmatrix}\begin{pmatrix} c_0\\ c_1\\ c_2 - c_1'\\ -c_2' \end{pmatrix} = \begin{pmatrix} \mu_0\\ \mu_1\\ \mu_2 \end{pmatrix}. \qquad (54)$$

Given $\kappa_1, \kappa_2$ with $\kappa_1 \neq \kappa_2$, we construct $M_1$ of the form (50):

$$M_1 = \begin{pmatrix} \kappa_1^4f_1 & -\kappa_1^2f_1 & f_1\\ \kappa_2^4f_2 & -\kappa_2^2f_2 & f_2 \end{pmatrix}, \qquad (55)$$

where $f_1, f_2$ are solutions of (48):

$$f_1'' = \kappa_1^2f_1, \qquad f_2'' = \kappa_2^2f_2.$$

We can write

$$f_i = \sinh\left(\kappa_i\left(x + \delta_i\right)\right) \quad \text{for } i = 1, 2,$$

with $\delta_1, \delta_2 \neq 0$.

After multiplying (54) by $M$ on the left, the first order system can be solved explicitly. Indeed, the $c_j$ satisfy the linear system (51). Let

$$z_i = \kappa_i\coth\left(\kappa_i\left(x + \delta_i\right)\right). \qquad (56)$$

Then (51) takes the form

$$z_1c_1 - c_2 = \kappa_1^2, \qquad (57)$$

$$z_2c_1 - c_2 = \kappa_2^2, \qquad (58)$$

with solution:

$$c_1(x) = \frac{\kappa_2^2 - \kappa_1^2}{z_2 - z_1}, \qquad (59)$$

$$c_2(x) = \frac{\kappa_2^2z_1 - \kappa_1^2z_2}{z_2 - z_1}. \qquad (60)$$

To invert the mapping (53), we find $\gamma_1(x), \gamma_2(x)$ as the roots of the equation

$$s^2 - c_1(x)\,s + c_2(x) = 0.$$

This gives the following exact solution of the A-equation:

$$q(x) = -2\gamma_1'(x) - 2\gamma_2'(x) = -2c_1'(x),$$

$$A(\alpha, x) = -2\gamma_1'(x)\,e^{-2\alpha\gamma_1(x)} - 2\gamma_2'(x)\,e^{-2\alpha\gamma_2(x)}.$$
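As a sanity check, the closed form above can be verified directly against (2). The following sketch is our own: the parameter values $\kappa_1 = 1$, $\kappa_2 = 3$, $\delta_1 = 0.3$, $\delta_2 = 0.8$ are arbitrary sample choices (picked so that the roots stay real near the evaluation point), and $x$-derivatives are taken by finite differences.

```python
import numpy as np

k1, k2, dl1, dl2 = 1.0, 3.0, 0.3, 0.8          # sample kappa_1, kappa_2, delta_1, delta_2

def z(k, d, x):
    return k / np.tanh(k * (x + d))            # z_i of Equation (56)

def gammas(x):
    z1, z2 = z(k1, dl1, x), z(k2, dl2, x)
    c1 = (k2**2 - k1**2) / (z2 - z1)           # Equation (59)
    c2 = (k2**2 * z1 - k1**2 * z2) / (z2 - z1) # Equation (60)
    disc = np.sqrt(c1**2 - 4*c2)               # real for these parameter values
    return np.array([(c1 + disc) / 2, (c1 - disc) / 2])   # roots of s^2 - c1 s + c2

def A(alpha, x, h=1e-5):
    g = gammas(x)
    gp = (gammas(x + h) - gammas(x - h)) / (2 * h)        # gamma_j'(x)
    return float(np.sum(-2 * gp * np.exp(-2 * alpha * g)))

# residual A_x - A_alpha - int_0^alpha A(b, x) A(alpha - b, x) db at one point
a, x, h = 0.4, 1.5, 1e-4
Ax = (A(a, x + h) - A(a, x - h)) / (2 * h)
Aa = (A(a + h, x) - A(a - h, x)) / (2 * h)
b = np.linspace(0.0, a, 2001)
conv = np.trapz([A(t, x) * A(a - t, x) for t in b], b)
print("A-equation residual:", Ax - Aa - conv)  # ~ 0 up to finite-difference error
```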

Theorem 15. For any distinct non-zero complex $\kappa_1^2, \ldots, \kappa_n^2$ and $d_1^2, \ldots, d_n^2$, there exists a solution $A(\alpha, x)$ of the A-equation of the form

$$A(\alpha, x) = -\sum_{j=1}^n2\gamma_j'(x)\,e^{-2\alpha\gamma_j(x)}, \quad \text{where } \gamma_j(0) = d_j,$$

$$\gamma_j'(0) = \prod_{m=1}^n\left(\kappa_m^2 - d_j^2\right)\prod_{l\neq j}\left(d_l^2 - d_j^2\right)^{-1} \quad \text{for } 1 \le j \le n.$$

Proof. This theorem is a direct corollary of the results in Section 3.
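Continuing the numerical sketch from the $n = 2$ example above (same function names and sample parameters; again our own illustration), the initial data of Theorem 15 can be checked directly: with $d_j = \gamma_j(0)$ read off from the explicit solution, the predicted $\gamma_j'(0)$ matches the finite-difference derivative.

```python
# d_j = gamma_j(0) and gamma_j'(0) from the explicit n = 2 solution
g0 = gammas(0.0)
gp0 = (gammas(1e-6) - gammas(-1e-6)) / 2e-6

for j in range(2):
    predicted = (k1**2 - g0[j]**2) * (k2**2 - g0[j]**2) / (g0[1 - j]**2 - g0[j]**2)
    print(gp0[j], predicted)   # the two columns agree
```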

Remark 16. This theorem does not cover all solutions of the form $A(\alpha, x) = -\sum_{j=1}^n2\gamma_j'(x)e^{-2\alpha\gamma_j(x)}$. Consider an example from [1]:

$$A(\alpha) = -\frac{c_0}{k_0}e^{2\alpha k_0} + \frac{c_0}{k_0}e^{-2\alpha k_0}, \qquad (61)$$

$$q(x) = -2\frac{d^2}{dx^2}\ln\left[1 + \frac{c_0}{k_0^2}\int_0^x\sinh^2\left(k_0y\right)dy\right]. \qquad (62)$$

Working through the procedure in Section 3, we get

$$\gamma_1(0) = -\gamma_2(0) = k_0, \qquad \gamma_1'(0) = -\gamma_2'(0) = -\frac{c_0}{2k_0},$$

$$c_1(0) = 0, \quad c_1'(0) = 0, \qquad c_2(0) = -k_0^2, \quad c_2'(0) = c_0.$$

Here the values of the constants are

$$\mu_1 = 2k_0^2, \qquad \mu_2 = k_0^4, \qquad \kappa_1^2 = \kappa_2^2 = k_0^2;$$

so that $\kappa_1^2$ and $\kappa_2^2$ are not distinct.

This leads to $O_i = 0$ and $E_i = 0$. Proposition 10 still holds, but the nonlinear transformation (46) is not defined.

5. Conclusion

A large class of exact solutions to the A-equation was found in this work. Techniques used in our approach include the nonlinear transformation between the coefficients of a polynomial and its zeros, constants of motion, and an interesting integrating-factor method. The nonlinear system studied here is of interest not only for its connection to inverse problems: it represents a larger category of integrable systems than C-integrable systems and is worth further investigation.

Acknowledgement

A special thanks goes to the reviewers for their valuable suggestions.

Conflicts of Interest

The author declares no conflicts of interest.

References

[1] Gesztesy, F. and Simon, B. (2000) A New Approach to Inverse Spectral Theory. II. General Real Potentials and the Connection to the Spectral Measure. Annals of Mathematics, 152, 593-643.
https://doi.org/10.2307/2661393
[2] Ramm, A. and Simon, B. (2000) A New Approach to Inverse Spectral Theory. III. Short-Range Potentials. Journal d’Analyse Mathématique, 80, 319-334.
https://doi.org/10.1007/BF02791540
[3] Simon, B. (1999) A New Approach to Inverse Spectral Theory. I. Fundamental Formalism. Annals of Mathematics, 150, 1029-1057.
https://doi.org/10.2307/121061
[4] Gesztesy, F. (2007) Inverse Spectral Theory as Influenced by Barry Simon. Proceedings of Symposia in Pure Mathematics, American Mathematical Society, 76/2, 741-820.
[5] Remling, C. (2003) Inverse Spectral Theory for One-Dimensional Schrödinger Operators: The A Function. Mathematische Zeitschrift, 245, 597-617.
https://doi.org/10.1007/s00209-003-0559-2
[6] Remling, C. (2002) Schrödinger Operators and de Branges Spaces. Journal of Functional Analysis, 196, 323-394.
https://doi.org/10.1016/S0022-1236(02)00007-1
[7] Zhang, Y. (2006) Solvability of a Class of Integro-Differential Equations and Connections to One Dimensional Inverse Problems. Journal of Mathematical Analysis and Applications, 321, 286-298.
https://doi.org/10.1016/j.jmaa.2005.08.016
[8] Calogero, F. (1986) A Class of Solvable Dynamical Systems. Physica, 18D, 280-302.
https://doi.org/10.1016/0167-2789(86)90189-2
[9] Calogero, F. (1994) A Class of C-Integrable PDEs in Multidimensions. Inverse Problems, 10, 1231-1234.
https://doi.org/10.1088/0266-5611/10/6/004
[10] Varley, E. and Seymour, B.R. (1988) A Method of Obtaining Exact Solutions to Partial Differential Equations with Variable Coefficients. Studies in Applied Mathematics, 78, 183-205.
https://doi.org/10.1002/sapm1988783183
[11] Varley, E. and Seymour, B.R. (1998) A Simple Derivation of the N-Soliton Solutions to the Korteweg-de Vries Equation. SIAM Journal on Applied Mathematics, 58, 904-911.
https://doi.org/10.1137/S0036139996303270
[12] Laidacker, M.A. (1969) Another Theorem Relating Sylvester’s Matrix and the Greatest Common Divisor. Mathematics Magazine, 42, 126-128.
https://doi.org/10.2307/2689124
