A-Equation and Its Connections to Nonlinear Integrable Systems

A novel approach to inverse spectral theory for Schrödinger operators on a half-line was introduced by Barry Simon and has been actively studied in the recent literature. The remarkable discovery is a new object, the A-function, and the integro-differential equation (called the A-equation) that it satisfies. The inverse problem of reconstructing the potential is then directly connected to finding solutions of the A-equation. In this work, we present a large class of exact solutions to the A-equation and reveal a connection to a class of arbitrarily large systems of nonlinear ordinary differential equations. This nonlinear system turns out to be C-integrable in the sense of F. Calogero. An integration scheme is proposed, and the approach is illustrated in several examples.


Introduction
Several years ago, Barry Simon introduced a new approach to inverse spectral theory for the half-line Schrödinger operator \(H = -\frac{d^{2}}{dx^{2}} + q(x)\) on \(L^{2}(0,\infty)\) [3].
In [3], the key discovery is that \(A(\alpha, x)\) satisfies the following integro-differential equation:
\[
\frac{\partial A}{\partial x}(\alpha, x) = \frac{\partial A}{\partial \alpha}(\alpha, x) + \int_0^{\alpha} A(\beta, x)\, A(\alpha - \beta, x)\, d\beta. \tag{2}
\]
Given the fact that \(A(\alpha, x) - q(\alpha + x) \to 0\) as \(\alpha \downarrow 0\) (at least in the \(L^{1}\) sense), \(q(x)\) can be determined directly from \(A(\alpha, x)\).
Moreover, \(A(\alpha, x)\) can be calculated from \(A(\alpha, 0)\) (which is essentially the inverse Laplace transform of the data) by solving an equation which does not involve \(q(x)\). Thus the inverse problem of determining \(q\) from \(m\) becomes the problem of solving the integro-differential Equation (2). Properties of (2) are discussed in [4] [5] [6] [7]. To construct numerical solvers for this integro-differential equation, one needs to study sets of exact analytic solutions.
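This is the key practical point: advancing \(A\) in \(x\) uses only \(A\) itself, never \(q\). As a minimal numerical sketch (not from the paper; the grid, the left-endpoint Riemann sum for the convolution, and the initial profile are all illustrative choices of ours), one explicit Euler step of (2) in \(x\) can be written as:

```python
import numpy as np

def a_equation_step(A, dalpha, dx):
    """One explicit Euler step in x of the A-equation
       dA/dx = dA/dalpha + int_0^alpha A(beta) A(alpha - beta) dbeta.
    A is sampled on a uniform alpha-grid; the convolution integral is
    approximated by a left-endpoint Riemann sum. Purely illustrative."""
    dA_dalpha = np.gradient(A, dalpha)
    conv = np.array([np.sum(A[:k] * A[k - 1::-1]) * dalpha if k > 0 else 0.0
                     for k in range(len(A))])
    return A + dx * (dA_dalpha + conv)

alpha = np.linspace(0.0, 2.0, 201)
A0 = np.exp(-alpha)            # an illustrative initial profile A(alpha, 0)
A1 = a_equation_step(A0, alpha[1] - alpha[0], dx=1e-3)
print(A1.shape)
```

Note that the right-hand side is built entirely from the sampled values of \(A\); no potential enters the update.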
In this paper, we study a larger class of analytic solutions of (2), namely solutions that are finite sums of exponentials in \(\alpha\) with \(x\)-dependent exponents \(\gamma_j(x)\); see (4).
This ansatz is motivated by the explicit example in [1], where \(A(\alpha)\) is calculated for Bargmann potentials using inverse scattering theory (which is valid only under restrictive assumptions). Our aim is to determine the behavior of such solutions for all \((\alpha, x)\), and to do so using only (2).
In Section 3 we then give a method for solving (5) explicitly. The idea is to introduce new variables \(c_j\), the elementary symmetric functions of the \(\gamma_j\) (\(j = 1, \dots, n\)), that is,
\[
c_j = \sum_{i_1 < i_2 < \dots < i_j} \gamma_{i_1} \gamma_{i_2} \cdots \gamma_{i_j},
\]
with the conventions \(c_0 = 1\) and \(c_j = 0\) when \(j < 0\) or \(j > n\). Via this "change of variables", (5) yields a new nonlinear system (6).
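This change of variables can be sketched numerically. In the code below (illustrative; the function names are ours), `numpy.poly` maps roots to the coefficients of the monic polynomial, which coincide with the elementary symmetric functions up to alternating signs, and `numpy.roots` inverts the map:

```python
import numpy as np

def symmetric_functions(gammas):
    """Elementary symmetric functions c_1..c_n of gamma_1..gamma_n.
    np.poly returns the coefficients of prod_j (s - gamma_j),
    i.e. [1, -c_1, c_2, -c_3, ...]; strip the alternating signs."""
    coeffs = np.poly(gammas)
    signs = (-1.0) ** np.arange(len(coeffs))
    return (signs * coeffs)[1:]          # [c_1, ..., c_n]

def roots_from_symmetric(cs):
    """Invert the map: the gamma_j are the roots of the polynomial
    whose coefficients are built from the c_j."""
    n = len(cs)
    signs = (-1.0) ** np.arange(n + 1)
    return np.roots(signs * np.concatenate(([1.0], cs)))

gammas = np.array([1.0, 2.0, 3.0])
cs = symmetric_functions(gammas)         # c_1 = 6, c_2 = 11, c_3 = 6
print(cs)
print(np.sort(roots_from_symmetric(cs).real))
```

The round trip recovers the \(\gamma_j\), which is exactly the invertibility the method relies on (for distinct roots).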
This nonlinear system turns out to be solvable. In a 2004 J. Math. Phys. paper, Calogero proved that a certain family of n-body problems is solvable, and his model includes system (6); the method we use in this paper, however, is different from his approach. Our method also reveals an insightful connection to scattering problems. In Section 3, we first find n constants of motion for the system (6), which allow us to reduce it to a first-order nonlinear system.
Explicitly, we will prove Theorem 1: (i) suppose that, for every \(x\) in an open interval \(I\), the \(c_j\) are solutions of the second-order nonlinear system (6); then on \(I\) the \(c_j\) solve a first-order system. (ii) The latter is then solved by a nonlinear analogue of the method of integrating factors (Theorem 13).
We note that the \(\gamma_j\) are the zeros of the polynomial with coefficients \(c_j\). Calogero pointed out in [8] [9] that some nonlinear systems can be linearized by the nonlinear mapping between the coefficients of a polynomial and its zeros, and are thus integrable. The novelty in this paper is that the nonlinear mapping from the \(\gamma_j\) to the \(c_j\) relates the system (5) to a solvable, yet still nonlinear, system. Interestingly, a system similar to (6) arises ([10] [11]) if one seeks potentials for which the large-frequency WKB series is finite and yields exact solutions of the corresponding Schrödinger equations (with no error term).
Section 4 shows how we obtain exact analytic solutions of (2) by following this systematic procedure.

The γ Equation
As described in the introduction, we relate a large class of exact solutions of the A-Equation (2) to a second-order nonlinear system (5).
Without loss of generality, we assume \(\gamma_i \neq \gamma_j\) for all \(i \neq j\). The following proposition then follows by direct calculation.
Proof. Suppose the matrix is not invertible; then there exists a non-zero vector \((a_0, a_1, \dots, a_n)\) annihilated by it, i.e. a nontrivial polynomial of degree at most \(n\) vanishing at each of the values \(\gamma_k^2\). We assumed the \(\gamma_k^2\) are distinct, so \(a_j = 0\) for all \(0 \le j \le n\). This contradicts our assumption and proves that the given matrix must be invertible.
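The displayed matrix did not survive typesetting, but the argument is the classical Vandermonde one. Assuming (as the proof suggests) a Vandermonde structure in the \(\gamma_k^2\), a short numerical check that distinct squares give a nonzero determinant:

```python
import numpy as np

# Hypothetical illustration: a Vandermonde-type matrix built from the
# squares gamma_k^2 is invertible precisely because those squares are distinct.
gammas = np.array([0.5, 1.0, 1.5, 2.5])
V = np.vander(gammas**2, increasing=True)   # rows: [1, g^2, g^4, g^6]

# det(V) = prod_{i<j} (g_j^2 - g_i^2), nonzero when the squares are distinct
det = np.linalg.det(V)
pairwise = np.prod([gammas[j]**2 - gammas[i]**2
                    for i in range(4) for j in range(i + 1, 4)])
print(det, pairwise)    # the two agree up to rounding
```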

Non-Linear Integrable Equation
To solve Equation (5) explicitly, we construct a nonlinear mapping from \(\gamma\) to new dependent variables \(c\). We take \(c_j\) to be the \(j\)-th elementary symmetric function of \(\gamma_1, \dots, \gamma_n\); for convenience, we define \(c_0 = 1\) and \(c_j = 0\) for \(j < 0\) or \(j > n\). The \(c_j\), as defined by (13), then satisfy the system (14). Proof. It follows directly by calculation that the asserted identity holds for every \(1 \le j \le n\), and likewise for \(j = n + 1\). Conversely, we have Proposition 5. If the \(c_j\) satisfy the system (14), and \(\gamma_1, \dots, \gamma_n\) are the distinct roots of the polynomial with coefficients \(c_j\), then the \(\gamma_j\) satisfy the system (5).
Proof. As in the previous calculations, the corresponding identities hold for every \(j\). By assumption the roots are distinct, so the proof of Lemma 3 shows that the relevant coefficient matrix is invertible, and the system (5) follows.

Second Order Nonlinear System to First Order Nonlinear System
We have identified \(n\) constants of motion for the system (14). This will allow us to reduce the second-order system to a first-order system. Proposition 6. The nonlinear system (14) has a constant of motion for each \(0 \le j \le n\). Proof. Multiplying the first of these equations by \(c_j\) and the second equation by \(c_{j-1}\), and subtracting, we obtain a total derivative; a similar manipulation yields a companion identity, and from it a further relation (compare with (22)). It follows by induction that the asserted quantities are constant. This identity, together with (22), shows that (19) can also be written in the first-order form (23); here the \(\mu_j\) are constants and \(\mu_0 = 1\).
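The "multiply, subtract, and recognize a total derivative" step is the standard route to constants of motion. As a generic illustration only (the toy equation \(c'' = -c\) is not system (14)): multiplying by \(c'\) and integrating gives the conserved quantity \(E = (c')^2 + c^2\), which a numerical integration indeed preserves:

```python
import numpy as np

def rk4(f, y, h, steps):
    """Classical fourth-order Runge-Kutta integration of y' = f(y)."""
    for _ in range(steps):
        k1 = f(y); k2 = f(y + 0.5*h*k1); k3 = f(y + 0.5*h*k2); k4 = f(y + h*k3)
        y = y + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)
    return y

# Toy system c'' = -c written first-order: (c, c') -> (c', -c)
f = lambda y: np.array([y[1], -y[0]])
y0 = np.array([1.0, 0.0])
E0 = y0[1]**2 + y0[0]**2       # constant of motion E = (c')^2 + c^2
y1 = rk4(f, y0, 0.01, 1000)
E1 = y1[1]**2 + y1[0]**2
print(E0, E1)                  # the constant of motion is preserved
```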
Theorem 1 establishes the equivalence of the first-order system (23) and the second-order system (14).
Proof of Theorem 1. (i) This result follows directly from Proposition 6.
(ii) Writing the equations in matrix form, we obtain (24). The coefficient matrix of Equation (24) is a Sylvester resultant matrix. A well-known theorem from linear algebra expresses the determinant of a Sylvester resultant matrix as the resultant of two polynomials \(a(s)\) and \(b(s)\); hence the coefficient matrix of (24) is nonsingular if and only if \(a(s)\) and \(b(s)\) are coprime. A direct calculation then shows that the \(c_j\) solve the second-order system (14).
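The linear-algebra fact used here is that the determinant of the Sylvester matrix of two polynomials equals their resultant, which vanishes exactly when the polynomials share a root. A small sketch with illustrative polynomials (the function name `sylvester` is ours):

```python
import sympy as sp

def sylvester(a_coeffs, b_coeffs):
    """Sylvester matrix of two polynomials given as coefficient lists
    (highest degree first); its determinant is the resultant."""
    m, n = len(a_coeffs) - 1, len(b_coeffs) - 1
    S = sp.zeros(m + n, m + n)
    for i in range(n):                       # n shifted copies of a
        for j, c in enumerate(a_coeffs):
            S[i, i + j] = c
    for i in range(m):                       # m shifted copies of b
        for j, c in enumerate(b_coeffs):
            S[n + i, i + j] = c
    return S

s = sp.symbols('s')
a = [1, -3, 2]     # s^2 - 3s + 2, roots 1 and 2
b = [1, -5, 6]     # s^2 - 5s + 6, roots 2 and 3 (shares the root 2 with a)
S = sylvester(a, b)
print(S.det())                                          # 0: not coprime
print(sp.resultant(s**2 - 3*s + 2, s**2 - 5*s + 6, s))  # agrees
```

For coprime polynomials the same determinant is nonzero, which is the nonsingularity criterion invoked above.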

Method of Integrating Factor
We have reduced the second-order nonlinear system (14) to the first-order nonlinear system (23). To solve the latter system explicitly, we begin by writing it in matrix form. From now on we assume \(n\) is even; when \(n\) is odd, similar results hold. Equation (23) can then be written as a matrix equation (28).
We will show that the nonlinear system (28) can be solved explicitly. The system (28) can be written in the form (30). Our goal is to find an integrating factor \(M\) such that, after multiplication on the left by \(M\), (30) takes the form of a total derivative. This requirement leads to compatibility conditions (34) and (36); if \(B_1\) denotes the first column vector of the coefficient matrix, these conditions show that \(N\) must be of a specific form. To find \(M\), we now rewrite (34) and (36) in matrix form (39). Thus each of the \(n\) rows of \(M\) solves an over-determined linear system consisting of \(n + 3\) equations in \(n + 1\) unknowns.
Studying the structure of the matrix \(S(c)\), we notice the following algebraic identity. Lemma 7. \(S(c)\, J\, C = 0\). As an immediate corollary, we have Lemma 8. Given a nontrivial solution \(\{c_j(x)\}\) of (28), the over-determined system (39) is solvable only if \(N\) satisfies \(NC = 0\). Moreover, for each over-determined system, where \(M_i\) is the \(i\)-th column vector of \(M^{T}\), Lemma 7 and Lemma 8 show that the rank of the augmented matrix is less than \(n + 2\); the over-determined system has at most \(n + 1\) linearly independent equations.
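Solvability of an over-determined system such as (39) is decided by comparing ranks: the system is consistent exactly when the augmented matrix has the same rank as the coefficient matrix. A toy numerical example (the matrices are made up for illustration):

```python
import numpy as np

# Hypothetical over-determined system A x = y: 4 equations, 2 unknowns.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [2.0, 1.0]])
x_true = np.array([3.0, -1.0])
y = A @ x_true                      # consistent right-hand side by construction

rank_A = np.linalg.matrix_rank(A)
rank_aug = np.linalg.matrix_rank(np.column_stack([A, y]))
print(rank_A == rank_aug)           # True: consistent despite the extra equations

x, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.allclose(x, x_true))
```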

Let \(\tilde S(c)\) denote the sub-matrix of \(S(c)^{T}\) obtained by deleting the first row, the \(n\)-th row, the \((n+1)\)-th row, the first column, and the last column of \(S(c)^{T}\). We observe that \(\tilde S(c)\) is also a Sylvester resultant matrix; its determinant is the resultant of two polynomials \(a(s)\) and \(b(s)\). We thus obtain an algebraic fact about the Sylvester matrix.
Lemma 9. Suppose that \(a(s)\) and \(b(s)\) do not vanish simultaneously. Then the following algebraic system is always solvable.

Proof. Each zero row of \(\tilde S(c)\) in echelon form corresponds to a zero row of the augmented matrix in echelon form. Suppose the \(i\)-th row of \(\tilde S(c)\) in echelon form is zero; the structure of \(\tilde S(c)\) then constrains the corresponding right-hand side. Let \(d(s)\) be the greatest common divisor of the two polynomials \(a(s)\) and \(b(s)\). We need to prove that the corresponding entry of the right-hand side vanishes as well. Using the above identities, the left side of the equation vanishes, since the condition in the hypothesis can be expressed in terms of \(d(s)\). Lemma 8 also shows that (39) is solvable only if \(N\) satisfies \(NC = 0\).
To calculate \(NC\), we first introduce some new notation: for each \(\kappa_i\) we define the corresponding auxiliary quantities. The following proposition then proves that \(N\) satisfies both \(NC = 0\) and \(N'C' = 0\).
Proof. Straightforward calculation.
Recall that our objective is to construct an integrating factor that reduces (23) to an algebraic system. As described above, we define the quantities \(f_j\) for \(j = 1, \dots, n\).
We assume \(E_j \neq 0\) in this transformation.
Lemma 11. \(N\) can be rewritten in terms of the \(f_j\).
With the \(f_j\) as given above, we can define a matrix \(M\). The following proposition shows that \(M\) is a solution of (39), i.e., \(M\) is an integrating factor of (30).
Proof. A straightforward calculation shows that \(M S(c) = J N'\). At the same time, the normalization forces the constant of integration to be zero, so the system (28) is integrable and equivalent to the linear system (51). Conversely, since the \(\kappa_j^2\) are assumed distinct and \(f_j \neq 0\), the kernel of \(M_1\) is one-dimensional. The solution \(\mu\) which satisfies \(M_1 \mu = 0\) and \(\mu_0 = 1\) is therefore unique. This proves that (28) is integrable and provides a procedure to obtain explicit solutions from the linear system (51).

Exact Analytic Examples
In this section, we illustrate the procedure of Sections 2 and 3 with a simple exact analytic example.
We explicitly discuss the case \(n = 2\) in (4). Then \(c_1, c_2\) satisfy (14). To solve for \(c\), we reduce the second-order system (14) to the first-order system (30). Given \(\kappa_1\), \(\kappa_2\) with \(\kappa_1 \neq \kappa_2\), we construct \(M\) of the form (50), where \(f_1, f_2\) are solutions of (48). After multiplying (54) by \(M\) on the left, the first-order system is solved explicitly; indeed, the \(c_j\) satisfy the linear system (51). Proof. This theorem is a direct corollary of the results in Section 3.
Remark 16. This theorem does not cover all solutions of the form (4).

Conclusion
A large class of exact solutions to the A-Equation was found in this work. The techniques used in our approach include a nonlinear transformation between the coefficients of a polynomial and its zeros, constants of motion, and an interesting integrating-factor method. The nonlinear system studied here is of interest not only for its connection to inverse problems: it represents a larger category of integrable systems than the C-integrable systems and is worth further investigation.
Since \(c_j\) is the \(j\)-th symmetric function of the \(\gamma_j\), the required identities follow. If \(a(s)\) and \(b(s)\) are not coprime, they have a common root \(s_0 \neq 0\); let \(s_1\), \(s_2\) be the two distinct square roots of \(-s_0\), and substitute them into the equations. Two cases need to be considered here. (1) \(a(s)\) and \(b(s)\) are coprime: the coefficient matrix is nonsingular, and the augmented matrix of (40) has the same rank as the corresponding coefficient matrix; thus (40) is solvable. (2) \(a(s)\) and \(b(s)\) are not coprime: we then use the following result of Laidacker [12]. Let \(d(s)\) be the greatest common divisor of the two polynomials \(a(s)\) and \(b(s)\); the algebraic system is solvable if and only if each zero row of the echelon form of the coefficient matrix corresponds to a zero row of the augmented matrix, which holds in particular when \(d(s) = 1\). The above shows that there exists a polynomial with the required properties.

Here the \(\kappa_j^2\) are arbitrary distinct non-zero constants.

Solving the linear system (51) with \(M_1\) then leads to the solution \(c_k\). Proof. Multiplying Equation (30) by \(M\) and using Proposition 12 yields the nonlinear mapping from \(\gamma\) to \(c\), with the stated values of the constants. Proposition 10 holds, but the nonlinear transformation (46) is not defined.