Solvability and Construction of a Solution to the Fredholm Integral Equation of the First Kind

Abstract

The solvability and the construction of a solution of the Fredholm integral equation of the first kind are considered by embedding the original problem into an extremal problem in Hilbert space. Necessary and sufficient conditions for the existence of a solution are obtained. A method for constructing a solution of the Fredholm integral equation of the first kind is developed. A constructive theory of solvability and of the construction of a solution to a boundary value problem for a linear integrodifferential equation with distributed delay in control, generated by the Fredholm integral equation of the first kind, is developed.

1. Introduction

The search for the necessary and sufficient conditions for the existence of a solution of the Fredholm integral equation of the first kind and the construction of its solution is one of the current unsolved problems in mathematics [1] [2] .

Many problems in natural sciences lead to the Fredholm integral equation of the first kind, where it is required to reconstruct the original phenomenon based on measurement results. Special cases of the Fredholm integral equation of the first kind include the Volterra integral equations of the first kind and Abel's integral equation.

Integrodifferential equations involving the Fredholm integral equation of the first kind, particularly including Volterra and Abel integral equations, serve as mathematical models for many phenomena in various scientific fields: biology [3] , medicine [4] , biophysics [5] , thermodynamics and biological processes [6] , mechanics and electrodynamics [7] , economics [8] and synergetics [9] .

The first work dedicated to distributed delay is the monograph [10] . A review of scientific research on differential equations with deviating arguments is contained in [11] . A qualitative theory of integrodifferential equations is presented in [12] . A review of numerical methods for solving integrodifferential equations can be found in [13] . The correct solvability of the initial problem of Volterra integral-differential equations is given in [14] . Linear homogeneous systems of integrodifferential and integral equations with Volterra and Fredholm matrix kernels with initial conditions equal to zero are considered in [15] . Nonlinear Volterra equations with loads and bifurcation parameters are described in [16] .

Based on the results of constructing the general solution of the Fredholm integral equation of the first kind with a fixed parameter, the following problems have been solved: boundary value problems of ordinary differential equations with phase and integral constraints without involving the Green’s function [17] [18] ; optimal control of dynamic systems with constraints without involving Pontryagin’s maximum principle [19] [20] ; controllability and speed of processes described by ordinary differential equations and parabolic equations with constraints [21] [22] . The general theory of boundary value problems of dynamic systems is presented in [23] .

This work is a continuation of the scientific research of [12] [17] - [23] . The scientific novelty of the results obtained in this article consists in the reduction of the solvability and construction of a solution of the Fredholm integral equation of the first kind to an extremal problem in Hilbert space; the construction of minimizing sequences and the study of their convergence; the determination of weak limit points of minimizing sequences; and the creation of a constructive theory of solvability and construction of solutions of integrodifferential equations with distributed delay in control.

2. Problem Definition

Let’s consider the Fredholm integral equation of the first kind

$$Ku = \int_a^b K(t,\tau)\,u(\tau)\,d\tau = f(t), \quad t \in I = [t_0, t_1], \ \tau \in I_1 = [a, b], \qquad (1)$$

where $K(t,\tau) = \|K_{ij}(t,\tau)\|$, $i = \overline{1,n}$, $j = \overline{1,m}$, is a known matrix of order $n \times m$ whose elements $K_{ij}(t,\tau)$ are measurable and belong to the class $L_2$ on the set

$$S_1 = \{(t,\tau) \in \mathbb{R}^2 \mid t_0 \le t \le t_1, \ a \le \tau \le b\},$$

$$\int_a^b \int_{t_0}^{t_1} |K_{ij}(t,\tau)|^2\,dt\,d\tau < \infty,$$

the function $f(t) \in L_2(I, \mathbb{R}^n)$ is given, $u(\tau) \in L_2(I_1, \mathbb{R}^m)$ is the desired function, the values $t_0, t_1, a, b$ are fixed,

$$t_1 > t_0, \quad b > a, \quad K : L_2(I_1, \mathbb{R}^m) \to L_2(I, \mathbb{R}^n).$$
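For orientation (this sketch is not part of the paper), discretizing (1) by a quadrature rule turns the operator $K$ into a matrix acting on the grid values of $u$; for smooth kernels this matrix is typically extremely ill-conditioned, which illustrates why first-kind equations are delicate and why the paper studies them through the extremal problem of Section 3. The kernel and grid below are illustrative placeholders.

```python
import numpy as np

# Rectangle-rule discretization of (Ku)(t_i) = sum_j K(t_i, tau_j) u(tau_j) * dtau.
t0, t1, a, b, N, M = 0.0, 1.0, 0.0, 1.0, 100, 100
t = np.linspace(t0, t1, N)
tau = np.linspace(a, b, M)
dtau = tau[1] - tau[0]

kernel = lambda t, s: np.exp((t + 1.0) * s)      # placeholder smooth kernel K(t, tau)
A = kernel(t[:, None], tau[None, :]) * dtau      # matrix of the discretized operator K

# A huge condition number signals the ill-posedness typical of first-kind equations.
print("condition number of the discretized operator:", np.linalg.cond(A))
```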

From (1), in particular, we get:

1) Volterra integral equation of the first kind

$$K_1 u = \int_{t_0}^{t} K(t,\tau)\,u(\tau)\,d\tau = f(t), \quad t \in I = [t_0, t_1], \ \tau \in I_2 = [t_0, t];$$

2) Abel integral equation

$$K_2 u = \int_{t_0}^{t} \frac{u(\tau)}{\sqrt{t-\tau}}\,d\tau = f(t), \quad t \in I = [t_0, t_1], \ \tau \in I_2 = [t_0, t],$$

where $K(t,\tau) = \dfrac{1}{\sqrt{t-\tau}}$;

3) the Fredholm integral equation of the first kind with a fixed parameter

$$K_3 u = \int_{t_0}^{t_1} K(t_*, \tau)\,u(\tau)\,d\tau = a, \quad t_* \in [t_0, t_1], \ \tau \in [t_0, t_1], \ a \in \mathbb{R}^n,$$

where $t_* \in [t_0, t_1]$ is a fixed parameter.

The following problems are solved:

Problem 1. Find the necessary and sufficient conditions for the existence of a solution of the integral Equation (1) for a given $f(t) \in L_2(I, \mathbb{R}^n)$.

Problem 2. Find a solution of the integral Equation (1) for a given $f(t) \in L_2(I, \mathbb{R}^n)$.

Problem 3. Find the necessary and sufficient conditions for the existence of a solution of the integral Equation (1) for a given $f(t) \in L_2(I, \mathbb{R}^n)$ when the desired function $u(\tau) \in U(\tau) \subset L_2(I_1, \mathbb{R}^m)$, where either

$$U(\tau) = \{u(\cdot) \in L_2(I_1, \mathbb{R}^m) \mid \alpha(\tau) \le u(\tau) \le \beta(\tau) \ \text{almost everywhere on}\ I_1\},$$

or

$$U(\tau) = \{u(\cdot) \in L_2(I_1, \mathbb{R}^m) \mid \|u\|_{L_2}^2 \le R^2\}.$$

Problem 4. Find a solution of the integral Equation (1) for a given $f(t) \in L_2(I, \mathbb{R}^n)$ when $u(\tau) \in U(\tau) \subset L_2(I_1, \mathbb{R}^m)$, where $U(\tau)$ is a bounded, convex, closed set in $L_2$.

Let us consider a controlled process described by an integral-differential equation of the following form

$$\dot{x} = A(t)x + B(t)u(t) + C(t)\int_a^b K(t,\tau)\,v(\tau)\,d\tau + \mu(t), \quad t \in I = [t_0, t_1], \ \tau \in I_1 = [a, b] \subset I, \qquad (2)$$

with the boundary conditions

$$x(t_0) = x_0 \in \mathbb{R}^n, \quad x(t_1) = x_1 \in \mathbb{R}^n, \qquad (3)$$

where $x_0 \in \mathbb{R}^n$, $x_1 \in \mathbb{R}^n$ are arbitrary fixed points.

Given data: $A(t)$, $B(t)$, $C(t)$, $t \in I$, are given matrices with piecewise continuous elements, of orders $n \times n$, $n \times m$, $n \times m_1$, respectively; $K(t,\tau)$ is a known matrix of order $m_1 \times n_1$ with elements from $L_2$; $\mu(t) \in L_2(I, \mathbb{R}^n)$ is a given function. The controls are

$$u = u(t) \in L_2(I, \mathbb{R}^m), \quad v = v(\tau) \in L_2(I_1, \mathbb{R}^{n_1}). \qquad (4)$$

Definition 1. A solution of Equation (2) generated by the controls $u(t) \in L_2(I, \mathbb{R}^m)$, $v(\tau) \in L_2(I_1, \mathbb{R}^{n_1})$ with $x(t_0) = x_0$ is a function $x(t) = x(t; t_0, x_0, u, v)$, $t \in I$, satisfying the identity

$$x(t) = x(t_0) + \int_{t_0}^{t}\dot{x}(\xi)\,d\xi = x_0 + \int_{t_0}^{t}\Big[A(\xi)x(\xi) + B(\xi)u(\xi) + C(\xi)\int_a^b K(\xi,\tau)v(\tau)\,d\tau + \mu(\xi)\Big]d\xi, \quad t \in I.$$

Therefore the function $x(t)$, $t \in I$, with the initial condition $x(t_0) = x_0$ satisfies the Newton–Leibniz formula and belongs to the class of absolutely continuous functions, $x(t) \in AC(I, \mathbb{R}^n)$.
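Definition 1 is the integral (Cauchy) form of (2). The following minimal sketch (not from the paper) integrates system (2) numerically for given controls, with the distributed-delay term evaluated by quadrature; all matrices, the kernel, $\mu$ and the controls are illustrative placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

t0, t1, a, b = 0.0, 2.0, 1.0, 2.0
A = lambda t: np.array([[0.0, 1.0], [0.0, 0.0]])    # A(t), placeholder
B = lambda t: np.array([[0.0], [1.0]])              # B(t), placeholder
C = lambda t: np.array([[0.0], [1.0]])              # C(t), placeholder
K = lambda t, tau: np.exp((t + 1.0) * tau)          # scalar kernel (m1 = n1 = 1)
mu = lambda t: np.zeros(2)                          # mu(t)
u = lambda t: np.array([np.sin(t)])                 # placeholder control u(t)
v = lambda tau: np.array([1.0])                     # placeholder control v(tau)

tau_grid = np.linspace(a, b, 400)
dtau = tau_grid[1] - tau_grid[0]
v_vals = np.array([v(s)[0] for s in tau_grid])

def distributed_term(t):
    # int_a^b K(t, tau) v(tau) dtau, approximated by the rectangle rule
    return np.array([np.sum(K(t, tau_grid) * v_vals) * dtau])

def rhs(t, x):
    # right-hand side of system (2)
    return A(t) @ x + B(t) @ u(t) + C(t) @ distributed_term(t) + mu(t)

x0 = np.array([0.0, 0.0])
sol = solve_ivp(rhs, (t0, t1), x0, rtol=1e-8, atol=1e-10)
print("x(t1) ≈", sol.y[:, -1])
```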

Definition 2. The system (2)-(4), with solution $x(t) = x(t; t_0, x_0, u, v)$, $t \in I$, generated by the controls $(u(t), v(\tau)) \in L_2(I, \mathbb{R}^m) \times L_2(I_1, \mathbb{R}^{n_1})$, is called controllable if there exist controls $u_*(t) \in L_2(I, \mathbb{R}^m)$, $v_*(\tau) \in L_2(I_1, \mathbb{R}^{n_1})$ that transfer the trajectory of the system (2)-(4) from any given point $x_0 = x(t_0)$ at the given time $t_0$ to any desired final state $x_1 = x(t_1)$ at time $t_1$.

Problem 5. Find the necessary and sufficient conditions for the controllability of the system (2) under conditions (3), (4).

Problem 6. Find a pair $(u_*(t), v_*(\tau)) \in L_2(I, \mathbb{R}^m) \times L_2(I_1, \mathbb{R}^{n_1})$ that transfers the trajectory of the system (2)-(4) from any given point $x_0 = x(t_0)$ at time $t_0$ to any desired final state $x_1 = x(t_1)$ at time $t_1$.

Problem 7. Find the solution $x_*(t; t_0, x_0, u_*, v_*) \in AC(I, \mathbb{R}^n)$ corresponding to the pair $(u_*(t), v_*(\tau)) \in L_2(I, \mathbb{R}^m) \times L_2(I_1, \mathbb{R}^{n_1})$.

3. Solvability of the Fredholm Integral Equation of the First Kind

Consider the integral Equation (1). To solve Problems 1 and 2, we investigate the following extremal problem: minimize the functional

$$J(u) = \int_{t_0}^{t_1}\Big|f(t) - \int_a^b K(t,\tau)\,u(\tau)\,d\tau\Big|^2 dt \to \inf \qquad (5)$$

under the condition

$$u(\tau) \in L_2(I_1, \mathbb{R}^m), \qquad (6)$$

where $f(t) \in L_2(I, \mathbb{R}^n)$ is a given function and $|\cdot|$ is the Euclidean norm.

Theorem 1. Let the kernel $K(t,\tau)$ of the operator be measurable and belong to the class $L_2$ on the rectangle $S_1 = \{(t,\tau) \mid t \in I = [t_0, t_1], \ \tau \in I_1 = [a, b]\}$. Then:

1) the functional (5) under condition (6) is continuously Fréchet differentiable, and the gradient $J'(u) \in L_2(I_1, \mathbb{R}^m)$ at any point $u(\tau) \in L_2(I_1, \mathbb{R}^m)$ is given by the formula

$$J'(u) = -2\int_{t_0}^{t_1}K^*(t,\tau)f(t)\,dt + 2\int_{t_0}^{t_1}\int_a^b K^*(t,\tau)K(t,\sigma)u(\sigma)\,d\sigma\,dt \in L_2(I_1, \mathbb{R}^m); \qquad (7)$$

2) the gradient $J'(u) \in L_2(I_1, \mathbb{R}^m)$ satisfies the Lipschitz condition

$$\|J'(u+h) - J'(u)\| \le l\,\|h\| \quad \text{for any } u,\ u+h \in L_2(I_1, \mathbb{R}^m); \qquad (8)$$

3) the functional (5) under condition (6) is convex, i.e.

$$J(\alpha u + (1-\alpha)v) \le \alpha J(u) + (1-\alpha)J(v) \quad \text{for any } u, v \in L_2(I_1, \mathbb{R}^m),\ \alpha \in [0,1]; \qquad (9)$$

4) the second Fréchet derivative is equal to

$$J''(u) = 2\int_{t_0}^{t_1}K^*(t,\sigma)K(t,\tau)\,dt; \qquad (10)$$

5) if the inequality

$$\int_a^b\int_a^b \xi^*(\sigma)\Big[\int_{t_0}^{t_1}K^*(t,\sigma)K(t,\tau)\,dt\Big]\xi(\tau)\,d\tau\,d\sigma = \int_{t_0}^{t_1}\Big|\int_a^b K(t,\tau)\xi(\tau)\,d\tau\Big|^2 dt \ge \mu\int_a^b|\xi(\tau)|^2 d\tau, \quad \mu > 0, \qquad (11)$$

holds for any $\xi(\tau) \in L_2(I_1, \mathbb{R}^m)$, then the functional (5) under condition (6) is strongly convex.

Proof. As follows from (5), the functional

$$J(u) = \int_{t_0}^{t_1}\Big[f^*(t)f(t) - 2f^*(t)\int_a^b K(t,\tau)u(\tau)\,d\tau + \int_a^b\int_a^b u^*(\tau)K^*(t,\tau)K(t,\sigma)u(\sigma)\,d\sigma\,d\tau\Big]dt.$$

Then the increment of the functional is

$$\Delta J = J(u+h) - J(u) = \int_{t_0}^{t_1}\Big[-2f^*(t)\int_a^b K(t,\tau)h(\tau)\,d\tau + 2\int_a^b\int_a^b h^*(\tau)K^*(t,\tau)K(t,\sigma)u(\sigma)\,d\tau\,d\sigma + \int_a^b\int_a^b h^*(\tau)K^*(t,\tau)K(t,\sigma)h(\sigma)\,d\tau\,d\sigma\Big]dt.$$

It follows that the Fréchet differential is equal to

$$dJ(u,h) = \int_{t_0}^{t_1}\Big[-2f^*(t)\int_a^b K(t,\tau)h(\tau)\,d\tau + 2\int_a^b\int_a^b h^*(\tau)K^*(t,\tau)K(t,\sigma)u(\sigma)\,d\tau\,d\sigma\Big]dt.$$

Then, applying Fubini’s theorem to the variables of integration, we have

$$dJ(u,h) = \int_a^b h^*(\tau)\Big[-2\int_{t_0}^{t_1}K^*(t,\tau)f(t)\,dt + 2\int_{t_0}^{t_1}\int_a^b K^*(t,\tau)K(t,\sigma)u(\sigma)\,d\sigma\,dt\Big]d\tau.$$

Then Fréchet’s derivative is

$$J'(u) = -2\int_{t_0}^{t_1}K^*(t,\tau)f(t)\,dt + 2\int_{t_0}^{t_1}\int_a^b K^*(t,\tau)K(t,\sigma)u(\sigma)\,d\sigma\,dt,$$

where

$$\Delta J = J(u+h) - J(u) = \langle J'(u), h\rangle_{L_2} + o(\|h\|),$$

$$|o(\|h\|)| = \int_{t_0}^{t_1}\Big[\int_a^b\int_a^b h^*(\tau)K^*(t,\tau)K(t,\sigma)h(\sigma)\,d\sigma\,d\tau\Big]dt \le c_1\|h\|_{L_2}^2.$$

It follows that $J'(u)$ is given by formula (7). Since

$$J'(u+h) - J'(u) = 2\int_{t_0}^{t_1}\int_a^b K^*(t,\tau)K(t,\sigma)h(\sigma)\,d\sigma\,dt,$$

then

$$|J'(u+h) - J'(u)| \le 2\int_{t_0}^{t_1}\int_a^b |K^*(t,\tau)K(t,\sigma)|\,|h(\sigma)|\,d\sigma\,dt \le C_2(\tau)\,\|h\|_{L_2}, \quad C_2(\tau) > 0, \ \tau \in I_1,$$

and therefore

$$\|J'(u+h) - J'(u)\|_{L_2} = \Big(\int_a^b |J'(u+h) - J'(u)|^2\,d\tau\Big)^{1/2} \le l\,\|h\|_{L_2}$$

for any $u,\ u+h \in L_2(I_1, \mathbb{R}^m)$. This proves inequality (8).

Let us show that the functional (5) under condition (6) is convex.

For any $u, v \in L_2(I_1, \mathbb{R}^m)$ the following relation holds:

$$\langle J'(u) - J'(v), u - v\rangle_{L_2} = \Big\langle 2\int_{t_0}^{t_1}\int_a^b K^*(t,\tau)K(t,\sigma)[u(\sigma)-v(\sigma)]\,d\sigma\,dt,\ u - v\Big\rangle_{L_2} = 2\int_a^b\Big\{\int_{t_0}^{t_1}\int_a^b [u(\tau)-v(\tau)]^*K^*(t,\tau)K(t,\sigma)[u(\sigma)-v(\sigma)]\,d\sigma\,dt\Big\}d\tau = 2\int_{t_0}^{t_1}\Big|\int_a^b K(t,\sigma)[u(\sigma)-v(\sigma)]\,d\sigma\Big|^2 dt \ge 0.$$

This means that the functional (5) under condition (6) is convex; hence inequality (9) holds. As follows from (7),

$$J'(u+h) - J'(u) = \langle J''(u), h\rangle_{L_2} = \Big\langle 2\int_{t_0}^{t_1}K^*(t,\tau)K(t,\sigma)\,dt,\ h\Big\rangle_{L_2} = 2\int_{t_0}^{t_1}\int_a^b K^*(t,\tau)K(t,\sigma)h(\sigma)\,d\sigma\,dt.$$

Hence $J''(u)$ is given by formula (10). From (10), (11) it follows that

$$\langle J''(u)\xi, \xi\rangle \ge \mu\,\|\xi\|^2 \quad \text{for all } u \in L_2(I_1, \mathbb{R}^m), \ \xi \in L_2(I_1, \mathbb{R}^m).$$

This means that the functional (5) under condition (6) is strongly convex in $L_2(I_1, \mathbb{R}^m)$. The theorem is proved.
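Formulas (5) and (7) can be checked numerically. The following sketch (not from the paper) evaluates $J(u)$ and the gradient (7) on a uniform grid by the rectangle rule and compares $\langle J'(u), h\rangle_{L_2}$ with a finite-difference directional derivative; the kernel, data and grid sizes are illustrative placeholders.

```python
import numpy as np

t0, t1, a, b, N, M = 0.0, 1.0, 0.0, 1.0, 120, 120
t, tau = np.linspace(t0, t1, N), np.linspace(a, b, M)
dt, dtau = t[1] - t[0], tau[1] - tau[0]
K = np.exp((t[:, None] + 1.0) * tau[None, :])         # placeholder kernel
f = (np.exp(t + 2.0) - 1.0) / (t + 2.0)               # placeholder data
Aop = dtau * K                                        # discretized operator u -> Ku

J = lambda u: dt * np.sum((f - Aop @ u) ** 2)             # functional (5)
gradJ = lambda u: -2.0 * dt * (K.T @ (f - Aop @ u))       # formula (7) on the grid

rng = np.random.default_rng(0)
u, h = rng.standard_normal(M), rng.standard_normal(M)
eps = 1e-6
fd = (J(u + eps * h) - J(u - eps * h)) / (2.0 * eps)      # directional derivative
an = dtau * np.dot(gradJ(u), h)                           # <J'(u), h>_{L2} by quadrature
print(fd, an)   # the two numbers should agree to several digits
```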

Lemma 1. Let $u_*(\tau) \in L_2(I_1, \mathbb{R}^m)$ be a solution of the optimization problem (5), (6). Then the Fredholm integral equation of the first kind (1) has a solution if and only if $J(u_*) = 0$.

Proof. By the structure of the optimization problem (5), (6), $J(u_*) = 0$ if and only if

$$f(t) - \int_a^b K(t,\tau)u_*(\tau)\,d\tau = 0 \quad \text{for all } t \in I.$$

This means that the function $u_*(\tau)$, $\tau \in I_1$, is a solution of the integral Equation (1). The lemma is proved.

Thus, for the existence of a solution of the integral Equation (1) it is necessary and sufficient that $J(u_*) = 0$ (this solves Problem 1).

Lemma 2. Let $u_*(\tau) \in U \subset L_2(I_1, \mathbb{R}^m)$ be a solution of the optimization problem: minimize the functional

$$J_1(u) = \int_{t_0}^{t_1}\Big|f(t) - \int_a^b K(t,\tau)u(\tau)\,d\tau\Big|^2 dt \to \inf, \quad u(\tau) \in U. \qquad (12)$$

Then the Fredholm integral equation of the first kind (1) with $u(\tau) \in U \subset L_2(I_1, \mathbb{R}^m)$ has a solution if and only if $J_1(u_*) = 0$.

Proof. The value $J_1(u_*) = 0$, $u_*(\tau) \in U$, if and only if $f(t) - \int_a^b K(t,\tau)u_*(\tau)\,d\tau = 0$ for all $t \in I$. Hence $u_*(\tau) \in U$ is a solution of the integral Equation (1) under the condition $u(\tau) \in U$ (this solves Problem 3). The lemma is proved.

4. Construction of the Solution of the Fredholm Integral Equation of the First Kind

Consider the integral Equation (1). The solution to problem 2 follows from the following theorem.

Theorem 2. For the extremal problem (5), (6), let the sequence $\{u_n(\tau)\} \subset L_2(I_1, \mathbb{R}^m)$ be constructed by the algorithm

$$u_{n+1}(\tau) = u_n(\tau) - \alpha_n J'(u_n), \quad g_n(\alpha_n) = \min_{\alpha \ge 0} g_n(\alpha),$$

$$g_n(\alpha) = J(u_n - \alpha J'(u_n)), \quad n = 0, 1, 2, \ldots,$$

where $J'(u_n)$ is given by formula (7) with $u = u_n$ and $u_0 = u_0(\tau)$ is the starting point.

Then:

1) the numerical sequence $\{J(u_n)\}$ is strictly decreasing and $\lim_{n\to\infty}\|J'(u_n)\| = 0$;

If the set $M(u_0) = \{u(\cdot) \in L_2(I_1, \mathbb{R}^m) \mid J(u) \le J(u_0)\}$ is bounded, then, in addition:

2) the sequence $\{u_n(\tau)\} \subset M(u_0)$ is minimizing: $\lim_{n\to\infty}J(u_n) = J_* = \inf\{J(u) \mid u \in M(u_0)\}$;

3) the sequence $\{u_n\} \subset M(u_0)$ converges weakly to the set $X_* \subset M(u_0)$, where

$$X_* = \Big\{u_*(\tau) \in M(u_0)\ \Big|\ J(u_*) = \min_{u \in M(u_0)}J(u) = J_* = \inf_{u \in M(u_0)}J(u)\Big\};$$

4) the following estimate of the convergence rate is valid

$$0 < J(u_n) - J(u_*) \le \frac{m_0}{n}, \quad m_0 = \mathrm{const} > 0, \ n = 1, 2, \ldots; \qquad (13)$$

5) if inequality (11) holds, then the sequence $\{u_n\} \subset M(u_0)$ converges strongly to $u_*(\tau) \in X_*$, and the following convergence estimates are valid:

$$0 \le J(u_n) - J(u_*) \le [J(u_0) - J(u_*)]\,q^n, \quad q = 1 - \frac{\mu}{l}, \quad 0 \le q < 1, \ \mu > 0,$$

$$\|u_n - u_*\|^2 \le \frac{2}{\mu}\,[J(u_0) - J(u_*)]\,q^n, \quad n = 0, 1, 2, \ldots, \qquad (14)$$

where $l > 0$, $\mu > 0$ are the constants from (8) and (11), respectively;

6) the Fredholm integral equation of the first kind (1) has a solution if and only if $J(u_*) = 0$, $u_* \in X_*$; in this case $u_* = u_*(\tau) \in X_*$ is a solution of the integral Equation (1);

7) if $J(u_*) > 0$, then the integral Equation (1) has no solution for the given $f(t) \in L_2(I, \mathbb{R}^n)$.

Proof. Since $g_n(\alpha_n) \le g_n(\alpha)$, we have $J(u_n) - J(u_{n+1}) \ge J(u_n) - J(u_n - \alpha J'(u_n))$ for all $\alpha \ge 0$, $n = 0, 1, 2, \ldots$ From the inclusion $J(u) \in C^{1,1}(L_2(I_1, \mathbb{R}^m))$ it follows that

$$J(u_n) - J(u_n - \alpha J'(u_n)) \ge \alpha\Big(1 - \frac{\alpha l}{2}\Big)\|J'(u_n)\|^2, \quad n = 0, 1, 2, \ldots$$

Then, taking in particular $\alpha = 1/l$, we get $J(u_n) - J(u_{n+1}) \ge \frac{1}{2l}\|J'(u_n)\|^2 > 0$. Hence the numerical sequence $\{J(u_n)\}$ is strictly decreasing and $\lim_{n\to\infty}\|J'(u_n)\| = 0$. The first statement of the theorem is proved.

Since the functional $J(u)$ is convex on $L_2$, the set $M(u_0)$ is convex and closed. Hence $M(u_0)$ is a bounded, convex, closed set in $L_2$ and therefore weakly compact. The convex, continuously differentiable functional $J(u)$ is weakly lower semicontinuous on the set $M(u_0)$. Hence the set $X_*$ is nonempty, $X_* \subset M(u_0)$, and the following inequality holds:

$$0 \le J(u_n) - J(u_*) \le \langle J'(u_n), u_n - u_*\rangle \le \|J'(u_n)\|\,\|u_n - u_*\| \le D\,\|J'(u_n)\|,$$

where $D$ is the diameter of $M(u_0)$. Note that

$$0 \le \lim_{n\to\infty}[J(u_n) - J(u_*)] \le D\lim_{n\to\infty}\|J'(u_n)\| = 0, \quad \lim_{n\to\infty}J(u_n) = J(u_*) = J_*.$$

Hence the lower bound of the functional $J(u)$ on the set $M(u_0)$ is attained at the point $u_* \in X_*$, and the sequence $\{u_n\} \subset M(u_0)$ is minimizing. The second statement of the theorem is proved.

The third statement of the theorem follows from the inclusion $\{u_n\} \subset M(u_0)$, where $M(u_0)$ is a weakly compact set, and from $J(u_*) = \min_{u \in M(u_0)}J(u) = J_* = \inf_{u \in M(u_0)}J(u)$. Hence $u_n \rightharpoonup u_*$ weakly as $n \to \infty$.

From the inequalities

$$J(u_n) - J(u_{n+1}) \ge \frac{1}{2l}\|J'(u_n)\|^2, \quad 0 \le J(u_n) - J(u_*) \le D\,\|J'(u_n)\|,$$

where $u_n \rightharpoonup u_*$ weakly as $n \to \infty$, the estimate (13) follows with $m_0 = 2D^2 l$. The fourth statement of the theorem is proved.

If inequality (11) is satisfied, then the functional (5) under condition (6) is strongly convex. Then

$$J(u_n) - J(u_*) \le \langle J'(u_n), u_n - u_*\rangle - \frac{\mu}{2}\|u_n - u_*\|^2 \le \frac{1}{2\mu}\|J'(u_n)\|^2, \quad n = 0, 1, 2, \ldots,$$

$$J(u_n) - J(u_{n+1}) \ge \frac{1}{2l}\|J'(u_n)\|^2, \quad n = 0, 1, 2, \ldots$$

It follows that $a_n - a_{n+1} \ge \frac{\mu}{l}a_n$, where $a_n = J(u_n) - J(u_*)$. Hence $0 \le a_{n+1} \le a_n\big(1 - \frac{\mu}{l}\big) = q\,a_n$, and therefore $a_n \le q\,a_{n-1} \le q^2 a_{n-2} \le \cdots \le q^n a_0$, where $a_0 = J(u_0) - J(u_*)$. Hence the estimate (14) follows. The fifth statement is proved.

As follows from (5), $J(u) \ge 0$ for all $u(\tau) \in L_2(I_1, \mathbb{R}^m)$. The sequence $\{u_n\} \subset M(u_0)$ is minimizing for any starting point $u_0 = u_0(\tau) \in M(u_0)$, where $J(u_*) = \min_{u \in M(u_0)}J(u) = J_* = \inf_{u \in M(u_0)}J(u)$.

If $J(u_*) = 0$, then $f(t) = \int_a^b K(t,\tau)u_*(\tau)\,d\tau$. Thus the integral Equation (1) has a solution if and only if $J(u_*) = 0$, in which case $u_* = u_*(\tau) \in M(u_0)$ is a solution of the integral Equation (1). If $J(u_*) > 0$, then $f(t) \ne \int_a^b K(t,\tau)u_*(\tau)\,d\tau$, so $u_* = u_*(\tau) \in M(u_0)$ is not a solution of the integral Equation (1). The theorem is proved.

Example 1. An integral equation is given

$$Ku = \int_0^1 e^{(t+1)\tau}u(\tau)\,d\tau = f(t), \quad f(t) = \frac{1}{t+2}\big(e^{t+2} - 1\big), \quad t \in [0,1] = I. \qquad (15)$$

The optimization problem (5), (6) takes the form

$$J(u) = \int_0^1\Big[\frac{e^{t+2}-1}{t+2} - \int_0^1 e^{(t+1)\tau}u(\tau)\,d\tau\Big]^2 dt \to \inf, \quad u(\tau) \in L_2(I_1, \mathbb{R}^1), \ I_1 = [0,1].$$

The gradient of the functional

$$J'(u) = -2\int_0^1 e^{(t+1)\tau}\,\frac{e^{t+2}-1}{t+2}\,dt + 2\int_0^1 e^{(t+1)\tau}\int_0^1 e^{(t+1)\sigma}u(\sigma)\,d\sigma\,dt.$$

Lipschitz constant: $l \le 2\int_0^1\int_0^1 e^{2(t+1)\tau}\,d\tau\,dt$.

The minimizing sequence: $u_{n+1}(\tau) = u_n(\tau) - \alpha_n J'(u_n)$, $g_n(\alpha_n) = \min_{\alpha \ge 0}J(u_n - \alpha J'(u_n))$, $n = 0, 1, 2, \ldots$

The sequence $\{u_n\}$ converges to the element $u_*(\tau) = e^{\tau}$, $\tau \in [0,1]$. Since $J(u_*) = 0$, the solution of the integral equation (15) is $u_*(\tau) = e^{\tau}$, $\tau \in [0,1]$; it is easy to verify directly that $J(u_*) = 0$.
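A minimal implementation sketch (not from the paper) of the minimizing sequence of Theorem 2 applied to Example 1: integrals are replaced by the rectangle rule and $\alpha_n$ is obtained by exact minimization of the quadratic $g_n(\alpha)$; the grid sizes, iteration count and starting point are illustrative choices.

```python
import numpy as np

N = M = 400
t = np.linspace(0.0, 1.0, N)                       # I  = [0, 1]
tau = np.linspace(0.0, 1.0, M)                     # I1 = [0, 1]
dt, dtau = t[1] - t[0], tau[1] - tau[0]

K = np.exp((t[:, None] + 1.0) * tau[None, :])      # kernel e^{(t+1)tau}
f = (np.exp(t + 2.0) - 1.0) / (t + 2.0)            # right-hand side of (15)
Aop = dtau * K                                     # discretized integral operator

u = np.zeros(M)                                    # starting point u_0 = 0
for n in range(300):
    r = f - Aop @ u                                # residual f - K u_n
    g = -2.0 * dt * (K.T @ r)                      # gradient (7) sampled on the grid
    Ag = Aop @ g
    if np.dot(Ag, Ag) == 0.0:
        break
    alpha = -np.dot(r, Ag) / np.dot(Ag, Ag)        # exact minimizer of g_n(alpha)
    u -= alpha * g

print("J(u_n)           ≈", dt * np.sum((f - Aop @ u) ** 2))
print("max|u_n - e^tau| ≈", np.abs(u - np.exp(tau)).max())
# J(u_n) decreases rapidly toward zero; since first-kind equations are ill-posed,
# u_n approaches the exact solution e^tau considerably more slowly than the
# residual decays.
```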

5. Solvability of the Fredholm Integral Equation of the First Kind with Constraint

Consider the integral Equation (1) when $u(\tau) \in U(\tau) \subset L_2(I_1, \mathbb{R}^m)$, where $U(\tau)$ is a bounded, convex, closed set in $L_2$.

The solution of Problems 3 and 4 can be obtained from the solution of the following optimization problem: minimize the functional

$$J_1(u,v) = \int_{t_0}^{t_1}\Big|f(t) - \int_a^b K(t,\tau)u(\tau)\,d\tau\Big|^2 dt + \|u - v\|_{L_2}^2 \to \inf \qquad (16)$$

under the conditions

$$u(\tau) \in L_2(I_1, \mathbb{R}^m), \quad v(\tau) \in U(\tau) \subset L_2(I_1, \mathbb{R}^m), \ \tau \in I_1, \quad f(t) \in L_2(I, \mathbb{R}^n). \qquad (17)$$

Theorem 3. Let the kernel $K(t,\tau)$ of the operator be measurable and belong to $L_2$ on the rectangle

$$S_1 = \{(t,\tau) \in \mathbb{R}^2 \mid t \in I, \ \tau \in I_1\}.$$

Then:

1) the functional (16) under conditions (17) is continuously Fréchet differentiable, and the gradient

$$J_1'(u,v) = \big(J_{1u}'(u,v),\ J_{1v}'(u,v)\big) \in L_2(I_1, \mathbb{R}^m) \times L_2(I_1, \mathbb{R}^m)$$

at any point $(u,v) \in L_2(I_1, \mathbb{R}^m) \times U(\tau)$ is defined by the formulas

$$J_{1u}'(u,v) = -2\int_{t_0}^{t_1}K^*(t,\tau)f(t)\,dt + 2\int_{t_0}^{t_1}\int_a^b K^*(t,\tau)K(t,\sigma)u(\sigma)\,d\sigma\,dt + 2(u - v) \in L_2(I_1, \mathbb{R}^m),$$

$$J_{1v}'(u,v) = -2(u - v) \in L_2(I_1, \mathbb{R}^m);$$

2) the gradient of the functional $J_1(u,v)$ satisfies the Lipschitz condition

$$\|J_1'(u+h, v+h_1) - J_1'(u,v)\| \le l_1\big(\|h\| + \|h_1\|\big)$$

for all $(u+h, v+h_1) \in L_2(I_1, \mathbb{R}^m) \times U(\tau)$;

3) the functional (16) under conditions (17) is convex.

The proof of the theorem is similar to the proof of Theorem 1.

Theorem 4. For the optimization problem (16), (17), let the sequences $\{u_n(\tau)\}$, $\{v_n(\tau)\}$ be constructed by the algorithm

$$u_{n+1}(\tau) = u_n(\tau) - \alpha_n J_{1u}'(u_n, v_n), \quad v_{n+1}(\tau) = P_U\big[v_n(\tau) - \alpha_n J_{1v}'(u_n, v_n)\big], \quad n = 0, 1, 2, \ldots,$$

where $P_U[\cdot]$ denotes the projection onto the set $U$ and

$$\varepsilon_0 \le \alpha_n \le \frac{2}{l_1 + 2\varepsilon_1}, \quad \varepsilon_0 > 0, \ \varepsilon_1 > 0, \qquad \text{or} \qquad \alpha_n = \frac{1}{l_1}, \quad n = 0, 1, 2, \ldots$$

Then:

1) the numerical sequence $\{J_1(u_n, v_n)\}$ is strictly decreasing;

2) $\lim_{n\to\infty}\|u_n - u_{n+1}\| = 0$, $\lim_{n\to\infty}\|v_n - v_{n+1}\| = 0$;

If, in addition, the set $M(u_0, v_0) = \{(u,v) \in L_2 \times U \mid J_1(u,v) \le J_1(u_0, v_0)\}$ is bounded, then:

3) the sequence $\{(u_n, v_n)\} \subset M(u_0, v_0)$ is minimizing:

$$\lim_{n\to\infty}J_1(u_n, v_n) = J_{1*} = \inf\{J_1(u,v) \mid (u,v) \in L_2 \times U\};$$

4) $u_n \rightharpoonup u_*$, $v_n \rightharpoonup v_*$ weakly as $n \to \infty$, where

$$(u_*, v_*) \in U_* = \Big\{(u_*, v_*) \in L_2 \times U\ \Big|\ J_1(u_*, v_*) = \min_{(u,v) \in L_2 \times U}J_1(u,v) = J_{1*} = \inf_{(u,v) \in L_2 \times U}J_1(u,v)\Big\};$$

5) the integral Equation (1) under the condition $u(\tau) \in U$ has a solution if and only if $J_1(u_*, v_*) = 0$.

The proof of the theorem is analogous to the proof of Theorem 2.
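A sketch (not from the paper) of the projection gradient iteration of Theorem 4 for the box-constraint case $U = \{v \in L_2 \mid \alpha(\tau) \le v(\tau) \le \beta(\tau)\}$ from Problem 3; the kernel, data, bounds and the crude Lipschitz estimate $l_1$ are illustrative assumptions, and the fixed step $\alpha_n = 1/l_1$ is used.

```python
import numpy as np

t0, t1, a, b, N, M = 0.0, 1.0, 0.0, 1.0, 200, 200
t, tau = np.linspace(t0, t1, N), np.linspace(a, b, M)
dt, dtau = t[1] - t[0], tau[1] - tau[0]
K = np.exp((t[:, None] + 1.0) * tau[None, :])        # placeholder kernel
f = (np.exp(t + 2.0) - 1.0) / (t + 2.0)              # placeholder data
Aop = dtau * K

lower = np.zeros(M)                                  # alpha(tau), assumed bound
upper = 3.0 * np.ones(M)                             # beta(tau), assumed bound
P_U = lambda v: np.clip(v, lower, upper)             # projection onto the box U

l1 = 2.0 * dt * dtau * np.sum(K**2) + 4.0            # crude Lipschitz bound (assumed)
step = 1.0 / l1                                      # alpha_n = 1/l_1

u, v = np.zeros(M), P_U(np.zeros(M))
for n in range(2000):
    r = f - Aop @ u
    grad_u = -2.0 * dt * (K.T @ r) + 2.0 * (u - v)   # J'_{1u}(u_n, v_n)
    grad_v = -2.0 * (u - v)                          # J'_{1v}(u_n, v_n)
    u = u - step * grad_u
    v = P_U(v - step * grad_v)

J1 = dt * np.sum((f - Aop @ u) ** 2) + dtau * np.sum((u - v) ** 2)
print("J_1(u_n, v_n) ≈", J1)   # decreases monotonically (statement 1 of Theorem 4)
```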

6. Controllability of a Linear Integro-Differential System

Let us consider the solutions of Problems 5-7 for the process described by the linear integrodifferential Equation (2) under conditions (3), (4). To solve Problems 5-7, it is necessary to investigate the controllability of an auxiliary system of the following form:

$$\dot{y} = A(t)y + B(t)w_1(t) + C(t)w_2(t) + \mu(t), \quad t \in I, \qquad (18)$$

$$y(t_0) = x_0 = x(t_0), \quad y(t_1) = x_1 = x(t_1), \quad x_0 \in \mathbb{R}^n, \ x_1 \in \mathbb{R}^n, \qquad (19)$$

$$w_1(t) \in L_2(I, \mathbb{R}^m), \quad w_2(t) \in L_2(I, \mathbb{R}^{m_1}). \qquad (20)$$

Let’s introduce the notation

$B_1(t) = (B(t), C(t))$, $w(t) = (w_1(t), w_2(t))^*$, $t \in I$. Then the system (18)-(20) can be written as

$$\dot{y} = A(t)y + B_1(t)w(t) + \mu(t), \quad t \in I, \qquad (21)$$

$$y(t_0) = x_0 = x(t_0), \quad y(t_1) = x_1 = x(t_1), \qquad (22)$$

$$w(t) \in L_2(I, \mathbb{R}^{m+m_1}), \qquad (23)$$

where $B_1(t)$, $t \in I$, is a matrix of order $n \times (m + m_1)$.

The solution of the differential Equation (21) with the initial condition $y(t_0) = x_0$ has the form

$$y(t) = \Phi(t, t_0)x_0 + \int_{t_0}^{t}\Phi(t,\tau)B_1(\tau)w(\tau)\,d\tau + \int_{t_0}^{t}\Phi(t,\tau)\mu(\tau)\,d\tau, \quad t \in I, \qquad (24)$$

where $\Phi(t,\tau) = \Theta(t)\Theta^{-1}(\tau)$, $\Theta(t)$ is the fundamental matrix of solutions of the linear homogeneous system $\dot{\xi} = A(t)\xi$, so that $\dot{\Theta}(t) = A(t)\Theta(t)$, $\Theta(t_0) = E_n$, and $E_n$ is the identity matrix of order $n \times n$.

From (24), taking into account that $y(t_1) = x_1$ and $\Phi(t_1, t) = \Phi(t_1, t_0)\Phi(t_0, t)$, we obtain

$$\int_{t_0}^{t_1}\Phi(t_0, t)B_1(t)w(t)\,dt = a, \qquad (25)$$

where $a = \Phi(t_0, t_1)x_1 - x_0 - \int_{t_0}^{t_1}\Phi(t_0, t)\mu(t)\,dt$. Thus the solution of the boundary value problem (21)-(23) reduces to solving the integral equation (25).

Theorem 5. For the existence of a control $w_*(t) \in L_2(I, \mathbb{R}^{m+m_1})$ that transfers the trajectory of Equation (21) from any initial point $y(t_0) = x_0 \in \mathbb{R}^n$ to any desired final state $y(t_1) = x_1 \in \mathbb{R}^n$, it is necessary and sufficient that the matrix

$$W(t_0, t_1) = \int_{t_0}^{t_1}\Phi(t_0, t)B_1(t)B_1^*(t)\Phi^*(t_0, t)\,dt \qquad (26)$$

of order $n \times n$ be positive definite, where $(*)$ denotes transposition. In this case the control

$$w_*(t) = B_1^*(t)\Phi^*(t_0, t)W^{-1}(t_0, t_1)\,a \qquad (27)$$

transfers the trajectory as required, and the solution of the differential Equation (21) corresponding to the control $w_*(t) \in L_2(I, \mathbb{R}^{m+m_1})$ is defined by the formula

$$y_*(t) = \Phi(t, t_0)W(t, t_1)W^{-1}(t_0, t_1)x_0 + \Phi(t, t_0)W(t_0, t)W^{-1}(t_0, t_1)\Phi(t_0, t_1)x_1 + \int_{t_0}^{t}\Phi(t,\tau)\mu(\tau)\,d\tau - \Phi(t, t_0)W(t_0, t)W^{-1}(t_0, t_1)\int_{t_0}^{t_1}\Phi(t_0, t)\mu(t)\,dt, \quad t \in I, \qquad (28)$$

$$W(t_0, t) = \int_{t_0}^{t}\Phi(t_0,\tau)B_1(\tau)B_1^*(\tau)\Phi^*(t_0,\tau)\,d\tau, \quad W(t, t_1) = W(t_0, t_1) - W(t_0, t). \qquad (29)$$

The proof of the theorem for the general case is given in [17] . The control $w_*(t)$, $t \in I$, defined by formula (27) (the control with the minimum norm) was found by R.E. Kalman [19] [20] .

From (26), (27) it follows that

$$w_*(t) = \begin{pmatrix} w_{1*}(t) \\ w_{2*}(t) \end{pmatrix} = \begin{pmatrix} B^*(t)\Phi^*(t_0,t)W^{-1}(t_0,t_1)\,a \\ C^*(t)\Phi^*(t_0,t)W^{-1}(t_0,t_1)\,a \end{pmatrix} = B_1^*(t)\Phi^*(t_0,t)W^{-1}(t_0,t_1)\,a,$$

where

$$w_{1*}(t) = B^*(t)\Phi^*(t_0,t)W^{-1}(t_0,t_1)\,a, \quad w_{2*}(t) = C^*(t)\Phi^*(t_0,t)W^{-1}(t_0,t_1)\,a, \quad t \in I, \qquad (30)$$

$$a = \Phi(t_0,t_1)x_1 - x_0 - \int_{t_0}^{t_1}\Phi(t_0,t)\mu(t)\,dt. \qquad (31)$$
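The quantities (26), (27), (31) are easy to compute numerically for a given pair $(A(t), B_1(t))$. The following sketch (not from the paper) integrates $\dot{\Theta} = A(t)\Theta$ to obtain $\Phi(t_0, t) = \Theta^{-1}(t)$, assembles the Gramian $W(t_0, t_1)$ by a crude rectangle rule and evaluates the minimum-norm control $w_*(t)$; the matrices, boundary points and grid are illustrative placeholders (here borrowed from Example 2 below), and $\mu \equiv 0$.

```python
import numpy as np
from scipy.integrate import solve_ivp

n, t0, t1 = 2, 0.0, 2.0
A_func = lambda t: np.array([[0.0, 1.0], [0.0, 0.0]])      # A(t), placeholder
B1_func = lambda t: np.array([[0.0, 0.0], [1.0, 1.0]])     # B1(t) = (B(t), C(t)), placeholder

t_grid = np.linspace(t0, t1, 401)
dt = t_grid[1] - t_grid[0]

# Theta(t): fundamental matrix, Theta' = A(t) Theta, Theta(t0) = E_n.
sol = solve_ivp(lambda t, th: (A_func(t) @ th.reshape(n, n)).ravel(),
                (t0, t1), np.eye(n).ravel(), t_eval=t_grid, rtol=1e-10, atol=1e-12)
Phis = [np.linalg.inv(sol.y[:, k].reshape(n, n)) for k in range(len(t_grid))]  # Phi(t0, t)

# Gramian (26) by the rectangle rule.
W = sum(P @ B1_func(tk) @ B1_func(tk).T @ P.T for tk, P in zip(t_grid, Phis)) * dt
assert np.all(np.linalg.eigvalsh(W) > 0), "W(t0, t1) must be positive definite"

# Minimum-norm control (27) for given boundary points (placeholders), mu = 0 in (31).
x0, x1 = np.array([0.0, 0.0]), np.array([1.0, 0.0])
a_vec = Phis[-1] @ x1 - x0
w_star = [B1_func(tk).T @ P.T @ np.linalg.solve(W, a_vec) for tk, P in zip(t_grid, Phis)]
print("w*(t0) =", w_star[0], " w*(t1) =", w_star[-1])
```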

The solution of problem 5 follows from the following theorem.

Theorem 6. For the controllability of the linear integrodifferential Equation (2) under conditions (3), (4), it is necessary and sufficient that the following conditions hold:

1) the matrix $W(t_0, t_1)$ defined by formula (26) is positive definite;

2) the identities

$$u_*(t) = w_{1*}(t), \quad \int_a^b K(t,\tau)v_*(\tau)\,d\tau = w_{2*}(t), \quad t \in I, \qquad (32)$$

are satisfied, where $w_{1*}(t) \in L_2(I, \mathbb{R}^m)$, $w_{2*}(t) \in L_2(I, \mathbb{R}^{m_1})$ are defined by formulas (30), (31).

Proof. Let $x_*(t) = x_*(t; t_0, x_0, u_*, v_*)$, $t \in I$, be the solution of the controllability problem for the system (2)-(4), where $x_*(t_0) = x_0$, $x_*(t_1) = x_1$, $u_*(t) \in L_2(I, \mathbb{R}^m)$, $v_*(\tau) \in L_2(I_1, \mathbb{R}^{n_1})$. Let the function $y_*(t) = y_*(t; t_0, x_0, w_{1*}, w_{2*})$, $t \in I$, be the solution of the controllability problem for the system (18)-(20), where $y_*(t_0) = x_0$, $y_*(t_1) = x_1$, $w_{1*}(t) \in L_2(I, \mathbb{R}^m)$, $w_{2*}(t) \in L_2(I, \mathbb{R}^{m_1})$, and the matrix $W(t_0, t_1) > 0$.

For the equality $x_*(t) = y_*(t)$, $t \in I$, it is necessary and sufficient that the matrix $W(t_0, t_1) > 0$ and the identities (32) are satisfied. The connection between the solutions of the controllability problems (2)-(4) and (18)-(20) is given by the identities (32). Thus the controllability of the system (18)-(20) implies the controllability of the system (2)-(4). The theorem is proved.

As follows from the identities (32), the desired control is

$$u_*(t) = B^*(t)\Phi^*(t_0,t)W^{-1}(t_0,t_1)\Big[\Phi(t_0,t_1)x_1 - x_0 - \int_{t_0}^{t_1}\Phi(t_0,t)\mu(t)\,dt\Big], \quad t \in I,$$

and the control $v_*(\tau)$, $\tau \in I_1$, is determined from the solution of the optimization problem

$$J(v) = \int_{t_0}^{t_1}\Big|w_{2*}(t) - \int_a^b K(t,\tau)v(\tau)\,d\tau\Big|^2 dt \to \inf \qquad (33)$$

under the condition

$$v(\tau) \in L_2(I_1, \mathbb{R}^{n_1}), \qquad (34)$$

where

$$w_{2*}(t) = C^*(t)\Phi^*(t_0,t)W^{-1}(t_0,t_1)\Big[\Phi(t_0,t_1)x_1 - x_0 - \int_{t_0}^{t_1}\Phi(t_0,t)\mu(t)\,dt\Big], \quad t \in I.$$

The solution of the optimization problem (33), (34) is obtained as in Sections 3 and 4 (the solution of Problem 2), with $f(t) = w_{2*}(t)$, $t \in I$, and $u(\tau) = v(\tau)$, $\tau \in I_1$.

The solution of problems 6, 7 follows from the following theorem.

Theorem 7. Let the matrix $W(t_0, t_1) > 0$ and $J(v_*) = 0$, where $v_* = v_*(\tau) \in L_2(I_1, \mathbb{R}^{n_1})$ is the solution of the optimization problem (33), (34). Then:

1) the pair $(u_*(t), v_*(\tau)) \in L_2(I, \mathbb{R}^m) \times L_2(I_1, \mathbb{R}^{n_1})$ transfers the trajectory of the system (2)-(4) from any initial point $x_0 = x(t_0)$ at time $t_0$ to any desired final state $x_1 = x(t_1)$ at time $t_1$;

2) the solution of the controllability problem for the system (2)-(4) is the function

$$x_*(t) = y_*(t) = \Phi(t, t_0)W(t, t_1)W^{-1}(t_0, t_1)x_0 + \Phi(t, t_0)W(t_0, t)W^{-1}(t_0, t_1)\Phi(t_0, t_1)x_1 + \int_{t_0}^{t}\Phi(t,\tau)\mu(\tau)\,d\tau - \Phi(t, t_0)W(t_0, t)W^{-1}(t_0, t_1)\int_{t_0}^{t_1}\Phi(t_0, t)\mu(t)\,dt, \quad t \in I.$$

The proof of the theorem follows from Theorems 5, 6.

Example 2. Consider a controllable process

$$\dot{x}_1 = x_2, \quad \dot{x}_2 = u + \int_1^2 e^{(t+1)\tau}v(\tau)\,d\tau, \quad t \in I = [0,2], \ \tau \in I_1 = [1,2],$$

$$x_1(0) = x_{10}, \ x_2(0) = x_{20}, \quad x_1(2) = x_{11}, \ x_2(2) = x_{21}, \qquad (35)$$

$$u(t) \in L_2(I, \mathbb{R}^1), \quad v(\tau) \in L_2(I_1, \mathbb{R}^1).$$

In vector form, the system (35) will be written as

$$\dot{x} = Ax + Bu + C\int_1^2 e^{(t+1)\tau}v(\tau)\,d\tau, \quad t \in I, \ \tau \in I_1,$$

$$x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}, \quad x_0 = \begin{pmatrix} x_{10} \\ x_{20} \end{pmatrix}, \quad x_1 = \begin{pmatrix} x_{11} \\ x_{21} \end{pmatrix}, \qquad (36)$$

$$u(t) \in L_2(I, \mathbb{R}^1), \quad v(\tau) \in L_2(I_1, \mathbb{R}^1),$$

where

$$A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad B = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \quad C = \begin{pmatrix} 0 \\ 1 \end{pmatrix}.$$

1) Necessary and sufficient conditions for controllability. The auxiliary linear controlled system has the form (see (18)-(20))

$$\dot{y} = Ay + Bw_1(t) + Cw_2(t), \quad y(0) = x(0), \ y(2) = x(2), \quad w_1(t) \in L_2(I, \mathbb{R}^1), \ w_2(t) \in L_2(I, \mathbb{R}^1), \quad w(t) = (w_1(t), w_2(t)) \in L_2(I, \mathbb{R}^2), \qquad (37)$$

where the matrices are

$$B_1 = (B, C) = \begin{pmatrix} 0 & 0 \\ 1 & 1 \end{pmatrix}, \quad \Phi(t,\tau) = e^{A(t-\tau)}, \quad e^{At} = \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}, \quad e^{-At} = \begin{pmatrix} 1 & -t \\ 0 & 1 \end{pmatrix}, \quad e^{A^*t} = \begin{pmatrix} 1 & 0 \\ t & 1 \end{pmatrix}, \quad \Theta(t) = e^{At}, \ t \in I.$$

Using the initial data, we find the following vectors and matrices:

$$a = \Phi(0,2)x(2) - x(0) = e^{-2A}x(2) - x(0) = \begin{pmatrix} x_{11} - 2x_{21} - x_{10} \\ x_{21} - x_{20} \end{pmatrix},$$

$$W(0,2) = \int_0^2\Phi(0,t)B_1B_1^*\Phi^*(0,t)\,dt = \begin{pmatrix} \dfrac{16}{3} & -4 \\ -4 & 4 \end{pmatrix} > 0, \quad W^{-1}(0,2) = \begin{pmatrix} \dfrac{3}{4} & \dfrac{3}{4} \\ \dfrac{3}{4} & 1 \end{pmatrix}.$$

The matrix $W(0,2)$ is positive definite. Consequently, there exists a control $w_*(t) = (w_{1*}(t), w_{2*}(t))$, $t \in I$, where

$$w_{1*}(t) = B^*\Phi^*(0,t)W^{-1}(0,2)\,a = \frac{3(t-1)}{4}x_{10} + \frac{3t-4}{4}x_{20} + \frac{3(1-t)}{4}x_{11} + \frac{3t-2}{4}x_{21},$$

$$w_{2*}(t) = C^*\Phi^*(0,t)W^{-1}(0,2)\,a = \frac{3(t-1)}{4}x_{10} + \frac{3t-4}{4}x_{20} + \frac{3(1-t)}{4}x_{11} + \frac{3t-2}{4}x_{21}.$$

The function

$$y_*(t) = \Phi(t,0)W(t,2)W^{-1}(0,2)x(0) + \Phi(t,0)W(0,t)W^{-1}(0,2)\Phi(0,2)x(2) = \big(y_{1*}(t),\ y_{2*}(t)\big),$$

where

$$W(0,t) = \int_0^t\Phi(0,\tau)B_1B_1^*\Phi^*(0,\tau)\,d\tau = \begin{pmatrix} \dfrac{2t^3}{3} & -t^2 \\ -t^2 & 2t \end{pmatrix}, \quad W(t,2) = \begin{pmatrix} \dfrac{16}{3} - \dfrac{2t^3}{3} & t^2 - 4 \\ t^2 - 4 & 4 - 2t \end{pmatrix}.$$

Then

$$y_{1*}(t) = \frac{t^3 - 3t^2 + 4}{4}x_{10} + \frac{t^3 - 4t^2 + 4t}{4}x_{20} + \frac{-t^3 + 3t^2}{4}x_{11} + \frac{t^3 - 2t^2}{4}x_{21},$$

$$y_{2*}(t) = \frac{3t^2 - 6t}{4}x_{10} + \frac{3t^2 - 8t + 4}{4}x_{20} + \frac{-3t^2 + 6t}{4}x_{11} + \frac{3t^2 - 4t}{4}x_{21}, \quad t \in I.$$

Now the necessary and sufficient controllability conditions for the system (36) (cf. Theorem 6) can be written as

$$W(0,2) > 0, \quad u_*(t) = w_{1*}(t) = \frac{3(t-1)}{4}x_{10} + \frac{3t-4}{4}x_{20} + \frac{3(1-t)}{4}x_{11} + \frac{3t-2}{4}x_{21}, \quad t \in I,$$

$$\int_1^2 e^{(t+1)\tau}v_*(\tau)\,d\tau = w_{2*}(t) = \frac{3(t-1)}{4}x_{10} + \frac{3t-4}{4}x_{20} + \frac{3(1-t)}{4}x_{11} + \frac{3t-2}{4}x_{21}, \quad t \in I.$$
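As a quick check of the computations in this example, the following symbolic sketch (not part of the paper) recomputes $a$, $W(0,2)$, $W^{-1}(0,2)$ and $w_{1*}(t) = w_{2*}(t)$ with SymPy.

```python
import sympy as sp

t = sp.symbols('t')
x10, x20, x11, x21 = sp.symbols('x10 x20 x11 x21')

A = sp.Matrix([[0, 1], [0, 0]])
B1 = sp.Matrix([[0, 0], [1, 1]])               # B1 = (B, C)
Phi0t = sp.eye(2) - A * t                      # Phi(0, t) = e^{-A t} = I - A t (A^2 = 0)
Phi02 = sp.eye(2) - 2 * A                      # Phi(0, 2) = e^{-2A}

W02 = sp.integrate(Phi0t * B1 * B1.T * Phi0t.T, (t, 0, 2))     # Gramian (26)
a = Phi02 * sp.Matrix([x11, x21]) - sp.Matrix([x10, x20])      # vector (31), mu = 0

w_star = sp.simplify(B1.T * Phi0t.T * W02.inv() * a)           # control (27)

print(W02)                   # expected: Matrix([[16/3, -4], [-4, 4]])
print(W02.inv())             # expected: Matrix([[3/4, 3/4], [3/4, 1]])
print(sp.expand(w_star[0]))  # w1*(t); w_star[1] = w2*(t) coincides with it
```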

2) Construction of a solution to the controllability problem. The control $v_*(\tau) \in L_2(I_1, \mathbb{R}^1)$ is determined as a solution of the optimization problem: minimize

$$J(v) = \int_0^2\Big|w_{2*}(t) - \int_1^2 e^{(t+1)\tau}v(\tau)\,d\tau\Big|^2 dt \to \inf, \quad v(\tau) \in L_2(I_1, \mathbb{R}^1).$$

The Fréchet gradient of the functional is

$$J_v'(v) = -2\int_0^2 e^{(t+1)\tau}w_{2*}(t)\,dt + 2\int_0^2\int_1^2 e^{(t+1)\tau}e^{(t+1)\sigma}v(\sigma)\,d\sigma\,dt$$

at any point $v(\tau) \in L_2(I_1, \mathbb{R}^1)$. The minimizing sequence is

$$v_{n+1}(\tau) = v_n(\tau) - \alpha_n J_v'(v_n), \quad g_n(\alpha_n) = \min_{\alpha \ge 0}g_n(\alpha), \quad g_n(\alpha) = J(v_n - \alpha J_v'(v_n)), \quad n = 0, 1, 2, \ldots,$$

where $v_n(\tau) \rightharpoonup v_*(\tau)$ weakly as $n \to \infty$.
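A numerical sketch (not from the paper) of this minimizing sequence: the boundary values are arbitrary illustrative numbers, the integrals are discretized by the rectangle rule, and $\alpha_n$ is the exact minimizer of $g_n(\alpha)$. Whether $J(v_n)$ actually tends to zero for the chosen data is precisely what Theorem 7 (and statement 7 of Theorem 2) asks to be checked.

```python
import numpy as np

x10, x20, x11, x21 = 0.0, 0.0, 1.0, 0.0            # arbitrary boundary values (assumed)

N = M = 300
t = np.linspace(0.0, 2.0, N)                       # I  = [0, 2]
tau = np.linspace(1.0, 2.0, M)                     # I1 = [1, 2]
dt, dtau = t[1] - t[0], tau[1] - tau[0]

w2 = (3*(t - 1)/4)*x10 + ((3*t - 4)/4)*x20 + (3*(1 - t)/4)*x11 + ((3*t - 2)/4)*x21
K = np.exp((t[:, None] + 1.0) * tau[None, :])
Aop = dtau * K                                     # discretized integral operator

v = np.zeros(M)
for n in range(2000):
    r = w2 - Aop @ v
    g = -2.0 * dt * (K.T @ r)                      # gradient J'_v(v_n)
    Ag = Aop @ g
    if np.dot(Ag, Ag) == 0.0:
        break
    alpha = -np.dot(r, Ag) / np.dot(Ag, Ag)        # exact line search
    v -= alpha * g

print("J(v_n) ≈", dt * np.sum((w2 - Aop @ v) ** 2))
# If J(v_n) tends to zero, Theorem 7 applies and the pair (u*, v_n) approximately
# solves the controllability problem; otherwise the integral equation for v* has
# no solution in L2 for this data (statement 7 of Theorem 2).
```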

7. Conclusions

As a result of this research, the following results were obtained:

· solvability and construction of the Fredholm integral equation of the first kind are reduced to solving an extremal problem in a Hilbert space;

· necessary and sufficient conditions for the existence of a solution of the Fredholm integral equation of the first kind are obtained;

· algorithms for constructing minimizing sequences have been developed, and their convergence to solutions of Fredholm integral equations of the first kind has been proved for the cases with and without constraints on the desired solution;

· a constructive theory of solvability and construction of a solution to a linear integrodifferential equation with distributed control delay was created;

· the scientific novelty of the results obtained lies in the fact that a new method has been created for studying the Fredholm integral equation of the first kind, based on convex analysis and the theory of extremal problems in Hilbert space;

· the practical value of the results obtained lies in the fact that a constructive algorithm has been created for solving the Fredholm integral equation of the first kind and an integrodifferential equation with distributed delay in control, which is easily implemented by modern tools of computational mathematics and computer technology.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Kolmogorov, A.N. and Fomin, S.V. (1989) Elements of Function Theory and Functional Analysis.
[2] Krasnov, M.L. (1975) Integral Equations. 304 p.
[3] Volterra, V. (1976) Mathematical Theory of the Struggle for Existence.
[4] Bellman, R. (1987) Mathematical Methods in Medicine.
[5] Romanovsky, Y.M., Stepanova, N.V. and Chernyakhovsky, D.S. (1984) Mathematical Biophysics.
[6] Rubin, A.B. (1984) Thermodynamics of Biological Processes. Moscow State University Publishing, Moscow.
[7] Imanaliev, M.I. (1977) Methods for Solving Nonlinear Inverse Problems and Their Applications.
[8] Glushkov, V.M., Ivanov, V.V. and Yanenko, V.M. (1983) Modeling of Developing Systems.
[9] Nicolis, G. and Prigogine, I. (1979) Self-Organization in Non-Equilibrium Systems.
[10] Bykov, Y.V. (1981) On Some Problems in the Theory of Integro-Differential Equations.
[11] Elsgolts, L.E. and Norkin, S.B. (1971) Introduction to the Theory of Differential Equations with Deviating Argument.
[12] Aisagaliev, S.A. (2022) Qualitative Theory of Integro-Differential Equations. Kazakh National University, Almaty.
[13] Brunner, H. (2004) Collocation Methods for Volterra Integral and Related Functional Equations. Cambridge University Press, Cambridge.
https://doi.org/10.1017/CBO9780511543234
[14] Vlasov, V.V. and Rautian, N.A. (2022) Correct Solvability of Volterra Integro-Differential Equations in Hilbert Space. Differential Equations, 58, 1414-1430.
https://doi.org/10.1134/S0012266122010010X
[15] Bulatov, M.V. and Solovarova, L.S. (2022) On Systems of Integro-Differential and Integral Equations with Identically Degenerate Matrix in the Main Part. Differential Equations, 58, 1226-1233.
https://doi.org/10.1134/S0012266122090063
[16] Sidorov, N.A. and Sidorov, D.N. (2021) Nonlinear Volterra Equations with Loads and Bifurcation Parameters: Existence Theorems and Solution Construction. Differential Equations, 27, 1654-1664.
https://doi.org/10.1134/S0012266121120107
[17] Aisagaliev, S.A. and Zhunussova, Z.K. (2015) To the Boundary Value Problem of Ordinary Differential Equations. Electronic Journal of Qualitative Theory of Differential Equations (EJQTDE), 18, 1-17.
https://doi.org/10.14232/ejqtde.2015.1.57
[18] Aisagaliev, S.A. and Kalimoldaev, N.N. (2015) Constructive Method for Solving Boundary Value Problems for Ordinary Differential Equations. Differential Equations, 51, 147-160.
https://doi.org/10.1134/S0012266115020019
[19] Aisagaliev, S.A. (1996) Optimal Control of Linear Systems with Fixed Trajectory and Bounded Control. Differential Equations, 32, 1017-1023.
[20] Aisagaliev, S.A. and Kabidoldanova, A.A. (2012) On the Optimal Control of Linear Systems with Linear Performance Criterion and Constraints. Differential Equations, 48, 1-13.
https://doi.org/10.1134/S0012266112060079
[21] Aisagaliev, S.A. (1991) Controllability of a Differential Equation Systems. Differential Equations, 27, 1037-1045.
[22] Aisagaliev, S.A. and Belogurov, A.P. (2012) Controllability and Speed of the Process Described by a Parabolic Equation with Bounded Control. Siberian Mathematical Journal, 53, 13-28.
https://doi.org/10.1134/S0037446612010028
[23] Aisagaliev, S.A. (2021) Theory of Boundary Value Problems of Dynamic Systems. Almaty.
