Feedback Systems on a Reflexive Banach Space—Linearization

Abstract

The aim of this work is to formulate and prove results on the normality and the Lipschitz continuity of a nonlinear feedback system described by maximal monotone and hemicontinuous operators defined on real reflexive Banach spaces, as well as the approximation, in a neighborhood of zero, of the solutions of a feedback system $[A,B]$ assumed to be nonlinear by the solutions of a linear one $[A_{0},B_{0}]$. This approximation allows us to obtain appropriate estimates of the solutions. These estimates have a significant effect on the study of the robust stability and sensitivity of such a system, see [1] [2] [3]. We then consider a linear FS $[A_{0},B_{0}]$ and prove that if $(u,v)\to(e,f)$ and $(u,v)\to(e_{0},f_{0})$, with $\|u\|\le r$, $\|v\|\le r$ ($r>0$) and $(e,f)$, $(e_{0},f_{0})$ the respective solutions of the FS's $[A,B]$ and $[A_{0},B_{0}]$ corresponding to the given $(u,v)$ in $E\times E^{*}$, then there exist positive real constants $k_{11}$, $k_{12}$, $k_{21}$, $k_{22}$ such that $\|e-e_{0}\|\le k_{11}\|u\|+k_{12}\|v\|$ and $\|f-f_{0}\|\le k_{21}\|u\|+k_{22}\|v\|$. These results are the subject of theorems 3.1, ..., 3.3. The proofs of these theorems are based on our lemmas 3.2, ..., 3.5, devoted, according to the hypotheses on A and B, to the existence of the inverses of the operators $I+BA$ and $M_{a}$. The results obtained and demonstrated throughout this document extend to a general Banach space those of [4] on a Hilbert space H and those of [5] on an extended Hilbert space $H_{e}$.


1. Introduction

During the last decades (see [6] [7]), functional analysis and the theory of monotone operators defined on Banach spaces have played an important role in the study and analysis of systems. Reference [4] introduced feedback systems described by monotone operators defined on appropriate spaces, and established a series of existence and uniqueness results for the solutions of such systems on a Hilbert space H. These types of systems find their uses in several fields, such as control theory, network theory, and the solution of the Hammerstein equation [8]. The techniques used are based on the surjectivity theorem for maximal monotone and coercive operators on a reflexive Banach space. References [4] [5] introduced the notion of an extended Hilbert space $H_{e}$, and obtained, among other results, normality and linearization results for a feedback system on this space.

One of our fundamental results is that the behavior of the FS $[A,B]$ is completely determined by the inverse of the map $M_{a}=I+B(a+A)$ (see (2)). Note that, in the case where the operators A and B are not linear, if $(u,v)\to(e,f)$ then $(e,f)=(M_{v}^{-1}u,\;v+AM_{v}^{-1}u)$. If one of the two operators is linear, the solution $(e,f)$ can take forms that do not necessarily depend on the inverse of the operator $M_{a}$, see (4). This approximation allows us to obtain suitable estimates of the solutions, in the sense of Section 3; these estimates have a significant effect on the study of robust stability and sensitivity [1]. For more details on the study of the inverse of such a nonlinear operator, one may consult [9] [10] [11] [12].

The subject of our work is to proceed by an approximation method, that is, to find an approximate solution of $[A,B]$, supposed nonlinear, by a linearized system in a neighborhood of zero. We then consider a linear FS $[A_{0},B_{0}]$ and prove that if $(u,v)\to(e,f)$ and $(u,v)\to(e_{0},f_{0})$, with $\|u\|\le r$, $\|v\|\le r$ ($r>0$) and $(e,f)$, $(e_{0},f_{0})$ the respective solutions of $[A,B]$ and $[A_{0},B_{0}]$ corresponding to the given $(u,v)$ in $E\times E^{*}$, then there exist positive real constants $k_{11}$, $k_{12}$, $k_{21}$, $k_{22}$ such that $\|e-e_{0}\|\le k_{11}\|u\|+k_{12}\|v\|$ and $\|f-f_{0}\|\le k_{21}\|u\|+k_{22}\|v\|$.

The paper is organized as follows. In Section 2, we recall some definitions and prove normality results for the FS $[A,B]$, according to whether the two operators are nonlinear or one of them is linear. Section 3 is devoted to our results on normality, Lipschitz continuity, and the approximation of the solutions of $[A,B]$, supposed nonlinear, by those of a linearized system in a neighborhood of zero. An example is presented at the end of this section.

2. Definitions and Preliminary Results

Let E be a real vector space, $2^{E}$ the set of all subsets of E, and $E^{*}$ the algebraic dual of E. Let $A:E\to 2^{E^{*}}$ and $B:E^{*}\to 2^{E}$; the pair $(A,B)$ is called a feedback system and is denoted FS $[A,B]$ or $[A,B]$. $D(A)=\{x\in E;\ Ax\neq\emptyset\}$ is the domain of A. We say that A is an operator if $D(A)=E$ and $Ax$ is a singleton. A is said to be simple if, for every $(x,x')\in E^{2}$ with $x\neq x'$, we have $Ax\cap Ax'=\emptyset$. Note that if A is a simple operator, it is injective.

The meaning of the following definition can be understood by looking at Figure 1.

Figure 1. 1 − (u, v) input; 2 − (y, g) output; 3 − (e, f) solution, 4 − [A, B] a feedback system.

Definition 2.1. We say that an element $(e,f)$ of $E\times E^{*}$ is a solution of the FS $[A,B]$ corresponding to the given $(u,v)$ (input) in $E\times E^{*}$, and we write $(u,v)\to(e,f)$, if there exists $(y,g)$ (output) in $Bf\times Ae$ such that:

$$e=u-y;\qquad f=v+g.\qquad (1)$$
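As a simple illustration added by way of example (it is not part of the original development), take $E=E^{*}=\mathbb{R}$ and the single-valued linear maps $Ax=\alpha x$, $Bf=\beta f$ with $\alpha,\beta\in\mathbb{R}$ and $1+\alpha\beta\neq 0$. Then (1) reads $e=u-\beta f$, $f=v+\alpha e$, and a direct substitution gives the unique solution

$$e=\frac{u-\beta v}{1+\alpha\beta},\qquad f=\frac{v+\alpha u}{1+\alpha\beta}.$$

This scalar case is only a sanity check of Definition 2.1; the results below do not assume linearity.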

Definition 2.2. We say that the FS [ A , B ] is:

1) Resoluble, if for all $(u,v)\in E\times E^{*}$, there exists $(e,f)\in E\times E^{*}$ satisfying (1).

2) Unambiguous, if each solution is unique.

3) Normal, if it is resoluble and unambiguous.

The existence and uniqueness results for the solutions of $[A,B]$ are based on the mapping $M_{a}:E\to 2^{E}$ defined, for all $(x,a)\in E\times E^{*}$, by

$$M_{a}x=x+B(a+Ax).\qquad (2)$$
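In the scalar illustration given after Definition 2.1 ($Ax=\alpha x$, $Bf=\beta f$), the map (2) becomes $M_{a}x=x+\beta(a+\alpha x)=(1+\alpha\beta)x+\beta a$, which is invertible precisely when $1+\alpha\beta\neq 0$; this is the mechanism behind the propositions that follow.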

Proposition 2.1. The FS $[A,B]$ is resoluble iff $M_{a}E=E$, $\forall a\in E^{*}$.

Proof. Since $[A,B]$ is resoluble, for $(u,a)\in E\times E^{*}$ there are $(e,f)\in E\times E^{*}$ and $(y,g)\in Bf\times Ae$ verifying $e+y=u$ and $f-g=a$. So

$$u\in e+Bf=e+B(a+g)\subseteq e+B(a+Ae)=M_{a}e\subseteq\bigcup_{x\in E}M_{a}x=M_{a}E,$$ therefore

$E=M_{a}E$. Reciprocally, let $(u,v)\in E\times E^{*}$; since $E=M_{v}(E)$, there exists $e\in E$ such that $u\in M_{v}e=e+B(v+Ae)$, satisfying $u=e+y$, where $y\in B(v+Ae)$; then $y\in Bf$, with $f\in v+Ae$. Therefore, there is $g\in Ae$ verifying $f=v+g$, so $(u,v)\to(e,f)$ and $[A,B]$ is resoluble.

Proposition 2.2.

1) If $[A,B]$ is unambiguous, then for every $a\in E^{*}$, $M_{a}$ is simple.

2) If A is an operator and, for every $a\in E^{*}$, $M_{a}$ is simple, then $[A,B]$ is unambiguous.

Proof.

1) Suppose that there are $a\in E^{*}$ and $(x,x')\in E^{2}$ such that $M_{a}x\cap M_{a}x'\neq\emptyset$. Let $u\in M_{a}x\cap M_{a}x'$; since $u\in x+B(a+Ax)$, there exists $y\in B(a+Ax)$ with $u=x+y$; since $y\in Bf$ where $f=a+g$ with $g\in Ax$, then $(u,a)\to(x,f)$. Likewise, since $u\in x'+B(a+Ax')$, we get $(u,a)\to(x',f')$; as $[A,B]$ is unambiguous, $(x,f)=(x',f')$, hence $x=x'$, so for every $a\in E^{*}$, $M_{a}$ is simple.

2) Assume that, for $(u,a)\in E\times E^{*}$, there exist two solutions $(e,f)$, $(e',f')$ of $[A,B]$ related to $(u,a)$. Then there are $y\in Bf$, $y'\in Bf'$ such that $u=e+y=e'+y'$ and $a=f-Ae=f'-Ae'$. So $u\in e+Bf\subseteq e+B(a+Ae)=M_{a}e$; also $u\in M_{a}e'$, hence $M_{a}e\cap M_{a}e'\neq\emptyset$. As, for every $a\in E^{*}$, $M_{a}$ is simple, we have $e=e'$ and, since A is an operator, $f=a+Ae=a+Ae'=f'$.

Corollary 2.1. Let A and B be two operators. The FS $[A,B]$ is:

1) Resoluble, iff $\forall a\in E^{*}$, $M_{a}$ is surjective.

2) Unambiguous, iff $\forall a\in E^{*}$, $M_{a}$ is injective.

3) Normal, iff $\forall a\in E^{*}$, $M_{a}$ is invertible. In this case, if $(u,v)\to(e,f)$ then:

$$(e,f)=\big(M_{v}^{-1}u,\;v+AM_{v}^{-1}u\big).\qquad (3)$$

Proof.

1) If $[A,B]$ is resoluble then, by proposition 2.1, $M_{a}E=E$ for all $a\in E^{*}$; since A and B are operators, $M_{a}x$ is a singleton for every $x\in E$, so for each $y\in E$ there exists $x\in E$ with $M_{a}x=y$, i.e. $\forall a\in E^{*}$, $M_{a}$ is surjective. Reciprocally, if $\forall a\in E^{*}$, $M_{a}$ is surjective, then $E\subseteq\bigcup_{x\in E}M_{a}x=M_{a}E$; the reverse inclusion is obvious, hence $M_{a}E=E$, $\forall a\in E^{*}$, and, by proposition 2.1, $[A,B]$ is resoluble.

2) If $[A,B]$ is unambiguous, then from proposition 2.2 (1), $\forall a\in E^{*}$, $M_{a}$ is simple, hence injective. Conversely, if $\forall a\in E^{*}$, $M_{a}$ is injective, then for $x,x'$ in E with $x\neq x'$ we have $M_{a}x\neq M_{a}x'$, and since these are singletons, $M_{a}x\cap M_{a}x'=\emptyset$, so $M_{a}$ is simple; from proposition 2.2 (2), $[A,B]$ is unambiguous.

3) Direct consequence of 1) and 2).

Let us demonstrate Formula (3). If $(u,v)\to(e,f)$, there exist $y\in Bf$ and $g\in Ae$ satisfying $e=u-y=u-Bf$ (because B is an operator) and $f=v+g=v+Ae$. Then $e=u-B(v+Ae)$, i.e. $u=e+B(v+Ae)=M_{v}e$, whence $e=M_{v}^{-1}u$, which implies $f=v+AM_{v}^{-1}u$.

Proposition 2.3. Let A and B be two operators, $I:E\to E$ the identity. If B is linear, then $I+BA$ is bijective iff $\forall a\in E^{*}$, $M_{a}$ is bijective.

Proof.

Proof that $I+BA$ is surjective $\Leftrightarrow$ $\forall a\in E^{*}$, $M_{a}$ is surjective. Let $(a,y)\in E^{*}\times E$; since $y+Ba\in E$, B is linear and $M_{a}$ is surjective, there exists $x\in E$ such that $M_{a}x=y+Ba=x+B(a+Ax)=x+Ba+BAx=Ba+(I+BA)x$, then $(I+BA)x=y$ and $I+BA$ is surjective. Reciprocally, since $y-Ba\in E$ and $I+BA$ is surjective, there exists $x\in E$ such that $(I+BA)x=y-Ba$; this implies $y=(I+BA)x+Ba=x+B(a+Ax)=M_{a}x$, therefore $\forall a\in E^{*}$, $M_{a}$ is surjective.

Proof that $I+BA$ is injective $\Leftrightarrow$ $\forall a\in E^{*}$, $M_{a}$ is injective. Let $x,x'$ in E with $(I+BA)x=(I+BA)x'$; then $\forall a\in E^{*}$, $M_{a}x=(I+BA)x+Ba=(I+BA)x'+Ba=M_{a}x'$, and as $M_{a}$ is injective, $x=x'$. Conversely, let $(x,x')\in E^{2}$ be such that $M_{a}x=M_{a}x'$, that is to say $(I+BA)x+Ba=(I+BA)x'+Ba$, so

$(I+BA)x=(I+BA)x'$; since $I+BA$ is injective, $x=x'$.

Corollary 2.2. Let A and B be two operators, and $I:E\to E$ the identity. If B is linear, the FS $[A,B]$ is:

1) Resoluble iff I + B A is surjective.

2) Unambiguous iff I + B A is injective.

3) Normal iff $I+BA$ is invertible. In this case, if $(u,v)\to(e,f)$ then:

$$(e,f)=\big((I+BA)^{-1}(u-Bv),\;v+A(I+BA)^{-1}(u-Bv)\big).\qquad (4)$$

Proof. Direct consequence of proposition 2.3 and corollary 2.1. To demonstrate (4), let $(u,v)\to(e,f)$; there are $y=Bf$ and $g=Ae$ satisfying $e=u-Bf$ and $f=v+Ae$, whence $e=u-B(v+Ae)=u-Bv-BAe$, so $(I+BA)e=u-Bv$, hence $e=(I+BA)^{-1}(u-Bv)$ and $f=v+A(I+BA)^{-1}(u-Bv)$.
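Continuing the scalar illustration introduced after (1) ($Ax=\alpha x$, $Bf=\beta f$, $1+\alpha\beta\neq 0$), formula (4) gives $e=(1+\alpha\beta)^{-1}(u-\beta v)$ and $f=v+\alpha(1+\alpha\beta)^{-1}(u-\beta v)=(1+\alpha\beta)^{-1}(v+\alpha u)$, which agrees with the solution computed directly there; this is only a consistency check, not part of the original argument.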

3. Linearization Results

Let $E^{*}$, $E^{**}$ be the dual and the bidual of a real normed space E. The canonical map $\pi:E\to E^{**}$, defined for every $(x,f)\in E\times E^{*}$ by $\langle f,\pi x\rangle_{E^{*},E^{**}}:=\langle x,f\rangle_{E,E^{*}}=f(x)$, is linear and isometric, hence continuous and injective. If the range $R(\pi)=E^{**}$, we say that E is reflexive; then E is topologically identified with $E^{**}$, and $x\in E$ can be considered as a linear form on $E^{*}$, so it is natural to write, for any $(x,f)\in E\times E^{*}$, $\langle f,x\rangle_{E^{*},E}=\langle x,f\rangle_{E,E^{*}}$. Since $E^{**}$ is a Banach space, E is then also a Banach space. Note that if E is a real Hilbert space then $E^{*}=E$. In the sequel, we assume that E is reflexive, and we denote indifferently by $\langle\cdot,\cdot\rangle$ the pairing in the duality between these spaces, and by $\|\cdot\|$ their norms.
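For orientation (a standard fact recalled here, not part of the original text): the Lebesgue spaces $L^{p}(\Omega)$ with $1<p<\infty$ are reflexive, with $(L^{p})^{*}=L^{q}$, $\frac{1}{p}+\frac{1}{q}=1$, while $L^{1}$ and $L^{\infty}$ are not reflexive; every Hilbert space is reflexive with $E^{*}=E$.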

Definition 3.1. A is said to be:

1) Monotone if, for every $(x,y)\in E^{2}$ and $(f,g)\in Ax\times Ay$, $\langle x-y,f-g\rangle\ge 0$ (i.e. $\langle x-y,Ax-Ay\rangle\ge 0$ if A is an operator). It is strictly monotone if $\langle x-y,f-g\rangle=0$ implies $x=y$, or equivalently $\langle x-y,f-g\rangle>0$ whenever $x\neq y$.

2) Maximal monotone, if A is monotone and the following property holds: ($S:E\to 2^{E^{*}}$; $G(A)\subseteq G(S)$, S monotone) implies $A=S$, where $G(A)=\{(x,f)\in E\times E^{*};\ f\in Ax\}$ is the graph of A.

Definition 3.2. B is said to be:

1) Monotone if, for every $(f,g)\in(E^{*})^{2}$ and $(x,y)\in Bf\times Bg$, $\langle f-g,x-y\rangle\ge 0$ (i.e. $\langle f-g,Bf-Bg\rangle\ge 0$ if B is an operator). It is strictly monotone if $\langle f-g,x-y\rangle=0$ implies $f=g$, or equivalently $\langle f-g,x-y\rangle>0$ whenever $f\neq g$.

2) Maximal monotone, if B is monotone and the following property holds: ($T:E^{*}\to 2^{E}$; $G(B)\subseteq G(T)$, T monotone) implies $B=T$, where $G(B)=\{(f,x)\in E^{*}\times E;\ x\in Bf\}$ is the graph of B.
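To fix ideas (an illustration added here, with $E=E^{*}=\mathbb{R}$): $Ax=x^{3}$ is monotone, since $(x-y)(x^{3}-y^{3})=(x-y)^{2}(x^{2}+xy+y^{2})\ge 0$, and in fact strictly monotone; $Ax=\arctan x$ is monotone but not coercive, since $\langle x,Ax\rangle/\|x\|\le\pi/2$; a linear map $Ax=\alpha x$ is monotone iff $\alpha\ge 0$, and being continuous it is then maximal monotone. In a real Hilbert space, the subdifferential of a convex lower semicontinuous proper function is a typical (multivalued) maximal monotone map.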

Corollary 3.1. Let A and B be monotone. If A is strictly monotone, or B is a strictly monotone operator, then $N=I+BA$ is simple.

Proof. Let $x,x'$ in E and $y\in(I+BA)x\cap(I+BA)x'\neq\emptyset$; there are $f\in Ax$ and $g\in Ax'$ such that $y\in(x+Bf)\cap(x'+Bg)$, hence there exists $(z,z')\in Bf\times Bg$ verifying $y=x+z=x'+z'$, which implies $x-x'+z-z'=0$ and $\langle f-g,x-x'\rangle+\langle f-g,z-z'\rangle=0$. As $\langle f-g,x-x'\rangle\ge 0$ and $\langle f-g,z-z'\rangle\ge 0$ because A and B are monotone, then $\langle f-g,x-x'\rangle=0$ and $\langle f-g,z-z'\rangle=0$. Therefore, if A is strictly monotone, $\langle x-x',f-g\rangle=0$ gives $x=x'$ directly; if B is a strictly monotone operator, $\langle f-g,Bf-Bg\rangle=0$ gives $f=g$, which implies $z=Bf=Bg=z'$, and replacing in y we get $x=x'$.

Corollary 3.2. If A or B is a strictly monotone operator, then $[A,B]$ is unambiguous.

Proof. According to proposition 2.2 and corollary 3.1, this amounts to demonstrating that, for every $a\in E^{*}$, $M_{a}=I+B(a+A)=I+BC_{a}$ (with $C_{a}=a+A$) is simple. It suffices to note that $C_{a}$ is monotone (respectively strictly monotone) iff A is monotone (respectively strictly monotone).

Definition 3.3. An operator $A:E\to E^{*}$ is said to be:

1) Coercive, if $\lim_{\|x\|\to\infty}\dfrac{\langle x,Ax\rangle}{\|x\|}=+\infty$.

2) Hemicontinuous, if for any $x_{0},x\in E$ and $t_{n}\to 0$, $A(x_{0}+t_{n}x)$ converges weakly to $Ax_{0}$ in $E^{*}$.

Note that hemicontinuity is a mild requirement: continuity implies hemicontinuity. The following classical facts will be used, see [13] [14] [15] [16]:

If A is monotone, bounded and hemicontinuous, then A is maximal monotone.

If A is maximal monotone and coercive, then $R(A)=E^{*}$.

If A is monotone, hemicontinuous and coercive, then $R(A)=E^{*}$.

If A is hemicontinuous and there exists $c>0$ such that $\langle x-y,Ax-Ay\rangle\ge c\|x-y\|^{2}$ for all $x,y\in E$, then A is invertible and the inverse $A^{-1}:E^{*}\to E$ is monotone and continuous.
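For example (added for illustration, on $E=E^{*}=\mathbb{R}$): the operator $Ax=2x+\sin x$ is continuous, hence hemicontinuous, and satisfies $\langle x-y,Ax-Ay\rangle=(x-y)\big(2(x-y)+\sin x-\sin y\big)\ge 2(x-y)^{2}-|x-y|^{2}=(x-y)^{2}$, i.e. the last condition above holds with $c=1$; moreover, from $c\|x-y\|^{2}\le\|x-y\|\,\|Ax-Ay\|$ one gets $\|A^{-1}f-A^{-1}g\|\le c^{-1}\|f-g\|$, so $A^{-1}$ is in fact Lipschitz with constant $1/c=1$.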

To simplify the statements of the linearization theorems, we denote:

$$M=\Big\{N:E\to E^{*}\ \text{such that}\ \mu_{N}:=\inf_{\substack{x,y\in E\\ x\neq y}}\frac{\langle x-y,Nx-Ny\rangle}{\|x-y\|^{2}}>-\infty\Big\};$$

$$Lip=\Big\{N:E\to E^{*}\ \text{such that}\ \|N\|:=\sup_{\substack{x,y\in E\\ x\neq y}}\frac{\|Nx-Ny\|}{\|x-y\|}<+\infty\Big\};$$

$$M^{*}=\Big\{T:E^{*}\to E\ \text{such that}\ \mu_{T}:=\inf_{\substack{f,g\in E^{*}\\ f\neq g}}\frac{\langle f-g,Tf-Tg\rangle}{\|f-g\|^{2}}>-\infty\Big\};$$

$$Lip^{*}=\Big\{T:E^{*}\to E\ \text{such that}\ \|T\|:=\sup_{\substack{f,g\in E^{*}\\ f\neq g}}\frac{\|Tf-Tg\|}{\|f-g\|}<+\infty\Big\};$$

and

$$\widetilde{Lip}=\Big\{S:E\to E\ \text{such that}\ \|S\|:=\sup_{\substack{x,y\in E\\ x\neq y}}\frac{\|Sx-Sy\|}{\|x-y\|}<+\infty\Big\}.$$

The following assertions (which are also valid for $M^{*}$ and $Lip^{*}$) are true:

Proposition 3.1.

1) $Lip\subseteq M$.

2) For every $M,N\in M$ and $\alpha\ge 0$: $M+N,\ \alpha N\in M$, $\mu_{M+N}\ge\mu_{M}+\mu_{N}$ and $\mu_{\alpha N}=\alpha\mu_{N}$.

3) For every $N\in M$: N is monotone (respectively strictly monotone) iff $\mu_{N}\ge 0$ (respectively $\mu_{N}>0$).

4) For every $N\in Lip$: $\|N\|\ge|\mu_{N}|$, and $\|N\|=0$ iff N is constant.

5) For every $M,N\in Lip$ and $\alpha\in\mathbb{R}$: $\alpha N,\ N+M\in Lip$, $\|\alpha N\|=|\alpha|\,\|N\|$ and $\|N+M\|\le\|N\|+\|M\|$.

6) If $N\in M$ and N is linear, then $N\in Lip$ iff N is bounded; in this case $\|N\|$ coincides with the usual operator norm of N.

7) If $N\in Lip$ and $M\in Lip^{*}$, then $MN\in\widetilde{Lip}$ and $\|MN\|\le\|N\|\,\|M\|$.

The numbers $\|N\|$ and $\mu_{N}$ can be interpreted crudely as a "gain" and a "minimal slope" of the operator N, respectively.
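To illustrate these quantities (an added remark, not from the original text): on $E=\mathbb{R}$, the map $Nx=cx$ has $\mu_{N}=c$ and $\|N\|=|c|$; more generally, for a bounded linear N on a Hilbert space, $\|N\|$ is the usual operator norm and $\mu_{N}=\inf_{\|x\|=1}\langle x,Nx\rangle$. Thus $\mu_{N}>0$ expresses strong monotonicity, while $\|N\|<+\infty$ is a Lipschitz bound.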

Lemma 3.1. Let $N\in M$ with $\mu_{N}>0$; if N is hemicontinuous, then N is invertible, $N^{-1}\in Lip^{*}$, $\mu_{N^{-1}}\ge 0$ and $\|N^{-1}\|\le\mu_{N}^{-1}$. If in addition $N\in Lip$, then $\mu_{N^{-1}}\ge\mu_{N}\|N\|^{-2}$.

Proof. Since $\mu_{N}>0$, $\forall(x,y)\in E^{2}$: $\langle x-y,Nx-Ny\rangle\ge\mu_{N}\|x-y\|^{2}$ (*); as N is hemicontinuous (and, by (*), monotone and coercive), N is invertible. From (*) and because $|\langle x,f\rangle|\le\|x\|\,\|f\|$ for every $(x,f)\in E\times E^{*}$, we have, $\forall(x,y)\in E^{2}$,

$$\mu_{N}\|x-y\|^{2}\le\langle x-y,Nx-Ny\rangle\le\|x-y\|\,\|Nx-Ny\|,$$

so $\mu_{N}\|x-y\|\le\|Nx-Ny\|$; taking $x=N^{-1}f$, $y=N^{-1}g$, this gives, $\forall(f,g)\in(E^{*})^{2}$ with $f\neq g$,

$$\mu_{N}\|N^{-1}f-N^{-1}g\|\le\|f-g\|,\qquad\text{i.e.}\qquad\frac{\|N^{-1}f-N^{-1}g\|}{\|f-g\|}\le\mu_{N}^{-1},$$

which implies $N^{-1}\in Lip^{*}$ and $\|N^{-1}\|\le\mu_{N}^{-1}$. If $N\in Lip$, then $\|Nx-Ny\|^{2}\le\|N\|^{2}\|x-y\|^{2}$, $\forall(x,y)\in E^{2}$; returning to (*) we obtain

$$\langle x-y,Nx-Ny\rangle\ge\mu_{N}\|x-y\|^{2}\ge\frac{\mu_{N}}{\|N\|^{2}}\|Nx-Ny\|^{2}.$$

It follows that, $\forall(f,g)\in(E^{*})^{2}$ with $f\neq g$,

$$\frac{\langle f-g,N^{-1}f-N^{-1}g\rangle}{\|f-g\|^{2}}\ge\frac{\mu_{N}}{\|N\|^{2}},$$ which leads to $\mu_{N^{-1}}\ge\mu_{N}\|N\|^{-2}$.

Lemma 3.2. Let $B\in M^{*}$ be hemicontinuous and let $A\in Lip$ with $\mu_{A}>0$. If $\mu_{B}+\mu_{A}\|A\|^{-2}>0$, then $A^{-1}+B$ and $I+BA$ are invertible, $(I+BA)^{-1}\in\widetilde{Lip}$,

$$\|(I+BA)^{-1}\|\le\mu_{A}^{-1}\big(\mu_{B}+\mu_{A}\|A\|^{-2}\big)^{-1}.\qquad (5)$$

Proof. Since $A\in Lip$, $\forall(x,y)\in E^{2}$, $\|Ax-Ay\|\le\|A\|\,\|x-y\|$, so A is hemicontinuous, with $\mu_{A}>0$. By lemma 3.1, $A^{-1}$ exists, $A^{-1}\in Lip^{*}$, $\|A^{-1}\|\le\mu_{A}^{-1}$ and $\mu_{A^{-1}}\ge\mu_{A}\|A\|^{-2}$. Thus $A^{-1}+B$ is hemicontinuous with

$\mu_{A^{-1}+B}\ge\mu_{B}+\mu_{A}\|A\|^{-2}>0$; using again lemma 3.1 (in its version for maps from $E^{*}$ to E), $(A^{-1}+B)^{-1}$ exists, $(A^{-1}+B)^{-1}\in Lip$ and $\|(A^{-1}+B)^{-1}\|\le\mu_{A^{-1}+B}^{-1}\le(\mu_{B}+\mu_{A}\|A\|^{-2})^{-1}$. As $I+BA=(A^{-1}+B)A$, then $I+BA$ is invertible, $(I+BA)^{-1}=A^{-1}(A^{-1}+B)^{-1}$ and $\|(I+BA)^{-1}\|\le\|A^{-1}\|\,\|(A^{-1}+B)^{-1}\|\le\mu_{A}^{-1}(\mu_{B}+\mu_{A}\|A\|^{-2})^{-1}$.
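A scalar check of (5), added for illustration: take $E=E^{*}=\mathbb{R}$, $Ax=\alpha x$ with $\alpha>0$ and $Bf=\beta f$ with $\beta\ge 0$. Then $\mu_{A}=\|A\|=\alpha$, $\mu_{B}=\beta$, the hypothesis $\mu_{B}+\mu_{A}\|A\|^{-2}=\beta+\alpha^{-1}>0$ holds, and $(I+BA)^{-1}x=(1+\alpha\beta)^{-1}x$, so $\|(I+BA)^{-1}\|=(1+\alpha\beta)^{-1}$, while the right-hand side of (5) equals $\alpha^{-1}(\beta+\alpha^{-1})^{-1}=(1+\alpha\beta)^{-1}$; the bound is attained in this case.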

Lemma 3.3. Let $B\in M^{*}$ be linear, with $\mu_{B}>0$, and let $A\in M$ be hemicontinuous, with $\mu_{A}\le 0$. If $\mu_{B}+\mu_{A}\|B\|^{2}>0$, then $B^{-1}+A$ and $I+BA$ are invertible, $(I+BA)^{-1}\in\widetilde{Lip}$,

$$\|(I+BA)^{-1}\|\le\|B\|\big(\mu_{B}+\mu_{A}\|B\|^{2}\big)^{-1}.\qquad (6)$$

Proof. Since $B\in M^{*}$ is linear with $\mu_{B}>0$, B is bounded, $B\in Lip^{*}$ and $\|B\|=\|B^{*}\|$, where $B^{*}$ is the conjugate of B. Hence B is hemicontinuous; by lemma 3.1, $B^{-1}$ exists, $B^{-1}\in Lip$ and $\|B^{-1}\|\le\mu_{B}^{-1}$. The open mapping theorem ensures the continuity of $B^{-1}$, and hence its hemicontinuity. To continue, let $D=(I+BA)B=B+BAB$; since for any $f_{0},f\in E^{*}$ and $t_{n}\to 0$ we have $AB(f_{0}+t_{n}f)=A(Bf_{0}+t_{n}Bf)$ converging weakly to $ABf_{0}$, and taking into account the continuity of B, $BAB(f_{0}+t_{n}f)$ converges weakly to $BABf_{0}$, which gives the hemicontinuity of D. Moreover, for $f,g\in E^{*}$,

$$\langle f-g,Df-Dg\rangle=\langle f-g,B(f-g)\rangle+\langle f-g,BABf-BABg\rangle=\langle B(f-g),f-g\rangle+\langle Bf-Bg,ABf-ABg\rangle.$$ As

$\langle B(f-g),f-g\rangle\ge\mu_{B}\|f-g\|^{2}$, $\langle Bf-Bg,ABf-ABg\rangle\ge\mu_{A}\|B(f-g)\|^{2}$ and $\|B(f-g)\|\le\|B\|\,\|f-g\|$, so that (since $\mu_{A}\le 0$) $\mu_{A}\|B(f-g)\|^{2}\ge\mu_{A}\|B\|^{2}\|f-g\|^{2}$, hence

$\langle f-g,Df-Dg\rangle\ge(\mu_{B}+\mu_{A}\|B\|^{2})\|f-g\|^{2}$, so $\mu_{D}\ge\mu_{B}+\mu_{A}\|B\|^{2}>0$. Lemma 3.1 confirms that D is invertible, $D^{-1}\in Lip$ and $\|D^{-1}\|\le\mu_{D}^{-1}\le(\mu_{B}+\mu_{A}\|B\|^{2})^{-1}$. As B is invertible with $\|B^{-1}\|\le\mu_{B}^{-1}$, then $DB^{-1}=I+BA$ is invertible, $(I+BA)^{-1}=BD^{-1}$, so $\|(I+BA)^{-1}\|\le\|B\|\,\|D^{-1}\|$, which gives (6).
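The same scalar test, again added only as an illustration, applies to (6): with $Bf=\beta f$, $\beta>0$, and $Ax=\alpha x$, $\alpha\le 0$, one has $\mu_{B}=\|B\|=\beta$, $\mu_{A}=\alpha$, and the hypothesis requires $\beta+\alpha\beta^{2}>0$, i.e. $1+\alpha\beta>0$; then $\|(I+BA)^{-1}\|=(1+\alpha\beta)^{-1}$ while the right-hand side of (6) equals $\beta(\beta+\alpha\beta^{2})^{-1}=(1+\alpha\beta)^{-1}$, so equality holds again.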

Lemma 3.4. Let $A\in M$ be linear, with $\mu_{A}>0$, and let $B\in M^{*}$ be hemicontinuous, with $\mu_{B}\le 0$. If $\mu_{A}+\mu_{B}\|A\|^{2}>0$, then $A^{-1}+B$ and $I+BA$ are invertible, $(I+BA)^{-1}\in\widetilde{Lip}$,

$$\|(I+BA)^{-1}\|\le\|A\|\big(\mu_{A}+\mu_{B}\|A\|^{2}\big)^{-1}.\qquad (7)$$

Proof. Since $A\in M$ is linear with $\mu_{A}>0$, A is bounded, $A\in Lip$ and $\|A\|=\|A^{*}\|$, where $A^{*}$ is the conjugate of A. Hence A is hemicontinuous; by lemma 3.1, $A^{-1}$ exists, $A^{-1}\in Lip^{*}$ and $\|A^{-1}\|\le\mu_{A}^{-1}$. The open mapping theorem ensures the continuity of $A^{-1}$, and hence its hemicontinuity. To continue, let $C=A(I+BA)=A+ABA$; since for any $x_{0},x\in E$ and $t_{n}\to 0$ we have $BA(x_{0}+t_{n}x)=B(Ax_{0}+t_{n}Ax)$ converging weakly to $BAx_{0}$, and taking into account the continuity of A, $ABA(x_{0}+t_{n}x)$ converges weakly to $ABAx_{0}$, which gives the hemicontinuity of C. Moreover, for $x,y\in E$,

$$\langle x-y,Cx-Cy\rangle=\langle x-y,A(x-y)\rangle+\langle x-y,ABAx-ABAy\rangle=\langle A(x-y),x-y\rangle+\langle Ax-Ay,BAx-BAy\rangle.$$ Since, for any

$(x,f)\in E\times E^{*}$, $\langle f,x\rangle_{E^{*},E}=\langle x,f\rangle_{E,E^{*}}$, we have $\langle A(x-y),x-y\rangle=\langle x-y,A(x-y)\rangle\ge\mu_{A}\|x-y\|^{2}$; moreover $\langle Ax-Ay,BAx-BAy\rangle\ge\mu_{B}\|A(x-y)\|^{2}$ and $\|A(x-y)\|\le\|A\|\,\|x-y\|$, so that (since $\mu_{B}\le 0$) $\mu_{B}\|A(x-y)\|^{2}\ge\mu_{B}\|A\|^{2}\|x-y\|^{2}$; hence, $\forall(x,y)\in E^{2}$, $\langle x-y,Cx-Cy\rangle\ge(\mu_{A}+\mu_{B}\|A\|^{2})\|x-y\|^{2}$, so $\mu_{C}\ge\mu_{A}+\mu_{B}\|A\|^{2}>0$. Lemma 3.1 confirms that C is invertible, $C^{-1}\in Lip^{*}$ and $\|C^{-1}\|\le\mu_{C}^{-1}\le(\mu_{A}+\mu_{B}\|A\|^{2})^{-1}$. As A is invertible with $\|A^{-1}\|\le\mu_{A}^{-1}$, then $A^{-1}C=I+BA$ is invertible, $(I+BA)^{-1}=C^{-1}A$, so $\|(I+BA)^{-1}\|\le\|A\|\,\|C^{-1}\|$, which gives (7).

Lemma 3.5. Let A be linear, and let A and $N=I+BA$ be invertible. Then, for all $(x,a)\in E\times E^{*}$, the operator $M_{a}x=x+B(a+Ax)$ is invertible and

$$M_{a}^{-1}x=N^{-1}(x+A^{-1}a)-A^{-1}a.\qquad (8)$$

Proof. Indeed, $\forall x\in E$, we have

$$M_{a}^{-1}M_{a}x=N^{-1}\big(x+A^{-1}a+B(a+Ax)\big)-A^{-1}a=N^{-1}\big(x+A^{-1}a+BA(x+A^{-1}a)\big)-A^{-1}a=N^{-1}N(x+A^{-1}a)-A^{-1}a=x;$$

and

$$M_{a}M_{a}^{-1}x=M_{a}^{-1}x+B(a+AM_{a}^{-1}x)=N^{-1}(x+A^{-1}a)-A^{-1}a+BAN^{-1}(x+A^{-1}a)=NN^{-1}(x+A^{-1}a)-A^{-1}a=x.$$

Definition 3.4. We say that a normal FS $[A,B]$:

1) is Lipschitz continuous for the first input, if there are positive numbers $k_{11}$ and $k_{12}$ such that

$$\|e-e'\|\le k_{11}\|u-u'\|$$

and

$$\|f-f'\|\le k_{12}\|u-u'\|,$$

whenever $(u,v)\to(e,f)$, $(u',v)\to(e',f')$.

2) is Lipschitz continuous for both inputs, if there are positive numbers $k_{11}$, $k_{12}$, $k_{21}$ and $k_{22}$ such that:

$$\|e-e'\|\le k_{11}\|u-u'\|+k_{12}\|v-v'\|$$

and

$$\|f-f'\|\le k_{21}\|u-u'\|+k_{22}\|v-v'\|,$$

whenever $(u,v)\to(e,f)$, $(u',v')\to(e',f')$.

Now, let $[A,B]$ be a nonlinear FS. The main idea in this section is to linearize $[A,B]$ in a neighborhood of zero. We then consider a linear FS $[A_{0},B_{0}]$ and prove that if $(u,v)\to(e,f)$ and $(u,v)\to(e_{0},f_{0})$, where $(u,v)\in E\times E^{*}$ with $\|u\|,\|v\|\le r$ ($r>0$) and $(e,f)$, $(e_{0},f_{0})$ are the respective solutions of $[A,B]$ and $[A_{0},B_{0}]$ corresponding to the given $(u,v)\in E\times E^{*}$, then there exist positive real constants $k_{11}$, $k_{12}$, $k_{21}$ and $k_{22}$ such that $\|e-e_{0}\|\le k_{11}\|u\|+k_{12}\|v\|$ and $\|f-f_{0}\|\le k_{21}\|u\|+k_{22}\|v\|$.

The inequalities above are given by theorem 3.1. To obtain suitable estimates, in the sense that the solutions of the two systems become sufficiently close, it is assumed that one of the two operators of $[A,B]$ is linear; this is the subject of theorems 3.2 and 3.3.

Before establishing the first linearization result of this paper, let us denote by $B_{r}$ the closed ball of E and by $B_{r}^{*}$ the closed ball of $E^{*}$, both centered at zero and of radius $r>0$.

Theorem 3.1. Assume that:

1) $A\in Lip$, such that there exist a linear $A_{0}:E\to E^{*}$ and $a>0$ verifying $0<\mu_{A}-a$ and

$$\|(A-A_{0})x\|\le a\|x\|,\quad\forall x\in B_{\nu r}.\qquad (9)$$

2) $B\in Lip^{*}$, such that there exist a linear $B_{0}:E^{*}\to E$ and $b>0$ verifying

$$\|(B-B_{0})f\|\le b\|f\|,\quad\forall f\in B_{(1+\|A\|\nu)r}^{*},\qquad (10)$$

where $\nu=\mu_{A}^{-1}\big(\mu_{B}+\mu_{A}\|A\|^{-2}\big)^{-1}$.

3) $\mu_{B}-b+(\mu_{A}-a)\big(a+\|A\|\big)^{-2}>0$.

Then

a) $[A,B]$ and $[A_{0},B_{0}]$ are normal and Lipschitz continuous in the first input.

b) if $(u,v)\in B_{r}\times B_{r}^{*}$, and $(u,v)\to(e,f)$ for $[A,B]$; $(u,v)\to(e_{0},f_{0})$ for $[A_{0},B_{0}]$, we have

$$\|e-e_{0}\|\le k_{11}\|u\|+k_{12}\|v\|\qquad (11)$$

and

$$\|f-f_{0}\|\le k_{21}\|u\|+k_{22}\|v\|\qquad (12)$$

where $k_{11}=\kappa\nu\big(b\|A\|+a\|B_{0}\|\big)$; $k_{12}=\kappa b$; $k_{21}=a\nu+\kappa\nu\|A_{0}\|\big(b\|A\|+a\|B_{0}\|\big)$; $k_{22}=\kappa\|A_{0}\|b$, with $\kappa=(\mu_{A}-a)^{-1}\big(\mu_{B}-b+(\mu_{A}-a)\|A_{0}\|^{-2}\big)^{-1}$.

Proof. We begin by demonstrating (a). The linearity of $A_{0}$ and (9) imply that $A(0)=0$ and, for every $x\in B_{\nu r}$, $\|A_{0}x\|\le\|(A-A_{0})x\|+\|Ax\|\le(a+\|A\|)\|x\|$, hence $A_{0}$ is bounded and $\|A_{0}\|\le a+\|A\|$. For $x\in B_{\nu r}$,

$$\langle x,A_{0}x\rangle=\langle x,Ax\rangle-\langle x,(A-A_{0})x\rangle;\qquad\langle x,Ax\rangle\ge\mu_{A}\|x\|^{2},$$

$$\langle x,(A-A_{0})x\rangle\le\|(A-A_{0})x\|\,\|x\|\le a\|x\|^{2},$$ so that, by linearity, $\forall x\in E$, $\langle x,A_{0}x\rangle\ge(\mu_{A}-a)\|x\|^{2}$. Therefore $\mu_{A_{0}}\ge\mu_{A}-a>0$ and, returning to lemma 3.1, $A_{0}$ is invertible. By the same arguments, and since for any $(x,f)\in E\times E^{*}$, $\langle f,x\rangle_{E^{*},E}=\langle x,f\rangle_{E,E^{*}}$, we have that $B_{0}$ is bounded, $\|B_{0}\|\le b+\|B\|$ and $\mu_{B_{0}}\ge\mu_{B}-b$. Now, let us set, for $(x,z)\in E\times E^{*}$, $M_{z}x=x+B(z+Ax)$; $B_{z}x=z+Ax$ and $M_{z}^{0}x=x+B_{0}(z+A_{0}x)$; $B_{z}^{0}x=z+A_{0}x$. It is clear that $M_{z}=I+BB_{z}$, $M_{z}^{0}=I+B_{0}B_{z}^{0}$, $\mu_{B_{z}}=\mu_{A}>0$, $\mu_{B_{z}^{0}}=\mu_{A_{0}}>0$, $\|B_{z}\|=\|A\|$ and $\|B_{z}^{0}\|=\|A_{0}\|$; these, together with hypothesis (3), give

$$\mu_{B}+\mu_{B_{z}}\|B_{z}\|^{-2}=\mu_{B}+\mu_{A}\|A\|^{-2}\ge\mu_{B}-b+(\mu_{A}-a)(a+\|A\|)^{-2}>0.$$ By lemma

3.2, $M_{z}$ is invertible, $M_{z}^{-1}\in\widetilde{Lip}$ and $\|M_{z}^{-1}\|\le\mu_{A}^{-1}(\mu_{B}+\mu_{A}\|A\|^{-2})^{-1}=\nu$. Then corollary 2.1 implies that the FS $[A,B]$ is normal. Since, see (3), for $(u,v)\to(e,f)$, $(u',v)\to(e',f')$ we have $e-e'=M_{v}^{-1}u-M_{v}^{-1}u'$ and $f-f'=AM_{v}^{-1}u-AM_{v}^{-1}u'$, then

$$\|e-e'\|=\|M_{v}^{-1}u-M_{v}^{-1}u'\|\le\|M_{v}^{-1}\|\,\|u-u'\|\le\nu\|u-u'\|,$$

and

$$\|f-f'\|=\|AM_{v}^{-1}u-AM_{v}^{-1}u'\|\le\|A\|\,\|M_{v}^{-1}u-M_{v}^{-1}u'\|\le\|A\|\nu\|u-u'\|,$$

so that $k_{11}=\nu$ and $k_{12}=\|A\|\nu$ in definition 3.4 (1); i.e. $[A,B]$ is Lipschitz continuous for the first input. Using the same arguments for $M_{z}^{0}$, we obtain

$$\mu_{B_{0}}+\mu_{B_{z}^{0}}\|B_{z}^{0}\|^{-2}=\mu_{B_{0}}+\mu_{A_{0}}\|A_{0}\|^{-2}\ge\mu_{B}-b+(\mu_{A}-a)(a+\|A\|)^{-2}>0.$$ By lemma 3.2, $M_{z}^{0}$ is invertible, $(M_{z}^{0})^{-1}\in\widetilde{Lip}$ and

$$\|(M_{z}^{0})^{-1}\|\le\mu_{A_{0}}^{-1}\big(\mu_{B_{0}}+\mu_{A_{0}}\|A_{0}\|^{-2}\big)^{-1}\le(\mu_{A}-a)^{-1}\big(\mu_{B}-b+(\mu_{A}-a)\|A_{0}\|^{-2}\big)^{-1}=\kappa.$$

Then corollary 2.1 with (3) implies that the linear FS $[A_{0},B_{0}]$ is normal, and for $(u,v)\to(e_{0},f_{0})$, $(u',v)\to(e_{0}',f_{0}')$ we have $e_{0}-e_{0}'=(M_{v}^{0})^{-1}u-(M_{v}^{0})^{-1}u'$ and $f_{0}-f_{0}'=A_{0}(M_{v}^{0})^{-1}u-A_{0}(M_{v}^{0})^{-1}u'$, then

$$\|e_{0}-e_{0}'\|=\|(M_{v}^{0})^{-1}u-(M_{v}^{0})^{-1}u'\|\le\|(M_{v}^{0})^{-1}\|\,\|u-u'\|\le\kappa\|u-u'\|,$$

and

$$\|f_{0}-f_{0}'\|=\|A_{0}(M_{v}^{0})^{-1}u-A_{0}(M_{v}^{0})^{-1}u'\|\le\|A_{0}\|\,\|(M_{v}^{0})^{-1}u-(M_{v}^{0})^{-1}u'\|\le\|A_{0}\|\kappa\|u-u'\|,$$

so that $k_{11}=\kappa$ and $k_{12}=\|A_{0}\|\kappa$ in definition 3.4 (1); i.e. $[A_{0},B_{0}]$ is Lipschitz continuous for the first input.

To demonstrate (b), let $N=I+B_{0}A_{0}$; since $A_{0}\in Lip$ is linear with $\mu_{A_{0}}>0$, $B_{0}\in Lip^{*}$ is linear and $\mu_{B_{0}}+\mu_{A_{0}}\|A_{0}\|^{-2}>0$, lemma 3.2 shows that $N^{-1}$

exists, $N^{-1}\in\widetilde{Lip}$ and $\|N^{-1}\|\le\mu_{A_{0}}^{-1}(\mu_{B_{0}}+\mu_{A_{0}}\|A_{0}\|^{-2})^{-1}\le\kappa$. Let now $(x,z)\in B_{r}\times B_{r}^{*}$ and $w=M_{z}^{-1}x$; then $\|w\|=\|M_{z}^{-1}x\|\le\nu\|x\|\le\nu r$, which implies that $w\in B_{\nu r}$. By lemma 3.5 and (8), we have $M_{z}^{-1}x-(M_{z}^{0})^{-1}x=M_{z}^{-1}x-N^{-1}(x+A_{0}^{-1}z)+A_{0}^{-1}z$, then

$$M_{z}^{-1}x-(M_{z}^{0})^{-1}x=M_{z}^{-1}x-N^{-1}x-(N^{-1}-I)A_{0}^{-1}z=N^{-1}(Nw-M_{z}w)+N^{-1}(N-I)A_{0}^{-1}z=N^{-1}(Nw-M_{z}w)+N^{-1}B_{0}z$$

$$=N^{-1}\big[B_{0}A_{0}w+B_{0}z-B(z+Aw)\big]=N^{-1}\big[(B_{0}-B)(z+Aw)+B_{0}(z+Aw)-B_{0}(z+A_{0}w)\big]=N^{-1}\big[(B_{0}-B)(z+Aw)+B_{0}(A-A_{0})w\big].$$

As $\|z+Aw\|\le\|z\|+\|A\|\,\|w\|\le r+\|A\|\nu r=(1+\|A\|\nu)r$, then $z+Aw\in B_{(1+\|A\|\nu)r}^{*}$, hence, using (9) and (10),

$$\|M_{z}^{-1}x-(M_{z}^{0})^{-1}x\|\le\|N^{-1}\|\big[\|(B-B_{0})(z+Aw)\|+\|B_{0}\|\,\|(A-A_{0})w\|\big]\le\|N^{-1}\|\big[b\|z+Aw\|+\|B_{0}\|a\|w\|\big]\le\kappa\big[b\|z\|+(b\|A\|+a\|B_{0}\|)\nu\|x\|\big]=k_{11}\|x\|+k_{12}\|z\|.\qquad (13)$$

Now, if $(u,v)\in B_{r}\times B_{r}^{*}$, and $(u,v)\to(e,f)$ for $[A,B]$; $(u,v)\to(e_{0},f_{0})$ for $[A_{0},B_{0}]$, by (3) we have $e-e_{0}=M_{v}^{-1}u-(M_{v}^{0})^{-1}u$ and $f-f_{0}=AM_{v}^{-1}u-A_{0}(M_{v}^{0})^{-1}u$; to get (11), just replace x by u and z by v in (13). Finally, to obtain (12) and complete the demonstration of (b), it suffices to notice that

$$\|f-f_{0}\|=\|AM_{v}^{-1}u-A_{0}M_{v}^{-1}u+A_{0}M_{v}^{-1}u-A_{0}(M_{v}^{0})^{-1}u\|\le\|(A-A_{0})M_{v}^{-1}u\|+\|A_{0}\|\,\|M_{v}^{-1}u-(M_{v}^{0})^{-1}u\|\le a\|M_{v}^{-1}u\|+\|A_{0}\|\big(k_{11}\|u\|+k_{12}\|v\|\big)\le a\nu\|u\|+\|A_{0}\|\big(k_{11}\|u\|+k_{12}\|v\|\big)=(a\nu+\|A_{0}\|k_{11})\|u\|+\|A_{0}\|k_{12}\|v\|=k_{21}\|u\|+k_{22}\|v\|.$$
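An observation added here for illustration (it is not stated in the original argument or in [4]): hypothesis (9) is typically produced by a Taylor-type behaviour of A at the origin. If $Ax=A_{0}x+R(x)$ with $A_{0}$ linear and $\|R(x)\|\le C\|x\|^{2}$ for $\|x\|\le\nu r$, then $\|(A-A_{0})x\|\le C\nu r\|x\|$ on $B_{\nu r}$, so (9) holds with $a=C\nu r$; similarly, (10) holds with b proportional to r when $B-B_{0}$ vanishes at least quadratically at zero. Since $k_{11}$, $k_{12}$, $k_{21}$, $k_{22}$ all tend to $0$ with a and b (the remaining quantities $\mu_{A}$, $\mu_{B}$, $\|A\|$, $\|A_{0}\|$, $\|B_{0}\|$ staying bounded), the estimates (11) and (12) express precisely the sense in which the linear system $[A_{0},B_{0}]$ approximates $[A,B]$ in a neighborhood of zero.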

The estimates (11) and (12) in theorem 3.1 can be improved if one of the operators of the FS $[A,B]$ is linear. We start with:

Theorem 3.2. Assume that:

1) $B\in M^{*}$ with $\mu_{B}>0$ is linear, and $A\in Lip$ with $\mu_{A}\le 0$ is such that there exist a linear $A_{0}:E\to E^{*}$ and $a>0$ verifying $\mu_{A_{0}}\le 0$ and

$$\|(A-A_{0})x\|\le a\|x\|,\quad\forall x\in B_{\omega(1+\|B\|)r},\qquad (14)$$

where $\omega=\|B\|\big(\mu_{B}+\mu_{A}\|B\|^{2}\big)^{-1}$.

2) $\mu_{B}+(\mu_{A}-a)\|B\|^{2}>0$.

Then

a) $[A,B]$ and $[A_{0},B]$ are normal and Lipschitz continuous in both inputs.

b) if $(u,v)\in B_{r}\times B_{r}^{*}$, and $(u,v)\to(e,f)$ for $[A,B]$; $(u,v)\to(e_{0},f_{0})$ for $[A_{0},B]$, we have

$$\|e-e_{0}\|\le\lambda\|u\|+\lambda\|B\|\,\|v\|\qquad (15)$$

and

$$\|f-f_{0}\|\le\big(a\omega+\|A_{0}\|\lambda\big)\big(\|u\|+\|B\|\,\|v\|\big)\qquad (16)$$

where $\lambda=a\|B\|^{3}\big(\mu_{B}+\mu_{A}\|B\|^{2}\big)^{-1}\big(\mu_{B}+\mu_{A_{0}}\|B\|^{2}\big)^{-1}$.

Proof. Let $N=I+BA$ and $N_{0}=I+BA_{0}$. As in the proof of theorem 3.1 (a), we have $A(0)=0$, therefore $N(0)=0$. Also, $A_{0}$ is bounded, $\|A_{0}\|\le a+\|A\|$ and $\mu_{A}-a\le\mu_{A_{0}}\le 0$, so $A_{0}\in M$. Since, by hypothesis (2), $\mu_{B}+\mu_{A}\|B\|^{2}>a\|B\|^{2}\ge 0$, we have, see lemma 3.3, that N is invertible, $N^{-1}\in\widetilde{Lip}$ and $\|N^{-1}\|\le\omega$. Since B is linear, then by corollary 2.2, $[A,B]$ is normal. By using (4), and because $A\in Lip$, $B\in Lip^{*}$ and $N^{-1}\in\widetilde{Lip}$, $[A,B]$ is Lipschitz continuous in both inputs. On the other hand, by hypothesis (2), $\mu_{B}+\mu_{A_{0}}\|B\|^{2}\ge\mu_{B}+(\mu_{A}-a)\|B\|^{2}>0$, so lemma 3.3 implies that $N_{0}$ is invertible, $N_{0}^{-1}\in\widetilde{Lip}$ and $\|N_{0}^{-1}\|\le\|B\|(\mu_{B}+\mu_{A_{0}}\|B\|^{2})^{-1}=k$. Again by corollary 2.2, $[A_{0},B]$ is normal and Lipschitz continuous in both inputs. To demonstrate (b), let $x\in B_{(1+\|B\|)r}$; then $\|N^{-1}x\|=\|N^{-1}x-N^{-1}(0)\|\le\omega\|x\|\le\omega(1+\|B\|)r$, hence $N^{-1}x\in B_{\omega(1+\|B\|)r}$. Using (14), we have

$$\|N^{-1}x-N_{0}^{-1}x\|=\|N_{0}^{-1}(N_{0}-N)N^{-1}x\|=\|N_{0}^{-1}B(A_{0}-A)N^{-1}x\|\le\|N_{0}^{-1}\|\,\|B\|\,\|(A_{0}-A)N^{-1}x\|\le k\|B\|a\|N^{-1}x\|\le ak\|B\|\omega\|x\|=\lambda\|x\|.\qquad (17)$$

Now, if $(u,v)\in B_{r}\times B_{r}^{*}$, and $(u,v)\to(e,f)$ for $[A,B]$; $(u,v)\to(e_{0},f_{0})$ for $[A_{0},B]$, we have $\|u-Bv\|\le\|u\|+\|B\|\,\|v\|\le(1+\|B\|)r$, so $s=u-Bv\in B_{(1+\|B\|)r}$ and, by (4) in corollary 2.2 and (17), we get

$$\|e-e_{0}\|=\|(N^{-1}-N_{0}^{-1})s\|\le\lambda\|s\|\le\lambda\|u\|+\lambda\|B\|\,\|v\|$$

and

$$\|f-f_{0}\|=\|AN^{-1}s-A_{0}N_{0}^{-1}s\|=\|AN^{-1}s-A_{0}N^{-1}s+A_{0}N^{-1}s-A_{0}N_{0}^{-1}s\|\le\|(A-A_{0})N^{-1}s\|+\|A_{0}\|\,\|(N^{-1}-N_{0}^{-1})s\|\le a\omega\|u-Bv\|+\|A_{0}\|\big(\lambda\|u\|+\lambda\|B\|\,\|v\|\big)\le\big(a\omega+\lambda\|A_{0}\|\big)\|u\|+\big(a\omega+\lambda\|A_{0}\|\big)\|B\|\,\|v\|.$$

The last linearization result in this work assumes that the operator A in the FS $[A,B]$ is linear.

Theorem 3.3. Assume that

1) $A\in M$ with $\mu_{A}>0$ is linear, and $B\in Lip^{*}$ with $\mu_{B}\le 0$ is such that there exist a linear $B_{0}:E^{*}\to E$ and $b>0$ verifying $\mu_{B_{0}}\le 0$ and

$$\|(B-B_{0})f\|\le b\|f\|,\quad\forall f\in B_{\rho\|A\|(1+\mu_{A}^{-1})r}^{*},\qquad (18)$$

where $\rho=\|A\|\big(\mu_{A}+\mu_{B}\|A\|^{2}\big)^{-1}$.

2) $\mu_{A}+(\mu_{B}-b)\|A\|^{2}>0$.

Then

a) $[A,B]$ and $[A,B_{0}]$ are normal and Lipschitz continuous in both inputs.

b) if $(u,v)\in B_{r}\times B_{r}^{*}$, and $(u,v)\to(e,f)$ for $[A,B]$; $(u,v)\to(e_{0},f_{0})$ for $[A,B_{0}]$, we have

$$\|e-e_{0}\|\le\gamma\|u\|+\gamma\mu_{A}^{-1}\|v\|\qquad (19)$$

and

$$\|f-f_{0}\|\le\gamma\|A\|\,\|u\|+\gamma\|A\|\mu_{A}^{-1}\|v\|\qquad (20)$$

where $\gamma=b\|A\|^{3}\big(\mu_{A}+\mu_{B}\|A\|^{2}\big)^{-1}\big(\mu_{A}+\mu_{B_{0}}\|A\|^{2}\big)^{-1}$.

Proof. By (18), we have $B(0)=0$ and $\|B_{0}\|\le b+\|B\|$; furthermore $B_{0}$ is bounded and $\mu_{B}-b\le\mu_{B_{0}}\le 0$. Since $A\in M$ is linear with $\mu_{A}>0$, A is bounded, hence hemicontinuous; by lemma 3.1, A is invertible, $A^{-1}\in Lip^{*}$, $\mu_{A^{-1}}\ge 0$ and $\|A^{-1}\|\le\mu_{A}^{-1}$. The operators $N=I+BA$ and $N_{0}=I+B_{0}A$ are such that $\mu_{A}+\mu_{B}\|A\|^{2}\ge\mu_{A}+(\mu_{B}-b)\|A\|^{2}>0$ (see hypothesis (2)); then lemma 3.4 with (7) implies that N is invertible, $N^{-1}\in\widetilde{Lip}$ and $\|N^{-1}\|\le\|A\|(\mu_{A}+\mu_{B}\|A\|^{2})^{-1}=\rho$. Likewise,

$\mu_{A}+\mu_{B_{0}}\|A\|^{2}\ge\mu_{A}+(\mu_{B}-b)\|A\|^{2}>0$ (see hypothesis (2)); then lemma 3.4 with (7) implies

that $N_{0}$ is invertible, $N_{0}^{-1}\in\widetilde{Lip}$ and $\|N_{0}^{-1}\|\le\|A\|(\mu_{A}+\mu_{B_{0}}\|A\|^{2})^{-1}=\eta$. Now, for $(x,z)\in E\times E^{*}$, let $M_{z}x=x+B(z+Ax)$ and $M_{z}^{0}x=x+B_{0}(z+Ax)$. Since $N^{-1}$ and $N_{0}^{-1}$ exist and A is linear and invertible, lemma 3.5 implies that $M_{z}^{-1}$ and $(M_{z}^{0})^{-1}$ exist. Moreover, by (8), we have $M_{z}^{-1}x=N^{-1}(x+A^{-1}z)-A^{-1}z$ and $(M_{z}^{0})^{-1}x=N_{0}^{-1}(x+A^{-1}z)-A^{-1}z$. It is easy to see that $M_{z}^{-1},(M_{z}^{0})^{-1}\in\widetilde{Lip}$, $\|M_{z}^{-1}\|=\|N^{-1}\|\le\rho$ and $\|(M_{z}^{0})^{-1}\|=\|N_{0}^{-1}\|\le\eta$. Returning to corollary 2.1, $[A,B]$ and $[A,B_{0}]$ are normal. Now, assume that, for $[A,B]$, $(u,v)\to(e,f)$ and $(u',v')\to(e',f')$; it follows, by (3), that

$$\|e-e'\|=\|M_{v}^{-1}u-M_{v'}^{-1}u'\|=\|N^{-1}(u+A^{-1}v)-A^{-1}v-N^{-1}(u'+A^{-1}v')+A^{-1}v'\|\le\|N^{-1}(u+A^{-1}v)-N^{-1}(u'+A^{-1}v')\|+\|A^{-1}(v-v')\|\le\rho\big(\|u-u'\|+\|A^{-1}\|\,\|v-v'\|\big)+\|A^{-1}\|\,\|v-v'\|\le\rho\|u-u'\|+(\rho+1)\|A^{-1}\|\,\|v-v'\|,$$

and

$$\|f-f'\|=\|v-v'+A(M_{v}^{-1}u-M_{v'}^{-1}u')\|\le\|v-v'\|+\|A\|\,\|M_{v}^{-1}u-M_{v'}^{-1}u'\|\le\|v-v'\|+\|A\|\big(\rho\|u-u'\|+(\rho+1)\|A^{-1}\|\,\|v-v'\|\big)\le\rho\|A\|\,\|u-u'\|+\big(1+(\rho+1)\|A\|\,\|A^{-1}\|\big)\|v-v'\|.$$

So $[A,B]$ is Lipschitz continuous in both inputs. We prove in the same way that $[A,B_{0}]$ is Lipschitz continuous in both inputs; the proof of a) is then complete.

Now, if $(u,v)\in B_{r}\times B_{r}^{*}$, and $(u,v)\to(e,f)$ for $[A,B]$; $(u,v)\to(e_{0},f_{0})$ for $[A,B_{0}]$, let $y=u+A^{-1}v$; then $\|y\|\le\|u\|+\mu_{A}^{-1}\|v\|$ and

$$\|AN^{-1}y\|\le\|A\|\,\|N^{-1}y\|\le\|A\|\rho\|u+A^{-1}v\|\le\|A\|\rho(1+\|A^{-1}\|)r\le\|A\|\rho(1+\mu_{A}^{-1})r,$$

so $AN^{-1}y\in B_{\rho\|A\|(1+\mu_{A}^{-1})r}^{*}$. Using (3) and (18), we have

$$\|e-e_{0}\|=\|M_{v}^{-1}u-(M_{v}^{0})^{-1}u\|=\|(N^{-1}-N_{0}^{-1})y\|=\|N_{0}^{-1}(N_{0}-N)N^{-1}y\|=\|N_{0}^{-1}(B_{0}-B)AN^{-1}y\|\le\|N_{0}^{-1}\|\,\|(B_{0}-B)AN^{-1}y\|\le\eta b\|AN^{-1}y\|\le\eta b\|A\|\rho\|y\|\le\eta b\|A\|\rho\big(\|u\|+\mu_{A}^{-1}\|v\|\big)=\gamma\|u\|+\gamma\mu_{A}^{-1}\|v\|,$$

therefore (19) is checked. Finally,

$$\|f-f_{0}\|=\|AM_{v}^{-1}u-A(M_{v}^{0})^{-1}u\|\le\|A\|\,\|M_{v}^{-1}u-(M_{v}^{0})^{-1}u\|\le\gamma\|A\|\,\|u\|+\gamma\|A\|\mu_{A}^{-1}\|v\|,$$

hence (20) is established and the proof is finished.

Example (Reference [4]). Let $n\in\mathbb{N}$ and $E=L_{2}^{n}(\mathbb{R}_{+})$, where $L_{2}(\mathbb{R}_{+})$ is the Lebesgue space; equipped with the natural inner product, E is a Hilbert space. Let D be a real $n\times n$ matrix, denote by $S(0,1)=\{\xi\in\mathbb{R}^{n};\ \|\xi\|=1\}$ and $d=\inf_{\xi\in S(0,1)}\xi^{T}D\xi$, where $\xi^{T}$ is the transpose of $\xi$. Let $K(t)=(k_{i,j}(t))$ be an $n\times n$ matrix with $k_{i,j}(t)\in L_{1}(\mathbb{R}_{+})\cap L_{2}(\mathbb{R}_{+})$ (extended by 0 for $t<0$), and let $\hat{K}(iw)$ be the Fourier transform of $K(t)$. Denote

$$k=\frac{1}{2}\inf_{w}\inf_{\xi\in S(0,1)}\xi^{T}\big(\hat{K}(iw)+\overline{\hat{K}(iw)^{T}}\big)\xi\qquad\text{and}\qquad\kappa=\sup_{w}\Lambda\big(\hat{K}(iw)\big),$$ where $\Lambda(M)$

denotes the square root of the largest eigenvalue of the matrix $\overline{M}^{T}M$, $\overline{M}$ being the complex conjugate matrix of M (note that $-\infty<k\le 0$ and $0\le\kappa<+\infty$). Furthermore, let $\psi:\mathbb{R}^{n}\to\mathbb{R}^{n}$ be such that there exists $\alpha>0$ with $\|\psi(\xi)-\psi(\xi')\|\le\alpha\|\xi-\xi'\|$ for every $\xi,\xi'\in\mathbb{R}^{n}$, and $\psi(0)=0$ (*),

and $\inf_{\xi\neq\xi'}\dfrac{(\psi(\xi)-\psi(\xi'))^{T}(\xi-\xi')}{\|\xi-\xi'\|^{2}}=a_{2}\le 0$. Now define operators A and B as follows: for any $x\in E$, $(Ax)(t)=Dx(t)+\int_{0}^{t}K(t-\tau)x(\tau)\,d\tau$, $t\ge 0$, and $(Bx)(t)=\psi(x(t))$. Moreover, let $a_{0}=\inf_{\xi\in S(0,1)}\xi^{T}F\xi\le 0$, where F is a

constant $n\times n$ matrix; suppose that there exists $\beta>0$ such that $|\psi(\xi)-F\xi|\le\beta|\xi|$, $\forall\xi\in\mathbb{R}^{n}$ (**), and define $B_{0}:E\to E$ by $(B_{0}x)(t)=Fx(t)$, $t\ge 0$. Clearly, A is linear and bounded; using Parseval's equality and the number k, we have $A\in M$, $\mu_{A}\ge d+k>0$. Also, it is known [12] that $\|A\|\le\|D\|+\kappa$. On the other hand, (*) shows that B is continuous; it is also easy to see that $B\in M^{*}$, $\mu_{B}=a_{2}\le 0$. Thus, if

$d+k+(a_{2}-\beta)(\|D\|+\kappa)^{2}>0$, we have $\mu_{A}+(\mu_{B}-\beta)\|A\|^{2}>0$. Moreover, $\mu_{B_{0}}=a_{0}\le 0$ and $\|(B-B_{0})x\|\le\beta\|x\|$, $\forall x\in E$, by virtue of (**); then, by

theorem 3.3, $[A,B]$ and $[A,B_{0}]$ are normal and Lipschitz continuous in both inputs, with

$$\|e-e_{0}\|\le\delta\|u\|+\delta(d+k)^{-1}\|v\|$$

and

$$\|f-f_{0}\|\le\delta(\|D\|+\kappa)\|u\|+\delta(\|D\|+\kappa)(d+k)^{-1}\|v\|$$

whenever $(u,v)\in B_{r}\times B_{r}$, and $(u,v)\to(e,f)$ for $[A,B]$; $(u,v)\to(e_{0},f_{0})$ for $[A,B_{0}]$, with $\delta=\beta(\|D\|+\kappa)^{3}\big[d+k+a_{2}(\|D\|+\kappa)^{2}\big]^{-1}\big[d+k+a_{0}(\|D\|+\kappa)^{2}\big]^{-1}$.

4. Conclusion

The aim of this work was to extend the results obtained in [4] [5] concerning the normality and Lipschitz continuity of a nonlinear feedback system described by maximal monotone operators defined on real reflexive Banach spaces. In addition, results on the approximation of the solutions of the feedback system, assumed to be nonlinear, by the solutions of a linear one were established. These types of systems find their uses in several fields, such as control theory, network theory, and the solution of the Hammerstein equation. The techniques used are based on the surjectivity theorem for maximal monotone and hemicontinuous operators defined on real reflexive Banach spaces [14].

Conflicts of Interest

The author declares no conflicts of interest.

References

[1] Dolezal, V. (1990) Estimating the Difference of Operators Inverses and Sensitivity of Systems. Nonlinear Analysis: Theory, Methods & Applications, 15, 921-930.
https://doi.org/10.1016/0362-546X(90)90075-R
[2] Dolezal, V. (1991) Robust Stability and Sensitivity of Input-Output Systems over Extended Spaces Part 1, Robust Stability. Circuits, Systems and Signal Processing, 10, 361-389. https://doi.org/10.1007/BF01187551
[3] Dolezal, V. (1991) Robust Stability and Sensitivity of Input-Output Systems over Extended Spaces Part 2. Circuits, Systems and Signal Processing, 10, 443-454. https://doi.org/10.1007/BF01194882
[4] Dolezal, V. (1979) Feedback Systems Described by Monotone Operators. SIAM Journal on Control and Optimization, 17, 339-364.
https://doi.org/10.1137/0317027
[5] Messaoudi, K. (2020) Feedback Systems on Extended Hilbert Space-Normality and Linearization. Journal of Mathematics Research, 12, 28-44.
https://doi.org/10.5539/jmr.v12n2p28
[6] Zames, G. (1963) Functional Analysis Applied to Nonlinear Feedback Systems. IEEE Transactions on Communication Technology, 10, 392-404.
https://doi.org/10.1109/TCT.1963.1082162
[7] Brezis, H. (1983) Analyse Fonctionnelle, Théorie et Application. Masson, Paris.
[8] Dolezal, V. (1980) An Approximation Theorem for Hammerstein-Type Equations and Applications. SIAM Journal on Mathematical Analysis, 11, 392-399.
https://doi.org/10.1137/0511036
[9] Dolezal, V. (1998) Some Results on the Invertibility of Nonlinear Operators. Circuits, Systems and Signal Processing, 17, 683-690.
https://doi.org/10.1007/BF01206568
[10] Dolezal, V. (1999) The Invertibility of Operators and Contraction Mapping. Circuits, Systems and Signal Processing, 18, 183-187.
https://doi.org/10.1007/BF01206682
[11] Dolezal, V. (2003) Approximate Inverses of Operators. Circuits, Systems and Signal Processing, 22, 69-75.
https://doi.org/10.1007/s00034-004-7014-4
[12] Sandberg, I.W. (1968) On the L2-Boundedness of Solutions of Nonlinear Functional Equations. The Bell System Technical Journal, 43, 1601-1608.
[13] Browder, F.E. (1968) Nonlinear Maximal Monotone Operators in Banach Space. Mathematische Annalen, 175, 89-113.
https://doi.org/10.1007/BF01418765
[14] Brezis, H. (1968) Equations et inéquations non linéaires dans les espaces vectoriels en dualité. Annales de l’institut Fourier Grenoble, 18, 115-175.
https://doi.org/10.5802/aif.280
[15] Alves, M.M. (2016) Maximal Monotone Operators in General Banach Spaces. Maicon Marques Alve UFSC, September 16.
[16] Phelps, R.R. (1993) Lectures in Maximal Monotone Operators. 2nd Summer School on Banach Spaces, Related Areas and Applications, Prague/Paseky Summer School, Czech Republic, 15-28 August 1993, 1-30.
