On Nonlinear Optimal Control Problems under State Constraint

Abstract

We study some nonlinear optimal control problems under state constraint. We construct extremal flows by differential-algebraic equations to solve an optimal control problem subject to a mixed control-state constraint. We then present an approximation approach to the state-constrained optimal control problem.

Share and Cite:

Zhu, J.H. (2025) On Nonlinear Optimal Control Problems under State Constraint. Open Access Library Journal, 12, 1-11. doi: 10.4236/oalib.1114454.

1. Introduction

In general, it is hard to obtain an analytic solution for a nonlinear optimal control problem under state constraint. To solve a constrained optimal control problem, it is rather common to use a direct discretization approach to approximate the exact solution of the problem [1] [2]. Mathematically, direct discretization methods should be supported by an investigation of the convergence of the solutions of the discretized problems to the solution of the continuous problem. One usually expects a guaranteed error bound between a numerical value and the optimal objective value of the original problem. For many engineering applications the direct discretization methods can handle constrained optimal control problems efficiently, but their theoretical foundation, in particular convergence results, still calls for further research [2]. The purpose of this paper is to provide a convergence result for a nonlinear optimal control problem under state constraint.

In this paper, following traditional optimal control theory, an admissible control is measurable and bounded on the interval $[0,T]$ and is such that the ordinary differential equation in the control problem has a unique solution.

We consider three nonlinear optimal control problems under state constraint as follows.

$$\min_{u}\ J(u)=P(x(T))+\int_0^T \frac12 u^T(t)u(t)\,dt \quad \text{s.t.}\quad \dot x(t)=f(x(t))+g(x(t))u(t),\ x(0)=a,\ Q(x(t))\le 0,\ t\in[0,T], \tag{1.1}$$

where the cost function $P(x)$ is continuously differentiable on $\mathbb{R}^n$ and $Q(x)$ is a convex function on $\mathbb{R}^n$. For this problem the matrix functions $f(x),g(x)$ on $\mathbb{R}^n$ are smooth, and the vector $a\in\mathbb{R}^n$ satisfying $Q(a)<0$ is given in the control system in (1.1).

$$\min_{u}\ J(u)=P(x(T))+\int_0^T \frac12 u^T(t)u(t)\,dt \quad \text{s.t.}\quad \dot x(t)=f(x(t))+g(x(t))u(t),\ x(0)=a,\ Q(x(t))< 0,\ t\in[0,T], \tag{1.2}$$

where the cost function $P(x)$ is continuously differentiable on $\mathbb{R}^n$ and $Q(x)$ is a convex function on $\mathbb{R}^n$. For this problem the matrix functions $f(x),g(x)$ on $\mathbb{R}^n$ are smooth, and the vector $a\in\mathbb{R}^n$ satisfying $Q(a)<0$ is given in the control system in (1.2).

$$\min_{u}\ J(u)=P(x(T))+\int_0^T \frac12 u^T(t)u(t)\,dt \quad \text{s.t.}\quad \dot x(t)=f(x(t))+g(x(t))u(t),\ x(0)=a,\ Q(x(t))+\beta u^T(t)u(t)< 0,\ t\in[0,T], \tag{1.3}$$

where the parameter $\beta>0$ is given, the cost function $P(x)$ is continuously differentiable on $\mathbb{R}^n$ and $Q(x)$ is a convex function on $\mathbb{R}^n$. For this problem the matrix functions $f(x),g(x)$ on $\mathbb{R}^n$ are smooth, and the vector $a\in\mathbb{R}^n$ satisfying $Q(a)<0$ is given in the control system in (1.3).

Main assumption: We assume that the sets of admissible controls of all problems concerned in this paper are nonempty.

Remark 1.1. Throughout the paper, by the optimal value of a problem we mean the infimum of the cost functional, i.e. $\inf J(u)$. On the other hand, noting that $P(x)$ is continuously differentiable and that the matrix functions $f(x),g(x)$ on $\mathbb{R}^n$ are smooth, by functional analysis we see that the cost functional

$J(u)=P(x(T))+\int_0^T \frac12 u^T(t)u(t)\,dt$ is continuous on the admissible control space.
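As a concrete reading of this functional, the following sketch discretizes the control system with a forward Euler step and evaluates $J(u)$ by left-endpoint quadrature; the scalar choices of $f$, $g$, $P$ at the bottom are illustrative assumptions, not data from the paper.

```python
import numpy as np

def evaluate_J(u, f, g, P, a, T):
    """Euler discretization of x' = f(x) + g(x) u, x(0) = a,
    then J(u) = P(x(T)) + integral of (1/2) u^T u over [0, T]."""
    N = len(u)
    dt = T / N
    x = a
    running = 0.0
    for k in range(N):
        running += 0.5 * float(np.dot(u[k], u[k])) * dt
        x = x + dt * (f(x) + g(x) @ np.atleast_1d(u[k]))
    return P(x) + running

# Illustrative (assumed) scalar data: f(x) = -x, g(x) = 1, P(x) = x^2.
f = lambda x: -x
g = lambda x: np.eye(1)
P = lambda x: float(x @ x)
u = np.zeros((100, 1))          # the zero control, so only P(x(T)) contributes
J0 = evaluate_J(u, f, g, P, np.array([1.0]), 1.0)
```

With the zero control the running cost vanishes and $J$ reduces to $P$ evaluated at the Euler approximation of $x(T)$.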

The rest of the paper is organized as follows. In Section 2, to deal with the problem (1.3), we present a partial differential equation obtained by rewriting the Hamilton-Jacobi-Bellman equation. Then we create an extremal flow by a differential-algebraic equation to compute the optimal value of the problem (1.3). We prove a convergence theorem for an approximation approach to the optimal value of the problem (1.2) by a series of optimal values of the problem (1.3) with different parameters in Section 3. In Section 4, we provide a convergence result for the optimal value of the problem (1.1) by proving that the problem (1.1) and the problem (1.2) have the same optimal value. We give a proof of Theorem 2.1 in Section 5 and a conclusion in Section 6.

2. A Study on Optimal Control Problem Subject to Mixed Control-State Constraint

In this section, we deal with the optimal control problem (1.3) by a partial differential equation. In the following, the positive number $\beta$ is fixed. For given $x\in\mathbb{R}^n$, define a set

$$S(x)=\left\{u:\ u^Tu<-\frac{Q(x)}{\beta}\right\}. \tag{2.1}$$

Note that if $Q(x)\ge 0$ then $S(x)=\emptyset$. In the following we assume that

$$S(x)\ne\emptyset. \tag{2.2}$$

We consider the Hamilton-Jacobi-Bellman equation as follows:

$$v_t(t,x)+v_x^T(t,x)f(x)+\min_{u\in S(x)}\left\{v_x^T(t,x)g(x)u+\frac12 u^Tu\right\}=0,\qquad v(T,x)=P(x). \tag{2.3}$$

For given $(\lambda,x)\in\mathbb{R}^m\times\mathbb{R}^n$ with $Q(x)<0$, define a function

$$H(\lambda,x):=\min_{u\in S(x)}\left\{\lambda^Tu+\frac12 u^Tu\right\}, \tag{2.4}$$

then for $t\in[0,T]$ and $\lambda=g^T(x)v_x(t,x)$, we have

$$H(g^T(x)v_x(t,x),x)=\min_{u\in S(x)}\left\{v_x^T(t,x)g(x)u+\frac12 u^Tu\right\}. \tag{2.5}$$

By (2.5), we can rewrite the Hamilton-Jacobi-Bellman equation in (2.3) with global optimization to obtain the following partial differential equation [3]:

$$v_t(t,x)+v_x^T(t,x)f(x)+H(g^T(x)v_x(t,x),x)=0,\qquad v(T,x)=P(x). \tag{2.6}$$

We will solve the optimal control problem in (1.3) by the partial differential equation in (2.6).

Consider a pair $(x,u)\in\mathbb{R}^n\times\mathbb{R}^m$ satisfying $u\in S(x)$, i.e. $u^Tu<-Q(x)/\beta$. For $-Q(x)/\beta>0$, in the following let $\hat u$ denote the global minimizer of $\min_{u\in S(x)}\{\lambda^Tu+\frac12u^Tu\}$, i.e. $H(\lambda,x)=\lambda^T\hat u+\frac12\hat u^T\hat u=\min_{u\in S(x)}\{\lambda^Tu+\frac12u^Tu\}$. We need the following lemma to derive an explicit expression for $H(g^T(x)v_x(t,x),x)$. For given $r>0$ and $\lambda\in\mathbb{R}^m$, define the auxiliary function $h(\lambda,r)=\min_{\|u\|^2<r}\{\lambda^Tu+\frac12u^Tu\}$. Again let $\hat u$ denote the global minimizer of $\min_{\|u\|^2<r}\{\lambda^Tu+\frac12u^Tu\}$, i.e. $h(\lambda,r)=\lambda^T\hat u+\frac12\hat u^T\hat u=\min_{\|u\|^2<r}\{\lambda^Tu+\frac12u^Tu\}$. We see that, given $x\in\mathbb{R}^n$ satisfying $-Q(x)/\beta>0$, for $r=-Q(x)/\beta$ and $\lambda=g^T(x)v_x(t,x)$, we have $H(g^T(x)v_x(t,x),x)=h(\lambda,r)$. By elementary optimization theory we have the following lemma.

Lemma 2.1. Given $r>0$ and $\lambda\in\mathbb{R}^m$. If $\|\lambda\|^2<r$, then $\hat u=-\lambda$ and $h(\lambda,r)=-\frac{\|\lambda\|^2}{2}$. On the other hand, if $\|\lambda\|^2\ge r$, then $\hat u=-\sqrt r\left(\frac{\lambda}{\|\lambda\|}\right)$ and $h(\lambda,r)=\frac r2-\sqrt r\,\|\lambda\|$.
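Lemma 2.1 is the standard closed form for minimizing $\lambda^Tu+\frac12u^Tu$ over a ball of squared radius $r$; the sketch below implements both branches and cross-checks them against randomly sampled feasible points (the sampling is purely illustrative).

```python
import numpy as np

def argmin_ball(lam, r):
    """Minimizer and value of  lam^T u + 0.5 u^T u  over ||u||^2 <= r.
    Interior branch: u = -lam when ||lam||^2 < r;
    boundary branch: u = -sqrt(r) lam/||lam||, value r/2 - sqrt(r)||lam||."""
    n2 = float(np.dot(lam, lam))
    if n2 < r:
        return -lam, -0.5 * n2
    norm = np.sqrt(n2)
    return -np.sqrt(r) * lam / norm, 0.5 * r - np.sqrt(r) * norm

# Cross-check: every feasible u must give a value no smaller than h.
rng = np.random.default_rng(0)
for lam, r in [(np.array([0.3, 0.1]), 1.0),    # interior case
               (np.array([2.0, -1.0]), 1.0)]:  # boundary case
    u_hat, h = argmin_ball(lam, r)
    for _ in range(200):
        v = rng.normal(size=2)
        v *= np.sqrt(r) * rng.random() / np.linalg.norm(v)  # ||v||^2 <= r
        assert lam @ v + 0.5 * v @ v >= h - 1e-9
```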

For given $(t,x)\in[0,T]\times\mathbb{R}^n$ such that $-Q(x)/\beta>0$, denoting $-Q(x)/\beta$ by $r$ and $g^T(x)v_x(t,x)$ by $\lambda$, by Lemma 2.1 we see that, if $\|g^T(x)v_x(t,x)\|^2<-Q(x)/\beta$, then

$$\hat u=\varphi(t,x):=-g^T(x)v_x(t,x),\qquad H(g^T(x)v_x(t,x),x)=-\frac12\|g^T(x)v_x(t,x)\|^2, \tag{2.7}$$

and if $\|g^T(x)v_x(t,x)\|^2\ge-Q(x)/\beta$, then

$$\hat u=\varphi(t,x):=-\sqrt{-\frac{Q(x)}{\beta}}\left(\frac{g^T(x)v_x(t,x)}{\|g^T(x)v_x(t,x)\|}\right),\qquad H(g^T(x)v_x(t,x),x)=-\frac{Q(x)}{2\beta}-\sqrt{-\frac{Q(x)}{\beta}}\,\|g^T(x)v_x(t,x)\|. \tag{2.8}$$
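Putting the two branches together, the feedback $\varphi$ and the value $H(g^T v_x,x)$ of (2.7)-(2.8) can be written directly from $g$, $Q$, $\beta$ and the gradient $v_x$. In the sketch below the gradient is a placeholder argument, and the concrete $g$, $Q$ at the bottom are assumed test data, since an actual $v$ would come from solving (2.6).

```python
import numpy as np

def phi_and_H(vx, g, Q, beta, x):
    """Feedback u = phi(t,x) and H(g^T vx, x) from (2.7)-(2.8),
    defined on the region -Q(x)/beta > 0."""
    lam = g(x).T @ vx                 # lambda = g^T(x) v_x(t,x)
    r = -Q(x) / beta                  # squared radius of the control ball
    assert r > 0, "phi is only defined where Q(x) < 0"
    n2 = float(lam @ lam)
    if n2 < r:                        # unconstrained branch (2.7)
        return -lam, -0.5 * n2
    norm = np.sqrt(n2)                # constrained branch (2.8)
    return -np.sqrt(r) * lam / norm, 0.5 * r - np.sqrt(r) * norm

# Illustrative data (assumed, not from the paper): g = I, Q(x) = ||x||^2 - 1.
g = lambda x: np.eye(2)
Q = lambda x: float(x @ x) - 1.0
x = np.array([0.5, 0.0])              # Q(x) = -0.75 < 0
u, H = phi_and_H(np.array([0.1, 0.2]), g, Q, beta=1.0, x=x)
```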

Remark 2.1. By Lemma 2.1 we see that $H(\lambda,x)$ is continuous with respect to $(\lambda,x)$. We can get a viscosity solution of the partial differential equation in (2.6) [4]-[6]. Then the Hamilton-Jacobi-Bellman equation in (2.3) can be solved for a numerical solution [7].

Definition 2.1. For a solution $v(t,x)$ of the partial differential equation in (2.6), we call $(\hat x(\cdot),\hat u(\cdot))$ an extremal flow related to $v(t,x)$ if it is a solution of the following differential-algebraic equation:

$$\dot{\hat x}(t)=f(\hat x(t))+g(\hat x(t))\hat u(t),\qquad \hat x(0)=a\in\mathbb{R}^n, \tag{2.9}$$

$$v_x^T(t,\hat x(t))g(\hat x(t))\hat u(t)+\frac12\hat u^T(t)\hat u(t)=H(g^T(\hat x(t))v_x(t,\hat x(t)),\hat x(t)),\qquad t\in[0,T]. \tag{2.10}$$
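A minimal numerical sketch of integrating the closed loop (2.9) with $u=\varphi(t,x)$: everything below, the one-dimensional dynamics, the quadratic $Q$, and above all the ansatz for $v_x$ (which is NOT a solution of (2.6)), is a hypothetical stand-in chosen only to exercise the construction and the mixed constraint.

```python
import numpy as np

# Hypothetical 1-D data: x' = -x + u, Q(x) = x^2 - 1, beta = 0.5,
# and an assumed gradient v_x(t,x) = x (a placeholder, not a solution of (2.6)).
f  = lambda x: -x
g  = lambda x: 1.0
Q  = lambda x: x * x - 1.0
vx = lambda t, x: x
beta = 0.5

def phi(t, x):
    """Feedback from (2.7)-(2.8), specialized to scalar g."""
    lam = g(x) * vx(t, x)
    r = -Q(x) / beta
    if lam * lam < r:
        return -lam
    return -np.sqrt(r) * np.sign(lam)

# Forward Euler on (2.9), checking the mixed constraint Q(x) + beta*u^2 < 0.
x, T, N = 0.5, 1.0, 1000
dt = T / N
for k in range(N):
    u = phi(k * dt, x)
    assert Q(x) + beta * u * u < 0
    x = x + dt * (f(x) + g(x) * u)
```

Along this trajectory the interior branch of $\varphi$ stays active, so the closed loop reduces to $\dot x=-2x$ and the state decays while remaining strictly feasible.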

In the same way as in [7], we can prove the following theorem.

Theorem 2.1. Let $v(t,x)$ be a solution of the partial differential equation in (2.6) and $(\hat x(\cdot),\hat u(\cdot))$ be an extremal flow defined by (2.9), (2.10). Then $\hat u(\cdot)$ is an optimal control of the problem (1.3), and

$v(0,a)=P(\hat x(T))+\int_0^T\frac12\hat u^T(t)\hat u(t)\,dt$ is the optimal value of the problem (1.3).

Theorem 2.2. If the continuously differentiable function $v(t,x)$ is a solution of the partial differential equation in (2.6) on $[0,T]\times\{x:-Q(x)/\beta>0\}$ and $\varphi(t,x)$ is the function defined in (2.7), (2.8), then $u=\varphi(t,x)$ is an optimal feedback control of the problem (1.3).

Proof: Since $v_x(t,x)$ is continuous, by (2.7), (2.8), we see that $\varphi(t,x)$ is continuous on $[0,T]\times\{x:-Q(x)/\beta>0\}$. By the classical theory of ordinary differential equations we see that the equation

$$\dot x(t)=f(x(t))+g(x(t))\varphi(t,x(t)),\qquad x(0)=a\in\mathbb{R}^n \tag{2.11}$$

has a solution on $[0,T]\times\{x:-Q(x)/\beta>0\}$. Let the solution of the ODE in (2.11) be denoted by $\hat x(t)$ and let $\varphi(t,\hat x(t))$ be denoted by $\hat u(t)$. By Lemma 2.1 and (2.7), (2.8) we see that

$$v_x^T(t,\hat x(t))g(\hat x(t))\hat u(t)+\frac12\hat u^T(t)\hat u(t)=H(g^T(\hat x(t))v_x(t,\hat x(t)),\hat x(t)),\qquad t\in[0,T]. \tag{2.12}$$

Noting (2.11), (2.12), by Definition 2.1, the pair $(\hat x(\cdot),\hat u(\cdot))$ is an extremal flow related to $v(t,x)$. It follows from Theorem 2.1 that $u=\varphi(t,x)$ is an optimal feedback control of the problem (1.3).

3. An Approximation Approach to the Optimal Value of Problem (1.2)

In this section we show a convergence result for an approximation approach to the optimal value of problem (1.2), which is restated as follows.

$$\min_{u}\ J(u)=P(x(T))+\int_0^T \frac12 u^T(t)u(t)\,dt \quad \text{s.t.}\quad \dot x(t)=f(x(t))+g(x(t))u(t),\ x(0)=a,\ Q(x(t))<0,\ t\in[0,T], \tag{3.1}$$

where the cost function $P(x)$ is continuously differentiable on $\mathbb{R}^n$ and $Q(x)$ is a convex function on $\mathbb{R}^n$. In this problem the matrix functions $f(x),g(x)$ on $\mathbb{R}^n$ are continuously differentiable, and the vector $a\in\mathbb{R}^n$ such that $Q(a)<0$ is given in the control system in (3.1).

In the following, for a given positive number $\beta$, the optimal value of problem (1.3) is denoted by $V_\beta$ and the optimal value of problem (3.1) is denoted by $\hat V$.

Lemma 3.1. (i) For each given number $\beta>0$, $V_\beta\ge\hat V$. (ii) If $\beta_1\ge\beta_2>0$, then $V_{\beta_1}\ge V_{\beta_2}$.

Proof: Firstly, let $(x(\cdot),u(\cdot))$ be an admissible pair of the problem (1.3). Note that the functions $f(x),g(x)$ and the vector $a$ appearing in (1.3) and (3.1) are the same. It follows from the fact $Q(x(t))+\beta u^T(t)u(t)<0$, $t\in[0,T]$, that $Q(x(t))<0$, $t\in[0,T]$. Thus $(x(\cdot),u(\cdot))$ is also an admissible pair of the problem (3.1). Consequently, $V_\beta\ge\hat V$.

Secondly, let $(x(\cdot),u(\cdot))$ be an admissible pair of the problem (1.3) with the parameter $\beta_1$. Note that the functions $f(x),g(x)$ and the vector $a$ appearing in (1.3) do not depend on the parameter $\beta$. Noting $\beta_1\ge\beta_2>0$, it follows from the fact $Q(x(t))+\beta_1 u^T(t)u(t)<0$, $t\in[0,T]$, that $Q(x(t))+\beta_2 u^T(t)u(t)<0$, $t\in[0,T]$. Thus $(x(\cdot),u(\cdot))$ is also an admissible pair of the problem (1.3) with the parameter $\beta_2$. Consequently, $V_{\beta_1}\ge V_{\beta_2}$. The lemma is proved.
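Both monotonicity claims rest on nothing more than set inclusion of admissible controls, which can be checked mechanically along any sampled trajectory; the sample values of $Q(x(t))$ and $u^T(t)u(t)$ below are made up for illustration.

```python
# Made-up samples of Q(x(t)) and u(t)^T u(t) along one trajectory.
Qs = [-1.0, -0.8, -0.6, -0.9]
uu = [0.2, 0.3, 0.1, 0.25]

def feasible(beta):
    """Mixed constraint Q(x(t)) + beta * u(t)^T u(t) < 0 at every sample."""
    return all(q + beta * s < 0 for q, s in zip(Qs, uu))

beta1, beta2 = 2.0, 1.0           # beta1 >= beta2 > 0
# (ii): feasibility for the larger beta implies feasibility for the smaller.
assert (not feasible(beta1)) or feasible(beta2)
# (i): the mixed constraint implies the pure state constraint Q < 0.
assert (not feasible(beta2)) or all(q < 0 for q in Qs)
```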

Theorem 3.1. For given $\epsilon>0$ there exists a positive number $\beta$ such that

$$|V_\beta-\hat V|<\epsilon. \tag{3.2}$$

Proof: Given $\epsilon>0$, let $(\bar x,\bar u)$ be an admissible pair of the problem (3.1) such that

$$\hat V\le J(\bar u)<\hat V+\epsilon, \tag{3.3}$$

noting that $\hat V$ is the infimum of $J(u)$ for the problem (3.1).

Noting that $\bar x(t)$ is continuous and $Q(\bar x(t))<0$, $t\in[0,T]$, we see that there is a $\delta>0$ such that

$$Q(\bar x(t))<-\delta,\qquad t\in[0,T]. \tag{3.4}$$

Noting that the admissible control is bounded on $[0,T]$, there is a number $M>0$ such that

$$\bar u^T(t)\bar u(t)\le M,\qquad t\in[0,T]. \tag{3.5}$$

By (3.4), (3.5), there is a $\beta>0$ such that

$$Q(\bar x(t))+\beta\bar u^T(t)\bar u(t)\le-\delta+\beta M<0,\qquad t\in[0,T]. \tag{3.6}$$

Thus $(\bar x,\bar u)$ is an admissible pair of both the problem (3.1) and the problem (1.3) with the parameter $\beta$. Then we have

$$J(\bar u)\ge V_\beta. \tag{3.7}$$

By Lemma 3.1 and (3.3), (3.7) we have

$$0\le V_\beta-\hat V<V_\beta-(J(\bar u)-\epsilon)\le J(\bar u)-(J(\bar u)-\epsilon)=\epsilon. \tag{3.8}$$

Therefore (3.2) is true and the theorem has been proved.
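The $\beta$ produced in (3.6) can be made explicit: with feasibility margin $\delta$ from (3.4) and control bound $M$ from (3.5), any $0<\beta<\delta/M$ works, e.g. $\beta=\delta/(2M)$. The numbers below are illustrative.

```python
# Margin delta from (3.4) and control bound M from (3.5) (illustrative values).
delta, M = 0.3, 4.0
beta = delta / (2.0 * M)          # any 0 < beta < delta / M suffices

# Worst case of (3.6): Q(xbar(t)) <= -delta and ubar^T ubar <= M give
#   Q + beta * u^T u <= -delta + beta * M < 0.
assert -delta + beta * M < 0
```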

Corollary 3.1. Let $\beta_n$, $n=1,2,\dots$, be a decreasing sequence of positive numbers satisfying $\beta_n\to0$ as $n\to+\infty$. Then

$$\lim_{n\to+\infty}V_{\beta_n}=\hat V. \tag{3.9}$$

Proof: By Lemma 3.1, noting (3.3), (3.6) in the proof of Theorem 3.1, for each $n\ (=1,2,\dots)$ there is an admissible pair $(x_n,u_n)$ of the problem (3.1) satisfying

$$\hat V\le J(u_n)<\hat V+\frac1n, \tag{3.10}$$

noting that $\hat V$ is the infimum of $J(u)$ for the problem (3.1).

Noting that $x_n(\cdot)$ is continuous and $Q(x_n(t))<0$, $t\in[0,T]$, we see that there is a $\delta_n>0$ such that

$$Q(x_n(t))<-\delta_n<0,\qquad t\in[0,T], \tag{3.11}$$

and noting that the admissible control is bounded on $[0,T]$, there is a number $M_n>0$ such that

$$u_n^T(t)u_n(t)\le M_n,\qquad t\in[0,T]. \tag{3.12}$$

By (3.11), (3.12), there is a $\beta_n>0$ such that

$$Q(x_n(t))+\beta_n u_n^T(t)u_n(t)\le-\delta_n+\beta_n M_n<0,\qquad t\in[0,T]. \tag{3.13}$$

Thus $(x_n,u_n)$ is an admissible pair of both the problem (3.1) and the problem (1.3) with the parameter $\beta_n$. Then we have

$$J(u_n)\ge V_{\beta_n}. \tag{3.14}$$

The process (3.10)-(3.14) begins from $n=1$. By Lemma 3.1 we see that if a positive number $\beta$ satisfies (3.13), then every smaller positive number also makes the process (3.10)-(3.14) work. For $n=1$, we choose $0<\beta_1<1$ as in (3.10)-(3.14). After step $n\ (\ge1)$ has been done, in the next step we choose $0<\beta_{n+1}\le\frac{\beta_n}{n+1}\ (<\beta_n)$ such that

$$Q(x_{n+1}(t))+\beta_{n+1}u_{n+1}^T(t)u_{n+1}(t)\le-\delta_{n+1}+\beta_{n+1}M_{n+1}<0,\qquad t\in[0,T]. \tag{3.15}$$

Then we see that in this way the positive sequence $\{\beta_n\}$ is strictly decreasing and tends to zero as $n\to\infty$. By the same deduction as (3.8) in the proof of Theorem 3.1, or by Lemma 3.1 and (3.10), (3.14), we have, for each $n=1,2,\dots$,

$$0\le V_{\beta_n}-\hat V<J(u_n)-\left(J(u_n)-\frac1n\right)=\frac1n. \tag{3.16}$$

Therefore we have

$$\lim_{n\to+\infty}V_{\beta_n}=\hat V, \tag{3.17}$$

with the positive sequence $\{\beta_n\}$ strictly decreasing and tending to zero as $n\to\infty$. Corollary 3.1 is proved.
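The recursion $0<\beta_{n+1}\le\beta_n/(n+1)$ used in the proof indeed forces a strictly decreasing null sequence; a quick sketch (taking equality in the recursion, with an arbitrary starting value) confirms this.

```python
# beta_{n+1} = beta_n / (n+1), starting from some 0 < beta_1 < 1.
betas = [0.5]
for n in range(1, 20):
    betas.append(betas[-1] / (n + 1))

assert all(b2 < b1 for b1, b2 in zip(betas, betas[1:]))  # strictly decreasing
assert betas[-1] < 1e-12                                  # tends to zero
```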

4. On the Optimal Value of Problem (1.1)

In this section we deal with the problem (1.1), restated as:

$$\min_{u}\ J(u)=P(x(T))+\int_0^T \frac12 u^T(t)u(t)\,dt \quad \text{s.t.}\quad \dot x(t)=f(x(t))+g(x(t))u(t),\ x(0)=a,\ Q(x(t))\le0,\ t\in[0,T]. \tag{4.1}$$

In the following, the optimal value of the problem (4.1) is denoted by $V^*$. Recall that in Section 3 the optimal value of the problem (3.1) is denoted by $\hat V$. In the following lemma, noting that $Q(a)<0$, we define two sets of admissible controls as follows:

$$D_1:=\{u(\cdot):Q(x_u(t))\le0,\ t\in[0,T]\},$$

$$D_2:=\{u(\cdot):Q(x_u(t))<0,\ t\in[0,T]\}.$$

Lemma 4.1. Under the notations above, we have

$$\hat V\ge V^*. \tag{4.2}$$

Proof: Let $x_u(\cdot)$ be the solution of the equation

$$\dot x(t)=f(x(t))+g(x(t))u(t),\qquad x(0)=a\in\mathbb{R}^n$$

corresponding to an admissible control $u(\cdot)$. It is clear that $D_2\subset D_1$. Consequently,

$$\hat V\ge V^*,$$

i.e. (4.2) holds. The lemma is proved.

In the following lemma we recall that, in the first section of this paper, we have assumed that the admissible control set $D_2=\{u(\cdot):Q(x_u(t))<0,\ t\in[0,T]\}$ is not empty.

Lemma 4.2. Let $\bar u(\cdot)\in D_2=\{u(\cdot):Q(x_u(t))<0,\ t\in[0,T]\}$. Then for any admissible control $\tilde u(\cdot)$ such that $Q(x_{\tilde u}(t))\le0$, $t\in[0,T]$, we have, for $\alpha\in(0,1]$,

$$\alpha\bar u(\cdot)+(1-\alpha)\tilde u(\cdot)\in D_2=\{u(\cdot):Q(x_u(t))<0,\ t\in[0,T]\}. \tag{4.5}$$

Proof: Let $\bar x(\cdot)$ and $\tilde x(\cdot)$ be the trajectories of the linear system $\dot x(t)=Ax(t)+Bu(t)$, $x(0)=a\in\mathbb{R}^n$ corresponding to $\bar u(\cdot)$ and $\tilde u(\cdot)$ respectively. Noting that $Q(x)$ is convex, we have, for $t\in[0,T]$,

$$Q(\alpha\bar x(t)+(1-\alpha)\tilde x(t))\le\alpha Q(\bar x(t))+(1-\alpha)Q(\tilde x(t))<0, \tag{4.6}$$

also noting that $Q(\bar x(t))<0$ and $Q(\tilde x(t))\le0$, $\alpha\in(0,1]$. The lemma is proved.
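The inequality in (4.6) is just convexity of $Q$; the check below uses the hypothetical convex choice $Q(x)=\|x\|^2-1$ with sample states satisfying $Q(\bar x)<0$ (strictly feasible) and $Q(\tilde x)\le0$ (on the boundary).

```python
import numpy as np

Q = lambda x: float(x @ x) - 1.0      # a convex Q (illustrative)
xbar = np.array([0.2, 0.1])           # Q(xbar) < 0, strictly feasible
xtil = np.array([1.0, 0.0])           # Q(xtil) = 0, on the boundary

for alpha in np.linspace(0.01, 1.0, 50):
    xa = alpha * xbar + (1 - alpha) * xtil
    # convexity: Q(xa) <= alpha*Q(xbar) + (1-alpha)*Q(xtil), and the
    # right-hand side is < 0 for every alpha in (0, 1]
    assert Q(xa) <= alpha * Q(xbar) + (1 - alpha) * Q(xtil) + 1e-12
    assert Q(xa) < 0
```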

Theorem 4.1. Let the notations $\hat V$, $V^*$ be as in Lemma 4.1. Then

$$V^*=\hat V. \tag{4.7}$$

Proof: Let $\bar u(\cdot)\in D_2=\{u(\cdot):Q(x_u(t))<0,\ t\in[0,T]\}$. We show (4.7) by induction. In the initial step, for given $\delta>0$, we have an admissible control $\hat u(\cdot)$ satisfying $Q(x_{\hat u}(t))\le0$, $t\in[0,T]$, and

$$V^*\le J(\hat u)<V^*+\delta, \tag{4.8}$$

noting that $V^*$ is the infimum of $J(u)$ for the problem (4.1).

Recalling Remark 1.1, each admissible control is bounded, the function $P(x)$ in the cost functional for the concerned problems is continuously differentiable, and the cost functional $J(\cdot)$ is continuous on the control space. By Lemma 4.2 and this continuity, there exists a number $\alpha\in(0,1]$ such that the control

$$u_\alpha(\cdot)=\alpha\bar u(\cdot)+(1-\alpha)\hat u(\cdot)=\hat u(\cdot)+\alpha(\bar u(\cdot)-\hat u(\cdot)) \tag{4.9}$$

satisfies

$$V^*\le J(u_\alpha)<V^*+\delta \tag{4.10}$$

and

$$Q(x_\alpha(t))<0,\qquad t\in[0,T]. \tag{4.11}$$

Thus $u_\alpha$ is also an admissible control for the problem (3.1). Then by Lemma 4.1 and (4.10) we have

$$V^*\le\hat V\le J(u_\alpha)<V^*+\delta, \tag{4.12}$$

consequently,

$$V^*\le\hat V<V^*+\delta. \tag{4.13}$$

Next, the same as in the previous step, we have an admissible control, denoted by $\tilde u(\cdot)$, satisfying $Q(x_{\tilde u}(t))\le0$, $t\in[0,T]$, such that

$$V^*\le J(\tilde u)<V^*+\frac{\delta}{2},$$

noting that $V^*$ is the infimum of $J(u)$ for the problem (4.1). As in the previous step, we have a number $\tilde\alpha\in(0,1]$ such that the control

$$u_{\tilde\alpha}(\cdot)=\tilde\alpha\bar u(\cdot)+(1-\tilde\alpha)\tilde u(\cdot)=\tilde u(\cdot)+\tilde\alpha(\bar u(\cdot)-\tilde u(\cdot)) \tag{4.14}$$

satisfies

$$V^*\le J(u_{\tilde\alpha})<V^*+\frac{\delta}{2} \tag{4.15}$$

and

$$Q(x_{\tilde\alpha}(t))<0,\qquad t\in[0,T]. \tag{4.16}$$

Thus $u_{\tilde\alpha}$ is also an admissible control for the problem (3.1). Then by Lemma 4.1 and (4.15) we have

$$V^*\le\hat V\le J(u_{\tilde\alpha})<V^*+\frac{\delta}{2}, \tag{4.17}$$

consequently,

$$V^*\le\hat V<V^*+\frac{\delta}{2}. \tag{4.18}$$

Similar to the process from (4.13) to (4.18), by induction, for $n=0,1,2,\dots$ (the case $n=0$ being the initial step, where $V^*\le\hat V<V^*+\delta$), at the $(n+1)$-th step we have

$$V^*\le\hat V<V^*+2^{-(n+1)}\delta. \tag{4.19}$$

Thus for each positive integer $N$, we have

$$0\le\hat V-V^*<2^{-N}\delta.$$

Letting $N\to+\infty$, we have

$$V^*=\hat V. \tag{4.20}$$

Therefore (4.7) is true and the theorem has been proved.

By Theorem 4.1 and Corollary 3.1, we have the following convergence result.

Corollary 4.1. Let $\beta_n$, $n=1,2,\dots$, be a decreasing sequence of positive numbers satisfying $\beta_n\to0$ as $n\to+\infty$. Then

$$\lim_{n\to+\infty}V_{\beta_n}=V^*.$$

5. A Proof of Theorem 2.1

By (2.9), (2.10), we have, for $t\in[0,T]$,

$$\frac{d}{dt}v(t,\hat x(t))=v_t(t,\hat x(t))+v_x^T(t,\hat x(t))f(\hat x(t))+v_x^T(t,\hat x(t))g(\hat x(t))\hat u(t)=v_t(t,\hat x(t))+v_x^T(t,\hat x(t))f(\hat x(t))+H(g^T(\hat x(t))v_x(t,\hat x(t)),\hat x(t))-\frac12\hat u^T(t)\hat u(t)=-\frac12\hat u^T(t)\hat u(t). \tag{5.1}$$

Integrating the above equality with respect to $t$ from $0$ to $T$, noting that $v(T,\hat x(T))=P(\hat x(T))$, $\hat x(0)=a$, we have

$$\int_0^T\frac12\hat u^T(t)\hat u(t)\,dt=-\int_0^T\frac{d}{dt}v(t,\hat x(t))\,dt=v(0,a)-P(\hat x(T)) \tag{5.2}$$

and

$$v(0,a)=P(\hat x(T))+\int_0^T\frac12\hat u^T(t)\hat u(t)\,dt. \tag{5.3}$$

Now let $(x(\cdot),u(\cdot))$ be an arbitrary admissible pair of the control system in the problem (1.3). We have, for $t\in[0,T]$,

$$Q(x(t))+\beta u^T(t)u(t)<0, \tag{5.4}$$

which implies $u(t)\in S(x(t))$. Thus, by (2.5) with $\lambda=g^T(x(t))v_x(t,x(t))$ for a $t\in[0,T]$, we have

$$H(g^T(x(t))v_x(t,x(t)),x(t))\le v_x^T(t,x(t))g(x(t))u(t)+\frac12u^T(t)u(t). \tag{5.5}$$

Then for each $t\in[0,T]$, by the partial differential equation in (2.6), also noting that $(x(\cdot),u(\cdot))$ is an arbitrary admissible pair of the control system in the problem (1.3), we have, by (5.5),

$$0=v_t(t,x(t))+v_x^T(t,x(t))f(x(t))+H(g^T(x(t))v_x(t,x(t)),x(t))\le v_t(t,x(t))+v_x^T(t,x(t))f(x(t))+v_x^T(t,x(t))g(x(t))u(t)+\frac12u^T(t)u(t)=v_t(t,x(t))+v_x^T(t,x(t))\frac{dx(t)}{dt}+\frac12u^T(t)u(t)=\frac{d}{dt}v(t,x(t))+\frac12u^T(t)u(t). \tag{5.6}$$

Integrating the above inequality over $[0,T]$, noting $v(T,x(T))=P(x(T))$, $x(0)=a$, by (5.6), we have

$$0\le\int_0^T\left[\frac{d}{dt}v(t,x(t))+\frac12u^T(t)u(t)\right]dt=P(x(T))-v(0,a)+\int_0^T\frac12u^T(t)u(t)\,dt. \tag{5.7}$$

By (5.3), (5.7), we have

$$P(\hat x(T))+\int_0^T\frac12\hat u^T(t)\hat u(t)\,dt=v(0,a)\le P(x(T))+\int_0^T\frac12u^T(t)u(t)\,dt. \tag{5.8}$$

By (5.8), we see that $\hat u(\cdot)$ is an optimal control for the problem (1.3) and

$v(0,a)=P(\hat x(T))+\int_0^T\frac12\hat u^T(t)\hat u(t)\,dt$ is the optimal value of the problem (1.3).

The theorem has been proved.

6. Conclusion

In this paper, we have studied nonlinear optimal control problems under state constraint. First we dealt with a nonlinear optimal control problem subject to a mixed control-state constraint, deriving a partial differential equation from the Hamilton-Jacobi-Bellman equation with global optimization. Then we provided a convergence result for an approximation approach to the optimal value of a nonlinear optimal control problem under state constraint.

Conflicts of Interest

The author declares no conflicts of interest.


References

[1] Sontag, E.D. (1998) Mathematical Control Theory: Deterministic Finite Dimensional Systems. 2nd Edition, Springer.
[2] Martens, B. and Gerdts, M. (2020) Convergence Analysis for Approximations of Optimal Control Problems Subject to Higher Index Differential-Algebraic Equations and Mixed Control-State Constraints. SIAM Journal on Control and Optimization, 58, 1-33.
[3] Zhu, J. (2018) Singular Optimal Control by Minimizer Flows. European Journal of Control, 42, 32-37.
[4] Crandall, M.G. and Lions, P.-L. (1983) Viscosity Solutions of Hamilton-Jacobi Equations. Transactions of the American Mathematical Society, 277, 1-42.
[5] Bardi, M. and Capuzzo-Dolcetta, I. (1997) Optimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations. Birkhäuser.
[6] Fleming, W.H. (1969) The Cauchy Problem for a Nonlinear First Order Partial Differential Equation. Journal of Differential Equations, 5, 515-530.
[7] Zhu, J. (2023) A Computational Approach to Optimal Control Problems Subject to Mixed Control-State Constraints. International Journal of Control, 96, 41-47.

Copyright © 2025 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.