Optimal control has recently become one of the most popular decision-making tools in many research areas. The Lorenz-Rössler model is of particular interest because it consolidates two well-known systems: the Lorenz model and the Rössler model. This paper studies the Lorenz-Rössler model from the viewpoints of bifurcation phenomena and the optimal control problem (OCP). The bifurcation behavior at the system equilibrium is analyzed, and it is found that saddle-node and Hopf bifurcations can occur under certain conditions on the parameters. The optimal control problem for the Lorenz-Rössler model is then formulated, and Pontryagin's Maximum Principle (PMP) is used to derive the optimal control inputs that achieve the optimal trajectory. Numerical examples and solutions for the bifurcation cases and the optimally controlled system are carried out and shown graphically to demonstrate the effectiveness of the procedure.

Predicting the development of any system is an important goal, especially for chaotic systems, which arise frequently in real life and in a wide variety of fields.

Such systems are of great practical importance to mankind; they include, for example, systems in psychology [

The Lorenz system is a reduced version of a larger system studied earlier by Barry Saltzman [

Qualitative changes in the phase-space trajectories caused by variation of one or more control parameters are called bifurcations. A bifurcation study is straightforward for a one-dimensional system with a single parameter, but it becomes difficult in higher-dimensional cases, especially with several parameters, so there is relatively little research in this area [

In the next section, we provide the necessary mathematical background. The mathematical formulation of the Lorenz-Rössler model, together with a brief discussion of its stability, is presented in Section 3. In Section 4, an analytical investigation of some bifurcation cases is given, along with several bifurcation diagrams. In Section 5, the optimal control problem is discussed, followed by numerical examples produced by simulation. The conclusion is presented in Section 6.

It is known that an optimal control problem requires: a mathematical model of the system to be controlled, a description of the constraints, a statement of the goal to be accomplished (usually an additional boundary condition), and a performance measure [2]. Consider the case of no constraints on the variables and, for simplicity, assume that the minimal curve is given as the graph of a smooth function x(t), t ∈ ℝ. The problem can then be stated as minimizing the integral

J = ∫_A^B √(1 + (dx/dt)²) dt = ∫_A^B √(1 + ẋ²) dt = ∫_A^B g(t, x(t), ẋ(t)) dt (2.1)

where J is called a functional, or the objective function, and must be minimized over the admissible curves x(t). In the simple case of one dependent variable x(t) with no constraints, and provided that the optimal curve x̄(t) exists and is unique, the general problem is to find the optimal curve that minimizes the functional

J = ∫ t 0 t f g ( t , x ( t ) , x ˙ ( t ) ) d t (2.2)

where g is continuous in all its variables and has continuous first- and second-order partial derivatives with respect to all of them. Moreover, t0 and tf are fixed. If we consider a small variation of the curve, then

x ( t ) = x ¯ ( t ) + ϵ δ ( t ) ∀ t (2.3)

Therefore

x ˙ ( t ) = x ¯ ˙ ( t ) + ϵ δ ˙ ( t ) (2.4)

where ϵ is a small parameter and δ(t) is an arbitrary real function of t such that δ(t0) = δ(tf) = 0. Clearly, the optimal curve x̄(t) is the member of the family (2.3) at ϵ = 0. See

Thus, the functional in (2.2) can be written as

J = ∫ t 0 t f g ( t , x ¯ ( t ) + ϵ δ ( t ) , x ¯ ˙ ( t ) + ϵ δ ˙ ( t ) ) d t (2.5)

The necessary condition for x̄(t) to be an extremum of J is [

d J d ϵ | ϵ = 0 = 0 (2.6)

Under the assumption that x and all its derivatives are continuous, and noting that x(t) = x̄(t) and ẋ(t) = x̄̇(t) at ϵ = 0, some manipulation shows that condition (2.6) becomes

δ ( t f ) ∂ g ∂ x ˙ | t f − δ ( t 0 ) ∂ g ∂ x ˙ | t 0 + ∫ t 0 t f δ ( t ) ( ∂ g ∂ x − d d t ( ∂ g ∂ x ˙ ) ) d t = 0 (2.7)

but since δ(tf) = δ(t0) = 0 and δ(t) ≠ 0 for all t ∈ ]t0, tf[, then

∂ g ∂ x − d d t ( ∂ g ∂ x ˙ ) = 0 , ∀ t ∈ ] t 0 , t f [ , s.t. x ( t 0 ) = x 0 and x ( t f ) = x f (2.8)

Equation (2.8) gives the necessary condition for minimizing J and is known as the Euler-Lagrange (E-L) equation associated with the variational problem (2.2).
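As a quick symbolic check of the E-L Equation (2.8), the sketch below (not part of the paper; it assumes SymPy and its `euler_equations` helper) applies it to the arc-length integrand g = √(1 + ẋ²) from (2.1). The extremal should satisfy ẍ = 0, i.e. the minimizing curve is a straight line.

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.Symbol('t')
x = sp.Function('x')
g = sp.sqrt(1 + x(t).diff(t)**2)   # arc-length integrand from (2.1)

# euler_equations builds dg/dx - d/dt(dg/dxdot) = 0, i.e. Equation (2.8)
eq, = euler_equations(g, [x(t)], [t])
xdd = sp.solve(eq, x(t).diff(t, 2))
print(xdd)   # [0]: the extremal satisfies x'' = 0, a straight line
```

This confirms the classical result that the shortest curve between two points in the plane is the straight line joining them.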

Let us now consider a system of n first-order ordinary differential equations (ODEs) of the form ẋ(t) = (ẋ1, ⋯, ẋn)ᶜ, and suppose that the optimal state x̄(t) = (x̄1, ⋯, x̄n)ᶜ exists and is unique, where x̄ is a vector of n twice-differentiable functions. In this case, the functional (2.2) takes the form

J ( x ) = ∫ t 0 t f g ( x ( t ) , x ˙ ( t ) , t ) d t , x ( t 0 ) = x 0 , x ( t f ) = x f (2.9)

Under the same conditions as in the case of one dependent variable, the curve x(t) that makes the integral (2.9) an extremum must satisfy the n simultaneous E-L equations, which are given by

∂ g ∂ x i − d d t ( ∂ g ∂ x ˙ i ) = 0 , ∀ t ∈ ] t 0 , t f [ , (2.10)

with the boundary conditions

x i ( t 0 ) = x i 0 and x i ( t f ) = x i f , i = 1 , 2 , ⋯ , n (2.11)

See [

J * ( x , u ) = G ( x f , t f ) + ∫ t 0 t f [ g 0 ( x , u , t ) + λ c ( g − x ˙ ) ] d t (2.12)

where G : ℝⁿ × ℝ → ℝ and g0 : ℝⁿ × ℝᵐ × ℝ → ℝ are real-valued functions that can be selected to weight the terminal and transient performance, respectively. G(xf, tf) is called the terminal cost, and g0 the instantaneous loss per unit of time. λᶜ = (λ1, ⋯, λn) is the vector of Lagrange multipliers (L-m). Integrating the term λᶜ ẋ in (2.12) by parts, we get

J* = G(xf, tf) − λᶜ x |_{t0}^{tf} + ∫_{t0}^{tf} [H + λ̇ᶜ x] dt (2.13)

where

H = g 0 + λ c g (2.14)

is called the Hamiltonian function (H.f). Sometimes H takes the form

H ( x , u , λ , λ 0 , t ) = λ 0 g 0 ( x , u , t ) + λ c g ( x , u , t ) (2.15)

where λ0 ≥ 0, and one can set λ0 = 1 for maximization [

Theorem: Assume u*(·) is the optimal control that maximizes the objective function J* and x*(·) is the corresponding trajectory; then u* must satisfy the following conditions [

H ( x * , u * , λ * , λ 0 , t ) ≥ H ( x * , u , λ * , λ 0 , t ) (2.16.1)

λ ˙ j ( t ) = − ∂ H ∂ x j , λ j ( t f ) = ∂ G ∂ x j | t = t f , ∂ H ∂ u j | u * = 0 , ∀ ( u ∈ U , t ∈ [ t 0 , t f ] , j = 1 , 2 , ⋯ , n ) (2.16.2)

This system consists of 2n nonlinear differential equations with n initial conditions x j ( t 0 ) and n terminal conditions λ j ( t f ) . For more details about this theorem and its proof see [

The Lorenz-Rössler system is a three-dimensional system with five parameters. This system is described by the following equations as presented in [

ẋ1 = a1(x2 − x1) − x2 − x1
ẋ2 = a2 x1 − x2 − 20 x1 x3 + x1 + a3 x2
ẋ3 = 5 x1 x2 − b1 x3 + b2 + x1(x3 − b3) (3.1)

where x1, x2, and x3 are the state variables of the system, and a1, a2, a3, b1, b2, and b3 are the system parameters. Clearly, the zero state is not a solution of system (3.1) because the system is not homogeneous, and a short calculation shows that the system has the following possible equilibrium states

E 1 = ( 0 , 0 , b 2 / b 1 ) (3.2)

E 2 = ( 0 , x 2 , b 2 / b 1 ) , a 1 = a 3 = 1 (3.3)

E 3 = ( x 1 , f ( x 1 ) , B ) (3.4)

where

f(x1) = A x1, A = (a1 + 1)/(a1 − 1), a1 ≠ 1, B = (1/20)(a2 + 1 + A(a3 − 1)), (3.5)

x1 = ( −(B − b3) ± [ (B − b3)² − 20A(b2 − B b1) ]^{1/2} ) / (10A) (3.6)
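The equilibrium formulas (3.4)-(3.6) can be checked numerically: the sketch below (an illustration, not from the paper; the parameter values are assumptions chosen only so that the discriminant in (3.6) is positive) builds both roots of E3 and verifies that the right-hand side of (3.1) vanishes there.

```python
import numpy as np

# Illustrative parameter values (assumed, not from the paper)
a1, a2, a3 = 2.0, 1.0, 8.0
b1, b2, b3 = 1.0, 0.1, 0.2

def rhs(x):
    """Right-hand side of the Lorenz-Rossler system (3.1)."""
    x1, x2, x3 = x
    return np.array([
        a1*(x2 - x1) - x2 - x1,
        a2*x1 - x2 - 20*x1*x3 + x1 + a3*x2,
        5*x1*x2 - b1*x3 + b2 + x1*(x3 - b3),
    ])

# E3 from (3.5)-(3.6)
A = (a1 + 1) / (a1 - 1)
B = (a2 + 1 + A*(a3 - 1)) / 20
disc = (B - b3)**2 - 20*A*(b2 - B*b1)
for sign in (+1.0, -1.0):
    x1 = (-(B - b3) + sign*np.sqrt(disc)) / (10*A)
    E3 = np.array([x1, A*x1, B])
    print(E3, np.max(np.abs(rhs(E3))))  # residual should be ~0
```

Both choices of sign in (3.6) give a point where all three time derivatives vanish, confirming the algebra behind (3.5)-(3.6).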

It is easy to show that system (3.1), under some conditions, is unstable at least at one of its steady states, say E1. The Jacobian matrix W of model (3.1) is given by

W = ( −(a1 + 1)        a1 − 1    0
      a2 + 1 − 20 x3   a3 − 1    −20 x1
      5 x2 + x3 − b3   5 x1      x1 − b1 )

Evaluated at the stationary state E1, W becomes

W1 = ( −(a1 + 1)           a1 − 1   0
       a2 + 1 − 20 b2/b1   a3 − 1   0
       −b3 + b2/b1         0        −b1 ) (3.7)

According to linear stability analysis and the theory of linear differential equations, we seek the eigenvalues of W1. The characteristic equation of W1 is given by:

| λ I − W 1 | = ( λ + b 1 ) [ λ 2 + θ 1 λ + θ 2 ] = 0 (3.8)

where

θ 1 = 2 + a 1 − a 3 (3.9)

θ 2 = ( 1 + a 1 ) ( 1 − a 3 ) − ( a 1 − 1 ) ( a 2 + 1 − 20 b 2 / b 1 ) (3.10)

In general, the eigenvalues of W1 are complex numbers. Here we are concerned not with the exact values of the solutions of (3.8) but with their signs. By linear stability theory, if at least one eigenvalue in (3.8) has a positive real part, the equilibrium point E1 is unstable. For the linear factor in (3.8) the eigenvalue is λ1 = −b1 < 0, while for the quadratic factor, by Descartes' rule of signs for the number of positive real roots of a polynomial, there is at least one positive root if θ1 < 0, i.e. a3 > a1 + 2. This proves that, for such parameter values, the Lorenz-Rössler system is unstable at least at E1.
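The instability criterion a3 > a1 + 2 can be illustrated directly on (3.7). The sketch below (assumed parameter values, not taken from the paper) computes the eigenvalues of W1 for a case with a3 > a1 + 2 and shows that one of them is positive, so E1 is unstable.

```python
import numpy as np

# Illustrative parameters with a3 > a1 + 2 (here a1 + 2 = 4)
a1, a2, a3 = 2.0, 1.0, 5.0
b1, b2, b3 = 1.0, 0.1, 0.2

# Jacobian (3.7) at E1 = (0, 0, b2/b1)
W1 = np.array([
    [-(a1 + 1),          a1 - 1, 0.0],
    [a2 + 1 - 20*b2/b1,  a3 - 1, 0.0],
    [-b3 + b2/b1,        0.0,    -b1],
])
eigs = np.linalg.eigvals(W1)
print(eigs)                 # here: -3, 4 and -1
print(max(eigs.real) > 0)   # True -> E1 is unstable
```

For these values θ1 = 2 + a1 − a3 = −1 < 0 and the quadratic factor of (3.8) indeed has a positive root, matching Descartes' rule as used in the text.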

In this section, we discuss the bifurcation phenomena of the considered system. At the first equilibrium point E1, the analysis depends on the characteristic Equation (3.8), which can be rewritten as follows:

( λ + b 1 ) [ λ 2 + ( 2 + a 1 − a 3 ) λ + 2 − a 1 a 3 − a 3 − a 1 a 2 + a 2 + 20 b 2 ( a 1 − 1 ) / b 1 ] = 0 (4.1)

then the values of λ are λ 1 = − b 1 , and

λ 2 , 3 = − ( 2 + a 1 − a 3 ) / 2 ± [ ( 2 + a 1 − a 3 ) 2 − 4 { ( 2 − a 1 a 3 − a 3 − a 2 a 1 + a 2 ) + 20 b 2 ( a 1 − 1 ) / b 1 } ] 1 2 / 2 (4.2)

Bifurcation phenomena arise when one or more of the eigenvalues equal zero. Analyzing the last two eigenvalues, several cases arise:

Case 1: when ( 2 − a 1 a 3 − a 3 − a 2 a 1 + a 2 ) + 20 b 2 ( a 1 − 1 ) / b 1 = 0 , then λ 2 = − ( 2 + a 1 − a 3 ) and λ 3 = 0 ; this is a saddle-node bifurcation (SNB).
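The SNB condition of Case 1 is linear in a3, so the critical value can be computed in closed form. The sketch below (illustrative parameter values, not from the paper) solves the condition for a3 and confirms on the Jacobian (3.7) that one eigenvalue crosses zero there.

```python
import numpy as np

# Illustrative fixed parameters (assumed)
a1, a2 = 2.0, 1.0
b1, b2, b3 = 1.0, 0.1, 0.2

# Solve (2 - a1*a3 - a3 - a2*a1 + a2) + 20*b2*(a1 - 1)/b1 = 0 for a3
a3 = (2 - a2*a1 + a2 + 20*b2*(a1 - 1)/b1) / (1 + a1)
print("critical a3 =", a3)

# Jacobian (3.7) at E1 with the critical a3
W1 = np.array([
    [-(a1 + 1),          a1 - 1, 0.0],
    [a2 + 1 - 20*b2/b1,  a3 - 1, 0.0],
    [-b3 + b2/b1,        0.0,    -b1],
])
eigs = np.linalg.eigvals(W1)
print(np.min(np.abs(eigs)))  # ~0: a zero eigenvalue, as in a saddle-node
```

At the critical a3 the remaining nonzero eigenvalues are λ1 = −b1 and λ2 = −(2 + a1 − a3), as stated in Case 1.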

We choose the parameter a3 as the bifurcation parameter; fixing the values of the other parameters, the bifurcation diagrams can be drawn as in

Case 2: when ( 2 − a 1 a 3 − a 3 − a 2 a 1 + a 2 ) + 20 b 2 ( a 1 − 1 ) / b 1 > 0 and ( 2 + a 1 − a 3 ) = 0 , then

λ 2 , 3 = ∓ i ( 4 { ( 2 − a 1 a 3 − a 3 − a 2 a 1 + a 2 ) + 20 b 2 ( a 1 − 1 ) / b 1 } ) 1 / 2 / 2

and a Hopf bifurcation (HB) occurs. Choosing b1 as the bifurcation parameter with fixed values of the other parameters gives a picture of this case in the bifurcation diagram, as in
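Case 2 can also be illustrated numerically: setting a3 = a1 + 2 makes the linear term of the quadratic factor in (3.8) vanish, and when the remaining constant term is positive the pair λ2,3 is purely imaginary. The sketch below (assumed parameter values, not from the paper) exhibits such a conjugate pair on the Jacobian (3.7).

```python
import numpy as np

# Illustrative parameters (assumed); a3 = a1 + 2 makes theta1 = 0
a1, a2 = 2.0, 1.0
a3 = a1 + 2
b1, b2, b3 = 1.0, 1.0, 0.2

# Jacobian (3.7) at E1
W1 = np.array([
    [-(a1 + 1),          a1 - 1, 0.0],
    [a2 + 1 - 20*b2/b1,  a3 - 1, 0.0],
    [-b3 + b2/b1,        0.0,    -b1],
])
eigs = np.linalg.eigvals(W1)
print(np.sort_complex(eigs))  # -b1 plus a purely imaginary conjugate pair
```

For these values the pair is λ2,3 = ∓3i, consistent with a Hopf bifurcation at E1.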

Next, we choose the parameter a3 as the bifurcation parameter; fixing the values of the other parameters, the bifurcation diagrams can be drawn as in

In the case of constraints on the control variables, the Pontryagin Maximum Principle serves as a design tool to obtain the best possible trajectory for a dynamical system: it provides a necessary condition that must hold at an optimum, but not (in general) a sufficient one [

The selected measure can be presented as the following forms:

minimize ∅ = (1/2) ∫_{t0}^{T} ∑_{i=1}^{3} ( αi wi² + βi ui² ) dt (5.1)

Subject to:

The controlled system of (3.1) that is given by

ẋ1 = a1(x2 − x1) − x2 − x1 + e1
ẋ2 = a2 x1 − x2 − 20 x1 x3 + x1 + a3 x2 + e2
ẋ3 = 5 x1 x2 − b1 x3 + b2 + x1(x3 − b3) + e3 (5.2)

and the initial and terminal conditions

x i | t 0 = x i 0 , x i | T = x ¯ i , i = 1 , 2 , 3 (5.3)

where:

w i = ( x i − x ¯ i ) and u i = ( e i − e ¯ i ) (5.4)

- α i , β i , i = 1 , 2 , 3 are positive control constants.

- x ¯ is any steady-states of the system as E 1 , E 2 or E 3 that are defined in Equations (3.2)-(3.4).

- e i are the control inputs, determined by the PMP with respect to the optimality measure for system (3.1) near its steady states.

- e ¯ i are the optimal control inputs.

The selected measure, or the objective function (5.1), represents the sum of squares of the deviations of x i from their goal levels x ¯ i and of the control inputs e i from their goal levels e ¯ i , ( i = 1 , 2 , 3 ).

Now, our aim is to keep the system states x i , i = 1 , 2 , 3 , as close as possible over time to their goal levels x ¯ i , and the control inputs e i to their goal levels (optimal controllers) e ¯ i . Let us introduce the following additional state variable as a replacement for the cost function (5.1)

x ˙ * ( t ) = 1 2 ∑ i = 1 3 ( α i w i 2 + β i u i 2 ) (5.5)

with the initial condition x * | t 0 = 0 and the terminal condition x * | T = ∅ .

Then, introduce the co-state variables γ = ( γ 1 , γ 2 , γ 3 , γ * ) c associated with the state variables of system (3.1) and the additional state variable (5.5), respectively. The H.f then takes the form

H = γ* ẋ* + ∑_{i=1}^{3} γi ẋi = (γ*/2) ∑_{i=1}^{3} ( αi wi² + βi ui² ) + γ1 [ a1(x2 − x1) − x2 − x1 + e1 ] + γ2 [ a2 x1 − x2 − 20 x1 x3 + x1 + a3 x2 + e2 ] + γ3 [ 5 x1 x2 − b1 x3 + b2 + x1(x3 − b3) + e3 ] (5.6)

The Hamiltonian equations are given by:

∂ γ * ∂ t = − ∂ H ∂ x * = 0 (5.7)

∂ γ i ∂ t = − ∂ H ∂ x i , i = 1 , 2 , 3

From Equation (5.7), clearly γ * is a constant, so for minimization, we can choose γ * = − 1 [

γ ˙ 1 = α 1 w 1 + γ 1 ( a 1 + 1 ) − γ 2 ( 1 + a 2 − 20 x 3 ) − γ 3 ( 5 x 2 + x 3 − b 3 ) (5.8)

γ ˙ 2 = α 2 w 2 − γ 1 ( a 1 − 1 ) − γ 2 ( a 3 − 1 ) − 5 γ 3 x 1 (5.9)

γ ˙ 3 = α 3 w 3 + 20 γ 2 x 1 − γ 3 ( x 1 − b 1 ) (5.10)

Minimizing the H.f with respect to e i , ∀ i , through the conditions ∂ H / ∂ e i = 0 , we get

e i = e ¯ i + γ i / β i , i = 1 , 2 , 3 (5.11)
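The stationarity condition ∂H/∂e_i = 0 can be verified symbolically. The sketch below (an illustration, not from the paper) takes only the terms of (5.6) that involve a representative e_i, sets γ* = −1 as in the text, and solves for the optimal input, recovering e_i = ē_i + γ_i/β_i.

```python
import sympy as sp

# Symbols mirroring Section 5 for one representative index i;
# the computation is identical for i = 1, 2, 3.
x, xb, e, eb, alpha, beta, gamma = sp.symbols('x xbar e ebar alpha beta gamma')
gamma_star = -1  # gamma* = -1, chosen for minimization as in the text

w, u = x - xb, e - eb
# Only the terms of H in (5.6) that involve e_i matter for dH/de_i
H_i = gamma_star*sp.Rational(1, 2)*(alpha*w**2 + beta*u**2) + gamma*e

e_opt = sp.solve(sp.Eq(sp.diff(H_i, e), 0), e)[0]
print(e_opt)   # ebar + gamma/beta, matching (5.11)
```

Since ∂²H/∂e_i² = −β_i < 0 in H (with γ* = −1), this stationary point indeed extremizes the Hamiltonian in the direction required by the PMP.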

Substituting (5.11) into the controlled system (5.2), together with Equations (5.8)-(5.10), we get the following system of seven nonlinear differential equations

ẋ1 = a1(x2 − x1) − x2 − x1 + ē1 + γ1/β1

ẋ2 = a2 x1 − x2 − 20 x1 x3 + x1 + a3 x2 + ē2 + γ2/β2

ẋ3 = 5 x1 x2 − b1 x3 + b2 + x1(x3 − b3) + ē3 + γ3/β3

x ˙ * = 1 2 ∑ i = 1 3 ( α i ( x i − x ¯ i ) 2 + ( γ i 2 / β i ) ) (5.12)

γ ˙ 1 = α 1 ( x 1 − x ¯ 1 ) + γ 1 ( a 1 + 1 ) − γ 2 ( 1 + a 2 − 20 x 3 ) − γ 3 ( 5 x 2 + x 3 − b 3 )

γ ˙ 2 = α 2 ( x 2 − x ¯ 2 ) − γ 1 ( a 1 − 1 ) − γ 2 ( a 3 − 1 ) − 5 γ 3 x 1

γ ˙ 3 = α 3 ( x 3 − x ¯ 3 ) + 20 γ 2 x 1 − γ 3 ( x 1 − b 1 )

with the following boundary conditions: x i | t 0 = x i 0 , x i | T = x ¯ i , γ i | T = 0 , i = 1 , 2 , 3 .
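The state-costate equations with these boundary conditions form a two-point boundary-value problem. The paper solves it in Maple; the sketch below is an alternative numerical treatment using SciPy's `solve_bvp`, under assumed parameter values, with the goal state x̄ = E1, ē = 0, and a start near the goal (all assumptions for illustration). The additional variable x* only accumulates cost, so it is omitted from the BVP; the six boundary conditions used are x(t0) = x0 and γ(T) = 0.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Illustrative parameters and goals (assumed, not from the paper)
a1, a2, a3 = 2.0, 1.0, 0.5
b1, b2, b3 = 1.0, 0.1, 0.2
alpha = beta = np.ones(3)
xbar = np.array([0.0, 0.0, b2/b1])      # goal state E1
x0 = xbar + np.array([0.1, 0.1, 0.1])   # start near the goal
T = 1.0

def rhs(t, y):
    """State-costate dynamics of (5.12) with e_i = ebar_i + gamma_i/beta_i, ebar = 0."""
    x1, x2, x3, g1, g2, g3 = y
    return np.vstack([
        a1*(x2 - x1) - x2 - x1 + g1/beta[0],
        a2*x1 - x2 - 20*x1*x3 + x1 + a3*x2 + g2/beta[1],
        5*x1*x2 - b1*x3 + b2 + x1*(x3 - b3) + g3/beta[2],
        alpha[0]*(x1 - xbar[0]) + g1*(a1 + 1) - g2*(1 + a2 - 20*x3) - g3*(5*x2 + x3 - b3),
        alpha[1]*(x2 - xbar[1]) - g1*(a1 - 1) - g2*(a3 - 1) - 5*g3*x1,
        alpha[2]*(x3 - xbar[2]) + 20*g2*x1 - g3*(x1 - b1),
    ])

def bc(ya, yb):
    # Initial states fixed; transversality conditions gamma(T) = 0
    return np.concatenate([ya[:3] - x0, yb[3:]])

t = np.linspace(0.0, T, 50)
y_guess = np.zeros((6, t.size))
y_guess[:3] = x0[:, None]
sol = solve_bvp(rhs, bc, t, y_guess)
print(sol.status, sol.y[3:, -1])  # 0 on success, with gamma(T) ~ 0
```

As in the paper's numerical results, the co-state variables vanish at the final time T, which by (5.11) drives the control inputs back to their goal levels ē_i.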

In the following, we present some numerical solutions of the system in Equations (5.12), displaying how the system states converge to the goal state in different cases, and how the co-state variables vanish at the final time T.

- The optimal control to the stationary state E 1 = ( 0 , 0 , b 2 / b 1 = 5 ) is shown in

- The optimal control to the stationary state E 3 = ( x 1 , A x 1 , B ) is shown in

A = (a1 + 1)/(a1 − 1), a1 ≠ 1, B = (a2 + 1 + A(a3 − 1))/20, and

x1 = ( −(B − b3) ± [ (B − b3)² − 20A(b2 − B b1) ]^{1/2} ) / (10A)

Many studies can be carried out on the Lorenz-Rössler model, but in this paper we have focused on the bifurcations and the optimal control problem of the system. The bifurcation analysis at the equilibrium state ( 0 , 0 , b 2 / b 1 ) showed that a saddle-node bifurcation and a Hopf bifurcation can occur under certain conditions. Bifurcation diagrams verified these cases through examples displayed graphically for selected parameters. The Pontryagin Maximum Principle was used to solve the optimal control problem. The optimal control inputs were derived analytically and found to be functions of the co-state variables, which vanish when the system reaches the desired state. Analytical methods were used to derive the necessary conditions, while the nonlinear differential equations of the optimally controlled system were solved numerically with the mathematical software Maple, and some illustrative solutions are shown graphically.

The authors declare no conflicts of interest regarding the publication of this paper.

Alwan, S.M., Al-Mahdi, A.M. and Odhah, O.H. (2020) Optimal Control and Bifurcation Issues for Lorenz-Rössler Model. Open Journal of Optimization, 9, 71-85. https://doi.org/10.4236/ojop.2020.93006