For a class of nonlinear systems with immeasurable states whose outputs are sampled asynchronously, an output feedback distributed model predictive control algorithm is proposed by introducing a state observer. It is proved that the error between the estimated states and the actual system states is bounded, and that the estimated states of the closed-loop system are ultimately bounded in a region containing the origin. As a result, the states of the actual system are ultimately bounded. A simulation example verifies the effectiveness of the proposed distributed control method.

Traditional process control systems simply combine the measurement sensors with the control actuators to ensure the stability of the closed-loop system. Although this paradigm of process control has been successful, its computational burden is large and the resulting performance is often not good enough [

Model predictive control (MPC) is a receding horizon control approach that can handle constraints on the system's inputs and states during the design of the optimizing controller. It employs feedback correction and rolling optimization, and has a strong ability to deal with constraints and dynamic performance requirements [

There are many research results about distributed MPC design at present. In literature [

These references are obtained under the assumption that the system states can be measured continuously, and systems whose states are immeasurable are not considered. However, immeasurable states often arise in practice. In literature [

On the basis of the above references, this paper considers a class of nonlinear systems whose states are immeasurable. By introducing a state observer and using output feedback, under the assumption that the outputs of the system are sampled asynchronously, an output feedback distributed model predictive control algorithm is designed. The ultimate boundedness of the estimated states and the boundedness of the error between the estimated states and the actual system states are proved, from which it follows that the states of the actual system are ultimately bounded. Thus the stability of the closed-loop system is guaranteed, the performance of the system is improved, and the computational burden is reduced.

This paper is organized as follows. The second section presents the preliminaries. In the third section, the state observer is designed and its stability is analyzed. The fourth section designs a Lyapunov-function-based controller that ensures the asymptotic stability of the nominal observer. In the fifth section, an output feedback distributed model predictive control algorithm is proposed and the stability of the closed-loop system is proved. A simulation example is provided in the sixth section. The conclusion is given in Section 7.

In this paper, the operator | ⋅ | denotes the Euclidean norm. The symbol x ˙ ( t ) denotes the time derivative of x ( t ) . The symbol Ω r denotes the level set Ω r = { x ∈ R n x : V ( x ) ≤ r } , where V is a scalar positive definite, continuously differentiable function with V ( 0 ) = 0 and r is a positive constant. The definitions and lemmas used in this paper are as follows:

Definition 1 [

Definition 2 [

Lemma 1 [

| x ( t ) | ≤ β ( | x ( t 0 ) | , t − t 0 ) , ∀ x ( t 0 ) ∈ D 0 , ∀ t ≥ t 0 ≥ 0 (1)

Then, there is a continuously differentiable function V : [ 0, ∞ ) × D 0 → R that satisfies the inequalities

α 1 ( | x | ) ≤ V ( t , x ) ≤ α 2 ( | x | ) ∂ V ∂ t + ∂ V ∂ x f ( t , x ) ≤ − α 3 ( | x | ) | ∂ V ∂ x | ≤ α 4 ( | x | ) (2)

where α 1 , α 2 , α 3 and α 4 are class K functions defined on [ 0, r 0 ] . If the system is autonomous, V can be chosen independent of t.

Lemma 2 [

| x ( t ) | ≤ β ( | x ( t 0 ) | , t − t 0 ) , ∀ t ≥ t 0 ≥ 0, ∀ | x ( t 0 ) | < c (3)

Consider a class of nonlinear systems described as follows:

x ˙ ( t ) = f ( x ( t ) , u 1 ( t ) , u 2 ( t ) , w ( t ) ) y ( t ) = h ( x ) + υ ( t ) (4)

where x ( t ) ∈ R n x denotes the state vector, which is immeasurable; u 1 ( t ) ∈ R n u 1 and u 2 ( t ) ∈ R n u 2 are control inputs, restricted to the nonempty convex sets U 1 ⊆ R n u 1 and U 2 ⊆ R n u 2 ; w ( t ) ∈ R n w denotes the disturbance vector; y ( t ) ∈ R n y is the measured output; and v ( t ) ∈ R n v is a measurement noise vector. The disturbance and noise vectors are bounded, i.e. w ∈ W , v ∈ V where

W = { w ∈ R n w : | w | ≤ c 1 , c 1 > 0 } V = { v ∈ R n v : | v | ≤ c 2 , c 2 > 0 } (5)

where c 1 and c 2 are known positive real numbers. We assume that f and h are locally Lipschitz vector functions with f ( 0 , 0 , 0 , 0 ) = 0 and h ( 0 ) = 0 , so the origin is an equilibrium point of system (4). We also assume that the output y of system (4) is sampled asynchronously, with measurement times { t k ≥ 0 } such that t k = t 0 + k Δ , k = 0 , 1 , ⋯ , where t 0 is the initial time and Δ a fixed time interval. In general, arbitrarily large periods of time in which the output cannot be measured are possible, and then the stability properties of the system cannot be guaranteed. In order to study the stability properties in a deterministic framework, we assume that there exists an upper bound T m on the interval between two successive output measurements, i.e. m a x { t k + 1 − t k } ≤ T m . This assumption is reasonable from a process control perspective.
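As an illustration, such an asynchronous measurement sequence can be generated numerically. The sketch below assumes (as one way to reconcile the fixed interval Δ with asynchrony) that measurements arrive at integer multiples of Δ with some samples skipped, but never more than a fixed number in a row, so that the bound T m holds; the function name and parameters are hypothetical.

```python
import random

def async_sample_times(t0, delta, t_end, max_gap_steps, seed=0):
    """Generate asynchronous measurement times on the grid t0 + k*delta,
    skipping up to max_gap_steps - 1 samples in a row, so that
    max(t_{k+1} - t_k) <= T_m = max_gap_steps * delta."""
    rng = random.Random(seed)
    times = [t0]
    while times[-1] < t_end:
        times.append(times[-1] + rng.randint(1, max_gap_steps) * delta)
    return times

times = async_sample_times(t0=0.0, delta=0.02, t_end=1.0, max_gap_steps=3)
gaps = [b - a for a, b in zip(times, times[1:])]
```

Every gap in such a sequence is at most T m = 3 Δ, which is exactly the deterministic bound assumed above.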

Remark 1: Generally, distributed control systems are formulated for controlled systems that are decoupled or partially decoupled. Here, however, we consider a strongly coupled process model with two sets of control inputs, which is a common situation in process control.

The objective of this paper is to propose an output feedback control architecture that uses a state observer when the states are immeasurable. The state observer has the potential to maintain closed-loop stability and improve closed-loop performance. We design two Lyapunov-based MPC controllers (LMPC1 and LMPC2) to compute u 1 and u 2 . The structure of the system is as follows:

Remark 2: The procedure of the system shown in the figure is as follows:

1) When the states are immeasurable, the observer is used to estimate the current state x.

2) LMPC2 computes the optimal input trajectory of u 2 based on the estimated state x ^ and sends the optimal input trajectory to process and LMPC1.

3) Once LMPC1 receives the optimal input trajectory of u 2 , it evaluates the optimal input trajectory of u 1 based on x ^ and the optimal input trajectory of u 2 .

4) LMPC1 sends the optimal input trajectory to process.

5) At the next sampling time, return to step 1).

Define the nominal system of system (4) as follows:

x ˙ * ( t ) = f ( x * ( t ) , u 1 ( t ) , u 2 ( t ) , 0 ) y * ( t ) = h ( x * ) (6)

where x * ∈ R n x denotes the state vector of nominal systems, y * ∈ R n y is the noise free output.

Assume that there exists a deterministic nonlinear observer for the nominal system (6):

x ^ ˙ * = F ( x ^ * ( t ) , u 1 ( t ) , u 2 ( t ) , y * ( t ) ) (7)

such that x ^ * asymptotically converges to x * for all x * , x ^ * ∈ R n x , where x ^ * ∈ R n x denotes the state vector of the nominal observer. From Lemma 2, there exists a class K L function β such that:

| x * ( t ) − x ^ * ( t ) | ≤ β ( | x * ( t 0 ) − x ^ * ( t 0 ) | , t − t 0 ) (8)

We assume that F is a locally Lipschitz vector function. Note that the convergence property of observer (7) is obtained based on nominal system (6) with continuous measured output.

From the Lipschitz property of f and Definition 1, there exists a positive constant M 1 such that:

| f ( x * , u 1 , u 2 ,0 ) | ≤ M 1 (9)

for all x * ∈ R n x .

The actual observer is obtained when the deterministic observer is applied to system (4). In the presence of the state disturbance and measurement noise, the observer of system (4) is described as follows:

x ^ ˙ = F ( x ^ ( t ) , u 1 ( t ) , u 2 ( t ) , y ( t k ) ) (10)

where y ( t k ) is the actual sampled measurement at t k , for ∀ t ≥ t k .

In this subsection, the error between the actual system states and the estimated states is studied under state disturbance and measurement noise when observer (10) is applied to system (4).

Theorem 1: Consider observer (10) with output measurement y ( t k ) starting from the initial condition x ^ ( t k ) . The error between the estimated state x ^ ( t ) and the actual state x ( t ) is bounded:

| e ( t ) | = | x ( t ) − x ^ ( t ) | ≤ β ( | e ( t k ) | , t − t k ) + δ 1 ( t − t k ) + δ 2 ( t − t k ) (11)

for ∀ t ≥ t k where e ( t k ) = x ( t k ) − x ^ ( t k ) is the initial error of the states, and

δ 1 ( τ ) = l 2 c 1 l 1 ( e l 1 τ − 1 ) δ 2 ( τ ) = q 2 q 1 ( b M 1 N Δ + c 2 ) ( e q 1 τ − 1 ) (12)

where l 1 and l 2 , q 1 and q 2 , b are Lipschitz constants associated with f, F and h, respectively, and N is the predictive horizon.

Proof: For ∀ t ≥ t k , from (8) and x ^ * ( t k ) = x ^ ( t k ) , x * ( t k ) = x ( t k ) , it can be obtained that:

| x * ( t ) − x ^ * ( t ) | ≤ β ( | x * ( t k ) − x ^ * ( t k ) | , t − t k ) = β ( | x ( t k ) − x ^ ( t k ) | , t − t k ) = β ( | e ( t k ) | , t − t k ) (13)

Based on the Lipschitz property of f and Definition 1, there exist constants l 1 , l 2 , such that:

| x ˙ ( t ) − x ˙ * ( t ) | = | f ( x ( t ) , u 1 ( t ) , u 2 ( t ) , w ( t ) ) − f ( x * ( t ) , u 1 ( t ) , u 2 ( t ) ,0 ) | ≤ l 1 | x ( t ) − x * ( t ) | + l 2 | w ( t ) | (14)

Since x * ( t k ) = x ( t k ) (that is, x * ( t k ) − x ( t k ) = 0 ) and | w ( t ) | ≤ c 1 , integrating the above inequality from t k to t yields the Gronwall-type bound:

| x ( t ) − x * ( t ) | ≤ l 2 c 1 l 1 ( e l 1 ( t − t k ) − 1 ) = δ 1 ( t − t k ) (15)

From the triangle inequality and inequalities (13), (15), it can be written as:

| x ( t ) − x ^ * ( t ) | ≤ | x ( t ) − x * ( t ) | + | x * ( t ) − x ^ * ( t ) | ≤ β ( | e ( t k ) | , t − t k ) + δ 1 ( t − t k ) , ∀ t ≥ t k (16)

From the Lipschitz property of F and Definition 1, there exist constants q 1 , q 2 satisfying the following inequality:

| x ^ ˙ * ( t ) − x ^ ˙ ( t ) | = | F ( x ^ * ( t ) , u 1 ( t ) , u 2 ( t ) , y * ( t ) ) − F ( x ^ ( t ) , u 1 ( t ) , u 2 ( t ) , y ( t k ) ) | ≤ q 1 | x ^ * ( t ) − x ^ ( t ) | + q 2 | y * ( t ) − y ( t k ) | (17)

for ∀ t ≥ t k . Note that y * ( t ) = h ( x * ( t ) ) , y ( t k ) = h ( x ( t k ) ) + υ ( t k ) , hence:

| y * ( t ) − y ( t k ) | ≤ | h ( x * ( t ) ) − h ( x ( t k ) ) | + | υ ( t k ) | (18)

Due to the Lipschitz property of h and Definition 1, there exists a constant b such that:

| y * ( t ) − y ( t k ) | ≤ b | x * ( t ) − x ( t k ) | + | υ ( t k ) | (19)

Because of x * ( t k ) = x ( t k ) and the boundedness of υ , we can get:

| y * ( t ) − y ( t k ) | ≤ b | x * ( t ) − x * ( t k ) | + c 2 (20)

From (9) and the dynamics of x * , it can be derived that:

| x * ( t ) − x * ( t k ) | ≤ M 1 ( t − t k ) (21)

From (20) and (21), we can get:

| y * ( t ) − y ( t k ) | ≤ b M 1 ( t − t k ) + c 2 (22)

From (17) and (22) and | t − t k | ≤ N Δ , it can be obtained that:

| x ^ ˙ * ( t ) − x ^ ˙ ( t ) | ≤ q 1 | x ^ * ( t ) − x ^ ( t ) | + q 2 b M 1 N Δ + q 2 c 2 , ∀ t ≥ t k (23)

Integrating the above inequality from t k to t and taking into account x ^ * ( t k ) = x ^ ( t k ) , the following inequality is obtained:

| x ^ * ( t ) − x ^ ( t ) | ≤ q 2 q 1 ( b M 1 N Δ + c 2 ) ( e q 1 ( t − t k ) − 1 ) = δ 2 ( t − t k ) (24)

As a result, based on the triangle inequality and the inequalities (16) and (24), it can be written that:

| e ( t ) | ≤ | x ( t ) − x ^ * ( t ) | + | x ^ * ( t ) − x ^ ( t ) | ≤ β ( | e ( t k ) | , t − t k ) + δ 1 ( t − t k ) + δ 2 ( t − t k ) (25)

This completes the proof of the theorem.

Theorem 1 indicates that the upper bound on the estimation error depends on several factors: the initial state error e ( t k ) , the Lipschitz properties of the system and observer dynamics, the sampling time Δ and the prediction horizon N, the bounds c 1 and c 2 on the magnitudes of the disturbance and noise, and the open-loop operation time t − t k of the observer.
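To make the dependence concrete, the two terms δ 1 and δ 2 of (12) can be evaluated directly. The constants below are illustrative placeholders, not values from the paper; the sketch only shows how the bound grows with the open-loop time τ = t − t k .

```python
import math

def delta1(tau, l1, l2, c1):
    # delta_1(tau) = (l2*c1/l1) * (exp(l1*tau) - 1): contribution of the
    # bounded disturbance |w| <= c1 to the estimation error, eq. (12)
    return (l2 * c1 / l1) * (math.exp(l1 * tau) - 1.0)

def delta2(tau, q1, q2, b, M1, N, Delta, c2):
    # delta_2(tau) = (q2/q1)*(b*M1*N*Delta + c2)*(exp(q1*tau) - 1):
    # contribution of the sampled output and the noise |v| <= c2, eq. (12)
    return (q2 / q1) * (b * M1 * N * Delta + c2) * (math.exp(q1 * tau) - 1.0)

# hypothetical Lipschitz constants and bounds, for illustration only
l1, l2, q1, q2, b, M1, c1, c2 = 1.0, 0.5, 1.0, 0.5, 1.0, 2.0, 0.1, 0.05
N, Delta = 6, 0.02
bounds = [delta1(t, l1, l2, c1) + delta2(t, q1, q2, b, M1, N, Delta, c2)
          for t in (0.0, 0.02, 0.06)]
```

Both terms vanish at τ = 0 and grow exponentially with τ, which is why the bound T m on the open-loop operation time matters.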

Remark 3: Because the bound on e ( t ) is a function of the observer's open-loop operation time, and that time is finite, the bound can be restricted to a region, which we denote Ω e . It follows that e ( t ) ∈ Ω e .

We assume that there exists a Lyapunov-based controller u 1 ( t ) = g ( x ^ ( t ) ) which satisfies the input constraints on u 1 for all x ^ inside a given stability region. And the origin of the nominal observer is asymptotically stable with u 2 = 0 . From Lemma 1, this assumption indicates that there exist class K functions α i ( ⋅ ) and a continuous Lyapunov function V for the nominal observer, which satisfy the following inequalities:

α 1 ( | x ^ * | ) ≤ V ( x ^ * ) ≤ α 2 ( | x ^ * | ) ∂ V ( x ^ * ) ∂ x ^ * F ( x ^ * , g ( x ^ * ) , 0 , y * ) ≤ − α 3 ( | x ^ * | ) | ∂ V ( x ^ * ) ∂ x ^ * | ≤ α 4 ( | x ^ * | ) g ( x ^ * ) ∈ U 1 (26)

for ∀ x ^ * ∈ D ⊆ R n x where D is an open neighborhood of the origin. We denote the region Ω ρ ⊆ D as the stability region of the nominal observer under the control law u 1 = g ( x ^ * ) and u 2 = 0 .

By continuity and the local Lipschitz property of F, it is obtained that there exists a positive constant M 2 such that:

| F ( x ^ ( t ) , u 1 ( t ) , u 2 ( t ) , y ( t k ) ) | ≤ M 2 (27)

In addition, due to the Lipschitz property of F, there exist positive constants d 1 , d 2 such that

| F ( x ^ * , u 1 , u 2 , y * ( t ) ) − F ( x ^ 1 * , u 1 , u 2 , y * ( t k ) ) | ≤ d 1 | x ^ * − x ^ 1 * | + d 2 | y * ( t ) − y * ( t k ) | (28)

Because of y * ( t k ) = h ( x * ( t k ) ) , y ( t k ) = h ( x ( t k ) ) + v ( t k ) and x * ( t k ) = x ( t k ) , it can be written that y * ( t k ) = y ( t k ) − v ( t k ) . As a result,

| F ( x ^ * , u 1 , u 2 , y * ( t ) ) − F ( x ^ 1 * , u 1 , u 2 , y * ( t k ) ) | ≤ d 1 | x ^ * − x ^ 1 * | + d 2 | y * ( t ) − y ( t k ) + v ( t k ) | ≤ d 1 | x ^ * − x ^ 1 * | + d 2 ( | y * ( t ) − y ( t k ) | + | v ( t k ) | ) ≤ d 1 | x ^ * − x ^ 1 * | + d 2 ( b M 1 N Δ + c 2 + c 2 ) = d 1 | x ^ * − x ^ 1 * | + d 2 b M 1 N Δ + 2 d 2 c 2 (29)

The controllers LMPC2 and LMPC1 needed in this paper are obtained by solving the following optimization problems.

First we define the optimization problem of LMPC2, which depends on the latest state estimate x ^ ( t k ) . Since LMPC2 has no information about the value of u 1 , it must assume a trajectory for u 1 along the prediction horizon; the Lyapunov-based controller u 1 = g ( x ^ * ) is used for this purpose. It is also used to define a contractive constraint that guarantees a given minimum decrease rate of the Lyapunov function V, so that the stability properties are inherited. LMPC2 obtains the optimal input trajectory of u 2 from the following optimization problem:

m i n u c 2 ∈ P ( Δ ) ∫ t k t k + N Δ ( x ˜ * ( t ) T Q x ˜ * ( t ) + u c 1 ( t ) T Q c 1 u c 1 ( t ) + u c 2 ( t ) T Q c 2 u c 2 ( t ) ) d t (30a)

s .t . x ˜ ˙ * ( t ) = F ( x ˜ * ( t ) , u c 1 ( t ) , u c 2 ( t ) , y * ( t ) ) , ∀ t ∈ [ t k , t k + N Δ ] (30b)

u c 1 ( t ) = g ( x ˜ * ( t k + j Δ ) ) , ∀ t ∈ [ t k + j Δ , t k + ( j + 1 ) Δ ] , j = 0 , ⋯ , N − 1 (30c)

u c 2 ( t ) ∈ U 2 , ∀ t ∈ [ t k , t k + N Δ ] (30d)

x ¯ ˙ * ( t ) = F ( x ¯ * ( t ) , g ( x ¯ * ( t k + j Δ ) ) ,0, y * ( t ) ) , ∀ t ∈ [ t k + j Δ , t k + ( j + 1 ) Δ ] (30e)

x ˜ * ( t k ) = x ¯ * ( t k ) = x ^ ( t k ) (30f)

V ( x ˜ * ( t ) ) ≤ V ( x ¯ * ( t ) ) , ∀ t ∈ [ t k , t k + N s Δ ] (30g)

where P ( Δ ) is the family of piecewise-constant functions with period Δ ; Q , Q c 1 and Q c 2 are positive definite weight matrices; and N s is the control horizon, the smallest integer satisfying T m ≤ N s Δ . To take full advantage of the nominal model in the computation of the control action, we take N ≥ N s . The optimal solution of optimization problem (30) is denoted by u c 2 ⋆ ( t | t k ) , t ∈ [ t k , t k + N Δ ] . Once computed, the optimal input trajectory of LMPC2 is sent to LMPC1 and to its corresponding actuators.

Note that the constraints (30e)-(30f) generate a reference state trajectory (and hence a reference Lyapunov function trajectory). The constraint (30g) guarantees that, when u 1 = g ( x ^ * ) and u 2 = u c 2 ⋆ ( t ) are applied, the Lyapunov function decreases from t k to t k + N s Δ at least as much as along the reference trajectory.
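As a rough numerical illustration of how problem (30) could be set up, the sketch below uses a hypothetical scalar nominal model, Euler discretization, and scipy's generic constrained optimizer in place of a dedicated MPC solver. The functions F, g and V here are toy stand-ins, not the paper's process model, and the stage cost is a simplified version of (30a).

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

def F(x, u1, u2):
    return -x + u1 + u2            # hypothetical scalar nominal dynamics

def g(x):
    return float(np.clip(-0.5 * x, -1.0, 1.0))   # Lyapunov-based law, |u1| <= 1

def V(x):
    return np.asarray(x) ** 2      # Lyapunov function V(x) = x^2

Delta, N, Ns = 0.02, 6, 3          # sampling time, prediction/control horizons
sub = 10                           # Euler sub-steps per sampling interval

def rollout(x0, u2_seq):
    """Integrate the nominal model over N intervals; u1 is g applied
    sample-and-hold on the predicted state, as in (30c)/(30e), and u2 is
    piecewise constant. Returns states at the interval boundaries."""
    xs, x = [x0], x0
    for j in range(N):
        u1 = g(x)
        for _ in range(sub):
            x = x + (Delta / sub) * F(x, u1, u2_seq[j])
        xs.append(x)
    return np.array(xs)

def lmpc2(x_hat):
    xbar = rollout(x_hat, np.zeros(N))           # reference trajectory (30e): u2 = 0

    def cost(u2):                                # simplified stage cost, cf. (30a)
        xs = rollout(x_hat, u2)
        return float(np.sum(xs[:-1] ** 2 + u2 ** 2) * Delta)

    # contractive constraint (30g): V(x_tilde(t)) <= V(x_bar(t)) on [t_k, t_k + Ns*Delta]
    contract = NonlinearConstraint(
        lambda u2: V(xbar[1:Ns + 1]) - V(rollout(x_hat, u2)[1:Ns + 1]),
        0.0, np.inf)
    res = minimize(cost, np.zeros(N), bounds=[(-1.0, 1.0)] * N,
                   constraints=[contract])
    return res.x

u2_star = lmpc2(0.8)               # optimal piecewise-constant u2 from x_hat(t_k)
```

Note that u 2 = 0 is always feasible (the candidate then coincides with the reference), which mirrors the feasibility argument behind constraint (30g).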

The optimization problem of LMPC1 depends on x ^ ( t k ) and the optimal solution u c 2 ⋆ . LMPC1 obtains the optimal input trajectory of u 1 from the following optimization problem:

m i n u c 1 ∈ P ( Δ ) ∫ t k t k + N Δ ( x ⌣ * ( t ) T Q x ⌣ * ( t ) + u c 1 ( t ) T Q c 1 u c 1 ( t ) + u c 2 ( t ) T Q c 2 u c 2 ( t ) ) d t (31a)

s .t . x ⌣ ˙ * ( t ) = F ( x ⌣ * ( t ) , u c 1 ( t ) , u c 2 ( t ) , y * ( t ) ) , ∀ t ∈ [ t k , t k + N Δ ] (31b)

x ˜ ˙ * ( t ) = F ( x ˜ * ( t ) , g ( x ˜ * ( t k + j Δ ) ) , u c 2 ( t ) , y * ( t ) ) , ∀ t ∈ [ t k + j Δ , t k + ( j + 1 ) Δ ] , j = 0 , ⋯ , N − 1 (31c)

u c 2 ( t ) = u c 2 ⋆ ( t | t k ) , ∀ t ∈ [ t k , t k + N Δ ] (31d)

u c 1 ( t ) ∈ U 1 , ∀ t ∈ [ t k , t k + N Δ ] (31e)

x ⌣ * ( t k ) = x ˜ * ( t k ) = x ^ ( t k ) (31f)

V ( x ⌣ * ( t ) ) ≤ V ( x ˜ * ( t ) ) , ∀ t ∈ [ t k , t k + N s Δ ] (31g)

The optimal solution to this optimization problem is denoted by u c 1 ⋆ ( t | t k ) , t ∈ [ t k , t k + N Δ ] . By imposing the constraint (30g) and (31g), we can prove that the proposed distributed model predictive control architecture inherits the stability properties of Lyapunov-based controller g ( x ^ ) . The control inputs are defined as follows

u i = u c i ⋆ ( t | t k ) , ∀ t ∈ [ t k , t k + 1 ] , i = 1 , 2 (32)

Note that the actuators apply the last computed optimal input trajectories between two successive state estimates.

In this subsection, we will prove that the proposed distributed control architecture inherits the stability of the Lyapunov-based controller g ( x ^ ) . This property is described by Theorem 2 below. In order to present the theorem, we need the following propositions.

Proposition 1: Consider the trajectory x ¯ * of nominal observer (7) with the Lyapunov-based controller g ( x ^ * ) applied in a sample-and-hold fashion and u 2 = 0 . Let N , Δ , ε s > 0 and ρ > ρ s > 0 satisfy

− α 3 ( α 2 − 1 ( ρ s ) ) + α 4 ( α 1 − 1 ( ρ ) ) ( d 1 M 2 N Δ + d 2 b M 1 N Δ + 2 d 2 c 2 ) ≤ − ε s / Δ (33)

Then, if ρ m < ρ where

ρ m = max { V ( x ¯ * ( t + Δ ) ) : V ( x ¯ * ( t ) ) ≤ ρ s } (34)

and x ¯ * ( t 0 ) ∈ Ω ρ , then the following holds:

V ( x ¯ * ( t k ) ) ≤ m a x { V ( x ¯ * ( t 0 ) ) − k ε s , ρ m } (35)

Proof: The derivative of the Lyapunov function along the trajectory x ¯ * ( t ) of nominal observer is:

V ˙ ( x ¯ * ( t ) ) = ∂ V ∂ x ¯ F ( x ¯ * ( t ) , g ( x ¯ * ( t k ) ) ,0, y * ( t ) ) , t ∈ [ t k , t k + 1 ] (36)

Taking into account (26), it is obtained that:

V ˙ ( x ¯ * ( t ) ) = ∂ V ∂ x ¯ F ( x ¯ * ( t ) , g ( x ¯ * ( t k ) ) ,0, y * ( t ) ) + ∂ V ∂ x ¯ F ( x ¯ * ( t k ) , g ( x ¯ * ( t k ) ) ,0, y * ( t k ) ) − ∂ V ∂ x ¯ F ( x ¯ * ( t k ) , g ( x ¯ * ( t k ) ) ,0, y * ( t k ) ) ≤ − α 3 ( x ¯ * ( t k ) ) + ∂ V ∂ x ¯ [ F ( x ¯ * ( t ) , g ( x ¯ * ( t k ) ) ,0, y * ( t ) ) − F ( x ¯ * ( t k ) , g ( x ¯ * ( t k ) ) ,0, y * ( t k ) ) ] (37)

From (26) and ρ > ρ s > 0 we have

− α 3 ( x ¯ * ( t k ) ) ≤ − α 3 ( α 2 − 1 ( ρ s ) ) ∂ V ∂ x ¯ ≤ α 4 ( α 1 − 1 ( ρ ) ) (38)

for ∀ x ¯ * ( t k ) ∈ Ω ρ / Ω ρ s . Substituting (29) and (38) into (37), it can be written as:

V ˙ ( x ¯ * ( t ) ) ≤ − α 3 ( α 2 − 1 ( ρ s ) ) + α 4 ( α 1 − 1 ( ρ ) ) × [ d 1 | x ¯ * ( t ) − x ¯ * ( t k ) | + d 2 b M 1 N Δ + 2 d 2 c 2 ] (39)

From (27) and the continuity of x ¯ * ( t ) , the following inequality is obtained:

| x ¯ * ( t ) − x ¯ * ( t k ) | ≤ M 2 N Δ , t ∈ [ t k , t k + 1 ] (40)

In consequence, for all initial states x ¯ * ( t k ) ∈ Ω ρ / Ω ρ s , the bound of the derivative of Lyapunov function is derived as:

V ˙ ( x ¯ * ( t ) ) ≤ − α 3 ( α 2 − 1 ( ρ s ) ) + α 4 ( α 1 − 1 ( ρ ) ) × ( d 1 M 2 N Δ + d 2 b M 1 N Δ + 2 d 2 c 2 ) , ∀ t ∈ [ t k , t k + 1 ] (41)

If condition (33) is satisfied, the following inequality is true:

V ˙ ( x ¯ * ( t ) ) ≤ − ε s / Δ , x ¯ * ( t k ) ∈ Ω ρ / Ω ρ s (42)

Integrating the above inequality on t ∈ [ t k , t k + 1 ] , we get:

V ( x ¯ * ( t k + 1 ) ) ≤ V ( x ¯ * ( t k ) ) − ε s

V ( x ¯ * ( t ) ) ≤ V ( x ¯ * ( t k ) ) , ∀ t ∈ [ t k , t k + 1 ] (43)

The inequalities above indicate that the observer (7) reaches Ω ρ s if it starts from Ω ρ / Ω ρ s and Δ is sufficiently small. Applying the inequalities recursively, if x ¯ * ( t 0 ) ∈ Ω ρ / Ω ρ s , there exists k 1 > 0 such that x ¯ * ( t k 1 ) ∈ Ω ρ s , x ¯ * ( t k ) ∈ Ω ρ / Ω ρ s for all k ≤ k 1 , and V ( x ¯ * ( t k ) ) ≤ V ( x ¯ * ( t 0 ) ) − k ε s . Once the estimated state converges to Ω ρ s ⊂ Ω ρ m (or starts there), it stays inside Ω ρ m for all times; this holds by the definition of ρ m : if x ¯ * ( t k ) ∈ Ω ρ s , then x ¯ * ( t k + 1 ) ∈ Ω ρ m . This proves the conclusion of Proposition 1.

Proposition 1 guarantees that the observer (7) is ultimately bounded in Ω ρ m , if it is under the control law u 1 = g ( x ^ ) , u 2 = 0 and starts from Ω ρ .

Remark 4: Compared with literature [

Proposition 2 [

V ( x ^ ) ≤ V ( x ^ * ) + G ( | x ^ − x ^ * | ) (44)

for all x ^ , x ^ * ∈ Ω ρ , where G ( x ) = α 4 ( α 1 − 1 ( ρ ) ) x + M x 2 with M > 0 .

Proposition 2 bounds the difference between the magnitudes of Lyapunov function of nominal estimated states and actual estimated states in Ω ρ .

In Theorem 2 below, we prove that the distributed MPC design (30)-(31) guarantees that the estimated states of observer (10) are ultimately bounded.

Theorem 2: Consider observer (10) with the output feedback distributed MPC (30)-(31) based on a controller g ( x ^ ) that satisfies condition (26). Suppose conditions (33), (34) and the following inequality

− N s ε s + G ( δ 2 ( N s Δ ) ) < 0 (45)

are satisfied, where N s is the smallest integer satisfying N s Δ ≥ T m . If x ^ ( t 0 ) ∈ Ω ρ , then x ^ is ultimately bounded in Ω ρ n ⊆ Ω ρ where

ρ n = ρ m + G ( δ 2 ( N s Δ ) ) (46)

Proof: In order to prove that the closed-loop system is ultimately bounded in a region that contains the origin, we need to prove that V ( x ^ ( t k ) ) is a decreasing sequence of values with a lower bound.

First, we prove the stability results of Theorem 2 for the case t k + 1 − t k = T m with T m = N s Δ for all k. This is the worst case, in which LMPC1 and LMPC2 must operate in open loop for the maximum amount of time. x ¯ * ( t k + 1 ) is obtained from the nominal observer (7) starting from x ^ ( t k ) under the Lyapunov-based controller u 1 = g ( x ^ * ) applied in a sample-and-hold fashion and u 2 = 0 . From Proposition 1 and t k + 1 = t k + N s Δ , it is obtained that

V ( x ¯ * ( t k + 1 ) ) ≤ m a x { V ( x ¯ * ( t k ) ) − N s ε s , ρ m } (47)

From the constraints of (30g) and (31g), we can get

V ( x ⌣ * ( t ) ) ≤ V ( x ˜ * ( t ) ) ≤ V ( x ¯ * ( t ) ) , ∀ t ∈ [ t k , t k + N s Δ ] (48)

From inequalities (47) and (48) and x ¯ * ( t k ) = x ˜ * ( t k ) = x ⌣ * ( t k ) = x ^ ( t k ) , it follows that, when x ^ ( t ) ∈ Ω ρ (this point is proved below), the following inequality holds

V ( x ⌣ * ( t k + 1 ) ) ≤ m a x { V ( x ^ ( t k ) ) − N s ε s , ρ m } (49)

Based on Proposition 2, we obtain the following inequality

V ( x ^ ( t k + 1 ) ) ≤ V ( x ⌣ * ( t k + 1 ) ) + G ( | x ⌣ * ( t k + 1 ) − x ^ ( t k + 1 ) | ) (50)

The following upper bound of the error between x ^ ( t ) and x ⌣ * ( t ) is obtained by applying the inequality (24)

| x ⌣ * ( t k + 1 ) − x ^ ( t k + 1 ) | ≤ δ 2 ( N s Δ ) (51)

From inequalities (49), (50) and (51), V ( x ^ ( t k + 1 ) ) can be written as

V ( x ^ ( t k + 1 ) ) ≤ V ( x ⌣ * ( t k + 1 ) ) + G ( δ 2 ( N s Δ ) ) ≤ m a x { V ( x ^ ( t k ) ) − N s ε s , ρ m } + G ( δ 2 ( N s Δ ) ) = m a x { V ( x ^ ( t k ) ) − N s ε s + G ( δ 2 ( N s Δ ) ) , ρ m + G ( δ 2 ( N s Δ ) ) } (52)

From the condition (45) and inequality (52), there exists δ > 0 satisfying the following inequality

V ( x ^ ( t k + 1 ) ) ≤ m a x { V ( x ^ ( t k ) ) − δ , ρ n } (53)

This indicates that V ( x ^ ( t k + 1 ) ) ≤ V ( x ^ ( t k ) ) if x ^ ( t k ) ∈ Ω ρ / Ω ρ n , and V ( x ^ ( t k + 1 ) ) ≤ ρ n if x ^ ( t k ) ∈ Ω ρ n .

The upper bound on the error between the Lyapunov function values of the actual observer state x ^ ( t ) and the nominal observer state x ⌣ * ( t ) is a strictly increasing function of time (due to the definitions of δ 2 and G in inequality (24) and Proposition 2), so inequality (53) implies that

V ( x ^ ( t ) ) ≤ m a x { V ( x ^ ( t k ) ) , ρ n } , ∀ t ∈ [ t k , t k + 1 ] (54)

Using inequality (54), the closed-loop trajectories of observer (10) are proved to remain in Ω ρ under the proposed output feedback distributed MPC when x ^ ( t 0 ) ∈ Ω ρ . Furthermore, using inequality (53), when x ^ ( t 0 ) ∈ Ω ρ , the estimated states of observer (10) satisfy

lim sup t → ∞ V ( x ^ ( t ) ) ≤ ρ n (55)

So x ^ ( t ) ∈ Ω ρ , ∀ t , and x ^ ( t ) is ultimately bounded in Ω ρ n .

Second, we extend the result to the general case, t k + 1 − t k ≤ T m and T m ≤ N s Δ , which implies t k + 1 − t k ≤ N s Δ . Because δ 2 and G are strictly increasing functions and G is convex, it can similarly be proved that inequality (53) still holds. This implies that the stability results of Theorem 2 hold.
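The contraction mechanism of inequality (53) can be seen by iterating its worst case directly: V decreases by at least δ per measurement interval until it enters Ω ρ n , and then stays there. The constants below are illustrative placeholders.

```python
def worst_case_V(V0, delta, rho_n, steps):
    """Iterate the worst case of inequality (53):
    V_{k+1} = max(V_k - delta, rho_n)."""
    V, traj = V0, [V0]
    for _ in range(steps):
        V = max(V - delta, rho_n)
        traj.append(V)
    return traj

# starting outside Omega_rho_n, V contracts and settles at rho_n
traj = worst_case_V(V0=10.0, delta=1.5, rho_n=2.0, steps=10)
```

The sequence is non-increasing and reaches the floor ρ n after finitely many steps, which is exactly the ultimate boundedness claimed by Theorem 2.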

Corollary: Since x ^ ( t ) is ultimately bounded, i.e. x ^ ( t ) ∈ Ω ρ n when x ^ ( t 0 ) ∈ Ω ρ with ρ n < ρ , and since e ( t ) ∈ Ω e and x ( t ) = x ^ ( t ) + e ( t ) , it can be obtained that

x ^ ( t 0 ) + e ( t 0 ) ∈ Ω ρ + Ω e ⇒ x ^ ( t ) + e ( t ) ∈ Ω ρ n + Ω e , ∀ ρ n < ρ (56)

That is

x ( t 0 ) ∈ Ω ρ + Ω e ⇒ x ( t ) ∈ Ω ρ n + Ω e , ∀ ρ n < ρ (57)

So the state x ( t ) of the system is ultimately bounded.

Remark 5: The proposed output feedback distributed MPC can be extended to multiple LMPC controllers using a one-directional sequential communication strategy (i.e. LMPCk sends information to LMPCk − 1, k = 2, 3, ⋯). Each LMPC sends its own trajectory, together with all the trajectories received from its predecessors, to its successor (i.e. LMPCk sends both its trajectory and the trajectories received from LMPCk + 1 to LMPCk − 1).

Remark 6: The implementation strategy of the output feedback distributed model predictive control proposed in this paper is as follows

1) The observer is used to estimate the current state x ( t k ) .

2) LMPC2 computes the optimal input trajectory of u 2 based on the estimated state x ^ ( t k ) and sends the optimal input trajectory to its actuators and LMPC1.

3) Once LMPC1 receives the optimal input trajectory of u 2 , it evaluates the optimal input trajectory of u 1 based on x ^ ( t k ) and the optimal input trajectory of u 2 . If the optimal input trajectory of u 2 cannot be received by LMPC1, a zero trajectory for u 2 is used in the evaluation of LMPC1.

4) LMPC1 sends the optimal input trajectory to its actuators.

5) At the next sampling time, set k ← k + 1 and return to step 1).
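The five steps above can be sketched as a coordination skeleton. All interfaces here are hypothetical placeholders for the observer (10), the LMPC2/LMPC1 optimizations (30)-(31), and the plant; only the sequencing of Remark 6 is shown.

```python
def run_output_feedback_dmpc(x_hat0, t_meas, observer_step,
                             lmpc2_solve, lmpc1_solve, apply_to_plant):
    """Skeleton of the implementation strategy in Remark 6."""
    x_hat = x_hat0
    for tk, tk1 in zip(t_meas, t_meas[1:]):
        # 1) the observer estimates the current state from the output at t_k
        x_hat = observer_step(x_hat, tk)
        # 2) LMPC2 computes u2* from x_hat and sends it to LMPC1
        u2_traj = lmpc2_solve(x_hat)
        # 3) LMPC1 computes u1* from x_hat and u2*; a zero trajectory for
        #    u2 is used if u2* was not received
        u1_traj = lmpc1_solve(x_hat, u2_traj if u2_traj is not None else 0.0)
        # 4)-5) both trajectories are applied until the next measurement
        apply_to_plant(u1_traj, u2_traj, tk, tk1)
    return x_hat

# toy placeholders: a contracting "observer" and trivial controllers
final = run_output_feedback_dmpc(
    x_hat0=1.0,
    t_meas=[0.0, 0.02, 0.06, 0.08],
    observer_step=lambda x, t: 0.5 * x,
    lmpc2_solve=lambda x: -0.1 * x,
    lmpc1_solve=lambda x, u2: -0.2 * x,
    apply_to_plant=lambda u1, u2, t0, t1: None,
)
```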

In order to verify the effectiveness of the proposed output feedback distributed model predictive control method, we apply it to a three-vessel process consisting of two continuously stirred tank reactors and a flash tank separator [

d x A 1 d t = F 10 V 1 ( x A 10 − x A 1 ) + F r V 1 ( x A r − x A 1 ) − k 1 e − E 1 R T 1 x A 1 + F 3 V 1 ( x A 3 − x A 1 ) d x B 1 d t = F 10 V 1 ( x B 10 − x B 1 ) + F r V 1 ( x B r − x B 1 ) + k 1 e − E 1 R T 1 x A 1 − k 2 e − E 2 R T 1 x B 1 + F 3 V 1 ( x B 3 − x B 1 ) d T 1 d t = F 10 V 1 ( T 10 − T 1 ) + F r V 1 ( T 3 − T 1 ) + − Δ H 1 C p k 1 e − E 1 R T 1 x A 1 + − Δ H 2 C p k 2 e − E 2 R T 1 x B 1 + Q 1 ρ C p V 1 + F 3 V 1 ( T 3 − T 1 )

d x A 2 d t = F 1 V 2 ( x A 1 − x A 2 ) + F 20 V 2 ( x A 20 − x A 2 ) − k 1 e − E 1 R T 2 x A 2 d x B 2 d t = F 1 V 2 ( x B 1 − x B 2 ) + F 20 V 2 ( x B 20 − x B 2 ) + k 1 e − E 1 R T 2 x A 2 − k 2 e − E 2 R T 2 x B 2 d T 2 d t = F 1 V 2 ( T 1 − T 2 ) + F 20 V 2 ( T 20 − T 2 ) + − Δ H 1 C p k 1 e − E 1 R T 2 x A 2 + − Δ H 2 C p k 2 e − E 2 R T 2 x B 2 + Q 2 ρ C p V 2

(58)

d x A 3 d t = F 2 V 3 ( x A 2 − x A 3 ) − F r + F p V 3 ( x A r − x A 3 ) d x B 3 d t = F 2 V 3 ( x B 2 − x B 3 ) − F r + F p V 3 ( x B r − x B 3 ) d T 3 d t = F 2 V 3 ( T 2 − T 3 ) + Q 3 ρ C p V 3

where y = F 3 is the output sampled asynchronously, x T = [ x A 1 − x A 1 s , x B 1 − x B 1 s , T 1 − T 1 s , x A 2 − x A 2 s , x B 2 − x B 2 s , T 2 − T 2 s , x A 3 − x A 3 s , x B 3 − x B 3 s , T 3 − T 3 s ] is the state of the system. u 1 T = [ Q 1 − Q 1 s , Q 2 − Q 2 s , Q 3 − Q 3 s ] , u 2 = F 20 − F 20 s are the manipulated inputs, where Q 1 s = 1.49 × 10 6 kJ / h , Q 2 s = 1.46 × 10 6 kJ / h , Q 3 s = 1.55 × 10 6 kJ / h , F 20 s = 5.1 m 3 / h and | u 1 | ≤ 10 6 kJ / h , | u 2 | ≤ 3 m 3 / h . The process above can be written as follows:

x ˙ ( t ) = f ( x ( t ) ) + g 1 ( x ( t ) ) u 1 ( t ) + g 2 ( x ( t ) ) u 2 ( t ) + w ( t ) (59)

The objective is to guide the process from the initial state x 0 T = [ 0.7998 0.1 378 0.8517 0.2012 363 0.67 0.22 368 ] to the steady state x s T = [ 0.4995 0.39 423 0.59 0.4751 434 0.35 0.6491 436 ] . We design a Lyapunov-based controller u 1 = g ( x ) which stabilizes the closed-loop system as follows:

g ( x ) = { − ( ( L f V + √ ( ( L f V ) 2 + ( L g 1 V ) 4 ) ) / ( L g 1 V ) 2 ) L g 1 V , if L g 1 V ≠ 0 ; 0 , if L g 1 V = 0 (60)

Consider a Lyapunov function

V ( x ) = x T P x

where P = diag ( 5.2 × 10 12 [ 4,4,10 − 4 ,4,4,10 − 4 ,4,4,10 − 4 ] ) and diag ( v ) denotes a diagonal matrix with its diagonal elements given by the vector v. The sampling time is chosen to be Δ = 0.02 h = 1.2 min . The measured output is obtained asynchronously at time instants t k ≥ 0 , and the maximum interval between two successive asynchronous measurements is T m = 3 Δ . The prediction horizon is chosen to be N = 6 and the control horizon N s = 3 , so that N s Δ ≥ T m . The weight matrices are Q c = diag ( 10 3 [ 2 , 2 , 0.0025 , 2 , 2 , 0.0025 , 2 , 2 , 0.0025 ] ) , R c 1 = diag ( [ 5 × 10 − 12 , 5 × 10 − 12 , 5 × 10 − 12 ] ) and R c 2 = 100 , respectively. The inputs of LMPC1 and LMPC2 are then computed. The simulation results are as follows:
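A minimal numerical sketch of the Sontag-type formula (60) for a single input channel is given below, assuming (as is standard for this formula) a square root over ( L f V ) 2 + ( L g 1 V ) 4 ; the Lie derivative values are hypothetical, not taken from the reactor model.

```python
import numpy as np

def sontag_u1(LfV, Lg1V):
    """Sontag-type control law (60) for a scalar input channel.
    LfV and Lg1V are the Lie derivatives of V along f and g1."""
    if abs(Lg1V) < 1e-12:
        return 0.0                     # second branch of (60)
    return -(LfV + np.sqrt(LfV ** 2 + Lg1V ** 4)) / Lg1V ** 2 * Lg1V

# Along the closed loop, V_dot = LfV + Lg1V * u1 = -sqrt(LfV^2 + Lg1V^4),
# so V strictly decreases wherever Lg1V != 0.
u1 = sontag_u1(2.0, 1.0)
```

Substituting u1 back into V_dot confirms the decrease, which is the property the stability region Ω ρ of the Lyapunov-based controller rests on.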

The simulation results show that the states converge to the steady state while the control inputs decrease and gradually settle. From the results, we conclude that the proposed output feedback distributed model predictive control architecture guarantees the ultimate boundedness of the system states, and hence the reactor-separator process is stable.

For a class of nonlinear systems whose states are immeasurable, an output feedback distributed model predictive control algorithm has been proposed. The main idea is as follows: for the considered system, when the outputs are sampled asynchronously, the states of the original system are estimated by introducing a state observer. It is proved that the estimation error is bounded and that the estimated states are ultimately bounded. The stability of the closed-loop system is thus guaranteed and its performance is improved. The simulation results verify the effectiveness of the method proposed in this paper.

This research was supported by the Natural Science Foundation of China (61374004, 61773237, 61473170) and the Key Research and Development Program of Shandong Province (2017GSF18116).

Su, B.L. and Wang, Y.Z. (2017) The Design of Output Feedback Distributed Model Predictive Controller for a Class of Nonlinear Systems. Applied Mathematics, 8, 1832-1850. https://doi.org/10.4236/am.2017.812131