Applied Mathematics
Vol.08 No.12(2017), Article ID:81500,19 pages
10.4236/am.2017.812131

The Design of Output Feedback Distributed Model Predictive Controller for a Class of Nonlinear Systems

Baili Su, Yingzhi Wang

College of Engineering, Qufu Normal University, Rizhao, China

Copyright © 2017 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: December 1, 2017; Accepted: December 26, 2017; Published: December 29, 2017

ABSTRACT

For a class of nonlinear systems whose states are immeasurable and whose outputs are sampled asynchronously, an output feedback distributed model predictive control algorithm is proposed by introducing a state observer. It is proved that the error between the estimated states and the actual states of the system is bounded, and that the estimated states of the closed-loop system are ultimately bounded in a region containing the origin; consequently, the states of the actual system are ultimately bounded. A simulation example verifies the effectiveness of the proposed distributed control method.

Keywords:

Nonlinear Systems, Distributed Model Predictive Control, State Observer, Output Feedback, Asynchronous Measurements

1. Introduction

Traditional process control systems simply combine measurement sensors with control actuators to ensure the stability of the closed-loop system. Although this paradigm has been successful, its computational burden is large and the resulting performance is often unsatisfactory [1]. Dividing the control system into a local control system (LCS) and a networked control system (NCS) can guarantee closed-loop stability while improving closed-loop performance and reducing the computational burden. However, this transformation requires redesigning the LCS and the NCS to ensure closed-loop stability, so the control strategy changes [2].

Model predictive control (MPC) is a receding horizon control method that can handle constraints on the system's inputs and states during the design of the optimization-based controller. It adopts feedback correction and rolling optimization, and it has a strong ability to handle constraints and shape dynamic performance [3] [4] [5]. It is therefore well suited to the optimal control of distributed systems; this leads to distributed model predictive control [6]. A distributed MPC takes the actions of the other local controllers into account when computing its own optimal input trajectory. The LCS and the NCS can both be designed via Lyapunov-based model predictive control (LMPC). However, when the LCS is itself a model predictive controller, there is no explicit formula for its future control actions, so the NCS and the LCS must be redesigned with communication established between them so that they can coordinate their actions. We refer to the controllers computing the trajectories of $u_1$ and $u_2$ as LMPC1 and LMPC2, respectively. The structure of the system is shown in Figure 1.

There are many existing research results on distributed MPC design. In literature [7], a novel partition method of distributed model predictive control for a class of large-scale systems is presented. Literature [8] presents a cooperative distributed model predictive control algorithm for a team of linear subsystems with coupled cost and coupled constraints. A distributed model predictive control architecture for nonlinear systems is studied in literature [2]. Based on literature [2], literature [9] considers a distributed model predictive control method subject to asynchronous and delayed measurements. A distributed model predictive control strategy for interconnected process systems is proposed in reference [10]. In literature [11], a design approach of robust distributed model predictive control is proposed for polytopic uncertain networked control systems with time delays. Reference [12] applies distributed model predictive control to an accurate model of an irrigation canal. For a hybrid system comprising wind and photovoltaic generation subsystems, a battery bank and an AC load, a distributed model predictive control method ensuring closed-loop stability is designed in reference [13].

These results are obtained under the assumption that the systems' states can be measured continuously; systems with immeasurable states are not considered in these references. However, immeasurable states often

Figure 1. Distributed LMPC control architecture.

arise in practice. In literature [14], a distributed model predictive control algorithm based on neighbor-to-neighbor communication is presented for interconnected systems whose states are not measured. Literature [15] considers the design of robust output feedback distributed model predictive control when the dynamics and measurements of the systems are affected by bounded noise. But both works address linear systems. An output feedback approach for nonlinear model predictive control with moving horizon state estimation is proposed in reference [16]. Reference [17] considers output feedback model predictive control of stochastic nonlinear systems. Yet these two references use centralized model predictive control, whose computational complexity grows significantly with system size.

On the basis of the above references, this paper considers a class of nonlinear systems whose states are immeasurable. By introducing a state observer and using output feedback, under the assumption that the outputs of the system are sampled asynchronously, an output feedback distributed model predictive control algorithm is designed. The ultimate boundedness of the estimated states and the boundedness of the error between the estimated states and the actual system's states are proved, and it then follows that the states of the actual system are ultimately bounded and the closed-loop system is stable. The performance of the system is improved and the computational burden is reduced.

This paper is organized as follows. The second section presents the preliminaries. In the third section, the state observer is designed and its stability is analyzed. The fourth section designs a Lyapunov-based controller that guarantees the asymptotic stability of the nominal observer. In the fifth section, an output feedback distributed model predictive control algorithm is proposed and the stability of the closed-loop system is proved. A simulation example is provided in the sixth section. The conclusion is given in Section 7.

2. Preliminaries

2.1. Definitions and Lemmas

In this paper, the operator $|\cdot|$ denotes the Euclidean norm. The symbol $\dot{x}(t)$ denotes the time derivative of $x(t)$. The symbol $\Omega_r$ denotes the set $\Omega_r = \{x \in \mathbb{R}^{n_x} : V(x) \le r\}$, where $V$ is a scalar positive definite, continuously differentiable function with $V(0) = 0$ and $r$ is a positive constant. The definitions and lemmas used in this paper are as follows:

Definition 1 [18]: A function $f(x)$ is said to be locally Lipschitz if there exists a constant $L_x$ such that $|f(x_1) - f(x_2)| \le L_x |x_1 - x_2|$ for all $x_1$ and $x_2$ in a given region of $x$; $L_x$ is the associated Lipschitz constant.

Definition 2 [18]: A continuous function $\gamma : [0, a) \to [0, \infty)$ belongs to class $\mathcal{K}$ if it is strictly increasing and $\gamma(0) = 0$. A continuous function $\beta(r, s)$ is said to belong to class $\mathcal{KL}$ if, for fixed $s$, $\beta(r, s)$ belongs to class $\mathcal{K}$ with respect to $r$ and, for fixed $r$, $\beta(r, s)$ is decreasing with respect to $s$ and $\beta(r, s) \to 0$ as $s \to \infty$.
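As a concrete numerical illustration of Definition 2 (the function and the sample values here are our own, not from the source), $\beta(r, s) = r e^{-s}$ is a class $\mathcal{KL}$ function, and its defining properties can be checked directly:

```python
import math

def beta(r, s):
    # Candidate class-KL function: beta(r, s) = r * exp(-s)
    return r * math.exp(-s)

# For fixed s, beta is strictly increasing in r with beta(0, s) = 0 (class K in r).
s_fixed = 1.0
rs = [0.0, 0.5, 1.0, 2.0]
vals_r = [beta(r, s_fixed) for r in rs]
assert vals_r[0] == 0.0
assert all(a < b for a, b in zip(vals_r, vals_r[1:]))

# For fixed r, beta is decreasing in s and tends to 0 as s grows.
r_fixed = 2.0
ss = [0.0, 1.0, 5.0, 50.0]
vals_s = [beta(r_fixed, s) for s in ss]
assert all(a > b for a, b in zip(vals_s, vals_s[1:]))
assert vals_s[-1] < 1e-6
```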

Lemma 1 [18]: Let $x = [0, \ldots, 0]^{\mathrm{T}}$ be an equilibrium point for the nonlinear system $\dot{x} = f(t, x)$, where $f : [0, \infty) \times D \to \mathbb{R}^{n}$ is continuously differentiable, $D = \{x \in \mathbb{R}^{n} : |x| < r\}$ with $r$ a positive constant, and the Jacobian matrix $[\partial f / \partial x]$ is bounded on $D$, uniformly in $t$. Let $\beta$ be a class $\mathcal{KL}$ function and $r_0$ a positive constant such that $\beta(r_0, 0) < r$. Let $D_0 = \{x \in \mathbb{R}^{n} : |x| < r_0\}$. Assume that the trajectory of the system satisfies

$|x(t)| \le \beta(|x(t_0)|, t - t_0), \quad \forall x(t_0) \in D_0, \ \forall t \ge t_0 \ge 0$ (1)

Then, there is a continuously differentiable function $V : [0, \infty) \times D_0 \to \mathbb{R}$ that satisfies the inequalities

$\alpha_1(|x|) \le V(t, x) \le \alpha_2(|x|)$
$\dfrac{\partial V}{\partial t} + \dfrac{\partial V}{\partial x} f(t, x) \le -\alpha_3(|x|)$
$\left|\dfrac{\partial V}{\partial x}\right| \le \alpha_4(|x|)$ (2)

where $\alpha_1$, $\alpha_2$, $\alpha_3$ and $\alpha_4$ are class $\mathcal{K}$ functions defined on $[0, r_0]$. If the system is autonomous, $V$ can be chosen independent of $t$.

Lemma 2 [18]: Let $x = [0, \ldots, 0]^{\mathrm{T}}$ be an equilibrium point for the nonlinear system $\dot{x} = f(t, x)$. The equilibrium point is uniformly asymptotically stable if and only if there exist a class $\mathcal{KL}$ function $\beta$ and a positive constant $c$, independent of $t_0$, such that

$|x(t)| \le \beta(|x(t_0)|, t - t_0), \quad \forall t \ge t_0 \ge 0, \ \forall |x(t_0)| < c$ (3)

2.2. Problem Formulation

Consider a class of nonlinear systems described as follows:

$\dot{x}(t) = f(x(t), u_1(t), u_2(t), w(t))$
$y(t) = h(x(t)) + v(t)$ (4)

where $x(t) \in \mathbb{R}^{n_x}$ denotes the state vector, which is immeasurable; $u_1(t) \in \mathbb{R}^{n_{u_1}}$ and $u_2(t) \in \mathbb{R}^{n_{u_2}}$ are the control inputs, restricted to the nonempty convex sets $U_1 \subseteq \mathbb{R}^{n_{u_1}}$ and $U_2 \subseteq \mathbb{R}^{n_{u_2}}$; $w(t) \in \mathbb{R}^{n_w}$ denotes the disturbance vector; $y(t) \in \mathbb{R}^{n_y}$ is the measured output; and $v(t) \in \mathbb{R}^{n_v}$ is a measurement noise vector. The disturbance and noise vectors are bounded, $w \in W$, $v \in V$, where

$W = \{w \in \mathbb{R}^{n_w} : |w| \le c_1,\ c_1 > 0\}$
$V = \{v \in \mathbb{R}^{n_v} : |v| \le c_2,\ c_2 > 0\}$ (5)

with $c_1$ and $c_2$ known positive real numbers. We assume that $f$ and $h$ are locally Lipschitz vector functions with $f(0, 0, 0, 0) = 0$ and $h(0) = 0$, so the origin is an equilibrium point of system (4). We also assume that the output $y$ of system (4) is sampled asynchronously at time instants $\{t_{k \ge 0}\}$ such that $t_k = t_0 + k\Delta$, $k = 0, 1, \ldots$, with $t_0$ the initial time and $\Delta$ a fixed time interval. In general, if arbitrarily large periods of time may pass in which the output cannot be measured, the stability properties of the system cannot be guaranteed. In order to study stability in a deterministic framework, we assume that there exists an upper bound $T_m$ on the interval between two successive output measurements, i.e., $\max\{t_{k+1} - t_k\} \le T_m$. This assumption is reasonable from a process control perspective.
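As a small illustration of the measurement model above (the numerical values are our own, not from the source), asynchronous measurement instants on the grid $t_k = t_0 + k\Delta$ with gaps bounded by $T_m$ can be generated and checked as follows:

```python
import random

def sample_times(t0, delta, T_m, n, seed=0):
    """Generate n asynchronous measurement instants on the grid t0 + k*delta,
    with the gap between successive instants bounded above by T_m."""
    rng = random.Random(seed)
    max_skip = int(T_m // delta)  # largest number of grid steps between samples
    times = [t0]
    for _ in range(n - 1):
        times.append(times[-1] + rng.randint(1, max_skip) * delta)
    return times

t = sample_times(t0=0.0, delta=0.1, T_m=0.5, n=20)
# Every gap respects the upper bound T_m assumed in the text.
assert all(0 < (b - a) <= 0.5 + 1e-12 for a, b in zip(t, t[1:]))
```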

Remark 1: Generally, distributed control systems are formulated on account of the controlled systems being decoupled or partially decoupled. However, we consider a seriously coupled process model with two sets of control inputs. This is a common phenomenon in process control.

The objective of this paper is to propose an output feedback control architecture using a state observer when the states are immeasurable. The state observer makes it possible to maintain closed-loop stability and improve closed-loop performance. We design two LMPCs to compute $u_1$ and $u_2$. The structure of the system is shown in Figure 2.

Remark 2: The procedure of the system shown in Figure 2 is as follows:

1) Since the states are immeasurable, the observer is used to estimate the current state $x$.

2) LMPC2 computes the optimal input trajectory of $u_2$ based on the estimated state $\hat{x}$ and sends it to the process and to LMPC1.

3) Once LMPC1 receives the optimal input trajectory of $u_2$, it evaluates the optimal input trajectory of $u_1$ based on $\hat{x}$ and the optimal input trajectory of $u_2$.

4) LMPC1 sends its optimal input trajectory to the process.

5) At the next sampling time, return to step 1.

3. Observers and Their Properties

3.1. The Design of Observers

Define the nominal system of system (4) as follows:

Figure 2. Distributed LMPC architecture where the states are immeasurable.

$\dot{x}^*(t) = f(x^*(t), u_1(t), u_2(t), 0)$
$y^*(t) = h(x^*(t))$ (6)

where $x^* \in \mathbb{R}^{n_x}$ denotes the state vector of the nominal system and $y^* \in \mathbb{R}^{n_y}$ is the noise-free output.

Assume that there exists a deterministic nonlinear observer for the nominal system (6):

$\dot{\hat{x}}^*(t) = F(\hat{x}^*(t), u_1(t), u_2(t), y^*(t))$ (7)

such that $\hat{x}^*$ asymptotically converges to $x^*$ for all $x^*, \hat{x}^* \in \mathbb{R}^{n_x}$, where $\hat{x}^* \in \mathbb{R}^{n_x}$ denotes the state vector of the nominal observer. From Lemma 2, there exists a class $\mathcal{KL}$ function $\beta$ such that:

$|x^*(t) - \hat{x}^*(t)| \le \beta(|x^*(t_0) - \hat{x}^*(t_0)|, t - t_0)$ (8)

We assume that $F$ is a locally Lipschitz vector function. Note that the convergence property of observer (7) is obtained for the nominal system (6) with continuously measured output.

From the Lipschitz property of $f$ and Definition 1, there exists a positive constant $M_1$ such that:

$|f(x^*, u_1, u_2, 0)| \le M_1$ (9)

for all $x^* \in \mathbb{R}^{n_x}$.

The actual observer is obtained when the deterministic observer is applied to system (4), i.e., in the presence of state disturbance and measurement noise:

$\dot{\hat{x}}(t) = F(\hat{x}(t), u_1(t), u_2(t), y(t_k))$ (10)

where $y(t_k)$ is the actual sampled measurement at $t_k$, held for $t \ge t_k$.
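A minimal sketch of integrating an observer of the form (10) while holding the latest sampled output; the scalar plant, the Luenberger-style correction standing in for $F$, and the gain value are all illustrative assumptions, not the paper's observer:

```python
# Sketch: forward-Euler integration of an observer of the form (10) for a
# hypothetical scalar plant xdot = -x + u, y = x. The correction gain L and
# the observer structure are illustrative assumptions, not the paper's F.
def observer_step(x_hat, u, y_held, dt, L=2.0):
    # F(x_hat, u, y_k) = f(x_hat, u) + L * (y_k - h(x_hat)), with y_k held
    x_hat_dot = -x_hat + u + L * (y_held - x_hat)
    return x_hat + dt * x_hat_dot

# Between two measurements the same y(t_k) is reused (zero-order hold).
x_hat, y_held, dt = 0.0, 1.0, 0.01
for _ in range(500):                 # open-loop interval after the sample t_k
    x_hat = observer_step(x_hat, u=0.0, y_held=y_held, dt=dt)
# With u = 0 and y_held = 1, the correction drives x_hat to the point where
# -x_hat + L*(1 - x_hat) = 0, i.e. x_hat = L/(1+L) = 2/3.
assert abs(x_hat - 2.0 / 3.0) < 1e-2
```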

3.2. Properties of the Observer

In this subsection, the error between the actual system's states and the estimated states is studied in the presence of state disturbance and measurement noise when observer (10) is applied to system (4).

Theorem 1: Consider observer (10) with output measurement $y(t_k)$, starting from the initial condition $\hat{x}(t_k)$. The error between the estimated state $\hat{x}(t)$ and the actual state $x(t)$ is bounded:

$|e(t)| = |x(t) - \hat{x}(t)| \le \beta(|e(t_k)|, t - t_k) + \delta_1(t - t_k) + \delta_2(t - t_k)$ (11)

for $t \ge t_k$, where $e(t_k) = x(t_k) - \hat{x}(t_k)$ is the initial state error, and

$\delta_1(\tau) = \dfrac{l_2 c_1}{l_1}\left(e^{l_1 \tau} - 1\right)$
$\delta_2(\tau) = \dfrac{q_2}{q_1}\left(b M_1 N \Delta + c_2\right)\left(e^{q_1 \tau} - 1\right)$ (12)

where $l_1$, $l_2$, $q_1$, $q_2$ and $b$ are the Lipschitz constants associated with $f$, $F$ and $h$, respectively, and $N$ is the prediction horizon.

Proof: For $t \ge t_k$, from (8) and $\hat{x}^*(t_k) = \hat{x}(t_k)$, $x^*(t_k) = x(t_k)$, it can be obtained that:

$|x^*(t) - \hat{x}^*(t)| \le \beta(|x^*(t_k) - \hat{x}^*(t_k)|, t - t_k) = \beta(|x(t_k) - \hat{x}(t_k)|, t - t_k) = \beta(|e(t_k)|, t - t_k)$ (13)

Based on the Lipschitz property of $f$ and Definition 1, there exist constants $l_1$, $l_2$ such that:

$|\dot{x}(t) - \dot{x}^*(t)| = |f(x(t), u_1(t), u_2(t), w(t)) - f(x^*(t), u_1(t), u_2(t), 0)| \le l_1 |x(t) - x^*(t)| + l_2 |w(t)|$ (14)

Since $x^*(t_k) = x(t_k)$ (that is, $x(t_k) - x^*(t_k) = 0$) and $|w(t)| \le c_1$, integrating the above inequality from $t_k$ to $t$ yields:

$|x(t) - x^*(t)| \le \dfrac{l_2 c_1}{l_1}\left(e^{l_1 (t - t_k)} - 1\right) = \delta_1(t - t_k)$ (15)

From the triangle inequality and inequalities (13) and (15):

$|x(t) - \hat{x}^*(t)| \le |x(t) - x^*(t)| + |x^*(t) - \hat{x}^*(t)| \le \beta(|e(t_k)|, t - t_k) + \delta_1(t - t_k), \quad t \ge t_k$ (16)

From the Lipschitz property of $F$ and Definition 1, there exist constants $q_1$, $q_2$ satisfying the following inequality:

$|\dot{\hat{x}}^*(t) - \dot{\hat{x}}(t)| = |F(\hat{x}^*(t), u_1(t), u_2(t), y^*(t)) - F(\hat{x}(t), u_1(t), u_2(t), y(t_k))| \le q_1 |\hat{x}^*(t) - \hat{x}(t)| + q_2 |y^*(t) - y(t_k)|$ (17)

for $t \ge t_k$. Note that $y^*(t) = h(x^*(t))$ and $y(t_k) = h(x(t_k)) + v(t_k)$, hence:

$|y^*(t) - y(t_k)| \le |h(x^*(t)) - h(x(t_k))| + |v(t_k)|$ (18)

Due to the Lipschitz property of $h$ and Definition 1, there exists a constant $b$ such that:

$|y^*(t) - y(t_k)| \le b |x^*(t) - x(t_k)| + |v(t_k)|$ (19)

Since $x^*(t_k) = x(t_k)$ and $v$ is bounded, we get:

$|y^*(t) - y(t_k)| \le b |x^*(t) - x^*(t_k)| + c_2$ (20)

From (9) and the dynamics of $x^*$, it can be derived that:

$|x^*(t) - x^*(t_k)| \le M_1 (t - t_k)$ (21)

From (20) and (21), we get:

$|y^*(t) - y(t_k)| \le b M_1 (t - t_k) + c_2$ (22)

From (17), (22) and $|t - t_k| \le N\Delta$, it can be obtained that:

$|\dot{\hat{x}}^*(t) - \dot{\hat{x}}(t)| \le q_1 |\hat{x}^*(t) - \hat{x}(t)| + q_2 b M_1 N \Delta + q_2 c_2, \quad t \ge t_k$ (23)

Integrating the above inequality from $t_k$ to $t$ and taking into account $\hat{x}^*(t_k) = \hat{x}(t_k)$ yields:

$|\hat{x}^*(t) - \hat{x}(t)| \le \dfrac{q_2}{q_1}\left(b M_1 N \Delta + c_2\right)\left(e^{q_1 (t - t_k)} - 1\right) = \delta_2(t - t_k)$ (24)

Finally, from the triangle inequality and inequalities (16) and (24):

$|e(t)| \le |x(t) - \hat{x}^*(t)| + |\hat{x}^*(t) - \hat{x}(t)| \le \beta(|e(t_k)|, t - t_k) + \delta_1(t - t_k) + \delta_2(t - t_k)$ (25)

This completes the proof of the theorem.

Theorem 1 indicates that the upper bound of the estimation error depends on several factors: the initial state error $e(t_k)$, the Lipschitz properties of the system and observer dynamics, the sampling interval $\Delta$ and the prediction horizon $N$, the bounds $c_1$ and $c_2$ on the magnitudes of the disturbances and noise, and the open-loop operation time $t - t_k$ of the observer.
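With illustrative constants (hypothetical values, chosen only to show the shape of the bound), the functions $\delta_1$ and $\delta_2$ of (12) can be evaluated directly; both vanish at $\tau = 0$ and grow with the open-loop operation time, as Theorem 1 indicates:

```python
import math

def delta1(tau, l1, l2, c1):
    # delta_1(tau) = (l2*c1/l1) * (exp(l1*tau) - 1), from (12)
    return (l2 * c1 / l1) * (math.exp(l1 * tau) - 1.0)

def delta2(tau, q1, q2, b, M1, N, Delta, c2):
    # delta_2(tau) = (q2/q1) * (b*M1*N*Delta + c2) * (exp(q1*tau) - 1), from (12)
    return (q2 / q1) * (b * M1 * N * Delta + c2) * (math.exp(q1 * tau) - 1.0)

# Hypothetical constants: l1 = q1 = 1, l2 = q2 = b = M1 = 1, c1 = c2 = 0.1,
# prediction horizon N = 5, sampling interval Delta = 0.1.
params1 = dict(l1=1.0, l2=1.0, c1=0.1)
params2 = dict(q1=1.0, q2=1.0, b=1.0, M1=1.0, N=5, Delta=0.1, c2=0.1)

assert delta1(0.0, **params1) == 0.0 and delta2(0.0, **params2) == 0.0
assert delta1(1.0, **params1) > delta1(0.5, **params1) > 0.0
assert delta2(1.0, **params2) > delta2(0.5, **params2) > 0.0
```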

Remark 3: Because the bound on $e(t)$ is a function of the observer's open-loop operation time, and that time is finite, the error can be restricted to a region, which we denote $\Omega_e$. It follows that $e(t) \in \Omega_e$.

4. Lyapunov-Based Controller

We assume that there exists a Lyapunov-based controller $u_1(t) = g(\hat{x}(t))$ which satisfies the input constraint on $u_1$ for all $\hat{x}$ inside a given stability region, and that the origin of the nominal observer is asymptotically stable with $u_2 = 0$. From Lemma 1, this assumption implies that there exist class $\mathcal{K}$ functions $\alpha_i(\cdot)$ and a continuous Lyapunov function $V$ for the nominal observer which satisfy the following inequalities:

$\alpha_1(|\hat{x}^*|) \le V(\hat{x}^*) \le \alpha_2(|\hat{x}^*|)$
$\dfrac{\partial V(\hat{x}^*)}{\partial \hat{x}^*} F(\hat{x}^*, g(\hat{x}^*), 0, y^*) \le -\alpha_3(|\hat{x}^*|)$
$\left|\dfrac{\partial V(\hat{x}^*)}{\partial \hat{x}^*}\right| \le \alpha_4(|\hat{x}^*|)$
$g(\hat{x}^*) \in U_1$ (26)

for $\hat{x}^* \in D \subseteq \mathbb{R}^{n_x}$, where $D$ is an open neighborhood of the origin. We denote by $\Omega_\rho \subseteq D$ the stability region of the nominal observer under the control laws $u_1 = g(\hat{x}^*)$ and $u_2 = 0$.

By continuity and the local Lipschitz property of $F$, there exists a positive constant $M_2$ such that:

$|F(\hat{x}(t), u_1(t), u_2(t), y(t_k))| \le M_2$ (27)

In addition, due to the Lipschitz property of $F$, there exist positive constants $d_1$, $d_2$ such that

$|F(\hat{x}^*, u_1, u_2, y^*(t)) - F(\hat{x}_1^*, u_1, u_2, y^*(t_k))| \le d_1 |\hat{x}^* - \hat{x}_1^*| + d_2 |y^*(t) - y^*(t_k)|$ (28)

Since $y^*(t_k) = h(x^*(t_k))$, $y(t_k) = h(x(t_k)) + v(t_k)$ and $x^*(t_k) = x(t_k)$, we have $y^*(t_k) = y(t_k) - v(t_k)$. As a result,

$|F(\hat{x}^*, u_1, u_2, y^*(t)) - F(\hat{x}_1^*, u_1, u_2, y^*(t_k))| \le d_1 |\hat{x}^* - \hat{x}_1^*| + d_2 |y^*(t) - y(t_k) + v(t_k)| \le d_1 |\hat{x}^* - \hat{x}_1^*| + d_2 (|y^*(t) - y(t_k)| + |v(t_k)|) \le d_1 |\hat{x}^* - \hat{x}_1^*| + d_2 (b M_1 N \Delta + c_2 + c_2) = d_1 |\hat{x}^* - \hat{x}_1^*| + d_2 b M_1 N \Delta + 2 d_2 c_2$ (29)

5. Output Feedback Distributed Model Predictive Control

5.1. Distributed Model Predictive Control

The controllers LMPC2 and LMPC1 needed in this paper are obtained by solving the following optimization problems.

First we define the optimization problem of LMPC2, which depends on the latest state estimate $\hat{x}(t_k)$. LMPC2 has no information about the value of $u_1$, so it must assume a trajectory for $u_1$ along the prediction horizon; the Lyapunov-based controller $u_1 = g(\hat{x}^*)$ is used for this purpose. It defines a contractive constraint that guarantees a given minimum decrease rate of the Lyapunov function $V$, so that the stability properties of $g$ are inherited. LMPC2 obtains the optimal input trajectory $u_2$ from the following optimization problem:

$\min_{u_{c2} \in P(\Delta)} \int_{t_k}^{t_k + N\Delta} \left( \tilde{x}^*(t)^{\mathrm{T}} Q \tilde{x}^*(t) + u_{c1}(t)^{\mathrm{T}} Q_{c1} u_{c1}(t) + u_{c2}(t)^{\mathrm{T}} Q_{c2} u_{c2}(t) \right) \mathrm{d}t$ (30a)

s.t. $\dot{\tilde{x}}^*(t) = F(\tilde{x}^*(t), u_{c1}(t), u_{c2}(t), y^*(t)), \quad t \in [t_k, t_k + N\Delta]$ (30b)

$u_{c1}(t) = g(\tilde{x}^*(t_k + j\Delta)), \quad t \in [t_k + j\Delta, t_k + (j+1)\Delta], \ j = 0, \ldots, N-1$ (30c)

$u_{c2}(t) \in U_2, \quad t \in [t_k, t_k + N\Delta]$ (30d)

$\dot{\bar{x}}^*(t) = F(\bar{x}^*(t), g(\bar{x}^*(t_k + j\Delta)), 0, y^*(t)), \quad t \in [t_k + j\Delta, t_k + (j+1)\Delta]$ (30e)

$\tilde{x}^*(t_k) = \bar{x}^*(t_k) = \hat{x}(t_k)$ (30f)

$V(\tilde{x}^*(t)) \le V(\bar{x}^*(t)), \quad t \in [t_k, t_k + N_s\Delta]$ (30g)

where $P(\Delta)$ is the family of piecewise-constant functions with sampling period $\Delta$; $Q$, $Q_{c1}$ and $Q_{c2}$ are positive definite weight matrices; and $N_s$ is the control horizon, the smallest integer satisfying $T_m \le N_s\Delta$. To take full advantage of the nominal model in the computation of the control action, we take $N \ge N_s$. The optimal solution of optimization problem (30) is denoted $u_{c2}(t|t_k)$, $t \in [t_k, t_k + N\Delta]$. Once the optimal input trajectory of LMPC2 is computed, it is sent to LMPC1 and to the corresponding actuators.

Note that constraints (30e)-(30f) generate a reference state trajectory (namely, a reference Lyapunov function trajectory). Constraint (30g) guarantees the required decrease of the Lyapunov function from $t_k$ to $t_k + N_s\Delta$ when $u_1 = g(\hat{x}^*)$ and $u_2 = u_{c2}(t)$ are applied.
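A heavily simplified sketch of the structure of problem (30) for a hypothetical scalar observer model $\dot{x} = x + u_1 + u_2$ with $V(x) = x^2$ and $g(x) = -2x$ (all numbers and the model are our own illustrative assumptions, and a crude grid search stands in for the optimizer): a piecewise-constant $u_{c2}$ is sought, a Lyapunov reference trajectory is generated as in (30e)-(30f), and the contractive constraint (30g) is checked on the time grid.

```python
def simulate(x0, u2_seq, dt, substeps):
    """Euler-simulate the hypothetical observer model xdot = x + u1 + u2 with
    the sample-and-hold Lyapunov controller u1 = g(x) = -2x (cf. (30c))."""
    x, traj = x0, [x0]
    for u2 in u2_seq:
        u1 = -2.0 * x                       # g evaluated at the sample, held
        for _ in range(substeps):
            x = x + dt * (x + u1 + u2)
            traj.append(x)
    return traj

def cost(traj, u2_seq, q=1.0, r=0.1):
    # crude rectangle-rule version of the integral cost (30a)
    return q * sum(x * x for x in traj) + r * sum(u * u for u in u2_seq)

x0, N, dt, substeps = 1.0, 5, 0.01, 10
ref = simulate(x0, [0.0] * N, dt, substeps)   # reference trajectory, (30e)-(30f)

best_u2, best_cost = None, float("inf")
for cand in [-0.5, -0.25, 0.0, 0.25, 0.5]:    # crude search over constant u_c2
    u2_seq = [cand] * N
    traj = simulate(x0, u2_seq, dt, substeps)
    # contractive constraint (30g): V(x_tilde(t)) <= V(x_bar(t)) on the grid
    feasible = all(xt * xt <= xr * xr + 1e-12 for xt, xr in zip(traj, ref))
    if feasible and cost(traj, u2_seq) < best_cost:
        best_u2, best_cost = u2_seq, cost(traj, u2_seq)

assert best_u2 is not None                    # u2 = 0 is always feasible
assert best_cost <= cost(ref, [0.0] * N)      # no worse than the reference
```

The reference trajectory plays the same role as $\bar{x}^*$ in (30e): any candidate $u_{c2}$ that makes $V$ exceed the reference value at a grid point is rejected, so the stability properties of $g$ are inherited regardless of which feasible candidate minimizes the cost.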

The optimization problem of LMPC1 depends on $\hat{x}(t_k)$ and the optimal solution $u_{c2}$. LMPC1 obtains the optimal input trajectory $u_1$ from the following optimization problem:

$\min_{u_{c1} \in P(\Delta)} \int_{t_k}^{t_k + N\Delta} \left( x^*(t)^{\mathrm{T}} Q x^*(t) + u_{c1}(t)^{\mathrm{T}} Q_{c1} u_{c1}(t) + u_{c2}(t)^{\mathrm{T}} Q_{c2} u_{c2}(t) \right) \mathrm{d}t$ (31a)

s.t. $\dot{x}^*(t) = F(x^*(t), u_{c1}(t), u_{c2}(t), y^*(t)), \quad t \in [t_k, t_k + N\Delta]$ (31b)

$\dot{\tilde{x}}^*(t) = F(\tilde{x}^*(t), g(\tilde{x}^*(t_k + j\Delta)), u_{c2}(t), y^*(t)), \quad t \in [t_k + j\Delta, t_k + (j+1)\Delta], \ j = 0, \ldots, N-1$ (31c)

$u_{c2}(t) = u_{c2}(t|t_k), \quad t \in [t_k, t_k + N\Delta]$ (31d)

$u_{c1}(t) \in U_1, \quad t \in [t_k, t_k + N\Delta]$ (31e)

$x^*(t_k) = \tilde{x}^*(t_k) = \hat{x}(t_k)$ (31f)

$V(x^*(t)) \le V(\tilde{x}^*(t)), \quad t \in [t_k, t_k + N_s\Delta]$ (31g)

The optimal solution to this optimization problem is denoted $u_{c1}(t|t_k)$, $t \in [t_k, t_k + N\Delta]$. By imposing constraints (30g) and (31g), we can prove that the proposed distributed model predictive control architecture inherits the stability properties of the Lyapunov-based controller $g(\hat{x})$. The control inputs are defined as follows

$u_i(t) = u_{ci}(t|t_k), \quad t \in [t_k, t_{k+1}], \ i = 1, 2$ (32)

Note that the actuators apply the last computed optimal input trajectories between two successive state estimates.

5.2. Stability Analysis

In this subsection, we prove that the proposed distributed control architecture inherits the stability properties of the Lyapunov-based controller $g(\hat{x})$. This property is stated in Theorem 2 below. In order to prove the theorem, we need the following propositions.

Proposition 1: Consider the trajectory $\bar{x}^*$ of the nominal observer (7) with the Lyapunov-based controller $g(\hat{x}^*)$ applied in a sample-and-hold fashion and $u_2 = 0$. Let $N$, $\Delta$, $\varepsilon_s > 0$ and $\rho > \rho_s > 0$ satisfy

$-\alpha_3(\alpha_2^{-1}(\rho_s)) + \alpha_4(\alpha_1^{-1}(\rho)) \left( d_1 M_2 N \Delta + d_2 b M_1 N \Delta + 2 d_2 c_2 \right) \le -\varepsilon_s / \Delta$ (33)

Then, if $\rho_m < \rho$, where

$\rho_m = \max\{V(\bar{x}^*(t + \Delta)) : V(\bar{x}^*(t)) \le \rho_s\}$ (34)

and $\bar{x}^*(t_0) \in \Omega_\rho$, the following result holds:

$V(\bar{x}^*(t_k)) \le \max\{V(\bar{x}^*(t_0)) - k \varepsilon_s, \rho_m\}$ (35)

Proof: The derivative of the Lyapunov function along the trajectory $\bar{x}^*(t)$ of the nominal observer is:

$\dot{V}(\bar{x}^*(t)) = \dfrac{\partial V}{\partial \bar{x}} F(\bar{x}^*(t), g(\bar{x}^*(t_k)), 0, y^*(t)), \quad t \in [t_k, t_{k+1}]$ (36)

Taking (26) into account, it is obtained that:

$\dot{V}(\bar{x}^*(t)) = \dfrac{\partial V}{\partial \bar{x}} F(\bar{x}^*(t), g(\bar{x}^*(t_k)), 0, y^*(t)) + \dfrac{\partial V}{\partial \bar{x}} F(\bar{x}^*(t_k), g(\bar{x}^*(t_k)), 0, y^*(t_k)) - \dfrac{\partial V}{\partial \bar{x}} F(\bar{x}^*(t_k), g(\bar{x}^*(t_k)), 0, y^*(t_k)) \le -\alpha_3(|\bar{x}^*(t_k)|) + \dfrac{\partial V}{\partial \bar{x}} \left[ F(\bar{x}^*(t), g(\bar{x}^*(t_k)), 0, y^*(t)) - F(\bar{x}^*(t_k), g(\bar{x}^*(t_k)), 0, y^*(t_k)) \right]$ (37)

From (26) and $\rho > \rho_s > 0$ we have

$\alpha_3(|\bar{x}^*(t_k)|) \ge \alpha_3(\alpha_2^{-1}(\rho_s)), \quad \left|\dfrac{\partial V}{\partial \bar{x}}\right| \le \alpha_4(\alpha_1^{-1}(\rho))$ (38)

for $\bar{x}^*(t_k) \in \Omega_\rho \backslash \Omega_{\rho_s}$. Substituting (29) and (38) into (37), it can be written that:

$\dot{V}(\bar{x}^*(t)) \le -\alpha_3(\alpha_2^{-1}(\rho_s)) + \alpha_4(\alpha_1^{-1}(\rho)) \left[ d_1 |\bar{x}^*(t) - \bar{x}^*(t_k)| + d_2 b M_1 N \Delta + 2 d_2 c_2 \right]$ (39)

From (27) and the continuity of $\bar{x}^*(t)$, the following inequality is obtained:

$|\bar{x}^*(t) - \bar{x}^*(t_k)| \le M_2 N \Delta, \quad t \in [t_k, t_{k+1}]$ (40)

Consequently, for all initial states $\bar{x}^*(t_k) \in \Omega_\rho \backslash \Omega_{\rho_s}$, the derivative of the Lyapunov function is bounded as:

$\dot{V}(\bar{x}^*(t)) \le -\alpha_3(\alpha_2^{-1}(\rho_s)) + \alpha_4(\alpha_1^{-1}(\rho)) \left( d_1 M_2 N \Delta + d_2 b M_1 N \Delta + 2 d_2 c_2 \right), \quad t \in [t_k, t_{k+1}]$ (41)

If condition (33) is satisfied, the following inequality is true:

$\dot{V}(\bar{x}^*(t)) \le -\varepsilon_s / \Delta, \quad \forall \bar{x}^*(t_k) \in \Omega_\rho \backslash \Omega_{\rho_s}$ (42)

Integrating this inequality on $t \in [t_k, t_{k+1}]$, we get:

$V(\bar{x}^*(t_{k+1})) \le V(\bar{x}^*(t_k)) - \varepsilon_s$

$V(\bar{x}^*(t)) \le V(\bar{x}^*(t_k)), \quad t \in [t_k, t_{k+1}]$ (43)

These inequalities indicate that observer (7) reaches $\Omega_{\rho_s}$ if it starts from $\Omega_\rho \backslash \Omega_{\rho_s}$ and $\Delta$ is sufficiently small. Applying them recursively, if $\bar{x}^*(t_0) \in \Omega_\rho \backslash \Omega_{\rho_s}$, there exists $k_1 > 0$ such that $\bar{x}^*(t_{k_1}) \in \Omega_{\rho_s}$, with $\bar{x}^*(t_k) \in \Omega_\rho \backslash \Omega_{\rho_s}$ and $V(\bar{x}^*(t_k)) \le V(\bar{x}^*(t_0)) - k\varepsilon_s$ for $k \le k_1$. Once the estimated state converges to $\Omega_{\rho_s} \subseteq \Omega_{\rho_m}$ (or starts there), it stays inside $\Omega_{\rho_m}$ for all times; this holds by the definition of $\rho_m$: if $\bar{x}^*(t_k) \in \Omega_{\rho_s}$, then $\bar{x}^*(t_{k+1}) \in \Omega_{\rho_m}$. This establishes the conclusion of Proposition 1.

Proposition 1 guarantees that observer (7) is ultimately bounded in $\Omega_{\rho_m}$ if it operates under the control laws $u_1 = g(\hat{x})$, $u_2 = 0$ and starts from $\Omega_\rho$.
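The sample-and-hold behaviour described by Proposition 1 can be illustrated on a hypothetical scalar observer model $\dot{x} = x + u_1$ with $V(x) = x^2$ and $g(x) = -2x$ (all values here are our own illustrative choices): the Lyapunov function decreases from sampling instant to sampling instant, as in (43), until the state settles near the origin.

```python
def sample_and_hold_run(x0, delta, n_samples, substeps=100):
    """Apply u1 = g(x) = -2x, held over each sampling interval of length delta,
    to the hypothetical model xdot = x + u1 with u2 = 0. Returns V at samples."""
    V = lambda x: x * x
    x, Vs = x0, [V(x0)]
    dt = delta / substeps
    for _ in range(n_samples):
        u1 = -2.0 * x                    # computed at the sample, then held
        for _ in range(substeps):
            x = x + dt * (x + u1)        # forward-Euler between samples
        Vs.append(V(x))
    return Vs

Vs = sample_and_hold_run(x0=1.0, delta=0.1, n_samples=50)
# V decreases strictly from sample to sample (cf. (43)) ...
assert all(b < a for a, b in zip(Vs, Vs[1:]))
# ... and the trajectory ends in a small neighbourhood of the origin.
assert Vs[-1] < 1e-3
```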

Remark 4: Compared with literature [19], under output feedback the trajectory considered in Proposition 1 is that of the nominal observer rather than the nominal system.

Proposition 2 [19]: Consider the Lyapunov function $V(\cdot)$ of observer (10). There exists a quadratic function $G(\cdot)$ such that

$V(\hat{x}) \le V(\hat{x}^*) + G(|\hat{x} - \hat{x}^*|)$ (44)

for all $\hat{x}, \hat{x}^* \in \Omega_\rho$, with $G(x) = \alpha_4(\alpha_1^{-1}(\rho)) x + M x^2$ and $M > 0$.

Proposition 2 bounds the difference between the Lyapunov function values of the nominal and actual estimated states in $\Omega_\rho$.

In Theorem 2 below, we prove that the distributed MPC design (30)-(31) guarantees that the estimated state of observer (10) is ultimately bounded.

Theorem 2: Consider observer (10) under the output feedback distributed MPC (30)-(31) based on a controller $g(\hat{x})$ satisfying condition (26). Assume that conditions (33), (34) and the following inequality

$-N_s \varepsilon_s + G(\delta_2(N_s \Delta)) < 0$ (45)

are satisfied, with $N_s$ the smallest integer satisfying $N_s \Delta \ge T_m$. If $\hat{x}(t_0) \in \Omega_\rho$, then $\hat{x}$ is ultimately bounded in $\Omega_{\rho_n} \subseteq \Omega_\rho$, where

$\rho_n = \rho_m + G(\delta_2(N_s \Delta))$ (46)

Proof: In order to prove that the closed-loop system is ultimately bounded in a region containing the origin, we prove that $V(\hat{x}(t_k))$ is a decreasing sequence with a lower bound.

First, we prove the stability result of Theorem 2 for the case $t_{k+1} - t_k = T_m = N_s \Delta$ for all $k$. This is the worst case, in which LMPC1 and LMPC2 must operate in open loop for the maximum amount of time. $\bar{x}^*(t_{k+1})$ is obtained from the nominal observer (7) starting from $\hat{x}(t_k)$ under the Lyapunov-based controller $u_1 = g(\hat{x}^*)$ applied in a sample-and-hold fashion and $u_2 = 0$. From Proposition 1 and $t_{k+1} = t_k + N_s \Delta$, it is obtained that

$V(\bar{x}^*(t_{k+1})) \le \max\{V(\bar{x}^*(t_k)) - N_s \varepsilon_s, \rho_m\}$ (47)

From constraints (30g) and (31g), we get

$V(x^*(t)) \le V(\tilde{x}^*(t)) \le V(\bar{x}^*(t)), \quad t \in [t_k, t_k + N_s \Delta]$ (48)

From inequalities (47) and (48) and $\bar{x}^*(t_k) = \tilde{x}^*(t_k) = x^*(t_k) = \hat{x}(t_k)$, it follows that when $\hat{x}(t) \in \Omega_\rho$ (this point is proved below), the following inequality is true

$V(x^*(t_{k+1})) \le \max\{V(\hat{x}(t_k)) - N_s \varepsilon_s, \rho_m\}$ (49)

Based on Proposition 2, we obtain the following inequality

$V(\hat{x}(t_{k+1})) \le V(x^*(t_{k+1})) + G(|x^*(t_{k+1}) - \hat{x}(t_{k+1})|)$ (50)

Applying inequality (24), the following upper bound on the error between $\hat{x}(t)$ and $x^*(t)$ is obtained

$|x^*(t_{k+1}) - \hat{x}(t_{k+1})| \le \delta_2(N_s \Delta)$ (51)

From inequalities (49), (50) and (51), $V(\hat{x}(t_{k+1}))$ can be written as

$V(\hat{x}(t_{k+1})) \le V(x^*(t_{k+1})) + G(\delta_2(N_s \Delta)) \le \max\{V(\hat{x}(t_k)) - N_s \varepsilon_s, \rho_m\} + G(\delta_2(N_s \Delta)) = \max\{V(\hat{x}(t_k)) - N_s \varepsilon_s + G(\delta_2(N_s \Delta)), \rho_m + G(\delta_2(N_s \Delta))\}$ (52)

From condition (45) and inequality (52), there exists $\delta > 0$ satisfying the following inequality

$V(\hat{x}(t_{k+1})) \le \max\{V(\hat{x}(t_k)) - \delta, \rho_n\}$ (53)

This indicates that $V(\hat{x}(t_{k+1})) \le V(\hat{x}(t_k))$ if $\hat{x}(t_k) \in \Omega_\rho \backslash \Omega_{\rho_n}$, and $V(\hat{x}(t_{k+1})) \le \rho_n$ if $\hat{x}(t_k) \in \Omega_{\rho_n}$.

The upper bound on the difference between the Lyapunov function values of the actual observer state $\hat{x}(t)$ and the nominal observer state $x^*(t)$ is a strictly increasing function of time (due to the definitions of $\delta_2$ and $G$ in inequality (24) and Proposition 2), so inequality (53) indicates that

$V(\hat{x}(t)) \le \max\{V(\hat{x}(t_k)), \rho_n\}, \quad t \in [t_k, t_{k+1}]$ (54)

Using inequality (54), the closed-loop trajectories of observer (10) under the proposed output feedback distributed MPC are proved to always stay in $\Omega_\rho$ when $\hat{x}(t_0) \in \Omega_\rho$. Furthermore, using inequality (53), when $\hat{x}(t_0) \in \Omega_\rho$ the estimated states of observer (10) satisfy

$\limsup_{t \to \infty} V(\hat{x}(t)) \le \rho_n$ (55)

So $\hat{x}(t) \in \Omega_\rho$ for all $t$, and $\hat{x}(t)$ is ultimately bounded in $\Omega_{\rho_n}$.

Second, we extend the result to the general case, $t_{k+1} - t_k \le T_m$ and $T_m \le N_s \Delta$, which implies $t_{k+1} - t_k \le N_s \Delta$. Since $\delta_2$ and $G$ are strictly increasing functions and $G$ is convex, it can similarly be proved that inequality (53) remains true. This implies that the stability results of Theorem 2 hold.

Corollary: $\hat{x}(t)$ is ultimately bounded, that is, $\hat{x}(t) \in \Omega_{\rho_n}$ when $\hat{x}(t_0) \in \Omega_\rho$, with $\rho_n < \rho$. Since $e(t) \in \Omega_e$ and $x(t) = \hat{x}(t) + e(t)$, it can be obtained that

$\hat{x}(t_0) + e(t_0) \in \Omega_\rho + \Omega_e \Rightarrow \hat{x}(t) + e(t) \in \Omega_{\rho_n} + \Omega_e, \quad \rho_n < \rho$ (56)

That is

$x(t_0) \in \Omega_\rho + \Omega_e \Rightarrow x(t) \in \Omega_{\rho_n} + \Omega_e, \quad \rho_n < \rho$ (57)

So the state $x(t)$ of the system is ultimately bounded.
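The residual level $\rho_n$ of (46) can be computed directly from $\rho_m$, $G$ and $\delta_2$, and condition (45) can be checked at the same time; all constants below are illustrative assumptions, not values from the source:

```python
import math

# G(x) = alpha4(alpha1^{-1}(rho)) * x + M * x^2, from Proposition 2;
# delta2 from (12). All constants below are illustrative.
def G(x, k_lin=1.0, M=0.5):
    return k_lin * x + M * x * x

def delta2(tau, q1=1.0, q2=1.0, b=1.0, M1=1.0, N=5, Delta=0.05, c2=0.05):
    return (q2 / q1) * (b * M1 * N * Delta + c2) * (math.exp(q1 * tau) - 1.0)

rho_m, Ns, Delta = 0.2, 3, 0.05
rho_n = rho_m + G(delta2(Ns * Delta, Delta=Delta))    # Equation (46)

assert rho_n > rho_m                                  # the residual set grows
# Condition (45) for a given decrease margin eps_s:
eps_s = 0.1
assert -Ns * eps_s + G(delta2(Ns * Delta, Delta=Delta)) < 0
```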

Remark 5: The proposed output feedback distributed MPC can be extended to multiple LMPC controllers using a one-directional sequential communication strategy (i.e., LMPC$k$ sends information to LMPC$(k-1)$, $k = 2, 3, \ldots$). Each LMPC sends its own trajectory together with all the trajectories received from the preceding controllers to its successor (i.e., LMPC$k$ sends both its trajectory and the trajectories received from LMPC$(k+1)$ to LMPC$(k-1)$).

Remark 6: The implementation strategy of the output feedback distributed model predictive control proposed in this paper is as follows

1) The observer is used to estimate the current state x ( t k ) .

2) LMPC2 computes the optimal input trajectory of u 2 based on the estimated state x ^ ( t k ) and sends the optimal input trajectory to its actuators and LMPC1.

3) Once LMPC1 receives the optimal input trajectory of u 2 , it evaluates the optimal input trajectory of u 1 based on x ^ ( t k ) and the optimal input trajectory of u 2 . If the optimal input trajectory of u 2 cannot be received by LMPC1, a zero trajectory for u 2 is used in the evaluation of LMPC1.

4) LMPC1 sends the optimal input trajectory to its actuators.

5) At the next sampling time, set k ← k + 1 and return to step 1).
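The steps above can be sketched as one receding-horizon iteration; `lmpc1` and `lmpc2` stand for the two optimization problems and are placeholders, not the paper's actual solver code, and the `u2_received` flag models a possible communication loss between LMPC2 and LMPC1:

```python
import numpy as np

def dmpc_step(x_hat, lmpc1, lmpc2, u2_received=True):
    """One iteration of the sequential output-feedback DMPC of Remark 6.

    lmpc1/lmpc2 are placeholder solvers returning input trajectories
    over the prediction horizon (arrays of shape (N, dim_u)).
    """
    # Step 2: LMPC2 solves its problem from the estimated state.
    u2_traj = lmpc2(x_hat)
    # Step 3: LMPC1 uses the u2 trajectory if it arrived,
    # otherwise a zero trajectory of the same shape.
    u2_for_1 = u2_traj if u2_received else np.zeros_like(u2_traj)
    u1_traj = lmpc1(x_hat, u2_for_1)
    # Steps 4-5: only the first elements are applied; the rest is
    # discarded at the next sampling instant (receding horizon).
    return u1_traj[0], u2_traj[0]
```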

6. Example

In order to verify the effectiveness of the proposed output feedback distributed model predictive control method, we apply it to a three-vessel process consisting of two continuously stirred tank reactors and a flash tank separator [5], in which the reactions A → B and B → C take place, where A is the reactant, B is the desired product, and C is the byproduct. The mathematical model of this process under standard modeling assumptions is given as follows:

\[
\begin{aligned}
\frac{\mathrm{d}x_{A1}}{\mathrm{d}t} &= \frac{F_{10}}{V_1}(x_{A10}-x_{A1})+\frac{F_r}{V_1}(x_{Ar}-x_{A1})-k_1 e^{-E_1/(RT_1)}x_{A1}+\frac{F_3}{V_1}(x_{A3}-x_{A1})\\
\frac{\mathrm{d}x_{B1}}{\mathrm{d}t} &= \frac{F_{10}}{V_1}(x_{B10}-x_{B1})+\frac{F_r}{V_1}(x_{Br}-x_{B1})+k_1 e^{-E_1/(RT_1)}x_{A1}-k_2 e^{-E_2/(RT_1)}x_{B1}+\frac{F_3}{V_1}(x_{B3}-x_{B1})\\
\frac{\mathrm{d}T_1}{\mathrm{d}t} &= \frac{F_{10}}{V_1}(T_{10}-T_1)+\frac{F_r}{V_1}(T_3-T_1)+\frac{\Delta H_1}{C_p}k_1 e^{-E_1/(RT_1)}x_{A1}+\frac{\Delta H_2}{C_p}k_2 e^{-E_2/(RT_1)}x_{B1}+\frac{Q_1}{\rho C_p V_1}+\frac{F_3}{V_1}(T_3-T_1)
\end{aligned}
\]

\[
\begin{aligned}
\frac{\mathrm{d}x_{A2}}{\mathrm{d}t} &= \frac{F_1}{V_2}(x_{A1}-x_{A2})+\frac{F_{20}}{V_2}(x_{A20}-x_{A2})-k_1 e^{-E_1/(RT_2)}x_{A2}\\
\frac{\mathrm{d}x_{B2}}{\mathrm{d}t} &= \frac{F_1}{V_2}(x_{B1}-x_{B2})+\frac{F_{20}}{V_2}(x_{B20}-x_{B2})+k_1 e^{-E_1/(RT_2)}x_{A2}-k_2 e^{-E_2/(RT_2)}x_{B2}\\
\frac{\mathrm{d}T_2}{\mathrm{d}t} &= \frac{F_1}{V_2}(T_1-T_2)+\frac{F_{20}}{V_2}(T_{20}-T_2)+\frac{\Delta H_1}{C_p}k_1 e^{-E_1/(RT_2)}x_{A2}+\frac{\Delta H_2}{C_p}k_2 e^{-E_2/(RT_2)}x_{B2}+\frac{Q_2}{\rho C_p V_2}
\end{aligned}
\]

(58)

\[
\begin{aligned}
\frac{\mathrm{d}x_{A3}}{\mathrm{d}t} &= \frac{F_2}{V_3}(x_{A2}-x_{A3})-\frac{F_r+F_p}{V_3}(x_{Ar}-x_{A3})\\
\frac{\mathrm{d}x_{B3}}{\mathrm{d}t} &= \frac{F_2}{V_3}(x_{B2}-x_{B3})-\frac{F_r+F_p}{V_3}(x_{Br}-x_{B3})\\
\frac{\mathrm{d}T_3}{\mathrm{d}t} &= \frac{F_2}{V_3}(T_2-T_3)+\frac{Q_3}{\rho C_p V_3}
\end{aligned}
\]

where y = F₃ is the output sampled asynchronously and x^T = [x_{A1}−x_{A1s}, x_{B1}−x_{B1s}, T₁−T_{1s}, x_{A2}−x_{A2s}, x_{B2}−x_{B2s}, T₂−T_{2s}, x_{A3}−x_{A3s}, x_{B3}−x_{B3s}, T₃−T_{3s}] is the state of the system. u₁^T = [Q₁−Q_{1s}, Q₂−Q_{2s}, Q₃−Q_{3s}] and u₂ = F_{20}−F_{20s} are the manipulated inputs, where Q_{1s} = 1.49 × 10⁶ kJ/h, Q_{2s} = 1.46 × 10⁶ kJ/h, Q_{3s} = 1.55 × 10⁶ kJ/h, F_{20s} = 5.1 m³/h, |u₁| ≤ 10⁶ kJ/h and |u₂| ≤ 3 m³/h. The process above can be written as follows:

ẋ(t) = f(x(t)) + g₁(x(t))u₁(t) + g₂(x(t))u₂(t) + w(t) (59)
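As an illustration of how the balances in (58) map to the form (59), the separator (vessel 3) right-hand side can be sketched as below; the parameter dictionary keys are assumed names, and the numeric values used to test it are illustrative only, not the simulation's parameters:

```python
import numpy as np

def vessel3_rhs(x3, x2, T2, p):
    """Right-hand side of the flash-separator balances in (58).

    x3 = (xA3, xB3, T3); xA2, xB2 and T2 come from vessel 2.
    p holds flows, hold-up volume, recycle compositions and heat input.
    """
    xA3, xB3, T3 = x3
    xA2, xB2 = x2
    out = (p["Fr"] + p["Fp"]) / p["V3"]          # recycle + purge flow term
    dxA3 = p["F2"] / p["V3"] * (xA2 - xA3) - out * (p["xAr"] - xA3)
    dxB3 = p["F2"] / p["V3"] * (xB2 - xB3) - out * (p["xBr"] - xB3)
    dT3 = p["F2"] / p["V3"] * (T2 - T3) + p["Q3"] / (p["rho"] * p["Cp"] * p["V3"])
    return np.array([dxA3, dxB3, dT3])
```

At a consistent equilibrium (inlet, vessel and recycle compositions equal, no heat input, equal temperatures) the right-hand side vanishes, which gives a quick sanity check on the signs.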

The objective is to drive the process from the initial state x₀^T = [0.7998, 0.1, 378, 0.8517, 0.2012, 363, 0.67, 0.22, 368] to the steady state x_s^T = [0.4995, 0.39, 423, 0.59, 0.4751, 434, 0.35, 0.6491, 436]. We design a Lyapunov-based controller u₁ = g(x) which stabilizes the closed-loop system as follows:

\[
g(x)=\begin{cases}-\dfrac{L_fV+\sqrt{(L_fV)^2+(L_{g_1}V)^4}}{(L_{g_1}V)^2}\,L_{g_1}V, & L_{g_1}V\neq 0\\[2mm] 0, & L_{g_1}V=0\end{cases} \quad (60)
\]
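For scalar Lie derivatives, a direct transcription of this Lyapunov-based control law (Sontag's universal formula) might look as follows; the tolerance on the zero test is an added implementation detail, not part of (60):

```python
import math

def sontag_control(LfV, Lg1V, tol=1e-12):
    """Lyapunov-based control law (60) for scalar Lie derivatives.

    LfV = L_f V(x), Lg1V = L_{g1} V(x); returns 0 when L_{g1}V(x)
    vanishes (up to a small numerical tolerance).
    """
    if abs(Lg1V) < tol:
        return 0.0
    return -(LfV + math.sqrt(LfV**2 + Lg1V**4)) / Lg1V**2 * Lg1V
```

By construction the closed-loop Lyapunov derivative L_fV + L_{g1}V·g(x) = −√((L_fV)² + (L_{g1}V)⁴) is negative whenever L_{g1}V ≠ 0, which is what makes the feedback stabilizing.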

Consider a Lyapunov function

V(x) = x^T P x

where P = diag(5.2 × 10¹² · [4, 4, 10⁴, 4, 4, 10⁴, 4, 4, 10⁴]) and diag(v) denotes a diagonal matrix whose diagonal elements are the elements of vector v. The sampling time is chosen as Δ = 0.02 h = 1.2 min. Suppose the measured output is obtained asynchronously at time instants t_{k≥0}. The maximum interval between two successive asynchronous output measurements is T_m = 3Δ. The prediction horizon is chosen as N = 6 and the control horizon as N_s = 3, so that N_sΔ ≥ T_m. The weight matrices are Q_c = diag(10³ · [2, 2, 0.0025, 2, 2, 0.0025, 2, 2, 0.0025]), R_{c1} = diag([5 × 10¹², 5 × 10¹², 5 × 10¹²]) and R_{c2} = 100, respectively. The inputs of LMPC1 and LMPC2 are then computed. The simulation results are as follows:
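For reference, the tuning above can be assembled as a short script that also checks the horizon condition N_sΔ ≥ T_m required by the design; the exponents are taken as printed in the text:

```python
import numpy as np

# Lyapunov and cost weights from the simulation setup.
P = np.diag(5.2e12 * np.array([4, 4, 1e4, 4, 4, 1e4, 4, 4, 1e4]))
Qc = np.diag(1e3 * np.array([2, 2, 0.0025, 2, 2, 0.0025, 2, 2, 0.0025]))
Rc1 = np.diag([5e12, 5e12, 5e12])
Rc2 = 100.0

delta = 0.02          # sampling time [h]
Tm = 3 * delta        # max gap between asynchronous output samples
N, Ns = 6, 3          # prediction and control horizons

# The control horizon must cover the worst-case measurement gap.
assert Ns * delta >= Tm
```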

Figure 3 shows the output of the system, Figures 4-6 show the states of the system, and Figure 7 and Figure 8 show the inputs of the system. From the figures, the outputs of the system eventually tend to stability; as the reactant concentrations x_A = [x_{A1}, x_{A2}, x_{A3}] decrease, the product concentrations x_B = [x_{B1}, x_{B2}, x_{B3}] increase and gradually stabilize; the temperatures rise and gradually stabilize; and the rate of heat

Figure 3. The trajectory of F 3 .

Figure 4. The trajectories of x A 1 , x A 2 and x A 3 .

Figure 5. The trajectories of x B 1 , x B 2 and x B 3 .

Figure 6. The trajectories of T 1 , T 2 and T 3 .

Figure 7. The trajectories of Q 1 , Q 2 and Q 3 .

Figure 8. The trajectory of F 20 .

input decreases and gradually stabilizes. From these results, we conclude that the proposed output feedback distributed model predictive control architecture guarantees the ultimate boundedness of the system's states, and hence the reactor-separator process is stable.

7. Conclusion

For a class of nonlinear systems whose states are immeasurable, an output feedback distributed model predictive control algorithm is proposed. The main idea is as follows: for the considered system, when the outputs are sampled asynchronously, the states of the original system are estimated by introducing a state observer. It is proved that the estimation error is bounded and that the estimated states are ultimately bounded. The stability of the closed-loop system is thus guaranteed and its performance is improved. The simulation results verify the effectiveness of the method proposed in this paper.

Acknowledgements

This research was supported by the National Natural Science Foundation of China (61374004, 61773237, 61473170) and the Key Research and Development Program of Shandong Province (2017GSF18116).

Cite this paper

Su, B.L. and Wang, Y.Z. (2017) The Design of Output Feedback Distributed Model Predictive Controller for a Class of Nonlinear Systems. Applied Mathematics, 8, 1832-1850. https://doi.org/10.4236/am.2017.812131

References

1. Wang, W.L., Rivera, D.E. and Kempf, K.G. (2003) Centralized Model Predictive Control Strategies for Inventory Management in Semiconductor Manufacturing Supply Chains. American Control Conference, Denver, 4-6 June 2003, 585-590.

2. Liu, J.F., Munoz de la Pena, D. and Christofides, P.D. (2009) Distributed Model Predictive Control of Nonlinear Process Systems. AIChE Journal, 55, 1171-1184. https://doi.org/10.1002/aic.11801

3. Su, B.L., Li, S.Y. and Zhu, Q.M. (2009) Predictive Control of the Initial Stable Region for Constrained Switched Nonlinear Systems. Science in China, 39, 994-1003.

4. Zhu, J. (2002) Intelligent Predictive Control and Its Application. Zhejiang University Press, Hangzhou.

5. Kong, X.B. and Liu, X.J. (2014) Nonlinear Model Predictive Control for DFIG-Based Wind Power Generation. IEEE Transactions on Automation Science & Engineering, 11, 1046-1055. https://doi.org/10.1109/TASE.2013.2284066

6. Camponogara, E., Jia, D., Krogh, B.H. and Talukdar, S. (2002) Distributed Model Predictive Control. IEEE Control Systems, 22, 44-52. https://doi.org/10.1109/37.980246

7. Zhang, L.W. and Wang, J.C. (2012) Distributed Model Predictive Control with a Novel Partition Method. Proceedings of the 31st Chinese Control Conference, Hefei, 25-27 July 2012, 4108-4113.

8. Gao, Y.L., Xia, Y.Q. and Dai, L. (2015) Cooperative Distributed Model Predictive Control of Multiple Coupled Linear Systems. IET Control Theory & Applications, 9, 2561-2567. https://doi.org/10.1049/iet-cta.2015.0096

9. Liu, J.F., Munoz de la Pena, D. and Christofides, P.D. (2010) Distributed Model Predictive Control of Nonlinear Process Systems Subject to Asynchronous and Delayed Measurements. Automatica, 46, 52-61. https://doi.org/10.1016/j.automatica.2009.10.033

10. Tran, T. and Quang, N.K. (2013) Distributed Model Predictive Control with Receding-Horizon Stability Constraints. International Conference on Control, 85-90.

11. Zhang, L.W., Wang, J.C., Ge, Y. and Wang, B.H. (2014) Robust Distributed Model Predictive Control for Uncertain Networked Control Systems. IET Control Theory & Applications, 8, 1843-1851. https://doi.org/10.1049/iet-cta.2014.0311

12. Álvarez, A., Ridao, M.A., Ramirez, D.R. and Sánchez, L. (2013) Distributed Model Predictive Control Techniques Applied to an Irrigation Canal. European Control Conference (ECC), 3276-3281.

13. Jia, Y.B. and Liu, X.J. (2014) Distributed Model Predictive Control of Wind and Solar Generation System. Proceedings of the 33rd Chinese Control Conference, Nanjing, 28-30 July 2014, 7795-7799.

14. Farina, M. and Scattolini, R. (2011) An Output Feedback Distributed Predictive Control Algorithm. 50th IEEE Conference on Decision and Control and European Control Conference, Orlando, 12-15 December 2011, 8139-8144. https://doi.org/10.1109/CDC.2011.6160366

15. Giselsson, P. (2013) Output Feedback Distributed Model Predictive Control with Inherent Robustness Properties. American Control Conference, Washington DC, 17-19 June 2013, 1691-1696. https://doi.org/10.1109/ACC.2013.6580079

16. Copp, D.A. and Hespanha, J.P. (2014) Nonlinear Output-Feedback Model Predictive Control with Moving Horizon Estimation. 53rd IEEE Conference on Decision and Control, Los Angeles, 15-17 December 2014, 3511-3517. https://doi.org/10.1109/CDC.2014.7039934

17. Homer, T. and Mhaskar, P. (2015) Output Feedback Model Predictive Control of Stochastic Nonlinear Systems. American Control Conference, Chicago, 1-3 July 2015, 793-798. https://doi.org/10.1109/ACC.2015.7170831

18. Khalil, H.K. (2011) Nonlinear Systems. 3rd Edition, Publishing House of Electronics Industry, Beijing.

19. Munoz de la Pena, D. and Christofides, P.D. (2008) Lyapunov-Based Model Predictive Control of Nonlinear Systems Subject to Data Losses. IEEE Transactions on Automatic Control, 53, 2076-2089. https://doi.org/10.1109/TAC.2008.929401