On the Generalization of Integrator and Integral Control Action

This paper provides a method to generalize the integrator and the integral control action. This is achieved by defining two function sets that generalize the integrator and the integral control action, respectively, by resorting to a stabilizing controller, and by adopting the Lyapunov method to analyze the stability of the closed-loop system. By constructing a powerful Lyapunov function, a universal theorem ensuring regional as well as semi-global asymptotic stability is established from bounded information. Consequently, two propositions on the generalization of the integrator and the integral control action are justified. Moreover, the conditions used to define the function sets can be viewed as classes of sufficient conditions for designing the integrator and the integral control action, respectively.


Introduction
Integral control [1] plays an important role in control system design because it ensures asymptotic tracking and disturbance rejection. In general, an integral controller comprises three components: the stabilizing controller, the integral control action and the integrator. In the presence of parametric uncertainties and unknown constant disturbances, the stabilizing controller guarantees the stability of the closed-loop system, while the integrator and the integral control action create a steady-state control action at the equilibrium point such that the tracking error is zero. This shows that the integrator and the integral control action are two indispensable components of an integral controller. Therefore, it is of great significance to generalize the integrator and the integral control action so that, for a particular application, engineers can choose the most appropriate integrator and integral control action to design their own integral controller. This, however, is a challenging problem, because the stability of the closed-loop system depends not only on the uncertain parameters and the unknown disturbances but also on the general integrator and integral control action.

Traditional Integrator and Integral Control Action
Before the idea of general integral control appeared, all integrators were called traditional integrators. Traditional integrators can be classified into four kinds: 1) the simplest integrator, which is obtained by integrating the error; 2) the conditional integrator [2]-[7], in which the integrator value is frozen or restricted when certain conditions are verified; 3) the back-calculation integrator [8]-[11], in which the difference between the controller output and the actual plant input is fed back to the integrator; 4) the nonlinear integrator [12]-[16], in which the error is shaped by a nonlinear function before it enters the integrator. In addition, a special class of conditional integrator was proposed by [7], in which the integrator is driven by a linear combination of the error and its derivative, but its value can change only inside the boundary layer. All these integrators, except for the one proposed by [7], use the error as the indispensable design element. Almost all of the corresponding traditional integral control actions are shaped by multiplying the output of the integrator by a gain coefficient.
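To make the four kinds concrete, the sketch below (our own illustration; the gains, saturation limits and the `mode` switch are hypothetical and not from the paper) implements one discrete-time step of a PI controller with each traditional integrator:

```python
import math

def pi_step(e, state, kp=1.0, ki=0.5, kb=0.3, dt=0.01,
            u_min=-1.0, u_max=1.0, mode="simple"):
    """One step of a PI controller; `mode` selects which of the four
    traditional integrator kinds updates the integrator state."""
    if mode == "simple":
        state += e * dt                          # 1) integrate the error
    elif mode == "conditional":
        # 2) freeze the integrator while the output would saturate
        if u_min < kp * e + ki * state < u_max:
            state += e * dt
    elif mode == "back_calculation":
        # 3) feed the saturation excess back into the integrator
        u_unsat = kp * e + ki * state
        u_sat = max(u_min, min(u_max, u_unsat))
        state += (e + kb * (u_sat - u_unsat)) * dt
    elif mode == "nonlinear":
        # 4) shape the error with a nonlinear function before integrating
        state += math.tanh(e) * dt
    u = kp * e + ki * state
    return max(u_min, min(u_max, u)), state
```

For a large error the conditional integrator stays frozen while the actuator saturates, whereas the back-calculation integrator winds up only partially.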

General Integrator and Integral Control Action
In 2009, the idea of general integral control, which uses all available state variables to design the integrator, was proposed in [17], which presented some linear and nonlinear general integrators. However, their justification was not established by rigorous mathematical analysis. In 2012, the rationality of the linear integrator on all the states of the system [18] was proved by using linear system theory; the results, however, were local. Regional as well as semi-global results were obtained in [19], where the sliding mode manifold was used as the integrator, and general integral control design was then achieved by using the sliding mode technique and linear system theory. In 2013, based on the feedback linearization technique, a class of nonlinear integrator shaped by a linear combination of the diffeomorphism was presented in [20]. The general concave function gain integrator was proposed in [21], where the partial derivative of the Lyapunov function was introduced into the integrator. The general convex function gain integrator was presented in [22], along with a systematic method to construct convex function gain integrators whose output is bounded in the time domain. Except for general convex and concave integral control, all the general integral control actions above are shaped by multiplying the output of the integrator by a gain coefficient. In general convex and concave integral control, by contrast, the integral control actions are generalized by two function sets, respectively, and the indispensable element used to design the concave and convex function gain integrators is extended only to the partial derivative of the Lyapunov function.
All the integrators and integral control actions above constitute only a minute portion of the possible integrators and integral control actions, and therefore lack generality. Moreover, in consideration of the complexity of nonlinear systems, it is clear that no particular integrator or integral control action can be expected to achieve high control performance for all nonlinear systems. For these reasons, the need to generalize the integrator and the integral control action arises naturally, because we cannot enumerate all the categories of integrators and integral control actions with high control performance for all nonlinear systems. This is a valuable and challenging problem, and a new theorem is needed to solve it.
Motivated by the observations above, the aim of this paper is to generalize the integrator and the integral control action so that, for a particular application, engineers can choose the most appropriate stabilizing controller, integrator and integral control action to design their own integral controller. The main contributions are as follows: 1) two function sets, which are used to generalize the integrator and the integral control action, respectively, are defined; 2) the integrator can be taken as any integrable function that passes through the origin and whose partial derivative, induced by the mean value theorem, is positive definite and bounded; moreover, these conditions can be viewed as a class of sufficient conditions for designing an integrator; 3) the integral control action can be taken as any continuously differentiable increasing function with a positive definite bounded derivative; moreover, these conditions can be viewed as a class of sufficient conditions for designing an integral control action; 4) by constructing a powerful Lyapunov function, a universal theorem ensuring regional as well as semi-global asymptotic stability is established from bounded information. Consequently, two propositions on the generalization of the integrator and the integral control action are justified. Moreover, the Lyapunov function proposed here is of considerable significance, and could even lay the foundation for the stability analysis of complex nonlinear systems with general integral control.
Throughout this paper, we use λ_m(P) and λ_M(P) to indicate the smallest and the largest eigenvalues, respectively, of a symmetric positive definite bounded matrix P. The norm of a vector x is defined as ||x|| = sqrt(x^T x), and that of a matrix A is defined as the corresponding induced norm ||A|| = sqrt(λ_M(A^T A)). The remainder of the paper is organized as follows: Section 2 describes the system under consideration, the assumptions and the definitions; Section 3 addresses the method to generalize the integrator and the integral control action; conclusions are presented in Section 4.
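As a quick sanity check of the induced-norm identity (a minimal numerical sketch, not part of the paper; the example matrix is arbitrary):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

# lambda_M(A^T A): the largest eigenvalue of the symmetric matrix A^T A
lam_M = np.linalg.eigvalsh(A.T @ A).max()

# The induced norm ||A|| = sqrt(lambda_M(A^T A)) equals the spectral norm.
assert np.isclose(np.sqrt(lam_M), np.linalg.norm(A, 2))
```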

Problem Formulation
Consider the following nonlinear system,

ẋ = f(x, w) + g(x, w)u,  y = h(x, w),  (1)

where x is the state, u is the control input, y is the output, and w denotes the unknown constant parameters and disturbances. The functions f, g and h are continuous in (x, w, u) on the control domain. In this study, the function f(x, w) does not necessarily vanish at the origin; i.e., f(0, w) ≠ 0 in general. We want to design a feedback control law u such that y asymptotically tracks its reference r.

Assumption 1: Suppose that, for each ϑ, there is a unique pair (x_0, u_0) that depends continuously on ϑ and satisfies the equations,

0 = f(x_0, w) + g(x_0, w)u_0,  r = h(x_0, w),

so that x_0 is the desired equilibrium point and u_0 is the steady-state control that is needed to maintain equilibrium at x_0, where y = r. For convenience, we state all definitions, assumptions and theorems for the case when the equilibrium point is at the origin of R^n, that is, x_0 = 0.
Assumption 2: Without loss of generality, suppose that the function g(x, w) satisfies the following inequalities for all x and w in the domain of interest, and that there exists a Lyapunov function V_x(x) satisfying the following inequalities, where |·| stands for the absolute value.

Definition 1: F_φ denotes the set of all continuously differentiable increasing functions φ(σ) whose derivative is positive definite and bounded. Figure 1 depicts example curves of one component of the functions belonging to the function set F_φ. For instance, for all x ∈ R, the functions arcsinh(x), and so on, all belong to the function set F_φ.

Definition 2: F_v denotes the set of all integrable functions v(x) that pass through the origin and whose partial derivative ∂v/∂z, where z is a point on the line segment connecting x to the origin, is positive definite and bounded. Figure 2 depicts example curves of one component of the functions belonging to the function set F_v.

Discussion 1: The condition of Definition 2 is induced by the mean value theorem. It may seem difficult to construct such a multivariable function; in fact, it can be designed by the following method: for each component of x, we design a single-variable function v_i(x_i) that satisfies the conditions of Definition 2, and then v(x) can be created from these single-variable functions, and so on.
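The membership conditions of Definitions 1 and 2 can be checked numerically for candidate functions. The sketch below is our own illustration (the derivative bound of 10 and the sample grid are arbitrary choices, not from the paper); it confirms that φ(σ) = arcsinh(σ) satisfies Definition 1 and that v(x) = tanh(x) satisfies Definition 2 in the scalar case, while v(x) = x² does not:

```python
import math

def in_F_phi(dphi, grid, bound=10.0):
    """Definition 1 (numerical sketch): the derivative of phi is
    positive and bounded at every sampled point."""
    return all(0.0 < dphi(s) <= bound for s in grid)

def in_F_v(v, grid, bound=10.0):
    """Definition 2 (numerical sketch): v(0) = 0 and the mean-value
    slope v(x)/x on the segment joining x to the origin is positive
    and bounded."""
    if v(0.0) != 0.0:
        return False
    return all(0.0 < v(x) / x <= bound for x in grid if x != 0.0)

grid = [k / 10.0 for k in range(-50, 51)]
# phi(s) = arcsinh(s): its derivative 1/sqrt(1 + s^2) lies in (0, 1]
assert in_F_phi(lambda s: 1.0 / math.sqrt(1.0 + s * s), grid)
# v(x) = tanh(x): v(0) = 0 and tanh(x)/x lies in (0, 1]
assert in_F_v(math.tanh, grid)
# v(x) = x^2 fails: its mean-value slope x changes sign
assert not in_F_v(lambda x: x * x, grid)
```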

Generalization Method
In general, an integral controller comprises three components: the stabilizing controller, the integral control action and the integrator. Therefore, to achieve the generalization of the integral control action and the integrator, we take the control law u_x(x) given by Assumption 3 as the stabilizing controller, and the integral controller is then given by (10), in which the integral control action is formed from K_σ φ(σ), where K_σ is a positive definite diagonal matrix, and the integrator is driven by v(x); the functions φ(·) and v(·) belong to the function sets F_φ and F_v, respectively. Thus, substituting (10) into (1), we obtain the augmented closed-loop system (11). By Assumption 1, choosing K_σ to be nonsingular and large enough, and setting ẋ = 0 and x = 0 in Equation (11), we ensure that there is a solution σ_0, so that (0, σ_0) is a unique equilibrium point of the closed-loop system (11) in the domain of interest. At the equilibrium point, y = r, irrespective of the value of w. Now, the design task is to provide conditions on the control parameters such that (0, σ_0) is an asymptotically stable equilibrium point of the closed-loop system (11) in the control domain of interest. This is not a trivial task, because the closed-loop system depends not only on the unknown vector w but also on the two general functions φ(·) and v(·); it requires a universal theorem.
Theorem 1: Under Assumptions 1-3, if there exists a positive definite diagonal matrix K_σ such that the inequalities (13) and (20) hold, then (0, σ_0) is an asymptotically stable equilibrium point of the closed-loop system (11). Moreover, if all the assumptions hold globally, then it is globally asymptotically stable.
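As a sanity check of the construction, the following sketch (our own toy example, not from the paper) closes the loop on a scalar plant ẋ = −x + w + u with an unknown constant disturbance w, using a hypothetical stabilizing controller u_x(x) = −2x, the integral control action φ(σ) = arcsinh(σ) ∈ F_φ with gain K_σ = 3, and the integrator v(x) = tanh(x) ∈ F_v; the sign with which the integral action enters is chosen so that this particular loop is stable:

```python
import math

def simulate(w=0.8, dt=1e-3, steps=40_000):
    """Forward-Euler simulation of the augmented closed-loop system."""
    x, sigma = 1.0, 0.0
    for _ in range(steps):
        u = -2.0 * x - 3.0 * math.asinh(sigma)   # u_x(x) plus integral action
        x += (-x + w + u) * dt                   # plant: x' = -x + w + u
        sigma += math.tanh(x) * dt               # integrator: sigma' = v(x)
    return x, sigma

x_f, sigma_f = simulate()
# The state converges to the origin despite the unknown disturbance, and
# the integral action supplies the steady-state control: 3*asinh(sigma_0) = w.
assert abs(x_f) < 1e-3
assert abs(3.0 * math.asinh(sigma_f) - 0.8) < 5e-3
```

Note that the tracking error vanishes without the controller ever knowing w; the integrator state σ settles at the value for which the integral action cancels the disturbance, exactly the mechanism the paper generalizes.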
Proof: To carry out the stability analysis, we consider the Lyapunov function candidate (14), which is formed by adding a positive definite quadratic term to the Lyapunov function V_x(x) and is therefore positive definite. Our task is thus to show that its time derivative along the trajectories of the closed-loop system (11), given by (15), is negative definite. By using (12), the closed-loop system (11) can be rewritten in terms of the deviation φ(σ) − φ(σ_0). By Definition 2, we have v(x) = (∂v/∂z)x, where z lies on the line segment connecting x to the origin; then, using Definition 1, substituting (16) into (15), and using (3), (4), (5), (8), (9) and (16), we obtain the inequality (17), which can be rewritten as (18).
The right-hand side of the inequality (18) is a quadratic form, which is negative definite when the inequality (20) holds. Using the fact that the Lyapunov function (14) is a positive definite function and its time derivative is a negative definite function if the inequalities (13) and (20) hold, we conclude that the closed-loop system (11) is stable. In fact, V̇ = 0 implies x = 0 and σ = σ_0. By invoking LaSalle's invariance principle [23], it follows that the closed-loop system (11) is asymptotically stable.
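The negative-definiteness test applied to the quadratic form in (18) amounts to an eigenvalue check, sketched below for a generic 2×2 matrix (the paper's specific matrix is not reproduced here; the numbers are illustrative):

```python
import numpy as np

def is_negative_definite(Q):
    """z^T Q z is negative definite iff all eigenvalues of the
    symmetric part of Q are strictly negative."""
    S = (Q + Q.T) / 2.0          # symmetrize before taking eigenvalues
    return bool(np.linalg.eigvalsh(S).max() < 0.0)

# A 2x2 form with a cross (coupling) term, as in inequality (18):
Q = np.array([[-2.0, 0.5],
              [0.5, -1.0]])
assert is_negative_definite(Q)
assert not is_negative_definite(-Q)
```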
Discussion 2: Although Theorem 1 is proved by resorting to a stabilizing controller u_x(x) along with a Lyapunov function V_x(x), the rationality of the integrator and the integral control action in (10) can still be verified, because the stabilizing controller can be designed by using linear system theory, by the feedback linearization technique (taking the diffeomorphism as the variable of v(·)), by the sliding mode technique (taking the sliding mode manifold as the variable of v(·)), and so on. Therefore, the following two propositions can be justified.
Proposition 1: The integrator can be taken as any integrable function that passes through the origin and whose partial derivative, induced by the mean value theorem, is positive definite and bounded. Moreover, these conditions can be viewed as a class of sufficient conditions for designing an integrator.
Proposition 2: The integral control action can be taken as any continuously differentiable increasing function with a positive definite bounded derivative. Moreover, these conditions can be viewed as a class of sufficient conditions for designing an integral control action.
Discussion 3: Compared with the integrators and the integral control actions proposed in [1]-[23], it is obvious that: 1) the integrator is no longer confined to a few particular forms and can be taken as any function belonging to the function set F_v; 2) likewise, the integral control action is no longer confined to a few particular forms and can be taken as any function belonging to the function set F_φ. Moreover, there is great freedom in the choice of u_x(x) and V_x(x), so that for a particular application, engineers can choose the most appropriate stabilizing controller, integrator and integral control action to design their own integral controller.
Discussion 4: From the stability analysis procedure above, it is obvious that: 1) the Lyapunov function plays the key role in the stability analysis because it is the starting point and the foundation of the Lyapunov method; 2) once the Lyapunov function (14) is constructed, the theorem ensuring regional as well as semi-global asymptotic stability can be established, and the two propositions can then be justified; 3) because the time derivative of the Lyapunov function (14) can be transformed into a quadratic form, the very tedious problem of how to deal with the coupling terms between x and φ(σ) − φ(σ_0) is solved; 4) because the Lyapunov function (14) is shaped by the addition of a class of general Lyapunov function and a positive definite quadratic function, it can be used to solve a wider class of stability analysis problems for integral control systems. Moreover, it is well known that finding such a powerful Lyapunov function is very difficult, because there is no systematic method for finding Lyapunov functions; it is basically a matter of trial and error. Therefore, in consideration of the reasons above and the universality of the Lyapunov method, the Lyapunov function proposed here not only lays the foundation of the stability analysis of this paper but could also become the foundation of the stability analysis of complex nonlinear systems with general integral control.

Conclusions
In consideration of the complexity of nonlinear systems, this paper provided a method to generalize the integrator and the integral control action so that, for a particular application, engineers can choose the most appropriate stabilizing controller, integrator and integral control action to design their own integral controller. The main contributions are as follows: 1) two function sets, which are used to generalize the integrator and the integral control action, respectively, are defined; 2) the integrator can be taken as any integrable function that passes through the origin and whose partial derivative, induced by the mean value theorem, is positive definite and bounded; moreover, these conditions can be viewed as a class of sufficient conditions for designing an integrator; 3) the integral control action can be taken as any continuously differentiable increasing function with a positive definite bounded derivative; moreover, these conditions can be viewed as a class of sufficient conditions for designing an integral control action; 4) by constructing a powerful Lyapunov function, a universal theorem ensuring regional as well as semi-global asymptotic stability was established from bounded information. Consequently, two propositions on the generalization of the integrator and the integral control action were justified. Moreover, the Lyapunov function proposed here is of considerable significance, and could even lay the foundation for the stability analysis of complex nonlinear systems with general integral control.

Assumption 3:
Suppose there is a control law u_x(x) such that the inequality (5) holds and x = 0 is an exponentially stable equilibrium point of the system (6).

Figure 1. Example curves of one component of the functions belonging to the function set F_φ.

Figure 2. Example curves of one component of the functions belonging to the function set F_v.