Stable Adaptive Neural Control of a Robot Arm

Salem Zerkaoui, Saeed M. Badran

Al Baha University, Al Baha, Saudi Arabia.

Université Le Havre—GREAH, Lebon, Le Havre, France.

**DOI:** 10.4236/ica.2012.32016

In this paper, stable indirect adaptive control with recurrent neural networks (RNN) is presented for square multivariable non-linear plants with unknown dynamics. The control scheme is composed of an adaptive instantaneous neural model, a neural controller based on fully connected “Real-Time Recurrent Learning” (RTRL) networks, and an online parameter updating law. Closed-loop performances as well as sufficient conditions for asymptotic stability are derived from the Lyapunov approach according to the adaptive updating rate parameter. Robustness is also considered in terms of sensor noise and model uncertainties. This control scheme is applied to a robot manipulator in order to illustrate the efficiency of the proposed method for real-world control problems.

Share and Cite:

Zerkaoui, S. and Badran, S. (2012) Stable Adaptive Neural Control of a Robot Arm. *Intelligent Control and Automation*, **3**, 140-145. doi: 10.4236/ica.2012.32016.

1. Introduction

Research in non-linear control theory has been motivated by the inherent characteristics of the dynamical systems to be controlled. Many systems are non-linear, and their dynamics are not perfectly known and therefore not exactly modelled. Control engineers have worked hard to improve the usual control methods, such as PID, in order to guarantee closed-loop stability in the presence of unmodeled dynamics and external disturbances. Despite these efforts, conventional linear control techniques cannot meet all performance requirements, and adaptive control appears today as an efficient strategy to study the stabilization and tracking of highly uncertain dynamical systems.

Since neural networks (NNs) have the advantages of inherent approximation capabilities and learning ability, they have been successfully implemented for the identification and control of non-linear systems [1-10]. In particular, RNNs are suitable for dynamic mappings and lead to good control performance in the presence of unmodeled dynamics [10-12].

It is well known that feedforward networks (FFNs) and RNNs can be used as components in feedback systems [13]. The control system must satisfy three main conditions: boundedness of the NN weights, boundedness of the tracking error, and stability of the global system under control. In an attempt to guarantee these criteria, a considerable research effort has concerned the design of neural network controllers with highly accurate tracking performance and strong robustness [14]. A major design technique has emerged with the use of Lyapunov theory. The main advantage of this approach is that the parameter adaptation laws ensure the asymptotic stability of the closed-loop system.

The adaptive neural controllers can be classified according to three structures: inverse model [15], direct [1,5,14] and indirect [9,10,13] control designs. Neural control schemes can also be divided into “pure” neural controllers [1,11,13] and neural controllers combined with other conventional control strategies such as backstepping and sliding mode [5,14,16]. In that case, the role of the neural networks is to approximate the nonlinear part of the input-output relation.

For multivariable nonlinear systems, due to the couplings among the various inputs and outputs, the control problem is more complex and few results are available in the literature. In [8,9], the authors presented a stable adaptive control scheme for multivariable nonlinear systems with a triangular structure using multilayer neural networks. The control design is based on an integral-type Lyapunov function and the block-triangular structure properties. These control schemes, however, cannot be extended to the general class of MIMO nonlinear systems.

The preceding works suffer from one or more of the following drawbacks: 1) the NNs used are linearly parameterised, whereas a non-linear model is necessary in order to control non-linear systems; 2) the system to be controlled has a special known structure (e.g., a triangular structure), so such results cannot be directly applied to an entirely unknown system; 3) the performances, for example stability and robustness, are only locally guaranteed; 4) the model and/or the controller are fixed at the end of the training phase, and the performances during the operational phase can be strongly degraded. It is often difficult to obtain data which cover the whole range of the process excitation (problem of sufficient excitation).

In contrast to the above, the result presented in this paper avoids these drawbacks by establishing a stable adaptive control scheme based on recurrent neural networks. In order to demonstrate the industrial interest of the method for engineers, the IDNC has been applied to and validated on a robot manipulator, which is a nonlinear, multivariable system with time-varying and/or inaccurately known parameters. The controlled process is considered as a “black box” whose operating model is completely unknown. The main advantage of the proposed method is that no model of the system has to be known. Starting from zero values, the weights, updating rates and time parameters of both the adaptive instantaneous neural model and the controller adapt themselves in order to continuously track the plant dynamics.

This paper is organized as follows. Section 2 describes the IDNC structure, the adaptation algorithm, and the stability and robustness conditions. Section 3 deals with the definition of the manipulator robot. In Section 4 the experimental results are presented. Finally, Section 5 presents our conclusions and perspectives.

2. Adaptive NN Control

The proposed control scheme is an Indirect Neural network Controller (IDNC) composed of two separate fully connected recurrent neural networks: the Neural Controller (NC) and the Adaptive instantaneous Monitoring Network (AMN) [13]. The aim of the AMN is to provide an estimation of the process output(s) over a short time window in order to drive the NC. The subscripts m and c are used to distinguish the AMN and the NC, respectively. The updating of the NC and the AMN is synchronous (in Figure 1, the dashed lines show the RTRL paths used to update the parameters of the AMN and the NC). Discrete time is considered and, for simplicity, the instant t = kΔT is referred to by the integer k, with ΔT the sampling period.

Let us define N_{IN} and N_{OUT} respectively as the number of plant inputs and outputs and assume that N_{IN} = N_{OUT}, where IN and OUT represent the set of input and output indexes.
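The indirect principle can be illustrated on a deliberately simple scalar example. The toy plant, parameter names and gradient update below are assumptions for illustration only, not the paper's RNN scheme: an adaptive model estimates the unknown plant parameter online, and the controller inverts this model at every sampling period, with all values starting from zero as in the IDNC.

```python
# Toy scalar illustration of indirect adaptive control (an assumption for
# illustration, not the paper's RNN scheme): an adaptive model estimates
# the unknown plant parameter online; the controller inverts the model.
def run_indirect_loop(steps=200, a_true=0.7, g=1.0, eta=0.5):
    a_hat, y, u = 0.0, 0.0, 0.0         # everything starts from zero values
    for _ in range(steps):
        y_next = a_true * y + u          # unknown plant: y(k+1) = a*y(k) + u(k)
        e_m = y_next - (a_hat * y + u)   # model (AMN-like) prediction error
        a_hat += eta * e_m * y           # gradient update of the model parameter
        y = y_next
        u = g - a_hat * y                # controller (NC-like): invert the model
    return y, a_hat

y_final, a_est = run_indirect_loop()     # y_final tracks g, a_est tracks a_true
```

Because the controller always inverts the current model, the model prediction error and the tracking error coincide here, and both vanish as the estimate converges.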

When the process is running, the neural networks continuously adapt. The physical measurements are compared with the neural network inputs and outputs at each time step. All the measured and manipulated variables X correspond to physical signals (angular positions, spherical coordinates, etc.) and have positive values constrained to an interval.

Figure 1. Structure of the indirect neural network control; the dimension of Y(t), Ŷ(t) and G(t) is N_{OUT}, the dimension of U(t) is N_{IN}.

The activation functions of the neurons are hyperbolic tangent functions; therefore, the network outputs evolve within a bounded interval.

The algebraic expression of this transformation is given by:

(1)
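As a hedged illustration only (the affine min-max form below is an assumption, not necessarily the paper's Equation (1)), such a scaling between a physical interval and the bounded network range can be written as:

```python
def scale_to_unit(x, x_min, x_max):
    """Affine map of a physical signal from [x_min, x_max] onto [-1, 1]
    (assumed form of the scaling; the paper's exact expression may differ)."""
    return 2.0 * (x - x_min) / (x_max - x_min) - 1.0

def unscale(s, x_min, x_max):
    """Inverse map, back to physical units."""
    return x_min + (s + 1.0) * (x_max - x_min) / 2.0
```

The inverse map is needed whenever a network output must be converted back into a physical command.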

Let us notice that there is no pre or post-training phase but only an on-line updating of the weights, time parameters and adaptation parameters.

To avoid the problem of persistent excitation and to provide an instantaneous model that adapts itself when the plant or environment changes, the AMN parameters are updated in real time; adaptation continues as long as the controller drives the process (it is not our intention to memorize the dynamics of the controlled system). The idea is to compute an instantaneous behavioural model from input and output data of the plant. This instantaneous model is used to automatically update the controller parameters in order to track the process variations. The real-time adaptation provides an efficient compensation of unpredictable process disturbances and sensor noise.

The autonomous evolution of the AMN and the NC starts from zero values. It results in a compact structure with a small number of nodes (N_{m} = N_{IN} + N_{OUT} neurons for the AMN and N_{c} = 2 × N_{OUT} neurons for the NC) [13,17]. The dynamics of the N_{m} and N_{c} neurons are given by Equations (2) and (3), respectively.

(2)

(3)

where B(k) = (B_{i}(k)) ∈ ℝ^{N_m} and D(k) = (D_{i}(k)) ∈ ℝ^{N_c} are considered constant during each sampling period, with:

(4)

(5)

if and if

G_{i}(k) corresponds to the i^{th} desired plant output to be tracked. ŷ_{i}(k), W_{ij}(k) (resp. U_{i}(k), f_{ij}(k)) and 1/τ represent, respectively, the state of the i^{th} neuron, the weight from the j^{th} to the i^{th} neuron, and the adaptive time parameter of the NC (resp. the AMN).
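A common discrete-time form for fully connected recurrent units with an adaptive time parameter is sketched below. This form is an assumption offered for intuition; Equations (2) and (3) of the paper may differ in detail.

```python
import numpy as np

def rnn_step(x, W, B, inv_tau, dt=0.01):
    """One Euler step of a fully connected recurrent layer (assumed form):
    x(k+1) = x(k) + dt * ( -x(k)/tau + tanh(W @ x(k) + B) )."""
    return x + dt * (-inv_tau * x + np.tanh(W @ x + B))

# Starting from zero states and zero parameters, as in the paper,
# the network output stays at zero until adaptation modifies W and B.
n = 4
x = rnn_step(np.zeros(n), np.zeros((n, n)), np.zeros(n), 0.0)
```

The hyperbolic tangent keeps every neuron state bounded, which matches the bounded-interval scaling of the plant signals.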

The autonomous adaptation laws of the algorithm parameters are given by Equations (6), (7) and (8).

(6)

(7)

(8)

The notation indicates a vector of the 3-D matrix P. The parameter |η| represents the learning rate of both neural networks; the remaining terms are the network sensitivity functions.
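The RTRL sensitivity recursion behind such updates can be sketched for a simplified fully recurrent network x(k+1) = tanh(W x(k)). This is a simplification of the paper's networks (no bias, no time parameter); the propagation of the 3-D sensitivity matrix P is the point being illustrated.

```python
import numpy as np

def rtrl_step(x, W, P, target, eta):
    """One RTRL update for the simplified recurrence x(k+1) = tanh(W x(k)).
    P[l, i, j] = d x_l / d W_ij is the 3-D sensitivity matrix."""
    s = W @ x
    x_new = np.tanh(s)
    d = 1.0 - x_new ** 2                      # tanh'(s)
    n = x.size
    P_new = np.zeros_like(P)
    for l in range(n):
        for i in range(n):
            for j in range(n):
                rec = W[l] @ P[:, i, j]        # sum_m W_lm * P[m, i, j]
                P_new[l, i, j] = d[l] * (rec + (x[j] if l == i else 0.0))
    e = target - x_new                         # output error
    dW = eta * np.einsum('l,lij->ij', e, P_new)  # gradient step on 0.5*||e||^2
    return x_new, W + dW, P_new
```

Because P carries the derivative of the current state with respect to every weight, the gradient is available online at each sampling period, which is what makes real-time adaptation possible.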

Stability and robustness properties of the closed loop system are important issues to be addressed. Indeed, small parameter uncertainties and external disturbances can have an unfavourable impact on performance as well as stability. In addition, the dynamic behaviour of the networks can lead to instability of the plant.

The application of the following theorems [11,17-20] guarantees the stability and robustness properties of the closed-loop system.

Theorem 1

Let |η| be the adaptive updating rate of both the model and control networks, and suppose the Lyapunov function is defined by:

Then the variation of the Lyapunov function can be expressed in terms of three parameters a, b and c that depend on the process dynamics.

The sufficient stability condition for the IDNC in the sense of Lyapunov should satisfy the following updating rate bounds:

(9)

where the bound depends on the updating procedure and must be non-negative.

Theorem 2

Let |η| be the adaptive learning rate of both the monitoring and control networks. Suppose that the sensor noise varies quickly compared to the process dynamics. The IDNC compensates the sensor noise if the following sufficient condition holds:

(10)

where a is an arbitrary threshold that depends on the tolerated noise ratio, and ‖·‖ is the usual Euclidean norm.

Theorem 3

A sufficient condition on the uncertainty parameters (ΔW, Δτ) for robust stability is given by:

(11)

where ‖·‖_∞ is the infinity norm of a matrix and K is a confidence band of uncertainties which depends on the nominal parameters of the AMN.

Figure 2 sums up the different conditions on the network parameters that ensure the robustness and stability of the control system. Theorems 1 and 2 explicitly constrain the evolution of the adaptation parameter η and indirectly the evolution of the other parameters. Theorem 3 describes the robust stability performance.

3. Control Design for the Manipulator Robot

Our proposed control algorithm is applied on a medical robot designed for dental implantation (Figure 3) [21]. The robot is a semi-active mechanical device. It has a passive arm and a motorised wrist with three degrees of freedom that are not convergent. The base of the medical robot is passive, i.e. it is not motorised and can be manipulated by the surgeon in any direction.

The aim of the controller is to guide the surgeon so that the scheduled orientation is respected.

The difficulty of the robot control lies in the calculation of the inverse model that connects the geometric variables to the joint variables. Indeed, for a given direction of the robot end-effector, the joint variables can be calculated by solving the inverse model equations, which admit multiple solutions. According to the actual position of the actuators and the calculated joint variables, the controller must select the optimal solution, which prevents large rotations of the actuators. The IDNC approximates both the direct and inverse models of the robot by

Figure 2. Updating of the network parameters, including the different conditions in the parameter space evolution.

Figure 3. Medical robot.

AMN and NC. The IDNC parameters are adapted in real time in order to minimise a quadratic cost function. This cost function is defined according to the difference between the orientation of the “robot end-effector” vector and the orientation of the “patient” vector, and also to its variation. Maintaining the same orientation between the two vectors is considered as the control objective.

The angular positions of both actuators are considered as the plant inputs. The process outputs and the desired outputs of the “robot end-effector” and “patient” vectors are given by the spherical coordinates (θ and β) (Figure 4).
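The orientation error driving the cost function can be computed directly from the spherical coordinates. The sketch below assumes θ is an azimuth and β an elevation; the paper's exact angle convention may differ.

```python
import numpy as np

def unit_vector(theta, beta):
    """Unit vector from spherical angles (assumed convention:
    theta = azimuth, beta = elevation)."""
    return np.array([np.cos(beta) * np.cos(theta),
                     np.cos(beta) * np.sin(theta),
                     np.sin(beta)])

def orientation_error(angles_robot, angles_patient):
    """Angle (rad) between the end-effector and patient vectors;
    the control objective is to drive this error to zero."""
    v1 = unit_vector(*angles_robot)
    v2 = unit_vector(*angles_patient)
    return float(np.arccos(np.clip(v1 @ v2, -1.0, 1.0)))
```

Working with the angle between unit vectors keeps the objective independent of the multiple joint-space solutions of the inverse model.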

The nonlinear process is completely unknown to our algorithm, which only needs the input and output data and does not require any knowledge of the process model. The autonomous evolution starts from zero initial conditions. It results in a compact structure: the NC has six neurons and the AMN has four neurons [20].

Figure 5 depicts the tracking performance and the angular positions of both actuators obtained with our adaptation algorithm. It can be observed from Figure 5(d) that, after a short adaptation stage due to the |η| and |τ| parameters starting from zero values, the proposed IDNC achieves good results (nmse = 3.2e-4). This results from the smoothing action of the learning-rate stabilisation constraint, which ensures that the closed-loop system remains stable.
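The reported nmse figure can be reproduced with a standard definition. Normalising by the variance of the reference signal is one common convention and is an assumption here; other normalisations exist.

```python
import numpy as np

def nmse(y, y_ref):
    """Normalised mean squared error (assumed convention:
    mean squared error divided by the variance of the reference)."""
    y = np.asarray(y, dtype=float)
    y_ref = np.asarray(y_ref, dtype=float)
    return float(np.mean((y - y_ref) ** 2) / np.var(y_ref))
```

An nmse well below 1 means the tracking error is small compared to the natural spread of the reference trajectory, which is why values on the order of 1e-4 indicate good tracking.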

Conflicts of Interest

The authors declare no conflicts of interest.

[1] | W. D. Chang, “Robust Adaptive Single Neural Control for a Class of Uncertain Nonlinear Systems with Input Nonlinearity,” Information Sciences, Vol. 171, No. 1-3, 2005, pp. 261-271. doi:10.1016/j.ins.2004.05.001 |

[2] | J. Wang, “Sensitivity Identification Enhanced Control Strategy for Nonlinear Process Systems,” Computers & Chemical Engineering, Vol. 27, No. 11, 2003, pp. 1631-1640. doi:10.1016/S0098-1354(03)00117-0 |

[3] | F. Fourati, M. Chtourou and M. Kamoun, “Stabilization of Unknown Nonlinear Systems Using Neural Networks,” Applied Soft Computing, Vol. 8, No. 2, 2008, pp. 1121-1130. doi:10.1016/j.asoc.2007.04.002 |

[4] | D. Wang and P. Bao, “Enhancing the Estimation of Plant Jacobian for Adaptive Neural Inverse Control,” Neurocomputing, Vol. 34, No. 1-4, 2000, pp. 99-115. doi:10.1016/S0925-2312(00)00319-2 |

[5] | S. S. Ge and C. Wang, “Direct Adaptive NN Control of a Class of Nonlinear Systems,” IEEE Transactions on Neural Networks, Vol. 13, No. 1, 2002, pp. 214-221. |

[6] | W. Gao and R. R. Selmic, “Neural Network Control of a Class of Nonlinear Systems with Actuator Saturation,” IEEE Transactions on Neural Networks, Vol. 17, No. 1, 2006, pp. 147-156. |

[7] | J. M. Renders, M. Saerens and H. Bersini, “Adaptive Neurocontrol of MIMO Systems Based on Stability Theory,” IEEE Colloquium on Advances in Neural Networks for Control and Systems, Orlando, 25-27 May 1994, pp. 2476-2481. |

[8] | S. S. Ge, C. C. Hang and T. Zhang, “Stable Adaptive Control for Multivariable Nonlinear Systems with a Triangular Control Structure,” IEEE Transactions on Automatic Control, Vol. 45, No. 6, 2000, pp. 1221-1225. |

[9] | S. S. Ge, C. Wang and Y. H. Tan, “Adaptive Control of Partially Known Nonlinear Multivariable Systems Using Neural Networks,” IEEE International Symposium on Proceedings of the Intelligent Control, Mexico City, 5-7 September 2001, pp. 292-297. |

[10] | L. Tian and C. Collins, “A Dynamic Recurrent Neural Network-Based Controller for a Rigid-Flexible Manipulator System,” Mechatronics, Vol. 14, No. 5, 2004, pp. 471-490. doi:10.1016/j.mechatronics.2003.10.002 |

[11] | S. Zerkaoui, F. Druaux, E. Leclercq and D. Lefebvre, “Indirect Neural Control for Plant-wide Systems: Application to the Tennessee Eastman Challenge Process,” Computers and Chemical Engineering, Vol. 34, No. 2, 2009, pp. 232-243. doi:10.1016/j.compchemeng.2009.08.003 |

[12] | R. J. Williams and D. Zipser, “A Learning Algorithm for Continually Running Fully Recurrent Neural Networks,” Neural Computation, Vol. 1, No. 2, 1989, pp. 270-280. doi:10.1162/neco.1989.1.2.270 |

[13] | E. Leclercq, F. Druaux, D. Lefebvre and S. Zerkaoui, “Autonomous Learning Algorithm for Fully Connected Recurrent Networks,” Neurocomputing, Vol. 63, 2005, pp. 25-44. doi:10.1016/j.neucom.2004.04.007 |

[14] | R. M. Sanner and J.-J. E. Slotine, “Gaussian Networks for Direct Adaptive Control,” IEEE Transactions on Neural Networks, Vol. 3, No. 6, 1992, pp. 837-863. |

[15] | D. Wang and P. Bao, “Enhancing the Estimation of Plant Jacobian for Adaptive Neural Inverse Control,” Neurocomputing, Vol. 34, No. 1-4, 2000, pp. 99-115. doi:10.1016/S0925-2312(00)00319-2 |

[16] | H. R. Wu and M. Palaniswami, “An Adaptive Tracking Controller Using Neural Networks for a Class of Nonlinear Systems,” IEEE Transactions on Neural Networks, Vol. 9, No. 5, 1998, pp. 947-955. doi:10.1109/72.712168 |

[17] | S. Zerkaoui, F. Druaux, E. Leclercq and D. Lefebvre, “Commande Adaptative par Réseau de Neurones HyperConnectés: Etude de la Stabilité et de la Robustesse,” Journées Doctorales en Modélisation, Analyse et Conduite des Systèmes Dynamiques, Lyon, 5-7 September 2005, Article ID: 024. |

[18] | S. Zerkaoui, F. Druaux, E. Leclercq and D. Lefebvre, “Stable Adaptive Control with Recurrent Neural Networks,” International Federation of Automatic Control, Prague, 4-8 July 2005, Article ID: 02103. |

[19] | S. Zerkaoui, F. Druaux, E. Leclercq and D. Lefebvre, “Robust Stability Analysis for Indirect Neural Adaptive Control,” International Control Conference, Glasgow, 30 August-1 September 2006, Article ID: 105. |

[20] | S. Zerkaoui, F. Druaux, E. Leclercq and D. Lefebvre, “Multivariable Adaptive Control for Non-Linear Systems: Application to the Tennessee Eastman Challenge Process,” European Control Conference, Kos, 2007. |

[21] | R. Chaumont, E. Vasselin, M. Gorka and D. Lefebvre, “Forward Kinematics and Geometric Control of a Medical Robot: Application to Dental Implants,” International Conference on Informatics in Control, Automation and Robotics, Angers, 9-11 May 2007, pp. 110-115. |


Copyright © 2024 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.