Lyapunov-Based Dynamic Neural Network for Adaptive Control of Complex Systems

Abstract

In this paper, an adaptive neuro-control structure for complex dynamic systems is proposed. A recurrent neural network is trained off-line to learn the inverse dynamics of the system from observations of its input-output data. Once this training is complete, a direct adaptive approach is applied: a Lyapunov-based training algorithm is proposed and used to adjust the network weights on-line so that the neural model output follows the desired one. The simulation results obtained verify the effectiveness of the proposed control method.
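
The abstract describes the method only at a high level, so the following is a minimal, illustrative sketch rather than the authors' algorithm. It assumes a neural model that is linear in its output-layer weights, y_hat = w · phi(x), a tracking error e = y_d − y_hat, and the quadratic Lyapunov candidate V = e²/2; the normalized-gradient rule shown is one standard on-line update that shrinks the error on the current sample for step sizes 0 < eta < 2. All function and variable names are hypothetical.

import numpy as np

def hidden_features(x, centers, widths):
    """Fixed nonlinear hidden layer (Gaussian units) producing phi(x)."""
    return np.exp(-((x - centers) / widths) ** 2)

def lyapunov_update(w, phi, y_desired, eta=0.5):
    """One on-line adaptation step of the output-layer weights.

    With V = 0.5 * e**2 and the normalized-gradient step below, the error
    recomputed on the same sample after the update is strictly smaller in
    magnitude for 0 < eta < 2 (illustrative rule, not the paper's).
    """
    y_hat = w @ phi                                  # current neural model output
    e = y_desired - y_hat                            # tracking error driving V
    w_new = w + eta * e * phi / (1.0 + phi @ phi)    # Lyapunov-motivated update
    return w_new, e

# Toy usage: adapt a 10-unit network on-line to follow a reference signal.
rng = np.random.default_rng(0)
centers = np.linspace(-1.0, 1.0, 10)
widths = 0.3 * np.ones(10)
w = rng.normal(scale=0.1, size=10)   # in the paper, off-line training would supply these

for k in range(200):
    x = np.sin(0.05 * k)             # measured plant signal (illustrative)
    y_d = np.cos(0.05 * k)           # desired output at step k
    phi = hidden_features(x, centers, widths)
    w, e = lyapunov_update(w, phi, y_d)

The division by 1 + phi · phi bounds the effective step size, which is what makes the per-sample decrease of V straightforward to establish for this class of update.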

Share and Cite:

F. Zouari, K. Ben Saad and M. Benrejeb, "Lyapunov-Based Dynamic Neural Network for Adaptive Control of Complex Systems," Journal of Software Engineering and Applications, Vol. 5, No. 4, 2012, pp. 225-248. doi: 10.4236/jsea.2012.54028.

Conflicts of Interest

The authors declare no conflicts of interest.
