Two Agent Paths Planning Collaboration Based on the State Feedback Stackelberg Dynamic Game

DOI: 10.4236/ojop.2013.23009


Autonomous Navigation Modules can drive a robotic platform without direct human participation, and it is common for more than one such module to share the same workspace. When an emergency occurs, these modules should achieve a desired formation in order to escape efficiently and avoid motion deadlock. We address the collaboration problem between two agents such as Autonomous Navigation Modules. A new approach to team collaborative control, based on incentive Stackelberg game theory, is presented. A procedure for finding the incentive matrices is given for the case of geometric trajectory planning and following, and a collaborative robotic architecture based on this approach is proposed. Simulation results obtained with two virtual robotic platforms show the efficiency of the approach.
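The paper's incentive matrices and state-feedback formulation are not reproduced here; as a rough illustration of the leader-follower (Stackelberg) structure the abstract refers to, the following is a minimal one-stage, scalar sketch. All cost weights, targets, and function names below are illustrative assumptions, not taken from the paper: the follower best-responds to the leader's announced action, and the leader optimizes its own quadratic cost while anticipating that response.

```python
# Minimal scalar Stackelberg (leader-follower) sketch -- illustrative only.
# Both agents influence a shared 1-D state x; each pays a quadratic cost
# on tracking error and on its own control effort.

def follower_cost(x, u_l, u_f, target_f, r_f):
    """Follower's quadratic cost: tracking error plus control effort."""
    return (x + u_l + u_f - target_f) ** 2 + r_f * u_f ** 2

def follower_best_response(x, u_l, target_f, r_f):
    """Closed-form minimizer of follower_cost over u_f, given the
    leader's announced action u_l (set d/du_f = 0)."""
    return (target_f - x - u_l) / (1.0 + r_f)

def leader_cost(x, u_l, u_f, target_l, r_l):
    """Leader's quadratic cost on the same shared state."""
    return (x + u_l + u_f - target_l) ** 2 + r_l * u_l ** 2

def leader_optimal(x, target_l, target_f, r_l, r_f):
    """Stackelberg leader action: substitute the follower's best
    response into the leader's cost and minimize over u_l.
    With u_f*(u_l) plugged in, the cost is (c + a*u_l)^2 + r_l*u_l^2."""
    a = r_f / (1.0 + r_f)            # net sensitivity of the state to u_l
    c = x + (target_f - x) / (1.0 + r_f) - target_l
    return -a * c / (a * a + r_l)

if __name__ == "__main__":
    x, t_l, t_f, r_l, r_f = 0.0, 1.0, 0.5, 0.1, 0.1
    u_l = leader_optimal(x, t_l, t_f, r_l, r_f)
    u_f = follower_best_response(x, u_l, t_f, r_f)
    print("leader:", u_l, "follower:", u_f,
          "leader cost:", leader_cost(x, u_l, u_f, t_l, r_l))
```

The key feature distinguishing this from a simultaneous (Nash) game is that `leader_optimal` optimizes *through* the follower's reaction curve; the paper's incentive mechanism goes further by letting the leader shape that reaction via state-feedback incentive terms.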

Share and Cite:

S. Kelouwani, "Two Agent Paths Planning Collaboration Based on the State Feedback Stackelberg Dynamic Game," Open Journal of Optimization, Vol. 2, No. 3, 2013, pp. 61-71. doi: 10.4236/ojop.2013.23009.

Conflicts of Interest

The authors declare no conflicts of interest.




Copyright © 2020 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.