A Genetic Based Fuzzy Q-Learning Flow Controller for High-Speed Networks

Abstract

A genetic-based fuzzy Q-learning flow controller is proposed for the congestion problem in high-speed networks. Because high-speed networks are uncertain and highly time-varying, complete and accurate information about them is difficult to obtain. In this setting, Q-learning, which requires neither a mathematical model nor prior knowledge, performs well. Fuzzy inference is introduced to facilitate generalization over the large state space, and genetic operators are used to tune the consequent parts of the fuzzy rules. Simulation results show that the proposed controller learns to take the best action to regulate source flow, achieving high throughput and a low packet loss ratio, and effectively avoids congestion.
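To make the combination of the three ingredients in the abstract concrete, the Python sketch below illustrates one plausible reading of a fuzzy Q-learning controller whose rule consequents are evolved by genetic operators. It is a minimal sketch under simplifying assumptions, not the paper's formulation: a toy single-bottleneck queue stands in for the network, triangular membership functions over normalized queue length define the rule antecedents, each rule's consequent stores one q-value per candidate source-rate action, and a small genetic algorithm (elitist selection, one-point crossover, Gaussian mutation) evolves the consequent tables with episode return as fitness. All names, the environment dynamics, the reward, and the parameters are illustrative assumptions.

```python
# Hypothetical sketch: fuzzy Q-learning with genetically tuned rule consequents.
# The queue model, reward, and all constants are illustrative, not from the paper.
import random

ACTIONS = [0.6, 0.8, 1.0, 1.2]          # assumed source-rate scaling factors
CENTERS = [0.0, 0.25, 0.5, 0.75, 1.0]   # triangular MF centers over queue occupancy

def memberships(x):
    """Normalized triangular membership degrees of queue occupancy x in [0, 1]."""
    degs = [max(0.0, 1.0 - abs(x - c) / 0.25) for c in CENTERS]
    s = sum(degs) or 1.0
    return [d / s for d in degs]

def q_value(rules, x, a):
    """Fuzzy Q(s, a): firing-strength-weighted sum of the rule consequents."""
    w = memberships(x)
    return sum(w[i] * rules[i][a] for i in range(len(CENTERS)))

def step(queue, action, capacity=1.0, load=0.9):
    """Toy bottleneck dynamics: arrivals scaled by the chosen action minus service."""
    arrivals = load * ACTIONS[action]
    queue = min(1.0, max(0.0, queue + 0.1 * (arrivals - capacity)))
    # Reward keeps the buffer near a set point and penalizes overflow (loss proxy).
    reward = -abs(queue - 0.3) - (5.0 if queue >= 1.0 else 0.0)
    return queue, reward

def run_episode(rules, alpha=0.1, gamma=0.9, eps=0.1, steps=200):
    """One Q-learning episode; updates the consequents in place, returns total reward."""
    queue, total = 0.5, 0.0
    for _ in range(steps):
        a = (random.randrange(len(ACTIONS)) if random.random() < eps
             else max(range(len(ACTIONS)), key=lambda k: q_value(rules, queue, k)))
        nxt, r = step(queue, a)
        target = r + gamma * max(q_value(rules, nxt, k) for k in range(len(ACTIONS)))
        delta = target - q_value(rules, queue, a)
        w = memberships(queue)
        for i in range(len(CENTERS)):       # credit each rule by its firing strength
            rules[i][a] += alpha * w[i] * delta
        queue, total = nxt, total + r
    return total

def crossover(p1, p2):
    """One-point crossover on the rule-consequent tables."""
    cut = random.randrange(1, len(CENTERS))
    return [list(p1[i]) if i < cut else list(p2[i]) for i in range(len(CENTERS))]

def mutate(rules, rate=0.1, scale=0.2):
    """Gaussian mutation of individual consequent entries."""
    for row in rules:
        for a in range(len(ACTIONS)):
            if random.random() < rate:
                row[a] += random.gauss(0.0, scale)

# Evolve a small population of consequent tables; fitness is the episode return.
population = [[[0.0] * len(ACTIONS) for _ in CENTERS] for _ in range(8)]
for gen in range(20):
    ranked = sorted(population, key=run_episode, reverse=True)
    elite = ranked[: len(ranked) // 2]
    children = []
    while len(elite) + len(children) < len(population):
        child = crossover(random.choice(elite), random.choice(elite))
        mutate(child)
        children.append(child)
    population = elite + children
```

In this reading, Q-learning adapts the consequents within an episode while the genetic operators search across rule bases between episodes; the paper's actual split between learning and evolution may differ.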

Share and Cite:

X. LI, Y. JING, N. JIANG and S. ZHANG, "A Genetic Based Fuzzy Q-Learning Flow Controller for High-Speed Networks," International Journal of Communications, Network and System Sciences, Vol. 2 No. 1, 2009, pp. 84-89. doi: 10.4236/ijcns.2009.21010.

Conflicts of Interest

The authors declare no conflicts of interest.


Copyright © 2024 by authors and Scientific Research Publishing Inc.

Creative Commons License

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.