American Journal of Operations Research

Volume 3, Issue 6 (November 2013)

ISSN Print: 2160-8830   ISSN Online: 2160-8849

Google-based Impact Factor: 0.84

Adaptive Strategies for Accelerating the Convergence of Average Cost Markov Decision Processes Using a Moving Average Digital Filter

PP. 514-520
DOI: 10.4236/ajor.2013.36050

ABSTRACT

This paper proposes a technique to accelerate the convergence of the value iteration algorithm applied to discrete average cost Markov decision processes. An adaptive partial information value iteration algorithm is proposed that updates an increasingly accurate approximate version of the original problem, with a view to saving computations in the early iterations, when one is typically far from the optimal solution. The proposed algorithm is compared to classical value iteration for a broad set of adaptive parameters, and the results suggest that significant computational savings can be obtained while also ensuring robust performance with respect to the parameters.
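The abstract describes the algorithmic setting only at a high level. As a point of reference, the sketch below shows standard relative value iteration for a discrete average cost MDP, with a simple moving-average filter applied to the Bellman error span as an illustrative adaptive termination signal. This is not the authors' partial information scheme; the function name, the window size, the tolerance, and the toy model are assumptions introduced here for illustration only.

```python
# Minimal sketch (not the paper's algorithm): relative value iteration for an
# average cost MDP, with a moving-average filter smoothing the Bellman error
# span before it is tested against a tolerance. All parameters are hypothetical.
import numpy as np

def relative_value_iteration(P, C, window=5, tol=1e-6, max_iter=10_000):
    """P: (A, S, S) transition probabilities; C: (S, A) one-step costs."""
    n_actions, n_states, _ = P.shape
    h = np.zeros(n_states)      # relative value function, normalized so h[0] == 0
    spans = []                  # Bellman error span per iteration
    for k in range(max_iter):
        # Q[s, a] = c(s, a) + sum_{s'} P[a, s, s'] * h[s']
        Q = C + np.einsum('asj,j->sa', P, h)
        Th = Q.min(axis=1)      # Bellman operator applied to h
        diff = Th - h
        spans.append(diff.max() - diff.min())
        # Moving-average filter: smooth the span sequence so a single noisy
        # iterate neither triggers nor delays termination.
        if np.mean(spans[-window:]) < tol:
            break
        h = Th - Th[0]          # re-normalize to keep the iterates bounded
    gain = Th[0]                # since h[0] == 0, (T h)(s_0) estimates the optimal average cost
    policy = Q.argmin(axis=1)
    return gain, h, policy, k + 1

# Hypothetical 2-state, 2-action example.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.6, 0.4]]])
C = np.array([[1.0, 2.0],
              [0.5, 3.0]])
gain, h, policy, iters = relative_value_iteration(P, C)
```

Smoothing the convergence signal with a short window is used here only to make the stopping rule less sensitive to oscillations in the span sequence; the paper's adaptive strategy applies its moving average digital filter within a partial information update scheme, which is not reproduced above.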

Share and Cite:

E. Arruda and F. Ourique, "Adaptive Strategies for Accelerating the Convergence of Average Cost Markov Decision Processes Using a Moving Average Digital Filter," American Journal of Operations Research, Vol. 3, No. 6, 2013, pp. 514-520. doi: 10.4236/ajor.2013.36050.


Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.