Convergence of Invariant Measures of Truncation Approximations to Markov Processes

Abstract

Let Q be the Q-matrix of an irreducible, positive recurrent Markov process on a countable state space. We show that, under a number of conditions, the stationary distributions of the n × n north-west corner augmentations of Q converge in total variation to the stationary distribution of the process. Two conditions that each guarantee such convergence are exponential ergodicity and stochastic monotonicity of the process. Convergence also holds for processes dominated by a stochastically monotone Markov process. In addition, we show that finite perturbations of stochastically monotone processes may be viewed as being dominated by a stochastically monotone process, which extends the scope of these results to a larger class of processes. Consequently, the augmentation method provides an attractive, intuitive way of approximating the stationary distributions of a large class of Markov processes on countably infinite state spaces from a finite amount of known information.
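To illustrate the augmentation method described in the abstract, the following is a minimal sketch (not taken from the paper) using the M/M/1 queue, a birth-death process that is stochastically monotone and, when the traffic intensity rho = lambda/mu is below 1, positive recurrent with the geometric stationary distribution pi_j = (1 - rho) rho^j, so it falls within the conditions discussed. The function names, the specific queue parameters, and the choice of "first-column" augmentation (returning the truncated-away rates to state 0) are illustrative assumptions; the paper treats general augmentations.

```python
import numpy as np

def mm1_truncated_generator(n, lam, mu):
    """n x n north-west corner of the M/M/1 queue's Q-matrix (states 0..n-1).

    Diagonal entries keep the full exit rates of the infinite chain, so the
    last row has a rate deficit of `lam` (the transition to state n is lost).
    """
    Q = np.zeros((n, n))
    for i in range(n):
        if i + 1 < n:
            Q[i, i + 1] = lam                      # arrival
        if i > 0:
            Q[i, i - 1] = mu                       # service completion
        Q[i, i] = -(lam + (mu if i > 0 else 0.0))  # total exit rate of the infinite chain
    return Q

def augmented_stationary(Qn, column=0):
    """Hypothetical first-column augmentation: add each row's rate deficit to
    `column` so the truncation becomes a conservative generator, then solve
    pi A = 0 with pi summing to 1."""
    A = Qn.copy()
    A[:, column] += -A.sum(axis=1)        # return the truncated-away rates to `column`
    n = A.shape[0]
    M = np.vstack([A.T, np.ones(n)])      # bordered system: A^T pi = 0 and 1^T pi = 1
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(M, b, rcond=None)
    return pi

lam, mu = 1.0, 2.0                        # rho = 1/2 < 1: positive recurrent
rho = lam / mu
for n in (5, 10, 20, 40):
    pi_n = augmented_stationary(mm1_truncated_generator(n, lam, mu))
    exact = (1 - rho) * rho ** np.arange(n)
    tv = 0.5 * (np.abs(pi_n - exact).sum() + rho ** n)   # rho**n is the exact tail mass beyond the truncation
    print(f"n = {n:2d}   total-variation error ~ {tv:.2e}")
```

Under these assumptions, the printed total-variation errors shrink as n grows, which is the kind of convergence the paper establishes for the stochastically monotone and exponentially ergodic cases.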

Cite as:

A. Hart and R. Tweedie, "Convergence of Invariant Measures of Truncation Approximations to Markov Processes," Applied Mathematics, Vol. 3, No. 12A, 2012, pp. 2205-2215. doi: 10.4236/am.2012.312A301.

Conflicts of Interest

The authors declare no conflicts of interest.
