Bias of the Random Forest Out-of-Bag (OOB) Error for Certain Input Parameters
Matthew W. Mitchell
DOI: 10.4236/ojs.2011.13024


Random Forest is an excellent classification tool, especially in the -omics sciences such as metabolomics, where the number of variables is much greater than the number of subjects, i.e., n << p. However, the choice of input parameters for the random forest implementation is very important. Simulation studies are performed to compare the effects of the input parameters on the predictive ability of the random forest. The number of variables sampled at each split, mtry, has the largest impact on the true prediction error. It is often claimed that the out-of-bag (OOB) error is an unbiased estimate of the true prediction error. However, in the case where n << p, with the default arguments, the OOB error overestimates the true error; that is, the random forest actually performs better than the OOB error indicates. This bias is greatly reduced by subsampling without replacement and drawing the same number of observations from each group. Even after these adjustments, however, a small amount of bias remains: when trees have equal predictive ability, the one that performs better on the in-bag samples performs worse on the out-of-bag samples. Cross-validation can be performed to reduce the remaining bias.
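The comparison described above can be sketched on a simulated n << p "null" dataset, where the labels carry no real signal and the true error rate is therefore 50%. This sketch uses scikit-learn rather than the R randomForest package studied in the paper, so the parameter names (max_features as the analogue of mtry, oob_score) are scikit-learn's; the dimensions and seed are arbitrary assumptions for illustration, not values from the paper.

```python
# Sketch: OOB error vs. cross-validated error on a null n << p dataset.
# With no signal and balanced groups, the true error is 0.5; the paper's
# claim is that with default arguments the OOB error tends to exceed it.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
n, p = 40, 1000                       # n << p, as in metabolomics-style data
X = rng.standard_normal((n, p))       # pure noise predictors
y = np.repeat([0, 1], n // 2)         # balanced groups, independent of X

rf = RandomForestClassifier(
    n_estimators=500,
    max_features="sqrt",              # scikit-learn's analogue of mtry
    oob_score=True,                   # compute the out-of-bag estimate
    random_state=0,
).fit(X, y)
oob_error = 1.0 - rf.oob_score_

# Stratified cross-validation keeps the groups balanced in each fold,
# mirroring the paper's suggestion of CV as a less biased alternative.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
cv_error = 1.0 - cross_val_score(rf, X, y, cv=cv).mean()

print(f"OOB error:       {oob_error:.3f}")
print(f"5-fold CV error: {cv_error:.3f}")
```

Note that scikit-learn does not directly expose the paper's proposed fix of per-tree subsampling without replacement with equal group sizes (R randomForest's `replace=FALSE` with `sampsize`); the closest lever is the `max_samples` argument, which still bootstraps by default.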

Share and Cite:

M. Mitchell, "Bias of the Random Forest Out-of-Bag (OOB) Error for Certain Input Parameters," Open Journal of Statistics, Vol. 1 No. 3, 2011, pp. 205-211. doi: 10.4236/ojs.2011.13024.

Conflicts of Interest

The author declares no conflicts of interest.


Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.