Tree Model Optimization Criterion without Using Prediction Error

Abstract

Optimizing the number of splitting rules in a tree model by prediction error does not control the probability that splitting rules emerge whose predictor has no functional relationship with the target variable. To solve this problem, a new optimization method is proposed. Under this method, the probability that the predictors used in the splitting rules of the optimized tree model have no functional relationship with the target variable is confined to less than 0.05. This gives strong grounds for regarding the tree model produced by the new method as representing knowledge contained in the data.
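The abstract does not spell out the criterion's construction in this excerpt. As a purely hypothetical illustration of the underlying idea, the sketch below accepts a candidate split only when a permutation test bounds at 0.05 the probability that a predictor with no functional relationship to the target would achieve an equally large reduction in squared error. The function names (`best_split_gain`, `split_p_value`) and the choice of a permutation test are assumptions for illustration, not the paper's actual method.

```python
import random

def sse(y):
    """Sum of squared deviations from the mean."""
    if not y:
        return 0.0
    m = sum(y) / len(y)
    return sum((v - m) ** 2 for v in y)

def best_split_gain(x, y):
    """Largest SSE reduction over all threshold splits on predictor x."""
    pairs = sorted(zip(x, y))
    total = sse([p[1] for p in pairs])
    best = 0.0
    for i in range(1, len(pairs)):
        if pairs[i][0] == pairs[i - 1][0]:
            continue  # no threshold strictly between equal x values
        left = [p[1] for p in pairs[:i]]
        right = [p[1] for p in pairs[i:]]
        best = max(best, total - sse(left) - sse(right))
    return best

def split_p_value(x, y, n_perm=199, seed=0):
    """Permutation p-value: the chance that a predictor unrelated to y
    would achieve an SSE reduction at least as large as the observed one."""
    rng = random.Random(seed)
    observed = best_split_gain(x, y)
    yy = list(y)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(yy)  # break any x-y relationship
        if best_split_gain(x, yy) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

def accept_split(x, y, alpha=0.05):
    """Adopt the splitting rule only if the spurious-split probability
    is confined below alpha, rather than by prediction error."""
    return split_p_value(x, y) < alpha
```

In this sketch, a tree would be grown by calling `accept_split` at each node and stopping when no predictor passes the 0.05 threshold, so the retained rules are unlikely to involve predictors unrelated to the target.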

Share and Cite:

K. Takezawa, "Tree Model Optimization Criterion without Using Prediction Error," Open Journal of Statistics, Vol. 2 No. 5, 2012, pp. 478-483. doi: 10.4236/ojs.2012.25061.

Conflicts of Interest

The author declares no conflicts of interest.

Copyright © 2024 by authors and Scientific Research Publishing Inc.

Creative Commons License

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.