A Novel Operational Partition between Neural Network Classifiers on Vulnerability to Data Mining Bias
Charles Wong
Theta Rhythms, LLC, Cambridge, USA.
DOI: 10.4236/jsea.2014.74027

Abstract

Selecting appropriately and effectively from the vast pool of existing neural network machine learning predictive models, whether for industrial deployment or for academic research and enhancement, is difficult if not impossible. When every model outperforms all the others under some disparate set of circumstances, none of them meaningfully does. Selecting the ideal model becomes a matter of ill-supported opinion ungrounded in the extant real-world environment. This paper proposes a novel partition of the model pool, grounded in non-stationary real-world data, into two groups: Permanent Data Learning and Reversible Data Learning. It further proposes a novel approach to demonstrating, qualitatively and quantitatively, the significant differences between these groups based on how each handles dynamic, raw real-world data versus static, prescient, data-mining-biased laboratory data. Results across 2040 separate simulation runs using 15,600 data points in realistically and operationally controlled data environments show that the two-group division is effective and significant, with clear qualitative, quantitative, and theoretical support. Results across the empirical and theoretical spectrum are internally and externally consistent, yet demonstrate why and how this result is non-obvious.

Share and Cite:

Wong, C. (2014) A Novel Operational Partition between Neural Network Classifiers on Vulnerability to Data Mining Bias. Journal of Software Engineering and Applications, 7, 264-272. doi: 10.4236/jsea.2014.74027.

Conflicts of Interest

The author declares no conflicts of interest.


Copyright © 2023 by authors and Scientific Research Publishing Inc.

Creative Commons License

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.