NeuroEvolutionary Feature Selection Using NEAT


The larger a dataset, structured or unstructured, the harder it is to understand and use. Feature selection is fundamental to machine learning: by removing irrelevant and redundant features, it dramatically reduces the run time of a learning algorithm and leads to a more general concept. In this paper, we investigate feature selection through a neural-network-based algorithm aided by a topology-optimizing genetic algorithm. We use NeuroEvolution of Augmenting Topologies (NEAT) to select the subset of features most relevant to the target concept. Discovery and improvement of solutions are two main goals of machine learning, but the accuracy of both varies with the dimensionality of the problem space. Feature selection methods can improve this accuracy, yet problem complexity can also degrade their performance. Artificial neural networks have proven effective for feature elimination, but because most networks have a fixed topology, they lose accuracy when the problem contains many local minima. To minimize this drawback, the network topology should be flexible and able to escape local minima, especially after a feature is removed. In this work, the power of feature selection through the NEAT method is demonstrated. Compared with the evolution of fixed-structure networks, NEAT discovers significantly more sophisticated strategies. The results show that NEAT provides better accuracy than a conventional Multi-Layer Perceptron and leads to improved feature selection.
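To make the wrapper-style idea concrete, the sketch below evolves a population of binary feature masks with a simple genetic algorithm, scoring each mask by the accuracy of a nearest-centroid classifier on synthetic data. This is a minimal illustration of genetic feature selection under assumed parameters (population size, mutation rate, sparsity penalty), not the authors' NEAT implementation, which additionally evolves the network topology itself.

```python
import random

random.seed(0)

# Hypothetical toy data: 6 features, only the first two carry class signal.
def make_data(n=200, n_features=6):
    X, y = [], []
    for _ in range(n):
        label = random.randint(0, 1)
        row = [random.gauss(2.0 * label, 0.5) if j < 2 else random.gauss(0, 1)
               for j in range(n_features)]
        X.append(row)
        y.append(label)
    return X, y

def accuracy(X, y, mask):
    # Nearest-centroid classifier restricted to the selected features.
    sel = [j for j, m in enumerate(mask) if m]
    if not sel:
        return 0.0
    cent = {}
    for c in (0, 1):
        rows = [x for x, lbl in zip(X, y) if lbl == c]
        cent[c] = [sum(r[j] for r in rows) / len(rows) for j in sel]
    correct = 0
    for x, lbl in zip(X, y):
        dist = {c: sum((x[j] - cent[c][k]) ** 2 for k, j in enumerate(sel))
                for c in (0, 1)}
        correct += min(dist, key=dist.get) == lbl
    return correct / len(y)

def evolve(X, y, n_features, pop_size=20, generations=15, penalty=0.01):
    # Chromosome = bitmask over features; fitness rewards accuracy,
    # lightly penalizing large subsets (assumed penalty weight).
    pop = [[random.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    def fitness(mask):
        return accuracy(X, y, mask) - penalty * sum(mask)
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]            # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_features)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:              # bit-flip mutation
                i = random.randrange(n_features)
                child[i] = 1 - child[i]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

X, y = make_data()
best = evolve(X, y, n_features=6)
print(best)
```

The design choice to penalize mask size mirrors the goal stated above: a smaller feature subset yields a more general concept and a cheaper learning run, provided accuracy is preserved.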

Share and Cite:

Sohangir, S., Rahimi, S. and Gupta, B. (2014) NeuroEvolutionary Feature Selection Using NEAT. Journal of Software Engineering and Applications, 7, 562-570. doi: 10.4236/jsea.2014.77052.

Conflicts of Interest

The authors declare no conflicts of interest.



Copyright © 2022 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.