Identification of Textile Defects Based on GLCM and Neural Networks


In the modern textile industry, Tissue Online Automatic Inspection (TAI) is becoming an attractive alternative to Human Vision Inspection (HVI). HVI demands a sustained high level of attention, yet still yields low inspection performance. Built on the co-occurrence matrix and its statistical features as an approach to identifying textile defects in digital images, TAI can potentially provide an objective and reliable evaluation of fabric production quality. The goal of most TAI systems is to detect the presence of faults in textiles and to accurately locate the defects. The motivation behind fabric defect identification is to enable on-line quality control of the weaving process. In this paper, we propose a method based on texture analysis and neural networks to identify textile defects. A feature extractor is designed based on the Gray Level Co-occurrence Matrix (GLCM), and a neural network is used as a classifier to identify the defects. Numerical simulation showed recognition rates of 100% for training, and 100% and 91% for the best and worst test cases, respectively.
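The feature extractor described above can be sketched in a few lines: a co-occurrence matrix counts how often pairs of gray levels occur at a fixed pixel offset, and a handful of Haralick-style statistics (energy, contrast, homogeneity, entropy) summarize it into a feature vector that a neural network classifier can consume. The sketch below is a minimal, illustrative implementation in plain NumPy; the function names, the choice of offset, and the particular four statistics are our assumptions, not the exact configuration used in the paper.

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Normalized gray-level co-occurrence matrix for the offset (dx, dy).

    `image` is a 2-D array of integer gray levels in [0, levels).
    Entry (i, j) is the probability that a pixel of level i has a
    neighbor of level j at the given offset.
    """
    mat = np.zeros((levels, levels), dtype=np.float64)
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                mat[image[y, x], image[y2, x2]] += 1.0
    total = mat.sum()
    return mat / total if total else mat

def haralick_features(p):
    """Four common GLCM statistics: energy, contrast, homogeneity, entropy."""
    i, j = np.indices(p.shape)
    energy = np.sum(p ** 2)                          # uniformity of the texture
    contrast = np.sum(p * (i - j) ** 2)              # local gray-level variation
    homogeneity = np.sum(p / (1.0 + (i - j) ** 2))   # closeness to the diagonal
    nz = p[p > 0]
    entropy = -np.sum(nz * np.log2(nz))              # randomness of the texture
    return np.array([energy, contrast, homogeneity, entropy])
```

For a perfectly uniform patch the matrix collapses to a single cell, so energy is 1 while contrast and entropy are 0; a defective region with abrupt gray-level transitions pushes mass off the diagonal, raising contrast and entropy. In practice one would compute such a vector for several offsets and angles and feed the concatenated features to the classifier.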

Share and Cite:

Azim, G. (2015) Identification of Textile Defects Based on GLCM and Neural Networks. Journal of Computer and Communications, 3, 1-8. doi: 10.4236/jcc.2015.312001.

Conflicts of Interest

The author declares no conflicts of interest.

Copyright © 2022 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.