Gaussian Mixture Models for Human Face Recognition under Illumination Variations

Abstract

The appearance of a face is severely altered by illumination conditions, which makes automatic face recognition a challenging task. In this paper we propose a Gaussian Mixture Model (GMM)-based human face identification technique built in the Fourier (frequency) domain that is robust to illumination changes and, unlike many existing methods, does not require illumination normalization (removal of illumination effects) before it is applied. The importance of the Fourier-domain phase in human face identification is well established in signal processing. Identification is performed with a maximum a posteriori (MAP) estimate based on the posterior probability, achieving misclassification error rates as low as 2% on a database containing images of 65 individuals under 21 different illumination conditions. Furthermore, a misclassification rate of 3.5% is observed on the Yale database, with 10 people and 64 different illumination conditions. Both sets of results are significantly better than those obtained from traditional PCA and LDA classifiers. Statistical analysis pertaining to model selection is also presented.
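To make the pipeline sketched in the abstract concrete, the following Python snippet illustrates one plausible implementation of GMM-based identification on Fourier-phase features with a MAP decision rule. It is a minimal sketch, not the authors' exact method: the helper name extract_phase_features, the PCA dimensionality, and the number of mixture components are illustrative assumptions.

```python
# Minimal sketch (assumed settings, not the paper's exact pipeline):
# per-subject GMMs fitted to Fourier-phase features, MAP classification.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture


def extract_phase_features(images):
    """Flatten the 2D Fourier phase of each grayscale image (H x W) into a vector."""
    return np.asarray([np.angle(np.fft.fft2(img)).ravel() for img in images])


class PhaseGMMClassifier:
    def __init__(self, n_components=2, n_pca=30):
        self.n_components = n_components          # mixture components per subject (assumed)
        self.pca = PCA(n_components=n_pca)        # dimensionality reduction (assumed)
        self.models = {}                           # one GMM per subject
        self.log_priors = {}                       # log prior probability of each subject

    def fit(self, images, labels):
        X = self.pca.fit_transform(extract_phase_features(images))
        labels = np.asarray(labels)
        for subject in np.unique(labels):
            Xs = X[labels == subject]
            gmm = GaussianMixture(n_components=self.n_components,
                                  covariance_type="diag", reg_covar=1e-3)
            gmm.fit(Xs)
            self.models[subject] = gmm
            self.log_priors[subject] = np.log(len(Xs) / len(X))
        return self

    def predict(self, images):
        X = self.pca.transform(extract_phase_features(images))
        subjects = list(self.models)
        # MAP rule: argmax over subjects of log p(x | subject) + log p(subject)
        scores = np.column_stack(
            [self.models[s].score_samples(X) + self.log_priors[s] for s in subjects])
        return [subjects[i] for i in scores.argmax(axis=1)]
```

With enrollment images of each subject under a few lighting conditions, `PhaseGMMClassifier().fit(train_images, train_labels).predict(test_images)` returns the MAP-identified subject for each probe image.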

Share and Cite:

S. Mitra, "Gaussian Mixture Models for Human Face Recognition under Illumination Variations," Applied Mathematics, Vol. 3 No. 12A, 2012, pp. 2071-2079. doi: 10.4236/am.2012.312A286.

Conflicts of Interest

The author declares no conflicts of interest.
