Journal of Software Engineering and Applications, 2013, 6, 519-525
http://dx.doi.org/10.4236/jsea.2013.610062 Published Online October 2013 (http://www.scirp.org/journal/jsea)
Enhanced Face Detection Technique Based on Color
Correction Approach and SMQT Features
Mohamed A. El-Sayed1,2, Nora G. Ahmed3
1Department of Mathematics, Faculty of Science, Fayoum University, Al Fayoum, Egypt; 2Department of Computer Science, Taif
University, Al Hawiyah, KSA; 3Department of Mathematics, Faculty of Science, Sohag University, Sohag, Egypt.
Email: mas06@fayoum.edu.eg
Received August 2nd, 2013; revised September 1st, 2013; accepted September 8th, 2013
Copyright © 2013 Mohamed A. El-Sayed, Nora G. Ahmed. This is an open access article distributed under the Creative Commons
Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is
properly cited.
ABSTRACT
Face detection is a challenging problem in the field of image analysis and computer vision. Much research has been done in this area, but because of its importance the topic still needs further development. The face detection technique based on the Successive Mean Quantization Transform (SMQT), for illumination- and sensor-insensitive operation, combined with the Sparse Network of Winnows (SNoW) to speed up the classifier, has shown good results. In this paper we use the Mean of Medians of CbCr (MMCbCr) color correction approach to enhance this combined SMQT features and SNoW classifier face detection technique. The proposed technique is applied to color images gathered from various sources such as the Internet and the Georgia database. Experimental results show that the face detection performance of the proposed method is more effective and accurate than that of the SFSC method.
Keywords: Face Detection; Color Correction; MMCbCr; SMQT Features
1. Introduction
Face detection is a computer technology that determines
the locations and sizes of human faces in digital images.
It detects facial features and ignores anything else, such
as buildings, trees and bodies [1-3]. In recent years, face recognition has attracted much attention and its research has rapidly expanded, involving not only engineers but also neuroscientists, since it has many potential applications in computer vision, communication and automatic access control systems. In particular, face detection is an important part of face recognition, being the first step of automatic face recognition. However, face detection is not straightforward because faces exhibit many variations in image appearance, such as pose variation (frontal, non-frontal), occlusion, image orientation, illumination conditions and facial expression [4,5].
Up to now, much work has been done in detecting and
locating faces in images and there are many face detec-
tion methods, such as SMQT Features and SNoW Classi-
fier Method (SFSC) [6], Efficient and Rank Deficient
Face Detection Method (ERDFD) [7], Gabor-Feature Ex-
traction and Neural Network Method (GFENN) [8], the Efficient Face Candidates Selector Features Method (EFCSF) [9] and neural-network-based methods [10].
Colors of the images provide useful information for
many vision applications. As a result, different cameras
typically produce different color values for the same ob-
jects or scenes, as illustrated in Figure 1. These differ-
ences complicate the task of computer vision applications
involving the use of more than one camera. A color cor-
rection approach is thus required to correct the images so
that colors of the same object appear to be similar in the
output from each camera [11]. There were a number of
Color Correction approaches, including GW approach,
WP approach, MGWWP approach, Stretch approach and
MMCbCr approach [12].
In this paper, we use the SFSC method for face detection: local SMQT features are used for object feature extraction and a SNoW classifier is used for training. We found that this method can be enhanced by applying the MMCbCr color correction approach to the input images, which improves the face detection process.
The outline of the paper is as follows. An introduction to face detection methods is presented in Section 1. Section 2 discusses the challenges in face detection techniques. Section 3 explains the proposed method that uses a color correction approach to enhance the SFSC face
detection method. Section 4 describes the stage of local
SMQT features. Section 5 presents the concept of split
up SNoW classifier. Section 6 explains the face detection
training and classification. In Section 7, we have pre-
sented the effectiveness of proposed algorithm. The pro-
posed technique is applied on color images gathered
from various sources such as Internet, UCD Face Image
Database and Georgia Database. Also, we compare the
results of the algorithm with SFSC method. Conclusions
are presented in Section 8.
2. Challenges in Face Detection Techniques
The problem is further complicated by differing lighting
conditions, image qualities and geometries, as well as the
possibility of partial occlusion and disguise. An ideal
face detector would therefore be able to detect the pres-
ence of any face under any set of lighting conditions,
orientation, and camera distance upon any background.
Ming-Hsuan et al. [1] summarize the challenges associated with face detection in terms of the following factors:
1) Pose: the images of a face vary due to the relative
camera-face pose (frontal, 45 degree, profile, upside
down), and some facial features such as an eye or the
nose may become partially or wholly occluded.
2) Presence or absence of structural components: facial
features such as beards, mustaches, and glasses may or
may not be present and there is a great deal of variability
among these components including shape, color, and
size.
3) Facial expression: the appearance of faces is di-
rectly affected by a person’s facial expression.
4) Occlusion: faces may be partially occluded by other
objects. In an image with a group of people, some faces
may partially occlude other faces.
5) Image orientation: face images directly vary for dif-
ferent rotations about the camera’s optical axis.
6) Imaging conditions: when the image is formed, fac-
tors such as lighting (spectra, source distribution and
intensity) and camera characteristics (sensor response,
lenses) may change face appearance in the image. Image
condition includes also size, lighting condition, distortion,
noise, and compression.
7) Face size: the size of faces also makes it difficult to automate a system for face detection and recognition.
Figure 1. Images captured by three different cameras.
8) Background variation: this is another challenging factor for face detection in cluttered scenes. Discriminating windows that contain a face from non-face windows is more difficult when no constraints exist on the background.
Some closely related problems of face detection [1]:
1) Face localization: aims to determine the image posi-
tion of a single face; this is a simplified detection prob-
lem with the assumption that an input image contains
only one face.
2) Face recognition or face identification: compares an
input image (probe) against a database (gallery) and re-
ports a match, if any.
3) Face authentication is to verify the claim of the
identity of an individual in an input image.
4) Face tracking methods continuously estimate the
location and possibly the orientation of a face in an im-
age sequence in real time.
5) Facial expression recognition concerns identifying
the affective states (happy, sad, disgusted, etc.) of hu-
mans.
6) A feature is a piece of information relevant for solving the computational task related to a certain application; features are measurable, heuristic properties of the phenomena being observed.
3. Proposed Method
In the proposed method, the goal is to detect the presence of faces in an image using the MMCbCr color correction approach and the SFSC method, detecting faces against both uniform and non-uniform background colors of the scene. The method is able to localize faces of different sizes in images taken under varying illumination conditions.
The phases of the proposed method are illustrated in Figure 2.
3.1. Color Correction Phase
In this phase we use Mean of Medians of CbCr Color
Correction approach (MMCbCr) to correct the input im-
ages. The Y component contains the luminance informa-
tion and the chrominance information is found in the
chrominance blue Cb and in the chrominance red Cr.

Figure 2. The phases of the proposed method: MMCbCr color correction followed by SFSC face detection.

The
RGB components were converted to the YCbCr compo-
nents using the following formula [12,13].
Y = 0.257R + 0.504G + 0.098B + 16
Cb = -0.148R - 0.291G + 0.439B + 128
Cr = 0.439R - 0.368G - 0.071B + 128
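For illustration only, a minimal NumPy sketch of this conversion (the function name and array layout are our own, not from the paper):

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an H x W x 3 uint8 RGB image to Y, Cb, Cr planes (float64),
    using the coefficients given above."""
    rgb = rgb.astype(np.float64)
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    Y  =  0.257 * R + 0.504 * G + 0.098 * B + 16.0
    Cb = -0.148 * R - 0.291 * G + 0.439 * B + 128.0
    Cr =  0.439 * R - 0.368 * G - 0.071 * B + 128.0
    return Y, Cb, Cr
```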
The following steps summarize MMCbCr approach:
1) Transform the given image from RGB to YCbCr
color model.
2) Calculate the median values median (Cb), median
(Cr) for Cb and Cr color component, and maximum
value max(Y) in Y.
3) Calculate the mean values mean (Cb), mean (Cr) for
Cb and Cr color component.
4) Calculate Value = (Median(Cb) + Median(Cr)) / 2.
5) For all pixels (i, j) of the image calculate Ynew, Cbnew, and Crnew:
Ynew(i, j) = Y(i, j) · 235 / Max(Y)
Cbnew(i, j) = Cb(i, j) · Value / Mean(Cb)
Crnew(i, j) = Cr(i, j) · Value / Mean(Cr)
6) Transform the image components Ynew, Cbnew, and Crnew to Rnew, Gnew, Bnew.
7) Apply histogram equalization on Rnew, Gnew, Bnew separately.
8) Combine Rnew, Gnew, Bnew to get the final color image.
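A rough NumPy sketch of steps 1-8, reusing rgb_to_ycbcr from above. This is our own illustration, not the authors' code: the per-pixel correction is written as multiplicative scaling (Y toward 235, Cb and Cr toward Value), the inverse transform uses the standard ITU-R BT.601 coefficients, and the helper names are hypothetical.

```python
import numpy as np

def ycbcr_to_rgb(Y, Cb, Cr):
    """Standard ITU-R BT.601 inverse of the conversion above (step 6)."""
    R = 1.164 * (Y - 16) + 1.596 * (Cr - 128)
    G = 1.164 * (Y - 16) - 0.813 * (Cr - 128) - 0.392 * (Cb - 128)
    B = 1.164 * (Y - 16) + 2.017 * (Cb - 128)
    return np.clip(np.stack([R, G, B], axis=-1), 0, 255).astype(np.uint8)

def equalize(channel):
    """Plain histogram equalization of one uint8 channel (step 7)."""
    hist = np.bincount(channel.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) * 255.0 / max(cdf.max() - cdf.min(), 1.0)
    return cdf[channel].astype(np.uint8)

def mmcbcr_correct(rgb):
    """MMCbCr color correction, steps 1-8 as we read them."""
    Y, Cb, Cr = rgb_to_ycbcr(rgb)                      # step 1
    value = (np.median(Cb) + np.median(Cr)) / 2.0      # steps 2-4
    Y_new  = Y  * (235.0 / Y.max())                    # step 5 (assumed scaling)
    Cb_new = Cb * (value / Cb.mean())
    Cr_new = Cr * (value / Cr.mean())
    rgb_new = ycbcr_to_rgb(Y_new, Cb_new, Cr_new)      # step 6
    channels = [equalize(rgb_new[..., c]) for c in range(3)]
    return np.stack(channels, axis=-1)                 # steps 7-8
```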
3.2. Face Detection Phase
In this phase, we use the SFSC method to localize faces in the input images. There are three stages: 1) local SMQT features used for object feature extraction, 2) the SNoW classifier used for training, and 3) face detection training and classification.
4. Local SMQT Features
The SMQT performs an automatic structural breakdown
of information. These properties will be employed on
local areas in an image to extract illumination insensitive
features. Local areas can be defined in several ways.
Once the local area is defined it will be a set of pixel
values.
SMQT_L : D(x) → M(x)    (1)
where x is one pixel and D(x) is a set of |D(x)| = D pixels in a local area of an image. The resulting values are
insensitive to gain and bias. These properties are desir-
able with regard to the formation of the whole intensity
image I(x) which is a product of the reflectance R(x) and
the luminance E(x). Additionally, the influence of the
camera can be modeled as a gain factor g and a bias term
b [14]. Thus, a model of the image can be described by
I(x) = gE(x)R(x) + b    (2)
In order to design a robust classifier for object detec-
tion the reflectance should be extracted since it contains
the object structure. In general, the separation of the re-
flectance and the luminance is an ill posed problem. A
common approach to solving this problem involves as-
suming that E(x) is spatially smooth. Further, if the luminance can be considered to be constant in the chosen local area, then E(x) is given by
E(x) = E, ∀x ∈ D(x)    (3)
Given the validity of Equation (3), the SMQT on the
local area will yield illumination and camera-insensitive
features. This implies that all local patterns which con-
tain the same structure will yield the same SMQT fea-
tures for a specified level L.
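As a concrete (and simplified) illustration of why the local SMQT is insensitive to gain and bias: at level L = 1 the transform of a local area reduces to comparing each pixel with the mean of that area, which is unchanged when the whole area is rescaled by g > 0 or shifted by b. A minimal sketch with our own naming, not taken from the paper:

```python
import numpy as np

def smqt_level1(local_area):
    """Level-1 SMQT of a local area D(x): output 1 where a pixel exceeds the
    mean of the area, else 0. Gain and bias cancel out in the comparison."""
    area = np.asarray(local_area, dtype=np.float64)
    return (area > area.mean()).astype(np.uint8)

# The same structure yields the same feature under gain g = 3 and bias b = 7.
patch = np.array([[10, 20, 30], [40, 50, 60], [70, 80, 90]])
assert np.array_equal(smqt_level1(patch), smqt_level1(3 * patch + 7))
```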
5. Split up SNoW Classifier
The SNoW learning architecture is a sparse network of linear units
over a feature space. One of the strong properties of
SNoW is the possibility to create lookup-tables for clas-
sification. Consider a patch W of the SMQT features M(x); then a classifier
Σ_{x∈W} h_x^nonface(M(x)) − Σ_{x∈W} h_x^face(M(x)) ≶ θ    (4)
can be achieved using the nonface table h_x^nonface, the face table h_x^face and a threshold θ. Since both tables work on the same domain, this implies that one single lookup-table
h_x = h_x^nonface − h_x^face    (5)
can be created for single lookup-table classification.
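To make Equations (4) and (5) concrete, here is a toy sketch of single lookup-table classification; the array layout, function name and the sign convention (a sum below θ taken as "face") are our assumptions:

```python
import numpy as np

def classify_patch(feature_indices, h, theta):
    """Sum the single-table responses h[x, M(x)] over all positions x in the
    patch W (Eqs. 4-5) and compare against the threshold theta.
    feature_indices[x] holds the SMQT feature index at patch position x."""
    score = sum(h[x, m] for x, m in enumerate(feature_indices))
    return score < theta   # assumed convention: small nonface-minus-face sum => face
```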
The training database contains feature patches i = 1, 2, ..., N with the SMQT features M_i(x) and the corresponding classes c_i (face or nonface). The nonface table and the face table can then be trained with the Winnow update rule. Initially both tables contain zeros. If an index in the table is addressed for the first time during training, the value (weight) at that index is set to one. There are three training parameters: the threshold γ, the promotion parameter α > 1 and the demotion parameter 0 < β < 1.
If Σ_{x∈W} h_x^face(M_i(x)) ≤ γ and c_i is a face, then promotion is conducted as follows:
h_x^face(M_i(x)) ← α h_x^face(M_i(x)), ∀x ∈ W    (6)
If c_i is a nonface and Σ_{x∈W} h_x^face(M_i(x)) > γ, then demotion takes place:
h_x^face(M_i(x)) ← β h_x^face(M_i(x)), ∀x ∈ W    (7)
This procedure is repeated until no changes occur.
Training of the non face table is performed in the same
manner, and finally the single table is created according
to Equation (5). One way to speed up the classification in
object recognition is to create a cascade of classifiers
[15]. Here the full SNoW classifier will be split up into sub-classifiers to achieve this goal. Note that there is no additional training of the sub-classifiers; instead, the full classifier is divided. Consider all possible feature combinations for one feature, P_i, i = 1, 2, ..., 2^{LD}; then
v(x) = Σ_{i=1}^{2^{LD}} |h_x(P_i)|, ∀x ∈ W    (8)
results in a relevance value with respective significance
to all features in the feature patch. Sorting all the feature
relevance values in the patch will result in an importance
list. Let W' ⊂ W be a subset chosen to contain the features with the largest relevance values. Then
Σ_{x∈W'} h_x(M(x))    (9)
can function as a weak classifier, rejecting no faces
within the training database, but at the cost of an in-
creased number of false detections. The desired threshold θ' is found from the face in the training database
that results in the lowest classification value from Equa-
tion (9). Extending the number of sub classifiers can be
achieved by selecting more subsets and performing the
same operations as described for one sub classifier. Con-
sider any division, according to the relevance values, of the full set W' ⊂ W'' ⊂ ... ⊂ W. Then W' has fewer features and more false detections compared to W'', and so forth in the same manner until the full classifier is reached. One of the advantages of this division is that W'' will use the sum result from W'. Hence, the maximum number of summations and lookups in the table will be the number of features in the patch W.
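The splitting idea of Equations (8) and (9) can be sketched as follows; the names, split points and early-rejection convention (a sum that already exceeds the stage threshold rejects the window) are our assumptions, intended only to show how partial sums are reused from one sub-classifier to the next:

```python
import numpy as np

def relevance(h):
    """Eq. (8): v(x) = sum over all possible feature values P_i of |h_x(P_i)|."""
    return np.abs(h).sum(axis=1)

def split_up_classify(feature_indices, h, order, splits, thresholds):
    """Evaluate the classifier in stages. `order` lists patch positions sorted by
    decreasing relevance, `splits` gives the cumulative number of features used
    by each sub-classifier, and `thresholds[k]` is the stage threshold.
    The running score is carried over between stages, so the total number of
    lookups never exceeds the number of features in the patch W."""
    score, used = 0.0, 0
    for n_feat, theta_k in zip(splits, thresholds):
        for x in order[used:n_feat]:              # only the newly added features
            score += h[x, feature_indices[x]]
        used = n_feat
        if score >= theta_k:                      # early rejection (assumed convention)
            return False
    return True                                   # survived every stage: face

# usage sketch: order = np.argsort(-relevance(h)); splits = [20, 50, 100, 200, 648]
```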
6. Face Detection Training and Classification
The face detector analyzes image patches of 32 × 32 pixels. Each patch is extracted and classified by stepping Δx = 1 and Δy = 1 pixels through the whole image. In order to find faces of various sizes, the image is repeatedly downscaled and resized with a scale factor Sc = 1.2.
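This scan amounts to a sliding 32 × 32 window over a 1.2× image pyramid. A minimal self-contained sketch (our own; any proper image resizer could replace the nearest-neighbour helper):

```python
import numpy as np

def downscale(gray, factor):
    """Nearest-neighbour resize by 1/factor; good enough for this sketch."""
    h, w = gray.shape
    rows = (np.arange(int(h / factor)) * factor).astype(int)
    cols = (np.arange(int(w / factor)) * factor).astype(int)
    return gray[np.ix_(rows, cols)]

def scan_image(gray, classify, patch=32, scale=1.2):
    """Slide a patch x patch window with step 1 pixel over a scale-1.2 pyramid.
    `classify` is any function mapping a 32 x 32 patch to True (face) / False."""
    detections, s = [], 1.0
    img = gray
    while min(img.shape) >= patch:
        for y in range(img.shape[0] - patch + 1):          # delta_y = 1
            for x in range(img.shape[1] - patch + 1):      # delta_x = 1
                if classify(img[y:y + patch, x:x + patch]):
                    # map the window back to original-image coordinates
                    detections.append((int(x * s), int(y * s), int(patch * s)))
        img = downscale(img, scale)
        s *= scale
    return detections
```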
To overcome the illumination and sensor problem, the
proposed local SMQT features are extracted. Each pixel
will get one feature vector by analyzing its vicinity. This
feature vector can further be recalculated to an index:
m = Σ_{i=1}^{D} V(x_i) · 2^{L(i-1)}    (10)
where V(x_i) is the value of the feature vector at position i. This feature index can be calculated for all pixels, which results in the feature indices image. A circular mask containing P = 648 pixels is applied to each patch to remove background pixels, avoid edge effects from possible filtering, and avoid undefined pixels during rotation operations.
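For instance, with L = 1 and a small local area, Equation (10) simply packs the per-pixel SMQT values into one integer index. The exponent convention below is our reading of the formula, not a quotation of it:

```python
def feature_index(V, L=1):
    """Eq. (10), as we read it: m = sum_i V(x_i) * 2^(L*(i-1)), i = 1..D."""
    return sum(v * (2 ** (L * i)) for i, v in enumerate(V))

# A 9-pixel level-1 feature vector gives an index in [0, 511]:
print(feature_index([1, 0, 1, 1, 0, 0, 1, 0, 1]))   # 333
```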
The face and nonface tables are trained with the pa-
rameters α = 1.005, β = 0.995 and γ = 200. The two
trained tables are then combined into one table according
to Equation (5). Given the SNoW classifier table, the
proposed split up SNoW classifier is created. The splits
are here performed on 20, 50, 100, 200 and 648 summa-
tions. This setting will remove over 90% of the back-
ground patches in the initial stages from video frames
recorded in an office environment.
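As an illustration of how a table could be trained with the Winnow update rule and the parameters above, here is a toy sketch; the data layout, the epoch cap and the "set to one on first use" handling are our assumptions:

```python
import numpy as np

def train_face_table(patches, labels, n_positions, n_indices,
                     alpha=1.005, beta=0.995, gamma=200.0, max_epochs=100):
    """Winnow training of the face table (Eqs. 6-7). patches[k] is the list of
    feature indices of training patch k; labels[k] is True for a face.
    The nonface table is trained in the same way with the labels inverted."""
    h = np.zeros((n_positions, n_indices))
    for _ in range(max_epochs):
        changed = False
        for feats, is_face in zip(patches, labels):
            score = sum(h[x, m] for x, m in enumerate(feats))
            if is_face and score <= gamma:                   # promotion (Eq. 6)
                for x, m in enumerate(feats):
                    h[x, m] = alpha * h[x, m] if h[x, m] > 0 else 1.0
                changed = True
            elif not is_face and score > gamma:              # demotion (Eq. 7)
                for x, m in enumerate(feats):
                    h[x, m] *= beta
                changed = True
        if not changed:          # repeat until no changes occur
            break
    return h
```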
Overlapped detections are pruned using geometrical
location and classification scores. Each detection is
tested against all other detections. If one of the area
overlap ratios is over a fixed threshold, then the different
detections are considered to belong to the same face.
Given that two detections overlap each other, the detec-
tion with the highest classification score is kept and the
other one is removed. This procedure is repeated until no
more overlapping detections are found.
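This pruning is essentially a greedy non-maximum suppression on the overlap ratio. A sketch, with a hypothetical overlap threshold and with the overlap ratio defined against the smaller detection (the text does not specify the exact ratio):

```python
def prune_overlaps(detections, scores, overlap_thr=0.5):
    """Keep only the highest-scoring detection among mutually overlapping ones.
    Each detection is (x, y, size) in original-image coordinates."""
    def overlap(a, b):
        ax, ay, asz = a
        bx, by, bsz = b
        ix = max(0, min(ax + asz, bx + bsz) - max(ax, bx))
        iy = max(0, min(ay + asz, by + bsz) - max(ay, by))
        return (ix * iy) / float(min(asz, bsz) ** 2)

    order = sorted(range(len(detections)), key=lambda i: scores[i], reverse=True)
    kept = []
    for i in order:
        if all(overlap(detections[i], detections[j]) <= overlap_thr for j in kept):
            kept.append(i)
    return [detections[i] for i in kept]
```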
7. Experimental Discussion & Results
Our experiments were performed using Matlab ver. 7.4 on a 2.13 GHz CPU to verify the effectiveness of the proposed
method. The proposed method is applied to 150 color images gathered from various sources such as the Internet, the UCD Face Image Database and the Georgia Database. These images vary in size, lighting effects, uniform and non-uniform backgrounds, the number of persons in each image and the rotation angle of each person. Figure 3 shows some of the outputs obtained by applying the proposed method and the SFSC method to the test images of Figure 4.
As can be seen in Figure 3, the face detection performance of the proposed method is better than that of the SFSC method. Figure 5 illustrates the comparison between the proposed method and the SFSC method in terms of face detection rate, false positive rate and false negative rate.
As can be seen in Figure 5, the face detection rate of the proposed method is better than that of the SFSC method. The proposed method detected approximately 84.1% of the faces correctly, whereas the SFSC method detected approximately 74.6% of the faces correctly. Moreover, the false positive and false negative rates of the proposed method are lower than those of the SFSC method: for the proposed method the false positive rate is 10.4% and the false negative rate is 15.9%, while for the SFSC method the false positive rate is 22.0% and the false negative rate is 25.4%. Figure 6 compares the detection time of the two methods over the 150 images; as can be seen, the detection time of the proposed method is slightly higher.
Figure 3. Detected faces after applying the SFSC method (left column) and the proposed method (right column).
Figure 4. Samples of test images.
Figure 5. Comparison of the two methods in terms of detection rate, false positive rate and false negative rate (percent).
Figure 6. Detection time of the two methods over the 150 test images.
8. Conclusion
In this paper, we presented a new approach for face detection that combines the MMCbCr color correction approach with the SFSC face detection method. The experiment was carried out on 150 color images obtained from different sources, including the Internet and the Georgia Database, using Matlab 7.4. The experimental results show that the proposed method is more effective and accurate than the SFSC face detection method.
REFERENCES
[1] Y. Ming-Hsuan, K. David and A. Narendra, “Detecting
Faces in Images: A Survey,” IEEE Transactions on Pat-
tern Analysis and Machine Intelligence, Vol. 24, No. 1,
2002, pp. 34-58. http://dx.doi.org/10.1109/34.982883
[2] I. Kim, J. Hyung Shim and J. Yang, “Face Detection,”
Stanford University, 2010.
http://www.stanford.edu/class/ee368/Project_03/Project/r
eports/ee368group02.pdf
[3] M. A. El-Sayed and N. Aboelwafa, “Study of Face Rec-
ognition Approach Based on Similarity Measures,” Inter-
national Journal of Computer Science Issues (IJCSI), Vol.
9, No. 2, 2012, pp. 133-139.
[4] M. A. El-Sayed and M. A. Khafagy, “An Identification
System Using Eye Detection Based on Wavelets and
Neural Networks,” International Journal of Computer
and Information Technology, Vol. 1, No. 2, 2012, pp. 43-
48.
[5] M. A. El-Sayed, “Edges Detection Based on Renyi En-
tropy with Split/Merge,” Computer Engineering and In-
telligent Systems (CEIS), Vol. 3, No. 9, 2012, pp. 32-41.
[6] M. Nilsson, J. Nordberg and I. Claesson, “Face Detection
Using Local SMQT Features and Split up SNoW Classifier,” IEEE International Conference on Acoustics, Speech,
and Signal Processing (ICASSP), Vol. 2, 2007, pp. 589-
592.
[7] W. Kienzle, G. Bakir, M. Franz and B. Schölkopf, “Face
Detection—Efficient and Rank Deficient,” In: Y. Weiss,
Ed., Advances in Neural Information Processing Systems,
Vol. 17, MIT Press, Cambridge, 2005, pp. 673-680.
[8] Z. Shaaban, “Face Detection Methods,” World Scientific
and Engineering Academy and Society (WSEAS), 2011.
[9] J. Wu and Z.-H. Zhou, “Efficient Face Candidates Selec-
tor for Face Detection,” Pattern Recognition, Vol. 36, No.
5, 2003, pp. 1175-1186.
http://dx.doi.org/10.1016/S0031-3203(02)00165-6
[10] H. A. Rowley, S. Baluja and T. Kanade, “Neural Net-
work-Based Face Detection,” IEEE Transactions on Pat-
tern Analysis and Machine Intelligence, Vol. 20, No. 1,
1998, pp. 23-28. http://dx.doi.org/10.1109/34.655647
[11] J. Yin and J. R. Cooperstock, “Color Correction Methods
with Applications to Digital Projection Environments,”
Journal of WSCG, 2004, in press.
[12] M. A. Berbar, “Novel Colors Correction Approaches for
Natural Scenes and Skin Detection Techniques,” Interna-
tional Journal of Video & Image Processing and Network
Security IJVIPNS-IJENS, Vol. 11, No. 2, 2011, pp. 1-10.
[13] E. Prathibha, A. Manjunath and R. Likitha, “RGB to
YCbCr Color Conversion Using VHDL Approach,” In-
ternational Journal of Engineering Research and Devel-
opment, Vol. 1, No. 3, 2012, pp. 15-22.
[14] B. Froba and A. Ernst, “Face Detection with the Modified
Census Transform,” 6th IEEE International Conference
on Automatic Face and Gesture Recognition, Seoul, 17-
19 May 2004, pp. 91-96.
[15] P. Viola and M. Jones, “Rapid Object Detection Using a
Boosted Cascade of Simple Features,” Proceedings of the
2001 IEEE Computer Society Conference on Computer
Vision and Pattern Recognition (CVPR), Vol. 1, 2001, pp.
511-518.