
With the continuous development of technology, face recognition has come to play an important role in police work. Obtaining a clear face image requires image preprocessing. This paper proposes an illumination compensation technique and a symmetry-based reconstruction technique.

Using face recognition technology, police officers can conduct case investigations, determine a suspect's route, delineate the scope of a suspect's activities, and link related cases. In the traditional video surveillance mode, video scouts must first review footage around the scene to pick out suspects, then trace their movements from camera to camera, and finally confirm a suspect's identity by visiting key locations along that route. This consumes large amounts of police manpower and time, and timeliness is poor. Face recognition technology can quickly extract a person's face from large volumes of video and compare it against an existing face database to confirm identity; it can also automatically trace the person's movements across cameras. Applying portrait recognition in video surveillance improves identification efficiency through standardized, programmed operations, supports effective case detection, and saves verification time when tracing suspects and dangerous persons.

At present, much research focuses on face feature extraction and recognition, but little addresses image preprocessing. Jiao Linan [

Face recognition technology digitally stores image information for feature extraction, optimization, retrieval, comparison, and big-data operations. Key processes include face acquisition and detection, face image preprocessing, face feature extraction, and face recognition classification.

Due to interference from lighting, angle, distance, etc., the original face image often cannot be used directly. Image preprocessing must be performed after portrait detection. Its main purposes are to eliminate noise, enhance contrast, obtain a portrait image with suitable contrast and noise levels, allow the image to be cropped and analyzed, and normalize the size and position of the facial features. Preprocessing operations generally include grayscale conversion, binarization, geometric correction, filtering, restoration, and enhancement of face images [

Surveillance video from streets or large shopping malls is often affected by lighting, resulting in unsatisfactory picture quality. In scenes with a wide illumination range, a frame may contain both shadows and bright areas, which adversely affects feature extraction. Uneven illumination causes highlights or shadows on the face image; even for the same face, the gray histogram differs greatly under different illumination. As

To reduce the influence of illumination, this paper combines logarithmic transformation with wavelet transform. First, a logarithmic transformation that matches the characteristics of the human visual system is applied to the face image, which enhances the dark regions [

The algorithm flow is shown in

The human visual system responds logarithmically to the RGB tristimulus signal. This logarithmic relationship is expressed as follows:

$g(x, y) = a + \dfrac{\ln\left(f(x, y) + 1\right)}{b \ln c}$ (1)

The parameters a, b, and c are introduced to correct the shape of the curve; the +1 in f(x, y) + 1 ensures that the logarithm is always defined.

Experience shows that taking $a = 0$, $b = \dfrac{1}{255 \ln 1.2}$, and $c = 255$ compensates effectively for illumination.
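As a minimal sketch, the compensation of Equation (1) with these empirical parameters might be implemented as follows (the function name and NumPy usage are our own; depending on the image, the result may need rescaling to the full [0, 255] range):

```python
import numpy as np

def log_compensate(img, a=0.0, c=255.0):
    """Logarithmic illumination compensation, Eq. (1):
    g(x, y) = a + ln(f(x, y) + 1) / (b * ln c),
    with the empirical parameters a = 0, b = 1/(255 ln 1.2), c = 255."""
    b = 1.0 / (255.0 * np.log(1.2))
    f = img.astype(np.float64)
    g = a + np.log(f + 1.0) / (b * np.log(c))
    return np.clip(g, 0, 255).astype(np.uint8)
```

The logarithm expands dark gray levels relative to bright ones, which is why the dark parts of the face are enhanced.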

The logarithmic transformation is introduced by the visual perception model of Equation (1) and can be described as follows: let the gray level of the original image f(x, y) be r, transformed into another image g(x, y) by the nonlinear function s = T(r), with histograms P(r) and P(s) respectively. Because a global logarithmic transformation compresses contrast in the high-grayscale range, we instead use the POPLE algorithm proposed by Jiao Linan [

Wavelet analysis decomposes a signal into a series of successive approximations obtained by translating or stretching a mother wavelet function. This paper uses the Gabor wavelet transform. Gabor wavelets closely resemble the simple-cell visual stimulus responses of the human visual system: they are sensitive to image edges, provide good orientation and scale selectivity, and are insensitive to illumination changes, so they adapt well to lighting variation and extract useful facial features.

A typical 2-D Gabor function h(x, y) is [

$h(x, y) = \dfrac{1}{2\pi\sigma_x\sigma_y}\exp\left(-\dfrac{1}{2}\left(\dfrac{x^2}{\sigma_x^2} + \dfrac{y^2}{\sigma_y^2}\right)\right)\cdot\exp(2\pi j w x)$ (2)

Using h(x, y) as the mother wavelet, a family of self-similar wavelets is obtained by applying appropriate scale and rotation transformations:

$h_{mn}(x, y) = a^{-m}\,h(x', y'), \quad a > 1, \; m, n \in \mathbb{Z}$ (3)

where $x' = a^{-m}(x\cos\theta + y\sin\theta)$, $y' = a^{-m}(-x\sin\theta + y\cos\theta)$, $\theta = \dfrac{n\pi}{K}$; $a^{-m}$ is a scale factor; S and K are the numbers of scales and orientations; $0 \le m \le S-1$, $0 \le n \le K-1$. Different wavelet functions are obtained by varying the values of m and n.
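A small Gabor filter bank following Equations (2)-(3) could be sketched as below; the kernel size and parameter values are illustrative choices of ours, not taken from the paper:

```python
import numpy as np

def gabor_kernel(size, sigma_x, sigma_y, w, a, m, n, K):
    """Self-similar Gabor wavelet h_mn from Eqs. (2)-(3):
    the mother wavelet scaled by a^-m and rotated by theta = n*pi/K."""
    theta = n * np.pi / K
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
    # Rotated and scaled coordinates x', y'
    xp = a ** (-m) * (x * np.cos(theta) + y * np.sin(theta))
    yp = a ** (-m) * (-x * np.sin(theta) + y * np.cos(theta))
    # Mother wavelet h(x', y') of Eq. (2), complex-valued
    envelope = np.exp(-0.5 * (xp ** 2 / sigma_x ** 2 + yp ** 2 / sigma_y ** 2))
    carrier = np.exp(2j * np.pi * w * xp)
    h = envelope * carrier / (2.0 * np.pi * sigma_x * sigma_y)
    return a ** (-m) * h  # scale factor of Eq. (3)

# Build a small bank: S = 3 scales, K = 4 orientations
bank = [gabor_kernel(15, 3.0, 3.0, 0.2, np.sqrt(2.0), m, n, 4)
        for m in range(3) for n in range(4)]
```

Convolving the face image with each kernel of the bank yields responses at S × K scale/orientation combinations.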

Wiener filtering uses the minimum mean-square error criterion. The goal is to find a convolution function g(t) yielding an estimate $\hat{x}(t)$ that minimizes the squared error between $\hat{x}(t)$ and x(t), where y(t) is the observed signal containing noise (

$\hat{x}(t) = g(t) * y(t)$ (4)

Of the four components obtained after the wavelet transform, the low-frequency component contains most of the approximation information of the original image and is robust; it is denoised with a Wiener filter. The three high-frequency components mainly carry the edge information of the image and are less stable; they are processed with the Canny edge detector. The inverse wavelet transform is then applied to the processed components to restore the image.
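The decompose-filter-reconstruct pipeline can be illustrated as follows. For brevity this sketch makes several substitutions of our own: a one-level Haar DWT stands in for the paper's wavelet choice, SciPy's `wiener` filters the low band, and a simple magnitude threshold stands in for the Canny step on the high bands:

```python
import numpy as np
from scipy.signal import wiener

def denoise_via_wavelet(img):
    """One-level Haar DWT: Wiener-filter the robust low-frequency band,
    threshold the three unstable high-frequency (edge) bands, then
    reconstruct with the inverse transform."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    LL = (a + b + c + d) / 4.0   # approximation (low-frequency) band
    LH = (a - b + c - d) / 4.0   # horizontal detail
    HL = (a + b - c - d) / 4.0   # vertical detail
    HH = (a - b - c + d) / 4.0   # diagonal detail
    LL = wiener(LL, mysize=3)    # noise suppression on the robust band
    def keep_edges(band, k=1.0):
        # keep only strong (edge-like) responses, a stand-in for Canny
        t = k * np.std(band)
        return np.where(np.abs(band) > t, band, 0.0)
    LH, HL, HH = keep_edges(LH), keep_edges(HL), keep_edges(HH)
    # Inverse Haar transform restores the image
    out = np.zeros_like(img, dtype=np.float64)
    out[0::2, 0::2] = LL + LH + HL + HH
    out[0::2, 1::2] = LL - LH + HL - HH
    out[1::2, 0::2] = LL + LH - HL - HH
    out[1::2, 1::2] = LL - LH - HL + HH
    return out
```

The same flow applies unchanged if the Haar transform is replaced by another wavelet and the threshold by a full Canny detector.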

It is known that the face can be regarded as a basically symmetrical structure [

a frontal face or one with slight deflection. The method of this paper judges whether occlusion is present by integral projection, determines the center line of the face on that basis, divides the face into four parts (upper, lower, left, right), and applies gradient descent optimization [

Integral projection is a target-localization method based on gray-level statistics. Vertical and horizontal integral projections are the most common; this paper adds a diagonal projection. The obtained integral curves are smoothed to remove ripple, the troughs are located, and occlusion is then judged from the symmetry of the integral curve. As shown in (a) of
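The projection-and-trough idea might be sketched like this; the smoothing window and the trough criterion for the center line are illustrative choices of ours:

```python
import numpy as np

def integral_projections(gray):
    """Vertical, horizontal and diagonal integral projections of a
    grayscale image, smoothed with a short moving average so spurious
    ripples do not create false troughs."""
    g = gray.astype(np.float64)
    rows, cols = g.shape
    v = g.sum(axis=0)                        # vertical projection (per column)
    h = g.sum(axis=1)                        # horizontal projection (per row)
    diag = np.array([np.trace(g, offset=k)   # diagonal projection
                     for k in range(-(rows - 1), cols)])
    def smooth(p, k=5):
        pad = np.pad(p, k // 2, mode='edge')
        return np.convolve(pad, np.ones(k) / k, mode='valid')
    return smooth(v), smooth(h), smooth(diag)

def centre_column(v_projection):
    """Candidate face center line: the deepest trough of the smoothed
    vertical projection."""
    return int(np.argmin(v_projection))
```

Comparing the curve left and right of the located trough gives a symmetry measure from which occlusion can be judged.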

After dividing the original image into four parts, we denote the upper-left and upper-right parts by A1 and A2, and the lower-left and lower-right parts by B1 and B2. Suppose the original image can be represented as an m × n matrix, where a and b are the coordinates of the center point; subscripts 1 and 2 distinguish the upper and lower halves. Then A1, A2, B1, B2 can be expressed as:

$A_1 = \begin{bmatrix} F_{1,1} \\ \vdots \\ F_{1,a} \end{bmatrix} \quad A_2 = \begin{bmatrix} F_{1,a+1} \\ \vdots \\ F_{1,m} \end{bmatrix} \quad B_1 = \begin{bmatrix} F_{2,1} \\ \vdots \\ F_{2,a} \end{bmatrix} \quad B_2 = \begin{bmatrix} F_{2,a+1} \\ \vdots \\ F_{2,m} \end{bmatrix}$ (5)
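In NumPy terms, the partition of Equation (5) amounts to simple slicing; the index conventions below (column split at a, row split at b) are our assumption:

```python
import numpy as np

def split_quadrants(img, a, b):
    """Split a face image at center point (a, b) into the four blocks of
    Eq. (5): A1/A2 are the upper-left/upper-right parts, B1/B2 the
    lower-left/lower-right parts."""
    A1, A2 = img[:b, :a], img[:b, a:]
    B1, B2 = img[b:, :a], img[b:, a:]
    return A1, A2, B1, B2
```

Taking (a, b) from the center line found by integral projection makes the four blocks line up with the face's symmetry axes.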

Using gradient descent, A1 and A2 are made to approximate each other, as are B1 and B2. Introducing the L2 norm, the distance between A1 and A2 can be expressed as Equation (6); A1 and A2 are updated iteratively to minimize L(A1, A2):

$L(A_1, A_2) = \left\| A_1 - A_2 \right\|_2^2$ (6)

The traditional gradient function is

$\nabla L(A_1) = 2(A_1 - A_2)$ (7)

Updating A1 and A2 with Equation (7) is slow and does not reflect the difference between the two half-faces. We observe that the half with poor illumination or occlusion is darker and of lower image quality; assume this half is A2. This paper proposes a new iterative function aimed at the poorer-quality part:

$A_1^{t+1} = A_1^t - \alpha \left( A_1^t - A_2^t \right)$ (8)

$A_2^{t+1} = A_2^t - \alpha \left( A_2^t - A_1^t \right) + \left( A_2^t - A_2^{t-1} \right) \displaystyle\int_0^{A_2^t} \dfrac{1}{\sqrt{2\pi}\,\sigma} \exp\left( -\dfrac{(x-\mu)^2}{2\sigma^2} \right) dx$ (9)

$A_1^{t+1}$ and $A_2^{t+1}$ denote the values of A1 and A2 at iteration t + 1. The step size α is introduced for stability and is taken as β/t, where β is a constant and t is the current iteration number; $\sigma = \dfrac{A_2^t - A_2^{t-1}}{2}$, $\mu = \dfrac{A_2^t + A_2^{t-1}}{2}$.

The iterative update stops when either of the following conditions is met:

1) the preset number of iterations is exceeded;

2) $\left\| A_1^{t+1} - A_1^t \right\| < \varepsilon$, where ε is a very small positive number.
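A sketch of the iteration of Equations (8)-(9) with the stopping rules above; reading the integral element-wise as a Gaussian probability, flooring σ away from zero, and all parameter defaults are our own assumptions:

```python
import numpy as np
from scipy.special import erf

def norm_cdf(x, mu, sigma):
    """Phi((x - mu)/sigma), element-wise; sigma floored to avoid division by 0."""
    sigma = np.maximum(sigma, 1e-8)
    return 0.5 * (1.0 + erf((x - mu) / (sigma * np.sqrt(2.0))))

def symmetric_update(A1, A2, beta=1.0, max_iter=100, eps=1e-4):
    """Iterate Eqs. (8)-(9) so the two half-faces approach each other.
    A2 is assumed to be the poorer-quality half; its extra momentum-like
    term is weighted by the Gaussian probability of Eq. (9)."""
    A1, A2 = A1.astype(np.float64), A2.astype(np.float64)
    A2_prev = A2.copy()
    for t in range(1, max_iter + 1):
        alpha = beta / t                              # decaying step size
        A1_new = A1 - alpha * (A1 - A2)               # Eq. (8)
        mu = (A2 + A2_prev) / 2.0
        sigma = np.abs(A2 - A2_prev) / 2.0
        weight = norm_cdf(A2, mu, sigma) - norm_cdf(0.0, mu, sigma)
        A2_new = A2 - alpha * (A2 - A1) + (A2 - A2_prev) * weight  # Eq. (9)
        converged = np.linalg.norm(A1_new - A1) < eps  # stopping condition 2)
        A1, A2_prev, A2 = A1_new, A2, A2_new
        if converged:
            break
    return A1, A2
```

The extra term in Equation (9) pushes the poor-quality half harder when its successive iterates still differ, while the decaying α = β/t stabilizes the later iterations.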

The same procedure is applied to B1 and B2 (

To validate the proposed method, experiments were conducted on the Yale A face database and on photos taken by ourselves. The Yale A face database contains 15 people, each with 11 pictures, 165 images in total. Our own photos cover 10 people, 11 photos per person, under different lighting conditions and with occlusions. For each person, 8 photos were taken as training samples and the rest as test samples (

| | FIRST | SECOND | THIRD | FOURTH |
---|---|---|---|---|
| Not processed | 65.32% | 71.86% | 72.31% | 72.35% |
| After lighting correction | 86.54% | 84.32% | 89.12% | 86.89% |
| Based on symmetric reconstruction | 70.89% | 67.32% | 86.32% | 84.65% |
| Correction + reconstruction | 89.65% | 93.83% | 92.65% | 80.12% |

The experiments show that the symmetry-based image reconstruction method still needs improvement: when the face is deflected, deviations occur that affect feature extraction, and block artifacts appear when the tiles are merged. Face recognition on illumination-corrected images, however, works well.

The two methods proposed in this paper improve image quality and detection rate to some extent. The method for locating the center line still needs improvement, and a 3D streamline model could be introduced in later research.

The use of science and technology in policing has become irresistible; it aids case investigation, frees manpower, and saves time. Applying face recognition technology in police affairs can improve operational efficiency, promote the development of the technology, and further serve practical police work.

Supported by: Provincial key research and development projects in 2018 (2018C01084).

The authors declare no conflicts of interest regarding the publication of this paper.

Chen, Y., Chen, A., Jiang, Z.W. and Zhong, J.F. (2019) Surveillance Video Portrait Recognition Preprocessing Technology for Police Actual Combat. Open Journal of Applied Sciences, 9, 394-402. https://doi.org/10.4236/ojapps.2019.95033