Algorithm Research on Moving Object Detection of Surveillance Video Sequence *

In video surveillance, moving object tracking suffers from many interference factors, such as target changes, complex scenes, and target deformation. To address this issue, and based on a comparative analysis of several common moving object detection methods, this paper presents a moving object detection and recognition algorithm that combines frame difference with background subtraction. The algorithm first calculates the average gray value of consecutive multi-frame images in the dynamic sequence, and then obtains the background image as the statistical average of the continuous image sequence; that is, N consecutively intercepted frames are summed and averaged. In this way the weight of the object information is increased while the static background is suppressed. Eventually the motion detection image contains both the target contour and additional target information beyond the contour points of the background image, so the moving target can be separated from the image. Simulation results show the effectiveness of the proposed algorithm.


Introduction
Compared with a relatively static image, a moving image contains more information, which can be extracted by means of image processing. The purpose of moving object detection is to extract the required information from the image. Considering the complexity of the environment during image acquisition, the quality of moving object detection depends on the following capabilities: adapting to changes in ambient light, maintaining good results in a variety of weather conditions, avoiding interference from similar fluctuations and jitter present in the scene, accurately identifying erratic movement over large areas, and handling changes in the number of objects in the moving scene.
This paper first introduces the principles of frame difference and background subtraction, then examines the common two-frame and three-frame difference methods [1], and finally puts forward a background subtraction method based on the codebook model and Bayesian classification.

The Principle of Frame Difference
The frame difference method is the most effective method for detecting change between two adjacent frames in a video image [2]. Suppose the video frame at time t is f(x, y, t) and the next frame at time t + 1 is f(x, y, t + 1). The binary result of the frame difference operation can be defined as:

D(x, y, t + 1) = 1 if |f(x, y, t + 1) - f(x, y, t)| > Th, and 0 otherwise,

where Th represents the decision threshold. If the frame difference value at a point is greater than the threshold, the point is taken as a foreground pixel; similarly, when it is less than the threshold, the point is regarded as a background pixel.
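The thresholded difference above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation; the threshold value 25 is an assumption chosen for the example.

```python
import numpy as np

def frame_difference(frame_t, frame_t1, th=25):
    """Binarize the absolute difference of two consecutive gray frames.

    A pixel whose gray-level change exceeds `th` is marked foreground (1);
    the rest are background (0). `th` is an illustrative value, not one
    specified in the paper.
    """
    diff = np.abs(frame_t1.astype(np.int16) - frame_t.astype(np.int16))
    return (diff > th).astype(np.uint8)

# Tiny synthetic example: one pixel changes by 100 gray levels.
a = np.zeros((4, 4), dtype=np.uint8)
b = a.copy()
b[2, 2] = 100
mask = frame_difference(a, b)
print(mask[2, 2], int(mask.sum()))  # the moved pixel is the only foreground pixel
```

Casting to a signed type before subtracting avoids the wrap-around that unsigned 8-bit subtraction would produce.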

Two Neighbour Frames Difference
Firstly, calculate the difference of two adjacent frame images to obtain the result D_k(x, y):

D_k(x, y) = |f_k(x, y) - f_{k-i}(x, y)|,

where f_k(x, y) and f_{k-i}(x, y) are the two frame images and D_k(x, y) is the difference image. The so-called adjacent frame difference is the case i = 1. Its binarization can be defined as:

R_k(x, y) = 1 if D_k(x, y) > T, and 0 otherwise,

where T is the threshold for binarization. According to this formula, D_k(x, y) is binarized and then processed by mathematical morphology filtering.
Then, according to the connectivity of the result R_k(x, y), if the area of a connected region is greater than the threshold value, the region is judged to be an object area, and its minimum enclosing rectangle is determined [3].
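The steps above (difference, binarization, connected-region area test, minimum enclosing rectangle) can be sketched as follows. The morphological filtering is omitted for brevity, and the threshold and minimum-area values are illustrative assumptions, not figures from the paper.

```python
import numpy as np
from scipy import ndimage

def detect_regions(prev_frame, cur_frame, th=25, min_area=4):
    """Two-frame difference followed by connected-region analysis.

    Returns the bounding boxes (as slice tuples) of connected foreground
    regions whose pixel count exceeds `min_area`. Morphological filtering
    is skipped; `th` and `min_area` are illustrative values.
    """
    diff = np.abs(cur_frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = (diff > th).astype(np.uint8)
    labels, n = ndimage.label(mask)          # 4-connected component labeling
    slices = ndimage.find_objects(labels)    # minimum enclosing rectangles
    boxes = []
    for i in range(1, n + 1):
        if (labels == i).sum() > min_area:   # area test against the threshold
            boxes.append(slices[i - 1])
    return boxes

prev = np.zeros((10, 10), dtype=np.uint8)
cur = prev.copy()
cur[2:6, 3:7] = 200                          # a 4x4 moving block
boxes = detect_regions(prev, cur)
print(len(boxes))  # one region large enough to count as an object
```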

Three-Frame Difference
This method is an improvement on the two-frame difference. It extracts three consecutive frames from the video stream and uses their pairwise differences to obtain the contour of the moving object.
The method removes the background effects caused by object movement and thus obtains a more precise object contour [4][5][6]. Its operation can be expressed as follows:

d_1(x, y) = |f_k(x, y) - f_{k-1}(x, y)|, d_2(x, y) = |f_{k+1}(x, y) - f_k(x, y)|,
R(x, y) = 1 if both d_1(x, y) and d_2(x, y) exceed the binarization threshold, and 0 otherwise.
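The intersection of the two binarized differences can be sketched directly; in this toy example a one-pixel object moves between frames, and only its position in the middle frame survives the intersection, while the "ghost" of its old position is suppressed. The threshold is an illustrative assumption.

```python
import numpy as np

def three_frame_difference(f_prev, f_cur, f_next, th=25):
    """Three-frame difference: intersect the binarized differences of
    (cur - prev) and (next - cur). Only pixels that changed in BOTH
    pairs are kept, suppressing the ghost a two-frame difference leaves
    behind at the object's old position. `th` is illustrative."""
    d1 = np.abs(f_cur.astype(np.int16) - f_prev.astype(np.int16)) > th
    d2 = np.abs(f_next.astype(np.int16) - f_cur.astype(np.int16)) > th
    return np.logical_and(d1, d2).astype(np.uint8)

# Object at column 2 in frame 1, column 4 in frame 2, column 6 in frame 3.
f1 = np.zeros((3, 9), dtype=np.uint8); f1[1, 2] = 255
f2 = np.zeros((3, 9), dtype=np.uint8); f2[1, 4] = 255
f3 = np.zeros((3, 9), dtype=np.uint8); f3[1, 6] = 255
mask = three_frame_difference(f1, f2, f3)
print(mask[1, 4], mask[1, 2], mask[1, 6])  # only the current position survives
```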

The Principle of Background Subtraction
In moving object detection, background subtraction is a frequently used method; it detects the moving object area by differencing the current image with a background image [7]. The algorithm can be described as follows. Step one: calculate the absolute gray difference of the current frame f_k(x, y) and the background image B(x, y): D_k(x, y) = |f_k(x, y) - B(x, y)|. Step two: by setting a threshold Th, obtain the binarized image and thereby extract the moving object region: R_k(x, y) = 1 if D_k(x, y) > Th, and 0 otherwise. Step three: filter the difference image with mathematical morphology and then analyze the connected regions; if the area of a connected region is larger than a given threshold value, it is taken as a moving object.
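Steps one and two can be sketched as below (the morphological filtering and region analysis of step three are as in the two-frame example earlier). The threshold is an illustrative assumption.

```python
import numpy as np

def background_subtraction(frame, background, th=25):
    """Steps one and two: absolute difference of the current frame with
    the background image, then binarization with threshold Th. The
    morphological filtering of step three is omitted here."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > th).astype(np.uint8)

bg = np.full((5, 5), 50, dtype=np.uint8)
frame = bg.copy()
frame[1:3, 1:3] = 200                     # a 2x2 object over the background
mask = background_subtraction(frame, bg)
print(int(mask.sum()))  # 4 foreground pixels
```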

Background Subtraction Based on Codebook Model
Object detection based on the codebook model relies on vector quantization and clustering [8][9]. Using this idea, the pixels that change across the image sequence are classified, and the set of code words at a pixel is called its codebook. Building a codebook for each pixel is the key step of moving object detection with the codebook model. Based on the Kim algorithm [10], this paper designs a motion detection method based on the codebook model; its basic steps are codebook creation, codebook filtering, codebook updating, foreground detection, and background updating. The color model for evaluating color distortion and brightness distortion is shown in Figure 1. First, a codebook must be built for each pixel. Because the codebook length differs from pixel to pixel, set C = {c_1, c_2, ..., c_L}, where L is the codebook length; for each code, f is the frequency of the code, λ is the maximum time interval during which the code does not recur, and p and q are respectively the first and last access times.
Figure 1 shows the model for evaluating color distortion and brightness distortion used in the algorithm. At time t, given an input pixel x_t = (R, G, B) and a code c_i with mean RGB vector v_i, the color distortion between x_t and v_i is measured by the distance from x_t to the line through the origin and v_i, and the brightness distortion in the Kim algorithm is evaluated by checking whether the brightness of x_t lies within the brightness range stored in c_i. If the codebook contains L codes at time t, the RGB input vector is x_t = (R_t, G_t, B_t). Whether x_t matches c_i is judged according to color distortion and brightness similarity, where ε_1 is the threshold of color distortion. If both conditions hold, x_t matches c_i and the code is updated: its frequency becomes f + 1, its maximum interval becomes max{λ, t - q}, and its last access time becomes t. Otherwise a new code is created from x_t, where Σ^2 is a custom larger variance. After creating the codebook, let λ = max{λ, (N - q + p - 1)} to obtain, for each code, the maximum time interval during which the code does not recur over the whole codebook build process.
By setting a threshold on λ, the background model after filtering out the non-background codes is obtained as the set of codes with λ not larger than a threshold T, which may be taken as half the number of video frames.
After the background model is obtained, foreground detection simply compares the codes at each pixel with the current pixel value of the current frame, with color distortion and brightness distortion as the matching criteria, where ε is the pre-set color distortion threshold. If the two conditions hold simultaneously, x matches the m-th code word, that code word is updated, and the current pixel is determined to be a background pixel; otherwise the current pixel is determined to be a foreground pixel, and a new code word is created in the codebook at the same time.
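The per-pixel codebook idea can be illustrated with a heavily simplified grayscale sketch: each code stores a gray-level range and a frequency, and the paper's RGB color-distortion and brightness tests are collapsed into a single scalar tolerance `eps`. All names and values here are illustrative assumptions, not the Kim algorithm itself.

```python
def build_codebook(pixel_series, eps=10):
    """Build a 1-D (grayscale) codebook for one pixel.

    Each code is [lo, hi, freq]. A sample matching an existing code
    (within tolerance `eps`) widens that code and bumps its frequency;
    otherwise a new code is created. This is a simplified stand-in for
    the RGB color/brightness distortion tests of the Kim-style model.
    """
    codes = []
    for v in pixel_series:
        for c in codes:
            if c[0] - eps <= v <= c[1] + eps:
                c[0] = min(c[0], v); c[1] = max(c[1], v); c[2] += 1
                break
        else:
            codes.append([v, v, 1])
    return codes

def is_background(v, codes, eps=10):
    """Foreground detection: a pixel matching any code is background."""
    return any(c[0] - eps <= v <= c[1] + eps for c in codes)

# The background at this pixel flickers around gray level 100.
series = [98, 100, 102, 99, 101]
cb = build_codebook(series)
print(is_background(101, cb), is_background(200, cb))  # True False
```

A real implementation would also keep the per-code timestamps (p, q) and maximum interval λ so that rarely recurring codes can be filtered out of the background model.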

Improved Background Subtraction
How to obtain a relatively ideal background is one of the key issues of background subtraction, and it is necessary to consider how to update the background accurately in response to a series of confounding factors such as illumination changes. To keep the background image changing as the light changes while achieving a certain detection accuracy, and in view of the strengths and weaknesses of background subtraction and the frame difference method, this paper combines the two methods so that each can play to its own characteristics in target recognition and improve the effect of moving object detection.
Based on the frame difference method, this paper presents an improved background establishment method that applies background subtraction to multiple frame images and then averages the results. The method first calculates the average gray value of consecutive multi-frame images in the dynamic sequence, and then obtains the background image as the statistical average of the continuous image sequence; that is, N consecutively intercepted frames are summed and averaged. In this way the weight of the object information is increased while the static background is suppressed. Eventually, the motion detection image contains both the target contour and additional target information beyond the contour points of the background image, so the moving target can be separated from the image. The specific steps are as follows. Step one: take five frames from the video stream and apply median filtering to the sequence images to remove random image noise, which reduces complexity and overcomes noise interference. These images are respectively marked f_0 to f_t.
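The core averaging idea (sum N intercepted frames and divide by N) can be sketched as below. The frame contents are synthetic; the point is that a pixel occupied by a moving object only briefly is averaged toward the static background value.

```python
import numpy as np

def mean_background(frames):
    """Estimate the background as the statistical average of N
    consecutive frames: sum the intercepted frames and divide by N.
    Moving objects, which occupy any given pixel only briefly, are
    averaged away while the static background is reinforced."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0).astype(np.uint8)

# Static background of gray 50; an object passes over one pixel in one frame.
frames = [np.full((4, 4), 50, dtype=np.uint8) for _ in range(5)]
frames[2] = frames[2].copy()
frames[2][1, 1] = 250
bg = mean_background(frames)
print(bg[0, 0], bg[1, 1])  # static pixel stays 50; the transient pixel becomes 90
```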
Step two: use the frame difference to compute pairwise differences between the current frame and its surrounding frames, then average the sum of the difference images and take the result as the background of the current frame, denoted f_b(x, y).
Step three: if the gray value of a point in the frame difference image is larger than or equal to the threshold set in advance, replace the gray value at that point with the average image gray value; otherwise, use the value at the corresponding position in the last frame. This can be expressed as: f_b(x, y) = mean if the difference value >= Th, and the previous frame's value otherwise, where mean denotes the average image gray value and Th denotes the preset threshold; this yields the background image f_b(x, y).
Step four: extract the background image B_k(x, y) from the video image sequence so that it includes only the stationary background. The current frame f_k(x, y) and f_b(x, y) are each differenced with the background image B_k(x, y) to obtain the results FD(x, y) and FG(x, y), binarized with the threshold T set in advance.
Step five: obtain the moving object area image by calculating the intersection of the difference images FD(x, y) and FG(x, y), then process the motion region with mathematical morphology to exclude the interference of background noise. The gray level of the background in the proposed method is closer to that of the real video background, thereby reducing the degree to which the moving object is mixed into the background.
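Steps four and five can be sketched as follows. Since the original description is partly garbled, this is one plausible reading under stated assumptions: both the current frame f_k and the frame-difference background f_b are differenced against the static background B_k, and the two binary masks are intersected. The threshold is illustrative.

```python
import numpy as np

def combined_detection(f_k, f_b, B_k, th=25):
    """Sketch of steps four and five (one reading of the description):
    difference the current frame f_k and the frame-difference background
    f_b against the static background B_k, binarize both with T, and
    intersect the masks to get the moving-object area. Morphological
    post-processing is omitted."""
    fd = np.abs(f_k.astype(np.int16) - B_k.astype(np.int16)) > th
    fg = np.abs(f_b.astype(np.int16) - B_k.astype(np.int16)) > th
    return np.logical_and(fd, fg).astype(np.uint8)

B_k = np.full((3, 3), 50, dtype=np.uint8)
f_k = B_k.copy(); f_k[1, 1] = 200   # object in the current frame
f_b = B_k.copy(); f_b[1, 1] = 150   # object also disturbs the dynamic background
mask = combined_detection(f_k, f_b, B_k)
print(int(mask.sum()))  # only the object pixel survives the intersection
```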
In addition, because the background of the video sequence changes over time, an IIR filter [11] is needed to update the background. The IIR filtering method is quite common. With the current frame image represented by F(x, y) and the background image by B(x, y), the IIR update is expressed as B(x, y) = (1 - α)B(x, y) + αF(x, y). Equivalently, B_{k+1}(i) = (1 - α)B_k(i) + α C_k(i), where B_k(i) is the value of the current background image, B_{k+1}(i) is the updated value, and C_k(i) is the value of the current frame. In the preliminary stage of background estimation, an initial background is first selected and then updated by this formula, while the foreground (moving object) regions use a smaller update rate α_2 = 0.01. With this improvement, the influence of the moving target on the background model is obviously reduced, and at the same time the model responds to background changes in a timely and effective way.
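The two-rate IIR update can be sketched as below. The foreground rate α_2 = 0.01 comes from the text; the background rate 0.05 is an illustrative assumption.

```python
import numpy as np

def iir_update(background, frame, mask, alpha_bg=0.05, alpha_fg=0.01):
    """IIR background update: B <- (1 - a)*B + a*F, applied per pixel.
    Pixels inside the detected foreground mask use the smaller rate
    a2 = 0.01 from the paper, so moving objects leak into the background
    model more slowly; `alpha_bg` is an illustrative choice."""
    a = np.where(mask > 0, alpha_fg, alpha_bg)
    return (1.0 - a) * background.astype(np.float64) + a * frame.astype(np.float64)

bg = np.full((2, 2), 100.0)
frame = np.full((2, 2), 200.0)
mask = np.array([[0, 1], [0, 0]], dtype=np.uint8)   # one foreground pixel
new_bg = iir_update(bg, frame, mask)
print(new_bg[0, 0], new_bg[0, 1])  # 105.0 (background rate) vs 101.0 (foreground rate)
```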

Experimental Results and Analysis
Figure 2 shows the progressive realization of the algorithm, culminating in object detection based on background subtraction. Picture a) is the background image from the video; picture b) is the difference of the current frame and the background image; picture c) is the moving object detection result. The experimental results show that the basic background subtraction algorithm can only obtain the contour of the object but cannot effectively obtain its full area. The background model established by this method in static regions has high similarity with the real background; however, the model deviates from the true background in motion areas, because the mean is affected by the gray-level changes there. Therefore, this method is only applicable to scenes with few moving objects and a background that remains visible for a long time. Because long video sequences are used in the calculation, background updating should be slowed down to minimize the estimation bias of the sequence mean and of the background model update. Finally, experiments were conducted with the frame difference method, the background subtraction method, and the improved method, and the results were analyzed and compared.

*
Foundation Item: This work was supported by the Technological Innovation Funds of Hebei University of Science and Technology (No. 20121213).

Figure 1. The color model for evaluating color distortion and brightness distortion.

The experimental environment is an indoor laboratory. A fixed camera is used to capture frames from the video, thereby realizing moving object recognition based on the background subtraction method. The experimental results are shown in Figure 2.