Multiple Tracking of Moving Objects with Kalman Filtering and PCA-GMM Method

In this article we propose to combine an integrated method, PCA-GMM, which generates relatively improved segmentation compared to the conventional GMM, with Kalman Filtering (KF). The resulting method, PCA-GMM-KF, tracks multiple moving objects, estimating the size and position of the objects along the sequence of their images in dynamic scenes. The experimental results illustrate successful tracking of multiple moving objects based on this robust combination.


Introduction
Object tracking is an important technique used in many systems, especially in the field of computer vision. Background subtraction involves calculating a reference image and labeling the pixels corresponding to foreground objects. N. Friedman and S. Russell proposed a Gaussian mixture model (GMM) for background subtraction [1]. C. Stauffer and W. Grimson developed an algorithm for foreground segmentation based on the Gaussian mixture model [2,3]. W. Grimson et al. used tracking information in multi-camera calibration for object detection and classification [4]. T. Bouwmans, F. ElBaf and B. Vachon surveyed background modeling using mixtures of Gaussians for object detection [5]. F. Zbu and K. Fujimura demonstrated a face tracking method using a GMM and the EM algorithm [6]. P. Kaew-TraKulPong and R. Bowden proposed a method to detect moving shadows using a Gaussian mixture model; shadow detection also reduces the effect of small repetitive motions in the background scene [7]. On the other hand, the bounding ellipse of S. Cheung and C. Kamath is inappropriate for complex-shaped objects such as human beings; they used frame differencing as it produces minimal false foreground trails behind objects [8]. R. Chan and J. Lin introduced mixture Kalman filtering for on-line estimation and prediction in conditional dynamic linear models [9]. A. Yilmaz, O. Javed, and M. Shah surveyed object detection and tracking [10]. K. Quast and A. Kaup presented the Auto GMM-SAMT video surveillance system and used the mean shift algorithm to track the contour of objects of changing shape without the help of any predefined shape model [11].
Principal component analysis (PCA), as a linear transformation filter, can be used to simplify a dataset and create new classifiers. H. Moon and P. Jonathan introduced a statistical pattern recognition PCA algorithm which has become a famous benchmark PCA algorithm [12]. H. Kim, D. Kim, and S. Yang proposed PCA mixture models applicable to many areas, such as alphabet recognition [13]. B. Antoni, M. Vijay and V. Nuno introduced a generalization of the Stauffer-Grimson background subtraction algorithm; it was evaluated through extensive experiments involving scenes with dynamic backgrounds and achieved high detection rates. Moreover, in their generalization, low false positive rates were obtained regardless of object scale, suggesting that it may be suitable for surveillance of dynamic scenes [14]. I. Gómez, D. Olivieri, X. Vila, and S. Orozco proposed a method for foreground-background segmentation and created a feature vector for discriminating and tracking several people in the scene [15]. Y. Sheikh and M. Shah introduced a single distribution with kernel density [16]. Y. Yong and W. Ya-Fei used a Gaussian mixture model with spatial local correlation and a Markov chain [17]. R. Hassanpour, A. Shahbahrami and S. Wong proposed a Gaussian mixture model that considers higher-level a priori knowledge of the spatial and temporal relationships between the pixels in a video sequence [18]. G. Rajkumar, K. Srinivasarho and P. Sribi-Vasa proposed a new image segmentation algorithm that uses a finite mixture of doubly truncated bivariate Gaussian distributions, integrating this distribution with hierarchical clustering [19]. K. Panta proposed novel data association schemes for the probability hypothesis density filter [20].
In our work, we show that the segmentation of moving objects generated by PCA-GMM performs relatively better than the conventional GMM. This improvement rests on the following: GMM segments the foreground from the background by labeling each sequence frame, where the background model can be updated using sufficient statistics and all the estimated parameters can be obtained efficiently. However, when the foreground is not available, changes due to critical situations such as illumination changes, or has been removed from the scene, the GMM needs to account for such situations. The PCA-GMM is an extension of GMM which addresses these limitations [21]. The PCA-GMM, which integrates PCA with GMM, produces precise and clear features of the shapes in the scenes and provides good classification of the objects. The tracking of moving objects is then based on applying KF to this integrated method.
The remainder of the paper is organized as follows: Section 2 introduces the PCA-GMM method that links GMM with PCA, followed by a brief description of the Kalman filtering method in Section 3. The experimental results of tracking moving objects by applying KF to PCA-GMM are given in Section 4, followed by the conclusion in Section 5.

The PCA-GMM Method
First, in the adaptive mixture of Gaussians method (GMM), we consider the pixel processes as time series, such that each pixel is assumed to be an independent statistical process incorporating all pixel observations. The mixture model does not distinguish components that correspond to the background from those associated with foreground objects; it records the observed intensity at each pixel over the previous N frames.
In the same manner as C. Stauffer and W. Grimson [3], we use the same number $K$ of components. We consider the values of a particular pixel, with initial position $(x_0, y_0)$, over time $t$, and define the history of this particular pixel as

$$\{X_1, \ldots, X_t\} = \{\, I(x_0, y_0, i) : 1 \le i \le t \,\},$$

where $I$ is the image sequence. The probability of observing the pixel data $X_t$ is defined as

$$P(X_t) = \sum_{k=1}^{K} \omega_{k,t} \, \eta(X_t, \mu_{k,t}, \Sigma_{k,t}),$$

where $\omega_{k,t}$ is the weight of component $k$ at time $t$, $\mu_{k,t}$ is the mean and $\Sigma_{k,t}$ the covariance of component $k$ at time $t$, and $\eta$ is a Gaussian probability density defined as

$$\eta(X, \mu, \Sigma) = \frac{1}{(2\pi)^{n/2} |\Sigma|^{1/2}} \exp\!\left(-\tfrac{1}{2}(X-\mu)^T \Sigma^{-1} (X-\mu)\right).$$

For simplicity we use $\Sigma_{k,t} = \sigma_k^2 I$, where $I$ is the identity matrix. Next, we update the mixture of each pixel consecutively by reading in new pixel values and updating each matching mixture component. We initialize the parameters $\mu$ and $\sigma$, which are updated for each frame in the mixture, and we calculate the weight $\omega$ for each Gaussian component in the mixture. When the current pixel value matches none of the distributions, the least probable distribution is replaced with the current pixel value. The parameters of the mixture are then updated as follows:

$$\omega_{k,t} = (1-\alpha)\,\omega_{k,t-1} + \alpha M_{k,t},$$
$$\mu_t = (1-\rho)\,\mu_{t-1} + \rho X_t,$$
$$\sigma_t^2 = (1-\rho)\,\sigma_{t-1}^2 + \rho\,(X_t - \mu_t)^T (X_t - \mu_t),$$

where the constant $\alpha$ is a learning rate used to update the component weights, $M_{k,t}$ is a dummy variable with value 1 for the matching model and 0 otherwise, and $\rho = \alpha\,\eta(X_t \mid \mu_k, \sigma_k)$. We introduce the following criterion for screening the preliminary Gaussian matching distributions: a Mahalanobis distance less than a threshold collects the targeted pixels, and the recursive update equations above indicate how to accelerate the update. Thus we can separate the foreground from the background and find the best segmentation of the image. Any non-active background Gaussian is treated as foreground. For the ordering, we select the ratios $\omega_k / \sigma_k$; large values of this ratio are associated with distributions that have a high weight and low variance. The first $B$ distributions are chosen under the expression

$$B = \arg\min_b \left( \sum_{k=1}^{b} \omega_k > T \right),$$

where the threshold $T$ is a prior probability estimated from the background process. Movements caused by the background (such as tree branches shaking, surface fluctuations, etc.) are taken into account when segmenting the foreground objects.
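The per-pixel update described above can be sketched in NumPy. This is a minimal single-pixel, grayscale illustration of the Stauffer-Grimson-style update, not the paper's full implementation; the constants (`ALPHA`, `MATCH_SIGMA`, `T`) and the simplified learning rate are our assumptions.

```python
import numpy as np

# Illustrative constants (assumed, not from the paper):
K, ALPHA, MATCH_SIGMA, T = 3, 0.05, 2.5, 0.7

def update_pixel(x, w, mu, var):
    """Update one pixel's K-component mixture with a new grayscale value x."""
    d = np.abs(x - mu)
    matched = d < MATCH_SIGMA * np.sqrt(var)   # Mahalanobis-style gate
    if matched.any():
        k = int(np.argmax(matched))            # first matching component
        rho = ALPHA                            # simplified learning rate
        mu[k] = (1 - rho) * mu[k] + rho * x
        var[k] = (1 - rho) * var[k] + rho * (x - mu[k]) ** 2
        m = np.zeros(K); m[k] = 1.0            # M_{k,t}: 1 for the match
    else:
        k = int(np.argmin(w / np.sqrt(var)))   # replace least probable
        mu[k], var[k] = x, 30.0
        m = np.zeros(K)
    w[:] = (1 - ALPHA) * w + ALPHA * m
    w /= w.sum()
    # Order by w/sigma; the first B components with cumulative weight > T
    # form the background model.
    order = np.argsort(-(w / np.sqrt(var)))
    csum = np.cumsum(w[order])
    B = int(np.searchsorted(csum, T)) + 1
    return order[:B]

w = np.array([0.5, 0.3, 0.2])
mu = np.array([100.0, 150.0, 200.0])
var = np.array([20.0, 20.0, 20.0])
bg = update_pixel(102.0, w, mu, var)           # new observation matches comp. 0
```

In a real system this update runs independently for every pixel of every frame, which is why the recursive (constant-memory) form of the equations matters.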
The arithmetic average of the frame vectors can be defined as

$$m = \frac{1}{M} \sum_{i=1}^{M} x_i,$$

where $M$ is the number of frame vectors. Next, we compute the difference $\phi_i = x_i - m$ for each image and construct the difference matrix $A = [\phi_1, \phi_2, \ldots, \phi_M]$. The covariance matrix of the frame vectors is defined by

$$C = \frac{1}{M} A A^T.$$

We compute the eigenvectors and corresponding eigenvalues from

$$C V = \lambda V$$

using the Singular Value Decomposition (SVD) method, where $V$ is the set of eigenvectors associated with the corresponding eigenvalues $\lambda$. To apply the PCA method we define the projection $y_i$ by applying the transformation matrix $V$ to the difference vectors:

$$y_i = V^T \phi_i.$$

Next, we reconstruct each frame to obtain the features of the shape; the estimated data $\hat{x}_i$ are defined by

$$\hat{x}_i = V y_i + m.$$

One property of PCA is that the projection onto the principal subspace minimizes the squared error. To find the principal feature subspace of PCA, the sequential reconstruction errors are estimated from

$$\varepsilon_i = \| x_i - \hat{x}_i \|^2.$$

The major advantage of the PCA method comes from its generalization ability: it reduces the dimension of the feature space by considering the variance of the input data. The method determines which projections are preferable for representing the structure of the input data. Those projections are selected in such a way that the maximum amount of information (i.e., the maximum variances, where the eigenvalues represent the largest variances) is retained in the smallest dimension of the feature space.
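The PCA pipeline above (mean, difference matrix, SVD eigenvectors, projection, reconstruction error) can be sketched with NumPy; the frame vectors here are synthetic random data, and the dimensions and number of retained components are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
frames = rng.normal(size=(10, 6))      # 10 synthetic frame vectors, dim 6

m = frames.mean(axis=0)                # arithmetic average m of the frames
A = (frames - m).T                     # difference matrix, columns phi_i
# Left singular vectors of A are the eigenvectors of C = A A^T / M.
U, s, _ = np.linalg.svd(A, full_matrices=False)
V = U[:, :2]                           # keep the 2 dominant eigenvectors

Y = V.T @ A                            # projections y_i = V^T phi_i
X_hat = (V @ Y).T + m                  # reconstructions x_hat_i = V y_i + m
err = np.sum((frames - X_hat) ** 2)    # total squared reconstruction error
```

Because the projection onto the principal subspace minimizes the squared error, `err` is necessarily smaller than the total variance of the centered data whenever any variance lies outside the retained subspace.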
The PCA-GMM addresses the major limitations of the GMM, namely illumination changes and noise. The PCA-GMM method takes the observed pixel data $X$ from Equations (1, 2) and processes them with the PCA method: the estimate $\hat{x}$ is obtained by projecting the data onto the principal axes. Updating each mixture of each pixel then entails reading the new pixel values, selecting the matching components, computing the mean obtained with $\hat{x}$, and finally injecting that mean into the covariance of the PCA method from Equations (5, 6). Symbolically, the update is based on

$$\hat{\mu}_t = (1-\rho)\,\hat{\mu}_{t-1} + \rho\,\hat{x}_t, \qquad \hat{\sigma}_t^2 = (1-\rho)\,\hat{\sigma}_{t-1}^2 + \rho\,(\hat{x}_t - \hat{\mu}_t)^T(\hat{x}_t - \hat{\mu}_t), \quad (18)$$

which indicates how to accelerate the update. In the integrated method we insert the estimates of the mean $\hat{\mu}_{i,t}$ and covariance $\hat{\sigma}_{i,t}$ of each component in PCA into the GMM, respectively. We simply select the components with the largest weights and re-estimate with Equation (7); their Gaussian matching parameters must satisfy the ratio criterion $\hat{\omega}_k / \hat{\sigma}_k$, where large values of the ratio are associated with distributions that have high weights or low variances. We select the first $B$ Gaussian distributions:

$$B = \arg\min_b \left( \sum_{k=1}^{b} \hat{\omega}_k > T \right). \quad (21)$$

Then, for estimating the distribution of the data by the PCA-GMM method, we need to perform both the appropriate partitioning of the components and the estimation of the model parameters. We can perform this task successfully due to the important property of the PCA method in reducing the data dimension.
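The integration step can be illustrated as follows: observations are first projected onto the principal subspace and reconstructed (denoised), and the mixture parameters are then updated on the reconstructed values $\hat{x}$ instead of the raw pixels. This is a one-component, one-dimensional sketch under our own assumed constants, not the paper's full PCA-GMM.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(loc=120.0, scale=5.0, size=(50, 4))   # 50 pixel vectors, dim 4

# PCA denoising: project onto the dominant direction and reconstruct.
m = X.mean(axis=0)
U, s, _ = np.linalg.svd((X - m).T, full_matrices=False)
V = U[:, :1]                                 # keep the dominant eigenvector
X_hat = (V @ (V.T @ (X - m).T)).T + m        # denoised observations x_hat

# Update a single Gaussian component with the denoised data
# (alpha and the initial mu, var are illustrative assumptions).
alpha, mu, var = 0.05, 120.0, 25.0
for x in X_hat[:, 0]:
    mu = (1 - alpha) * mu + alpha * x
    var = (1 - alpha) * var + alpha * (x - mu) ** 2
```

The point of the design is that the mixture now tracks the low-dimensional structure of the data rather than per-pixel noise, which is what makes it more robust to illumination changes.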

PCA-GMM-KF Method
The Kalman Filter (KF) applies to linear systems where the state is assumed to be Gaussian distributed. This filter not only provides an efficient computational solution for sequential systems but also provides an optimal solution to the discrete-data linear filtering problem [22]. When tracking objects from frame to frame in long sequences of images, the continuity of the motion of the observed scene allows the prediction of the image, at any instant, based on the previous trajectories. Because the moving state changes little between neighboring consecutive frames, we model the system as linear Gaussian, with the state parameters of the Kalman filter given by the object's location, velocity, and size, respectively. A discrete-time dynamic state equation is given by

$$X_t = A X_{t-1} + w_t, \quad (22)$$

where the state vector is $X_t = (x, y, u, v, \omega)^T$: $x$ and $y$ are the predicted coordinates of the object, $u$ and $v$ are the velocities in the respective directions, $\omega$ represents the width of the object rectangle, $A$ is the state transition matrix (with $\Delta t$ representing the change in time $t$), and $w_t$ is white Gaussian noise with zero mean and covariance matrix $Q_t$. The position obtained from the PCA-GMM algorithm will be the measurement vector $Z_t$ that can be injected into the measurement model of the KF. Accordingly,

$$Z_t = H X_t + v_t, \quad (23)$$

where $v_t$ is white Gaussian measurement noise with covariance matrix $R_t$ and $H$ is the design matrix

$$H = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}.$$
The corresponding covariance matrices are the inputs to the Kalman filter. In the sequential stage, if the dynamics of the moving object are known, predictions can be made about the positions of the objects in the current image. The Kalman filter state prediction $\hat{X}_t^-$ and state covariance prediction $P_t^-$ are defined by

$$\hat{X}_t^- = A \hat{X}_{t-1}, \qquad P_t^- = A P_{t-1} A^T + Q_t, \quad (24, 25)$$

where $\hat{X}_t$ and $P_t$ denote the estimated state vector and error covariance matrix, respectively, at time $t$. The Kalman filter update steps are then as follows:

$$K_t = P_t^- H^T (H P_t^- H^T + R_t)^{-1}, \qquad \hat{X}_t = \hat{X}_t^- + K_t (Z_t - H \hat{X}_t^-), \qquad P_t = (I - K_t H) P_t^-.$$
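One predict-update cycle for the five-dimensional state $(x, y, u, v, \omega)$ can be sketched as follows; the noise covariances `Q`, `R` and the initial state are illustrative assumptions, and the transition and design matrices follow a standard constant-velocity model.

```python
import numpy as np

dt = 1.0
# Constant-velocity transition for state [x, y, u, v, w] (w = rectangle width).
A = np.array([[1, 0, dt, 0, 0],
              [0, 1, 0, dt, 0],
              [0, 0, 1,  0, 0],
              [0, 0, 0,  1, 0],
              [0, 0, 0,  0, 1]], float)
# Measurement z = (x, y, w): the position and size reported by segmentation.
H = np.array([[1, 0, 0, 0, 0],
              [0, 1, 0, 0, 0],
              [0, 0, 0, 0, 1]], float)
Q = 0.01 * np.eye(5)          # process noise (assumed)
R = 1.0 * np.eye(3)           # measurement noise (assumed)

def kf_step(x, P, z):
    # Prediction
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update with the measurement vector z
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(5) - K @ H) @ P_pred
    return x_new, P_new

x = np.array([10.0, 20.0, 1.0, 0.5, 15.0])
P = np.eye(5)
x, P = kf_step(x, P, np.array([11.2, 20.4, 15.1]))   # noisy detection
```

The updated position lands between the model's prediction and the measurement, weighted by the gain; shrinking `R` pulls the estimate toward the detections, shrinking `Q` toward the motion model.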

Experimental Results and Analysis
To demonstrate the effectiveness of the algorithm in our application, we first show how the PCA-GMM outperforms the conventional GMM algorithm. In our experiment, each pixel of the current frame is checked against the background model by comparing it with every component in the PCA-GMM. The relatively better performance of PCA-GMM is mainly due to its ability to manipulate the variation to ease the segmentation of the foreground from the background. Moreover, the PCA-GMM method is based on the minimum reconstruction error over the data to be estimated, which contributes to the better segmentation [22].

It is evident from viewing the segmentation using the GMM (Figure 1(a)) that we achieve a remarkable improvement by using the segmentation based on PCA-GMM (Figure 1(b)). Based on this segmentation result, we are motivated to tackle the tracking problem with PCA-GMM rather than the GMM.

The tracking experimental results based on the PCA-GMM-KF method are arranged into two cases of tracking multiple moving objects, where we constructed two frame sequences with different backgrounds. The frame dimensions were 167 × 120.

The tracking algorithm proceeds as follows: first we apply the PCA-GMM algorithm to extract the moving objects from the background in each video frame, and then we predict the candidate positions in the next frame from the KF in Equations (22) and (23). The tracking then proceeds in a routine loop of detecting the objects using the data association function and the add-new-hypotheses function of the KF algorithm, respectively, and finally we update the frame using the Kalman update function, which in turn provides new input. Consequently, the resulting PCA-GMM-KF method reflects more accurate object detection results at each level of the proposed approach (Figures 2(a) and (b)).
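The loop of detection, association, prediction, and update can be sketched as below. This is a deliberately simplified stand-in, not the paper's implementation: segmentation is replaced by precomputed detections, association is nearest-neighbour with an assumed gate, and the KF update is approximated by exponential smoothing; all names are ours.

```python
import numpy as np

def nearest(tracks, z, gate=5.0):
    """Nearest-neighbour data association with a distance gate (assumed)."""
    if not tracks:
        return None
    d = [np.linalg.norm(t["x"][:2] - z[:2]) for t in tracks]
    i = int(np.argmin(d))
    return i if d[i] < gate else None

def run(detections_per_frame):
    tracks = []
    for dets in detections_per_frame:
        for t in tracks:                 # constant-velocity prediction step
            t["x"][:2] += t["x"][2:4]
        for z in dets:                   # z = (x, y, width) from segmentation
            i = nearest(tracks, z)
            if i is None:                # add-new-hypothesis step
                tracks.append({"x": np.array([z[0], z[1], 0.0, 0.0, z[2]])})
            else:                        # simplified measurement update
                t = tracks[i]
                t["x"][2:4] = 0.5 * t["x"][2:4] + 0.5 * (z[:2] - t["x"][:2])
                t["x"][:2] = 0.5 * t["x"][:2] + 0.5 * z[:2]
                t["x"][4] = z[2]
    return tracks

# Synthetic detections: one object moving right, one moving down.
frames = [[np.array([0.0, 0.0, 10.0]), np.array([50.0, 50.0, 8.0])],
          [np.array([1.0, 0.0, 10.0]), np.array([50.0, 51.0, 8.0])],
          [np.array([2.0, 0.0, 10.0]), np.array([50.0, 52.0, 8.0])]]
tracks = run(frames)
```

In the full method, the smoothing lines would be replaced by the Kalman prediction and update equations of the previous section, with the PCA-GMM detections as the measurement vector.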

Conclusion
In this article, we addressed the tracking of multiple moving objects within the framework of Kalman Filtering. Given that the integrated PCA-GMM outperforms the conventional GMM by generating relatively improved segmentation, we were motivated to address the tracking of multiple objects within the framework of the KF. The experimental results, in addition to illustrating the superior segmentation of the PCA-GMM algorithm over the GMM, have also considered the tracking of multiple moving objects. The proposed PCA-GMM-KF method, which utilizes the desirable segmentation results of PCA-GMM, can be viewed as a version of the KF. The experimental results of tracking objects have shown that the application of the PCA-GMM-KF method to each case, over sequences of images, successfully captured the objects and produced satisfactory results.

Equation (23) combines PCA-GMM with the KF. Such a combination proposes what we may call PCA-GMM-KF, a version of Kalman Filtering for tracking multiple moving objects. The predicted coordinates $(x, y)$ and dimension $\omega$ of the rectangle in the measurement model of Equation (23) under PCA-GMM-KF are used to locate the object in the present frame.

Figure 1. (a) Segmentation of moving objects under the traditional GMM algorithm; (b) segmentation of moving objects under the PCA-GMM algorithm.