
Tracking and segmentation of moving objects suffer from many problems, including those caused by illumination changes, noise, and shadows. A modified algorithm for the adaptive background model is proposed by linking the Gaussian mixture model with the method of principal component analysis (PCA). This approach exploits the ability of PCA to provide projections that capture the pixels most relevant for segmentation within the background models. We report the updates of both the parameters of the modified method and those of the Gaussian mixture model. The obtained results show that the integrated method performs relatively better.

Object segmentation is an important technique used in many systems, especially in the field of computer vision. Background subtraction, which involves calculating a reference image, labels the pixels corresponding to foreground objects. N. Friedman and S. Russell proposed a Gaussian mixture model (GMM) for background subtraction [

Principal component analysis is a linear transformation filter. It can be used to simplify a dataset and to create new classifiers. H. Moon and P. Jonathan introduced a statistical pattern-recognition PCA algorithm which has become a well-known benchmark PCA algorithm [

The remainder of the paper is organized as follows. In Section 2 we present the adaptive background subtraction algorithm based on the Gaussian mixture model, followed by the PCA algorithm in Section 3. The linking of PCA with the adaptive model is explained in Section 4. Applications of the algorithm, together with experimental results and comments, are presented in Section 5.

In the adaptive mixture of Gaussians used in our work, the pixel process is considered as a time series. We treat each pixel as an independent statistical process that incorporates all pixel observations. The mixture model does not distinguish components that correspond to the background from those associated with foreground objects; it records the observed intensity at each pixel over the previous N frames.
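As an illustration of the per-pixel bookkeeping described above, the following minimal Python sketch (the class and names are ours, not from the paper) records the observed intensity of each pixel over the previous N frames, treating every pixel as an independent process:

```python
from collections import deque

# Sketch: keep the last N observed intensities per pixel location,
# treating every pixel as an independent statistical process.
N = 5

class PixelHistory:
    def __init__(self):
        self._h = {}  # (x, y) -> deque of the last N intensities

    def observe(self, x, y, value):
        """Record one intensity observation for pixel (x, y)."""
        self._h.setdefault((x, y), deque(maxlen=N)).append(value)

    def recent(self, x, y):
        """Return the intensities observed at (x, y) over the last N frames."""
        return list(self._h.get((x, y), ()))
```

Older observations fall out of the deque automatically, matching the "previous N frames" window in the text.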

In the same manner as C. Stauffer and W. Grimson [

, we consider the recent history of each pixel as a process $\{X_1, \dots, X_t\} = \{I(x_0, y_0, i) : 1 \le i \le t\}$, where $I$ is the image sequence. The probability of observing the pixel data $X_t$ is defined as follows:

$P(X_t) = \sum_{k=1}^{K} \omega_{k,t}\, \eta(X_t, \mu_{k,t}, \Sigma_{k,t}),$

where $\omega_{k,t}$ is the weight of the $k^{th}$ component at time $t$, $\mu_{k,t}$ and $\Sigma_{k,t}$ are respectively the mean and the covariance of the $k^{th}$ component at time $t$, and $\eta$ is a Gaussian probability density defined as follows:

$\eta(X_t, \mu, \Sigma) = \frac{1}{(2\pi)^{n/2}\,|\Sigma|^{1/2}} \exp\!\left(-\tfrac{1}{2}(X_t - \mu)^T \Sigma^{-1} (X_t - \mu)\right).$

For simplicity we use $\Sigma_{k,t} = \sigma_k^2 I$, where $I$ is the identity matrix. Next, we update the mixture of each pixel consecutively by reading in new pixel values and updating each matching mixture component. We initialize the parameters $\omega$, $\mu$, and $\sigma$; they are updated for each frame in the mixture. We calculate the weight $\omega$ for each $k^{th}$ Gaussian component in the mixture. When the current pixel value matches none of the distributions, the least probable distribution is replaced with one based on the current pixel value. Then the parameters of the mixture are updated as follows:

$\omega_{k,t} = (1 - \alpha)\,\omega_{k,t-1} + \alpha\, M_{k,t},$

where the constant $\alpha$ is a learning rate used to update the component weights and $M_{k,t}$ is an indicator variable with value 1 for matching models and 0 otherwise. We introduce the following criterion for screening the preliminary matching Gaussian distributions, requiring a Mahalanobis distance less than a threshold $T$,

$(X_t - \mu_{k,t})^T \Sigma_{k,t}^{-1} (X_t - \mu_{k,t}) < T^2,$

to collect the targeted pixels, with the recursive update equations

$\mu_t = (1 - \rho)\,\mu_{t-1} + \rho\, X_t$

and

$\sigma_t^2 = (1 - \rho)\,\sigma_{t-1}^2 + \rho\,(X_t - \mu_t)^T (X_t - \mu_t),$

where $\rho = \alpha\,\eta(X_t \mid \mu_k, \sigma_k)$ indicates how fast the update proceeds. Thus we can separate the foreground from the background and then find the best segmentation of the image. Gaussians that are not part of the active background are treated as foreground. For the ordering, we use the ratios $\omega / \sigma$: large values of this ratio are associated with distributions that have high weight and low variance. The first $B$ distributions are chosen under the expression:

$B = \arg\min_b \left( \sum_{k=1}^{b} \omega_k > T \right),$

where the threshold $T$ is a prior probability estimated from the background process. Movements caused by the background (such as tree branches shaking, surface fluctuations, etc.) were taken into account when segmenting the foreground objects.
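The per-pixel procedure of this section can be summarized in a compact Python/NumPy sketch. This is our own illustrative implementation, not the authors' code: each pixel keeps K components with weight `w`, mean `mu`, and variance `s2` (covariance $\sigma^2 I$ as in the text); the initial variance and thresholds below are hypothetical choices.

```python
import numpy as np

def gaussian_density(x, mu, s2):
    """Spherical Gaussian density eta(x; mu, sigma^2 I)."""
    n = x.size
    d2 = float((x - mu) @ (x - mu))
    return np.exp(-0.5 * d2 / s2) / ((2.0 * np.pi * s2) ** (n / 2.0))

def update_pixel(x, w, mu, s2, alpha=0.05, T_match=2.5):
    """One observation step for a single pixel: match a component via the
    Mahalanobis test, update the weights, and recursively update the
    matched component's mean and variance."""
    K = len(w)
    match = -1
    for k in range(K):
        # with covariance sigma^2*I the Mahalanobis distance is |x - mu| / sigma
        if np.linalg.norm(x - mu[k]) / np.sqrt(s2[k]) < T_match:
            match = k
            break
    if match < 0:
        # no match: replace the least probable distribution
        k = int(np.argmin(w))
        mu[k], s2[k], w[k] = x.copy(), 900.0, alpha  # 900.0 is an arbitrary wide variance
    else:
        for k in range(K):
            M = 1.0 if k == match else 0.0          # indicator variable
            w[k] = (1.0 - alpha) * w[k] + alpha * M
        rho = alpha * gaussian_density(x, mu[match], s2[match])
        mu[match] = (1.0 - rho) * mu[match] + rho * x
        diff = x - mu[match]
        s2[match] = (1.0 - rho) * s2[match] + rho * float(diff @ diff)
    w /= w.sum()
    return w, mu, s2

def background_components(w, s2, T=0.7):
    """Rank components by w/sigma (high weight, low variance first) and
    keep the first B whose cumulative normalised weight exceeds T."""
    order = np.argsort(-np.asarray(w) / np.sqrt(np.asarray(s2)))
    csum = np.cumsum(np.asarray(w)[order] / np.sum(w))
    B = int(np.searchsorted(csum, T) + 1)
    return order[:B]
```

Components outside the first B returned by `background_components` correspond to the non-active Gaussians treated as foreground.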

We use the observation data as follows. First, we consider each of the $N$ frames in one color channel (R), so that, for fixed $i$, we can define the frame vector with respect to the $j^{th}$ pixel index:

$x_i = (x_{i,1}, x_{i,2}, \dots, x_{i,M_i})^T.$

The arithmetic average of the pixels of the $i^{th}$ frame vector can be defined as follows:

$\bar{x}_i = \frac{1}{M_i} \sum_{j=1}^{M_i} x_{i,j},$

where $M_i$ is the number of pixels in frame $i$.
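The frame-vector construction and per-frame average above can be sketched as follows (a Python/NumPy illustration with our own function names):

```python
import numpy as np

def frame_vector(frame):
    """Flatten a 2-D frame (one colour channel) into the frame vector x_i."""
    return np.asarray(frame, dtype=float).ravel()

def frame_mean(x_i):
    """Arithmetic average of the M_i pixels of frame vector x_i."""
    return float(x_i.sum() / x_i.size)
```

For example, a 2 × 2 frame flattens to a length-4 vector whose mean is the average of its four pixel values.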

Next, we compute the difference for the $i^{th}$ image and construct the difference matrix $A$ as:

$\Phi_i = x_i - \bar{x}_i, \qquad A = [\Phi_1\; \Phi_2\; \cdots\; \Phi_N].$

The covariance matrix of the frame vectors is defined by:

$C = \frac{1}{N} \sum_{i=1}^{N} \Phi_i \Phi_i^T = \frac{1}{N} A A^T.$

We compute the eigenvectors and the corresponding eigenvalues from

$C V = V \Lambda$

using the singular value decomposition (SVD) method, where $V$ is the set of eigenvectors associated with the corresponding eigenvalues $\lambda$. To apply the method of PCA we define the projection by applying the transformation matrix to the vectors as follows:

$y_i = V^T \Phi_i.$

Next, we reconstruct each frame to obtain the features of the shape. We can then define the estimated data by:

$\hat{x}_i = V y_i + \bar{x}_i.$

One property of PCA is that the projection onto the principal subspace minimizes the squared error. To find the principal feature subspace of PCA, the sequential reconstruction errors are estimated from:

$e_i = \lVert x_i - \hat{x}_i \rVert^2.$

The major advantage of the PCA method comes from its generalization ability: it reduces the dimension of the feature space by considering the variance of the input data. The method determines which projections are preferable for representing the structure of the input data. Those projections are selected so that the maximum amount of information (i.e., the maximum variance, where the eigenvalues represent the largest variances) is retained in the smallest feature-space dimension.
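The PCA pipeline of this section (centring, difference matrix, SVD, projection, reconstruction, squared error) can be sketched as below. This is our own NumPy illustration; note that here we centre each pixel by its mean over the N frames, one common convention for background modelling, which may differ in detail from the per-frame average in the text.

```python
import numpy as np

def pca_reconstruction(frames, n_components):
    """Centre the frame vectors, take the SVD of the difference matrix,
    project onto the leading principal axes, reconstruct, and return the
    estimated data together with the per-frame squared errors."""
    X = np.asarray(frames, dtype=float)   # N x M, one flattened frame per row
    mean = X.mean(axis=0)
    A = X - mean                          # difference matrix (rows are Phi_i)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    V = Vt[:n_components]                 # leading principal axes
    Y = A @ V.T                           # projections y_i
    X_hat = Y @ V + mean                  # estimated data
    errors = np.sum((X - X_hat) ** 2, axis=1)
    return X_hat, errors
```

For data that already lie near a low-dimensional subspace, a small number of components reconstructs the frames with near-zero error, which is exactly the property the integrated method exploits.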

Since the Gaussian mixture model does not adapt well to illumination changes and noise, we propose in this study an algorithm based on linking the PCA method with the GMM.

We extend the Gaussian mixture model of the observed pixel data from (1, 2) using the method of PCA. The estimate $\hat{X}_t$ is obtained by projecting the data onto the principal axes; then, with the new data $\hat{X}_t$, $t = 1, \dots, N$, we propose to compute the density based on $S$ such that:

and again for simplicity we assume $\Sigma_{k,t} = \sigma_k^2 I$, where $I$ is the identity matrix. In linking PCA with GMM, the integrated model based on $\hat{X}_t$, by its structure, retains the maximum amount of information, i.e. it has the minimum squared error compared with the model based on $X_t$ in the GMM. This helps in handling the illumination and shadow effects that contribute to noise in the adaptive Gaussian model.

Updating the mixture at each pixel entails reading in the new pixel value, selecting the matching components, computing the mean obtained with $\hat{X}_t$, and finally injecting that mean into the covariance of the PCA method from (5, 6). Symbolically, the update is based on:

$\mu_t = (1 - \rho)\,\mu_{t-1} + \rho\, \hat{X}_t$

and

$\sigma_t^2 = (1 - \rho)\,\sigma_{t-1}^2 + \rho\,(\hat{X}_t - \mu_t)^T (\hat{X}_t - \mu_t),$

where $\rho$ indicates how fast the update proceeds. In the integrated method we insert the PCA estimates of the mean and covariance of each component into the GMM. We then simply select the components with the largest weights and re-estimate with (7) as follows:

And their Gaussian matching parameters must satisfy:

with the estimated ratios $\omega / \sigma$; large values of this ratio are associated with distributions that have high weight and low variance, and the first $B$ Gaussian distributions are selected:

Then, to estimate the distribution of the data by integrating PCA with the Gaussian mixture, we need to perform both the appropriate partitioning of the components and the estimation of the model parameters. The projection property of the PCA method discussed above allows us to perform this task successfully.
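The integrated update step of this section differs from the plain GMM update only in that the PCA-reconstructed observation drives the recursion. A minimal sketch (our own naming, assuming $\hat{X}_t$ has already been computed by the PCA stage):

```python
import numpy as np

def integrated_update(mu, s2, x_hat, rho):
    """One step of the integrated method: the PCA-reconstructed
    observation x_hat, rather than the raw pixel value, drives the
    recursive update of the matched component's mean and variance."""
    mu_new = (1.0 - rho) * mu + rho * x_hat
    diff = x_hat - mu_new
    s2_new = (1.0 - rho) * s2 + rho * float(diff @ diff)
    return mu_new, s2_new
```

Because $\hat{X}_t$ has minimum squared reconstruction error, the means and variances it drives are less perturbed by illumination and shadow noise than those driven by the raw observations.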

In our application, we performed a quantitative comparison between two algorithms for segmentation and object detection: the first is based on the Gaussian mixture model (GMM), and the second, competing algorithm links PCA with the adaptive background model.

In our experiment, each pixel of the current frame is checked against the background model by comparing it with every component of the Gaussian model under each of the two algorithms.

For both the GMM method and the PCA-GMM method, this process continues until a match is found among the $k$ components ($k = 3, 5$). As time changes, we used the data sequentially in the range of 121 to 1000 frames, with the corresponding parameter settings.

The experiment produced two frame sequences with different backgrounds (trees, lighting). The frame dimensions were 167 × 120. From

Next, we select each component of the mixture model, for which the minimum reconstruction error over the data needs to be estimated. After selecting one color channel of the data, we computed the estimate of the mean in the feature space and the differences between the mean and each color value.

We then applied PCA to solve the problem of the Gaussian mixture and obtained improved results in both cases; see

Clearly, the detection accuracy, in terms of recall and precision, of the PCA + GMM method is consistently higher than that of the mixture-of-Gaussians (GMM) approach, as illustrated by

Consequently, the PCA + GMM method yields more accurate object detection results than the GMM method at each level of the proposed approach.
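For reference, the recall and precision used in the comparison are computed from pixel-level detection counts as follows (the counts in the usage example are hypothetical, not the paper's results):

```python
def recall_precision(tp, fp, fn):
    """Recall and precision from true-positive, false-positive and
    false-negative pixel counts of a detection result."""
    recall = tp / (tp + fn)       # fraction of true foreground recovered
    precision = tp / (tp + fp)    # fraction of detections that are correct
    return recall, precision
```

For example, with 80 true positives, 20 false positives, and 20 false negatives, both recall and precision would be 0.8.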

As the appearance of a scene varies with illumination changes and noise, it is difficult to separate and track objects in a dynamic scene environment. In this paper, we reported an application study based on a proposed algorithm that links PCA and the Gaussian mixture model (GMM) to address this difficulty. The proposed algorithm handles this variation to ease the segmentation of the foreground from the background. The study confirms that the proposed algorithm produces relatively good results compared with the GMM method.