Rebound of Region of Interest (RROI), a New Kernel-Based Algorithm for Video Object Tracking Applications

PP. 97-103
DOI: 10.4236/jsip.2014.54012

ABSTRACT

This paper presents a new kernel-based algorithm for video object tracking called rebound of region of interest (RROI). The algorithm uses a rectangular region of interest (ROI) to represent and track specific objects in videos. The proposed algorithm consists of two stages. The first stage determines the direction of the object's motion by analyzing the changing regions around the tracked object between two consecutive frames. Once the direction of motion has been predicted, an iterative process is initialized that minimizes a dissimilarity function in order to find the location of the tracked object in the next frame. The main advantage of the proposed algorithm is that, unlike existing kernel-based methods, it is immune to highly cluttered conditions. The results show that the proposed algorithm successfully tracked objects in a set of color videos under challenging conditions such as occlusion, illumination changes, clutter, and object scale changes.
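As a rough illustration of the two-stage scheme described in the abstract, the Python sketch below implements one plausible reading of it: stage one estimates the direction of motion by frame-differencing the strips bordering the ROI, and stage two slides the ROI along that direction, keeping the position that minimizes a dissimilarity to a target color histogram. The strip-difference heuristic, the Bhattacharyya-style histogram distance, and the fixed-step search are assumptions made for illustration, not the authors' published formulas.

import numpy as np

def color_histogram(patch, bins=8):
    # Normalized joint color histogram of an image patch (H x W x 3, uint8).
    hist, _ = np.histogramdd(
        patch.reshape(-1, 3).astype(float),
        bins=(bins,) * 3, range=((0, 256),) * 3)
    return hist / max(hist.sum(), 1.0)

def dissimilarity(h1, h2):
    # Bhattacharyya-style distance between normalized histograms
    # (an assumed choice of dissimilarity function).
    return np.sqrt(max(0.0, 1.0 - float(np.sum(np.sqrt(h1 * h2)))))

def estimate_direction(prev_frame, curr_frame, roi, margin=8, thresh=25):
    # Stage 1: pick the direction whose border strip changed the most
    # between the two consecutive frames.
    x, y, w, h = roi
    diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int)).sum(axis=2)
    strips = {
        (1, 0):  diff[y:y + h, x + w:x + w + margin],   # right
        (-1, 0): diff[y:y + h, max(x - margin, 0):x],   # left
        (0, 1):  diff[y + h:y + h + margin, x:x + w],   # down
        (0, -1): diff[max(y - margin, 0):y, x:x + w],   # up
    }
    changed = lambda s: (s > thresh).mean() if s.size else 0.0
    return max(strips, key=lambda d: changed(strips[d]))

def rroi_track(prev_frame, curr_frame, roi, target_hist, step=2, max_iter=20):
    # Stage 2: slide the ROI along the estimated direction, keeping the
    # position that minimizes the dissimilarity to the target model.
    dx, dy = estimate_direction(prev_frame, curr_frame, roi)
    x, y, w, h = roi
    best_roi = roi
    best_d = dissimilarity(
        color_histogram(curr_frame[y:y + h, x:x + w]), target_hist)
    H, W = curr_frame.shape[:2]
    for _ in range(max_iter):
        x, y = x + dx * step, y + dy * step
        if not (0 <= x <= W - w and 0 <= y <= H - h):
            break  # candidate ROI left the frame
        d = dissimilarity(
            color_histogram(curr_frame[y:y + h, x:x + w]), target_hist)
        if d < best_d:
            best_roi, best_d = (x, y, w, h), d
    return best_roi

In a complete tracker, target_hist would be built from the ROI selected in the first frame, and rroi_track would be called once per consecutive frame pair to update the ROI estimate.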

Cite this paper

Ramirez, A. and Chouikha, M. (2014) Rebound of Region of Interest (RROI), a New Kernel-Based Algorithm for Video Object Tracking Applications. Journal of Signal and Information Processing, 5, 97-103. doi: 10.4236/jsip.2014.54012.

Conflicts of Interest

The authors declare no conflicts of interest.


Copyright © 2014 by the authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.