JSIP Vol. 2 No. 2, May 2011

Video Frame’s Background Modeling: Reviewing the Techniques

PP. 72-78
DOI: 10.4236/jsip.2011.22010
Author(s)
Hamid Hassanpour, Mehdi Sedighi, Ali Reza Manashty


ABSTRACT

Background modeling is a technique for extracting moving objects from video frames. It can be used in machine vision applications such as video frame compression and monitoring. To model the background in video frames, a model of the scene background is first constructed; the current frame is then subtracted from the background, and the resulting difference determines the moving objects. This paper evaluates a number of existing background modeling techniques in terms of accuracy, speed and memory requirement.
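The subtraction scheme the abstract describes can be sketched in a few lines. The fragment below is a minimal illustration, not code from the paper: it maintains a running-average background model (one of the technique families such a review typically covers) and thresholds the absolute frame-to-background difference to obtain a foreground mask. The update rate `alpha` and the threshold are illustrative values, not parameters from the paper.

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Running-average background update: B <- (1 - alpha) * B + alpha * F."""
    return (1.0 - alpha) * background + alpha * frame

def extract_foreground(background, frame, threshold=25.0):
    """Mark as foreground every pixel whose absolute difference
    from the background model exceeds the threshold."""
    diff = np.abs(frame.astype(np.float64) - background)
    return diff > threshold

# Illustrative use on synthetic grayscale frames.
rng = np.random.default_rng(0)
frames = [rng.normal(100.0, 2.0, size=(48, 64)) for _ in range(20)]
frames[-1][10:20, 10:20] += 80.0  # a bright "object" enters the last frame

background = frames[0].astype(np.float64)
for f in frames[:-1]:
    background = update_background(background, f)

mask = extract_foreground(background, frames[-1])
```

Here `mask` is a boolean image in which the inserted bright patch is flagged as a moving object while the static, noisy background is suppressed; in a real system the mask would typically be cleaned with morphological filtering before use.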

KEYWORDS

Background Modeling, Moving Object

Cite this paper

H. Hassanpour, M. Sedighi and A. Manashty, "Video Frame’s Background Modeling: Reviewing the Techniques," Journal of Signal and Information Processing, Vol. 2 No. 2, 2011, pp. 72-78. doi: 10.4236/jsip.2011.22010.

Conflicts of Interest

The authors declare no conflicts of interest.



Copyright © 2020 by authors and Scientific Research Publishing Inc.

Creative Commons License

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.