
Combining Multiple Cues for Pedestrian Detection in Crowded Situations

PP. 62-65
DOI: 10.4236/jsip.2013.43B011

ABSTRACT

This paper proposes a vision-based pedestrian detection method for crowded situations using a single camera. The main idea behind our work is to fuse multiple cues so that the major challenges in crowd detection, such as occlusion and complex backgrounds, can be successfully overcome. Based on the assumption that human heads are visible, the circular Hough transform (CHT) is applied to detect circular regions, each of which is considered a head candidate of a pedestrian. After that, false candidates arising from the complex background are first removed using a template matching algorithm. Two proposed cues, called head foreground contrast (HFC) and block color relation (BCR), are then incorporated for further verification. The rectangular region of every detected human is determined by geometric relationships as well as the foreground mask extracted through a background subtraction process. Three videos are used to validate the proposed approach, and the experimental results show that the proposed method effectively lowers the false positives at the expense of only a slight decrease in detection rate.

Conflicts of Interest

The authors declare no conflicts of interest.

Cite this paper

S. Huang, F. Chang and C. Lu, "Combining Multiple Cues for Pedestrian Detection in Crowded Situations," Journal of Signal and Information Processing, Vol. 4 No. 3B, 2013, pp. 62-65. doi: 10.4236/jsip.2013.43B011.

References

[1] C. Stauffer and W. E. L. Grimson, “Adaptive Background Mixture Models for Real-Time Tracking,” IEEE Intl. Conf. on Computer Vision and Pattern Recognition, Vol. 2, 1999, pp. 246-252.
[2] P. Tu, et al., “Unified Crowd Segmentation,” European Conf. on Computer Vision, 2008, pp. 691-704.
[3] T. Zhao, “Bayesian Human Segmentation in Crowded Situations,” IEEE Intl. Conf. on Computer Vision and Pattern Recognition, Vol. 2, 2003, pp. 459-466.
[4] M. Perreira Da Silva, V. Courboulay, A. Prigent, and P. Estraillier, “Fast, Low Resource, Head Detection and Tracking for Interactive Applications,” PsychNology Journal, 2009, pp. 243-264.
[5] D. M. Gavrila, “A Bayesian, Exemplar-Based Approach to Hierarchical Shape Matching,” IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 29, No. 8, 2007, pp. 1408-1421.
[6] S. S. Huang, C. Y. Mao, P. Y. Hsiao, and L. A. Yen, “Global Template Matching for Guiding the Learning of Human Detector,” IEEE Conf. on Systems, Man, and Cybernetics, 2012, pp. 565-570.
[7] D. T. Nguyen, P. Ogunbona, and W. Li, “Human Detection Based on Weighted Template Matching,” IEEE Intl. Conf. on Multimedia and Expo, 2009, pp. 634-637.
[8] S. S. Huang, L. C. Fu, and P. Y. Hsiao, “Region-Level Motion-Based Background Modeling and Subtraction Using MRFs,” IEEE Transactions on Image Processing, Vol. 16, No. 5, 2007, pp. 1446-1456. doi:10.1109/TIP.2007.894246
[9] S. Belongie and J. Malik, “Matching with Shape Contexts,” IEEE Workshop on Content-based Access of Image and Video Libraries, 2000, pp. 20–26. doi:10.1109/IVL.2000.853834
[10] A. Broggi, M. Bertozzi, A. Fascioli, and M. Sechi, “Shape-Based Pedestrian Detection,” IEEE Intelligent Vehicles Symposium, 2000, pp. 215-220.
[11] N. Dalal and B. Triggs, “Histograms of Oriented Gradients for Human Detection,” IEEE Intl. Conf. on Computer Vision and Pattern Recognition, Vol. 1, 2005, pp. 886–893.
[12] Z. Hao, B. Wang, and J. Teng, “Fast Pedestrian Detection Based on Adaboost and Probability Template Matching,” Intl. Conf. on Advanced Computer Control, Vol. 2, 2010, pp. 390–394.
[13] S. Paisitkriangkrai, C. Shen, and J. Zhang, “Performance Evaluation of Local Features in Human Classification and Detection,” IET Computer Vision, Vol. 2, No. 4, 2008, pp. 236–246. doi:10.1049/iet-cvi:20080026

  

Copyright © 2018 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.