Image-Based Vehicle Speed Estimation

Md. Golam Moazzam^{1}, Mohammad Reduanul Haque^{2}, Mohammad Shorif Uddin^{1}

^{1}Department of Computer Science and Engineering, Jahangirnagar University, Dhaka, Bangladesh.

^{2}Department of Computer Science and Engineering, Daffodil International University, Dhaka, Bangladesh.

**DOI: **10.4236/jcc.2019.76001

Vehicle speed is an important parameter with wide application in traffic control, particularly in identifying overspeeding vehicles with a view to reducing accidents. Many methods, such as those using RADAR and LIDAR sensors, have been proposed. However, these are expensive, and their accuracy is not quite satisfactory. In this paper, a video-based vehicle speed determination method is presented. The method shows satisfactory performance on standard data sets, with a velocity estimation error within 10%.


Moazzam, M., Haque, M. and Uddin, M. (2019) Image-Based Vehicle Speed Estimation. *Journal of Computer and Communications*, **7**, 1-5. doi: 10.4236/jcc.2019.76001.

1. Introduction

Traffic accidents are very dangerous, as they cause injury and death of passengers and pedestrians in addition to damage to vehicles and roads. Bangladesh is among the countries with the highest road accident rates in the world. A traffic accident is a great pain and loss for a nation. Though concerned authorities have taken many initiatives to minimize the road accident rate and increase road safety, every year thousands of people are still killed or injured in road accidents. According to road accident and casualty statistics from Bangladesh Police, 2566 dangerous accidents occurred in 2016, causing 2463 deaths and 2134 injuries [1]. In fact, the actual number of accidents is much higher, as many accidents are not reported to the concerned authorities. Vehicle speed is considered one of the main factors in road accidents and is also an important traffic parameter, so detecting the speed of a vehicle [2] - [7] is very significant for smoother traffic management. Various methods based on RADAR (Radio Detection and Ranging), LIDAR (Light Detection and Ranging) or cameras have been developed, but none of these techniques is perfect.

In this paper, an image-based vehicle speed estimation method is developed using physics-based velocity theory. It consists of a video camera placed at a fixed location for capturing images and a computation system that works on the images to calculate the speed. Several video-based techniques have been developed for detecting moving objects, such as temporal differencing, optical flow and background subtraction. Among these, background subtraction is the simplest: the absolute difference of the background frame and the current frame is taken. Here, a hybrid technique is used that combines an adaptive background subtraction technique with a three-frame differencing method.

The method consists of five major modules: image acquisition and enhancement, segmentation, centroid calculation, shadow removal and speed calculation.

The rest of the paper is organized as follows. Section 2 describes the theory and method, along with the implementation, for vehicle speed estimation. Section 3 presents the experimental results and discussion. Finally, conclusions are drawn in Section 4.

2. Methodology

A flow diagram of the vehicle speed detection method is shown in Figure 1. A brief explanation of this algorithm is given below:

It consists of three major blocks. The first block is video image capturing and background subtraction. A stationary video camera placed at a fixed location captures the images. The primary task of the vehicle speed estimation method is to detect the moving vehicle in the video. A three-frame differencing technique is applied to find the motion pixels. The second block is the extraction of the vehicle using an adaptive background subtraction method: the stationary pixels form the background and the moving pixels form the foreground, as shown in Figure 2. This is accomplished through image enhancement (noise reduction) and the calculation of the vehicle centroid and area to obtain the vehicle bounding box.

Let the position of a captured pixel be (x,y) at time t = n and its intensity be I_{n}(x,y). Using the three-frame differencing technique, a pixel is considered to belong to a moving object if its intensity in the current frame (I_{n}) differs by more than a threshold from its intensities in the two consecutive previous frames (I_{n-1}) and (I_{n-2}). Mathematically, motion is detected if the conditions of Equation (1) hold.

$\left({I}_{n}\left(x,y\right)-{I}_{n-1}\left(x,y\right)>T{h}_{n}\left(x,y\right)\right)$ and $\left({I}_{n}\left(x,y\right)-{I}_{n-2}\left(x,y\right)>T{h}_{n}\left(x,y\right)\right)$ (1)

where $T{h}_{n}\left(x,y\right)$ is the threshold value at pixel position (x,y).
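As an illustration, the test of Equation (1) can be sketched in Python on small grayscale frames. The nested-list frame representation and the fixed threshold value are illustrative choices only; the paper's threshold Th_{n}(x,y) is adaptive and per-pixel.

```python
# Sketch of the three-frame differencing test of Equation (1).
# Frames are nested lists of grayscale intensities (an assumed
# representation); th is a fixed illustrative threshold.

def motion_mask(I_n, I_n1, I_n2, th=25):
    """Mark a pixel as moving if it exceeds BOTH previous frames by th."""
    rows, cols = len(I_n), len(I_n[0])
    mask = [[0] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            if (I_n[y][x] - I_n1[y][x] > th) and (I_n[y][x] - I_n2[y][x] > th):
                mask[y][x] = 1
    return mask

# A bright pixel appearing only in the current frame is flagged as motion.
prev2 = [[10, 10], [10, 10]]
prev1 = [[10, 10], [10, 10]]
curr  = [[10, 90], [10, 10]]
print(motion_mask(curr, prev1, prev2))  # -> [[0, 1], [0, 0]]
```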

Then the background-subtracted image is obtained by subtracting the background frame B_{n}(x,y) from the current frame I_{n}(x,y) through Equation (2).

Figure 1. Flow diagram of the vehicle speed detection method.

Figure 2. Moving object extraction through background subtraction.

$S{I}_{n}\left(x,y\right)={I}_{n}\left(x,y\right)-{B}_{n}\left(x,y\right)$ (2)

where $S{I}_{n}\left(x,y\right)$ is the subtracted image.
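Equation (2) amounts to a per-pixel difference. A minimal sketch, again assuming nested-list grayscale frames (a real implementation would typically also clip or take the absolute value of negative differences):

```python
def subtract_background(I_n, B_n):
    """Equation (2): per-pixel subtraction of the background frame B_n
    from the current frame I_n (nested-list grayscale images)."""
    rows, cols = len(I_n), len(I_n[0])
    return [[I_n[y][x] - B_n[y][x] for x in range(cols)] for y in range(rows)]

# The moving bright region survives; the static background cancels to zero.
current    = [[50, 200], [50, 50]]
background = [[50, 50], [50, 50]]
print(subtract_background(current, background))  # -> [[0, 150], [0, 0]]
```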

From the background-subtracted image, the noise pixels are removed as outliers and the shadow pixels are removed based on pixel intensity, as the intensity of a shadow pixel is lower than that of an object pixel. After that, object tracking is done through segmentation by pixel connectivity and labeling of objects. Each labelled object is bounded by a rectangle, and the area of each labelled object is calculated. Tracking of each object (vehicle) is recorded when it enters the scene (at frame $S{I}_{0}$ ) and when it leaves the scene (at frame SI_{n}). The centroid of the vehicle in the respective frame can easily be determined from the labelled object using the x and y coordinates as

$\left({x}_{c},{y}_{c}\right)=\left(\frac{{x}_{1}+{x}_{2}}{2},\frac{{y}_{1}+{y}_{2}}{2}\right)$ (3)

where $\left({x}_{c},{y}_{c}\right)$ is the center of the vehicle and $\left({x}_{1},{y}_{1}\right)$, $\left({x}_{2},{y}_{2}\right)$ are opposite corners of its bounding rectangle.

Table 1. Vehicle speed measurement result.
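The segmentation-by-connectivity and centroid steps above can be sketched as follows. The 4-connected flood-fill labeling is a generic, assumed implementation detail (the paper does not specify the connectivity algorithm); the centroid is computed from the bounding-box corners as in Equation (3).

```python
from collections import deque

def label_and_centroids(mask):
    """4-connected component labeling of a binary motion mask.
    For each labelled object, return its bounding box (x1, y1, x2, y2)
    and its centroid per Equation (3)."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    objects = []
    for y in range(rows):
        for x in range(cols):
            if mask[y][x] and not seen[y][x]:
                # Flood-fill one connected component, tracking its extent.
                q = deque([(x, y)])
                seen[y][x] = True
                x1 = x2 = x
                y1 = y2 = y
                while q:
                    cx, cy = q.popleft()
                    x1, x2 = min(x1, cx), max(x2, cx)
                    y1, y2 = min(y1, cy), max(y2, cy)
                    for nx, ny in ((cx + 1, cy), (cx - 1, cy),
                                   (cx, cy + 1), (cx, cy - 1)):
                        if 0 <= nx < cols and 0 <= ny < rows \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((nx, ny))
                centroid = ((x1 + x2) / 2, (y1 + y2) / 2)  # Equation (3)
                objects.append(((x1, y1, x2, y2), centroid))
    return objects

mask = [[0, 1, 1],
        [0, 1, 1],
        [0, 0, 0]]
print(label_and_centroids(mask))  # -> [((1, 0, 2, 1), (1.5, 0.5))]
```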

The third and final block is the calculation of the speed of the vehicle. Speed can be calculated from the number of frames between the labelled object entering and leaving the scene and the distance it covers. The Euclidean distance between the centroids in the nth and (n − 1)th frames gives the distance traveled by the respective object (vehicle). The elapsed time is obtained by dividing the number of frames between these two positions by the frame rate. From this time and distance, the speed is measured and mapped to real-world units through Equation (5).

$\text{Distance}=\sqrt{{\left({x}_{n-1}-{x}_{n}\right)}^{2}+{\left({y}_{n-1}-{y}_{n}\right)}^{2}}$ (4)

where $\left({x}_{n},{y}_{n}\right)$ and $\left({x}_{n-1},{y}_{n-1}\right)$ are the coordinates of the centroid pixel in the nth and (n − 1)th frames, respectively.

$\text{Speed}=\frac{\alpha \times \text{Distance}\times \text{FrameRate}}{\text{Frame}\left(n\right)-\text{Frame}\left(n-1\right)}$ (5)

where α is the calibration coefficient that maps image motion to real-world motion and can be calculated as

α = (real height of the vehicle)/(image height of the vehicle).
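Equations (4) and (5) combine into a short speed routine. The function name, the example coordinates and the value of α below are all hypothetical; the elapsed time is taken as the frame gap divided by the frame rate.

```python
import math

def estimate_speed(c_n, c_prev, frame_n, frame_prev, frame_rate, alpha):
    """Combine Equations (4) and (5): centroid displacement in pixels,
    scaled to real-world units by alpha, divided by the elapsed time."""
    distance = math.hypot(c_n[0] - c_prev[0], c_n[1] - c_prev[1])  # Eq. (4), px
    elapsed = (frame_n - frame_prev) / frame_rate                  # seconds
    return alpha * distance / elapsed                              # e.g. m/s

# Hypothetical numbers: the centroid moves 100 px between frame 25 and
# frame 75 at 25 fps (2 s); with alpha = 0.25 m/px this is 25 m in 2 s.
print(estimate_speed((150, 80), (50, 80), 75, 25, 25.0, 0.25))  # -> 12.5
```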

3. Experiment

For our experiment, we used the QMUL Junction Dataset [8], a traffic dataset consisting of a one-hour video of 90,000 frames. Each frame is 360 × 288 pixels and the frame rate is 25 frames per second.

Table 1 shows the experimental results. The table confirms that the speed calculated by our system is close to the real speed of the vehicle, as the error rate is within 10%.

4. Conclusion

An image-based vehicle speed estimation system has been presented in this paper as a good alternative to traditional RADAR- or LIDAR-based systems. We evaluated the system on a standard dataset and estimated the real speed of vehicles. The experimental findings confirm that the system works well with good accuracy, as the error rate is within 10%. In the future, we will work on accuracy improvement as well as deployment of the system in real-life environments.

NOTES

*An earlier version of this paper is published in the Proc. of International Workshop on Computational Intelligence (IWCI), 12-13 December 2016, Dhaka, Bangladesh.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

[1] Sharif Hossen, Md. (2019) Analysis of Road Accidents in Bangladesh. American Journal of Transportation and Logistics (AJTL), 2, 1-11.

[2] Cathey, F.W. and Dailey, D.J. (2005) A Novel Technique to Dynamically Measure Vehicle Speed Using Uncalibrated Roadway Cameras. IEEE Intelligent Vehicles Symposium, Las Vegas, 6-8 June 2005, 777-782. https://doi.org/10.1109/ivs.2005.1505199

[3] Douxchamps, D., Macq, B. and Chihara, K. (2006) High Accuracy Traffic Monitoring Using Road-Side Line Scan Cameras. IEEE Intelligent Transportation Systems Conference (ITSC 2006), Toronto, September 2006, 875-878. https://doi.org/10.1109/itsc.2006.1706854

[4] Ibrahim, O., ElGendy, H. and ElShafee, A.M. (2011) Towards Speed Detection Camera System for a RADAR Alternative. 2011 11th International Conference on ITS Telecommunications, St. Petersburg, 23-25 August 2011, 627-632. https://doi.org/10.1109/itst.2011.6060131

[5] Dedeoğlu, Y. (2004) Moving Object Detection, Tracking and Classification for Smart Video Surveillance. Master's Thesis, Department of Computer Engineering, Bilkent University, Ankara, Turkey.

[6] Collins, R.T., Lipton, A.J., Kanade, T., Fujiyoshi, H., Duggins, D., Tsin, Y., Tolliver, D., Enomoto, N., Hasegawa, O., Burt, P. and Wixson, L. (2000) A System for Video Surveillance and Monitoring. The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA.

[7] Adnan, M.A. and Zainuddin, N.I. (2013) Vehicle Speed Measurement Technique Using Various Speed Detection Instrumentation. 2013 IEEE Business Engineering and Industrial Applications Colloquium (BEIAC), Langkawi, 7-9 April 2013, 668-672. https://doi.org/10.1109/beiac.2013.6560214

[8] QMUL Junction Dataset (2016). http://personal.ie.cuhk.edu.hk/~ccloy/downloads_qmul_junction.html


Copyright © 2024 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.