Development and Evaluation of Intersection-Based Turning Movement Counts Framework Using Two Channel LiDAR Sensors

Abstract

This paper presents a vehicle localization and tracking methodology that utilizes two-channel LiDAR data for turning movement counts. The proposed methodology uniquely integrates a K-means clustering technique, an inverse sensor model, and a Kalman filter to obtain the final trajectories of individual vehicles. K-means clustering is applied to robustly differentiate LiDAR data generated by pedestrians and multiple vehicles and to identify their presence in the LiDAR’s field of view (FOV). To localize a detected vehicle, an inverse sensor model is used to calculate the accurate location of the vehicle in the LiDAR’s FOV from a known LiDAR position. A constant velocity model based Kalman filter uses the localized vehicle information to construct a trajectory by combining LiDAR data from consecutive scanning cycles. To test the accuracy of the proposed methodology, turning movement data was collected from busy intersections located in Newark, NJ. The results show that the proposed method can effectively develop the trajectories of turning vehicles at the intersections with an average accuracy of 83.8%. The obtained R-squared value for localizing the vehicles ranges from 0.87 to 0.89. To measure the accuracy of the proposed method, it is compared with previously developed methods that rely on multiple-channel LiDARs. The comparison shows that the proposed methodology effectively utilizes two-channel LiDAR data, which produces low-resolution data clusters, and can achieve acceptable accuracy compared to multiple-channel LiDARs; it can therefore serve as a cost-effective measure for large-scale data collection in smart cities.

Share and Cite:

Jagirdar, R., Lee, J., Besenski, D., Kang, M. and Pathak, C. (2023) Development and Evaluation of Intersection-Based Turning Movement Counts Framework Using Two Channel LiDAR Sensors. Journal of Transportation Technologies, 13, 524-544. doi: 10.4236/jtts.2023.134024.

1. Introduction

By 2025, nearly 70% of the world population may live in urban areas [1] [2] . The latest U.S. Census Bureau data showed that most of the largest U.S. cities experienced population growth between 2015 and 2016 [3] . This population expansion puts immense stress on cities since it demands sophisticated transportation facilities, a healthy economy, a stable water/power supply system, and a quality environment. The “Smart City” is one of the feasible options for city planning commissions to fulfill these growing requirements. Over the last decade, many regional planning authorities have intensified their efforts to achieve the “Smart City” title. A smart city includes data collection from multiple sources to monitor and manage traffic and transportation systems, power plants, water supply networks, and waste management. According to experts, over the next 20 years, cities worldwide will invest approximately $41 trillion to upgrade their infrastructure [4] [5] [6] .

The application of sensors and data visualization is one of the fundamental exercises carried out by regional planning authorities to collect diversified traffic data. To achieve Smart City/Smart Transportation goals, it is essential to obtain real-time information through deployed sensors and analyze it through open data portals that allow cities’ various operations to be controlled proactively [7] . Commonly, smart cities use Internet of Things (IoT) devices such as connected sensors, which can collect and analyze real-time data related to lights, atmosphere, air quality, and traffic. As per a recently published report, the IoT market is expected to reach about USD 1742.8 billion by 2030 [8] . Current technologies to collect traffic data include Remote Traffic Microwave Sensors (RTMS), video analytics, loop detectors, and pneumatic tube counters. Even though these technologies have been used widely, they would be expensive and challenging to deploy on a large scale while providing granular traffic data at low cost. Thus, the proposed methodology focuses on developing data collection of turning movement counts at intersections and equipping roadway networks with a low-cost, effective way of collecting the data.

Currently, LiDAR technology has a fair share in various industries such as agriculture and robotics [9] [10] . The high demand for LiDAR pushes manufacturers to introduce cost-effective LiDARs with a low number of channels. A channel is defined as a pair of an emitter and a receiver; the higher the channel count, the more granular the data a LiDAR sensor can generate. However, low-resolution data is a major obstacle to taking advantage of cost-effective LiDAR sensors. It is observed that previous research efforts on LiDAR applications for object tracking or traffic data collection were conducted under controlled environments using expensive multiple-channel LiDAR sensors. In this research, a unique framework is developed by integrating a K-means clustering method to identify a vehicle’s presence, an inverse sensor model to localize the vehicle, and a Kalman filter to generate the vehicle’s trajectory using a two-channel LiDAR. The generated vehicle trajectories are then used to conduct turning movement counts.

To date, few studies have focused on using low-resolution LiDAR sensors for traffic data collection, and the presented study proposes a framework to handle low-resolution LiDAR data to create vehicle trajectories for turning movement counts. The primary objective of the research is to take advantage of low-cost LiDAR sensors that can be used for large-scale data collection practices to achieve the smart city concept. The remainder of the paper is organized as follows: The literature review section briefly explains earlier research efforts to use LiDAR technology for object tracking. The methodology section describes the approach’s critical components and the development of the framework in depth. The data collection activity is described in the proof-of-concept test, and its results are discussed in the following sections.

2. Literature Review

Grejner-Brzezinska et al. [11] examined the feasibility of airborne LiDAR to collect traffic flow data. The study objectives were to develop a technique to identify and extract vehicles from LiDAR data. The algorithm extracts the vehicle information from LiDAR data by removing road surface data. The LiDAR data was collected along with a GPS/Inertial Navigation System (INS) sensor. The results show that large vehicles’ velocities are estimated more accurately than those of small vehicles.

Zhao et al. [12] studied, in 2006, the application of a LiDAR sensor at an intersection for tracking and classification of objects. Their proposed methodology classified objects into three categories: 1) pedestrians, 2) bicycles, and 3) cars, buses, and trucks. The object classification was done by Markov states in which an object model is predefined based on its typical appearance. The tracking of an object was done by matching frame-to-frame data generated at each scanning cycle of the LiDAR. The data was collected at an intersection using a SICK LMS291 LiDAR with a video camera to compare the results. The results show that the developed methodology can attain 95% accuracy compared with ground truth data. However, only ten minutes of collected data were processed for this study.

Wenqing [13] presented an algorithm to accurately track a vehicle using a LiDAR sensor through an intersection. The primary objective of the study was to collect stop distance measurement of the vehicles at the intersection. A four-step algorithm was introduced, which first identifies the object, extracts the feature points of the detected object, tracks an object, and later calculates the stopping distance. The data was collected by installing a LiDAR sensor at the corner of the intersection. Object extraction with the threshold value was applied to identify the object from the background of the LiDAR data. The feature points were extracted to represent an entire cluster with a single position by fitting the vehicle’s front and side profiles with lines and finding the intersection of these two lines. Small gaps between the clusters of the feature points over time were analyzed to track the vehicles. The results show that the algorithm is effective in extracting vehicle objects, using a static background and a static threshold.

Fod et al. [14] proposed a method to track people in crowded areas using single or multiple laser range finder data. A Kalman filter-based trajectory tracking method was proposed in this research. The proposed method was evaluated under different scenarios with a SICK planar scanning laser range finder. The results illustrate that the proposed method can track multiple people with low errors and reasonable computational efficiency.

Zhao and Shibasaki [15] studied the application of LiDAR in indoor areas such as malls and exhibition halls to detect and track pedestrians using a single LiDAR scanner. The data were collected using a single-row laser range finder. The moving feet profiles were extracted from the raw data and spatially integrated into a global coordinate system. A simplified Kalman filter was used to track pedestrian trajectories. The method was evaluated using real-world and simulation-based data. The results suggest that the developed method has some limitations in crowded indoor places compared to crowded open areas.

Cui et al. [16] proposed an algorithm to track people using multiple LiDAR sensors in an indoor environment. From the obtained raw data, stable features were extracted, namely the movement of people’s legs across successive laser frames. A tracker was developed based on a Kalman filter. The developed algorithm was tested in an indoor environment at an exhibition hall. The results show that the method was robust against problems such as measurement splits and temporal occlusion compared to conventional laser-based trackers.

Nashashibi et al. [17] introduced a robust method for detecting, tracking, and classifying vehicles using mobile LiDAR sensors. The algorithm consists of three stages to detect and classify an object as a vehicle. The first stage uses the Ramer algorithm [18] to create a set of reflected data points using collected distance data. The Ramer algorithm reduces the number of points in a curve that is approximated to create a line segment. The second stage classifies the objects by analyzing the length, vertices, and orientation of the points relative to the sensor. The third stage performs occlusion handling, which deals with missing data caused by obstruction of the LiDAR’s field of view. The results show that the developed methodology was able to classify objects as vehicles against a background of noisy data.

Thuy and León [19] proposed a method to track an object over time using two-dimensional (2-D) LiDAR. The researchers introduced a multi-modal object-tracking algorithm that employs a particle filter-based solution. The study’s primary objective was to eliminate the error propagation that occurs in tracking due to a linear Kalman filter. The study used particle filter-based Monte Carlo simulations for object tracking to model the non-linear process. The suggested method was applied to data collected by two separate one-layer scanners (2-D LiDAR), synchronized with an angular resolution of half a degree and an overall angular range of 180 degrees. Data was collected by mounting one LiDAR on the front bumper and the second LiDAR on the rear bumper. The vehicle was equipped with an Inertial Measurement Unit (IMU) combined with a DGPS to obtain the precise location of the vehicle and the LiDARs. The results showed that the proposed method reduces the error while tracking an object in a 2-D LiDAR environment.

Taipalus and Ahtiainen [20] presented an algorithm for detecting and tracking walking humans using 2-D mobile LiDAR. The algorithm consists of two separate steps; the first step identifies the detected cluster, and the second step tracks the defined cluster over time within the scanning range of the LiDAR. A list of predefined features is provided in the process to identify the detected cluster points as a human. Based on the predefined features, if the two different clusters satisfy the condition, the object is defined as a human, and the human target is generated to track within the scanning range.

Tarko et al. [21] developed a traffic scanner (TScan) method to measure and collect traffic data at an intersection accurately. A 64-channel 3-D Velodyne HD-LiDAR was used to collect the traffic data. Distance information from the sensors was grouped and converted to spherical coordinates to identify the background objects. A clustering method was applied every time an object was identified to determine whether the collected points came from the same object. The clustering method analyzed the gap between two successive LiDAR points, and a threshold value was set to separate points from different physical surfaces. Once the detected object was identified, a Kalman filter was applied to track the vehicle through the scanning range of the LiDAR. After tracking, the individual moving objects were classified into heavy and non-heavy vehicles, bicycles, and pedestrians. The results indicate that the method measured the vehicle positions and speeds with high accuracy.

Kluge et al. [22] presented a method to track multiple objects using laser range finder data. The method consists of steps such as object identification, object extraction, object matching, and object tracking. Objects were extracted from the laser range finder data by calculating the difference between successive range measurements; if the difference is higher than a threshold, the data is identified as an object. Object identification and extraction were performed by segmenting the scanning data into different groups, with a threshold value chosen to identify the maximum gap and separate different objects. Graph theory [23] and bipartite graphs [24] were used to match the objects of one scan to those in successive scans.

To this point, it is observed that in previous research efforts, LiDAR applications for object tracking and traffic data collection were conducted under controlled environments using expensive 3-D LiDAR sensors. Moreover, the application of LiDAR has mainly been studied for autonomous vehicles by integrating it with image sensors, GPS, etc. Additionally, no research has focused on turning movement counts at an intersection using low-channel LiDAR sensors. This paper proposes a framework to utilize two-channel LiDAR sensors by integrating K-means clustering, an inverse sensor model, and a discrete Kalman filter.

3. Framework Development

This section discusses the different stages of the proposed framework and its components in detail to develop vehicles’ trajectory.

The methodology consists of four primary stages, as shown in Figure 1. Stage 1 applies the K-means clustering technique to identify a vehicle’s presence by differentiating pedestrians from actual vehicles based on the calculated means of clustered data points obtained from each scanning cycle. In stage 2, an inverse sensor model is applied to localize detected vehicles using the consecutive calculated means from the previous stage. The output at the end of stage 2 could be used to plot the vehicles’ trajectories; however, considering the manufacturing error of the LiDAR, an additional stage 3 is added to the framework. In stage 3, a Discrete Kalman Filter is applied to deal with noisy localized data points and predict the vehicles’ trajectories. In stage 4, the predicted trajectories are used as input, and the standard deviation along the primary axis is studied to identify each trajectory as through, left, or right. Detailed steps of stage 4 are shown in Figure 5.

In stage 1, only the azimuth (α) and its corresponding distance (cm) information are used as input. In the developed framework, K-means clustering is applied to each scanning cycle to calculate the mean values of the clustered data. K-means clustering is also used to differentiate between two vehicles, as shown in Figure 2. Since the number of data points generated by pedestrians is noticeably smaller than that generated by vehicles, it is easy to discard such narrowly clustered points.

In stage 2, the calculated means of the clustered data from consecutive scanning cycles are used in the inverse sensor model to localize each vehicle’s presence in the FOV of the LiDAR sensors, and the distance between clusters is used to differentiate vehicles. Since the collected data contain noise and do not directly produce accurate vehicle trajectories, a Discrete Kalman Filter is used in stage 3 to predict the trajectories from the known localized presence points and remove outlier data points to obtain a clean trajectory, as shown in Figure 3. Furthermore, a threshold value of ±2 feet is applied to the predicted trajectories to further remove outliers. Figure 3(a) and Figure 3(b) represent a northbound through movement of a vehicle after the threshold window is defined to remove the outlier data points.

Figure 1. Four-stage approach for vehicle trajectory development.

Figure 2. Multiple vehicle presence in LiDAR Field of View (FOV).

After obtaining the clear vehicle trajectory, the vehicle’s movement at the intersection is identified in stage 4 by calculating the standard deviation of the vehicle’s localized points along the primary axis at each intersection. In this study, the axis perpendicular to the entrance point of the vehicle into the intersection is considered the primary axis. It is observed that the range of variation for through movements is from 0.2 feet to 0.9 feet, whereas the variation for non-through movements is between 1 foot and 11 feet, as shown in Figure 4. The variations along the primary axis are caused by the lateral movement of vehicles traversing away from or towards the LiDAR location in this study. Figure 5 shows the proposed framework of the methodology, including a detailed flow chart of stage 4, to construct the vehicle trajectories that are used to identify turning movement counts.
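The stage-4 decision rule above can be sketched in a few lines. This is an illustrative sketch only: the 1-foot boundary follows from the observed ranges reported above (0.2 - 0.9 feet for through, 1 - 11 feet for turning), and the sample coordinates are invented for demonstration, not taken from the paper’s data.

```python
import numpy as np

def classify_movement(primary_axis_coords, threshold_ft=1.0):
    """Label a trajectory as 'through' or 'turning' from the spread of its
    localized points along the primary axis (the axis perpendicular to the
    approach on which the vehicle entered the intersection)."""
    spread = np.std(primary_axis_coords)
    return "through" if spread < threshold_ft else "turning"

# A through vehicle stays in its lane: small spread along the primary axis.
through = [12.1, 12.3, 12.0, 12.4, 12.2]
# A turning vehicle sweeps across the intersection: large spread.
turning = [12.0, 13.5, 16.0, 19.2, 22.8]

print(classify_movement(through))  # through
print(classify_movement(turning))  # turning
```
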

3.1. K-Means Clustering

K-Means clustering is an unsupervised machine learning technique to cluster the different data points into K clusters by calculating the nearest means. Unsupervised machine learning is selected since it can divide the datasets into different groups without any known label, unlike supervised machine learning techniques. The K-Means clustering method uses an iterative process to achieve minimum distance between the centroid of the “K” groups and assigned data points to that group. The less variation within clusters, the more homogeneous (similar) the data points are within the same cluster.

Equations (1) to (3) describe the K-Means clustering algorithm. A set of data $\{x_1, x_2, x_3, \dots, x_n\}$, where each element is $d$-dimensional, is defined as the input to K-means clustering. The algorithm’s primary objective is


Figure 3. North bound through movement at an intersection 1. (a) Before application of threshold window; (b) After application of threshold window.

Figure 4. Observed standard deviation of vehicles’ localized point along primary axis at each intersection.

Figure 5. Framework of proposed methodology with detailed stage 4.

to assign the input data into “k” clusters by minimizing the Euclidean distance between each data point and the centroid of its cluster. The objective function is defined as below,

$S = \arg\min \sum_{k=1}^{K} \sum_{j=1}^{n} \omega_{jk} \left\| x_j - \mu_k \right\|^2$ (1)

where,

$\omega_{jk} = 1$ for data point $x_j$ if it belongs to cluster $k$; otherwise $\omega_{jk} = 0$.

$\mu_k$ is the centroid of cluster $k$.

The minimization process is conducted in two parts. First, Equation (1) is minimized with respect to $\omega_{jk}$ to assign each point to the closest cluster; later, the centroid $\mu_k$ of each cluster is adjusted. During the minimization process, the function $S$ is differentiated with respect to $\omega_{jk}$ to update the cluster assignments, as shown in Equation (2). Second, the function $S$ is differentiated with respect to $\mu_k$ to compute the centroids after the cluster assignments from the previous step, as described in Equation (3). In the proposed methodology, K-Means clustering is used to identify the presence of multiple objects in the LiDAR field of view.

$\dfrac{\partial S}{\partial \omega_{jk}} \;\Rightarrow\; \omega_{jk} = \begin{cases} 1 & \text{if } k = \arg\min_{k'} \left\| x_j - \mu_{k'} \right\|^2 \\ 0 & \text{otherwise} \end{cases}$ (2)

$\dfrac{\partial S}{\partial \mu_k} = 2 \sum_{j=1}^{n} \omega_{jk} \left( x_j - \mu_k \right) = 0 \;\Rightarrow\; \mu_k = \dfrac{\sum_{j=1}^{n} \omega_{jk} x_j}{\sum_{j=1}^{n} \omega_{jk}}$ (3)
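The two alternating minimization steps of Equations (2) and (3) amount to a plain Lloyd’s iteration, sketched below on synthetic data. The two well-separated point clouds stand in for two vehicles; in the framework, the azimuth/distance pairs would first be converted to x-y points, and narrow clusters with few points (pedestrians) would be discarded.

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's iteration: alternate the two minimization steps of
    Equations (2) and (3) -- assign each point to its nearest centroid,
    then recompute each centroid as the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Equation (2): pick the cluster with the smallest squared distance
        dists = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Equation (3): centroid = mean of the points assigned to the cluster
        new_centroids = np.array([points[labels == k_].mean(axis=0)
                                  for k_ in range(k)])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

# Two well-separated synthetic "vehicles" in the x-y plane (illustrative data).
rng = np.random.default_rng(1)
car_a = rng.normal([5.0, 0.0], 0.3, size=(30, 2))
car_b = rng.normal([20.0, 4.0], 0.3, size=(30, 2))
labels, cents = kmeans(np.vstack([car_a, car_b]), k=2)
```

With clusters this far apart, the iteration converges in a few steps and every point of each synthetic vehicle receives the same label.
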

3.2. Inverse Sensor Model

The inverse sensor model is often used in robotics to generate a map of the surroundings using range information collected by LiDAR or RADAR with the robot’s known position. An inverse sensor model is primarily defined as a state model for occupancy grid mapping. The state model consists of a map of the surrounding area that is used to identify detected objects’ locations by converting polar coordinates to Cartesian coordinates. In the state model, the distance measurement from the known LiDAR position is used to identify the objects’ exact coordinates on a grid map using Equations (4) and (5). The known state of the LiDAR is given by $\{x_n, x_m, \theta\}$.

$\begin{bmatrix} x_{1,occ} \\ x_{2,occ} \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} d \\ 0 \end{bmatrix} + \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$ (4)

$\begin{bmatrix} i_{1,occ} \\ i_{2,occ} \end{bmatrix} = \operatorname{ceil}\!\left( \dfrac{1}{r} \begin{bmatrix} x_{1,occ} \\ x_{2,occ} \end{bmatrix} \right)$ (5)

Since the LiDAR generates multiple light beams and collects distance measurements at different azimuths (angles), Equations (4) and (5) can be generalized as follows.

$\begin{bmatrix} x_{n,occ}^{k} \\ x_{m,occ}^{k} \end{bmatrix} = \begin{bmatrix} d_k \cos(\theta + \alpha_k) \\ d_k \sin(\theta + \alpha_k) \end{bmatrix} + \begin{bmatrix} x_n \\ x_m \end{bmatrix}$ (6)

$\begin{bmatrix} i_{n,occ} \\ i_{m,occ} \end{bmatrix} = \operatorname{ceil}\!\left( \dfrac{1}{r} \begin{bmatrix} x_{n,occ} \\ x_{m,occ} \end{bmatrix} \right)$ (7)

where,

$x_{n,occ}$ = location of the $n$th occupied cell on the x-axis of the grid map from the LiDAR $(n = 1, 2, 3, \dots, K)$.

$x_{m,occ}$ = location of the $m$th occupied cell on the y-axis of the grid map from the LiDAR.

Distance measurements: $d_k = (d_1, d_2, d_3, \dots, d_K)$.

Direction of rays (azimuth): $\alpha_k = (\alpha_1, \alpha_2, \alpha_3, \dots, \alpha_K)$.

$r$ = grid cell resolution.
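With a known LiDAR pose, Equations (6) and (7) reduce to a few lines of code. The sketch below assumes a 1 ft × 1 ft cell resolution (matching the grid defined in the results section); the pose and range return in the usage example are invented for illustration.

```python
import numpy as np

def localize(d, alpha, lidar_xy, theta, cell_ft=1.0):
    """Inverse sensor model of Equations (6) and (7): turn one range/azimuth
    return (d, alpha) from a LiDAR with known pose (x_n, x_m, theta) into
    world coordinates and a grid-cell index."""
    # Equation (6): rotate the beam by the pose heading, translate by position
    x_occ = d * np.cos(theta + alpha) + lidar_xy[0]
    y_occ = d * np.sin(theta + alpha) + lidar_xy[1]
    # Equation (7): cell index at resolution r (here 1 ft x 1 ft cells)
    cell = (int(np.ceil(x_occ / cell_ft)), int(np.ceil(y_occ / cell_ft)))
    return (x_occ, y_occ), cell

# LiDAR at the origin facing east (theta = 0); one return at 10 ft, 30 degrees.
xy, cell = localize(10.0, np.radians(30.0), (0.0, 0.0), 0.0)
print(xy, cell)  # approximately (8.66, 5.0) in cell (9, 5)
```
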

3.3. Discrete Kalman Filter

The Kalman filter is one of the popular approaches used to estimate the state of a dynamic system over time. In this study, a discrete Kalman filter is used for the prediction and tracking of the detected vehicles; furthermore, it helps to deal with noisy data. During the application of the algorithm, the state is assumed to be a linear system with a Gaussian distribution. The discrete Kalman filter consists of two main steps: 1) prediction and 2) correction. The prediction step estimates the current state and error covariance to obtain a priori estimates for the next time step. The correction step provides feedback by incorporating a new measurement into the a priori estimate to obtain an improved a posteriori estimate.

Equations (8) and (9) represent the state prediction steps of the discrete Kalman filter. A constant velocity model, a well-known model for tracking moving objects, is used for the Kalman filter. The model assumes that the velocity of an object is constant throughout the sampling interval.

$\hat{X}_s = A \hat{X}_{s-1} + B u_s + w_s$ (8)

$\dot{P}_s = A P_{s-1} A^{T} + Q_s$ (9)

where,

$A = \begin{bmatrix} 1 & 0 & \Delta t & 0 \\ 0 & 1 & 0 & \Delta t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$ (10)

$B = \begin{bmatrix} \frac{(\Delta t)^2}{2} & 0 \\ 0 & \frac{(\Delta t)^2}{2} \\ \Delta t & 0 \\ 0 & \Delta t \end{bmatrix}$ (11)

$u = \begin{bmatrix} 4 & 4 & 1 & 1 \end{bmatrix}^{T}$ (12)

$u$ = control variable matrix

$w$ = predicted state noise matrix

$Q$ = process noise covariance matrix

$X$ = state matrix

$s$ = current step

$s-1$ = previous step

$P$ = state covariance matrix

The control variable in Equation (8) is the acceleration of a vehicle at an intersection. Since the constant velocity model is defined for the Kalman filter, the acceleration of a vehicle is assumed constant at 4 ft/s² (1.22 m/s²) [25] . In Equation (9), Q prevents the state covariance matrix from becoming too small or zero, and it is represented as shown in Equation (13).

$Q = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ \Delta t & 0 & 1 & 0 \\ 0 & \Delta t & 0 & 1 \end{bmatrix}$ (13)

Here, ∆t represents the time difference between consecutive LiDAR scanning cycles. Equations (14) to (17) explain the measurement update steps. Equation (15) represents the update with the measurement by incorporating the Kalman gain (K). The Kalman gain, given in Equation (14), is a weight factor based on the comparison of errors in the estimate with those in the measurement. Equation (17) calculates the a posteriori error covariance. At the end of each measurement update step, the process is repeated with the a posteriori estimates to predict a new a priori estimate. Figure 6 shows a detailed representation of the Discrete Kalman Filter algorithm.

$K_s = \dot{P}_s H^{T} \left[ H \dot{P}_s H^{T} + R \right]^{-1}$ (14)

$\hat{X}_s = \hat{X}_s^{-} + K_s \left( Y_s - H \hat{X}_s^{-} \right)$ (15)

Figure 6. Discrete Kalman Filter essential flow chart.

where,

R = sensor noise covariance matrix

Ys = measurement input

$Y_s = H X_s + m_s$ (16)

$P_s = \left( I - K_s H \right) \dot{P}_s$ (17)

$R = \begin{bmatrix} 0.6542^2 & 0 & 0 & 0 \\ 0 & 0.6542^2 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$ (18)

A sensor noise covariance matrix R is represented as shown in Equation (18). Sensor noise often occurs while transmitting and receiving signals, caused by either faulty communication or the power supply. Data imputation is adopted to study the measurement noise of the LiDAR sensor during the data collection procedure. Random scanning cycles (pairs of azimuth and distance) with no missing values are selected from the data sets. To apply data imputation, some distance information is randomly removed and then predicted using a linear interpolation-based imputation technique. The calculated standard deviation (feet) is compared with the manufacturer’s stated measurement error (feet). Equation (19) represents the basic idea of linear interpolation. The observed standard deviation after data imputation is 0.652 feet (19.89 cm). The measurement error provided by the manufacturer is 0.591 feet (18 cm), which is nearly equal to the calculated standard deviation from data imputation.

$y = y_i + \dfrac{y_{i+n} - y_i}{x_{i+n} - x_i} \left( x - x_i \right)$ (19)

where,

$x$ = azimuth value at which the distance information is missing

$y$ = imputed (missing) distance information

$x_i$ = azimuth value of the first known distance in the vector

$y_i$ = first nonzero distance information in the vector

$y_{i+n}$ = last nonzero distance information in the vector

$x_{i+n}$ = last azimuth value in the vector
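One predict/correct cycle of the constant velocity Kalman filter (Equations (8), (9), (14), (15), and (17)) can be sketched as follows. This is a simplified sketch, not the paper’s exact formulation: it measures position only, so its H, R, and the two-component acceleration input u differ in dimension from Equations (12) and (18), and the process noise Q is an assumed small value.

```python
import numpy as np

def kalman_cv_step(x, P, z, dt, accel=4.0, sigma=0.6542):
    """One predict/correct cycle of a constant velocity Kalman filter.
    State is [x, y, vx, vy]; z is a localized (x, y) measurement from the
    inverse sensor model. Position-only measurement is a simplification."""
    A = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)       # Equation (10)
    B = np.array([[dt**2 / 2, 0],
                  [0, dt**2 / 2],
                  [dt, 0],
                  [0, dt]], dtype=float)             # Equation (11)
    u = np.array([accel, accel])   # assumed 4 ft/s^2 along both axes
    Q = np.eye(4) * 1e-3           # small process noise (assumed value)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)        # position-only measurement
    R = np.eye(2) * sigma**2       # sensor noise variance, as in Equation (18)

    # Prediction: Equations (8) and (9)
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q
    # Correction: Equations (14), (15), and (17)
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

# Track a vehicle that accelerates at 4 ft/s^2 from 30 ft/s (matched dynamics).
x, P = np.array([0.0, 0.0, 30.0, 0.0]), np.eye(4)
for t in range(1, 6):
    z = np.array([30.0 * t + 2.0 * t**2, 2.0 * t**2 + 0.1])  # noisy fixes
    x, P = kalman_cv_step(x, P, z, dt=1.0)
```

Because the simulated motion matches the model’s assumed acceleration, the filtered position stays close to the measurements after a few cycles.
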

4. Proof of Concept Test

The turning movement data was collected from busy intersections located in Newark, NJ in proximity to the New Jersey Institute of Technology to test the accuracy of the proposed methodology. Table 1 provides detailed information about the data collection setup. The LiDAR sensors are installed at 3’ to 3.5’ from the ground to get enough reflection from the vehicles’ surface as indicated in Figure 7.

Figure 7. LiDAR sensor installation at an intersection for turning movement data collection.

Table 1. Data collection time period and LiDAR configurations for turning movement counts.

At each intersection, a minimum of three LiDAR sensors are installed to cover all the approaches, as shown in Figure 8. The primary reason to deploy multiple sensors at the intersections is to deal with occlusion problems. Occlusion is often caused by two vehicles traveling side-by-side or by a vehicle being obstructed by pedestrians. Enhancement of detection is the second reason to place multiple sensors, since it is observed that the detection ability of the Scanse Sweep LiDAR in an open area is reduced while scanning a horizontal plane. A Python-based program is used to automate the data collection process, which allows collecting data for an extended time without any external interruption. Raspberry Pi minicomputers are used to run the Python script and save the collected data. All LiDAR sensors are connected to Raspberry Pi minicomputers, which are connected to a Wi-Fi hotspot to synchronize the sensors’ clocks. Synchronization of the timestamps is essential since it allows identifying multiple detections of a discrete vehicle.


Figure 8. Graphical representation of studied intersections with LiDAR placement. (a) Intersection-1: Central Ave and Lock Street. (b) Intersection-2: Dr. Martin Luther King and Warren Street.

5. Results

First, the collected LiDAR data are processed independently from each other to capture the vehicle trajectories by applying the steps described in framework development. The nearest LiDAR for each movement is used to capture the turning movements. For the application of the inverse sensor model to localize the detected vehicles, a grid map of 50 feet × 50 feet is defined with known LiDAR position at (0, 0). Individual cell dimension is defined as 1 foot × 1 foot. The obtained turning movement counts from each intersection are then compared with ground truth count, which was obtained using video recording.

The distribution of the variations along the primary axis follows a normal distribution, which justifies the application of the discrete Kalman Filter under the assumption of Gaussian-distributed data. The accuracy for all the left-turning movements was below 70%. To improve the left-turning movement counts, the trajectories within the intersection were studied. Movements that are not captured by a single LiDAR at the nearest corner due to occlusion but are captured in the middle of the intersection by multiple sensors were studied, and the timestamp of a detected vehicle was used to reduce double-count errors. Figure 9 shows the accuracy for each turning movement at intersection 1. Unlike intersection 1, intersection 2 has more observed pedestrian activity. Furthermore, intersection 2 is wider than intersection 1, and lower accuracy was noticed compared to intersection 1. Figure 10 shows the accuracy for each turning movement at intersection 2.

Figure 11 and Figure 12 represent the R-squared value for the X and Y axes between proposed method-based localization points and reference points. The data were selected randomly from both locations to study the relation between LiDAR-based reference points of each axis vs. model-based localized points. The obtained R-squared value ranges between 0.87 and 0.89, which shows that the developed methodology can localize the detected object accurately.
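The per-axis R-squared comparison between reference points and model-based localized points can be computed as a coefficient of determination. The numbers below are illustrative only, not the paper’s data.

```python
import numpy as np

def r_squared(reference, estimated):
    """Coefficient of determination between reference coordinates and the
    model's localized coordinates along one axis."""
    reference = np.asarray(reference, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    ss_res = np.sum((reference - estimated) ** 2)   # residual sum of squares
    ss_tot = np.sum((reference - reference.mean()) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

# Illustrative reference vs. localized x-coordinates (feet).
ref = [5.0, 10.0, 15.0, 20.0, 25.0]
est = [5.4, 9.5, 15.8, 19.6, 25.5]
print(round(r_squared(ref, est), 3))  # 0.994
```
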

Figure 9. Obtained results comparison with ground truth data (Intersection-1).

Figure 10. Obtained results comparison with ground truth data (Intersection-2).

Figure 11. Comparison between LiDAR based reference points of X axis vs. proposed model based localized points.

Figure 12. Comparison between LiDAR based reference points of Y axis vs. proposed model based localized points.

Table 2. Accuracy comparison of proposed methodology with recent research works.

6. Conclusions

This paper presents a framework to detect, localize, and track vehicles at signalized intersections for turning movement counts by applying K-means clustering, an inverse sensor model, and Kalman filter-based methods. Data from two intersections were collected to study the effectiveness of the proposed turning movement count methodology. Unlike state-of-the-art data collection sensors such as RTMS and video detection systems, the LiDARs are placed parallel to the roadway surface to scan the horizontal plane from a height of 3' - 3.5' above the road surface. A minimum of three LiDAR sensors were used at each intersection during data collection, with the LiDAR sensors placed at each corner of the intersection.

The sensors were connected to Raspberry Pi minicomputers to store the data. The Raspberry Pi was connected to an external power supply and a Wi-Fi network to accurately synchronize the system clock. The conducted data analysis shows that the proposed methodology can accurately detect, localize, and track 79% - 88% of vehicles within the intersection. The movement counts obtained from individual LiDARs are also compared with ground truth. The comparison indicates lower accuracy (<75%) for the left-turning movement in each direction, mainly caused by occlusion and missing data.

The developed methodology can also be used to estimate vehicle delays at an intersection. Making LiDAR sensors more compatible with Internet of Things (IoT) infrastructure will help city authorities realize the "Smart Cities" concept. Furthermore, the proposed method's accuracy is compared with recent research works that used 16-, 32-, or 64-channel 3-D LiDARs for traffic data collection, as shown in Table 2.

7. Recommendations

For future research, the team will investigate the following:

1) Replacing the constant velocity model based Kalman filter used in the proposed approach with an unscented Kalman filter;

2) Studying different types of signalized intersections with various lane configurations; and

3) Increasing the number of LiDAR sensors at an intersection to reduce the occlusion effect and improve accuracy.

Acknowledgements

We would like to thank Dr. Joyoung Lee for his guidance, encouragement, and advice throughout this study. Dr. Lee's perseverance and problem-solving skills helped us overcome the problems faced during the research. Lastly, we thank Dr. Kang, Dr. Pathak, and Dr. Besenski for their valuable input on the conducted research.

Author Contribution Statement

The authors confirm contribution to the paper as follows: study conception and design: Joyoung Lee, Ravi Jagirdar; data collection: Ravi Jagirdar; analysis and interpretation of results: Ravi Jagirdar, Joyoung Lee; draft manuscript preparation: Ravi Jagirdar, Joyoung Lee, Chaitanya Pathak, Minwook Kang, Dejan Besenski. All authors reviewed the results and approved the final version of the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] United Nations-Department of Economic and Social Affairs (2014) World Urbanization Prospects.
https://www.un.org/en/development/desa/publications/2014-revision-world-urbanization-prospects.html
[2] United States-Census Bureau (2017) Southern Cities Growing Quickly.
https://www.census.gov/library/visualizations/2017/comm/cb17-81-cities-growing.html
[3] United States Census Bureau 2010 Census Urban and Rural Classification and Urban Area Criteria (2010).
https://www.census.gov/programs-surveys/geography/guidance/geo-areas/urban-rural/2010-urban-rural.html
[4] U.S. Department of Transportation, “Smart City Challenge”.
https://www.transportation.gov/smartcity
[5] Guy Allee, Smart America, “Smart Cities USA”.
http://smartamerica.org/teams/smart-cities-usa/
[6] Mirbakhsh, A., Lee, J., Jagirdar, R., Hyun, K. and Besenski, D. (2023) Collective Assessments of Active Traffic Management Strategies in an Extensive Microsimulation Testbed. Journal of Engineering Applications, 2, 146-153.
http://193.255.128.114/index.php/enap/article/view/929
[7] Pathak, C., Lee, J., Kim, K., Dimitrijevic, B., Spasovic, L. and Reif, J. (2016) Geo-Spatial Analysis of Bluetooth Signal Reception and Its Implications on Arterial Travel Time Estimation. Transportation Research Board 95th Annual Meeting, Washington DC, 10-14 January 2016, 23-29.
https://trid.trb.org/view/1393964
[8] Precedence Research (2022) Industrial IoT Market Size 2022-2030.
https://www.precedenceresearch.com/industrial-iot-market
[9] Grand View Research (2022) LiDAR Market Size, Share & Trends Analysis Report by Product Type (Airborne, Terrestrial, Mobile & UAV), by Component, by Application, by Region, and Segment Forecasts, 2022-2030.
https://www.grandviewresearch.com/industry-analysis/lidar-light-detection-and-ranging-market
[10] Mirbakhsh, A., Lee, J. and Besenski, D. (2023) Development of a Signal-Free Intersection Control System for CAVs and Corridor Level Impact Assessment. Journal of Future Transportation, 3, 552-567.
https://doi.org/10.3390/futuretransp3020032
[11] Brzezinska, G., Dorota, A., Toth, C. and McCord, M. (2005) Airborne LiDAR: A New Source of Traffic Flow Data. Ohio State University, Dept. of Civil & Environmental Engineering & Geodetic Science, Columbus, FHWA/OH-2005/14.
https://rosap.ntl.bts.gov/view/dot/5726
[12] Zhao, H., Shao, X.W., Katabira, K. and Shibasaki, R. (2006) Joint Tracking and Classification of Moving Objects at Intersection Using a Single-Row Laser Range Scanner. IEEE Intelligent Transportation Systems Conference, Toronto, 17-20 September 2006, 287-294.
https://ieeexplore.ieee.org/document/1706756
https://doi.org/10.1109/ITSC.2006.1706756
[13] Yao, W.Q. (2012) LIDAR-Based Vehicle Tracking for Stopping Distance Measurement at Intersections.
[14] Fod, A., Howard, A. and Mataric, M.A.J. (2002) A Laser-Based People Tracker. Proceedings 2002 IEEE International Conference on Robotics and Automation, Vol. 3, 3024-3029.
https://ieeexplore.ieee.org/document/1013691
[15] Zhao, H. and Shibasaki, R. (2005) A Novel System for Tracking Pedestrians Using Multiple Single-Row Laser-Range Scanners. IEEE Transactions on Systems, Man, and Cybernetics—Part A: Systems and Humans, 35, 283-291.
https://ieeexplore.ieee.org/document/1396163
https://doi.org/10.1109/TSMCA.2005.843396
[16] Cui, J., Zha, H., Zhao, H. and Shibasaki, R. (2007) Laser-Based Detection and Tracking of Multiple People in Crowds. Computer Vision and Image Understanding, 106, 300-312.
https://doi.org/10.1016/j.cviu.2006.07.015
[17] Nashashibi, F., and Bargeton, A. (2008) Laser-Based Vehicles Tracking and Classification Using Occlusion Reasoning and Confidence Estimation. IEEE Intelligent Vehicles Symposium, Eindhoven, 4-6 June 2008, 847-852.
https://ieeexplore.ieee.org/document/4621244
https://doi.org/10.1109/IVS.2008.4621244
[18] Ramer, U. (1972) An Iterative Procedure for the Polygonal Approximation of Plane Curves. Computer Graphics and Image Processing, 1, 244-256.
https://doi.org/10.1016/S0146-664X(72)80017-0
[19] Thuy, M. and León, F.P. (2009) Non-Linear Multimodal Object Tracking Based on 2D LiDAR Data. Metrology and Measurement Systems, 16, 359-369.
https://www.semanticscholar.org/paper/NON-LINEAR-MULTIMODAL-OBJECTTRACKING-BASED-ON-2D-Thuy-Le%C3%B3n/b9b256b07bdcb5fdfe8041278f52fd5dc4d4abb3
[20] Taipalus, T. and Ahtiainen, J. (2011) Human Detection and Tracking with Knee-High Mobile 2D LIDAR. 2011 IEEE International Conference on Robotics and Biomimetics, Karon Beach, 7-11 December 2011, 1672-1677.
https://ieeexplore.ieee.org/document/6181529
https://doi.org/10.1109/ROBIO.2011.6181529
[21] Tarko, A.P., Ariyur, K.B., Romero, M.A. and Bandaru, V.K. (2016) T-Scan: Stationary LiDAR for Traffic and Safety Applications—Vehicle Detection and Tracking. Joint Transportation Research Program Publication No. FHWA/IN/JTRP-2016/24, Purdue University, West Lafayette.
https://doi.org/10.5703/1288284316347
[22] Kluge, B., Kohler, C. and Prassler, E. (2001) Fast and Robust Tracking of Multiple Moving Objects with a Laser Range Finder. IEEE International Conference on Robotics and Automation, Seoul, Vol. 2, 1683-1688.
https://ieeexplore.ieee.org/document/932853
[23] Ruohonen, K. (2008) Graph Theory.
https://archive.org/details/flooved3467/mode/2up
[24] Salvatore, J. (2007) Bipartite Graphs and Problem Solving. University of Chicago, Chicago.
https://www.math.uchicago.edu/~may/VIGRE/VIGRE2007/REUPapers/FINALAPP/Salvatore.pdf
[25] Wang, J., Dixon, K., Li, H.K. and Ogle, J. (2004) Normal Acceleration Behavior of Passenger Vehicles Starting from Rest at All-Way Stop-Controlled Intersections. Transportation Research Record, 1883, 158-166.
https://doi.org/10.3141/1883-18
[26] Xu, H., Tian, Z., Wu, J., Liu, H. and Zhao, J. (2018) High-Resolution Micro Traffic Data from Roadside LiDAR Sensors for Connected-Vehicles and New Traffic Applications. Nevada Department of Transportation Research Report, No. 224-14-803 TO 15.
https://rosap.ntl.bts.gov/view/dot/44347
[27] Yang, R. (2009) Vehicle Detection and Classification from a LIDAR Equipped Probe Vehicle. Ph.D. Dissertation, The Ohio State University, Columbus.
[28] Bandaru, V.K. (2016) Algorithms for LiDAR Based Traffic Tracking: Development and Demonstration. Open Access Theses, Purdue University, West Lafayette, 922.
https://docs.lib.purdue.edu/open_access_theses/922
[29] Sualeh, M. and Kim, G.-W. (2019) Dynamic Multi-LiDAR Based Multiple Object Detection and Tracking. Sensors, 19, Article No. 1474.
https://doi.org/10.3390/s19061474

Copyright © 2024 by authors and Scientific Research Publishing Inc.

Creative Commons License

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.