A Nationwide Evaluation of the State of Practice of Performance Measurements for Intelligent Transportation Systems

Abstract

Decisions by state departments of transportation (DOTs) to invest resources to expand or implement intelligent transportation systems (ITS) programs, or even to retire existing infrastructure, need to be based on performance evaluations. Nonetheless, an apparent gap exists between the need for ITS performance measurement and its actual implementation, and the available evidence points to challenges in the ITS performance measurement process. This paper evaluated the state of practice of performance measurement for ITS across the US and provides insights. A comprehensive literature review assessed the use of performance measures by DOTs for monitoring implemented ITS programs. Based on the gaps identified through the literature review, a nationwide qualitative survey was used to gather insights from key stakeholders; the results are presented in this paper. The data gathered indicate that performance measurement is fairly well integrated into DOTs’ ITS programs, with most agencies considering the process beneficial. Several factors, however, prevent agencies from measuring ITS performance in greater detail and quality. These include lack of data, fragmented or incomparable data formats, the complexity of the endeavor, lack of data scientists, and difficulty assigning responsibilities when inter-agency collaboration is required. Additionally, DOTs do not benchmark or compare their ITS performance with others, for reasons that include lack of data, lack of guidance or best practices, and incomparable data formats. This paper provides insights expected to guide DOTs and other agencies in developing or reevaluating their ITS performance measurement processes.


1. Introduction

Performance measurement needs in transportation planning and investment decision-making have increased for many reasons. For instance, the Moving Ahead for Progress in the 21st Century Act (MAP-21) and its successor, the Fixing America’s Surface Transportation Act (FAST Act), require agencies to have performance-driven, outcome-based programs that provide greater transparency and accountability, improve decision-making, and promote the efficient use of federal funds. States, Metropolitan Planning Organizations (MPOs), and public transportation providers are also required to move toward performance-based strategy and program development through performance-based planning and programming (PBPP) processes [1] [2] [3] [4]. Further, for the Transportation Systems Management and Operations (TSMO) program to be integrated, DOTs must incorporate performance management to assess implemented strategies, most of which are ITS-specific [5].

Thus, the decisions by DOTs to invest resources to expand or implement new ITS programs, or even to retire existing infrastructure, need to be based on performance evaluations. Although responsible organizations like the Federal Highway Administration (FHWA) and the United States Department of Transportation (USDOT) have provided sufficient guidance and information for DOTs to develop or incorporate performance measurement strategies into their ITS programs, an apparent gap exists between the need to increase emphasis on performance evaluations and the actual implementation. This apparent gap motivated the main objective of this paper: to unearth the current state of practice of performance measurement for ITS in the US.

Objectives

This paper evaluated the current state of practice of performance measurement for ITS across the US, with the main objective of gathering insights on:

● Whether DOTs use performance measures to evaluate deployed ITS programs.

● Levels at which the performance of ITS programs are evaluated and reported, and the source of data used to generate performance indicators.

● Barriers to benchmarking ITS performance monitoring across jurisdictions, and factors that prevent ITS performance monitoring to greater detail and quality.

● Benefits of the Architecture Reference for Cooperative and Intelligent Transportation (ARC-IT) resources for developing performance measures for state ITS programs.

2. Methodology

As shown in Figure 1, the study consisted of three main tasks:

● Task 1: A comprehensive literature review on the use of performance measurements by DOTs to assess implemented ITS programs in terms of their scale of deployments, system functionality, and benefits achieved through their implementation.

Figure 1. Framework of methodology.

● Task 2: A nationwide qualitative survey to ascertain the existing levels of the use of ITS performance measures by DOTs and issues surrounding ITS performance measurement evaluations.

● Task 3: A synthesis of the findings from the preceding tasks into the current state of practice for performance measurement of ITS programs.

The details of the methodology adopted for each task are presented in the following subsections.

2.1. Literature Review Approach

An in-depth literature review was conducted to provide comprehensive insight into the use of performance measures by DOTs to assess ITS programs for the extent of deployments, system functionality, and benefits. Publicly available sources were used to gather the required literature. Specifically, information was gathered directly from state DOT websites for state ITS architectures, state ITS strategic business plans, state-issued newsletters, and ITS performance reports. The literature review extended to relevant details provided by ARC-IT [6] and other relevant sources. Where sufficient literature existed, the data, calculation methods, and approaches used to generate performance indicators were also considered.

To ensure that the information gathered from the literature review was comprehensive, a log was maintained of searches from the sources below, whether or not they yielded key information; a minimal sketch of such a log entry follows the list.

● DOT websites;

● United States national ITS resource website (ARC-IT); and

● USDOT, FHWA, and related sources.
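The paper does not publish the log format, but as a minimal sketch (all field names here are assumptions for illustration, not the study’s actual schema), each search could be recorded as a structured entry so that unproductive searches are preserved alongside productive ones:

```python
import csv
from datetime import date

# Illustrative schema for a literature-search log entry; the actual fields
# used in the study are not published, so these names are assumptions.
LOG_FIELDS = ["date", "source", "state", "search_terms",
              "documents_found", "key_information"]

def log_search(path, source, state, terms, documents_found, key_information):
    """Append one search to the log, whether or not it yielded key information."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if f.tell() == 0:  # new file: write the header row once
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "source": source,
            "state": state,
            "search_terms": terms,
            "documents_found": documents_found,
            "key_information": key_information,
        })

# Example: a state DOT website search that returned no performance reports
log_search("search_log.csv", "DOT website", "Louisiana",
           "ITS performance report", 0, "none found")
```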

2.2. Nationwide Qualitative Survey Approach

Based on gaps identified in the literature review and on the objectives of the study, a qualitative survey questionnaire was designed and used to solicit key stakeholder insights in a nationwide survey. Specifically, the qualitative survey sought to gather information on:

● Whether DOTs use performance measures to evaluate deployed ITS programs.

● Levels at which the performance of ITS programs are evaluated and reported, and the source of data used to generate performance indicators.

● Barriers to benchmarking ITS performance monitoring across jurisdictions and factors that prevent ITS performance monitoring to greater details and quality.

● Benefits of ARC-IT resources for developing performance measures for state ITS programs.

The qualitative survey questions were designed and structured to address the above objectives. Each question and section was reviewed to assess its wording, flow, and structure. The instructions for completing the questions and the language used were reviewed to ensure that the survey was easy to complete. The final survey questionnaire consisted of 9 questions, with some sub-questions, designed to be completed in less than 10 minutes by the target stakeholders: ITS managers and transportation system operators of DOTs nationwide. The complete questionnaire can be found in the Appendix.

The survey was published online on January 14, 2021, using the Qualtrics tool, and the hyperlink was sent to the email addresses of members of the AASHTO Committee on Transportation System Operations (CTSO) [7], who formed the sampled target stakeholders and were allowed 21 days to complete the questionnaire. Using CTSO members ensured nationwide coverage of stakeholders and eased the collection of contact details for key DOT personnel. The Qualtrics online tool offered comprehensive functionality to achieve a high response rate and made the survey responses immediately available for analysis.

A data cleaning and checking process was carried out in accordance with sound quality assurance procedures to avoid data interpretation errors. The frequency counts and basic tabulations for each question were also checked for consistency, extreme values, and possible logic errors.
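As a minimal sketch of such checks, assuming the responses are exported from Qualtrics to a CSV file (the file name and column names below are hypothetical, since the actual export format is not published), the cleaning, tabulation, and a simple logic check might look like:

```python
import pandas as pd

# Hypothetical export of survey responses; "responses.csv" and the
# column names below are assumptions, not the study's actual schema.
df = pd.read_csv("responses.csv")
question_cols = [c for c in df.columns if c.startswith("Q")]

# Exclude records that are blank for all questions (invalid responses)
valid = df.dropna(subset=question_cols, how="all")
print(f"{len(df)} responses received, {len(valid)} retained after cleaning")

# Frequency counts and basic tabulations for each question
for col in question_cols:
    print(valid[col].value_counts(dropna=False))

# Logic check: an organization answering "No" to monitoring (Q3b)
# should not also report monitoring levels (Q4a)
suspect = valid[(valid["Q3b"] == "No") & valid["Q4a"].notna()]
print(f"{len(suspect)} responses fail the Q3b/Q4a consistency check")
```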

2.3. Synthesis of Findings

The key insights from the literature review and nationwide qualitative survey are synthesized in the following sections. Overall, this paper is structured as follows: Section 1: Introduction; Section 2: Methodology; Section 3: Discussion of Findings; Section 4: Conclusions and Recommendations; and an Appendix.

3. Discussion of Findings

3.1. Overview of Literature Review

3.1.1. ITS Performance Measurement by DOTs

States usually group ITS into broad program areas designed to address transportation goals. These goals are typically outlined in two key documents: the statewide ITS architecture and the ITS strategic business plan. The vision, specific initiatives, processes, and strategies needed to achieve the goals are usually set out in the ITS strategic business plan over a five-year projected interval. The business plan also provides a framework used to develop actionable goals, milestones, timelines, and performance metrics for determining the success of the ITS programs [8] [9]. The statewide ITS architecture, on the other hand, describes the envisioned ITS, the outlined programs, and the projects critical for the implementation, operation, and management of statewide ITS infrastructure, usually over a 15-to-20-year projected outlook. Statewide architectures are created in tandem with the National ITS Architecture [10] [11]. DOTs’ decisions to invest resources to expand or implement ITS programs, or even to retire existing infrastructure, need to be based on performance evaluations. The ITS performance reports from these evaluations are expected to inform investment decisions, improve communications, and ensure that targets and measures are based on data and objective information.

It was evident from the comprehensive search of all DOT websites that different levels of ITS programs have been deployed across the US to enhance transportation system management and operations. However, of the 50 DOT websites searched, 29 states had no publicly available state-issued ITS architectures, business plans, or performance reports. Thus, the comprehensive assessment of the use of performance measures by DOTs to monitor the scale of ITS deployments, system functionality, and benefits achieved through ITS implementations could not be fully executed. The gaps identified pointed to possible challenges in DOTs’ ITS performance monitoring processes and motivated the main objective of this paper: to unearth the current state of practice of performance measurement for ITS.

An overview of how some DOTs have structured and evaluated their ITS and performance measurement processes is summarized below.

Alabama: Alabama’s ITS programs aim to improve safety and reduce traffic fatalities. Eight ITS service areas have been outlined to achieve the goals, which include Travel and Traffic Management and Public Transportation Management. The Strategic Business Plan provided performance measures, reporting, and tracking matrices. These performance measures are grouped under TMC Operational Measures, Alabama Service Assistance Patrol, and System Performance Measures [8] [12] .

Florida: Florida has eight ITS service areas, which include Traffic Management, Traveler Information, and Emergency Management, and 52 existing and planned service packages, which include Traffic Incident Management System and Intersection Safety Warning [13]. The operational performance and outcomes for total annual 511 calls, Road Ranger stops, ITS miles managed, incident duration, travel time reliability, and customer satisfaction were reported in the state’s 2015/2016 ITS Performance Measure Annual Report [14]. The purpose, objectives, and methodologies for assessing each service area were detailed in the report.

Iowa: The state’s Transportation Systems Management and Operations (TSMO) program is centered on eight strategies, including ITS and Communications, aimed at preserving capacity and improving the security, safety, and reliability of transportation systems [15] [16]. The plan for each focus area proposes performance management strategies to evaluate the effectiveness of the strategic area and to support decisions related to resource allocation, technology deployment, and actions to achieve the objectives.

Minnesota: The overview volume of Minnesota Statewide Regional ITS Architecture, version 2018, summarized the purpose, general descriptions, objectives, and performance measures for the state’s ITS program. The objectives are service-area specific and aim to enhance transportation through the safe and efficient movement of people, goods, and information while focusing on increased mobility, fuel efficiency, reduced pollution, and operating efficiency [10] . The development objectives, strategies, and associated performance measures for all goal areas are summarized in the state’s 2018 Regional Architecture Development for Intelligent Transportation (RAD-IT) output [17] .

3.1.2. National ITS Reference Architecture

The National ITS Reference Architecture (ARC-IT) provides high-level functional requirements, goals, objectives, and proposed performance measures that can be used to monitor service packages. The proposed performance measures are drawn from other resources, such as the USDOT, some DOTs, and metropolitan transportation commissions [18]. State and regional transportation agencies can draw on the resources and approaches used in ARC-IT to develop their respective ITS performance measures. However, as ARC-IT itself suggests, mappings between objectives and service packages are not always straightforward and are often situation-dependent; thus, the mappings should be used only as starting points requiring further analysis to identify the best linkages for an agency’s ITS service packages [19].
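To illustrate the idea of such mappings as starting points, the sketch below pairs two service packages with candidate performance measures; the package names and measures are illustrative assumptions, not ARC-IT’s actual listings, and an agency would refine them through further analysis as the guidance suggests.

```python
# Hypothetical, simplified mapping from ITS service packages to candidate
# performance measures; entries are illustrative, not ARC-IT's listings.
candidate_measures = {
    "Traffic Incident Management": [
        "average incident clearance time",
        "roadway clearance time",
    ],
    "Traveler Information": [
        "annual 511 call volume",
        "traveler information accuracy",
    ],
}

def starting_measures(service_package):
    """Return candidate measures as a starting point for further analysis."""
    return candidate_measures.get(service_package, [])

print(starting_measures("Traffic Incident Management"))
```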

3.1.3. Information from Other Relevant Sources

Besides the information gathered on the states’ performance measurement approaches, the FHWA, USDOT, and other agencies have provided useful resources. For instance, the National Transportation Operations Coalition has identified and defined a set of key operations performance measures of national significance; these measures can be used to identify and implement intra-agency network performance measures that support planning and operations functions [20]. The FHWA has also addressed work zone performance measurement needs through issued reports that agencies can consult when developing related performance measurement programs [21] [22], and performance measures focused on incident management are provided in USDOT and FHWA resources [23] [24]. The general descriptions, objectives to reference, performance measures, anticipated data needs, management and operations strategies to consider, and safety-related impacts of TSMO strategies are provided in factsheets in the related desk reference [25].

3.1.4. Summary of Literature Review

Responsible organizations like the FHWA and USDOT, through ARC-IT and other resources, have provided sufficient guidance and information for DOTs to develop or incorporate performance measurement strategies into their ITS programs. An apparent gap, however, exists between the need for DOTs to increase emphasis on performance measurement of transportation systems, including ITS, and the actual implementation. The evidence pointed to possible challenges in DOTs’ ITS performance monitoring processes. Insights from the literature review motivated the overall objective of this paper and guided the development of the nationwide qualitative survey questionnaire in the Appendix.

3.2. Nationwide Qualitative Survey

Overall, 67 CTSO members participated in the survey, with 16 participants (23.88%) providing incomplete or blank inputs for all questions, as shown in Figure 2. These 16 responses were considered invalid and excluded; thus, only 51 (76.12%) of the responses were considered for the analysis. The findings of the survey are synthesized in the following sections. Details on the survey questionnaire and the possible selections for each question can be found in the Appendix.

Figure 2. Survey respondents.

3.2.1. Information about Respondents

The objective of questions one and two was to gather information about the respondent’s organization, the geographic coverage of its ITS, and the type of networks it operates.

Question 1: Which of the following best describes the type of organization you represent?

Of the 51 participants with valid responses, 84.32% (n = 43) represented DOTs, 7.84% (n = 4) represented MPOs, and 1.96% (n = 1) represented the FHWA. Two participants represented county-level DOTs and one represented a nationwide data and software provider; together, these three (n = 3) were categorized as “Other,” representing 5.88% of valid respondents.

Question 2a: How would you classify the extent of the ITS deployment that is under your organization’s control?

Out of 57 tallied responses received from the 51 participants, 70.18% (n = 40) indicated statewide control of deployed ITS, 14.03% (n = 8) regional, 3.51% (n = 2) municipal, and 3.51% (n = 2) nationwide. Metropolitan extent accounted for 7.02% (n = 4), and 1.75% (n = 1) indicated city-level control of deployed ITS.
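Because “tick all that apply” questions allow multiple selections per respondent, the percentages reported throughout this section are shares of tallied selections rather than of respondents, which is why 51 respondents can yield 57 tallies here. A minimal sketch of this tabulation, using the Question 2a counts above:

```python
# Question 2a tallies: 57 selections from 51 respondents, because the
# question allowed multiple selections ("tick all that apply").
tallies = {
    "Statewide": 40,
    "Regional": 8,
    "Metropolitan": 4,
    "Municipal": 2,
    "Nationwide": 2,
    "City": 1,
}

total = sum(tallies.values())  # 57 tallied selections
for option, count in sorted(tallies.items(), key=lambda kv: -kv[1]):
    # Percentages are taken over tallied selections, not respondents
    print(f"{option}: n = {count} ({count / total:.2%})")
```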

Question 2b: What roadway network do you operate on?

The types of road networks operated by respondents’ organizations are shown in descending order in Figure 3. Out of 186 tallied responses from 51 respondents, interstate highways, expressways, and principal arterials were the most commonly operated, indicated respectively by 23.66% (n = 44), 22.04% (n = 41), and 19.35% (n = 36) of the tallied responses. Major and minor collectors, minor arterials, and local roads accounted for 16.67% (n = 31), 11.29% (n = 21), and 5.38% (n = 10), respectively. Three tallied responses indicated “other”: two did not specify details, while one indicated that the organization owned roadway infrastructure and functioned as a regional transportation planning agency under an agreement.

Figure 3. Type of roadway network operated.

3.2.2. Performance Measurement Practice

The objectives of questions three through five were to gather information on whether DOTs use performance measures to evaluate deployed ITS programs, the levels (depths) at which they evaluate and report, and the data sources used for performance evaluation.

Question 3a: Which of the following best describes the Intelligent Transportation Systems (ITS) service areas currently deployed by your organization?

Traveler Information and Traffic Management were the most deployed service areas, indicated by 15.94% (n = 40) and 15.54% (n = 39), respectively, of the 251 tallied responses from 46 respondents. Weather, Data Management, and Maintenance and Construction deployments were indicated by 12.35% (n = 31), 10.76% (n = 27), and 10.36% (n = 26), respectively. Public Safety and Commercial Vehicle Operations polled 9.56% (n = 24) and 9.16% (n = 23), with Vehicle Safety at 5.18% (n = 13). Sustainable Travel, Parking Management, Support, and Public Transportation each polled less than 5% (Figure 4).

Figure 4. Types of ITS service areas deployed.

Question 3b: Do you currently monitor the performance of your organization’s ITS programs?

Of the 46 responses to this question, 36 (78.26%) indicated that their organizations currently monitor the performance of their ITS programs, while 10 (21.74%) indicated the contrary.

Question 4a: Which of the following best describes the levels at which your organization’s ITS performance is monitored?

Out of 99 tallied responses from 25 respondents, technology deployment (22.22%, n = 22), system functionality (21.21%, n = 21), and service provision (15.15%, n = 15) were the three most common levels at which ITS is monitored, as shown in Figure 5. Performance monitoring at the technology deployment level tracks the number or extent to which a particular system is deployed in a jurisdiction, such as the number of speed cameras installed. Monitoring a system’s functionality would, for instance, track the time a system is in or out of service, while monitoring service provision would, for instance, track the quality of service provided.

Figure 5. Levels ITS performance is monitored.

Further, ITS performance monitored at the levels of user benefits, returns on investment, and economic impacts was fairly represented, with 11.11% (n = 11), 10.10% (n = 10), and 10.10% (n = 10) of the tallied responses, respectively. ITS performance monitored at the levels of policy achievement and network benefits was indicated less frequently, by 7.07% (n = 7) and 2.02% (n = 2), respectively. One respondent indicated resource allocation as an “other” level at which ITS performance is monitored.

Question 4b: Do you consider the ITS performance monitoring by your organization beneficial to operations and taxpayers?

Of the 25 respondents, 92% (n = 23) indicated that ITS performance monitoring benefits their organization’s operations and the taxpayers. Two respondents were “not sure” about the benefits.

Question 4c: Who collects the data your organization uses in monitoring performance?

A considerable amount of data is sourced directly from ITS systems, as indicated by 28.79% (n = 19) of the 66 tallied responses, as shown in Figure 6. Data collected directly by ITS systems are expected to be immediately available to agencies at no additional cost, though their storage, processing, transmission, and analysis may incur costs.

Figure 6. Agency or source of data collected.

Generally, data cost and availability depend on whether the data are publicly or privately owned. As indicated by the survey, privately collected data (12.12%, n = 8) and private contractors (16.67%, n = 11) together account for 28.79% of the data used to monitor ITS performance. Data collected internally by agencies and by the public sector accounted for 18.18% (n = 12) and 22.73% (n = 15), respectively. One tallied response indicated university support for data collection.

Question 5a: Do you publish the findings of the performance monitoring you describe?

Out of 25 respondents, 8% (n = 2) do not publish performance monitoring reports, while 28% (n = 7) publish only internally. Agencies that publish only publicly accounted for 12% (n = 3), while 52% (n = 13) publish both internally and externally.

Although the replies suggest that reports are likely to be widely accessible, setting aside the statistical limitations of the small sample size, they do not fully explain the difficulty of locating agency performance measures through the literature search.

Question 5b: If possible, please provide a URL link to your published reports

The URL links to published ITS performance reports, dashboards, and other information provided by respondents are shown in Table 1. These links provided additional resources, as most of the published reports had not been found through the literature search. The table offers readers a quick reference on how some DOTs structure performance measures for assessing their ITS programs.

Table 1. URL links to published reports.

3.2.3. Barriers to ITS Performance Monitoring

Questions six through nine aimed to gather information on barriers to benchmarking ITS performance monitoring across jurisdictions, factors that prevent ITS performance monitoring to greater detail and quality, and the usefulness of ARC-IT resources to DOTs in the development of ITS performance measurement programs.

Question 6: Do you consult or find the suggested Performance Measures listed for individual service packages described in the ARC-IT helpful in developing your organization’s ITS performance measures?

From the survey, 51.52% (n = 17) of the 33 respondents indicated that their organizations did not consult these recommendations or find them helpful. The number of responses, however, was insufficient to conclude whether this feedback can be generalized across agencies.

Question 7: Does your organization compare ITS performance, benefits, and deployment/usage with other jurisdictions or USDOT/FHWA benchmark?

Out of 33 respondents, only 36.36% (n = 12) of the agencies benchmarked or compared ITS performance, benefits, or deployments with other jurisdictions or agencies, including USDOT and FHWA.

Question 8: What are the main barriers that prevent benchmarking or the establishment of consistent performance indicators across your organization’s jurisdiction?

Of the 51 tallied responses from 33 respondents, 31.37% (n = 16), 19.61% (n = 10), and 17.65% (n = 9) indicated lack of available data, lack of guidance or best practices, and incomparable or inconsistent data formats, respectively, as reasons their organizations did not benchmark or compare ITS performance with other agencies or jurisdictions. Benchmarking being “not part of agency objectives” and “lack of inter-agency cooperation” were indicated by 5.88% (n = 3) each. “Other” reasons, specified by 13.73% (n = 7), included resource, knowledge, time, and funding constraints. Another 5.88% (n = 3) indicated that nothing (“none”) prevented their organizations from comparing or benchmarking ITS performance. The reasons provided are shown in Figure 7 in descending order.

Figure 7. Reasons agencies do not compare or benchmark ITS performance with others.

Question 9: Does any of the following prevent your organization from measuring ITS performance, benefits, and deployment/usage more often or to a higher quality?

Of the 66 tallied responses from 33 respondents, the reasons that prevent monitoring of ITS performance, benefits, and deployment to greater detail and quality are mostly lack of available data (27.27%, n = 18), the complexity of the endeavor (19.70%, n = 13), and fragmented or incomparable data (15.15%, n = 10). Uncertainty about benefits and lack of cooperation with stakeholders were indicated by 13.64% (n = 9) and 6.06% (n = 4), respectively. “Other” reasons, specified by 13.64% (n = 9) of the tallies, included resource, funding, and time constraints; the lack of data scientists and specific data-focused positions in organizations; and difficulty assigning responsibilities when inter-agency collaboration is required. Additionally, 4.55% (n = 3) indicated that “nothing” prevented their organizations from measuring performance to greater detail and quality. The reasons provided by respondents are shown in Figure 8 in descending order.

Figure 8. Reasons preventing organizations from measuring ITS performance to greater detail and quality.

4. Conclusions and Recommendations

Responsible organizations like the FHWA and USDOT have provided guidance for transportation agencies to develop and incorporate performance monitoring strategies to evaluate transportation systems, including ITS. A review of the available literature, however, indicates an apparent gap between the need to increase emphasis on performance monitoring of deployed ITS programs and the actual implementation, which necessitated this paper’s effort to unearth the current state of practice.

Feedback from the nationwide survey suggests that performance monitoring has been fairly well integrated into ITS programs by transportation agencies, with most organizations monitoring performance at the levels of system deployment and system functionality. Fewer agencies monitor service provision and user benefits, while policy achievement and network benefits are monitored least. Regarding the data used in the performance monitoring process, a considerable amount comes directly from ITS equipment and is expected to be available at no additional cost, while public and private entities also provide a good amount, which comes at a cost. On the relevance of the resources provided by ARC-IT, organizations rarely consulted them or found them helpful, but the number of responses was insufficient to generalize this feedback across all agencies nationwide.

Further, DOTs generally do not benchmark or compare ITS performance with other agencies or jurisdictions, mainly due to a lack of available data, inadequate guidance or best practices on the subject matter, and the collection of incomparable data by different entities. The lack of data, the complexity of the performance monitoring process, and the collection of incomparable or fragmented data were also cited as reasons preventing the performance monitoring of ITS to greater detail and quality. Other reasons cited included the lack of data scientists or specific data-focused positions in organizations and difficulty assigning responsibilities when inter-agency collaboration is required.

Recommendation for Future Studies

ARC-IT has provided information and guidance on how to develop and incorporate performance measurement strategies into ITS programs. A study is, however, required to evaluate the use of these resources by responsible agencies in developing their ITS performance measurement strategies and how beneficial the resources are. The results from such a study can help formulate guidance or develop best practices on ITS performance measurement for DOTs, MPOs, and related agencies.

Additionally, though the 51 responding CTSO participants represented, to a reasonable extent, all 50 state DOTs nationwide, a future study designed to secure a higher number of respondents from state DOTs, their corresponding MPOs, and related subordinate agencies could confirm the findings herein or unearth in greater detail the state of practice of performance monitoring of deployed ITS nationwide. The findings from such a study could guide responsible agencies such as the FHWA and USDOT, as well as researchers, in developing strategies to aid DOTs and subordinate agencies in incorporating performance monitoring procedures into the monitoring and operations of their intelligent transportation systems.

Significance of Study

This paper unearths the current state of practice of performance measurement of ITS programs and provides readers with a quick overview of the challenges in the ITS performance measurement process. The findings and conclusions are expected to guide DOTs, MPOs, and cities in developing or reevaluating the performance measurement processes of their ITS programs.

Acknowledgements

The funding support of the Louisiana Department of Transportation and Development (DOTD) is duly acknowledged. The support of the USDOT, the DOTs, and the members of the AASHTO Committee on Transportation System Operations (CTSO) is also acknowledged, particularly the in-depth feedback provided through the survey.

Author Contributions

The authors confirm their contributions to the paper as follows: paper conception and design: J. Codjoe; data collection: K. A. Abedi; analysis and interpretation of results: K. A. Abedi, R. Thapa, and J. Codjoe; draft manuscript preparation: K. A. Abedi, R. Thapa, and J. Codjoe. All authors reviewed the results and approved the final version of the manuscript.

Appendix

QUALITATIVE SURVEY QUESTIONNAIRE

Dear Transportation System Operators,

In conjunction with the Louisiana Department of Transportation and Development (DOTD), the Louisiana Transportation Research Center (LTRC) is conducting this survey to help develop a set of performance measures for Louisiana’s Intelligent Transportation Systems (ITS) applications.

The survey is designed to solicit information regarding the current performance measures you use to quantify the benefits of ITS applications in your jurisdiction and any suggestions you may have for us.

This survey will not take more than 10 (ten) minutes.

For more information on this survey, please contact Dr. Raju Thapa at Raju.Thapa@la.gov.

We appreciate your assistance with this survey.

ABOUT YOU/YOUR ORGANIZATION

1) Which of the following best describes the type of organization you represent? (Tick one only)

☐Federal Highway Administration (FHWA)

☐United States Department of Transportation (USDOT)

☐State Department of Transportation (DOT)

☐Metropolitan Planning Organization (MPO)

☐Regional Transportation Planning Office (RTPO)

☐Non-Governmental Organization

☐ITS Service Provider

☐Vehicle/Component Manufacturer

☐Research/Academic Institution

☐Independent Expert/Consultant

☐Other (Please Specify)

2a) How would you classify the extent of the ITS deployment that is under your organization’s control? (Tick all that apply)

☐Nationwide

☐Statewide

☐Regional

☐Municipal

☐City

☐Other (please specify)

2b) What roadway network do you operate on? (Tick all that apply)

☐Interstate Highways

☐Other Freeways & Expressways

☐Other Principal Arterials

☐Minor Arterials

☐Major and Minor Collectors

☐Local Roads

☐Other (please specify)

PERFORMANCE MEASURES

3a) Which of the following best describes the Intelligent Transportation Systems (ITS) service areas currently deployed by your organization? (Tick all that apply). Service Areas are as described in ARC-IT 8.3.

☐Commercial Vehicle Operations

☐Data Management

☐Maintenance and Construction

☐Parking Management

☐Public Safety

☐Public Transportation

☐Support

☐Sustainable Travel

☐Traffic Management

☐Traveler Information

☐Vehicle Safety

☐Weather

3b) Do you currently monitor the performance of your organization’s ITS programs? (Tick one only).

☐Yes

☐No

4a) Which of the following best describes the levels at which your organization’s ITS performance is monitored? (Tick all that apply).

☐Technology Deployment (e.g., number of speed cameras installed)

☐System Functionality (e.g., time out of service)

☐Service provision (including quality/level of service)

☐User benefits (e.g., reduction in journey times)

☐Network benefits (e.g., reduction in traffic congestion)

☐Broader economic impacts (e.g., jobs created, Gross Value Added)

☐Policy achievement (e.g., achievement of policy goals/targets)

☐Return on investment (including indicators of financial sustainability/contribution)

☐Others (please specify)

4b) Do you consider the ITS performance monitoring by your organization beneficial to operations and taxpayers? (Tick all that apply)

☐Yes

☐No

☐Not Sure

4c) Who collects the data your organization uses in monitoring performance? (Tick all that apply).

☐Public sector (e.g., data collected by a local authority)

☐Private contractor (e.g., data collected by a road concessionaire/operator)

☐Privately collected (e.g., floating car data, vehicle-generated data)

☐Internally collected (e.g., internal bespoke data collection exercises)

☐ITS systems (e.g., data collected and reported automatically)

☐Other (please specify)

5a) Do you publish the findings of the performance monitoring you describe? (Tick one only).

☐Yes—internally

☐Yes—publicly

☐Both—internally and externally

☐No

5b) If possible, please provide us with a URL link to your published reports.

6) Do you consult or find the suggested Performance Measures listed for individual service packages described in the ARC-IT helpful in developing your organization’s ITS performance measures? (Tick one only). See https://www.arc-it.net/html/archuse/performancemeasures.html

☐Yes

☐No

7) Does your organization compare ITS performance, benefits, and deployment/usage with other jurisdictions or USDOT/FHWA benchmark? (Tick one only)

☐Yes

☐No

8) What are the main barriers that prevent benchmarking or the establishment of consistent performance indicators across your organization’s jurisdiction? (Tick all that apply)

☐Lack of available data

☐Data recorded are in incomparable formats

☐Not part of organization’s objectives

☐Lack of guidance/Best practice

☐Lack of cooperation with interested parties

☐Other (please specify)

☐None

Other (please specify) …………………………………

9) Does any of the following prevent your organization from measuring ITS performance, benefits, and deployment/usage more often or to a higher quality? (Tick all relevant)

☐Lack of available data

☐Incompatibility of data

☐Unsure of benefits

☐Complexity

☐Lack of cooperation with other stakeholders

☐Other (please specify)

☐Nothing

Other (please specify) …………………………………

Please provide the following details:

Name:

Organization:

Email:

Telephone Number:

Thank you for completing this questionnaire. Someone from DOTD/LTRC may contact you to follow up on some of your responses. We appreciate your input.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Grant, M., Bauer, J., Plaskon, T. and Mason, J. (2010) Advancing Metropolitan Planning for Operations: An Objectives-Driven, Performance-Based Approach. A Guidebook. United States Department of Transportation, FHWA. National Operations Center of Excellence, February. FHWA-HOP-10-026.
[2] U.S. Department of Transportation (2015, April 3) Moving Ahead for Progress in the 21st Century Act (MAP-21).
https://www.transportation.gov/map21
[3] U.S. Department of Transportation (2015, December 4) The Fixing America’s Surface Transportation Act or “FAST Act”.
https://www.transportation.gov/fastact
[4] Grant, M., D’Ignazio, J., Bond, A. and McKeeman, A. (2013) Performance Based Planning and Programming Guidebook. ICF International, Inc. United States Department of Transportation, FHWA-HEP-13-041.
[5] Clark, J., Neuner, M., Sethi, S., et al. (2017) Transportation Systems Management and Operations in Action. FHWA-HOP-17-025. FHWA, U.S. Department of Transportation.
[6] United States Department of Transportation. Architecture Reference for Cooperative and Intelligent Transportation.
https://local.iteris.com/arc-it
[7] American Association of State Highway and Transportation Officials (AASHTO). AASHTO Committee on Transportation System Operations.
https://systemoperations.transportation.org/membership
[8] Gresham Smith and Partners. Alabama Department of Transportation (2015) Intelligent Transportation Systems Strategic Business Plan.
[9] California Department of Transportation (Caltrans) (2015, March) Strategic Management Plan 2015-2020.
https://dot.ca.gov/-/media/dot-media/programs/sustainability/documents/caltrans-strategic-mgmt-plan-033015-a11y.pdf
[10] Minnesota Department of Transportation (MnDOT). Minnesota Statewide Regional ITS Architecture, Version 2018. Overview Volume, December 2018.
https://www.dot.state.mn.us/its/projects/2016-2020/itsarchitecture/overview-volume.pdf
[11] New Jersey Department of Transportation. Division of Statewide Traffic Operations. New Jersey Statewide ITS Architecture—Final Report, February 18, 2005.
https://www.nj.gov/transportation/eng/elec/ITS/pdf/ITS_Architecture_v1.01.pdf
[12] Gresham Smith and Partners. Alabama Department of Transportation (2014) Alabama Statewide ITS Architecture, Final Report.
[13] Florida Department of Transportation (FDOT) (2020) Florida Statewide ITS Architecture.
https://teo.fdot.gov/architecture/architectures/statewide/index.html
[14] Florida Department of Transportation (FDOT), State Traffic Engineering and Operations Office (2016) Statewide Intelligent Transportation Systems Performance Measures, Annual Report Fiscal Year 2015/2016.
[15] Iowa Department of Transportation. Intelligent Transportation Systems (ITS) and Communications Systems Service Layer Plan, January 2018.
https://iowadot.gov/TSMO/ServiceLayerPlan3.pdf
[16] Iowa Department of Transportation. Transportation Systems Management and Operations (TSMO) Strategic Plan. February 2016.
https://iowadot.gov/TSMO/TSMO-Strategic-Plan.pdf?ver=2016-05-02-113238-673
[17] Minnesota Department of Transportation (MnDOT). Minnesota Statewide Regional ITS Architecture, Version 2018. Implementation Volume: ITS Initiatives and Project Concepts for Implementation, December 2018.
http://www.dot.state.mn.us/its/projects/2016-2020/itsarchitecture/implementation-volume.pdf
[18] United States Department of Transportation. The National ITS Reference Architecture (ARC-IT Version 9.0).
https://local.iteris.com/arc-it/html/archuse/archuse.html
[19] United States Department of Transportation (USDOT). The National ITS Reference Architecture (ARC-IT Version 9.0).
https://local.iteris.com/arc-it/html/archuse/objective231.html
[20] National Academies of Sciences, Engineering, and Medicine (2010) Measuring Transportation Network Performance. The National Academies Press, Washington DC.
[21] Ullman, G.L., Lomax, T.J. and Scriba, T. (2011, September) A Primer on Work Zone Safety and Mobility Performance Measurement. Texas Transportation Institute. USDOT, FHWA. FHWA-HOP-11-033.
https://ops.fhwa.dot.gov/wz/resources/publications/fhwahop11033/fhwahop11033.pdf
[22] Zimmerman, B., Scriba, T., Matthews, K., et al. (2010, October) Scan 08-04. Best Practices in Work Zone Assessment, Data Collection, and Performance Evaluation.
http://onlinepubs.trb.org/onlinepubs/nchrp/docs/NCHRP20-68A_08-04.pdf
[23] Federal Highway Administration (FHWA) (2012, October) 2012 Traffic Incident Management National Analysis Report Executive Summary.
https://ops.fhwa.dot.gov/eto_tim_pse/docs/timsa12/tim_sa_2012.pdf
[24] NCHRP, Applied Engineering Management Corp. and Texas A&M Transportation Institute. NCHRP 07-20—Guidance for Implementing Traffic Incident Management Performance Measurement.
http://nchrptimpm.timnetwork.org/?page_id=884
[25] Worth, P., Bauer, J., Grant, M., et al. (2010) Desk Reference: Advancing Metropolitan Planning for Operations. The Building Blocks of a Model Transportation Plan Incorporating Operations. USDOT. FHWA. FHWA-HOP-10-027.
