Conduct a data collection pilot test to validate end-to-end data acquisition, transfer, processing, and quality assessment processes.

Experience with data collection from connected vehicle devices during the Safety Pilot Model Deployment in Ann Arbor, Michigan.

Date Posted
01/31/2017
Identifier
2016-L00758

Safety Pilot Model Deployment: Lessons Learned and Recommendations for Future Connected Vehicle Activities

Summary Information

The Connected Vehicle Safety Pilot was a research program that demonstrated the readiness of connected vehicle safety applications based on dedicated short-range communications (DSRC) for nationwide deployment. The vision of the Connected Vehicle Safety Pilot Program was to test connected vehicle safety applications, built on vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications using DSRC technology, in real-world driving scenarios in order to determine their effectiveness at reducing crashes and to ensure that the devices were safe, did not unnecessarily distract motorists, and did not cause unintended consequences.

The Connected Vehicle Safety Pilot was part of a major scientific research program run jointly by the U.S. Department of Transportation (USDOT) and its research and development partners in private industry. This research initiative was a multi-modal effort led by the Intelligent Transportation Systems Joint Program Office (ITS JPO) and the National Highway Traffic Safety Administration (NHTSA), with research support from several agencies, including the Federal Highway Administration (FHWA), Federal Motor Carrier Safety Administration (FMCSA), and Federal Transit Administration (FTA). This one-year, real-world deployment was launched in August 2012 in Ann Arbor, Michigan. The deployment utilized connected vehicle technology in over 2,800 vehicles and at 29 infrastructure sites, at a total cost of over $50 million, in order to test the effectiveness of the connected vehicle crash avoidance systems. Overall, the Safety Pilot Program was a major success and led the USDOT to initiate rulemaking proposing a new Federal Motor Vehicle Safety Standard (FMVSS) that would require V2V communication capability for all light vehicles and establish minimum performance requirements for V2V devices and messages.

Given the magnitude of this program and the positive outcomes generated, the Volpe National Transportation Systems Center conducted a study sponsored by the ITS JPO to gather observations and insights from the Safety Pilot Model Deployment. This report represents an analysis of activities across all stages of the Safety Pilot Model Deployment including scoping, acquisitions, planning, execution, and evaluation. The analysis aimed to identify specific accomplishments, effective activities and strategies, activities or areas needing additional effort, unintended outcomes, and any limitations and obstacles encountered throughout the Model Deployment. It also assessed the roles of organizations and the interactions among these organizations in the project. Findings were used to develop recommendations for use in future deployments of connected vehicle technology. Information for this analysis was gathered from a combination of over 70 participant interviews and a review of program documentation. It is anticipated that findings from this study will be valuable to future USDOT research programs and early adopters of connected vehicle technology.

The report contains numerous lessons across many topics, including program management, outreach and showcase, experiment setup, DSRC device development, device deployment and monitoring, and data management.

Lessons Learned

One of the primary objectives of the Safety Pilot Program was to collect data to support the 2013 NHTSA agency decision on light vehicles. In support of this objective, the Independent Evaluator (IE) worked with the Test Conductor to develop the experimental design and define data needs, prepare driver surveys, coordinate data transfer, and share information about data analysis as needed for a successful Model Deployment. The Test Conductor also provided technical support to the IE in processing the field test data and performing quality assurance. These activities were intended to prepare the data for analysis by the Independent Evaluator during the final stage of the Model Deployment – the Post-Model Deployment Evaluation.



Since the Safety Pilot Model Deployment (SPMD) was the first deployment of its kind for connected vehicles, the USDOT did not know exactly what types of data would be available, what data would be needed to support future research, or even what potential research questions might emerge during or after the Model Deployment. Therefore, the USDOT decided to retain all data collected during the SPMD, rather than only the data needed for the NHTSA decision. This allowed the flexibility to determine at a later date, even after the Model Deployment ended, how best to utilize the data to support identified research areas, and it preserved the data for potential research not yet identified. While it resulted in a large volume of data and associated costs, the decision to keep all data allowed several important analyses to be conducted that were not originally envisioned.



Both subjective and objective data were needed to meet the objectives of the independent evaluation. Subjective data were gathered from surveys, interviews, and focus group sessions with test subjects. Objective data were collected using in-vehicle data acquisition systems (DAS) installed on a total of 186 vehicles: 64 Integrated Light Vehicles (ILVs), 100 light vehicles equipped with Aftermarket Safety Devices (ASDs), 16 heavy vehicles with Retrofit Safety Devices (RSDs), 3 heavy vehicles with Integrated Safety Devices, and 3 transit vehicles with Transit Safety Retrofit Packages (TRPs). The objective data consisted of both video data and numerical data. The numerical data included in-vehicle sensor data, remote sensor and radar data, environmental data, and data from V2V sensory components using DSRC and relative positioning. Since the Test Conductor supplied the data acquisition systems for the ASD, RSD, and TRP vehicles, it was also responsible for harvesting, processing, and transferring data from those vehicles to the Independent Evaluator. Similarly, the Integrated Light Vehicle Developer supplied the DAS units for the ILVs and was responsible for harvesting, processing, and transferring data from the ILVs to the IE.



In addition to the data needed for the evaluation, other forms of data were generated by and collected from a variety of sources and systems within the Model Deployment, including basic safety messages (BSMs) gathered by roadside units (RSUs) from passing vehicles, signal phase and timing messages transmitted by RSUs, and communication messages exchanged with the Security Credential Management System (SCMS). Weather data and traditional traffic data were also collected from previously existing systems in order to understand the context in which the Model Deployment was conducted. The Test Conductor was responsible for collecting all of this additional data characterizing the Model Deployment environment and delivering it to the USDOT and the Independent Evaluator.



Related recommendations made in the source report include:

  • Assess the data collection approach in terms of types of data and data volumes that will be collected. Develop a plan for collecting and storing the data, including the sizing of IT hardware and data management processes.
  • Define all data needs and develop a uniform data format up-front. Specify data types, format, and device specifications in the procurement documents.
  • Allow the data evaluator to provide detailed data requirements to the data collection entities as a part of the planning process. Consider including “business definitions” or the intended use of the data to reduce the risk of discrepancy in understanding of data needs (e.g. “a way to determine if there is an in-lane, lead vehicle” instead of “forward radar data”).
  • Collaborate with and inform stakeholder groups about data and analysis needs and requirements. Create checkpoints for collaboration between the contractor and the evaluator prior to the equipment selection to ensure that the data collected will meet the research needs.
  • Conduct a data collection pilot test to validate end-to-end data acquisition, transfer, processing, and quality assessment processes.
  • Standardize as many related data elements across suppliers as is reasonable and practical, including data structure, primary keys, units, naming schemes, and formats.
  • When strategizing the data collection, storage, and merging approach, think in specifics rather than generalities, particularly about how datasets will interface with each other, i.e., synchronization (see the time-alignment sketch after this list). The cost of integrating disparate data sources should be weighed against the cost of managing and analyzing the data as separate entities, particularly if the data structure is considered intellectual property.
  • Explore methods for implementing over-the-air downloads of data to minimize impacts on resources and participants. This may need to be balanced against the need to physically interact with the vehicles to verify the state of health of the vehicle and equipment.
  • Ensure RSUs have an adequate connection to a back-office system if they are being used to collect and transfer DSRC messages received from vehicles. Consider processes for parsing out redundant data to reduce storage requirements (see the deduplication sketch after this list).
  • Examine fully populated data samples prior to launch of a field test to ensure that the data and formats provided are compatible with the analysis approach. Allow adequate time for review of the samples and for any changes that may be required to the devices if data formats are not as anticipated.
  • Define the timing of the data quality checks and the specific checks to be conducted as a part of the data requirements and specifications using experience from previous field tests as a baseline.
  • Implement an automated system of checks on the data in near-real time to identify and flag data quality issues (see the quality-check sketch after this list).
  • Data vendors should keep an operational, identical copy of the evaluation database for quality assessment purposes.
  • Identify all of the potential privacy issues at the beginning of the project and develop plans to address them early in the project if the intent is to release data to the public in a timely manner.
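The synchronization point above can be made concrete with a small example. The sketch below is illustrative rather than drawn from the SPMD data specifications: it uses pandas' merge_asof to align a numerical DAS channel from one supplier with a video-derived channel from another on a shared timeline. The column names, sample values, and the 60 ms tolerance are all assumptions for the example.

```python
# Illustrative time-alignment of records from two suppliers' data systems.
# Column names (time, device_id, speed_mps, gaze) are placeholders and do
# not reflect the actual SPMD or supplier schemas.
import pandas as pd

# Supplier A: numerical DAS channel sampled at 10 Hz
das = pd.DataFrame({
    "time": pd.to_datetime(["2012-08-21 12:00:00.00",
                            "2012-08-21 12:00:00.10",
                            "2012-08-21 12:00:00.20"]),
    "device_id": ["ASD-001"] * 3,
    "speed_mps": [13.2, 13.4, 13.5],
})

# Supplier B: video-derived channel logged on its own clock
video = pd.DataFrame({
    "time": pd.to_datetime(["2012-08-21 12:00:00.04",
                            "2012-08-21 12:00:00.18"]),
    "device_id": ["ASD-001"] * 2,
    "gaze": ["forward", "mirror"],
})

# Join each DAS sample to the nearest video sample within 60 ms,
# matching on device so streams from different vehicles never mix.
merged = pd.merge_asof(das.sort_values("time"), video.sort_values("time"),
                       on="time", by="device_id",
                       tolerance=pd.Timedelta("60ms"), direction="nearest")
print(merged)
```

Being this specific up front (which column is the join key, what tolerance counts as "the same moment", what happens when no match exists) is exactly the kind of detail the recommendation asks deployers to settle before data start flowing.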
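Because the same broadcast BSM can be logged by more than one RSU, parsing out redundant copies before storage can cut data volume substantially. The following sketch is a hypothetical illustration, not the SPMD pipeline: it assumes each decoded message carries a sender identifier, a generation time, and a raw payload, and it keeps one copy per unique combination.

```python
# Illustrative deduplication of DSRC messages logged by multiple RSUs.
# Field names (sender_id, gen_time, payload) are assumptions for this
# sketch and do not reflect the actual SPMD or SAE J2735 schemas.
import hashlib

def dedup_messages(messages):
    """Return one copy of each message, regardless of how many RSUs heard it."""
    seen = set()
    unique = []
    for msg in messages:
        # Key on sender + generation time + payload digest so the same
        # broadcast captured by several roadside units collapses to one record.
        digest = hashlib.sha1(msg["payload"]).hexdigest()
        key = (msg["sender_id"], msg["gen_time"], digest)
        if key not in seen:
            seen.add(key)
            unique.append(msg)
    return unique

# Example: two RSUs log the same broadcast; only one copy is kept.
logged = [
    {"sender_id": "temp-42", "gen_time": 1345550400.0, "payload": b"\x01\x02"},
    {"sender_id": "temp-42", "gen_time": 1345550400.0, "payload": b"\x01\x02"},
    {"sender_id": "temp-42", "gen_time": 1345550400.1, "payload": b"\x01\x03"},
]
assert len(dedup_messages(logged)) == 2
```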
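For the automated quality checks recommended above, a near-real-time screen can be as simple as flagging missing fields, out-of-range values, and non-increasing timestamps as records arrive, so problems with a device or data pipeline surface within hours rather than at the end of the field test. The sketch below is a minimal illustration with assumed field names and placeholder thresholds, not the checks actually used in the SPMD.

```python
# Minimal near-real-time data quality screen. Field names and thresholds
# are placeholders, not the SPMD specification.
REQUIRED = ("timestamp", "lat", "lon", "speed_mps")

def check_record(rec, prev_timestamp, flags):
    """Append (timestamp, issue) tuples for any quality problems in one record."""
    for key in REQUIRED:
        if rec.get(key) is None:
            flags.append((rec.get("timestamp"), f"missing {key}"))
    speed = rec.get("speed_mps")
    if speed is not None and not (0.0 <= speed <= 75.0):  # placeholder bounds
        flags.append((rec["timestamp"], "speed out of range"))
    ts = rec.get("timestamp")
    if ts is not None and prev_timestamp is not None and ts <= prev_timestamp:
        flags.append((ts, "timestamp not increasing"))
    return ts if ts is not None else prev_timestamp

# Example: stream records through the screen and review flags periodically.
flags, prev = [], None
for rec in [{"timestamp": 1.0, "lat": 42.28, "lon": -83.74, "speed_mps": 12.0},
            {"timestamp": 0.9, "lat": 42.28, "lon": -83.74, "speed_mps": 90.0}]:
    prev = check_record(rec, prev, flags)
print(flags)  # two issues flagged on the second record
```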

 


