Specify interoperability testing requirements and steps as part of the connected vehicle device requirements prior to starting multiple rounds of testing, feedback, reset, and retesting.

Experience with deploying connected vehicle devices during the Safety Pilot Model Deployment in Ann Arbor, Michigan.

Date Posted
01/31/2017
Identifier
2016-L00757

Safety Pilot Model Deployment: Lessons Learned and Recommendations for Future Connected Vehicle Activities

Summary Information

The Connected Vehicle Safety Pilot was a research program that demonstrated the readiness of connected vehicle safety applications based on dedicated short-range communications (DSRC) for nationwide deployment. The vision of the Connected Vehicle Safety Pilot Program was to test connected vehicle safety applications, built on vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications systems using DSRC technology, in real-world driving scenarios in order to determine their effectiveness at reducing crashes and to ensure that the devices were safe, did not unnecessarily distract motorists, and did not cause unintended consequences.

The Connected Vehicle Safety Pilot was part of a major scientific research program run jointly by the U.S. Department of Transportation (USDOT) and its research and development partners in private industry. This research initiative was a multi-modal effort led by the Intelligent Transportation Systems Joint Program Office (ITS JPO) and the National Highway Traffic Safety Administration (NHTSA), with research support from several agencies, including the Federal Highway Administration (FHWA), Federal Motor Carrier Safety Administration (FMCSA), and Federal Transit Administration (FTA). This one-year, real-world deployment was launched in August 2012 in Ann Arbor, Michigan. The deployment utilized connected vehicle technology in over 2,800 vehicles and at 29 infrastructure sites, at a total cost of over $50 million, in order to test the effectiveness of the connected vehicle crash avoidance systems. Overall, the Safety Pilot Program was a major success and has led the USDOT to initiate rulemaking that would propose to create a new Federal Motor Vehicle Safety Standard (FMVSS) to require V2V communication capability for all light vehicles and to create minimum performance requirements for V2V devices and messages.

Given the magnitude of this program and the positive outcomes generated, the Volpe National Transportation Systems Center conducted a study sponsored by the ITS JPO to gather observations and insights from the Safety Pilot Model Deployment (SPMD). This report represents an analysis of activities across all stages of the SPMD, including scoping, acquisitions, planning, execution, and evaluation. The analysis aimed to identify specific accomplishments, effective activities and strategies, activities or areas needing additional effort, unintended outcomes, and any limitations and obstacles encountered throughout the Model Deployment. It also assessed the roles of organizations and the interactions among these organizations in the project. Findings were used to develop recommendations for use in future deployments of connected vehicle technology. Information for this analysis was gathered from a combination of over 70 participant interviews and a review of program documentation. It is anticipated that findings from this study will be valuable to future USDOT research programs and early adopters of connected vehicle technology.

The report contains numerous lessons across many topics, including program management, outreach and showcase, experiment setup, DSRC device development, device deployment and monitoring, and data management.

Lessons Learned

The Test Conductor conducted device interoperability testing to verify the ability of the vehicle-based and infrastructure-based devices produced by various suppliers to exchange, decode, log, and/or forward DSRC messages. All possible combinations of device types and suppliers were tested to verify this capability. The initial testing schedule included only one round of interoperability testing, scheduled for April through May 2012, prior to the Pre-Model Deployment dry run testing. Two reasons were cited for including only one round. First, it was assumed that all devices would be ready according to the device development schedule and therefore available for testing in April 2012. Second, and more importantly, because all of the devices went through the qualification testing process, it was assumed that no major issues requiring significant updates and additional rounds of retesting would be identified during the interoperability testing.
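For a sense of the combinatorial scope, the sketch below enumerates the kind of ordered transmit/receive test matrix that covering "all possible combinations of device types and suppliers" implies. It is illustrative only; the device types and supplier names are hypothetical placeholders, not the actual SPMD device roster.

```python
# Illustrative sketch only: enumerate the ordered transmit/receive pairings an
# interoperability test matrix must cover. Device types and supplier names are
# hypothetical placeholders, not the actual SPMD device roster.
from itertools import product

devices = [
    ("ASD", "Supplier A"),   # aftermarket safety device
    ("ASD", "Supplier B"),
    ("VAD", "Supplier C"),   # vehicle awareness device
    ("RSU", "Supplier D"),   # roadside unit
]

# Each ordered (transmitter, receiver) pairing of distinct units must be able
# to exchange, decode, and log DSRC messages.
test_matrix = [(tx, rx) for tx, rx in product(devices, repeat=2) if tx != rx]

for tx, rx in test_matrix:
    print(f"TX {tx[0]} ({tx[1]}) -> RX {rx[0]} ({rx[1]})")
```

Even with this small placeholder list, the number of pairings grows quadratically with the number of devices and suppliers, which hints at the scale of effort a single round of testing was expected to absorb.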

Unfortunately, the roadside units (RSUs) were not ready for testing as originally scheduled due to the delay in developing the specification and research Qualified Products List (rQPL). As a result, the interoperability testing was divided into two stages. Stage 1 tested only vehicle-based devices, evaluating V2V Basic Safety Message (BSM) compatibility between a variety of vehicle-based devices and platforms from each of the selected suppliers. It included static bench testing at the Test Conductor's facilities and dynamic field testing of devices on a predefined route within the Model Deployment geographic area (MDGA), and it occurred from mid-April through mid-May as originally planned. The infrastructure-based devices (RSUs) were incorporated into the Stage 2 testing, which evaluated both vehicle-based and infrastructure-based devices against a series of seven test cases and was scheduled for two weeks at the end of June. At the time of the Stage 2 bench testing in June, the vehicle-based devices could not be tested against three of the test cases that included security functions, as the devices did not yet support this functionality. Therefore, these functions were not included in the Pre-Model Deployment Dry Run Testing in July.

It is clear that the devices were not as mature as the qualification testing initially suggested, as evidenced by the number of changes required following the initial stages of interoperability testing. The Test Conductor tested as much functionality as possible prior to the Pre-Model Deployment Dry Run Testing; however, much remained to be done following the dry run testing in July. As a result, the Test Conductor and USDOT jointly decided that two additional stages of interoperability testing, Stages 3 and 4, were required after the SPMD launch in August 2012. The Stage 3 field testing re-assessed devices that had failed in previous stages and tested security functionality that was not yet supported during the Stage 2 testing. A fourth and final stage of bench and field testing was conducted to re-assess the devices after multiple firmware updates were implemented to resolve issues discovered in the field.

Several issues were identified in the interoperability testing process. First, adding stages of testing without being able to adjust the project schedule or deployment launch date added risk to the project, since it was unknown how the vehicle awareness devices (VADs) and aftermarket safety devices (ASDs) would work with the RSUs until after the devices were purchased and deployed in the field. Second, conducting the interoperability testing required far more resources and time than originally planned for this activity. The device interoperability testing verified, via a manual process, that the data elements in the BSM sent by the transmitting device were identical to the data elements in the BSM received by the receiving device. This manual verification involved a field-by-field comparison of each data element in the transmitted and received BSM; because the BSM contains a large number of fields, this was a labor-intensive process. In addition, the testing was somewhat limited in that it did not verify the validity and accuracy of the data generated and transmitted by each supplier's device, since that type of verification was assumed to be covered by other testing activities such as the device qualification testing and SPMD dry run testing.
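The recommendation to automate this comparison (see the bulleted list below) can be sketched in a few lines. The fragment here is an assumption about what such a tool might look like, not the Test Conductor's actual process; the field names are illustrative stand-ins for the many data elements of a real SAE J2735 BSM.

```python
# Hedged sketch: automated field-by-field comparison of a decoded BSM as
# transmitted versus the copy logged by the receiver. Field names are
# illustrative; a real J2735 BSM decode carries many more data elements.
def diff_bsm(transmitted: dict, received: dict) -> list:
    """Return human-readable mismatches between two decoded BSMs."""
    mismatches = []
    for field in sorted(set(transmitted) | set(received)):
        tx_val = transmitted.get(field, "<missing>")
        rx_val = received.get(field, "<missing>")
        if tx_val != rx_val:
            mismatches.append(f"{field}: sent {tx_val!r}, received {rx_val!r}")
    return mismatches

tx_bsm = {"msgCnt": 42, "lat": 42.2808, "lon": -83.7430, "speed": 13.4}
rx_bsm = {"msgCnt": 42, "lat": 42.2808, "lon": -83.7430, "speed": 13.3}

for issue in diff_bsm(tx_bsm, rx_bsm):
    print(issue)   # e.g. "speed: sent 13.4, received 13.3"
```

A tool of this kind only compares logged values between devices; as the report notes, verifying the validity and accuracy of the transmitted data itself remains a separate activity.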

Related recommendations made in the source report include:

  • Develop a plan and process for efficiently recalling devices in the event that numerous updates are required. Clearly communicate to volunteer drivers that they may be contacted to bring in their vehicles at any point during the pilot if problems are detected.
  • Analyze the need to purchase spare devices. If the budget allows, procure a sufficient inventory of spares to replace non-functional units as part of the recall process plan. Work with suppliers to determine replacement times to better estimate how many spares would be required.
  • Include state of health monitoring requirements and supporting processes for each type of device. Implement remote monitoring and device reset capabilities to reduce the number of devices that need to be physically recalled from the field (a brief sketch of such a health check follows this list).
  • Utilize automated tools to perform basic data comparisons between devices in order to more efficiently conduct the testing and test a wider variety of cases.
  • Define and document requirements and steps for all interoperability testing participants and data users.
  • Implement a full dry run that includes all installation, operation, and interoperability requirements for all devices, infrastructure, and systems. Incorporate sufficient time into the schedule to ensure that all devices and systems are in a stable state prior to implementing a full dry run.
  • Ensure that in-depth system testing requirements, updates, and retest cycles are well understood and appropriately resourced in time and budget. Plan for several iterations of component, subsystem, and total pilot system testing within the dry run. Depending on the number of system components (devices, infrastructure, data collection and backhaul connections, security implementation, etc.), this could take several weeks to several months.
  • Be prepared to encounter field issues that were not discovered during the qualification testing, interoperability testing, and dry run testing.
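As referenced in the state-of-health recommendation above, the following is a hypothetical sketch of a remote health check that could flag devices for a remote reset rather than a physical recall. The status fields, thresholds, and device identifier are assumptions for illustration, not requirements drawn from the source report.

```python
# Hypothetical sketch of a remote state-of-health check; the status fields,
# thresholds, and identifiers are assumptions, not from the source report.
def check_device_health(status: dict) -> list:
    """Flag conditions that might otherwise require physically recalling a device."""
    problems = []
    if not status.get("gps_fix"):
        problems.append("no GPS fix")
    if status.get("seconds_since_last_bsm", float("inf")) > 300:
        problems.append("no BSM logged in the last 5 minutes")
    if status.get("firmware") != status.get("expected_firmware"):
        problems.append("firmware out of date")
    return problems

# Example status record as a deployed device might report it.
status = {
    "device_id": "ASD-0217",
    "gps_fix": True,
    "seconds_since_last_bsm": 12,
    "firmware": "1.4.2",
    "expected_firmware": "1.4.3",
}

issues = check_device_health(status)
if issues:
    print(f"{status['device_id']}: attempt remote reset or schedule recall -> {issues}")
```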