CV Pilot sites credit the successful demonstration of cross-site V2V and V2I interactions to close coordination with the test site and the use of an itemized test-run schedule with clear pass/fail criteria.

CV Pilot sites participate in an Interoperability Test to demonstrate over-the-air V2V and V2I interactions between the different sites' onboard and roadside CV equipment, generating lessons learned for running future tests.

McLean, Virginia, United States


Lesson Learned

The success of the Interoperability Test was due to many contributing factors. In discussions with the test participants on the final day of testing, success was attributed to the following practices, which may prove beneficial for future Interoperability Testing activities.

Pre-Test Planning
  • Attend Connected Vehicle PlugFests.
      The USDOT holds an annual CV PlugFest to provide a venue for vendor-to-vendor connected vehicle testing as needed to develop certification services for multi-vendor connected vehicle networks. Prior to conducting Interoperability Testing, sites should consider attending these events to assess vendor capabilities.
  • Coordinate regularly in the months leading up to the actual test date.
      Coordination in the months leading up to the Interoperability Testing test date allowed for CV Pilot sites, vendors, and stakeholders to work together, procure equipment, develop a schedule, provide feedback, etc. This coordination was done via a bi-weekly technical roundtable. A clear definition of roles and responsibilities is important to support planning and execution of the test. Personnel should be clearly identified, and all roles should have backups in the case of unexpected events.
  • Coordinate with test beds to make sure all equipment and software are received weeks before Interoperability Testing is conducted.
      The CV Pilot sites mailed all their testing equipment to TFHRC two weeks before testing was conducted. This allowed time for TFHRC to set up OBUs in designated vehicles and make sure the software was working as designed. It also allowed time for the installation process to be verified by responsible CV Pilot site representatives.
  • Schedule a full day for setup, checkout and dry runs.
      Having an extra day to make sure equipment was installed properly, applications ran as expected, etc. was beneficial on the day of the Interoperability Test. CV Pilot sites and vendors were able to make last-minute updates, study the test bed, and adjust the test plan to support successful execution.
  • Make conservative estimates for test runs.
      A baseline of 10 minutes per test run was assumed for the Interoperability Testing, based on discussions with the sites. This estimate reflected the location where the test was conducted and accommodated the start time, the test run itself, and data collection activities. It should be revised for future interoperability tests based on how long it takes to run through a given test bed, with added buffer time.
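As a rough illustration of the scheduling arithmetic behind this lesson, the sketch below totals run time for a hypothetical test matrix under a 10-minute-per-run baseline with an added buffer. The application names, run counts, and 25% buffer are illustrative assumptions, not figures from the test report.

```python
# Rough schedule estimate for an interoperability test day.
# The 10-minute-per-run baseline is from the lesson above; the
# run counts and the 25% buffer are illustrative assumptions.

MINUTES_PER_RUN = 10
BUFFER = 0.25  # extra time for resets, data collection, and retests

# Hypothetical matrix: application -> number of vendor-pairing runs
runs_per_app = {
    "FCW": 6,   # Forward Collision Warning
    "EEBL": 6,  # Emergency Electronic Brake Light
    "IMA": 4,   # Intersection Movement Assist
}

total_runs = sum(runs_per_app.values())
base_minutes = total_runs * MINUTES_PER_RUN
with_buffer = base_minutes * (1 + BUFFER)

print(f"{total_runs} runs -> {base_minutes} min base, "
      f"{with_buffer:.0f} min with buffer (~{with_buffer / 60:.1f} h)")
```

Even a small three-application matrix fills most of a test day once buffer time is included, which is why the lesson recommends conservative estimates.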
Test Runs
  • Include a pre-meeting and set aside 20-30 minutes for dry runs before conducting individual tests.
      While running the individual tests, it was found to be beneficial to run through the test procedures for each application’s test a few times so that drivers, vendors, and stakeholders were informed and knew what to expect. Additionally, time should be included at the end of each day to identify what tests need to be retested and to discuss any issues the drivers and other individuals encountered during testing.
  • Have walkie-talkies to communicate with drivers, test leads, USDOT representatives, etc. during test runs.
      Walkie-talkies were found to be indispensable during the Interoperability Test. USDOT representatives were able to communicate the start time of each test with in-vehicle personnel, as well as flaggers. End time for each test was also communicated via walkie-talkies.
  • Conduct additional RTCM or other positioning correction enabled testing.
      The Interoperability Testing relied on continuous localization, i.e., positioning, for accurate data collection. However, the position information contained in the DSRC messages was not always accurate or reliable, negatively impacting some of the tests. The team discussed the use of Radio Technical Commission for Maritime Services (RTCM) corrections or RSU triangulation for improved location accuracy, but ultimately decided not to implement these corrections for the Interoperability Test. It should be noted that subsequent testing by New York City with one of the vendors, using a firmware update to the GPS chip in its device, showed improved GPS accuracy, reducing variability from approximately 7 meters to less than the 1.5 meters required by SAE J2945.
  • Tune applications to optimize application performance.
      Each of the vendors had different configuration parameters for each of the applications tested. These parameters included lane widths and the triggering points for warnings within the application (e.g., the vehicle must be traveling at least 15 mi/h to trigger a forward collision warning). As demonstrated during Day 1 of testing, tuning the applications (in this case, adjusting lane width) improved the consistency of application performance. Conducting additional testing using the Interoperability Test procedures for each application while varying additional configuration parameters may provide insight into which settings provide the greatest consistency.
  • Anticipate lane width adjustments in operational environment.
      The CV Pilot sites needed to adjust the application lane width setting to accommodate the narrow lanes at TFHRC (10 ft); the applications were designed for standard-width (12 ft) lanes. Future tests should consider the implications of lane width differences across jurisdictions and locations, as this can prevent vehicles from receiving alerts in operational environments where application settings cannot be adjusted in real time. In addition, lane width adjustments relate to the device's positioning capability.
  • Ensure sufficient precision for repeatability of tests.
      For some of the tests, the thresholds within the applications to trigger a warning/alert required aggressive driver behavior, including hard braking for EEBL and tight coordination/timing for IMA, for the vehicles to come close to a collision. Repeatability for some of the tests proved somewhat difficult. In some cases, this could potentially be solved by loosening the applications' configuration parameters. Another approach would be to use additional, more specific cones along the test track to instruct the drivers on how to behave (e.g., a "start braking here" cone and a "stop here" cone).
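One way to make a "start braking here" cone placement repeatable is to compute the stopping distance from the approach speed and a target deceleration using the standard constant-deceleration formula, d = v^2 / (2a). The sketch below illustrates this; the approach speed and deceleration value are illustrative assumptions, not parameters from the test.

```python
# Stopping distance under constant deceleration: d = v^2 / (2*a).
# Sketch for placing a "start braking here" cone ahead of the stop
# cone in an EEBL run. Speed and deceleration are assumed values.

def stopping_distance_ft(speed_mph: float, decel_fps2: float) -> float:
    """Distance in feet to stop from speed_mph at decel_fps2 (ft/s^2)."""
    speed_fps = speed_mph * 5280 / 3600  # convert mi/h to ft/s
    return speed_fps ** 2 / (2 * decel_fps2)

# e.g. hard braking (~0.5 g = 16.1 ft/s^2) from a 25 mi/h approach
d = stopping_distance_ft(25, 16.1)
print(f"Place the 'start braking here' cone ~{d:.0f} ft before the stop cone")
```

Computing cone spacing this way ties the driver instruction to the maneuver the application is meant to detect, rather than leaving the braking point to driver judgment on each run.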



Connected Vehicle Pilots Phase 2 Interoperability Test: Test Report

Authors: Margaret Hailemariam, J.D. Schneeberger, Justin Anderson, James Chang, and Amy O'Hara

Published By: USDOT Federal Highway Administration

Source Date: 11/09/2018

Other Reference Number: FHWA-JPO-18-707

URL: https://rosap.ntl.bts.gov/view/dot/36715

Lesson Contacts

Lesson Analyst:

Kathy Thompson






United States

Systems Engineering

Unit / Device Testing


Lesson ID: 2019-00885