Lesson

Conduct a data collection pilot test to validate end-to-end data acquisition, transfer, processing, and quality assessment processes.

Experience with data collection from connected vehicle devices during the Safety Pilot Model Deployment in Ann Arbor, Michigan.


September 2015
Ann Arbor; Michigan; United States



Lesson Learned

One of the primary objectives of the Safety Pilot Program was to collect data to support the 2013 NHTSA agency decision on light vehicles. In support of this objective, the Independent Evaluator (IE) worked with the Test Conductor to develop the experimental design and define data needs, prepare driver surveys, coordinate data transfer, and share information about data analysis as needed for a successful Model Deployment. The Test Conductor also provided technical support to the IE in processing the field test data and performing quality assurance. These activities were intended to prepare the data for analysis by the Independent Evaluator during the final stage of the Model Deployment – the Post-Model Deployment Evaluation.

Since the Safety Pilot Model Deployment (SPMD) was the first-of-its-kind deployment of connected vehicles, the USDOT did not know exactly what types of data would be available, what data would be needed to support future research, or even what research questions might be developed during or after the Model Deployment. Therefore, the USDOT decided to retain all data collected during the SPMD, rather than only the data needed for the NHTSA decision. This allowed the flexibility to determine at a later date, even after the Model Deployment ended, how best to use the data to support identified research areas, as well as to preserve the data for potential research not yet identified. While this resulted in a large volume of data and associated costs, the decision to keep all data allowed several important analyses to be conducted that were not originally envisioned.

Both subjective and objective data were needed to meet the objectives of the independent evaluation. Subjective data were gathered from surveys, interviews, and focus group sessions with test subjects. Objective data were collected using in-vehicle data acquisition systems (DAS) installed on a total of 186 vehicles: 64 Integrated Light Vehicles (ILVs), 100 light vehicles equipped with Aftermarket Safety Devices (ASDs), 16 heavy vehicles with Retrofit Safety Devices (RSDs), 3 heavy vehicles with Integrated Safety Devices, and 3 transit vehicles with Transit Safety Retrofit Packages (TRPs). The objective data consisted of both video data and numerical data. The numerical data included in-vehicle sensor data, remote sensor and radar data, environmental data, and data from V2V sensory components using DSRC and relative positioning. Since the Test Conductor supplied the data acquisition systems for the ASD, RSD, and TRP vehicles, it was also responsible for harvesting, processing, and transferring data from these vehicles to the Independent Evaluator. Similarly, the Integrated Light Vehicle Developer supplied the DAS units for the integrated light vehicles and was responsible for harvesting, processing, and transferring data from the ILVs to the IE.

In addition to the data needed for the evaluation, other forms of data were generated by and collected from a variety of sources and systems within the Model Deployment, including basic safety messages (BSMs) gathered by the roadside units (RSUs) from passing vehicles, signal phase and timing (SPaT) messages transmitted by RSUs, and communication messages exchanged with the Security Credential Management System (SCMS). Weather data and traditional traffic data were also collected from previously existing systems in order to understand the context in which the Model Deployment was conducted. The Test Conductor was responsible for collecting all of this additional data characterizing the Model Deployment environment and delivering it to the USDOT and the Independent Evaluator.

Related recommendations made in the source report include:
  • Assess the data collection approach in terms of types of data and data volumes that will be collected. Develop a plan for collecting and storing the data, including the sizing of IT hardware and data management processes.
  • Define all data needs and develop a uniform data format up-front. Specify data types, format, and device specifications in the procurement documents.
  • Allow the data evaluator to provide detailed data requirements to the data collection entities as a part of the planning process. Consider including “business definitions” or the intended use of the data to reduce the risk of discrepancy in understanding of data needs (e.g., “a way to determine if there is an in-lane, lead vehicle” instead of “forward radar data”).
  • Collaborate with and inform stakeholder groups about data and analysis needs and requirements. Create checkpoints for collaboration between the contractor and the evaluator prior to the equipment selection to ensure that the data collected will meet the research needs.
  • Conduct a data collection pilot test to validate end-to-end data acquisition, transfer, processing, and quality assessment processes.
  • Standardize (“common-ize”) as many related data elements across suppliers as is reasonable and practical, including data structure, primary keys, units, naming schemes, and formats (see the schema-mapping sketch after this list).
  • When strategizing the data collection, storage, and merging approach, it is best to think in specifics rather than in generalities, particularly when it comes to how datasets will interface with each other (i.e., synchronization; see the timestamp-alignment sketch after this list). The cost of integrating disparate data sources should be weighed against the cost of managing and analyzing the data as separate entities, particularly if there are issues with the data structure being considered intellectual property.
  • Explore methods for implementing over-the-air downloads of data to minimize impacts on resources and participants. This may need to be balanced against the need to physically interact with the vehicles to verify the state of health of the vehicle and equipment.
  • Ensure RSUs have an adequate connection to a back office system if they are being used to collect and transfer DSRC messages received from vehicles. Consider processes for parsing out redundant data to reduce storage requirements (see the deduplication sketch after this list).
  • Examine fully populated data samples prior to launch of a field test to ensure that the data and formats provided are compatible with the analysis approach. Allow adequate time for review of the samples and for any changes that may be required to the devices if data formats are not as anticipated.
  • Define the timing of the data quality checks and the specific checks to be conducted as a part of the data requirements and specifications using experience from previous field tests as a baseline.
  • Implement an automated system of checks on the data in near-real time to identify and flag data quality issues (see the quality-check sketch after this list).
  • Data vendors should keep an operational, identical copy of the evaluation database for quality assessment purposes.
  • Identify all of the potential privacy issues at the beginning of the project and develop plans to address them early in the project if the intent is to release data to the public in a timely manner.
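
To make the standardization recommendation concrete, the sketch below (in Python) maps supplier-specific records into a single common schema, converting units along the way. The field names, supplier labels, and units are hypothetical stand-ins for illustration, not the actual SPMD data dictionaries.

    # Hypothetical sketch: normalizing supplier-specific records into a common
    # schema. Field names, supplier labels, and units are illustrative only.
    COMMON_FIELDS = ("device_id", "timestamp_utc", "latitude", "longitude", "speed_mps")

    # Per-supplier mapping from native field names to the common schema.
    SUPPLIER_MAPS = {
        "asd": {"device_id": "DevID", "timestamp_utc": "GpsTime",
                "latitude": "Lat", "longitude": "Lon", "speed_mps": "Speed"},
        "ilv": {"device_id": "unit", "timestamp_utc": "time",
                "latitude": "lat_deg", "longitude": "lon_deg", "speed_mps": "spd_kph"},
    }

    def normalize(record, supplier):
        """Map one raw record into the common schema, converting units as needed."""
        field_map = SUPPLIER_MAPS[supplier]
        out = {field: record[field_map[field]] for field in COMMON_FIELDS}
        if supplier == "ilv":              # this hypothetical supplier reports km/h
            out["speed_mps"] = out["speed_mps"] / 3.6
        return out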
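The synchronization point can be illustrated with a sketch that attaches, to each vehicle record, the nearest observation from a second stream (weather, in this example) within a tolerance. The record layouts, key names, and five-minute tolerance are assumptions for illustration.

    # Hypothetical sketch: aligning two independently collected time series by
    # nearest timestamp. Key names and the tolerance are illustrative only.
    import bisect

    def merge_nearest(vehicle_records, weather_records, tolerance_s=300):
        """Attach the nearest in-tolerance weather record to each vehicle record.

        Both inputs are lists of dicts with a numeric "timestamp_utc" key;
        weather_records must be sorted by timestamp.
        """
        times = [w["timestamp_utc"] for w in weather_records]
        merged = []
        for rec in vehicle_records:
            t = rec["timestamp_utc"]
            i = bisect.bisect_left(times, t)
            best = None
            for j in (i - 1, i):          # nearest neighbor is one of these two
                if 0 <= j < len(times):
                    if best is None or abs(times[j] - t) < abs(times[best] - t):
                        best = j
            if best is not None and abs(times[best] - t) <= tolerance_s:
                merged.append({**rec, "weather": weather_records[best]})
            else:
                merged.append({**rec, "weather": None})
        return merged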
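Where RSUs with overlapping coverage each capture the same broadcast, one simple deduplication approach is to key every message on its identifying fields and keep only the first copy seen. The key fields below are illustrative, not the actual BSM layout.

    # Hypothetical sketch: dropping duplicate BSMs captured by multiple RSUs
    # with overlapping coverage. The key fields are illustrative; a real BSM
    # would be keyed on its own identifiers and generation time.
    def deduplicate_bsms(bsm_stream):
        """Yield each unique message once, keyed on (temporary ID, count, time)."""
        seen = set()
        for bsm in bsm_stream:
            key = (bsm["temp_id"], bsm["msg_count"], bsm["gen_time"])
            if key not in seen:
                seen.add(key)
                yield bsm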
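An automated quality-check pass might look like the following: completeness and range checks run as records arrive, returning flags that can be logged or used to trigger alerts. The required fields and thresholds are assumptions, not the checks actually used in the SPMD.

    # Hypothetical sketch: automated completeness and range checks that flag
    # suspect records as data arrive. Fields and thresholds are illustrative.
    REQUIRED_FIELDS = ("device_id", "timestamp_utc", "latitude", "longitude", "speed_mps")

    def check_record(rec):
        """Return a list of quality flags for one normalized record."""
        flags = []
        for field in REQUIRED_FIELDS:
            if rec.get(field) is None:
                flags.append("missing:" + field)
        lat, lon, spd = rec.get("latitude"), rec.get("longitude"), rec.get("speed_mps")
        if lat is not None and not -90 <= lat <= 90:
            flags.append("range:latitude")
        if lon is not None and not -180 <= lon <= 180:
            flags.append("range:longitude")
        if spd is not None and not 0 <= spd <= 75:   # ~270 km/h sanity ceiling
            flags.append("range:speed_mps")
        return flags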






Source

Safety Pilot Model Deployment: Lessons Learned and Recommendations for Future Connected Vehicle Activities

Authors: Kevin Gay and Valarie Kniss (Volpe)

Published By: U.S. Department of Transportation

Source Date: September 2015

EDL Number: FHWA-JPO-16-363

URL: http://ntl.bts.gov/lib/59000/59300/59361/FHWA-JPO-16-363.pdf


Lesson Contacts

Lesson Contact(s):

Kevin Gay
US DOT
kevin.gay@dot.gov

Lesson Analyst:

Gregory Hatcher
Noblis
202-488-5704
ghatcher@noblis.org




Lesson ID: 2016-00758