Over four days in the summer of 2018, a watershed moment in the maturation of connected vehicle technologies occurred at the Turner-Fairbank Highway Research Center (TFHRC), a federally owned and operated national research facility in McLean, Virginia. Through a collaborative effort, the United States Department of Transportation (USDOT) and three Connected Vehicle (CV) Pilot Demonstration sites—New York City, Wyoming, and the Tampa Hillsborough Expressway Authority (THEA)—conducted an interoperability test to demonstrate whether a vehicle with an onboard device from one site could receive messages from the onboard units (OBUs) and roadside units (RSUs) of the other CV Pilot sites, in accordance with the key connected vehicle interfaces and standards. A test of this nature, involving three deployment sites and five device vendors, had never been done before.
Working with the USDOT and its contractors, the CV Pilot sites collaborated to harmonize the data elements that would make such interactions possible, establish the security profiles, and agree on interpretations of the various standards for connected vehicle systems. Coming into the test, participants were eager to see whether six months of planning and coordination would pay off—and it did.
In total, over one hundred interoperability test runs were conducted for four application-based test cases—Forward Collision Warning (FCW), Intersection Movement Assist (IMA), Emergency Electronic Brake Lights (EEBL), and reception of vehicle-to-infrastructure (V2I) signal phase and timing (SPaT) and MAP messages. Results of the testing indicated successful transfer of messages between devices located on six vehicles—from five different vendors—and between in-vehicle devices and RSUs. All devices used for the test were enrolled with a commercial Security Credential Management System (SCMS) and used test certificates from the SCMS to ensure trusted communication between OBUs and RSUs. Based on the testing, it was concluded that all vendor and CV Pilot site deployment configurations were interoperable and could trigger warnings in each other's devices.
The success of the Interoperability Test was due to many contributing factors. In discussions with the test's participants on the final day of testing, success was attributed to the following practices, which may prove beneficial for future Interoperability Testing activities.
- Attend Connected Vehicle PlugFests.
- The USDOT holds an annual CV PlugFest to provide a venue for vendor-to-vendor connected vehicle testing as needed to develop certification services for multi-vendor connected vehicle networks. Prior to conducting Interoperability Testing, sites should consider attending these events to assess vendor capabilities.
- Coordinate regularly in the months leading up to the actual test date.
- Coordination in the months leading up to the Interoperability Testing test date allowed for CV Pilot sites, vendors, and stakeholders to work together, procure equipment, develop a schedule, provide feedback, etc. This coordination was done via a bi-weekly technical roundtable. A clear definition of roles and responsibilities is important to support planning and execution of the test. Personnel should be clearly identified, and all roles should have backups in the case of unexpected events.
- Coordinate with test beds to make sure all equipment and software is received weeks before Interoperability Testing is conducted.
- The CV Pilot sites mailed all their testing equipment to TFHRC two weeks before testing was conducted. This allowed time for TFHRC to set up OBUs in designated vehicles, confirm the software was working as designed, and have the installation process verified by responsible CV Pilot site representatives.
- Schedule a full day for setup, checkout and dry runs.
- Having an extra day to confirm that equipment was installed properly, applications ran as expected, etc. proved beneficial once the Interoperability Testing began. CV Pilot sites and vendors were able to make last-minute updates, study the test bed, and adjust the test plan to support successful execution.
- Make conservative estimates for test runs.
- A baseline of 10 minutes per test run was assumed for the Interoperability Testing, based on discussions with the sites. This estimate reflected the location where the test was conducted and accounted for the start time, the test run itself, and data collection activities. It should be revised for future interoperability tests based on how long it takes to run through a given test bed, with added buffer time.
- Include a pre-meeting and set aside 20-30 minutes for dry runs before conducting individual tests.
- While running the individual tests, it proved beneficial to run through the test procedures for each application a few times so that drivers, vendors, and stakeholders were informed and knew what to expect. Additionally, time should be included at the end of each day to identify which tests need to be rerun and to discuss any issues the drivers and other individuals encountered during testing.
- Have walkie-talkies to communicate with drivers, test leads, USDOT representatives, etc. during test runs.
- Walkie-talkies were found to be indispensable during the Interoperability Test. USDOT representatives were able to communicate the start time of each test with in-vehicle personnel, as well as flaggers. End time for each test was also communicated via walkie-talkies.
- Conduct additional RTCM or other positioning correction enabled testing.
- The Interoperability Testing relied on continuous localization, i.e., positioning, for accurate data collection. However, the position information contained in the DSRC messages was not always accurate or reliable, which negatively impacted some of the tests. The team discussed the use of Radio Technical Commission for Maritime Services (RTCM) corrections or RSU triangulation for improved location accuracy but ultimately decided not to implement these corrections for the Interoperability Test. It should be noted that subsequent testing by New York City with one of the vendors, using a firmware update to the GPS chip in their device, showed improved GPS accuracy, reducing variability from approximately 7 meters to less than the 1.5 meters required by SAE J2945.
- Tune applications to optimize application performance.
- Each of the vendors had different configuration parameters for each of the applications tested. These parameters included lane widths and the triggering points for warnings within the application (e.g., the vehicle must be traveling at least 15 mi/h to trigger a forward collision warning). As demonstrated during Day 1 of testing, tuning the applications (in this case, adjusting lane width) improved the consistency of application performance. Additional testing using the Interoperability Test procedures for each application, while varying other configuration parameters, may provide insight into which settings yield the greatest consistency.
- Anticipate lane width adjustments in operational environment.
- The CV Pilot sites needed to adjust the application lane width setting to accommodate the narrow 10 ft lanes at TFHRC; the applications were designed for standard 12 ft lanes. Future tests should consider the implications of lane width differences across jurisdictions and locations, as they can prevent vehicles from receiving alerts in operational environments where application settings cannot be adjusted in real time. In addition, lane width adjustments interact with the device's positioning capability.
- Ensure sufficient precision for repeatability of tests.
- For some of the tests, the thresholds within the applications required aggressive driver behavior to trigger a warning/alert, including hard braking for EEBL and tight coordination/timing for IMA so that the vehicles came close to a collision. Repeatability for some of the tests therefore proved somewhat difficult. In some cases, this could potentially be solved by loosening the application parameter configuration. Another approach would be to use additional, more specific cones along the test track to instruct the drivers on how to behave (e.g., a "start braking here" cone and a "stop here" cone).
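The positioning accuracy improvement noted above (variability reduced from roughly 7 meters to under 1.5 meters) can be checked with a simple analysis of logged positions against a surveyed ground truth. The sketch below is illustrative only: the function names are invented here, the flat-earth distance approximation assumes short test-track distances, and the 68% quantile reflects a one-sigma reading of the accuracy target, which is an assumption rather than a quotation of SAE J2945.

```python
import math

EARTH_RADIUS_M = 6_371_000.0

def horizontal_error_m(reported, truth):
    """Approximate horizontal distance in meters between two (lat, lon)
    pairs in degrees, using an equirectangular (flat-earth) projection
    that is adequate over test-track distances."""
    lat1, lon1 = map(math.radians, reported)
    lat2, lon2 = map(math.radians, truth)
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return math.hypot(x, y) * EARTH_RADIUS_M

def meets_accuracy_target(errors_m, threshold_m=1.5, quantile=0.68):
    """True if at least `quantile` of the error samples fall within
    threshold_m. The 1.5 m / 68% combination is an assumed reading of
    the accuracy target discussed in the text."""
    within = sum(1 for e in errors_m if e <= threshold_m)
    return within / len(errors_m) >= quantile
```

Running such a check on logs from each vehicle before the test day would flag devices likely to cause the positioning-related failures seen during testing.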
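The application tuning discussed above (lane width settings, the 15 mi/h minimum speed for FCW) can be pictured as a small set of configurable gates in the warning logic. The sketch below is a hypothetical simplification, not any vendor's actual implementation: the parameter names, the default 3-second time-to-collision threshold, and the lane-membership check are all assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class FcwConfig:
    """Hypothetical tunable parameters for a forward collision warning."""
    lane_width_m: float = 3.66    # 12 ft standard lane; TFHRC lanes were 10 ft (3.05 m)
    min_speed_mps: float = 6.7    # ~15 mi/h minimum host speed to arm the warning
    ttc_threshold_s: float = 3.0  # warn when time-to-collision drops below this (assumed)

def fcw_should_warn(cfg, host_speed_mps, range_m, closing_speed_mps, lateral_offset_m):
    """Return True if a warning should fire under cfg: the host must exceed
    the speed gate, the target must lie within the configured lane, and the
    projected time-to-collision must fall below the threshold."""
    if host_speed_mps < cfg.min_speed_mps:
        return False  # below the minimum speed gate
    if abs(lateral_offset_m) > cfg.lane_width_m / 2:
        return False  # target is outside the configured lane
    if closing_speed_mps <= 0:
        return False  # not closing on the target
    return range_m / closing_speed_mps < cfg.ttc_threshold_s
```

Narrowing `lane_width_m` from 3.66 m to 3.05 m, as the sites did at TFHRC, excludes targets near the lane edges and illustrates why a single tuning change can alter warning consistency across a whole test day.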
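The repeatability difficulty above stems from how an EEBL event is triggered: the remote vehicle must brake hard enough to cross a deceleration threshold before the event is broadcast. The sketch below shows the shape of such a trigger; the 0.4 g figure is a commonly cited hard-braking value and is an assumption here, not the threshold the CV Pilot vendors actually used.

```python
G_MPS2 = 9.81  # standard gravity, m/s^2

def eebl_event_triggered(decel_mps2, threshold_g=0.4):
    """A remote vehicle broadcasts an EEBL event when its deceleration
    exceeds the configured hard-braking threshold (0.4 g assumed here)."""
    return decel_mps2 >= threshold_g * G_MPS2

def speed_trace_decel(speeds_mps, dt_s):
    """Maximum deceleration (m/s^2) observed in a uniformly sampled
    speed trace, e.g. from a drive log of a test run."""
    return max(
        (speeds_mps[i] - speeds_mps[i + 1]) / dt_s
        for i in range(len(speeds_mps) - 1)
    )
```

Loosening `threshold_g` is the "loosening the application parameter configuration" option noted above: it lets a less aggressive braking maneuver trigger the event, trading realism for repeatability.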