When Using Computer Vision Technology to Support Wrong-Way Driving Detection Systems, Evaluate Data Processing Delays to Determine the Proper Number of Traffic Cameras Needed to Balance Performance and Cost.

State DOT and University Researchers Evaluated the Installation of Wrong-Way Detection Systems at 10 Locations in Iowa.

Date Posted
05/01/2022
Identifier
2022-L01110

Automating Wrong-Way Driving Detection Using Existing CCTV Cameras

Summary Information

A high percentage of wrong-way driving crashes are fatal or near-fatal, and one of the most effective ways to reduce the danger of wrong-way driving is rapid detection followed by warnings to the driver and others in the vicinity as appropriate. This study focused on detecting wrong-way driving on Iowa highways at ten locations in real time, using only closed-circuit television (CCTV) camera data and requiring no manual pre-calibration of the cameras. To achieve this goal, a deep learning model was implemented to detect and track vehicles, and a machine learning model was trained to classify the road into directions of travel and extract vehicle properties for comparison. When wrong-way driving was detected, the system recorded the violation scene and sent the video file by email to the recipients responsible for responding to the event. The researchers tested the performance of the model on the data streams from the ten cameras. Camera data were collected at these ten locations over five days, November 15 through November 19, 2020, from 10 AM to 5 PM each day.
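
The core check described above, comparing a tracked vehicle's direction of travel against the permitted direction learned for its road region, can be illustrated with a simple trajectory test. The sketch below is not the study's published code; the function names, the 120-degree heading threshold, and the minimum track length are all assumptions.

    # Illustrative sketch (not the study's implementation): flag a track as
    # wrong-way when its average heading opposes the permitted direction of
    # travel learned for that road region. Names and thresholds are assumed.
    import math

    def mean_heading(points):
        """Average heading (degrees) of consecutive displacements, computed on
        unit vectors so the 0/360-degree wrap-around does not distort the mean."""
        sx = sy = 0.0
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            dx, dy = x1 - x0, y1 - y0
            norm = math.hypot(dx, dy)
            if norm > 0:
                sx += dx / norm
                sy += dy / norm
        return math.degrees(math.atan2(sy, sx)) % 360

    def angular_difference(a, b):
        """Smallest absolute difference between two angles, in degrees."""
        d = abs(a - b) % 360
        return min(d, 360 - d)

    def is_wrong_way(track, permitted_heading, threshold_deg=120, min_points=5):
        """Flag a tracked vehicle whose average heading opposes the permitted
        direction; short tracks are ignored to avoid noisy detections."""
        if len(track) < min_points:
            return False
        return angular_difference(mean_heading(track), permitted_heading) > threshold_deg

    # Example: a track moving roughly opposite a permitted 0-degree direction.
    track = [(100, 50), (90, 51), (80, 49), (70, 50), (60, 52), (50, 50)]
    print(is_wrong_way(track, permitted_heading=0))  # True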

  • Determine the number of traffic cameras needed by analyzing data processing delays to balance performance and cost. The perception module can take multiple video streams as input, so agencies need to tune how many video streams (traffic cameras) are processed simultaneously on each graphics processing unit (GPU) to strike a balance between performance and cost. In this study, a total of 160 camera streams were split across two GPUs, keeping average GPU memory usage around 65 percent and keeping the overall processing latency introduced by the perception module under one second (a minimal allocation-and-latency sketch appears after this list).
  • Ensure that the detector and tracker model accommodates camera movements. The detector and tracker model should be designed to detect camera rotation and automatically readjust itself to learn the correct driving direction for the new camera orientation, avoiding frequent manual recalibration (an illustrative rotation-detection sketch also appears after this list).
  • Consider cloud computing for better data storage, faster analysis, and easier reuse in other applications. The researchers in this study ran the model on GPUs on a local machine. With the growing availability of cloud computing providers, moving the computation to the cloud and storing the results and saved data there would make them available for other use cases such as incident or congestion detection.
  • Process data from several cameras simultaneously to improve detection efficiency. In this study, the researchers considered two different models, one of which could not process and analyze more than one camera at a time, which undermined the real-time nature of the analysis.
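
As noted in the first lesson above, sizing the number of streams per GPU is mainly an allocation and latency question. The following sketch is a minimal illustration under assumptions: the round-robin assignment, the placeholder process_batch callable, and the helper names are not from the study; only the 160-stream, two-GPU split and the one-second latency target come from the reported configuration.

    # Minimal sketch (assumptions, not the study's code): split camera streams
    # evenly across GPUs and check that measured per-cycle processing latency
    # stays under the one-second budget reported in the study.
    import time

    LATENCY_BUDGET_S = 1.0  # overall perception-module latency target

    def assign_streams(camera_ids, num_gpus):
        """Round-robin assignment of camera streams to GPUs."""
        assignment = {gpu: [] for gpu in range(num_gpus)}
        for i, cam in enumerate(camera_ids):
            assignment[i % num_gpus].append(cam)
        return assignment

    def measure_cycle_latency(process_batch, frames):
        """Time one perception cycle over a batch of frames from one GPU's streams."""
        start = time.perf_counter()
        process_batch(frames)  # detector + tracker inference (placeholder)
        return time.perf_counter() - start

    def within_budget(latencies, budget=LATENCY_BUDGET_S):
        """True if every GPU's measured cycle latency meets the budget."""
        return all(lat <= budget for lat in latencies)

    # Example: 160 streams over two GPUs, as configured in the study.
    cameras = [f"cam-{i:03d}" for i in range(160)]
    plan = assign_streams(cameras, num_gpus=2)
    print({gpu: len(streams) for gpu, streams in plan.items()})  # {0: 80, 1: 80}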
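The study does not describe how camera rotation should be detected for the second lesson above. One plausible approach, sketched below, compares the current frame against a stored reference frame using OpenCV feature matching; the cv2 and numpy dependencies, the function names, and the five-degree relearning threshold are assumptions, not the study's method.

    # Hedged sketch: estimate in-plane camera rotation between a reference
    # frame and the current frame, and trigger relearning of driving
    # directions when the rotation exceeds an assumed threshold.
    import cv2
    import numpy as np

    ROTATION_THRESHOLD_DEG = 5.0  # assumed trigger for relearning directions

    def estimate_rotation_deg(reference_gray, current_gray):
        """Estimate rotation (degrees) between two grayscale frames using ORB
        features and a partial affine fit; returns None if matching fails."""
        orb = cv2.ORB_create(nfeatures=1000)
        kp1, des1 = orb.detectAndCompute(reference_gray, None)
        kp2, des2 = orb.detectAndCompute(current_gray, None)
        if des1 is None or des2 is None:
            return None
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des1, des2)
        if len(matches) < 10:
            return None
        src = np.float32([kp1[m.queryIdx].pt for m in matches])
        dst = np.float32([kp2[m.trainIdx].pt for m in matches])
        M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
        if M is None:
            return None
        return float(np.degrees(np.arctan2(M[1, 0], M[0, 0])))

    def camera_moved(reference_gray, current_gray):
        """True if the estimated rotation exceeds the relearning threshold."""
        angle = estimate_rotation_deg(reference_gray, current_gray)
        return angle is not None and abs(angle) > ROTATION_THRESHOLD_DEG
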
Goal Areas
System Engineering Elements
