
Research On Traffic Conflict Extraction Based On Convolutional Neural Network

Posted on: 2020-11-13
Degree: Master
Type: Thesis
Country: China
Candidate: Z Y Zhou
Full Text: PDF
GTID: 2392330623960261
Subject: Traffic and Transportation Engineering
Abstract/Summary:
With the rapid growth of computer hardware and the development of deep learning, extracting traffic flow information from video data has gradually become an important research domain in intelligent transportation. Road intersections have always been a focus of road traffic safety evaluation because of their heavy traffic flow and frequent traffic conflicts and accidents. This paper proposes an approach for automatically extracting traffic conflicts between vehicles from surveillance videos. Transfer learning, a deformable 3D vehicle model, an estimation of distribution algorithm, and numerical simulation were applied to vehicle detection, localization, and tracking. Times to collision (TTC), which indicate conflicts between vehicles, were calculated and counted on our dataset with this approach.

Firstly, labels were made for our dataset. Objects in the video images were divided into three categories: slow traffic (pedestrians and bicycles), cars, and large vehicles (buses and trucks). A YOLO model was trained with a total of 1,148 labeled images by transfer learning to detect and classify objects in images from our dataset. The Kuhn-Munkres algorithm was used to match detections between adjacent frames. A Kalman filter was built to combine the YOLO detections and the vehicle trackers, realizing an optimal estimate of each object's position in the image.

Then, the camera was calibrated and the projection matrix was computed to obtain the transformation between the real-world coordinate system and the image pixel coordinate system. For each vehicle detected by the YOLO model, a deformable 3D vehicle model with 12 shape parameters and 3 position parameters was established in the world coordinate system, and this 3D model can be projected onto the image by the projection matrix. A fitness score was used to measure the similarity between the projected wireframe of the 3D model and the contour of the vehicle in the image. An estimation of distribution algorithm (EDA) was adopted to obtain the optimal estimate of the 3D model, which gives the real-world position and shape of the vehicle in the image.
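The detection-to-track matching step described above can be sketched as follows. This is a minimal illustration, not the thesis code: the matching cost is taken as 1 − IoU between bounding boxes, SciPy's `linear_sum_assignment` (a Kuhn-Munkres implementation) stands in for whatever implementation the thesis used, and the `min_iou` gate is an assumed parameter.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def match_detections(tracks, detections, min_iou=0.3):
    """Match existing track boxes to new detections with the Kuhn-Munkres
    (Hungarian) algorithm on a 1 - IoU cost matrix, keeping only pairs
    whose overlap exceeds the threshold."""
    cost = np.array([[1.0 - iou(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= min_iou]
```

Unmatched tracks and detections would then feed the Kalman-filter update: matched detections correct the predicted track states, while leftover detections spawn new tracks.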
Finally, the vehicle 3D model was projected onto the ground plane. The TTC of the vehicles was calculated by numerical simulation according to the positions and shapes of the vehicle projections in adjacent frames. The severity of a traffic conflict was determined by the duration of the time to collision, with a threshold of 2 seconds.

The automatic traffic conflict extraction program was developed in Python. The main open-source modules invoked in this program are TensorFlow, OpenCV, and NumPy, and more than 3,000 lines of code were written independently. The program was applied to all case videos in our dataset, and 573 traffic conflicts were extracted, of which 320 were dangerous, with a time to collision of less than 2 seconds. The average TTC was 1.96 seconds, and the overall distribution of TTC was a left-skewed normal. Most of the collision times were concentrated between 1 second and 2.5 seconds. In total, 4 different kinds of traffic conflict were extracted.
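The numerical TTC simulation in the final step can be sketched under strong simplifying assumptions: axis-aligned rectangular ground-plane footprints and constant velocities, whereas the thesis projects full deformable 3D models. The `horizon` and `dt` values here are illustrative, not taken from the thesis.

```python
import numpy as np

def boxes_overlap(a, b):
    """Axis-aligned overlap test for boxes given as (x1, y1, x2, y2)."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def time_to_collision(box_a, vel_a, box_b, vel_b, horizon=5.0, dt=0.05):
    """Estimate TTC by stepping both ground-plane footprints forward at
    their current velocities until they first overlap.  Returns the first
    overlap time in seconds, or None if no collision occurs within the
    simulation horizon."""
    a, b = np.asarray(box_a, float), np.asarray(box_b, float)
    da = np.tile(np.asarray(vel_a, float), 2) * dt  # shift for all 4 coords
    db = np.tile(np.asarray(vel_b, float), 2) * dt
    t = 0.0
    while t <= horizon:
        if boxes_overlap(a, b):
            return t
        a, b, t = a + da, b + db, t + dt
    return None
```

A conflict would then be flagged when the returned TTC falls below the 2-second severity threshold stated above.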
Keywords/Search Tags: convolutional neural network, transfer learning, deformable 3D vehicle model, time to collision (TTC), traffic conflict extraction from videos