With the rapid development of the social economy, car ownership in China is increasing significantly, and at the same time the incidence of traffic accidents has gradually increased. The Advanced Driving Assistance System (ADAS) is one of the important means of addressing traffic safety problems and has become an important research topic. Object detection is one of the key technologies in advanced driving assistance systems, and valuable research results have emerged one after another in recent years. Object detection algorithms such as SubCat, R-CNN, Faster R-CNN, and YOLO perform well in simple scenes, but they still have limitations in real traffic scenes. This thesis studies the practical application of driving assistance systems in complex traffic scenes and proposes an optimization method to improve the accuracy of existing object detection algorithms. The main work of the thesis is as follows:

1. The causes of the increased false detections produced by existing object detection algorithms in complex traffic scenes are analyzed. Based on the principle of camera imaging, a geometric constraint model is proposed to remove false detections (an illustrative ground-plane sketch is given below).

2. To address the missed detections and inaccurate localization of existing object detection algorithms, a continuous motion information fusion model based on a conditional random field (CRF) is proposed to improve detection performance (a sketch of a chain-CRF temporal fusion step is given below).

3. The effectiveness of the proposed method is verified through comparison experiments. The experimental results show that the proposed method remains reliable under a variety of complex road conditions.

Combined with existing object detection algorithms, the proposed optimization method offers a new approach to object detection in complex traffic scenes and can help advance research and development in the field of autonomous driving.
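The following is a minimal sketch, not the thesis's actual formulation, of how a flat-ground pinhole-camera constraint can reject detections whose box size is geometrically implausible. The names (Detection, expected_box_height, filter_by_geometry) and all numeric parameters (focal length, camera height, horizon row, assumed object height, tolerance) are hypothetical values introduced here for illustration only.

# Minimal sketch of a ground-plane geometric constraint for filtering detections.
# Assumption: a pinhole camera mounted cam_height_m above a flat road, looking
# roughly horizontally; every numeric parameter below is a hypothetical placeholder.

from dataclasses import dataclass

@dataclass
class Detection:
    x1: float
    y1: float
    x2: float
    y2: float
    score: float

def expected_box_height(y_bottom, focal_px, cam_height_m, horizon_y, obj_height_m=1.5):
    """Expected pixel height of an obj_height_m-tall object whose base touches
    the ground at image row y_bottom (similar-triangle ground-plane geometry)."""
    dy = y_bottom - horizon_y
    if dy <= 0:
        return None                              # box bottom above the horizon: impossible on the ground plane
    distance = focal_px * cam_height_m / dy      # metres along the road
    return focal_px * obj_height_m / distance

def filter_by_geometry(dets, focal_px, cam_height_m, horizon_y, tol=0.5):
    """Keep detections whose observed height is within a relative tolerance of
    the height predicted by the ground-plane constraint."""
    kept = []
    for d in dets:
        h_obs = d.y2 - d.y1
        h_exp = expected_box_height(d.y2, focal_px, cam_height_m, horizon_y)
        if h_exp is not None and abs(h_obs - h_exp) / h_exp <= tol:
            kept.append(d)                       # size is consistent with camera geometry
    return kept

if __name__ == "__main__":
    dets = [
        Detection(100, 250, 160, 380, 0.9),      # car-sized box about 8 m away: kept
        Detection(400, 150, 600, 300, 0.8),      # box far too tall for its distance: rejected
    ]
    print(filter_by_geometry(dets, focal_px=800.0, cam_height_m=1.4, horizon_y=240.0))

In this toy example the second box sits near the horizon, where the constraint predicts a much smaller object, so it is discarded as a false detection.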
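The following is likewise only a sketch, under strong simplifying assumptions, of how detector confidences along one object track could be fused with a two-state linear-chain CRF ("absent"/"present") decoded by Viterbi; frames decoded as present but missed by the detector get their boxes interpolated from neighbouring frames. The functions (viterbi_chain, fuse_track), the potentials, the transition weights, and the interpolation rule are hypothetical and are not taken from the thesis.

# Minimal sketch: fuse per-frame detector confidences along a single track with a
# two-state linear-chain CRF and recover frames the detector missed.
# All potentials and weights below are illustrative assumptions.

import numpy as np

def viterbi_chain(unary, transition):
    """MAP state sequence of a linear-chain CRF.
    unary: (T, S) log-potentials per frame/state; transition: (S, S) log-potentials."""
    T, S = unary.shape
    score = unary[0].copy()
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + transition       # cand[prev_state, state]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + unary[t]
    states = np.zeros(T, dtype=int)
    states[-1] = int(score.argmax())
    for t in range(T - 1, 0, -1):
        states[t - 1] = back[t, states[t]]
    return states

def fuse_track(confidences, boxes, stay_bonus=2.0):
    """confidences[t]: detector score in [0, 1]; boxes[t]: (x1, y1, x2, y2) or None.
    Returns per-frame presence decisions and boxes with missed frames filled in."""
    conf = np.clip(np.asarray(confidences, dtype=float), 1e-3, 1.0 - 1e-3)
    unary = np.stack([np.log(1.0 - conf), np.log(conf)], axis=1)   # columns: absent, present
    transition = np.array([[stay_bonus, 0.0],
                           [0.0, stay_bonus]])                     # reward temporal continuity
    present = viterbi_chain(unary, transition) == 1
    filled = list(boxes)
    det_idx = [t for t, b in enumerate(boxes) if b is not None]
    for t in range(len(boxes)):
        if present[t] and filled[t] is None and det_idx:
            prev = max((i for i in det_idx if i < t), default=None)
            nxt = min((i for i in det_idx if i > t), default=None)
            if prev is not None and nxt is not None:
                w = (t - prev) / (nxt - prev)                      # linear interpolation weight
                filled[t] = tuple((1 - w) * a + w * b
                                  for a, b in zip(boxes[prev], boxes[nxt]))
            elif prev is not None:
                filled[t] = boxes[prev]
            elif nxt is not None:
                filled[t] = boxes[nxt]
    return present, filled

if __name__ == "__main__":
    conf = [0.90, 0.85, 0.05, 0.80, 0.90]        # frame 2 falls below the detector threshold
    boxes = [(10, 40, 60, 90), (14, 40, 64, 90), None,
             (22, 41, 72, 91), (26, 41, 76, 91)]
    present, filled = fuse_track(conf, boxes)
    print(present)          # frame 2 is decoded "present" despite the missed detection
    print(filled[2])        # its box is interpolated: (18.0, 40.5, 68.0, 90.5)

In this toy track the third frame's score falls below the detector's threshold, yet the continuity potential keeps the chain in the "present" state, so the missing box is filled in by interpolating between its neighbours.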