With the continuous rise in the number of vehicles in our country, improving traffic efficiency and preventing accidents in traffic scenes have become primary goals of modern traffic construction. Thanks to its excellent performance on these problems, cooperative roadside infrastructure, supported by intelligent sensing technologies, is gaining popularity. Among intelligent sensing technologies, vehicle detection and multi-object tracking are the premise and basis of roadside functions such as active safety warning and traffic diversion; the precision and robustness of these methods are therefore crucial. However, from the roadside perspective, vehicle targets are frequently occluded by other objects and by the environmental background. Since the range, angle, and degree of occlusion are uncertain, detecting and tracking occluded vehicles is difficult. In this paper, we concentrate on these two main issues, occluded vehicle detection in static images and occluded vehicle tracking in time series, from the point of view of camera and radar fusion. Details are presented below.

Part Ⅰ: Detection optimization. Specialized optimizations are made to address the common detection problems in occlusion scenes. To reduce missed detections, we introduce a non-local feature fusion structure that gathers context information for the occluded target. Against false detections, the recognition ability for occluded targets is improved in three main aspects: color space enhancement, edge information enhancement, and test-time enhancement. Against false exclusions, a centrality-assisted distinctiveness index is designed to assist the selection of true detections. Compared with a general object detection algorithm, missed detections, false detections, and false exclusions are reduced to 31.31%, 85.49%, and 22.64%, respectively, by the algorithm designed in this paper under the occlusion scenario.

Part Ⅱ: Tracking improvements (filtering methods). To realize robust tracking of vehicles under occlusion, a
multi-sensor fused filtering algorithm is proposed to establish the relationship between frames. Specifically, this paper first introduces the spatio-temporal synchronization and extrinsic calibration algorithms for vision and radar, then builds corresponding sub-filters for target tracking with vision and with millimeter-wave radar; in the information fusion stage, a federated Kalman filter is adopted to fuse the sub-filters' information. Compared with the vision-filter-based tracking method, the multi-sensor tracking algorithm designed in this paper improves the multi-object tracking accuracy by about 3.21%, increases the number of mostly tracked targets by about 4.20%, and decreases the number of mostly lost targets by about 26.19%.

Part Ⅲ: Tracking improvements (deep-learning methods). To improve the tracking accuracy and matching robustness under dynamic occlusions, a multi-sensor multi-object tracking network is proposed, which comprises spatial fusion, time-series fusion, and matching index optimization. For spatial fusion, this paper innovatively fuses the multi-sensor data through hierarchical relationships and interactive filtering. For time-series fusion, a long short-term memory network is introduced to aggregate sequence information. For the optimization of matching indexes, an appearance index and a population distribution index are integrated to achieve full life-cycle management of occluded targets. The comprehensive tracking algorithm achieves a target tracking accuracy of 87.86%, which is 4.45% higher than that of the multi-sensor federated Kalman filter algorithm.
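As an illustration of the non-local feature fusion idea in Part Ⅰ, the following minimal sketch lets every spatial position aggregate context from all other positions, weighted by feature similarity. This is an assumed, simplified form (embedding functions reduced to the identity, features flattened to one position axis); the thesis's actual structure and placement in the detector are not specified in this abstract.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax over the given axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def non_local_fusion(x):
    """Simplified non-local operation.

    x: (N, C) array of N spatial positions with C-dim features.
    Each position attends to every position, so an occluded region
    can borrow context from visible parts of the scene.
    """
    attn = softmax(x @ x.T)   # (N, N) pairwise similarity weights
    return x + attn @ x       # residual context aggregation

feats = np.random.default_rng(0).normal(size=(6, 4))
out = non_local_fusion(feats)  # same shape, context-enriched features
```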
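The fusion stage of Part Ⅱ can be sketched as the standard information-weighted combination used in the master stage of a federated Kalman filter: each sub-filter's estimate is weighted by its inverse covariance. The state layout and the illustrative numbers below are assumptions, not values from the thesis.

```python
import numpy as np

def fuse_estimates(x1, P1, x2, P2):
    """Information-weighted fusion of two sub-filter estimates
    (master stage of a federated Kalman filter)."""
    I1 = np.linalg.inv(P1)          # information matrix of sub-filter 1
    I2 = np.linalg.inv(P2)          # information matrix of sub-filter 2
    P_f = np.linalg.inv(I1 + I2)    # fused covariance
    x_f = P_f @ (I1 @ x1 + I2 @ x2) # fused state estimate
    return x_f, P_f

# Hypothetical 2-D position estimates from a vision sub-filter and a
# millimeter-wave radar sub-filter (illustrative numbers only).
x_vis, P_vis = np.array([10.0, 5.0]), np.diag([0.5, 0.5])
x_rad, P_rad = np.array([10.4, 4.8]), np.diag([0.1, 0.1])

x_f, P_f = fuse_estimates(x_vis, P_vis, x_rad, P_rad)
# The fused estimate is pulled toward the lower-covariance radar estimate.
```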
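The matching-index optimization in Part Ⅲ can be illustrated by combining two distance terms into one association cost and assigning detections to tracks. The weighted-sum form, the weight `alpha`, the greedy assignment, and all numbers below are illustrative assumptions; the thesis's actual appearance and population distribution indexes are not detailed in this abstract.

```python
import numpy as np

def association_cost(app_dist, motion_dist, alpha=0.6):
    """Hypothetical combined matching cost: weighted sum of an
    appearance distance and a motion/position distance.
    alpha is an illustrative weight, not a value from the thesis."""
    return alpha * app_dist + (1.0 - alpha) * motion_dist

def greedy_match(cost, max_cost=0.5):
    """Greedy track-to-detection assignment on a cost matrix:
    take pairs in order of increasing cost, skipping used rows/cols
    and pairs above the gating threshold max_cost."""
    matches, used_r, used_c = [], set(), set()
    pairs = ((r, c) for r in range(cost.shape[0]) for c in range(cost.shape[1]))
    for r, c in sorted(pairs, key=lambda rc: cost[rc]):
        if r not in used_r and c not in used_c and cost[r, c] <= max_cost:
            matches.append((r, c))
            used_r.add(r)
            used_c.add(c)
    return matches

# Two tracks vs. three detections (illustrative distances).
app = np.array([[0.1, 0.8, 0.9],
                [0.7, 0.2, 0.9]])
mot = np.array([[0.2, 0.9, 0.8],
                [0.8, 0.1, 0.9]])
cost = association_cost(app, mot)
print(greedy_match(cost))  # → [(0, 0), (1, 1)]; detection 2 stays unmatched
```

An unmatched detection would spawn a new track, and an unmatched track would be carried forward, which is the kind of life-cycle handling the abstract refers to for occluded targets.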