With the continuous development of intelligent transportation, the demand for accurate and reliable road sensing systems has grown steadily. Road sensing systems that rely on a single vision sensor have proven to have many limitations: passive vision sensors are vulnerable to interference from light intensity and shadow occlusion, so a remedy is urgently needed. As an active sensor with strong anti-interference ability, LiDAR provides stable three-dimensional point cloud information. Combining an industrial camera with LiDAR can effectively avoid the data defects of a single sensor, which has made road sensing algorithms based on camera-LiDAR data fusion a research hotspot. However, the data structures collected by the two sensors differ considerably, and pavement characteristics are expressed differently in the two types of data. To address these two problems, this paper designs fusion schemes at the data layer and the feature layer to realize effective fusion of cross-modal data and to improve the detection accuracy of the sensing algorithm. The main research contents and contributions of this paper are as follows:

⑴ A LiDAR point cloud data conversion method based on weighted height difference is proposed. Previous methods simply convert 3D point cloud data into 2D images such as depth maps or height maps. Taking advantage of the consistent height variation of flat road areas, this paper extracts a height difference map from the height changes in the LiDAR point cloud data, and forms a weighted height difference map by adding a neighborhood point distance constraint and a road boundary point constraint. This method converts three-dimensional point cloud data into two-dimensional weighted height difference map data, preserves the characteristics of the pavement area, enhances the characteristics of the pavement boundary, and realizes data-level fusion of multi-modal data.

⑵ A LiDAR-camera feature adaptive fusion method is proposed. To tackle the fusion difficulties caused by the different feature representations of different data sources, this method introduces a feature adaptive fusion module into the encoder of the semantic segmentation network. The module consists mainly of an adaptive feature conversion network and a multi-channel feature weighted cascade network; it achieves feature-level fusion of multi-modal data by linearly transforming the LiDAR features and fusing them with the visual image features at multiple layers.

⑶ A joint sensing platform with an industrial camera and a LiDAR sensor is built. Single-sensor correction and joint calibration are performed to complete cross-modal spatial calibration. Tests on campus roads show that the two data fusion schemes designed in this paper improve the effectiveness and reliability of road perception; comparative experiments on public datasets show that the method provides more accurate pavement perception and road extraction results.
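The height-difference conversion in ⑴ can be illustrated with a minimal sketch. The abstract does not give the exact formulas, so the rasterization grid, the max-min height statistic per cell, and the Gaussian distance weighting below are all illustrative assumptions; the thesis's actual neighborhood and boundary constraints may differ.

```python
import numpy as np

def weighted_height_difference_map(points, grid_size=0.1, shape=(200, 200),
                                   distance_sigma=20.0):
    """Illustrative sketch: rasterize LiDAR points (x, y, z) into a BEV grid,
    take the per-cell max-min height difference, and down-weight far cells.
    The distance weighting stands in for the paper's point distance constraint."""
    hmax = np.full(shape, -np.inf)
    hmin = np.full(shape, np.inf)
    # map each point to a grid cell (assumes x, y >= 0 for simplicity)
    ix = np.clip((points[:, 0] / grid_size).astype(int), 0, shape[0] - 1)
    iy = np.clip((points[:, 1] / grid_size).astype(int), 0, shape[1] - 1)
    np.maximum.at(hmax, (ix, iy), points[:, 2])
    np.minimum.at(hmin, (ix, iy), points[:, 2])
    # flat road cells give near-zero difference; curbs and boundaries stand out
    diff = np.where(np.isfinite(hmax) & np.isfinite(hmin), hmax - hmin, 0.0)
    # distance constraint: cells far from the sensor contribute less
    gx, gy = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]), indexing="ij")
    dist = np.hypot(gx, gy) * grid_size
    weight = np.exp(-(dist / distance_sigma) ** 2)
    return diff * weight
```

In this form a flat pavement cell yields a value near zero while a curb cell, where low and high returns coexist, yields a large weighted difference, which is the boundary-enhancing behavior the method relies on.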
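The feature adaptive fusion module in ⑵ can likewise be sketched in simplified form. The per-channel affine map standing in for the adaptive feature conversion network and the softmax channel weights standing in for the weighted cascade are assumptions for illustration; in the actual network these parameters would be learned and applied at multiple encoder layers.

```python
import numpy as np

def adaptive_fuse(img_feat, lidar_feat, W, b, channel_logits):
    """Illustrative sketch of one fusion layer: linearly transform LiDAR
    features into the image feature space, then combine the two streams
    with per-channel softmax weights.
    img_feat, lidar_feat: (C, H, W); W, b: (C,); channel_logits: (2, C)."""
    # adaptive feature conversion: per-channel linear map of LiDAR features
    lidar_aligned = lidar_feat * W[:, None, None] + b[:, None, None]
    # multi-channel weighted cascade: softmax weights over the two modalities
    w = np.exp(channel_logits) / np.exp(channel_logits).sum(axis=0, keepdims=True)
    return w[0][:, None, None] * img_feat + w[1][:, None, None] * lidar_aligned
```

With equal logits the module averages the two modalities; learned logits let each channel lean on whichever modality is more reliable for that feature.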
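The cross-modal spatial calibration in ⑶ rests on the standard pinhole projection of LiDAR points into the image plane. The sketch below assumes calibrated extrinsics (R, t) and an intrinsic matrix K are already available from the joint calibration step; distortion correction is omitted.

```python
import numpy as np

def project_lidar_to_image(points, R, t, K):
    """Illustrative sketch: map LiDAR points (N, 3) into the camera frame
    with extrinsics (R, t), then project with the pinhole intrinsics K."""
    cam = points @ R.T + t            # (N, 3) in camera coordinates
    in_front = cam[:, 2] > 0          # keep only points in front of the camera
    cam = cam[in_front]
    uv = cam @ K.T                    # homogeneous pixel coordinates
    uv = uv[:, :2] / uv[:, 2:3]       # perspective divide
    return uv, in_front
```

This projection is what allows a LiDAR-derived map and the camera image to be sampled at corresponding pixels during both data-level and feature-level fusion.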