Drivable area segmentation is a key module in autonomous driving systems. It segments the region in which the vehicle can travel from images of the road scene, providing essential information to the path planning and control modules. Given the demands of practical deployment, a drivable area segmentation algorithm must deliver both high accuracy and real-time running speed. With the development of deep learning, drivable area segmentation algorithms based on deep convolutional neural networks have achieved breakthrough performance. Some studies use data from multiple modalities as input to further improve segmentation performance. However, these methods do not deeply mine the characteristics of the different modalities, so they cannot effectively eliminate erroneous encodings of road areas in individual modalities, which limits the reliability of the segmentation results.

This paper proposes a lightweight drivable area segmentation network based on trusted multimodal fusion. It abandons the cross-modal feature fusion used in existing methods and instead adopts trusted fusion at the result level, which eliminates segmentation errors caused by erroneous road-region encodings in individual modalities, while the lightweight architecture designed in this paper lets the network run in real time without sacrificing accuracy. To reduce computation while improving reliability, this paper designs an extremely lightweight fully convolutional neural network to extract road features from RGB and depth images of the scene, together with a multi-scale evidence collection module that computes an evidence map for each modality so that the class of every pixel can be judged reliably. Beyond the lightweight feature extractor, efficiency and computational complexity are considered in the design of every other module of the network.

To fully fuse the two modalities' representations of the road and classify the pixels of the image, this paper designs a trusted multimodal drivable area fusion method based on subjective logic. Starting from the evidence maps of the two modalities, a trusted multimodal fusion module combines the results of the two branches, using the uncertainty of each modality as a weight; this effectively prevents erroneous road-region encodings in single-modality data from corrupting the segmentation result. In addition, this paper designs a multi-GPU parallel inference deployment scheme that exploits the multi-branch structure of the network, greatly improving running speed on real computing platforms. The method is evaluated quantitatively and qualitatively on two international public datasets, KITTI and Cityscapes. Experimental results show that, compared with other algorithms, the proposed method achieves both higher accuracy and faster running speed.
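The result-level trusted fusion described above follows the standard subjective-logic recipe used in evidential deep learning: per-pixel non-negative evidence defines a Dirichlet distribution, from which a belief mass per class and an overall uncertainty are derived, and the two modalities' opinions are then combined with a reduced Dempster's rule in which each branch's uncertainty acts as a weight. The abstract does not give the paper's exact formulas, so the sketch below (NumPy, with hypothetical function names) only illustrates one common formulation under those assumptions:

```python
import numpy as np

def opinion_from_evidence(evidence):
    """Turn per-pixel evidence e_k >= 0 over K classes into a subjective-
    logic opinion: Dirichlet alpha = e + 1, S = sum(alpha),
    belief b_k = e_k / S, uncertainty u = K / S (so sum(b) + u = 1)."""
    K = evidence.shape[-1]
    S = (evidence + 1.0).sum(axis=-1, keepdims=True)
    belief = evidence / S
    uncertainty = K / S[..., 0]
    return belief, uncertainty

def fuse_opinions(b1, u1, b2, u2):
    """Reduced Dempster's combination of two opinions:
    conflict C = sum_{i != j} b1_i * b2_j,
    b_k = (b1_k b2_k + b1_k u2 + b2_k u1) / (1 - C),
    u   = u1 u2 / (1 - C).
    A confident (low-u) branch dominates an uncertain one."""
    conflict = b1.sum(-1) * b2.sum(-1) - (b1 * b2).sum(-1)
    norm = 1.0 - conflict
    b = (b1 * b2 + b1 * u2[..., None] + b2 * u1[..., None]) / norm[..., None]
    u = (u1 * u2) / norm
    return b, u

# Toy example: one pixel, two classes (road / not-road).
rgb_evidence = np.array([[8.0, 1.0]])    # RGB branch: confident "road"
depth_evidence = np.array([[2.0, 2.0]])  # depth branch: uncertain
b1, u1 = opinion_from_evidence(rgb_evidence)
b2, u2 = opinion_from_evidence(depth_evidence)
b, u = fuse_opinions(b1, u1, b2, u2)
# Fused belief leans toward "road" (b[0, 0] = 0.75) with lower
# uncertainty (u[0] = 1/12) than either branch alone.
```

Because the uncertain depth branch contributes little belief mass, it cannot overturn the confident RGB prediction; this is how erroneous road-region encodings in one modality are prevented from corrupting the fused result.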