Autonomous driving is one of the most prominent trends in the automotive industry. Accurate and reliable positioning information is of great significance for improving the active safety of intelligent vehicles and realizing autonomous driving. However, GNSS positioning alone cannot meet the positioning needs of autonomous vehicles in all scenarios, and multi-sensor fusion positioning is an inevitable trend for achieving high-precision positioning of autonomous vehicles. This thesis takes binocular visual odometry as its research object and proposes a binocular visual odometry method combined with semantic information to improve the accuracy and stability of visual odometry in dynamic traffic scenes. The estimation results of the visual odometry are fused with the GNSS positioning results; when GNSS positioning fails, the visual odometry estimates are used to continue global positioning. The main work of the thesis is as follows:

A binocular visual odometry framework combined with semantic information is proposed. The YOLACT instance segmentation network provides semantic labels for the pixels in each image and divides the image into foreground and background regions, so that only static feature points extracted from the background region are used to estimate the vehicle's ego-motion. Experimental results show that the accuracy of the proposed algorithm is significantly better than that of ORB-SLAM2 and VINS-Mono in dynamic traffic scenarios.

To improve the robustness and generalization ability of YOLACT in real road scenes, a local instance segmentation dataset containing 900 traffic-scene images and 6 classes of common traffic participants was established to train the network weights. In addition, to improve the speed of the algorithm, the deep learning acceleration tool TensorRT is used to accelerate the inference process of the network, which reduces the inference time by 28.7% and improves the real-time performance of the network. To improve the overall performance of the visual odometry, the instance segmentation network is integrated with the front end of ORB-SLAM2. On this basis, the global coordinates and orientation of the camera are obtained through online calibration.

To improve the final positioning performance, an anomaly detection strategy for GNSS positioning results is proposed. When GNSS positioning is normal, a Kalman filter is used to fuse the GNSS positioning results with those of the visual odometry; when GNSS positioning fails, the visual odometry alone is used to continue global positioning.

Finally, a binocular vision experiment platform was built to verify the proposed algorithm in a variety of common traffic scenarios. Experimental results show that the proposed algorithm can effectively eliminate the interference of moving targets in complex traffic environments and, when GNSS positioning fails, continues to provide global positioning, showing good accuracy and stability and promising application prospects in autonomous driving.
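
To make the semantic filtering step concrete, the sketch below shows one way the foreground/background split could be used to discard feature points that fall on potentially dynamic objects before pose estimation. The class list, mask format, and helper functions are illustrative assumptions, not the thesis implementation.

# Minimal sketch, assuming YOLACT-style per-instance binary masks and
# OpenCV ORB keypoints; class names and thresholds are illustrative.
import cv2
import numpy as np

DYNAMIC_CLASSES = {"car", "bus", "truck", "pedestrian", "cyclist", "motorcycle"}  # assumed labels

def build_static_mask(instances, image_shape):
    """Combine instance masks of dynamic classes into one foreground mask."""
    foreground = np.zeros(image_shape[:2], dtype=np.uint8)
    for inst in instances:                      # each inst: {"class_name": str, "mask": HxW array}
        if inst["class_name"] in DYNAMIC_CLASSES:
            foreground |= inst["mask"].astype(np.uint8)
    return foreground == 0                      # True where the scene is assumed static

def filter_keypoints(keypoints, descriptors, static_mask):
    """Keep only keypoints whose pixel location lies in the static background."""
    kept_kp, kept_desc = [], []
    for kp, desc in zip(keypoints, descriptors):
        u, v = int(round(kp.pt[0])), int(round(kp.pt[1]))
        if static_mask[v, u]:
            kept_kp.append(kp)
            kept_desc.append(desc)
    return kept_kp, np.array(kept_desc)

# Assumed usage: detect ORB features, then mask out dynamic regions.
# orb = cv2.ORB_create(2000)
# keypoints, descriptors = orb.detectAndCompute(gray_image, None)
# static_mask = build_static_mask(yolact_instances, gray_image.shape)
# keypoints, descriptors = filter_keypoints(keypoints, descriptors, static_mask)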
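
The abstract reports a 28.7% reduction in inference time from TensorRT but does not describe the conversion workflow. The following sketch shows one common path (PyTorch export to ONNX, then an engine build with trtexec); the file names, input size, and settings are assumptions for illustration.

# Illustrative conversion path only; not the thesis's documented tooling.
import torch

def export_to_onnx(model, onnx_path="yolact.onnx", input_size=(1, 3, 550, 550)):
    """Export a PyTorch model to ONNX as a first step toward a TensorRT engine."""
    model.eval()
    dummy = torch.randn(*input_size)            # 550x550 is YOLACT's default input resolution
    torch.onnx.export(model, dummy, onnx_path, opset_version=11,
                      input_names=["image"], output_names=["outputs"])

# The ONNX graph can then be compiled into a TensorRT engine, for example with
# the trtexec command-line tool (FP16 shown as one possible optimization):
#   trtexec --onnx=yolact.onnx --saveEngine=yolact_fp16.engine --fp16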
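
The GNSS anomaly detection and Kalman-filter fusion could follow a scheme along the lines of the minimal sketch below, in which visual-odometry displacements drive the prediction step and GNSS fixes are accepted only when their innovation passes a gate. The state model, noise parameters, and gate threshold are assumptions, not the filter design used in the thesis.

# Minimal sketch: linear Kalman filter on planar position, predicted with the
# VO displacement and corrected with gated GNSS fixes. All values are assumed.
import numpy as np

class GnssVoFusion:
    def __init__(self, q=0.05, r=1.5, gate=9.21):    # assumed noise and gate values
        self.x = np.zeros(2)                          # planar position [east, north]
        self.P = np.eye(2) * 10.0
        self.Q = np.eye(2) * q                        # VO process noise
        self.R = np.eye(2) * r                        # GNSS measurement noise
        self.gate = gate                              # chi-square gate, 2 dof, ~99%

    def predict(self, vo_delta):
        """Propagate with the displacement estimated by the visual odometry."""
        self.x = self.x + vo_delta
        self.P = self.P + self.Q

    def update(self, gnss_pos):
        """Fuse a GNSS fix only if its innovation passes the anomaly gate."""
        y = gnss_pos - self.x                         # innovation
        S = self.P + self.R
        d2 = y @ np.linalg.solve(S, y)                # squared Mahalanobis distance
        if d2 > self.gate:
            return False                              # treat the GNSS fix as anomalous/failed
        K = self.P @ np.linalg.inv(S)                 # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K) @ self.P
        return True

When update() rejects consecutive fixes, the filter simply keeps integrating VO displacements, which mirrors the abstract's description of continuing global positioning during GNSS failure.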