
Localization And Reconstruction Based On Multi-sensors In Complex Dynamic Scenes

Posted on: 2023-10-11 | Degree: Doctor | Type: Dissertation
Country: China | Candidate: X D Lv | Full Text: PDF
GTID: 1528306839477584 | Subject: Instrument Science and Technology
Abstract/Summary:
Robot autonomous navigation technology has advanced rapidly on the back of breakthroughs in machine learning algorithms and the explosive growth of computing power in intelligent hardware. Unmanned mobile robots equipped with intelligent autonomous navigation have been deployed in many fields to replace manual operations with uncrewed ones. Relying on scene perception, localization, scene reconstruction, and path planning, multi-sensor unmanned mobile robots can autonomously navigate complex indoor and outdoor scenes. Localization and mapping are the core technologies of autonomous navigation: by constructing a precise map of the surrounding scene to support path planning, unmanned robots can perform diverse tasks intelligently and automatically. During operation, the complexity and variability of the working environment must be fully considered: a lack of texture, violent robot motion, moving objects in dynamic scenes, drastic illumination changes, and sensor degradation all reduce the accuracy of localization and mapping. Aiming at robust localization and scene reconstruction for mobile robots in complex dynamic scenes, this thesis conducts in-depth research on and experimental analysis of the scientific and technical problems involved. The main contents and results are as follows:

First, we propose a LiDAR-camera extrinsic calibration method based on 3D-2D match learning, which estimates the extrinsic parameters from learned 3D-2D matches between the LiDAR point clouds and the camera image. To describe the 3D-2D matches, we define the concept of calibration flow and design a two-branch convolutional neural network to predict it. After establishing the 3D-2D match set from the predicted calibration flow, the LiDAR-camera extrinsic parameters are estimated with the EPnP algorithm under a RANSAC strategy. The proposed calibration method is verified and evaluated on different datasets, demonstrating high-precision estimation in real time. Experimental results show that it improves calibration accuracy by more than 5 times compared with other deep-learning-based LiDAR-camera extrinsic calibration methods.

Second, we propose a moving-object estimation method that considers both the geometric and semantic information of objects in adjacent frames. Rigid flow, synthesized from depth and pose, is compared against the optical flow to obtain motion regions, which distinguish static from moving areas in the image. On this basis, semantic information is introduced to correct inaccurate and uneven motion-region estimates and thereby improve estimation accuracy. To verify the effectiveness of the proposed method, the moving-object estimation algorithm is packaged as an independent module and added to a visual SLAM system to achieve stable localization in dynamic scenes. Experimental results show that the predicted motion segmentation mask achieves 73.80% IoU (Intersection over Union) on the KITTI dataset, and the prediction accuracy of the proposed method is 16.67% higher than that of other methods.

Third, we propose a multi-sensor fusion-based localization and scene reconstruction method for complex dynamic scenes. Multi-level fusion is implemented by fusing data collected from different sensors in different system modules. In the front-end, the camera and the LiDAR assist each other: the LiDAR point clouds provide 3D information for the image feature points, while the image-based moving-object elimination removes points on moving objects from the LiDAR point clouds, improving localization accuracy and enabling static 3D scene reconstruction. To further improve localization accuracy, visual loop closure detection and LiDAR loop closure detection are combined to ensure the global consistency of scene reconstruction. In the back-end, the observation models of the different sensors are integrated into a multi-constraint factor graph and solved by nonlinear optimization to obtain the optimal system states. Experimental results demonstrate that the proposed algorithm operates robustly in multiple complex dynamic scenes. The absolute localization error is 1.088 m in the outdoor urban canyon scene of the UrbanNav dataset (sequence length 3641.818 m), and the localization accuracy is 42.86% higher than that of LIO-SAM.

Finally, a multi-sensor robot platform is designed to verify the effectiveness of the proposed localization and scene reconstruction method for complex dynamic scenes in real environments. We develop the platform's software and hardware and carry out accurate intrinsic and extrinsic calibration of the multi-sensors. Multi-sensor data are collected on the campus of Harbin Institute of Technology, and quantitative and qualitative experimental analysis demonstrates the effectiveness of the proposed method in real scenes. In addition, by fusing LiDAR and camera data, semantic information is added to the map, yielding a richer semantic map that further assists the robots' autonomous navigation.
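The first contribution pairs EPnP with a RANSAC strategy to reject bad 3D-2D matches built from the predicted calibration flow. As an illustrative sketch only (not the thesis implementation), the pure-Python loop below shows that robust-selection logic with a 2D translation standing in for the EPnP pose model; the function name, thresholds, and match format are all assumptions for the toy example.

```python
import random

def ransac_translation(matches, iters=200, inlier_thresh=2.0, seed=0):
    """Toy RANSAC loop: estimate a 2D translation from noisy point matches.

    In the thesis the model inside the loop is the full EPnP pose solved
    from 3D-2D matches; here a 2D translation keeps the selection logic
    visible.  `matches` is a list of ((x, y), (u, v)) correspondence pairs,
    as would be produced by applying the calibration flow to pixels.
    """
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (x, y), (u, v) = rng.choice(matches)   # minimal sample (1 match)
        dx, dy = u - x, v - y                  # candidate model
        inliers = [m for m in matches
                   if abs((m[1][0] - m[0][0]) - dx) < inlier_thresh
                   and abs((m[1][1] - m[0][1]) - dy) < inlier_thresh]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Refit on all inliers: least squares for a translation is the mean.
    n = len(best_inliers)
    dx = sum(u - x for (x, _), (u, _) in best_inliers) / n
    dy = sum(v - y for (_, y), (_, v) in best_inliers) / n
    return (dx, dy), best_inliers
```

With 20 consistent matches plus a couple of gross outliers, the loop recovers the translation exactly and the outliers never enter the final fit, which is the behaviour EPnP-under-RANSAC relies on at full pose scale.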
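The rigid-flow test in the second contribution can be sketched as follows. Assuming a pinhole camera and, for brevity, a translation-only inter-frame pose (the thesis uses the full pose), the flow a static pixel must obey is obtained by back-projecting with depth, applying the camera motion, and re-projecting; a pixel whose observed optical flow deviates from this rigid flow beyond a threshold is flagged as moving. All intrinsics and thresholds below are illustrative assumptions.

```python
import math

def backproject(u, v, d, fx, fy, cx, cy):
    """Pixel (u, v) with depth d -> 3D point in the camera frame."""
    return [(u - cx) * d / fx, (v - cy) * d / fy, d]

def project(p, fx, fy, cx, cy):
    """3D camera-frame point -> pixel coordinates."""
    x, y, z = p
    return (fx * x / z + cx, fy * y / z + cy)

def rigid_flow(u, v, d, t, intr):
    """Flow a *static* pixel must exhibit under camera translation t alone."""
    fx, fy, cx, cy = intr
    p = backproject(u, v, d, fx, fy, cx, cy)
    q = [p[0] + t[0], p[1] + t[1], p[2] + t[2]]
    u2, v2 = project(q, fx, fy, cx, cy)
    return (u2 - u, v2 - v)

def is_moving(optical, rigid, thresh=1.5):
    """Flag a pixel as moving if optical flow deviates from rigid flow."""
    return math.hypot(optical[0] - rigid[0], optical[1] - rigid[1]) > thresh
```

A static pixel's optical flow matches the synthesized rigid flow, while a pixel on an independently moving object does not; in the thesis these per-pixel decisions are then regularized with semantic information to obtain clean motion masks.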
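The IoU metric behind the 73.80% figure reported on KITTI is, for binary masks, intersection over union of the predicted and ground-truth moving pixels; a minimal version over flat 0/1 lists:

```python
def mask_iou(pred, gt):
    """Intersection over Union of two binary masks given as flat 0/1 lists."""
    inter = sum(1 for p, g in zip(pred, gt) if p and g)
    union = sum(1 for p, g in zip(pred, gt) if p or g)
    return inter / union if union else 1.0  # two empty masks agree fully
```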
Keywords/Search Tags:Complex Dynamic Scenes, Multi-sensor fusion, Simultaneous localization and mapping, LiDAR-camera calibration