Autonomous mobile robots can perform tasks flexibly in complicated environments and have great application potential in manufacturing, warehousing, logistics, and other fields. Map construction and localization using onboard sensors such as Lidar or cameras are prerequisites for autonomous operation. However, mapping based only on 2D-Lidar or camera data loses key environmental information, making it difficult to meet the robustness and accuracy requirements of localization. It is therefore necessary to use an accurately calibrated multi-sensor system to collect multi-source heterogeneous data for mapping and localization. Addressing the problems that the extrinsic calibration process is complicated and inaccurate, that the information integrated into the prior map is insufficient, and that localization in complex environments is neither robust nor accurate, this paper systematically studies three key technologies: extrinsic calibration of 2D-Lidar and camera, hybrid map construction based on multi-source heterogeneous data, and localization based on fused data. The specific contents and innovations of this thesis are as follows:

To address the small field of view and insufficient observation capability of a single Lidar or camera, this paper proposes a data-driven method for extrinsic calibration of a 2D-Lidar sensor system to expand the field of view, and proposes the use of an arbitrary trihedron and plane for extrinsic calibration between a 2D-Lidar and a camera. For the 2D-Lidar sensor system, the extrinsic parameters are obtained by global optimization over a factor graph that combines pose constraints with scan data associations, which mitigates the influence of trajectory pose nodes on the data-driven calibration and improves calibration accuracy. For 2D-Lidar and camera calibration, key features of the trihedron and planes, such as points, lines, and planes, are used to construct data associations; an initial value is then obtained by an analytical method and refined by optimization to improve accuracy. The proposed methods can accurately calibrate a 2D-Lidar and camera without any meticulously designed special calibration objects, laying the foundation for data fusion.

To address the insufficient information of a single-type prior map in complex environments, this paper proposes a hybrid map construction method based on 2D-Lidar and camera data. The method divides the global map into local submaps and uses a heuristic data fusion strategy to improve the accuracy of scan and image matching. In each submap, key features and their corresponding descriptors are extracted from reference images, and visual feature submaps are constructed by local optimization; meanwhile, nonlinear optimization further refines the scan poses to construct a metric submap. This process synchronously integrates the heterogeneous information into a hybrid submap. All submaps are then used to build a pose graph, and a multi-level loop closure detection strategy is proposed to find loop constraints. Finally, the hybrid map is obtained by global optimization. The fusion of heterogeneous data further improves mapping accuracy, and the hybrid map provides rich prior information for navigation.

To address the low success rate of global localization and the insufficient accuracy of pose tracking based on a single sensor type in complex environments, this paper proposes a multi-level method that improves the success rate and efficiency of global localization, and fuses Lidar and camera data to improve pose tracking accuracy. For global localization, the visual descriptors in an offline database are queried with the current image to search for candidate frames, and a candidate pose is computed from the visual matching results. The pose is then validated against likelihood fields to obtain the global pose. This method achieves a success rate of over 95% in complex environments. For pose tracking, a robust pose prediction method based on redundant motion estimation is proposed to improve the robustness of the particle state transition. A tightly coupled method is then proposed to fuse Lidar and visual data to correct the resampled poses, and pose tracking accuracy is further improved by optimization. The proposed method has been validated to achieve a localization accuracy of 5 mm / 0.2 deg.

Finally, the three key technologies are validated in a complex workshop environment. Experimental results show that the proposed calibration method accurately recovers the extrinsic parameters, and that the mapping and localization methods based on fused Lidar and visual data perform excellently, providing technical support for the application of autonomous mobile robots in complicated scenes.
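The extrinsic calibration of the 2D-Lidar sensor system rests on minimizing residuals of associated scan data over the unknown extrinsic transform. The following is a minimal planar sketch of that idea, not the thesis' actual factor-graph formulation: it omits the trajectory pose nodes and simply recovers a 2D extrinsic (x, y, theta) between two Lidars from simulated point associations by least-squares; all data and parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def se2_mat(x, y, theta):
    """Homogeneous transform for a planar pose (x, y, theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

# Ground-truth extrinsic from Lidar B's frame to Lidar A's frame
# (the quantity calibration must recover; values are made up).
T_true = se2_mat(0.30, -0.10, 0.15)

# Simulated landmark points seen by Lidar A, and the same points
# expressed in Lidar B's frame (i.e. the scan data associations).
rng = np.random.default_rng(0)
pts_a = rng.uniform(-5.0, 5.0, size=(40, 2))
pts_a_h = np.hstack([pts_a, np.ones((40, 1))])
pts_b = (np.linalg.inv(T_true) @ pts_a_h.T).T[:, :2]

def residuals(params):
    """Point-to-point residuals of the associated scan points; in the
    thesis this data term would enter a factor graph together with
    trajectory pose constraints."""
    T = se2_mat(*params)
    pts_b_h = np.hstack([pts_b, np.ones((len(pts_b), 1))])
    mapped = (T @ pts_b_h.T).T[:, :2]
    return (mapped - pts_a).ravel()

sol = least_squares(residuals, x0=np.zeros(3))
print(sol.x)  # recovers approximately (0.30, -0.10, 0.15)
```

With noiseless associations the optimizer recovers the extrinsic essentially exactly; with real scan data the same residual structure is simply weighted and stacked with the pose constraints.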
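The hybrid mapping stage builds a pose graph over submaps, adds loop-closure constraints, and refines all submap poses by global optimization. The sketch below illustrates that backend step only, under simplifying assumptions: four 2D submap poses around a square, ideal relative-pose measurements for the odometry edges plus one loop-closure edge, and a generic least-squares solver in place of a dedicated graph-optimization library.

```python
import numpy as np
from scipy.optimize import least_squares

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def relative(p_i, p_j):
    """Pose of node j expressed in the frame of node i (x, y, theta)."""
    dx, dy = p_j[:2] - p_i[:2]
    c, s = np.cos(p_i[2]), np.sin(p_i[2])
    return np.array([c * dx + s * dy, -s * dx + c * dy, wrap(p_j[2] - p_i[2])])

# Sequential submap edges plus one loop-closure edge (3 -> 0), as the
# multi-level loop detection stage would provide.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
meas = [np.array([2.0, 0.0, np.pi / 2])] * 4  # ideal 2 m legs, 90 deg turns

# Drifted initial poses, as accumulated odometry would give them.
init = np.array([[0.0, 0.0, 0.0],
                 [2.1, 0.1, np.pi / 2 + 0.05],
                 [2.2, 2.1, np.pi + 0.10],
                 [0.2, 2.3, -np.pi / 2 + 0.15]])

def residuals(flat):
    poses = flat.reshape(-1, 3)
    res = [poses[0] - init[0]]  # anchor the first submap (gauge freedom)
    for (i, j), z in zip(edges, meas):
        r = relative(poses[i], poses[j]) - z
        r[2] = wrap(r[2])
        res.append(r)
    return np.concatenate(res)

sol = least_squares(residuals, init.ravel())
optimized = sol.x.reshape(-1, 3)  # drift-corrected submap poses
```

The loop edge is what pulls the drifted chain back into a consistent square; without it, the anchored chain would simply keep the accumulated odometry error.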
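In the global localization pipeline, the visually estimated candidate pose is validated against likelihood fields before being accepted. A minimal sketch of that scoring step follows, with an entirely hypothetical toy map: a hand-built occupancy grid with a single wall, an assumed 0.05 m resolution, and an assumed Gaussian sensor sigma; the thesis' actual maps and parameters are not reproduced here.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Toy occupancy grid, grid[iy, ix]: 1 = occupied. One wall at ix = 20.
grid = np.zeros((40, 40))
grid[:, 20] = 1.0
RES = 0.05    # metres per cell (assumed)
SIGMA = 0.1   # sensor noise scale (assumed)

# Likelihood field: Gaussian of the distance to the nearest obstacle.
dist = distance_transform_edt(grid == 0) * RES
field = np.exp(-0.5 * (dist / SIGMA) ** 2)

def score(pose, endpoints):
    """Average likelihood of scan endpoints projected through a
    candidate pose (x, y, theta); a high score accepts the pose."""
    x, y, th = pose
    c, s = np.cos(th), np.sin(th)
    world = endpoints @ np.array([[c, s], [-s, c]]) + np.array([x, y])
    cells = np.clip(np.rint(world / RES).astype(int), 0,
                    np.array(grid.shape) - 1)
    return field[cells[:, 1], cells[:, 0]].mean()

# Beams that hit the wall (x = 1.0 m) as seen from the origin.
beams = np.column_stack([np.full(10, 1.0), np.linspace(0.2, 1.8, 10)])
good = score((0.0, 0.0, 0.0), beams)   # correct candidate: near 1.0
bad = score((0.30, 0.0, 0.0), beams)   # shifted candidate: near 0.0
```

A candidate pose from visual matching is kept only if its score clears a threshold, which is what filters out visually similar but geometrically wrong locations.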