Nowadays, with the development of artificial intelligence and computer vision, the robot industry has seen an unprecedented opportunity, and the autonomous inspection robot has likewise become a major development trend. To achieve autonomy, real-time localization and mapping are the key technologies. Therefore, this paper studied real-time localization and mapping of an inspection robot based on visual-inertial fusion. Experiments showed that the proposed SVI-SLAM method achieved high localization accuracy and good robustness, and could construct a 3D environment map. The research on the inspection robot mainly comprised the following contents:

To meet the localization and mapping needs of the inspection robot, this paper studied the hardware and software of the inspection robot platform, and the performance of the robot was evaluated by a self-excited vibration experiment. In terms of hardware, the actuator and hardware framework were built, the sensors were selected, and the platform performance was tested; the experiments showed a maximum continuous climbing slope of 18°. In terms of software, this paper designed the data communication system of the inspection robot, and a BP-PID control algorithm based on particle swarm optimization was proposed to control the motors and improve the motion performance. In addition, the effect of mechanical vibration on the inertial measurement unit (IMU) was examined through the self-excited vibration experiment, which showed that IMU noise should be fully considered in the localization system. Furthermore, this paper designed the six-degree-of-freedom motion model of the inspection robot, and Lie groups and Lie algebras were used to represent the rotation component of the robot pose.

In view of the requirement for autonomous movement, the key technologies of simultaneous localization and mapping for the inspection robot were studied, and a localization and mapping system based on visual-inertial fusion was designed. The camera was intrinsically calibrated and its errors were analyzed, and the image-border margin to be discarded during tracking was determined. The time drift between the sensors was studied on the basis of the extrinsic calibration of the camera and the IMU. This paper analyzed the performance of different image features and proposed a mesh segmentation method for feature extraction, which improved the robustness of tracking in low-texture and repetitive-texture scenes. Moreover, the photometric-invariance assumption of optical flow was improved and a multi-layer image pyramid was used to strengthen system robustness. According to the motion characteristics of the inspection robot, the keyframe selection strategy was studied. The K-means++ method was used to train the loop-closure bag-of-words model; once a loop closure was confirmed, global bundle adjustment was applied for pose optimization to improve the localization accuracy.

This paper also carried out outdoor image dehazing using the dark channel method. The original image was downsampled to speed up the estimation of the transmission map. Additionally, this paper proposed the RGF-Dehaze method, which distinguishes foggy images from fog-free images according to the estimated transmission. Compared with GF-Dehaze and NC-Dehaze, the proposed RGF-Dehaze method saved considerable computing resources.
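As a minimal sketch of the dark channel dehazing step described above (the function names and parameters below are illustrative, not the thesis implementation), the following Python code follows the standard dark channel prior pipeline and estimates the transmission on a downsampled copy of the image, which is where the speed gain from downsampling comes from:

```python
import cv2
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over the color channels, then a minimum filter over a local patch."""
    min_rgb = np.min(img, axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)

def estimate_atmospheric_light(img, dark, top_ratio=0.001):
    """Average the image colors at the brightest fraction of dark-channel pixels."""
    n = max(1, int(dark.size * top_ratio))
    idx = np.argsort(dark.ravel())[-n:]
    return img.reshape(-1, 3)[idx].mean(axis=0)

def estimate_transmission(img, A, scale=0.25, omega=0.95, patch=15):
    """Estimate the transmission map on a downsampled image for speed, then upsample it."""
    small = cv2.resize(img, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
    norm = small / np.maximum(A, 1e-6)
    t_small = 1.0 - omega * dark_channel(norm, patch)
    return cv2.resize(t_small, (img.shape[1], img.shape[0]), interpolation=cv2.INTER_LINEAR)

def dehaze(img_bgr, t_min=0.1):
    """Recover the scene radiance J from the haze model I = J*t + A*(1 - t)."""
    img = img_bgr.astype(np.float32) / 255.0
    dark = dark_channel(img)
    A = estimate_atmospheric_light(img, dark)
    t = np.clip(estimate_transmission(img, A), t_min, 1.0)[..., None]
    J = (img - A) / t + A
    return np.clip(J * 255, 0, 255).astype(np.uint8)
```

The sketch omits the guided-filter refinement of the transmission map used by GF-style variants; it only illustrates the basic prior and the downsampled transmission estimate.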
Moreover, the camera pose, velocity, and IMU biases were used to construct the state equation for pose estimation. The pose estimation problem was converted into a least-squares problem, and a factor graph was used to optimize the system; the pre-integration method was adopted to process the IMU data. Localization accuracy experiments on the EuRoC datasets showed that the proposed SVI-SLAM method achieved good results and high accuracy.

Considering the speed of the inspection robot and the limitations of consumer-level RGB-D cameras, this paper presented a segment-tree method to recover depth from the stereo camera. Using indoor and outdoor images captured by the inspection robot together with the Middlebury stereo datasets, the proposed R-ST method was compared with SGBM, ST-1, and ST-2. The R-ST method recovered the depth map quickly and reliably, and generated semantic information about the surrounding environment. A 3D point cloud map was then constructed from the keyframe poses and depth maps, and a 3D map optimization algorithm was proposed to improve map accuracy. A map management method was also proposed, so that the constructed three-dimensional map could serve as a prior map for inspection robot localization. According to the ground plane equation, the ground part of the 3D point cloud map was extracted and removed, which reduced the map storage space. In addition, the 3D point cloud map could be converted into a 3D OctoMap and a 2D grid map for path planning.

Finally, this paper compared the proposed SVI-SLAM method with ORB-SLAM2 and OKVIS on the EuRoC datasets to evaluate its accuracy and performance. In offline experiments on eight EuRoC sequences, SVI-SLAM achieved the best performance among the three methods. In the outdoor experiments, the system performance was examined by analyzing data loading, feature extraction, optical flow tracking, and IMU bias estimation. In the outdoor localization experiment, the end-to-end error was only 1.375 m, corresponding to only 0.18% of the total trajectory length. In low-texture and repetitive-texture scenes, a complete and clear 3D point cloud map could be constructed in real time. A 3D motion capture system was used to evaluate the indoor localization accuracy of the inspection robot; in terms of the mean error, median error, root mean square error, and standard deviation, the localization accuracy reached the centimeter level.
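As a small illustration of how the four reported accuracy statistics can be computed (the function names below are hypothetical and not the thesis code), the following sketch evaluates the per-frame translation error between an estimated trajectory and a motion-capture ground truth, assuming the two trajectories have already been associated by timestamp and expressed in a common frame:

```python
import numpy as np

def translation_errors(estimated, ground_truth):
    """Per-frame Euclidean distance between matched estimated and ground-truth positions.

    Both inputs are (N, 3) arrays of x, y, z positions, already associated by
    timestamp and expressed in the same coordinate frame.
    """
    return np.linalg.norm(estimated - ground_truth, axis=1)

def error_statistics(errors):
    """Mean, median, RMSE, and standard deviation of the localization error."""
    return {
        "mean":   float(np.mean(errors)),
        "median": float(np.median(errors)),
        "rmse":   float(np.sqrt(np.mean(errors ** 2))),
        "std":    float(np.std(errors)),
    }

if __name__ == "__main__":
    # Toy example with synthetic data standing in for SLAM output and mocap ground truth.
    rng = np.random.default_rng(0)
    gt = np.cumsum(rng.normal(size=(500, 3)) * 0.01, axis=0)  # smooth ground-truth path
    est = gt + rng.normal(scale=0.02, size=gt.shape)          # noisy estimate
    print(error_statistics(translation_errors(est, gt)))
```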