In recent years, with the continuous development of the robotics industry, simultaneous localization and mapping (SLAM) has become a key technology: it provides a robot with an environment map and its own pose so that autonomous navigation becomes possible. Visual SLAM is one of the main research directions, but purely visual sensors are vulnerable to changing illumination and a lack of texture. This paper therefore addresses autonomous robot navigation with multi-sensor fusion based on a visual sensor and an inertial measurement unit (IMU). Point features and line features are both used in the visual front end to capture more of the geometric structure of the environment, while the high-rate IMU measures acceleration and angular velocity even in highly dynamic scenes, compensating for the weaknesses of the camera in those situations. The main contributions are as follows:

1) Most existing point-line SLAM systems extract line segments directly with the LSD algorithm. However, LSD was designed for structured scenes rather than for pose estimation: it returns a large number of redundant line features, which wastes computation and can degrade localization accuracy. To address this, an adaptive-threshold line segment extraction algorithm is proposed. A minimum segment length is first determined from the current image resolution and the number of segments already extracted; an adjacency matrix over the segments, built from their directions and positions, then decides whether each segment is merged with its neighbours or culled. Line feature matching under geometric constraints is also used to make line processing more efficient. A minimal sketch of this threshold selection and merge test is given below.
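The following Python sketch illustrates one plausible form of the adaptive length threshold and the segment adjacency matrix described above. It is only an illustration under assumed rules; the function names, the diagonal-based scaling, and the tolerances angle_tol and dist_tol are assumptions, not values taken from this paper.

```python
import numpy as np

def length_threshold(width, height, num_extracted,
                     base_ratio=0.02, growth=0.5):
    # Hypothetical adaptive rule: the minimum accepted segment length scales
    # with the image diagonal and grows as more segments have already been
    # extracted, so cluttered frames keep only the longer, more stable lines.
    diagonal = np.hypot(width, height)
    return base_ratio * diagonal * (1.0 + growth * np.log1p(num_extracted))

def segment_adjacency(segments, angle_tol=np.deg2rad(5.0), dist_tol=5.0):
    # segments: list of ((x1, y1), (x2, y2)) endpoint pairs.
    # A[i, j] is True when segments i and j are nearly parallel and their
    # closest endpoints are near each other, i.e. they are merge candidates.
    n = len(segments)
    A = np.zeros((n, n), dtype=bool)
    for i in range(n):
        p1, p2 = map(np.asarray, segments[i])
        di = np.arctan2(*(p2 - p1)[::-1])          # direction of segment i
        for j in range(i + 1, n):
            q1, q2 = map(np.asarray, segments[j])
            dj = np.arctan2(*(q2 - q1)[::-1])      # direction of segment j
            # direction difference folded into [0, pi/2]
            dtheta = abs((di - dj + np.pi / 2) % np.pi - np.pi / 2)
            gap = min(np.linalg.norm(a - b)
                      for a in (p1, p2) for b in (q1, q2))
            A[i, j] = A[j, i] = dtheta < angle_tol and gap < dist_tol
    return A
```

In such a scheme, segments shorter than the threshold would be culled, and connected groups in the adjacency matrix would be fused into single, longer segments before matching.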
2) To address the poor robustness and low trajectory accuracy of point-feature visual SLAM in low-texture environments, a visual-inertial SLAM algorithm based on point-line feature fusion is proposed. The front end extracts corner and line features from the current frame and always maintains a set number of point and line features. A back-end nonlinear optimization over point and line features is then formulated: the parallax of the tracked point and line features and the number of features still tracked decide whether the current frame becomes a keyframe (a sketch of this decision is given at the end of this section). The point, line, and inertial measurements of the keyframes are tightly coupled in a sliding window to obtain high-precision pose estimates, and loop closure detection removes the accumulated drift, further improving trajectory accuracy.

3) An embedded mobile robot platform equipped with multiple sensors was designed and built, and the proposed visual-inertial SLAM algorithm was evaluated on public datasets as well as in real-time experiments in a real environment. On the public datasets, the accuracy of the proposed algorithm improves by an average of 35% over existing comparable work across a range of complex environments, demonstrating better localization accuracy and robustness. Real-time tests on the mobile robot platform show that the algorithm runs in real time with stable performance, which further corroborates its robustness.
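As a complement to contribution 2, the sketch below shows one common form of parallax- and track-count-based keyframe selection for a sliding-window visual-inertial back end. The thresholds parallax_px and min_tracked, and the way line features are counted, are illustrative assumptions rather than the exact criteria used in this paper.

```python
import numpy as np

def is_keyframe(prev_pts, curr_pts, num_tracked_lines,
                parallax_px=10.0, min_tracked=50):
    # prev_pts, curr_pts: (N, 2) pixel positions of the same point features
    # tracked from the last keyframe into the current frame (row-aligned).
    # num_tracked_lines: number of line features still tracked in this frame.
    if len(curr_pts) == 0:
        return True                      # tracking lost: force a keyframe
    prev_pts = np.asarray(prev_pts, dtype=float)
    curr_pts = np.asarray(curr_pts, dtype=float)
    # mean pixel parallax of the tracked points since the last keyframe
    parallax = np.mean(np.linalg.norm(curr_pts - prev_pts, axis=1))
    too_few = (len(curr_pts) + num_tracked_lines) < min_tracked
    return parallax > parallax_px or too_few
```

Selecting keyframes by parallax keeps enough baseline between keyframes for reliable triangulation, while the feature-count test inserts a keyframe early when tracking starts to fail; both conditions feed the sliding-window optimization described above.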