
Research On RGB-D Visual Odometry Based On Points And Edges

Posted on: 2021-04-12    Degree: Master    Type: Thesis
Country: China    Candidate: J Li    Full Text: PDF
GTID: 2428330602494393    Subject: Control Science and Engineering
Abstract/Summary:
Visual odometry is an important branch in the field of mobile robotics and plays a key role in enabling mobile robots to navigate autonomously in unknown environments. Visual odometry estimates the pose of a mobile robot by processing continuous image sequences. RGB-D cameras acquire both RGB images and depth information, and offer high measurement accuracy, high frame rates, low cost, and easy installation. These advantages have made RGB-D visual odometry a hot topic in the field, and a number of related applications have emerged in recent years.

Human living environments are often very complicated. Indoor scenes, for example, produce images containing weak textures, strong lighting, long corridors, and other challenging content. Such complex image content is unavoidable for visual odometry and poses great challenges to the positioning of mobile robots. Observation of indoor RGB-D images reveals abundant edge features. Edge features carry richer environmental information than point features and offer clear advantages in handling tracking failures caused by missing textures and insufficient point features. This thesis therefore studies robust visual positioning from RGB-D images in indoor environments, combining point features and edge features to improve RGB-D visual odometry and effectively increase the accuracy of mobile robot pose estimation. Specifically, the main contents of this thesis are as follows.

A sparse initialization step is added before minimizing the edge-feature distance error. First, because the depth images acquired by the RGB-D camera contain noise that degrades the accuracy of pose estimation, point cloud filtering is applied to the semi-dense point cloud data corresponding to the edge features. Secondly, a sparse initialization process is added: a subset of the reference-frame point cloud is combined with the P3P and ICP algorithms to
calculate an initial pose estimate, which serves as the input to the subsequent optimization objective function. Finally, approximate nearest neighbors are computed for the edge features to obtain the matching relationship between the reference frame and the current frame. The camera pose is optimized by minimizing the distance error of the edge features, which effectively improves the precision of edge-feature-based RGB-D visual odometry.

Based on the distance error between locally adjacent frames, a dedicated reference frame selection method is designed to reduce the effect of motion blur on pose estimation, and the combination of point and edge features improves adaptability to indoor weak-texture environments. First, building on ORB-SLAM2 (which includes an RGB-D visual interface), edge features are introduced into its visual odometry component, which effectively improves the algorithm's adaptability to indoor weak-texture environments. Secondly, the distance transform is applied to the edge features and is found to respond strongly to motion blur; a reference frame filtering method based on edge features is therefore designed, selecting from the locally adjacent frames a reference frame that satisfies the filtering conditions, which mitigates the effect of motion blur on camera pose estimation. Finally, the estimated camera pose is refined by minimizing the reprojection error of the feature points, further improving pose estimation accuracy.

The above methods are tested and verified on the standard TUM RGB-D dataset, evaluated in terms of pose estimation accuracy, and the experimental results are analyzed and summarized.
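The sparse initialization above combines P3P with ICP to obtain an initial pose. As an illustrative sketch only (not the thesis's implementation), the closed-form rigid alignment at the core of each ICP iteration, given a set of point correspondences, can be computed with the SVD-based Kabsch method:

```python
import numpy as np

def rigid_align(src, dst):
    """Estimate rotation R and translation t minimizing ||R @ src_i + t - dst_i||
    over corresponding 3-D point sets (Kabsch method, SVD-based)."""
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)        # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Synthetic check: rotate and translate a point cloud, then recover the motion.
rng = np.random.default_rng(0)
pts = rng.standard_normal((50, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 0.5])
moved = pts @ R_true.T + t_true
R_est, t_est = rigid_align(pts, moved)
```

With noise-free correspondences the recovered motion matches the true one exactly; in a real ICP loop this step alternates with re-estimating correspondences (e.g. by nearest-neighbor search).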
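The edge-feature residual described above is a distance error: projected edge pixels of the current frame are scored against the distance transform of the reference frame's edge map. The following minimal sketch (brute-force distance transform, toy edge maps, not the thesis's code) shows how such a residual can be evaluated; the same per-frame statistic could also serve as the filtering score in the reference frame selection:

```python
import numpy as np

def distance_transform(edges):
    """Euclidean distance from every pixel to the nearest edge pixel.
    Brute force, O(pixels * edge_pixels) -- fine for small toy images."""
    ys, xs = np.nonzero(edges)
    edge_pts = np.stack([ys, xs], axis=1).astype(float)
    h, w = edges.shape
    grid = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"),
                    axis=-1).astype(float)
    diff = grid[:, :, None, :] - edge_pts[None, None, :, :]
    return np.sqrt((diff ** 2).sum(-1)).min(-1)

def edge_distance_error(dt_ref, proj_edges):
    """Mean distance-transform value at the (projected) edge pixels of the
    current frame -- the residual minimized over the camera pose."""
    ys, xs = np.nonzero(proj_edges)
    return dt_ref[ys, xs].mean()

# Toy example: a vertical edge line, and the same line shifted by 2 pixels.
ref = np.zeros((8, 8), dtype=bool); ref[:, 3] = True
cur = np.zeros((8, 8), dtype=bool); cur[:, 5] = True
dt = distance_transform(ref)
err = edge_distance_error(dt, cur)   # every current edge pixel is 2 px away
```

In practice the distance transform would come from an optimized routine (e.g. OpenCV's distanceTransform) rather than the brute-force loop above.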
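The final refinement step minimizes the reprojection error of the feature points. As a hedged sketch of that residual under a standard pinhole model (the intrinsics below are the commonly used TUM RGB-D defaults, assumed here for illustration):

```python
import numpy as np

def project(K, R, t, pts3d):
    """Project 3-D points into the image with a pinhole model: x = K (R X + t)."""
    cam = pts3d @ R.T + t
    uv = cam[:, :2] / cam[:, 2:3]              # perspective division
    return uv @ K[:2, :2].T + K[:2, 2]         # apply focal lengths and principal point

def reprojection_rmse(K, R, t, pts3d, observed):
    """RMS pixel distance between projected and observed feature points --
    the residual minimized over (R, t) in point-feature pose refinement."""
    diff = project(K, R, t, pts3d) - observed
    return np.sqrt((diff ** 2).sum(axis=1).mean())

# Assumed pinhole intrinsics (typical TUM RGB-D defaults, for illustration only).
K = np.array([[525.0, 0.0, 319.5],
              [0.0, 525.0, 239.5],
              [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 2.0], [0.5, -0.3, 3.0]])
obs = project(K, np.eye(3), np.zeros(3), pts)  # observations at the true pose
err_true = reprojection_rmse(K, np.eye(3), np.zeros(3), pts, obs)
err_shifted = reprojection_rmse(K, np.eye(3), np.array([0.02, 0.0, 0.0]), pts, obs)
```

A pose optimizer (Gauss-Newton or Levenberg-Marquardt, as used inside ORB-SLAM2's bundle adjustment) would iteratively adjust R and t to drive this residual down.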
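Evaluation on the TUM RGB-D benchmark is typically reported as absolute trajectory error (ATE). A minimal sketch of the metric, assuming timestamps are already associated and both trajectories are expressed in a common frame (the official benchmark tools additionally apply a rigid alignment first):

```python
import numpy as np

def ate_rmse(est, gt):
    """RMSE of the translational differences between an estimated trajectory
    and ground truth (arrays of shape (N, 3), row i at the same timestamp)."""
    diff = est - gt
    return np.sqrt((diff ** 2).sum(axis=1).mean())

# Toy trajectories: the estimate is offset from ground truth by 3 cm in x.
gt = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.2, 0.0, 0.0]])
est = gt + np.array([0.03, 0.0, 0.0])
err = ate_rmse(est, gt)   # 0.03 m
```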
Keywords/Search Tags: Visual odometry, RGB-D camera, Edge detection, Reference frame, Sparse initialization, Pose estimation, ORB