Visual odometry is a localisation method for mobile robots that uses image frames captured by vision sensors. It performs localisation by matching feature points extracted from adjacent frames and computing the transformation matrix between them. In dynamic scenes, the key to accurate visual odometry is eliminating the dynamic features that degrade matching and thereby reducing the localisation error. This paper aims to improve the accuracy of visual odometry localisation in dynamic scenes by improving the quality of the depth images acquired by the sensor and by filtering out dynamic features.

(1) When a vision sensor captures a depth image, holes appear wherever the object surface is highly reflective or translucent. When a hole lies on the edge of an object, traditional image restoration algorithms struggle to recover a clear edge, which reduces the accuracy of visual odometry localisation. To address this problem, this paper proposes a depth image restoration algorithm based on edge-first filling and curvature-driven diffusion (CDD). The algorithm first extracts image edges with the Canny operator, then fills the edge regions of the holes using a neighbourhood-maximum filling strategy, which resolves the poor edge repair of depth images, and finally repairs the interior of the holes with the CDD model. Experimental results show that, compared with BF, JBF, FMM and CDD, the restoration results of this algorithm improve PSNR values by 10%-25% and MSSIM values by 0.02%-0.99%. When applied to visual odometry localisation in dynamic scenes, the absolute trajectory error is reduced by 7%-29%, indicating that the algorithm can effectively improve the localisation accuracy of visual odometry.

(2) In dynamic scenes, accurately eliminating the features on dynamic objects is the key to accurate visual odometry. Currently, semantic segmentation or geometric constraints are usually used to eliminate dynamic objects from the scene. However, although semantic segmentation can segment objects, it cannot distinguish their motion state; geometric constraints can infer an object's overall motion state from its local features, but they are not accurate enough for non-rigid objects. To address these problems, this paper proposes a dynamic feature filtering algorithm based on semantic information and geometric constraints. The algorithm first segments objects with Yolact, then uses geometric constraints to determine each object's motion state and derives a mask for the dynamic objects, so that the motion state of non-rigid objects can be determined more accurately; it then filters out dynamic features according to the mask and matches only static features, yielding an accurate visual odometry localisation method. Compared with ORB-SLAM2 and DynaSLAM, the absolute trajectory error of this algorithm is reduced by 2.4%-97.8% on four high-dynamic sequences and by 22.7%-46.9% on four low-dynamic sequences, indicating that it can effectively improve the accuracy of visual odometry localisation in dynamic scenes.

To verify the effectiveness of the proposed algorithms in real dynamic scenes, an indoor mobile robot was designed and implemented, and the proposed algorithms were applied to the robot for localisation. The experimental results show that, in real dynamic scenes, the localisation trajectory is more accurate and the absolute trajectory error lower than those of DynaSLAM, validating the effectiveness of the proposed visual odometry localisation method in dynamic scenes.
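The edge-first filling step of contribution (1) can be sketched as follows. This is a minimal illustration, not the paper's implementation: hole pixels are assumed to be marked with depth 0, a simple gradient-magnitude threshold stands in for the Canny operator, and an iterative neighbourhood fill stands in for the CDD inpainting of interior holes; the function names and thresholds are invented for the example.

```python
import numpy as np

def windowed_max(a, win):
    """Maximum over a (2*win+1)^2 neighbourhood (zeros are 'invalid')."""
    out = np.zeros_like(a)
    for dy in range(-win, win + 1):
        for dx in range(-win, win + 1):
            out = np.maximum(out, np.roll(np.roll(a, dy, axis=0), dx, axis=1))
    return out

def fill_depth_holes(depth, grad_thresh=0.5, win=2, max_iter=50):
    """Edge-first depth hole filling sketch (holes are depth == 0)."""
    depth = depth.astype(float).copy()
    # Edge map: gradient-magnitude threshold as a stand-in for Canny.
    gy, gx = np.gradient(np.where(depth > 0, depth, 0.0))
    edges = np.hypot(gx, gy) > grad_thresh
    # Band of pixels around the detected edges.
    band = windowed_max(edges.astype(float), win) > 0
    # 1) Edge-first pass: fill hole pixels inside the edge band with the
    #    maximum valid depth in the local window, so edges stay sharp.
    m = windowed_max(depth, win)
    sel = (depth == 0) & band & (m > 0)
    depth[sel] = m[sel]
    # 2) Interior pass: iterative neighbourhood filling as a stand-in for
    #    the curvature-driven-diffusion (CDD) model used in the paper.
    for _ in range(max_iter):
        holes = depth == 0
        if not holes.any():
            break
        m = windowed_max(depth, 1)
        sel = holes & (m > 0)
        depth[sel] = m[sel]
    return depth
```

The point of the two-pass order is that hole pixels straddling a depth discontinuity are assigned a value from one side of the edge before any diffusion-style fill can blur the boundary.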
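The geometric-constraint step of contribution (2) is typically an epipolar check: a static feature matched across two frames must lie on its epipolar line, so large point-to-line distances indicate motion. The sketch below assumes a fundamental matrix F has already been estimated from the (mostly static) matches, e.g. with RANSAC, and that each feature carries the id of the Yolact instance mask it falls on; the function names, thresholds, and the per-object voting rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def epipolar_distances(F, pts1, pts2):
    """Distance (pixels) of each point in pts2 to its epipolar line F @ p1."""
    ones = np.ones((len(pts1), 1))
    p1 = np.hstack([pts1, ones])   # homogeneous coordinates, shape (N, 3)
    p2 = np.hstack([pts2, ones])
    lines = p1 @ F.T               # epipolar lines in the second image
    num = np.abs(np.sum(lines * p2, axis=1))
    den = np.hypot(lines[:, 0], lines[:, 1])
    return num / den

def filter_dynamic_features(F, pts1, pts2, instance_ids,
                            dist_thresh=1.0, ratio_thresh=0.5):
    """Return a boolean mask of features judged static.

    instance_ids[i] is the segmentation instance the i-th feature falls
    on (-1 for background). An instance is declared dynamic when more
    than ratio_thresh of its features violate the epipolar constraint by
    more than dist_thresh pixels; all of its features are then dropped,
    so a partly moving (non-rigid) object is rejected as a whole.
    """
    d = epipolar_distances(F, pts1, pts2)
    keep = np.ones(len(pts1), dtype=bool)
    for obj in set(instance_ids.tolist()) - {-1}:
        sel = instance_ids == obj
        if np.mean(d[sel] > dist_thresh) > ratio_thresh:
            keep[sel] = False      # whole instance mask is dynamic
    # Background features are filtered point by point.
    keep[(instance_ids == -1) & (d > dist_thresh)] = False
    return keep
```

Only the surviving (static) matches would then be passed to pose estimation, which is what reduces the absolute trajectory error in dynamic sequences.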