
Research On Autonomous Visual Navigation Method For Planetary Landing And Exploration Mission

Posted on: 2017-12-04    Degree: Doctor    Type: Dissertation
Country: China    Candidate: M Yu    Full Text: PDF
GTID: 1312330536981223    Subject: Aeronautical and Astronautical Science and Technology
Abstract/Summary:
Autonomous navigation and guidance is a key technology for planetary exploration missions, and it directly influences the performance of precise planetary landing and planetary rover science investigation. As deep space exploration missions develop, traditional navigation methods such as inertial navigation no longer satisfy the accuracy requirements of either planetary landing or planetary rover exploration. Optical sensors are affordable and technically mature for deep space missions; they provide abundant visual information from which the absolute pose of the lander with respect to a globally referenced frame can be inferred, making them a solid foundation for deep space exploration. It is therefore necessary to develop innovative, visual-information-aided autonomous planetary exploration systems. Supported by the national 973 program “Research on problems of navigation, guidance and control for planetary precise landing missions”, this thesis focuses on the exploitation of optical visual information in planetary precise landing and planetary rover exploration missions. The work of this thesis can be grouped into the following five aspects:

Firstly, visual feature extraction and matching, the front-end technique of visual navigation, is studied. Considering the characteristics of descent images during planetary landing, an ad hoc local image feature detector is proposed; compared with a standard local feature detector (SURF), the proposed algorithm is more robust against affine transformations. In addition, a novel crater detection and matching algorithm is proposed, in which image edge and region information are combined to describe a crater and a 3D crater model is used to reduce the false matching rate. Finally, 3D feature recognition is investigated. Simulation results demonstrate the effectiveness of the proposed algorithms.

Secondly, the construction of the visual database is studied. Owing to the natural attributes of planetary terrain (the surface is often uniformly colored and contains far fewer structured landmarks than terrestrial environments), some visual features can appear very similar to one another, which introduces ambiguity into feature matching and the subsequent pose estimation. Furthermore, the limited capability of on-board hardware poses a great challenge to real-time performance and database storage. To address these issues, a metric named feature exploitability is first defined to select visual features that are not only distinctive but also contribute strongly to uncertainty reduction in pose estimation. A hierarchical structure is then established to improve the efficiency of database retrieval. The proposed approach is evaluated in various scenarios, including a simulated planetary terrain test and an experiment based on a vision-aided inertial navigation system. The results demonstrate the benefits of feature-exploitability-driven database construction and the necessity of such a database in a vision-aided navigation system for planetary landing.
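The abstract does not give the exact form of the feature-exploitability metric. As a rough, hypothetical sketch of the underlying idea only, the Python fragment below scores each candidate landmark by combining a descriptor-distinctiveness term with the log-determinant of that landmark's approximate Fisher-information contribution to pose estimation, and keeps the highest-scoring features for the on-board database. The function names, data layout, weighting, and the information approximation are assumptions made for illustration, not the thesis's actual formulation.

import numpy as np

def exploitability_score(descriptor, neighbor_descriptors, jacobian, w=0.5):
    # Illustrative (hypothetical) feature-exploitability score combining:
    #   * distinctiveness: distance of the descriptor from its nearest
    #     neighbours in descriptor space (ambiguous features score low);
    #   * pose-information gain: log-determinant of J^T J, where J is the
    #     measurement Jacobian of this landmark w.r.t. the lander pose
    #     (features that constrain the pose well score high).
    # The 50/50 weighting ignores scale differences and is a placeholder.
    dists = np.linalg.norm(neighbor_descriptors - descriptor, axis=1)
    distinctiveness = dists.min() if len(dists) else 0.0

    info = jacobian.T @ jacobian
    info_gain = np.linalg.slogdet(info + 1e-9 * np.eye(info.shape[0]))[1]

    return w * distinctiveness + (1.0 - w) * info_gain

def select_database_features(features, k):
    # Keep the k highest-scoring landmarks for the on-board database.
    # Each feature is assumed to be a dict with "desc", "neighbors",
    # and "jacobian" entries (hypothetical layout).
    scored = sorted(features,
                    key=lambda f: exploitability_score(f["desc"],
                                                       f["neighbors"],
                                                       f["jacobian"]),
                    reverse=True)
    return scored[:k]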
Next, optical-vision-based navigation is studied. Building on the crater detection and matching algorithm proposed in this thesis, lidar measurements are combined with crater recognition to lessen the effect of the “plane hypothesis” on navigation. During the final landing phase, the difference between descent images and the images in the database becomes quite large, which severely complicates feature matching. To cope with this issue, a novel feature matching algorithm that uses only image positions is proposed: feature positions are employed to construct affine invariants that form invariant sets, and matching is then realized by measuring set similarity. In this way, reliance on poorly shaped image textures is avoided. Simulation results demonstrate the advantage of the proposed algorithm over image-texture-based feature matching techniques.

Lastly, research focuses on improving the autonomy of planetary rover exploration missions, specifically the autonomy and navigation accuracy of rover exploration within a SLAM framework. Landforms of interest are first inferred from measured 3D lidar points to generate revisit locations for the rover; a self-contained active revisiting path planning algorithm is then proposed to improve navigation performance. Simulation results show the effectiveness of these two algorithms in improving the autonomy of rover exploration missions.
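Regarding the position-only matching described above, the abstract states only that feature positions are used to build affine invariants and that matching is performed by comparing invariant sets. One classical way to obtain affine invariants from positions alone, and the assumption behind the sketch below, is to use ratios of triangle areas, which are preserved by affine transformations; the names, neighbourhood construction, and similarity tolerance here are illustrative rather than the thesis's exact method.

import numpy as np
from itertools import combinations

def tri_area(a, b, c):
    # Signed area of triangle (a, b, c). Under an affine map x -> Ax + t,
    # every signed area scales by det(A), so ratios of areas are invariant.
    return 0.5 * ((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))

def invariant_set(center, neighbors):
    # Affine invariants built purely from point positions: ratios of the
    # areas of triangles formed by the feature and pairs of its neighbours.
    invs = []
    for (p, q), (r, s) in combinations(combinations(neighbors, 2), 2):
        a1, a2 = tri_area(center, p, q), tri_area(center, r, s)
        if abs(a2) > 1e-12:
            invs.append(a1 / a2)
    return np.sort(np.array(invs))

def set_similarity(s1, s2, tol=0.05):
    # Fraction of invariants in s1 that have a close counterpart in s2.
    if len(s1) == 0 or len(s2) == 0:
        return 0.0
    matches = sum(np.any(np.abs(s2 - v) <= tol * max(abs(v), 1e-12)) for v in s1)
    return matches / len(s1)

Matching a descent-image feature to the database feature with the highest set similarity then requires no image texture at all, which is the property the abstract highlights for the final landing phase.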
Keywords/Search Tags: planetary landing, visual feature, optical navigation, visual database, feature recognition