The artificial intelligence co-pilot is an emerging topic, and an important branch of it is visual orientation. At present there are many theoretical techniques for visual orientation, such as target tracking, target detection, and SLAM (Simultaneous Localization and Mapping). SLAM is a navigation technology widely used in robotics: it can observe the cockpit environment in real time and build a map at the same time, without prior knowledge of the scene. This paper proposes applying visual SLAM to the visual orientation of the aircraft cockpit, where, combined with the actual environment, it can carry out the visual orientation task of the artificial intelligence co-pilot.

First, an A320-300 simulated cockpit was used as the experimental environment, and a Kinect 2.0 depth camera was used for data collection. One thousand sets of experimental data were obtained, each consisting of a color image and a depth image. The ORB (Oriented FAST and Rotated BRIEF) algorithm was used to match feature points across the color image sequence. The matching results were then used for nonlinear optimization, establishing the PnP (Perspective-n-Point) model in the 3D-2D case and the ICP (Iterative Closest Point) model in the 3D-3D case. To solve for the camera's motion pose, the PnP model was solved with the Gauss-Newton iterative algorithm and with the g2o (General Graph Optimization) framework, while the ICP model was solved with an SVD (singular value decomposition)-based algorithm. The results of the three approaches were compared and analyzed, and the analysis shows that all three converge. Among them, the Gauss-Newton algorithm is the fastest, the g2o approach is the most intuitive, and the SVD-based solution has the smallest error. The computed camera motion pose was represented by a rotation matrix and a translation vector. Finally, Pangolin was used to plot the trajectory of the camera's pose changes, which verified the feasibility of applying visual SLAM to the visual orientation of artificial intelligence co-pilots.
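The abstract does not include the PnP derivation itself; as a rough illustration of the 3D-2D Gauss-Newton step it describes, the following is a minimal NumPy sketch. It minimizes reprojection error over an SE(3) pose with a left-perturbation parameterization; the intrinsics and synthetic points in the usage example are hypothetical, not the paper's data.

```python
import numpy as np

def hat(v):
    """Skew-symmetric (cross-product) matrix of a 3-vector."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def exp_so3(phi):
    """Rodrigues formula: axis-angle vector -> rotation matrix."""
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return np.eye(3) + hat(phi)
    a = phi / theta
    return (np.cos(theta) * np.eye(3)
            + (1 - np.cos(theta)) * np.outer(a, a)
            + np.sin(theta) * hat(a))

def pnp_gauss_newton(pts_w, uv, K, iters=20):
    """Estimate (R, t) from 3D world points and their 2D projections
    by Gauss-Newton on the reprojection error."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        H = np.zeros((6, 6))
        b = np.zeros(6)
        for Pw, obs in zip(pts_w, uv):
            Pc = R @ Pw + t                     # point in camera frame
            X, Y, Z = Pc
            pred = np.array([fx * X / Z + cx, fy * Y / Z + cy])
            e = pred - obs                      # reprojection residual
            # Jacobian of the projection wrt the camera-frame point
            A = np.array([[fx / Z, 0.0, -fx * X / Z**2],
                          [0.0, fy / Z, -fy * Y / Z**2]])
            # chain rule with left perturbation: dPc/ddxi = [I | -hat(Pc)]
            J = A @ np.hstack([np.eye(3), -hat(Pc)])
            H += J.T @ J
            b += -J.T @ e
        dx = np.linalg.solve(H, b)              # Gauss-Newton step
        drho, dphi = dx[:3], dx[3:]
        dR = exp_so3(dphi)
        R, t = dR @ R, dR @ t + drho            # left-multiply pose update
        if np.linalg.norm(dx) < 1e-10:
            break
    return R, t
```

With noise-free correspondences this converges in a few iterations from an identity initialization, which matches the paper's observation that the Gauss-Newton solver is fast; a production solver would add robust loss functions and outlier rejection.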
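For the 3D-3D case, the SVD solution the abstract refers to is the closed-form rigid alignment performed inside each ICP iteration. A minimal NumPy sketch of that step, assuming point correspondences are already known (full ICP would re-estimate correspondences and repeat):

```python
import numpy as np

def icp_align_svd(src, dst):
    """Closed-form rigid alignment (the SVD step inside ICP):
    find R, t minimizing sum ||R @ src_i + t - dst_i||^2
    over corresponding point pairs."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    P, Q = src - c_src, dst - c_dst          # centered point sets
    U, _, Vt = np.linalg.svd(P.T @ Q)        # SVD of cross-covariance
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t
```

Because this step is a closed-form least-squares solution rather than a numerical descent, it is consistent with the abstract's finding that the SVD-based solver yields the smallest solution error on clean data.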