With the rapid development and wide application of moving platforms such as UAVs, cars, and smartphones, the problem of the self-localization of moving platforms has attracted increasing attention. Owing to its low cost and non-contact nature, videometrics has been widely employed in military and civil fields such as aerospace, missile guidance, surveillance, virtual reality, and autonomous driving. According to the number of cameras, videometric methods are divided into monocular, stereo, and multi-camera systems. Methods based on multi-camera systems are more flexible and can capture richer information about the surrounding environment. Therefore, this dissertation studies methods for estimating the ego-motion of a multi-camera system. In general, a relative pose estimation method is used within a Random Sample Consensus (RANSAC) loop to obtain the model parameters and to find a correct inlier set. Reducing the number of points required to estimate a motion model is an efficient strategy for lowering the computational cost and improving the robustness of RANSAC. Therefore, this dissertation systematically studies minimal solutions for multi-camera relative pose estimation against the background of the self-localization problem of moving platforms in videometric applications. The main contributions of this dissertation are as follows:

(1) Aiming at the self-localization problem of moving platforms based on a pure vision system, a relative pose estimation algorithm for a multi-camera system over three consecutive views based on the Ackermann model is proposed. The Ackermann motion model reduces the degrees of freedom of the relative pose estimation problem and thus simplifies it. In practice, the motion of a ground vehicle over three successive views can be approximated by the Ackermann model, which constrains the local ego-motion to be a planar circular motion at constant speed. Firstly, the Lucas-Kanade (LK)
optical flow method is employed to match features over the three views with a forward-backward error strategy. Then, the generalized epipolar constraints based on the Ackermann model are established from the matched features over the three views. Finally, the relative pose of the multi-camera system is obtained from these constraints. Simulation and real-world experiments show that the algorithm achieves high computational efficiency while maintaining the accuracy of the relative pose estimation.

(2) Exploiting the properties of far-away points, a relative pose estimation method for multi-camera systems with decoupled rotation and translation is proposed. In practice, for far-away points, the parallax shift induced by translation between two views is hardly noticeable when the ego-motion is small. Therefore, this dissertation proposes a two-step method for estimating the relative rotation matrix and the relative translation vector. Firstly, the gravity direction is aligned using the pitch and roll angles from the inertial measurement unit; then the relative rotation matrix is estimated using the far points; finally, with the relative rotation known, the relative translation vector is estimated using the nearby points. We compared our algorithms with state-of-the-art algorithms on synthetic and real datasets. The experiments demonstrate that our algorithms are accurate and efficient.

(3) Exploiting the information of orientation- and scale-covariant features (point correspondence, rotation, scales along both image axes, and shear), a relative pose estimation algorithm for a multi-camera system employing orientation- and scale-covariant features is proposed. New constraints relating an orientation- and scale-covariant feature to the relative pose parameters of the multi-camera system are derived based on the homography matrix. Combining the new homography constraint with the traditional homography constraint, a pair of orientation- and scale-covariant features produces three
equations. The information from the inertial measurement unit reduces the degrees of freedom of the relative pose by two, so two orientation- and scale-covariant features suffice to estimate the relative pose. We use the hidden-variable resultant method to solve the polynomial system, which leads to a univariate polynomial of degree six. The relative rotation matrix is recovered from this polynomial, and the relative translation is then solved linearly. We compared our algorithm with state-of-the-art algorithms on synthetic and real datasets. The experiments demonstrate that our algorithm is feasible and effective; the algorithm is applicable to planar scenes.

(4) Exploiting the local affine transformation matrix of matches, a multi-camera relative pose estimation algorithm using affine correspondences instead of point correspondences is proposed. When the relative rotation is small, replacing the rotation matrix with its first-order approximation effectively reduces the degree of the equations and simplifies the relative pose estimation problem. In this dissertation, substituting the first-order approximation of the rotation matrix into the affine matrix constraint and the generalized epipolar constraint yields new constraints, given the gravity direction from the inertial measurement unit. Firstly, the gravity direction is aligned using the pitch and roll angles from the inertial measurement unit; then the new constraints are established from two affine correspondences, which leads to a univariate polynomial of degree four. Finally, the relative translation is solved linearly. We compared our algorithm with state-of-the-art algorithms on synthetic and real datasets. The experiments demonstrate that our algorithm is accurate and efficient.
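The gravity-alignment and rotation/translation decoupling idea used in contributions (2) and (4) can be sketched as follows. This is a minimal illustrative sketch, not the dissertation's algorithm: the Euler-angle convention for the IMU alignment and the closed-form yaw formula are assumptions made for this example. After pitch and roll are removed using the IMU, the unknown rotation between two views reduces to a single yaw angle about gravity, which far-away points determine under a rotation-only model (their translation parallax being negligible).

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def gravity_align(pitch, roll):
    # Rotation that cancels the IMU pitch and roll, so the remaining
    # unknown rotation between views is a pure yaw about the gravity axis.
    # The Euler-angle order here is an assumption for this sketch.
    return rot_y(-pitch) @ rot_x(-roll)

def estimate_yaw(f1, f2):
    """Closed-form yaw between two sets of gravity-aligned unit bearing
    vectors of far-away points. For such points the translation parallax
    is negligible, so f2 ~= R_z(yaw) @ f1 (rotation-only model); the
    least-squares yaw follows from summing cross and dot terms in the
    horizontal plane."""
    num = np.sum(f1[:, 0] * f2[:, 1] - f1[:, 1] * f2[:, 0])
    den = np.sum(f1[:, 0] * f2[:, 0] + f1[:, 1] * f2[:, 1])
    return np.arctan2(num, den)

# Demo on synthetic far points rotated by a known yaw of 0.2 rad.
rng = np.random.default_rng(0)
f1 = rng.normal(size=(50, 3))
f1 /= np.linalg.norm(f1, axis=1, keepdims=True)
f2 = f1 @ rot_z(0.2).T          # apply R_z(0.2) to every bearing vector
yaw = estimate_yaw(f1, f2)       # recovers 0.2
```

Once the yaw (and hence the full relative rotation) is fixed this way from far points, the translation can be solved linearly from nearby points, mirroring the two-step structure described above.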