Visual measurement offers high accuracy and non-contact operation, and it is widely used in relative motion measurement for guiding robotic grasping, in aircraft navigation and positioning, and in target detection, localization, and speed measurement. In specific applications, however, visual measurement must cope with many challenges, such as differing target characteristics, limited sensor performance, and complex observation conditions. For visual measurement tasks carried out from a moving observation platform, the platform motion complicates the imaging characteristics of the target, the imaging relationship between the target and the background, and the relative motion between the target and the measurement camera, which increases the difficulty of visual measurement. On the other hand, the motion also increases the effective observation information about the target, and the resulting image sequence provides additional constraints on the measurement. Driven by engineering applications, this thesis studies key technologies for visual measurement of target pose and motion trajectory from a moving platform, including camera calibration, relative pose measurement of targets, and motion trajectory measurement of point targets. The main research results are as follows:

(1) A two-step camera calibration method based on line features that accounts for the line imaging relationship is proposed. Point features and line features are both commonly used for camera calibration; in some applications, line features are more stable and carry richer information, which makes them well suited to calibration. The proposed method first uses the direct linear transformation under the pinhole imaging model to obtain the mapping of a three-dimensional line expressed by its general equation, and projects the three-dimensional line onto the image plane to obtain the reprojected image line. On this basis, points on the observed image line and on the reprojected line are used to solve for the distortion coefficients. Finally, all parameters are jointly optimized. The method makes full use of the reprojected line information and yields accurate estimates of the camera's intrinsic parameters, extrinsic parameters, and distortion coefficients, which can be used in subsequent high-accuracy measurement applications. It can not only correct a distorted image line back to a straight line, but also correct each point on the line to its true position. Experiments show that the method needs only a single image of the calibration target and achieves a calibration accuracy comparable to that of common multi-image methods that use views from different positions.
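The abstract describes the distortion-solving step only in words; purely as an illustration, the following minimal Python sketch shows one way such a step could look, fitting two radial distortion coefficients so that observed points move back onto the reprojected straight line. The simplified inverse radial model, the sample points, the line parameters, and the principal point are assumptions made for this sketch, not the thesis implementation.

```python
# Illustrative sketch only (not the thesis implementation): estimate radial
# distortion coefficients k1, k2 from points observed on a distorted image line,
# given the corresponding reprojected straight line a*x + b*y + c = 0.
import numpy as np
from scipy.optimize import least_squares

def undistort(pts, k, center):
    """Approximate inverse radial model: x_u = c + (x_d - c) * (1 + k1*r^2 + k2*r^4)."""
    d = pts - center
    r2 = np.sum(d**2, axis=1, keepdims=True)
    return center + d * (1.0 + k[0] * r2 + k[1] * r2**2)

def line_residuals(k, pts, line, center):
    """Signed distance of the undistorted points to the reprojected line (a, b, c)."""
    a, b, c = line
    x, y = undistort(pts, k, center).T
    return (a * x + b * y + c) / np.hypot(a, b)

# Hypothetical data: points sampled on one observed (distorted) image line and
# the reprojected line obtained from the direct linear transformation step.
observed_pts = np.array([[100.3, 50.1], [150.7, 65.2], [200.9, 80.6],
                         [251.2, 96.0], [301.8, 111.4]])
reproj_line = np.array([0.3, -1.0, 20.0])
principal_pt = np.array([320.0, 240.0])

sol = least_squares(line_residuals, x0=[0.0, 0.0],
                    args=(observed_pts, reproj_line, principal_pt))
print("estimated distortion coefficients:", sol.x)
```

In the actual method, a fit of this kind would be followed by the joint optimization of the intrinsic, extrinsic, and distortion parameters described above.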
(2) An online reconstruction-based pose estimation and tracking method using a depth camera is proposed. In assembly tasks on ships, at docks, and in similar settings, a fork must be controlled to insert accurately into an adapter, and automating this assembly requires automatic measurement of the relative pose and other motion parameters between the fork and the adapter. However, because the component structures and lighting conditions are complicated and the surfaces of both the fork and the adapter are weakly textured, pose estimation based on point features in RGB images is difficult to apply, especially under the requirement that as few fiducial markers as possible be placed. To solve this problem, the method first uses an online dense 3D reconstruction framework based on the depth camera to reconstruct the surface topography of the fork and the adapter; to a certain extent, the online reconstruction alleviates the heavy noise and holes in depth images captured by a consumer-grade depth camera. Pose estimation is then carried out with 3D lines or 3D planes derived from the structural characteristics of the fork and the adapter. The method also tracks the adapter pose with an iterative closest point (ICP) algorithm that integrates contour and surface information, which effectively improves the robustness of pose tracking (a minimal sketch of such a combined ICP update is given below, after result (3)). Because the method relies mainly on depth images and the online reconstruction of the targets' three-dimensional structure, it does not depend on surface texture, is little affected by observation conditions such as illumination, and adapts well to on-site operating conditions. It has been successfully verified in actual fork assembly tasks.

(3) A self-supervised pose estimation method based on differentiable RANSAC (RANdom SAmple Consensus) is proposed. For targets that carry no fiducial markers and whose shape and structural characteristics are not distinctive, pose estimation based on deep learning adapts better. Such methods, however, generally require a large amount of real data with pose labels for training to achieve good performance, and accurately labeling the pose of large amounts of real data usually requires manual intervention, which is time-consuming and labor-intensive. Self-supervised training is an effective way to remove this dependence on labels. The proposed method therefore uses differentiable RANSAC to make the otherwise non-differentiable robust pose estimation step differentiable, so that the network can be trained end to end. On this basis, a reprojection error loss defined on the output features is designed to train the network in a self-supervised fashion: after the network is trained with full supervision on annotated synthetic data, the model is further self-supervised on unannotated real data for fine-tuning. Evaluations on a challenging pose estimation dataset show that the proposed self-supervision significantly enhances the model's original performance, improving the evaluation results by 18% and reaching results comparable to those of supervised methods. In addition, even with fewer data types used, the evaluation result is still 38.5% higher than that of other typical self-supervised pose estimation methods.
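To make the role of differentiable RANSAC in result (3) concrete, the following minimal PyTorch sketch shows its core idea: pose hypotheses sampled from the predicted correspondences are scored by a reprojection error, and a softmax-weighted expectation over the hypotheses keeps the selection step differentiable, so the same reprojection error can act as a self-supervised loss on unannotated images. The hypothesis set, the pinhole projection, and all names here are assumptions for illustration, not the thesis code.

```python
# Illustrative sketch (not the thesis code): soft hypothesis selection as used in
# differentiable RANSAC, with a reprojection error that needs no pose labels.
import torch

def reprojection_error(pose, pts3d, pts2d, K):
    """Mean 2D distance between projected model points and predicted keypoints."""
    cam = (pose[:3, :3] @ pts3d.T + pose[:3, 3:]).T          # model points in camera frame
    proj = (K @ cam.T).T
    proj = proj[:, :2] / proj[:, 2:].clamp(min=1e-6)         # perspective division
    return (proj - pts2d).norm(dim=1).mean()

def dsac_loss(hypotheses, pts3d, pts2d, K, alpha=0.1):
    """Softmax-weighted expected reprojection error over sampled pose hypotheses.

    Each hypothesis is a 4x4 pose estimated from a minimal subset of the
    network's predicted 2D-3D correspondences; taking the expectation instead of
    a hard argmin keeps the pipeline differentiable, so this loss can fine-tune
    the network on unannotated real images.
    """
    errs = torch.stack([reprojection_error(h, pts3d, pts2d, K) for h in hypotheses])
    weights = torch.softmax(-alpha * errs, dim=0)            # better hypotheses weigh more
    return (weights * errs).sum()
```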
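Returning to result (2), the sketch below illustrates, under assumed inputs, how contour and surface cues could be merged in a single ICP update: both cue types contribute nearest-neighbour correspondences, contour correspondences are up-weighted, and one weighted Kabsch fit produces the rigid update. The point clouds, the contour extraction, and the weight value are hypothetical; the thesis combines these cues inside its own tracking pipeline.

```python
# Illustrative sketch (not the thesis implementation): one ICP iteration that mixes
# surface points and contour points of the adapter model with a weighted rigid fit.
import numpy as np
from scipy.spatial import cKDTree

def rigid_fit(src, dst, weights):
    """Weighted least-squares rigid transform (Kabsch) aligning src to dst."""
    w = weights / weights.sum()
    mu_s = (w[:, None] * src).sum(0)
    mu_d = (w[:, None] * dst).sum(0)
    H = (src - mu_s).T @ (w[:, None] * (dst - mu_d))             # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, mu_d - R @ mu_s

def icp_step(model_surf, model_contour, scene_surf, scene_contour, w_contour=2.0):
    """One ICP iteration: nearest neighbours for both cues, then one rigid update."""
    nn_surf = scene_surf[cKDTree(scene_surf).query(model_surf)[1]]
    nn_cont = scene_contour[cKDTree(scene_contour).query(model_contour)[1]]
    src = np.vstack([model_surf, model_contour])
    dst = np.vstack([nn_surf, nn_cont])
    weights = np.r_[np.ones(len(model_surf)), w_contour * np.ones(len(model_contour))]
    return rigid_fit(src, dst, weights)
```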
(4) A trajectory measurement method for localizing a target from a flying platform based on an elevation soft constraint is proposed. When a moving point target is observed from a single flying platform, the usual triangulation conditions are not satisfied, so methods such as trajectory triangulation are needed to measure the trajectory parameters of the target, that is, to localize it and measure its speed. However, under weak observation geometry, with large oblique viewing angles, small intersection angles, and a platform that can hardly complete a favorable observation track, trajectory triangulation alone struggles to produce accurate results. To solve this problem, the target motion trajectory is first parameterized according to the continuity of the motion. The method then introduces the elevation data of the observation area, which are usually available although limited in sampling density and accuracy; using the prior that the moving target lies on the ground (or sea surface) within a certain area, the digital elevation of the area constrains the target position and provides an initial value for the trajectory parameters. Finally, because the digital elevation is not sufficiently accurate, it is used only as a soft constraint when optimizing the trajectory parameters. The method effectively improves the accuracy and robustness of positioning and speed measurement of moving targets from a flying platform under weak observation geometry, and it has been successfully verified in a UAV-based flight test: with the UAV 10 to 20 kilometers from the target, the positioning error of the measured motion trajectory is within about 20 m.
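As a final illustration, the sketch below shows, under assumed data, how the elevation soft constraint of result (4) can enter the trajectory fit: the trajectory is parameterized as a low-order polynomial in time, image reprojection residuals from the moving camera are stacked together with a down-weighted residual against the digital elevation model (DEM), and the system is solved by nonlinear least squares. The quadratic parameterization, the weight w_dem, and the dem_height interface are illustrative assumptions, not the thesis formulation.

```python
# Illustrative sketch (not the thesis implementation): trajectory triangulation with
# the digital elevation model acting as a soft (down-weighted) constraint.
import numpy as np
from scipy.optimize import least_squares

def trajectory(params, t):
    """Quadratic-in-time trajectory: X(t) = a0 + a1*t + a2*t^2 for each axis."""
    a = params.reshape(3, 3)                      # rows hold x, y, z coefficients
    basis = np.stack([np.ones_like(t), t, t**2])  # (3, N)
    return (a @ basis).T                          # (N, 3) target positions

def residuals(params, t, obs_uv, cam_R, cam_t, K, dem_height, w_dem=0.3):
    """Image reprojection residuals plus a soft residual against the DEM height."""
    X = trajectory(params, t)
    img_res = []
    for Xi, uv, R, tc in zip(X, obs_uv, cam_R, cam_t):
        p = K @ (R @ Xi + tc)                     # project into the moving camera
        img_res.append(p[:2] / p[2] - uv)
    dem_res = w_dem * (X[:, 2] - dem_height(X[:, 0], X[:, 1]))
    return np.concatenate(img_res + [dem_res])

# Usage with assumed observations (t, obs_uv, cam_R, cam_t, K, dem_height):
# sol = least_squares(residuals, x0=np.zeros(9),
#                     args=(t, obs_uv, cam_R, cam_t, K, dem_height))
```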