Relative pose measurement of non-cooperative space targets is a key technology in the field of on-orbit servicing, with important applications in the on-orbit maintenance and operation of spacecraft and the removal of space debris. At present, optical cameras or lidar are most commonly used to measure the relative pose of non-cooperative space targets. Measurement methods based on optical cameras are strongly affected by illumination conditions and the background in a complex space environment, whereas lidar-based methods are insensitive to illumination. However, because the lidar beam is narrow and the target must be scanned, the detection efficiency for non-cooperative targets is limited. To address the problems and limitations of a single sensor, this paper studies a relative pose measurement method based on the fusion of information from a monocular camera and a 3D lidar.

Firstly, a method for fusing sequence images with laser point clouds is proposed, using the image sequence captured by the monocular camera and the 3D point cloud scanned by the lidar. After the laser point cloud is segmented by the K-means clustering algorithm, the segmented point cloud is projected onto the pixel plane using the camera intrinsic parameters to obtain laser projection points. The region containing the projection points, together with the point-cloud depth values, is then taken as the raw data, and a radial basis function (RBF) interpolation algorithm is used to interpolate the depth values. At the same time, FAST feature points are extracted from the target region containing the projection points; combining the two-dimensional coordinates of these feature points with the interpolated depth values yields a fusion point cloud that carries the target's true scale, depth, and color information.

Secondly, a relative pose measurement method for non-cooperative space
targets based on the fusion of the point cloud and sequence images is studied. FAST feature points are extracted from the target region of the sequence images collected by the optical camera, and a sparse optical flow field is established with the Lucas-Kanade (LK) sparse optical flow method to match image feature points between adjacent frames; mismatched feature points are eliminated based on the RANSAC principle. While the optical flow field is being established, a Gaussian pyramid model is constructed so that the strong constraints assumed by the optical flow method remain valid in real scenes. Once feature points are matched between adjacent frames, the correspondence between image feature points and the computed fusion point cloud can be obtained, and the relative pose is solved with a RANSAC-EPnP algorithm based on these 2D-3D correspondences. Then, during the pose solution, a bundle adjustment optimization model is constructed according to the principle of minimizing the re-projection error to refine the estimated pose. In addition, to counter the error that accumulates while solving the pose sequence, a loop-closure detection method combining image similarity detection and laser point cloud registration is constructed; based on the detected loop frames, a loop optimization algorithm eliminates the accumulated error in the pose solution. Finally, based on the above algorithms, simulation experiments are carried out and compared with a traditional relative pose solving method to verify the effectiveness of the algorithm proposed in this paper.
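The projection-and-interpolation step of the fusion pipeline can be sketched as follows. This is a minimal numerical sketch, not the thesis' implementation: the intrinsic matrix `K` and the planar lidar patch are assumed values (the plane gives a smooth depth field that can be checked against ground truth), SciPy's `RBFInterpolator` stands in for the radial basis function interpolation, and random points inside the projected region stand in for FAST keypoints.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical camera intrinsic matrix (fx, fy, cx, cy are assumptions).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(pts_cam, K):
    """Project 3-D points in the camera frame onto the pixel plane."""
    uv = pts_cam @ K.T
    return uv[:, :2] / uv[:, 2:3]        # perspective division

# Synthetic "segmented laser point cloud": a planar patch in front of the camera.
rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(200, 2))
z = 5.0 + 0.2 * xy[:, 0] + 0.1 * xy[:, 1]        # smooth depth over the patch
cloud = np.column_stack([xy, z])

proj = project(cloud, K)                 # laser projection points (pixels)
depth_interp = RBFInterpolator(proj, z, kernel='thin_plate_spline')

# Stand-ins for FAST keypoints detected inside the projected target region.
feat_xy = rng.uniform(-0.5, 0.5, size=(20, 2))
feat_z_true = 5.0 + 0.2 * feat_xy[:, 0] + 0.1 * feat_xy[:, 1]
feat_uv = project(np.column_stack([feat_xy, feat_z_true]), K)

feat_depth = depth_interp(feat_uv)               # interpolated depth per feature
fused = np.column_stack([feat_uv, feat_depth])   # (u, v, Z) fusion points
```

Because the feature points lie inside the convex hull of the projection points and the depth field is smooth, the interpolated depths closely match the true depths of the synthetic plane.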
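The pose refinement by minimizing re-projection error can be illustrated with a small synthetic example. This is a sketch under assumptions, not the thesis' implementation: the intrinsics, point set, and noise level are invented, the pose is parameterized as a rotation vector plus translation, and SciPy's `least_squares` stands in for a full bundle adjustment solver.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Hypothetical camera intrinsic matrix (assumed values).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(pose, pts):
    """Transform pts by pose = (rotation vector, translation) and project."""
    R = Rotation.from_rotvec(pose[:3]).as_matrix()
    pc = pts @ R.T + pose[3:]
    uv = pc @ K.T
    return uv[:, :2] / uv[:, 2:3]

def reproj_residual(pose, pts3d, obs2d):
    """Stacked re-projection errors, the quantity bundle adjustment minimizes."""
    return (project(pose, pts3d) - obs2d).ravel()

rng = np.random.default_rng(1)
pts3d = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 6.0], size=(50, 3))  # fusion points
pose_true = np.array([0.05, -0.02, 0.03, 0.10, -0.20, 0.30])           # ground truth
obs2d = project(pose_true, pts3d) + rng.normal(0.0, 0.2, size=(50, 2)) # noisy 2-D matches

# Refine from a coarse initial guess (in practice, the RANSAC-EPnP output).
result = least_squares(reproj_residual, np.zeros(6), args=(pts3d, obs2d))
pose_refined = result.x
```

With 50 correspondences and sub-pixel observation noise, the refined pose recovers the ground-truth rotation vector and translation to well under the noise-induced uncertainty.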