Hand-eye calibration is a fundamental problem in robotics: it estimates the pose relationship between a robotic arm and a camera, in either the eye-in-hand or eye-to-hand configuration. As robotics has come to rely heavily on visual perception, hand-eye calibration has likewise shifted toward convenient vision-based techniques, which are especially critical in behavioral tasks such as dynamic grasping. Most hand-eye calibration methods determine the hand-eye pose offline using a 2D calibration board with distinct features; some works further improve accuracy with dedicated 3D calibration objects and point-cloud information. However, marker-based methods fail when the markers' features are not distinct, and most of them are offline and usually cannot run automatically while the robot is working. When a collision or other disturbance during task execution shifts the hand-eye pose, the task usually must be stopped for re-calibration, which is complicated and difficult in the wild or in space.

For the first time, we propose an online automatic hand-eye calibration method for untextured or weakly textured scenes. The object poses estimated over an image sequence serve as the data basis for calibration, so their accuracy directly determines calibration accuracy. To address the low pose accuracy of untextured 3D object tracking, we build an object pose error model and propose a novel pose refinement network (PR-Net), which significantly improves camera pose accuracy and thereby provides reliable data for hand-eye calibration. We then use the object poses provided by PR-Net and the end-effector poses reported by the robotic arm as the calibration data. Based on the biased characteristics of 3D tracking results, we propose a 3D convergent-point constraint over multi-view sight lines, which eliminates the Z-axis error introduced by the tracking algorithm and refines the object's exact position. Combined with the pose constraint formed by the closed loop of object trajectories, we alternately iterate the 3D convergent-point constraint and the closed-loop constraint to fully decouple the hand-eye pose from the object position, solving the problem that the nonlinear optimization is difficult to converge and easily falls into a local minimum.

Experiments show that the average error of our hand-eye calibration method is 1.20 degrees and 23.18 mm, the best results reported for hand-eye calibration with untextured objects. Because our algorithm works with textureless or weakly textured objects, when a collision in the work scene changes the relative pose of the camera and the arm, the arm does not need to stop the task it is performing: it only needs to collect images of objects in the camera's visible space and the end-effector poses at the corresponding times. Even if the object has no texture or only weak texture, the hand-eye pose can be re-estimated online, so the robotic arm can continue its task without interruption.
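The convergent-point idea above constrains the object's position to the single 3D point where the sight lines from multiple camera poses should meet. A standard way to recover such a point is the least-squares intersection of 3D rays: for rays with origins o_i and unit directions d_i, minimize the sum of squared orthogonal distances, which reduces to the linear system Σ(I − d_i d_iᵀ) p = Σ(I − d_i d_iᵀ) o_i. The sketch below illustrates that computation only; the function name and synthetic data are ours, and the paper's full constraint (coupled with the closed-loop term) is more elaborate.

```python
import numpy as np

def convergent_point(origins, directions):
    """Least-squares intersection of rays.

    origins, directions: (N, 3) arrays; directions must be unit vectors.
    Minimizes sum_i ||(I - d_i d_i^T)(p - o_i)||^2 over p.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        P = np.eye(3) - np.outer(d, d)  # projector orthogonal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Synthetic check: three rays from different camera centers, all aimed
# at the same 3D point, should converge back to that point.
target = np.array([0.3, -0.1, 0.8])
origins = np.array([[0.0, 0.0, 0.0],
                    [0.5, 0.2, 0.0],
                    [-0.4, 0.3, 0.1]])
dirs = target - origins
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
p = convergent_point(origins, dirs)
print(np.allclose(p, target, atol=1e-9))  # → True
```

With noisy sight lines (e.g., a biased Z estimate from the tracker), the rays no longer intersect exactly, and the same linear solve returns the point minimizing the total squared distance to all rays, which is what makes it usable as a depth-correcting constraint.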