Part assembly is a common and important step in the production process, and the programming of assembly robots has gradually shifted from teach programming and off-line programming toward autonomous programming. Long shafts are a typical and common part type in assembly work, so proposing an effective vision-guided automated assembly method for long shafts is of great significance for improving the intelligent application of assembly robots. Based on a six-axis industrial robot and two depth cameras, this paper sets up an experimental platform for long shaft assembly and carries out the following work on the problems of target segmentation, target recognition, visual positioning and pose estimation in the automated peg-in-hole assembly process of industrial robots:

1) Based on the D-H parameters, the coordinate systems of the robot are established and its forward and inverse kinematics are analyzed. A depth camera system is set up to meet the needs of long shaft assembly; the working principles of the two depth cameras and the hand-eye calibration method are analyzed, and the transformation matrices between the camera coordinate systems and the robot coordinate system are obtained through experiments.

2) According to the requirements of point cloud segmentation in the assembly scene, segmentation methods based on Euclidean clustering and hypervoxel segmentation are studied systematically. After the point cloud noise is suppressed with a bilateral filtering algorithm, the point clouds of the robot and the loading platform are segmented according to the Euclidean distance between data points. An improved hypervoxel segmentation method is then proposed to separate the different objects on the loading platform: since the assembly scene lacks RGB color information, the color components are removed from the feature vector and a curvature feature is added, yielding a 37-dimensional feature space for measuring the similarity between voxels. Experiments show that the optimized method achieves better segmentation results.

3) Principal component analysis is used to build a local coordinate system for each point cloud subset and to complete the coarse registration, and the iterative closest point algorithm is used to identify the point cloud of the long shaft part. A point cloud registration method based on axis extraction is proposed to estimate the pose of the long shaft part: first, coarse registration of the long shaft point cloud is achieved by principal component analysis; second, the normals and curvatures of the point cloud are computed, points whose curvature does not meet the requirements are removed, and ISS 3D feature points are extracted from the filtered cloud; then the RANSAC method is used to estimate the common perpendicular of the feature point normals as the central axis of the long shaft part, which is extended to both ends of the part; from the positional relationship between the central axis of the target point cloud and that of the model point cloud, two candidate transformation matrices are obtained; finally, the precise pose of the part is selected by comparing the registration errors of the iterative closest point algorithm. The experimental results show that this method obtains the position and orientation of the part more accurately than using the iterative closest point algorithm alone.
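The core idea behind the axis extraction in step 3) is that, on an ideal cylinder, every surface normal is perpendicular to the central axis, so the axis direction can be recovered as the common perpendicular of the feature point normals. The Python sketch below illustrates this with a simple RANSAC loop; the function name, iteration count and angular tolerance are illustrative assumptions and not values taken from this work, whose own implementation may differ.

    # Sketch of RANSAC-based estimation of the shaft's central-axis direction.
    # "normals" is assumed to be an (N, 3) array of unit surface normals sampled
    # at the ISS 3D feature points of the long shaft point cloud.
    import numpy as np

    def estimate_axis_direction(normals, iterations=500, angle_tol_deg=5.0, seed=None):
        """Return the unit vector most nearly perpendicular to all surface normals."""
        rng = np.random.default_rng(seed)
        # |n . a| below this value means n is within angle_tol_deg of perpendicular to a
        cos_tol = np.cos(np.radians(90.0 - angle_tol_deg))
        best_axis, best_inliers = None, -1
        for _ in range(iterations):
            i, j = rng.choice(len(normals), size=2, replace=False)
            candidate = np.cross(normals[i], normals[j])   # perpendicular to both samples
            norm = np.linalg.norm(candidate)
            if norm < 1e-6:                                # nearly parallel normals, skip
                continue
            candidate /= norm
            inliers = np.sum(np.abs(normals @ candidate) < cos_tol)
            if inliers > best_inliers:
                best_axis, best_inliers = candidate, inliers
        if best_axis is None:
            raise ValueError("could not find a valid axis candidate")
        # Refine with all inliers: the least-squares common perpendicular is the
        # right singular vector of the inlier normals with the smallest singular value.
        mask = np.abs(normals @ best_axis) < cos_tol
        _, _, vt = np.linalg.svd(normals[mask], full_matrices=False)
        axis = vt[-1]
        return axis / np.linalg.norm(axis)

The RANSAC step discards normals that do not belong to the cylindrical lateral surface (for example, points near the shaft ends), which is why it is more robust than taking the smallest singular vector of all normals directly.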
4) The experimental platform is built and robot vision-guided assembly experiments based on the depth cameras are carried out. First, the hand-eye relationship is calibrated. Second, the superiority of the improved hypervoxel segmentation algorithm for segmenting the point cloud of the cluttered scene is verified, and the influence of different segmentation parameters on the results is discussed. Then, the point cloud of the long shaft part is identified with principal component analysis and the iterative closest point algorithm, after which the long shaft part is grasped based on the axis extraction algorithm. Finally, the center of the mating hole of the sleeve part is located with the Hough circle transform to complete the assembly task. The assembly experiments show that the segmentation and pose estimation methods proposed in this paper can be effectively applied to the automated peg-in-hole assembly process, that the pose estimation accuracy is stable, and that the approach adapts well in practical application, laying a good foundation for wider use of vision-guided assembly methods.
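As a concrete illustration of the final step, the hole center of the sleeve can be located with OpenCV's Hough circle transform in an image aligned with the depth frame. The sketch below is a minimal example under that assumption; the file name, blur kernel, and radius bounds are illustrative placeholders that would need tuning to the actual camera and part.

    # Minimal sketch of locating the sleeve's hole center with the Hough circle transform.
    import cv2
    import numpy as np

    image = cv2.imread("sleeve_top_view.png")            # aligned image of the sleeve (assumed path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)                       # suppress speckle before edge detection

    circles = cv2.HoughCircles(
        gray,
        cv2.HOUGH_GRADIENT,
        dp=1.2,          # accumulator resolution relative to the image
        minDist=50,      # minimum distance between detected circle centers (px)
        param1=100,      # upper Canny threshold used internally
        param2=40,       # accumulator threshold: lower -> more (possibly false) circles
        minRadius=20,    # expected hole radius range in pixels
        maxRadius=80,
    )

    if circles is not None:
        cx, cy, r = np.round(circles[0, 0]).astype(int)  # strongest circle = hole rim
        print(f"hole center at ({cx}, {cy}), radius {r} px")

Back-projecting the detected pixel center with the corresponding depth value and the camera intrinsics, and then applying the hand-eye transformation obtained in step 1), gives the insertion target in the robot base frame.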