Gas metal arc welding (GMAW) is the most widely used welding method in industrial production, with applications in construction machinery, shipbuilding, rail transit, and construction engineering. Compared with manual GMAW, robot GMAW offers higher efficiency, better quality, and easier management, so replacing manual GMAW with robot GMAW is a clear trend. However, traditional GMAW robots based on teach-and-playback and offline programming cannot recognize welding seam deviations caused by dimensional errors, positioning errors, and thermal deformation, which seriously limits the welding quality and automation level of GMAW robots. It is therefore of great significance to study welding seam recognition and positioning methods to improve both. A structured light vision system transforms welding seam recognition and positioning into the recognition and positioning of structured light feature points by actively projecting structured light stripes onto the workpiece; it is noncontact and widely applicable. Traditional welding seam recognition and positioning methods based on structured light vision sensing are generally either direct positioning based on structured light stripe centerline detection or target positioning based on feature matching. However, strong arc light and spatter in online GMAW seam tracking seriously interfere with these methods, which may therefore misidentify or mislocate the welding seam; their accuracy and efficiency also need improvement. To address these problems and improve accuracy, efficiency, and robustness, this dissertation studies welding seam recognition and positioning methods based on structured light vision sensing using machine learning and feature detection technologies. The main research work and innovative achievements of the dissertation are as follows:

(1) A GMAW robot experimental
platform based on stereo structured light vision sensing is designed. On this basis, the effects of camera exposure time, aperture size, and structured light wavelength on structured light image quality are studied, and the relationship between these settings and image quality is quantified using the structural similarity (SSIM) index. The experimental results show that image noise is lowest with red structured light of 650 nm wavelength, a camera exposure time of 1.5 ms, and the largest aperture size. Determining the optimal settings experimentally yields low-noise structured light images, laying a hardware foundation for improving the accuracy and robustness of weld recognition and positioning.

(2) To overcome the lack of robustness, accuracy, and efficiency in weld type classification, tacked spot recognition, and weld region-of-interest (ROI) determination, a method based on an improved YOLOv5 is proposed. First, the three detection requirements are transformed into a unified target localization task to improve efficiency; next, to improve the localization accuracy of the weld ROI, the center component bias between the predicted box and the ground truth is added to the original CIOU localization loss function; then, a weighted classification loss function is used to reduce false positives in fillet and groove welds; finally, a self-template method for padding image borders is presented to improve the generalization ability of the trained model. Experimental results show that the proposed method reaches 100% precision, 100% recall, 0.91 mean intersection-over-union, a 2.4-pixel center component bias of the determined weld ROI, and an 18 ms inference time on offline, rusty, highly reflective, and online structured light images, indicating
that the proposed method has excellent accuracy, efficiency, and robustness.

(3) To overcome the poor robustness of welding seam positioning, and inspired by the idea of locating checkerboard corners through likelihood calculation, a fillet weld positioning method based on weld likelihood calculation is proposed. First, a convolution kernel and a mathematical model with anti-noise and fillet weld enhancement properties are designed to calculate the weld likelihood; then, a positioning method based on preselection and re-examination is designed to suppress both false positives and false negatives. The experimental results show that the false positive and false negative rates of the proposed method are both 0 on offline, rusty, and highly reflective structured light images, and 0 and 1%, respectively, on online structured light images, with a single-image calculation time of 48 ms, revealing that the proposed method is highly robust and efficient enough for the real-time requirements of seam tracking. The accuracy experiment on the experimental platform shows a maximum positioning deviation of 0.52 mm, which meets seam tracking with general accuracy requirements but not seam tracking with high-precision requirements.

(4) Because directly extending the fillet weld likelihood calculation to groove weld positioning is too slow for real-time use, a groove weld positioning method is proposed that combines a lightweight DeepLab v3+ semantic segmentation network with fast weld likelihood calculation. First, an improved lightweight DeepLab v3+ semantic segmentation network is presented to segment the low-noise groove weld structured light foreground image; then, a fast groove weld likelihood calculation is designed based on the idea of the fillet weld likelihood calculation. The experimental results show that the false positive and false
negative rates of the proposed method are both 0 on the offline structured light images, and 0 and 0.8%, respectively, on the online structured light images, with a calculation time of 63 ms, indicating that the proposed method achieves robust and fast groove weld positioning. Since the proposed method only achieves pixel-level accuracy, it cannot meet seam tracking with high-precision requirements.

(5) To raise the accuracy of the fillet and groove weld positioning methods to the level required for high-precision GMAW seam tracking, a weld sub-pixel refinement method based on maximum directional projection is proposed, which refines the sub-pixel coordinates of fillet and groove welds. The experimental results show a maximum positioning error of 0.22 pixels and a calculation time of 41 ms on the offline, rusty, highly reflective, and online fillet weld simulation images, and a maximum positioning error of 0.24 pixels and a calculation time of 90 ms on the offline and online groove weld simulation images. These results reveal good robustness and accuracy; the fillet weld calculation time meets real-time seam tracking, but the groove weld calculation time does not. The accuracy experiment on the experimental platform reveals that the proposed method significantly improves the accuracy of weld positioning.

In this dissertation, welding seam recognition and positioning based on structured light vision sensing has been studied in depth. Machine learning and feature detection methods are adopted to classify weld types, recognize tacked spots, determine weld ROIs, and position welding seams, forming a complete and systematic weld recognition and positioning method. The research results reveal that the proposed methods have good accuracy, efficiency, and robustness, and a good application prospect in the welding
seam recognition and positioning of robot GMAW.
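The SSIM index used in contribution (1) to quantify structured light image quality can be illustrated with a minimal sketch. This is the standard global (single-window) SSIM formula of Wang et al., not the dissertation's specific implementation, which may use a sliding window; the `data_range` of 255 assumes 8-bit images.

```python
import numpy as np

def ssim_global(x, y, data_range=255.0):
    """Simplified global SSIM between two images of the same scene.

    A rough sketch for ranking noise levels across camera/light setups;
    the standard SSIM averages this quantity over sliding windows.
    """
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    c1 = (0.01 * data_range) ** 2  # stabilizes the luminance term
    c2 = (0.03 * data_range) ** 2  # stabilizes the contrast/structure term
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2)
    )
```

An identical pair yields SSIM = 1, and added noise lowers the score, which is how a noisier exposure or wavelength setting would rank below a cleaner one.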
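Sub-pixel refinement on a projection profile, as in contribution (5), can be sketched with the standard three-point parabolic interpolation around the discrete maximum. This is a generic illustration only, not the dissertation's maximum-directional-projection algorithm, whose projection direction and model are not specified here.

```python
import numpy as np

def subpixel_peak(profile):
    """Sub-pixel location of the peak of a 1-D projection profile.

    Fits a parabola through the discrete maximum and its two neighbours
    and returns the vertex position; falls back to the integer argmax at
    the profile borders or when the three points are collinear.
    """
    profile = np.asarray(profile, dtype=np.float64)
    i = int(np.argmax(profile))
    if i == 0 or i == len(profile) - 1:
        return float(i)  # no neighbours on both sides to fit
    y0, y1, y2 = profile[i - 1], profile[i], profile[i + 1]
    denom = y0 - 2.0 * y1 + y2  # second difference (negative at a peak)
    if denom == 0.0:
        return float(i)
    # Vertex offset of the interpolating parabola, in [-0.5, 0.5]
    return float(i + 0.5 * (y0 - y2) / denom)
```

For a profile sampled from an exact parabola peaking at 2.3, the function recovers 2.3 rather than the integer argmax of 2, illustrating how sub-pixel coordinates sharpen a pixel-level weld position.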