In the textile industry, textile defect detection is a key step in ensuring product quality. Traditional detection methods are mainly based on image processing techniques, but they have clear limitations when dealing with complex scenes and diverse textiles. This paper therefore uses the selectivity of the attention mechanism to help the model weight different parts of the input, extract the more important feature information, and make more accurate judgments. At the same time, by optimizing the fusion of shallow and deep information, the effectiveness of multi-scale feature fusion is improved. The work focuses on the key stages of textile image feature extraction, multi-scale feature fusion, and loss calculation. The main research contents are as follows.

(1) Research on a textile defect detection method combining an attention mechanism with an adaptive memory fusion network. To address the high cost, low accuracy, and slow speed of defect detection in textile production, a detection model combining an attention mechanism and an adaptive memory fusion network is proposed. First, an improved attention module, SCBAM, is introduced into the YOLOv5 backbone to build the SCNet feature extraction network and strengthen the model's ability to extract textile defect features (a rough sketch of this kind of attention block is given below). Then, to strengthen the transfer of shallow localization information and mitigate the confounding effect produced during feature fusion, an adaptive memory fusion network is proposed that improves feature scale invariance while incorporating feature information from the backbone into the feature fusion layer. Finally, the CDIoU loss function is introduced to improve detection accuracy. Comparative experiments on the ZJU-Leaper and Tianchi textile defect datasets show that this method outperforms most existing networks in both detection accuracy and speed.

(2) Research on a textile defect detection method based on a contextual receptive field and adaptive feature fusion. To address the large variations in the shape and scale of textile defects and the uncertainty of their edges, a detection method based on a contextual receptive field and adaptive feature fusion is proposed. First, an improved contextual receptive field module is introduced into the CSPDarknet53 backbone to make full use of local and global context information and strengthen the extraction of textile defect features. Second, a deconvolution-based adaptive feature fusion network is designed to improve the transfer efficiency of shallow localization information and feature scale invariance. Finally, an exponential distance IoU is proposed to optimize the bounding box loss calculation and adaptively weight the bounding box gradients, improving the detection accuracy of the model. Experiments on the ZJU-Leaper and Tianchi datasets show that this method achieves mAP values of 42.5% and 61.5%, which are 2.9 and 3 percentage points higher than the original YOLOv5s, respectively.
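The internal structure of the improved SCBAM module is not spelled out in this summary; as a rough illustration only, the PyTorch sketch below shows the channel-plus-spatial attention pattern (CBAM) that such a module typically builds on and how it can be applied to a backbone feature map. All class names and hyperparameters here are assumptions for illustration, not the thesis implementation.

```python
# Hypothetical sketch of a CBAM-style attention block (channel + spatial attention),
# illustrating the kind of module (SCBAM) inserted into the YOLOv5 backbone.
# Names and hyperparameters are assumptions, not the thesis implementation.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        # Aggregate spatial information with average and max pooling,
        # then reweight each channel through a shared MLP.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        w = torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        return x * w


class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # Pool across channels, then learn a per-pixel weighting map.
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w


class CBAMBlock(nn.Module):
    """Channel attention followed by spatial attention on a backbone feature map."""

    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        return self.sa(self.ca(x))
```

In a YOLOv5-style backbone such a block would typically be placed after selected backbone stages, so that the reweighted feature maps are what the feature fusion layers receive.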
(3) Study of a textile defect detection method based on a feature fusion network with attention loss. This method first introduces the simplified spatial pyramid pooling module SimCSPSPPF into the feature extraction network to expand the model's receptive field and fuse local and global feature information, improving the extraction of primary textile features while maintaining detection speed. The multi-scale primary features are then fused adaptively by an adaptive path aggregation network, and a bottom-up pathway is added to optimize the propagation path of shallow information (a sketch of this kind of adaptive weighted fusion follows below). Finally, a dynamic non-monotonic loss function is used to balance bounding box regression between low-quality and high-quality samples, further improving the detection accuracy of the model.
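The exact fusion rule of the adaptive path aggregation network is not given in this summary. As a rough sketch under that caveat, the snippet below fuses three multi-scale feature maps with learnable, softmax-normalized scalar weights after resizing them to a common resolution; the module name, shapes, and nearest-neighbour resizing are all hypothetical choices for illustration.

```python
# Hypothetical sketch of adaptive weighted fusion of multi-scale features;
# it illustrates, but does not reproduce, the adaptive path aggregation idea.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveFusion(nn.Module):
    def __init__(self, channels, num_inputs=3):
        super().__init__()
        # One learnable scalar weight per input scale.
        self.weights = nn.Parameter(torch.ones(num_inputs))
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, features):
        # Resize every feature map to the spatial size of the first one.
        target = features[0].shape[-2:]
        resized = [F.interpolate(f, size=target, mode="nearest") for f in features]
        # Softmax keeps the fusion weights positive and summing to one,
        # letting the network learn how much each scale contributes.
        w = torch.softmax(self.weights, dim=0)
        fused = sum(wi * fi for wi, fi in zip(w, resized))
        return self.conv(fused)


# Example: fuse P3/P4/P5-style maps (batch 1, 256 channels) at the P3 resolution.
p3, p4, p5 = (torch.randn(1, 256, s, s) for s in (80, 40, 20))
out = AdaptiveFusion(256)([p3, p4, p5])  # -> shape (1, 256, 80, 80)
```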