Instance segmentation of clothing images is a prerequisite for three-dimensional clothing reconstruction and three-dimensional virtual fitting. Accurate edge segmentation and low-noise segmented clothing images make the virtual fitting effect more realistic. However, because clothing edges are complex in shape and garments contain many distinct structural components (such as collars, cuffs, and pockets), traditional segmentation algorithms cannot achieve fine-grained segmentation. This paper therefore designs deep-learning-based segmentation algorithms for clothing images and investigates their segmentation performance and practical application.

Most current image segmentation methods are based on deep convolutional neural networks (DCNNs), but the receptive field of a standard convolution is limited by the kernel size: it learns only the feature information around the kernel and cannot establish global associations. Previous studies have shown that attention-based methods can establish long-range dependencies between pixels and reconstruct the feature map to better complete image segmentation. However, existing channel and spatial attention mechanisms greatly increase the number of network parameters, which raises the computational cost of segmenting high-resolution images. Building on the position attention module (PAM) and the channel attention module (CAM), this paper proposes a new multi-attention module (Multi-Attention Module) that reduces the network parameters while learning contextual information at different scales, thereby improving the segmentation accuracy of clothing images. Adding this module to the existing segmentation networks DeepLab V3 and Mask R-CNN yields the multi-attention DeepLab (MA-DeepLab V3) and multi-attention Mask R-CNN (MA-Mask R-CNN) networks. Experiments on two different clothing image datasets, together with ablation studies and comparisons with previous segmentation methods, show that adding the proposed multi-attention module improves the segmentation performance of both networks over their baselines. Mask AP and mIoU are used as evaluation metrics: AP is the standard metric for COCO-style datasets, while IoU is commonly used for semantic segmentation networks. The mask AP of Mask R-CNN increases by 3%, and the mIoU of DeepLab V3 increases from 0.51 to 0.56.
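For reference, the sketch below shows one way such a module could be assembled in PyTorch: a DANet-style position attention branch and a channel attention branch whose outputs are fused by a 1x1 convolution. The fusion scheme, the pooling used to shrink the position-attention map (standing in here for the parameter/compute reduction described above), and names such as MultiAttentionModule and pool_scale are illustrative assumptions, not the exact design proposed in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PositionAttention(nn.Module):
    """Position (spatial) attention: every pixel attends to every other pixel."""
    def __init__(self, in_channels):
        super().__init__()
        self.query = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.key = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.value = nn.Conv2d(in_channels, in_channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).view(b, -1, h * w).permute(0, 2, 1)   # (b, hw, c')
        k = self.key(x).view(b, -1, h * w)                        # (b, c', hw)
        attn = torch.softmax(torch.bmm(q, k), dim=-1)             # (b, hw, hw)
        v = self.value(x).view(b, -1, h * w)                      # (b, c, hw)
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, h, w)
        return self.gamma * out + x


class ChannelAttention(nn.Module):
    """Channel attention: models dependencies between feature channels."""
    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        q = x.view(b, c, -1)                                      # (b, c, hw)
        k = x.view(b, c, -1).permute(0, 2, 1)                     # (b, hw, c)
        attn = torch.softmax(torch.bmm(q, k), dim=-1)             # (b, c, c)
        out = torch.bmm(attn, x.view(b, c, -1)).view(b, c, h, w)
        return self.gamma * out + x


class MultiAttentionModule(nn.Module):
    """Hypothetical fusion of the two branches. Running the position branch on a
    pooled copy keeps the (hw x hw) attention map small, which is one way to cut
    the memory/compute cost on high-resolution feature maps."""
    def __init__(self, in_channels, pool_scale=2):
        super().__init__()
        self.pool_scale = pool_scale
        self.pam = PositionAttention(in_channels)
        self.cam = ChannelAttention()
        self.fuse = nn.Conv2d(in_channels, in_channels, kernel_size=1)

    def forward(self, x):
        h, w = x.shape[-2:]
        pooled = F.avg_pool2d(x, self.pool_scale)                 # coarser scale
        p = F.interpolate(self.pam(pooled), size=(h, w),
                          mode="bilinear", align_corners=False)   # back to full size
        c = self.cam(x)                                           # full-resolution channel branch
        return self.fuse(p + c)
```

In a setup like this, the module would be dropped into the backbone or head of DeepLab V3 or Mask R-CNN after a chosen feature stage, leaving the rest of the network unchanged.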
To test the proposed segmentation networks in real scenarios, this paper segments clothing from product images and real photographs and uses the results for virtual-stitched 3D clothing reconstruction and 3D-scanned virtual clothing reconstruction. For virtual-stitched reconstruction, an image inpainting network first generates the unseen side of the garment from the segmented clothing image, avoiding the need to photograph the garment twice as in traditional virtual fitting; attaching the generated garment to the surface of a three-dimensional human body then completes the 3D clothing reconstruction. The results show that the inpainting network generates convincing front and back views of the garment and yields good 3D reconstruction results. For 3D-scanned reconstruction, the Structure from Motion (SfM) algorithm reconstructs the garment from real photographs taken at different positions, after the segmentation network removes the background, including the human body. Compared with reconstruction without background removal, the 3D clothing reconstructed with the proposed segmentation network contains significantly less noise and produces more realistic 3D garments for the display of 3D clothing products.
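As an illustration of the background-removal step that precedes SfM, the sketch below applies a predicted binary mask to each multi-view photograph so that background pixels do not contribute to the reconstructed point cloud. The directory layout (views/, masks/, masked_views/), the mask_background helper, and the hand-off to an external SfM tool such as COLMAP are assumptions for the example, not the paper's exact pipeline.

```python
import glob
import os

import cv2
import numpy as np


def mask_background(image_path, mask_path, out_path):
    """Zero out everything outside the predicted clothing/person mask so that
    background points do not enter the SfM reconstruction."""
    image = cv2.imread(image_path)                       # photo from one viewpoint (BGR)
    mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)   # mask from the segmentation network
    mask = (mask > 127).astype(np.uint8) * 255           # binarize, in case the mask is soft
    masked = cv2.bitwise_and(image, image, mask=mask)    # keep foreground, black out background
    cv2.imwrite(out_path, masked)


os.makedirs("masked_views", exist_ok=True)
# Hypothetical layout: one mask per multi-view photo; the masked images are then
# passed to an off-the-shelf SfM tool (e.g. COLMAP) for point-cloud reconstruction.
for img_file in sorted(glob.glob("views/*.jpg")):
    name = os.path.splitext(os.path.basename(img_file))[0]
    mask_background(img_file,
                    os.path.join("masks", name + ".png"),
                    os.path.join("masked_views", name + ".jpg"))
```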