
Generation Of Images From Garment Sketches Based On SSGAN

Posted on: 2024-09-14    Degree: Master    Type: Thesis
Country: China    Candidate: L Chen    Full Text: PDF
GTID: 2531307142481864    Subject: Software engineering
Abstract/Summary:
In recent years, with the development of deep learning, research on generating images from sketches has attracted increasing attention; in particular, sketch-based generation of clothing images is widely applied in garment design and garment culture. A garment sketch is a black-and-white image composed of complex lines, and different user groups have very different requirements for the texture of the generated garment image. Most existing work targets the generation of images from sketches of simple objects, where the generated images are constrained by the sketch boundaries, leading to dark colours, poor resolution, no control over the generated texture, and inconsistency between the generated image and the sketch lines. To address these issues, this paper carries out the following research work:

1. To address the scarcity of paired sketch-image datasets and the small number of publicly available garment sketch-image datasets, a paired garment sketch-image dataset is constructed. Over 31,000 garment images with a resolution of 512×512 on a white background were collected, and the garment image domain was mapped to the sketch domain to obtain the corresponding sketches. In addition, a 128×128 texture patch is cropped from each image in the garment dataset to build a fabric pattern dataset (a minimal construction sketch is given below).

2. To address the problem that the generated garment images are affected by the black-and-white sketches, resulting in greyish garment colours, StyleGAN is used to generate garment images from sketches, and a triplet loss is added while encoding the sketches with a VGG network. This enables the encoder to accurately extract the content information of the sketches, enriching the content and reducing the colour impact (an illustrative encoder sketch is given below).

3. To address the problem that existing methods cannot accurately generate garment images with a specified texture, this paper proposes the garment image generation model SSGAN (Sketches-based StyleGAN). SSGAN takes a sketch and a texture pattern as input. First, the sketch features are extracted with a VGG network, and the texture features of the texture pattern are extracted with a ResNet to obtain a latent texture vector. Then, the latent vector is used as a constraint and the sketch features are fed into a U-Net, yielding features that carry both sketch and texture information. Finally, these features are transformed by a pSp structure and fed into the StyleGAN generator to produce a high-resolution garment image containing the specified texture (a skeleton of this pipeline is given below).

Experiments are conducted on the self-constructed dataset. Compared with other sketch-based image generation methods under the same experimental conditions, the proposed model obtains better garment generation results, solves the texture generation problem well, and improves the overall quality of the generated garment images. Compared with the best existing method, SSS2IS, large improvements are achieved on all metrics, including a 16.4% reduction in FID, an 18.1% reduction in LPIPS, and a 4.1% increase in SR.
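A minimal sketch of the dataset construction in item 1. The abstract does not name the image-to-sketch mapping method, so OpenCV Canny edges stand in for it here, and the 128×128 texture patch is assumed to be taken from the image centre; directory names and thresholds are hypothetical.

```python
# Illustrative dataset construction: garment image -> sketch + texture patch.
import os
import cv2

SRC_DIR, SKETCH_DIR, TEXTURE_DIR = "garments", "sketches", "textures"  # hypothetical paths
os.makedirs(SKETCH_DIR, exist_ok=True)
os.makedirs(TEXTURE_DIR, exist_ok=True)

for name in os.listdir(SRC_DIR):
    img = cv2.imread(os.path.join(SRC_DIR, name))        # 512x512 garment image, white background
    if img is None:
        continue
    # Map the garment image domain to the sketch domain (edge map as an assumed proxy).
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    sketch = 255 - cv2.Canny(gray, 100, 200)              # white background, black lines
    cv2.imwrite(os.path.join(SKETCH_DIR, name), sketch)
    # Crop a 128x128 texture patch to build the fabric pattern dataset (centre crop assumed).
    h, w = img.shape[:2]
    y, x = (h - 128) // 2, (w - 128) // 2
    cv2.imwrite(os.path.join(TEXTURE_DIR, name), img[y:y + 128, x:x + 128])
```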
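An illustrative sketch of item 2: a VGG-based sketch encoder trained with a triplet loss so that sketches with the same content map to nearby codes. The thesis does not specify how anchor, positive, and negative samples are chosen or how the loss is weighted; the triplet construction and dimensions below are assumptions for illustration only.

```python
import torch
import torch.nn as nn
from torchvision import models

class SketchEncoder(nn.Module):
    """VGG backbone that encodes a sketch into a content vector (simplified stand-in)."""
    def __init__(self, dim=512):
        super().__init__()
        vgg = models.vgg16(weights=None)
        self.features = vgg.features
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(512, dim)

    def forward(self, x):                          # x: (N, 3, 512, 512) sketch batch
        h = self.pool(self.features(x)).flatten(1)
        return self.fc(h)

encoder = SketchEncoder()
triplet = nn.TripletMarginLoss(margin=1.0)

# Anchor and positive share sketch content, negative comes from a different garment (assumed sampling).
anchor, positive, negative = (torch.randn(4, 3, 512, 512) for _ in range(3))
loss = triplet(encoder(anchor), encoder(positive), encoder(negative))
loss.backward()
```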
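A hedged skeleton of the SSGAN pipeline in item 3, showing only how the pieces named in the abstract are wired together: a VGG sketch encoder, a ResNet texture encoder, a U-Net-style fusion module conditioned on the texture latent, and a pSp-style mapping into the W+ space of a StyleGAN generator. The tiny stand-in modules, channel sizes, and the number of style vectors are assumptions, not the thesis implementation; the pretrained StyleGAN generator itself is left as a placeholder.

```python
import torch
import torch.nn as nn
from torchvision import models

class TextureEncoder(nn.Module):
    """ResNet backbone mapping a 128x128 texture patch to a latent texture vector."""
    def __init__(self, dim=512):
        super().__init__()
        resnet = models.resnet18(weights=None)
        self.backbone = nn.Sequential(*list(resnet.children())[:-1])
        self.fc = nn.Linear(512, dim)

    def forward(self, tex):                         # tex: (N, 3, 128, 128)
        return self.fc(self.backbone(tex).flatten(1))

class FusionUNet(nn.Module):
    """Simplified stand-in for the U-Net that fuses sketch features with the texture latent."""
    def __init__(self, dim=512):
        super().__init__()
        self.down = nn.Conv2d(512 + dim, 512, 3, 2, 1)
        self.up = nn.ConvTranspose2d(512, 512, 4, 2, 1)

    def forward(self, sketch_feat, tex_latent):     # sketch_feat: (N, 512, H, W)
        cond = tex_latent[:, :, None, None].expand(-1, -1, *sketch_feat.shape[2:])
        h = torch.relu(self.down(torch.cat([sketch_feat, cond], dim=1)))
        return self.up(h)

class SSGAN(nn.Module):
    def __init__(self, n_styles=18, dim=512):
        super().__init__()
        self.sketch_enc = models.vgg16(weights=None).features   # sketch feature extractor
        self.texture_enc = TextureEncoder(dim)
        self.fusion = FusionUNet(dim)
        self.map2style = nn.Linear(512, n_styles * dim)          # pSp-style mapping into W+
        self.generator = None                                    # pretrained StyleGAN generator plugs in here

    def forward(self, sketch, texture):
        feat = self.sketch_enc(sketch)                           # sketch content features
        tex = self.texture_enc(texture)                          # texture latent used as a constraint
        fused = self.fusion(feat, tex)                           # features with sketch + texture information
        w_plus = self.map2style(fused.mean(dim=(2, 3))).view(sketch.size(0), -1, 512)
        return w_plus                                            # image = StyleGAN generator(w_plus)
```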
Keywords/Search Tags:Sketches, Clothing image generation, Feature fusion, Generative adversarial networks, Feature extraction