
Research On Key Technology Of Multi-view Virtual Try-on

Posted on: 2022-11-27    Degree: Master    Type: Thesis
Country: China    Candidate: J J Jin    Full Text: PDF
GTID: 2481306779489074    Subject: Computer Software and Application of Computer
Abstract/Summary:
In recent years, image-based virtual try-on has become an active research topic. A good try-on image should show clear garment texture, garment deformation that matches the shape of the body, and high-quality results from multiple viewpoints. However, most existing virtual try-on methods struggle with large garment deformations because they place insufficient constraints on the clothing region: the generated garment does not fit the body closely, and it is prone to distortion when images from unseen viewpoints are synthesized. To address these challenges, this work designs a virtual try-on system that handles diverse garment shapes and generates high-quality try-on images from multiple viewpoints. The system consists of two parts: single-view virtual try-on and multi-view virtual try-on.

(1) To address body-garment misalignment and blurry results in single-view try-on image generation, the proposed EVTON module first estimates garment warping parameters with a geometric matching network. A larger weight is then applied to the perceptual loss of the non-clothing region, so that distortion errors in key clothing areas can be adjusted dynamically while excessive warping and interpolation artifacts are avoided. In the try-on stage, the garment mask is first combined with the coarse image rendered by the network via a Hadamard (element-wise) product, and the final try-on result is then synthesized by compositing the mask with the warped garment, which prevents ambiguous texture content in non-garment regions. At the same time, an additional perceptual loss is imposed on the generated garment region and the remaining body regions are re-weighted, which promotes fine texture in the clothing area.

(2) To avoid generating blurred back-view try-on images, the proposed PSR-Net progressively infers the back-view try-on result. Given the location of the clothing on the body, it uses partial convolution to distinguish valid from invalid clothed-body regions in a self-supervised manner, and aggregates the information flow through three progressive modules with shared parameters. This strengthens representation learning of clothing and body information, building a mapping of the clothed-body region from one viewpoint to another and ensuring clear garment structure and texture. To establish the correlation between body parts and texture details across the two viewpoints, a CFR module is constructed to transfer high-level semantic attention over the clothed body to low-level semantic features, keeping garment texture consistent between views. Experiments on two datasets show that the proposed method generates higher-quality try-on images than competing methods and better handles self-occlusion and blur. Finally, the two methods are integrated with the pose estimator OpenPose and the human parser JPP-Net, and the overall system evaluation demonstrates that the system is robust and highly usable.
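The region-re-weighted perceptual loss described for the single-view stage can be sketched as follows. This is an illustrative assumption, not the thesis's actual implementation: function and parameter names are invented, the weights are placeholders, and plain L1 differences over feature maps stand in for deep (e.g. VGG) perceptual features.

```python
import numpy as np

def reweighted_perceptual_loss(feats_pred, feats_gt, cloth_masks,
                               w_cloth=1.0, w_other=2.0):
    """Perceptual-style loss with per-region re-weighting (sketch only).

    feats_pred / feats_gt: lists of feature maps, each of shape (H, W, C).
    cloth_masks: matching (H, W, 1) masks, 1 inside the garment region.
    Non-clothing pixels receive the larger weight w_other, mirroring the
    abstract's idea of weighting the non-clothing area more heavily so
    garment-area distortion errors can be adjusted dynamically.
    """
    total = 0.0
    for fp, fg, m in zip(feats_pred, feats_gt, cloth_masks):
        diff = np.abs(fp - fg)                                  # per-pixel L1
        total += (w_cloth * m * diff + w_other * (1.0 - m) * diff).mean()
    return total
```

In a real system the two weights would be tuned so that the loss emphasizes whichever region the current training stage needs to stabilize.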
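The mask-based compositing step in the try-on stage amounts to a Hadamard (element-wise) product between the garment mask and the warped garment, with the complement keeping the rendered body. A minimal sketch, assuming float images in [0, 1] and a binary single-channel mask (the function name and shapes are illustrative, not the thesis's API):

```python
import numpy as np

def compose_tryon(rough_render, warped_cloth, cloth_mask):
    """Blend a warped garment into a coarse rendered person image.

    The mask selects warped-garment pixels via an element-wise
    (Hadamard) product; the mask's complement keeps the rendered
    body, so non-garment regions cannot inherit ambiguous texture.
    """
    m = cloth_mask.astype(np.float32)       # 1 inside the garment, 0 outside
    return m * warped_cloth + (1.0 - m) * rough_render
```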
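The partial convolution used by PSR-Net to separate valid from invalid clothed-body regions can be illustrated with a minimal single-channel version: responses are computed only over valid pixels, re-normalized by the fraction of valid pixels under the kernel, and the validity mask is updated so a position becomes valid once its window has seen any valid pixel. This is a hand-rolled sketch of the general technique, not the thesis's learned multi-channel layers.

```python
import numpy as np

def partial_conv2d(x, mask, kernel, eps=1e-8):
    """Single-channel partial convolution over a 2D image (sketch).

    x:      (H, W) image; mask: (H, W) with 1 = valid, 0 = invalid.
    kernel: (kh, kw) weights. Invalid pixels are zeroed out, and the
    output is rescaled by kh*kw / (number of valid pixels in window),
    so fully valid and partially valid windows are comparable.
    """
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x * mask, ((ph, ph), (pw, pw)))   # zero out invalid pixels
    mp = np.pad(mask, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=np.float32)
    new_mask = np.zeros_like(mask, dtype=np.float32)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            win_x = xp[i:i + kh, j:j + kw]
            win_m = mp[i:i + kh, j:j + kw]
            valid = win_m.sum()
            if valid > 0:                          # window saw a valid pixel
                out[i, j] = (win_x * kernel).sum() * (kh * kw) / (valid + eps)
                new_mask[i, j] = 1.0               # position becomes valid
    return out, new_mask
```

Stacking such layers lets the valid region grow outward each step, which is why the mask update effectively lets the network supervise itself on where clothed-body content exists.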
Keywords/Search Tags: Geometric matching, Virtual try-on, Clothing generation, Semantic reasoning, Feature reconstruction