
Research On Image Style Transfer Optimization Method Based On Deep Learning

Posted on: 2021-01-11
Degree: Master
Type: Thesis
Country: China
Candidate: W Y Pang
Full Text: PDF
GTID: 2428330611955208
Subject: Engineering
Abstract/Summary:
In the field of image processing, image style transfer is a technique that uses computational methods to alter an image's appearance by manipulating its colors, contours, lines, and other information. With the development of machine learning in recent years, deep neural networks have achieved good results in image style transfer. However, traditional image style transfer methods still suffer from problems such as insufficient style expression, incomplete separation of content and style, poor performance on high-resolution images, and artifacts in low-resolution images. Building on a study of existing networks, this thesis designs two new deep neural networks for image style transfer that improve the quality of the transferred images. The main work of this thesis covers three aspects:

(1) Based on the feature extraction of the VGG network, this thesis adds a classifier and a conversion block to the style transfer transformation network, yielding a new model that improves on the original. The network composed of the classifier and the conversion block is incorporated into the parameter-training process of the generator network, so that the generator's parameters are optimized toward a better style transfer effect, producing a model whose outputs are more strongly stylized.

(2) Based on the encoder-decoder model proposed by Artsiom et al., this thesis designs a new style transfer model that replaces the original single encoder with a separate content encoder and a newly proposed style encoder. The style features extracted by the style encoder allow the generator to perceive changes in image style, so the generated images are more consistent and realistic in style. In addition, whereas previous methods generally train on a single style image, this model can process multiple style images at the same time and integrate their style characteristics for transfer.

(3) For the improved model of (2), four improved loss functions are proposed; adding the style encoder and decoder makes the loss functions more accurate. The four losses, namely a conditional adversarial loss, a reconstruction loss, a content loss, and a style loss, are used to correct the model during training. When evaluating the style transfer results, the thesis not only makes qualitative comparisons based on human aesthetics, but also quantitatively computes the deception rate of the style transfer, providing an objective evaluation standard.
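The four loss terms in (3) are standard components of style transfer training, and their general form can be sketched as below. This is a minimal NumPy illustration of the generic formulations (Gram-matrix style loss, feature-space content loss, L1 reconstruction loss, non-saturating adversarial loss); the exact losses, weights, and discriminator in the thesis are not specified here, so all weight values and function names are placeholders.

```python
import numpy as np

def gram_matrix(feat):
    # feat: (C, H*W) array of flattened feature maps; the Gram matrix
    # captures channel-wise correlations, i.e. style statistics.
    c, n = feat.shape
    return feat @ feat.T / (c * n)

def content_loss(f_gen, f_content):
    # MSE between feature maps of the generated and content images.
    return float(np.mean((f_gen - f_content) ** 2))

def style_loss(f_gen, f_style):
    # MSE between Gram matrices of generated and style feature maps.
    return float(np.mean((gram_matrix(f_gen) - gram_matrix(f_style)) ** 2))

def reconstruction_loss(x_rec, x):
    # L1 pixel loss between a reconstructed image and the original.
    return float(np.mean(np.abs(x_rec - x)))

def adversarial_loss(d_fake):
    # Non-saturating generator loss, given discriminator scores in (0, 1).
    eps = 1e-8
    return float(-np.mean(np.log(d_fake + eps)))

def total_loss(f_gen, f_content, f_style, x_rec, x, d_fake,
               w_adv=1.0, w_rec=10.0, w_content=1.0, w_style=5.0):
    # Weighted sum of the four losses; the weights are illustrative only.
    return (w_adv * adversarial_loss(d_fake)
            + w_rec * reconstruction_loss(x_rec, x)
            + w_content * content_loss(f_gen, f_content)
            + w_style * style_loss(f_gen, f_style))
```

In practice the feature maps would come from a pretrained network such as VGG, and `d_fake` from the conditional discriminator; here plain arrays stand in for both.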
Keywords/Search Tags:deep learning, style transfer, neural network, loss function