Image style transfer is an important branch of computer vision and is now widely used in digital media, animation production, and other fields. In practice, two images are input into a style transfer model together: a feature extractor separately extracts the content structure of the content image and the style texture of the style image, and the stylized image is then synthesized. Traditional style transfer methods, however, cannot accomplish this task in real time and with high quality, whereas deep-learning-based methods offer both high speed and high quality. Studying deep-learning-based style transfer helps lower the threshold of artistic creation and enables everyone to participate in it. Several existing methods can perform arbitrary style transfer in real time, but the stylized images they generate suffer from problems such as content structure distortion, style confusion, and image blurring. To address these problems, this thesis improves the existing network structure and designs several methods to strengthen the content-structure control ability of deep learning style transfer models. The main content of this thesis is divided into the following parts:

(1) This thesis introduces the technical background of image style transfer and explains the necessity of studying it. Style transfer methods are divided into traditional methods and deep-learning-based methods, and the current state of research on deep learning style transfer is described in detail according to different implementations. The network structures and underlying principles of convolutional neural networks, generative adversarial networks (GANs), and the attention mechanism are also introduced.

(2) A style transfer model based on contour extraction and an attention mechanism is proposed, which achieves real-time, high-quality artistic style transfer within the basic framework of generative adversarial networks. A contour extraction module is designed into the network; it is used not only in the forward pass but also to compute a structure loss on the generated images. This loss updates the network parameters through backpropagation and guides training, which solves the problem of content-structure deformation in the generated stylized images. A high-order feature matching module is also introduced to improve the matching precision between style features and content features. Experimental results show that this method improves the quality of the generated stylized images, retains a clear content structure in the output, and increases the degree of stylization.

(3) A lightweight style transfer model based on saliency detection and a multi-scale attention mechanism is proposed. In this model, a mini encoder and a decoder extract and reconstruct image features respectively, reducing model parameters and complexity. To preserve the effectiveness of the lightweight network, a layer attention mechanism is introduced to fully extract low-level information from the encoder, and a monotonic system strategy is used to inject style signals into the network during the feature reconstruction phase. A style signal loss and a saliency loss are introduced to guide network training. Experimental results show that images generated by this method have a very complete content structure as well as fine and harmonious style texture.
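The abstract does not specify how the contour-based structure loss in part (2) is computed, so as an illustration only, the following is a minimal NumPy sketch of one plausible formulation: extract contour maps with Sobel filters (a hypothetical stand-in for the thesis's contour extraction module) and penalize the mean squared difference between the contours of the content image and the generated stylized image.

```python
import numpy as np

def sobel_contour(img):
    """Extract a contour (edge-magnitude) map from a grayscale image
    using 3x3 Sobel filters. Illustrative stand-in for the thesis's
    contour extraction module, not the actual method."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    pad = np.pad(img, 1, mode="edge")
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)  # gradient magnitude as the contour map

def structure_loss(content, stylized):
    """Mean squared difference between the contour maps of the content
    image and the stylized image; during training this term would be
    backpropagated to penalize content-structure deformation."""
    return float(np.mean((sobel_contour(content) - sobel_contour(stylized)) ** 2))
```

In a full model this scalar would be weighted and added to the adversarial and style losses before the backward pass; identical contours give a loss of zero, and structural distortions increase it.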