| Facial inpainting is an important research topic in computer vision. Traditional inpainting methods rely mainly on the content of the image itself, propagating information from the intact regions of the image into the missing regions. Their results often suffer from problems such as obvious boundary artifacts, lack of detail, structural inconsistency, and blurred repaired content, and for complex facial image reconstruction the results are even less satisfactory. With the development of technology, deep learning has increasingly become a foundation of computer vision research and has brought tremendous progress to facial inpainting. In particular, since the generative adversarial network was proposed, its strong learning ability has substantially improved the quality of facial inpainting. However, owing to the complexity, diversity, and authenticity of faces, current facial inpainting techniques still face many unsolved problems and challenges, such as blurred repaired regions, skin-color differences, unreasonable facial structure, and inconsistent organ features. In this article, these problems are addressed by improving the generator and discriminator networks of a generative adversarial network model. The main work is as follows:

(1) Improvement of the generative adversarial network model. For the generator, the U-Net network is chosen as the basic structure; its unique architectural features and powerful feature extraction ability can effectively extract image information. At the same time, dilated convolution and DMF modules, which complement each other, are introduced into the U-Net model: they enlarge the receptive field of the model while controlling the number of model parameters, improving the model's learning ability. For the discriminator, a global discriminator is selected to supervise the overall structure of the generated image, ensuring the rationality of the overall image structure and improving the discrimination quality of the model; its discrimination process is also refined to keep the discriminator's theory and practice consistent. Finally, the Mish activation function is chosen to replace the original activation function in the network.

(2) Production of the datasets. To improve the repair effect of the model, a total of 6000 real facial images drawn from the FFHQ and CelebA datasets were selected. Because the two datasets contain images of different sizes, Photoshop was used to crop all images uniformly to 256 × 256 pixels. A random occlusion mask of 128 × 128 pixels was created and merged with the dataset to generate the missing-image dataset required in this article.

(3) Optimization of the loss function. In this paper, a joint loss combining the mean square error, the VGG feature-matching loss, the discriminator feature-matching loss, and the adversarial loss is used as the loss function of the model. Experimental validation shows that the accuracy of the model proposed in this paper is about 35%. The proposed method is then compared with two classical facial inpainting methods through qualitative and quantitative analysis, with the following conclusions. First, qualitative analysis demonstrates that this method has certain advantages in terms of overall structural rationality and consistency of contextual content. Second, quantitative analysis shows that the proposed method improves on four evaluation indicators: peak signal-to-noise ratio, structural similarity, image perceptual similarity, and mean square error. |
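The random-occlusion dataset construction described in the dataset-production step can be sketched as follows. This is a minimal illustration only, assuming a single axis-aligned 128 × 128 square hole placed at a uniformly random position inside a 256 × 256 image; the function names (`random_square_mask`, `apply_mask`) are hypothetical and not taken from the thesis.

```python
import numpy as np

def random_square_mask(img_size=256, hole_size=128, rng=None):
    """Binary mask (1 = missing region) with one randomly placed square hole.

    Simplifying assumption: a single axis-aligned square hole, as one
    possible form of the random occlusion masks described above.
    """
    rng = np.random.default_rng(rng)
    mask = np.zeros((img_size, img_size), dtype=np.float32)
    top = int(rng.integers(0, img_size - hole_size + 1))
    left = int(rng.integers(0, img_size - hole_size + 1))
    mask[top:top + hole_size, left:left + hole_size] = 1.0
    return mask

def apply_mask(image, mask):
    """Zero out the masked pixels of an (H, W, C) image to form the damaged input."""
    return image * (1.0 - mask)[..., None]
```

Pairing each cropped 256 × 256 face with such a masked copy yields the (damaged input, ground truth) training pairs that the inpainting model is trained on.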