In today's society, people receive and process massive amounts of information every day, most of it digital, and digital images have become the primary carrier through which people publish and receive information. A digital image is composed of a series of digitized pixels; because these numerical values can be modified at will, the image itself can be processed at will. Although image processing shines in photography, advertising, and other fields, the malicious use of image-processing technology can fabricate facts and mislead public opinion, and image tampering falls into this category. The blind forensics of tampered images has therefore attracted increasing attention. Tampered-image detection algorithms based on deep learning have made great progress, but many of them require pre-processing to extract the intrinsic tampering features, and they focus only on detecting and locating tampered regions while ignoring extraction of the tampering mask.

To address this problem, this paper proposes a tampered-image detection, localization, and mask-extraction network that combines Faster R-CNN with a Fully Convolutional Network (FCN). The network uses a cascaded RPN to correct the misalignment between feature maps and anchors found in the traditional RPN. To increase the network's sensitivity to background tampering, bilinear interpolation replaces the max-pooling operation in the RoI pooling layer. Finally, bounding-box regression and classification are performed, and the regressed bounding boxes together with the features of the cascaded RPN are fed into the FCN to extract the tampering mask. Because tampering datasets are scarce, the model is first pre-trained on a tampering dataset built from COCO data and then fine-tuned and tested on three standard datasets: CASIA, COVER, and Columbia.

… during generation and enhances the quality of font generation. (2) From the perspective of color and texture conversion, this paper proposes an artistic-font generation network combining style and structure discriminators (ArtFontNet). Built on the FTFNet network, a style discriminator and a structure discriminator jointly supervise and guide the generator: the style discriminator supervises the color and texture information of the whole font image, while the structure discriminator extracts the glyph structure and texture distribution of the generated image with the Canny edge-detection operator and guides the generator to produce more realistic and complete rendering effects. In the generator's upsampling layers, resize-convolution, which combines bilinear interpolation with convolution, is used instead of deconvolution; this suppresses artifacts and checkerboard effects and enhances the detail fidelity and style accuracy of the generated artistic fonts.

In the experiments, the two proposed models are compared with existing methods and analyzed along two dimensions, visual evaluation and objective evaluation. Both the stroke details and the glyph structures of the generated fonts are greatly improved, which shows that the proposed models perform remarkably well on font-generation tasks and have good application value.
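For illustration, the bilinear RoI-feature extraction described above can be sketched with torchvision's roi_align, which samples features by bilinear interpolation instead of the hard max pooling of classic RoI pooling. This is a minimal sketch under that assumption, not the authors' implementation; the tensor shapes, stride, and box coordinates are illustrative.

```python
import torch
from torchvision.ops import roi_align

feat = torch.randn(1, 256, 50, 50)  # backbone feature map (N, C, H, W)
# one RoI in (batch_index, x1, y1, x2, y2) format, in input-image coordinates
rois = torch.tensor([[0, 32.0, 48.0, 160.0, 200.0]])

pooled = roi_align(
    feat, rois,
    output_size=(7, 7),      # fixed-size output fed to the box head
    spatial_scale=1.0 / 16,  # feature-map stride relative to the input image
    sampling_ratio=2,        # bilinear samples per output bin
    aligned=True,            # half-pixel correction for exact alignment
)
print(pooled.shape)          # torch.Size([1, 256, 7, 7])
```

Because every sampled value is interpolated from its four neighbors rather than taken as a bin-wise maximum, gradients flow to all contributing feature locations, which is consistent with the stated goal of making the network more sensitive to subtle tampering traces.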
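The edge input consumed by the structure discriminator can likewise be sketched with OpenCV's Canny operator; the file name and thresholds below are hypothetical placeholders, not values from the paper.

```python
import cv2

# Hypothetical input path; thresholds are illustrative, not the paper's values.
glyph = cv2.imread("generated_glyph.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(glyph, threshold1=100, threshold2=200)  # binary stroke-contour map
cv2.imwrite("glyph_edges.png", edges)
```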
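Finally, the resize-convolution upsampling that replaces deconvolution in the generator can be sketched as a generic PyTorch module; the channel widths and kernel size here are assumptions for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ResizeConv(nn.Module):
    """Upsample by bilinear interpolation, then convolve. Unlike strided
    transposed convolution, the resize spreads each input pixel evenly, so
    adjacent outputs receive overlapping kernel support and the uneven-overlap
    checkerboard pattern is suppressed."""
    def __init__(self, in_ch, out_ch, scale=2):
        super().__init__()
        self.up = nn.Upsample(scale_factor=scale, mode="bilinear",
                              align_corners=False)
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(self.up(x))

x = torch.randn(1, 64, 32, 32)
print(ResizeConv(64, 32)(x).shape)  # torch.Size([1, 32, 64, 64])
```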