Ink painting is a quintessential form of Chinese culture and art, and it is now active on the world stage as a representative of the Chinese image. However, it is difficult for non-professionals to recognize and reproduce a specific ink style, which may require a long period of learning and training. Combining traditional art with computer technology enables the automatic generation of ink-painting styles, which effectively lowers the barrier to artistic creation and allows more people to know and appreciate ink art. It also has significant theoretical and practical value for expanding the scope of computer applications and promoting traditional national culture. In recent years, stylization algorithms based on deep neural networks have achieved significant breakthroughs in image style generation and recognition by extracting high-level semantic features of images; existing neural network methods, however, cannot achieve ideal results for ink-painting styles. As an art form unique to China, ink painting differs markedly from Western painting in content, technique, and tools, so existing neural network algorithms cannot effectively learn and extract ink-style features. This thesis therefore aims to address the existing problems of deep-neural-network-based ink style generation and recognition. The main research work and innovations are summarized as follows:

1. To address the poor structure and weak representation of the desired style in ink-painting images generated by existing neural network methods, we propose an ink-and-wash style transfer method based on continuous structural-similarity patch matching. First, we introduce the structural similarity index (SSIM) to measure the similarity between all content patches and style patches in the activation space. A local style-patch selection procedure is then applied to maximize the utilization of all ink-style patches; at the same time, we constrain the spatial position of the content image to make the swapped style patches more continuous. Finally, the stylized image is obtained through an efficient feed-forward inverse network. To allow the feed-forward network to support a wide range of images, a database containing 80,000 natural images and 40,000 ink-painting images was established to train this inverse network. Experimental results show that the method is fast, achieves consistent ink style transfer, and obtains better ink-style effects and higher computational efficiency than competing methods.

2. To address the flickering and spatiotemporal discontinuity in ink-painting-style videos generated by existing neural network methods, we propose an ink-painting style video transfer method based on continuous optical-flow patches. First, we build content-patch swap weights for style patches based on the optical flow field between neighboring content activation spaces. A correction stage is then introduced to fix the errors produced by the optical-flow-based weight computation and to determine the optimal matching between all content patches and their corresponding style patches. Finally, a pre-trained feed-forward inverse network rapidly transforms the resulting activation space into the final stylized image. Experimental results show that our method preserves the perception of the ink-and-wash style in the spatial structure while efficiently generating high-quality, temporally coherent stylized results.

3. To address the low saturation and unclear edges of ink-painting colorization based on existing neural network methods, we develop a colorization method for ink paintings based on a conditional generative adversarial network. First, we extract low-dimensional local features containing ink-painting stroke information and feed them into the generator network. We then fine-tune a pre-trained VGG-19 classification network on an ink-painting data set to obtain high-dimensional semantic features of ink painting, and stack these high-dimensional features into the generator to enhance the global accuracy of colorization. By discriminating between generated colorized images and real images at the patch level, with a PatchGAN as the discriminator network, we generate richly colored images while reducing defective areas. In the experiments, comparative evaluations are carried out from both subjective and objective perspectives; our method outperforms the comparison methods in color-overflow control, color richness, and artistic expression.

4. To address the low accuracy and narrow applicability of ink-painting style classification based on existing neural network methods, we present a new ink-painting style classification method based on a multi-branch deep residual network. First, we use the high-level layers of a convolutional neural network to extract the Gram matrix of the image, which represents the style features of the artwork. Then, according to the characteristics of Chinese ink painting, we apply an edge-detection algorithm based on a fully convolutional network with deep supervision to obtain the stroke features of the painting. Finally, the style feature map, stroke feature map, and original image are input into the three-branch deep residual network to train and obtain classification results by genre and author. Experiments show that the proposed ink-painting style classification method outperforms representative existing methods in both recall and precision.
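The Gram-matrix style feature underlying contribution 4 can be sketched as follows. This is a minimal illustration, not the thesis implementation: the function name and normalization are assumptions, and in practice the feature map would come from a high-level layer of a pre-trained network such as VGG rather than from random data.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a C x H x W activation map.

    G[i, j] is the inner product between the flattened activations of
    channels i and j; such channel co-occurrence statistics are a common
    proxy for artistic style in neural style transfer.
    """
    c, h, w = features.shape
    flat = features.reshape(c, h * w)      # one row per channel
    return flat @ flat.T / (c * h * w)     # normalize by feature-map size

# Toy check on a random 3-channel, 4x4 "activation map"
feats = np.random.default_rng(0).standard_normal((3, 4, 4))
g = gram_matrix(feats)
print(g.shape)  # (3, 3): one entry per pair of channels
```

Because the spatial dimensions are summed out, the Gram matrix captures which feature channels co-activate regardless of where in the image they fire, which is why it characterizes style rather than content.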