Qin Opera is a treasure of Chinese culture that provides valuable clues for exploring Han culture and pursuing Chinese classical art. Although Qin Opera is today part of China's intangible cultural heritage, its audience is gradually shrinking and aging. To change this situation, this paper proposes deep-learning-based methods for portrait cartoonization and image animation generation and applies them to Qin Opera characters. By integrating traditional Qin Opera culture with modern information technology, cultural innovation of Qin Opera can be realized and this ancient art can be given new vitality. The main contributions of this paper are as follows:

(1) In the research on portrait cartoonization of Qin Opera characters, the original image translation model U-GAT-IT can only convert images of a single category, and the generated cartoon images easily lose the identity information of the original face. This paper therefore proposes a Multi-Class U-GAT-IT portrait cartoonization method that achieves multi-class image translation. First, an autoencoder and category labels are used to extract image category features, which are fused with the style features generated by U-GAT-IT to complete multi-category, multi-style image conversion. Second, two stacked up/down-sampling convolution blocks are added to the model to strengthen its feature extraction and reconstruction capabilities. Finally, inspired by the AdaLIN normalization function, this paper proposes an Indirect-AdaLIN normalization function and applies it in the feature fusion module, so that the generated cartoon images better retain the semantic content of the input images and preserve the original facial identity. Experimental results show that, compared with the U-GAT-IT model, the proposed method achieves multi-class portrait-to-cartoon conversion on a small amount of unpaired data. In addition, the model obtains lower FID and KID values on the first Qinqiang cartoon portrait dataset (face2qincartoon) and on public image translation datasets (such as selfie2anime and horse2zebra). This shows that the proposed model is not only suitable for portrait-to-cartoon conversion but also generalizes to other types of image translation.

(2) In the research on face image animation generation for Qin Opera characters, the original image animation model Monkey-Net performs poorly on face images: during animation, the face deforms heavily and the facial features tend to shift. This paper proposes Face-MonkeyNet, a face image animation generation method based on the Monkey-Net network. A face keypoint detection model is added to capture facial keypoints, which are fed into the dense motion network to generate dense optical flow during facial motion; a motion transfer network then combines the dense optical flow maps with the appearance information extracted from the source image to generate the target video frames. Experimental results show that, compared with Monkey-Net, the proposed method produces high-quality animations for both Qinqiang cartoon portraits and ordinary face images: the target video restores the facial expressions and movements of the driving video as faithfully as possible, the facial features remain accurately positioned throughout the video, and the expressions are coherent and smooth.

(3) To display the research results more intuitively, this paper designs and implements a portrait cartoonization and image animation system for Qinqiang characters. With simple, clear operation, the system can not only complete one-click conversion from a face image to a Qinqiang cartoon portrait but also fully demonstrate the animation of the Qinqiang cartoon portrait according to a specified driving video.
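The abstract does not give the formula for Indirect-AdaLIN; to make the underlying idea concrete, the following is a minimal NumPy sketch of the original AdaLIN normalization from U-GAT-IT on which it is based: a learnable, channel-wise mix of instance normalization and layer normalization, scaled and shifted by parameters (gamma, beta) computed from the style features. All names and shapes here are illustrative, not the paper's implementation.

```python
import numpy as np

def adalin(x, gamma, beta, rho, eps=1e-5):
    """AdaLIN-style normalization on a single feature map.

    x:     feature map of shape (C, H, W)
    gamma: per-channel scale from the style code, shape (C,)
    beta:  per-channel shift from the style code, shape (C,)
    rho:   per-channel mixing weight in [0, 1], shape (C,)
           (1 -> pure instance norm, 0 -> pure layer norm)
    """
    # Instance normalization: statistics per channel over (H, W)
    in_mean = x.mean(axis=(1, 2), keepdims=True)
    in_var = x.var(axis=(1, 2), keepdims=True)
    x_in = (x - in_mean) / np.sqrt(in_var + eps)

    # Layer normalization: statistics over all of (C, H, W)
    ln_mean = x.mean(keepdims=True)
    ln_var = x.var(keepdims=True)
    x_ln = (x - ln_mean) / np.sqrt(ln_var + eps)

    # Adaptive mix, then style-dependent affine transform
    rho = np.clip(rho, 0.0, 1.0).reshape(-1, 1, 1)
    out = rho * x_in + (1.0 - rho) * x_ln
    return gamma.reshape(-1, 1, 1) * out + beta.reshape(-1, 1, 1)
```

In the actual model, rho is a trained parameter and (gamma, beta) are produced by fully connected layers from the fused style/category features, which is what lets the normalization carry style information into the decoder.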
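The keypoint-to-dense-flow idea behind the animation pipeline can be sketched as follows. This is a deliberately simplified illustration, not the paper's dense motion network: sparse displacements between driving-frame keypoints and source-image keypoints are spread into a dense flow field with Gaussian weights, and the flow is then used to backward-warp the source image. All function names and parameters are hypothetical.

```python
import numpy as np

def dense_flow_from_keypoints(src_kp, drv_kp, h, w, sigma=0.1):
    """Spread sparse keypoint motion into a dense flow field.

    src_kp, drv_kp: (K, 2) keypoints as normalized (x, y) in [0, 1].
    Returns a (h, w, 2) flow: for each target pixel, the offset
    (in normalized coordinates) at which to sample the source image.
    """
    ys, xs = np.meshgrid(np.linspace(0, 1, h), np.linspace(0, 1, w),
                         indexing="ij")
    grid = np.stack([xs, ys], axis=-1)          # (h, w, 2)
    disp = src_kp - drv_kp                      # per-keypoint offsets
    flow = np.zeros((h, w, 2))
    weights = np.zeros((h, w))
    for k in range(len(src_kp)):
        # Gaussian influence of keypoint k around its driving position
        d2 = ((grid - drv_kp[k]) ** 2).sum(axis=-1)
        wk = np.exp(-d2 / (2.0 * sigma ** 2))
        flow += wk[..., None] * disp[k]
        weights += wk
    return flow / np.maximum(weights[..., None], 1e-8)

def warp(image, flow):
    """Backward-warp `image` by `flow` with nearest-neighbor sampling."""
    h, w = image.shape[:2]
    ys, xs = np.meshgrid(np.linspace(0, 1, h), np.linspace(0, 1, w),
                         indexing="ij")
    sx = np.clip(np.round((xs + flow[..., 0]) * (w - 1)).astype(int), 0, w - 1)
    sy = np.clip(np.round((ys + flow[..., 1]) * (h - 1)).astype(int), 0, h - 1)
    return image[sy, sx]
```

In Face-MonkeyNet the analogous roles are played by learned modules: the face keypoint detector supplies `src_kp`/`drv_kp`, the dense motion network replaces the Gaussian spreading, and the motion transfer network combines the flow with source appearance features instead of warping raw pixels.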