Artistic video stylization transforms a given video into different artistic styles and is widely applicable to video entertainment, movie and animation production. However, existing video stylization algorithms can simulate only a limited range of artistic styles, and although some algorithms achieve multi-style processing, they are complex and difficult to implement. To address this problem, we propose a multi-style video artistic processing algorithm based on texture advection. We use flow-field-guided texture synthesis to compute a texture layer that represents the artistic style; different artistic styles are obtained simply by changing the input texture sample. Through controllable texture synthesis guided by the flow field, we can better simulate video styles, captured as changes in the synthesized textures. Advecting the texture layer under the guidance of the optical flow field causes texture distortion between frames. To solve this problem, we propose a texture repair method based on local texture synthesis, which makes the stylized video temporally smooth and improves the quality of the stylization. Furthermore, to improve the speed of the algorithm, we accelerate the video abstraction stage on the GPU using the CUDA parallel computing framework. We propose a fast parallel computation for video abstraction that is substantially faster than the corresponding CPU-based computation. We produce multiple video artistic styles with satisfactory experimental results, including oil painting, watercolor painting and stylized line drawing. Finally, we implement a multi-style video stylization system based on texture advection. The system receives the video, a sample texture and optical flow field data as input.
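To make the advection step concrete, the following is a minimal sketch of backward-warping a grayscale texture layer along a per-pixel optical flow field. The function name and the flow convention (flow[y, x] = (dx, dy) in pixels) are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def advect_texture(texture, flow):
    """Advect a 2-D texture layer one frame forward by backward-warping it
    along an optical flow field, where flow[y, x] = (dx, dy) in pixels."""
    h, w = texture.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    # Backward warp: sample the previous texture layer at (x - dx, y - dy).
    src_x = np.clip(xs - flow[..., 0], 0, w - 1)
    src_y = np.clip(ys - flow[..., 1], 0, h - 1)
    # Bilinear interpolation of the four neighbouring texels.
    x0, y0 = np.floor(src_x).astype(int), np.floor(src_y).astype(int)
    x1, y1 = np.minimum(x0 + 1, w - 1), np.minimum(y0 + 1, h - 1)
    fx, fy = src_x - x0, src_y - y0
    top = texture[y0, x0] * (1 - fx) + texture[y0, x1] * fx
    bot = texture[y1, x0] * (1 - fx) + texture[y1, x1] * fx
    return top * (1 - fy) + bot * fy
```

Because sub-pixel flow vectors force interpolation between texels, repeated advection blurs and distorts the texture layer over time, which is exactly the degradation the local texture repair step is designed to fix.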
It outputs the stylized video through texture layer synthesis, advection and repair.

In summary, the main contributions of this paper are as follows:

(1) Texture-advection-based multi-style video stylization: We transfer and synthesize different styles of sample texture onto the texture layer, which is combined with the source video frames to produce the desired stylization. A direction field is used to achieve "anisotropic" texture synthesis, producing high-quality video stylization that captures changes of direction.

(2) Local texture inpainting to keep temporal coherence: We design and implement a local texture synthesis method for fast and efficient repair of the limited distorted regions in the texture layer, ensuring the temporal coherence of the stylized video.

(3) GPU-accelerated texture inpainting and video abstraction: To reduce processing time, we accelerate texture inpainting on the GPU. We also perform the morphological operations in parallel on the GPU with CUDA to accelerate video abstraction, significantly improving its efficiency.

(4) Design and implementation of a multi-style video stylization system: We use the Qt framework to build the interface of our multi-style video processing system. The system uses the CUDA parallel processing framework to accelerate the stylization computation on programmable graphics hardware.
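The morphological operations in the abstraction stage are a natural fit for GPU parallelism because each output pixel depends only on a fixed neighbourhood of the input. The sketch below shows this structure in plain Python with numpy (function names and the square structuring element are illustrative assumptions); in the CUDA version each iteration of the pixel loop would become one thread:

```python
import numpy as np

def dilate(img, radius=1):
    """Grayscale dilation with a (2*radius+1)-square structuring element.
    Each output pixel is independent of the others, so the double loop
    maps directly onto one CUDA thread per pixel."""
    h, w = img.shape
    pad = np.pad(img, radius, mode='edge')
    out = np.empty_like(img)
    for y in range(h):            # in CUDA: (y, x) come from blockIdx/threadIdx
        for x in range(w):
            out[y, x] = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1].max()
    return out

def erode(img, radius=1):
    """Erosion via duality: erode(img) = -dilate(-img)."""
    return -dilate(-img, radius)

def morph_close(img, radius=1):
    """Closing (dilate then erode), e.g. to fill small gaps in edge maps."""
    return erode(dilate(img, radius), radius)
```

Since every pixel reads only a small neighbourhood and writes one value, there are no data dependencies between threads, which is why the GPU version scales well with image size.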