
Design And Implementation Of Dynamic Background Synthesis And Coordination System

Posted on: 2024-01-30
Degree: Master
Type: Thesis
Country: China
Candidate: S Z Li
Full Text: PDF
GTID: 2568306944463004
Subject: Computer technology
Abstract/Summary:
Short videos are now widely disseminated, and news and teaching-sharing short videos are increasingly prevalent. Producing such videos usually requires compositing foreground material from different sources onto a background video, which can leave the foreground and background inconsistent in color and appearance. Unlike the static virtual backgrounds of conferencing systems, dynamic backgrounds change in more complex ways and may have blurred edges, which makes foreground-background harmonization more challenging. Existing professional editing software offers no dedicated solution and forces users to adjust frame by frame, which is time-consuming and laborious. With the rapid development of deep learning, many methods now handle color-inconsistent composite images well. However, due to limits on computing power and algorithms, image-level methods are usually applied to each video frame independently for automated processing, and the resulting videos exhibit obvious flicker and unsatisfactory visual quality.

First, a hybrid-loss-guided video harmonization algorithm is proposed to improve the harmony between foreground and background in dynamic-background composite videos. It exploits temporal information to reduce jitter in the processed results and optimizes edge feature extraction with guided filtering to improve harmonization quality. The model comprises a feature extractor, a guided filtering module, a cascaded parameter regressor, and six image filters (white balance, brightness, contrast, saturation, highlights, and shadows), decomposing harmonization into multiple overlapping image-editing subtasks. The CIELAB color difference formula is used to compute the inter-frame chromatic difference, and an L2 loss measures the similarity between the harmonized result and the input frame content; the color consistency loss and the content loss are summed in a 3:2 ratio to form the hybrid loss, which preserves spatial perception while enhancing temporal consistency. Experimental results show that, given composite video frames and the corresponding masks, the algorithm produces harmonized results that are consistent within frames and smooth and stable across frames, with faster inference than previous pixel-level harmonization methods. The model effectively eliminates the flicker that arises from frame-by-frame harmonization.

Second, to fill the gap in editing tools, this thesis integrates the above algorithm into a dynamic background synthesis and harmonization system. Users select a segmentation target through simple interactions; the system then intelligently extracts the foreground from subsequent frames based on the first-frame mask and naturally places the segmented object into a new background video. The system also provides editing functions such as duration editing, video cropping, and GIF generation, making the whole workflow smoother and effectively improving video editing efficiency. With broad applicability to short-video production and social-media dissemination, the system is expected to become an important technology in the video editing field.
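The hybrid loss described above can be sketched as follows. This is a minimal illustration, not the thesis implementation: it assumes RGB frames in [0, 1], uses the CIE76 ΔE*ab formula for the inter-frame color term (the abstract does not specify which ΔE variant is used), and applies the stated 3:2 weighting directly as integer coefficients.

```python
import numpy as np

def rgb_to_lab(rgb):
    """Convert sRGB values in [0, 1] to CIELAB (D65 white point)."""
    srgb = np.clip(rgb, 0.0, 1.0)
    # sRGB -> linear RGB (inverse gamma)
    linear = np.where(srgb <= 0.04045, srgb / 12.92,
                      ((srgb + 0.055) / 1.055) ** 2.4)
    # linear RGB -> XYZ (sRGB primaries, D65)
    m = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = linear @ m.T
    xyz = xyz / np.array([0.95047, 1.0, 1.08883])  # normalize by white point
    # XYZ -> Lab nonlinearity
    eps = (6.0 / 29.0) ** 3
    f = np.where(xyz > eps, np.cbrt(xyz),
                 xyz / (3 * (6.0 / 29.0) ** 2) + 4.0 / 29.0)
    L = 116.0 * f[..., 1] - 16.0
    a = 500.0 * (f[..., 0] - f[..., 1])
    b = 200.0 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def hybrid_loss(result_t, result_prev, input_t):
    """Hybrid loss: mean per-pixel CIE76 deltaE between consecutive
    harmonized frames (color consistency) plus mean L2 distance to the
    input frame (content), summed in a 3:2 ratio."""
    delta_e = np.linalg.norm(rgb_to_lab(result_t) - rgb_to_lab(result_prev),
                             axis=-1)
    color_loss = delta_e.mean()
    content_loss = np.mean((result_t - input_t) ** 2)
    return 3.0 * color_loss + 2.0 * content_loss  # 3:2 weighting
```

When consecutive harmonized frames are identical and match the input, both terms vanish and the loss is zero; any inter-frame color drift raises the ΔE term, which is what penalizes flicker during training.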
Keywords/Search Tags:video harmonization, deep learning, guided filter, color consistency, video editing