Rain is a common severe weather phenomenon that seriously degrades video and image quality and hinders outdoor vision systems from effectively extracting natural scene content. Driven by the growing demand for high-quality video, video deraining has attracted wide attention in recent years and has become an active research topic. Existing video deraining methods can be roughly divided into two categories: traditional optimization-based modeling and deep learning-based approaches. Because video contains massive amounts of information, traditional model-based methods must impose complex prior constraints to describe the task; they involve too many parameters and are time-consuming. Although deep learning methods overcome some drawbacks of traditional models, they still make insufficient use of spatial and temporal information, which leads to the loss of background detail. To address these problems, we introduce spatial and temporal information parameters and establish a joint rain removal model based on a spatial-temporal prior. By solving this model iteratively with half-quadratic optimization, we derive a sequential deep unrolling framework. Combining traditional model priors with deep learning in a learnable network structure, we design three basic modules that solve the model, removing rain accurately while preserving background details. Extensive subjective and objective evaluations show that the proposed method significantly outperforms other state-of-the-art video deraining methods in both rain removal and background detail restoration, confirming the reliability and effectiveness of our framework.

To describe the distribution of rain more accurately and better restore background details, we further design a pure network method for video deraining, called the multi-frame deraining network with temporal rain decomposition and spatial structure guidance. The rain layer is decomposed into location and intensity levels to characterize rain streak features. A learnable network structure describes the distribution of rain, and a location guide map assists the single-frame deraining block. A spatial-temporal fusion module with a detail (edge) guidance map preserves spatial information in the temporal domain, restoring background details and improving frame quality. Extensive experiments demonstrate the superiority of our network over other state-of-the-art video deraining approaches, verifying its effectiveness. We also extend the network to the video denoising task, where it likewise achieves good restoration results, further verifying its generalization ability.
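The deep unrolling idea described above can be illustrated with a toy half-quadratic-splitting iteration for the additive decomposition O = B + R (observed frame = background + rain layer). This is only a minimal NumPy sketch under assumed priors: in the actual framework, the closed-form and proximal steps below are replaced by the three learnable network modules, and all function names, the quadratic B-update, and the sparse (soft-thresholding) rain prior here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np


def soft_threshold(x, tau):
    # Proximal operator of the L1 norm; stands in for a learned rain-prior module.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)


def hqs_derain(frame, n_iters=10, beta=1.0, tau=0.1):
    """Toy half-quadratic-splitting iteration for O = B + R.

    Alternately updates the background B and the rain layer R. Unrolling a
    fixed number of these iterations, with each step implemented as a
    network block, yields a sequential deep unrolling framework.
    """
    B = frame.copy()
    R = np.zeros_like(frame)
    for _ in range(n_iters):
        # B-update: quadratic data term plus quadratic coupling to the
        # previous estimate -> simple weighted averaging.
        B = (frame - R + beta * B) / (1.0 + beta)
        # R-update: sparse prior on rain streaks -> soft-thresholding of
        # the residual (rain streaks are sparse and bright).
        R = soft_threshold(frame - B, tau)
    return B, R


rng = np.random.default_rng(0)
# Synthetic 8x8 "frame": smooth background plus sparse bright streak pixels.
O = np.clip(0.5 * rng.random((8, 8)) + 0.6 * (rng.random((8, 8)) > 0.9), 0.0, 1.0)
B, R = hqs_derain(O)
```

By construction, each soft-thresholding step leaves the reconstruction residual O - B - R no larger than tau per pixel, so the decomposition stays consistent with the observation while the sparsity of R concentrates the streak energy in the rain layer.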