
Research on Event-Based and RGB Image De-Occlusion Imaging

Posted on: 2024-02-14    Degree: Master    Type: Thesis
Country: China    Candidate: Z Y Dai    Full Text: PDF
GTID: 2568307103973899    Subject: Control Science and Engineering
Abstract/Summary:
Removing foreground occlusions from images is a popular task in computer vision. Synthetic aperture imaging (SAI) methods aim to recover a complete image of the scene from multi-view occluded frames. Traditional SAI methods accomplish de-occlusion by manually extracting feature points, but they perform poorly under dense occlusion. Deep-learning-based SAI methods use neural networks to remove occlusions, yet the limitations of conventional cameras still prevent them from removing occlusions completely. With the development of event cameras, event-based SAI methods have emerged: event data carries little redundancy and largely overcomes the interference of occluders. Nevertheless, event data contains less scene information, so the reconstructed images suffer from unclear texture details. In addition, no open-source event-based occlusion dataset exists.

Starting from these problems in the de-occlusion task, this thesis constructs a large-scale occlusion dataset and proposes an event-based de-occlusion algorithm. The algorithm is then optimized into a de-occlusion model based on both the event and RGB image modalities, which outperforms the current best de-occlusion algorithms. The main work of this thesis is as follows:

(1) There is no public dataset for occlusion removal that provides both event data and RGB image data. We therefore collect a large-scale image occlusion dataset named Occlusion-400, containing 400 groups of data with both event streams and RGB frames and using fences and baffles as the occlusions. Occlusion-400 is the largest available occlusion dataset containing both events and frames. Its occlusion scenes are diverse and complex, posing a significant challenge to de-occlusion algorithms.

(2) An event-based de-occlusion algorithm is proposed. The algorithm disperses foreground occlusions by refocusing the event data, and computes event frames using an event representation based on an exponential decay factor. The quality of the reconstructed images is further enhanced by adding pluggable multi-scale convolution kernel modules to the generative network. In experiments on the Occlusion-400 dataset, the proposed algorithm achieves better results than the current best event-based de-occlusion method.

(3) Although the event-based de-occlusion algorithm successfully removes dense foreground occlusions, the texture detail of the reconstructed images still needs improvement. We therefore optimize the algorithm above to exploit both event data and RGB images jointly, and propose an event- and RGB-image-based de-occlusion algorithm named EF-SAI. The algorithm first applies an adaptive refocusing algorithm to refocus both the RGB images and the event data, then segments the event data according to the image timestamps and represents the segments as event frames. The two modalities are fed into a generative network consisting, in turn, of a two-branch feature extraction module, a feature fusion module, and an image reconstruction module. Comparative experiments against the current best de-occlusion methods verify the superiority of the proposed EF-SAI algorithm.
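The refocus-then-represent step described in (2) can be sketched roughly as follows. This is a minimal illustration, not the thesis's actual implementation: the pure horizontal camera motion, the function names, and the decay constant `alpha` are all assumptions made for the example. The idea is that warping event coordinates toward a reference time aligns events from the target depth while dispersing occluder events, and that an exponential decay weight makes events near the reference time dominate the accumulated frame.

```python
import numpy as np

def refocus_events(xs, ts, cam_speed, depth, t_ref=0.0):
    """Warp event x-coordinates so events originating at the target
    depth align at the reference time t_ref.

    Assumes the camera translates horizontally at `cam_speed`;
    events from other depths are dispersed rather than aligned.
    """
    return xs - cam_speed * (ts - t_ref) / depth

def event_frame_exp_decay(xs, ys, ts, ps, H, W, t_ref=0.0, alpha=20.0):
    """Accumulate event polarities into an H x W frame, weighting each
    event by an exponential decay factor exp(-alpha * |t - t_ref|)."""
    frame = np.zeros((H, W), dtype=np.float32)
    w = np.exp(-alpha * np.abs(ts - t_ref))
    xi = np.clip(np.round(xs).astype(int), 0, W - 1)
    yi = np.clip(np.round(ys).astype(int), 0, H - 1)
    # np.add.at handles repeated pixel indices correctly (unbuffered add)
    np.add.at(frame, (yi, xi), ps * w)
    return frame

# Toy usage: two events at pixel x=5, the later one shifted by refocusing
xs = np.array([5.0, 5.0]); ys = np.array([3, 3])
ts = np.array([0.0, 0.1]); ps = np.array([1.0, -1.0])
xr = refocus_events(xs, ts, cam_speed=10.0, depth=2.0)  # -> [5.0, 4.5]
frame = event_frame_exp_decay(xr, ys, ts, ps, H=8, W=8)
```

In a full pipeline the resulting event frames would then be fed to the generative network for reconstruction.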
Keywords/Search Tags:image de-occlusion, event camera, synthetic aperture imaging, dataset