
Research On Deep Neural Networks For Multi-focus Image Fusion

Posted on: 2020-07-19
Degree: Master
Type: Thesis
Country: China
Candidate: X P Guo
Full Text: PDF
GTID: 2428330572480082
Subject: Pattern Recognition and Intelligent Systems

Abstract/Summary:
Because the depth of field (DoF) of a camera's optical lens is limited, the images captured by such devices cannot render every object in the scene sharply. Multi-focus image fusion (MFF) is a widely used technique that merges two or more partially focused images of the same scene into a single all-in-focus image in which every object is sharp. The key to this task is accurate detection of the focused regions in the source images. To this end, this thesis studies MFF based on deep neural networks and proposes two novel MFF algorithms. The main work and contributions of this thesis are as follows.

First, previous convolutional neural network (CNN) based methods suffer from several problems: their training datasets do not cover the case in which the foreground and background of natural multi-focus images are focused separately; the ground truth is not labelled at the pixel level; and the output score map does not match the size of the input, so the focused-region detection task is not truly accomplished. To address these problems, a novel MFF algorithm based on a fully convolutional network (FCN) is proposed. Since no public large-scale multi-focus image dataset with ground truth currently exists, a method for synthesizing multi-focus image pairs is introduced, and a multi-focus image database is synthesized to train the FCN effectively. A single-branch FCN is then used to model MFF, and whole images are used to train it. The well-trained FCN performs pixel-level focused-region detection and requires no modification at test time. Furthermore, a fully connected conditional random field (CRF) is applied to refine the FCN output and improve fusion performance. Experimental results show that the proposed algorithm outperforms current mainstream multi-focus image fusion algorithms.

Second, existing deep-neural-network-based methods either ignore the matching relationship between the fusion decision map and the source images or lack accurate focused-region detection. To address this, a novel method based on a conditional generative adversarial network is proposed. Inspired by the image-to-image translation task, the method treats MFF as a translation from multiple input images to a single output image. To meet MFF's requirement of dual inputs and a single output, the encoder of the generator is designed as a Siamese network. To train the model more stably and to generate a high-quality confidence map indicating which source pixels are in focus, the least-squares generative adversarial network (LSGAN) loss is employed in place of the original GAN loss. In addition, a disk-shaped point spread function (PSF) is used to simulate defocus more realistically, and on this basis a large-scale multi-focus image dataset is synthesized to train the proposed network effectively. Fusion performance is further improved with a convolutional conditional random field (ConvCRF) optimization step. The experimental results show that the proposed algorithm detects focused regions better than current mainstream spatial-domain algorithms, and that it outperforms recent state-of-the-art methods in both visual perception and quantitative assessment.
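As a rough illustration (not the thesis code), the disk-PSF dataset synthesis and the pixel-level decision-map fusion described above can be sketched in NumPy. The function names, the blur radius, and the use of reflect padding are assumptions made for this example:

```python
import numpy as np

def disk_psf(radius):
    """Disk-shaped PSF: uniform weight inside a circle of the given radius."""
    r = int(np.ceil(radius))
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    kernel = (x**2 + y**2 <= radius**2).astype(np.float64)
    return kernel / kernel.sum()  # normalize so energy is preserved

def convolve2d(img, kernel):
    """Direct 2-D convolution with reflect padding (NumPy only)."""
    r = kernel.shape[0] // 2
    padded = np.pad(img, r, mode="reflect")
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(kernel.shape[0]):
        for dx in range(kernel.shape[1]):
            out += kernel[dy, dx] * padded[dy:dy + img.shape[0],
                                           dx:dx + img.shape[1]]
    return out

def synthesize_pair(sharp, mask, radius=3.0):
    """Create a multi-focus pair from an all-in-focus image `sharp`.

    `mask` marks the foreground (1) vs. background (0). In image A the
    background is defocused; in image B the foreground is defocused.
    """
    blurred = convolve2d(sharp, disk_psf(radius))
    img_a = mask * sharp + (1 - mask) * blurred        # foreground in focus
    img_b = (1 - mask) * sharp + mask * blurred        # background in focus
    return img_a, img_b

def fuse(img_a, img_b, decision_map):
    """Pixel-level fusion: take each pixel from whichever source is focused."""
    return decision_map * img_a + (1 - decision_map) * img_b
```

With a perfect decision map (equal to the synthesis mask), fusing the synthesized pair recovers the original all-in-focus image exactly, which is why such synthetic pairs give pixel-level ground truth for training.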
Keywords/Search Tags: Multi-focus image fusion, deep neural networks, fully convolutional neural network, conditional generative adversarial network, fully connected conditional random field, convolutional conditional random field, point spread function