
Research On Pixel-Level Fast Fusion Methods For Multi-Source Images

Posted on: 2021-02-19    Degree: Doctor    Type: Dissertation
Country: China    Candidate: L X Zhang    Full Text: PDF
GTID: 1368330605453793    Subject: Computer application technology
Abstract/Summary:
Owing to the limitations of image acquisition equipment, a single image cannot fully capture the information of a scene, which motivates the idea of image fusion. Image fusion exploits the redundancy and complementarity of information between images to extract the salient feature information of the source images and merge it into a single comprehensive, clear image. The fused image improves spatial perception and facilitates subsequent analysis and processing, making it better suited to both human visual recognition and computer-based detection and classification. With the advance of science and technology, the fast pace of modern life, and the growing variety of image types, the real-time performance and general applicability of image fusion have become key issues. This dissertation designs and implements four fast image fusion methods for multi-focus image fusion, medical image fusion, and infrared and visible image fusion, considering three aspects: the accuracy of the extracted features, and the generality and real-time performance of the methods. The main research contents and innovations are as follows:

(1) To address the problem of block artifacts, an adaptive differential evolution algorithm based on the content relevance of neighborhoods is proposed. Adaptive partitioning is achieved by adaptively adjusting the scaling factor and crossover factor of the evolutionary algorithm, and the optimal block size is obtained iteratively by a heuristic search strategy. On this basis, a multi-focus image fusion method (DE-LP) combining the adaptive differential evolution algorithm with the Laplacian pyramid transform is designed. The approximation coefficients are divided into regions by the adaptive differential evolution algorithm, the focus of each region is measured by the sum-modified-Laplacian (SML) to form the fusion decision map, and the approximation coefficients are fused by a pixel-by-pixel weighted rule. The detail coefficients are fused by a strategy that combines the regional gradient energy with the optimal decision map. Experimental results show that DE-LP produces clear images free of block artifacts and edge discontinuities, and outperforms the compared methods in both subjective visual quality and objective evaluation.

(2) In view of the diversity of multi-source image features, an activity-level measurement based on the region-accumulation gradient contrast is proposed, built on the adaptive differential evolution algorithm, to accurately extract the structure and boundary information of an image. To achieve real-time fusion, a fast multi-source image fusion method based on this measure (RAGC) is designed. The optimal region size is determined by the adaptive differential evolution algorithm, and the region-accumulation gradient contrast of each pixel is computed. The initial decision map is then constructed by comparing the contrast of corresponding pixels in the two input images, and the final decision map is obtained after refinement by morphological operations and guided image filtering. Finally, the input images are fused by a pixel-by-pixel weighted rule. Experiments show that RAGC extracts features accurately and produces results with clear structure, alleviating detail blurring and block artifacts. RAGC is suitable for fusing various types of images, giving it a degree of generality, and it runs fast enough to meet real-time processing requirements. An illustrative sketch of this decision-map-driven weighted fusion is given below.
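The following is a minimal sketch of the decision-map pattern shared by DE-LP and RAGC: a block-wise activity measure (here the standard sum-modified-Laplacian) selects the sharper source per region, and the sources are then combined by a pixel-by-pixel weighted rule. The fixed block size stands in for the region size that the adaptive differential evolution search would select, and all names are illustrative rather than taken from the dissertation's code.

```python
# Block-wise SML focus decision and pixel-by-pixel weighted fusion (sketch).
import numpy as np

def modified_laplacian(img, step=1):
    """Modified Laplacian: |2I - I_left - I_right| + |2I - I_up - I_down|."""
    p = np.pad(img.astype(np.float64), step, mode="edge")
    c = p[step:-step, step:-step]
    return (np.abs(2 * c - p[step:-step, :-2 * step] - p[step:-step, 2 * step:]) +
            np.abs(2 * c - p[:-2 * step, step:-step] - p[2 * step:, step:-step]))

def block_decision_map(img_a, img_b, block=16):
    """1 where image A has the larger block-wise SML (i.e. is sharper), else 0."""
    ml_a, ml_b = modified_laplacian(img_a), modified_laplacian(img_b)
    h, w = img_a.shape
    decision = np.zeros((h, w))
    for i in range(0, h, block):
        for j in range(0, w, block):
            sml_a = ml_a[i:i + block, j:j + block].sum()
            sml_b = ml_b[i:i + block, j:j + block].sum()
            decision[i:i + block, j:j + block] = 1.0 if sml_a >= sml_b else 0.0
    return decision

def weighted_fusion(img_a, img_b, decision):
    """Pixel-by-pixel weighted combination driven by the decision map."""
    return decision * img_a + (1.0 - decision) * img_b
```

Usage amounts to `weighted_fusion(a, b, block_decision_map(a, b))`; in the actual methods the decision map is additionally refined (e.g. by morphology and guided filtering in RAGC) before the weighted combination.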
(3) To remove the small amount of artifacts that RAGC leaves in the fusion results of some medical images, a pulse-coupled neural network (PCNN) model is adopted for feature extraction, in which the total number of neuron firings determines the clarity of the image: the larger the firing total, the clearer the image. To improve the accuracy of the features extracted from multi-modal images, all parameters of the PCNN model are dynamically linked to the static features of each image, so that different parameters are set for different images and different features are extracted. To improve execution performance, the PCNN model is simplified with the Spiking Cortical Model, which both reduces the coupling of the model and cuts the number of parameters from nine to five. For multi-modal images, a fusion method based on this simplified PCNN model with automatically set parameters in the non-subsampled shearlet transform (NSST) domain is designed. The high-frequency coefficients are fused by the simplified PCNN model with automatically set parameters, using the total firing count of each coefficient, while the low-frequency coefficients are fused by a combination of regional energy and gradient energy. Experimental results show that this method eliminates the artifacts and outperforms the selected classical methods; the fused images are accurate in local detail, clear, and well contrasted. A sketch of such a firing-count activity measure follows this summary.

(4) To avoid the limitations of hand-crafted feature extraction, a CNN model is adopted to learn image features adaptively from large amounts of data, improving feature accuracy. To avoid the loss of spatial information, an improved CNN model based on up-sampling is proposed, consisting of six stacked layers of small convolutions. This multi-layer design enlarges the receptive field and preserves translation invariance while reducing the number of trainable parameters and improving computation speed. A fusion method based on the improved CNN model is proposed for multi-focus images: the model separates each input image into focused and defocused regions, and the focused regions are integrated by a pixel-by-pixel weighted fusion strategy to obtain the fused image. Experimental results show that the fused results are clear in detail, complete in structure, free of contrast distortion, and free of artifacts; the method effectively avoids grayscale discontinuities and similar problems and outperforms the traditional methods. With GPU acceleration, the method achieves parallel computing and improves execution efficiency.
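The following is a minimal sketch, under common assumptions, of how a Spiking Cortical Model (a simplified PCNN) can turn the total number of neuron firings into an activity measure for selecting high-frequency coefficients, as outlined in contribution (3). The function names, the parameter values (f, g, h, iteration count), and the 3x3 linking kernel are illustrative defaults, not the automatically set parameters derived from image statistics in the dissertation.

```python
# SCM firing-count activity measure for high-frequency coefficient fusion (sketch).
import numpy as np
from scipy.ndimage import convolve

def scm_firing_count(coeff, iterations=40, f=0.8, g=0.7, h=20.0, kernel=None):
    """Total firing count per coefficient after `iterations` SCM steps."""
    s = np.abs(coeff).astype(np.float64)
    s = s / (s.max() + 1e-12)                 # normalise stimulus to [0, 1]
    if kernel is None:
        kernel = np.array([[0.5, 1.0, 0.5],
                           [1.0, 0.0, 1.0],
                           [0.5, 1.0, 0.5]])  # linking weights to neighbours
    u = np.zeros_like(s)                      # internal activity
    e = np.ones_like(s)                       # dynamic threshold
    y = np.zeros_like(s)                      # firing output
    fire_total = np.zeros_like(s)
    for _ in range(iterations):
        link = convolve(y, kernel, mode="nearest")
        u = f * u + s * link + s              # leaky integration with linking input
        y = (u > e).astype(np.float64)        # neuron fires when activity exceeds threshold
        e = g * e + h * y                     # threshold rises after each firing
        fire_total += y
    return fire_total

def fuse_highpass(coeff_a, coeff_b):
    """Keep, per position, the coefficient whose neuron fired more often."""
    mask = scm_firing_count(coeff_a) >= scm_firing_count(coeff_b)
    return np.where(mask, coeff_a, coeff_b)
```

In the dissertation's method this selection is applied to NSST high-frequency sub-bands, with the model parameters set automatically from the input image rather than fixed as above.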
Keywords/Search Tags: Adaptive region segmentation, Adaptive differential evolution, Contrast, SPCNN, Improved CNN, Multi-scale transform, Multi-source image fusion