
Research And Implementation Of Variational Multiscale Remote Sensing Image Fusion

Posted on: 2019-12-04    Degree: Master    Type: Thesis
Country: China    Candidate: C P Yu    Full Text: PDF
GTID: 2382330548985946    Subject: Software engineering
Abstract/Summary:
With the growing military and agricultural use of remote sensing images, the requirements on spatial and spectral resolution keep rising. Because of the hardware limitations of remote sensing satellite sensors, images acquired by a single sensor cannot meet these requirements, so images acquired by multiple kinds of sensors, which are complementary in spectral or spatial structure, need to be fused. Fusing the redundant and complementary information in these images supports more accurate scene analysis.

In this dissertation, image self-similarity is used to construct an intermediate image that preserves spectral information and serves as the initial spectral constraint. A weighted gradient constraint and an L1/2 norm constraint are then combined with it into an objective function, and a single layer of the fused image is obtained by minimizing this objective. Exploiting the advantages of small-scale estimation, a multi-scale framework then produces the final image.

The work of this dissertation comprises several parts. First, to address the limitation of current fusion algorithms that use the original low-resolution multispectral image directly as the initial spectral constraint, we propose to up-sample the original low-resolution image using image self-similarity and to construct the spectral constraint term from the result. By minimizing the objective function, the similar target image block corresponding to each original image block is found, and the high-frequency difference of the target block is obtained; this enriches the detail of the original block while preserving its spectral information.

Second, using the gradient difference of each channel, a weighted gradient constraint is constructed so that an appropriate amount of detail is injected into each channel. This constraint preserves the structural differences between the channels of the image while keeping the structure of the fused image consistent with the panchromatic image. At the same time, the L1/2 norm preserves the overall trend of the image gradient distribution.

Finally, building on the advantages of small-scale estimation, a multi-scale fusion framework is constructed. By building a scale pyramid, the resolution of the initial input image is raised layer by layer. Because the scale change between adjacent levels is small, each level's fusion objective involves a relatively small amount of data per iteration, which also reduces accumulated estimation error.

The proposed algorithm is validated on the Pleiades, QuickBird, WorldView-2 and WorldView-3 satellite image datasets and compared with current popular fusion algorithms, using both subjective visual evaluation and objective quantitative analysis. Experimental results show that the proposed algorithm outperforms the compared algorithms on both criteria.
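The three-term objective described above can be sketched as follows. The abstract does not give the exact operators or weights, so the symbols below (fused image X, self-similarity-based intermediate image M~, panchromatic image P, per-channel weights w_c, regularization parameter mu) are assumptions, not the thesis's exact formulation:

```latex
\min_{X}\;
\underbrace{\lVert X - \tilde{M} \rVert_2^2}_{\text{spectral constraint}}
\;+\;
\underbrace{\sum_{c} w_c \,\lVert \nabla X_c - \nabla P \rVert_2^2}_{\text{weighted gradient constraint}}
\;+\;
\underbrace{\mu \,\lVert \nabla X \rVert_{1/2}^{1/2}}_{L_{1/2}\text{ norm constraint}}
```

Each pyramid level would minimize one such objective, with the previous level's result up-sampled to form the next level's M~.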
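As a rough illustration of the layer-by-layer scheme, the sketch below builds a panchromatic pyramid and fuses at each scale. The per-level fusion here is a simple intensity-matched detail injection used as a stand-in for the variational solver, and the up-sampling is plain nearest-neighbour rather than the self-similarity method the thesis proposes; all function names are hypothetical:

```python
import numpy as np

def upsample(img, shape):
    """Nearest-neighbour up-sampling of an (H, W, C) image to a target (H', W').
    Stand-in for the self-similarity based up-sampling of the thesis."""
    h, w = shape
    rows = np.arange(h) * img.shape[0] // h
    cols = np.arange(w) * img.shape[1] // w
    return img[rows][:, cols]

def fuse_one_level(ms_up, pan):
    """Simplified stand-in for one layer of the variational fusion: inject the
    pan detail (pan minus the channel-mean intensity proxy) into every channel.
    The real method minimizes a spectral + weighted-gradient + L1/2 objective."""
    intensity = ms_up.mean(axis=2)
    return ms_up + (pan - intensity)[..., None]

def multiscale_fuse(ms, pan, levels=2):
    """Layer-by-layer resolution increase: at each pyramid level, up-sample the
    current estimate to that level's pan scale, then fuse."""
    # Build a coarse-to-fine pan pyramid (factor-2 decimation per level, assumed).
    pan_pyr = [pan]
    for _ in range(levels - 1):
        pan_pyr.append(pan_pyr[-1][::2, ::2])
    pan_pyr = pan_pyr[::-1]

    fused = ms
    for p in pan_pyr:
        fused = upsample(fused, p.shape)
        fused = fuse_one_level(fused, p)
    return fused
```

Because each step only bridges a factor of two in resolution, the amount of data estimated per level stays small, mirroring the small-scale-estimation argument in the text.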
Keywords/Search Tags: image fusion, image self-similarity, weighted gradient constraint, L1/2 norm constraint, multi-scale