
Multisource Remote Sensing Image Fusion Based On Tensor Representation And Deep Learning

Posted on: 2021-03-31
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Y H Xing
Full Text: PDF
GTID: 1482306050963649
Subject: Intelligent information processing

Abstract/Summary:
In recent years, many countries have made significant progress in space technology. More and more military and civilian remote sensing satellites have been launched, and the remote sensing data they acquire have improved markedly in spatial, temporal and spectral resolution, benefiting applications in resources and environment monitoring, modern agriculture, public security and military reconnaissance. China has built an Earth-observation remote sensing satellite system that integrates resources, environment, ocean and national defense missions. However, owing to limitations in imaging mechanisms, sensor hardware and launch cost, remote sensing satellites cannot acquire images with both high spatial resolution and high spectral resolution at the same time. The contradiction between the difficulty of acquiring high-resolution remote sensing images and the demands of practical applications therefore motivates researchers to apply information fusion technology to remote sensing image processing tasks, which is the research foundation of this dissertation.

Multisource data fusion refers to integrating data from different sources through signal analysis and image processing so as to improve the accuracy, confidence and richness of the data. This dissertation focuses on the fusion of panchromatic and multispectral images, together with the fusion of multispectral and hyperspectral images. To address the problems of spatial information loss and spectral distortion in the fusion process, we fully explore the high-dimensional characteristics and hierarchical features of the data to be fused and design several fusion frameworks based on tensor representation, deep metric learning, generative adversarial networks, and so on. The main contributions of this dissertation are summarized as follows:

1. To address the spatial loss and spectral distortion of traditional multiresolution analysis (MRA) based multispectral and panchromatic image fusion methods, this dissertation proposes a fusion model based on multiscale geometric support tensor filtering. First, we generalize least-squares support vector regression to its tensor form and derive tensor filters from the structural-risk-minimization cost function. The tensor filters are directional owing to the Ridgelet kernel, and they are further generalized to multiple scales. The multiscale geometric tensor filters are then used to extract multiscale geometric features. Finally, these features are fused across scales and directions to obtain the fusion result. Thanks to the multiscale and multidirectional feature extraction, spatial information loss and spectral distortion are greatly reduced. Experimental results on the QuickBird and GeoEye-1 data show that the proposed method outperforms the comparison methods.

2. A tensor spatial-spectral joint correlation regularized multispectral and hyperspectral image fusion method is proposed. Starting from the tensor representation, we explore the low-rank characteristics along different dimensions and construct a series of dimension-discriminative low-rank tensors. The fusion problem is then modeled as a sparse spectral-gradient regularized discriminative low-rank tensor recovery problem, which is solved by the alternating direction method of multipliers. Experimental results on the Pavia and Washington data verify the effectiveness of the proposed method in terms of both visual inspection and quality evaluation indices.

3. This dissertation proposes a deep metric learning based multispectral and panchromatic image fusion method. Considering the geometric diversity of remote sensing images, we first classify image patches roughly by geometric clustering, and then use the classification results as priors to train a corresponding number of neural networks. During training, the relationships between inputs and outputs are used to pre-train stacked sparse auto-encoders, after which the classification results serve as labels to fine-tune the networks. During testing, panchromatic patches are taken as inputs and classified by a deep distance metric. Because the multispectral and panchromatic patches describe the same scene, the multispectral patches share the same manifold structures as the panchromatic patches. A given multispectral patch can therefore be reconstructed from the other patches on the same manifold to obtain the final fusion result. Experimental results on the QuickBird, GeoEye-1 and WorldView-2 datasets show the effectiveness of the proposed method.

4. To address the fusion deviations produced by error accumulation during the fusion process, this dissertation proposes a progressive compensation generative adversarial network based multispectral and panchromatic image fusion method. The method obtains the fusion result in two steps. First, the multispectral image is pre-sharpened by a deep multiscale guidance adversarial network. Then, based on the pre-sharpened result, a pair of adversarial networks further compensates the spatial and spectral residuals to obtain the final fusion result. For this special network structure, a joint compensation cost is also designed to train the three generative adversarial networks simultaneously. Experimental results on the QuickBird and WorldView-4 datasets verify the effectiveness of the proposed method.

5. Existing deep convolutional neural network based fusion methods often produce results with detail loss and spectral distortion. To solve these problems, this dissertation proposes a collaborative fusion framework for multispectral and panchromatic image fusion. The method considers the dual collaborative relationships in the fusion process, i.e., spatial-spectral collaboration and intra-spectral collaboration, and uses the wavelet transform to separate the high-frequency and low-frequency components of the panchromatic features. The low-frequency components are then transformed and used to guide a graph convolutional neural network based spectral adjustment. Finally, the high-frequency components of the panchromatic features are combined with the adjusted features of each multispectral band to obtain the high-resolution multispectral image. Experimental results on the WorldView-4 dataset demonstrate that the proposed method outperforms both traditional fusion methods and available deep learning based fusion methods.
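The low-rank tensor recovery of contribution 2 rests on two standard building blocks: unfolding a tensor along each dimension to expose its per-mode rank, and the singular value thresholding operator used inside ADMM-style solvers. The following is a minimal numpy sketch of these two primitives only, not the dissertation's full dimension-discriminative model; the toy 8×8×4 cube and the threshold value are illustrative assumptions.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: matricize a tensor along one dimension."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def svt(matrix, tau):
    """Singular value thresholding: the proximal operator of the
    nuclear norm, applied in each ADMM low-rank update step."""
    U, s, Vt = np.linalg.svd(matrix, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

# toy spatial x spatial x spectral cube
cube = np.random.rand(8, 8, 4)
spectral_unfolding = unfold(cube, 2)          # shape (4, 64)
low_rank_part = svt(spectral_unfolding, 1.0)  # shrink singular values
```

Exploring "low-rank characteristics along different dimensions" then amounts to applying `svt` to each mode's unfolding with its own threshold.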
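Contribution 5 hinges on a wavelet split of the panchromatic side into low- and high-frequency components, with only the high-frequency part injected into each multispectral band. A minimal numpy sketch with a single-level Haar transform follows; the graph convolutional spectral adjustment is replaced here by a simple per-band gain, and the nearest-neighbour 2x upsampling is likewise an assumption for illustration.

```python
import numpy as np

def haar2d(img):
    """Single-level 2-D Haar transform on an even-sized image:
    one low-frequency approximation plus three detail bands."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # low frequency
    lh = (a + b - c - d) / 4.0   # horizontal detail
    hl = (a - b + c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, (lh, hl, hh)

def inject_details(ms_band, pan, gain=1.0):
    """Upsample an MS band to pan resolution and add the pan
    high-frequency residual, scaled by a per-band gain."""
    up = np.kron(ms_band, np.ones((2, 2)))   # naive 2x upsampling
    ll, _ = haar2d(pan)
    low = np.kron(ll, np.ones((2, 2)))       # pan low-frequency at full size
    return up + gain * (pan - low)           # high-frequency injection
```

With a spatially constant panchromatic image, the residual `pan - low` is zero, so the sketch degenerates to plain upsampling, which is the expected behaviour when no spatial detail exists to inject.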
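The reconstruction step of contribution 3 exploits the assumption that co-registered panchromatic and multispectral patches share the same manifold: weights fitted on the panchromatic side can be transferred to the multispectral neighbours. The sketch below uses locally-linear-embedding-style least-squares weights as a stand-in; the deep distance metric that actually selects the neighbours, and the sum-to-one normalization, are simplifying assumptions here.

```python
import numpy as np

def manifold_weights(pan_patch, pan_neighbors):
    """Least-squares weights expressing a flattened pan patch as a
    linear combination of its manifold neighbours (one per row)."""
    w, *_ = np.linalg.lstsq(pan_neighbors.T, pan_patch, rcond=None)
    return w / w.sum()   # sum-to-one, as in locally linear embedding

def reconstruct_ms_patch(ms_neighbors, weights):
    """Transfer the pan-side weights to co-registered MS neighbours."""
    return ms_neighbors.T @ weights
```

If a panchromatic patch truly lies in the span of its neighbours, the fitted weights recover it exactly, and the same weights yield the corresponding multispectral patch.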
Keywords/Search Tags:Multisource image fusion, tensor representation, deep learning, convolutional neural network, generative adversarial network