As sensing technology has advanced, satellite remote sensing systems have acquired increasingly powerful image acquisition capabilities. The remote sensing images they capture are diverse in type, and their spatial and spectral resolutions have improved significantly. Integrating the complementary advantages of remote sensing images from different sources, and thereby strengthening the cooperative interpretation of multisource imagery, is a key issue in the application of remote sensing technology. This dissertation focuses on the fusion of panchromatic and multispectral images (pansharpening). Given the important role that spectral information and spatial structure information play in practical remote sensing applications, this dissertation takes the imaging mechanism of remote sensing images as a guide and, targeting the shortcomings of existing methods, investigates a spectral compensation mechanism, a spatial structure enhancement mechanism, multilevel structure information, and spatial-spectral fusion based on a variational model. High-performance pansharpening methods are proposed that significantly improve fusion accuracy.

(1) To address the constraint of network depth that limits most deep-neural-network-based remote sensing image fusion methods, this dissertation proposes a pansharpening method based on a multilevel dense network. A lightweight dense connection block suited to the pansharpening task is designed, and a multilevel backbone network is then built from these lightweight dense blocks. Multilevel long skip connections allow the network depth to be increased. Experimental results show that the proposed multilevel densely connected deep neural network improves the nonlinear expressive ability of the network, enhances spatial structure information, and improves the spectral fidelity of the fused image.

(2) To solve the problem of the current deep neural
network-based methods lacking the ability to exploit the imaging mechanism of remote sensing images, this dissertation proposes a remote sensing image fusion method based on generative adversarial learning with structure enhancement and spectral compensation. First, the spatial structure information of the panchromatic image is extracted with a first-order forward difference operator, and the horizontal and vertical components are stacked to obtain enhanced spatial structure features. In the generative adversarial learning framework, the stacked two-direction structure information serves as input to both the generator and the discriminator, and the optimization objective function is redesigned to enhance structure information: it constrains the training process so as to reduce the loss of spectral information and spatial structure information in the fused image. In addition, a new spectral compensation structure is designed within the generator, which injects the low-resolution multispectral image during the convolution process as a supplement to the spectral information. Experimental results show that combining structure enhancement with spectral compensation reduces the loss of spectral and spatial structure information in the fused image.

(3) To alleviate the problem that current deep-neural-network-based methods use only a single level of panchromatic structure information, this dissertation proposes a remote sensing image fusion method that combines multiresolution structure with multistream fusion in a generative adversarial network. After extracting several kinds of structural information from the panchromatic image, a multistream network structure suited to these multiple structural inputs is designed for the generator; the discriminator likewise uses the multiple kinds of structural information to improve its discrimination ability. To compensate for the
spectral loss caused by the generator's emphasis on learning structural information, a long skip connection transmits low-level spectral features directly to the high-level features to compensate for the spectral information. Experiments show that this method gives the fused image richer spatial structure and spectral information.

(4) To alleviate the problem that the fusion process of deep-neural-network-based methods is not interpretable, this dissertation proposes an interpretable deep neural network remote sensing image fusion method driven by a variational model. First, based on the observation model and domain knowledge of remote sensing images, the physical transformation relationship between the fused image and the input images is given. From this relationship and prior knowledge, a variational model of the fusion process is established. The variational model is then solved by proximal gradient descent, and the solution steps are unrolled into modules of a deep neural network, so that the solving process coincides with the backbone structure of the network. Finally, the designed deep neural network is trained end to end on the training data to obtain the remote sensing image fusion model. The proposed method not only addresses the difficulty of reproducing variational remote sensing image fusion methods but also gives an interpretation to the modules in deep-neural-network-based fusion architectures.
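The dense connectivity pattern underlying the lightweight dense block of contribution (1) can be illustrated in miniature: each layer receives the concatenation of the block input and all preceding layer outputs. The sketch below uses plain linear maps on vectors in place of the convolutions a real pansharpening block would use; the function name and the growth-rate setup are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np

def dense_block(x, weights):
    """Dense connectivity: each layer sees the concatenation of the block
    input and every preceding layer's output (linear + ReLU here stands in
    for the conv layers of a real lightweight dense block)."""
    features = [x]
    for W in weights:
        inp = np.concatenate(features, axis=0)      # concat along "channel" axis
        features.append(np.maximum(W @ inp, 0.0))   # linear map + ReLU
    return np.concatenate(features, axis=0)          # block output: all features

rng = np.random.default_rng(1)
x = rng.standard_normal(4)                # block input: 4 "channels"
growth = 3                                # each layer contributes 3 channels
weights = [rng.standard_normal((growth, 4 + i * growth)) for i in range(3)]
out = dense_block(x, weights)
print(out.shape)                          # (13,) = 4 input + 3 layers x growth 3
```

Because every feature is forwarded, gradients reach early layers directly, which is what lets the multilevel backbone be deepened without vanishing-gradient problems.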
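The unrolling idea of contribution (4) follows the standard proximal gradient pattern: iterate a gradient step on the data-fidelity term followed by a proximal step on the prior, then fix the iteration count and turn each iteration into a network module whose operators are learned. The sketch below shows the classical version (ISTA) on an l1-regularized least-squares model; the specific fidelity and prior terms are illustrative, not the dissertation's variational model.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (replaced by a learned module once unrolled)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam=0.1, n_iters=200):
    """Proximal gradient descent for min_x 0.5||Ax - b||^2 + lam ||x||_1.
    Unrolling fixes n_iters and maps each loop body to one network layer."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)             # gradient of the data-fidelity term
        x = soft_threshold(x - step * grad, step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
x_true = np.zeros(10); x_true[[2, 7]] = [1.5, -2.0]
b = A @ x_true                               # noiseless synthetic measurements
x_hat = ista(A, b, lam=0.05, n_iters=500)
```

The end-to-end training described in the abstract then learns the step sizes and proximal modules from data, which is what makes the resulting network both reproducible and interpretable as a solver.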