In recent years, with the rapid development of space technology and the intensive launch of remote sensing satellites, massive volumes of remote sensing images have been widely used in environmental monitoring, modern agriculture, military reconnaissance, urban planning, and other fields. Owing to the technical limitations of satellite sensors, a single sensor cannot directly acquire multispectral images with high spatial resolution. Remote sensing satellites therefore usually carry two kinds of sensors, collecting multispectral images with low spatial resolution and panchromatic images with high spatial resolution. To obtain high-quality multispectral images by fusing panchromatic and multispectral images, the fusion of multispectral and panchromatic images has emerged as a dedicated research field. At present, fusion methods based on deep learning are particularly attractive. Unlike traditional fusion methods, deep learning-based methods can automatically extract image features and are highly robust, producing high-quality fused images. However, these methods still suffer from problems such as incomplete feature extraction, insufficient feature fusion, and neglect of the difference and redundancy between spatial and spectral features. To address these problems, this dissertation proposes four improved deep neural networks to enhance the spatial and spectral resolution of images. The main research of this dissertation is summarized as follows:

1) To effectively extract the global information in images and fully fuse the extracted features, a multispectral and panchromatic image fusion method based on multiscale spatial-spectral interaction is proposed. First, a multiscale convolutional neural network (CNN)-Transformer encoding network is constructed to extract local-global features at different scales from the low spatial resolution multispectral and panchromatic images, respectively. Second, a spatial-spectral interaction attention network is designed to fully integrate spatial and spectral features, which reduces the redundancy between features and enhances their complementarity. Finally, a multiscale reconstruction network is proposed to reduce the loss of information during fusion. Experiments on QuickBird and GeoEye-1 satellite data show that the fused images of the proposed method are better than those of the comparison methods.

2) Most deep learning-based fusion methods use the same network to extract spatial and spectral features from low spatial resolution multispectral images and panchromatic images. However, spatial and spectral features often have different attributes, and a single network cannot effectively represent both. In addition, information redundancy between spatial and spectral features leads to spatial or spectral distortion in the fused images. To address these problems, a multispectral and panchromatic image fusion method based on a disentangled network with dual attention is proposed. The method constructs a three-branch network that uses three encoders with different structures to disentangle the multispectral and panchromatic images into spatial features, spectral features, and common features. Local-global spatial attention and spectral interdependency attention are introduced into the corresponding encoders to capture the spatial features in panchromatic images and the spectral features in low spatial resolution multispectral images. The panchromatic image, the low spatial resolution multispectral image, and the high spatial resolution multispectral image are then obtained by recombining the extracted spatial, spectral, and common features through different decoders. In addition, a maximal-coding-rate criterion is used to reduce the redundancy among the spatial, spectral, and common features while enhancing their complementarity. Fusion results on QuickBird and GeoEye-1 satellite data show that the proposed method is superior to traditional methods and deep learning-based methods in terms of visual inspection and quality evaluation indices.

3) To further reduce the redundancy between spatial and spectral features and improve the correlation between the common features of the low spatial resolution multispectral and panchromatic images, a multispectral and panchromatic image fusion method based on disentangled representation via mutual information is proposed. This method disentangles the panchromatic and low spatial resolution multispectral images into sensor-specific and common features, respectively. The panchromatic and low spatial resolution multispectral images are cross-reconstructed by a cross-coupled Transformer to facilitate the disentanglement of common and sensor-specific features. Meanwhile, to enhance the complementarity between the common and sensor-specific features, a self-coupled Transformer is used to self-reconstruct the panchromatic and low spatial resolution multispectral images. In addition, the mutual information between the common features of the low spatial resolution multispectral and panchromatic images is maximized to improve the correlation between common features from different images, while the mutual information between the common and sensor-specific features of the same image is minimized to reduce the redundancy between them. Finally, all disentangled features are integrated by a fusion Transformer to generate the high spatial resolution multispectral image. Fusion experiments on QuickBird and GeoEye-1 satellite data show that this method outperforms the comparison methods.

4) To address the low correlation between the spatial details of panchromatic and multispectral images and the over-injection of spatial details, a multispectral and panchromatic image fusion method based on a detail injection network is proposed. The method first introduces an adaptive multiscale dilated convolution module, which adaptively extracts richer features at different scales. An information injection module, consisting of a spectral attention module, a spatial attention module, and a convolution module, is then designed to enhance the spatial information of the low spatial resolution multispectral images while preserving their spectral information. Experimental results on QuickBird and GeoEye-1 satellite data show that this method effectively improves the quality of the fused images.
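To make the detail-injection idea in 4) concrete, the sketch below shows how a spectral attention gate, a spatial attention gate, and a residual injection could combine upsampled multispectral features with panchromatic features. This is a minimal NumPy sketch, not the dissertation's actual network: the function names (`spectral_attention`, `spatial_attention`, `inject_details`) are hypothetical, and simple global statistics stand in for the trained attention layers.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spectral_attention(feat):
    # Gate each band by a weight derived from its global average.
    # feat: (C, H, W); a learned layer would replace the plain mean.
    gate = sigmoid(feat.mean(axis=(1, 2)))      # (C,)
    return feat * gate[:, None, None]

def spatial_attention(feat):
    # Gate each pixel by a weight derived from its cross-band average.
    gate = sigmoid(feat.mean(axis=0))           # (H, W)
    return feat * gate[None, :, :]

def inject_details(ms_up, pan_feat):
    # Add spatially gated PAN details to the spectrally gated,
    # upsampled MS features (residual detail injection).
    return spectral_attention(ms_up) + spatial_attention(pan_feat)

# Toy example: a 4-band upsampled MS patch and PAN-derived feature maps.
rng = np.random.default_rng(0)
ms_up = rng.random((4, 8, 8))     # upsampled low-resolution MS, 4 bands
pan_feat = rng.random((4, 8, 8))  # PAN features replicated across bands
fused = inject_details(ms_up, pan_feat)
```

The gating keeps the output the same shape as the multispectral input, so the injection is a drop-in residual step; in the actual method the gates are produced by trained attention modules rather than fixed statistics.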