Hyperspectral images record the spectral characteristics of the observed scene across many bands and are useful in a wide range of applications. However, this rich spectral information usually comes at the cost of low spatial resolution, so the fusion of hyperspectral images with high-resolution multispectral images has become an important problem in remote sensing image processing. Deep learning has achieved good results in image processing and also performs well in remote sensing image fusion, but existing algorithms overlook two issues: 1) the large scale gap between the original hyperspectral image and the multispectral image, and 2) insufficient attention to spectral information. This paper proposes targeted solutions to these problems and integrates them into the design of a convolutional neural network. The main contributions are as follows.

First, to address the large scale difference between the original hyperspectral image and the multispectral image, we introduce multi-scale feature-level fusion and construct a multi-level fusion structure that fuses the feature maps of the hyperspectral and multispectral images at multiple scales, using the redundancy between channels to obtain a robust representation of all channels. Meanwhile, to alleviate the blurry predictions caused by the ℓ2 loss, the algorithm combines the ℓ1 loss and the ℓ2 loss to form the final regularization term.

Second, a 2D convolutional neural network considers only the spatial correlation within each channel and ignores the correlation between spectral bands when extracting spectral features. The algorithm in Chapter 4 therefore replaces the 2D convolutions of the Chapter 3 algorithm with 3D convolutions. The 3D convolution kernel slides along the spectral dimension as well as the spatial dimensions, so the correlation between bands can be exploited to refine the extraction of spectral features and reconstruct the spectral information.

Finally, because the high-level features in the deep layers of the network lack spatial detail during image reconstruction, Chapter 4 adds cross-layer connection branches that pass the fusion result of each scale directly to the final reconstruction stage, so that shallow features assist the reconstruction.

The proposed algorithms are evaluated on two datasets collected by different satellite sensors, with ablation and comparison experiments designed to analyze their effectiveness. The experimental results show that the proposed algorithms outperform several state-of-the-art methods.
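As a rough illustration of the multi-scale feature-level fusion and the cross-layer connection branches described above, the PyTorch sketch below upsamples the hyperspectral features step by step, fuses them with the multispectral features at each scale, and forwards every scale's fusion result to the reconstruction stage. The class name, channel widths, number of scales, and upsampling factor are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleFusionNet(nn.Module):
    """Sketch of multi-scale feature-level fusion with cross-layer branches.

    HSI features are upsampled level by level and fused with MSI features
    at each scale; the fusion result of every scale is also forwarded to
    the final reconstruction stage (cross-layer branches). Channel counts,
    the number of scales, and the x2 upsampling per level are assumptions.
    """

    def __init__(self, hsi_bands=31, msi_bands=3, features=32, scales=2):
        super().__init__()
        self.hsi_head = nn.Conv2d(hsi_bands, features, 3, padding=1)
        self.msi_head = nn.Conv2d(msi_bands, features, 3, padding=1)
        self.fuse = nn.ModuleList(
            nn.Conv2d(2 * features, features, 3, padding=1) for _ in range(scales)
        )
        # Reconstruction operates on the concatenated cross-layer branches.
        self.reconstruct = nn.Conv2d(scales * features, hsi_bands, 3, padding=1)

    def forward(self, hsi_lr, msi_hr):
        hsi_feat = self.hsi_head(hsi_lr)
        msi_feat = self.msi_head(msi_hr)
        branches = []
        for fuse in self.fuse:
            # Upsample HSI features by x2 at each level to close the scale gap.
            hsi_feat = F.interpolate(hsi_feat, scale_factor=2, mode="bilinear",
                                     align_corners=False)
            msi_level = F.interpolate(msi_feat, size=hsi_feat.shape[-2:],
                                      mode="bilinear", align_corners=False)
            hsi_feat = fuse(torch.cat([hsi_feat, msi_level], dim=1))
            branches.append(hsi_feat)
        # Cross-layer branches: bring every scale's fusion result to the output size.
        target = branches[-1].shape[-2:]
        branches = [F.interpolate(b, size=target, mode="bilinear", align_corners=False)
                    for b in branches]
        return self.reconstruct(torch.cat(branches, dim=1))
```

Under these assumptions, `MultiScaleFusionNet()(torch.randn(1, 31, 16, 16), torch.randn(1, 3, 64, 64))` produces a 31-band output at the multispectral resolution.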
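The mixed regularization term can be sketched as a weighted sum of the ℓ1 and ℓ2 losses; the weighting factor `alpha` below is an assumed hyperparameter rather than the paper's exact formulation.

```python
import torch.nn as nn

class MixedL1L2Loss(nn.Module):
    """Weighted combination of the l1 and l2 reconstruction losses."""

    def __init__(self, alpha=0.5):
        super().__init__()
        self.alpha = alpha      # assumed trade-off weight
        self.l1 = nn.L1Loss()   # l1 term counteracts the blurring of a pure l2 loss
        self.l2 = nn.MSELoss()  # l2 term keeps the overall reconstruction error low

    def forward(self, fused, reference):
        return (self.alpha * self.l1(fused, reference)
                + (1.0 - self.alpha) * self.l2(fused, reference))
```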
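The replacement of 2D with 3D convolution can be illustrated as follows: the hyperspectral cube is treated as a single-channel volume so that the kernel also slides along the spectral dimension. The block structure, layer widths, and kernel sizes are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class Spectral3DBlock(nn.Module):
    """Minimal sketch of spectral-spatial feature extraction with 3D convolution."""

    def __init__(self, features=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(1, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(features, 1, kernel_size=3, padding=1),
        )

    def forward(self, hsi):
        # hsi: (batch, bands, H, W) -> add a unit channel axis so the 3x3x3
        # kernel convolves over bands as well as the spatial dimensions.
        x = hsi.unsqueeze(1)
        x = self.body(x)
        return x.squeeze(1)

# e.g. Spectral3DBlock()(torch.randn(2, 31, 64, 64)) keeps the (bands, H, W) shape.
```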