Remote sensing image fusion aims to fuse multiple remote sensing images of the same scene into a single high-quality image carrying richer information. Hyperspectral and multispectral images are highly complementary in the information they carry, which has made their fusion a key topic of remote sensing image fusion research. In recent years hyperspectral remote sensing has developed rapidly, but hardware limitations make it difficult to acquire hyperspectral images with high spatial resolution directly. Software methods are therefore needed to fuse the information of hyperspectral images with low spatial resolution and multispectral images with low spectral resolution, yielding more accurate remote sensing images. Traditional fusion methods rely on manually designed data features and therefore struggle to achieve good results, whereas deep-learning-based fusion methods are trained end to end, avoid hand-crafted feature definitions, and obtain better fusion results. However, because deep-learning-based fusion models are built around spatial features, their structures are complex and heavily parameterized; they demand large amounts of data, are difficult to train, and are prone to overfitting.

Against this background, and building on deep-learning-based remote sensing image fusion, this paper proposes a pixel fusion method for remote sensing images. The method combines hyperspectral and multispectral images at the information level on a pixel-by-pixel basis: the spectrum of each pixel is first encoded, and fusion is then performed on the pixel codes. Because spatial features are not involved, pixel fusion reduces both the fusion model's demand for training data and the complexity of the model. On this basis, this paper proposes two pixel fusion models for remote sensing images together with matching training methods. The main contributions are summarized as follows:

1) A remote sensing image fusion method based on a non-local compressive network is proposed for fusing hyperspectral and multispectral remote sensing images. The non-local compressive network first uses a non-local compressive encoder to encode hyperspectral pixels, then uses an approximate encoder to encode multispectral pixels, then fuses the two codes with a fusion module, and finally decodes the fused pixel codes with a shared decoder to obtain a high-quality hyperspectral image. Because features are extracted per pixel, the non-local compressive network obtains better fusion results (a structural sketch follows this list).

2) A separate enhanced training method is proposed to broaden the sources of training data for the pixel fusion model. On top of holistic training with the original data, separate enhanced training disassembles the end-to-end non-local compressive network into different encoder-decoder combinations that are trained individually: one combination learns the non-local compressive coding of hyperspectral pixels, and the other learns the approximate mapping from multispectral pixels to hyperspectral pixel codes. The separate enhanced training phase does not require multispectral images paired with hyperspectral images to form a training set, allowing the model to be trained on data from far richer sources (see the training sketch after this list).

3) A remote sensing image fusion method based on a deep sparse network is proposed for the efficient fusion of hyperspectral and multispectral remote sensing images. The deep sparse network simulates the working mode of the mammalian nervous system and is characterized by hierarchical sparse coding, same-layer competition, and top-down feedback. By simulating the nervous system, the deep sparse network reduces parameter redundancy, can be trained more fully on a smaller dataset, and thereby improves fusion results on small datasets (a sketch of these mechanisms also follows).
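The abstract gives no architectural details for the non-local compressive network of contribution 1); the following is a minimal sketch of the pixel-wise encode-fuse-decode pipeline, assuming plain fully connected layers, concatenation-based fusion, and hypothetical band counts and code width. The module and parameter names (PixelFusionNet, hs_bands, ms_bands, code_dim) are illustrative, not from the paper.

```python
import torch
import torch.nn as nn

class PixelFusionNet(nn.Module):
    """Sketch of the pixel-level encode-fuse-decode pipeline (assumptions above)."""
    def __init__(self, hs_bands=100, ms_bands=4, code_dim=32):
        super().__init__()
        # Non-local compressive encoder: hyperspectral pixel -> pixel code.
        self.hs_encoder = nn.Sequential(
            nn.Linear(hs_bands, 64), nn.ReLU(), nn.Linear(64, code_dim))
        # Approximate encoder: multispectral pixel -> the same code space.
        self.ms_encoder = nn.Sequential(
            nn.Linear(ms_bands, 64), nn.ReLU(), nn.Linear(64, code_dim))
        # Fusion module: combines the two pixel codes into one fused code.
        self.fusion = nn.Sequential(
            nn.Linear(2 * code_dim, code_dim), nn.ReLU(),
            nn.Linear(code_dim, code_dim))
        # Shared decoder: fused code -> high-quality hyperspectral pixel.
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 64), nn.ReLU(), nn.Linear(64, hs_bands))

    def forward(self, hs_pixel, ms_pixel):
        # hs_pixel: (batch, hs_bands); ms_pixel: (batch, ms_bands).
        hs_code = self.hs_encoder(hs_pixel)
        ms_code = self.ms_encoder(ms_pixel)
        fused = self.fusion(torch.cat([hs_code, ms_code], dim=-1))
        return self.decoder(fused)
```

Because the model only ever sees individual pixels, it applies to images of any spatial size: each output pixel is decoded from the fused codes of the corresponding hyperspectral and multispectral pixels, which is what keeps the parameter count and the training-data requirements low.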
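Contribution 2) states only that the end-to-end network is disassembled into encoder-decoder combinations trained individually, and that paired image sets are not needed at this stage. Below is a hedged sketch of one plausible two-stage schedule; the losses, the optimizer settings, and the idea of simulating multispectral pixels from hyperspectral ones through a spectral response matrix srf are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def separate_enhanced_training(net, hs_pixels, srf, steps=1000, lr=1e-3):
    """net: a PixelFusionNet from the previous sketch; hs_pixels: (N, hs_bands);
    srf: (hs_bands, ms_bands) hypothetical spectral response matrix."""
    # Stage A: hyperspectral autoencoder learns the non-local compressive coding.
    opt_a = torch.optim.Adam(
        list(net.hs_encoder.parameters()) + list(net.decoder.parameters()), lr=lr)
    for _ in range(steps):
        loss = F.mse_loss(net.decoder(net.hs_encoder(hs_pixels)), hs_pixels)
        opt_a.zero_grad(); loss.backward(); opt_a.step()

    # Stage B: the approximate encoder learns to map multispectral pixels onto
    # the codes of the corresponding hyperspectral pixels. The multispectral
    # pixels here are simulated from hyperspectral ones (assumption), so no
    # paired multispectral/hyperspectral image set is required.
    opt_b = torch.optim.Adam(net.ms_encoder.parameters(), lr=lr)
    for _ in range(steps):
        ms_pixels = hs_pixels @ srf
        target = net.hs_encoder(hs_pixels).detach()
        loss = F.mse_loss(net.ms_encoder(ms_pixels), target)
        opt_b.zero_grad(); loss.backward(); opt_b.step()
```

After both stages the network can still be fine-tuned end to end on whatever paired data is available, which corresponds to the holistic training on the original data mentioned above.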
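The abstract attributes three mechanisms to the deep sparse network of contribution 3) without giving equations: hierarchical sparse coding, same-layer competition, and top-down feedback. The sketch below models competition as top-k winner-take-all and feedback as an ISTA-like re-encoding of the reconstruction residual; both concrete choices, like all names and sizes, are illustrative assumptions rather than the paper's formulation.

```python
import torch
import torch.nn as nn

class SparseLayer(nn.Module):
    def __init__(self, in_dim, out_dim, k=8):
        super().__init__()
        self.encode = nn.Linear(in_dim, out_dim, bias=False)
        self.decode = nn.Linear(out_dim, in_dim, bias=False)  # drives feedback
        self.k = k

    def compete(self, a):
        # Same-layer competition: only the k most active units per sample
        # survive, keeping the code sparse and cutting parameter redundancy.
        idx = torch.topk(a, self.k, dim=-1).indices
        return a * torch.zeros_like(a).scatter_(-1, idx, 1.0)

    def forward(self, x, feedback_iters=3):
        a = self.compete(torch.relu(self.encode(x)))
        for _ in range(feedback_iters):
            # Top-down feedback: re-encode the residual the current code
            # fails to reconstruct, then let the units compete again.
            residual = x - self.decode(a)
            a = self.compete(torch.relu(a + self.encode(residual)))
        return a

# Hierarchical sparse coding: stack layers so each one sparsely codes the
# activity of the layer below (sizes are placeholders).
hierarchy = nn.Sequential(SparseLayer(100, 256), SparseLayer(256, 128))
```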