Remote sensing images contain rich surface information and have great research and practical value. With the development of modern remote sensing image fusion technology, remote sensing images are widely used in many military and civilian fields, effectively promoting the development of the national economy. Earth observation satellites usually provide two different types of images: panchromatic images, which have high spatial resolution but no color information about ground objects, and multispectral images, which have rich color information but low spatial resolution. At present, satellite sensors acquire panchromatic and multispectral images directly. Because of cost and technical constraints, directly acquiring multispectral images with both high spatial and high spectral resolution is expensive and complex. For this reason, a large number of remote sensing image fusion methods have been proposed to combine the spatial and spectral information of panchromatic and multispectral images and generate multispectral images with high spatial and high spectral resolution. However, traditional remote sensing image fusion methods are prone to spectral distortion and poor preservation of spatial information, while the quality of results from existing deep learning fusion methods is uneven: a deep network must be trained separately for each type of satellite, which limits practical application in production and daily life. To address these problems, this thesis improves the network structure of deep-learning-based remote sensing image fusion methods, as follows.

Existing DNN-based (deep neural network) pan-sharpening methods for remote sensing images need to train a separate network for the images of each satellite to obtain satisfactory fusion results. To address this issue, this thesis proposes pan-sharpening with a two-stage deep network (TSN). A network branch is constructed for each satellite, in which a shared spatial enhancement network (SEN) improves the spatial details of the fused images, while a spectral adjustment network (SAN) captures the spectral characteristics of the specific satellite and refines the spectral information to produce the final fusion result. This framework allows datasets from different satellites to be combined for sufficient training of the SEN, and because of its simple but efficient structure, the SAN can be trained on few-shot datasets, which improves the flexibility and generalization of the proposed method. Experimental results show that the proposed method produces better fusion results than traditional and deep methods on the QuickBird, GeoEye-1, and WorldView-4 datasets, and better fusion results than traditional methods on few-shot datasets.
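The following listing is a minimal PyTorch sketch of the two-stage idea described above, not the thesis implementation: a spatial enhancement network (SEN) shared across satellites followed by a small per-satellite spectral adjustment network (SAN). The layer widths, band count, and the residual detail-injection form are illustrative assumptions.

import torch
import torch.nn as nn


class SEN(nn.Module):
    """Shared spatial enhancement network: injects PAN detail into the MS bands."""
    def __init__(self, ms_bands=4, width=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ms_bands + 1, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, ms_bands, 3, padding=1),
        )

    def forward(self, ms_up, pan):
        # ms_up: MS image upsampled to the PAN resolution; pan: single-band PAN image.
        x = torch.cat([ms_up, pan], dim=1)
        return ms_up + self.body(x)  # residual detail injection (an assumption)


class SAN(nn.Module):
    """Per-satellite spectral adjustment network: deliberately small, so it can
    be trained on few-shot data for a new satellite."""
    def __init__(self, ms_bands=4):
        super().__init__()
        self.adjust = nn.Conv2d(ms_bands, ms_bands, 1)  # band-wise recombination

    def forward(self, x):
        return self.adjust(x)


class TSN(nn.Module):
    def __init__(self, satellites, ms_bands=4):
        super().__init__()
        self.sen = SEN(ms_bands)                                  # shared first stage
        self.sans = nn.ModuleDict({s: SAN(ms_bands) for s in satellites})  # second stage

    def forward(self, ms_up, pan, satellite):
        return self.sans[satellite](self.sen(ms_up, pan))


# Example: fuse a 4-band MS patch with its PAN counterpart for one satellite.
model = TSN(satellites=["QuickBird", "GeoEye-1", "WorldView-4"])
ms_up = torch.randn(1, 4, 256, 256)   # MS upsampled to the PAN size
pan = torch.randn(1, 1, 256, 256)
fused = model(ms_up, pan, "QuickBird")
print(fused.shape)  # torch.Size([1, 4, 256, 256])

Under this sketch, the mixed multi-satellite dataset trains the shared SEN, while each SAN sees only its own satellite's samples, which is what makes few-shot adaptation to a new satellite cheap.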
TSN addresses the need to train existing deep learning methods separately for each type of satellite by means of its two-stage deep network. To go further and train a single network shared by multiple satellites, this thesis proposes united multi-satellite training with attention for pan-sharpening (UMTA). Datasets from different satellites are integrated into a mixed dataset to train the whole network, and the different satellites are treated as different domains. A feature extractor extracts features from each domain, and a domain classifier obtains domain-invariant features through adversarial training. At the same time, an attention mechanism is introduced to capture the spectral information of a specific satellite and refine the spectral information of the different domains, improving the final fusion result. In this way, the relationships among the domains of the training data are exploited at multiple levels to achieve better pan-sharpening performance. Experimental results show that the proposed method produces better fusion results than traditional methods on the QuickBird, GeoEye-1, and WorldView-4 datasets, and outperforms deep methods on some metrics.
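As a hedged sketch of the united multi-satellite training idea, the code below treats each satellite as a domain, learns domain-invariant fusion features with a domain classifier trained adversarially through a gradient reversal layer (a standard stand-in for the adversarial scheme described above), and uses channel attention to refine spectral information. All module sizes, the attention form, and the gradient reversal choice are assumptions, not the thesis architecture.

import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients in backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style attention, used here as a stand-in for the
    spectral attention described above."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)


class UMTA(nn.Module):
    def __init__(self, ms_bands=4, width=32, num_domains=3):
        super().__init__()
        self.extractor = nn.Sequential(
            nn.Conv2d(ms_bands + 1, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.attention = ChannelAttention(width)
        self.reconstruct = nn.Conv2d(width, ms_bands, 3, padding=1)
        self.domain_classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(width, width), nn.ReLU(inplace=True),
            nn.Linear(width, num_domains),
        )

    def forward(self, ms_up, pan, lam=1.0):
        feat = self.extractor(torch.cat([ms_up, pan], dim=1))
        fused = ms_up + self.reconstruct(self.attention(feat))
        # Reversed gradients push the extractor toward domain-invariant features.
        domain_logits = self.domain_classifier(GradReverse.apply(feat, lam))
        return fused, domain_logits


# Training would minimize a fusion loss on `fused` plus a cross-entropy loss on
# `domain_logits` against the satellite label of each mixed-dataset sample.
model = UMTA()
fused, logits = model(torch.randn(2, 4, 128, 128), torch.randn(2, 1, 128, 128))
print(fused.shape, logits.shape)  # (2, 4, 128, 128) (2, 3)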