
Remote Sensing Image Fusion Based On Dual-Stream Interactive Deep Network

Posted on: 2024-04-10  Degree: Master  Type: Thesis
Country: China  Candidate: A F Wang  Full Text: PDF
GTID: 2542307058982259  Subject: Master of Electronic Information (Professional Degree)
Abstract/Summary:
With the rapid development of remote sensing imaging technology, many high-resolution satellites have been successfully launched. The growing volume of remote sensing images acquired by these satellites is used in various fields such as polar glacier research and biomass prediction. However, because of the physical trade-off of imaging sensors, it is difficult to improve the spatial resolution and the spectral resolution of a remote sensing image simultaneously. For a given sensor, the panchromatic image has a higher spatial resolution than the multispectral image but contains only one band, whereas the low-resolution multispectral image consists of multiple bands (e.g., four bands) and therefore carries abundant spectral information. Against this background, remote sensing image fusion (also called pansharpening) combines the spatial information of the panchromatic image with the spectral information of the low-resolution multispectral image to generate a high spatial resolution multispectral image.

The methods widely used in remote sensing image fusion currently include component substitution, multi-resolution analysis, variational optimization, and deep learning. Among deep learning methods, which have developed most rapidly, problems such as long network training time and poor fusion quality remain. To address these problems, this thesis proposes two remote sensing image fusion methods based on deep neural networks:

1. Pansharpening methods based on deep neural networks have attracted much attention because of their powerful representation ability. To combine feature maps from different subnetworks effectively, a novel pansharpening method based on a spatial and spectral extraction network (SSE-Net) is proposed. Unlike other deep neural network-based methods, which directly concatenate the features from different subnetworks, SSE-Net includes an adaptive feature fusion module designed to fuse the feature information of the different subnetworks. In the network, the spatial and spectral features of the low-resolution multispectral and panchromatic images are first extracted by separate subnetworks. A fusion network composed of adaptive feature fusion modules then fuses features at different levels to generate the required high-resolution multispectral image; features from different subnetworks are fused adaptively to reduce the redundancy between them. In addition, a spectral ratio loss and a gradient loss are defined in the loss function to ensure the effective learning of spectral and spatial features. The spectral ratio loss captures the nonlinear relationship between the bands of the low-resolution multispectral image and thereby reduces spectral distortion in the fusion results.
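The abstract does not give the exact formulas for these two loss terms, so the following is only a minimal sketch of how a spectral ratio loss and a gradient loss could be combined in a PyTorch-style training objective. The adjacent-band ratio formulation, the finite-difference gradient operator, and the weighting factors are assumptions for illustration, not the author's definitions.

```python
import torch
import torch.nn.functional as F

def spectral_ratio_loss(fused, reference, eps=1e-6):
    # Ratio between adjacent bands, compared between the fused output and the
    # reference (an assumed formulation; the thesis may define the ratio differently).
    ratio_fused = fused[:, 1:] / (fused[:, :-1] + eps)
    ratio_ref = reference[:, 1:] / (reference[:, :-1] + eps)
    return F.l1_loss(ratio_fused, ratio_ref)

def gradient_loss(fused, reference):
    # Horizontal and vertical finite differences as a simple gradient proxy.
    dx_f = fused[..., :, 1:] - fused[..., :, :-1]
    dy_f = fused[..., 1:, :] - fused[..., :-1, :]
    dx_r = reference[..., :, 1:] - reference[..., :, :-1]
    dy_r = reference[..., 1:, :] - reference[..., :-1, :]
    return F.l1_loss(dx_f, dx_r) + F.l1_loss(dy_f, dy_r)

def total_loss(fused, reference, w_spec=0.1, w_grad=0.1):
    # Pixel reconstruction term plus the two auxiliary terms; weights are placeholders.
    return (F.l1_loss(fused, reference)
            + w_spec * spectral_ratio_loss(fused, reference)
            + w_grad * gradient_loss(fused, reference))
```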
2. Model-driven deep neural networks have achieved satisfactory performance on the pansharpening task owing to their good interpretability. Inspired by the back-projection mechanism, a new back-projection-driven model, the spatial-spectral dual back-projection network (S²DBPN), is proposed to fuse the low spatial resolution multispectral image and the high spatial resolution panchromatic image by exploiting back-projection in both the spatial and the spectral domains. Specifically, S²DBPN consists of a spatial back-projection network, a spectral back-projection network, and a reconstruction network. In the spatial back-projection network, spatial down- and up-projection modules derived from back-projection are responsible for projecting the low-resolution multispectral image in the spatial domain (a sketch of such projection modules is given after the summary below). By analogy with the spatial back-projection, the degradation between the high spatial resolution multispectral image and the panchromatic image is redefined as spectral down- and up-projections, and the spectral back-projection network is constructed to project the panchromatic image along the channel dimension. Finally, the features from the spatial and spectral back-projection networks are integrated by the reconstruction network to produce the desired high-resolution multispectral image.

To sum up, two remote sensing image fusion methods based on deep neural networks are studied in this thesis, and extensive experiments are conducted with both methods on reduced-scale and full-scale datasets from the GeoEye-1 and QuickBird satellites. The experimental results demonstrate the effectiveness of the proposed methods.
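As a concrete illustration of the back-projection mechanism behind the spatial down- and up-projection modules in method 2, here is a minimal DBPN-style up-projection sketch in PyTorch. The layer choices, kernel sizes, and the scale factor of 4 are assumptions for illustration and do not reproduce the exact S²DBPN modules.

```python
import torch
import torch.nn as nn

class SpatialUpProjection(nn.Module):
    """DBPN-style up-projection: upsample, re-degrade, correct with the back-projected residual."""
    def __init__(self, channels=64, scale=4):
        super().__init__()
        k, s, p = scale * 2, scale, scale // 2          # deconv/conv settings tied to the scale
        self.up1 = nn.ConvTranspose2d(channels, channels, k, stride=s, padding=p)
        self.down = nn.Conv2d(channels, channels, k, stride=s, padding=p)
        self.up2 = nn.ConvTranspose2d(channels, channels, k, stride=s, padding=p)
        self.act = nn.PReLU()

    def forward(self, lr_feat):
        hr0 = self.act(self.up1(lr_feat))               # initial high-resolution estimate
        lr0 = self.act(self.down(hr0))                  # project the estimate back to low resolution
        residual = lr0 - lr_feat                        # low-resolution reconstruction error
        return hr0 + self.act(self.up2(residual))       # correct the estimate with the projected error

# Usage sketch: lift 64-channel features of a 64x64 multispectral patch to 256x256.
feat = torch.randn(1, 64, 64, 64)
print(SpatialUpProjection()(feat).shape)                # torch.Size([1, 64, 256, 256])
```

A down-projection module would mirror this structure (convolve down, deconvolve back up, correct with the high-resolution residual), and the spectral branch replaces the spatial resampling with projections along the channel dimension.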
Keywords/Search Tags:Remote sensing image fusion, Pansharpening, Multispectral image, Panchromatic image, Back projection