
Remote Sensing Image Fusion Algorithm Based On Cross-layer Information Transfer Fusion CNN

Posted on: 2021-01-23
Degree: Master
Type: Thesis
Country: China
Candidate: M L Wang
Full Text: PDF
GTID: 2392330629952698
Subject: Computer application technology

Abstract/Summary:
With the rapid development of remote sensing technology, various types of sensors have been mounted on satellites to acquire different types of remote sensing images. However, owing to the limitation of radiant energy, a single satellite sensor cannot acquire images of the same area with both high spatial resolution and high spectral resolution; it can only provide multispectral (MS) images with rich spectral information but low spatial resolution and panchromatic (PAN) images with high spatial resolution but low spectral resolution. Yet practical applications such as vegetation identification, environmental monitoring, and lithology analysis rely on spatial information to describe textures and on spectral information for classification. Remote sensing image fusion technology therefore emerged to fuse MS and PAN images effectively into high-quality fused images that meet these practical requirements.

Traditional remote sensing image fusion algorithms require fusion rules to be selected manually for different types of images, and the fusion quality depends heavily on the image decomposition method, the number of decomposition levels, and the fusion rule chosen at each level; as a result, the fusion quality is highly unstable. In recent years, convolutional neural networks (CNNs) have been widely applied to image segmentation, computer vision recognition, image classification, and other fields. Owing to properties such as weight sharing and local connectivity, they outperform traditional methods. U-Net is a convolutional neural network with a symmetric structure. It uses cross-layer information connections to combine cropped low-level feature maps with higher-level feature maps for further processing, which better preserves feature-map information at multiple scales and reduces information loss, allowing it to solve pixel-level segmentation of medical images more efficiently and accurately.

Inspired by U-Net, and in order to overcome the shortcomings of traditional remote sensing image fusion methods while exploiting the advantages of convolutional neural networks in image processing, a remote sensing image fusion algorithm based on CLIT-FCNN (Cross-Layer Information Transfer Fusion CNN) is proposed. Cross-layer information transfer is used to fuse feature maps at different scales, reducing the loss of image information and improving the quality of the fused image. The trained network implicitly represents a robust end-to-end remote sensing image fusion rule.

The proposed network model is trained on a large amount of real remote sensing image data covering different types of ground features. The validity and robustness of the proposed algorithm are verified on multiple sets of DEIMOS-2, QuickBird, and GF-2 satellite remote sensing images, and the quality of the fused images is evaluated by a combination of visual perception and objective evaluation criteria. Experimental results demonstrate that, compared with several state-of-the-art traditional methods, the CLIT-FCNN algorithm effectively combines the spatial information of the panchromatic image with the spectral information of the multispectral image, and that the algorithm exhibits strong stability.
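The abstract does not specify the exact CLIT-FCNN architecture, so the following PyTorch sketch only illustrates the general idea it describes: a U-Net-style network with cross-layer (skip) connections that fuses an upsampled MS image with a PAN image. The class name FusionNet, the band counts, and all channel widths are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU; padding preserves spatial size."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class FusionNet(nn.Module):
    """Hypothetical U-Net-style fusion of a 4-band MS image and a 1-band PAN image."""

    def __init__(self, ms_bands=4, pan_bands=1):
        super().__init__()
        self.enc1 = conv_block(ms_bands + pan_bands, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec2 = conv_block(128, 64)   # 64 (skip) + 64 (upsampled)
        self.up1 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = conv_block(64, 32)    # 32 (skip) + 32 (upsampled)
        self.out = nn.Conv2d(32, ms_bands, kernel_size=1)

    def forward(self, ms_up, pan):
        # ms_up: MS image upsampled to PAN resolution, shape (N, 4, H, W)
        # pan:   PAN image, shape (N, 1, H, W)
        x = torch.cat([ms_up, pan], dim=1)
        e1 = self.enc1(x)                      # full scale
        e2 = self.enc2(self.pool(e1))          # 1/2 scale
        b = self.bottleneck(self.pool(e2))     # 1/4 scale
        # cross-layer information transfer: concatenate encoder features
        # with upsampled decoder features at each scale
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.out(d1)                    # fused 4-band image


if __name__ == "__main__":
    net = FusionNet()
    ms = torch.randn(1, 4, 128, 128)   # MS already upsampled to PAN size
    pan = torch.randn(1, 1, 128, 128)
    print(net(ms, pan).shape)          # torch.Size([1, 4, 128, 128])

In this sketch the cross-layer connections play the role the abstract attributes to CLIT-FCNN: low-level feature maps from the encoder are passed directly to the corresponding decoder stage so that multi-scale detail is retained in the fused output.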
Keywords/Search Tags: Remote Sensing Image Fusion, Convolutional Neural Network, Machine Learning, Stability