
Research On Fusion And Evaluation Of Multispectral And Panchromatic Remote Sensing Images

Posted on: 2021-01-26
Degree: Master
Type: Thesis
Country: China
Candidate: D Guo
Full Text: PDF
GTID: 2492306047986439
Subject: Systems Engineering
Abstract/Summary:
Remote sensing image fusion is one of the core technologies of remote sensing image applications. Its purpose is to compensate for the weaknesses of individual image sources by merging multi-source images into a single image with higher clarity and recognizability. Because remote sensing images involve large volumes of highly redundant data, their analysis and processing are comparatively complex. Traditional remote sensing image fusion focuses on how to combine a high-resolution panchromatic (PAN) image with a low-resolution multispectral (MS) image: spatial information is taken from the PAN image and color information from the MS image. This fails to make full use of the spatial information in the MS image, so the improvement in fused image quality is limited.

With the development of deep learning, introducing it into remote sensing image fusion has attracted growing attention. The inspiration comes from super-resolution reconstruction, for example the Super-Resolution Convolutional Neural Network (SRCNN), which feeds images of different resolutions into a neural network, outputs reconstructed images after learning, and achieves good results. However, such networks also have shortcomings, such as too few convolutional layers to fully learn the features of the input images.

Building on previous studies, this thesis designs a multi-resolution deep learning fusion method based on the U-shaped image segmentation network (U-Net) to address the insufficient extraction of MS spatial information and of features in deep learning networks. The spatial features of the PAN and MS images are extracted at multiple scales while better color information is preserved, yielding a fusion result with higher spatial resolution and better color fidelity. Finally, a multi-dimensional, deep-learning-based fusion evaluation factor is proposed to address the inconsistency among fusion evaluation indexes. The factor considers the scores of different basic evaluation factors, assigns a weight to each through an attention mechanism, and produces a comprehensive evaluation of the fused image; this overcomes the inconsistency among basic evaluation factors and guides the fusion network toward learning a better mapping.

The main contents of this thesis are as follows:

(1) The convolutional layers of a traditional spatial network are not deep enough to extract sufficient feature information, yet blindly increasing the network depth makes training difficult. In response, this thesis proposes a multi-level convolution module that extracts features from images at different levels and designs a dedicated skip-connection scheme, making information propagation through the whole network more efficient and allowing the network to extract image features more fully.

(2) For the fusion of multi-source images, this thesis sharpens and upsamples the MS image, stacks it with the PAN image, and feeds the result into the network, reducing the network's complexity and its number of parameters.

(3) Feeding the network images of the same region at different scales not only increases the utilization of training samples but also lets the network extract image features from different perspectives. This thesis proposes a new U-Net-based image fusion network that extracts features of the input images at different scales and then fuses them (a minimal sketch of this network is given after this list).
(4) To address the inconsistency between different fusion indicators and visual inspection, this thesis proposes a deep-learning-based evaluation index. The index assigns weights to the different basic indicators through an attention mechanism, giving more weight to important indicators and less attention to unimportant ones (a sketch of this weighting idea also follows this list).

The experimental results show that, compared with three traditional methods and three deep-learning-based methods, the fusion network proposed in Chapter 3 produces improved fusion results once the deep-learning evaluation index proposed in Chapter 4 is introduced. This demonstrates that the evaluation factor can absorb the advantages of the different basic indicators and overcome the inconsistency between them, which gives it practical value.
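The U-Net-based fusion idea of point (3) can be illustrated with a minimal PyTorch sketch. It assumes a 4-band MS image already upsampled to the PAN resolution, a two-scale encoder, and a single skip connection; the layer widths, depth, and the multi-level convolution module of the actual thesis network are not reproduced here.

```python
import torch
import torch.nn as nn

class FusionUNet(nn.Module):
    """Minimal U-Net-style pansharpening sketch: the upsampled MS image is
    concatenated with the PAN image, encoded at two scales, and decoded
    with one skip connection back to full resolution."""

    def __init__(self, ms_bands=4):
        super().__init__()
        in_ch = ms_bands + 1                      # stacked MS + PAN input
        self.enc1 = self._block(in_ch, 32)        # full-resolution features
        self.down = nn.MaxPool2d(2)
        self.enc2 = self._block(32, 64)           # half-resolution features
        self.up = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = self._block(64, 32)           # 64 = 32 (skip) + 32 (upsampled)
        self.head = nn.Conv2d(32, ms_bands, kernel_size=1)

    @staticmethod
    def _block(in_ch, out_ch):
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, ms_up, pan):
        x = torch.cat([ms_up, pan], dim=1)        # (B, ms_bands+1, H, W)
        f1 = self.enc1(x)
        f2 = self.enc2(self.down(f1))
        d1 = self.dec1(torch.cat([self.up(f2), f1], dim=1))  # skip connection
        return self.head(d1)                      # fused MS image


# Example: 4-band MS upsampled to the PAN resolution (256 x 256).
net = FusionUNet(ms_bands=4)
fused = net(torch.randn(1, 4, 256, 256), torch.randn(1, 1, 256, 256))
print(fused.shape)  # torch.Size([1, 4, 256, 256])
```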
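The attention-weighted evaluation factor of point (4) can likewise be sketched. The choice of basic metrics, their normalisation, and the structure of the attention branch are assumptions for illustration; only the idea of learning per-metric weights and combining them into a single score follows the text.

```python
import torch
import torch.nn as nn

class AttentionFusionScore(nn.Module):
    """Sketch of an attention-weighted evaluation factor: several basic
    metric scores are combined into one score, with per-metric weights
    produced by a small attention branch rather than fixed by hand."""

    def __init__(self, num_metrics=4, hidden=16):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Linear(num_metrics, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, num_metrics),
        )

    def forward(self, scores):
        # scores: (B, num_metrics); each metric is assumed to be scaled
        # beforehand so that larger values mean better fusion quality.
        weights = torch.softmax(self.attn(scores), dim=-1)  # per-metric weights
        return (weights * scores).sum(dim=-1)               # combined score


# Example: combine four hypothetical metric scores for two fused images.
scorer = AttentionFusionScore(num_metrics=4)
combined = scorer(torch.tensor([[0.9, 0.7, 0.8, 0.6],
                                [0.5, 0.9, 0.4, 0.7]]))
print(combined.shape)  # torch.Size([2])
```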
Keywords/Search Tags: remote sensing image fusion, deep learning, fusion method, evaluation index