
Research On Hyperspectral Image Based Multi-source Remote Sensing Image Fusion

Posted on: 2020-08-26    Degree: Doctor    Type: Dissertation
Country: China    Candidate: C R Ge    Full Text: PDF
GTID: 1362330602967983    Subject: Communication and Information System
Abstract/Summary:
Hyperspectral remote sensing images contain detailed, abundant, and continuous spectra that characterize the unique attributes of ground objects. They therefore have broad application prospects in disaster detection, forest supervision, urban planning and construction, agricultural production estimation, and other fields. However, the spatial resolution of hyperspectral images is low, and phenomena such as spectral aliasing and "different objects with the same spectrum" seriously degrade classification accuracy in complex scenes. Hyperspectral images and remote sensing images from other sources carry rich complementary and redundant information; exploiting this information fully can improve the performance of intelligent processing algorithms and better serve a variety of applications. Nevertheless, multi-source image fusion based on hyperspectral images still faces several open problems. For the fusion of hyperspectral and panchromatic images: how can the inaccurate extraction of the endmember subspace in the fusion model be corrected? For the fusion of hyperspectral and LiDAR images: how can samples be calibrated in complex scenes, how should spatial features be extracted, and how can spatial-spectral-height information be used to improve classification accuracy, that is, how should the fusion framework be designed?

To address these problems, this dissertation studies the combined effect of the hyperspectral unmixing model and the hyperspectral pansharpening model to extract a more accurate hyperspectral endmember subspace. For the fusion of hyperspectral and LiDAR images, it studies sample calibration based on multi-source remote sensing images, spatial feature extraction, and fusion framework design, and evaluates the proposed methods on multiple real remote sensing data sets. The main innovative work of the dissertation is as follows:

(1) To address the low spatial resolution of spaceborne hyperspectral images, this dissertation proposes a hyperspectral and panchromatic image fusion method that combines a hyperspectral sparse unmixing model with a hyperspectral pansharpening model. Sparse unmixing is used to extract an endmember subspace composed of spectra from a spectral library, which alleviates the inaccurate endmember subspace extraction that affects low-spatial-resolution hyperspectral images. On the Moffett image, the ERGAS of the proposed method is 3.65, smaller than the 3.73 obtained by the traditional vertex component analysis subspace algorithm.
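For reference, ERGAS (erreur relative globale adimensionnelle de synthèse) is the relative global error index quoted above; lower values mean a better fused product. The following is a minimal sketch of how it is commonly computed, assuming band-first arrays and a hypothetical function name; it is an illustration, not code from the dissertation.

```python
import numpy as np

def ergas(fused, reference, ratio=0.25):
    """ERGAS quality index (lower is better).

    fused, reference : arrays of shape (bands, rows, cols)
    ratio            : pixel-size ratio between the panchromatic and the
                       hyperspectral image (e.g. 0.25 for a 4x resolution gap)
    """
    bands = reference.shape[0]
    acc = 0.0
    for k in range(bands):
        diff = fused[k].astype(np.float64) - reference[k].astype(np.float64)
        rmse = np.sqrt(np.mean(diff ** 2))          # per-band RMSE
        acc += (rmse / np.mean(reference[k])) ** 2  # normalized by band mean
    return 100.0 * ratio * np.sqrt(acc / bands)
```

A perfect reconstruction gives ERGAS = 0, so the drop from 3.73 to 3.65 reported for the Moffett image indicates a modest but consistent improvement.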
(2) To improve the classification accuracy of ground objects, the fusion of high-spatial-resolution airborne hyperspectral images with LiDAR images is studied. To solve the problem of sample calibration in complex scenes, this dissertation proposes a sample calibration method using multi-source remote sensing images that can be applied to real data. To address the slow speed of the traditional generalized graph-based fusion (GGF) algorithm in hyperspectral and LiDAR image fusion, this dissertation uses superpixel segmentation to remove samples with large spectral differences in the local neighborhood and then uses local pixel neighborhood preserving embedding to maintain spatial locality and quickly reduce the dimensionality of the multi-source stacked features. Test results on the 2012 Houston dataset show that, compared with GGF, the proposed method improves classification accuracy by 0.5% while the fusion is 51.6% faster. This dissertation also proposes to use the stacked features of the canopy height model (CHM) and the digital terrain model (DTM) to represent the LiDAR image, replacing the digital surface model (DSM) used in traditional algorithms. Test results on the Rochester dataset show that the overall classification accuracy with the stacked CHM and DTM features is 84.11%, much higher than the 69.71% obtained with the DSM.

(3) For the problem of spatial feature selection and fusion framework design in hyperspectral and LiDAR image fusion, this dissertation proposes a residual fusion strategy for the first time and designs a new fusion framework based on extinction profile (EP) features, local binary pattern (LBP) features, and collaborative representation classifiers. The residual fusion strategy avoids both the high dimensionality of stacked features and the coarse results of maximum voting, and can correct the classification results obtained from single-source features. Test results on the 2012 Houston dataset show that the proposed method improves classification accuracy by 2.35% compared with GGF and confirm that the stacked EP and LBP features achieve higher classification precision than either feature alone. The proposed fusion framework can be generalized to any spatial feature and to any classifier based on collaborative representation.

(4) Also for the problem of spatial feature selection and fusion framework design in hyperspectral and LiDAR image fusion, a deep residual network is used to extract deep features from the stacked EP and LBP features. This dissertation designs three fusion frameworks based on deep residual networks: deep feature fusion, probability reconstruction fusion, and probability multiplication fusion. In the deep feature fusion framework, features are stacked in the hidden layers of the network to avoid the high dimensionality caused by stacking features directly. The probability reconstruction fusion and probability multiplication fusion frameworks fuse the probability matrices directly, avoiding the coarse results of maximum voting, while the parameter selection problem is solved by a validation set or by the product of the probability matrices. Test results on the 2012 Houston dataset show that the proposed method improves classification accuracy by 4.97% compared with GGF. The three proposed fusion frameworks require no parameter tuning during fusion, are easy to implement, can be generalized to any spatial feature and to any deep learning network structure used for 3D data classification, and therefore have significant practical and application value.
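To illustrate the probability multiplication fusion framework described in (4), the sketch below fuses the per-class probability matrices produced by two single-source branches (for example, deep residual networks applied to the hyperspectral and LiDAR feature stacks) by element-wise multiplication and then takes the most probable class. The function name, the element-wise interpretation of the product, and the softmax-style inputs are assumptions made for illustration rather than the dissertation's implementation.

```python
import numpy as np

def probability_multiplication_fusion(prob_hsi, prob_lidar):
    """Fuse two per-class probability matrices by element-wise product.

    prob_hsi, prob_lidar : arrays of shape (num_samples, num_classes),
                           e.g. softmax outputs of two single-source branches.
    Returns a fused class label (0-based) for each sample.
    """
    fused = prob_hsi * prob_lidar                        # Hadamard product
    fused /= fused.sum(axis=1, keepdims=True) + 1e-12    # optional renormalization
    return np.argmax(fused, axis=1)
```

Because the product automatically down-weights classes on which the two sources disagree, no fusion parameter has to be tuned, which is consistent with the parameter-free property claimed for the proposed frameworks.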
Keywords/Search Tags: Hyperspectral image, LiDAR, panchromatic image, deep residual network, residual fusion, spatial feature, sample calibration, hyperspectral pansharpening