
Unsupervised Deep Learning Methods for Representation and Classification of Remote Sensing Images

Posted on: 2018-02-19    Degree: Master    Type: Thesis
Country: China    Candidate: Y Yu    Full Text: PDF
GTID: 2392330623950698    Subject: Information and Communication Engineering
Abstract/Summary:
Remote sensing images play an increasingly important role in practical tasks such as military reconnaissance, mapping, environmental monitoring, geographic information systems, and precision agriculture. The key step in these tasks is the feature representation of the remote sensing images. Because of the limited representational ability of traditional features, research has turned to deep models to obtain more discriminative features from remote sensing images. Among these deep models, convolutional neural networks (CNNs) are the best known and have achieved impressive results in many computer vision tasks. However, the good performance of CNNs requires a large number of labeled samples, while in typical remote sensing scenes labeling data is usually time-consuming and costly; in some special circumstances, obtaining labels for remote sensing data is impossible. Therefore, building on convolutional neural networks, this work focuses on unsupervised learning methods, which do not need labeled data for training, and tries to solve the problems caused by the lack of labeled samples in real-world applications. The contributions of this thesis are as follows.

First, this thesis proposes a Balanced Data Driven Sparsity (BDDS) method to improve the unsupervised feature representation of CNNs. An analysis of the features extracted from remote sensing images shows that the distribution of the different kinds of image blocks is quite imbalanced. Training the model on randomly drawn samples therefore yields a sub-optimal model, because the imbalance forces the model to respond mainly to the most abundant features, such as smooth regions, which harms the representation of remote sensing images. The proposed BDDS method forces the training samples to be balanced and diverse, so that every category of samples elicits a response from the model and its representational ability is improved.

Second, this thesis presents an Unsupervised Convolutional Feature Fusion Network (UCFFN), which realizes unsupervised learning of a deep model together with information fusion at the feature level. UCFFN has two theoretical advantages. First, because unsupervised learning cannot use labels to obtain effective feedback, the training process cannot determine whether the output of an earlier layer is useful for the subsequent layer, which leads to the loss of effective information during training; the proposed UCFFN can recapture this information. Second, UCFFN can combine the different levels of abstraction obtained at different depths of the deep model, further improving the representational ability for remote sensing images.

Third, experiments conducted on a real-world remote sensing image dataset (the UC Merced Land Use Dataset) validate that BDDS effectively improves the unsupervised feature representation of the deep model, and UCFFN achieves a classification accuracy of 88.57% on this dataset, demonstrating the effectiveness of the proposed methods. The proposed methods are shown to obtain comparable or even better results with fewer layers than other deep models. Moreover, the results on a multispectral dataset (Brazilian Coffee Scenes) also show the generalization ability of UCFFN.
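The abstract does not give implementation details for BDDS. A minimal, hypothetical sketch of the balanced-sampling idea is shown below, assuming image patches are grouped by k-means clustering and an equal quota is drawn from each group so that abundant smooth patches no longer dominate unsupervised training; the function name, cluster count, and quota are illustrative, not the thesis code.

```python
# Hypothetical sketch of balanced patch sampling in the spirit of BDDS.
# Assumption: k-means groups the patches and an equal number is drawn per group.
import numpy as np
from sklearn.cluster import KMeans

def balanced_patch_sampling(patches, n_clusters=10, per_cluster=500, seed=0):
    """patches: (N, D) array of flattened image patches; returns a balanced subset."""
    rng = np.random.default_rng(seed)
    labels = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(patches)
    selected = []
    for c in range(n_clusters):
        idx = np.flatnonzero(labels == c)
        # Draw the same quota from every cluster; sample with replacement
        # only when a cluster holds fewer patches than the quota.
        take = rng.choice(idx, size=per_cluster, replace=len(idx) < per_cluster)
        selected.append(take)
    return patches[np.concatenate(selected)]
```

The balanced subset would then replace the randomly sampled patches used to train the unsupervised feature extractor.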
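Likewise, the fusion idea behind UCFFN can be illustrated with a small sketch that concatenates globally pooled features from two different depths before classification. The framework (PyTorch), layer sizes, depths, pooling choice, and the 21-class output (the number of UC Merced categories) are assumptions for illustration, not the thesis architecture.

```python
# Hypothetical sketch of depth-wise feature fusion in the spirit of UCFFN.
import torch
import torch.nn as nn

class FusionSketch(nn.Module):
    """Fuses globally pooled features from two depths before classification."""
    def __init__(self, num_classes=21):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(3, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2))
        self.block2 = nn.Sequential(nn.Conv2d(32, 64, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2))
        self.pool = nn.AdaptiveAvgPool2d(1)   # pool each depth to one value per feature map
        self.classifier = nn.Linear(32 + 64, num_classes)

    def forward(self, x):
        f1 = self.block1(x)                   # shallower, less abstract features
        f2 = self.block2(f1)                  # deeper, more abstract features
        fused = torch.cat([self.pool(f1).flatten(1),
                           self.pool(f2).flatten(1)], dim=1)
        return self.classifier(fused)

# Example: a batch of four 3-channel 64x64 scene patches -> 21 class scores.
logits = FusionSketch()(torch.randn(4, 3, 64, 64))
```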
Keywords/Search Tags: Unsupervised Deep Learning, Remote Sensing Image, Sparse Representation, Feature Fusion, Convolutional Neural Networks (CNNs)