With the development of hyperspectral data classification technology, achieving high classification accuracy has become a research hotspot in remote sensing applications such as agriculture, forestry monitoring, and environmental monitoring. Traditional machine-learning classification algorithms depend heavily on large data sets and suffer low accuracy without them. At the same time, because labeling and processing hyperspectral data is complicated, insufficient data is unavoidable in practice: it is difficult for researchers to obtain large labeled hyperspectral remote sensing data sets, yet traditional classification algorithms achieve high accuracy only when trained on large-scale data. For this reason, this thesis takes hyperspectral data classification under small-sample conditions as its research object and studies how to extract accurate and useful information from hyperspectral data more quickly using only a small number of labeled samples, or unlabeled samples. The main research is as follows.

First, this thesis reviews small-sample (few-shot) learning, surveys several traditional hyperspectral image classification methods together with deep learning methods, and analyzes the shortcomings of the traditional classification methods. According to the number of labeled samples available in the target scene, the work is divided into two parts.

In the first part, a small number of labeled samples are used to design a hyperspectral data classification algorithm based on a depthwise separable relation network. First, to make the model easier to train, depthwise separable convolution is introduced to reduce the computational cost of the model and its dependence on computing power. Second, the Leaky-ReLU activation function is introduced into each layer of the neural network to improve training efficiency and enhance the model's ability to handle complex environments. Finally, a cosine annealing learning-rate schedule is introduced to keep the model from falling into local optima and to enhance its robustness. Compared with traditional methods, the proposed algorithm achieves higher classification accuracy.

In the second part, unlabeled samples are used to design a hyperspectral data classification algorithm based on cross-scene adaptive learning. First, on the basis of unsupervised domain adaptation, cross-scene knowledge transfer is carried out to reduce the difference between the source scene and the target scene. At the same time, deep hyperparameter convolution is used to deeply embed the model, improving its convergence speed and feature-extraction capacity. The Manhattan distance is then learned in a Manhattan metric space to reduce the computational cost of the model. Finally, a weighted K-nearest-neighbor classifier is introduced for classification; weights related to the Manhattan metric distance are assigned to the clustered samples to improve the handling of imbalanced hyperspectral image data. Experiments show that this method can effectively process samples outside the training set.
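The computational saving from depthwise separable convolution mentioned in the first part can be illustrated with a parameter count. The sketch below is a minimal example with assumed layer sizes (64 input channels, 128 output channels, 3×3 kernels); the thesis does not specify its architecture, and biases are ignored for simplicity.

```python
def standard_conv_params(c_in, c_out, k):
    # Standard convolution: one k x k filter per (input, output) channel pair.
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # Depthwise step: one k x k filter per input channel.
    # Pointwise step: a 1 x 1 convolution mapping c_in channels to c_out.
    return c_in * k * k + c_in * c_out

c_in, c_out, k = 64, 128, 3  # assumed example sizes
std = standard_conv_params(c_in, c_out, k)        # 73728 parameters
sep = depthwise_separable_params(c_in, c_out, k)  # 8768 parameters
print(std, sep, round(std / sep, 1))              # roughly 8.4x fewer
```

The ratio grows with kernel size and channel count, which is why the substitution lowers the model's dependence on computing power.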
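The Leaky-ReLU activation used in each layer differs from plain ReLU only in how it treats negative inputs. A minimal scalar sketch (the slope `alpha=0.01` is a common default, not a value taken from the thesis):

```python
def leaky_relu(x, alpha=0.01):
    # Positive inputs pass through unchanged; negative inputs keep a small
    # slope alpha instead of being zeroed, so units never stop receiving
    # gradient ("dead ReLU" problem), which helps training efficiency.
    return x if x > 0 else alpha * x

print(leaky_relu(2.0))   # 2.0
print(leaky_relu(-2.0))  # -0.02
```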
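The cosine annealing schedule from the first part can be written in closed form. The sketch below uses the standard formulation; the bounds `lr_max` and `lr_min` and the period `T` are placeholders, since the thesis does not report its training hyperparameters.

```python
import math

def cosine_annealed_lr(t, T, lr_max=0.1, lr_min=0.0):
    # Learning rate decays from lr_max at step 0 to lr_min at step T
    # along a half cosine, avoiding the abrupt drops of step schedules.
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t / T))

print(cosine_annealed_lr(0, 100))    # 0.1 (start of cycle)
print(cosine_annealed_lr(50, 100))   # ~0.05 (halfway)
print(cosine_annealed_lr(100, 100))  # ~0.0 (end of cycle)
```

Restarting the cycle periodically (warm restarts) is what lets the optimizer escape local optima, since the learning rate is briefly raised again after each period.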
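The weighted K-nearest-neighbor step in the second part can be sketched as follows. This is an illustrative implementation under assumed details: votes are weighted by inverse Manhattan distance and the toy 2-D feature vectors and labels are invented for the example, not data from the thesis.

```python
from collections import defaultdict

def manhattan(a, b):
    # L1 (Manhattan) distance between two feature vectors: a sum of
    # absolute differences, cheaper than the squares/roots of Euclidean.
    return sum(abs(x - y) for x, y in zip(a, b))

def weighted_knn(query, samples, labels, k=3):
    # Distance-weighted k-NN: each of the k nearest neighbors votes with
    # weight 1/(d + eps), so a close minority-class sample can outvote
    # several distant majority-class samples (helps imbalanced data).
    eps = 1e-9
    neighbors = sorted(zip(samples, labels),
                       key=lambda s: manhattan(query, s[0]))[:k]
    votes = defaultdict(float)
    for x, y in neighbors:
        votes[y] += 1.0 / (manhattan(query, x) + eps)
    return max(votes, key=votes.get)

# Toy example: class 1 has a single sample, yet wins for a nearby query
# because its inverse-distance weight dominates the two farther class-0 votes.
X = [(0.0, 0.0), (0.2, 0.1), (0.1, 0.3), (1.0, 1.0)]
y = [0, 0, 0, 1]
print(weighted_knn((0.9, 0.95), X, y, k=3))  # 1
```

An unweighted 3-NN vote on the same query would return class 0 by majority, which is exactly the imbalance failure the distance weighting addresses.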