Hyperspectral images are characterized by high spectral resolution, a large number of bands, and a large volume of data, and they have been widely used in precision agriculture, geological surveying and mapping, mineral exploration, and environmental monitoring. With the development of deep learning, researchers have begun to classify hyperspectral images with neural networks and have achieved good results. However, existing neural-network-based hyperspectral image classification methods suffer from problems such as large numbers of network parameters, vanishing gradients, and insufficient use of spatial information, which limit further improvements in classification accuracy and computational efficiency. To address these issues, this thesis carries out the following work:

(1) To address feature redundancy in hyperspectral images, this thesis proposes a random-forest-based recursive feature elimination method (RF-RFE) for feature selection. Based on the random forest model, the feature with the lowest importance score is removed step by step in a sequential backward selection manner until the required number of features is reached, yielding the feature subset with the best classification performance (a minimal sketch is given after this summary).

(2) To address the shortage of training samples caused by the high cost of manual labeling, this thesis proposes a data augmentation method based on the deep convolutional generative adversarial network (DCGAN). While the training data are expanded, the spatial characteristics of the hyperspectral image are also fully learned. DCGAN is used to generate new bands, which are combined with the optimal bands obtained by feature selection to form the new experimental data (see the generator sketch below).

(3) An adversarial training strategy is adopted to construct the convolutional neural network classification framework. In the experiments, the ResNet-18 network is trained with the adversarial training strategy, and the trained network is then used to identify test-sample pixel blocks, obtain the class label of the central test pixel, and compute the recognition accuracy.

(4) To address the large parameter count and low computational efficiency of deep neural networks, this thesis proposes a network lightweighting method based on knowledge distillation. The trained ResNet-18 model serves as the teacher network, a small network with two convolutional layers serves as the student network, and the knowledge learned by the teacher is transferred to the student (see the distillation-loss sketch below). After 200 training iterations, the recognition accuracy of the student network reaches 99.64%.

(5) The proposed method is compared with traditional machine learning methods such as KNN and support vector machines. The experimental results show that, compared with these traditional methods, the proposed method achieves higher recognition accuracy and computational efficiency, and has certain value for promotion and application.
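
For point (1), the following is a minimal sketch of RF-RFE band selection using scikit-learn. The arrays X and y and the target of 30 retained bands are illustrative placeholders, not the thesis's actual data or settings.

```python
# Minimal sketch of random-forest-based recursive feature elimination (RF-RFE)
# for hyperspectral band selection, treating each spectral band as one feature.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

# Placeholder data: 1000 pixels x 200 spectral bands, 16 classes
# (stand-ins for the real hyperspectral cube and ground-truth labels).
X = np.random.rand(1000, 200)
y = np.random.randint(0, 16, size=1000)

rf = RandomForestClassifier(n_estimators=100, random_state=0)

# Sequential backward selection: remove the least important band at each step
# until the desired number of bands remains (30 here, chosen for illustration).
selector = RFE(estimator=rf, n_features_to_select=30, step=1)
selector.fit(X, y)

selected_bands = selector.get_support(indices=True)  # indices of retained bands
X_reduced = X[:, selected_bands]
```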
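
For point (2), the following is a minimal DCGAN generator sketch in PyTorch for producing synthetic band patches. The 64x64 patch size, layer widths, and 100-dimensional noise vector are assumptions for illustration, not the exact architecture used in the thesis.

```python
# Minimal DCGAN generator: maps a noise vector to a single-channel 64x64 patch.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=100, channels=1, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            # z_dim x 1 x 1 -> (feat*8) x 4 x 4
            nn.ConvTranspose2d(z_dim, feat * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feat * 8), nn.ReLU(True),
            # -> (feat*4) x 8 x 8
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 4), nn.ReLU(True),
            # -> (feat*2) x 16 x 16
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 2), nn.ReLU(True),
            # -> feat x 32 x 32
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat), nn.ReLU(True),
            # -> channels x 64 x 64, values in [-1, 1]
            nn.ConvTranspose2d(feat, channels, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

# Usage: sample noise and generate a batch of synthetic band patches.
z = torch.randn(16, 100, 1, 1)
fake_bands = Generator()(z)  # shape: (16, 1, 64, 64)
```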
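
For point (4), the following sketches a standard knowledge-distillation loss in PyTorch, assuming a trained teacher (e.g. ResNet-18) and a small two-convolutional-layer student. The temperature T and weighting alpha are illustrative hyperparameters rather than the thesis's values.

```python
# Knowledge-distillation loss: combine the teacher's softened predictions
# (soft targets) with the ordinary cross-entropy on the true labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft-target term: match the teacher's temperature-softened distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: standard cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Usage inside a training step (teacher frozen, student being trained):
# with torch.no_grad():
#     teacher_logits = teacher(pixel_blocks)
# loss = distillation_loss(student(pixel_blocks), teacher_logits, labels)
```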