The rapid development and large-scale application of optical remote sensing have generated massive volumes of hyperspectral remote sensing images that require processing. The classification of targets in these images is an essential part of hyperspectral image processing. Hyperspectral images contain rich spectral and spatial information, providing powerful data support for supervised learning methods, and many efficient classification methods have been produced by employing convolutional networks to extract and integrate spatial and channel information from hyperspectral images. However, the hyperspectral classification task still faces challenges such as insufficient local feature extraction and a shortage of available training samples. This paper therefore focuses on enhancing the model's feature extraction ability and alleviating the limitation that scarce samples place on model training. Specifically, we design a hyperspectral image classification model, AMSFE-CNN, built on multi-scale feature extraction and an attention-guided fusion mechanism. The model is a dual-branch network combining 3D and 2D convolutions: a spectral residual branch preserves complete spectral information, while the other branch extracts multi-scale local and global features through a pyramid convolution and an attention-guided Inception module. To demonstrate the classification performance of the proposed model and the rationality of each module's design, we conducted comparative and ablation experiments on three datasets: Pavia University (PU), Indian Pines (IP), and Salinas (SV). The overall classification accuracy of AMSFE-CNN on the PU, IP, and SV datasets is 99.71%, 97.75%, and 99.99%, respectively, with only 94 and 16 misclassified samples out of the 207,400 and 111,160 tested samples on PU and SV, corresponding to misclassification rates of about 0.045% and 0.014%; this fully demonstrates the model's outstanding classification performance.

To address the problem of a
shortage of available samples for model training, we propose an Asymmetric Google Transfer Learning Network (AGTLNet) that learns general-purpose parameters from natural-image data. The network is built by concatenating Inception structures that contain asymmetric convolution blocks. Experimental results showed that the model achieved its best classification performance when the parameters frozen before the second transfer node were transferred to the hyperspectral image task. With limited training samples, AGTLNet reaches 80% accuracy on the SV dataset, higher than the classification accuracy of SVM and other non-transfer-learning networks. The transfer strategy introduced in AGTLNet alleviates, to some extent, the difficulty of model training in hyperspectral image classification, effectively improves classification performance under small-sample conditions, and validates the effectiveness and feasibility of transfer learning for hyperspectral classification.
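The dual-branch 3D/2D design described above can be sketched as follows. This is a minimal illustration assuming PyTorch; the layer widths, kernel shapes, and fusion-by-concatenation scheme are assumptions for illustration, not the authors' exact AMSFE-CNN configuration.

```python
import torch
import torch.nn as nn


class DualBranchHSINet(nn.Module):
    """Sketch of a dual-branch 3D/2D CNN for hyperspectral patches.

    Layer widths and the fusion scheme are illustrative assumptions,
    not the exact AMSFE-CNN configuration.
    """

    def __init__(self, bands=30, n_classes=9):
        super().__init__()
        # Spectral branch: 3D convolutions along the band axis; a
        # residual connection preserves the full spectral signature.
        self.spectral = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 1, 1), padding=(3, 0, 0)),
            nn.ReLU(),
            nn.Conv3d(8, 8, kernel_size=(7, 1, 1), padding=(3, 0, 0)),
        )
        # Spatial branch: 2D convolutions at two scales, a stand-in for
        # the pyramid-convolution multi-scale extraction.
        self.spatial3 = nn.Conv2d(bands, 16, kernel_size=3, padding=1)
        self.spatial5 = nn.Conv2d(bands, 16, kernel_size=5, padding=2)
        self.classifier = nn.Linear(8 * bands + 32, n_classes)

    def forward(self, x):                 # x: (N, 1, bands, H, W)
        spec = self.spectral(x) + x       # residual keeps spectral info
        spec = spec.mean(dim=(3, 4)).flatten(1)        # (N, 8*bands)
        cube = x.squeeze(1)                            # (N, bands, H, W)
        spat = torch.cat([torch.relu(self.spatial3(cube)),
                          torch.relu(self.spatial5(cube))], dim=1)
        spat = spat.mean(dim=(2, 3))                   # (N, 32)
        return self.classifier(torch.cat([spec, spat], dim=1))
```

A 9×9 patch with 30 (e.g. PCA-reduced) bands would be fed as an (N, 1, 30, 9, 9) tensor; the two branch outputs are fused by concatenation before the classifier.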
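The asymmetric-convolution Inception blocks and the layer-freezing transfer step can be sketched like this. The branch widths, the block composition, and the `freeze_up_to` helper are illustrative assumptions, not the exact AGTLNet design.

```python
import torch
import torch.nn as nn


class AsymInception(nn.Module):
    """Inception-style block whose n×n convolution is factorised into
    1×n followed by n×1 (asymmetric) convolutions; branch widths here
    are illustrative assumptions."""

    def __init__(self, in_ch, n=3):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, 16, kernel_size=1)       # 1x1 path
        self.b2 = nn.Sequential(                            # asymmetric path
            nn.Conv2d(in_ch, 32, kernel_size=1),
            nn.Conv2d(32, 32, kernel_size=(1, n), padding=(0, n // 2)),
            nn.Conv2d(32, 32, kernel_size=(n, 1), padding=(n // 2, 0)),
        )
        self.b3 = nn.Sequential(                            # pooling path
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.Conv2d(in_ch, 16, kernel_size=1),
        )

    def forward(self, x):          # output has 16 + 32 + 16 = 64 channels
        return torch.relu(
            torch.cat([self.b1(x), self.b2(x), self.b3(x)], dim=1))


def freeze_up_to(model, node_name):
    """Freeze every top-level child up to and including `node_name`;
    later layers stay trainable for fine-tuning on hyperspectral data."""
    freeze = True
    for name, child in model.named_children():
        if freeze:
            for p in child.parameters():
                p.requires_grad = False
        if name == node_name:
            freeze = False
```

After pre-training the concatenated blocks on natural images, freezing everything before the chosen transfer node would look like `freeze_up_to(net, "incep1")`, leaving only the later blocks to adapt to the hyperspectral task.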