
Research On The Interpretability Of SAR Target Classification Deep Neural Network

Posted on: 2024-01-25    Degree: Master    Type: Thesis
Country: China    Candidate: Y Jiang    Full Text: PDF
GTID: 2568307079465294    Subject: Electronic information
Abstract/Summary:
In recent years, deep learning techniques have developed rapidly and demonstrated excellent performance in many fields. In the SAR target recognition task, deep network schemes also greatly improve accuracy over traditional methods based on feature engineering. However, the high performance of deep networks depends mainly on fitting parameters to large amounts of labeled data, and it is difficult for people to understand their internal working mechanisms and decision-making logic. This opacity limits the application of deep network solutions in critical areas with high reliability requirements, and is also a bottleneck hindering further research on deep learning. Research on the interpretability of deep networks aims to help people understand the network's behavior and increase the transparency of the system. The main application scenario of SAR target recognition is the military field, which places extremely high demands on the reliability and stability of the system. Understanding the internal logic of the network and building an interpretable model is therefore of great significance for the practical deployment of deep network systems in this field. This thesis focuses on the SAR target classification task and studies the interpretability of deep networks from both passive and active perspectives. The main work is as follows:

(1) To explore the salient regions of the image that play a key role in the network's correct classification of SAR targets, this thesis proposes a class activation mapping algorithm for SAR image input. The method perturbs the input image with noise obeying a Rayleigh distribution, similar to the background clutter distribution of SAR images, until the model's decision flips, and uses the degree of change in the activation values output by the network's neurons during the decision-flipping process to measure the importance of each channel to the model's judgment. Finally, the channel importances are used as weight coefficients to combine the feature maps, which are mapped back to the input space to generate a saliency map that locates the key regions. Compared with the Grad-CAM++ and Score-CAM algorithms, this method locates more accurate salient regions for deep networks on the SAR image classification task.

(2) One reason deep networks are considered opaque is that they rely on a large number of parameters to fit statistical regularities in the data, while ignoring the physical connections behind the problem and the data. An interpretable system needs to take into account the specifics of the problem and the support of domain background knowledge. Aiming at the small differences between similar military man-made targets, this thesis proposes a network structure based on the physical characteristics of SAR targets: the SAR-SIFT algorithm is used to extract local invariant features of the targets, a bag-of-words model is used to build a feature dictionary over the dataset and generate the feature vocabulary vector of each test image, and the SAR target is classified by combining these local features with the global image features extracted by a CNN. Experiments show that this method not only improves recognition accuracy but also improves the robustness of the model when the test image is perturbed; moreover, by comparing the feature points before and after the perturbation, the reason for this improvement can be explained, increasing the transparency of the network.
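The channel-weighting idea in (1) can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the thesis's implementation: the exact perturbation strength, decision-flip search, and importance metric are not specified in the abstract, so an L1 activation-change proxy and fixed noise scale are used here, and all function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def rayleigh_perturb(img, scale=0.1, rng=rng):
    """Add Rayleigh-distributed noise (mimicking SAR background clutter)
    to a normalized image, clipping back to [0, 1]."""
    noise = rng.rayleigh(scale, size=img.shape)
    return np.clip(img + noise, 0.0, 1.0)

def channel_importance(acts_clean, acts_noisy):
    """Score each channel by how much its activation map changes between
    the clean and the decision-flipping perturbed input (L1 change,
    normalized so the scores sum to 1)."""
    delta = np.abs(acts_clean - acts_noisy).sum(axis=(1, 2))
    return delta / (delta.sum() + 1e-12)

def saliency_map(feature_maps, weights):
    """Combine feature maps (C, H, W) with channel weights (C,) into a
    single map, ReLU-clip, and normalize to [0, 1] for visualization."""
    cam = np.tensordot(weights, feature_maps, axes=(0, 0))
    cam = np.maximum(cam, 0.0)
    return cam / (cam.max() + 1e-12)
```

In practice the activations would come from a chosen convolutional layer of the trained classifier, and the low-resolution map would be upsampled to the input size before overlaying on the SAR image.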
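The fusion scheme in (2) can likewise be sketched. Assuming SAR-SIFT descriptors and a CNN global feature are already extracted (neither is implemented here), the bag-of-words step quantizes local descriptors against a learned codebook of visual words and concatenates the resulting histogram with the CNN feature; function names and dimensions are hypothetical.

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Assign each local descriptor (N, D) to its nearest visual word in
    the codebook (K, D) and return a normalized K-bin word histogram."""
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / (hist.sum() + 1e-12)

def fuse_features(bow_vec, cnn_vec):
    """Concatenate the local bag-of-words vector with the CNN global
    feature to form the joint representation fed to the classifier."""
    return np.concatenate([bow_vec, cnn_vec])
```

The codebook itself would typically be built by clustering (e.g. k-means) the descriptors of the training set, and the fused vector would then be passed to the classification head.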
Keywords/Search Tags:SAR target classification, Deep networks, Interpretability, SAR background characteristics, SAR target features