Owing to its all-day, all-weather, long-range, and high-resolution imaging capabilities, Inverse Synthetic Aperture Radar (ISAR) can obtain fine images of non-cooperative targets (aircraft, satellites, and ships) without being limited by lighting or meteorological conditions, and has therefore found wide application in both military and civilian fields. In particular, a high-resolution ISAR image provides abundant and distinguishable shape, structure, size, and motion information, and thus plays a significant role in automatic radar target recognition. Traditional methods, however, first manually design feature extractors based on target characteristics and then design classifiers for recognition. This process requires extensive expert knowledge, and its cumbersome, time-consuming steps yield a low level of automation. In recent years, Deep Learning (DL) has enabled data-driven automatic feature extraction and recognition thanks to its powerful representation-learning capability, thereby avoiding the tedious design of hand-crafted feature extractors and achieving great success in both optical and Synthetic Aperture Radar (SAR) automatic target recognition. Owing to the unique ISAR imaging mechanism, however, e.g., time-varying effective rotation vectors and image accumulation angles, ISAR images usually exhibit unknown deformations such as stretching, compression, and rotation, which hamper robust feature extraction and accurate recognition and hinder the direct application of existing deep learning methods. At the same time, ISAR target recognition also suffers from insufficient mining and fusion of multi-domain information, including polarization, phase information, and the High Resolution Range Profile (HRRP). To tackle these issues, this dissertation designs effective deep networks to achieve deformation-robust recognition of ISAR images. The main content of this dissertation can be summarized as follows.

1. To address the problems that traditional ISAR image recognition methods are not end-to-end trainable and have poor recognition performance, a Deep Convolutional Neural Network (DCNN) based ISAR image recognition model is designed, achieving robust feature extraction through position-independent local feature extraction. Then, in view of the DCNN's inability to encode image position and orientation and its limited robustness to image deformation, a deformation feature extraction and recognition method based on the capsule network is proposed, which improves the recognition accuracy on ISAR images by more than 2%.

2. To address the problems that existing ISAR image recognition methods cannot adaptively correct target deformation or make full use of polarimetric information, a deformation-robust recognition method, the Spatial Transformer-Multi Channel-Deep Convolutional Neural Network (ST-MC-DCNN), is proposed. The method takes the ISAR images under three polarization modes as the inputs of three channels and adaptively corrects the image deformation of each polarization channel through a double-layer spatial transformer module. Robust hierarchical feature extraction and fusion are then achieved by the multi-channel DCNN, and the recognition results are output by the classifier. On electromagnetically simulated, fully polarized ISAR images of four satellites, the proposed method improves the recognition accuracy by more than 6% compared with the DCNN.

3. In view of the shortcomings of the spatial transformer module in deformation correction, e.g., the boundary effect, and the small receptive field of traditional CNNs, which focus on local features while ignoring global features conducive to recognition, a robust ISAR image recognition method based on the Inverse Compositional Spatial Transformer-Attention Augmented Convolutional Network (IC-ST-AACN) is proposed. The network adaptively corrects image deformation through an inverse compositional spatial transformer module and extracts both local and global image features through an attention-augmented convolution module. Finally, the recognition results are output by the softmax classifier. Experimental results show that, compared with traditional DCNN methods, IC-ST-AACN improves the recognition accuracy by at least 11%.

4. To address the inability of the spatial transformer and inverse compositional spatial transformer modules to correct deformation accurately, as well as the insufficient exploitation of the complex-valued ISAR image and its HRRP sequence, a Hybrid Convolutional Self-Attention Based Two Channel Network (HCS-TCN) is designed to extract and fuse the information buried in the complex-valued ISAR image and the corresponding HRRP sequence. The network achieves adaptive correction of scaling, rotation, and combined deformations for both the ISAR image and the HRRP through a precisely adjustable spatial transformer network module. Features of the ISAR image and the HRRP are then extracted by a complex-valued convolutional neural network module and a horizontal-stripe attention module, respectively, and target recognition is finally achieved by designing a fusion loss. Recognition results on electromagnetically simulated ISAR images of four satellites show that HCS-TCN improves the recognition accuracy by more than 5% and is robust to ISAR image deformation.
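The spatial transformer modules that recur in contributions 2-4 correct deformations by mapping each output pixel back through a (learned) affine transform and resampling the input image. The following is a minimal NumPy sketch of that grid-generation-and-sampling step; the function name `affine_resample` is illustrative, and nearest-neighbour sampling stands in for the differentiable bilinear sampler used in an actual trainable network.

```python
import numpy as np

def affine_resample(image, theta):
    """Resample `image` under the 2x3 affine matrix `theta`.

    For each output pixel, its normalised coordinates in [-1, 1] are
    mapped back through `theta` into the source image and sampled
    (nearest-neighbour here) -- the grid-generator + sampler step by
    which a spatial transformer undoes scaling/rotation deformations.
    """
    h, w = image.shape
    out = np.zeros_like(image)
    for i in range(h):
        for j in range(w):
            # normalised target coordinates in [-1, 1]
            y = 2.0 * i / (h - 1) - 1.0
            x = 2.0 * j / (w - 1) - 1.0
            # map back through the affine transform
            xs = theta[0, 0] * x + theta[0, 1] * y + theta[0, 2]
            ys = theta[1, 0] * x + theta[1, 1] * y + theta[1, 2]
            # convert source coordinates back to pixel indices
            si = int(round((ys + 1.0) * (h - 1) / 2.0))
            sj = int(round((xs + 1.0) * (w - 1) / 2.0))
            if 0 <= si < h and 0 <= sj < w:
                out[i, j] = image[si, sj]
    return out

# The identity transform leaves the image unchanged.
img = np.arange(16, dtype=float).reshape(4, 4)
identity = np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])
assert np.array_equal(affine_resample(img, identity), img)
```

In a full spatial transformer, `theta` is regressed by a small localization network and the sampling is bilinear so that gradients flow back to it; the inverse compositional variant of contribution 3 instead iteratively refines the transform on the feature maps to mitigate boundary effects.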