Choroidal atrophy is a common sign in eyes with high myopia and pathological myopia, and an important basis for grading pathological myopia. Fundus images can reveal choroidal atrophy well, so automatic segmentation of choroidal atrophy in fundus images is of great significance for the prevention, diagnosis and treatment of related fundus diseases. Choroidal atrophy takes different forms at different stages, mainly including parapapillary choroidal atrophy, diffuse choroidal atrophy and patchy choroidal atrophy. Large variations in scale and shape, together with blurred boundaries, make its automatic segmentation very challenging. This thesis studies choroidal atrophy segmentation in fundus images based on encoder-decoder convolutional neural networks. The main work and contributions are summarized as follows:

An encoder-decoder based global and local feature reconstruction network (GLFRNet) is proposed for choroidal atrophy segmentation. A global feature reconstruction (GFR) module is proposed to enhance the network's ability to capture global context and to reduce the semantic gap between hierarchical features. A local feature reconstruction (LFR) module is proposed to dynamically up-sample the features in the decoder stages and recover spatial information. The GFR modules are embedded in the skip connections of GLFRNet, and the LFR modules serve as the decoder, recovering resolution stage by stage. GLFRNet addresses the imbalance between the semantic and spatial information of the feature maps from global and local perspectives, respectively, and effectively improves the segmentation of choroidal atrophy.

To achieve efficient segmentation of choroidal atrophy, a lightweight segmentation network based on dynamic up-sampling, named LighteningNet, is proposed. Unlike the classical U-shaped network, LighteningNet adopts a fast multi-level feature fusion (MFF) module and a dual-branch feature-guided up-sampling (DFU) module in the decoder stage. The MFF module quickly fuses the multi-level features from the encoder, while the DFU module dynamically guides the up-sampling of the feature maps and gives the network a global receptive field. To further improve the segmentation performance of LighteningNet, a multi-level feature based knowledge distillation method is adopted: the pre-trained GLFRNet serves as the teacher network and provides feature guidance during the training of the student network LighteningNet. This significantly improves the segmentation performance of LighteningNet while retaining its high computational efficiency, achieving a favorable trade-off between segmentation accuracy and efficiency.

A total of 800 fundus images from the ISBI 2019 pathological myopia challenge and 600 clinical fundus images from Shanghai General Hospital are used to evaluate the segmentation performance of the proposed GLFRNet, LighteningNet, and the LighteningNet improved via the knowledge distillation strategy (KD-LighteningNet). The training, validation and test sets contain 825, 275 and 300 images, respectively. The Dice coefficients of GLFRNet with ResNeSt50 as the backbone and of LighteningNet reach 83.51% and 81.26%, respectively. After distilling knowledge from GLFRNet, KD-LighteningNet reaches a Dice coefficient of 83.43%; compared with GLFRNet, it greatly compresses the network scale and improves the inference speed (111.19 FPS, 7.9 times that of GLFRNet) with minimal loss of segmentation performance.
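The abstract states where the GFR and LFR modules sit, but not how they are implemented. The PyTorch sketch below therefore illustrates only the described layout: GFR modules on the skip connections and LFR modules as the decoder stages of a U-shaped network. The module bodies are simple placeholders (global-context re-weighting and bilinear up-sampling plus convolution), not the thesis's actual designs, and a ResNet-34 encoder stands in for the ResNeSt50 backbone to keep the example short.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet34  # stand-in for the ResNeSt50 backbone used in the thesis

class GFRPlaceholder(nn.Module):
    """Placeholder for the GFR module: squeeze-and-excitation style global-context
    re-weighting of the skip features. The actual GFR design is not given in the abstract."""
    def __init__(self, ch):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(ch, ch // 4), nn.ReLU(inplace=True),
                                nn.Linear(ch // 4, ch), nn.Sigmoid())
    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))      # image-level (global) context vector
        return x * w[:, :, None, None]       # re-weight the skip features

class LFRPlaceholder(nn.Module):
    """Placeholder for an LFR decoder stage: bilinear up-sampling plus convolution
    stands in for the dynamic up-sampling described in the abstract."""
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(in_ch + skip_ch, out_ch, 3, padding=1),
                                  nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
    def forward(self, x, skip):
        x = F.interpolate(x, size=skip.shape[2:], mode="bilinear", align_corners=False)
        return self.conv(torch.cat([x, skip], dim=1))

class GLFRNetSketch(nn.Module):
    """Illustrative layout only: GFR on the skip connections, LFR stages as the decoder."""
    def __init__(self, num_classes=2):
        super().__init__()
        enc = resnet34(weights=None)
        self.stem = nn.Sequential(enc.conv1, enc.bn1, enc.relu)   # 1/2 resolution, 64 channels
        self.pool = enc.maxpool
        self.enc1, self.enc2, self.enc3, self.enc4 = enc.layer1, enc.layer2, enc.layer3, enc.layer4
        self.gfr = nn.ModuleList([GFRPlaceholder(c) for c in (64, 64, 128, 256)])
        self.lfr4 = LFRPlaceholder(512, 256, 256)
        self.lfr3 = LFRPlaceholder(256, 128, 128)
        self.lfr2 = LFRPlaceholder(128, 64, 64)
        self.lfr1 = LFRPlaceholder(64, 64, 64)
        self.head = nn.Conv2d(64, num_classes, 1)
    def forward(self, x):
        size = x.shape[2:]
        x0 = self.stem(x)                    # 1/2
        x1 = self.enc1(self.pool(x0))        # 1/4
        x2 = self.enc2(x1)                   # 1/8
        x3 = self.enc3(x2)                   # 1/16
        x4 = self.enc4(x3)                   # 1/32
        s0, s1, s2, s3 = [g(f) for g, f in zip(self.gfr, (x0, x1, x2, x3))]
        d = self.lfr4(x4, s3)
        d = self.lfr3(d, s2)
        d = self.lfr2(d, s1)
        d = self.lfr1(d, s0)
        return self.head(F.interpolate(d, size=size, mode="bilinear", align_corners=False))
```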
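Likewise, the internal structure of the MFF and DFU modules is not described in the abstract. The sketch below is one hypothetical reading of their stated roles: MFF resizes and fuses the multi-level encoder features in a single step, and DFU combines a local convolutional branch with an image-level (global) context branch to guide the final up-sampling. Both module bodies are illustrative placeholders, not the thesis's designs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MFFPlaceholder(nn.Module):
    """Placeholder for fast multi-level feature fusion: project each encoder level,
    resize to the highest resolution, and fuse with a single 1x1 convolution."""
    def __init__(self, in_chs, out_ch):
        super().__init__()
        self.proj = nn.ModuleList([nn.Conv2d(c, out_ch, 1) for c in in_chs])
        self.fuse = nn.Conv2d(out_ch * len(in_chs), out_ch, 1)
    def forward(self, feats):
        size = feats[0].shape[2:]            # resolution of the shallowest level
        outs = [F.interpolate(p(f), size=size, mode="bilinear", align_corners=False)
                for p, f in zip(self.proj, feats)]
        return self.fuse(torch.cat(outs, dim=1))

class DFUPlaceholder(nn.Module):
    """Placeholder for dual-branch feature-guided up-sampling: a local branch gated by
    a pooled global-context branch, followed by up-sampling and a segmentation head."""
    def __init__(self, ch, num_classes):
        super().__init__()
        self.local = nn.Conv2d(ch, ch, 3, padding=1)
        self.global_ctx = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch, 1), nn.Sigmoid())
        self.head = nn.Conv2d(ch, num_classes, 1)
    def forward(self, x, out_size):
        x = self.local(x) * self.global_ctx(x)   # local features gated by global context
        x = F.interpolate(x, size=out_size, mode="bilinear", align_corners=False)
        return self.head(x)

# Hypothetical usage with three encoder feature maps at 1/4, 1/8 and 1/16 resolution.
f1, f2, f3 = torch.randn(1, 32, 64, 64), torch.randn(1, 64, 32, 32), torch.randn(1, 128, 16, 16)
mff = MFFPlaceholder([32, 64, 128], out_ch=64)
dfu = DFUPlaceholder(64, num_classes=2)
logits = dfu(mff([f1, f2, f3]), out_size=(256, 256))   # shape (1, 2, 256, 256)
```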
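Finally, a minimal sketch of a multi-level feature-based distillation loss, assuming both networks return their logits together with a list of channel-aligned intermediate feature maps; the exact loss terms and weighting used in the thesis are not specified in the abstract.

```python
import torch.nn.functional as F

def kd_loss(student_out, teacher_out, masks, feat_weight=0.1):
    """Hypothetical multi-level feature distillation loss.
    student_out / teacher_out: (logits, [feature maps]) pairs; the feature maps are
    assumed to already share channel widths. feat_weight is an illustrative value."""
    s_logits, s_feats = student_out
    t_logits, t_feats = teacher_out

    seg_loss = F.cross_entropy(s_logits, masks)          # supervised segmentation loss

    feat_loss = 0.0
    for sf, tf in zip(s_feats, t_feats):
        # match spatial sizes before comparing intermediate features
        sf = F.interpolate(sf, size=tf.shape[2:], mode="bilinear", align_corners=False)
        feat_loss = feat_loss + F.mse_loss(sf, tf.detach())

    return seg_loss + feat_weight * feat_loss
```

In training, the pre-trained teacher (GLFRNet) would run in evaluation mode under torch.no_grad(), so that only the student (LighteningNet) receives gradients.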