Recently, using deep learning for computer vision tasks has become a research hotspot, and it has been widely applied to land cover classification from multi-source remote sensing images. However, when the source and target images come from different sources, i.e., when their data distributions differ, the accuracy of remote sensing image segmentation and classification drops dramatically because of the limited generalization ability of deep neural networks. This paper studies domain adaptive semantic segmentation based on generative adversarial networks and self-training to improve the performance of deep neural networks on cross-domain tasks. Firstly, this paper proposes a generative adversarial network-based full-space domain adaptation method for land cover classification using new unlabeled target remote sensing images that differ greatly from the labeled source images. In this algorithm, the source and target images are fully aligned in the image space, feature space, and output space in two stages via adversarial learning. In the first stage, the source images are translated into target-stylized images through image-space and feature-space alignment; in the second stage, these translated images are used to train a fully convolutional network for semantic segmentation while the source and target predictions are simultaneously aligned in the output space, so that the land cover types of the target images can be classified. The experiments we conducted on a multi-source satellite image dataset in Wuhan and a cross-city aerial image dataset covering Potsdam and Vaihingen demonstrate that our method exceeds recent generative adversarial network-based domain adaptation methods by at least 6.1% and 4.9% in the mean intersection over union (mIoU) and overall accuracy (OA) indexes, respectively, significantly boosting the model's performance on target domain images. Secondly, this paper proposes a rectified pseudo-label self-training algorithm based on information entropy uncertainty estimation for domain adaptive semantic segmentation. The method calculates the information entropy of the image prediction results and uses it as an uncertainty estimate to correct the pseudo labels used for self-training. We conducted experiments on three image datasets, and the results show that this method further improves the segmentation accuracy on the target images when combined with existing generative adversarial network-based domain adaptation methods, without building any additional modules or consuming additional computing power.
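To make the output-space adversarial alignment concrete, the following is a minimal PyTorch sketch, not the paper's implementation: a fully convolutional discriminator judges whether a softmax segmentation map comes from the source or the target domain, and the segmentation network is trained to fool it alongside the supervised loss on source labels. All names and hyperparameters here (`segmenter`, `lambda_adv`, the discriminator architecture) are illustrative assumptions rather than details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OutputSpaceDiscriminator(nn.Module):
    """Fully convolutional discriminator over softmax segmentation maps."""
    def __init__(self, num_classes, ndf=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, ndf, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf, ndf * 2, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf * 2, ndf * 4, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf * 4, 1, 4, stride=2, padding=1),  # per-patch domain logit
        )

    def forward(self, seg_softmax):
        return self.net(seg_softmax)


def output_alignment_step(segmenter, discriminator, src_img, src_label, tgt_img,
                          opt_seg, opt_disc, lambda_adv=0.001):
    """One step: supervised loss on source images plus an adversarial loss that
    pushes target predictions toward the source output distribution."""
    bce = nn.BCEWithLogitsLoss()
    ce = nn.CrossEntropyLoss(ignore_index=255)

    # update the segmentation network
    opt_seg.zero_grad()
    src_pred = segmenter(src_img)                      # (B, C, H, W) logits
    seg_loss = ce(src_pred, src_label)
    tgt_pred = segmenter(tgt_img)
    d_tgt = discriminator(F.softmax(tgt_pred, dim=1))
    adv_loss = bce(d_tgt, torch.ones_like(d_tgt))      # fool D: label target as "source"
    (seg_loss + lambda_adv * adv_loss).backward()
    opt_seg.step()

    # update the discriminator on detached predictions
    opt_disc.zero_grad()
    d_src = discriminator(F.softmax(src_pred.detach(), dim=1))
    d_tgt = discriminator(F.softmax(tgt_pred.detach(), dim=1))
    disc_loss = bce(d_src, torch.ones_like(d_src)) + bce(d_tgt, torch.zeros_like(d_tgt))
    disc_loss.backward()
    opt_disc.step()
    return seg_loss.item(), adv_loss.item(), disc_loss.item()
```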
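The entropy-based pseudo-label rectification can be sketched in a similar spirit. The snippet below computes the normalized Shannon entropy of each pixel's softmax prediction as an uncertainty estimate and discards the most uncertain pixels before the pseudo labels are reused for self-training; the keep ratio `entropy_ratio`, the `ignore_index` value, and the `segmenter` interface are assumptions for illustration, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def rectified_pseudo_labels(segmenter, tgt_img, num_classes,
                            entropy_ratio=0.8, ignore_index=255):
    """Generate pseudo labels for target images and mask out the most
    uncertain pixels based on per-pixel prediction entropy."""
    segmenter.eval()
    probs = F.softmax(segmenter(tgt_img), dim=1)              # (B, C, H, W)

    # per-pixel Shannon entropy, normalized to [0, 1] by log(C)
    entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=1)
    entropy = entropy / torch.log(torch.tensor(float(num_classes)))

    pseudo = probs.argmax(dim=1)                              # hard pseudo labels (B, H, W)

    # keep only the most confident fraction of pixels; ignore the rest
    threshold = torch.quantile(entropy.flatten(), entropy_ratio)
    pseudo[entropy > threshold] = ignore_index
    return pseudo
```

The rectified labels can then be fed back into a standard cross-entropy loss with the same `ignore_index`, so no extra network modules or additional computation beyond a forward pass are required, which matches the efficiency claim made above.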