
Generating Typical Land Feature Remote Sensing Image Samples Based On Generative Adversarial Networks

Posted on: 2024-09-20
Degree: Master
Type: Thesis
Country: China
Candidate: Y S Gong
Full Text: PDF
GTID: 2542307079970149
Subject: Electronic information
Abstract/Summary:
With the emergence of deep learning techniques, data-driven intelligent learning methods, which rely on substantial amounts of data, have become the technological backbone supporting the extensive application of remote sensing imagery across various domains. However, remote sensing image datasets for specific regions or target categories show notable deficiencies, such as limited sample sizes and inadequate diversity in image styles. These deficiencies hamper both the accuracy with which deep semantic segmentation networks recognize targets in remote sensing imagery and their generalizability. Meeting current application demands and cutting-edge technological challenges therefore requires exploring remote sensing image sample generation techniques that enhance data quantity and diversity.

This thesis addresses challenges encountered when generating remote sensing images of typical land cover categories, including vegetation, buildings, water bodies, roads, and background. Specifically, it tackles issues such as irregular and complex building contours, artifacts in water bodies, and a lack of realism in the texture details of vegetation and roads. To overcome these challenges, we propose a novel sample generation model called MTGAN (Multi-Task GAN), which is based on the Pix2Pix model. MTGAN incorporates a global-local architecture, a shared encoder module, and a local generation enhancement module, specifically designed for generating multiple land cover categories. The model generates remote sensing image samples with clear land cover boundaries and realistic texture details, providing valuable data support for improving the accuracy and generalization capacity of remote sensing image semantic segmentation networks. The main research achievements of this thesis are summarized as follows:

(1) To enhance the perceptual realism of generated images and improve the network's ability to learn the color and texture of remote sensing images, this thesis builds upon the classic Pix2Pix network and introduces the Pix2Pix++ model, which adds a perceptual loss and a texture matching loss to improve the color, texture, and perceptual realism of the generated images. Moreover, to address the insufficient generation capability for land cover classes with low sample proportions and the poor quality of generated complex land cover images within the Pix2Pix++ framework, this thesis further introduces a Global-Local Generative Adversarial Network (GLGAN) model. Built upon Pix2Pix++, GLGAN focuses specifically on improving the generation of complex, multi-class land cover remote sensing images.

(2) To optimize the GLGAN model and balance the training process between the global generator and the local generator, this thesis further introduces the stable-GLGAN model, which combines the feature extraction capability of a shared encoder with the global and local generators to form a compact backbone network. Furthermore, to address the issue in stable-GLGAN where the local generator is disturbed by global contextual feature information, leading to insufficient quality when generating specific classes, this thesis proposes the MTGAN model. MTGAN optimizes the local generator by improving the way local images are generated, thereby enhancing the overall quality of the generated images.
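To make the shared-encoder, global-local idea in (2) concrete, the following PyTorch-style sketch shows one possible way a single encoder could feed both a global decoder and a class-specific local decoder, with the local output merged back into the full image through a class mask. The class name GlobalLocalGenerator, the channel widths, and the mask-based fusion rule are illustrative assumptions, not the thesis's actual MTGAN implementation.

import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, down=True):
    # Stride-2 conv (downsample) or transposed conv (upsample), with norm and ReLU.
    layer = (nn.Conv2d(in_ch, out_ch, 4, 2, 1) if down
             else nn.ConvTranspose2d(in_ch, out_ch, 4, 2, 1))
    return nn.Sequential(layer, nn.InstanceNorm2d(out_ch), nn.ReLU(inplace=True))

class GlobalLocalGenerator(nn.Module):
    # Hypothetical shared-encoder generator with a global and a local decoder.
    def __init__(self, label_ch=5, img_ch=3, base=64):
        super().__init__()
        # One encoder shared by both decoders (the "compact backbone" idea).
        self.encoder = nn.Sequential(
            conv_block(label_ch, base),
            conv_block(base, base * 2),
            conv_block(base * 2, base * 4),
        )
        # Global decoder reconstructs the whole scene; the local decoder
        # focuses on one difficult class, e.g. buildings.
        self.global_dec = self._decoder(base, img_ch)
        self.local_dec = self._decoder(base, img_ch)

    @staticmethod
    def _decoder(base, img_ch):
        return nn.Sequential(
            conv_block(base * 4, base * 2, down=False),
            conv_block(base * 2, base, down=False),
            nn.ConvTranspose2d(base, img_ch, 4, 2, 1),
            nn.Tanh(),
        )

    def forward(self, label_map, class_mask):
        # label_map: one-hot land cover labels (B, label_ch, H, W)
        # class_mask: binary mask of the target class (B, 1, H, W)
        feats = self.encoder(label_map)
        global_img = self.global_dec(feats)
        local_img = self.local_dec(feats)
        # Keep the local decoder's output inside the class mask,
        # the global decoder's output everywhere else.
        return class_mask * local_img + (1.0 - class_mask) * global_img

# Example with five land cover classes and 256x256 label maps.
gen = GlobalLocalGenerator(label_ch=5)
labels = torch.randn(1, 5, 256, 256).softmax(dim=1)          # stand-in label maps
mask = (labels.argmax(dim=1, keepdim=True) == 1).float()     # one class as the "local" target
fake = gen(labels, mask)                                     # (1, 3, 256, 256)

Sharing the encoder is what keeps the backbone compact in this sketch: both decoders read the same feature map, so only the decoding paths are duplicated.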
(3) In terms of quality assessment of the generated images, this thesis conducts an interpretability analysis of the generated images using class activation maps based on the UNet segmentation network. Additionally, a comparative analysis examines how adding different quantities of generated images to the UNet training data affects segmentation accuracy on datasets of varying sizes. In this way, the thesis explores the effectiveness of the generated images from multiple perspectives, providing valuable insights for quality evaluation in the context of remote sensing image generation.
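For reference, a generic Grad-CAM-style recipe adapted to a per-pixel segmentation network is sketched below; it illustrates the kind of class activation map analysis described in (3). The function segmentation_cam, the choice of target layer, and the per-class score aggregation are assumptions for illustration and may differ from the thesis's exact evaluation procedure.

import torch
import torch.nn.functional as F

def segmentation_cam(model, image, target_layer, target_class):
    # Generic Grad-CAM-style map for one class of a segmentation network (illustrative).
    store = {}
    fwd = target_layer.register_forward_hook(lambda m, i, o: store.update(act=o))
    bwd = target_layer.register_full_backward_hook(lambda m, gi, go: store.update(grad=go[0]))

    logits = model(image)                    # (1, num_classes, H, W) per-pixel scores
    score = logits[:, target_class].sum()    # aggregate evidence for the chosen class
    model.zero_grad()
    score.backward()
    fwd.remove(); bwd.remove()

    act, grad = store["act"], store["grad"]
    weights = grad.mean(dim=(2, 3), keepdim=True)            # channel importance
    cam = F.relu((weights * act).sum(dim=1, keepdim=True))   # weighted activation map
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

In practice, target_layer would typically be one of the later decoder convolutions of the UNet, so the resulting map highlights the regions the network relies on when assigning pixels to target_class.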
Keywords/Search Tags:Deep Learning, Remote Sensing Images, Generative Adversarial Networks, Sample Generation, Data Augmentation