Remote sensing technology is a cornerstone of modern Earth observation, widely used in natural resource management, urban planning, agriculture, forestry, and other fields. As remote sensing technology has developed, the volume of acquired remote sensing imagery has grown rapidly. However, clouds in the atmosphere often obscure ground information, which complicates subsequent image analysis. Traditional cloud removal methods based on hand-crafted image features suffer from instability, low accuracy, and long processing times. With the development of deep learning techniques, deep learning-based cloud removal for remote sensing imagery has attracted widespread attention. Deep learning methods are highly adaptive: they can learn and extract feature information from large amounts of data, enabling automatic cloud removal in remote sensing images. This thesis proposes cloud removal methods based on generative adversarial networks (GANs), exploiting their powerful image generation capability and supervised training on paired datasets to achieve more automated cloud removal. To address the shortcomings of existing cloud removal methods, such as residual clouds in the output images and difficulty reconstructing ground features under thick, extensive cloud cover, this thesis develops improved generative adversarial networks for cloud removal in remote sensing images. The specific research content is as follows:

(1) To address cloud residue, a generative adversarial network based on 3D attention and dense residual connections is designed to remove clouds from images. Multiple residual dense connection modules are built inside the generator: local feature fusion within each module combines shallow and deep features, and residual learning between adjacent modules further improves information flow, producing a contiguous memory effect while keeping network training stable. In addition, a 3D attention module modeled on the human visual system is incorporated into the generator; spatial and channel attention coexist and are computed simultaneously, allowing the network to focus more accurately on cloud regions. An attention loss term is added to the loss function to help update the attention weights. Finally, the improved model is compared with existing deep learning methods on the RICE dataset, and the results demonstrate its superior cloud removal performance.

(2) To address the difficulty of reconstructing ground information: existing cloud removal models struggle to link information in cloud-covered regions with the surrounding surface because they fuse local and global features inadequately. This thesis therefore proposes a convolutional-Transformer dual-branch generative adversarial network with a feature fusion module that integrates local and global features. The module uses stripe pooling for downsampling and deconvolution for upsampling to extract local spatial information at multiple scales together with global semantic information. During decoding, the model extracts multiscale features from both branches, and a perceptual loss is added to the loss function to restore image detail. The improved model is evaluated on the RICE and WHUS2-CR datasets and compared with existing deep learning methods. The experimental results show further improvement on multiple image quality metrics and accurate reconstruction of ground features.
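The "3D attention" in (1), where every activation receives its own weight so that spatial and channel attention arise simultaneously, can be illustrated with a parameter-free, SimAM-style energy formulation. This is a minimal sketch, not the thesis's exact module: the energy function and the regularizer `lam` below are assumptions for illustration.

```python
import numpy as np

def attention_3d(x, lam=1e-4):
    """SimAM-style parameter-free 3D attention on a (C, H, W) feature map.

    Each neuron's weight comes from its squared deviation from the
    per-channel mean, so a full (C, H, W) weight volume is produced:
    spatial and channel attention in one step."""
    c, h, w = x.shape
    n = h * w - 1
    mu = x.mean(axis=(1, 2), keepdims=True)        # per-channel mean
    d = (x - mu) ** 2                              # squared deviation per neuron
    v = d.sum(axis=(1, 2), keepdims=True) / n      # per-channel variance estimate
    e_inv = d / (4.0 * (v + lam)) + 0.5            # inverse of the neuron energy
    weight = 1.0 / (1.0 + np.exp(-e_inv))          # sigmoid gating in (0, 1)
    return x * weight                              # reweighted feature map

feat = np.random.randn(8, 16, 16).astype(np.float32)
out = attention_3d(feat)
```

Neurons that stand out from their channel's mean get weights closer to 1, which matches the intuition of highlighting distinctive (e.g. cloud-covered) regions without adding learnable parameters.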
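The feature fusion module in (2) uses stripe pooling to capture long-range context along entire rows and columns, which helps relate a cloud-covered region to distant surface pixels. The following is an SPNet-style sketch of only the pooling step; the additive fusion and the omission of the deconvolution-based upsampling are simplifying assumptions, not the thesis's full module.

```python
import numpy as np

def stripe_pool(x):
    """Stripe pooling on a (C, H, W) feature map.

    Averages along each row (horizontal stripe) and each column
    (vertical stripe), then broadcasts the two pooled maps back to
    (C, H, W) and fuses them by addition."""
    h_stripe = x.mean(axis=2, keepdims=True)   # (C, H, 1): one value per row
    v_stripe = x.mean(axis=1, keepdims=True)   # (C, 1, W): one value per column
    return h_stripe + v_stripe                 # broadcast to (C, H, W)

x = np.arange(2 * 3 * 4, dtype=float).reshape(2, 3, 4)
y = stripe_pool(x)
```

Because each output value mixes a whole-row and a whole-column statistic, every position sees context from the full width and height of the map at the cost of only H + W pooled values per channel.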