In recent years, with the advancement of remote sensing technology, detecting building changes from remote sensing images has become an important means of monitoring surface change, which is of great significance for assessing land resource utilization and determining post-disaster building damage. Owing to its powerful feature extraction and end-to-end rapid detection capabilities, deep learning has been widely applied to change detection tasks. However, some existing deep learning-based algorithms fall short when faced with complex building features: they capture building edge information insufficiently, detect small-scale targets poorly, and have difficulty distinguishing adjacent buildings. To address these issues, this paper studies deep learning-based building change detection in high-resolution remote sensing images. The main work is as follows:

1) This paper summarizes and compares traditional algorithms with deep learning-based algorithms, pointing out the shortcomings of traditional methods and the advantages of deep learning methods, which are markedly superior in feature extraction in particular. In addition, it introduces the theoretical foundations of deep learning needed to understand the proposed algorithms.

2) Some change detection algorithms make inadequate use of information from the original image (such as building edge details) and perform poorly on small-scale buildings, or on buildings whose small changed areas suffer interference from neighboring buildings. In response, this paper proposes an attention-based multi-scale input-output network named AMIO-Net. It strengthens the network's use of feature information from the original image through a multi-scale input-output structure, and employs a pyramid pooling attention module (PPAM) and a Siamese attention mechanism module (SAMM) to enhance the detection ability
for small targets while fully considering global context information. Experiments on three public datasets (LEVIR-CD, Google, S2Looking) show that, compared with classic methods such as FCN and SegNet, AMIO-Net improves the F1 score by 5.28%-15.32%, 11.31%-21.53%, and 12.29%-26.13% on the three datasets, respectively. Compared with advanced methods such as SNUNet and STANet, it improves the F1 score by 2.27%-6.56%, 6.83%-24.08%, and 6.81%-15.73%, respectively.

3) Some algorithms detect irregularly shaped buildings poorly and have difficulty distinguishing changes between different buildings in close proximity, owing to insufficient feature extraction capabilities. To address this issue, this paper proposes a feature enhancement network (FENET-UEVTS) that combines a UNet encoder with a vision transformer structure to detect building changes in high-resolution remote sensing images. The model combines a deep convolutional neural network with part of the vision transformer structure (VTS), giving it strong feature extraction capabilities for various buildings; the VTS mainly provides spatial correlation for buildings across feature maps at different levels. An enhanced feature extractor is designed, composed of a spatial and channel attention module (SCAM), a U-shaped residual module (USRM), a strengthened feature extraction module (SFEM), and a self-attention feature fusion module (SAFFM), to improve the network's ability to extract the features of buildings of various shapes and their edge details. In addition, to reduce information loss when merging feature maps, a cross-channel context semantic aggregation module (CCSAM) is designed to aggregate information along the channel dimension. To verify the effectiveness and advancement of the proposed model, extensive comparative experiments were conducted against eight other state-of-the-art algorithms (such as SNUNet, BIT, and ChangeFormer) on three public change detection datasets (LEVIR-CD, WHU-CD, CDD). The results show
that the F1 score of FENET-UEVTS increases by 3.68%-13.5%, 3.24%-23.63%, and 3.98%-44.69% on the three datasets, respectively, while the Kappa coefficient increases by 3.86%-14.21%, 3.69%-28.48%, and 4.34%-47.07%, respectively.
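The F1 score and Kappa coefficient reported above are standard pixel-wise metrics for binary change detection, computed from the confusion matrix of changed vs. unchanged pixels. A minimal sketch of the standard formulas (the function name and example counts are illustrative, not from this work):

```python
def change_detection_metrics(tp, fp, fn, tn):
    """F1 score and Cohen's kappa from pixel-level confusion counts.

    tp/fp/fn/tn: counts of true/false positives/negatives, where
    "positive" means a pixel predicted as changed.
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)

    total = tp + fp + fn + tn
    po = (tp + tn) / total  # observed agreement (overall accuracy)
    # chance agreement: expected accuracy of a random rater with the
    # same marginal class frequencies
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / total ** 2
    kappa = (po - pe) / (1 - pe)
    return f1, kappa
```

Unlike overall accuracy, both metrics discount the large unchanged background that dominates change detection datasets, which is why they are the usual basis for the comparisons cited above.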