Remote sensing image change detection analyzes two or more images of the same geographical location acquired at different times in order to extract information about land-cover change. It has been widely used in land-use monitoring, disaster assessment, ecological environment monitoring, and geographic data updating. With the rapid development of sensor technology, change detection based on multi-sensor optical remote sensing images has become a research hotspot in the remote sensing field. Because different sensors image the same scene in different ways, multi-sensor optical images of one scene exhibit different appearances, which makes the problem of "pseudo change" more pronounced, and traditional change detection methods for multi-sensor optical imagery can no longer meet the requirements. This thesis therefore studies change detection in multi-sensor optical remote sensing images based on deep learning. The main research contents are as follows:

(1) To address the pronounced "pseudo change" problem in multi-sensor optical remote sensing images, this thesis proposes MFED-UNet++, an object-level change detection method that combines UNet++ with a multi-stage difference module. First, a multi-scale feature-extraction difference module is proposed to enhance the model's ability to identify "pseudo change". On this basis, the multi-scale features output by the UNet++ network are used for fine-grained characterization from multiple perspectives, and an adaptive evidence-credibility indicator is proposed. Finally, image segmentation and Dempster-Shafer theory are combined into a weighted Dempster-Shafer evidence fusion scheme, mapping the pixel-level output of the deep network to object-level results. Experiments were conducted on four high-resolution multi-sensor optical image datasets from different regions, with comparative analysis against several advanced deep learning methods. The results show that, under different spatial resolutions and temporal gaps, the overall accuracy and F1-score of the proposed method reach at least 91.92% and 63.31%, respectively, significantly outperforming the comparison methods in both visual analysis and quantitative evaluation.

(2) The MFED-UNet++ framework does not consider the interaction between the global information of the two temporal images, which weakens the model in scenes where bi-temporal background information is needed to assist localization. To address this, an object-level change detection method, GSDE-UNet++, is proposed, combining UNet++ with a global structure differential enhancement module. The method first introduces a global structure differential enhancement module built on a Transformer to fully mine the spatiotemporal context contained in the bi-temporal remote sensing images and to further improve the extraction of features in changed regions. On this basis, a new loss function, B-WFNP Loss, is proposed so that the network strengthens its learning of areas prone to "pseudo change" and "pseudo invariance" during training. Experiments were conducted on four high-resolution multi-sensor optical image datasets from different regions, with comparative analysis against several advanced deep learning methods. The results show that, under different spatial resolutions and temporal gaps, the overall accuracy and F1-score of the proposed method reach at least 91.97% and 64.43%, respectively, significantly outperforming the comparison methods in both visual analysis and quantitative evaluation.
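The weighted Dempster-Shafer evidence fusion used to map pixel-level network outputs to object-level results can be sketched generically. The thesis's adaptive evidence-credibility indicator is not detailed in the abstract, so the discount factors (reliability weights) and the two evidence sources below are illustrative assumptions, not the thesis's actual scheme:

```python
# Generic sketch of weighted Dempster-Shafer fusion over the binary frame
# {change, no-change}. "theta" denotes the full frame (total ignorance).
FRAME = {"change": {"change"}, "nochange": {"nochange"},
         "theta": {"change", "nochange"}}


def discount(m, alpha):
    """Shafer discounting: weight a mass function by reliability alpha,
    transferring the discounted mass to total ignorance."""
    out = {a: alpha * v for a, v in m.items() if a != "theta"}
    out["theta"] = 1.0 - alpha + alpha * m.get("theta", 0.0)
    return out


def combine(m1, m2):
    """Dempster's rule of combination with conflict normalization."""
    raw = {a: 0.0 for a in FRAME}
    conflict = 0.0
    for a, va in m1.items():
        for b, vb in m2.items():
            inter = FRAME[a] & FRAME[b]
            if not inter:
                conflict += va * vb          # contradictory evidence
            elif inter == FRAME["theta"]:
                raw["theta"] += va * vb
            else:
                raw[next(iter(inter))] += va * vb
    k = 1.0 - conflict                       # renormalize over non-conflict
    return {a: v / k for a, v in raw.items()}


# Hypothetical example: pixel-level evidence from the network and
# object-level evidence from a segment, each discounted by an assumed
# credibility score, then fused and decided by maximum mass.
pixel_ev = discount({"change": 0.7, "nochange": 0.2, "theta": 0.1}, 0.9)
segment_ev = discount({"change": 0.6, "nochange": 0.3, "theta": 0.1}, 0.8)
fused = combine(pixel_ev, segment_ev)
label = max(("change", "nochange"), key=lambda a: fused[a])
```

Discounting lets a low-credibility source contribute mostly ignorance rather than a hard vote, which is the usual way a "weighted" D-S fusion is realized.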
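The overall accuracy and F1-score figures quoted above are standard confusion-matrix metrics computed on the "change" class. Because changed pixels are typically a small minority, overall accuracy can be high while F1 stays much lower, which is why both are reported. A minimal sketch with hypothetical counts:

```python
# Overall accuracy (OA) and F1-score for binary change detection,
# with tp/fp/fn counted on the "change" class.

def change_metrics(tp, fp, fn, tn):
    oa = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return oa, f1


# Hypothetical counts: the large tn pool of unchanged pixels inflates OA
# relative to F1.
oa, f1 = change_metrics(tp=60, fp=20, fn=20, tn=900)  # OA 0.96, F1 0.75
```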