
Building Damage Detection from VHR Satellite Images for Disaster Emergency Monitoring Based on a Deep Learning Method

Posted on: 2022-06-15
Degree: Master
Type: Thesis
Country: China
Candidate: W J Zhang
Full Text: PDF
GTID: 2480306740455304
Subject: Surveying science and technology
Abstract/Summary:
Built-up areas are the main places of human activity, and they are also where the most severe casualties and property losses occur when a disaster strikes. After a disaster, quickly and accurately assessing the damage to buildings in the affected area is therefore of great significance for emergency rescue, decision making, and post-disaster reconstruction. Remote sensing has become one of the main technical means of disaster monitoring and assessment because it enables large-area Earth observation and delivers surface information quickly and with a short revisit period. In particular, high-resolution remote sensing images provide finer texture and spatial information for interpreting damaged buildings and are becoming increasingly accessible, which provides important data support for identifying and accurately locating damaged buildings.

Deep learning models, represented by convolutional neural networks (CNNs), can automatically learn hierarchical feature representations from training samples, from low-level visual features to high-level semantic features. This avoids the incompleteness of hand-crafted features and the excessive dependence on prior knowledge, and greatly improves generalization ability, so CNNs can provide technical support for the accurate extraction of damaged buildings. However, most current studies directly apply deep learning models designed for natural image understanding to damaged building extraction, without task-specific consideration. This is mainly reflected in two aspects: (1) Models: many methods extract damaged buildings directly with semantic segmentation or object detection models, ignoring the high labeling cost of the required training samples (pixel-level and object-level annotations). (2) Data: in addition to high-resolution remote sensing images, many methods additionally rely on pre-disaster building vector data as constraints, which may limit their application scenarios. Most methods also ignore the fact that the remote sensing images acquired within a short time window around an actual disaster generally differ from one another.

In response to these problems, this thesis starts from the practical situation of limited data and image differences. For two different high-resolution remote sensing image scenarios, and using only easy-to-obtain image-level labeled samples as far as possible, it designs damaged building extraction frameworks that closely match the characteristics of the images and the form of the sample annotations. The main research work and results can be summarized in two aspects:

(1) For the case where only post-disaster high-resolution remote sensing images are available, this thesis proposes a weakly supervised damaged building extraction method based on category heat maps. The method combines repeated scene classification with category heat maps, so that pixel-level interpretation results can be obtained directly from a model trained with image-level labeled samples, compensating to some extent for the coarse localization of scene classification. Compared with methods based on semantic segmentation or object detection models, it requires only image-level labeled samples, which greatly reduces the labeling effort. Compared with scene classification methods based on image-level labeled samples, it has better localization ability and higher extraction accuracy for damaged buildings. Experimental results show that the method outperforms the comparison methods on almost all quantitative evaluation indexes: the average F1 score reaches 50.59%, which is 4.15% higher than the repeated scene classification method, and the average overall accuracy reaches 92.58%.
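The thesis does not publish code; the following is a minimal PyTorch sketch of the category heat map idea behind method (1), i.e. reusing a scene classifier trained only with image-level "damaged / intact" labels to produce a pixel-level heat map via class activation mapping. The backbone, class index, and threshold are illustrative assumptions, not the author's actual configuration.

```python
# Sketch (assumptions, not the thesis' released code): turn an image-level
# damage classifier into an approximate pixel-level damage map via a
# class activation map (category heat map).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)                      # scene classifier backbone (assumed)
model.fc = torch.nn.Linear(model.fc.in_features, 2)        # 2 classes: intact / damaged
model.eval()

@torch.no_grad()
def damage_heat_map(image: torch.Tensor, damaged_class: int = 1) -> torch.Tensor:
    """image: (1, 3, H, W) tensor; returns an (H, W) heat map scaled to [0, 1]."""
    # Run all layers except global pooling and the classifier head
    feats = torch.nn.Sequential(*list(model.children())[:-2])(image)   # (1, C, h, w)
    weights = model.fc.weight[damaged_class]                            # (C,)
    cam = F.relu(torch.einsum("c,bchw->bhw", weights, feats))           # weighted feature sum
    cam = F.interpolate(cam.unsqueeze(1), size=image.shape[-2:],
                        mode="bilinear", align_corners=False)[0, 0]
    cam -= cam.min()
    cam /= cam.max().clamp(min=1e-8)
    return cam

# Thresholding the heat map gives a pixel-level damage mask from a model
# that never saw pixel-level labels (threshold value is an assumption).
mask = damage_heat_map(torch.randn(1, 3, 256, 256)) > 0.5
```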
(2) For the case where pre- and post-disaster high-resolution remote sensing images are jointly available, this thesis proposes a damaged building extraction method based on multi-scale scene change detection. In the absence of pre-disaster building vector data, the method first applies a semantic segmentation model to extract buildings from the pre-disaster images, effectively avoiding interference from non-building objects. Multi-scale segmentation is then combined with a Siamese network to detect damaged buildings at multiple scales, and finally the extraction results of the individual scales are automatically fused to improve classification accuracy. The method takes into account the scale differences between objects in remote sensing images while reducing the registration requirements between the two images. Experiments show that, compared with other methods, the proposed method achieves the highest scores on almost all accuracy evaluation indexes, with an average F1 score of 64.73%, which is 5.47% higher than the instance segmentation method based on pixel-level labeled samples, and its extraction efficiency is nearly 60% higher than that of the repeated scene classification method. The experiments also analyze the impact of different scene classification networks on the extraction accuracy of the framework. The results show that introducing pre-disaster scene information is of great significance for improving damaged building recognition, and that reasonable fusion of higher-level features can further improve scene change detection performance.
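As a rough illustration of the Siamese scene change detection step in method (2), the sketch below passes pre- and post-disaster patches of the same location through a weight-sharing encoder and classifies the feature difference. The backbone, feature size, patch size, and class count are assumptions for illustration only; the multi-scale segmentation and per-scale fusion described above are indicated only in comments.

```python
# Sketch (assumptions, not the thesis' released code): a Siamese network for
# scene change detection on pre-/post-disaster patch pairs.
import torch
import torch.nn as nn
from torchvision import models

class SiameseChangeNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        backbone = models.resnet18(weights=None)
        # Shared encoder: all layers up to and including global average pooling
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])   # -> (B, 512, 1, 1)
        self.classifier = nn.Linear(512, num_classes)                   # damaged vs. unchanged

    def forward(self, pre_patch: torch.Tensor, post_patch: torch.Tensor) -> torch.Tensor:
        f_pre = self.encoder(pre_patch).flatten(1)     # (B, 512), shared weights
        f_post = self.encoder(post_patch).flatten(1)   # (B, 512), same encoder
        return self.classifier(torch.abs(f_post - f_pre))

# Patch pairs would come from multi-scale segmentation of the building regions
# extracted from the pre-disaster image; per-scale predictions are then fused.
net = SiameseChangeNet()
logits = net(torch.randn(4, 3, 128, 128), torch.randn(4, 3, 128, 128))
```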
Keywords/Search Tags:High-resolution remote sensing images, Damaged building extraction, Weakly supervised learning, Scene change detection, Siamese network