
Spatiotemporal Fusion Of Remote Sensing Images Based On Convolutional Neural Network

Posted on: 2022-09-01    Degree: Master    Type: Thesis
Country: China    Candidate: X Y Zhang    Full Text: PDF
GTID: 2492306575465844    Subject: Automation Technology
Abstract/Summary:
In practical applications, remote sensing images with both high temporal and high spatial resolution have greater research value. However, due to the limitations of current equipment and technology, single-source remote sensing images cannot achieve high temporal and high spatial resolution at the same time, so spatiotemporal fusion of remote sensing images has gradually attracted attention. Spatiotemporal fusion mainly combines images from two satellites or sensors; typically, Landsat and MODerate-resolution Imaging Spectroradiometer (MODIS) images are fused. In recent years, the upsurge of deep learning has swept across all major research fields, and spatiotemporal fusion methods based on deep learning have emerged one after another. These methods have achieved excellent results, but problems remain, such as limited data sets and insufficient accuracy of the results. In response to these problems, this thesis proposes two spatiotemporal fusion methods based on convolutional neural networks (CNN). The research contents are as follows:

1. The images produced by current CNN-based spatiotemporal fusion methods are usually too smooth. This thesis therefore proposes a spatiotemporal fusion network based on attention and multi-scale mechanisms. The network uses only three input images and lets MODIS difference images participate directly in network training. A multi-scale mechanism extracts feature maps of the image at different scales; the extracted feature maps are then fused across scales to obtain the image's feature information at different scales. An attention mechanism refocuses the network on important features: by assigning weights to the feature maps in both the channel domain and the spatial domain, the network can better learn the important features of the remote sensing image. Experiments on two classic data sets show that this method achieves better results on objective indicators such as root mean square error and correlation coefficient.

2. This thesis then proposes a lightweight spatiotemporal fusion network. In the network architecture, feature maps are no longer added directly for fusion; instead, a concat layer stacks the feature maps, reducing the amount of network computation. A multi-scale convolution structure extracts feature information of the input image under different receptive fields, which helps capture features of different sizes in the remote sensing image. Dilated convolution alleviates the computational burden of a large convolution kernel: by controlling the dilation rate to expand the receptive field, both global and local information of the image can be extracted. A skip connection is added to avoid losing important features in the deep network, so that deep and shallow feature information can be combined for the final image reconstruction. This method not only improves the spectral quality of the fused image to a certain extent, but also greatly reduces the over-smoothing problem in terms of visual effects.
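A minimal, hypothetical PyTorch sketch of the building blocks described for the lightweight network: concat-based fusion instead of addition, multi-scale convolution branches, a dilated convolution, and a skip connection into the reconstruction layer. All layer widths, names, and the band count are illustrative assumptions, not the thesis's actual architecture.

```python
import torch
import torch.nn as nn


class MultiScaleBlock(nn.Module):
    """Extracts features at several receptive fields and stacks them with concat."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branch3 = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(in_ch, out_ch, kernel_size=5, padding=2)
        # Dilated 3x3 convolution: enlarges the receptive field without a large kernel.
        self.branch_dil = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=2, dilation=2)
        self.fuse = nn.Conv2d(3 * out_ch, out_ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = torch.cat([self.branch3(x), self.branch5(x), self.branch_dil(x)], dim=1)
        return self.act(self.fuse(feats))


class LightweightFusionNet(nn.Module):
    """Fuses a fine (Landsat-like) image with a coarse (MODIS-like) difference image."""

    def __init__(self, bands=6, width=32):
        super().__init__()
        self.encode = nn.Sequential(
            MultiScaleBlock(2 * bands, width),
            MultiScaleBlock(width, width),
        )
        # Skip connection: shallow input features are reused during reconstruction.
        self.reconstruct = nn.Conv2d(width + 2 * bands, bands, kernel_size=3, padding=1)

    def forward(self, fine_t1, coarse_diff):
        x = torch.cat([fine_t1, coarse_diff], dim=1)      # concat instead of add
        deep = self.encode(x)
        return self.reconstruct(torch.cat([deep, x], dim=1))


# Usage with synthetic tensors standing in for 6-band image patches:
net = LightweightFusionNet(bands=6)
fine_t1 = torch.randn(1, 6, 128, 128)       # known fine-resolution image at t1
coarse_diff = torch.randn(1, 6, 128, 128)   # upsampled MODIS difference (t2 - t1)
pred_fine_t2 = net(fine_t1, coarse_diff)    # predicted fine-resolution image at t2
print(pred_fine_t2.shape)                   # torch.Size([1, 6, 128, 128])
```

The concat-then-1x1-convolution fusion and the input skip connection mirror the abstract's stated goals of lowering computation and preserving shallow detail; an attention module from the first method could be dropped into the same skeleton.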
Keywords/Search Tags: remote sensing image, spatiotemporal fusion, convolutional neural network, temporal change, spatial detail