Rice is one of the main crops in China and plays a pivotal role in the country's grain production. Obtaining the rice planting area provides an important reference for monitoring rice growth and estimating rice prices. Applying convolutional neural networks to rice identification in remote sensing imagery not only significantly improves recognition accuracy, but also greatly reduces the labor and resources required, and has gradually become one of the key technologies in the development of precision agriculture. In this paper, the Landsat 8 OLI multispectral image of Shenyang City is fused with a panchromatic image and a near-infrared image, respectively. The texture, color, and geometric features of six land-cover types (rice, water, other vegetation, bare land, mountains, and buildings), together with the normalized difference vegetation index (NDVI) and spectral characteristic curves, are analyzed before and after image fusion, and the image features of each land-cover type are extracted from the three groups of images. Based on the support vector machine (SVM) classification results of these images, three groups of rice distribution label maps are drawn. Finally, the rice planting areas in Shenyang are identified with a fully convolutional network for image segmentation, and the planting area is estimated. The main results of this study are as follows:

(1) The Gram-Schmidt spectral sharpening method is used to fuse the Landsat 8 OLI multispectral image with the panchromatic image, raising the spatial resolution to 15 m and clearly improving image sharpness. To further distinguish rice from water in the paddy fields, the 10 m near-infrared band of the Sentinel-2A satellite is additionally fused with the multispectral image. The fusion results show that the spectral curve of water changes in a manner different from the other five classes: its reflectance does not increase but instead decreases markedly. This difference in behavior enlarges the contrast between water and rice in the fused image and effectively separates the two.

(2) First, the SVM classifier is used for a preliminary classification of the six land-cover types in Shenyang. For the SVM classification of the original multispectral images, nine experiments were carried out: for each of June, July, and September, three groups of training samples from different regions were selected and cross-classification experiments were performed. This avoids errors caused by an unsuitable choice of classification period or sample area and improves the accuracy of the classification results. The experiments show that the June classification with sample group 3 achieves the highest accuracy. Then, using the sample areas of group 3 as training samples, the same classification experiment is performed on the June pan-fused and NIR-fused images to obtain preliminary classification results. Finally, according to the texture, color, and geometric features, the NDVI, and the spectral characteristic curves, the classification results of the original multispectral image, the pan-fused image, and the NIR-fused image are corrected; three groups of label maps are drawn, and a label map library for network training is established.
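As an illustration of the per-pixel SVM classification step described above, the following is a minimal, hypothetical sketch. It assumes the fused scene is available as a (rows, cols, bands) NumPy array and that labeled training pixels for the six land-cover classes have already been sampled; the use of scikit-learn and all variable names are assumptions for illustration, not the thesis implementation.

```python
# Hypothetical sketch of per-pixel SVM classification of a fused image.
# Assumptions (not from the thesis): scikit-learn is available, `image` is a
# (rows, cols, bands) NumPy array, and `train_pixels` / `train_labels` hold
# sampled band vectors and their classes (rice, water, other vegetation,
# bare land, mountains, buildings encoded as 0..5).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def classify_image(image, train_pixels, train_labels):
    rows, cols, bands = image.shape

    # Standardize band values so no single band dominates the SVM kernel.
    scaler = StandardScaler().fit(train_pixels)

    # RBF-kernel SVM; C and gamma would in practice be tuned per sample group.
    svm = SVC(kernel="rbf", C=10.0, gamma="scale")
    svm.fit(scaler.transform(train_pixels), train_labels)

    # Classify every pixel of the scene and restore the image shape.
    flat = image.reshape(-1, bands)
    pred = svm.predict(scaler.transform(flat))
    return pred.reshape(rows, cols)
```

The class map produced for each of the three image groups would then be corrected against the texture, color, geometric, NDVI, and spectral-curve evidence to obtain the label maps used for network training.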
(3) A fully convolutional network is used to identify the rice planting areas in Shenyang, with the U-Net model as the training network. By introducing improved deconvolution layers, the correlation between adjacent pixels is increased and pixel-level image segmentation is realized. Based on the label maps, the network parameters are tuned and a rice planting area identification model is established. The classification results of the NIR-fused images receive the best overall evaluation, with an overall accuracy of 85.6780% and a Kappa coefficient of 0.8656. Further calculation gives a rice planting area of 1032.98 km² in Shenyang in 2015. Accuracy verification by a sampling survey yields an accuracy of 85.3%, and verification by a method based on the pixel count yields an accuracy of 86.37%.
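To illustrate how the reported area and accuracy figures relate to the segmentation output, the following is a minimal sketch, assuming the U-Net prediction and the reference label map are binary rice masks at 15 m resolution; the function and variable names are illustrative assumptions, not the thesis code.

```python
# Hypothetical sketch of the area estimate and accuracy figures reported above.
# Assumptions (not from the thesis): `pred_mask` and `ref_mask` are 2-D boolean
# NumPy arrays marking rice pixels, and each pixel covers 15 m x 15 m.
import numpy as np

def rice_area_km2(pred_mask, pixel_size_m=15.0):
    # Planting area = number of rice pixels x area of one pixel, in km^2.
    return pred_mask.sum() * (pixel_size_m ** 2) / 1e6

def overall_accuracy(pred_mask, ref_mask):
    # Fraction of pixels whose predicted class matches the reference label map.
    return np.mean(pred_mask == ref_mask)

def kappa_coefficient(pred_mask, ref_mask):
    # Cohen's Kappa for the two-class (rice / non-rice) confusion matrix.
    po = overall_accuracy(pred_mask, ref_mask)
    pe = (pred_mask.mean() * ref_mask.mean()
          + (1 - pred_mask.mean()) * (1 - ref_mask.mean()))  # chance agreement
    return (po - pe) / (1 - pe)
```

Under these assumptions, the reported 1032.98 km² would correspond to roughly 4.59 million rice pixels at 15 m resolution.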