Clouds play an important role in understanding the hydrological balance of nature and a variety of events in the Earth's atmosphere. Automated sky observation borrows the working principle of manual observation, using imaging equipment in place of the human eye to generate cloud images. Among these, ground-based cloud images show the local cloud state in greater detail and clarity: although their imaging range is smaller than that of satellite cloud images, they make local weather prediction more accurate and of practical guiding value. Therefore, in ground-based all-sky imaging, distortion of the cloud image should be reduced as much as possible so that the true shape of the clouds is preserved for subsequent analysis. Fisheye lenses are expensive and introduce obvious distortion, which interferes with cloud recognition and classification. To address this problem, this paper adopts scanning sky imaging, an image stitching method guided by feature points. First, the camera lens is rotated under control so that the surrounding environment is captured without blind spots and adjacent samples overlap. Then point features are extracted from each image using the ORB algorithm, chosen for its excellent real-time performance, and the traditional RANSAC algorithm is improved: by increasing the proportion of inliers in the initial sample set and selecting points in order, both the running time and the accuracy of feature point matching are improved. Afterwards, image fusion is applied to eliminate stitching seams. Finally, the stitched image serves as the reference image for stitching the remaining images one by one. Practice has shown that scanning-based stitched imaging can be realized relatively quickly.

In addition, cloud detection is a key step in making full use of ground-based cloud images. The accuracy of cloud image segmentation affects the subsequent analysis of image
data. With the development of deep learning, semantic segmentation can also be applied to identify cloud regions in images. Fully-supervised and weakly-supervised networks have been used to segment cloud images, but both require large numbers of training samples, which generally must be labeled by hand. To solve this problem, this paper proposes an automatic sample labeling method inspired by the Labelme software: exploiting the dark-channel characteristics of clouds, the binary image obtained by threshold segmentation after dark-channel processing is used directly as the training sample, which retains more cloud detail. In practical applications, existing cloud image segmentation networks were found to have too many parameters and to train slowly. To solve this problem, this paper draws on the "Encoder-Decoder" framework of SegNet, replaces SegNet's backbone with the lightweight MobileNet, and designs a lightweight ground-based cloud image segmentation model, Mobile-SegNet, with an improved activation function. Experiments show that, compared with classic semantic segmentation models, this model achieves a segmentation accuracy of 90.24% and a mean intersection over union of 79.48%. Although it holds no obvious advantage in accuracy, it greatly reduces the number of parameters and the running time, laying a foundation for the practical deployment of cloud detection. To meet the practical needs of scanning all-sky imaging and cloud detection, we designed and implemented an all-sky mosaic imaging and cloud detection tool. The front-end interface is built with Qt Creator, and the back-end logic is implemented with Python 3.7 and OpenCV 3.4.1; the software supports interactive selection and stitching of images and detects cloud regions in RGB images.
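The ordered-sampling idea behind the improved RANSAC can be illustrated on a toy line-fitting problem. The sketch below is a minimal numpy illustration, not the paper's homography-estimation code: the function name `ransac_line_ordered`, the score definition, and all parameter values are assumptions introduced here. Minimal sample sets are drawn in descending order of match quality rather than uniformly at random, which raises the proportion of inliers among the early hypotheses and so tends to find a good model in fewer iterations.

```python
import numpy as np

def ransac_line_ordered(points, scores, n_iters=50, inlier_tol=0.1):
    """Fit y = a*x + b with RANSAC, drawing minimal samples in order of
    descending quality score instead of uniformly at random.

    points: (N, 2) array of (x, y) samples.
    scores: (N,) quality scores (higher = more likely an inlier),
            e.g. the inverse of an ORB match distance.
    """
    order = np.argsort(-scores)      # best-scored points first
    pts = points[order]
    best_model, best_inliers = None, -1
    for k in range(n_iters):
        # Take the next ordered pair as the minimal sample set; this biases
        # early hypotheses toward high-quality (likely inlier) points.
        i, j = k % len(pts), (k + 1) % len(pts)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if np.isclose(x1, x2):       # degenerate sample, skip
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # Score the hypothesis by counting points within the tolerance band.
        residuals = np.abs(points[:, 1] - (a * points[:, 0] + b))
        n_inliers = int((residuals < inlier_tol).sum())
        if n_inliers > best_inliers:
            best_model, best_inliers = (a, b), n_inliers
    return best_model, best_inliers
```

In the paper's setting the minimal sample would be four point correspondences for a homography rather than two points for a line, but the sampling strategy is the same.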
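The automatic labeling step can be sketched with plain numpy. This is an illustrative reconstruction, not the paper's implementation: the neighborhood size and the threshold value below are assumptions. The intuition is that thick clouds are roughly white or gray, so all three channels, and hence the dark channel (the per-pixel channel minimum, optionally min-filtered over a small patch), are bright, while clear blue sky has a low red channel and therefore a low dark channel; a simple threshold then separates the two.

```python
import numpy as np

def dark_channel(img, patch=1):
    """Per-pixel dark channel: the minimum over the color channels,
    optionally followed by a local minimum filter over a
    (2*patch+1) x (2*patch+1) neighborhood."""
    dc = img.min(axis=2)
    if patch > 0:
        h, w = dc.shape
        padded = np.pad(dc, patch, mode='edge')
        # Minimum over all shifted views of the padded map = min filter.
        dc = np.min([padded[i:i + h, j:j + w]
                     for i in range(2 * patch + 1)
                     for j in range(2 * patch + 1)], axis=0)
    return dc

def auto_label(img, thresh=0.55):
    """Binary pseudo-label for an RGB image in [0, 1]: cloud pixels have a
    high dark channel, sky pixels a low one. `thresh` is an illustrative
    value, not the one used in the paper."""
    return (dark_channel(img) >= thresh).astype(np.uint8)
```

The resulting binary masks can then stand in for hand-drawn Labelme annotations when training the segmentation network.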
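The parameter saving from MobileNet's depthwise-separable convolutions, which is what makes Mobile-SegNet "light", can be checked with a quick count (biases ignored; the 256-channel, 3 x 3 layer below is illustrative and not taken from the paper):

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (biases ignored)."""
    return c_in * c_out * k * k

def separable_params(c_in, c_out, k):
    """Depthwise k x k convolution (one filter per input channel) followed
    by a 1 x 1 pointwise convolution that mixes channels."""
    return c_in * k * k + c_in * c_out

# Illustrative layer: 256 -> 256 channels, 3 x 3 kernel.
std = conv_params(256, 256, 3)       # 589,824 parameters
sep = separable_params(256, 256, 3)  # 67,840 parameters
ratio = std / sep                    # roughly 8.7x fewer parameters
```

Repeated across every convolutional layer of the backbone, this factorization is what drives the reduction in model size and running time reported above.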