
Research On Litchi Detection And Picking Point Location In Field Environment Based On Deep Learning

Posted on: 2021-03-19    Degree: Master    Type: Thesis
Country: China    Candidate: J R Zhong    Full Text: PDF
GTID: 2543306464499204    Subject: Engineering
Abstract/Summary:
China is the world's largest producer of litchi. Developing litchi-picking robots to automate harvesting can improve picking efficiency and reduce labor costs. Unlike citrus and apples, litchi fruit in the natural environment is small, fragile, and brittle-skinned; picking by suction or similar methods damages the fruit, so a gripping-type picking robot is better suited to litchi. During operation, a gripping robot must detect and identify litchi fruits and branches, and then locate the picking points on the branches to complete the harvest. To this end, this thesis studies litchi fruit detection and picking-point location in the field environment. The main work is as follows:

(1) Litchi detection based on a convolutional neural network. In the field environment, litchi images are affected by factors such as illumination, occlusion, and shooting distance. An improved object-detection algorithm based on the YOLOv3 model, called YOLOv3_MultipleNet, is proposed for litchi fruit detection; it involves two modifications. First, the original Adam optimizer is replaced by the improved Adamax optimizer. Second, drawing on the idea of densely connected convolutions, dense connections are applied to the second residual block and each residual block is connected to improve feature reuse. Experiments show that the improved algorithm outperforms the classic YOLOv3 with the Darknet-53 feature-extraction network: its mAP is 0.84339, higher than the classic YOLOv3_DarkNet's 0.78503. On the close-up litchi fruit dataset the mAP is 0.9784, versus 0.9583 for the classic YOLOv3; on the distant-view litchi fruit dataset the mAP is 0.6036, versus 0.5086 for the classic YOLOv3_DarkNet. Compared with the SSD and Faster R-CNN models, the mAP of YOLOv3_MultipleNet is about 5.3 percentage points higher than SSD and about 1.2 percentage points higher than Faster R-CNN.

(2) Litchi cluster detection based on density clustering. First, combining the object-detection results on distant-view litchi images, the k-means algorithm is used to find fruit-dense regions, and the picking priority is determined by the number of fruits in each region. Second, three density-clustering algorithms, DBSCAN, OPTICS, and Mean Shift, are substituted for k-means and the results are compared. Experiments show that k-means works best at k = 8, with an average ARI of 0.7. The density-clustering algorithms perform better than k-means: the average ARI values of DBSCAN, OPTICS, and Mean Shift reach 0.764, 0.763, and 0.768, respectively.

(3) Litchi branch segmentation based on semantic segmentation. Because litchi branches are difficult to locate in the field environment, an improved semantic-segmentation algorithm based on the DeepLab v3+ model, called DeepLab v3+_ResDense-Focal, is proposed to segment litchi branches. Focal Loss replaces the cross-entropy loss function, and, combining the ideas of dense convolution and residual blocks, a ResDense-Focal network is proposed as the feature-extraction network for DeepLab v3+. Experiments show that DeepLab v3+ performs best with ResDense-Focal: the overall mIoU is 0.797248, which is 0.9%, 1.8%, and 12.9% higher than ResNet-CE, DenseNet, and Xception, respectively. On the simple samples the mIoU is 0.8481, almost the same as the others; on the medium samples it is 0.8111, which is 0.7%, 2.7%, and 14.8% higher; on the complex samples it is 0.7703, which is 0.6%, 1.3%, and 15.3% higher.

(4) Location of the litchi picking point. First, the picking point in the two-dimensional image is determined by computing the largest circumscribed rectangle of the litchi cluster from the object-detection and clustering results, combined with the semantic-segmentation output. Second, with the RGB image and depth information obtained by a Kinect, the 3D coordinates corresponding to the 2D picking point are computed using Zhang Zhengyou's camera-calibration method, completing the location of the litchi picking point.
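As a rough illustration of the final step, the 2D-to-3D conversion described above can be sketched with the standard pinhole camera model, whose intrinsic parameters are what Zhang Zhengyou's calibration method estimates. The intrinsic values and pixel coordinates below are placeholders for illustration, not the calibrated parameters or measurements from the thesis.

```python
import numpy as np

def pixel_to_camera_xyz(u, v, depth_m, fx, fy, cx, cy):
    """Back-project a 2D picking point (u, v) with its measured depth
    (in meters) into 3D camera-frame coordinates via the pinhole model:
    X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Placeholder intrinsics for a Kinect-like RGB-D camera (assumed values).
fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5

# A hypothetical picking point at pixel (400, 300) observed at 1.2 m depth.
p = pixel_to_camera_xyz(400, 300, 1.2, fx, fy, cx, cy)
print(p)  # 3D coordinates (X, Y, Z) in the camera frame, in meters
```

Given a calibrated camera, this per-point back-projection is all that is needed to lift the 2D picking point found by detection, clustering, and segmentation into a 3D target for the gripper.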
Keywords/Search Tags: Litchi picking robot, object detection, density clustering, semantic segmentation, picking point location