In the agricultural field, with the rapid development of information technology, traditional manual labor is gradually being replaced by intelligent machinery. The introduction of agricultural robots has not only improved labor efficiency and crop yields but has also promoted the agricultural economy. The navigation system is the key component that enables agricultural robots to work in the field, and it is essential for precise field operations. Therefore, research on navigation systems for agricultural robots has attracted increasing attention. In this thesis, based on video image data collected by an agricultural robot, an autonomous real-time navigation system is designed using a deep semantic segmentation model.

Firstly, to enhance the green characteristics of low-resolution crop images, a novel color index is proposed based on a linear combination of multiple color indices, so as to strengthen the green (crop) pixels while suppressing the soil and background pixels. To verify the superiority of the proposed approach, it is combined with multiple-spatial-information-based fuzzy c-means clustering in segmentation experiments on crop images. The results reveal that the recombined color index, being insensitive to sunlight, achieves higher segmentation performance than traditional indices.

Secondly, considering the strict real-time requirements of video image segmentation for agricultural robots, this thesis modifies a deep feature aggregation network for real-time semantic segmentation (DFANet) to improve the segmentation accuracy of crop-row images. In particular, the color images filtered with the proposed color-index combination method, together with the corresponding grayscale images, are used as the input of DFANet. In the experiments, the modified DFANet is adopted for the segmentation and extraction of crop rows. The experimental results reveal that the
modified DFANet improves the segmentation accuracy of crop images in complex and changeable environments.

Thirdly, to improve navigation accuracy, based on the crop rows segmented by the above approach, the Canny edge detection operator and the FAST corner detection algorithm are combined to extract corner points along the crop rows in the images. The least squares method is then applied to fit straight lines to the corner coordinates and extract the centerline of each crop row. Using the centerline of the crop row and its parallel edge lines, the navigation information of the agricultural robot (the distance and angle parameters) is extracted, and the navigation strategies of the agricultural robot are designed based on these navigation lines.

Fourthly, to reduce the operating load on the agricultural robot, this thesis divides the navigation system into two modules: a navigation control module and a navigation calculation module. The navigation control module is deployed on the agricultural robot, while the navigation calculation module is deployed on a cloud server. More importantly, an effective real-time interactive system is designed for data transmission between the two modules.

Finally, to test the effectiveness of the above navigation system and the interactive performance of the real-time system, real crop rows are simulated in an indoor environment. The navigation performance of the agricultural robot is then evaluated from three aspects: real-time interaction, stability and safety, and the driving speed and turning angle of the agricultural robot. The experimental results validate the effectiveness of the proposed navigation method.
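The least-squares centerline fitting and the extraction of the distance and angle parameters can be sketched as follows. The corner coordinates and the exact definitions of "distance" and "angle" here are illustrative assumptions; the thesis's precise formulation may differ:

```python
import numpy as np

def fit_centerline(corners):
    """Fit a straight line through detected corner points by least squares.

    corners: (N, 2) array of (x, y) pixel coordinates on a crop row.
    Returns (slope, intercept) of x = slope * y + intercept; x is fitted
    as a function of y because crop rows run roughly vertically in the image.
    """
    x, y = corners[:, 0], corners[:, 1]
    slope, intercept = np.polyfit(y, x, deg=1)
    return slope, intercept

def navigation_parameters(slope, intercept, img_width, img_height):
    """Derive simplified navigation parameters from the fitted centerline.

    distance: lateral offset (pixels) of the centerline from the image
              center, measured at the bottom of the image.
    angle: heading deviation (degrees) of the centerline from vertical.
    """
    x_bottom = slope * img_height + intercept
    distance = x_bottom - img_width / 2.0
    angle = np.degrees(np.arctan(slope))
    return distance, angle

# Hypothetical corner points along a slightly tilted crop row.
corners = np.array([[320, 0], [330, 120], [340, 240], [350, 360]])
slope, intercept = fit_centerline(corners)
print(navigation_parameters(slope, intercept, img_width=640, img_height=480))
```

A positive distance would indicate the row lies to the right of the robot's optical axis, and the angle indicates how sharply the robot must steer to realign with the row; the control module would map these two values to speed and steering commands.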