Cotton is an important economic crop and strategic material in China and is widely used in many fields. As one of the world's major cotton-producing countries, China is constrained by the limited accuracy and efficiency with which cotton topping machines identify cotton top buds: boll damage and missed topping occur frequently during topping operations, making cotton yield difficult to guarantee. The top bud recognition technology currently used in production is based mainly on traditional image processing and detection methods, whose performance still cannot meet the demand. Many research fields now replace traditional detection methods with deep learning, and experiments have shown that deep learning methods can effectively improve recognition accuracy. This thesis therefore introduces deep learning object detection into cotton top bud recognition so that top buds can be identified accurately and quickly. The goal of this work is to compress the network model as far as possible while preserving its recognition accuracy, so that the model can be deployed on mobile devices. The main contributions are as follows:

(1) The YOLOv4 network is complex, large, and heavily parameterized, which makes it unsuitable for deployment on embedded systems. To address this, the YOLOv4 model is compressed by combining channel pruning with shortcut-module (layer) pruning. The scaling factors γ of the BN layers are used to sparsify the model and identify channels of low importance; a global threshold then yields a pruning mask for each convolutional layer, and channel pruning is performed with the unioned masks. On this basis, the importance of the CBM module in front of each residual block is evaluated by sorting the mean γ of its BN layer from high to low; the number of shortcut modules to prune is set, and the layers with the lowest mean γ are pruned accordingly (a sketch of this selection procedure appears after (2) below). Based on the experimental results, a model with a channel pruning rate of 85% and 6 pruned shortcut modules is adopted: the model size is reduced by 97.04%, the number of parameters is reduced by 97.11%, and the mAP on the test set is 0.946. This shows that combining channel pruning with layer pruning can greatly compress the model's size and parameter count and speed up inference, although at the cost of some recognition accuracy.

(2) After pruning, the network suffers from reduced recognition accuracy, missed detections, and inaccurate localization when recognizing top buds against complex backgrounds. To address this, the SE and CBAM attention modules are embedded into the pruned lightweight network, weighting channel and spatial features so that useful features are enhanced and useless ones are suppressed. The final configuration embeds two SE modules and one CBAM module. Because both are lightweight modules, the parameter count increases by only 0.63%, and the resulting model achieves an mAP of 0.971 on the test set. This method reduces the missed-detection rate and improves recognition accuracy.
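To make the γ-based selection in (1) concrete, the following is a minimal PyTorch-style sketch of how channels can be masked with a global BN threshold and how shortcut blocks can be ranked by the mean γ of their preceding BN layer. The function names, the 0.85 ratio, and the choice of which masks to union are illustrative assumptions and do not reproduce the exact implementation used in this work.

```python
# Minimal sketch: select low-importance channels from BN scaling factors (gamma).
# Assumes a PyTorch model whose BatchNorm2d layers were trained with L1 sparsity
# on their weights (gamma); the names and the 0.85 ratio are illustrative.
import torch
import torch.nn as nn

def build_channel_masks(model: nn.Module, prune_ratio: float = 0.85):
    """Return a per-BN-layer boolean mask of channels to keep."""
    # 1. Gather all gamma values to compute a single global threshold.
    gammas = torch.cat([
        m.weight.data.abs().flatten()
        for m in model.modules() if isinstance(m, nn.BatchNorm2d)
    ])
    threshold = torch.sort(gammas)[0][int(len(gammas) * prune_ratio)]

    # 2. Build a keep-mask for every BN layer; channels whose |gamma| falls
    #    below the global threshold are marked for pruning.
    masks = {}
    for name, m in model.named_modules():
        if isinstance(m, nn.BatchNorm2d):
            mask = m.weight.data.abs() > threshold
            if mask.sum() == 0:
                # Keep at least one channel so the layer is never emptied.
                mask[m.weight.data.abs().argmax()] = True
            masks[name] = mask
    return masks

def union_masks(mask_a: torch.Tensor, mask_b: torch.Tensor) -> torch.Tensor:
    """Union (OR) of two keep-masks; typically applied to layers that are
    coupled, e.g. through a shortcut, so their channel counts stay consistent."""
    return mask_a | mask_b

def rank_shortcut_blocks(block_bns: dict):
    """Rank shortcut/residual blocks by the mean |gamma| of the BN layer in the
    CBM module in front of each block; the lowest-ranked blocks are pruned first."""
    means = {name: bn.weight.data.abs().mean().item() for name, bn in block_bns.items()}
    return sorted(means, key=means.get)  # ascending: prune from the front of this list
```

The returned masks only mark which channels survive; the actual slicing of convolution and BN weights is omitted here.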
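As a reference for the attention modules in (2), here is a minimal PyTorch sketch of an SE block and of the spatial-attention half of CBAM in their commonly used form; the reduction ratio of 16 and the 7×7 kernel are generic defaults rather than the settings chosen in this work.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: reweight channel features by learned importance."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global spatial average
        self.fc = nn.Sequential(                     # excitation: per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # suppress unhelpful channels

class SpatialAttention(nn.Module):
    """Spatial half of CBAM: weight each location by pooled channel statistics."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        return x * torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
```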
(3) A cotton top bud recognition system was developed with PyQt, which combines the Python programming language with the Qt graphical user interface library. The system provides functions for loading the trained model, setting different thresholds, recognizing top buds, and displaying the results, so that recognition results can be visualized more intuitively (a minimal interface sketch is given below).

The experimental results show that the pruned network (85% channel pruning rate, 6 pruned shortcut modules) combined with the SE and CBAM modules greatly compresses the model while preserving recognition accuracy: the model size is reduced by 97.04%, the number of parameters is reduced by 97.09%, the inference time is reduced by 0.004 s, and the mAP on the test set reaches 0.971. The improved model also meets usage requirements under extreme illumination (such as overexposure or very weak light) and under occlusion, which improves the robustness of the network. This lays the foundation for subsequently deploying the deep neural network on a cotton topping machine to identify top buds.
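For the recognition system in (3), the following is a minimal PyQt5 sketch of a window that loads an image, applies a user-set confidence threshold, and displays the detection result. The Detector object, its detect method, and the widget layout are hypothetical placeholders rather than the actual interface of the developed system.

```python
# Minimal PyQt5 sketch: set a threshold, open an image, show the detection result.
import sys
from PyQt5.QtWidgets import (QApplication, QWidget, QVBoxLayout, QPushButton,
                             QDoubleSpinBox, QLabel, QFileDialog)
from PyQt5.QtGui import QPixmap

class TopBudWindow(QWidget):
    def __init__(self, detector):
        super().__init__()
        self.detector = detector                      # wraps the pruned, attention-augmented model
        self.setWindowTitle("Cotton Top Bud Recognition")

        self.threshold = QDoubleSpinBox()             # confidence threshold control
        self.threshold.setRange(0.0, 1.0)
        self.threshold.setSingleStep(0.05)
        self.threshold.setValue(0.5)

        self.image_label = QLabel("No image loaded")  # shows the annotated result
        open_btn = QPushButton("Open image and detect")
        open_btn.clicked.connect(self.open_and_detect)

        layout = QVBoxLayout(self)
        layout.addWidget(self.threshold)
        layout.addWidget(open_btn)
        layout.addWidget(self.image_label)

    def open_and_detect(self):
        path, _ = QFileDialog.getOpenFileName(self, "Select image")
        if not path:
            return
        # Hypothetical call: run the model and get a path to the annotated image.
        result_path = self.detector.detect(path, conf_thres=self.threshold.value())
        self.image_label.setPixmap(QPixmap(result_path))

if __name__ == "__main__":
    app = QApplication(sys.argv)
    window = TopBudWindow(detector=...)               # plug in the trained detector here
    window.show()
    sys.exit(app.exec_())
```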