Convolutional neural networks have made great progress in computer vision, intelligent vehicle driving, and other fields. However, convolutional neural networks are widely recognized to be heavily over-parameterized, which makes many models based on them difficult to deploy in low-memory environments and limits the practical application of many deep learning methods. To address this problem, researchers have studied individual model compression methods such as pruning, quantization, low-rank decomposition, and knowledge distillation. However, applying a single operation to a model cannot fully exploit the complementary advantages of the different compression methods. In view of the above problems, this paper proposes a hybrid model compression method based on pruning and tensor decomposition. In addition, a combined compression pipeline of pruning and quantization is applied to the object detection model YOLOv3, enabling efficient identification of insulators, spacers, and shock hammers on transmission lines.

Firstly, this paper proposes a hybrid model compression method based on pruning and tensor decomposition. The rank of the output feature map is used as the criterion for deciding which filters to prune, which effectively takes the information of the entire network into account; Tucker decomposition is then applied to the convolutional layers to achieve further compression. Experiments were carried out on the CIFAR-10 dataset. The results show that, compared with pruning or tensor decomposition alone, the hybrid method achieves a parameter compression rate of up to 85.8% on ResNet-56 and reduces FLOPs by 82.1%, while the accuracy of the compressed ResNet-56 drops by less than 2%. The same test on the VGG16 model further verifies the feasibility of the hybrid compression method.

Secondly, a multi-object transmission line identification method based on a lightweight YOLOv3 model is presented, in which the YOLOv3 model is compressed by pruning and quantization. First, the model is pruned. Analysis of the trained weights shows that YOLOv3's two sub-networks, the backbone and the feature pyramid network, have different weight distributions; pruning them directly with a single threshold leads to either over-pruning or insufficient pruning. To solve this problem, a scheme that prunes the two networks separately is proposed. Finally, the model is quantized: the 32-bit floating-point weights are converted to 8-bit integer weights, reducing the amount of computation and achieving model compression and acceleration. The final experimental results show that the proposed lightweight YOLOv3 model performs well on transmission line identification.
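As a rough sketch (not the thesis's actual implementation), the rank-based pruning criterion described above can be illustrated with NumPy: each filter is scored by the average matrix rank of the feature maps it produces over a batch, and the lowest-rank filters are selected for removal. The function names, shapes, and pruning ratio below are illustrative assumptions.

```python
import numpy as np

def filter_rank_scores(feature_maps):
    """Score each filter by the average matrix rank of its output
    feature maps over a batch (rank-based criterion sketch).

    feature_maps: array of shape (batch, channels, H, W).
    Low-rank outputs carry less information, so the filters that
    produce them are candidates for pruning.
    """
    batch, channels, _, _ = feature_maps.shape
    scores = np.zeros(channels)
    for c in range(channels):
        ranks = [np.linalg.matrix_rank(feature_maps[b, c]) for b in range(batch)]
        scores[c] = np.mean(ranks)
    return scores

def select_filters_to_prune(scores, prune_ratio):
    """Return the indices of the lowest-scoring filters to remove."""
    n_prune = int(len(scores) * prune_ratio)
    return np.argsort(scores)[:n_prune]

# Toy example: 4 filters, one of which only produces rank-1 maps.
rng = np.random.default_rng(0)
fmaps = rng.standard_normal((2, 4, 8, 8))
u = rng.standard_normal((2, 8, 1))
v = rng.standard_normal((2, 1, 8))
fmaps[:, 1] = u @ v          # filter 1 outputs rank-1 feature maps
scores = filter_rank_scores(fmaps)
pruned = select_filters_to_prune(scores, 0.25)  # picks filter 1
```

In a real network the scores would be estimated from a few batches of training images, and the corresponding filters (with their dependent channels in the next layer) removed before fine-tuning.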
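The Tucker decomposition step can likewise be sketched in NumPy. A common choice for convolutional layers (assumed here, since the abstract does not give details) is a Tucker-2 factorization that compresses only the input- and output-channel modes of the kernel via truncated SVDs of its unfoldings, leaving the small spatial modes intact; the ranks and shapes below are illustrative.

```python
import numpy as np

def tucker2_conv(kernel, r_out, r_in):
    """Tucker-2 factorization of a conv kernel via HOSVD (sketch).

    kernel: (C_out, C_in, k, k). The output- and input-channel modes
    are compressed to ranks r_out and r_in. Returns (core, U_out, U_in)
    such that the kernel is approximated by the core contracted with
    the two factor matrices.
    """
    c_out, c_in, _, _ = kernel.shape
    # Mode-0 unfolding: one row per output channel.
    unfold0 = kernel.reshape(c_out, -1)
    U_out = np.linalg.svd(unfold0, full_matrices=False)[0][:, :r_out]
    # Mode-1 unfolding: one row per input channel.
    unfold1 = kernel.transpose(1, 0, 2, 3).reshape(c_in, -1)
    U_in = np.linalg.svd(unfold1, full_matrices=False)[0][:, :r_in]
    # Core tensor: project both channel modes onto the factor bases.
    core = np.einsum('oikl,or,is->rskl', kernel, U_out, U_in)
    return core, U_out, U_in

def tucker2_reconstruct(core, U_out, U_in):
    """Expand the factors back to a full kernel for comparison."""
    return np.einsum('rskl,or,is->oikl', core, U_out, U_in)

# Toy example: an 8x6x3x3 kernel with exact multilinear rank (2, 2),
# so the factorization recovers it (almost) exactly.
rng = np.random.default_rng(1)
core_true = rng.standard_normal((2, 2, 3, 3))
A = rng.standard_normal((8, 2))
B = rng.standard_normal((6, 2))
kernel = np.einsum('rskl,or,is->oikl', core_true, A, B)
core, U_out, U_in = tucker2_conv(kernel, 2, 2)
recon = tucker2_reconstruct(core, U_out, U_in)
```

Here the 432 original kernel parameters shrink to 64 (36 in the core plus 16 + 12 in the factors); in a network, the factorized layer is implemented as a 1x1 conv, a small k x k conv, and another 1x1 conv.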
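The float32-to-int8 weight quantization applied to YOLOv3 can be sketched as follows. This is a minimal symmetric per-tensor scheme, assumed for illustration (the abstract does not specify the exact quantization scheme): weights are mapped onto the int8 range and a single scale factor is kept to dequantize them.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor quantization of float32 weights to int8.

    Maps [-max|w|, max|w|] onto [-127, 127]; returns the int8 tensor
    and the scale needed to dequantize (w is approximately q * scale).
    """
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights from the int8 tensor."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# Storage drops 4x (float32 -> int8); per-weight error is at most scale/2.
```

Beyond the 4x memory saving, int8 weights let the convolutions run on integer arithmetic units, which is the source of the acceleration claimed for the lightweight model.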