Convolutional neural networks are a typical class of feedforward neural networks inspired by the biological visual perception mechanism. In recent years, convolutional neural networks have been widely used in computer vision and related fields. As computer vision tasks grow more complex, the demand for feature-processing capability also increases, and convolutional neural networks consequently tend to become deeper and more complex. However, such networks require large storage space and high computing power, so convolutional neural networks with complex structures cannot be deployed on edge devices with limited storage and computing power, and cannot balance accuracy against real-time performance. Motivated by this, this thesis studies how to train convolutional neural networks stably and efficiently in resource-constrained environments. It proposes an image recognition method based on a ResNet stacking ensemble and an image recognition method based on multi-teacher ensemble distillation, and applies the proposed algorithms to the CIFAR-10 dataset. The main work of this thesis is as follows.

Chapter 1 introduces the background of the image recognition task and convolutional neural networks, surveys the research status in China and abroad, and finally presents the research content and chapter arrangement of this thesis.

Chapter 2 provides the necessary preliminaries, mainly covering convolutional neural networks, ensemble learning algorithms, and knowledge distillation.

Chapter 3 proposes an image recognition method based on a ResNet stacking ensemble. Because the accuracy of a single convolutional neural network on image recognition is not sufficiently stable, a stacking ensemble is combined with convolutional neural networks: ResNet networks serve as the base models and a Softmax classifier serves as the meta-model. The method is also compared with Bagging and Boosting ensembles. Finally, the effectiveness of the algorithm is validated on the CIFAR-10 dataset.

Chapter 4 proposes an image recognition method based on multi-teacher ensemble distillation. Knowledge distillation is used to compress the parameters of convolutional neural networks: ResNet20, ResNet32, and ResNet44 serve as the teacher models, and ResNet14 serves as the student model. During knowledge distillation, the student model is trained under the guidance of the pre-trained teacher models, and the knowledge output by the multiple teachers is integrated by averaging. Finally, the effectiveness of the proposed multi-teacher ensemble distillation algorithm is validated on the CIFAR-10 dataset. The experimental results show that the multi-teacher model achieves better classification generalization than a single-teacher model.

Chapter 5 presents conclusions and future work.
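The stacking scheme described for Chapter 3 can be sketched as follows. This is a minimal illustrative example, not the thesis implementation: the random linear scorers stand in for pre-trained ResNet base models, and the data, dimensions, and learning rate are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Stand-ins for three pre-trained ResNet base models: each maps an input
# to a vector of class probabilities (here, a random linear scorer).
n_classes, n_features, n_samples = 10, 32, 500
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, n_classes, size=n_samples)
base_models = [rng.normal(size=(n_features, n_classes)) for _ in range(3)]

# Level 0: concatenate the base models' probability outputs as meta-features.
meta_X = np.hstack([softmax(X @ W) for W in base_models])  # shape (500, 30)

# Level 1: train the Softmax (multinomial logistic) meta-classifier by
# gradient descent on the cross-entropy loss.
W_meta = np.zeros((meta_X.shape[1], n_classes))
Y = np.eye(n_classes)[y]  # one-hot labels
for _ in range(200):
    P = softmax(meta_X @ W_meta)
    W_meta -= 0.5 * meta_X.T @ (P - Y) / n_samples

pred = softmax(meta_X @ W_meta).argmax(axis=1)
```

In practice the base-model outputs would come from ResNets trained on held-out folds of CIFAR-10, so that the meta-model learns to combine predictions rather than memorize training outputs.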
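The multi-teacher distillation objective described for Chapter 4 can be sketched as below: the three teachers' temperature-softened outputs are averaged into one soft target, and the student is trained on a weighted sum of a distillation term and the ordinary hard-label cross-entropy. The random logits stand in for network outputs, and the temperature `T` and weight `alpha` are assumed hyperparameters, not values from the thesis.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
batch, n_classes = 8, 10
T, alpha = 4.0, 0.7  # assumed temperature and loss weight

# Stand-ins for logits from three pre-trained teachers and one student.
teacher_logits = [rng.normal(size=(batch, n_classes)) for _ in range(3)]
student_logits = rng.normal(size=(batch, n_classes))
labels = rng.integers(0, n_classes, size=batch)

# Ensemble the teachers by averaging their temperature-softened outputs.
avg_teacher = np.mean([softmax(t, T) for t in teacher_logits], axis=0)

# Distillation term: cross-entropy between the averaged soft targets and the
# student's softened prediction, scaled by T^2 (the usual gradient rescaling).
student_soft = softmax(student_logits, T)
kd_loss = -np.mean(np.sum(avg_teacher * np.log(student_soft + 1e-12), axis=1)) * T**2

# Hard-label term: ordinary cross-entropy with the ground-truth labels.
student_hard = softmax(student_logits)
ce_loss = -np.mean(np.log(student_hard[np.arange(batch), labels] + 1e-12))

total_loss = alpha * kd_loss + (1 - alpha) * ce_loss
```

Minimizing `total_loss` with respect to the student's parameters pushes the small ResNet14 toward the averaged behavior of the larger teachers while still fitting the true labels.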