
Research On Regularization Of Neural Network Based On Dropweight Algorithm

Posted on: 2020-12-26
Degree: Master
Type: Thesis
Country: China
Candidate: H J Yuan
Full Text: PDF
GTID: 2428330596995339
Subject: Electronic and communication engineering
Abstract/Summary:
In neural networks, in order to further improve the expressive ability of the model and to extract higher-level features from the input signal, network architectures have been growing deeper and wider. At the same time, a deeper and wider network brings a large number of weight parameters to train; especially when training data are insufficient, millions of weight parameters easily cause the network to over-fit. An effective method to prevent over-fitting is therefore urgently needed. In the past, multiple models were trained and combined to prevent over-fitting, but this makes model training and testing very time-consuming. At present, many regularization methods are used to improve the generalization ability of neural networks, such as L2 regularization, Batch Normalization, Dropout, and DropConnect.

During training, the Dropout algorithm randomly ignores the responses of a certain proportion of neuron nodes, so that weight updates no longer depend on the joint action of a fixed set of nodes. However, Dropout decides randomly whether each hidden-layer neuron is activated, and the deactivated neurons cannot participate in the weight updates at all, which ignores the differing strengths of individual neurons' contributions. These limitations motivate assigning the deactivated neurons a small activation level instead of ignoring them entirely.

This paper proposes a new neural-network regularization method, the DropWeight algorithm, to further improve the model's ability to prevent over-fitting and thereby enhance its generalization ability. DropWeight randomly decides, via a Bernoulli distribution with a given probability, whether each neuron in the fully connected layer is activated, and then introduces an activation-degree variable that assigns a small activation value to the deactivated neurons. This increases the complexity of the connections in the training network and makes every weight update rely on hidden nodes of both strong and weak capability.

Image-classification experiments are carried out on a multilayer perceptron and a convolutional neural network using the standard MNIST and CIFAR-10 data sets. The experiments show that when DropWeight is applied to the fully connected layer, each network can find a fixed optimal activation-level value of -0.4, with a tuning range of [-0.5, -0.1]. Compared with No-Drop, Dropout, and other algorithms, DropWeight shows clear advantages in improving the network's classification recognition rate and reducing its over-fitting ratio. The DropWeight algorithm not only further strengthens the model's ability to prevent over-fitting, but also improves its ability to learn and express features, thus enhancing the generalization ability of the entire network model.
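As a rough illustration of the mechanism described above, the sketch below implements one plausible reading of DropWeight in NumPy: each fully connected unit is kept with a Bernoulli draw, and "dropped" units, instead of being zeroed as in Dropout, have their activations scaled by a small activation-level coefficient (the abstract reports -0.4 as a good value). The function name `dropweight`, the scaling interpretation of the activation-degree variable, and the default parameters are assumptions for illustration, not the thesis's exact formulation.

```python
import numpy as np

def dropweight(activations, drop_prob=0.5, alpha=-0.4, rng=None):
    """Apply a DropWeight-style mask to a layer's activations.

    Assumed reading of the abstract: kept units pass through unchanged;
    dropped units are scaled by the small activation level `alpha`
    (reported tuning range [-0.5, -0.1]) rather than zeroed.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Bernoulli keep/drop decision per unit, as in standard Dropout
    keep_mask = rng.random(activations.shape) >= drop_prob
    # Dropped units still contribute, but at a reduced activation level
    return np.where(keep_mask, activations, alpha * activations)
```

With `drop_prob=0` this reduces to the identity, and with `alpha=0` it reduces to ordinary (unscaled) Dropout, which makes the relationship between the two methods easy to see.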
Keywords/Search Tags:neural networks, overfitting, regularization, DropWeight, generalization