
Improved Neural Network Model And Its Application Based On Prior Probability And Random Parameters

Posted on: 2022-07-07
Degree: Master
Type: Thesis
Country: China
Candidate: X Deng
Full Text: PDF
GTID: 2480306320454284
Subject: Probability theory and mathematical statistics
Abstract/Summary:
The BP neural network, as an important data classification method, has been widely applied in intelligent data processing, classification, pattern recognition, decision support, and other fields. This thesis carries out the following research based on the neural network model.

First, it addresses the random initialization of weights and biases in BP neural networks. Some studies initialize weights and biases with random numbers drawn from the standard normal distribution, but the rationality of this choice has not been examined. This thesis uses simulation experiments to explore how random weights and biases drawn from different distributions influence network performance. The results show that the variance of a normal initialization should not be too large, or it will degrade the performance of the network; that the choice of initial distribution itself affects performance; and that, for a given data set, random initialization from the standard normal distribution may not be the optimal choice.

Second, it addresses data imbalance: large differences in class sizes lead to low classification accuracy on the smaller classes. An improved neural network model based on prior probability is proposed. The reciprocal of each class's prior probability is used as the coefficient of that class's data in the objective function of the neural network, improving the recognition rate of small-class samples. Simulation experiments comparing the model with classical neural networks and support vector machines show that the proposed algorithm is more effective.

Finally, in the extreme learning machine algorithm the network node parameters are selected at random, and the uncertainty of these parameters may affect the performance of the entire network. For this reason, this thesis proposes a two-stage extreme learning machine algorithm. In the first stage, the initial random input weights and biases are generated and the output weights are computed by the standard extreme learning machine procedure; in the second stage, with the output weights held fixed, the initial input weights and biases are optimized with the minimum error function as the objective. Simulation experiments on UCI data for both function regression and classification problems show that the two-stage extreme learning machine algorithm achieves higher performance.
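The initialization finding can be illustrated with a minimal NumPy sketch (not the thesis's actual simulation code; the network size and the saturation thresholds here are arbitrary illustrative choices): with sigmoid hidden units, a large initialization variance drives most activations into the flat regions of the sigmoid, where gradients vanish, consistent with the conclusion that the initialization variance should not be too large.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def saturation_fraction(init_std, n_in=100, n_hidden=50, n_samples=1000):
    """Fraction of hidden sigmoid activations pushed into the flat
    regions (< 0.05 or > 0.95) when weights and biases are drawn
    from N(0, init_std**2). Sizes are illustrative, not from the thesis."""
    X = rng.standard_normal((n_samples, n_in))
    W = rng.normal(0.0, init_std, size=(n_in, n_hidden))
    b = rng.normal(0.0, init_std, size=n_hidden)
    A = sigmoid(X @ W + b)
    return float(np.mean((A < 0.05) | (A > 0.95)))

small_var = saturation_fraction(0.1)   # modest initialization variance
large_var = saturation_fraction(2.0)   # large initialization variance
```

With the larger standard deviation, the pre-activations are roughly twenty times wider, so most units land in the saturated regions; the same comparison can be repeated for uniform or other initial distributions.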
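The prior-probability weighting can be sketched as follows. This is a minimal illustration assuming a cross-entropy objective; the thesis does not specify its exact loss function, so the function names and the loss form here are assumptions. The key idea is only that each sample is weighted by the reciprocal of its class's prior, so errors on minority classes contribute more to the objective.

```python
import numpy as np

def class_weights_from_priors(y):
    """Weight each class by the reciprocal of its prior probability,
    so that rare classes contribute more to the objective function.
    Assumes integer labels 0..k-1."""
    classes, counts = np.unique(y, return_counts=True)
    priors = counts / counts.sum()
    return classes, 1.0 / priors

def weighted_cross_entropy(probs, y, weights):
    """probs: (n, k) predicted class probabilities; y: (n,) labels.
    Each sample's log-loss is scaled by 1 / prior of its class."""
    w = weights[y]
    ll = -np.log(probs[np.arange(len(y)), y] + 1e-12)
    return float(np.mean(w * ll))

y = np.array([0] * 90 + [1] * 10)       # 90% majority, 10% minority
classes, w = class_weights_from_priors(y)
probs = np.full((100, 2), 0.5)          # uninformative predictions
loss = weighted_cross_entropy(probs, y, w)
```

Here the minority class gets weight 1/0.1 = 10 versus 1/0.9 ≈ 1.11 for the majority class, so a misclassified minority sample costs roughly nine times as much.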
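The two-stage procedure can be sketched in NumPy. This is an illustrative single-output regression version under stated assumptions, not the thesis's implementation: the hidden-layer size, learning rate, and step count are arbitrary, and stage two is shown here as plain gradient descent on the squared error with the output weights held fixed.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def elm_fit(X, Y, n_hidden=20):
    """Stage 1: draw random input weights and biases, then solve the
    output weights by least squares (the standard ELM step)."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = sigmoid(X @ W + b)                 # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ Y           # Moore-Penrose solution
    return W, b, beta

def refine_input_weights(X, Y, W, b, beta, lr=1e-4, steps=100):
    """Stage 2: hold the output weights beta fixed and gradient-descend
    the squared error with respect to the input weights and biases."""
    W, b = W.copy(), b.copy()
    for _ in range(steps):
        H = sigmoid(X @ W + b)
        E = H @ beta - Y                   # residual, shape (n, outputs)
        G = (E @ beta.T) * H * (1.0 - H)   # gradient through the sigmoid
        W -= lr * (X.T @ G) / len(X)
        b -= lr * G.mean(axis=0)
    return W, b

def mse(X, Y, W, b, beta):
    return float(np.mean((sigmoid(X @ W + b) @ beta - Y) ** 2))

# toy regression data (stand-in for the UCI experiments)
X = rng.standard_normal((200, 3))
Y = X @ np.array([[1.0], [2.0], [3.0]]) + 0.1 * rng.standard_normal((200, 1))

W0, b0, beta = elm_fit(X, Y)
err1 = mse(X, Y, W0, b0, beta)             # stage-1 training error
W1, b1 = refine_input_weights(X, Y, W0, b0, beta)
err2 = mse(X, Y, W1, b1, beta)             # stage-2 training error
```

Because the stage-1 output weights are optimal only for the initial random hidden layer, adjusting the input weights and biases while holding the output weights fixed gives a further descent direction on the same error function.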
Keywords/Search Tags: Neural network, Random weights and biases, Imbalanced data, Prior probability, Two-stage extreme learning machine