Extreme Learning Machine (ELM) is a machine learning method for training Single-hidden-Layer Feedforward Neural Networks (SLFNs), and it has clear advantages in classification and regression problems. ELM improves on the Back Propagation (BP) algorithm: it not only overcomes the slow training speed of BP but also achieves good generalization performance. With these advantages, ELM is widely used in speech recognition, fault diagnosis, stock price prediction, image processing, and other practical fields. To improve prediction and classification accuracy, researchers have extended ELM and proposed the Regularized Extreme Learning Machine (RELM). In practical applications, however, data are often contaminated with noise and outliers, which degrades the performance of the algorithm. Because RELM has no mechanism for handling noise and outliers specifically, the model is prone to overfitting, which greatly reduces its generalization performance. Therefore, building on RELM theory, this thesis improves RELM from the perspectives of the loss function and the membership function. The main work is as follows:

1. An L2,1-norm robust regularized extreme learning machine (L2,1-RRELM) based on the CCCP algorithm is proposed. By replacing the squared loss with a non-convex loss function, L2,1-RRELM imposes a constant penalty on noise and outliers, reducing their adverse effects. At the same time, the L2,1-norm replaces the L2-norm in the structural-risk term of RELM: hidden-layer neurons are ranked by importance, and unimportant neurons are pruned to make the model sparse. To handle the non-convexity of L2,1-RRELM, the concave-convex procedure is used to solve the model, and the convergence of the algorithm is proved. L2,1-RRELM was verified experimentally on artificial datasets and UCI datasets with different levels of noise. The experimental results show that L2,1-RRELM has good generalization performance, strong robustness, and high noise resistance.

2. An L1-norm robust regularized extreme learning machine with asymmetric C-loss (L1-ACELM) is proposed. Traditional RELM uses the squared loss function as the empirical risk; because this loss is unbounded, the model lacks robustness to noise and outliers, and because it is symmetric, the algorithm generalizes poorly when processing asymmetric data. To address these problems, a bounded, non-convex, and asymmetric C-loss function is proposed to replace the squared loss in RELM. By adjusting the asymmetry parameter, the influence of noise and outliers on the algorithm is reduced and its generalization performance is improved. At the same time, a sparse mathematical model is obtained by replacing the L2-norm with the L1-norm. The half-quadratic optimization algorithm is used to solve the resulting problem, and the convergence of the algorithm is proved. L1-ACELM was verified experimentally on artificial datasets and UCI datasets with different types of noise. Numerical analysis shows that, in most cases, the proposed method has better generalization ability than traditional regression methods.

3. A regularized fuzzy least squares twin extreme learning machine (RFLSTELM) is proposed. Compared with traditional ELM, RFLSTELM only needs to solve two small-scale systems of linear equations, which shortens the training time to a certain extent. To avoid overfitting, regularization terms are added to the model. At the same time, a membership function is introduced to assign a different weight to each sample point, weakening the influence of noise points on the model. RFLSTELM was applied to NDC datasets of different sizes and to UCI datasets for numerical experiments. The experimental results show that RFLSTELM has good classification performance.
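To make the common machinery concrete, the sketch below shows a standard ELM/RELM fit (random, fixed hidden weights; output weights from a closed-form ridge solution) and a robust variant trained by iteratively reweighted least squares, where a bounded Welsch/C-type loss 1 − exp(−e²/2σ²) stands in for the non-convex losses discussed above. This is a minimal illustration, not the thesis's exact L2,1-RRELM or L1-ACELM formulations; all function names and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_features(X, W, b):
    """Random hidden-layer mapping H = sigmoid(X W + b) used by ELM."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def train_relm(X, y, n_hidden=50, C=1e3, rng=rng):
    """RELM: hidden weights are random and fixed; output weights beta
    solve the ridge problem  min ||H beta - y||^2 + ||beta||^2 / C."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = elm_features(X, W, b)
    # Closed-form ridge solution: beta = (H^T H + I/C)^{-1} H^T y
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ y)
    return W, b, beta

def train_robust_relm(X, y, n_hidden=50, C=1e3, sigma=1.0, n_iter=10, rng=rng):
    """Robust variant via iteratively reweighted least squares with the
    bounded Welsch/C-type loss 1 - exp(-e^2 / (2 sigma^2)): samples with
    large residuals get weights near zero, capping outlier influence."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = elm_features(X, W, b)
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ y)
    for _ in range(n_iter):
        e = H @ beta - y
        w = np.exp(-e**2 / (2 * sigma**2))       # half-quadratic weights
        Hw = H * w[:, None]                      # diag(w) applied row-wise
        beta = np.linalg.solve(H.T @ Hw + np.eye(n_hidden) / C, Hw.T @ y)
    return W, b, beta

# Usage: fit a noisy sine curve contaminated with a few gross outliers.
X = np.linspace(-3, 3, 200)[:, None]
y = np.sin(X).ravel() + 0.05 * rng.standard_normal(200)
y[::25] += 5.0                                   # inject outliers
W, b, beta = train_robust_relm(X, y)
pred = elm_features(X, W, b) @ beta
```

The reweighting loop is the half-quadratic idea in miniature: each iteration fixes the weights implied by the current residuals and then solves an ordinary weighted ridge problem in closed form, so every step remains a small linear solve.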