
A Kind Of Estimate For The Learning Rates Of Regularized Regression Algorithms

Posted on: 2012-05-27
Degree: Master
Type: Thesis
Country: China
Candidate: J X Zhang
Full Text: PDF
GTID: 2120330335978432
Subject: Applied Mathematics
Abstract/Summary:
Learning theory is a young and rapidly developing discipline that grew out of small-sample learning and related fields such as neural network learning, regression, classification, density estimation, and pattern recognition. Because it is founded on the empirical risk minimization principle and gives good control of large-margin linear classification and regression, it has found successful applications in data modeling, optimization, classification, and prediction, and has therefore attracted particular attention as a new research field.

This thesis gives a quantitative estimate for the learning rates of regularized regression algorithms. We begin with the hard ε-hyperplane method, from which we derive the learning algorithm with the ε-insensitive loss, and then extend it to a learning algorithm with a general loss. Using this algorithm together with the subgradient from convex analysis, we prove a representer theorem for its solutions. This theorem equivalently reduces the general algorithm to a regression learning algorithm on a finite-dimensional Euclidean space, so the rate estimate becomes a convergence-rate estimate for the solutions of a statistical programming problem. The rate is decomposed into a sample error and an approximation error: the sample error is bounded by means of the Markov inequality and a K-functional, while the approximation error is given explicitly.
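The reduction described above can be sketched in code: by a representer theorem, the RKHS minimizer of the ε-insensitive regularized risk is a finite kernel expansion over the sample, so the problem becomes a finite-dimensional convex minimization solvable by subgradient descent. This is only an illustrative sketch, not the thesis's algorithm: the Gaussian kernel, the toy data, and all parameter values (eps, lam, sigma, the step sizes) are assumptions made for the example.

```python
import numpy as np

# Illustrative sketch (Gaussian kernel and all parameters are assumptions).
# A representer theorem says the RKHS minimizer of
#     (1/m) * sum_i L_eps(f(x_i) - y_i) + lam * ||f||_K^2
# has the form f(x) = sum_j c_j K(x_j, x), so we optimize over c in R^m.

def gaussian_kernel(X, Z, sigma=1.0):
    """Gram matrix K[i, j] = exp(-||X[i] - Z[j]||^2 / (2 * sigma**2))."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def eps_subgradient(r, eps):
    """A subgradient of the eps-insensitive loss max(|r| - eps, 0) in r."""
    g = np.zeros_like(r)
    g[r > eps] = 1.0
    g[r < -eps] = -1.0
    return g

def fit(X, y, eps=0.1, lam=1e-2, sigma=1.0, lr0=0.1, n_iter=2000):
    """Subgradient descent with diminishing steps lr0 / sqrt(t)."""
    m = y.shape[0]
    K = gaussian_kernel(X, X, sigma)
    c = np.zeros(m)
    for t in range(1, n_iter + 1):
        r = K @ c - y  # residuals f(x_i) - y_i
        # Subgradient of the empirical risk plus the regularizer lam * c^T K c.
        grad = K @ eps_subgradient(r, eps) / m + 2.0 * lam * (K @ c)
        c -= (lr0 / np.sqrt(t)) * grad
    return c, K

# Toy data (an assumption for the demo): a noisy sine curve.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(40, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=40)
c, K = fit(X, y)
train_rmse = float(np.sqrt(np.mean((K @ c - y) ** 2)))
```

The point of the sketch is the change of variables: instead of searching over an infinite-dimensional function space, one searches over the coefficient vector c, exactly the finite-dimensional reduction that makes the rate analysis in the thesis possible.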
Keywords/Search Tags: regularized regression algorithm, reproducing kernel Hilbert spaces, learning rates