
ERM And Least Square Regularization Learning With Unbounded Sampling

Posted on: 2011-12-18    Degree: Doctor    Type: Dissertation
Country: China    Candidate: C Wang    Full Text: PDF
GTID: 1100360305983424    Subject: Computational Mathematics
Abstract/Summary:
With the continuing development of computer science in both theory and applications, how to understand and process data information by computer has become an important research problem in modern science and technology. The main task of statistical learning theory is to find regular relations between input and output data from a given set of samples.

Empirical risk minimization (ERM for short) is an important method for this problem. By the law of large numbers, the average of the errors over the individual samples converges to the expected error as the sample size tends to infinity. This allows us to approximate the best function by minimizing the average error over a finite sample, i.e., the empirical risk.

However, the ERM algorithm is often ill-posed. To overcome this, the regularization method is introduced into learning theory. This method originates in the study of inverse problems; here we use it in the form of adding a regularization term to the ERM functional, which eliminates the effect of the ill-posedness.

Previous analyses in statistical learning theory usually consider the following setting. The samples are drawn from a distribution function (or from several, when the samples are not identically distributed), and substituting these samples into the algorithm yields an approximation of the best function in some sense. The distribution function(s) are typically assumed to be uniformly bounded in the output variable. This assumption is rather strong; even the frequently used Gaussian distribution fails to satisfy it. The main contribution of this thesis is to replace this assumption with a weaker one: the distribution (or conditional distribution) only needs to satisfy certain moment increment bounds. In this setting, we utilize probability inequalities for unbounded random variables to derive learning rates that, at best, match those of the classical uniformly bounded case.
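For concreteness, the regularized least squares scheme discussed above is usually written as follows (this is the standard formulation from the learning-theory literature, not a quotation from the thesis). Given samples z = {(x_i, y_i)}_{i=1}^m and a reproducing kernel Hilbert space \mathcal{H}_K with kernel K, the regularized estimator is

    f_{z,\lambda} = \arg\min_{f \in \mathcal{H}_K} \Big\{ \frac{1}{m} \sum_{i=1}^{m} \big(f(x_i) - y_i\big)^2 + \lambda \|f\|_K^2 \Big\},

where plain ERM corresponds to dropping the penalty term (\lambda = 0). A typical moment increment condition of the kind described above requires, for some constants C, M > 0,

    \int_Y |y|^{\ell} \, d\rho(y|x) \le C \, \ell! \, M^{\ell} \quad \text{for all } \ell \in \mathbb{N} \text{ and all } x \in X,

which a Gaussian conditional distribution satisfies even though its output is unbounded. (The exact form of the condition used in the thesis may differ.)

By the representer theorem, the minimizer has the form f_{z,\lambda}(x) = \sum_i \alpha_i K(x, x_i), with the coefficients solving a linear system. The following minimal Python sketch (the Gaussian kernel, the toy data, and all parameter choices are illustrative assumptions, not taken from the thesis) computes this estimator on data whose output noise is Gaussian, hence unbounded:

    import numpy as np

    def gaussian_kernel(X, Z, sigma=0.5):
        # Gaussian (RBF) kernel matrix: K[i, j] = exp(-|x_i - z_j|^2 / (2 sigma^2))
        d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    def regularized_least_squares(X, y, lam, sigma=0.5):
        # Solve (K + m*lam*I) alpha = y; by the representer theorem the
        # RKHS minimizer is f(x) = sum_i alpha_i K(x, x_i).
        m = len(y)
        K = gaussian_kernel(X, X, sigma)
        alpha = np.linalg.solve(K + m * lam * np.eye(m), y)
        return lambda Xnew: gaussian_kernel(Xnew, X, sigma) @ alpha

    # Toy data: y = sin(2*pi*x) + Gaussian noise, so the output variable is
    # unbounded but satisfies a moment condition of the kind assumed above.
    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, size=(200, 1))
    y = np.sin(2 * np.pi * X[:, 0]) + rng.normal(scale=0.3, size=200)

    f = regularized_least_squares(X, y, lam=1e-3)
    Xtest = np.linspace(0.0, 1.0, 5).reshape(-1, 1)
    print(f(Xtest))

Note that the system (K + m\lambda I)\alpha = y is well-posed for any \lambda > 0, which is precisely the effect of the regularization term; with \lambda = 0 the system K\alpha = y can be severely ill-conditioned, mirroring the ill-posedness of plain ERM described above.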
Keywords/Search Tags: Learning theory, Empirical risk minimization (ERM), Least squares (LS) regression, Regularization, Reproducing kernel Hilbert space (RKHS)