
Kernel-Based Algorithms In Statistical Learning Theory

Posted on: 2013-01-27
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Y L Feng
Full Text: PDF
GTID: 1220330377951814
Subject: Computational Mathematics
Abstract/Summary:
Statistical learning theory plays an important role in various areas such as science, engineering, and finance. As a research field, it provides theoretical foundations for algorithms in machine learning. Generally speaking, statistical learning aims at learning function features or data structures from observations. By means of kernel methods, data can be mapped into a high-dimensional feature space, in which various methods can be employed to find relations. In this thesis, we focus on several different kernel-based learning algorithms in the framework of statistical learning theory.

Firstly, we study q-norm regularized least-squares regression with dependent samples. We conduct an error analysis of the least-squares regularized regression algorithm when the sampling sequence is weakly dependent, satisfying an exponentially decaying α-mixing condition, and when the regularizer takes the q-penalty with 0 < q ≤ 2. We use a covering number argument and derive learning rates in terms of the α-mixing decay, an approximation condition, and the capacity of balls of the reproducing kernel Hilbert space.

Secondly, we concentrate on the coefficient-based regularized regression problem. The lq-regularized least-squares regression problem with 1 ≤ q ≤ 2 and data-dependent hypothesis spaces is addressed. Algorithms in data-dependent hypothesis spaces perform well owing to their flexibility. We conduct a unified error analysis by a stepping-stone technique. An empirical covering number technique is also employed in our study to improve the sample error estimate. Compared with existing results, we make several improvements. First, we obtain a significantly sharper learning rate of type O(m^{-θ}) with θ arbitrarily close to 1 under reasonable conditions, which is regarded as the best learning rate in learning theory. Second, our results cover the case q = 1, which is novel. Finally, our results hold under very general conditions.

Finally, we address the pairwise ranking problem via a kernel-based learning approach. Various settings for the pairwise ranking problem are compared. We adopt a preference-based two-stage setting, while the empirical data is generated in a different manner. In the first learning stage, we learn a preference function by reducing ranking to classification. Learning results concerning the learnability of the learned ranking rule are presented, as in classification. In the second stage, we present an optimization algorithm to produce a scoring function that can be used to yield an ordering.
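For concreteness, the two regression schemes studied in the first two parts can be written schematically as follows. The notation here (λ for the regularization parameter, f_z for the learned function, m for the sample size) is a standard convention assumed for illustration, not quoted from the thesis.

```latex
% Scheme 1: q-norm regularization in the RKHS H_K, with 0 < q <= 2
f_{\mathbf z} = \arg\min_{f \in \mathcal{H}_K}
  \frac{1}{m}\sum_{i=1}^{m} \bigl(f(x_i) - y_i\bigr)^2 + \lambda \|f\|_K^{q}

% Scheme 2: coefficient-based l^q regularization, with 1 <= q <= 2,
% over the data-dependent hypothesis space spanned by {K(x_i, .)}
f_{\mathbf z} = \sum_{i=1}^{m} \alpha_i^{\mathbf z} K(x_i, \cdot), \qquad
\boldsymbol{\alpha}^{\mathbf z} = \arg\min_{\boldsymbol{\alpha} \in \mathbb{R}^m}
  \frac{1}{m}\sum_{i=1}^{m} \Bigl(\sum_{j=1}^{m} \alpha_j K(x_j, x_i) - y_i\Bigr)^2
  + \lambda \sum_{i=1}^{m} |\alpha_i|^{q}
```

At q = 2 the second scheme reduces to classical kernel ridge regression, while at q = 1 the penalty promotes sparse coefficient vectors.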
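The two-stage ranking approach can be illustrated with a minimal sketch. The logistic model on feature differences in stage one and the Borda-style counting rule in stage two are illustrative stand-ins chosen here for brevity, not the thesis's kernel-based preference learner or its optimization algorithm.

```python
import numpy as np

def fit_preference(pairs, labels, lr=0.5, epochs=200):
    # Stage 1: reduce ranking to classification. Each training example is a
    # feature difference x_i - x_j, with label 1 if item i is preferred over
    # item j and 0 otherwise. A logistic model h(x_i, x_j) = sigmoid(w . (x_i - x_j))
    # stands in for the kernel-based preference function.
    X = np.array([a - b for a, b in pairs])
    y = np.array(labels, dtype=float)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted preference probabilities
        w -= lr * X.T @ (p - y) / len(y)       # gradient step on the logistic loss
    return w

def score_items(items, w):
    # Stage 2: turn the learned preference function into a scoring function by
    # a Borda-style count: each item's score is the number of items it is
    # predicted to beat; sorting by score then yields an ordering.
    s = np.asarray(items) @ w
    prefer = s[:, None] > s[None, :]
    return prefer.sum(axis=1)
```

For instance, with one-dimensional items where larger values are always preferred, the learned scores recover the underlying order.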
Keywords/Search Tags: Dependent samples, reproducing kernel Hilbert space, q-norm regularizer, coefficient-based regularization, pairwise ranking, learning rates, learning theory