
Research on the Theory and Applications of Kernel-Based Approximation Methods

Posted on: 2018-04-13    Degree: Doctor    Type: Dissertation
Country: China    Candidate: J B Li    Full Text: PDF
GTID: 1310330518471772    Subject: Computational Mathematics
Abstract/Summary:
Since the beginning of the 21st century, data have been collected and stored at an unprecedented rate in many fields of scientific research and application, and at the same time the capabilities and methods for processing these data have been improving with each passing day. The kernel method, one of the bright stars in this area, is studied by many experts and scholars. In high-dimensional spaces we face the so-called "curse of dimensionality" and related difficulties; kernel functions, however, allow inner products in high-dimensional feature spaces to be computed implicitly, which has carried the kernel method into the machine learning field. Statistical learning theory, due mainly to Vapnik, has greatly promoted the research and development of kernel-based support vector machines and made kernel functions widely used. Kernel-based approximation methods therefore have a wide range of applications in classification, regression, and data processing, among which the Tikhonov regularization learning problem is particularly well studied. Against this background, this thesis studies the theory and applications of kernel-based approximation methods in the following four directions.

1. In many practical problems, the data collected at sampling sites carry random errors caused by signal delays, inaccurate measurements, and other known or unknown factors. We study a generalization of classical piecewise linear approximation with randomly sampled data. Under assumptions on the distribution of the sampling data, we obtain convergence results and error estimates.

2. Numerical differentiation arises in many mathematical models and practical problems and plays an important role in scientific research and applications. We provide a regularization method for computing numerical derivatives and study a learning algorithm based on a regularized scheme with an l1 regularizer; the algorithm is formulated in a kernel space and works with samples of noisy data. Numerical results for various cases show that this regularization scheme is effective and stable. (A sketch of the scheme is given after this list.)

3. The ranking problem has been applied in many fields in recent years and differs somewhat from traditional classification or regression problems. We study ranking based on the pairwise approach: building on empirical eigenfunctions, we propose a ranking method with l1 regularization whose solution has an explicit expression, and we prove the convergence and sparsity of the algorithm. (See the second sketch after this list.)

4. We study several applications of kernel approximation methods and discuss the necessity of low-rank approximation of the kernel matrix when the sample collection is large (a code sketch follows this list). In addition, financial models play a crucial role in finance, and the data obtained are often perturbed, which makes the underlying problems ill-posed; we therefore introduce the regularization method into the calibration of model parameters. In risk management, hedging is a principal tool, and we apply our regularization model to the computation of numerical derivatives in the hedging model.
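For concreteness, here is a minimal sketch of the two regularized schemes referred to above, written in generic learning-theory notation (the symbols H_K, z, m, \lambda below are standard notation assumed for illustration, not taken verbatim from the thesis). Given samples z = \{(x_i, y_i)\}_{i=1}^{m} and a Mercer kernel K with reproducing kernel space H_K, the Tikhonov regularization learning problem is

    f_{z,\lambda} = \arg\min_{f \in H_K} \; \frac{1}{m}\sum_{i=1}^{m}\bigl(f(x_i)-y_i\bigr)^2 + \lambda \|f\|_K^2 ,

while an l1-regularized scheme for numerical differentiation of the kind described in item 2 searches over kernel expansions f_c = \sum_{j=1}^{m} c_j K(\cdot, x_j) and penalizes the coefficient vector,

    \hat{c} = \arg\min_{c \in \mathbb{R}^m} \; \frac{1}{m}\sum_{i=1}^{m}\Bigl(\sum_{j=1}^{m} c_j K(x_i, x_j) - y_i\Bigr)^2 + \lambda \sum_{j=1}^{m} |c_j| ,

the numerical derivative then being read off as the derivative of f_{\hat{c}}. The l1 penalty is what produces the sparsity mentioned in the keywords.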
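Item 3 can be sketched in the same spirit. One common form of the pairwise least-squares ranking risk, expanded over empirical eigenfunctions (an illustrative formulation under standard assumptions, not necessarily the exact functional of the thesis): let \{\hat{\phi}_k\}_{k=1}^{m} be the eigenfunctions of the empirical integral operator associated with the Mercer kernel and the sample, take f_c = \sum_k c_k \hat{\phi}_k, and solve

    \hat{c} = \arg\min_{c} \; \frac{1}{m(m-1)} \sum_{i \neq j} \Bigl( (y_i - y_j) - \bigl(f_c(x_i) - f_c(x_j)\bigr) \Bigr)^2 + \lambda \sum_{k} |c_k| .

When the empirical eigenfunctions are orthonormal with respect to the empirical measure, the minimizer can be written coordinate-wise by soft thresholding, which is one way an explicit and sparse expression for the ranking algorithm can arise.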
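For item 4, a minimal Python sketch of one standard low-rank technique, the Nystrom approximation of a kernel matrix (the function names, the Gaussian kernel choice, and the parameters r and sigma are illustrative assumptions, not details from the thesis):

import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix: K[i, j] = exp(-||X[i] - Y[j]||^2 / (2 sigma^2)).
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def nystrom_approximation(X, r, sigma=1.0, seed=0):
    # Approximate the m x m kernel matrix K by C @ W_pinv @ C.T using r sampled
    # landmark columns, so the full matrix never has to be formed when m is large.
    rng = np.random.default_rng(seed)
    m = X.shape[0]
    idx = rng.choice(m, size=r, replace=False)   # landmark indices
    C = gaussian_kernel(X, X[idx], sigma)        # m x r block of K
    W = C[idx, :]                                # r x r block among the landmarks
    return C, np.linalg.pinv(W)

# Usage: compare the low-rank factorization with the full kernel matrix on toy data.
X = np.random.default_rng(1).standard_normal((500, 5))
C, W_pinv = nystrom_approximation(X, r=50)
K_full = gaussian_kernel(X, X)
print(np.linalg.norm(K_full - C @ W_pinv @ C.T) / np.linalg.norm(K_full))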
Keywords/Search Tags: Piecewise linear approximation, Sampling sites, Normal distribution, Ranking, Mercer kernel, Empirical eigenfunctions, Sparsity, Numerical differentiation, l1 regularization