
The Relation Between the Nth Minimal Errors for Two Classes of Information in Randomized Approximation

Posted on: 2011-06-17    Degree: Master    Type: Thesis
Country: China    Candidate: W Y Feng    Full Text: PDF
GTID: 2120360305473137    Subject: Computational Mathematics
Abstract/Summary:
Multivariate approximation is one of the most popular problems in Information-Based Complexity (IBC) and has been studied by many mathematicians. The main reason is that it is essentially related to continuous problems, such as multivariate integration, the solution of partial differential equations and integral equations, and various non-linear problems.

The nth minimal error is one of the central quantities in IBC and is closely related to the complexity of information and computation: it is the error of the best possible algorithm among all algorithms using at most n information operations, and the ε-complexity is defined in terms of it. It also underlies the study of tractability of multivariate integration and approximation, introduced in 1994 by Professor Wozniakowski of Columbia University in New York, U.S.A. It is therefore necessary to study the nth minimal error.

IBC treats the nth minimal error in detail, including its definition in the worst case, average case, and randomized settings, together with the corresponding theory. From the theorems on nth minimal errors and the proofs of some of them, we obtain tools and methods useful for the further study of the ε-complexity and the tractability of multivariate problems. In particular, constructing bump functions to obtain lower bounds on the nth minimal errors and the ε-complexity is an important technique that carries over to similar problems in other settings and spaces.

In this paper, two classes of information are considered. The first is linear information, which consists of all continuous linear functionals and is denoted by Λ^all; it is relatively easy to analyze, but difficult to use in practical computation. The second is standard information, which consists of function evaluations and is denoted by Λ^std; although it is harder to analyze, in many cases only such information is available in computational practice.
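The quantities described above can be written out explicitly. The following is a standard sketch in the usual IBC notation; the exact normalization (e.g. expected error versus root-mean-square error in the randomized setting) varies between sources and may differ from the thesis:

```latex
% nth minimal worst-case error of a solution operator S : F \to G,
% taken over all algorithms A_n using at most n information
% operations from the class \Lambda (\Lambda^{all} or \Lambda^{std}):
e^{\mathrm{wor}}(n, \Lambda)
  = \inf_{A_n} \; \sup_{\|f\|_F \le 1} \| S(f) - A_n(f) \|_G .

% Randomized setting: the algorithm A_n^{\omega} may depend on a
% random element \omega, and the error is averaged over \omega:
e^{\mathrm{ran}}(n, \Lambda)
  = \inf_{A_n^{\omega}} \; \sup_{\|f\|_F \le 1}
    \bigl( \mathbb{E}_{\omega} \, \| S(f) - A_n^{\omega}(f) \|_G^2 \bigr)^{1/2} .

% The \varepsilon-complexity is the minimal n achieving error
% at most \varepsilon:
n(\varepsilon, \Lambda) = \min \{\, n : e(n, \Lambda) \le \varepsilon \,\}.
```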
This thesis considers the approximation of multivariate functions in a reproducing kernel Hilbert space. We study the relations between the nth minimal errors for the two classes of information in the worst case, average case, and randomized settings. The functions depend on d variables, where d is arbitrary, and the error between a function and its approximation is measured in a weighted L2-norm.

In the central part of the paper we first focus on an operator S: H → G and prove relations between the nth minimal errors for standard information in the worst case and randomized settings. These relations enable us to deduce a good result on the nth minimal errors in the randomized setting for standard and linear information. We then obtain a similar conclusion for the operator Sρ: H → Hρ.

The rate, or power, of convergence is an important part of approximation theory: it measures how fast the nth minimal error of algorithms using n linear functionals or n function values tends to zero. The larger the power of convergence, the easier the problem. It is therefore significant to study the power of information.

The paper consists of three parts. In the first chapter we introduce preliminary material, namely the basic content of IBC: the definitions of the error and of the nth minimal errors, bounds on the nth minimal errors, the relations between them, and the ε-complexity connected with the nth minimal error in the worst case, average case, and randomized settings, respectively.

In the second chapter, after recalling some notions and results on reproducing kernel Hilbert spaces, we present the relations between the nth minimal errors for linear and standard information in the worst case and average case settings; the relation between the nth minimal errors for these two classes of information is discussed mainly in the randomized setting.
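The weighted L2 error and the power of convergence mentioned above can be sketched as follows. The domain D_d and the weight ρ_d are placeholders of our own, since the abstract does not specify them:

```latex
% Weighted L_2-norm over a domain D_d \subseteq \mathbb{R}^d with
% weight function \rho_d (both hypothetical placeholders here):
\| g \|_{L_{2,\rho_d}}
  = \left( \int_{D_d} |g(x)|^2 \, \rho_d(x) \, dx \right)^{1/2} .

% Power of convergence of an information class \Lambda: the largest
% exponent p such that the nth minimal error decays like n^{-p}:
p(\Lambda) = \sup \{\, p \ge 0 : e(n, \Lambda) = O(n^{-p}) \,\}.
```

A larger p(Λ) means faster decay of the nth minimal error, which is the sense in which "the larger the power of convergence, the easier the problem."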
We consider the two operators S and Sρ and obtain the corresponding conclusions. In the third chapter, the power of standard information for weighted approximation is described in detail. There we define multivariate approximation and its error over a reproducing kernel Hilbert space; the results may fail if they are defined over other spaces.
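The restriction to a reproducing kernel Hilbert space is what makes function evaluations continuous linear functionals, so that Λ^std is a subset of Λ^all. A minimal sketch (K_d denotes the reproducing kernel; the symbol APP_d for the approximation operator is our notation, not necessarily the thesis's):

```latex
% Multivariate approximation as an embedding of the RKHS H(K_d)
% into the weighted L_2 space:
\mathrm{APP}_d : H(K_d) \to L_{2,\rho_d}, \qquad \mathrm{APP}_d f = f.

% Reproducing property: evaluation at x is the inner product with
% the kernel section K_d(\cdot, x),
f(x) = \langle f, K_d(\cdot, x) \rangle_{H(K_d)},
% so each map f \mapsto f(x) is a continuous linear functional and
% \Lambda^{std} \subseteq \Lambda^{all}. Over a general normed space
% point evaluation need not be continuous, which is why the results
% are stated for a reproducing kernel Hilbert space.
```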
Keywords/Search Tags:multivariate function, randomized approximation, linear information, standard information, the nth minimal error