
Adaptive Inference For Some Nonparametric And Semiparametric Models

Posted on: 2010-06-04
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Q Chen
Full Text: PDF
GTID: 1100360278974275
Subject: Financial mathematics and financial engineering
Abstract/Summary:
Empirical likelihood is a nonparametric method of inference based on a data-driven likelihood ratio function. It can be thought of as a bootstrap that does not resample, and as a likelihood without parametric assumptions. Like other nonparametric methods, empirical likelihood inference does not require us to specify a family of distributions for the data. Likelihood methods are effective and flexible: they can be used to find efficient estimators, to construct tests with good power properties, and to construct small confidence regions.

A parametric model typically contains both a parameter of interest and a nuisance parameter. To construct a confidence region for the parameter of interest, we first estimate the nuisance parameter and plug the estimator into the estimating function; this is called the plug-in method. Since this estimator converges at the parametric rate, the resulting log empirical likelihood ratio still converges to a χ²₁ distribution. The situation is different in a semiparametric model. Consider, for example, the single-index model Y = g(βᵀX) + ε with the identifiability constraint ||β|| = 1 (so that β ∈ R^d has d-1 free components), where the nuisance parameter g(·) is an unknown regression function. When an optimal bandwidth is chosen, the plug-in nonparametric estimator ĝ converges more slowly than the parametric rate n^(-1/2). As a result, the log empirical likelihood ratio converges in distribution to ω_1 r_1 + ... + ω_{d-1} r_{d-1}, where the ω_i are unknown weights and each r_i follows a χ²₁ distribution, i = 1, ..., d-1. This unknown limiting distribution makes it difficult to construct a confidence region for β. In Section 2 we propose two methods, a bias-corrected smoothed score function and a bias-corrected empirical likelihood, to overcome this problem. The key idea is to correct the bias by centering: after bias correction the estimating function attains the rate n^(-1/2), which guarantees the χ² limiting distribution of the log empirical likelihood ratio.

We also apply the bias-corrected empirical likelihood method to a more complicated setting, errors-in-variables models, which have wide applications. In Fuller (1987) and Pepe and Fleming (1991), the auxiliary variable is assumed to be linked to the validation data X through a parametric form. In this paper, owing to the bias correction, we only assume that the auxiliary variable equals f(X) + ε for an unspecified function f, rather than imposing a parametric form. Whether the plug-in estimator is parametric or nonparametric, the limiting distribution produced by our method is χ², so the bias-correction method is adaptive to the plug-in step.

The model adaptability discussed in Sections 3 and 4 is a basic theoretical criterion for statistical inference and has attracted much attention in the literature. In nonparametric settings, this issue is closely related to modern adaptivity theory. Briefly speaking, adaptability means that the selected confidence region should adapt automatically, in a rate-optimal way, to submodels of the nonparametric function. The confidence bands constructed by existing methods are in fact confidence balls; see Li (1989), Hoffmann and Lepski (2002), Baraud (2004) and Wasserman (2005). The shape of a confidence ball is clearly not determined by the data, so it is not data-adaptive. In this paper we use empirical likelihood (EL) to construct honest confidence bands and propose model-data-adaptive confidence bands. We also apply this method to the varying-coefficient model and the normal nonparametric model. The model-data-adaptive confidence band adapts automatically to submodels of nonparametric functions in a rate-optimal way, and its shape is determined by the data.
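As an illustration of the empirical likelihood ratio described in the abstract, the following Python sketch (not from the dissertation; the function name el_ratio_mean, the simulated data, and all tuning constants are hypothetical) computes the empirical log-likelihood ratio for a univariate mean via the usual Lagrange-multiplier dual and compares it with the χ²₁ calibration given by Wilks' theorem.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_ratio_mean(x, mu):
    """-2 * log empirical likelihood ratio for H0: E[X] = mu (univariate data)."""
    z = x - mu
    if z.min() >= 0 or z.max() <= 0:
        return np.inf  # mu lies outside the convex hull of the data: EL is zero
    # Dual problem: find the Lagrange multiplier lam solving sum_i z_i / (1 + lam * z_i) = 0
    # on the interval where all implied weights stay positive (1 + lam * z_i > 0).
    lo = -1.0 / z.max() + 1e-8
    hi = -1.0 / z.min() - 1e-8
    lam = brentq(lambda l: np.sum(z / (1.0 + l * z)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * z))

# Usage: a 95% EL confidence region for the mean is
# {mu : el_ratio_mean(x, mu) <= chi2.ppf(0.95, df=1)}, by Wilks' theorem.
rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=100)
print(el_ratio_mean(x, x.mean()), el_ratio_mean(x, 2.0), chi2.ppf(0.95, df=1))
```

The statistic is zero at the sample mean and grows as mu moves away from it, so scanning mu against the χ²₁ critical value yields a data-determined confidence region without any variance estimation.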
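The plug-in step for the nuisance function g(·) can be illustrated with a generic kernel smoother; the dissertation does not specify the estimator, so the Gaussian kernel, the bandwidth choice, and the simulated data below are assumptions for illustration only. With a mean-square-optimal bandwidth of order n^(-1/5), such an estimator converges at a nonparametric rate slower than n^(-1/2), which is the source of the weighted χ² limit discussed above.

```python
import numpy as np

def nw_estimate(t0, t, y, h):
    """Nadaraya-Watson estimate of g(t0) = E[Y | T = t0] with a Gaussian kernel."""
    w = np.exp(-0.5 * ((t - t0) / h) ** 2)  # kernel weights K((t_i - t0) / h)
    return np.sum(w * y) / np.sum(w)        # locally weighted average of the responses

# Toy usage: recover g(t) = sin(2*pi*t) from noisy observations.
rng = np.random.default_rng(1)
n = 200
t = rng.uniform(0.0, 1.0, size=n)
y = np.sin(2 * np.pi * t) + 0.3 * rng.normal(size=n)
h = 0.05  # bandwidth of the optimal order n^(-1/5), scaled down for this toy signal
print(nw_estimate(0.25, t, y, h), np.sin(2 * np.pi * 0.25))
```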
Keywords/Search Tags: semiparametric model, single-index model, score function, varying coefficients model, adaptive, empirical likelihood, confidence region, asymptotic distribution