
The Asymptotic Optimality And Application Of NG Estimator Under Several Models

Posted on: 2021-12-06
Degree: Doctor
Type: Dissertation
Country: China
Candidate: X P Chen
Full Text: PDF
GTID: 1480306104453624
Subject: Statistics

Abstract/Summary:
Variable selection is a fundamental problem in data modeling. In recent years, penalized (or constrained) regression methods that address estimation and variable selection simultaneously have emerged as a highly successful technique. The nonnegative garrote (NG) (Breiman, 1995) is one such penalized variable selection method. Xiong (2010) developed an NG method with natural choices of penalty function and tuning parameter, based on Mallows' Cp criterion, for linear regression models with homoscedastic errors; however, Xiong (2010) did not establish the asymptotic optimality of the NG estimator. In this thesis, we develop NG methods under several other statistical models and establish the asymptotic optimality of the resulting NG estimators, drawing on ideas from the asymptotic optimality theory of model average estimators. The main contributions are as follows:

(1) We prove the asymptotic optimality of the NG estimator based on the Mallows criterion for linear regression models with homoscedastic errors, providing theoretical support for the estimator proposed by Xiong (2010).

(2) Drawing on the asymptotic optimality of the model average estimator for linear regression models with heteroscedastic errors, we propose an NG estimator based on a Mallows information criterion for heteroscedastic errors and prove its asymptotic optimality under some regularity conditions.

(3) For linear mixed-effects models, we propose NG estimators corresponding to the fixed and random effects, respectively, based on a general Mallows information criterion, and prove their asymptotic optimality under some regularity conditions.

(4) For threshold models, we propose NG estimators based on an adjusted Mallows information criterion and prove their asymptotic optimality, under some regularity conditions, both for threshold models that do not contain lagged dependent variables and for the threshold autoregressive model.

(5) For quantile regression models, we propose NG estimators based on a Mallows-type information criterion and prove their asymptotic optimality under some regularity conditions.

(6) We consider NG estimators for partially linear quantile regression models. We first employ B-splines to estimate the nonparametric functions, which transforms the partially linear quantile regression model into a linear quantile regression model, and then propose an NG estimator for the transformed model based on a Mallows-type information criterion.

In addition, extensive simulation studies and real-data analyses show that, compared with traditional variable selection methods, the NG method effectively compresses weak-effect coefficients to zero. Furthermore, compared with other penalized least-squares regression methods, the NG method, whose natural penalty and tuning parameter make complicated procedures for selecting a near-optimal tuning parameter unnecessary, is highly competitive in both estimation accuracy and computation speed. We also show that the NG method yields more accurate estimators of the error variance.
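To make the basic NG procedure concrete: the garrote starts from the ordinary least-squares fit and rescales each coefficient by a nonnegative shrinkage factor chosen to trade residual sum of squares against a penalty on the factors, so weak coefficients are driven exactly to zero. The following is a minimal sketch of that idea for the homoscedastic linear model, not the thesis's implementation; the simulated data, the fixed penalty level `lam`, and the use of a bound-constrained L-BFGS-B solver are all illustrative assumptions (in particular, the thesis selects the tuning parameter via a Mallows-type criterion rather than fixing it).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, p = 200, 6
# Three strong effects and three true zeros (illustrative data).
beta_true = np.array([3.0, 1.5, 0.0, 0.0, 2.0, 0.0])
X = rng.standard_normal((n, p))
y = X @ beta_true + rng.standard_normal(n)

# Step 1: ordinary least-squares estimate.
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Step 2: garrote step — find shrinkage factors c_j >= 0 minimizing
#   0.5 * ||y - sum_j c_j * x_j * beta_ols_j||^2 + lam * sum_j c_j.
Z = X * beta_ols  # column j is x_j scaled by its OLS coefficient
lam = 20.0        # illustrative penalty level, not a tuned value

def objective(c):
    r = y - Z @ c
    return 0.5 * r @ r + lam * c.sum()

def gradient(c):
    return -Z.T @ (y - Z @ c) + lam

res = minimize(objective, np.ones(p), jac=gradient,
               method="L-BFGS-B", bounds=[(0.0, None)] * p)
c = res.x

# Final NG estimate: shrunken (possibly exactly-zero) coefficients.
beta_ng = c * beta_ols
print(np.round(beta_ng, 2))
```

At this penalty level, the factors for the weak (true-zero) coefficients hit the nonnegativity bound and those coefficients are compressed to zero, while the strong coefficients are only mildly shrunk, which is the qualitative behavior the abstract describes.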
Keywords/Search Tags: Nonnegative Garrote, Maximum likelihood estimation, Generalized confidence interval, Asymptotic optimality, Coefficient compression, Model averaging