
Empirical Likelihood For Partially Linear Models Under φ-Mixing Errors

Posted on: 2006-02-27  Degree: Master  Type: Thesis
Country: China  Candidate: X X He  Full Text: PDF
GTID: 2120360155971505  Subject: Probability and Statistics
Abstract/Summary:
Empirical likelihood is a nonparametric inference method introduced by Owen (1988)~[1] for complete samples, and it shares features with the bootstrap. Compared with classical and modern inference methods it has many appealing properties, and it has attracted great interest among statisticians, who have applied it to many statistical models and problems: linear models, generalized linear models, partially linear models, biased-sampling models, regression models, quantile estimation, M-functionals, kernel density estimation, nuisance parameters, time series, and conditional quantiles and conditional densities. Most of this work, however, assumes independent and identically distributed samples. Kitamura (1997) applied empirical likelihood to weakly dependent data, and Zhang JunJian, Wang ChengMing and Wang WeiXin (1999)~[5] extended it to m-dependent, α-mixing and φ-mixing samples, obtaining results similar to those in the i.i.d. case.

The partially linear model originated with Engle et al. (1986)~[6] in a study of the effect of climate on electricity demand, and it has since received extensive study in the literature. As with other regression models, statisticians have been mainly interested in its large-sample properties, and many useful results have been obtained since the 1980s: the construction of asymptotically valid estimators of β, the asymptotic normality of β_n (the weighted least squares estimator of β), the asymptotic behaviour of estimators of the covariance of β, and the optimal strong and weak rates of convergence of the estimators of β and of g_n (the estimator of g). In recent years statisticians in China have obtained many further results, such as the asymptotic efficiency of estimators, the asymptotic normality of M-estimators, and Berry-Esseen bounds for the asymptotic distribution of parameter estimators. Shi Jian and Lau TaiShing~[7] discussed empirical likelihood in this model, but again only for independent and identically distributed samples.
For dependent samples the problem is more complex. In this paper we construct empirical likelihood confidence regions for the parameter of a partially linear model with fixed design vectors and φ-mixing errors {e_i}, whose mixing coefficients φ(n) satisfy 1 ≥ φ(n) ↓ 0 and Σ_{i≥1} φ^{1/2}(i) < ∞, and where Ee_i = 0, Ee_i² = σ² > 0, E|e_i|⁴ = μ_4 < ∞ (i = 1, ..., n).

To apply the empirical likelihood method to the partially linear model, we must first obtain an approximate random-error sequence. The basic idea is this: if β were known, model (2.1) would reduce to the nonparametric regression model y − x'β = g(t) + e, so g could be estimated as usual. Here we adopt the weight-function method to estimate the nonparametric part g; more precisely, g(·) is estimated by

    g_n(t, β) = Σ_{i=1}^n W_ni(t)(y_i − x_i'β),

where {W_ni(t) : 1 ≤ i ≤ n} are weight functions. An empirical likelihood L_n1(β) is then defined by maximizing over probability weights p_1, p_2, ..., p_n subject to the restrictions Σ_{i=1}^n p_i Z_ni = 0, Σ_{i=1}^n p_i = 1 and p_i ≥ 0 (1 ≤ i ≤ n), where the auxiliary variables Z_ni = Z_ni(β) are built from the approximate errors. As a consequence, a maximum empirical likelihood estimator (MELE) can be defined by β̂_n = argmax_{β∈R^p} L_n1(β), and a nonparametric log-likelihood ratio statistic based on (2.2) is given by

    LR_1(β_0) = log L_n1(β_0) − log L_n1(β̂_n).    (2.3)

Let {p_i(β)}_{i=1}^n satisfy L_n1(β) = Π_{i=1}^n p_i(β) for β ∈ R^p. It will be shown later that L_n1(β) is maximized at p_i(β) = 1/n for 1 ≤ i ≤ n and β̂ = (Σ_{i=1}^n x_i x_i')^{-1} Σ_{i=1}^n x_i y_i.

To establish a theory for LR_1(β_0), some assumptions must be imposed on the model. For some 0 < a < 1, conditions A1–A5 require, among other things, that the weight functions {W_ni(t) ≥ 0 : 1 ≤ i ≤ n} satisfy suitable regularity conditions (A1) and that max_{1≤i≤n} ||x_i|| = o(n^a) (A2); here "chv" denotes the convex hull of a set in R^p.

The first result in this paper is

THEOREM 1. Suppose that conditions A1–A5 hold and that the normalized moment matrices of the model converge to limits A_0 and A_1 (A_0, A_1 > 0). Then −2LR_1(β_0) converges in distribution to a quadratic form in Z, where A_0 and A_1 are p × p matrices, Z ~ N_p(0, A_0), and N_p(0, A_0) is the p-dimensional normal distribution.

Since A_0 and σ² are unknown, this result cannot be used directly in practice.
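The constrained maximization that defines the empirical likelihood — maximize the product of weights subject to Σ p_i Z_i = 0, Σ p_i = 1, p_i ≥ 0 — is typically solved through its Lagrange dual, which gives p_i = 1/(n(1 + λ'Z_i)) with λ solving Σ Z_i/(1 + λ'Z_i) = 0. The following is a minimal numerical sketch of that standard dual computation, not code from the thesis; the function name and the generic scores z standing in for the auxiliary variables Z_ni are our own illustrative choices.

```python
import numpy as np

def el_log_ratio(z, tol=1e-10, max_iter=100):
    """Empirical log-likelihood ratio statistic -2 log R for E[Z] = 0.

    Solves the dual estimating equation sum_i z_i / (1 + lam' z_i) = 0
    for the Lagrange multiplier lam by a damped Newton method, then
    returns -2 log R = 2 * sum_i log(1 + lam' z_i).
    z : (n, p) array of auxiliary scores Z_i.
    """
    n, p = z.shape
    lam = np.zeros(p)
    for _ in range(max_iter):
        w = 1.0 + z @ lam                       # 1 + lam' Z_i for each i
        grad = (z / w[:, None]).sum(axis=0)     # dual estimating equation
        if np.linalg.norm(grad) < tol:
            break
        # Jacobian of the estimating equation (negative definite)
        hess = -(z[:, :, None] * z[:, None, :] / (w ** 2)[:, None, None]).sum(axis=0)
        step = np.linalg.solve(hess, -grad)
        # damp the step so every implied weight p_i stays strictly positive
        t = 1.0
        while np.any(1.0 + z @ (lam + t * step) <= 1.0 / n) and t > 1e-12:
            t *= 0.5
        lam = lam + t * step
    return 2.0 * np.sum(np.log(1.0 + z @ lam))
```

The statistic is zero exactly when the sample mean of the scores is zero (then p_i = 1/n satisfies the constraint), and grows as the hypothesized value moves away from the data.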
We use the blockwise empirical likelihood to overcome this shortcoming of the ordinary empirical likelihood. For any β ∈ R^p, under the conditions Σ_i p_i Z_ni = 0, Σ_i p_i = 1 and p_i ≥ 0 (1 ≤ i ≤ n), with the auxiliary variables Z_ni now formed from blocks of consecutive observations, we consider the blockwise empirical likelihood ratio. It is easy to obtain the (log) blockwise empirical likelihood ratio statistic

    LR_2(β) = − Σ_i log{1 + λ'(β) Z_ni},    (2.4)

where λ(β) ∈ R^p is determined by Σ_i Z_ni / (1 + λ'(β) Z_ni) = 0.

The second result in this paper is

THEOREM 2. Suppose that conditions A1–A5 hold and E V_n → A_0 (A_0 > 0). Then

    −2LR_2(β_0) →_d χ²_(p),

where χ²_(p) is a chi-squared distribution with p degrees of freedom.

As a consequence of Theorem 2, confidence regions for the parameter β can be constructed. More precisely, for any 0 < α < 1, let c_α be such that Pr(χ²_(p) > c_α) ≤ α; then I(α) = {β ∈ R^p : −2LR_2(β) ≤ c_α} constitutes a confidence region for β with asymptotic coverage 1 − α.

This nonparametric likelihood ratio inference has two advantages over the asymptotic-normality approach. First, I(α) is not predetermined to be symmetric, so it can better reflect the true shape of the underlying distribution. Second, there is no need to estimate the asymptotic variance, which is rather complicated in nonparametric and semiparametric settings.
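Two ingredients of the blockwise construction above can be sketched concretely: grouping consecutive scores into non-overlapping block sums (which, under mixing, behave approximately like independent variables), and calibrating the region with a chi-squared quantile as in Theorem 2. This is our own illustrative sketch under those assumptions; the function names and block length are not from the thesis.

```python
import numpy as np
from scipy.stats import chi2

def block_sums(z, block_len):
    """Sum consecutive, non-overlapping blocks of scores.

    Under phi-mixing, sums over blocks of growing length are approximately
    independent, which is the idea behind blockwise empirical likelihood.
    z : (n, p) array of scores; trailing partial block is dropped.
    """
    n_blocks = z.shape[0] // block_len
    return z[: n_blocks * block_len].reshape(n_blocks, block_len, -1).sum(axis=1)

def in_confidence_region(neg2_lr2, p, alpha=0.05):
    """Theorem 2 calibration: beta lies in I(alpha) iff
    -2 * LR_2(beta) <= c_alpha, the upper-alpha point of chi^2 with p d.f."""
    return bool(neg2_lr2 <= chi2.ppf(1.0 - alpha, df=p))
```

For p = 1 and α = 0.05 the cutoff c_α is about 3.84, so a candidate β is retained exactly when its blockwise log-ratio statistic falls below that value.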
Keywords/Search Tags: fixed design, φ-mixing errors, partially linear model, empirical likelihood, nonparametric likelihood ratio, sieve approximation, weight functions