
Empirical Likelihood Ratio Confidence Intervals For The Error Density In Linear Model

Posted on: 2006-02-28 | Degree: Master | Type: Thesis
Country: China | Candidate: L Y Yang | Full Text: PDF
GTID: 2120360155471507 | Subject: Probability theory and mathematical statistics
Abstract/Summary:
Empirical likelihood is a nonparametric inference method introduced by Owen to construct confidence intervals. Subsequently, several authors compared it with parametric likelihood and applied it to linear models, semiparametric models, nuisance parameters, regression functions, kernel density estimation, and so on. In this paper, empirical likelihood ratio confidence intervals for $f(x)$ are constructed using the empirical likelihood method.

Consider the linear model
$$y_i = x_i'\beta + e_i, \qquad i = 1, \dots, n, \qquad (1.1.1)$$
where $\{e_i\}_{i=1}^n$ are i.i.d. random errors with unknown density $f(x)$, $\beta \in \mathbb{R}^p$ is an unknown regression coefficient, and $\{x_i\}_{i=1}^n \subset \mathbb{R}^p$ are fixed design vectors. In traditional regression analysis one assumes $e_i \sim N(0, \sigma^2)$ and bases statistical inference on the least squares estimator of $\beta$. If the assumption $e_i \sim N(0, \sigma^2)$ fails, however, the least squares estimator loses many of its good properties. For the case where the regression of $Y$ on $X$ is linear but the error distribution is not normal, Huber proposed replacing the least squares estimator with a robust estimator and studied its properties. This raises a question: how can one judge whether the assumption $e_i \sim N(0, \sigma^2)$ is valid in model (1.1.1)? To answer it, one must estimate the error distribution and test hypotheses about it. Much research has been done on the consistency of various estimators of $f(x)$, with satisfactory results, but empirical likelihood ratio confidence intervals for $\theta_0 = f(x)$ have not yet been discussed. This paper provides such a discussion, and the results obtained are similar to Owen (1988).

I. Empirical likelihood ratio confidence intervals for the error density when $\beta$ is estimated by least squares

Throughout this part we assume that $e_1, e_2, \dots, e_n$ are i.i.d. random errors with unknown density $f(x)$. With $\hat e_{ni} = y_i - x_i'\hat\beta_n$ for $i = 1, \dots, n$, the empirical likelihood ratio statistic for $f(x)$ is defined as
$$\ell(\theta_0) = \sup\Big\{ \prod_{i=1}^n n p_i : p_i \ge 0,\ \sum_{i=1}^n p_i = 1,\ \sum_{i=1}^n p_i \Big[ \frac{1}{h} K\Big(\frac{x - \hat e_{ni}}{h}\Big) - f(x) \Big] = 0 \Big\}, \qquad (1.2.2)$$
and the kernel estimator of $f(x)$ is
$$\hat f_n(x) = \frac{1}{nh} \sum_{i=1}^n K\Big(\frac{x - \hat e_{ni}}{h}\Big).$$
To obtain the result we need the following assumptions:
(1) the kernel $K(u)$ in (1.2.2) is bounded for $u \in \mathbb{R}^1$, Riemann integrable, has bounded derivatives up to the third order, has compact support $[-1, 1]$, and satisfies (1.2.3) and (1.2.4):
$$\int u^j K^{(l)}(u)\, du = 0, \qquad j = 0, 1,\ l = 1, 2, \qquad (1.2.3)$$
$$\int u^j K(u)\, du = \begin{cases} 1, & j = 0, \\ 0, & 1 \le j < r, \\ c, & j = r, \end{cases} \qquad (1.2.4)$$
where $r$ is an integer no less than 2 and $c$ is a nonzero constant;
(2) $f$ has continuous derivatives up to the $r$th order in a neighbourhood of $x$, and $f(x) > 0$;
(3) $h \to 0$, $nh^4 \to \infty$, $nh^{2r} \to 0$ as $n \to \infty$.
Note: a kernel $K$ satisfying condition (1) exists (see the appendix).

The first main result is:
Theorem 1. Suppose conditions (1)-(3) hold. Then $-2\log \ell(\theta_0) \xrightarrow{L} \chi^2(1)$ as $n \to \infty$.
Note: let $A_\alpha = \{\theta_0 : -2\log \ell(\theta_0) \le C_\alpha\}$. From Theorem 1 we have $P(A_\alpha) \to P(\chi^2(1) \le C_\alpha) = 1 - \alpha$ as $n \to \infty$, where $C_\alpha$ is the upper $\alpha$ quantile of $\chi^2(1)$; thus we obtain an asymptotic confidence interval for $\theta_0 = f(x)$.
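As a computational illustration of Part I (not taken from the thesis), the sketch below computes least squares residuals, the kernel constraint terms from (1.2.2), and the empirical log-likelihood ratio at a candidate value $\theta_0 = f(x)$ via the standard Owen-type Lagrange-multiplier dual, then compares it with the $\chi^2(1)$ critical value. Python with NumPy/SciPy is assumed; the Epanechnikov kernel, bandwidth, design, and error distribution are illustrative choices only (in particular, the Epanechnikov kernel does not satisfy the derivative conditions in (1.2.3)), and all function names are hypothetical.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def ls_residuals(y, X):
    """Least squares fit of y = X @ beta + e; returns the residuals e_hat."""
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta_hat

def kernel_terms(residuals, x, h):
    """Constraint terms (1/h) * K((x - e_hat_i)/h) as in (1.2.2).
    Epanechnikov kernel used purely for illustration; it does not meet
    the derivative conditions (1.2.3) assumed in the thesis."""
    u = (x - residuals) / h
    K = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)
    return K / h

def el_log_ratio(z, theta0):
    """-2 * log empirical likelihood ratio for the mean of z at theta0:
    solve sum d_i / (1 + lam * d_i) = 0 with d_i = z_i - theta0, then
    return 2 * sum log(1 + lam * d_i)."""
    d = z - theta0
    n = d.size
    if np.all(d == 0.0):
        return 0.0
    if d.min() >= 0.0 or d.max() <= 0.0:
        return np.inf                      # theta0 outside the convex hull of z
    lo = (1.0 / n - 1.0) / d.max()         # bracket on which p_i = 1/(n(1 + lam d_i)) stays in (0, 1]
    hi = (1.0 / n - 1.0) / d.min()
    lam = brentq(lambda l: np.sum(d / (1.0 + l * d)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * d))

# Illustration with simulated data (hypothetical design, N(0, 1) errors):
rng = np.random.default_rng(0)
n, h, x0 = 200, 0.5, 0.0
X = rng.normal(size=(n, 2))
y = X @ np.array([1.0, -2.0]) + rng.normal(size=n)
z = kernel_terms(ls_residuals(y, X), x0, h)
theta0 = 1.0 / np.sqrt(2.0 * np.pi)        # true f(0) under N(0, 1) errors
in_interval = el_log_ratio(z, theta0) <= chi2.ppf(0.95, df=1)
```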
II. Empirical likelihood ratio confidence intervals for the error density when $\beta$ is estimated by M-estimation

Definition of the M-estimator: $\hat\beta_n$ is the M-estimator of $\beta$, that is,
$$\sum_{i=1}^n \rho(y_i - x_i'\hat\beta_n) = \min_{\beta} \sum_{i=1}^n \rho(y_i - x_i'\beta),$$
where $\rho$ satisfies:
(1) $\rho$ is a convex function on $\mathbb{R}^1$ whose left and right derivatives $\psi_-(\cdot)$ and $\psi_+(\cdot)$ exist, and there is a function $\psi(\cdot)$ such that $\psi_-(u) \le \psi(u) \le \psi_+(u)$ for all $u \in \mathbb{R}^1$;
(2) there exist positive constants $c$ and $h_0$ such that for any $h \in (0, h_0)$ and any $u$, $\psi(u + h) - \psi(u) \le c$;
(3) $E\psi(e_1) = 0$, $E|\psi(e_1)|^t < \infty$ for some $t > 2$, $G(u) = E\psi(e_1 + u)$ has derivative $g(u)$ for $u \in (-\infty, \infty)$, $g(0) > 0$, and $g(u)$ is continuous at 0.
It is easy to verify that $\rho(\cdot) = |\cdot|$ and $\rho(\cdot) = (\cdot)^2$ satisfy the three conditions above. Apart from replacing condition (3) of Part I, $h \to 0$, $nh^4 \to \infty$, $nh^{2r} \to 0$ as $n \to \infty$, by condition (3)': $h \to 0$, $nh^4(\log n)^{-4} \to \infty$, $nh^{2r} \to 0$ as $n \to \infty$, the assumptions are the same as in Part I.

The second main result is:
Theorem 2. Suppose conditions (1)-(3) hold. Then $-2\log \ell(\theta_0) \xrightarrow{L} \chi^2(1)$ as $n \to \infty$.
Note: let $A_\alpha = \{\theta_0 : -2\log \ell(\theta_0) \le C_\alpha\}$. From Theorem 2 we have $P(A_\alpha) \to P(\chi^2(1) \le C_\alpha) = 1 - \alpha$ as $n \to \infty$, where $C_\alpha$ is the upper $\alpha$ quantile of $\chi^2(1)$; thus we obtain an asymptotic confidence interval for $\theta_0 = f(x)$.

III. Empirical likelihood ratio confidence intervals for the error density when the estimating function of $f(x)$ is an indicator

First we give some assumptions and notation:
(1) $e_1, e_2, \dots, e_n$ are i.i.d. random errors with unknown density $f(x)$; $\|x_i\| \le C$, $i = 1, \dots, n$, where $C$ is a finite positive number; $\frac{1}{n} S_n \to \Sigma > 0$, where $S_n = \sum_{i=1}^n x_i x_i'$; and $\hat\beta_n$ is the least squares estimator of $\beta$;
(2) $f(x) > 0$ and $f$ satisfies a local Lipschitz condition at $x$: there exist constants $D = D(x) > 0$ and $\delta = \delta(x) > 0$, depending only on $x$, such that $|f(x) - f(y)| \le D|y - x|$ for all $y \in (x - \delta, x + \delta)$;
(3) $h \to 0$, $n^{5/12} h / \log n \to \infty$, $nh^3 \to 0$ as $n \to \infty$.
The empirical likelihood ratio statistic for $f(x)$ is defined as
$$\ell(\theta_0) = \sup \prod_{i=1}^n n p_i, \qquad (3.1.1)$$
where the $p_i$ are subject to $p_i \ge 0$, $\sum_{i=1}^n p_i = 1$, and $\sum_{i=1}^n p_i \big[ \frac{1}{2h} I(x - h \le \hat e_{ni} \le x + h) - f(x) \big] = 0$. The estimator of $f(x)$ is defined as $\hat f_n(x) = \frac{1}{nh} \sum_{i=1}^n K\big(\frac{x - \hat e_{ni}}{h}\big)$, where $\hat e_{ni} = y_i - x_i'\hat\beta_n$ for $i = 1, \dots, n$. Define $w_i = \frac{1}{2h} I(x - h \le \hat e_{ni} \le x + h) - f(x)$ and $A_n = (2nh)^{-1}$.

The third main result is:
Theorem 3. Suppose conditions (1)-(3) hold. Then $-2\log \ell(\theta_0) \xrightarrow{L} \chi^2(1)$ as $n \to \infty$.
Note: let $A_\alpha = \{\theta_0 : -2\log \ell(\theta_0) \le C_\alpha\}$. From Theorem 3 we have $P(A_\alpha) \to P(\chi^2(1) \le C_\alpha) = 1 - \alpha$ as $n \to \infty$, where $C_\alpha$ is the upper $\alpha$ quantile of $\chi^2(1)$; thus we obtain an asymptotic confidence interval for $\theta_0 = f(x)$.

Finally, we present some simulation results to check the theoretical results.
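The following is a minimal sketch, again hypothetical rather than the thesis's own code, of how such a simulation might be set up for the indicator-based construction of Part III: it estimates the empirical coverage of the asymptotic confidence set $\{\theta_0 : -2\log \ell(\theta_0) \le C_\alpha\}$ under normal errors, reusing `ls_residuals` and `el_log_ratio` from the previous sketch. The design, bandwidth, and sample sizes are arbitrary illustrative choices.

```python
import numpy as np
from scipy.stats import chi2, norm

def indicator_terms(residuals, x, h):
    """Part III constraint terms (1/(2h)) * 1{x - h <= e_hat_i <= x + h}."""
    return np.where(np.abs(residuals - x) <= h, 1.0 / (2.0 * h), 0.0)

def coverage(n=200, reps=500, x0=0.0, level=0.95, seed=1):
    """Monte Carlo coverage of the EL confidence set for theta0 = f(x0)
    under a hypothetical design with N(0, 1) errors; reuses ls_residuals
    and el_log_ratio from the previous sketch."""
    rng = np.random.default_rng(seed)
    h = n ** (-0.4)                      # ad hoc bandwidth, illustration only
    theta0 = norm.pdf(x0)                # true density value at x0
    crit = chi2.ppf(level, df=1)
    hits = 0
    for _ in range(reps):
        X = rng.normal(size=(n, 2))
        y = X @ np.array([1.0, -2.0]) + rng.normal(size=n)
        e_hat = ls_residuals(y, X)
        z = indicator_terms(e_hat, x0, h)
        hits += el_log_ratio(z, theta0) <= crit
    return hits / reps                   # close to `level` when the asymptotics apply
```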
Keywords/Search Tags: empirical likelihood, confidence interval, linear model, kernel estimation