
Interval Estimation And Interval Prediction Under The Decision Theory

Posted on: 2007-05-25
Degree: Master
Type: Thesis
Country: China
Candidate: Q H Ma
Full Text: PDF
GTID: 2120360182496209
Subject: Probability theory and mathematical statistics
Abstract/Summary:
At the end of the 1940s, Wald established statistical decision theory. One feature of this theory is that it turns a statistical problem into an optimization problem. Criteria such as the Bayes criterion, the minimax criterion, and equivariance can all be used in interval estimation, and they have greatly enriched interval estimation theory. Following this line, Section 2 treats the interval estimation of a normal mean under symmetric and asymmetric loss functions. From the decision-theoretic point of view, most attention has been paid to parameter estimation rather than to prediction problems under order restrictions. By combining the methods for improving parameter estimators used in estimation theory with the order restriction on the unknown distribution parameters, we provide methods for improving location- and scale-equivariant prediction intervals.

1 Interval estimation

Let $X_1, \ldots, X_n$ be i.i.d. normal random variables with mean $\mu$ and variance $1$ ($N(\mu,1)$). We take the normal distribution $N(0,k^2)$, with positive constant $k$, as the prior $\pi_k$. The loss function is given by
$$L(\mu,(a,b)) = (b-a) + m\, I_{(a,b)^c}(\mu).$$
Then the confidence interval $(\bar X - d,\ \bar X + d)$ is minimax, and among all $1-\alpha$ confidence intervals, $\big(\bar X - \tfrac{1}{\sqrt n} z_{\alpha/2},\ \bar X + \tfrac{1}{\sqrt n} z_{\alpha/2}\big)$ minimizes $\sup_{\mu} E_\mu(\hat\mu_U - \hat\mu_L)$.

Suppose that $X_1, \ldots, X_n$ are i.i.d. normal random variables with mean $\mu$ and variance $\sigma^2$ ($N(\mu,\sigma^2)$). For the case that $\sigma$ is known, we adopt the Linex loss function to compare the lower confidence bound $\hat\mu_L$ and the upper confidence bound $\hat\mu_U$:
$$L_1(\mu,\hat\mu_L) = b_1\Big[\exp\Big(a_1\tfrac{\hat\mu_L-\mu}{\sigma}\Big) - a_1\tfrac{\hat\mu_L-\mu}{\sigma} - 1\Big],$$
$$L_2(\mu,\hat\mu_U) = b_2\Big[\exp\Big(-a_2\tfrac{\hat\mu_U-\mu}{\sigma}\Big) + a_2\tfrac{\hat\mu_U-\mu}{\sigma} - 1\Big],$$
where $a_i$ and $b_i$ ($i = 1,2$) are known positive constants.

THEOREM 1.1 If $a_1 < 2u_0\sqrt{n}$ with $\Phi(u_0) = 1-\alpha$, then the lower confidence bound
$$\hat\mu_L^* = \bar X - \frac{u_0\sigma}{\sqrt n}$$
is minimax and admissible among all $1-\alpha$ lower confidence bounds.

THEOREM 1.2 If $a_2 < 2v_0\sqrt{n}$ with $\Phi(v_0) = 1-\alpha$, then the upper confidence bound
$$\hat\mu_U^* = \bar X + \frac{v_0\sigma}{\sqrt n}$$
is minimax and admissible among all $1-\alpha$ upper confidence bounds.

To compare confidence intervals $(\hat\mu_L, \hat\mu_U)$, we adopt the loss function
$$L(\mu,\hat\mu_L,\hat\mu_U) = L_1(\mu,\hat\mu_L) + L_2(\mu,\hat\mu_U).$$

THEOREM 1.3 If $\Phi\big(\tfrac{a_1}{2\sqrt n}\big)\,\Phi\big(\tfrac{a_2}{2\sqrt n}\big) < 1-\alpha$ holds and
$$\hat\mu_L^* = \bar X - \frac{u_0\sigma}{\sqrt n}, \qquad \hat\mu_U^* = \bar X + \frac{v_0\sigma}{\sqrt n},$$
then the confidence interval $(\hat\mu_L^*, \hat\mu_U^*)$ is minimax and admissible among all $1-\alpha$ confidence intervals.

2 Interval prediction

2.1 Location family

Suppose that $(X, Y, Z)$ has the joint density $f(x-\xi,\ y-\xi,\ z-\eta)$, where $f$ is known and $\xi$ and $\eta$ are unknown location parameters with $\xi \le \eta$. We consider the problem of improving a prediction interval $\delta_c = (X - c_1,\ X + c_2)$ by
$$\delta_\varphi = \big(X - \varphi_1(Z-X),\ X + \varphi_2(Z-X)\big),$$
where the $c_i$ are constants and the $\varphi_i$ are functions ($i = 1,2$). Let $U = Z - X$ and $V = Y - X$. The joint density of $(U, V)$ is $g(u-\lambda, v)$, where $\lambda = \eta - \xi \ge 0$. Let
$$G(u,v) = \int_0^{+\infty} g(u-t, v)\,dt.$$

THEOREM 2.1.1 Assume that
(1) $G(u,v)$ is TP2;
(2) $\varphi_1(t)$ is non-increasing with $\lim_{t\to\infty}\varphi_1(t) = c_1$, $\varphi_2(t)$ is non-decreasing with $\lim_{t\to\infty}\varphi_2(t) = c_2$, and $\varphi_2'(t) \ge -\varphi_1'(t)$;
(3) $\varphi_2'(t)\,G(t,\varphi_2(t)) + \varphi_1'(t)\,G(t,-\varphi_1(t)) \le 0$ for each $t$.
Then $P\{Y \in \delta_c\} \le P\{Y \in \delta_\varphi\}$ and $EL(\delta_c) \ge EL(\delta_\varphi)$ for any $\theta \in \Theta$.

2.2 Scale family

Suppose that $(X, Y, Z)$ has the joint density
$$\xi^{-2}\eta^{-1} f\Big(\frac{x}{\xi}, \frac{y}{\xi}, \frac{z}{\eta}\Big), \qquad x > 0,\ y > 0,\ z > 0,$$
where $f$ is known and $\xi$ and $\eta$ are unknown scale parameters with $0 < \xi \le \eta$. We consider the problem of improving a predictor $\delta_c = cX$ by $\delta_\varphi = \varphi\big(\tfrac{Z}{X}\big)X$, where $c$ is a positive constant and $\varphi$ is a positive function. Let $U = \tfrac{Z}{X}$ and $V = \tfrac{Y}{X}$. The joint density of $(U, V)$ is $\lambda^{-1} g\big(\tfrac{u}{\lambda}, v\big)$, where $\lambda = \tfrac{\eta}{\xi} \ge 1$. Let
$$G(u,v) = \int_1^{+\infty} t^{-2}\, g\Big(\frac{u}{t}, v\Big)\,dt.$$

THEOREM 2.2.1 Assume that
(1) $G(u,v)$ is TP2;
(2) $\varphi(u)$ is non-decreasing and $\lim_{u\to\infty}\varphi(u) = c$;
(3) $\int_0^{+\infty} v^{-1} L'(\varphi(u)v)\,G(u,v)\,dv \ge 0$ for any $u$.
Then $R(\theta,\delta_\varphi) \le R(\theta,\delta_c)$ for any $\theta \in \Theta = \{\theta = (\xi,\eta) : \xi \le \eta\}$.

Suppose there exist $c_0$ and $\varphi_0$ such that
$$\int_0^{+\infty} v^{-1} L'(c_0 v)\,h(v)\,dv = 0, \qquad \int_0^{+\infty} v^{-1} L'(\varphi_0(u)v)\,G(u,v)\,dv = 0.$$

COROLLARY 2.2.1 Assume that
(1) $G(u,v)$ is TP2;
(2) $G(u, xy)$ is TP2 in $x$ and $y$ for each $u$.
Then $R(\theta,\delta_{\varphi_0}) \le R(\theta,\delta_{c_0})$ for any $\theta \in \Theta$.

We also consider the problem of improving a prediction interval $\delta_c = (c_1 X,\ c_2 X)$ by $\delta_\varphi = \big(\varphi_1\big(\tfrac{Z}{X}\big)X,\ \varphi_2\big(\tfrac{Z}{X}\big)X\big)$, where the $c_i$ are positive constants and the $\varphi_i$ are positive functions ($i = 1,2$).

THEOREM 2.2.2 Assume that
(1) $G(u,v)$ is TP2;
(2) $\varphi_1(u)$ and $\varphi_2(u)$ are non-increasing, $\lim_{u\to\infty}\varphi_1(u) = c_1$, $\lim_{u\to\infty}\varphi_2(u) = c_2$, and $\varphi_2'(u) \ge \varphi_1'(u)$;
(3) $\varphi_2'(u)\,G(u,\varphi_2(u)) - \varphi_1'(u)\,G(u,\varphi_1(u)) \le 0$ for each $u$.
Then $P\{Y \in \delta_c\} \le P\{Y \in \delta_\varphi\}$ and $EL(\delta_c) \ge EL(\delta_\varphi)$ for any $\theta \in \Theta$.
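As a numerical illustration of Theorems 1.1 and 1.2, the minimal sketch below computes the minimax lower confidence bound $\hat\mu_L^* = \bar X - u_0\sigma/\sqrt n$, checks the condition $a_1 < 2u_0\sqrt n$, and verifies the frequentist coverage $P(\hat\mu_L^* \le \mu) \approx 1-\alpha$ by Monte Carlo. The parameter values, sample size, and seed are illustrative assumptions, not taken from the thesis.

```python
from statistics import NormalDist
import random

def minimax_lower_bound(xbar, sigma, n, alpha):
    """Theorem 1.1 bound: mu_L* = Xbar - u0*sigma/sqrt(n), with Phi(u0) = 1 - alpha."""
    u0 = NormalDist().inv_cdf(1 - alpha)
    return xbar - u0 * sigma / n ** 0.5

def linex_condition_holds(a1, n, alpha):
    """Theorem 1.1 requires a1 < 2*u0*sqrt(n) for minimaxity and admissibility."""
    u0 = NormalDist().inv_cdf(1 - alpha)
    return a1 < 2 * u0 * n ** 0.5

# Monte Carlo check of coverage: P(mu_L* <= mu) should be about 1 - alpha.
# mu, sigma, n, alpha below are illustrative choices.
random.seed(0)
mu, sigma, n, alpha = 2.0, 1.5, 25, 0.05
reps = 20000
hits = 0
for _ in range(reps):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = sum(sample) / n
    if minimax_lower_bound(xbar, sigma, n, alpha) <= mu:
        hits += 1
print(hits / reps)  # close to 0.95 = 1 - alpha
```

The upper bound of Theorem 1.2 is symmetric ($\bar X + v_0\sigma/\sqrt n$ with $\Phi(v_0) = 1-\alpha$), so the same check applies with the inequality reversed.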
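The TP2 assumption of Theorem 2.1.1 can be checked numerically in a concrete special case. The sketch below assumes $X$, $Y$, $Z$ are independent unit-variance normals (a hypothetical choice for illustration; the thesis's $f$ is a general known density), so that $(U,V) = (Z-X,\ Y-X)$ at $\lambda = 0$ is bivariate normal with covariance matrix $\begin{pmatrix}2&1\\1&2\end{pmatrix}$. It approximates $G(u,v) = \int_0^{+\infty} g(u-t,v)\,dt = \int_{-\infty}^{u} g(w,v)\,dw$ by a Riemann sum and tests the defining $2\times 2$ inequality $G(u_1,v_1)G(u_2,v_2) \ge G(u_1,v_2)G(u_2,v_1)$ on a small grid.

```python
import math

def g(u, v):
    """Bivariate normal density of (U, V): mean 0, cov [[2, 1], [1, 2]], det = 3."""
    q = (2 * u * u - 2 * u * v + 2 * v * v) / 3.0  # quadratic form (u,v) Sigma^{-1} (u,v)^T
    return math.exp(-q / 2) / (2 * math.pi * math.sqrt(3.0))

def G(u, v, lo=-30.0, step=0.01):
    """G(u, v) = integral_{-infty}^{u} g(w, v) dw, approximated by a Riemann sum."""
    s, w = 0.0, u + lo
    while w <= u:
        s += g(w, v)
        w += step
    return s * step

def tp2_holds(u1, u2, v1, v2, tol=1e-6):
    """TP2 inequality for u1 < u2, v1 < v2 (tol absorbs the quadrature error)."""
    return G(u1, v1) * G(u2, v2) >= G(u1, v2) * G(u2, v1) - tol

# Verify the inequality over all increasing pairs drawn from a small grid.
grid = [-1.0, 0.0, 1.0]
ok = all(tp2_holds(u1, u2, v1, v2)
         for u1 in grid for u2 in grid if u1 < u2
         for v1 in grid for v2 in grid if v1 < v2)
print(ok)
```

A grid check of this kind is only a sanity test, not a proof; for this positively correlated normal case the TP2 property does hold exactly, which is why every tested determinant comes out non-negative.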
Keywords/Search Tags: Estimation