
Empirical Likelihood Confidence Intervals Of A Class Of Statistical Functionals Under Dependent Samples

Posted on: 2005-01-13    Degree: Master    Type: Thesis
Country: China    Candidate: B Jiang    Full Text: PDF
GTID: 2120360125465251    Subject: Probability theory and mathematical statistics
Abstract/Summary:
The likelihood function is one of the most important tools in statistics. Its use ordinarily requires that the form of the population distribution be known, with the distribution depending only on a finite number of unknown parameters. When little is known about the population and only some auxiliary information is available (for example, the population second moment), no hypothesis about the form of the distribution can be made and the likelihood function cannot be used. The empirical (log-)likelihood is a nonparametric inference method introduced by Owen for constructing confidence intervals. Later authors compared it with the parametric likelihood and the empirical saddlepoint approximation and applied it to smooth functions of means, linear models, semiparametric models, nuisance parameters, regression functions, kernel density estimation, biased sampling, and so on; all of that work, however, assumes i.i.d. samples, and the dependent case is more involved.

This thesis studies empirical likelihood confidence intervals for a class of statistical functionals under dependent samples; only the limiting distributions of the (blockwise) empirical likelihood ratio statistics are given. The thesis consists of an introduction summarizing the main content, followed by three chapters.

Chapter 1 mainly discusses empirical likelihood confidence intervals for M-functionals, both in the presence and in the absence of auxiliary information. Section 1 is introductory and gives the definition of an M-functional: let $X_1,\dots,X_n$ be strongly stationary $\alpha$-mixing ([5]) random variables with distribution function $F$. An M-functional $\theta=\theta(F)$ associated with $F$ is defined as a root of the equation
$$\int \psi(x,\theta)\,dF(x)=0. \qquad (1.1.1)$$
In the absence of auxiliary information, subject to the restrictions $\sum_{i=1}^n p_i=1$, $p_i\ge 0$ and $\sum_{i=1}^n p_i\,\psi(X_i,\theta)=0$, the empirical likelihood ratio statistic is
$$R_n(\theta)=\sup\Big\{\prod_{i=1}^n np_i:\ \sum_{i=1}^n p_i\,\psi(X_i,\theta)=0,\ \sum_{i=1}^n p_i=1,\ p_i\ge 0\Big\}, \qquad (1.1.2)$$
where the maximizing weights are $p_i=\big[n\{1+\lambda^{\mathrm T}\psi(X_i,\theta)\}\big]^{-1}$, with the Lagrange multiplier $\lambda$ satisfying
$$\sum_{i=1}^n \frac{\psi(X_i,\theta)}{1+\lambda^{\mathrm T}\psi(X_i,\theta)}=0. \qquad (1.1.3)$$
In the presence of auxiliary information of the form
$$\int g(x)\,dF(x)=0, \qquad (1.1.4)$$
let $h(x,\theta)=(g(x)',\psi(x,\theta))'$ as in Zhang (1997); the empirical likelihood ratio statistic is then
$$\tilde R_n(\theta)=\sup\Big\{\prod_{i=1}^n np_i:\ \sum_{i=1}^n p_i\,h(X_i,\theta)=0,\ \sum_{i=1}^n p_i=1,\ p_i\ge 0\Big\}, \qquad (1.1.5)$$
where the weights satisfy
$$p_i=\big[n\{1+\lambda^{\mathrm T}h(X_i,\theta)\}\big]^{-1} \qquad (1.1.6)$$
and $\lambda$ satisfies
$$\sum_{i=1}^n \frac{h(X_i,\theta)}{1+\lambda^{\mathrm T}h(X_i,\theta)}=0. \qquad (1.1.7)$$

Section 2 states the theorems.

Theorem 1.1. Suppose that the root $\theta_0$ of (1.1.1) exists and is unique, that $\psi(x,\theta)$ is measurable with respect to $x$ and continuous in $\theta$, continuous at $\theta_0$ uniformly in $x$, and that the relevant quantity is finite and nonzero. Then, for a positive constant $d$ and a real constant $A$, the limiting distribution of the empirical log-likelihood ratio statistic is obtained, and it depends on $A$.

Since $A$ is unknown, this result cannot be used directly in practice. The blockwise empirical likelihood is therefore used to overcome this shortcoming of the ordinary empirical likelihood: the observations are grouped into blocks, and the blockwise empirical likelihood ratio statistic is formed from the block sums.

Theorem 1.2. Under conditions analogous to those of Theorem 1.1, for a positive constant $d$ and a real constant $A$, the corresponding limiting distribution is obtained.

Again, because $A$ is unknown, this result cannot be used directly in practice, and the blockwise empirical likelihood is used: the empirical likelihood ratio statistic is given by (1.2.1), where the weights satisfy (1.2.2) and the Lagrange multiplier satisfies (1.2.3).

Theorem 1.3. Under conditions similar to those of Theorem 1.2, the corresponding limiting distribution is obtained.

Chapter 2 mainly discusses empirical likelihood confidence intervals for the density function and for nonparametric functionals. Let $X_1,\dots,X_n$ be random samples from a population with density $f$; for a fixed $x$, let the samples be strongly stationary $\alpha$-mixing...
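To make the construction concrete, below is a minimal numerical sketch (not taken from the thesis) of the ordinary and blockwise empirical likelihood log-ratio statistics for the simplest M-functional, the mean, i.e. $\psi(x,\theta)=x-\theta$. The function names, the use of non-overlapping blocks of a fixed length, and the choice of SciPy's `brentq` root finder are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq

def el_log_ratio(x, mu):
    """-2 * log empirical likelihood ratio for the mean mu,
    i.e. the M-functional with psi(x, theta) = x - theta."""
    z = np.asarray(x, dtype=float) - mu
    if z.min() >= 0 or z.max() <= 0:
        return np.inf                     # mu outside the convex hull of the data
    # The Lagrange multiplier lam must keep every weight positive:
    # p_i = 1 / (n * (1 + lam * z_i)) > 0, so search inside that interval.
    eps = 1e-8
    lo = (-1.0 + eps) / z.max()
    hi = (-1.0 + eps) / z.min()
    score = lambda lam: np.sum(z / (1.0 + lam * z))   # derivative of the dual problem
    lam = brentq(score, lo, hi)
    return 2.0 * np.sum(np.log1p(lam * z))

def blockwise_el_log_ratio(x, mu, block_len):
    """Blockwise variant for weakly dependent (e.g. alpha-mixing) data:
    average the observations over non-overlapping blocks, then apply the
    ordinary empirical likelihood routine to the block means."""
    x = np.asarray(x, dtype=float)
    n_blocks = len(x) // block_len
    block_means = x[: n_blocks * block_len].reshape(n_blocks, block_len).mean(axis=1)
    return el_log_ratio(block_means, mu)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # AR(1) sample: a simple dependent (mixing) sequence with mean 0
    e = rng.normal(size=500)
    x = np.empty(500)
    x[0] = e[0]
    for t in range(1, 500):
        x[t] = 0.5 * x[t - 1] + e[t]
    print(el_log_ratio(x, 0.0), blockwise_el_log_ratio(x, 0.0, block_len=10))
```

A confidence interval is then the set of candidate values $\mu$ for which the statistic stays below the chosen calibration quantile. The point of the blockwise construction, as in results of this type, is that under dependence the ordinary statistic's limit involves an unknown scaling constant, whereas blocking restores a usable calibration.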
Keywords/Search Tags: Dependent samples, Empirical likelihood, Auxiliary information, M-functionals, Density function, Nonparametric functionals, Confidence intervals.