
Local Polynomial Estimator Of The Regression Function In Autoregressive Models With Errors In Variables

Posted on: 2007-06-02    Degree: Master    Type: Thesis
Country: China    Candidate: Z X Yu    Full Text: PDF
GTID: 2120360182496187    Subject: Probability theory and mathematical statistics
Abstract/Summary:
In this paper we study nonparametric estimation of the autoregression function in autoregressive models with errors in variables. Applying the deconvolution kernel, we construct a local polynomial estimator of the autoregression function, and we prove that the deconvolution kernel estimator proposed by Comte (2004) is a special case of the local polynomial estimator. Under suitable conditions we obtain consistency and asymptotic normality of the local polynomial estimator, and we study the performance of the method in simulation experiments.

The well-known Black and Scholes (1973) model assumes the following dynamics for an asset price S_t at time t:

    dS_t = S_t(\mu\,dt + \sigma\,dB_t),

where B_t is a standard Brownian motion. Solving this equation leads to the following model for h_t = \ln(S_t/S_{t-1}), the return associated with the asset price S_t at time t:

    h_t = (\mu - \sigma^2/2) + \sigma\xi_t,

where \{\xi_t\} is a sequence of i.i.d. Gaussian variables. Financial data suggest, however, that the volatility is not constant over time, which motivates the stochastic-volatility model with errors in variables considered here.

[Assumption 1] \{\varepsilon_t\} is i.i.d. with \operatorname{Var}(\varepsilon) = \sigma_\varepsilon^2 < +\infty; f_\varepsilon denotes the density of \varepsilon.

[Assumption 2] \varepsilon_t = \lambda(\ln\xi_t^2 - u), where u = E(\ln\xi_t^2) and \{\xi_t\} is i.i.d.

Only the data \{Y_t\} can be observed, and f_\varepsilon is known. Comte (2004) gave the deconvolution kernel estimator of the autoregression function m(x) in the model above. We consider the estimator of m(x) based on the observations,

    \hat m_n(x) = \frac{(n h_n)^{-1}\sum_{t=1}^{n-1} Y_{t+1}\, K_n\!\left(\frac{Y_t - x}{h_n}\right)}{\hat f_n(x)},

where K_n(x) and \hat f_n(x) are given by

    K_n(x) = \frac{1}{2\pi}\int e^{-itx}\,\frac{K^*(t)}{f_\varepsilon^*(t/h_n)}\,dt, \qquad \hat f_n(x) = \frac{1}{n h_n}\sum_{t=1}^{n} K_n\!\left(\frac{Y_t - x}{h_n}\right),

K(\cdot) is a kernel, h_n is a bandwidth parameter, and g^* denotes the Fourier transform g^*(x) = \int e^{itx} g(t)\,dt.

Since the data \{X_t\} cannot be observed, an estimator of m(x) can only be constructed from the data \{Y_t\}, where Y_t = X_t + \varepsilon_t. In light of the method by which Comte (2004) derived the deconvolution kernel estimator, and exploiting the special structure of the deconvolution kernel, in Section 2 we construct the local polynomial estimator \hat m_n^{lp}(x) of m(x) from the data \{Y_t\}, and we show that the deconvolution kernel estimator \hat m_n(x) is a special case of \hat m_n^{lp}(x). In Section 3 we establish consistency and asymptotic normality of the local polynomial estimator under suitable conditions. In Section 4 we test the method through a simulation study. Section 5 presents elements of the proofs.

We construct the local polynomial estimator of the autoregression function in the autoregressive model with errors in variables under the following assumptions.

[Assumption 3] \{X_t\} is a stationary sequence; f_X(\cdot) is bounded away from 0 on an interval [a,b] (a < b) and admits bounded derivatives up to order l-1.

[Assumption 4] The conditional moment E(X_t^2 \mid X_{t-1} = x) is continuous on [a,b] (a < b); moreover E X_1^2 < +\infty.

[Assumption 5] m(x) admits continuous derivatives up to order l-1 on the interval [a,b] (a < b).

[Assumption 6] K(\cdot) is a symmetric kernel, K(u) = K(-u), of order l-1, i.e.

    \int K(y)\,dy = 1, \quad \int y^{l-1}K(y)\,dy \ne 0, \quad \int y^i K(y)\,dy = 0, \ i = 1, 2, \dots, l-2.

For instance, the Gaussian kernel K(x) = e^{-x^2/2}/\sqrt{2\pi} is a kernel of order 2.

[Assumption 7] f_\varepsilon^*(t) = E(e^{it\varepsilon}) \ne 0 for all t \in R^1.

If all the conditions of the assumptions above are fulfilled, Y_0, Y_1, \dots, Y_n are observed, and f_\varepsilon is known, we consider the local polynomial estimator of m(x) in the autoregressive model:

    \hat m_n^{lp}(x) = F(0)^T C_n(x),

where C_n(x) is defined as

    C_n(x) = \arg\min_{C \in R^l} \sum_{t=1}^{n-1} \left[Y_{t+1} - C^T F\!\left(\frac{Y_t - x}{h_n}\right)\right]^2 K_n^+\!\left(\frac{Y_t - x}{h_n}\right),

with F(u) = (1,\, u,\, u^2/2!,\, \dots,\, u^{l-1}/(l-1)!)^T. As shown in Stefanski and Carroll (1991), if K_n(\cdot) is a real function, then K_n(u) = K_n(-u), and K_n(\cdot) is an even bounded function that integrates to 1, although it need not be nonnegative. Since K_n(\cdot) is taken as the weight function, we define the truncated kernel K_n^+(u) = \max\{K_n(u), 0\}.

Based on the definition of the estimator, consistency and asymptotic normality of the local polynomial estimator can be obtained under the following conditions.

(A1) E\eta_1 = E\eta_1^3 = 0, E\eta_1^2 = 1, m_4 = E\{(\eta_1^2 - 1)^2\} < +\infty, and E\eta_1^6 < +\infty. The density p(\cdot) of \eta_1 exists and satisfies \inf_{x \in H} p(x) > 0 for every compact H \subset R^1.

(A2) There exists 0 < C_1 < 1 - E|\eta_1| such that |m(x)| \le C_1(1 + |x|); there exists C_2 > 0 such that |m(x) - m(y)| \le C_2|x - y| for x, y \in R^1.

(A3) E\varepsilon_1^4 < +\infty.

Lemma 1. Under assumptions (A1)-(A3) the Markov process \{Y_t\} defined above is geometrically ergodic, i.e. it is ergodic with stationary probability measure \pi(\cdot) such that, for almost every y,

    \|P^n(\cdot \mid y) - \pi(\cdot)\|_{TV} = O(\rho^n)

for some 0 < \rho < 1. Here P^n(B \mid y) = P\{Y_n \in B \mid Y_0 = y\} for a Borel subset B \subset R^1, and \|\cdot\|_{TV} is the total variation distance.

(A4) The function m(x) is (l-1) times continuously differentiable and the one-sided derivatives m_\pm^{(l)}(x) exist at the point x \in R^1.

(A5) The density \bar f(\cdot) of the stationary distribution \pi(\cdot) exists, is bounded and continuous, and is strictly positive in a neighbourhood of the point x.

(A6) The deconvolution weight kernel K_n : R^1 \to R^+ \cup \{0\} (the truncated kernel K_n^+ defined above) is uniformly bounded with uniformly compact support: \max_n \max\{|u| : u \in \operatorname{supp} K_n\} < +\infty, and there exists \kappa > 0 such that \max_n \|K_n\|_\infty \le \kappa.

(A7) h_n = \beta n^{-1/(2l+1)}, where \beta > 0.

(A8) The distribution of Y_0 is the stationary distribution \pi(\cdot) of Lemma 1.

Define the following matrices:

    A_n = \int F(u) F^T(u) K_n(u)\,du, \qquad \Phi_n = \int F(u) F^T(u) K_n^2(u)\,du.

According to the definition of K_n(\cdot) and condition (A6), for any positive integer n the matrices A_n and \Phi_n are positive definite (Tsybakov [10]). Set D_n = A_n^{-1} \Phi_n A_n^{-1}. Let

    m^{(l)}(x, u) = \begin{cases} m_+^{(l)}(x), & u > 0, \\ m_-^{(l)}(x), & u < 0, \end{cases} \qquad b_n(x) = \frac{h_n^l}{l!} \int F(u)\, u^l K_n(u)\, m^{(l)}(x, u)\,du,

and denote

    C(x) = (m(x),\ m'(x) h_n,\ \dots,\ m^{(l-1)}(x) h_n^{l-1})^T.

We have the theorems below.

Theorem 1. Assume (A1)-(A8). Then \{C_n(x) - C(x)\}^T F(0) \to_P 0 as n \to +\infty.

Theorem 2. Assume (A1)-(A8). Then

    \sqrt{n h_n}\, D_n^{-1/2}\left[(C_n(x) - C(x)) - b_n(x)\right] \to_d N(0, I_l), \quad n \to +\infty,

where ...
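The deconvolution kernel K_n has a closed form for some kernel/error pairs, which makes it easy to sanity-check numerically. The sketch below is an illustration, not from the thesis: it assumes a Gaussian kernel K (so K^*(t) = e^{-t^2/2}) and Laplace(0, sigma_eps) measurement errors (so f_eps^*(t) = 1/(1 + sigma_eps^2 t^2)), and evaluates the inverse-Fourier integral defining K_n by trapezoidal quadrature.

```python
import numpy as np

def deconv_kernel(u, h, sigma_eps):
    """Deconvolution kernel K_n(u) = (1/2pi) * int e^{-itu} K*(t)/f_eps*(t/h) dt.

    Illustrative assumptions (not the thesis's choices): Gaussian kernel K,
    Laplace(0, sigma_eps) error density. For this pair the integral has the
    closed form phi(u) * (1 + (sigma_eps/h)^2 * (1 - u^2)), phi = N(0,1) pdf.
    """
    t = np.linspace(-20.0, 20.0, 4001)           # integrand is negligible beyond
    K_ft = np.exp(-t**2 / 2)                      # K*(t) for the Gaussian kernel
    f_eps_ft = 1.0 / (1.0 + (sigma_eps * t / h)**2)  # Laplace characteristic fn
    integrand = np.exp(-1j * t * u) * K_ft / f_eps_ft
    dt = t[1] - t[0]                              # manual trapezoid rule
    val = (integrand.sum() - 0.5 * (integrand[0] + integrand[-1])) * dt
    return val.real / (2 * np.pi)
```

Note that K_n changes sign for large |u|, illustrating the Stefanski-Carroll point that the deconvolution kernel integrates to 1 but need not be nonnegative, which is why the weights are truncated to K_n^+.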
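The weighted least-squares definition of C_n(x) translates directly into code. The snippet below is a minimal sketch, not the thesis's implementation: the kernel Kn, bandwidth, and data are placeholders, and the fit uses the truncated weights K_n^+ = max(K_n, 0) on the observed pairs (Y_t, Y_{t+1}). With degree 0 (l = 1) the minimizer reduces to a weighted average of the Y_{t+1}, i.e. the Nadaraya-Watson form of the deconvolution kernel estimator, mirroring the special-case claim.

```python
import math
import numpy as np

def local_poly_estimate(x, Y, h, Kn, degree):
    """Local polynomial estimator m_hat(x) = F(0)^T C_n(x), where C_n(x)
    minimizes sum_t [Y_{t+1} - C^T F((Y_t - x)/h)]^2 * max(Kn((Y_t - x)/h), 0).

    Sketch only: Kn is any user-supplied (possibly sign-changing) kernel.
    """
    u = (Y[:-1] - x) / h                 # rescaled regressors from pairs (Y_t, Y_{t+1})
    resp = Y[1:]                         # responses Y_{t+1}
    w = np.maximum(Kn(u), 0.0)           # truncated nonnegative weights K_n^+
    l = degree + 1                       # F(u) = (1, u, ..., u^{l-1}/(l-1)!)^T
    F = np.column_stack([u**j / math.factorial(j) for j in range(l)])
    sw = np.sqrt(w)                      # weighted LS via rescaled ordinary LS
    C, *_ = np.linalg.lstsq(sw[:, None] * F, sw * resp, rcond=None)
    return C[0]                          # F(0)^T C_n(x): the first component
```

Solving the weighted problem through `lstsq` on the sqrt-weight-rescaled system avoids forming normal equations and stays stable when some weights are zero.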
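The simulation study of Section 4 can be mimicked in miniature. Every concrete choice below (the function m, the noise laws and scales, n, the bandwidth) is an illustrative assumption, not taken from the thesis; the baseline shown is a naive Nadaraya-Watson fit on the contaminated pairs, whose errors-in-variables bias is what the deconvolution/local-polynomial approach is designed to correct.

```python
import numpy as np

# Illustrative data-generating process: latent chain X_t = m(X_{t-1}) + eta_t,
# observed Y_t = X_t + eps_t, with only {Y_t} available to the statistician.
rng = np.random.default_rng(42)
n = 500

def m_true(x):
    return 0.6 * x / (1.0 + x**2)        # bounded, Lipschitz autoregression function

X = np.zeros(n)
for t in range(1, n):
    X[t] = m_true(X[t - 1]) + 0.5 * rng.standard_normal()
eps = rng.laplace(scale=0.3, size=n)      # measurement errors with known density f_eps
Y = X + eps                               # the observed series

def nw_naive(x, Y, h):
    """Naive Nadaraya-Watson on pairs (Y_t, Y_{t+1}), ignoring the errors."""
    w = np.exp(-((Y[:-1] - x) / h)**2 / 2)
    return np.sum(w * Y[1:]) / np.sum(w)

grid = np.linspace(-1.0, 1.0, 5)
est = np.array([nw_naive(x, Y, 0.4) for x in grid])
```

In a fuller experiment one would compare this naive fit against the local polynomial estimator with deconvolution weights over many Monte Carlo replications.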
Keywords/Search Tags: Autoregressive