
The Double-smoothing Local Linear Estimation In Partly Linear Models

Posted on: 2013-01-20    Degree: Master    Type: Thesis
Country: China    Candidate: C Zhang    Full Text: PDF
GTID: 2230330395458753    Subject: Probability theory and mathematical statistics
Abstract/Summary:
Local linear regression is frequently used in practice because of its excellent numerical and theoretical properties. It fits a straight line segment over a small region, and the local linear estimate at a point x is the estimated intercept of that line segment. The local linear estimator has an asymptotic bias of order h^2 and a variance of order (nh)^(-1), where h is the bandwidth. The double-smoothing local linear estimator, by contrast, is constructed by integrally combining, with a second round of smoothing, the values at x of all local lines fitted in a neighborhood of x. Instead of using only the intercept of the local line at x, the new method exploits all of the information obtained from fitting the local lines. Without changing the order of the variance, the new estimator reduces the bias to order h^4, so it outperforms local linear regression in situations where the bias effect is considerable.

In partly linear models, the estimator of α converges at the rate n^(-1/2), and the estimator of m(·) achieves the best possible rate of convergence for the indicated semiparametric problem; under appropriate conditions, asymptotic distributions of the estimates of m(·) and α can be established. This paper proves that the estimators retain these properties when double-smoothing local linear regression is used, so the new method is applicable in a wide range of settings. Let (X, Y, B) denote a random vector such that B and X are real-valued and correlated. In the partly linear model, the double-smoothing local linear method is first used to estimate the two regression functions E(B|X) = u(X) and E(Y|X) = v(X); α is then estimated by the least squares method, and finally m(·) is estimated by double-smoothing local linear regression. Under appropriate conditions, the asymptotic distributions of the estimates of α and m(·) are established, and these estimates are shown to achieve the best possible rates of convergence for the indicated semiparametric problems.

Before the major theorems are proved, five lemmas are established. Theorem 1 gives the rate of convergence and the asymptotic distribution of the estimator of α, and shows that its squared bias is asymptotically negligible compared with its variance without requiring m(·) to be under-smoothed. Theorem 2 establishes the asymptotic normality of the estimator of α. Theorem 3 deals with the bias and variance of the estimator of m(·) and shows that the bias bound is obtained without a differentiability condition on the density function f(·); to achieve such a bound, kernel-based estimates would require f(·) to be differentiable. Theorem 4 establishes the asymptotic normality of the estimator of m(·).
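The estimation procedure described above can be sketched numerically. The following Python code is only a minimal illustration, not the thesis's implementation: it assumes a scalar covariate B, a Gaussian kernel, an equally spaced grid as a Riemann-sum approximation of the integral in the double-smoothing step, and arbitrary illustrative choices of bandwidth and simulated data; the function names (ds_local_linear, partly_linear_fit) are hypothetical.

```python
import numpy as np

def gauss_kernel(u):
    """Standard Gaussian kernel (an illustrative choice)."""
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def local_linear_fit(t, X, Y, h):
    """Local linear fit at t: intercept a(t) and slope b(t) of the
    kernel-weighted least-squares line with bandwidth h."""
    w = gauss_kernel((X - t) / h)
    Z = np.column_stack([np.ones_like(X), X - t])
    WZ = Z * w[:, None]
    beta, *_ = np.linalg.lstsq(WZ.T @ Z, WZ.T @ Y, rcond=None)
    return beta[0], beta[1]

def ds_local_linear(x, X, Y, h, n_grid=41):
    """Double-smoothing local linear estimate at x: fit local lines at grid
    points t near x, evaluate each fitted line at x, and combine those values
    with a second round of kernel smoothing over t (Riemann-sum integral)."""
    t_grid = np.linspace(x - 3.0 * h, x + 3.0 * h, n_grid)
    fitted_at_x = np.empty(n_grid)
    for j, t in enumerate(t_grid):
        a_t, b_t = local_linear_fit(t, X, Y, h)
        fitted_at_x[j] = a_t + b_t * (x - t)
    w = gauss_kernel((t_grid - x) / h)
    return np.sum(w * fitted_at_x) / np.sum(w)

def partly_linear_fit(X, B, Y, h, x_eval):
    """Partly linear model Y = alpha*B + m(X) + error, following the steps in
    the abstract: (1) estimate u(X) = E(B|X) and v(X) = E(Y|X) by
    double-smoothing local linear regression, (2) estimate alpha by least
    squares on the residuals, (3) estimate m(.) from Y - alpha*B by the same
    smoother."""
    u_hat = np.array([ds_local_linear(xi, X, B, h) for xi in X])
    v_hat = np.array([ds_local_linear(xi, X, Y, h) for xi in X])
    B_res, Y_res = B - u_hat, Y - v_hat
    alpha_hat = np.sum(B_res * Y_res) / np.sum(B_res ** 2)  # least squares
    m_hat = np.array([ds_local_linear(x0, X, Y - alpha_hat * B, h)
                      for x0 in x_eval])
    return alpha_hat, m_hat

# Illustrative simulated data (not from the thesis); B and X are correlated.
rng = np.random.default_rng(0)
n = 300
X = rng.uniform(0.0, 1.0, n)
B = np.sin(2.0 * np.pi * X) + rng.normal(0.0, 0.5, n)
Y = 2.0 * B + np.cos(2.0 * np.pi * X) + rng.normal(0.0, 0.3, n)
alpha_hat, m_hat = partly_linear_fit(X, B, Y, h=0.08,
                                     x_eval=np.linspace(0.1, 0.9, 9))
print("alpha_hat:", alpha_hat)
```

The second kernel pass over the grid of fitting points t is what distinguishes the double-smoothing estimator in this sketch from ordinary local linear regression, which would simply return the intercept of the single line fitted at x itself.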
Keywords/Search Tags: partly linear models, double-smoothing local linear estimator, local linear estimator, optimal rate of convergence, asymptotic normal distribution