
Some contributions to the theory of unbiased statistical prediction

Posted on: 2011-09-27
Degree: Ph.D.
Type: Dissertation
University: The George Washington University
Candidate: Qin, Min
Full Text: PDF
GTID: 1440390002452991
Subject: Statistics
Abstract/Summary:
This dissertation investigates two research topics in statistical prediction, which can be regarded as a general inference problem. Specifically, we consider predicting the realized but unknown value of a random variable Z based on an observable random vector Y, assuming that Y and Z have a joint density f(y, z | θ), where θ ∈ Θ is an unknown parameter vector.

One topic is finding and utilizing lower bounds for the prediction mean squared error (MSE). We develop Kshirsagar-type lower bounds for prediction MSE and discuss their usefulness. These bounds are shown to be at least as sharp as the corresponding Bhattacharyya bounds; however, the regularity conditions required for the Bhattacharyya bounds are not needed for the validity of the new lower bounds, so our bounds are more widely applicable. Kshirsagar-type bounds are especially useful when the parameter space is discrete, where Bhattacharyya bounds do not exist. Examining some sufficient conditions for attaining the lower bounds, we obtain a useful and easy method for finding minimum MSE unbiased predictors, and we apply it to certain specific problems.

The second topic is risk unbiasedness. We define risk unbiased predictors by formalizing the idea that, on average, a risk unbiased predictor should be at least as close to the "true" Z as to any "wrong" Z, assessing closeness in terms of the specified loss function L(d, z, y, θ), where d and z are, respectively, the predicted and realized (but unobserved) values of Z. For generality and wider applicability, we allow the loss function to depend also on the observed data y and the true parameter θ. A novel aspect of our approach is measuring the closeness between d and Z by a regret function, derived suitably from the given loss function. This general concept is more relevant than mean unbiasedness for asymmetric loss functions, for example, the LINEX loss function.

We also investigate theoretical properties and the usefulness of our definition of risk unbiased predictors. For squared error loss, we present a method for deriving best (minimum risk) risk unbiased predictors when the conditional mean of Z given Y is of the form h(Y) + k(Y)η(θ). This optimum predictor is closely connected to the best unbiased estimator of η(θ) under the modified model Y ∼ f*(y | θ) ∝ k²(y)f(y | θ). A Rao-Blackwell type result is developed for a class of loss functions that includes the squared error and LINEX losses as special cases: under certain conditions, the best risk unbiased predictor can be obtained by Rao-Blackwellizing any risk unbiased predictor. For location-scale families, we prove that a unique best risk unbiased predictor of Z, if it exists, is equivariant. When the loss is the weighted squared error L(d, z, σ) = [(d − z)/σ]², where σ is a scale parameter, and the conditional mean of Z given Y has the form h(Y) + k(Y)σ, the minimum risk equivariant predictor is risk unbiased, in which case the minimum risk equivariant predictor and the best risk unbiased predictor coincide. The concepts and results are illustrated with a variety of examples. One important finding is that in some applications a best unbiased predictor does not exist, but a best risk unbiased predictor can be obtained. Thus, risk unbiasedness can be a useful tool in selecting a predictor.
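For illustration only (this numerical example is not part of the dissertation), the Python sketch below shows why mean unbiasedness can be a weak criterion under an asymmetric loss such as LINEX. For a normally distributed target, the predictor minimizing expected LINEX loss is the mean shifted downward by aτ²/2, and it incurs a smaller expected loss than the conditional mean itself. The normal distribution and the values of μ, τ, a, and b are assumptions made purely for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# LINEX loss: asymmetric in the prediction error (d - z).
# L(d, z) = b * (exp(a*(d - z)) - a*(d - z) - 1), with a != 0, b > 0.
def linex_loss(d, z, a=1.0, b=1.0):
    delta = d - z
    return b * (np.exp(a * delta) - a * delta - 1.0)

# Illustrative target: Z ~ N(mu, tau^2) with known mu and tau,
# standing in for the conditional distribution of Z given Y.
mu, tau, a = 0.0, 1.0, 1.0
z = rng.normal(mu, tau, size=1_000_000)

d_mean = mu                      # the conditional mean (mean-unbiased choice)
d_linex = mu - a * tau**2 / 2    # minimizes expected LINEX loss for this normal target

print("expected LINEX loss, conditional mean :", linex_loss(d_mean, z, a).mean())
print("expected LINEX loss, shifted predictor:", linex_loss(d_linex, z, a).mean())
```

With a = 1 and τ = 1 the shifted predictor's expected loss is 0.5, versus roughly 0.65 for the conditional mean, illustrating that the mean-unbiased predictor need not be preferred once the loss is asymmetric.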
Keywords/Search Tags: Unbiased, Prediction, Lower bounds, Squared error, Loss function