
Computer and physical experiments: Design, modeling, and multivariate interpolation

Posted on: 2011-06-17
Degree: Ph.D.
Type: Dissertation
University: Georgia Institute of Technology
Candidate: Kang, Lulu
GTID: 1440390002453364
Subject: Statistics
Abstract/Summary:
Chapter 1 deals with robust parameter design experiments. In robust parameter design it is critical to estimate control-by-noise interactions. This can be achieved with a cross array, the cross product of a design for the control factors and a design for the noise factors. However, the total run size of such arrays can be prohibitively large. To reduce the run size, single arrays have been proposed in the literature, where a modified effect hierarchy principle guides the optimal selection of the arrays. In Chapter 1, we argue that the effect hierarchy principle should not be altered to achieve the robustness objective of the experiment. We propose a Bayesian approach for developing single arrays that incorporate the importance of control-by-noise interactions without altering the effect hierarchy. The approach is very general and places no restrictions on the number of runs or levels, the type of factors, or the type of designs. A modified exchange algorithm is proposed for finding the optimal single arrays. We also explain how to design experiments with internal noise factors, a topic that has received scant attention in the literature.

The presence of block effects makes the optimal selection of fractional factorial designs difficult. Existing frequentist methods try to combine the treatment and block wordlength patterns and apply the minimum aberration criterion to find the optimal design. However, ambiguities arise in combining the two wordlength patterns, so the optimality of such designs can be challenged. In Chapter 2 we propose a Bayesian approach to overcome this problem. The main technique is to postulate a model and prior distribution that satisfy the common assumptions in blocking, and then to develop an optimal design criterion for the efficient estimation of treatment effects. We apply our method to develop regular, nonregular, and mixed-level blocked designs.
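For concreteness, the cross array of Chapter 1 is simply the Cartesian product of the control-factor design and the noise-factor design, which is why its run size multiplies. A minimal sketch; the two designs below are illustrative, not taken from the thesis:

```python
from itertools import product

def cross_array(control_design, noise_design):
    """Cross array: every control run is paired with every noise run,
    so the run size is len(control_design) * len(noise_design)."""
    return [c + n for c, n in product(control_design, noise_design)]

# A full factorial 2^2 control design crossed with a 2-run noise design.
control = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
noise = [(-1,), (1,)]
runs = cross_array(control, noise)   # 4 * 2 = 8 runs
```

The multiplicative run size is exactly the cost that motivates the single arrays proposed in Chapter 1.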
Several examples are presented to illustrate the advantages of the proposed method.

Chapter 3 is on mixture-of-mixtures experiments. In this kind of mixture experiment, major components are defined as components that are themselves mixtures of other components, called minor components. Sometimes the components are divided into different categories, where each category is called a major component and the components within it become minor components. The special structure of the mixture-of-mixtures experiment makes the design and modeling approaches different from those of a typical mixture experiment. In Chapter 3, we propose a new model, called the major-minor model, to overcome some of the limitations of the commonly used multiple-Scheffe model. We also provide a strategy for designing experiments that are much smaller than those based on existing methods. We then apply the proposed design and modeling approach to a mixture-of-mixtures experiment conducted to formulate a new potato crisp.

Chapter 4 proposes a new interpolation method named regression-based inverse distance weighting. It is based on inverse distance weighting (IDW), a simple multivariate interpolation method with poor prediction accuracy. In this chapter we show that the prediction accuracy of IDW can be substantially improved by integrating it with a linear regression model. The new predictor is flexible, computationally efficient, and works well in problems with high dimensions and/or large data sets. We also develop a heuristic method for constructing confidence intervals for prediction.

Chapter 5 proposes two analysis methods. The first is kernel sum regression, which uses an iterative implementation of the classic kernel regression. An algorithm is constructed to choose the optimal number of regressions N and the bandwidth parameters based on generalized cross-validation.
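One plausible reading of the iterative scheme is residual refitting: each pass fits a kernel regression to the current residuals and adds it to the running fit. A minimal sketch in Python, assuming a Nadaraya-Watson smoother with a Gaussian kernel and a fixed bandwidth for 1-D inputs; the GCV-based choice of N and the bandwidth described above is omitted, and the function names are illustrative, not from the thesis:

```python
import numpy as np

def nw_smooth(x_train, y, x_eval, bandwidth):
    """Nadaraya-Watson kernel regression with a Gaussian kernel (1-D inputs)."""
    d = x_eval[:, None] - x_train[None, :]
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    return (w @ y) / w.sum(axis=1)

def kernel_sum_regression(x, y, x_eval, bandwidth, n_iter):
    """Sum of n_iter kernel regressions, each fitted to the current residuals."""
    residual = y.astype(float)
    fit = np.zeros(len(x_eval))
    for _ in range(n_iter):
        fit += nw_smooth(x, residual, x_eval, bandwidth)   # add this pass's fit
        residual = residual - nw_smooth(x, residual, x, bandwidth)  # shrink residuals
    return fit
```

Consistent with the convergence result stated below, letting `n_iter` grow drives the fit at the training points toward the observed responses.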
The performance of kernel sum regression is shown through two examples to be superior to that of simple kernel regression; the extra regressions thus do improve the prediction. In the second part, we show that as the number of iterations increases to infinity, kernel sum regression converges to an interpolator, which we call kernel sum interpolation. It has many interesting connections with other interpolation methods, such as radial basis functions, kriging, and the regression-based inverse distance weighting method introduced in Chapter 4. Compared with these interpolators, kernel sum interpolation is shown to be more robust to the bandwidth parameter. (Abstract shortened by UMI.)
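Plain IDW, the baseline that the Chapter 4 method builds on, predicts at a new point with a weighted average of the observed responses, the weights being an inverse power of the distance to each design point. A minimal sketch, assuming Euclidean distance and power p = 2; the regression integration and the confidence-interval heuristic are specific to the thesis and not reproduced here, and the names are illustrative:

```python
import numpy as np

def idw_predict(x_train, y_train, x_new, power=2.0):
    """Inverse distance weighting: a weighted average of observed responses
    with weights 1 / distance**power; exact at the training points."""
    preds = []
    for x0 in np.atleast_2d(x_new):
        d = np.linalg.norm(np.atleast_2d(x_train) - x0, axis=1)
        if np.any(d == 0):               # query coincides with a design point
            preds.append(float(y_train[np.argmin(d)]))
            continue
        w = d ** -power
        preds.append(float(w @ y_train / w.sum()))
    return np.array(preds)
```

Because the weights are positive and sum to one, every IDW prediction stays within the range of the observed responses, one source of the poor accuracy that the regression integration is designed to fix.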
Keywords/Search Tags: Experiments, Interpolation, Kernel sum, Chapter, Model, Parameter, Robust, Inverse distance weighting