
Three problems in statistical process control

Posted on: 2001-07-01
Degree: Ph.D.
Type: Dissertation
University: State University of New York at Stony Brook
Candidate: Tucker, Ann Hong
Full Text: PDF
GTID: 1469390014956744
Subject: Statistics
Abstract/Summary:
Three different aspects of statistical process control (SPC) are studied.

In the first problem, we examine a set of non-normal data generated by a process in medical research and attempt to determine the sample sizes needed to set 95% confidence intervals for population percentiles. We develop a nonparametric procedure and find that the required sample sizes are prohibitively large by the standards of medical practice. We then approach the problem using normal theory and derive a general expression for the sample size needed to obtain a 95% confidence interval of any length for any population percentile. Not unexpectedly, the sample sizes are much lower under the normal procedure: 584 vs. 1,500 for the 1st percentile and 1,114 vs. 1,800 for the 95th percentile. These values are consistent with the practical limitations imposed by the medical research community.

We then apply the Box-Cox transformation to the data and find that the transformation is successful for our data. The transformation is also robust to the value of the transformation parameter over a wide range and is successful for a wide range of distributions. We conclude that the Box-Cox transformation, combined with the normal procedure for obtaining confidence intervals for population percentiles, is appropriate for use in the research setting.

The second problem compares the operating characteristics of three SPC plans for monitoring a process in blood banking. Two of the plans are well known; the third is a novel application of the two-sample Kolmogorov-Smirnov (KS) test. We examine the power of the three plans in different regions of the process parameter space and conclude that the two-sample KS test is the only procedure that is consistent while simultaneously controlling the probability of a type I error.

In the third problem, we analyze two sequential procedures from the literature: the generalized likelihood ratio (GLR) scheme and a generalized cumulative sum (CUSUM) scheme. We extend a simulation study from the literature and find that the GLR scheme outperforms the CUSUM scheme for shifts in the standard deviation of the process. We then consider the high computational cost of implementing the two schemes and explore a "window-limiting" approach to reducing that burden. We find that a recursive algorithm, combined with a surprisingly effective new window-limiting technique for the CUSUM scheme, reduces the computational complexity by a factor of 16,000 at an in-control average run length (ARL) of 1,000. The computational complexity of the full CUSUM scheme depends on the in-control ARL, gamma, and grows as a cubic function of gamma. The window-limited CUSUM scheme was obtained by empirically determining, over more than 230,000 applications of the test, the smallest computational window for which the limited scheme performs the same as the full scheme. Our simulation studies indicate that this window is quite small (60 observations), which allows the computational complexity of the window-limited CUSUM scheme to grow as a linear function of gamma.
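The abstract does not reproduce the sample-size expression. The sketch below shows one standard normal-theory calculation of this kind, in Python: under normality the percentile estimator xbar + z_p*s has delta-method variance (sigma^2/n)(1 + z_p^2/2), which can be solved for n given a target interval length. The interval lengths used here are illustrative assumptions, not the dissertation's, so the printed values will not match the figures quoted above.

    # One standard normal-theory sample-size calculation for a 95% CI on a
    # population percentile (illustrative; the dissertation's exact
    # expression and target interval lengths are not given in the abstract).
    from scipy.stats import norm

    def normal_percentile_n(p, ci_length_in_sd, conf=0.95):
        """Approximate n so the CI for the p-th percentile has the requested
        total length, expressed in units of the process standard deviation."""
        z_p = norm.ppf(p)                  # standard-normal percentile point
        z_conf = norm.ppf(0.5 + conf / 2)  # 1.96 for a 95% interval
        var_factor = 1 + z_p**2 / 2        # delta-method variance inflation
        return (2 * z_conf) ** 2 * var_factor / ci_length_in_sd**2

    print(round(normal_percentile_n(0.01, ci_length_in_sd=0.5)))  # 1st percentile
    print(round(normal_percentile_n(0.95, ci_length_in_sd=0.5)))  # 95th percentile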
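A minimal sketch of the transform-then-invert approach from the first problem: Box-Cox the data with SciPy's maximum-likelihood choice of the transformation parameter, set a normal-theory interval for a percentile on the transformed scale, and invert. The data are simulated skewed values, not the dissertation's medical data.

    # Box-Cox transform, normal-theory percentile CI, back-transform
    # (simulated data; illustrative only).
    import numpy as np
    from scipy.stats import boxcox, norm
    from scipy.special import inv_boxcox

    rng = np.random.default_rng(0)
    x = rng.lognormal(mean=1.0, sigma=0.6, size=500)  # skewed positive data

    y, lam = boxcox(x)                      # lam chosen by maximum likelihood
    m, s, n = y.mean(), y.std(ddof=1), len(y)

    p = 0.95                                # target percentile
    z_p, z = norm.ppf(p), norm.ppf(0.975)
    se = s * np.sqrt((1 + z_p**2 / 2) / n)  # delta-method standard error
    lo, hi = m + z_p * s - z * se, m + z_p * s + z * se

    print("lambda:", lam)
    print("95% CI for the 95th percentile:", inv_boxcox(lo, lam), inv_boxcox(hi, lam))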
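For the second problem, one plausible way to set up the KS-based plan is to compare each incoming batch against a fixed in-control reference sample and signal when the two-sample test rejects; because the KS test is sensitive to any change in distribution, the plan responds to shifts in spread as well as location. The reference size, batch size, and alpha level below are illustrative assumptions, not the dissertation's operating parameters.

    # Two-sample KS monitoring sketch: signal when a new batch differs
    # from a fixed in-control reference sample (illustrative parameters).
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(1)
    reference = rng.normal(0.0, 1.0, size=200)  # in-control baseline

    def out_of_control(batch, alpha=0.05):
        stat, pval = ks_2samp(reference, batch)
        return pval < alpha                      # True -> out-of-control signal

    print(out_of_control(rng.normal(0.0, 1.0, size=50)))  # in control: usually False
    print(out_of_control(rng.normal(0.0, 2.0, size=50)))  # sd shift: usually True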
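For the third problem, the sketch below gives a minimal window-limited generalized CUSUM for a shift in the standard deviation of a standardized N(0, 1) stream: at each time point the likelihood-ratio statistic is maximized only over change points in the last 60 observations, so each step costs O(window) rather than O(t). The window size echoes the abstract's empirical finding, but the threshold and the brute-force inner loop are illustrative assumptions, not the dissertation's recursive implementation; in practice the threshold would be calibrated by simulation to a target in-control ARL.

    # Window-limited generalized CUSUM for a variance change in an
    # N(0, 1) stream (illustrative threshold; window = 60 per the abstract).
    import numpy as np

    def window_limited_cusum(x, window=60, h=10.0):
        """Return the first alarm time, or None if no alarm."""
        x2 = np.asarray(x) ** 2
        for t in range(1, len(x2) + 1):
            best = 0.0
            for k in range(max(0, t - window), t):       # candidate change points
                m = t - k
                s2 = x2[k:t].mean()                      # MLE of post-change variance
                llr = 0.5 * m * (s2 - 1.0 - np.log(s2))  # GLR for a variance change
                best = max(best, llr)
            if best > h:
                return t
        return None

    rng = np.random.default_rng(2)
    stream = np.concatenate([rng.normal(0, 1, 300), rng.normal(0, 1.5, 200)])
    print(window_limited_cusum(stream))  # expect an alarm shortly after t = 300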
Keywords/Search Tags: CUSUM scheme, Process, Problem, Computational complexity, Three, SPC