On the feasibility of applying skills assessment models to achievement test data
Posted on: 2007-03-06 | Degree: Ph.D. | Type: Dissertation
University: The University of Iowa | Candidate: von Schrader, Sarah
Full Text: PDF | GTID: 1457390005988266
Subject: Educational tests & measurements

Abstract/Summary:

Recent studies using skills assessment models have suggested that it may be possible to extract diagnostic information from standardized tests that were not designed specifically for such diagnosis. With the current proliferation of testing in schools, many students are spending more classroom time taking standardized achievement tests, so the prospect of extracting additional diagnostic information from these tests is appealing. To explore this prospect, this study examined the use of skills assessment modeling with two subtests of the Iowa Tests of Educational Development (ITED), a large-scale achievement battery. Specifically, response data were analyzed from a nationwide sample of eleventh-grade students on the Math and Language subtests.

In this study, the feasibility and utility of higher-order versions of the DINA model and the simplified RUM were examined through both simulation studies and real-data analyses. Both models require the definition of an item-by-attribute Q-matrix; ITED test specifications and alternatives to them were used as Q-matrices. Model parameter estimates were obtained using Markov chain Monte Carlo (MCMC) methods. An important focus of this study was methods for assessing model-data fit, since the benefits of these models can be realized only if the model accurately represents the test data.

The simulation studies indicated that parameter values were recovered well under conditions of model-data fit and under various conditions of model misfit. Posterior predictive model checking methods were particularly useful for detecting some types of model-data misfit. Unfortunately, complexities arose in the analysis of the ITED subtest data.
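The DINA ("deterministic inputs, noisy and-gate") item response function named above can be sketched in a few lines. The Q-matrix, slip, and guess values below are illustrative toy numbers, not the ITED test specifications:

```python
import numpy as np

# Q[j, k] = 1 if item j requires attribute k;
# alpha[k] = 1 if the examinee has mastered attribute k.
def dina_correct_prob(alpha, Q, slip, guess):
    """P(correct response) for each item under the DINA model.

    eta_j = 1 iff the examinee has mastered every attribute item j
    requires (a conjunctive, all-or-nothing latent response);
    P(X_j = 1) = (1 - s_j) if eta_j = 1, else g_j.
    """
    alpha = np.asarray(alpha)
    Q = np.asarray(Q)
    eta = np.all(alpha >= Q, axis=1)  # True only if all required attributes are mastered
    return np.where(eta, 1 - slip, guess)

# Toy example: 3 items measuring 2 attributes
Q = np.array([[1, 0],
              [0, 1],
              [1, 1]])
slip = np.array([0.1, 0.2, 0.1])
guess = np.array([0.2, 0.1, 0.25])

# An examinee who has mastered only the first attribute answers item 1
# with probability 1 - s_1 = 0.9 and falls back to guessing on items 2 and 3.
print(dina_correct_prob([1, 0], Q, slip, guess))  # → [0.9  0.1  0.25]
```

The conjunctive `eta` term is what makes the Q-matrix so consequential: a single wrong entry changes which mastery patterns an item can distinguish.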
The attributes defined by the Q-matrix did not account for many of the important features of the observed test data. In each analysis of real test data, the estimates of attribute mastery were so strongly related that individual attributes did not provide unique information. The analyses of the ITED subtest data emphasized the importance of thoroughly evaluating model-data fit: when these skills assessment models do not accurately reflect the data, the model-based attribute mastery classifications can be uninformative or even misleading.

Keywords/Search Tags: Skills assessment models, Data, Test, Achievement, ITED
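The posterior predictive model checking approach highlighted in the abstract can be illustrated with a minimal sketch. The discrepancy measure here (a pairwise item odds ratio) and all of the data are invented for illustration; in the dissertation's setting the replicated datasets would be drawn from the posterior of the fitted DINA or RUM model rather than from coin flips:

```python
import numpy as np

rng = np.random.default_rng(42)

def odds_ratio(x, y):
    """Sample odds ratio for two binary item-score vectors (0.5 continuity correction)."""
    n11 = np.sum((x == 1) & (y == 1)) + 0.5
    n10 = np.sum((x == 1) & (y == 0)) + 0.5
    n01 = np.sum((x == 0) & (y == 1)) + 0.5
    n00 = np.sum((x == 0) & (y == 0)) + 0.5
    return (n11 * n00) / (n10 * n01)

# "Observed" scores on two items driven by a shared latent skill,
# so the items are strongly positively associated.
n = 500
skill = rng.random(n) < 0.5
item1 = np.where(skill, rng.random(n) < 0.9, rng.random(n) < 0.1).astype(int)
item2 = np.where(skill, rng.random(n) < 0.9, rng.random(n) < 0.1).astype(int)

# Replicated datasets from a misspecified model that treats the
# two items as independent coin flips.
rep_ors = np.array([
    odds_ratio(rng.integers(0, 2, n), rng.integers(0, 2, n))
    for _ in range(200)
])

obs_or = odds_ratio(item1, item2)
# Posterior predictive p-value: share of replicated discrepancies at
# least as large as the observed one; an extreme value flags misfit.
ppp = np.mean(rep_ors >= obs_or)
print(f"observed OR = {obs_or:.1f}, PPP-value = {ppp:.3f}")
```

Because the independence model cannot reproduce the strong item association, the replicated odds ratios cluster near 1 while the observed one is far larger, yielding a PPP-value near 0 and signaling the kind of model-data misfit these checks are designed to detect.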