
Factor analytic models and cognitive diagnostic models: How comparable are they?---A comparison of R-RUM and compensatory MIRT model with respect to cognitive feedback

Posted on: 2010-03-28
Degree: Ph.D
Type: Dissertation
University: The University of North Carolina at Greensboro
Candidate: Wang, Ying-chen
Full Text: PDF
GTID: 1445390002472511
Subject: Education
Abstract/Summary:
The necessity and importance of cognitive diagnosis are increasingly recognized by researchers. As a result, a number of models have been defined for cognitive diagnosis: the IRT-based discrete cognitive diagnosis models (ICDMs) and the traditional continuous latent trait models. However, little literature compares the newly defined ICDMs, which are based on constrained latent class models, to more traditional approaches such as multidimensional factor analytic models. The purpose of this study is to compare the feedback provided to examinees using a multidimensional item response theory (MIRT) model versus feedback provided using an ICDM. Specifically, a Monte Carlo study was used to compare the diagnostic results from the R-RUM, a noncompensatory model with dichotomous abilities, to diagnoses based on the 2PL compensatory MIRT (CMIRT) model, a compensatory model with continuous abilities. A fully crossed design was used to examine the effects of test quality, Q-matrix structure, and inter-attribute correlation on the agreement rates of the diagnostic feedback between the two models. Because one of the factors of this study is "test quality," an initial study was performed to explore the possible relationship between test quality (including estimated model parameters) and the model used to characterize examinee responses. In addition, because the two models provide examinee information in different ways (one discrete and one continuous), a logistic regression method that discretizes the continuous estimates provided by the 2PL CMIRT is discussed as a way to maximize diagnostic agreement between the two models.

The significance of this study is that, if the two models agree consistently across the experimental conditions, model selection for cognitive purposes can be based largely on the preference of the researcher, informed by an underlying theory and the purposes of the assessment.
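For context, the two models being compared have standard forms in the psychometric literature. The notation below is the conventional one for these models, not necessarily the dissertation's own:

```latex
% Reduced RUM (noncompensatory; dichotomous attributes \alpha_{ik} \in \{0,1\}):
% a missed required attribute (q_{jk}=1, \alpha_{ik}=0) multiplies the success
% probability by a penalty r_{jk}^{*} < 1.
P(X_{ij}=1 \mid \boldsymbol{\alpha}_i)
  = \pi_j^{*} \prod_{k=1}^{K} \left(r_{jk}^{*}\right)^{q_{jk}\,(1-\alpha_{ik})}

% Compensatory 2PL MIRT (continuous abilities \theta_{ik}):
% a high ability on one dimension can compensate for a low ability on another,
% because the dimensions combine additively inside the logit.
P(X_{ij}=1 \mid \boldsymbol{\theta}_i)
  = \frac{\exp\left(\sum_{k=1}^{K} a_{jk}\,\theta_{ik} + d_j\right)}
         {1 + \exp\left(\sum_{k=1}^{K} a_{jk}\,\theta_{ik} + d_j\right)}
```

The noncompensatory product in the first model versus the additive logit in the second is the structural difference driving the comparison.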
However, if the two models do not agree consistently, this study will help (1) to identify situations where the two models consistently agree or disagree and (2) to explore the feasibility of using the MIRT model for classifying examinees cognitively.

The results from the first study demonstrate that the two models define test quality in different ways and that the item parameters of the two models are only weakly associated. Therefore, subsequent comparisons were made within each model after estimating both the R-RUM and the 2PL CMIRT on common datasets. The results from the final study indicate that (1) the two models agree more consistently when data are generated under the R-RUM, (2) the agreement rate between the two models is higher under most simple-structure scenarios, (3) both models show more error when data are generated under the MIRT model, and (4) the MIRT model does not appear to be as successful at classification decisions as the R-RUM. Possible future directions are discussed.
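The logistic regression discretization mentioned in the abstract can be sketched in toy form. Everything below is an illustrative assumption, not the dissertation's actual procedure or data: simulated continuous ability estimates stand in for 2PL CMIRT scores, simulated dichotomous classifications stand in for R-RUM mastery calls, and a 0.5 cutoff on the fitted probability converts the continuous scale into a binary diagnosis so that agreement between the two kinds of feedback can be computed.

```python
import numpy as np

# Hypothetical sketch: discretize continuous MIRT-style ability estimates
# into binary mastery calls with a one-predictor logistic regression fit
# by gradient descent. All data here are simulated, not from the study.
rng = np.random.default_rng(0)

# Continuous ability estimates for 500 examinees on one attribute.
theta = rng.normal(0.0, 1.0, size=500)

# Dichotomous classifications (stand-in for R-RUM output): mastery is
# more likely for higher theta, with noise.
p_true = 1.0 / (1.0 + np.exp(-(2.0 * theta - 0.3)))
alpha = (rng.uniform(size=500) < p_true).astype(float)

# Fit P(alpha = 1 | theta) = sigmoid(b0 + b1 * theta) by minimizing
# the mean logistic loss with plain gradient descent.
b0, b1, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(b0 + b1 * theta)))
    b0 -= lr * np.mean(p - alpha)            # d(loss)/d(b0)
    b1 -= lr * np.mean((p - alpha) * theta)  # d(loss)/d(b1)

# Discretize: call an examinee a master when the fitted probability >= 0.5.
p_hat = 1.0 / (1.0 + np.exp(-(b0 + b1 * theta)))
alpha_hat = (p_hat >= 0.5).astype(float)

# Agreement rate between the discretized continuous estimates and the
# dichotomous classifications.
agreement = np.mean(alpha_hat == alpha)
print(f"slope={b1:.2f}, agreement={agreement:.2f}")
```

The 0.5 cutoff is one simple choice; in practice the fitted regression itself determines where on the continuous scale the classification flips, which is what makes this a data-driven way to maximize agreement rather than a fixed arbitrary threshold on theta.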
Keywords/Search Tags:MIRT model, R-RUM, Cognitive, Diagnostic, Feedback, Test quality