
Investigating the construct validity of the Community Language Program (CLP) English writing test

Posted on: 2008-07-10
Degree: Ed.D
Type: Dissertation
University: Teachers College, Columbia University
Candidate: Park, Taejoon
Full Text: PDF
GTID: 1445390005966485
Subject: Education
Abstract/Summary:
To better understand the complex nature of writing performance assessment, the present study systematically investigated multiple sources of variance that affect test scores, using an analysis in which the effects of these different sources of variability could be disentangled. Specifically, the primary purpose of this study was to investigate the extent to which test takers' performance on the CLP English writing test was affected by the aspects of writing ability tapped by the four analytic rating scales (i.e., task fulfillment, content control, organizational control, and language control) and by the test methods (i.e., the tasks and raters) used to elicit and score test performance. Multivariate generalizability theory was employed to investigate the dependability of ratings for the individual rating scales and for the composite scores reported to test takers. In addition, a series of substantively plausible confirmatory factor analysis (CFA) models was tested to identify the best representation of the underlying trait and method factor structure for the current measurement design of the CLP English writing test.

Most importantly, the results showed that examinees' scores on the writing test were significantly affected by method effects associated with the writing tasks used to elicit test performance. The two test tasks differed considerably in how they were contextualized (e.g., informal vs. formal), and test takers performed differently on them as a result. The observed variance attributable to the test tasks is therefore evidence of the expected influence of "context" on performance (i.e., construct-relevant variance) and should not be viewed as measurement error. Given the instability of test takers' performances across tasks observed in the present study and in a number of studies in the general measurement literature, the way forward seems to be to recognize that, while some contexts activate stable ability features, others produce more variable performance from test takers. Thus, the current results suggest that language testers need to take both a construct-based and a task-based approach into account in test design and score interpretation, especially in the context of writing performance assessment.
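The abstract does not reproduce the analysis itself. As a rough illustration of the kind of computation generalizability theory involves, the sketch below estimates variance components and the relative (Eρ²) and absolute (Φ) dependability coefficients for a simplified univariate, fully crossed persons × tasks × raters design with simulated ratings. The design, sample sizes, and data are all assumptions for illustration only; the dissertation's actual analysis used multivariate generalizability theory across the four analytic rating scales.

```python
# Illustrative sketch (NOT the dissertation's analysis): ANOVA-based
# variance-component estimation for a fully crossed persons x tasks x raters
# (p x t x r) random-effects G-study, with simulated ratings.
import numpy as np

rng = np.random.default_rng(0)
n_p, n_t, n_r = 30, 2, 2  # persons, tasks, raters (2 tasks / 2 raters, assumed)

# Simulate ratings: person ability + task and rater effects + person-by-task
# interaction + residual noise. All effect sizes are arbitrary assumptions.
p_eff = rng.normal(0, 1.0, (n_p, 1, 1))
t_eff = rng.normal(0, 0.4, (1, n_t, 1))
r_eff = rng.normal(0, 0.2, (1, 1, n_r))
pt_eff = rng.normal(0, 0.5, (n_p, n_t, 1))
noise = rng.normal(0, 0.6, (n_p, n_t, n_r))
X = 3.0 + p_eff + t_eff + r_eff + pt_eff + noise  # shape (p, t, r)

m = X.mean()
mp = X.mean(axis=(1, 2)); mt = X.mean(axis=(0, 2)); mr = X.mean(axis=(0, 1))
mpt = X.mean(axis=2); mpr = X.mean(axis=1); mtr = X.mean(axis=0)

# Sums of squares for the three-way crossed design (one observation per cell).
SS_p = n_t * n_r * ((mp - m) ** 2).sum()
SS_t = n_p * n_r * ((mt - m) ** 2).sum()
SS_r = n_p * n_t * ((mr - m) ** 2).sum()
SS_pt = n_r * ((mpt - mp[:, None] - mt[None, :] + m) ** 2).sum()
SS_pr = n_t * ((mpr - mp[:, None] - mr[None, :] + m) ** 2).sum()
SS_tr = n_p * ((mtr - mt[:, None] - mr[None, :] + m) ** 2).sum()
resid = (X - mpt[:, :, None] - mpr[:, None, :] - mtr[None, :, :]
         + mp[:, None, None] + mt[None, :, None] + mr[None, None, :] - m)
SS_ptr = (resid ** 2).sum()

# Mean squares, then variance components from the expected-mean-square
# equations for the random p x t x r model (negative estimates clipped to 0).
MS = {
    "p": SS_p / (n_p - 1), "t": SS_t / (n_t - 1), "r": SS_r / (n_r - 1),
    "pt": SS_pt / ((n_p - 1) * (n_t - 1)),
    "pr": SS_pr / ((n_p - 1) * (n_r - 1)),
    "tr": SS_tr / ((n_t - 1) * (n_r - 1)),
    "ptr": SS_ptr / ((n_p - 1) * (n_t - 1) * (n_r - 1)),
}
v = {"ptr": MS["ptr"]}
v["pt"] = max((MS["pt"] - MS["ptr"]) / n_r, 0.0)
v["pr"] = max((MS["pr"] - MS["ptr"]) / n_t, 0.0)
v["tr"] = max((MS["tr"] - MS["ptr"]) / n_p, 0.0)
v["p"] = max((MS["p"] - MS["pt"] - MS["pr"] + MS["ptr"]) / (n_t * n_r), 0.0)
v["t"] = max((MS["t"] - MS["pt"] - MS["tr"] + MS["ptr"]) / (n_p * n_r), 0.0)
v["r"] = max((MS["r"] - MS["pr"] - MS["tr"] + MS["ptr"]) / (n_p * n_t), 0.0)

# D-study with the same numbers of tasks and raters: relative error uses only
# the interactions with persons; absolute error adds the task and rater main
# effects and their interaction.
rel_err = v["pt"] / n_t + v["pr"] / n_r + v["ptr"] / (n_t * n_r)
abs_err = rel_err + v["t"] / n_t + v["r"] / n_r + v["tr"] / (n_t * n_r)
print("Generalizability coefficient Erho^2:", v["p"] / (v["p"] + rel_err))
print("Dependability coefficient Phi:      ", v["p"] / (v["p"] + abs_err))
```

Note how a large task main effect or person-by-task interaction depresses Φ (and, for the interaction, Eρ²): this is the mechanism behind the abstract's central finding that task-related variance matters for score interpretation, whether or not it is treated as measurement error.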
Keywords/Search Tags: Writing, Test, Performance, CLP, Language