Justifying The Validity Of Literary Competence Tasks For English Language Teacher Certificate

Posted on: 2022-07-14    Degree: Doctor    Type: Dissertation
Country: China    Candidate: X J Jiang    Full Text: PDF
GTID: 1485306320488894    Subject: Foreign Linguistics and Applied Linguistics
Abstract/Summary:
The literary competence tasks in the English Language Teacher Certificate (ELT Certificate) were developed by China Language Assessment (CLA) to assess pre-service high school English teachers' literary competence, i.e. their expected ability to understand and appreciate English stories. This study examines the validity of these test tasks in order to guarantee their quality. The argument-based approach now dominates validation studies in language testing, but current validity argument frameworks are not suitable for tasks that are still at the developmental stage. A Language Test Task Development and Validity Argument (TTDVA) framework was therefore constructed on the basis of the Assessment Use Argument (AUA) (Bachman & Palmer, 2010). It consists of a Construct Defining Inference, a Test Task Designing Inference and a Rating Criteria Developing Inference, and upholds the claims that "the interpretations about the ability to be assessed are meaningful and generalizable" and that "assessment records are consistent."

To collect the validity evidence required by the Test Task Designing Inference and the Rating Criteria Developing Inference, an exploratory sequential mixed design was employed, gathering both qualitative and quantitative evidence to support these two inferences. For the qualitative analysis, the opinions and suggestions of 6 experts solicited in a focus group interview were analyzed to assess the content relevance, adequacy and appropriateness of the tasks; 59 examinees' responses to the tasks were analyzed to determine how adequately the tasks covered the intended construct and how well the rating scales measured their performance; and 5 raters' feedback was analyzed to examine the dependability and operability of the rating criteria. For the quantitative analysis, the scores that the 5 raters gave to 30 examinees were analyzed with several methods: multi-faceted Rasch model (MFRM) analysis was used to judge the discrimination of the tasks, the separability of the rating criteria, the effectiveness of the scale categories and rater consistency; Pearson correlation analysis was used to study inter-rater consistency; and the Kruskal-Wallis test was applied to verify the consistency between the examinees' performance and their scores.

The results show that the test tasks are well designed, discriminate satisfactorily, and can be used to assess the intended literary competence. The scale categories are effective and well separated, both intra-rater and inter-rater consistency are satisfactory, and the examinees' scores are clearly consistent with their performance, which supports the rating criteria. These findings back the claims and warrants in the validity argument: both the downward inferential links (from "defining the test task construct" to "scores") and the upward inferential links (from "scores" to "meaningful interpretations") are established, and the validity argument is justified. To some extent, the TTDVA framework built in this study complements current validation frameworks. The read-to-write and write-to-speak integrated tasks for assessing pre-service high school English teachers' literary competence may also offer useful implications for improving the design of the current English teacher certificate test and for cultivating and assessing pre-service English teachers' literary competence in China.
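As an illustration of the score-level analyses mentioned above, the Python sketch below computes pairwise Pearson correlations between raters and a Kruskal-Wallis test across performance bands. It is a minimal sketch, not the dissertation's actual analysis: the scores, scale range and performance grouping are hypothetical, and the MFRM analysis is omitted because it requires dedicated Rasch software.

```python
# Hypothetical illustration of two score-level checks: pairwise Pearson
# correlations between raters and a Kruskal-Wallis test across performance
# bands. All data below are simulated, not from the dissertation.
from itertools import combinations

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical: 30 examinees scored by 5 raters on a 0-10 scale.
scores = rng.integers(0, 11, size=(30, 5))

# Inter-rater consistency: Pearson r for every pair of raters.
for i, j in combinations(range(scores.shape[1]), 2):
    r, p = stats.pearsonr(scores[:, i], scores[:, j])
    print(f"rater {i + 1} vs rater {j + 1}: r = {r:.2f} (p = {p:.3f})")

# Consistency between performance level and scores: Kruskal-Wallis test
# on mean scores grouped by a hypothetical performance band.
mean_scores = scores.mean(axis=1)
bands = np.repeat(["low", "mid", "high"], 10)  # hypothetical grouping
groups = [mean_scores[bands == b] for b in ("low", "mid", "high")]
H, p = stats.kruskal(*groups)
print(f"Kruskal-Wallis: H = {H:.2f}, p = {p:.3f}")
```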
Keywords/Search Tags: literary competence, test task, rating criteria, validity argument