
Automatic Short Answer Item Generation For Reading Assessment

Posted on: 2015-03-21
Degree: Master
Type: Thesis
Country: China
Candidate: Y Huang
GTID: 2255330428477449
Subject: Foreign Linguistics and Applied Linguistics

Abstract/Summary:
In order to facilitate item writing for computerized adaptive language assessment and customized second language teaching, this study explored the potential of computers to generate test items automatically. The research goal was to investigate the linguistic nature of item writing and to build a system that, given an input text, can automatically generate items assessing reading comprehension.

The study first summarized prior work on automatic item generation and discussed the nature of reading comprehension assessment. A three-module framework for item generation was proposed, comprising testing-focus identification, paraphrasing, and interrogative transformation. To operationalize the framework on a computer, the study examined the language knowledge and skills that human item writers employ, and established a linguistic framework combining Lexical Functional Grammar (LFG), a semantic network, and a semantic space model as the basis for language processing during automatic item generation. The criteria and algorithms for operationalizing each module of the framework were defined accordingly, and an automatic short-answer item generation system was implemented in Perl using a number of natural language processing (NLP) techniques.

To test the efficacy of the system, the study selected reading materials from past College English Test Band 4 papers as input, and evaluated the performance of the individual modules, as well as the whole system, against baseline algorithms and the original human-written test items.
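The three-module framework above can be pictured as a simple pipeline. The sketch below is purely illustrative: the thesis's actual Perl system is not reproduced here, and every function body is a hypothetical stub standing in for the real module.

```python
# Illustrative sketch of the three-module item-generation pipeline
# (testing-focus identification -> paraphrasing -> interrogative
# transformation). All function bodies are hypothetical stubs, NOT
# the thesis's actual algorithms.

def identify_testing_foci(text):
    """Module 1: select key sentences as testing foci.
    Stub heuristic: keep the two longest sentences."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return sorted(sentences, key=len, reverse=True)[:2]

def paraphrase(sentence):
    """Module 2: paraphrase the focus (stub: identity paraphrase)."""
    return sentence

def to_question(sentence):
    """Module 3: interrogative transformation (stub: wrap as a question)."""
    return f"According to the text, what is meant by: '{sentence}'?"

def generate_items(text):
    """Run all three modules in sequence over an input text."""
    return [to_question(paraphrase(f)) for f in identify_testing_foci(text)]

if __name__ == "__main__":
    sample = ("Reading assessment measures comprehension. "
              "Items are written by hand. Computers can help.")
    for item in generate_items(sample):
        print(item)
```

The point of the decomposition is that each module can be evaluated and improved independently, which matches how the study evaluates module-level performance separately from whole-system performance.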
Experimental results showed that the key information of an input text can be effectively identified from word frequency density, paragraph length, and latent semantic analysis; that restricting word senses to the vocabulary range of the assessment can effectively improve paraphrase precision, which is otherwise limited by currently immature word sense disambiguation techniques; and that, compared with a purely syntax-based question generation system, an interrogative transformation method based on syntax and thematic roles within the LFG framework produces questions of higher quality. Given a text input, the whole system can select non-redundant testing foci that cover different aspects of the text, and generate acceptably paraphrased short-answer questions on factual information, nearly 60% of which are valid test items. The study has demonstrated the potential of computers to assist item writing for language assessment.
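One of the cues reported for identifying key information is word frequency density. The thesis does not give its formula here, so the sketch below assumes a simple interpretation: score each sentence by the mean within-text frequency of its words, so sentences built from the text's recurring vocabulary rank higher.

```python
# Hedged sketch of word-frequency-density scoring for key-sentence
# identification. Assumption (not from the thesis): density = mean
# within-text frequency of a sentence's words.
from collections import Counter
import re

def word_freq_density(text):
    """Return {sentence: density score} for each sentence in text."""
    words = re.findall(r"[a-z]+", text.lower())
    freq = Counter(words)  # frequency of each word across the whole text
    scores = {}
    for sent in re.split(r"(?<=[.!?])\s+", text.strip()):
        tokens = re.findall(r"[a-z]+", sent.lower())
        if tokens:
            scores[sent] = sum(freq[t] for t in tokens) / len(tokens)
    return scores

if __name__ == "__main__":
    sample = "Reading tests reading. Cats sleep."
    for sent, score in word_freq_density(sample).items():
        print(f"{score:.2f}  {sent}")
```

In the study this cue is combined with paragraph length and latent semantic analysis; a real implementation would also filter stop words and weight cues before ranking foci.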
Keywords/Search Tags:automatic item generation, reading assessment, computational linguistics, lexical functional grammar, latent semantic analysis