
A Manual Evaluative Model For The Translation Quality Of Machine Translation Systems

Posted on: 2018-06-17
Degree: Master
Type: Thesis
Country: China
Candidate: Q Pi
Full Text: PDF
GTID: 2335330533964042
Subject: Translation science
Abstract/Summary:
Machine translation studies have been growing at a fast pace and have undergone several phases of development: from the rule-based approach to the example-based approach to the statistical approach. The evaluation of the translation quality of machine translation remains an active field. It has seen encouraging progress, but difficulties remain. Automatic evaluative metrics are inflexible and can be inaccurate, and standards borrowed from Translation Quality Assessment (TQA) fail to accurately evaluate the output of machine translation systems. Some existing evaluative standards approach evaluation only from the perspective of word choice and word order, while others suffer from overlapping evaluative areas. This thesis proposes a Manual Evaluative Model (MEM) for machine translation aimed at addressing these problems.

The model consists of two aspects: evaluative indices and evaluative weights. There are three evaluative indices: the index of information, the index of grammar, and the index of variety. The construction of the model is based on Systems Thinking: a system consists of components and connections, and exhibits complexity when they are combined. The components of a system correspond to the index of information, the connections correspond to the index of grammar, and the complexity of the system corresponds to the index of variety. The evaluative weights represent the relative significance of the three indices.

To test the effectiveness of the model, two representative machine translation systems were chosen: Google Translate and Baidu Translate. As source texts, this thesis randomly selected 30 Chinese sentences from three domains. The translations were scored by METEOR (an automatic evaluative metric) running on Asiya (an open toolkit for evaluating machine translation) and by MEM (the Manual Evaluative Model). A comparison of the scores shows that METEOR and MEM agree in their evaluation of translations by online machine translation systems. Further analysis finds that the scores produced by MEM are more stable than those of METEOR, with a coefficient of variation of 6.58% for MEM versus 29.1% for METEOR. Therefore, the Manual Evaluative Model can be trusted to improve the accuracy of translation evaluation of machine translation systems, and the manual approach could also be automated to increase efficiency. Because the three indices can be quantified, they could be integrated into automatic evaluative standards to help continue improving the quality of machine translation systems.
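To illustrate how a weighted combination of the three indices and the stability comparison via the coefficient of variation could be computed, a minimal Python sketch follows. The weights, index values, and function names (mem_score, coefficient_of_variation) are illustrative assumptions, not the weight assignments or data defined in the thesis.

    import statistics

    def mem_score(information, grammar, variety, weights=(0.4, 0.4, 0.2)):
        # Combine the three evaluative indices into a single MEM score
        # using hypothetical weights for the relative significance of each index.
        w_info, w_gram, w_var = weights
        return w_info * information + w_gram * grammar + w_var * variety

    def coefficient_of_variation(scores):
        # Sample standard deviation divided by the mean, as a percentage.
        return statistics.stdev(scores) / statistics.mean(scores) * 100

    # Made-up sentence-level index scores, for illustration only.
    sentence_scores = [
        mem_score(0.80, 0.70, 0.60),
        mem_score(0.90, 0.80, 0.70),
        mem_score(0.75, 0.70, 0.65),
    ]
    print(round(coefficient_of_variation(sentence_scores), 2))

A lower coefficient of variation across sentences indicates more stable scoring, which is the comparison the thesis reports between MEM (6.58%) and METEOR (29.1%).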
Keywords/Search Tags:machine translation, manual evaluation, systems thinking, evaluative index