
Generating and evaluating evaluative arguments

Posted on: 2002-09-09
Degree: Ph.D
Type: Thesis
University: University of Pittsburgh
Candidate: Carenini, Giuseppe
Full Text: PDF
GTID: 2465390011492350
Subject: Computer Science
Abstract/Summary:
Evaluative arguments are pervasive in natural human communication. In countless situations, people attempt to advise or persuade their interlocutors that something is good (vs. bad) or right (vs. wrong). With the proliferation of on-line systems serving as personal advisors and assistants, there is a pressing need to develop general and testable computational models for generating and presenting evaluative arguments.

Previous research on generating evaluative arguments has been characterized by two major limitations. First, because of the complexity of the natural language generation problem, researchers have tended to focus only on specific aspects of the generation process. Second, because of a lack of systematic evaluation, it is frequently difficult to gauge the scalability and robustness of proposed approaches.

The research presented in this thesis addresses both limitations. Following principles from argumentation theory and computational linguistics, we have developed a computational model for generating evaluative arguments. In our model, all aspects of the generation process are covered in a principled way, from selecting and organizing the content of the argument to expressing the selected content in natural language. For content selection and organization, we devised an argumentation strategy based on guidelines from argumentation theory. For expressing the content in natural language, we extended and integrated previous work on generating evaluative arguments. The key knowledge source for both tasks is a quantitative model of user preferences.

To empirically test critical aspects of our generation model, we devised and implemented an evaluation framework in which the effectiveness of evaluative arguments can be measured with real users. The design of the evaluation framework was based on principles and techniques from several fields, including computational linguistics, social psychology, decision theory, and human-computer interaction. Within the framework, we performed an experiment to test two basic assumptions on which the design of the computational model rests: namely, that tailoring an evaluative argument to a model of the addressee's preferences increases its effectiveness, and that differences in conciseness significantly influence argument effectiveness. Both assumptions were confirmed in the experiment.
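The abstract's "quantitative model of user preferences" is not spelled out here; a common realization of such a model in decision theory is an additive multi-attribute value function, where each attribute has a weight and a component value function, and an argument generator can tailor content by ranking attributes by their contribution to the overall evaluation. The following minimal sketch illustrates that idea under those assumptions; the attribute names, weights, and value functions are hypothetical, not taken from the thesis.

```python
# Sketch (an assumption, not the thesis's actual code): an additive
# multi-attribute value function as a quantitative model of user preferences.
# Each attribute has a weight (summing to 1) and a component value function
# mapping raw attribute values into [0, 1].

def overall_value(item, weights, value_fns):
    """Weighted sum of per-attribute component values, in [0, 1]."""
    return sum(w * value_fns[attr](item[attr]) for attr, w in weights.items())

# Hypothetical user model for evaluating a house.
weights = {"price": 0.5, "distance_to_work": 0.3, "garden_size": 0.2}
value_fns = {
    "price": lambda p: max(0.0, 1.0 - p / 500_000),        # cheaper is better
    "distance_to_work": lambda d: max(0.0, 1.0 - d / 50),  # closer is better
    "garden_size": lambda g: min(1.0, g / 100),            # bigger is better
}

house = {"price": 250_000, "distance_to_work": 10, "garden_size": 40}
score = overall_value(house, weights, value_fns)  # 0.25 + 0.24 + 0.08 = 0.57

# Rank attributes by their contribution to the score: a generator could
# select the top contributors as content for a tailored evaluative argument.
contrib = {a: w * value_fns[a](house[a]) for a, w in weights.items()}
top_attribute = max(contrib, key=contrib.get)
```

Under this kind of model, the same house would yield a different score and a different ranking of argument-worthy attributes for a user with different weights, which is what makes tailoring to the addressee's preferences possible.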
Keywords/Search Tags: Evaluative arguments, Generating, Natural