In this dissertation, we study computational models for the classification and application of natural language relations. Specifically, two types of relations are explored: inter-entity semantic relations (in the context of information extraction) and cross-document structural relations (semantic connections between sentences across document boundaries).

We tackle the problem of natural language relation classification with a number of supervised, weakly supervised, and transductive learning algorithms that exploit features at different linguistic levels. With supervised classification as the baseline, the proposed algorithms in many cases reduce the need for labeled training data (by focusing human annotation effort) or boost classification performance (by mining information hidden in unlabeled data). We also show that transductive learners can be improved by using an induced similarity function.

Both types of natural language relations have practical implications for other natural language processing applications. In this thesis, we show that a simple algorithm can use cross-document structural relations to enhance the output of a state-of-the-art extractive text summarizer.