Supervised learning algorithms classify or estimate test points based on labelled training samples. Such algorithms have been applied in diverse areas of engineering, including speech recognition, damage detection, and document analysis. Two difficulties in learning are the ‘curse of dimensionality’ and bias arising from the distribution of training samples. In this thesis, a new nonparametric algorithm for supervised classification or estimation is presented. The algorithm extends linear interpolation using the principle of maximum entropy, and is termed LIME. Compared to other nonparametric methods, LIME is shown to ameliorate difficulties arising in high dimensions or from asymmetrical distributions of training data. Asymptotic theoretical results are established, along with noise-robustness properties and analytical forms for LIME solutions. Simulations show that its error rates are in some circumstances lower than those of other nonparametric algorithms, discriminant analysis methods, neural networks, regularized linear regression, and decision trees. The problem of supervised learning from grids of training samples is considered in depth. Applications of LIME to color management and gas pipeline integrity are demonstrated. LIME is computationally more expensive than standard nonparametric algorithms, but the improvement in error rates may be a worthwhile trade-off. LIME may also serve as a valuable component in a hybrid classification system.
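To make the central idea concrete, the following is a minimal sketch of maximum-entropy linear interpolation in general, not a reproduction of the thesis's LIME algorithm. It assumes the common formulation in which a test point is expressed as a convex combination of training inputs, and the combination weights are chosen to maximize Shannon entropy subject to the interpolation constraints; the estimate is then the correspondingly weighted average of the training outputs. The function name and the use of a generic constrained optimizer are illustrative assumptions.

```python
# Illustrative sketch only: maximum-entropy interpolation weights,
# solved with a generic constrained optimizer. Not the thesis's LIME
# implementation; names and solver choice are assumptions.
import numpy as np
from scipy.optimize import minimize

def maxent_weights(X, x0):
    """Weights w >= 0 with sum(w) = 1 and X.T @ w = x0,
    chosen to maximize the Shannon entropy -sum(w * log w).

    X: (n, d) array of training inputs; x0: (d,) test point,
    assumed to lie in the convex hull of the rows of X.
    """
    n = X.shape[0]

    def neg_entropy(w):
        w = np.clip(w, 1e-12, None)  # avoid log(0)
        return np.sum(w * np.log(w))

    constraints = [
        {"type": "eq", "fun": lambda w: np.sum(w) - 1.0},   # weights sum to 1
        {"type": "eq", "fun": lambda w: X.T @ w - x0},      # interpolate x0
    ]
    w_init = np.full(n, 1.0 / n)  # start from the uniform distribution
    result = minimize(neg_entropy, w_init, bounds=[(0.0, 1.0)] * n,
                      constraints=constraints, method="SLSQP")
    return result.x

# Toy example: four corners of the unit square, estimate at the centre.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([0.0, 1.0, 1.0, 2.0])  # training outputs: y = x1 + x2
w = maxent_weights(X, np.array([0.5, 0.5]))
estimate = w @ y  # by symmetry, weights are near 0.25 and estimate near 1.0
```

In this symmetric toy case the maximum-entropy solution spreads weight evenly over the four corners; with asymmetrically placed training samples the entropy objective still pulls the weights toward uniformity as far as the interpolation constraints allow, which is the behaviour the abstract credits for reducing sample-distribution bias.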