Super-resolution (SR) of digital images is a fundamental problem and an active research topic in computer vision, with wide applications in industry, agriculture, the military, medical treatment, and other fields. Historically, SR has developed through three main stages: interpolation-based methods, reconstruction-based methods, and machine-learning-based methods. Interpolation-based methods rest on a continuity assumption about the data and do not actually perform well, yet they are the most widely deployed because of their speed; however, this family has so many variants that it appears disorganized, and no unified theoretical description of it can be found in the pertinent literature. Reconstruction-based technologies normally recover HR images by modeling the inverse of the imaging process; they give better results than traditional interpolation-based methods but require more execution time, and they are so sensitive to the scale factor that they are rarely used in practical applications. Machine-learning-based methods are promising and are currently the hotspot of the research area; the quality and compatibility of the training samples and the time consumption are the primary problems they face. Addressing the above issues, this paper investigates and surveys SR technologies broadly and deeply and proposes corresponding solutions. The main contributions of our work are as follows:

1. We identify the essential rule and the internal relations of traditional interpolation technologies through extensive theoretical analysis and experimental verification. A unified theoretical description and a systematic construction of traditional interpolation-based methods are provided through osculating polynomial approximation, which can be used not only to analyze and compare existing methods but also to derive new interpolation algorithms (an illustrative kernel sketch follows this list).

2. We propose a curvature-driven reverse SR approach based on the Taylor expansion. The Taylor expansion first over-processes the image, which is then adjusted in the reverse direction by a partial differential equation driven by curvature. This strategy reduces time consumption effectively and improves visual quality by preserving contrast and clarity (see the curvature sketch below).

3. Focusing on machine-learning-based SR, we propose an algorithm based on blind Blurring Kernel Estimation (BKE) and Dictionary Learning (DL) to address the quality of the training samples and the time consumption, together with a single-image SR method based on unified iterative least-squares regulation that reduces the dissimilarity between the HR and LR feature spaces. Anchored Neighborhood Regression (ANR) is adopted to improve efficiency further, yielding an obvious improvement (see the ANR sketch below).

4. We propose an improved blind blur estimation algorithm based on a Selective Patch Processing strategy and Non-Local Self-Similarity (NLSS). Estimating the blur kernel from selectively chosen patches improves the accuracy of the estimation and the speed of convergence, and reduces the time consumed in a single iteration (see the patch-selection sketch below).
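To make Contribution 1 concrete, the sketch below shows the classical Keys cubic-convolution kernel, one member of the piecewise-polynomial family that an osculating-polynomial framework can unify. This is a generic textbook illustration rather than the paper's own derivation; the function names and the choice a = -0.5 are our assumptions.

```python
import numpy as np

def cubic_kernel(x, a=-0.5):
    """Keys cubic-convolution kernel; a = -0.5 gives the common bicubic weights."""
    x = np.abs(np.asarray(x, dtype=float))
    w = np.zeros_like(x)
    near = x <= 1
    far = (x > 1) & (x < 2)
    w[near] = (a + 2) * x[near]**3 - (a + 3) * x[near]**2 + 1
    w[far] = a * x[far]**3 - 5 * a * x[far]**2 + 8 * a * x[far] - 4 * a
    return w

def upsample_1d(signal, scale):
    """Upsample a 1-D signal by `scale` using 4-tap cubic convolution."""
    signal = np.asarray(signal, dtype=float)
    n = len(signal)
    xs = np.arange(int(n * scale)) / scale          # output samples in input coordinates
    out = np.empty_like(xs)
    for j, x in enumerate(xs):
        base = int(np.floor(x)) - 1
        taps = np.arange(base, base + 4)            # four nearest input samples
        weights = cubic_kernel(x - taps)
        out[j] = weights @ signal[np.clip(taps, 0, n - 1)]
    return out
```

Different choices of the piecewise polynomial (nearest, linear, cubic, spline) correspond to different fixed kernels, which is why a single polynomial-approximation viewpoint can describe them all.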
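For Contribution 2, the following is a minimal numerical sketch of a curvature-driven evolution term of the form |∇u|·div(∇u/|∇u|), the kind of PDE adjustment the contribution refers to. The discretization, the step size, and the direction and number of "reverse" adjustment steps are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def curvature_term(u, eps=1e-8):
    """Discrete curvature-motion term  |grad u| * div(grad u / |grad u|)
       = (u_xx u_y^2 - 2 u_x u_y u_xy + u_yy u_x^2) / (u_x^2 + u_y^2)."""
    ux  = (np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1)) / 2.0
    uy  = (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / 2.0
    uxx = np.roll(u, -1, axis=1) - 2.0 * u + np.roll(u, 1, axis=1)
    uyy = np.roll(u, -1, axis=0) - 2.0 * u + np.roll(u, 1, axis=0)
    uxy = (np.roll(np.roll(u, -1, 0), -1, 1) - np.roll(np.roll(u, -1, 0), 1, 1)
         - np.roll(np.roll(u, 1, 0), -1, 1) + np.roll(np.roll(u, 1, 0), 1, 1)) / 4.0
    return (uxx * uy**2 - 2.0 * ux * uy * uxy + uyy * ux**2) / (ux**2 + uy**2 + eps)

def reverse_adjust(u, steps=10, dt=0.1):
    """Evolve the over-processed image a few explicit steps along the curvature flow.
       The sign of dt and the step count are assumptions for illustration only."""
    u = u.astype(float).copy()
    for _ in range(steps):
        u = u + dt * curvature_term(u)
    return u
```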
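For Contribution 3, the sketch below illustrates the standard Anchored Neighborhood Regression idea: for each dictionary atom, a ridge-regression projection from its LR neighborhood to the corresponding HR atoms is precomputed offline, so test-time reconstruction reduces to one matrix-vector product. The dictionary shapes, neighborhood size K, and regularization weight lam are illustrative assumptions, not values from the paper.

```python
import numpy as np

def anr_projections(D_l, D_h, K=40, lam=0.1):
    """Precompute, for every LR atom, the ridge-regression projection
       P_k = N_h (N_l^T N_l + lam*I)^{-1} N_l^T
       from its K most correlated LR atoms (N_l) to the matching HR atoms (N_h).
       D_l: (d_l, n) LR dictionary, D_h: (d_h, n) HR dictionary, atoms as columns."""
    n = D_l.shape[1]
    sims = D_l.T @ D_l                              # atom-to-atom correlations
    projections = []
    for k in range(n):
        nn = np.argsort(-sims[:, k])[:K]            # indices of the K nearest atoms
        N_l, N_h = D_l[:, nn], D_h[:, nn]
        P = N_h @ np.linalg.inv(N_l.T @ N_l + lam * np.eye(len(nn))) @ N_l.T
        projections.append(P)
    return projections

def anr_map(y, D_l, projections):
    """Reconstruct one HR patch from an LR feature vector via its anchor atom."""
    anchor = int(np.argmax(D_l.T @ y))              # most correlated dictionary atom
    return projections[anchor] @ y
```

Because the projections are fixed once the dictionaries are trained, the per-patch cost at test time is independent of the training-set size, which is the source of ANR's efficiency gain.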
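For Contribution 4, the following sketch shows one plausible form of selective patch processing: patches are ranked by gradient energy and only the most informative fraction is handed to the (omitted) kernel-estimation step. The patch size, stride, and keep ratio are hypothetical parameters chosen for illustration.

```python
import numpy as np

def select_patches(img, patch=32, stride=16, keep_ratio=0.2):
    """Rank patches by gradient energy and return the coordinates of the most
       informative fraction; only these would be fed to blur-kernel estimation."""
    img = img.astype(float)
    gy, gx = np.gradient(img)                       # gradients along rows and columns
    energy = gx**2 + gy**2
    coords, scores = [], []
    for i in range(0, img.shape[0] - patch + 1, stride):
        for j in range(0, img.shape[1] - patch + 1, stride):
            coords.append((i, j))
            scores.append(energy[i:i + patch, j:j + patch].sum())
    order = np.argsort(scores)[::-1]                # most textured patches first
    keep = max(1, int(len(order) * keep_ratio))
    return [coords[t] for t in order[:keep]]
```

Restricting each iteration to a small set of texture-rich patches is what reduces the per-iteration cost while keeping the evidence that actually constrains the kernel.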