
Research On Image Classification Based On Non-negative Coding And Structure Learning Of SPNs

Posted on: 2016-06-01  Degree: Master  Type: Thesis
Country: China  Candidate: P N Liu  Full Text: PDF
GTID: 2308330479490069  Subject: Computer Science and Technology
Abstract/Summary:
Image classification occupies an important position in the field of computer vision and has extensive applications. Exploring efficient image classification algorithms has been a research hotspot in recent years, mainly in two areas: image feature coding and deep structure learning. Based on these two research areas, this thesis comprises two parts: an image classification algorithm based on feature encoding, and structure learning of Sum-Product Networks (SPNs). Feature coding has been studied extensively in recent years and has achieved good performance. Building on locality-constrained linear coding (LLC), this thesis proposes a new coding scheme and verifies its effectiveness on several widely used image datasets. SPNs, in contrast, are a recently proposed deep architecture. Based on the structure learning of SPNs, this thesis explores their characteristics and proposes a new learning algorithm for image classification to verify its performance.

The most important issue for image classification based on feature extraction is how to encode features efficiently. LLC has achieved state-of-the-art performance on several benchmarks, owing to its underlying properties of better reconstruction and locally smooth sparsity. However, the classification performance of LLC is sensitive to the number of neighbors, i.e., the value of k: as k increases, the absolute difference between some negative and positive elements tends to become larger and larger, which makes LLC less stable. This thesis therefore proposes a new coding scheme, called non-negative locality-constrained linear coding (NNLLC), which adds an extra non-negative constraint to the objective function of LLC. In general, this model can be solved by iterative optimization methods; however, such solutions are impractical due to their high computational cost.
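To make the idea concrete, the following is a minimal sketch of a fast non-negative locality-constrained encoder for one descriptor. It follows the approximated-LLC recipe (select k nearest codebook atoms, solve a small local least-squares system) and then enforces non-negativity by clipping and renormalizing. The function name `nnllc_encode`, the clip-and-renormalize step, and the regularization constant `beta` are illustrative assumptions, not necessarily the exact approximation algorithms proposed in the thesis.

```python
import numpy as np

def nnllc_encode(x, codebook, k=5, beta=1e-4):
    """Sketch of non-negative locality-constrained coding for one
    descriptor x of shape (d,) against a codebook of shape (m, d).

    Steps (one plausible fast approximation, not the thesis's exact method):
      1. pick the k nearest codebook atoms (locality constraint);
      2. solve the small constrained least squares on those atoms,
         as in approximated LLC;
      3. clip negative coefficients to zero and renormalize, so the
         code is non-negative and sums to one.
    """
    # 1. locality: k nearest atoms by Euclidean distance
    dists = np.linalg.norm(codebook - x, axis=1)
    idx = np.argsort(dists)[:k]
    B = codebook[idx]                        # local base, shape (k, d)

    # 2. solve min_c ||x - c^T B||^2  s.t.  sum(c) = 1 on the local base
    z = B - x                                # shift atoms to the data point
    C = z @ z.T                              # local covariance, shape (k, k)
    C += beta * np.trace(C) * np.eye(k)      # regularize for numerical stability
    c = np.linalg.solve(C, np.ones(k))
    c /= c.sum()                             # enforce the sum-to-one constraint

    # 3. non-negativity: clip negatives and renormalize (the extra constraint)
    c = np.maximum(c, 0.0)
    if c.sum() > 0:
        c /= c.sum()

    code = np.zeros(codebook.shape[0])       # full-length sparse code
    code[idx] = c
    return code
```

Because step 2 is a k-by-k linear solve, the cost per descriptor stays close to that of approximated LLC, matching the abstract's claim that the fast variants have similar computational complexity.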
Therefore, two fast approximation algorithms are proposed; importantly, their computational complexity is similar to that of LLC. Experimental results on several widely used image datasets demonstrate that, compared with LLC, NNLLC not only improves classification accuracy by roughly 1%~4%, but is also more robust to the selection of k.

The first structure learning algorithm for SPNs, LearnSPN, makes the learning of SPNs faster and more flexible, but it can only be applied to discrete binary variables. Moreover, the image classification performance of LearnSPN is very poor on the CIFAR-10 dataset, a standard benchmark for deep network learning. To apply LearnSPN to continuous image datasets, this thesis proposes an improved algorithm, called NLearnSPN. It not only introduces new ways of splitting instances and variables, but also slightly updates the whole learning process. Experimental results on CIFAR-10 show that the classification performance of NLearnSPN is still not ideal. This is probably because part of the structure for NLearnSPN must be set in advance rather than learned automatically, which biases the learned SPN's representation of the image data. If NLearnSPN could be improved to learn the SPN structure fully automatically, it might become effective for image classification.
Keywords/Search Tags: image classification, locality-constrained linear coding, non-negative constraint, Sum-Product Networks, variable independence