The support vector machine (SVM), whose origins date to the 1960s, is now widely used in production, economics, science, and engineering, and has come to play an important role in these fields, so it receives more and more attention. There are many variants of the SVM, such as the standard SVM, the least squares SVM, and the ν-SVM studied in this thesis. Because training any of these machines ultimately reduces to solving a large-scale quadratic programming (QP) problem, and because the large scale makes the computation complex and time-consuming, a good algorithm is needed to speed up the calculation.

This thesis studies the chunking algorithm and the decomposition algorithm for the ν-SVM, and a combination of the two methods. After presenting the chunking algorithm for the standard SVM, we propose a chunking algorithm for the ν-SVM. Since the working set of the chunking algorithm grows larger and larger, so that the storage space required becomes very large, the thesis then introduces the decomposition algorithm for the SVM and extends it to the ν-SVM. The decomposition algorithm divides the large-scale QP problem into many small QP problems, and the global optimal solution is obtained by solving these subproblems. The decomposition algorithm overcomes the disadvantage of the ever-growing working set and effectively solves the problem of excessive storage space, but the number of iterations strongly affects the training time.

Finally, by combining the advantages of the two algorithms with the advantages of the ν-SVM, we obtain a new ν-SVM algorithm based on both the chunking algorithm and the decomposition algorithm.
Through a large number of experiments, we compare the new algorithm with the original ones in terms of classification accuracy and training time, and we reach the conclusion that the new algorithm indeed improves considerably on the original algorithms in both effectiveness and training time.