
Fundamentals Of Statistical Learning Theory Based On Random Set Samples

Posted on: 2010-10-28    Degree: Master    Type: Thesis
Country: China    Candidate: L Sun    Full Text: PDF
GTID: 2120360302961542    Subject: Applied Mathematics
Abstract/Summary:
Statistical Learning Theory (SLT) based on random samples is currently regarded as one of the fundamental theories of small-sample statistical learning. It has become a novel and important field of machine learning, alongside other concepts and architectures such as neural networks. However, the theory can hardly handle statistical learning problems in which the samples are random sets. In this study, we discuss SLT based on random set samples. First, we present the definitions of random sets and their distribution functions, give some of their properties, and introduce a law of large numbers for m-dimensional random samples together with some important inequalities. Second, we formulate a notion of strict consistency of the empirical risk minimization (ERM) principle on random set samples, and then state and prove the corresponding key theorem. Finally, we discuss the bounds on the rate of uniform convergence of SLT based on random set samples and the associated VC dimension theory; these results form the cornerstone of the theoretical foundations of SLT for random set samples.
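For orientation, a sketch of the classical random-sample forms of the results named above is given below; the notation (loss function Q(z, alpha), expected risk R(alpha), empirical risk R_emp(alpha), sample size l, VC dimension h) follows the standard Vapnik presentation and is assumed here, since the thesis itself develops the random-set analogues, which are not reproduced.

% Classical SLT over random samples z_1, ..., z_l drawn i.i.d. from F(z)
% (the thesis derives the analogous statements for random set samples).

% Expected and empirical risk for a loss function Q(z, \alpha), \alpha \in \Lambda:
R(\alpha) = \int Q(z,\alpha)\, dF(z), \qquad
R_{\mathrm{emp}}(\alpha) = \frac{1}{l}\sum_{i=1}^{l} Q(z_i,\alpha).

% Key theorem (Vapnik--Chervonenkis): the ERM principle is strictly consistent
% if and only if the empirical risks converge to the expected risks uniformly
% in the following one-sided sense:
\lim_{l\to\infty} P\Big\{ \sup_{\alpha\in\Lambda}\big( R(\alpha) - R_{\mathrm{emp}}(\alpha) \big) > \varepsilon \Big\} = 0,
\qquad \forall\, \varepsilon > 0.

% One standard form of the bound on the rate of uniform convergence for a
% class of indicator loss functions with finite VC dimension h:
P\Big\{ \sup_{\alpha\in\Lambda}\big| R(\alpha) - R_{\mathrm{emp}}(\alpha) \big| > \varepsilon \Big\}
\le 4 \exp\Big\{ \Big( \frac{h\big(\ln\frac{2l}{h} + 1\big)}{l} - \Big(\varepsilon - \frac{1}{l}\Big)^{2} \Big)\, l \Big\}.

The thesis establishes counterparts of these statements when the samples z_i are replaced by random set samples.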
Keywords/Search Tags:Random sets, key theorem, bounds on the rate of uniform convergence, VC dimension