
Machine Learning from Faulty Data: Optimal Sparse L1-Norm Principal-Component Analysis

Posted on: 2018-02-19
Degree: Ph.D
Type: Dissertation
University: State University of New York at Buffalo
Candidate: Chamadia, Shubham
Full Text: PDF
GTID: 1478390020956370
Subject: Electrical engineering
Abstract/Summary:
With the advent of big data involving high-dimensional, large data sets, there is constant demand for robust algorithms that extract meaningful information through simpler, low-dimensional representations. Such representations not only uncover previously unobserved patterns but also often improve system performance. Principal Component Analysis (PCA), arguably the most widely used dimensionality-reduction technique, finds important applications in machine learning, wireless communications, finance, and statistics, to name a few. While enjoying widespread use, conventional PCA suffers from two major drawbacks. First, PCA is highly sensitive to corrupted/outlier points in the data, even when they appear sparingly. Second, principal components are, in general, linear combinations of all original features with typically all-nonzero loadings, which makes them difficult to interpret and hinders feature extraction.

To address the above challenges, the research herein focuses on the following aspects: (i) optimal computation of sparse L1-norm principal-component analysis; (ii) computational advances in sparse L1-norm principal components of multi-dimensional data via robust iterative procedures; (iii) low-complexity computation of L1-norm principal components via bit flipping; (iv) outlier-processing techniques that utilize the robust L1 principal-subspace designs; and (v) reliability-based near-maximum-likelihood (near-ML) decoding of Golden codes.
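To make the bit-flipping idea in item (iii) concrete, the sketch below is a minimal, hypothetical illustration (not the dissertation's actual algorithm): for a single L1-norm principal component, the known equivalence max_q ||X^T q||_1 = max_{b in {-1,+1}^N} ||X b||_2 lets one search over binary sign vectors, and bit flipping is a greedy local search over single-bit flips. The function name and all parameters are illustrative assumptions.

```python
import numpy as np

def l1_pca_bitflip(X, seed=0):
    """Greedy bit-flipping sketch for one L1-norm principal component.

    X is a D x N data matrix (N samples as columns). We locally maximize
    ||X b||_2 over sign vectors b in {-1, +1}^N by flipping one bit at a
    time, then return the unit vector q = X b / ||X b||_2. This is an
    illustrative local search, not guaranteed to reach the global optimum.
    """
    rng = np.random.default_rng(seed)
    D, N = X.shape
    b = rng.choice([-1.0, 1.0], size=N)   # random initial sign vector
    best = np.linalg.norm(X @ b)
    improved = True
    while improved:                        # repeat until no flip helps
        improved = False
        for i in range(N):
            b[i] = -b[i]                   # tentatively flip bit i
            val = np.linalg.norm(X @ b)
            if val > best + 1e-12:         # keep flip only if it improves
                best = val
                improved = True
            else:
                b[i] = -b[i]               # revert the flip
    q = X @ b
    return q / np.linalg.norm(q)
```

Each sweep costs O(ND) per tested flip, so the search is far cheaper than the exhaustive O(2^N) scan over all sign vectors while typically landing on a strong local maximum of the L1 projection objective.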
Keywords/Search Tags: Data, Sparse L1-norm, Robust