Low-rank matrix optimization problems arise in a wide range of fields, such as statistics, control and system identification, machine learning, signal and image processing, combinatorial optimization, finance, and quantum computation. In this dissertation, based on the factorized variational characterization of the rank function, we propose a column ℓ2,0-regularized factorization model and design several algorithms for solving this nonconvex and nonsmooth optimization model, thereby enriching the computational methods for low-rank matrix optimization.

First, we derive a family of equivalent DC-regularized surrogates for the column ℓ2,0-regularized factorization model and investigate its optimization properties along with those of the column ℓ2,0-regularized problem and the Frobenius-norm regularized problem. Specifically, we show that the column ℓ2,0-regularized problem has the same critical point set, local optimal solution set, and global optimal solution set as the Frobenius-norm regularized problem; we derive an error bound to the true matrix for the critical points of the Frobenius-norm regularized problem under a suitable assumption on the Hessian of the loss function, from which error bounds for the critical points of the other two models also follow; and we establish the Kurdyka-Łojasiewicz (KL) property of exponent 1/2 for each of the three models on its global minimizer set under the noisy and full-sample setting.

Then, in Chapter 4, we develop a proximal alternating majorization-minimization (PAM) method with extrapolation and a hybrid PAM (HPAM) method with extrapolation for solving the column ℓ2,0-regularized factorization model; under the gradient Lipschitz continuity of the loss function, we provide a global convergence analysis for the generated iterate sequence; and we apply the two methods to the matrix completion problem with non-uniform sampling schemes. Numerical experiments are conducted on synthetic and real data, and comparisons with ALS for the nuclear-norm regularized factorization model and ADMM for the max-norm regularized convex model show that the column ℓ2,0-regularized factorization model has an advantage in delivering solutions of lower error and rank within less time.

Finally, in Chapter 5 we propose a proximal alternating majorization-minimization method, armed with a dual semismooth Newton method for the weighted ℓ2-regularized subproblems (PAMSN), for solving an equivalent DC-regularized surrogate problem. For this method, we establish the global convergence of the generated iterate sequence and, under suitable conditions, show that the limit of this sequence is a critical point not only of the DC-regularized surrogate but also of the column ℓ2,0-regularized problem. Numerical experiments are conducted on synthetic and real data for matrix completion problems with non-uniform sampling schemes. Comparisons with PAM and HPAM for solving the DC-regularized surrogate problem indicate that PAMSN is superior in the quality of solutions, though it requires more CPU time.
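For orientation, a typical form of a column ℓ2,0-regularized factorization problem (a sketch only; the dissertation's exact formulation, loss function f, and penalty weights may differ) is

```latex
\min_{U \in \mathbb{R}^{n\times\kappa},\; V \in \mathbb{R}^{m\times\kappa}}
  \; f\bigl(UV^{\top}\bigr) \;+\; \lambda\bigl(\|U\|_{2,0} + \|V\|_{2,0}\bigr),
\qquad
\|U\|_{2,0} := \#\bigl\{\, i : \|U_{\cdot i}\|_2 \neq 0 \,\bigr\},
```

where ‖U‖_{2,0} counts the nonzero columns of U. Since rank(UV^T) ≤ min(‖U‖_{2,0}, ‖V‖_{2,0}), penalizing the number of nonzero factor columns controls the rank of UV^T, which is what the factorized variational characterization of the rank function exploits.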
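The PAM-type schemes above alternate majorization steps on the two factors with extrapolation. As a rough illustration only — a hypothetical sketch under assumed choices (squared loss on observed entries, the column ℓ2,0 penalty on both factors, constant extrapolation parameter), not the dissertation's actual PAM/HPAM algorithms — each block update can be written as a proximal gradient step whose proximal map hard-thresholds whole columns:

```python
import numpy as np

def col_l20_prox(Z, lam, t):
    """Prox of U -> lam * (#nonzero columns) with parameter t:
    keep column z_i iff (t/2)*||z_i||^2 > lam, else zero it out."""
    keep = np.sum(Z**2, axis=0) > 2.0 * lam / t
    return Z * keep

def pam_extrapolated(M, mask, kappa, lam=1e-3, beta=0.3, iters=300, seed=0):
    """Sketch of proximal alternating majorization-minimization with
    extrapolation for
        min 0.5*||mask*(U V^T - M)||_F^2 + lam*(||U||_{2,0} + ||V||_{2,0}).
    All parameter choices here are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    n, m = M.shape
    U = 0.1 * rng.standard_normal((n, kappa))
    V = 0.1 * rng.standard_normal((m, kappa))
    U_old, V_old = U.copy(), V.copy()
    for _ in range(iters):
        # U-block: extrapolate, gradient step with block Lipschitz bound, prox.
        Ue = U + beta * (U - U_old)
        t = 1.1 * np.linalg.norm(V, 2) ** 2 + 1e-8   # L = ||V||_2^2
        G = (mask * (Ue @ V.T - M)) @ V              # gradient in U at Ue
        U_old = U
        U = col_l20_prox(Ue - G / t, lam, t)
        # V-block: same pattern with the roles of U and V swapped.
        Ve = V + beta * (V - V_old)
        t = 1.1 * np.linalg.norm(U, 2) ** 2 + 1e-8   # L = ||U||_2^2
        G = (mask * (U @ Ve.T - M)).T @ U            # gradient in V at Ve
        V_old = V
        V = col_l20_prox(Ve - G / t, lam, t)
    return U, V
```

The column-wise hard-thresholding in `col_l20_prox` is the exact proximal map of the column ℓ2,0 penalty, which is why this regularizer admits cheap closed-form subproblem solutions despite being nonconvex and nonsmooth.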