
Algorithms For Low-rank Matrix And Tensor Completion With Its Application

Posted on: 2021-04-20
Degree: Master
Type: Thesis
Country: China
Candidate: X H Lan
GTID: 2370330623967962
Subject: Statistics
Abstract/Summary:
Data completion aims to recover the missing entries of partially observed data using prior information. The most commonly used data completion algorithms are matrix completion and tensor (multi-dimensional array) completion algorithms. A common assumption in matrix and tensor completion is that the given matrix or tensor has a low-rank or approximately low-rank structure, so the missing entries can be recovered by minimizing the matrix or tensor rank. Unfortunately, owing to the nonconvexity and discontinuity of the rank function, the rank minimization problem is NP-hard. At present, the most widely used remedy is to adopt the nuclear norm as a convex surrogate of the rank function. Efficient algorithms exist for matrix completion, but in practice the data to be recovered are often multi-dimensional (three dimensions or more), as in video completion and color image completion, and traditional matrix completion algorithms cannot be applied well to such data. As an extension of the matrix case, tensor completion has therefore attracted the attention of many scholars. This thesis mainly studies tensor completion algorithms and proposes two regularization methods for low-rank tensor completion.

1. A new regularization term, the Tensor Truncated Frobenius Norm (T-TFN), is proposed. A tensor hybrid truncated norm (T-HTN) model, which combines the Tensor Truncated Nuclear Norm (T-TNN) with the T-TFN, is then presented for tensor completion. A simple and effective two-step iteration algorithm is devised to implement the proposed T-HTN model; in addition, the quadratic penalty parameter is allowed to change adaptively according to certain update rules to reduce the computational cost. Experimental results show that the proposed approach outperforms the tensor truncated nuclear norm.

2. To improve the convergence speed and robustness of the Tensor Truncated Nuclear Norm (T-TNN), a truncated nuclear norm regularization method based on a weighted residual error, together with an extended model, is proposed; the two are called the TTNN-WRE and ETTNN-WRE methods, respectively. In the augmented Lagrangian function, different weights are assigned to the horizontal slices of the residual tensor to accelerate the convergence of the T-TNN. ETTNN-WRE is also less sensitive than T-TNN to the key parameter r, the number of truncated (subtracted) singular values. Experimental results verify the effectiveness and superiority of the proposed methods.
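To make the truncated-nuclear-norm idea underlying the abstract concrete, the following is a minimal, illustrative sketch for the simpler matrix case only: singular values beyond the top r are soft-thresholded while the observed entries are re-imposed at each step. It is not the thesis's T-HTN or TTNN-WRE tensor algorithm; the function name and the parameters r, tau, and n_iter are hypothetical choices made for this sketch.

```python
# Illustrative sketch, not the thesis method: matrix completion with a
# truncated nuclear norm heuristic. Observed entries are given by a boolean
# mask; missing entries are filled in iteratively.
import numpy as np

def truncated_svt_complete(M, mask, r=2, tau=1.0, n_iter=200):
    """Fill missing entries of M (mask == True where observed).

    The top-r singular values are left untouched; the remaining ones are
    soft-thresholded by tau, mimicking a truncated nuclear norm penalty.
    """
    X = np.where(mask, M, 0.0)                        # start from observed data
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s_shrunk = s.copy()
        s_shrunk[r:] = np.maximum(s[r:] - tau, 0.0)   # shrink only the tail
        X = (U * s_shrunk) @ Vt                       # low-rank-biased update
        X[mask] = M[mask]                             # re-impose observed entries
    return X

# Toy usage: a random rank-2 matrix with roughly 40% of the entries missing.
rng = np.random.default_rng(0)
M_true = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 30))
mask = rng.random(M_true.shape) > 0.4
M_hat = truncated_svt_complete(M_true, mask)
print("relative error:", np.linalg.norm(M_hat - M_true) / np.linalg.norm(M_true))
```

In the tensor methods summarized above, the analogous shrinkage is applied to a tensor singular value decomposition rather than a matrix SVD, with the additional model components (the T-TFN term, adaptive penalty parameter, and slice-wise residual weights) described in the abstract.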
Keywords/Search Tags:matrix completion, tensor completion, truncated nuclear norm