In many applications, the original optimization model is nonconvex and nonsmooth. This can be seen in a wide array of problems such as compressed sensing, matrix factorization, and sparse signal recovery. Much of the literature analyzes algorithms for convex models obtained as relaxations of the original nonconvex model. This is a convenient way to simplify and solve the model; however, the solution of the relaxation can be far from the accurate solution of the practical problem. We therefore use the proximal operator to study a new algorithm based directly on the original nonconvex and nonsmooth model.

We consider a broad class of nonconvex and nonsmooth minimization problems whose objective function can be written as the sum of three functions, that is,

Ψ(x, y) = f(x) + g(y) + H(x, y),

where f : R^n → R ∪ {+∞} and g : R^m → R ∪ {+∞} are proper lower semicontinuous functions, and H : R^n × R^m → R is a smooth C^1 function. One approach to this problem is the proximal alternating minimization algorithm, which is based on the Gauss-Seidel iteration scheme. Attouch et al. [5] established its convergence, the first such result in the general nonconvex and nonsmooth setting. Bolte et al. [12] proposed proximal alternating linearized minimization (PALM), which is better suited to practical computation. By combining the proximal alternating linearized method with an inertial force, we propose a new algorithm, the inertial proximal alternating linearized minimization method (iPALM).

Our main results can be stated as follows. Assuming that the objective function has the Kurdyka-Lojasiewicz property and that the parameters satisfy certain conditions, we prove that every bounded sequence generated by iPALM converges globally to a critical point. We also demonstrate the efficiency of iPALM compared with PALM through applications in signal recovery and image denoising, where the former requires fewer iteration steps and less running time.
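To make the "inertial force" concrete, one iteration of such a scheme can be sketched as follows. This is an illustrative template based on the standard PALM iteration with inertial extrapolation; the parameter names (extrapolation weights α_k, β_k and step sizes τ_k, σ_k) and the exact placement of the extrapolated points are assumptions, not necessarily the precise update studied in this paper.

```latex
% Sketch of one inertial proximal alternating linearized iteration
% (illustrative; parameter names are not taken from this paper).
\begin{align*}
  u^k     &= x^k + \alpha_k (x^k - x^{k-1})
            && \text{inertial extrapolation on the $x$-block} \\
  x^{k+1} &\in \operatorname{prox}_{\tau_k f}\bigl(u^k - \tau_k \nabla_x H(u^k, y^k)\bigr)
            && \text{proximal linearized step in $f$} \\
  v^k     &= y^k + \beta_k (y^k - y^{k-1})
            && \text{inertial extrapolation on the $y$-block} \\
  y^{k+1} &\in \operatorname{prox}_{\sigma_k g}\bigl(v^k - \sigma_k \nabla_y H(x^{k+1}, v^k)\bigr)
            && \text{proximal linearized step in $g$}
\end{align*}
```

Here prox_{τf}(z) = argmin_u { f(u) + (1/2τ)‖u − z‖² } denotes the proximal operator; since f and g may be nonconvex, the update is written as an inclusion rather than an equality. Setting α_k = β_k = 0 recovers the PALM iteration of Bolte et al. [12].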