
Convergence Research Of Conjugate Gradient Methods

Posted on: 2014-01-31
Degree: Master
Type: Thesis
Country: China
Candidate: P Y Yu
Full Text: PDF
GTID: 2230330398477551
Subject: Basic mathematics
Abstract/Summary:
Optimization problems arise throughout engineering, military applications, and economic management. With the rapid development of production and scientific research, and of computing in particular, optimization has come to demand ever more powerful solution tools, and solving many large-scale problems has become feasible. The conjugate gradient method is one of the most commonly used and most effective optimization methods: its algorithm is simple and its storage requirements are small, which makes it especially well suited to large-scale nonlinear optimization problems. In recent years, as large-scale optimization problems have continually emerged from practical applications, nonlinear conjugate gradient methods have become a focus of attention. This thesis studies conjugate gradient methods and is divided into three chapters. Chapter 1 briefly introduces the research background, contents, and current state of nonlinear conjugate gradient methods. Chapter 2 studies a new class of hybrid conjugate gradient methods: we establish several properties of the new formula and prove the corresponding convergence theorems. Chapter 3 presents a family of conjugate gradient formulas based on the conjugate descent method and proves their global convergence under the Wolfe line search conditions. Finally, numerical experiments on the two new algorithms proposed in this thesis compare their performance on different test functions.
Keywords/Search Tags: unconstrained optimization, conjugate gradient method, Wolfe line search, global convergence