The Theory of Subgradient Methods for Solving Quasi-Convex Optimization Problems

Posted on: 2020-09-04
Degree: Master
Type: Thesis
Country: China
Candidate: J W Li
GTID: 2370330599454491
Subject: Mathematics

Abstract/Summary:
Mathematical optimization is an important branch of applied mathematics that provides a unified modeling and research framework for many scientific and applied problems. Owing to the favorable properties of convexity, convex optimization has been widely studied and applied. However, for application problems in economics and management science, the requirement of convexity is often too restrictive, whereas generalized (quasi-)convex functions can represent such practical problems much more accurately. Subgradient algorithms are popular methods for solving constrained (convex or quasi-convex) optimization problems, yet their convergence theory, and especially their convergence rate theory, still has many gaps. In this thesis we investigate constrained quasi-convex optimization problems and quasi-convex feasibility problems, with the aim of establishing the global convergence, convergence rate, and iteration complexity theory of the corresponding subgradient algorithms, thereby providing a solid theoretical basis and guarantee for their application. At the same time, knowledge of the properties and efficiency of these algorithms leads to a better understanding of their performance in many applications and points toward future research on improving and extending optimization algorithms. The work of this thesis is summarized in two parts as follows.

For quasi-convex optimization, we investigate the iteration complexity and convergence rates of various subgradient methods in a unified framework. First, we consider a framework satisfying a general (inexact) basic inequality and establish a global convergence theorem and the iteration complexity under constant and dynamic stepsize rules. Moreover, under the assumptions of weak sharp minima of Hölderian order and an upper bound on the noise, we establish linear (constant stepsize rule) or sublinear (dynamic stepsize rules) convergence rates for the generated sequences. Finally, these convergence theorems are applied to establish the iteration complexity and convergence rates of several subgradient methods, including the standard, inexact, and conditional subgradient methods, for solving quasi-convex optimization problems under the quasi-convexity and Hölder condition assumptions.

For quasi-convex feasibility problems, we propose subgradient methods and investigate their global convergence and convergence rate theory. First, we propose subgradient algorithms with the most-violated-constraint control or the almost cyclic control and derive the basic inequality under the Hölder condition, from which the global convergence theory of the algorithms follows. Moreover, we introduce the important notion of an error bound of Hölderian order and use it to establish linear (or sublinear) convergence rates of the algorithms. Finally, the algorithms and conclusions are extended to quasi-convex multiple-sets split feasibility problems.

Note that the basic inequality plays a key role throughout our study. To the best of our knowledge, the convergence rate analysis of subgradient methods for these two problems is new, and it provides a reliable theoretical guarantee for the algorithms.
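To make the iteration studied in the first part concrete, the following is a minimal Python sketch of a projected subgradient method that steps along a normalized (quasi-)subgradient direction, with interchangeable constant and dynamic stepsize rules. It is an illustration under stated assumptions, not the thesis's exact algorithm: the oracle `subgrad`, the projection `project`, and the test problem are hypothetical stand-ins.

```python
import numpy as np

def projected_subgradient(subgrad, project, x0, stepsize, iters=500):
    """Iterate x_{k+1} = P_X(x_k - v_k * g_k / ||g_k||), g_k a quasi-subgradient at x_k."""
    x = np.asarray(x0, dtype=float)
    for k in range(iters):
        g = subgrad(x)
        norm = np.linalg.norm(g)
        if norm == 0.0:          # zero quasi-subgradient: stop at a stationary point
            break
        x = project(x - stepsize(k) * g / norm)
    return x

# Hypothetical test problem: f(x) = sqrt(||x - c||) is quasi-convex (a monotone
# transform of a convex function), and x - c points along a quasi-subgradient
# (a normal direction to the sublevel set, a ball centered at c).
c = np.array([2.0, -1.0])
subgrad = lambda x: x - c
project = lambda x: np.clip(x, -1.0, 1.0)   # projection onto the box [-1, 1]^2

x_const = projected_subgradient(subgrad, project, np.zeros(2), lambda k: 0.05)           # constant stepsize
x_dyn = projected_subgradient(subgrad, project, np.zeros(2), lambda k: 1.0 / (k + 1))    # dynamic (diminishing) stepsize
```

Under the assumptions discussed above, the constant stepsize rule can only be expected to drive the iterates linearly into a neighborhood of the solution set, while the diminishing stepsize rule drives them to the solution set at a sublinear rate.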
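For the second part, the feasibility problem is to find a common point of the sublevel sets {x : f_i(x) <= 0} of finitely many quasi-convex functions. Below is a minimal sketch of the most-violated-constraint control: at each iteration the method steps along a normalized quasi-subgradient of the constraint with the largest violation. Again this is a hedged illustration, not the thesis's exact scheme; the oracles and the two-ball test instance are hypothetical.

```python
import numpy as np

def feasibility_subgradient(fs, subgrads, x0, stepsize, iters=2000, tol=1e-6):
    """Most-violated-constraint control for quasi-convex feasibility problems."""
    x = np.asarray(x0, dtype=float)
    for k in range(iters):
        vals = np.array([f(x) for f in fs])
        i = int(np.argmax(vals))          # index of the most violated constraint
        if vals[i] <= tol:                # all constraints (approximately) satisfied
            return x, k
        g = subgrads[i](x)
        x = x - stepsize(k) * g / np.linalg.norm(g)
    return x, iters

# Hypothetical instance: two quasi-convex (in fact convex) constraints
# f_i(x) = ||x - c_i|| - r_i <= 0, i.e. find a point in the intersection of two balls.
centers, radii = [np.array([0.0, 0.0]), np.array([1.5, 0.0])], [1.0, 1.0]
fs = [lambda x, c=c, r=r: np.linalg.norm(x - c) - r for c, r in zip(centers, radii)]
subgrads = [lambda x, c=c: x - c for c in centers]   # x - c points along a subgradient

x_feas, used = feasibility_subgradient(fs, subgrads, np.array([5.0, 5.0]),
                                       lambda k: 1.0 / (k + 1))
```

An almost cyclic control would replace the argmax with a (nearly) periodic sweep over the constraint indices; the basic inequality derived in the thesis covers both controls.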
Keywords: Quasi-convex optimization, subgradient method, basic inequality, iteration complexity, convergence rate