
A Local Optimization Method for Nonconvex Problems with Separable Structure

Posted on: 2019-11-03   Degree: Master   Type: Thesis
Country: China   Candidate: Y N Guo   Full Text: PDF
GTID: 2370330545972472   Subject: Operational Research and Cybernetics
Abstract/Summary:
Non-convex problems with separable structure are common in practice, arising in meteorological forecasting, artificial neural networks (ANN), risk investment prediction, cluster information research, and other fields. Because these problems involve very large numbers of variables and the data are difficult to store, classical optimization algorithms (such as the steepest descent method, Newton's method, and quasi-Newton methods) cannot solve them effectively. In recent decades, theoretical research on large-scale problems whose objective function is separable or whose variables can be partitioned into blocks has attracted the attention of many scholars. The main approaches to such problems include incremental learning algorithms, parallel computation (for example, parallel algorithms), and distributed optimization algorithms; among these, incremental learning and parallel computation are the most widely used for large-scale problems. When the objective function is non-convex (and possibly non-differentiable), finding a global solution of this type of large-scale problem becomes even more difficult, so solving it quickly has become a topic of active research. Motivated by this, some researchers have proposed, for non-convex separable large-scale problems, strongly convex local approximation (SCA) methods that, combined with incremental learning or parallel algorithms, yield local optimization methods for these problems.

In this thesis we adopt the framework of incremental learning and combine an SCA-type method with second-order information of the original non-convex function. For large-scale problems whose variables can be partitioned into blocks, an improved parallel block coordinate descent (BCD) algorithm is adopted. It is worth noting that, for this problem, we design a new local pseudo-convex approximation method and combine it with different kinds of selection and update mechanisms. Theoretically, the methods proposed in this thesis converge to an asymptotic stability point or a stable point of the function, and numerical examples show that these optimization methods are effective. This thesis focuses on three types of unconstrained separable large-scale problems and gives local optimization methods for them. The contents are organized as follows.

In Chapter 1, we briefly introduce incremental algorithms and parallel computation (including parallel algorithms) and review the state of development of these two approaches. Two successive approximation methods (the MM algorithm and the SCA algorithm) are also presented, and the chapter closes with the research ideas and contributions of this thesis.

In Chapter 2, a special incremental learning algorithm is presented. Within the incremental learning framework, this method combines the SCA approach with second-order information of the objective. The incremental algorithm proposed in this chapter converges to an asymptotically stable solution of the original function, and experimental results show that it is effective and stable.

In Chapter 3, we consider large-scale problems whose variables can be partitioned into blocks, denoted (CP2). For this special setting, the thesis adopts the idea of parallel algorithms and, building on the classical BCD algorithm, uses two different block selection mechanisms, random and "greedy", to choose the block to be updated at each iteration. It is worth noting that this chapter does not adopt the SCA scheme of the earlier literature but instead designs a method of local pseudo-convex approximation. This not only makes the auxiliary functions more flexible and diverse but also accelerates the iterations of the algorithm. We prove that the designed algorithms converge to a stable point of the original function. The last part gives numerical examples illustrating the effectiveness of the two local optimization methods; comparison with some classical algorithms shows that the designed algorithms have an advantage in certain cases.
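To make the block selection and surrogate ideas above concrete, the following Python/NumPy fragment is a minimal sketch of block coordinate descent with a simple quadratic convex surrogate and a random or "greedy" block selection rule. It is an illustration only: the toy objective, the block partition, the surrogate parameter L, and the selection rules are assumptions made for the example, not the algorithms designed in this thesis.

```python
import numpy as np

def bcd_local_surrogate(grad, x0, blocks, L=10.0, rule="greedy", iters=200, rng=None):
    """Block coordinate descent with a simple local convex surrogate.

    At each iteration one block is selected (uniformly at random, or
    "greedily" by the largest partial-gradient norm) and updated by
    minimizing the quadratic surrogate g_i(d) = <grad_i(x), d> + (L/2)||d||^2,
    whose minimizer is the scaled partial-gradient step below.
    """
    rng = rng or np.random.default_rng(0)
    x = x0.copy()
    for _ in range(iters):
        g = grad(x)
        if rule == "greedy":
            # choose the block whose partial gradient is largest
            i = max(range(len(blocks)), key=lambda j: np.linalg.norm(g[blocks[j]]))
        else:
            # choose a block uniformly at random
            i = rng.integers(len(blocks))
        idx = blocks[i]
        x[idx] -= g[idx] / L  # closed-form minimizer of the convex surrogate
    return x

# Toy usage on a smooth non-convex objective split into two blocks.
grad = lambda x: 2 * x + 0.5 * np.cos(5 * x)   # gradient of sum(x^2) + 0.1*sum(sin(5x))
blocks = [np.arange(0, 5), np.arange(5, 10)]
x_hat = bcd_local_surrogate(grad, np.ones(10), blocks, rule="greedy")
```

Replacing the quadratic surrogate with other convex or pseudo-convex model functions, as the thesis does, changes only the block update step; the selection mechanism is unaffected.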
In Chapter 4, we consider block-structured non-convex problems whose objective is non-differentiable. Since the problem contains a special non-smooth term, subgradient information of the function is used (a minimal illustration is sketched at the end of this abstract). Two special algorithms are first designed by combining the parallel-algorithm idea with second-order information (SCA) and with pseudo-convex approximation. It is worth noting that in this chapter we do not adopt the "greedy" selection mechanism but instead design a selection mechanism under an incremental learning framework. This mechanism reduces the inner iterations of the algorithm and simplifies the complexity of the inner loop, thereby accelerating the algorithm. We prove that the designed algorithms converge to a stable point of the original function. Finally, some numerical examples are given, and the results show that the new local optimization methods are effective.

In Chapter 5, we summarize the research of this thesis and discuss prospects for further study.
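The subgradient-based block update mentioned in Chapter 4 can be illustrated with a similarly minimal sketch. The l1 term, the diminishing step size, and the fixed incremental block order below are assumptions made for the example; they are not the selection mechanism or the algorithms of the thesis.

```python
import numpy as np

def incremental_subgradient_bcd(grad_smooth, subgrad_nonsmooth, x0, blocks,
                                step=0.05, epochs=50):
    """Incremental block updates using gradient plus subgradient information.

    Blocks are visited in a fixed incremental order; each block takes a
    diminishing step along (gradient of the smooth part) + (a subgradient
    of the non-smooth part), restricted to that block's coordinates.
    """
    x = x0.copy()
    for k in range(epochs):
        for idx in blocks:                        # one incremental pass over the blocks
            d = grad_smooth(x) + subgrad_nonsmooth(x)
            x[idx] -= step / (1 + k) * d[idx]     # diminishing step size
    return x

# Toy usage: smooth non-convex part plus an l1 term (a subgradient of |x| is sign(x)).
grad_smooth = lambda x: 2 * x + np.cos(3 * x)
subgrad_l1 = lambda x: 0.1 * np.sign(x)
blocks = [np.arange(0, 4), np.arange(4, 8)]
x_hat = incremental_subgradient_bcd(grad_smooth, subgrad_l1, np.ones(8), blocks)
```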
Keywords/Search Tags: non-convex problems, incremental learning algorithm, parallel algorithm, SCA method, BCD algorithm, stable point, asymptotic stability point