
Truncated Aggregate Smoothing Quasi-Newton Methods For Min-max Problems With Applications In Fuzzy Neural Networks Learning

Posted on: 2020-03-31    Degree: Doctor    Type: Dissertation
Country: China    Candidate: Y L Lu    Full Text: PDF
GTID: 1360330578471727    Subject: Computational Mathematics
Abstract/Summary:
Min-max problems are important non-smooth optimization problems that have been widely applied in fields such as portfolio selection, engineering design, and large-scale fault diagnosis. Although smoothing methods for unconstrained min-max problems have produced rich results, the advent of the big-data era and advances in computing, data collection, transmission, and storage technologies mean that the scale of real-world problems keeps growing, so solving large-scale min-max problems remains an important topic. Our purpose is to solve min-max problems of high dimension with many component functions, and to propose an aggregate smoothing algorithm for training fuzzy neural networks with min and max logical operators. Building on a thorough study of the stable smoothing Newton method, we propose an algorithm for solving large-scale unconstrained min-max problems. For training fuzzy neural networks with min and max operators, we propose a gradient asymptotical aggregate smoothing algorithm. The proposed algorithms effectively relieve ill-conditioning and overcome the tendency of earlier smoothing training algorithms to terminate either too early or too slowly. The main content of this thesis is summarized as follows.

In Chapter 1, we introduce min-max problems and their applications, and comprehensively review the related theory and the various classes of algorithms for solving min-max problems, including aggregate smoothing methods that directly smooth the max-type function in the objective. We summarize research results on aggregate smoothing methods, such as the stable Newton-type and Gauss-Newton-type methods, and introduce a current application of aggregate smoothing to training fuzzy neural networks with min and max logical operators. Finally, we briefly describe the research motivation, the research ideas, and the structure of the thesis.

In Chapter 2, we propose a truncated aggregate smoothing quasi-Newton method with Armijo line search and a truncated aggregate smoothing symmetric rank-one (SR1) method with trust region. We simplify the truncation criterion and propose a very simple aggregate parameter adjustment rule. Numerical results show that, compared with previous algorithms, the proposed algorithms are advantageous for solving unconstrained min-max problems with many component functions, especially as the dimension of the problem grows.

In Chapter 3, for unconstrained min-max problems of high dimension with many component functions, we propose an efficient truncated aggregate smoothing BFGS quasi-Newton method. We give two conditions, based on the information of adjacent iteration points and the corresponding gradients, for obtaining an appropriate approximate Hessian matrix. Under the assumption that the component functions are strongly convex, we conclude that the approximate Hessian matrices and their inverses are bounded for any given aggregate parameter. Together with a simple aggregate parameter adjustment rule, this yields a truncated aggregate smoothing BFGS algorithm. Finally, we analyze the convergence of the algorithm and of its inner iterative sequence. The numerical results show that, compared with aggregate smoothing algorithms that use the truncation strategy or an active-set strategy, the limited-memory version of the truncated aggregate smoothing BFGS algorithm has clear advantages. (The underlying aggregate smoothing function and a schematic BFGS iteration are sketched below.)
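For reference, since the abstract does not state it explicitly: in its standard exponential-penalty form, the aggregate smoothing of f(x) = max_{1<=i<=m} f_i(x) used throughout this line of work is

\[
f_p(x) = \frac{1}{p}\ln\sum_{i=1}^{m} e^{p f_i(x)}, \qquad p > 0, \qquad f(x) \le f_p(x) \le f(x) + \frac{\ln m}{p},
\]

with gradient

\[
\nabla f_p(x) = \sum_{i=1}^{m} \lambda_i(x)\,\nabla f_i(x), \qquad \lambda_i(x) = \frac{e^{p f_i(x)}}{\sum_{j=1}^{m} e^{p f_j(x)}}.
\]

The truncation strategy exploits the fact that, for large p, most weights \lambda_i(x) are negligible, so only the nearly active components need to enter the gradient and Hessian assembly.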
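The following is a minimal, self-contained Python sketch of one plausible truncated aggregate smoothing BFGS loop in the spirit of Chapters 2 and 3. All names, tolerances, and the doubling rule for the aggregate parameter p are illustrative assumptions, not the algorithm of the thesis.

import numpy as np

def smoothed_value_grad(x, fs, grads, p, trunc_tol=1e-12):
    # Value and gradient of the aggregate smoothing function f_p, keeping
    # only components whose aggregate weight exceeds trunc_tol (truncation).
    vals = np.array([f(x) for f in fs])
    shift = vals.max()                       # overflow-safe exponentials
    w = np.exp(p * (vals - shift))
    lam = w / w.sum()                        # aggregate weights, sum to 1
    fp = shift + np.log(w.sum()) / p
    g = sum(lam[i] * grads[i](x) for i in np.nonzero(lam > trunc_tol)[0])
    return fp, g

def tas_bfgs(x, fs, grads, p=10.0, eps=1e-6, max_outer=8, max_inner=200):
    # Outer loop: minimize f_p by BFGS with Armijo backtracking, then
    # increase p by a simple (hypothetical) adjustment rule.
    n = x.size
    for _ in range(max_outer):
        H = np.eye(n)                        # inverse-Hessian approximation
        fp, g = smoothed_value_grad(x, fs, grads, p)
        for _ in range(max_inner):
            if np.linalg.norm(g) <= eps:
                break
            d = -H @ g
            t = 1.0
            fp_new, g_new = smoothed_value_grad(x + t * d, fs, grads, p)
            while fp_new > fp + 1e-4 * t * (g @ d) and t > 1e-12:
                t *= 0.5                     # Armijo backtracking
                fp_new, g_new = smoothed_value_grad(x + t * d, fs, grads, p)
            s, y = t * d, g_new - g
            x, fp, g = x + t * d, fp_new, g_new
            if s @ y > 1e-12:                # curvature condition holds
                rho = 1.0 / (s @ y)
                V = np.eye(n) - rho * np.outer(s, y)
                H = V @ H @ V.T + rho * np.outer(s, s)
        p *= 2.0                             # aggregate parameter adjustment
    return x

# Example on a small (hypothetical) test problem: min max(|x|^2 - 1, x1 + x2).
fs = [lambda z: z[0]**2 + z[1]**2 - 1, lambda z: z[0] + z[1]]
grads = [lambda z: np.array([2*z[0], 2*z[1]]), lambda z: np.array([1.0, 1.0])]
print(tas_bfgs(np.array([1.0, 1.0]), fs, grads))

A limited-memory variant, as in Chapter 3, would store a few recent (s, y) pairs instead of the dense matrix H, keeping the per-iteration cost linear in the dimension.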
In Chapter 4, we propose an aggregate smoothing algorithm to train a fuzzy neural network with min and max logical operators. Considering the network structure of the perceptron model, a smooth approximation of the network's evaluation function is obtained by the aggregate smoothing technique. Together with an aggregate parameter adjustment rule, we propose a gradient asymptotical aggregate smoothing algorithm to train such networks. Finally, we discuss the optimality conditions of the original min-max-min problem, reveal the relationship between the original min-max-min problem and the minimization of its aggregate smoothing problems, and discuss the global convergence of the algorithm. The numerical results show that, compared with previous smoothing algorithms, the proposed algorithm has higher computational efficiency and overcomes the tendency of previous algorithms to terminate training either too early or too slowly; it greatly alleviates ill-conditioning and trains the network more effectively. (A toy max-min neuron and its smoothed surrogate are sketched after this paragraph.)
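As a toy illustration of the Chapter 4 setting (the concrete network architecture is not specified in this abstract), the sketch below shows a single max-min neuron and a differentiable surrogate built from the same aggregate smoothing, with soft min obtained as the negated soft max of negated arguments; the construction and all names are assumptions.

import numpy as np

def maxmin_output(w, x):
    # Crisp max-min composition: y = max_i min(w_i, x_i).
    return np.minimum(w, x).max()

def soft_max(v, p):
    # Aggregate (exponential) smoothing of max over the entries of v,
    # written in an overflow-safe shifted form.
    shift = v.max()
    return shift + np.log(np.exp(p * (v - shift)).sum()) / p

def smoothed_output(w, x, p):
    # Differentiable surrogate: soft min via -soft_max(-v, p), then soft max.
    soft_mins = np.array([-soft_max(np.array([-wi, -xi]), p)
                          for wi, xi in zip(w, x)])
    return soft_max(soft_mins, p)

# The surrogate approaches the crisp output as p grows (hypothetical data).
w, x = np.array([0.2, 0.9, 0.5]), np.array([0.7, 0.4, 0.6])
print(maxmin_output(w, x))                     # 0.5
for p in (1.0, 10.0, 100.0):
    print(smoothed_output(w, x, p))            # tends to 0.5

Training would then minimize a squared-error evaluation function of smoothed_output by gradient descent while the aggregate parameter p is driven upward, which is one reading of the "gradient asymptotical" aspect: the smooth surrogate approaches the crisp max-min network as p grows.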
Keywords/Search Tags: Min-max Problems, Aggregate Smoothing, Truncated Strategy, Smoothing Quasi-Newton Method, Max-min Fuzzy Neural Network