
Some Stochastic Algorithms For Global Optimization

Posted on: 2009-02-24
Degree: Doctor
Type: Dissertation
Country: China
Candidate: X W Zhang
Full Text: PDF
GTID: 1100360272982198
Subject: Applied Mathematics
Abstract/Summary:
In practice, mathematical models are often high-dimensional or lack good analytic properties, so deterministic global optimization algorithms struggle with, or cannot solve, such problems. As the dimension of a problem grows, the number of local optimal solutions typically increases rapidly, and finding the global optimal solution among thousands of local optima is a significant challenge. With the advent of stochastic optimization algorithms, of which genetic algorithms are representative, many problems that are intractable for deterministic algorithms can be solved well. These stochastic algorithms generally arise from simulating natural phenomena or social behaviour, and they handle high-dimensional, multimodal, noisy and non-differentiable problems well. Their main characteristic is that the objective function needs few analytic properties and may not even have an explicit expression. However, the stochastic techniques employed and the lag of the underlying theory make the design of robust algorithms that can efficiently solve high-dimensional, difficult and complex problems a major research issue.

Aiming at unconstrained global optimization problems, this dissertation proposes several stochastic global optimization algorithms with the main purpose of increasing universality, efficiency and robustness. The main contributions and original ideas of the dissertation are summarized as follows:

(1) Two classes of hybrid genetic algorithms are presented that borrow the advantages of deterministic algorithms. First, combining the randomness of the reproduction operators of the genetic algorithm with the efficiency of the trust region method for solving quadratic subproblems, a trust region-genetic algorithm is presented. The algorithm overcomes the limitations of the trust region method and efficiently solves a class of deceptive problems.
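The hybrid idea in (1) — alternating genetic variation with a deterministic local step on the current best individual — can be sketched as follows. This is a minimal illustration, not the thesis's algorithm: `local_refine` is a crude shrinking-neighbourhood stand-in for solving the trust-region subproblem, and the sphere objective, genetic operators and all parameters are assumptions.

```python
import random

def sphere(x):
    """Test objective: global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def local_refine(f, x, radius=0.5, iters=20):
    """Stand-in for the trust-region step: sample inside a
    neighbourhood, keep improvements, shrink on failure."""
    best, fbest = list(x), f(x)
    for _ in range(iters):
        cand = [xi + random.uniform(-radius, radius) for xi in best]
        fc = f(cand)
        if fc < fbest:
            best, fbest = cand, fc
        else:
            radius *= 0.7  # shrink the region, as a trust-region method would
    return best, fbest

def hybrid_ga(f, dim=5, pop_size=20, gens=100, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)
        # GA step: truncation selection + blend crossover + Gaussian mutation
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            children.append([(ai + bi) / 2 + random.gauss(0, 0.1)
                             for ai, bi in zip(a, b)])
        pop = parents + children
        # hybrid step: deterministically refine the current best individual
        pop[0], _ = local_refine(f, pop[0])
    pop.sort(key=f)
    return pop[0], f(pop[0])
```

Because the refined best individual is kept across generations, the best objective value is monotonically non-increasing, while the genetic operators keep exploring globally.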
Numerical experiments show that the algorithm works efficiently. Second, to overcome the high computational cost of traditional interval optimization algorithms on high-dimensional problems, an interval-genetic algorithm is proposed, which combines an interval algorithm with a genetic algorithm. On the one hand, at each iteration the branch-and-bound method of the interval algorithm provides a gradually shrinking search domain for the genetic algorithm; on the other hand, an upper bound on the global optimum provided by the genetic algorithm is used to discard intervals in the working set that cannot contain the global optimal solution, which accelerates convergence. Numerical experiments show that the proposed algorithm is more efficient than traditional interval optimization algorithms, and this advantage becomes more significant on high-dimensional problems.

(2) The common Moore-Skelboe rule and Hansen rule guarantee the reliability of the interval algorithm. However, as the dimension increases, the high computational cost and the shortage of memory become bottlenecks of the interval algorithm. Constructing a new interval selection rule is therefore an important remedy: a good selection rule can greatly reduce the computational cost while preserving reliability. A new selection criterion inspired by Casado's rule is proposed. The criterion selects a set of intervals whose number is bounded by a constant, so the complexity does not grow greatly as the dimension of the problem increases. Based on this selection criterion, an interval algorithm is presented.
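The cut-off test at the heart of these interval methods — discard any box whose interval lower bound exceeds a known upper bound on the global optimum — can be sketched as follows. A minimal sketch, assuming the sphere function (whose interval bounds are easy to write by hand); here the upper bound comes from midpoint evaluations, standing in for the value that the genetic algorithm supplies in the interval-genetic hybrid.

```python
def sphere(x):
    """Objective with easy interval bounds: global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def box_lower_bound(box):
    """Interval-arithmetic lower bound of the sphere function over a box:
    a coordinate interval straddling 0 contributes 0, otherwise the
    smaller squared endpoint."""
    return sum(0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)
               for lo, hi in box)

def midpoint(box):
    return [(lo + hi) / 2.0 for lo, hi in box]

def branch_and_bound(box, tol=1e-3):
    """Interval branch-and-bound with a cut-off test: boxes whose lower
    bound exceeds the current upper bound cannot contain the global
    minimum and are discarded from the working set."""
    work = [box]
    upper = sphere(midpoint(box))
    while work:
        work.sort(key=box_lower_bound)      # Moore-Skelboe: lowest bound first
        b = work.pop(0)
        lb = box_lower_bound(b)
        if lb > upper:                      # cut-off test: prune the box
            continue
        upper = min(upper, sphere(midpoint(b)))
        width, k = max((hi - lo, i) for i, (lo, hi) in enumerate(b))
        if width < tol:
            return lb, upper                # enclosure [lb, upper] of the minimum
        lo, hi = b[k]                       # bisect the widest coordinate
        mid = (lo + hi) / 2.0
        work.append(b[:k] + [(lo, mid)] + b[k + 1:])
        work.append(b[:k] + [(mid, hi)] + b[k + 1:])
    return None
```

The tighter the upper bound injected into `upper`, the more boxes the cut-off test discards — which is exactly why a good population-based estimate speeds up the interval method.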
A large number of numerical experiments show that the proposed algorithm is superior in reliability and convergence speed; it also improves the known global optimum of the 100-dimensional Michalewicz benchmark function from -99.2784 to -99.3289.

(3) To improve the performance of differential evolution, a new hybrid differential variation strategy is presented. The strategy regards each individual in the population as a charged particle and uses the particles' charges and the attraction-repulsion mechanism among them to determine the direction and magnitude of motion. The strategy compels each individual to move self-adaptively in the direction of the force exerted on it by three other individuals, thereby avoiding the difficulty of setting the scale factor. Numerical experiments show that differential evolution with this strategy balances global search against fast convergence well.

(4) A suitable parameter setting usually improves the performance of differential evolution greatly, but choosing these parameters is troublesome: differential evolution with one proposed parameter setting is often superior in some respects and inferior in others. Three parameter-free differential evolution algorithms are presented, each borrowing the idea of the attraction-repulsion mechanism among charged particles from the electromagnetism-like algorithm and the Taguchi method used in engineering design. These algorithms eliminate the scale factor F and the crossover probability Cr, leaving a single parameter: the population size POP. Numerical experiments show that the proposed algorithms perform better.

(5) The signal-to-noise ratio in the traditional Taguchi method generally takes 0 as the expectation of the quality characteristic.
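Strategy (3) can be illustrated with a sketch in the spirit of the description above. Everything here is an assumption rather than the thesis's exact operator: the charge formula is modelled loosely on the electromagnetism-like mechanism, the sign convention (attraction toward better partners, repulsion from worse) follows the prose, and the parameters are arbitrary. The point is only that the step size along each difference vector comes from the charges, so no scale factor F needs to be hand-tuned.

```python
import math, random

def sphere(x):
    """Test objective: global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def charge(f_i, f_best, f_worst):
    """Charge of an individual: better fitness carries more charge
    (a simplified, assumed variant of the electromagnetism-like formula)."""
    if f_worst == f_best:
        return 1.0
    return math.exp(-(f_i - f_best) / (f_worst - f_best))

def de_attraction_repulsion(f, dim=3, pop_size=20, gens=100, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        f_best, f_worst = min(fit), max(fit)
        for i in range(pop_size):
            # three distinct partners, as in DE/rand/1 mutation
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            qi = charge(fit[i], f_best, f_worst)
            trial = list(pop[i])
            for k in range(dim):
                force = 0.0
                for j in (a, b, c):
                    qj = charge(fit[j], f_best, f_worst)
                    d = pop[j][k] - pop[i][k]
                    # attraction toward better partners, repulsion from worse
                    force += qi * qj * d if fit[j] < fit[i] else -qi * qj * d
                trial[k] = min(hi, max(lo, pop[i][k] + force))
            ft = f(trial)
            if ft < fit[i]:               # greedy DE selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]
```

Greedy selection keeps each individual's fitness monotonically non-increasing, so the charge-driven moves can explore without destabilizing the population.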
If an evolutionary algorithm based on the traditional Taguchi method is applied to a problem whose global optimum is not 0, the method is unreasonable, since it tacitly takes the global optimal value of the problem to be 0. Based on this observation, a modified parameter-free differential evolution, an improved version of the algorithm in (4), is proposed. Numerical experiments on twenty benchmark test functions show that the proposed algorithm efficiently solves 30- and 100-dimensional problems and outperforms the compared algorithms. Moreover, the known global optimum of the 100-dimensional Michalewicz benchmark function is improved from -99.3289 to -99.61225.
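The issue in (5) is easiest to see from the classical "smaller-the-better" signal-to-noise ratio, S/N = -10 log10((1/n) Σ y_i²), which measures deviation from a target of 0. When a problem's true optimum is far from 0 (for example around -99 for the 100-dimensional Michalewicz function), this ratio barely distinguishes a tight sample near the optimum from a loose one. A minimal sketch; the shifted variant below is an illustrative assumption, not the thesis's exact modification.

```python
import math

def sn_smaller_better(ys):
    """Classical 'smaller-the-better' Taguchi S/N ratio:
    implicitly measures deviation from a target value of 0."""
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))

def sn_shifted(ys, f_ref):
    """Sketch of a shifted S/N ratio: measure deviation from a
    reference value f_ref (e.g. the best objective value found so
    far) instead of 0, so problems whose optimum is nonzero are
    ranked sensibly. (Illustrative assumption.)"""
    return -10.0 * math.log10(sum((y - f_ref) ** 2 for y in ys) / len(ys))
```

For two samples near -99, one tight ([-98.9, -99.0, -99.1]) and one loose ([-98.0, -99.0, -100.0]), the classical ratio is nearly identical for both, while the shifted ratio clearly prefers the tighter sample.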
Keywords/Search Tags: Global Optimization, Stochastic Algorithm, Differential Evolution, Interval Method, Particle Swarm Optimization, Taguchi Method