
Research On Neural Network Architecture Search And Parameter Optimization Based On Particle Swarm Optimization Algorithm

Posted on: 2024-07-25
Degree: Master
Type: Thesis
Country: China
Candidate: T H Yang
Full Text: PDF
GTID: 2568307106970749
Subject: Control Science and Engineering
Abstract/Summary:
In recent years, scholars at home and abroad have paid considerable attention to neural network architecture search and parameter optimization, and many have studied these problems with a variety of methods. The aim is to find the optimal neural network configuration for a specific problem, such as the network structure and the connection weights. By treating this as an optimization problem, a researcher may either search for the optimal network architecture or optimize parameters such as the network weights individually. In this paper, with the goal of obtaining the simplest neural network structure together with optimal connection weights, we investigate neural network architecture search and parameter optimization methods based on the Particle Swarm Optimization (PSO) algorithm. The work contains the following three parts:

(1) Because applications of neural networks lack theoretical guidance for choosing the network scale, a simple and effective network growth method is proposed. A growth rate function is designed that combines several indicators of network performance, such as the recognition rate and the loss value, and reflects the remaining room for improvement; this function guides the network, during training, to enlarge its scale and adjust its parameters. While the structure is being adjusted, selected neurons are split, deleted, or have their weight values adjusted, so that the most streamlined and effective network structure and weights are obtained.

(2) An improved Particle Swarm Optimization algorithm with curiosity, named PSO-C, is proposed and applied to optimize the parameters of feedforward neural networks, such as their weights. PSO-C divides the swarm into two classes of particles with different curiosity characteristics: physical-curiosity particles and social-curiosity particles. Driven by their different curiosities, the particles form preferences in their direction of movement, searching around their personal best positions or around the current global best position. A chaos factor is also introduced to strengthen the algorithm's ability to escape local optima in the search space. The neural network weights are encoded as the particle positions, and parameters such as the weights are learned with the proposed curiosity-based PSO algorithm; experimental results show that the resulting network achieves better performance.
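The following minimal sketch, in Python with NumPy, illustrates the kind of scheme described in part (2): the weights of a small feedforward network are flattened into each particle's position, the swarm is split into physical-curiosity and social-curiosity particles that lean toward their personal best or the global best respectively, and a logistic-map chaos factor perturbs the positions. The toy data, the fixed half-and-half split, and the coefficient values are illustrative assumptions, not the exact formulation used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data and a one-hidden-layer network whose flattened weights form a particle position.
X = rng.normal(size=(64, 4))
y = (X.sum(axis=1) > 0).astype(float)
n_in, n_hid = 4, 6
dim = n_in * n_hid + n_hid            # hidden-layer weights + output weights (biases omitted)

def fitness(p):
    """Mean squared error of the network encoded by the flat weight vector p."""
    W1 = p[:n_in * n_hid].reshape(n_in, n_hid)
    w2 = p[n_in * n_hid:]
    h = np.tanh(X @ W1)
    out = 1.0 / (1.0 + np.exp(-(h @ w2)))
    return np.mean((out - y) ** 2)

n_particles, iters = 30, 200
pos = rng.uniform(-1, 1, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
chaos = 0.7                                    # logistic-map state used as the chaos factor

# Fixed curiosity split (an assumption): first half "physical" (prefer the personal best),
# second half "social" (prefer the global best).
physical = np.arange(n_particles) < n_particles // 2

for _ in range(iters):
    chaos = 4.0 * chaos * (1.0 - chaos)        # logistic map stays in (0, 1)
    r1, r2 = rng.random((2, n_particles, 1))
    c1 = np.where(physical, 2.0, 0.5)[:, None]   # cognitive weight, larger for physical particles
    c2 = np.where(physical, 0.5, 2.0)[:, None]   # social weight, larger for social particles
    vel = 0.7 * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel + 0.01 * (chaos - 0.5)     # small chaotic perturbation to escape local optima
    f = np.array([fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best training MSE:", pbest_f.min())
```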
(3) To enable the neural network to learn still better parameters such as weights, a further improved curiosity-based Particle Swarm Optimization algorithm is proposed, building on the classification of particles in the swarm, to enhance the algorithm's parameter search capability. The particles are classified dynamically, and the velocity update and movement patterns of the physical-curiosity and social-curiosity particles are redesigned. Physical-curiosity particles focus their search on exploiting local regions, while social-curiosity particles take information from neighboring particles into account, which helps maintain swarm diversity and strengthens global exploration. An internal indicator, computed from the change in a particle's fitness value, is also designed to guide particle behavior so that the appropriate treatment can be applied according to the result. The implementation includes changes to particle movement speed, hybridization with the differential evolution algorithm, and the introduction of an elite learning strategy. In addition, a linearly decaying inertia coefficient is used, so that the algorithm focuses on global exploration at the beginning of the optimization process and gradually shifts to local exploitation as the process converges. Experimental results on a series of benchmark functions show that the proposed algorithm has outstanding optimization capability on multimodal problems, and experimental results on feedforward neural network parameter optimization show that it achieves better search capability than the algorithm proposed in part (2).
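The sketch below illustrates, on the Rastrigin benchmark, some of the mechanisms described in part (3): particles are re-classified each iteration from an internal indicator (how long since their personal best improved), social-curiosity particles are attracted to the best position within a small ring neighborhood, and the inertia coefficient decays linearly. The differential evolution hybridization and the elite learning strategy are omitted, and the stagnation threshold, neighborhood size, and coefficients are assumptions made for illustration rather than the thesis's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def rastrigin(x):
    """Classic multimodal benchmark; the global minimum is 0 at the origin."""
    return 10 * x.shape[-1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=-1)

dim, n, iters = 10, 40, 500
pos = rng.uniform(-5.12, 5.12, (n, dim))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), rastrigin(pos)
gbest = pbest[pbest_f.argmin()].copy()
stall = np.zeros(n, dtype=int)   # iterations since each particle last improved its personal best
idx = np.arange(n)
neighbors = np.stack([np.roll(idx, 1), idx, np.roll(idx, -1)], axis=1)  # ring neighborhood indices

for t in range(iters):
    w = 0.9 - (0.9 - 0.4) * t / iters              # linearly decaying inertia coefficient
    # Internal indicator (an assumption): particles stagnant for 3 iterations become "social".
    social = stall >= 3
    # Social particles follow the best personal best within their ring neighborhood;
    # physical particles keep following the global best and exploit their local region.
    ring_best = pbest[neighbors[idx, np.argmin(pbest_f[neighbors], axis=1)]]
    attractor = np.where(social[:, None], ring_best, gbest)
    r1, r2 = rng.random((2, n, 1))
    vel = w * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (attractor - pos)
    pos = np.clip(pos + vel, -5.12, 5.12)
    cur = rastrigin(pos)
    improved = cur < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], cur[improved]
    stall = np.where(improved, 0, stall + 1)
    gbest = pbest[pbest_f.argmin()].copy()

print("best Rastrigin value found:", pbest_f.min())
```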
Keywords/Search Tags: artificial neural network, growth rate function, particle swarm optimization algorithm, artificial curiosity