
Automatic Architecture Optimization Strategy Of Generative Adversarial Networks

Posted on: 2022-07-10
Degree: Master
Type: Thesis
Country: China
Candidate: Y Fan
Full Text: PDF
GTID: 2518306557968529
Subject: Computer application technology
Abstract/Summary:
The Generative Adversarial Network (GAN) is an important generative model, widely used in tasks such as image generation. However, a GAN is usable only if its architecture is carefully designed, so engineers have spent considerable time and effort designing GAN architectures. In recent years, the emergence of Neural Architecture Search (NAS) has made automatic optimization of GANs possible. Most NAS research, however, targets classification networks, and work on NAS for GANs remains very limited. Moreover, GANs have distinctive characteristics: their performance depends heavily on their architecture, and that performance is uncertain and time-consuming to evaluate. These characteristics mean that NAS methods must be adapted before they can search for GAN architectures. The main work of this paper is therefore to improve GAN-oriented NAS. Since AutoGAN is a classical GAN-oriented NAS method, all of the work in this paper builds on it.

Because AutoGAN ignores the performance differences among candidate networks in the preceding cells, this paper first proposes Improved AutoGAN. It is found that the architecture of the candidate networks in preceding cells affects the overall performance of the network. Whereas AutoGAN selects candidate networks in preceding cells at random, Improved AutoGAN uses the gradient bandit algorithm to select high-performance networks, and introduces a temperature coefficient to prevent the search results from falling into a local optimum. Searching the same space as AutoGAN on the CIFAR-10 dataset, the resulting GAN achieves an FID score of 11.60, surpassing AutoGAN, and the search results transfer well to other settings.

Because AutoGAN's controller is easily disturbed by evaluation results with large errors, which can bias the search direction, this paper further proposes Stable AutoGAN. It is first proved that using random sampling instead of probabilistic sampling during training does not affect the training of the controller. On this basis, the robustness of the controller is enhanced by a multi-controller model: during the search, each controller learns its sampling strategy independently, a credibility score is introduced to measure how well each controller has been trained, and the frequency with which each controller is employed is determined by its credibility score. The search processes of AutoGAN and Stable AutoGAN were each repeated 5 times on the CIFAR-10 dataset; the standard deviation of the FID scores of the GANs found by Stable AutoGAN is approximately 1/16 of AutoGAN's, while the FID scores themselves are comparable to AutoGAN's.

Because AutoGAN's search process is time-consuming, this paper finally proposes Efficient AutoGAN. The main cause of the long search time is the time spent evaluating the performance of candidate networks. Since a single evaluation cannot be shortened, the key to compressing the search time is to reduce the number of evaluations. A performance predictor is therefore introduced to predict the performance of candidate networks in place of actual testing. The main body of the predictor is a Graph Convolutional Network (GCN), which also integrates embedded architecture information from the preceding cells. Test results show that Efficient AutoGAN reduces the search time by nearly half while producing search results comparable to AutoGAN's.
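To illustrate the selection mechanism used in Improved AutoGAN, the following is a minimal sketch of a gradient bandit with a temperature-scaled softmax. The candidate count, reward values, learning rate, and temperature below are illustrative assumptions, not the thesis's actual settings; in the real search, the reward would come from evaluating the GAN built with the sampled candidate.

```python
import math
import random

def softmax(prefs, temperature):
    """Softmax over preferences with a temperature coefficient.

    A higher temperature flattens the distribution, encouraging
    exploration and helping the search avoid local optima."""
    exps = [math.exp(h / temperature) for h in prefs]
    total = sum(exps)
    return [e / total for e in exps]

def gradient_bandit_step(prefs, baseline, action, reward, lr, temperature):
    """One gradient-bandit update: raise the preference of the chosen
    candidate when its reward beats the running baseline, lower it otherwise."""
    probs = softmax(prefs, temperature)
    for a in range(len(prefs)):
        indicator = 1.0 if a == action else 0.0
        prefs[a] += lr * (reward - baseline) * (indicator - probs[a])
    return prefs

# Toy run with 4 candidates; candidate 2 yields the highest reward.
random.seed(0)
prefs, baseline, n = [0.0] * 4, 0.0, 0
true_reward = [0.1, 0.3, 0.9, 0.2]  # stand-in for evaluated GAN scores
for _ in range(500):
    probs = softmax(prefs, temperature=1.0)
    action = random.choices(range(4), weights=probs)[0]
    reward = true_reward[action]
    n += 1
    baseline += (reward - baseline) / n  # incremental mean as baseline
    gradient_bandit_step(prefs, baseline, action, reward, lr=0.1, temperature=1.0)

best = max(range(4), key=lambda a: prefs[a])
print(best)  # the highest-reward candidate should dominate
```

Raising the temperature spreads probability mass more evenly over candidates, which is one way to keep low-preference candidates from being abandoned too early.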
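The credibility-driven employment of controllers in Stable AutoGAN can be sketched as weighted sampling over controllers. The controller names, credibility values, and simple proportional weighting below are hypothetical placeholders; the thesis's exact credibility formula is not reproduced here.

```python
import random

def pick_controller(credibility, rng):
    """Choose which controller to employ for the next sampling round.

    Employment frequency is proportional to each controller's credibility
    score (a hypothetical weighting for illustration only)."""
    names = list(credibility)
    total = sum(credibility.values())
    weights = [credibility[name] / total for name in names]
    return rng.choices(names, weights=weights)[0]

rng = random.Random(42)
# Hypothetical scores: controller_a has been training most reliably.
credibility = {"controller_a": 3.0, "controller_b": 1.0, "controller_c": 1.0}
counts = {name: 0 for name in credibility}
for _ in range(5000):
    counts[pick_controller(credibility, rng)] += 1

# controller_a (credibility 3/5) should be employed roughly 60% of the time.
print(counts["controller_a"] / 5000)
```

Because each controller learns its sampling strategy independently, down-weighting a poorly trained controller this way limits how much a few noisy evaluations can steer the overall search direction.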
Keywords/Search Tags:Generative Adversarial Network, Neural Architecture Search, Gradient Bandit Algorithm, Multi-controller Model, Performance Prediction