In recent years, neural networks have evolved toward deeper and more complex architectures to obtain stronger feature-representation capacity. However, manually designed models demand significant expert experience and prohibitive trial-and-error costs, which limits the development of deep neural networks. Neural Architecture Search (NAS), which automatically designs a well-performing network architecture for a specific task under limited computing resources, has therefore received increasing attention. NAS-discovered models have already surpassed manually designed neural networks on tasks such as image classification, object detection, and semantic segmentation. However, traditional NAS methods consider only a single objective (e.g., classification accuracy) or a specific deployment scenario, ignoring other factors such as the number of parameters, FLOPs, and power consumption, which limits their application and development in realistic tasks. We therefore focus on multi-objective NAS for image classification and study it from three aspects: improving search efficiency, enhancing multi-objective optimization, and accelerating the training and evaluation of each candidate network.

First, to overcome the drawbacks of layer-wise network morphism in existing multi-objective NAS algorithms, we propose a network morphism-based NAS algorithm called Block NM NAS, which uses convolutional blocks for initialization and modularization and thereby significantly improves search efficiency. Four modularization structures are designed and evaluated in Block NM NAS: the RBNC, CBNR, BNRC, and RCBN blocks (a sketch follows below), and the effects of initialization and modularization on network morphism-based NAS algorithms are investigated. Experiments show that Block NM NAS with the CBNR block finds a lightweight model with only 2.72 M parameters and a 3.53% test error on the CIFAR-10 dataset. Compared with the layer-wise network morphism-based NAS algorithm, the best test error is reduced by 10.77%, and the number of candidate networks searched within a fixed time budget increases by 81, which effectively improves the efficiency of the NAS algorithm.

Second, we propose the GP-LEMONADE algorithm to address the irrationality and redundancy of sampling based only on cheap objectives in the LEMONADE algorithm. To make the sampling process more efficient, we design an online predictor based on Gaussian processes and sample candidate networks according to both cheap and expensive objectives. Experiments show that GP-LEMONADE, evolved for 100 generations, obtains a state-of-the-art model with 7.17 M parameters and a 3.98% test error in only 7.38 GPU days, 26.75 GPU days fewer than LEMONADE, which effectively improves the search efficiency of multi-objective NAS algorithms.
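To make the four modularization structures concrete, the following minimal sketch builds each block under the assumption that its name encodes the operator order (e.g., CBNR = Convolution, then BatchNorm, then ReLU); the kernel size, channel counts, and helper name are illustrative, not the thesis implementation.

```python
import torch
import torch.nn as nn

def make_block(order: str, in_ch: int, out_ch: int, k: int = 3) -> nn.Sequential:
    """Build one modularization block from an operator-order string.

    Hypothetical reading of the acronyms: C = Conv2d, BN = BatchNorm2d,
    R = ReLU, so "CBNR" -> Conv -> BN -> ReLU and "BNRC" -> BN -> ReLU -> Conv.
    """
    tokens = order.replace("BN", "B")          # treat "BN" (BatchNorm) as one token
    layers, ch = [], in_ch
    for op in tokens:
        if op == "C":
            layers.append(nn.Conv2d(ch, out_ch, k, padding=k // 2, bias=False))
            ch = out_ch                        # channel count changes at the conv
        elif op == "B":
            layers.append(nn.BatchNorm2d(ch))  # BN width depends on its position
        elif op == "R":
            layers.append(nn.ReLU(inplace=True))
    return nn.Sequential(*layers)

# All four structures map a CIFAR-10-sized input from 3 to 16 channels.
x = torch.randn(1, 3, 32, 32)
for name in ["RBNC", "CBNR", "BNRC", "RCBN"]:
    print(name, make_block(name, 3, 16)(x).shape)
```

Likewise, the online Gaussian-process predictor in GP-LEMONADE can be thought of as a surrogate that maps cheap objectives (e.g., parameter count and FLOPs) to a predicted expensive objective (e.g., validation error) and keeps only the most promising children for full training. The sketch below uses scikit-learn's GaussianProcessRegressor; the feature set, kernel, and acquisition rule are assumptions for illustration, not the thesis design.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

class OnlineGPPredictor:
    """Online surrogate: cheap objectives -> predicted expensive objective."""

    def __init__(self):
        self.gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        self.X, self.y = [], []

    def update(self, cheap_feats, expensive_value):
        # Refit after every fully evaluated candidate (the "online" part).
        self.X.append(cheap_feats)
        self.y.append(expensive_value)
        self.gp.fit(np.asarray(self.X), np.asarray(self.y))

    def sample(self, candidates, n):
        # Rank children by an optimistic (LCB-style) score of the predicted
        # expensive objective; lower predicted error is better.
        feats = np.asarray([c["cheap"] for c in candidates])
        mean, std = self.gp.predict(feats, return_std=True)
        keep = np.argsort(mean - std)[:n]
        return [candidates[i] for i in keep]

# Illustrative usage with made-up numbers: (params, FLOPs) -> validation error.
pred = OnlineGPPredictor()
pred.update([3.0e6, 4.0e8], 0.060)
pred.update([9.0e6, 1.2e9], 0.045)
children = [{"cheap": [2.0e6, 3.0e8]}, {"cheap": [6.0e6, 8.0e8]}]
top = pred.sample(children, n=1)
```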
Finally, we propose a general heuristic mixed-precision training method for modular network morphism-based NAS algorithms, which shortens the training time of candidate networks and further improves the search efficiency of GP-LEMONADE. We verify and analyze the acceleration brought by five typical mixed-precision strategies on network models of different scales, in terms of training time and memory consumption. Experiments show that with mixed-precision training, the time required for GP-LEMONADE to evolve for 100 generations is reduced by 1.38 GPU days, accelerating the search process by 19%. Compared with training under the O3 strategy, our method accelerates the search process by 10%, which effectively improves the search efficiency of multi-objective NAS algorithms.
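For reference, a typical mixed-precision training loop in PyTorch looks like the sketch below, using autocast and GradScaler from torch.cuda.amp. This illustrates the general technique being accelerated, not the heuristic strategy-selection method proposed here; the O3 label suggests the five compared strategies follow NVIDIA Apex-style optimization levels, which is an assumption on our part.

```python
import torch
from torch.cuda.amp import autocast, GradScaler

def train_amp(model, loader, optimizer, epochs, device="cuda"):
    """Generic automatic mixed-precision training loop (illustrative only)."""
    criterion = torch.nn.CrossEntropyLoss()
    scaler = GradScaler()                  # scales the loss to avoid fp16 underflow
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad(set_to_none=True)
            with autocast():               # forward pass runs in mixed precision
                loss = criterion(model(images), labels)
            scaler.scale(loss).backward()  # backward on the scaled loss
            scaler.step(optimizer)         # unscale gradients, then optimizer step
            scaler.update()                # adapt the loss-scale factor
```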