In recent years, as artificial neural networks have grown deeper and developed more complex structures, their computational costs have increased dramatically, making it extremely difficult to port state-of-the-art deep learning models to less powerful platforms such as embedded environments and limiting their potential applications across many fields. Therefore, a network simplification method that is compatible with current network structures and training schedules is needed. Such a method would not only improve the interpretability of deep learning models but also dramatically reduce their computational costs. During the development of the human brain, massive neurogenesis is followed by notable regressive events in which roughly half of all synapses are eliminated through synaptic pruning. This pruning is largely driven by environmental influences and is considered to represent the actual process of learning. Inspired by synaptic pruning in mammalian brains, we develop an adaptive deep learning framework consisting of smooth parameter initialization, convolutional pooling layers, and a dynamic network pruning method. Smooth initialization causes important and redundant parameters to become markedly differentiated during training, so unimportant weights can easily be removed using standard-deviation thresholds. Unlike some weight reduction methods, we prune a large portion of the parameters in the flatten layers by employing convolutional pooling layers, turning fully connected layers into convolutional layers of the same computational complexity. The dynamic pruning method detects and drops more than half of the redundant connections during normal training; it requires neither pre-training to learn the network's connectivity nor lengthy re-training to recover from the accuracy loss caused by the removed connections. Experiments show that the proposed methods increase test accuracy in several deep learning models, and our dynamic pruning method outperforms several weight reduction methods in both reduction ratio and test accuracy. Specifically, we compress the LeNet-5 model to 6% of its original size, achieve higher accuracy than several weight reduction methods on CIFAR-10 classification, produce more natural-looking images from Generative Adversarial Networks, and remove 91% of the parameters in a text classification model. Furthermore, to demonstrate the effectiveness of the proposed methods on real-life samples, we apply the adaptive deep learning framework to the practical problem of vehicle classification.
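The standard-deviation thresholding mentioned above can be illustrated with a minimal sketch. This is not the paper's implementation; the scale factor `k` and the use of a single weight vector are illustrative assumptions. Weights whose magnitude falls below `k` standard deviations are zeroed out, pruning the corresponding connections:

```python
import random
import statistics

def prune_by_std(weights, k=1.0):
    """Zero out weights with magnitude below k standard deviations.

    A minimal sketch of standard-deviation threshold pruning;
    k is a hypothetical hyperparameter, not taken from the paper.
    """
    threshold = k * statistics.pstdev(weights)
    return [w if abs(w) >= threshold else 0.0 for w in weights]

# Example: prune a randomly initialized weight vector.
random.seed(0)
weights = [random.gauss(0.0, 1.0) for _ in range(10000)]
pruned = prune_by_std(weights, k=1.0)
sparsity = sum(1 for w in pruned if w == 0.0) / len(pruned)
# For N(0, 1) weights, about 68% lie within one standard deviation,
# so roughly two thirds of the connections are dropped here.
```

In a real network the surviving weights would continue training under a mask, so that pruning interleaves with the normal training schedule rather than requiring a separate re-training phase.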
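The claim that fully connected layers can be turned into convolutional layers of the same computational complexity rests on a standard equivalence, sketched below under illustrative names and shapes (not the paper's code): a convolution whose kernel spans the entire feature map reduces to one dot product per output channel, which is exactly a fully connected layer.

```python
def fc_forward(x_flat, weight, bias):
    """Fully connected layer: one dot product per output unit.
    weight holds one row of coefficients per output over the flat input."""
    return [b + sum(w * x for w, x in zip(row, x_flat))
            for row, b in zip(weight, bias)]

def conv_forward_full_kernel(x_chw, kernels, bias):
    """Convolution whose kernel covers the whole C x H x W input.
    With kernel size equal to the spatial size, each output channel
    is a single dot product, matching the fully connected layer."""
    x_flat = [v for channel in x_chw for row in channel for v in row]
    return fc_forward(x_flat, kernels, bias)

# A tiny 2-channel 2x2 feature map with two output units / channels.
x = [[[1.0, 2.0], [3.0, 4.0]],
     [[5.0, 6.0], [7.0, 8.0]]]
w = [[0.1] * 8, [0.2] * 8]   # hypothetical weights, flattened kernels
b = [0.0, 1.0]
fc_out = fc_forward([v for c in x for r in c for v in r], w, b)
conv_out = conv_forward_full_kernel(x, w, b)
# Both paths perform the same multiply-accumulates, so the outputs
# (and the computational complexity) are identical.
```

Viewing the layer as a convolution is what lets convolutional pooling act on it, which is how the framework prunes parameters that would otherwise sit in the flatten layers.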