
Research and Implementation of an Incremental Learning Model Based on Neural Networks

Posted on: 2022-09-21
Degree: Master
Type: Thesis
Country: China
Candidate: W C Wang
Full Text: PDF
GTID: 2518306329498764
Subject: Computer technology
Abstract/Summary:
With the wide application of neural networks, their shortcomings have become increasingly apparent. Because of the "catastrophic forgetting" problem, they cannot perform incremental learning. In recent years, the related field of transfer learning has developed rapidly, but most transfer methods focus on a model's performance on new tasks, while its performance on past tasks is often ignored. As a special form of transfer learning, the main task of incremental learning is to solve the catastrophic forgetting problem.

This thesis explains catastrophic forgetting from another perspective: training a neural network places strong requirements on the distribution of the training data. If the training data do not match the target distribution, the network overfits part of the data and training fails. In incremental learning, the data of later tasks may not match the distribution of the training data of earlier tasks, so catastrophic forgetting occurs.

Based on this understanding of catastrophic forgetting, a faster method named RFD (Random Sample Distribution Fitting) is proposed to avoid the time cost of traditional incremental learning methods that reconstruct the full training data distribution. The method feeds random inputs into the previously trained network; each random input together with the network's response to it serves as an input-output pair for the new network, and these pairs are trained jointly with the new task's data so that the new network keeps roughly the same distribution as on the previous tasks (a minimal sketch of this idea follows the abstract).

The following work is done around this model:
(1) The probability distribution formula is derived to justify the method theoretically.
(2) The method is shown to restrain catastrophic forgetting in the two scenarios proposed in this thesis.
(3) The characteristics of the model are discussed and appropriate model parameters are found.
(4) The model is compared with other neural-network-based incremental learning models, demonstrating its advantages over them.

In recent years, Neural Architecture Search (NAS) has attracted research attention, and a NAS network, as a special kind of neural network, can also suffer from catastrophic forgetting. Moreover, simultaneous changes of structure and weights strongly affect transfer, and existing transfer methods have complex structures whose results are difficult to interpret. It is conjectured that the RFD method proposed here is also effective when applied to NAS. For this reason, this thesis redefines the search method and verifies its properties experimentally. The experiments show that the method makes the searched structure more stable, effectively alleviates the effect of simultaneous weight and structure changes on incremental learning in NAS, and saves the corresponding transfer cost in task transfer.
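The core of RFD as described above is a form of pseudo-rehearsal: label random inputs with the old network's responses, then fit the new network to those pairs alongside the new task's data. The following PyTorch sketch illustrates that idea only; the model and function names, the uniform sampling range, the loss choices, and the weighting factor lam are illustrative assumptions, not details taken from the thesis.

import torch
import torch.nn as nn

# Assumed setup (hypothetical names): `old_model` is the network trained
# on previous tasks and is kept frozen; `new_model` starts as a copy of
# it and is updated on the new task.

def rfd_batch(old_model, batch_size, in_dim, low=-1.0, high=1.0):
    """Draw random inputs and label them with the old network's outputs."""
    x = torch.empty(batch_size, in_dim).uniform_(low, high)
    with torch.no_grad():
        y = old_model(x)  # old network's response becomes the soft target
    return x, y

def train_step(new_model, old_model, optimizer, new_x, new_y, lam=1.0):
    """One update mixing new-task data with random-sample fitting pairs."""
    rx, ry = rfd_batch(old_model, new_x.size(0), new_x.size(1))
    task_loss = nn.CrossEntropyLoss()(new_model(new_x), new_y)
    # Fitting term: keep the new network's responses to random inputs
    # close to the old network's, preserving the earlier distribution.
    fit_loss = nn.MSELoss()(new_model(rx), ry)
    loss = task_loss + lam * fit_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

In this reading, lam trades off plasticity on the new task against stability on the old ones; the thesis's actual loss formulation and sampling distribution may differ.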
Keywords/Search Tags: incremental learning, sample distribution fitting, multi-task learning, neural architecture search