Since the dawn of the information age, advances in information technology have produced an explosion of multi-source, heterogeneous data. Most traditional machine learning research focuses on learning a classification or prediction model for a single task from a limited sample set, without making effective use of knowledge from other related learning tasks. Lifelong machine learning (LML) aims to adapt learned models to new tasks while retaining knowledge gained in the past. One of the main challenges LML now faces is how to balance retaining knowledge from previous tasks against learning new tasks, so as to reduce catastrophic forgetting. In view of these problems, and based on an analysis of existing LML methods, this thesis proposes a lifelong machine learning method based on template comparison. Through template comparison, only a small amount of data from previous tasks needs to be saved, which reduces the degree to which previous knowledge is forgotten and offers a new approach to lifelong machine learning. The main research work of this thesis is as follows:

1. A template-based lifelong machine learning neural network (TB-Network), built on template similarity, is established. TB-Network is a lifelong learning model based on template comparison. We convert multi-classification tasks with any number of categories into binary classification tasks (determining whether two inputs are similar), which enables the model to classify multi-class datasets with different numbers of categories well without changing its own structure, that is, without changing the number of parameters. The classification accuracy on these multi-classification tasks can exceed 90%.

2. The above network is applied to lifelong machine learning. When the proposed method is applied to lifelong learning, it is essentially a replay-based lifelong learning method, which needs to add some samples from previous tasks to reduce the model's catastrophic forgetting. Compared with other replay-based methods, this method retains less data from previous tasks, which reduces storage overhead. In addition, this approach does not require designing complex, network-specific strategies for retaining samples; instead, it reduces forgetting of previous tasks mainly by optimizing the templates. This thesis uses the split MNIST dataset to test whether the model can reduce catastrophic forgetting. Across these five tasks, the model's average classification accuracy was 98.70% and its backward transfer was -8.48%. The experimental results show that the network can reduce catastrophic forgetting.

3. TB-Network is further improved so that the model can better reduce catastrophic forgetting and raise the classification accuracy of individual tasks. The original network used a simple multi-layer fully connected network for feature extraction; in the improved version, a convolutional module is added so that the network can better capture the spatial features of images during feature extraction. Experimental results showed that the improved network's average classification accuracy was 99.68% and its backward transfer was -4.42%, indicating that the improved network further reduces catastrophic forgetting and further improves the accuracy of individual classification tasks.
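The template-comparison idea in point 1 can be sketched as follows. This is a minimal illustration, not the thesis's actual TB-Network: the `embed` function, the cosine-similarity choice, and the template values are all assumptions, standing in for whatever learned embedding and similarity the real network uses.

```python
import numpy as np

def embed(x):
    # Stand-in for the network's feature extractor (assumption:
    # the real TB-Network learns this embedding; here it is identity).
    return np.asarray(x, dtype=float)

def similarity(a, b):
    # Cosine similarity between an input embedding and a class template.
    a, b = embed(a), embed(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def classify(x, templates):
    # Multi-class decision via pairwise "is it similar?" comparisons:
    # the prediction is the class whose template is most similar, so
    # adding a class only adds a template, not new network parameters.
    return max(templates, key=lambda c: similarity(x, templates[c]))

templates = {"0": [1.0, 0.0], "1": [0.0, 1.0]}
print(classify([0.9, 0.1], templates))  # → "0"
```

This is why the number of classes can change without changing the model's structure: the binary "similar or not" comparison is the same regardless of how many templates are stored.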
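The backward-transfer figures quoted in points 2 and 3 can be computed as follows. This is a sketch of the commonly used definition (the accuracy change on earlier tasks between the moment they were learned and the end of training), under the assumption that the thesis uses this standard metric; the accuracy matrix below is a toy example, not the thesis's results.

```python
def backward_transfer(R):
    # R[i][j]: accuracy on task j measured after training on task i.
    # BWT averages, over all earlier tasks, the accuracy change between
    # "right after learning that task" and "after the final task".
    # Negative values indicate catastrophic forgetting.
    T = len(R)
    return sum(R[T - 1][i] - R[i][i] for i in range(T - 1)) / (T - 1)

# Toy 3-task accuracy matrix (illustrative numbers only):
R = [
    [0.99, 0.00, 0.00],
    [0.95, 0.99, 0.00],
    [0.92, 0.96, 0.99],
]
print(round(backward_transfer(R), 3))  # → -0.05
```

Under this reading, moving from -8.48% to -4.42% means the improved network loses roughly half as much accuracy on earlier split-MNIST tasks by the end of training.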
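The motivation for the convolutional module in point 3 is that convolution responds to local spatial structure that a flattened fully connected layer discards. A minimal "valid" 2-D cross-correlation illustrates this (a generic sketch, not the thesis's architecture; the kernel and image are made-up examples):

```python
import numpy as np

def conv2d(img, kernel):
    # Minimal "valid" 2-D cross-correlation: slide the kernel over the
    # image and take the elementwise product-sum at each position.
    H, W = img.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# A horizontal difference kernel fires exactly where this image's
# vertical edge lies, i.e. it detects spatial structure:
img = np.array([[0, 0, 1, 1]] * 4, dtype=float)
edge = np.array([[-1.0, 1.0]])
print(conv2d(img, edge))  # nonzero only at the 0→1 transition
```

A fully connected layer applied to the flattened image would have to learn this locality from scratch for every pixel position, which is why adding the convolutional module helps feature extraction on image tasks such as MNIST.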