Recommendation systems have become a key technology in e-commerce and online services, as they help users discover and select items of interest. With the popularization of the Internet and the explosive growth of data, recommendation systems have been widely applied and studied. In recent years, with the development of deep learning and big data technologies, new research hotspots have emerged in recommendation systems. Deep learning-based recommendation systems can automatically learn the relationships between users and items, thereby improving recommendation accuracy and personalization. Traditional recommendation systems are a class of systems based on static features such as user historical behavior and item attributes, and typically generate recommendations using collaborative filtering and content-based filtering algorithms. However, data sparsity and uncertainty in user behavior can degrade recommendation performance. Many users do not provide explicit feedback, and accurate recommendations cannot be made for these cold-start users. To address this issue, several deep neural network-based sequential recommendation methods have been proposed, which achieve more accurate prediction of user behavior. Although these deep learning-based sequential recommendation models have achieved promising results, they all suffer from noise: a user's historical interactions may contain a large amount of noisy data, such as products the user is not interested in or has accidentally clicked on. Deep neural networks overfit this noise, which degrades the final predictions. This thesis studies sequential recommendation methods and proposes three effective models that extract valid information from users' historical interactions and remove noise.

First, in order to remove noise from users' historical interactions and make more accurate predictions, this thesis proposes a denoising recommendation method based on a modified Hawkes process and neighbor key information. The model combines the Hawkes process with neural networks, introduces neighbor information as a supplement, and consists of three modules; among them, the key information aggregation module aggregates key information and removes noise. Extensive experiments of multiple types on two datasets jointly demonstrate the effectiveness of the proposed model and its improved recommendation performance. This method has been published in a CCF C-class conference paper.

Second, we propose a filtering-based denoising sequential recommendation model built on BERT. A filtering layer with a learnable filter is added to BERT; by transforming the interaction sequence between the time and frequency domains, it adaptively attenuates noise. The filtering layer makes it easier for the model to capture the intrinsic features of the denoised interaction sequence, effectively improving the performance of the sequential recommendation model. In addition, the model reduces the number of Transformer layers in BERT, which reduces the number of training parameters and speeds up training. Extensive experiments show that the proposed model performs well, outperforming other BERT-based methods and many baseline methods in the recommendation field. This method is currently under submission to a CCF B-class conference.

Finally, this thesis proposes an anomaly-loss denoising method for sequential recommendation. Over-parameterized deep networks can memorize the training data and achieve zero training error. Even after memorizing the data, the training loss continues to approach zero, making the model overconfident and degrading its performance. Since existing regularization methods do not directly avoid zero training error, it is difficult to tune their hyperparameters to maintain a fixed or preset training loss level. This thesis adopts a direct solution called "flooding" and applies it to the two denoising sequential recommendation models above, to prevent overfitting, reduce the removal of valid information, and better suppress noise in the interaction sequences. Once the training loss reaches a reasonably small value, flooding intentionally prevents it from decreasing further: normal mini-batch gradient descent is performed while the loss is above the flooding level, but gradient ascent is performed when the loss falls below it, so the loss fluctuates around the flooding level. This can be implemented in one line of code and is compatible with any stochastic optimizer and other regularization methods. With flooding, the model continues a "random walk" at the same non-zero training loss and is expected to drift into a flat region of the loss landscape that leads to better generalization. Experiments show that flooding stably and significantly improves performance.
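As background for the first model, the classical Hawkes process models the intensity of a future event as a base rate plus exponentially decaying excitation from past events. The following is a minimal sketch of that classical intensity function only; the function and parameter names (`mu`, `alpha`, `beta`) are illustrative and do not reflect the modified, neural formulation used in the thesis.

```python
import math

def hawkes_intensity(t, event_times, mu=0.1, alpha=0.5, beta=1.0):
    """Classical Hawkes intensity: lambda(t) = mu + sum_i alpha * exp(-beta * (t - t_i)).

    mu is the base rate, alpha the excitation strength, beta the decay rate.
    Each past event at t_i < t raises the intensity, with recent events
    contributing more than old ones -- the intuition behind modeling a
    user's interaction history as a self-exciting point process.
    """
    return mu + sum(alpha * math.exp(-beta * (t - ti))
                    for ti in event_times if ti < t)

# With no history, the intensity is just the base rate.
print(hawkes_intensity(1.0, []))  # 0.1
# A recent interaction excites the intensity more than an old one.
print(hawkes_intensity(1.0, [0.9]) > hawkes_intensity(1.0, [0.1]))  # True
```

In the thesis's model, this hand-set parametric form is replaced by learned components, but the decaying-excitation structure is what lets the model weight recent, relevant interactions over stale or noisy ones.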
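The learnable filtering layer in the second model operates in the frequency domain: the interaction sequence is transformed with an FFT, multiplied element-wise by a learnable filter, and transformed back. A minimal NumPy sketch of this idea follows; the function name and shapes are illustrative, and in the actual model the complex filter weights are learned jointly with the BERT backbone rather than fixed.

```python
import numpy as np

def filter_layer(x, w):
    """Frequency-domain filtering of an embedded interaction sequence.

    x: real array of shape (seq_len, hidden) -- the interaction sequence.
    w: complex array of shape (seq_len // 2 + 1, hidden) -- the (learnable)
       filter applied to each frequency component.
    """
    freq = np.fft.rfft(x, axis=0)                    # time -> frequency domain
    freq = freq * w                                  # element-wise filter
    return np.fft.irfft(freq, n=x.shape[0], axis=0)  # frequency -> time domain

x = np.random.randn(8, 4)
identity = np.ones((8 // 2 + 1, 4))   # all-pass filter: output equals input
assert np.allclose(filter_layer(x, identity), x)
```

Setting some entries of `w` toward zero attenuates the corresponding frequency components; training the filter end-to-end lets the model adaptively suppress components dominated by noisy interactions.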
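The one-line flooding modification described above can be sketched as follows, where `b` is the flooding level (a hyperparameter). The scalar wrapper below is a framework-agnostic illustration; in a real training loop it would be applied to the mini-batch loss tensor before backpropagation.

```python
def flood(loss, b):
    """Flooding: replace the training loss J with |J - b| + b.

    While the loss is above the flooding level b, the value (and hence the
    gradient) is unchanged, so ordinary gradient descent proceeds. Once the
    loss drops below b, the reflection flips the gradient's sign, so the
    optimizer performs gradient ascent and pushes the loss back up toward b.
    """
    return abs(loss - b) + b

# Above the flooding level, the loss passes through unchanged ...
assert flood(0.50, 0.10) == 0.50
# ... below it, the loss is reflected about b, reversing the gradient.
assert abs(flood(0.04, 0.10) - 0.16) < 1e-12
```

In a PyTorch-style loop this is typically the single line `loss = (loss - b).abs() + b` inserted before `loss.backward()`, which is why the method is compatible with any stochastic optimizer and with other regularizers.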