
Research On Multi-modal Time Series Data Prediction Method Based On Dual-stage Attention Mechanism

Posted on: 2022-09-17    Degree: Master    Type: Thesis
Country: China    Candidate: M W Xu    Full Text: PDF
GTID: 2480306350481794    Subject: Master of Engineering
Abstract/Summary:
With the development of industrial society, many fields have accumulated increasingly large-scale data, most of which comes from various sensors and monitoring devices. Much of this data exhibits the complex characteristics of multi-modality, high dimensionality, and strong correlations between attributes, and it holds important research value for the fields it belongs to. Accurate prediction of multi-modal time series data is therefore of great significance for guiding production and daily life.

Existing time series prediction methods fall roughly into three categories: methods based on mathematical statistics, on traditional machine learning, and on deep learning. Statistical methods usually depend heavily on the data itself, which limits their application scenarios and generality. Traditional machine learning methods cannot adapt to high-dimensional time series data and struggle with prediction tasks over large amounts of data. Deep-learning-based methods are the current mainstream in time series prediction research; the most widely used models are the Long Short-Term Memory (LSTM) network and the Gated Recurrent Unit (GRU). However, LSTM neurons contain a large number of gating weight parameters, which makes LSTM networks slow to train and prone to overfitting. The GRU neuron, an improvement on the LSTM neuron, speeds up training by reducing the number of gating units, but it performs poorly on long-range time series prediction tasks. In addition, existing methods cannot fully exploit the temporal and multi-modal characteristics of the data, and their prediction accuracy is limited. In response to these problems, this thesis carries out the following research:

(1) It proposes an enhancement method for temporal and modal features based on the Dual-stage Self-attention (DATT) mechanism. The first-stage self-attention (FATT) mechanism weights the time dimension of each group of data so that the model can fully capture the characteristics of the time series, effectively alleviating the lag problem in time series prediction and improving the accuracy of the multi-modal forecasting model. The second-stage self-attention (SATT) mechanism takes the output of the prediction unit as input and weights each attribute according to its correlation with the predicted attributes, so that the model can selectively learn from data of different modal attributes. Experiments show that the second-stage self-attention mechanism improves the predictive performance of the model, and the effect is particularly significant when the attributes of the multi-modal time series data differ greatly in correlation.
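To make the two stages concrete, the following is a minimal sketch of how such a dual-stage self-attention mechanism could be organized, written here in PyTorch. The module names (FirstStageAttention, SecondStageAttention), tensor shapes, and scoring functions are illustrative assumptions, not the thesis's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FirstStageAttention(nn.Module):
    """FATT-style weighting over the time dimension of an input window (assumed form)."""
    def __init__(self, num_features):
        super().__init__()
        self.score = nn.Linear(num_features, 1)    # one relevance score per time step

    def forward(self, x):                           # x: (batch, time_steps, num_features)
        weights = F.softmax(self.score(x), dim=1)   # normalize scores across time steps
        return x * weights                          # re-weighted series fed to the prediction unit

class SecondStageAttention(nn.Module):
    """SATT-style weighting over the attributes output by the prediction unit (assumed form)."""
    def __init__(self, num_attributes):
        super().__init__()
        self.score = nn.Linear(num_attributes, num_attributes)  # cross-attribute relevance scores

    def forward(self, h):                           # h: (batch, num_attributes)
        weights = F.softmax(self.score(h), dim=-1)  # normalize scores across attributes
        return h * weights                          # emphasize strongly correlated attributes
```

In a DATT-style pipeline, a module like FirstStageAttention would be applied to the raw input window before the recurrent prediction unit, and SecondStageAttention to that unit's output before the final regression layer.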
(2) It proposes a time series prediction neural network based on the Gated Feedforward Recurrent Unit (GFRU). This unit is an improvement on the LSTM neuron. On the one hand, a cell-state feedforward mechanism is introduced on top of the LSTM so that the cell state at the previous moment participates in determining the cell state at the current moment. This strengthens the relevance of the temporal context, addresses the insensitivity of time series prediction models to sudden trend changes in stationary sequences, and further improves prediction accuracy. On the other hand, an update gate replaces the original input gate and forget gate, which effectively reduces the size of the weight matrices and improves the training speed and generalization performance of the model (see the illustrative sketch below). The thesis presents the computation of the GFRU in detail from the perspective of forward propagation and establishes its principle through mathematical derivation.

Finally, using standard datasets published by the University of California, Irvine (UCI), a Dual-stage Self-attention based Gated Feedforward Recurrent Unit (DATT-GFRU) neural network that integrates the two-stage self-attention mechanism is implemented and compared with a dual-stage self-attention based LSTM network (DATT-LSTM) on different datasets to verify the generalization performance of DATT-GFRU. Experiments show that the proposed network model achieves high accuracy and generalization ability.
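As an illustration of the GFRU idea described in (2), the sketch below shows one plausible way to combine a cell-state feedforward term with a single GRU-style update gate inside an LSTM-like cell, written in PyTorch. The class name GFRUCell and the exact gating equations are assumptions for illustration; the thesis's own derivation may define them differently.

```python
import torch
import torch.nn as nn

class GFRUCell(nn.Module):
    """Sketch of a gated feedforward recurrent cell (assumed form, not the thesis's exact equations)."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        # Gates see x_t, h_{t-1}, and c_{t-1}: the previous cell state "feeds forward" into the gating.
        self.update_gate = nn.Linear(input_size + 2 * hidden_size, hidden_size)
        self.output_gate = nn.Linear(input_size + 2 * hidden_size, hidden_size)
        self.candidate = nn.Linear(input_size + hidden_size, hidden_size)

    def forward(self, x, state):
        h_prev, c_prev = state
        gate_in = torch.cat([x, h_prev, c_prev], dim=-1)
        z = torch.sigmoid(self.update_gate(gate_in))    # single update gate replaces input + forget gates
        o = torch.sigmoid(self.output_gate(gate_in))
        c_tilde = torch.tanh(self.candidate(torch.cat([x, h_prev], dim=-1)))
        c = z * c_tilde + (1.0 - z) * c_prev            # previous cell state blended into the new one
        h = o * torch.tanh(c)
        return h, (h, c)
```

The update gate here follows the GRU convention of blending the candidate state with the previous cell state, and feeding c_{t-1} into the gate computations is the assumed reading of the "cell-state feedforward" mechanism described above.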
Keywords/Search Tags:Multi-modal time series data, Recurrent neural network, Self-attention mechanism, Prediction