Text summarization is a technique that automatically extracts useful information from long text using machine learning. With the rapid development of internet technology, people's channels and methods for obtaining information are becoming more diverse. However, this also brings an enormous amount of redundant information, beyond human processing capacity, of which textual information makes up the majority. People therefore want a method that automatically distills concise and useful information from these massive volumes of text. Text summarization technology is an effective way to achieve this goal, and the emergence and development of deep learning have made summarization technology considerably more mature. Nevertheless, existing text summarization models still have many shortcomings. This thesis studies deep learning-based text summarization methods. By analyzing the research status and open problems of existing text summarization models, this thesis proposes several solutions to improve the quality of generated summaries. The research work and contributions of this thesis are summarized as follows:

1. Propose a summary generation method based on the Long- and Short-term Time-series Network. Because recurrent neural networks have a natural advantage in processing text sequences, most models use them as the basic unit of text summarization models. However, recurrent neural networks have an inherent shortcoming: as the sequence length increases, their gradients tend to become unstable during back-propagation. To address this problem, this thesis borrows from the Long- and Short-term Time-series Network, which combines convolutional neural networks with recurrent neural networks to extract the long- and short-term features of a sequence, and adds a recurrent-skip layer to capture very long-term dependencies in the sequence. Existing summary generation models are mainly built on the encoder-decoder architecture. This thesis introduces the convolutional layer and recurrent layer of the Long- and Short-term Time-series Network into the encoder, and the recurrent layer and temporal attention layer into the decoder, combined with the existing pointer and coverage mechanisms in the text summarization model, to generate summaries. Experimental results show that this model effectively improves the quality of generated summaries.

2. Propose a summary generation method that incorporates contextual semantic information. Existing text summarization models often fail to capture the key information in the original text and instead focus on 
some less important information. Traditional generative summarization models usually encode the text sequence with a word-level encoder, which lacks sentence-level and contextual information. Therefore, this thesis adds a contextual-information encoder that encodes the context of the source text; the contextual information is extracted from the output of the word encoder by a combination of gated linear units and convolutional neural networks. The decoder's attention mechanism can then consider both the contextual and word-level information of the original text when generating the summary, producing summaries that better reflect the key information of the source text. Experimental results show that the proposed model achieves better performance than the baseline model.
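The encoder idea in contribution 1 can be illustrated with a minimal numpy sketch: a 1-D convolution extracts short-term local features, a plain recurrence models order, and a second recurrence with stride `skip` links states that are several steps apart, standing in for the recurrent-skip layer. All dimensions, weights, and function names here are illustrative assumptions, not the thesis's actual model or configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, W, b):
    # x: (T, d_in); W: (k, d_in, d_out); valid convolution over time + ReLU
    k = W.shape[0]
    T = x.shape[0] - k + 1
    out = np.stack([np.einsum('kd,kdo->o', x[t:t + k], W) + b
                    for t in range(T)])
    return np.maximum(out, 0.0)

def rnn(x, Wx, Wh, b, stride=1):
    # plain tanh recurrence; stride > 1 gives the recurrent-skip variant,
    # which connects hidden states that are `stride` steps apart
    T = x.shape[0]
    H = Wh.shape[0]
    h = np.zeros((T, H))
    for t in range(T):
        prev = h[t - stride] if t - stride >= 0 else np.zeros(H)
        h[t] = np.tanh(x[t] @ Wx + prev @ Wh + b)
    return h

T, d_emb, d_conv, d_hid, k, skip = 12, 8, 6, 5, 3, 4
x = rng.normal(size=(T, d_emb))                  # embedded source tokens
Wc = rng.normal(size=(k, d_emb, d_conv)) * 0.1
bc = np.zeros(d_conv)

feats = conv1d(x, Wc, bc)                        # short-term local features
h_main = rnn(feats, rng.normal(size=(d_conv, d_hid)) * 0.1,
             rng.normal(size=(d_hid, d_hid)) * 0.1, np.zeros(d_hid))
h_skip = rnn(feats, rng.normal(size=(d_conv, d_hid)) * 0.1,
             rng.normal(size=(d_hid, d_hid)) * 0.1, np.zeros(d_hid),
             stride=skip)                        # very long-term dependencies

# encoder states the decoder would attend over
enc_states = np.concatenate([h_main, h_skip], axis=1)
print(enc_states.shape)  # (10, 10): T-k+1 positions, 2*d_hid features
```

In a full model these encoder states would feed the decoder's temporal attention, pointer, and coverage components; the sketch stops at the encoder output.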
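The contextual encoder in contribution 2 can likewise be sketched in numpy: a same-length 1-D convolution over the word-encoder outputs produces twice the target channels, and a gated linear unit uses half of them to gate the other half; the decoder then attends over the resulting contextual states. Everything here (shapes, weights, the single attention step) is an illustrative assumption, not the thesis's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def glu_conv_block(h_word, W, b):
    # h_word: (T, d) word-encoder outputs; W: (k, d, 2*d_ctx)
    # 1-D convolution followed by a gated linear unit: the first
    # d_ctx output channels are content, the remaining d_ctx gate them.
    k = W.shape[0]
    pad = k // 2
    hp = np.pad(h_word, ((pad, pad), (0, 0)))    # keep output length T
    T = h_word.shape[0]
    conv = np.stack([np.einsum('kd,kdo->o', hp[t:t + k], W) + b
                     for t in range(T)])
    d_ctx = conv.shape[1] // 2
    return conv[:, :d_ctx] * sigmoid(conv[:, d_ctx:])   # GLU gating

T, d_word, d_ctx, k = 10, 8, 6, 3
h_word = rng.normal(size=(T, d_word))            # word-encoder states
W = rng.normal(size=(k, d_word, 2 * d_ctx)) * 0.1
b = np.zeros(2 * d_ctx)

h_ctx = glu_conv_block(h_word, W, b)             # contextual states, (10, 6)

# one attention step over the contextual states, standing in for the
# decoder attending to both word-level and contextual information
query = rng.normal(size=d_ctx)
scores = h_ctx @ query
attn = np.exp(scores - scores.max())
attn /= attn.sum()
context_vec = attn @ h_ctx
print(h_ctx.shape, context_vec.shape)  # (10, 6) (6,)
```

The gating lets the convolution pass through only the channels its learned gates consider relevant, which is one way such a block can emphasize key information over less important content.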