History and culture reflect the living conditions and social activities of human beings over the course of their development, and they are of great significance for understanding the cultural connotations and national spirit of a country. With the marked growth of China's international influence, Chinese history and culture have attracted worldwide attention. In recent years, with the rapid spread of the Internet, online resources in the field of history and culture have grown quickly. However, most historical and cultural information on the Web appears as unstructured text, which is difficult to use directly and effectively. How to extract structured semantic relations from massive unstructured text is therefore of great importance for building relational databases in the historical and cultural domain, and it is a problem of real research significance.

First, word vectors are trained with the word2vec model and fused with entity position information to mine deep semantic features of the text. The unstructured text data from the historical and cultural domain are preprocessed with sentence segmentation, word segmentation, and related operations, and the word2vec model is applied to the segmented results to extract deeper semantic information. Because word-vector features alone contain no entity position information, this paper annotates the position of each word in a text instance according to its relative distance to the two entities and fuses these position features into the word2vec word vectors. Experimental results show that pretrained word vectors fused with entity position information effectively improve the model's ability to learn entity-relation features.

Second, to analyze the semantic relationship between entity pairs in a text more accurately, this paper builds on a bidirectional LSTM and designs a parallel attention structure that combines word-level and sentence-level attention over the LSTM output features. The resulting features capture not only the semantic relation information of the text but also its word-level and sentence-level aspects, enriching the expressiveness of the semantic relation representation.

Third, to address the problem that the neurons in each layer of a bidirectional LSTM do not fully learn the semantic features of the text, an LSTM network structure based on a neuron-block-level attention mechanism is designed: the internal LSTM neurons are partitioned into blocks, and block-wise attention is computed to improve the neurons' ability to learn textual semantic features. Finally, by combining the output features of this LSTM with the parallel attention structure, an LSTM relation extraction model based on multiple attention mechanisms is proposed. Experimental results show that this model provides richer semantic relation features than other relation extraction models.
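
As a concrete illustration of the position-feature fusion step described above, the following Python sketch trains word2vec vectors with gensim and concatenates each word vector with embeddings of its relative distances to the two entities. The toy corpus, the 10-dimensional position embeddings, and the clipping range MAX_DIST are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: train word2vec vectors and fuse relative-position
# features. Corpus, entity indices, and MAX_DIST are assumptions.
import numpy as np
from gensim.models import Word2Vec

MAX_DIST = 30  # assumed clipping range for relative distances

# Toy segmented corpus (one tokenized sentence per list).
sentences = [["李白", "出生", "于", "碎叶城"],
             ["杜甫", "生活", "在", "唐代"]]

w2v = Word2Vec(sentences, vector_size=100, window=5, min_count=1)

# Position embedding table: one vector per clipped relative distance.
rng = np.random.default_rng(0)
pos_table = rng.normal(scale=0.1, size=(2 * MAX_DIST + 1, 10))

def fuse_features(tokens, e1_idx, e2_idx):
    """Concatenate each word vector with embeddings of its relative
    distances to the two entities, clipped to [-MAX_DIST, MAX_DIST]."""
    feats = []
    for i, tok in enumerate(tokens):
        d1 = int(np.clip(i - e1_idx, -MAX_DIST, MAX_DIST)) + MAX_DIST
        d2 = int(np.clip(i - e2_idx, -MAX_DIST, MAX_DIST)) + MAX_DIST
        feats.append(np.concatenate([w2v.wv[tok], pos_table[d1], pos_table[d2]]))
    return np.stack(feats)  # shape: (len(tokens), 100 + 2 * 10)

X = fuse_features(sentences[0], e1_idx=0, e2_idx=3)
print(X.shape)  # (4, 120)
```

Concatenation is one simple way to inject position information; the fused vectors then serve as the input sequence to the downstream LSTM.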
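The parallel word-level and sentence-level attention over the bidirectional LSTM outputs could be realized along the lines of the PyTorch sketch below. The dimensions, the linear scoring functions, and the bag-of-sentences input layout are assumptions made for illustration, not the paper's exact design.

```python
# Sketch of word-level plus sentence-level attention over a Bi-LSTM.
import torch
import torch.nn as nn

class DualAttentionBiLSTM(nn.Module):
    def __init__(self, in_dim=120, hidden=100, n_rel=10):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, bidirectional=True, batch_first=True)
        self.word_att = nn.Linear(2 * hidden, 1)                  # scores each word
        self.sent_query = nn.Parameter(torch.randn(2 * hidden))   # scores each sentence
        self.fc = nn.Linear(2 * hidden, n_rel)

    def forward(self, bag):  # bag: (n_sent, seq_len, in_dim)
        h, _ = self.lstm(bag)                                     # (n_sent, seq_len, 2*hidden)
        # Word-level attention: weighted sum over time steps per sentence.
        w = torch.softmax(self.word_att(h).squeeze(-1), dim=1)    # (n_sent, seq_len)
        s = (w.unsqueeze(-1) * h).sum(dim=1)                      # (n_sent, 2*hidden)
        # Sentence-level attention: weighted sum over sentence vectors.
        a = torch.softmax(s @ self.sent_query, dim=0)             # (n_sent,)
        bag_vec = (a.unsqueeze(-1) * s).sum(dim=0)                # (2*hidden,)
        return self.fc(bag_vec)                                   # relation logits

logits = DualAttentionBiLSTM()(torch.randn(3, 20, 120))
print(logits.shape)  # torch.Size([10])
```

The two attention levels play complementary roles: word-level attention highlights the tokens that signal the relation within a sentence, while sentence-level attention downweights noisy instances when several sentences mention the same entity pair.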
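One plausible reading of the neuron-block-level attention mechanism is sketched below: the LSTM hidden vector is partitioned into equal blocks of neurons, each block receives a learned score, and a softmax over block scores reweights the blocks. The block count, scoring function, and rescaling are assumptions for illustration only.

```python
# Sketch of block-level attention over the neurons of a hidden state.
import torch
import torch.nn as nn

class BlockAttention(nn.Module):
    def __init__(self, hidden=200, n_blocks=8):
        super().__init__()
        assert hidden % n_blocks == 0
        self.n_blocks = n_blocks
        self.block_dim = hidden // n_blocks
        self.score = nn.Linear(self.block_dim, 1)  # scores each neuron block

    def forward(self, h):  # h: (batch, seq_len, hidden)
        b, t, _ = h.shape
        blocks = h.view(b, t, self.n_blocks, self.block_dim)
        a = torch.softmax(self.score(blocks).squeeze(-1), dim=-1)  # (b, t, n_blocks)
        # Reweight each block; rescale so the expected magnitude is preserved.
        out = blocks * a.unsqueeze(-1) * self.n_blocks
        return out.view(b, t, -1)

h = torch.randn(3, 20, 200)
print(BlockAttention()(h).shape)  # torch.Size([3, 20, 200])
```

Applied to the Bi-LSTM hidden states before the word-level and sentence-level attention, such a module would let groups of neurons specialize on different semantic aspects of the text, which matches the stated motivation of improving how fully the neurons learn the semantic features.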