In the era of informatization and intelligence, as the volume and variety of data keep growing, artificial intelligence technology has had a profound impact on people's lives. Generating text from natural language is one of the hot topics in artificial intelligence, and it has been successfully applied to tasks such as summary extraction, information retrieval, and text style transfer. At present, the generative adversarial network (GAN) is one of the mainstream approaches to generation problems. It has been applied successfully to image generation, whereas research on applying it to text generation is still relatively limited, although GANs have shown promising performance on text generation tasks. When a GAN is used for text generation, the output accuracy of the generator determines the quality of the generated result, and the discrete nature of text data restricts that accuracy. In addition, during GAN training the generator focuses mainly on local semantic information, so the generated text is not fluent enough, its quality is low, and its diversity is limited. To address these problems, this paper proposes an improved generative adversarial network model for text generation. The main work is as follows.

First, to address the issue that discrete variables limit the output accuracy of the GAN generator, this paper proposes an improved generative adversarial network model, LFMGAN. In this model, a loss function is designed so that the discriminator uses the resulting loss value to guide the optimization of the generator, making the generated text as close as possible to the real text. The generator no longer focuses only on whether each generated word matches the target word; instead, the model focuses on whether the overall generated result is reasonable, which brings the semantic distribution of the generated samples closer to that of the real samples. As a result, the generator iterates in the direction of higher semantic similarity and avoids the loss of accuracy caused by discrete variables. GPT-2 is used as the generator and RoBERTa as the discriminator of the generative adversarial network, and the model is compared with the basic MaliGAN and LeakGAN models and the MLE baseline using the BLEU, ROUGE, and METEOR evaluation metrics. Verification and comparison experiments on the Image_COCO, EMNLP2017 WMT News, and Three Hundred Tang Poems datasets show that the proposed generative model outperforms the comparison models, producing reasonable results and improving text quality.
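For illustration, the sketch below shows one plausible way such a discriminator-guided loss could be computed, with GPT-2 as the generator and RoBERTa as the discriminator: sentence-level discriminator features of real and generated batches are matched, pushing the generated semantic distribution toward the real one. The function name `semantic_matching_loss`, the mean-pooled features, and the MSE distance are assumptions made for this sketch rather than the exact LFMGAN formulation, and the mechanism for propagating the loss back through the discrete sampling step is not reproduced here.

```python
# Minimal sketch of discriminator-guided semantic matching (illustrative only).
import torch
import torch.nn.functional as F
from transformers import GPT2LMHeadModel, GPT2Tokenizer, RobertaModel, RobertaTokenizer

gen_tok = GPT2Tokenizer.from_pretrained("gpt2")
generator = GPT2LMHeadModel.from_pretrained("gpt2")          # generator (GPT-2)
disc_tok = RobertaTokenizer.from_pretrained("roberta-base")
discriminator = RobertaModel.from_pretrained("roberta-base") # discriminator (RoBERTa)

def semantic_matching_loss(real_texts, fake_texts):
    """Distance between discriminator features of real and generated batches."""
    real = disc_tok(real_texts, return_tensors="pt", padding=True, truncation=True)
    fake = disc_tok(fake_texts, return_tensors="pt", padding=True, truncation=True)
    real_feat = discriminator(**real).last_hidden_state.mean(dim=1)  # sentence features
    fake_feat = discriminator(**fake).last_hidden_state.mean(dim=1)
    # Push generated semantics toward the real distribution (assumed MSE distance).
    return F.mse_loss(fake_feat, real_feat.detach())

# Generate a sample with GPT-2 and score it against a real sentence.
prompt = gen_tok("a dog", return_tensors="pt")
out = generator.generate(**prompt, max_new_tokens=10, do_sample=True,
                         pad_token_id=gen_tok.eos_token_id)
fake_texts = gen_tok.batch_decode(out, skip_special_tokens=True)
loss = semantic_matching_loss(["a dog runs in the park"], fake_texts)
print(loss.item())  # the value the discriminator would feed back to the generator
```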
Second, because the generator of the LFMGAN model uses a greedy search decoding strategy to output text, the generated text lacks diversity and the model performs poorly in capturing global semantic information. In this paper, the LFMGAN model is further optimized with a beam search decoding strategy, with the beam width set to 4, which keeps the computational cost manageable and improves learning efficiency. At the same time, the model can attend to global semantic information effectively, which further improves the diversity and quality of the generated text. To verify the improvement in global semantic information acquisition, the same datasets and evaluation metrics as in the experiments above are used to compare against the original LFMGAN model. The experimental results show that, by attending to global semantic information and increasing the diversity of the generated text, the evaluation metric results are greatly improved, which further demonstrates the effectiveness of the model.
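For reference, the following sketch contrasts greedy decoding with beam search at a beam width of 4, using the Hugging Face `generate` API as an assumed stand-in for the thesis's implementation; the prompt and generation lengths are arbitrary examples.

```python
# Greedy vs. beam search decoding with a GPT-2 generator (illustrative sketch).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tok("a group of people", return_tensors="pt")

# Greedy search: keeps only the single best token at each step.
greedy_ids = model.generate(**inputs, max_new_tokens=20,
                            pad_token_id=tok.eos_token_id)

# Beam search: keeps the 4 highest-scoring partial sequences at each step,
# so the chosen output reflects a more global view of the whole sentence.
beam_ids = model.generate(**inputs, max_new_tokens=20, num_beams=4,
                          num_return_sequences=4, early_stopping=True,
                          pad_token_id=tok.eos_token_id)

print(tok.decode(greedy_ids[0], skip_special_tokens=True))
for ids in beam_ids:
    print(tok.decode(ids, skip_special_tokens=True))
```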