Although deep learning methods based on word-vector technology and pretrained models are highly effective for task-oriented dialogue systems, they suffer from excessive computation and parameter counts, so deploying such models places high demands on server resources. Reducing a model's parameters and size as far as possible while preserving its effectiveness is therefore essential for putting a dialogue system into production. This paper applies multi-task learning and low-rank decomposition to improve the natural language understanding (NLU) and natural language generation (NLG) modules of the traditional task-oriented dialogue system, and uses the improved models to implement a complete document dialogue system for the construction domain.

1) To balance the accuracy and speed of the NLU module, MTLA, a multi-task learning model for intent detection and slot filling based on bidirectional LSTM and Attention, is proposed (a minimal sketch follows this abstract). On the accuracy side, three techniques improve the model: fusing multi-granularity word-level and character-level features; capturing sequence features and global features with the bidirectional LSTM and Attention layers; and jointly optimizing the loss functions of the two tasks. Experiments show that MTLA improves accuracy on three datasets by 2 to 3 percentage points over the baseline model. On the speed side, two measures, adding the fused features and removing the CRF layer, accelerate prediction, and the effectiveness of both is verified.

2) To address the large number of parameters in MTLA, the model's LSTM, Attention, and Dense layers are further compressed with truncated SVD (TSVD) and tensor-train (TT) decomposition, and a TT-Attention layer is proposed (both techniques are sketched below). The parameter count, inference speed, and memory footprint of the model before and after compression are compared in detail on the three datasets. The results show that, compared with TSVD, TT decomposition greatly reduces the model's resource consumption and increases inference speed while maintaining its performance.

3) To address the multi-stage processing caused by the separate information retrieval (IR) and machine reading comprehension (MRC) modules in traditional NLG, HW-MTBERT, a retrieval-and-reading multi-task learning model based on BERT, is proposed, improving performance while reducing model size (a sketch of the two-head design appears below). In addition, because the multi-task model is insensitive to the keywords of the input question, a keyword representation is introduced to strengthen the model's awareness of them. The results show that HW-MTBERT improves reading accuracy by nearly 1% and retrieval accuracy by 2%.

4) Finally, the NLU and NLG modules of the open-source dialogue framework RASA are replaced with the models proposed in this paper, and the complete system is evaluated on a construction-domain dataset that we annotated.
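As an illustration of the joint architecture in 1), the following is a minimal PyTorch sketch, assuming additive attention, a single shared BiLSTM, character features simplified to one id per token, and a summed joint loss; all layer sizes and names are hypothetical, not the thesis's exact MTLA configuration.

```python
import torch
import torch.nn as nn

class MTLASketch(nn.Module):
    """Minimal sketch of a BiLSTM + Attention multi-task model for
    joint intent detection and slot filling (assumed layout)."""

    def __init__(self, word_vocab, char_vocab, n_intents, n_slots,
                 word_dim=128, char_dim=32, hidden=128):
        super().__init__()
        self.word_emb = nn.Embedding(word_vocab, word_dim)
        # Simplification: one char id per token; a real model would
        # aggregate per-token character sequences (e.g. with a CNN).
        self.char_emb = nn.Embedding(char_vocab, char_dim)
        # Multi-granularity fusion: concatenate word- and char-level features.
        self.lstm = nn.LSTM(word_dim + char_dim, hidden,
                            batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)             # additive attention score
        self.intent_head = nn.Linear(2 * hidden, n_intents)
        self.slot_head = nn.Linear(2 * hidden, n_slots)  # no CRF layer

    def forward(self, word_ids, char_ids):
        x = torch.cat([self.word_emb(word_ids), self.char_emb(char_ids)], dim=-1)
        h, _ = self.lstm(x)                      # (B, T, 2H) sequence features
        w = torch.softmax(self.attn(h), dim=1)   # (B, T, 1) attention weights
        ctx = (w * h).sum(dim=1)                 # (B, 2H) global feature
        return self.intent_head(ctx), self.slot_head(h)

# Joint optimization: the two task losses are summed.
model = MTLASketch(word_vocab=5000, char_vocab=3000, n_intents=10, n_slots=20)
word_ids = torch.randint(0, 5000, (4, 16))
char_ids = torch.randint(0, 3000, (4, 16))
intent_logits, slot_logits = model(word_ids, char_ids)
intent_loss = nn.functional.cross_entropy(intent_logits, torch.randint(0, 10, (4,)))
slot_loss = nn.functional.cross_entropy(slot_logits.reshape(-1, 20),
                                        torch.randint(0, 20, (4 * 16,)))
loss = intent_loss + slot_loss                   # joint loss
```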
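For the TSVD compression in 2), the standard technique is to replace a trained Dense layer's weight matrix with two smaller matrices obtained from a rank-truncated SVD. This is a generic sketch of that technique under an assumed layer size and rank, not the thesis's exact procedure.

```python
import torch
import torch.nn as nn

def tsvd_compress(layer: nn.Linear, rank: int) -> nn.Sequential:
    """Replace a trained Linear layer with weight W (out x in) by two
    smaller layers via a rank-truncated SVD: W ~= (U_r * S_r) @ Vt_r."""
    W = layer.weight.data                          # (out, in)
    U, S, Vt = torch.linalg.svd(W, full_matrices=False)
    first = nn.Linear(W.shape[1], rank, bias=False)
    second = nn.Linear(rank, W.shape[0], bias=layer.bias is not None)
    first.weight.data = Vt[:rank].clone()          # (rank, in)
    second.weight.data = U[:, :rank] * S[:rank]    # (out, rank)
    if layer.bias is not None:
        second.bias.data = layer.bias.data.clone()
    return nn.Sequential(first, second)

dense = nn.Linear(512, 512)
compressed = tsvd_compress(dense, rank=64)
x = torch.randn(8, 512)
err = (dense(x) - compressed(x)).abs().max()       # approximation error
# Parameters: 512*512 = 262144 before vs 512*64 + 64*512 = 65536 after.
```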
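The TT decomposition in 2) instead represents a large weight matrix as a chain of small cores. Below is a two-core sketch of a TT linear map; the factorization of the input and output dimensions and the rank are illustrative assumptions, and the thesis's TT-Attention applies the same idea inside the Attention layer.

```python
import torch
import torch.nn as nn

class TTLinear2(nn.Module):
    """Two-core tensor-train linear layer for a weight of shape
    (m1*m2, n1*n2); stores m1*n1*r + r*m2*n2 parameters instead
    of the full m1*m2*n1*n2."""

    def __init__(self, n1, n2, m1, m2, rank):
        super().__init__()
        self.shape = (n1, n2, m1, m2)
        self.core1 = nn.Parameter(torch.randn(m1, n1, rank) * 0.1)
        self.core2 = nn.Parameter(torch.randn(rank, m2, n2) * 0.1)

    def forward(self, x):                        # x: (B, n1*n2)
        n1, n2, m1, m2 = self.shape
        x = x.view(-1, n1, n2)
        # Contract input mode n2 with core2, then n1 and the rank with core1.
        t = torch.einsum('bij,rkj->birk', x, self.core2)   # (B, n1, r, m2)
        y = torch.einsum('mir,birk->bmk', self.core1, t)   # (B, m1, m2)
        return y.reshape(-1, m1 * m2)

layer = TTLinear2(n1=16, n2=32, m1=16, m2=32, rank=8)  # a 512 -> 512 map
y = layer(torch.randn(4, 512))                          # (4, 512)
# Full matrix: 512*512 = 262144 params; TT cores: 16*16*8 + 8*32*32 = 10240.
```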
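Point 3) describes a shared BERT encoder with a retrieval head, a reading (span-extraction) head, and a keyword representation. The sketch below uses Hugging Face's transformers and injects the keyword signal as a learned indicator embedding summed into the encoder output; that fusion point, the head shapes, the checkpoint name, and all class names are assumptions for illustration, since the source does not specify HW-MTBERT's internals.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class HWMTBERTSketch(nn.Module):
    """Sketch of a shared-encoder multi-task model: a retrieval
    (relevance) head over [CLS] plus an MRC span head, with a
    keyword-indicator embedding added to token representations."""

    def __init__(self, model_name="bert-base-chinese"):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        self.keyword_emb = nn.Embedding(2, hidden)   # 0 = normal, 1 = keyword
        self.retrieval_head = nn.Linear(hidden, 2)   # relevant / irrelevant
        self.span_head = nn.Linear(hidden, 2)        # start / end logits

    def forward(self, input_ids, attention_mask, keyword_mask):
        h = self.bert(input_ids=input_ids,
                      attention_mask=attention_mask).last_hidden_state
        h = h + self.keyword_emb(keyword_mask)       # inject keyword signal
        relevance = self.retrieval_head(h[:, 0])     # [CLS] for retrieval
        start_logits, end_logits = self.span_head(h).split(1, dim=-1)
        return relevance, start_logits.squeeze(-1), end_logits.squeeze(-1)
```

Sharing one encoder across both heads is what lets a design like this shrink the model relative to separate IR and MRC networks while training the two tasks jointly.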