
Two-stage Boost Matching And Coupling Control Of Diesel Engine Based On Deep Reinforcement Learning

Posted on: 2022-12-03
Degree: Master
Type: Thesis
Country: China
Candidate: C B Wu
Full Text: PDF
GTID: 2492306758450444
Subject: Vehicle Engineering

Abstract/Summary:
As an important technology for improving engine power and reducing emissions, turbocharging has been widely applied to engines. Single-stage turbocharging, however, is limited by factors such as structure and size and suffers from turbo lag and a narrow effective operating range. Automotive engineers therefore use two-stage boosting to address these shortcomings, and electrically assisted boosting is widely adopted among such solutions. An electric supercharger responds quickly at low speed and during acceleration and can recover energy under certain conditions, while exhaust-gas turbocharging is more economical at high speed; combining the two yields a better overall boosting effect. However, the two-stage boosting system exhibits aerodynamic coupling, which makes the two subsystems difficult to control.

This thesis first applies a cooperative control strategy to the dual-input, single-output two-stage boosting system. The cooperative control relies on PID controllers, whose parameter tuning is time-consuming. To address this, a model-free deep reinforcement learning method is adopted to control the two-stage boost. Deep reinforcement learning acquires experience through interaction between the agent and the environment and is adaptive and self-learning, but it requires a large amount of training data to obtain a good policy. Two measures are taken to mitigate this: parallel reinforcement learning is used to improve the efficiency of data collection, and the pre-training method from transfer learning is adopted.

First, a one-dimensional mean-value engine model is built in GT-Power, and its outputs are compared with those of the detailed model to verify that the accuracy of the mean-value model meets the requirements. Based on the engine parameters, the pressure ratio, mass flow, and other parameters of the electric compressor are then matched.

Second, drawing on the Valve Position Controller (VPC), a two-stage boost controller is built in Simulink and used to cooperatively control the two-stage boosting system. A parallel deep reinforcement learning algorithm is implemented in Python to realize adaptive control of the same system.

Then, a co-simulation platform is built and training is carried out. GT-Power and Python are connected through Simulink so that data can be exchanged between them in real time. The FTP-72 cycle is used as the verification condition; the cooperative control uses a classical PID controller, and the control parameters are tuned on the co-simulation platform until a good control effect is achieved over the whole cycle. Parallel deep reinforcement learning requires many rounds of training, so a complex segment of the cycle is used for tuning and the trained policy is then applied to the entire cycle. Single-agent and multi-agent training are carried out for the parallel scheme and the convergence of the cumulative reward is compared; drawing on the pre-training method from transfer learning, the neural network in the parallel deep reinforcement learning algorithm is pre-trained to speed up convergence.

Finally, the training results are analyzed over both the whole cycle and local operating segments.
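The abstract does not specify the exact reinforcement learning algorithm or software interfaces, so the following is only a minimal Python sketch of the parallel data-collection idea: several workers interact with a stand-in boost-pressure environment (here a toy first-order model, not the GT-Power/Simulink co-simulation), and the policy can be initialized from pre-trained weights. All names, dynamics, and numerical values are illustrative assumptions, not the thesis's implementation.

```python
# Hypothetical sketch: parallel experience collection for the boost-pressure
# tracking task. BoostEnvStub is a stand-in for the GT-Power / Simulink
# co-simulation; its dynamics and all numbers are illustrative only.
import numpy as np

class BoostEnvStub:
    """Toy first-order stand-in for the two-stage boost plant (not the real model)."""
    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        self.pressure = 1.0  # bar, current intake pressure
        self.target = 1.5    # bar, demanded boost pressure

    def reset(self):
        self.pressure = 1.0
        self.target = 1.0 + self.rng.uniform(0.2, 1.0)
        return np.array([self.pressure, self.target])

    def step(self, action):
        # action[0]: turbine-side command, action[1]: electric-compressor command
        gain = 0.05 * action[0] + 0.10 * action[1]
        self.pressure += gain * (self.target - self.pressure)
        reward = -abs(self.target - self.pressure)  # penalise tracking error
        return np.array([self.pressure, self.target]), reward

def rollout(policy_weights, episode_len=50, seed=0):
    """One worker: collect transitions with a simple linear policy."""
    env = BoostEnvStub(seed)
    obs = env.reset()
    transitions = []
    for _ in range(episode_len):
        action = np.clip(policy_weights @ obs, 0.0, 1.0)  # two actuator commands
        next_obs, reward = env.step(action)
        transitions.append((obs, action, reward, next_obs))
        obs = next_obs
    return transitions

# "Pre-training": start from previously saved weights instead of random ones.
pretrained = np.array([[0.1, 0.3], [0.2, 0.5]])   # placeholder values
policy_weights = pretrained.copy()

# Parallel data collection: four workers are run in a loop here for brevity;
# in practice each would run in its own process against its own simulation.
replay_buffer = []
for worker_id in range(4):
    replay_buffer.extend(rollout(policy_weights, seed=worker_id))

print(f"collected {len(replay_buffer)} transitions from 4 workers")
```

The point of the sketch is structural: each worker gathers experience independently with the same policy, so the replay data grows roughly in proportion to the number of workers, which is the mechanism the thesis credits for faster convergence.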
For the cooperatively controlled system, the two-stage boost intake pressure follows the target well overall, but overshoot and insufficient following remain in some local operating details. The parallel reinforcement learning results show that four agents reduce the number of training rounds needed for convergence by 23% compared with a single agent. After the neural network is pre-trained, the four-agent case converges in 65.45% fewer rounds than without pre-training, and the single-agent case converges in 40.97% fewer rounds, indicating that pre-training accelerates convergence. To verify the control effect of deep reinforcement learning, the intake-pressure following of deep reinforcement learning and of cooperative control are compared over the 900 s to 945 s window of the cycle. The pressure-following performance of deep reinforcement learning is better than that of cooperative control; when the absolute error is used to quantify the intake-pressure following, the absolute error of deep reinforcement learning is 47.54% lower than that of cooperative control.

In summary, this study uses parallel deep reinforcement learning to control a two-stage boosting system, which enables dynamic parameter adjustment and reduces the parameter-tuning workload. Coupled control of the two-stage boost is realized through the algorithm, and pre-training is used to accelerate convergence. The work provides a reference for the parameter tuning of traditional control algorithms and has reference value for promoting the application of intelligent algorithms in the field of engine control.
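As a worked illustration of the comparison metric described above, the sketch below computes a summed absolute intake-pressure tracking error over a 900-945 s window for two controller traces and reports the relative reduction. The signals here are dummy data; the thesis derives them from the co-simulation, and the exact error definition (sum versus mean of absolute errors) is an assumption.

```python
# Hypothetical sketch: absolute-error comparison of intake-pressure following
# over the 900-945 s window. All signals below are dummy placeholders for the
# co-simulation results.
import numpy as np

t = np.arange(900.0, 945.0, 0.1)                # time grid, s
p_target = 1.8 + 0.2 * np.sin(0.5 * t)          # demanded boost pressure, bar (dummy)
p_pid    = p_target + 0.06 * np.sin(2.0 * t)    # cooperative (PID) control trace (dummy)
p_drl    = p_target + 0.03 * np.sin(2.0 * t)    # deep reinforcement learning trace (dummy)

def total_abs_error(p, p_ref):
    """Sum of absolute intake-pressure tracking errors over the window."""
    return np.sum(np.abs(p - p_ref))

err_pid = total_abs_error(p_pid, p_target)
err_drl = total_abs_error(p_drl, p_target)
reduction = 100.0 * (err_pid - err_drl) / err_pid
print(f"absolute-error reduction of DRL vs cooperative control: {reduction:.2f}%")
```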
Keywords/Search Tags: Diesel engine, Two-stage boost, Cooperative control, Deep reinforcement learning algorithm, Pre-training