
Energy Optimization For Metro Train Operation And On-line Timetable Rescheduling Algorithm Under Random Disturbances

Posted on: 2021-03-09    Degree: Doctor    Type: Dissertation
Country: China    Candidate: G Yang    Full Text: PDF
GTID: 1482306503961829    Subject: Electrical engineering
Abstract/Summary:
China's rail transit carries a large daily average passenger volume, and that volume keeps growing year by year. This growth raises both the operating cost of the train system and the demand for safety. In recent years, advances in artificial intelligence have greatly promoted the intelligent operation of rail transit. As an important part of intelligent operation, timetable optimization and rescheduling algorithms attach directly to the automatic train operation system: without modifying the original structure of the system, they reduce the energy consumption of train operation, at low investment cost and with little added safety risk. The automatic train operation system can not only issue a differentiated driving strategy for each train, but also provides a degree of disturbance rejection and real-time response. When a random disturbance occurs, continuing to run on the energy-oriented offline timetable means the system no longer operates at low energy consumption, so the offline timetable must be rescheduled online. By capturing external, uncontrollable disturbances, the rescheduling strategy adjusts the operating plan in real time while guaranteeing safety, so as to limit the impact of the disturbance on the energy-saving effect as much as possible.

This dissertation takes the automatic train control system as the platform for an integrated energy-saving scheme and is organized in four parts: first, a train operation model is established with metro energy optimization as the objective; second, an energy-oriented offline timetable is constructed with a genetic algorithm; third, an online timetable decision-maker is built with a deep neural network; finally, decision-making that optimizes energy consumption under continuous random disturbances is explored with reinforcement learning.

First, the state of train operation, and in particular its energy evolution, is modeled, and a time-domain analytical model is proposed. The model comprises a single-train energy-optimal control model and a multi-train energy transfer model. Most existing single-train optimal control models are analytical and apply the Pontryagin maximum principle to obtain the optimal control sequence. Here, the optimal control modes are supplemented with the electromagnetic characteristics of the traction motor, expressions for the state-variable functions are derived, and the switching points between driving regimes are solved by combining the analytical method with a numerical method. The multi-train model describes how energy is transferred between trains; the prevailing multi-train models are simplified energy consumption models, so the circuit model is compared with the simplified energy model and a complete time-domain analytical model of train operation is constructed. This lumped model of the operation and energy transfer processes of metro trains lays the foundation for the energy optimization and online timetable rescheduling that follow.
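As an illustration of how the analytical driving regimes and a numerical root search can be combined to locate a switching point, the sketch below fixes the regime sequence (maximum traction, cruising, coasting, maximum braking) and searches for the coasting position that meets the scheduled running time. It assumes a point-mass train with Davis-type resistance; all parameter values, the resistance coefficients, and the root-search formulation are illustrative assumptions rather than the model actually used in the dissertation.

```python
import math
from scipy.optimize import brentq

# ---- Illustrative single-train parameters (assumed, not from the thesis) ----
M        = 3.0e5     # train mass [kg]
F_MAX    = 3.0e5     # maximum traction force [N]
B_MAX    = 3.5e5     # maximum service braking force [N]
V_CRUISE = 20.0      # cruise speed [m/s]
L_STOP   = 2000.0    # inter-station running distance [m]
T_TARGET = 123.0     # scheduled running time [s]
DX       = 0.5       # spatial integration step [m]

def resistance(v):
    """Davis-type running resistance [N]; coefficients are assumed."""
    return 8000.0 + 120.0 * v + 10.0 * v * v

def running_time(x_coast):
    """Integrate a max-traction / cruise / coast / max-brake speed profile
    over distance and return the resulting running time [s]."""
    x, v, t = 0.0, 0.1, 0.0                     # small v0 avoids division by zero
    while x < L_STOP:
        # distance needed to stop from speed v under maximum braking
        d_brake = M * v * v / (2.0 * (B_MAX + resistance(v)))
        if x + d_brake >= L_STOP:               # braking regime
            f = -B_MAX
        elif x >= x_coast:                      # coasting regime
            f = 0.0
        elif v < V_CRUISE:                      # full-traction regime
            f = F_MAX
        else:                                   # cruising regime (hold speed)
            f = resistance(v)
        a = (f - resistance(v)) / M
        v_next = max(math.sqrt(max(v * v + 2.0 * a * DX, 0.0)), 0.1)
        t += 2.0 * DX / (v + v_next)            # time spent on this step
        v = v_next
        x += DX
    return t

def coasting_point():
    """Root search for the coasting switch point: the analytic regime
    structure is fixed, the numeric step pins down the switching position."""
    return brentq(lambda xc: running_time(xc) - T_TARGET, 200.0, L_STOP - 100.0)

if __name__ == "__main__":
    xc = coasting_point()
    print(f"coast from {xc:.1f} m, running time {running_time(xc):.1f} s")
```

The same pattern extends to the other switching points and, in the dissertation, is coupled with the motor's electromagnetic characteristics, which the sketch above deliberately omits.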
A well-constructed timetable ensures that trains run safely, makes full use of regenerative braking energy during operation, and minimizes the total net traction energy consumption. The genetic algorithm is the most commonly used method for building the offline timetable, but its optimization effect depends strongly on how the problem is formulated. Through a comparative analysis of different train operation models, driving strategies, decision variables and objective functions, this dissertation proposes an energy-saving timetable optimization strategy based on two decision variables. Tests on Shanghai Metro Line 1 verify that the strategy works offline without additional energy storage equipment and performs well both in reducing net traction energy consumption and in the iteration speed of the optimization.

Disturbances occur frequently in actual metro operation and prevent the offline timetable from achieving its expected energy-saving effect. This dissertation therefore proposes an online timetable rescheduling method based on a deep neural network. When a dwell-time disturbance occurs at a station, the method re-plans the train operation strategy, specifically by re-determining the cruise speed of each train at the moment of departure. It overcomes the high time complexity of the genetic algorithm by training the deep neural network offline on discrete samples. Experiments show that its energy-saving effect is comparable to that of the genetic algorithm while its response time satisfies the demands of online decision-making.

Finally, deep reinforcement learning is applied to online timetable rescheduling, and a method based on the deep deterministic policy gradient algorithm is proposed. The agent is trained by continuous random sampling, takes the cruise speed and the dwell time as its decision variables, and is guided by the value function toward decisions that minimize net traction energy consumption. The method yields an online rescheduling strategy for the case of continuous random disturbances; experiments show that it meets real-time requirements, adapts to the disturbance, and achieves a good energy-saving effect.
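To make the last step concrete, the following is a minimal sketch of a deep deterministic policy gradient update in PyTorch. The state encoding (e.g., current delay and neighbouring headways), the two-dimensional action (cruise speed and dwell time, scaled to [-1, 1]), the reward (negative net traction energy with constraint penalties), the network sizes, and all hyper-parameters are illustrative assumptions, not the configuration used in the dissertation.

```python
import torch
import torch.nn as nn

# Illustrative dimensions (assumed): the state could encode the current delay
# and the headways/positions of neighbouring trains; the action is
# (cruise speed, dwell time) for the disturbed train, scaled to [-1, 1].
STATE_DIM, ACTION_DIM, GAMMA, TAU = 12, 2, 0.99, 0.005

def mlp(in_dim, out_dim, out_act=None):
    layers = [nn.Linear(in_dim, 128), nn.ReLU(),
              nn.Linear(128, 128), nn.ReLU(),
              nn.Linear(128, out_dim)]
    if out_act is not None:
        layers.append(out_act)
    return nn.Sequential(*layers)

actor       = mlp(STATE_DIM, ACTION_DIM, nn.Tanh())   # deterministic policy
critic      = mlp(STATE_DIM + ACTION_DIM, 1)          # Q(s, a) value function
actor_targ  = mlp(STATE_DIM, ACTION_DIM, nn.Tanh())
critic_targ = mlp(STATE_DIM + ACTION_DIM, 1)
actor_targ.load_state_dict(actor.state_dict())
critic_targ.load_state_dict(critic.state_dict())
actor_opt  = torch.optim.Adam(actor.parameters(),  lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_update(batch):
    """One DDPG step on a replay batch (s, a, r, s2, done). The reward would
    be the negative net traction energy of the affected inter-station runs,
    penalised for constraint violations (an assumed reward shaping)."""
    s, a, r, s2, done = batch
    with torch.no_grad():
        q_next = critic_targ(torch.cat([s2, actor_targ(s2)], dim=1))
        y = r + GAMMA * (1.0 - done) * q_next            # TD target
    critic_loss = nn.functional.mse_loss(critic(torch.cat([s, a], dim=1)), y)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Policy gradient: push actions toward higher Q values
    actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Soft update of the target networks
    for net, targ in ((actor, actor_targ), (critic, critic_targ)):
        for p, pt in zip(net.parameters(), targ.parameters()):
            pt.data.mul_(1 - TAU).add_(TAU * p.data)

if __name__ == "__main__":
    B = 64   # one gradient step on synthetic data, for illustration only
    batch = (torch.randn(B, STATE_DIM), torch.rand(B, ACTION_DIM) * 2 - 1,
             torch.randn(B, 1), torch.randn(B, STATE_DIM), torch.zeros(B, 1))
    ddpg_update(batch)
```

In a full implementation the synthetic batch would be replaced by samples drawn from a replay buffer filled during simulated operation under random disturbances, and the tanh-scaled actions would be mapped back to physical cruise-speed and dwell-time ranges before being applied to the timetable.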
Keywords/Search Tags:energy optimization, train timetable rescheduling, metro system model, genetic algorithm, reinforcement learning, deep neural network