
Research on Failure Compensation of a Superconducting Linac Section Using an Artificial Intelligence Algorithm

Posted on: 2022-07-01    Degree: Master    Type: Thesis
Country: China    Candidate: J. W. Du    Full Text: PDF
GTID: 2492306512982819    Subject: Nuclear energy and technology projects
Abstract/Summary:
The CiADS superconducting linear accelerator is the first high-power continuous-wave superconducting accelerator in the world built to drive a transmutation research facility. The design beam current of CiADS is 5 mA at an energy of 500 MeV, and it is expected to be upgraded to 10 mA and 1 GeV in the future. Because of these operating targets and the special requirements of the terminal facility, the overall physical design faces two challenges: an extremely low beam-loss rate and high availability. The accelerator must meet the radiation-safety requirement of 1 W/m at an operating power of 2.5 MW, so the beam-loss rate must be kept below 10⁻⁶. In summary, to ensure safe operation of the terminal facility, the availability required of the accelerator is an order of magnitude higher than that of existing accelerators. Failure compensation of key accelerator components is a key technology for meeting these requirements of high availability and low beam loss, because it raises equipment availability. Traditional algorithms suffer from relatively slow computation and from poor compensation performance in the low-energy transport section.

This thesis uses the Deep Q-Network (DQN) model of reinforcement learning to build a failure-compensation model. The accelerator itself is treated as the reinforcement-learning environment, and the main net of the DQN algorithm acts as a selector in the failure-compensation process, choosing the component parameters of the virtual accelerator that need to be changed (see the sketch after this abstract). The model is adjusted continuously during the experiments so that the virtual accelerator can re-match the beam after a failure. To improve accelerator availability during failure compensation, the simulation time of the compensation must be minimized. By tuning the learning rate and other settings, the trained selector can complete single-cavity failure compensation within 10 time units once the reinforcement-learning iterations are finished, thereby reducing the compensation time.

In addition, to verify the feasibility of the algorithm for both transverse and longitudinal compensation, simulation experiments were carried out in the low-energy HWR010 section of the CiADS superconducting linear accelerator, covering single-cavity failure, single-solenoid failure, multi-cavity failure, and other cases. In the single-cavity failure-compensation simulation, the output energy is 7.7 MeV, above the output-energy threshold; the peak difference between the two ends of the matching section is less than 0.1; the back-end envelope variance increases by less than 5% compared with the pre-failure value; and the emittance growth in multi-particle simulation after compensation is less than 18%, all of which meet the compensation requirements. We therefore consider the feasibility of longitudinal compensation to be demonstrated. Besides the longitudinal problem, failure compensation must also satisfy transverse matching, so a solenoid failure-compensation experiment was carried out. After compensation, the output energy, transmission, and envelope smoothness all meet the requirements, indicating that the compensation method works in the full six-dimensional phase space. A multi-cavity failure-compensation simulation was also attempted; in this experiment the output energy stays above the threshold energy while keeping the envelope smooth. These experiments demonstrate the feasibility of the applied algorithm for the failure-compensation simulation problem.
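The abstract above treats the virtual accelerator as the reinforcement-learning environment and uses the main net of a DQN as a selector that picks which compensating-element parameter to adjust after a failure. The sketch below illustrates that training loop in Python; it is a minimal illustration only, and the VirtualLinac class, its observe()/step() interface, the mismatch-based reward, and all hyperparameters are hypothetical placeholders, not the thesis code or the CiADS simulation.

```python
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

# Hypothetical stand-in for the virtual-accelerator environment: the class name,
# its observe()/step() interface, and the reward are illustrative only.
class VirtualLinac:
    def __init__(self, n_knobs: int = 8):
        self.n_knobs = n_knobs
        self.state = np.random.uniform(-1.0, 1.0, n_knobs).astype(np.float32)

    def observe(self) -> np.ndarray:
        return self.state.copy()

    def step(self, action: int):
        # Each action nudges one compensating-element parameter up or down.
        knob, direction = divmod(action, 2)
        self.state[knob] += 0.1 if direction else -0.1
        mismatch = float(np.abs(self.state).mean())
        reward = -mismatch                 # better matching -> higher reward
        done = mismatch < 0.05             # stand-in for "beam re-matched"
        return self.observe(), reward, done


class QNet(nn.Module):
    """Main net of the DQN: maps the lattice state to Q-values over knob adjustments."""
    def __init__(self, n_obs: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_obs, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)


def train(episodes: int = 200, gamma: float = 0.95, lr: float = 1e-3):
    env = VirtualLinac()
    n_actions = env.n_knobs * 2
    main_net = QNet(env.n_knobs, n_actions)
    target_net = QNet(env.n_knobs, n_actions)
    target_net.load_state_dict(main_net.state_dict())
    opt = torch.optim.Adam(main_net.parameters(), lr=lr)
    buffer = deque(maxlen=10_000)
    eps = 1.0                              # epsilon-greedy exploration rate

    for ep in range(episodes):
        state, done, steps = env.observe(), False, 0
        while not done and steps < 50:
            # The selector: pick the parameter change with the highest Q-value.
            if random.random() < eps:
                action = random.randrange(n_actions)
            else:
                with torch.no_grad():
                    action = int(main_net(torch.tensor(state)).argmax())
            next_state, reward, done = env.step(action)
            buffer.append((state, action, reward, next_state, done))
            state, steps = next_state, steps + 1

            if len(buffer) >= 64:
                s, a, r, s2, d = map(np.array, zip(*random.sample(buffer, 64)))
                q = main_net(torch.tensor(s, dtype=torch.float32))
                q = q.gather(1, torch.tensor(a, dtype=torch.int64).unsqueeze(1)).squeeze(1)
                with torch.no_grad():
                    q2 = target_net(torch.tensor(s2, dtype=torch.float32)).max(1).values
                    target = (torch.tensor(r, dtype=torch.float32)
                              + gamma * q2 * (1.0 - torch.tensor(d, dtype=torch.float32)))
                loss = nn.functional.mse_loss(q, target)
                opt.zero_grad()
                loss.backward()
                opt.step()

        eps = max(0.05, eps * 0.98)        # decay exploration over episodes
        if ep % 10 == 0:                   # periodically sync the target net
            target_net.load_state_dict(main_net.state_dict())


if __name__ == "__main__":
    train()
```

In the setting described by the thesis, each action would correspond to changing a cavity or solenoid setting in the virtual accelerator, and the reward would be built from the matching criteria quoted above (output energy above the threshold, bounded envelope variance and emittance growth) rather than the toy mismatch measure used in this sketch.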
Keywords/Search Tags:Reinforcement learning, failure compensation, beam, deep neural network