Cloud computing technology is now widely used in power systems, and various power system dispatching and control software packages are being deployed to the cloud to achieve elastic scaling of computing resources. However, as the amount of dispatching and control software in the cloud environment grows, the number of computing tasks in the cloud also increases, while the computing resources for processing those tasks are usually limited, so some tasks are delayed because no computing resources are available. In particular, when a fault occurs in the power system, a large number of urgent fault-handling tasks flood into the cloud environment, with the result that some important tasks cannot be completed on time, causing economic losses or even safety accidents. It is therefore important to design an efficient task scheduling mechanism so that the massive number of tasks in the cloud environment receive reasonable resource allocations, guaranteeing the safe and stable operation of the power system. This paper focuses on efficient task scheduling for power system dispatching and control software in the cloud computing mode, and discusses a task scheduling model, an online scheduling algorithm, and scheduling performance improvement methods that address these problems. The main contributions of this paper are as follows.

(1) A scheduling model that considers multiple attributes of tasks in power system dispatching and control software is proposed. It includes a task model covering both independent tasks and workflows; a resource model covering service instances and physical machine resource constraints; a task importance model covering inherent importance, laxity, and critical path; and a utility function for evaluating the importance of tasks. In addition, a data collection method is designed for two types of data, task processing time and resource usage, to provide data support for task scheduling.

(2) Based on the task scheduling model in (1) and the requirements of the dispatching and control business, an online scheduling algorithm that accounts for dynamic changes in task importance is proposed. First, a scheduler framework applicable to power system dispatching and control software is presented. Then, an online scheduling algorithm with three scheduling mechanisms, namely normal allocation, resource reservation, and preemptive scheduling, is proposed. Next, five metrics for evaluating the performance of the algorithm are defined. Finally, a method based on statistical learning is proposed for selecting the initial weights of the utility function. The proposed algorithm is validated on a power system dispatching and control software system, and the simulation results show that it achieves significant advantages on all performance metrics over four online scheduling algorithms, namely First Come First Served, Earliest Deadline First, Least Laxity First, and Fixed Priority Scheduling, and realizes efficient task scheduling.

(3) To improve the performance of the scheduling algorithm in (2), a scheduling performance improvement method based on Q-learning is proposed; Q-learning is a temporal difference method in reinforcement learning and is suitable for scenarios with stringent real-time requirements and an unknown environment model. A reinforcement learning model of the task scheduling problem is established, in which the selection of the utility function weights is modeled as actions, the features of the currently waiting task are modeled as states, and a reward function is defined to evaluate the effect of actions. A Q-learning-based action iteration process is designed, and a scheduler architecture that enables offline learning and online updating of the weights is proposed. The proposed scheduling performance improvement method is validated on a power system dispatching and control software system, and the simulation results show that it effectively improves scheduling performance, with all performance metrics improved to some extent.
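To make the utility-driven ranking of contributions (1) and (2) concrete, the following sketch ranks waiting tasks by a utility built from the three importance attributes named above (inherent importance, laxity, critical path) and checks a simple preemption condition. The linear form of the utility function, the attribute values, the weight vector, and the preemption margin are all illustrative assumptions, not the thesis's exact formulation.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    inherent_importance: float  # business-assigned importance of the task type
    laxity: float               # slack before the deadline; smaller = more urgent
    critical_path: float        # critical-path length if the task is part of a workflow

def utility(task: Task, weights: tuple[float, float, float]) -> float:
    # Weighted combination of the three attributes; laxity enters with a
    # negative sign so that tighter slack raises the utility (assumed form).
    w_imp, w_lax, w_cp = weights
    return (w_imp * task.inherent_importance
            - w_lax * task.laxity
            + w_cp * task.critical_path)

def pick_next(waiting: list[Task], weights) -> Task:
    # Normal allocation: dispatch the waiting task with the highest utility.
    return max(waiting, key=lambda t: utility(t, weights))

def should_preempt(running: Task, candidate: Task, weights, margin: float = 0.5) -> bool:
    # Preemptive scheduling: interrupt a running task only when the candidate's
    # utility exceeds it by a margin, to avoid thrashing (margin is assumed).
    return utility(candidate, weights) > utility(running, weights) + margin

tasks = [
    Task("state_estimation", inherent_importance=0.6, laxity=5.0, critical_path=2.0),
    Task("fault_handling",   inherent_importance=0.9, laxity=0.5, critical_path=3.0),
]
weights = (1.0, 0.2, 0.3)
print(pick_next(tasks, weights).name)  # the urgent fault-handling task wins
```

Here the hypothetical fault-handling task scores 0.9 - 0.2*0.5 + 0.3*3.0 = 1.7 against 0.2 for the routine task, so it is dispatched first; the same comparison drives the preemption check.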
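Contribution (3) can be illustrated with a minimal tabular Q-learning loop in which, as described above, each action is the choice of a utility-function weight vector and the state summarizes the currently waiting task. The candidate weight vectors, the state encoding, the reward value, and the hyperparameters below are illustrative assumptions; the thesis's actual model is richer.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration (assumed)

# Actions: a small candidate set of utility-function weight vectors (illustrative values).
ACTIONS = [(1.0, 0.2, 0.3), (0.5, 0.5, 0.5), (0.2, 0.8, 0.1)]

Q = defaultdict(float)  # Q[(state, action_index)], initialized to 0

def choose_action(state) -> int:
    # Epsilon-greedy selection over the candidate weight vectors.
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: Q[(state, a)])

def update(state, action: int, reward: float, next_state) -> None:
    # Temporal-difference (Q-learning) update toward the best next-state value.
    best_next = max(Q[(next_state, a)] for a in range(len(ACTIONS)))
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# One illustrative transition: reward 1.0 for a scheduling decision that let an
# urgent task meet its deadline.
update("urgent_waiting", 0, 1.0, "queue_drained")
# Q[("urgent_waiting", 0)] is now 0.1
```

In the offline-learning/online-updating architecture the abstract describes, a loop like this would be trained on recorded scheduling traces offline, after which the learned table (or its online refinement) supplies the weights that the scheduler's utility function uses at run time.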