With the rapid development of Internet technology, cloud computing has been widely adopted across the Internet, and the shortcomings of the virtual machine technology underlying it, namely its heavy abstraction overhead and slow application startup, have become increasingly apparent. In recent years, container technology represented by Docker has developed rapidly and has driven innovation across the cloud computing industry. Compared with virtualization technology, Docker containers offer smaller images, lower resource consumption, more flexible deployment and second-level startup. To orchestrate and deploy Docker containers more efficiently and reasonably, the container orchestration tool Kubernetes was born.

The current Kubernetes default scheduling algorithm suffers from two main problems. On the one hand, when calculating node scores, the default scheduling algorithm considers only two metrics, CPU and memory, and weights them equally, which cannot meet the needs of Pod applications of various types, such as bandwidth-intensive and disk-intensive ones. On the other hand, after deploying a large number of Pod applications, the default scheduling algorithm considers neither the minimization of resource consumption cost nor the overall load balance of the cluster. To address these issues, this paper focuses on two main areas of research:

(1) To address the problem that the resource metrics considered by Kubernetes in the scoring stage are not comprehensive and that all resource metrics are assigned equal weights, which cannot reflect the bias of Pod applications toward particular resources, i.e., cannot satisfy Pod applications with different resource requirements, a scheduling algorithm based on the combined-weight TOPSIS method (Combination Weight of TOPSIS, CWT) is proposed to optimize the existing Kubernetes scheduling algorithm. First, the Kubernetes performance metrics are extended on the basis of the original resource metrics; then the subjective and objective weights are calculated by two weighting methods, AHP (Analytic Hierarchy Process) and EW (Entropy Weight), respectively, and applied to an improved TOPSIS method to select the appropriate deployment nodes.

(2) To address the problem that the default Kubernetes scheduling algorithm takes into account neither the resource consumption cost incurred after deploying Pod applications nor the overall load balance of the cluster after scheduling a large number of Pod applications, a scheduling algorithm based on an improved gaining-sharing knowledge algorithm (Check And Gaining-Sharing Knowledge algorithm, CGSK) is proposed to optimize the existing Kubernetes scheduling algorithm. First, the Kubernetes resource metrics are extended. Second, a check dictionary based on the properties of the nodes themselves and the ports requested by Pod applications is built and introduced into the algorithm to repair the initially generated population, as well as all individuals updated during population iteration, that do not match the configuration. Finally, an objective function model based on cost, cluster load degree and cluster imbalance degree, together with a normalized evaluation function, is built and applied to the algorithm to improve the node selection strategy of the default scheduling algorithm.

Simulation results show that, compared with the Kubernetes default scheduling algorithm, the combined-weight TOPSIS scheduling algorithm can effectively balance cluster node resources under high cluster load, and the improved gaining-sharing knowledge optimization algorithm can reduce resource consumption cost, lower cluster load when scheduling multi-Pod applications, and make the distribution of Pod applications more balanced.
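The combined-weight TOPSIS node scoring described in point (1) can be sketched as follows. This is a minimal illustration, not the thesis's exact formulation: all metrics are treated as benefit criteria (larger is better), the AHP weights are hypothetical example values (AHP itself derives them from pairwise-comparison matrices), and the simple linear mix with factor `alpha` is an assumed combination rule.

```python
import math

def entropy_weights(matrix):
    """Objective weights via the entropy weight (EW) method.
    Rows are candidate nodes; columns are benefit-type metrics."""
    m, n = len(matrix), len(matrix[0])
    k = 1.0 / math.log(m)
    weights = []
    for j in range(n):
        col = [matrix[i][j] for i in range(m)]
        total = sum(col)
        p = [v / total for v in col]
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
        weights.append(1 - e)          # lower entropy -> more discriminating metric
    s = sum(weights)
    return [w / s for w in weights]

def combine_weights(subjective, objective, alpha=0.5):
    """Mix AHP (subjective) and EW (objective) weights.
    The linear rule and alpha=0.5 are illustrative assumptions."""
    combined = [alpha * s + (1 - alpha) * o for s, o in zip(subjective, objective)]
    total = sum(combined)
    return [w / total for w in combined]

def topsis_rank(matrix, weights):
    """Closeness coefficient of each node to the ideal solution;
    the node with the highest score is the preferred deployment target."""
    m, n = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    best = [max(v[i][j] for i in range(m)) for j in range(n)]
    worst = [min(v[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_best = math.sqrt(sum((v[i][j] - best[j]) ** 2 for j in range(n)))
        d_worst = math.sqrt(sum((v[i][j] - worst[j]) ** 2 for j in range(n)))
        scores.append(d_worst / (d_best + d_worst))
    return scores

# Hypothetical nodes with [free CPU cores, free memory GiB, bandwidth Mbps, free disk GiB]
nodes = [[4, 16, 100, 200], [8, 8, 1000, 100], [2, 32, 500, 500]]
w = combine_weights([0.4, 0.3, 0.2, 0.1], entropy_weights(nodes))
best_node = max(range(len(nodes)), key=topsis_rank(nodes, w).__getitem__)
```

Because the EW component reacts to how strongly a metric actually varies across nodes, the same Pod-independent AHP weights can still yield different rankings as cluster state changes, which is the intuition behind combining the two weighting methods.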
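Point (2) combines a feasibility "check dictionary" with a multi-term objective. The sketch below shows both ideas under stated assumptions: the check dictionary is reduced to per-node sets of free host ports, the repair rule simply moves a Pod to the first feasible node, and the equal weights and normalizations in the objective are illustrative rather than the thesis's tuned evaluation function.

```python
import statistics

def repair(assignment, free_ports, pod_ports):
    """Repair infeasible genes against the check dictionary: if the host port
    a Pod requests is not free on its assigned node, move the Pod to the
    first node that can provide it. assignment[i] is the node index of Pod i;
    free_ports[n] is the set of free host ports on node n (assumed structure)."""
    fixed = list(assignment)
    for i, node in enumerate(fixed):
        if pod_ports[i] not in free_ports[node]:
            fixed[i] = next(n for n, ports in enumerate(free_ports)
                            if pod_ports[i] in ports)
    return fixed

def objective(assignment, pod_load, node_cost, w=(1/3, 1/3, 1/3)):
    """Normalized weighted sum of deployment cost, mean cluster load and
    load imbalance (population std-dev across nodes); lower is better.
    Pod loads are assumed to be utilisation shares in [0, 1]."""
    n_nodes = len(node_cost)
    loads = [0.0] * n_nodes
    for pod, node in enumerate(assignment):
        loads[node] += pod_load[pod]
    cost = sum(node_cost[n] for n in set(assignment)) / sum(node_cost)
    mean_load = sum(loads) / n_nodes
    imbalance = statistics.pstdev(loads)
    return w[0] * cost + w[1] * mean_load + w[2] * imbalance
```

In a GSK-style loop, every individual produced by initialization or by the junior/senior knowledge-sharing updates would be passed through `repair` before being scored with `objective`, so the search only ever ranks configurations that a real kubelet could actually run.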