
Research And Implementation Of GPU Scheduling Technology Based On OpenStack

Posted on: 2019-07-11
Degree: Master
Type: Thesis
Country: China
Candidate: B Q Wu
Full Text: PDF
GTID: 2428330566473519
Subject: Software engineering
Abstract/Summary:
With the continued development of artificial intelligence and big data technology, enterprises' performance requirements are rising. Cloud computing, as the underlying operational support for these technologies, inevitably needs to further strengthen its data-processing capability, and the graphics processing unit (GPU) has become the main choice for strengthening the computing power of the cloud. Because the CPU and the GPU serve different primary functions, their hardware architectures differ: the CPU excels at logic operations and is mainly used to manage computer hardware and schedule software, while the GPU excels at parallel data processing and is mainly responsible for high-performance computing and image processing. Therefore, adding GPU computing services to the cloud can effectively improve the cloud's computing power and reduce the workload of the cloud's CPUs, so that the cloud can provide better services. However, for architectural reasons the GPU cannot be fully virtualized, which leads to very low GPU utilization in the cloud; as a result, clouds currently rely mainly on CPUs to process computing tasks, which limits further improvement of the cloud's data-processing capability.

The main purpose of this study is to use CUDA (Compute Unified Device Architecture) in the cloud so that GPUs can provide general-purpose parallel computing services, and to improve the utilization and stability of GPUs in the cloud. Specifically, OpenStack is used to build an IaaS cloud platform. Through passthrough technology, physical GPUs are attached to dedicated GPU virtual machines. At the same time, a client/server (C/S) program runs on the control node, the GPU virtual machines, and the user virtual machines. After a user creates an ordinary virtual machine in the cloud, when its client program requests GPU resources, the cloud dispatches the user to an appropriate GPU virtual machine according to the user's request and the GPUs' working state, and the user's task is completed by that GPU virtual machine.

This approach can greatly improve the utilization of GPUs in the cloud and provide users with the full performance of the GPU. In addition, the cloud achieves GPU load balancing based on the GPUs' working state and the users' requests, which effectively improves the stability of GPU service. To further improve the security and stability of GPU services in the cloud, Linux user rights management is adopted to restrict the privileges of the server program inside the GPU virtual machine, and hot-backup technology is used to handle GPU failures.
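As a rough illustration of the dispatch step described above, the following Python sketch shows one way a control-node scheduler could pick a GPU virtual machine for an incoming request, assuming each GPU VM periodically reports its utilization, free device memory, and a heartbeat. The names used here (GpuVmState, pick_gpu_vm, the reported fields) are illustrative assumptions, not the thesis's actual implementation.

```python
# Minimal sketch of load-aware GPU VM selection on the control node.
# Assumes each GPU VM reports utilization, free memory, and a heartbeat.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class GpuVmState:
    address: str            # IP address of the GPU virtual machine
    gpu_utilization: float  # 0.0 - 1.0, reported periodically by the GPU VM
    free_memory_mb: int     # free GPU memory reported by the GPU VM
    healthy: bool           # False if the VM missed its last heartbeat

def pick_gpu_vm(vms: List[GpuVmState], required_memory_mb: int) -> Optional[GpuVmState]:
    """Return the least-loaded healthy GPU VM that can satisfy the memory request."""
    candidates = [v for v in vms
                  if v.healthy and v.free_memory_mb >= required_memory_mb]
    if not candidates:
        return None  # caller can queue the request or fall back to a hot-backup GPU VM
    return min(candidates, key=lambda v: v.gpu_utilization)

# Example: three GPU VMs, the request needs 2 GB of device memory.
vms = [
    GpuVmState("10.0.0.11", 0.70, 4096, True),
    GpuVmState("10.0.0.12", 0.20, 8192, True),
    GpuVmState("10.0.0.13", 0.05, 1024, True),  # too little free memory
]
chosen = pick_gpu_vm(vms, required_memory_mb=2048)
print(chosen.address if chosen else "no GPU VM available")  # -> 10.0.0.12
```

Selecting the least-loaded eligible VM is only one possible balancing policy; the hot-backup handling mentioned in the abstract would sit behind the "no candidate" branch.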
Keywords/Search Tags: OpenStack, CUDA, GPU Task Scheduling, Passthrough Technology, User Rights Management, Hot Backup