
Design And Implementation Of Edge Intelligent Platform Based On Micro-service Architecture

Posted on: 2023-01-28
Degree: Master
Type: Thesis
Country: China
Candidate: H B Li
Full Text: PDF
GTID: 2558306914473654
Subject: Electronic and communication engineering
Abstract/Summary:
In recent years, edge computing has emerged to meet the challenges brought by massive device access and data processing. By deploying computing resources at the network edge, close to the devices, edge computing provides low-latency, high-efficiency processing for device requests. At the same time, new services keep emerging, which increases the demand for intelligent processing of IoT device data. Edge intelligence has therefore become an important application scenario of edge computing, and research on the edge intelligence platform, the carrier of its realization, is particularly critical. How to build a highly extensible platform with flexible service scaling, how to allocate node resources reasonably to improve service deployment efficiency, and how to make nodes cooperate to reduce task execution delay are challenges for the functional and architectural design of an edge intelligence platform. This paper therefore studies these issues, and its main contributions include the following four aspects.

1. This paper provides an in-depth analysis of the requirements and challenges of edge intelligence platforms. To meet the resource management, service scheduling, and related requirements of distributed edge nodes, a cloud-edge collaboration architecture is designed based on Docker and Kubernetes, realizing resource collaboration, service orchestration collaboration, and data collaboration between the cloud and the edge. According to the processing flow of device data, a layered software model based on the micro-service architecture is proposed, which gives the platform high scalability and flexible deployment. To handle the concurrent execution of multiple task types, a concurrent task scheduling mechanism and a task queuing and distribution mechanism are designed to realize multi-task cooperative processing. This platform design lays the foundation for the research on service scheduling and task scheduling that follows.

2. In terms of service deployment, Kubernetes provides a service scheduling strategy that balances node resource utilization in cloud computing scenarios. However, this strategy ignores the logical relationships between services and the network topology between nodes, and it does not consider heterogeneous resource allocation or image distribution. To address these problems, this paper proposes a service scheduling algorithm based on multi-criteria decision-making. The algorithm considers the cross-node data transmission delay between services, the similarity between a service's requested resources and a node's remaining resources, and the node's image-layer cache, and designs three corresponding priority functions to score each node. The scores are then combined with the TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) method, and the algorithm returns service deployment nodes optimized under these multi-criteria constraints. Simulation results show that, compared with the Kubernetes scheduling strategy and a normalized-sorting service scheduling scheme, the proposed algorithm reduces data transmission delay, the cluster balance metric, and service deployment delay by 31%/15%, 40%/5%, and 30%/24%, respectively.
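As an illustration of the multi-criteria ranking step in Contribution 2, the following Python sketch applies a generic TOPSIS procedure to per-node scores. The criterion columns, weights, and example numbers are illustrative assumptions and do not reproduce the thesis's actual priority functions.

```python
import numpy as np

def topsis_rank(scores: np.ndarray, weights: np.ndarray, benefit: np.ndarray) -> np.ndarray:
    """Rank candidate nodes with TOPSIS.

    scores : (n_nodes, n_criteria) matrix of per-node criterion scores;
             here the columns are assumed to be [cross-node transmission
             delay, resource-fit similarity, cached image-layer ratio].
    weights: per-criterion weights, summing to 1 (assumed values below).
    benefit: True where larger is better (similarity, cache ratio),
             False for cost criteria (delay).
    Returns each node's closeness coefficient; higher is better.
    """
    # Vector-normalise each criterion column, then apply the weights.
    norm = scores / np.linalg.norm(scores, axis=0)
    weighted = norm * weights

    # Ideal and anti-ideal solutions per criterion.
    ideal = np.where(benefit, weighted.max(axis=0), weighted.min(axis=0))
    anti  = np.where(benefit, weighted.min(axis=0), weighted.max(axis=0))

    # Euclidean distances to the ideal and anti-ideal points.
    d_pos = np.linalg.norm(weighted - ideal, axis=1)
    d_neg = np.linalg.norm(weighted - anti, axis=1)
    return d_neg / (d_pos + d_neg)

# Example: three candidate edge nodes scored on delay, resource fit, image cache.
nodes = np.array([[12.0, 0.8, 0.3],
                  [30.0, 0.6, 0.9],
                  [18.0, 0.9, 0.5]])
closeness = topsis_rank(nodes,
                        weights=np.array([0.4, 0.3, 0.3]),
                        benefit=np.array([False, True, True]))
print("best node index:", int(np.argmax(closeness)))
```

In this sketch the node with the highest closeness coefficient would be chosen as the deployment target; the thesis's scheduler presumably feeds the outputs of its three priority functions into an analogous ranking step.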
3. In terms of the platform's response to inference tasks, because the resources of a single node are limited and task requests on the edge side fluctuate, the static service pre-deployment scheme based on the algorithm in Contribution 2 cannot meet the task execution requirements. This paper therefore studies the joint optimization of service scaling and task offloading scheduling. First, the queuing and execution process of tasks in containers is modeled and analyzed, and an optimization problem that minimizes the average task completion delay is formulated. The problem is decomposed into two sub-problems, the service scaling decision and the task offloading decision, for which a dynamic-programming-based scaling decision algorithm and a heuristic task scheduling algorithm are proposed, respectively, to realize adaptive, optimized scheduling of task requests. Simulation results show that, compared with the default round-robin load balancing in Kubernetes and the heuristic task scheduling algorithm without service scaling, the proposed algorithm reduces the average completion delay by about 41% and 26%, respectively. Moreover, when the task types change dynamically, the proposed algorithm still maintains the best execution performance.

4. Based on the platform architecture, software model, service scheduling algorithm, and task scheduling algorithm above, a prototype of the edge intelligence platform is built. Through face recognition and pedestrian detection services, the platform's data inference capability is demonstrated in a UI front end.
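To make the task offloading idea in Contribution 3 concrete, here is a minimal Python sketch of a greedy offloading rule that dispatches each task to the service replica with the smallest estimated completion delay (transmission delay plus queued work divided by service rate). The class, field names, and numbers are hypothetical and stand in for, rather than reproduce, the thesis's heuristic scheduler and its queuing model.

```python
from dataclasses import dataclass, field

@dataclass
class Replica:
    """A container replica of an inference service on some edge node (assumed model)."""
    node: str
    service_rate: float                              # tasks per second this replica can process
    backlog: int = 0                                 # tasks currently queued on this replica
    net_delay: dict = field(default_factory=dict)    # per-source transmission delay in seconds

def offload(task_source: str, replicas: list) -> Replica:
    """Greedy offloading: pick the replica minimizing
    transmission delay + (queued tasks + this task) / service rate."""
    def est_delay(r: Replica) -> float:
        return r.net_delay.get(task_source, 0.0) + (r.backlog + 1) / r.service_rate

    best = min(replicas, key=est_delay)
    best.backlog += 1          # account for the newly assigned task
    return best

# Example: two replicas of a face-recognition service on different edge nodes.
replicas = [
    Replica("edge-1", service_rate=5.0, backlog=3, net_delay={"cam-A": 0.01}),
    Replica("edge-2", service_rate=3.0, backlog=0, net_delay={"cam-A": 0.05}),
]
print(offload("cam-A", replicas).node)   # chooses the replica with lower total estimated delay
```

A scaling controller, such as the dynamic-programming decision described in Contribution 3, would periodically adjust the number of replicas so that this per-task dispatch rule keeps the average completion delay low.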
Keywords/Search Tags: edge intelligence, microservice, container, service scheduling, task scheduling