With the widespread adoption of mobile devices and the rapid development of wireless communication and Internet of Things (IoT) technologies, large numbers of delay-sensitive, power-hungry mobile applications have emerged. Mobile Edge Computing (MEC) has arisen as a new-generation network computing paradigm: it places the real-time computing, storage, and communication management capabilities of the cloud at the edge of the mobile network, close to users, meeting the 5G-era network requirements on the types and numbers of terminal devices as well as on throughput and delay. In recent years, two questions have become research hotspots in MEC: how to optimize task offloading and resource allocation strategies to achieve low-latency, low-energy communication, and how to optimize edge caching strategies to improve network performance. This thesis takes MEC as its research scenario and applies automated machine learning (AutoML) methods to these two problems. The main work is as follows.

For the joint optimization of user task offloading and resource allocation, a fast heuristic algorithm is first proposed to find a near-optimal solution in a short time, and a fast AutoML-based solver is then developed. The algorithm uses the MEC server to collect user task and network information, converts these parameters into the variables of the optimization problem, applies the optimal network model trained by AutoML to solve for the discrete offloading decisions, and thereby obtains a solution of the joint optimization problem. Simulation results show that the fast algorithm computes the task offloading and resource allocation strategies within ten milliseconds, and that its objective value deviates from the optimal solution by at most 25%.

For the optimization of edge
caching strategies, a time-segmented popularity-aware caching algorithm based on AutoML is proposed. The algorithm first partitions user requests into six time periods and describes the requested content in each period as a weighted-average type vector. The optimal network model trained by AutoML is then used to predict the type preferences of user requests in each future time period, and cache space is allocated accordingly. The scheme is evaluated on the MovieLens dataset; simulation results show that its cache hit rate and total user-request delay outperform three baseline caching policies: least frequently used (LFU), least recently used (LRU), and first-in first-out (FIFO).
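The offloading pipeline summarized above (collect task and network parameters at the MEC server, solve the discrete offloading variables with a trained model, then allocate edge resources) can be sketched roughly as follows. This is a minimal illustration, not the thesis's formulation: the delay cost model, the fixed uplink rate, and the threshold rule standing in for the AutoML-trained network are all assumptions.

```python
# Sketch of a task-offloading / resource-allocation pipeline.
# The cost formulas and the delay-comparison rule are illustrative
# placeholders for the AutoML-trained model described in the text.

def offload_decisions(tasks, f_local, f_edge):
    """Return a 0/1 offloading decision per task.

    tasks: list of (cpu_cycles, data_bits) tuples collected by the MEC server.
    f_local, f_edge: local and edge CPU frequencies in cycles/s (assumed).
    """
    decisions = []
    for cycles, data_bits in tasks:
        t_local = cycles / f_local                   # local execution delay
        rate = 1e6                                   # assumed uplink rate, bit/s
        t_edge = data_bits / rate + cycles / f_edge  # transmit + edge compute
        # A trained network model would output this discrete variable;
        # here a simple delay comparison stands in for it.
        decisions.append(1 if t_edge < t_local else 0)
    return decisions

def allocate_edge_cpu(tasks, decisions, f_edge):
    """Split the edge CPU proportionally to the cycles of offloaded tasks."""
    offloaded = [t for t, d in zip(tasks, decisions) if d == 1]
    total = sum(c for c, _ in offloaded) or 1
    return [f_edge * c / total for c, _ in offloaded]

tasks = [(2e9, 1e6), (5e8, 4e6), (1e9, 2e5)]
dec = offload_decisions(tasks, f_local=1e9, f_edge=1e10)
alloc = allocate_edge_cpu(tasks, dec, f_edge=1e10)
```

In this sketch the second task stays local because its large upload would dominate the edge delay; a learned model would replace the hand-written comparison while keeping the same collect-decide-allocate structure.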
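The time-segmented caching idea (bucket requests into six periods, form a weighted-average type vector per period, and allocate cache in proportion to the predicted preference) could be sketched like this. The four-hour period boundaries, the 0/1 type encoding, and the proportional allocation rule are assumptions for illustration; in the thesis the per-period preference comes from the AutoML-trained prediction model rather than a direct average.

```python
from collections import defaultdict

PERIODS = 6  # a day split into six 4-hour segments (assumed boundaries)

def period_of(hour):
    """Map an hour of day (0-23) to one of the six time periods."""
    return hour // 4

def type_preference(requests, n_types):
    """Compute a weighted-average type vector per time period.

    requests: list of (hour, type_vector) pairs, where type_vector is a
    0/1 list over content types (e.g. MovieLens genres).
    Returns a dict: period -> normalized preference vector.
    """
    sums = defaultdict(lambda: [0.0] * n_types)
    counts = defaultdict(int)
    for hour, vec in requests:
        p = period_of(hour)
        counts[p] += 1
        for i, v in enumerate(vec):
            sums[p][i] += v
    return {p: [s / counts[p] for s in sums[p]] for p in counts}

def allocate_cache(pref, capacity):
    """Give each content type cache slots proportional to its predicted
    preference (here `pref` stands in for the model's prediction)."""
    s = sum(pref) or 1.0
    return [round(capacity * x / s) for x in pref]
```

A usage example: with requests `[(1, [1, 0]), (2, [1, 1]), (10, [0, 1])]`, the first period's preference vector is `[1.0, 0.5]`, so a 30-slot cache would be split 20/10 between the two types.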