With the rapid development of the Internet of Things (IoT), a massive number of devices are connected to provide users with real-time application services, among which content-based services occupy a dominant position. Traditional content-based services are provided by the cloud according to the content requested by users. However, the cloud is usually deployed far away from users and cannot provide real-time services due to the large content transmission delay incurred in the process. The edge caching system caches contents at edge nodes so that users can fetch contents from neighboring edge nodes, thus reducing the content transmission delay and improving system performance. Since the storage capacity of edge nodes is limited, an efficient caching scheme needs to be designed to identify and cache the popular contents that most users are interested in. By collecting and training on user data to extract hidden features, machine learning (ML) can effectively predict the popular contents that need to be cached at edge nodes. However, user data usually contains private information, and users are reluctant to share their data directly with others, which makes it difficult for edge nodes to collect and train on user data. Federated learning (FL) replaces data collection with the sharing of users' local models, enabling the prediction of popular contents while protecting users' data privacy. The edge caching system is characterized by mobile users and diverse popular contents, so it is crucial to update the cached contents appropriately while ensuring users' privacy and adapting to these system characteristics. Deep reinforcement learning (DRL) is an effective tool for constructing decision frameworks and optimizing the cooperative caching of contents in complex network environments. This paper investigates caching schemes for different edge caching systems based on FL and DRL. The main research works are as follows:

(1) For the vehicular edge caching scenario with a single edge node, an Asynchronous Federated learning based Mobility-aware edge Caching (AFMC) scheme is developed. This part of the work first considers vehicle mobility and designs an asynchronous FL algorithm to improve the accuracy of the global model at the edge node. Then, an autoencoder (AE) is used to predict popular contents based on the global model and cache them at the edge node. Finally, simulation results show that the AFMC scheme outperforms other baseline caching schemes in terms of cache hit ratio.

(2) For the vehicular edge caching scenario with two edge nodes, a Cooperative caching scheme based on Asynchronous Federated and deep Reinforcement learning (CAFR) is developed. This part of the work first considers vehicle mobility (position and speed) and the vehicular communication model, and designs an asynchronous FL algorithm to obtain an accurate global model. Then, a content popularity prediction algorithm based on the global model is adopted: each vehicle uses an AE to predict content popularity based on the global model, and the local edge node collects the content popularity of all vehicles within its coverage area to determine the popular contents. DRL then optimizes the content transmission delay by finding the best cooperative cache locations for the predicted popular contents. Finally, simulation results show that the CAFR scheme outperforms other baseline caching schemes in terms of cache hit ratio and content transmission delay.

(3) For the edge caching scenario with multiple edge nodes, a Cooperative caching scheme based on Multi-agent deep Reinforcement learning and Elastic Federated learning (CMREF) is developed. This part of the work first uses the multi-agent deep deterministic policy gradient (MADDPG) algorithm to learn the optimal caching decision. Next, each edge node uses an adversarial autoencoder (AAE) model to predict the content popularity within its own coverage. Then, the AAE model of each edge node is trained by elastic FL to protect user privacy and reduce communication costs in the network. Finally, simulation results show that the CMREF scheme outperforms other baseline caching schemes in terms of cache hit ratio and cost.
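The asynchronous FL idea underlying AFMC and CAFR — blending each vehicle's local model into the global model as it arrives, rather than waiting for all vehicles — can be sketched as follows. This is a minimal illustration, not the thesis's exact algorithm: the staleness-based weighting rule and the names `staleness_weight` and `aggregate_async` are assumptions introduced here.

```python
# Hypothetical sketch of asynchronous FL aggregation with staleness weighting.
# The weighting rule (alpha / (1 + staleness)) is an illustrative assumption,
# not the exact update used in AFMC/CAFR.

def staleness_weight(staleness: int, alpha: float = 0.6) -> float:
    """Down-weight local updates that were trained on an older global model."""
    return alpha / (1 + staleness)

def aggregate_async(global_model, local_model, staleness, alpha=0.6):
    """Blend one vehicle's local model into the global model on arrival,
    instead of waiting for all vehicles as in synchronous FL."""
    w = staleness_weight(staleness, alpha)
    return [(1 - w) * g + w * l for g, l in zip(global_model, local_model)]

# Example: a vehicle uploads an update trained on a global model 2 rounds old.
global_model = [0.5, -1.0, 2.0]
local_model = [1.0, 0.0, 1.0]
new_global = aggregate_async(global_model, local_model, staleness=2)
```

Because stale updates get smaller weights, vehicles that leave the edge node's coverage (or have slow links) still contribute without dragging the global model toward outdated parameters.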
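The prediction-then-caching step shared by the schemes — an edge node collecting per-vehicle popularity predictions (e.g., AE-reconstructed interest vectors) and caching the top contents — can also be sketched. The averaging rule and the name `select_cache_contents` are illustrative assumptions.

```python
# Hypothetical sketch of how an edge node could rank contents for caching
# from the popularity predictions of vehicles in its coverage area.
import numpy as np

def select_cache_contents(vehicle_scores: np.ndarray, cache_size: int):
    """vehicle_scores: shape (num_vehicles, num_contents); each row is one
    vehicle's predicted interest in every content. The edge node averages
    the scores over its covered vehicles and caches the top-k contents."""
    popularity = vehicle_scores.mean(axis=0)
    return np.argsort(popularity)[::-1][:cache_size]

# Two vehicles, four contents; the limited cache holds only two contents.
scores = np.array([[0.9, 0.1, 0.4, 0.7],
                   [0.8, 0.2, 0.3, 0.9]])
cached = select_cache_contents(scores, cache_size=2)  # contents 0 and 3
```

In the cooperative schemes (CAFR, CMREF), a DRL agent would then decide *where* among the cooperating edge nodes to place these selected contents; that placement step is omitted here.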