With the rapid development of the Internet of Things (IoT), an increasing number of devices are connected to the Internet. These devices not only collect large amounts of data but also apply artificial intelligence techniques for intelligent analysis and decision-making. However, due to data security and privacy concerns, these devices often cannot upload their data to the cloud for processing, which gives rise to the problem of data silos. To address this issue, researchers have proposed federated learning, a distributed machine learning method that allows multiple devices to collaboratively train a shared model while keeping their data local. This approach protects user privacy while making full use of the data on each device.

In practical applications, however, federated learning faces a significant challenge: non-independent and identically distributed (Non-IID) data. Non-IID data means that the data held by different devices do not follow a common distribution, which slows training and degrades the accuracy of the final model. Optimizing federated learning algorithms for the Non-IID setting is therefore of great significance, since traditional algorithms converge slowly and suffer severe performance degradation when trained on Non-IID data. This thesis proposes two optimization algorithms targeting the client selection and model aggregation steps of the federated learning process; extensive experiments show significant improvements in both convergence speed and model accuracy during training.

(1) To address the performance degradation caused by traditional client selection algorithms based on random selection, this thesis proposes FedDR-H-CS, a hierarchical client selection algorithm based on aggregation degree. It partitions clients into layers whose data distributions are approximately independent and identically distributed, and selects the best clients within each layer to mitigate the Non-IID problem. The results show that this algorithm outperforms the traditional FedAvg algorithm and the optimized FedProx algorithm in both accuracy and convergence speed.

(2) Existing model aggregation optimization algorithms often struggle to obtain a single global model that generalizes well to every client. To address this issue, we propose FedFLA, a flexible local model aggregation method that aggregates the global and local models at a fine-grained level to adapt to the local objective on each client. This allows personalized initialization of each local model and tracking of local gradient updates, with unbiased aggregation of gradients into the global parameters performed on the server side. Experimental results show that our algorithm outperforms traditional algorithms such as FedAvg and optimized algorithms such as SCAFFOLD in both accuracy and convergence speed.
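To make the two steps that the thesis targets concrete, the following is a minimal, illustrative sketch of one FedAvg-style training round. The `Client` class and its `local_update` method are hypothetical stand-ins for real on-device training; they are not taken from the thesis.

```python
# Minimal sketch of one federated round (FedAvg-style). The Client class and
# local_update() are illustrative placeholders, not the thesis's code.
import random

class Client:
    def __init__(self, num_samples):
        self.num_samples = num_samples

    def local_update(self, global_params):
        # Placeholder: a real client would run several epochs of local SGD.
        return ([p + random.uniform(-0.01, 0.01) for p in global_params],
                self.num_samples)

def federated_round(global_params, clients, fraction=0.1):
    # Step 1: client selection (uniformly at random, as in plain FedAvg).
    k = max(1, int(fraction * len(clients)))
    selected = random.sample(clients, k)

    # Step 2: local training on each selected client.
    results = [c.local_update(global_params) for c in selected]

    # Step 3: model aggregation, weighted by each client's sample count.
    total = sum(n for _, n in results)
    return [sum(w[i] * n / total for w, n in results)
            for i in range(len(global_params))]

clients = [Client(num_samples=random.randint(50, 500)) for _ in range(100)]
params = [0.0] * 4
for _ in range(3):
    params = federated_round(params, clients)
print(params)
```

Under Non-IID data, both highlighted steps are weak points: random selection in step 1 can pick an unrepresentative cohort, and the plain weighted average in step 3 can pull the global model away from individual clients' objectives. These are exactly the two steps the proposed algorithms replace.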
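The abstract describes FedDR-H-CS only at a high level (hierarchical layering by aggregation degree, then per-layer selection). The sketch below illustrates the general idea under stated assumptions: clients are clustered by their label histograms so that each layer is roughly IID, and within each layer the clients closest to the layer centroid are picked as a stand-in for the thesis's "best clients". Both the clustering criterion and the scoring rule are illustrative assumptions, not the thesis's definitions.

```python
# Hypothetical sketch of hierarchical client selection. The use of label
# histograms, k-means layering, and centroid distance as the selection score
# are assumptions for illustration only.
import numpy as np
from sklearn.cluster import KMeans

def label_histogram(labels, num_classes):
    # Normalized class distribution of one client's local data.
    hist = np.bincount(labels, minlength=num_classes).astype(float)
    return hist / hist.sum()

def hierarchical_select(client_labels, num_classes, num_layers=5, per_layer=2):
    # Layer clients so each layer groups similar data distributions.
    hists = np.stack([label_histogram(l, num_classes) for l in client_labels])
    layers = KMeans(n_clusters=num_layers, n_init=10).fit_predict(hists)

    selected = []
    for layer in range(num_layers):
        members = np.where(layers == layer)[0]
        if len(members) == 0:
            continue
        # Illustrative score: prefer clients whose distribution is closest
        # to the layer centroid (a stand-in for the thesis's criterion).
        centroid = hists[members].mean(axis=0)
        dist = np.linalg.norm(hists[members] - centroid, axis=1)
        selected.extend(members[np.argsort(dist)[:per_layer]].tolist())
    return selected

rng = np.random.default_rng(0)
client_labels = [rng.integers(0, 10, size=rng.integers(50, 500))
                 for _ in range(100)]
print(hierarchical_select(client_labels, num_classes=10))
```

Sampling from every layer keeps each round's cohort close to the overall data distribution, which is the intuition behind selecting within approximately IID groups rather than uniformly at random.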
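For FedFLA, the abstract specifies a fine-grained aggregation of the global and local models used to initialize each client's personalized model. One plausible reading of "fine-grained" is a per-layer convex combination, sketched below; the mixing rule and the `alpha` weights are illustrative assumptions rather than the thesis's exact formulation, and the server-side unbiased gradient aggregation is not shown.

```python
# Hypothetical per-layer mixing of global and local parameters. The convex
# combination and the alpha weights are assumptions for illustration.
import numpy as np

def flexible_init(global_model, local_model, alpha):
    """Per-layer interpolation: layers with alpha near 1 follow the global
    model; layers with alpha near 0 stay personalized to the client."""
    return {name: alpha[name] * global_model[name]
                  + (1.0 - alpha[name]) * local_model[name]
            for name in global_model}

global_model = {"conv": np.ones(3), "head": np.ones(2)}
local_model = {"conv": np.zeros(3), "head": np.full(2, 4.0)}
# Example: mostly share the feature extractor, keep the classifier head local.
alpha = {"conv": 0.9, "head": 0.1}
print(flexible_init(global_model, local_model, alpha))
```

Mixing at the level of individual layers (rather than the whole model) is what lets each client adapt the initialization to its local objective while still benefiting from globally shared features.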