Machine learning training requires sufficient training data, but centralized data collection is often constrained by the need to preserve the privacy of data owners. Federated learning, as a distributed machine learning paradigm, enables clients holding large amounts of data to collaboratively train a shared global model, which largely protects each client's local private data from exposure to external attackers. However, private information can still be leaked by analyzing the differences in the gradient updates uploaded by clients, so strict privacy mechanisms are required to ensure data security in a federated learning system. Differential privacy is a widely used privacy protection technique. Existing differentially private federated learning algorithms generally protect private data by injecting random noise, at the cost of reduced model accuracy; improving model accuracy under a differential privacy guarantee is therefore an important research direction. In addition, federated learning requires frequent information exchange between the clients and the central server, and this data transmission usually takes place over communication networks with limited resources. The overhead caused by these frequent exchanges is substantial and becomes a performance bottleneck of the federated learning system.

Regarding the privacy and communication issues in federated learning, the contributions of this paper are as follows:

(1) To effectively prevent information leakage, a hierarchical Gaussian differential privacy federated learning algorithm is proposed. It makes full use of the layered structure of multi-layer neural networks, adds appropriately calibrated artificial noise to perturb the global model aggregation, and resists membership inference attacks. The conventional privacy definition is also refined so that, at the same privacy protection level, the algorithm can meet the requirements of tasks demanding high model accuracy.

(2) To save communication resources, an adaptive gradient exchange algorithm is proposed to compress gradient communication. The algorithm adaptively skips part of the gradient uploads during the global model iterations, compresses the total amount of gradient data exchanged in the federated learning process, and reduces communication overhead without degrading model performance. For a given target model accuracy, sparsifying communication increases the number of communication rounds required for federated training, which is unfavorable for the privacy guarantee of differential privacy methods. Therefore, parameter control, a first-round exemption strategy, and a dynamic accuracy inspection strategy are further introduced and combined with the hierarchical Gaussian differential privacy federated learning algorithm, extending it to the differential privacy protection scenario. The resulting adaptive gradient exchange Gaussian differential privacy federated learning scheme achieves a balance between communication efficiency and privacy guarantees.
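To make the two contributions more concrete, the following is a minimal sketch of layer-wise Gaussian perturbation of client gradients. It assumes per-layer clipping bounds and a single noise multiplier; the function name, the per-layer sensitivity scaling, and the clipping rule are illustrative assumptions and do not reproduce the paper's exact noise calibration.

```python
import numpy as np

def add_layerwise_gaussian_noise(layer_grads, clip_norms, noise_multiplier, rng=None):
    """Clip each layer's gradient to its own norm bound and add Gaussian noise.

    Sketch only: `clip_norms` and `noise_multiplier` are assumed inputs; the
    paper's actual hierarchical noise allocation is not reproduced here.
    """
    rng = rng or np.random.default_rng()
    noisy = []
    for g, c in zip(layer_grads, clip_norms):
        norm = np.linalg.norm(g)
        g_clipped = g * min(1.0, c / (norm + 1e-12))   # per-layer L2 clipping
        sigma = noise_multiplier * c                   # noise scaled to the layer's clipping bound
        noisy.append(g_clipped + rng.normal(0.0, sigma, size=g.shape))
    return noisy
```

Likewise, the adaptive gradient exchange idea can be illustrated by a client-side rule that skips an upload when the new update differs little from the last transmitted one, with the first round always transmitted. The class name, the relative-change threshold, and the skip criterion below are hypothetical placeholders, not the paper's exact conditions.

```python
import numpy as np

class AdaptiveGradientExchange:
    """Illustrative client-side skip rule for sparse gradient communication."""

    def __init__(self, threshold):
        self.threshold = threshold   # relative-change threshold (assumed form)
        self.last_sent = None        # last update actually transmitted

    def should_send(self, update, round_idx):
        # First-round exemption: always transmit in the first round.
        if round_idx == 0 or self.last_sent is None:
            return True
        # Skip the upload if the change since the last transmission is small.
        change = np.linalg.norm(update - self.last_sent)
        return change > self.threshold * np.linalg.norm(self.last_sent)

    def record_sent(self, update):
        self.last_sent = np.copy(update)
```

Under these assumptions, a client would call `should_send` each round and transmit (and `record_sent`) only when the rule fires, which is how the scheme trades a small amount of staleness for fewer communicated gradients.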