Advances in big data technology have been accompanied by the growing popularity of artificial intelligence and machine learning. To overcome the limitations of traditional centralized machine learning, such as storage cost, storage space, and the privacy risks of data transmission, research on distributed machine learning has gradually emerged. Federated learning has become one of the most effective and practical privacy-preserving solutions in the distributed machine learning environment. However, recent research has shown that privacy leakage still exists in federated learning: the exchange of abstract data such as gradient updates can leak users' private data, which greatly hinders the broader development of distributed machine learning. Membership inference attacks are among the most threatening privacy attacks. To reasonably measure the privacy leakage risk of membership in a federated learning environment, and thereby explore the factors that influence model efficiency and privacy levels, we propose a gradient-based membership inference attack. Using the average difference between the membership inference attack accuracy and the 50% random-guess baseline, we can measure the privacy leakage risk of different federated learning environments.

In addition, since model training efficiency and privacy-preserving level are the two main design goals of a federated learning framework, we also explore various factors affecting privacy leakage risk and training efficiency by observing attack accuracy and convergence speed in different federated learning environments, including the model training structure, the distribution of the training dataset, and the model optimization method. To verify the effectiveness of the proposed gradient-based membership inference attack for measuring privacy leakage risk in federated learning systems, we give examples of privacy leakage risk measurement in different federated learning settings using three traditional classification datasets. The results of extensive simulated attack experiments in different federated learning environments show that although second-order machine learning techniques help improve model training efficiency and accelerate the convergence of federated learning algorithms, they also increase the privacy leakage risk of the federated learning environment. Finally, we discuss the relationship between privacy leakage risk and various factors in the federated learning environment, and show that in complex federated learning environments, both the model structure and the number of training parties can affect the privacy level.
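The leakage measure described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name and the sample attack accuracies are hypothetical, and the metric is simply the average gap between observed membership inference attack accuracy and the 50% random-guess baseline.

```python
def leakage_risk(attack_accuracies):
    """Average difference between membership inference attack accuracy
    and the 50% baseline of random guessing.

    A risk near 0 means the attacker does no better than chance;
    larger values indicate greater membership privacy leakage.
    """
    return sum(acc - 0.5 for acc in attack_accuracies) / len(attack_accuracies)

# Hypothetical attack accuracies measured across repeated attack runs
# in one federated learning setting:
risk = leakage_risk([0.62, 0.58, 0.66])
print(risk)  # → 0.12 (up to floating-point rounding)
```

Comparing this risk value across settings (model structure, dataset distribution, optimization method) is what allows the different federated learning environments to be ranked by their privacy leakage.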