Federated learning is a recent development in distributed machine learning that is widely used in fields such as financial analysis and medical diagnosis: each client acquires and processes private data locally, then transmits updated machine learning model parameters to cloud servers for aggregation. However, federated learning still poses privacy challenges, since private information can be recovered by analyzing the parameters a client uploads, and even when the parameters are encrypted, dishonest entities can collude to achieve their goals. To address these problems, this thesis studies privacy protection for federated learning based on secure multi-party computation. The main work is as follows:

1. The first section introduces a decentralized system that relies on a set of computing nodes to aggregate the parameters of the network model. To ensure the confidentiality of the data and the privacy of the data providers, the system combines a key replacement algorithm with partially homomorphic encryption, and uses the Chinese remainder theorem to speed up decryption (see the first sketch below). Under this threat model, clients do not need to trust any specific participating entity. The thesis compares the system with related privacy-preserving work on federated learning. Experimental results show that the proposed scheme achieves high training accuracy while reducing communication overhead.

2. The second section analyzes the BatchCrypt scheme and points out that its attack model requires a non-colluding, semi-honest third party to participate in the training process. To address this weakness, a more secure signature verification scheme, Secure Batch Crypt, is constructed. Using bilinear aggregate signature technology and threshold homomorphic encryption, the scheme requires participants to decrypt the aggregated data collaboratively, and lets each client locally verify the computation results returned by the third party (see the second sketch below), countering malicious behavior such as forged aggregation results or collusion among cloud services. Experimental results show that the communication overhead remains controllable and higher security is achieved without degrading model quality.
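
First sketch. The abstract names the partially homomorphic building block of the first section but gives no code; the following is a minimal illustration of that kind of scheme, using textbook Paillier encryption with CRT-accelerated decryption. All function names are illustrative, the primes are far too small for real use, and the key replacement step and the decentralized node set are omitted.

```python
import random
from math import gcd

def keygen(p, q):
    # p, q: distinct primes (tiny here; real deployments use >= 1024-bit primes)
    n = p * q
    g = n + 1                                     # standard Paillier generator
    return (n, g), (p, q)

def encrypt(pub, m):
    n, g = pub
    n2 = n * n
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2   # Enc(m) = g^m * r^n mod n^2

def add(pub, c1, c2):
    # additive homomorphism: Enc(m1) * Enc(m2) mod n^2 = Enc(m1 + m2)
    n, _ = pub
    return (c1 * c2) % (n * n)

def decrypt_crt(pub, priv, c):
    # CRT-accelerated decryption: recover m mod p and m mod q with
    # half-size exponentiations, then recombine via the Chinese
    # remainder theorem -- roughly 4x faster than decrypting mod n^2.
    n, g = pub
    p, q = priv
    def dec_mod(s):
        s2 = s * s
        l = (pow(c, s - 1, s2) - 1) // s          # L_s(c^(s-1) mod s^2)
        h = pow((pow(g, s - 1, s2) - 1) // s, -1, s)
        return (l * h) % s
    mp, mq = dec_mod(p), dec_mod(q)
    return (mq + q * (((mp - mq) * pow(q, -1, p)) % p)) % n

# toy aggregation of two "model updates" under encryption
pub, priv = keygen(2147483647, 2147483629)        # demo-sized primes only
c = add(pub, encrypt(pub, 5), encrypt(pub, 7))
assert decrypt_crt(pub, priv, c) == 12
```

Because aggregation happens on ciphertexts, the nodes performing the sum never see individual model updates; only the holder of the private key can decrypt the final aggregate.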
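
Second sketch. The verification step of the second section can be illustrated similarly. This assumes the third-party py_ecc library (`pip install py_ecc`) for BLS aggregate signatures over BLS12-381; the ciphertext byte strings are hypothetical stand-ins for BatchCrypt-style encrypted gradient batches, and the threshold homomorphic decryption half of Secure Batch Crypt is omitted.

```python
import os
from py_ecc.bls import G2Basic as bls   # IETF-draft BLS over BLS12-381

n_clients = 3
sks = [bls.KeyGen(os.urandom(32)) for _ in range(n_clients)]
pks = [bls.SkToPk(sk) for sk in sks]

# stand-ins for the encrypted, batch-encoded gradient updates
ciphertexts = [b"enc_update_%d" % i for i in range(n_clients)]

# each client signs the ciphertext it uploads
sigs = [bls.Sign(sk, ct) for sk, ct in zip(sks, ciphertexts)]

# the aggregator condenses all signatures into one constant-size signature
agg_sig = bls.Aggregate(sigs)

# any client can verify locally that the returned aggregate really
# covers every participant's uploaded ciphertext
assert bls.AggregateVerify(pks, ciphertexts, agg_sig)
```

A forged or incomplete aggregate fails this check, which is the property the thesis relies on to detect a third party that returns false aggregation results or colludes with cloud services.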