Federated Learning (FL) was proposed to address the problem of data silos, which arises mainly from concerns over data privacy. Research on FL has therefore consistently focused on protecting data security and privacy. Privacy in FL manifests primarily as data privacy and model privacy. Data privacy usually refers to information such as the personal identity and health status of the data owners; its leakage may threaten individual or societal security. Model privacy generally refers to the model structure and parameters; leaking it can enable model inversion attacks, which in turn disclose sensitive personal information. Both forms of protection ultimately aim to safeguard the sensitive content of the original samples, while neglecting the value of the model itself. In FL, task initiators hold intellectual property rights over the model's functionality, yet most existing FL algorithms share a global model during training. Protecting the model privacy of task initiators has therefore become one of the challenges of privacy protection in FL. Building on existing privacy protection studies, this thesis proposes model privacy protection methods, together with quantifiable indicators, for three FL algorithms from the perspective of protecting task initiators' intellectual property rights. The main contents and innovations of this thesis are as follows:

1. For federated reinforcement learning algorithms, a model privacy protection method based on function transformation is proposed. A transformation function alters the actual meaning of the models held by the participants, so that participants contribute only data value and computing power while the task initiator monopolizes all training results (see the first sketch after this abstract). Because the expenditure and benefits of the task initiator and the participants are unequal, a contribution measurement method is also presented, providing a quantitative basis for rewarding participants. Finally, the correctness of the algorithm is validated in the Grid-World game environment, and its resistance is tested through reconstruction training attacks.

2. For federated tree model algorithms, a privacy protection method based on the Random Forest is proposed. The method masks node information so that participants can maintain only incomplete tree models, while the task initiator maintains the complete ones. Building on the strong interpretability of tree models, this thesis also proposes an indicator that measures the information content of a tree model through entropy (see the second sketch after this abstract); it both quantifies and controls the degree of model privacy protection and measures each participant's contribution. Finally, the correctness of the proposed algorithm is validated on three public datasets, and accurate measurements of the information content of each participant's tree models are reported.

3. For standard federated neural network algorithms, a privacy protection method based on homomorphic encryption is proposed. The method encrypts the neural network model rather than the original samples, so that participants operate on the model in the ciphertext space while only the task initiator can use the model in the clear. Owing to the limitations of existing homomorphic encryption schemes, this thesis homomorphically encrypts the linear part of the federated network structure and protects the nonlinear part with garbled circuits (see the third sketch after this abstract). Finally, the correctness of the proposed algorithm is validated on a binary classification problem using the basic federated learning architecture.
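The following minimal sketch illustrates the function-transformation idea from contribution 1 in a tabular Grid-World setting. It is a hypothetical construction, not the thesis's actual transform: the task initiator keeps a secret positive affine map and hands participants rewards r~ = a*r + b(1-gamma), under which ordinary Q-learning converges to Q~ = a*Q + b, invertible only with the secret (a, b). A positive affine map still preserves the greedy policy, so a real scheme would need a stronger transformation; the sketch only shows the wrap-and-recover mechanics.

```python
import numpy as np

GAMMA, ALPHA = 0.9, 0.1

class Initiator:
    """Holds the secret transform f(Q) = a*Q + b with a > 0 (hypothetical)."""
    def __init__(self):
        self.a = np.random.uniform(2.0, 5.0)   # secret scale
        self.b = np.random.uniform(-1.0, 1.0)  # secret shift

    def transform_reward(self, r, done):
        # Chosen so that vanilla Q-learning on transformed rewards keeps
        # the invariant Q~ = a*Q + b at every update (no bootstrap term
        # contributes gamma*b at terminal transitions).
        return self.a * r + self.b * (1.0 if done else 1.0 - GAMMA)

    def recover_q(self, q_tilde):
        return (q_tilde - self.b) / self.a     # only the initiator can invert

def participant_update(q_tilde, s, act, r_tilde, s_next, done):
    # The participant runs ordinary Q-learning on transformed rewards;
    # it never sees the true rewards or the true Q-values.
    target = r_tilde if done else r_tilde + GAMMA * q_tilde[s_next].max()
    q_tilde[s, act] += ALPHA * (target - q_tilde[s, act])

# Toy 1-D grid world: states 0..4, reaching state 4 yields reward 1.
initiator = Initiator()
q_tilde = np.full((5, 2), initiator.b)  # initial model distributed by the initiator (true Q = 0)
rng = np.random.default_rng(0)
for _ in range(3000):
    s = 0
    while True:
        act = int(rng.integers(2))             # 0: left, 1: right
        s_next = max(s - 1, 0) if act == 0 else min(s + 1, 4)
        r, done = (1.0, True) if s_next == 4 else (0.0, False)
        participant_update(q_tilde, s, act,
                           initiator.transform_reward(r, done), s_next, done)
        if done:
            break
        s = s_next

# Expected after recovery: Q[3,1] ~= 1.0, Q[0,1] ~= 0.9**3 ~= 0.73.
print(np.round(initiator.recover_q(q_tilde), 2))
```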
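The entropy-based information-content indicator of contribution 2 could be instantiated as follows; the exact formula used in the thesis is not reproduced here, and the node layout is an assumption. The sketch scores a tree as the sample-weighted information gain summed over its visible (unmasked) split nodes, so a participant's masked tree scores strictly less than the initiator's complete one, and the ratio of the two scores can serve both as a degree of protection and as a contribution measure.

```python
import math

def entropy(counts):
    """Shannon entropy (in bits) of a vector of per-class sample counts."""
    n = sum(counts)
    return -sum(c / n * math.log2(c / n) for c in counts if c) if n else 0.0

def info_content(node, n_root):
    """Sample-weighted information gain summed over visible split nodes.

    Hypothetical node layout: {'counts': [per-class sample counts],
    'masked': bool, 'children': [child nodes]} (leaves omit 'children').
    A masked node hides its split, so its own gain is not counted, but
    visible splits deeper in the tree still contribute.
    """
    children = node.get('children')
    if not children:
        return 0.0
    n = sum(node['counts'])
    gain = 0.0
    if not node.get('masked'):
        h_children = sum(sum(c['counts']) / n * entropy(c['counts'])
                         for c in children)
        gain = n / n_root * (entropy(node['counts']) - h_children)
    return gain + sum(info_content(c, n_root) for c in children)

# Toy tree: the root splits 100 samples into two fairly pure children,
# and the left child splits again into two pure leaves.
tree = {'counts': [50, 50], 'children': [
    {'counts': [40, 10], 'children': [
        {'counts': [40, 0]}, {'counts': [0, 10]}]},
    {'counts': [10, 40]}]}

full = info_content(tree, 100)                # initiator's complete tree
tree['children'][0]['masked'] = True          # mask one split for a participant
masked = info_content(tree, 100)
print(round(full, 3), round(masked, 3), round(masked / full, 3))
```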
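For contribution 3, the encrypted linear layer can be sketched with the Paillier cryptosystem via the open-source phe package; this is an assumed dependency, as the thesis does not name a library. Paillier supports ciphertext addition and ciphertext-by-plaintext multiplication, which is exactly what evaluating a linear layer over encrypted weights and plaintext inputs requires. The garbled-circuit evaluation of the nonlinear part is out of scope here, and the final decryption stands in for the initiator-side step.

```python
# pip install phe  (assumed library choice, not specified by the thesis)
from phe import paillier

pub, priv = paillier.generate_paillier_keypair(n_length=1024)

# Task initiator: encrypt the linear layer (weights and bias).
weights, bias = [0.8, -1.3, 0.5], 0.1
enc_w = [pub.encrypt(w) for w in weights]
enc_b = pub.encrypt(bias)

# Participant: evaluates the linear part on its own plaintext input
# without ever learning the weights; the result stays encrypted.
x = [1.0, 2.0, 3.0]
enc_z = enc_b
for ew, xi in zip(enc_w, x):
    enc_z = enc_z + ew * xi   # ciphertext + ciphertext, ciphertext * plaintext

# The nonlinear activation cannot be evaluated under Paillier; in the
# thesis that step is handled with garbled circuits (omitted here).
print(priv.decrypt(enc_z))    # 0.8*1 - 1.3*2 + 0.5*3 + 0.1 = -0.2 (approx)
```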