
Federated Learning Based On Knowledge Distillation

Posted on: 2024-06-12    Degree: Master    Type: Thesis
Country: China    Candidate: Y T Li    Full Text: PDF
GTID: 2568307067972789    Subject: Computer technology
Abstract/Summary:
Data plays an essential role in machine learning algorithms. The success of many intelligent applications, such as recommendation systems and medical image recognition, depends on sufficient real-world data. However, data is usually scattered across user terminal devices and enterprise storage systems, and collecting and using it in accordance with laws and regulations is difficult. To address this challenge, federated learning jointly models across clients without directly sharing their local data, promoting the integration of distributed data sources. As an emerging collaborative learning method, federated knowledge distillation transfers the knowledge of local models to the global model, effectively alleviating the problems of data heterogeneity and privacy leakage. This thesis studies two problems in current federated knowledge distillation, namely the large consumption of privacy budget and the dependence on public data, and proposes novel federated knowledge distillation algorithms for two scenarios, depending on whether unlabeled public data are available on the federated server.

(1) When unlabeled public data are available on the federated server, a federated knowledge distillation algorithm based on reverse k-nearest-neighbors voting (RKNN) is proposed. In RKNN, each private record is used to label at most k queries, which reduces the privacy budget consumed when querying the labels of public samples. The reverse k-nearest-neighbors voting can be reduced to a bucketized sparse vector summation, and concrete mechanisms are designed under both centralized and local differential privacy. Extensive experiments show that RKNN provides more than 10 times stronger privacy protection than existing federated knowledge distillation algorithms.

(2) When no public data are available on the federated server, a federated learning algorithm based on adversarial distillation (FedAKD) is proposed. FedAKD enables federated learning without public distillation samples or the transfer of model parameters. Specifically, in addition to training a local model, each client synchronously maintains a global generator and a global classifier. The global generator and global classifier acquire knowledge from the ensemble of local models through adversarial distillation, with the distillation samples provided by the global generator. Within this framework, the server aggregates the classification probabilities of the fake samples, and these probabilities constitute the communication content with the clients. Experiments show that FedAKD outperforms other federated learning algorithms on heterogeneous client data.
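
To make the reverse k-nearest-neighbors voting concrete, the following sketch shows one way such a mechanism could be realized: each private record votes for the labels of its k nearest public samples, the votes are accumulated in a bucketized vote matrix, and Laplace noise is added before taking the per-sample argmax as the pseudo-label. The function name, the Euclidean distance metric, and the noise calibration are illustrative assumptions, not the exact protocol of the thesis.

# Illustrative sketch of reverse k-nearest-neighbors (RKNN) voting with the
# Laplace mechanism (function name, distance metric, and noise calibration
# are assumptions, not the thesis's exact protocol).
import numpy as np

def rknn_pseudo_label(private_x, private_y, public_x, num_classes, k, epsilon):
    # Each private record votes for the labels of its k nearest public samples,
    # so one record touches at most k rows of the vote matrix and the counts
    # have L1 sensitivity k; Laplace noise with scale k / epsilon then gives
    # epsilon-differential privacy for the aggregated votes.
    votes = np.zeros((len(public_x), num_classes))        # bucketized vote matrix
    for x, y in zip(private_x, private_y):
        dists = np.linalg.norm(public_x - x, axis=1)      # Euclidean distance (assumption)
        nearest = np.argsort(dists)[:k]                   # reverse direction: private -> public
        votes[nearest, y] += 1.0
    noisy = votes + np.random.laplace(scale=k / epsilon, size=votes.shape)
    return noisy.argmax(axis=1)                           # pseudo-labels for the public samples

# Toy usage with random features
rng = np.random.default_rng(0)
priv_x = rng.normal(size=(200, 16))
priv_y = rng.integers(0, 10, size=200)
pub_x = rng.normal(size=(50, 16))
pseudo_labels = rknn_pseudo_label(priv_x, priv_y, pub_x, num_classes=10, k=3, epsilon=1.0)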
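
The adversarial distillation loop of FedAKD can be pictured as follows: the global generator produces fake samples, the local models label them with softmax probabilities that the server averages, the global classifier distills from this averaged ensemble, and the generator is updated to maximize the remaining disagreement. The PyTorch sketch below illustrates this loop; the model architectures, optimizers, and loss formulation are assumptions made for illustration, not the thesis's exact configuration.

# Illustrative PyTorch sketch of the adversarial distillation loop
# (architectures, optimizers, and losses are assumptions, not FedAKD's exact setup).
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, feat_dim, num_classes = 64, 784, 10
generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, feat_dim))
global_clf = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, num_classes))
client_models = [nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, num_classes))
                 for _ in range(3)]                       # stand-ins for locally trained models
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_c = torch.optim.Adam(global_clf.parameters(), lr=1e-3)

for step in range(100):
    z = torch.randn(32, latent_dim)
    fake = generator(z)

    # Clients return softmax probabilities on the fake samples; the server
    # averages them, and this aggregate is the communication content.
    with torch.no_grad():
        ensemble = torch.stack([F.softmax(m(fake), dim=1) for m in client_models]).mean(dim=0)

    # Global classifier: distill by minimizing the KL divergence to the ensemble.
    opt_c.zero_grad()
    kd_loss = F.kl_div(F.log_softmax(global_clf(fake.detach()), dim=1), ensemble,
                       reduction="batchmean")
    kd_loss.backward()
    opt_c.step()

    # Generator: adversarially maximize the same divergence so new fake samples
    # expose regions where the classifier still disagrees with the ensemble.
    opt_g.zero_grad()
    adv_loss = -F.kl_div(F.log_softmax(global_clf(generator(z)), dim=1), ensemble,
                         reduction="batchmean")
    adv_loss.backward()
    opt_g.step()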
Keywords/Search Tags:Federated Learning, Knowledge Transfer, Differential Privacy, Adversarial Distillation