
Efficient Communication Method For Federated Learning Based On Spiking Neural Networks

Posted on: 2024-06-08    Degree: Master    Type: Thesis
Country: China    Candidate: Z T Liu    Full Text: PDF
GTID: 2558307079459634    Subject: Computer Science and Technology
Abstract/Summary:
In recent years, the proposal of federated learning has provided a secure and efficient way for various embedded devices to participate in the distributed training of neural networks. In addition, the development of spiking neural networks has made low-power deep learning possible. In light of this, researchers have begun to focus on low-power, privacy-preserving training modes, specifically spiking neural network-based federated learning (hereinafter referred to as spiking federated learning). However, the mainstream training frameworks currently available for spiking federated learning are designed around the traditional federated averaging algorithm. Federated averaging takes network parameters as the main content of communication, which results in unacceptable communication costs in scenarios involving large-scale networks. To reduce communication costs in spiking federated learning with large-scale networks, the thesis introduces knowledge distillation techniques into spiking federated learning and proposes a training framework that differs from traditional approaches. The main contributions of the thesis are as follows:

1. The thesis improves the classical spiking federated learning framework based on knowledge distillation. In this framework, the basic processes followed by each node participating in spiking federated learning are carefully designed, and a complete distributed training system is established. The framework uses knowledge distillation to perform joint training by aggregating spiking neural network outputs and convolutional-layer feature maps as the communication entities on the server. The resulting spiking federated distillation framework reduces communication costs and information loss while preserving the effectiveness of joint training (a minimal sketch of this exchange is given after the abstract).

2. Based on the spiking federated distillation training framework, the thesis refines the necessary processes and proposes a novel spiking knowledge distillation loss function, a new federated aggregation scheme, and a compression algorithm for spiking outputs. The knowledge distillation algorithm requires a dedicated loss function to extract information from the spike tensor, and the new loss function makes maximal use of this information during network training. The federated aggregation scheme better integrates the distilled information uploaded by the clients and extracts more knowledge from it. The spike tensor compression algorithm addresses the large storage space otherwise required during transmission, which is caused by the time dimension included in the output of a spiking neural network (an illustrative compression sketch also follows the abstract).

3. Building upon the above work, the thesis further improves the spiking federated distillation framework by optimizing the distillation process with a hint layer. This enhances the training accuracy of the framework without increasing communication costs. Because the hint layer alters the basic structure of the distillation framework, corresponding adaptive adjustments to the refined design are required. The thesis therefore optimizes the details of the federated aggregation scheme and the spike tensor compression algorithm to match the hint distillation scheme, resulting in a significant improvement in the overall performance of the framework.
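The following is a minimal sketch of the communication pattern described in contribution 1, in which clients upload model outputs computed on a shared reference set rather than network parameters. It assumes rate-coded binary spike outputs and a common public input batch; the names (spike_model, public_x, client_upload, server_aggregate) are illustrative assumptions, not the thesis's actual implementation.

    # One round of spiking federated distillation (sketch): clients upload
    # outputs on a shared reference set, the server averages them into
    # global soft targets. All names here are illustrative assumptions.
    import numpy as np

    def client_upload(spike_model, public_x):
        # spike_model(public_x) is assumed to return binary spikes of shape
        # (T, N, C): T time steps, N reference samples, C output classes.
        spikes = spike_model(public_x)
        return spikes.mean(axis=0)          # (N, C) rate-coded soft targets

    def server_aggregate(uploads):
        # Average the soft targets from all participating clients; this
        # replaces parameter averaging as the content of communication.
        return np.mean(np.stack(uploads, axis=0), axis=0)

    # uploads = [client_upload(m, public_x) for m in client_models]
    # global_targets = server_aggregate(uploads)
    # each client then distils global_targets back into its local model.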
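The sketch below illustrates why the time dimension of spike outputs matters for transmission cost, as discussed in contribution 2. The abstract does not specify the thesis's actual compression algorithm; this example simply bit-packs a binary spike tensor as one possible scheme, assuming spike values are strictly 0 or 1.

    # Illustrative compression of a binary spike tensor before upload.
    # This is only an example of exploiting the binary time dimension,
    # not the compression algorithm proposed in the thesis.
    import numpy as np

    def compress_spikes(spikes):
        """spikes: binary array of shape (T, N, C) with values in {0, 1}."""
        flat = spikes.astype(np.uint8).ravel()
        return np.packbits(flat), spikes.shape   # roughly 8x smaller payload

    def decompress_spikes(packed, shape):
        total = int(np.prod(shape))
        return np.unpackbits(packed, count=total).reshape(shape)

    # Example: T=8 time steps, N=4 samples, C=10 classes
    spikes = (np.random.rand(8, 4, 10) > 0.8).astype(np.uint8)
    packed, shape = compress_spikes(spikes)
    assert np.array_equal(decompress_spikes(packed, shape), spikes)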
Keywords/Search Tags: Federated learning, Spiking neural network, Knowledge distillation, Communication costs