The Spiking Neural Network (SNN) draws inspiration from the information transmission mechanisms of biological neurons, using discrete spikes to represent information. As a result, it offers strong temporal information processing capability and low power consumption. Unlike artificial neural networks, SNNs operate more closely to biological nervous systems and can process time-series data effectively. However, the discreteness of spike signals makes network learning non-differentiable, and complex network structures often require numerous parameters for training and inference. This thesis investigates supervised learning algorithms for temporally coded SNNs and explores effective solutions that reduce network complexity, improve energy efficiency, and maintain high accuracy. The main work of this thesis is as follows:

(1) To address the non-differentiability of directly trained SNNs, this thesis proposes a Spike-Timing-Dependent Backpropagation algorithm. By introducing the concept of first firing time and designing a neuron model with a dynamic threshold, the number of dead neurons is reduced. During training, global optimization is combined with backpropagation to locally update connection weights according to the temporal relationships between neurons. Image classification experiments on the MNIST and Fashion-MNIST datasets demonstrate the effectiveness of the algorithm, achieving accuracies of 97.6% and 88.2%, respectively.

(2) To reduce the computation and storage requirements of SNN training, this thesis proposes binarizing the synaptic weights of the network. Building on the Spike-Timing-Dependent Backpropagation algorithm, a directly supervised learning algorithm for binary SNNs is introduced. In the forward (inference) pass, binary weights are used, while real-valued weights are retained for backward propagation. Experiments on the MNIST and Fashion-MNIST datasets show accuracy improvements of 0.4% and 1.2%, respectively, over a binary-valued neural network with the same architecture, together with advantages in latency and energy.

(3) To handle neurons that prove unimportant during the experiments, three temporally coded neuron pruning strategies are proposed: constant-number pruning, constant firing-time-threshold pruning, and adaptive firing-time-threshold pruning. A neuron's importance is judged by whether it contributes to the classification result, and the experiments demonstrate that the adaptive firing-time-threshold method is simple and efficient, reducing energy consumption by up to 55% with an accuracy loss of only 1.1%.
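
The abstract does not spell out the neuron model behind contribution (1), so the following is only a minimal illustrative sketch of the two ingredients it names: time-to-first-spike (first firing time) coding and a dynamic threshold that keeps weakly driven neurons from going dead. The function and parameter names (`first_spike_times`, `v_th0`, `decay`, `t_max`) are assumptions, not the thesis's implementation.

```python
import numpy as np

def first_spike_times(in_times, weights, t_max=100, v_th0=1.0, decay=0.01):
    """Illustrative time-to-first-spike layer of integrate-and-fire neurons
    with a linearly decaying (dynamic) threshold.

    in_times : (n_in,)  input spike times (earlier = stronger input).
    weights  : (n_out, n_in) synaptic weights.
    Returns  : (n_out,) first firing time per output neuron
               (t_max means the neuron never fired).
    """
    n_out = weights.shape[0]
    fire_t = np.full(n_out, t_max, dtype=float)
    v = np.zeros(n_out)
    for t in range(t_max):
        # Input spikes that have arrived by time t inject current.
        active = (in_times <= t).astype(float)
        v += weights @ active
        # Dynamic threshold: decays over time so weakly driven neurons
        # eventually fire instead of staying silent ("dead").
        v_th = v_th0 - decay * t
        newly = (v >= v_th) & (fire_t == t_max)
        fire_t[newly] = t
        v[newly] = -np.inf  # each neuron fires at most once (first spike only)
    return fire_t

# Example: three input spikes driving four output neurons.
rng = np.random.default_rng(0)
print(first_spike_times(np.array([0.0, 5.0, 20.0]), rng.normal(scale=0.5, size=(4, 3))))
```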
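For contribution (2), the exact binarization and update rule are not given in the abstract; the sketch below assumes a common sign-binarization, straight-through-style scheme in which binary weights drive the forward pass while real-valued shadow weights receive the gradient updates. Names such as `binarize` and `w_real` are hypothetical.

```python
import numpy as np

def binarize(w_real):
    """Project real-valued weights onto {-1, +1} (sign binarization)."""
    return np.where(w_real >= 0, 1.0, -1.0)

def forward(x, w_real):
    # Forward propagation uses only the binary weights.
    return binarize(w_real) @ x

def update(w_real, grad_wrt_binary, lr=1e-3):
    # Backward pass: the gradient computed through the binary weights is
    # applied to the real-valued weights (straight-through style), which
    # are clipped to stay near the binarization range.
    w_real = w_real - lr * grad_wrt_binary
    return np.clip(w_real, -1.0, 1.0)

# Example round trip with dummy data and a dummy gradient.
rng = np.random.default_rng(0)
w_real = rng.normal(scale=0.1, size=(10, 784))
y = forward(rng.random(784), w_real)
w_real = update(w_real, rng.normal(size=w_real.shape))
```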
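Contribution (3) does not specify how the adaptive firing-time threshold is chosen; one plausible reading, sketched below, is that neurons whose first spikes consistently arrive late (and therefore cannot influence a time-to-first-spike decision) are pruned, with the threshold adapted to the observed firing-time distribution rather than fixed. The quantile criterion and the name `adaptive_prune_mask` are assumptions for illustration only.

```python
import numpy as np

def adaptive_prune_mask(fire_times, keep_ratio=0.5):
    """Adaptive firing-time-threshold pruning (illustrative sketch).

    fire_times : (n_samples, n_neurons) first firing times collected on a
                 calibration set (t_max encodes "never fired").
    keep_ratio : fraction of neurons to keep.
    Returns a boolean mask over neurons: True = keep, False = prune.
    """
    # Neurons that tend to fire late contribute little to a
    # time-to-first-spike decision, since earlier spikes decide the class.
    mean_t = fire_times.mean(axis=0)
    # The threshold adapts to the observed firing-time distribution
    # instead of being a fixed constant.
    threshold = np.quantile(mean_t, keep_ratio)
    return mean_t <= threshold

# Example: keep the earliest-firing half of 8 neurons.
rng = np.random.default_rng(0)
print(adaptive_prune_mask(rng.uniform(0, 100, size=(32, 8)), keep_ratio=0.5))
```

Pruned neurons can then be removed by zeroing their outgoing synaptic weights, which is where the reported energy savings would come from.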