Deep learning has achieved tremendous success across many domains in recent years, revolutionizing the field of machine learning, particularly computer vision. Deep neural networks, however, have extraordinarily high computational and energy requirements, which prevents them from being widely deployed on neuromorphic hardware. Biologically oriented spiking neural networks offer a practical answer to these problems. By computing with neuronal dynamics models inspired by the biological brain, spiking neural networks seek to close the gap between neuroscience and machine learning. Their temporal dynamics and sparse computation open new avenues for building low-power, highly intelligent, fast-responding machines. However, because of their discrete spike mechanism, spiking neural networks cannot be trained with gradient-descent-based error backpropagation the way conventional artificial neural networks are. This non-differentiable, discontinuous nature poses a conundrum for research on brain-like computing and has prevented large-scale application, deployment, and scaling. Existing spiking neural network learning techniques suffer from significant inference latency and high training cost. Building on current classical training techniques, this work proposes a direct algorithm for converting artificial neural networks to spiking neural networks and a surrogate-gradient-based spiking neural network training algorithm. On the basis of these two learning strategies, a hybrid learning algorithm combining the conversion algorithm with the direct training approach is also proposed. The specific research contributions are:

(1) An algorithm for converting artificial neural networks to spiking neural networks. A spiking neural network model based on double-threshold neurons is proposed to reduce information loss during transmission, and a new threshold selection and weight updating strategy is proposed to decrease the inference delay of the converted model; both issues are analyzed from the perspective of the conversion principle. Ablation experiments demonstrate the effectiveness of the proposed algorithm.

(2) An inheritable training algorithm for spiking neural networks based on the self-attention mechanism, which updates the model weights by constraining the output of the network at every time step. The proposed algorithm has strong temporal inheritability and generalization, helping the model avoid local minima during training and thereby improving performance. Its temporal inheritance also lowers the training cost of deep spiking neural networks. Finally, experiments demonstrate the superiority of the proposed algorithm and show that it can be integrated with other algorithms.

(3) A new hybrid learning algorithm that uses the converted spiking neural network model as the initialization for the self-attention-based temporal training algorithm, yielding deep spiking neural network models at a lower training cost. Comparative experiments demonstrate that this work achieves better performance and higher energy efficiency than other recent algorithms, along with better generalization and robustness.
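The abstract does not spell out the double-threshold neuron model in contribution (1), so the following is only a minimal sketch of the general idea, assuming an integrate-and-fire neuron with a soft (subtractive) reset that emits positive spikes above an upper threshold and negative spikes below a symmetric lower threshold. The class name `DoubleThresholdIF` and the symmetric thresholds are illustrative assumptions, not the thesis' definitions.

```python
import torch

class DoubleThresholdIF:
    """Integrate-and-fire neuron with an upper and a lower firing threshold.

    A minimal sketch (not the thesis' exact model): the membrane potential
    integrates the input current; crossing +v_th emits a +1 spike and
    crossing -v_th emits a -1 spike, each followed by a soft reset by
    subtraction, the usual choice in ANN-to-SNN conversion so that
    residual potential (information) is carried over rather than lost.
    """

    def __init__(self, v_th: float = 1.0):
        self.v_th = v_th
        self.v = None  # membrane potential, lazily initialized

    def step(self, x: torch.Tensor) -> torch.Tensor:
        if self.v is None:
            self.v = torch.zeros_like(x)
        self.v = self.v + x                   # integrate input current
        pos = (self.v >= self.v_th).float()   # positive spikes
        neg = (self.v <= -self.v_th).float()  # negative spikes
        spikes = pos - neg
        self.v = self.v - spikes * self.v_th  # soft reset by subtraction
        return spikes
```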
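Likewise, the per-time-step constraint in contribution (2) can be pictured as a classification loss averaged over the output of every simulation step, trained through a surrogate spike derivative. The sketch below uses a sigmoid-derivative surrogate and per-step cross-entropy purely as stand-ins; the thesis' actual surrogate function and its self-attention component are not described in the abstract.

```python
import torch
import torch.nn.functional as F

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, sigmoid-derivative surrogate
    in the backward pass (one common choice; the thesis' exact surrogate
    is not given in the abstract)."""

    alpha = 4.0  # surrogate sharpness, an illustrative default

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= 0.0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        sig = torch.sigmoid(SurrogateSpike.alpha * v)
        return grad_out * SurrogateSpike.alpha * sig * (1.0 - sig)

def per_timestep_loss(outputs: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Average the classification loss over every time step instead of only
    over the time-averaged output, so each step's output is constrained
    directly (the gist of the per-step weight update the abstract describes).

    outputs: [T, batch, classes]; target: [batch] class indices."""
    return torch.stack(
        [F.cross_entropy(outputs[t], target) for t in range(outputs.shape[0])]
    ).mean()
```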
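Finally, the hybrid scheme of contribution (3) amounts to initializing the directly trained network with the conversion result and then fine-tuning it with the per-step loss. Below is a toy end-to-end sketch reusing `SurrogateSpike` and `per_timestep_loss` from above; `TinySNN` and the commented-out `converted_state` line are hypothetical stand-ins for the thesis' network and for weights produced by its conversion stage.

```python
import torch
import torch.nn as nn

class TinySNN(nn.Module):
    """Toy two-layer spiking classifier, unrolled over T time steps."""

    def __init__(self, n_in=784, n_hidden=128, n_out=10, T=4, v_th=1.0):
        super().__init__()
        self.fc1 = nn.Linear(n_in, n_hidden)
        self.fc2 = nn.Linear(n_hidden, n_out)
        self.T, self.v_th = T, v_th

    def forward(self, x):
        v = torch.zeros(x.shape[0], self.fc1.out_features, device=x.device)
        outs = []
        for _ in range(self.T):                      # unroll over time
            v = v + self.fc1(x)                      # integrate input current
            s = SurrogateSpike.apply(v - self.v_th)  # spike w/ surrogate grad
            v = v - s * self.v_th                    # soft reset
            outs.append(self.fc2(s))                 # per-step readout
        return torch.stack(outs)                     # [T, batch, classes]

snn = TinySNN()
# snn.load_state_dict(converted_state)  # hypothetical: weights from the
#                                       # conversion stage (contribution 1)
optimizer = torch.optim.Adam(snn.parameters(), lr=1e-4)
x, y = torch.rand(8, 784), torch.randint(0, 10, (8,))
loss = per_timestep_loss(snn(x), y)  # constrain the output at every step
optimizer.zero_grad(); loss.backward(); optimizer.step()
```

Starting from converted weights rather than a random initialization is what lets the fine-tuning stage reach a usable deep model in fewer epochs, which is the training-cost saving the abstract claims for the hybrid algorithm.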