Research On An Inference Acceleration Circuit For Low-bit Spiking Neural Networks

Posted on: 2024-05-04
Degree: Master
Type: Thesis
Country: China
Candidate: T C Li
GTID: 2568307079966799
Subject: Electronic information

Abstract/Summary:
To liberate humans from labor and boost productivity, researchers in many fields have set out to build highly intelligent electronic equipment, and artificial intelligence has gradually become a focus of scientific and technological innovation. However, hardware platforms based on the traditional computer architecture can no longer support the continued scaling of neural networks, especially on resource-constrained terminal devices. It is therefore vital to design high-performance, low-power hardware with new computing architectures to meet the needs of artificial intelligence. Inspired by the structure of the biological brain, the source of intelligence, spiking neural network algorithms based on neuromorphic theory exhibit high biological plausibility and low computational complexity, and hardware based on neuromorphic computing likewise tends to achieve higher computational efficiency and lower power consumption. Therefore, building on a lightweight neural network model and a parallel computing architecture, this thesis designs a low-bit spiking neural network inference acceleration circuit and combines it with a processor to form a highly flexible inference acceleration system on chip.

Starting from binary-weight spiking neural networks with leaky integrate-and-fire neuron models, the thesis proposes the overall structure of the circuit. For this hardware-friendly algorithm model, it designs dedicated computing cores, including the low-bit processing elements of a systolic array for parallel synaptic computation and the neuron-model computing circuit for spike firing. The resulting circuit integrates 4096 neurons in a 6.85 mm² on-chip area, achieving 409.6 GSOPS performance, 60.96 mW normalized power, and 6.72 TSOPS/W normalized efficiency. It demonstrates the feasibility of using a systolic-array computing architecture to improve performance in future lightweight neuromorphic chips.

In addition, the thesis designs a system on chip around an extensible open-source processor to form domain-specific hardware that combines high flexibility with computational specificity. The thesis also gives test and deployment plans for the chip after tape-out, enabling flexible and efficient application on the platform.
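To illustrate the algorithm model the circuit targets, the following is a minimal Python sketch of one inference timestep of a binary-weight leaky integrate-and-fire layer. It is not the thesis design; the parameter names (leak, v_th), the reset-to-zero behaviour, and the layer sizes are illustrative assumptions, and the point is only that with binary spikes and binary weights the synaptic accumulation reduces to signed additions, which is what low-bit processing elements in a systolic array can exploit.

    import numpy as np

    def lif_layer_step(spikes_in, weights, v_mem, leak=0.9, v_th=1.0):
        """One timestep of a binary-weight LIF layer (illustrative sketch).

        spikes_in : (N_in,)        binary input spikes {0, 1}
        weights   : (N_out, N_in)  binary synaptic weights {-1, +1}
        v_mem     : (N_out,)       membrane potentials carried across timesteps
        """
        # Synaptic accumulation: with binary spikes and binary weights this
        # is just a sum of signed additions per output neuron.
        current = weights @ spikes_in

        # Leaky integration of the membrane potential.
        v_mem = leak * v_mem + current

        # Fire wherever the potential crosses threshold, then reset to zero
        # (reset scheme assumed here for illustration).
        spikes_out = (v_mem >= v_th).astype(np.float32)
        v_mem = np.where(spikes_out > 0, 0.0, v_mem)
        return spikes_out, v_mem

    # Example: 4096 neurons (as in the reported circuit) driven by an
    # assumed 256-input binary spike vector.
    rng = np.random.default_rng(0)
    w = rng.choice([-1.0, 1.0], size=(4096, 256))
    v = np.zeros(4096)
    s_in = rng.integers(0, 2, size=256).astype(np.float32)
    s_out, v = lif_layer_step(s_in, w, v)

In hardware, the thesis maps this synaptic accumulation onto low-bit processing elements in a systolic array and the membrane-update and firing step onto a dedicated neuron-model computing circuit; the sketch above only mirrors that split in software.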
Keywords/Search Tags:Neuromorphic Computing, Spiking Neural Network, Inference Acceleration, System on Chip