
Design Of Mixed Precision Neural Network Processor Based On FPGA

Posted on: 2022-08-30
Degree: Master
Type: Thesis
Country: China
Candidate: A Hu
Full Text: PDF
GTID: 2518306524484804
Subject: Master of Engineering
Abstract/Summary:
In recent years, convolutional neural networks (CNNs), a subcategory of deep neural networks, have gained widespread popularity. CNNs have transformed tasks such as natural language processing, image classification, and speech recognition. They can generally be implemented on platforms such as CPUs, GPUs, ASICs, and FPGAs. Applications in artificial intelligence Internet of Things (AIoT) devices place higher demands on portability and low power consumption, and algorithm models of different precision types normally require separately designed neural network processors. This thesis therefore proposes a mixed-precision neural network processor based on an FPGA platform, which supports switching between a full-precision neural network model and a binary neural network model under different precision requirements while meeting the needs of portability and low power consumption.

The thesis first introduces the research background and significance of neural network processors, then briefly reviews the history and current status of research in this field at home and abroad. It subsequently presents the basics of convolutional neural networks and the FPGA platforms that can carry them, and describes the reference architecture for the hardware implementation.

Next, the thesis elaborates the architecture of the full-precision neural network algorithm model implemented on this processor, and introduces the principle of the binary weight neural network algorithm, which provides the theoretical basis for training and implementing the binary neural network. To accommodate neural network algorithm models of different precision, a mixed-precision neural network processor is designed based on the proposed algorithm model structure. The processor stores feature maps by row and reads them in "clusters", which improves data reusability and reading efficiency. For the two sets of weights of the full-precision and binary neural network models, a mixed-precision weight storage scheme is proposed that reduces address accesses during weight reading. By providing two working modes, the processor can switch between neural network models of different precision; it can also be configured by instruction to support different neural network model structures for different application scenarios.

To analyze and verify the designed processor, the thesis first trained a full-precision neural network model and a binary neural network model to good accuracy on different data sets, and analyzed the accuracy and hardware resource occupancy of the two precision models. SystemVerilog scripts were then written to simulate the data processing of the hardware and to test the recognition accuracy of the hardware implementation. In addition, a functional simulation of the processor was performed in the Vivado design suite; its output was consistent with the simulated hardware, verifying the correctness of the hardware implementation. The processor was then implemented in hardware on a ZYNQ-7045 development board at a clock frequency of 100 MHz. The measured operating power consumption was 8.3 W in full-precision mode and 7.1 W in binary mode. Finally, the research work of the thesis is summarized, and directions for follow-up improvement and refinement are identified.
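The abstract names the binary weight principle and the mixed-precision weight storage scheme without detail. A minimal sketch of the common binary-weight approach (sign binarization with a per-tensor scaling factor, as in BinaryConnect/XNOR-Net style schemes) and of bit-packing binary weights illustrates why binary mode can cut memory traffic; the thesis's exact formulation and storage layout are not given here, so this is an assumed, illustrative version:

```python
import numpy as np

def binarize_weights(w):
    # Common binary-weight scheme (assumed; the thesis's exact algorithm
    # is not stated in this abstract): replace each weight with its sign,
    # scaled by the mean absolute value so the binarized tensor
    # approximates the full-precision one.
    alpha = float(np.mean(np.abs(w)))          # per-tensor scaling factor
    wb = np.where(w >= 0, 1, -1).astype(np.int8)
    return wb, alpha

def pack_binary_weights(wb):
    # Pack {-1, +1} weights into bytes, 8 weights per byte: one memory
    # word then carries many weights, which is why a binary mode needs
    # far fewer address accesses than 32-bit full-precision weights.
    bits = (wb.ravel() > 0).astype(np.uint8)   # map -1 -> 0, +1 -> 1
    return np.packbits(bits)

w = np.array([0.5, -0.25, 0.75, -1.0, 0.1, -0.1, 0.2, -0.2])
wb, alpha = binarize_weights(w)
packed = pack_binary_weights(wb)               # 8 weights fit in one byte
```

In hardware, the convolution in binary mode then reduces to additions and subtractions selected by the stored bits, with a single multiplication by the scaling factor per output, which is consistent with the low-power motivation stated above.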
Keywords/Search Tags:Neural Network Processor, Mixed Precision, FPGA, Low Power Consumption, Configurable