Implementing machine learning models on hardware platforms has a wide range of application scenarios. Ensemble learning is an important branch of machine learning and an attractive target for hardware deployment. To reduce the resource cost of hardware deployment, ensemble pruning algorithms are needed to shrink the size of ensemble learning models. However, because of the special properties of gradient boosting ensemble models, existing ensemble pruning algorithms do not perform well on them, and suitable algorithms remain to be explored. Based on an analysis of the characteristics of gradient boosting ensemble models, a residual vector pruning algorithm is designed, which iteratively selects the weak learner according to the angle between its prediction vector and the residual vector. To compensate for the slow convergence of residual vector pruning, a vector replacement pruning algorithm is further designed, which greatly improves efficiency by selecting multiple prediction vectors in a single iteration. To reduce time complexity, a Lasso pruning algorithm is designed by combining the Lasso machine learning model: the feature selection ability of Lasso is used to prune gradient boosting ensemble models. These algorithms are implemented in code, and a solution for automatic hardware code generation is designed to implement gradient boosting ensemble models on the FPGA (Field Programmable Gate Array) hardware platform. The performance of the pruning algorithms is evaluated in both software and hardware experiments. Finally, a deep neural network model is also implemented on the FPGA platform for comparison with the gradient boosting ensemble models. The experiments show that, while preserving accuracy as much as possible, the residual vector pruning, vector replacement pruning, and Lasso pruning algorithms reduce the model size by 74.38%, 85.65%, and 70.63% respectively; compared with existing algorithms, the designed algorithms perform better in both compressing ensemble size and preserving model accuracy. After hardware implementation on the Xilinx 7-series FPGA platform, the test results show that orientation ordering pruning, kappa pruning, residual vector pruning, vector replacement pruning, and Lasso pruning reduce the hardware cost by 56.38%, 55.69%, 71.34%, 84.57%, and 65.62% respectively, so the proposed pruning algorithms bring a 16% to 50% improvement over the existing ones. The comparison with the deep neural network model shows that the pruned gradient boosting ensemble model has a clear advantage in hardware resource cost.
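The abstract describes residual vector pruning only at a high level: at each iteration, the weak learner whose prediction vector forms the smallest angle with the current residual vector is kept. The following is a minimal sketch of such a loop under the assumption of a regression setting with squared loss; the names `pred_vectors` (one prediction vector per weak learner on a validation set), `y`, and the target ensemble size `k` are illustrative, not taken from the paper.

```python
import numpy as np

def residual_vector_pruning(pred_vectors, y, k):
    """Greedy pruning sketch: at each step keep the weak learner whose
    prediction vector has the smallest angle to the current residual."""
    y = y.astype(float)
    residual = y.copy()                 # residual starts as the raw target
    output = np.zeros_like(y)           # accumulated output of kept learners
    remaining = list(range(len(pred_vectors)))
    selected = []
    for _ in range(k):
        # maximizing cosine similarity == minimizing the angle
        cos = [pred_vectors[i] @ residual /
               (np.linalg.norm(pred_vectors[i]) * np.linalg.norm(residual) + 1e-12)
               for i in remaining]
        best = remaining[int(np.argmax(cos))]
        selected.append(best)
        remaining.remove(best)
        output += pred_vectors[best]
        residual = y - output           # residual shrinks as learners are added
    return selected
```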
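For vector replacement pruning, the abstract only states that convergence is accelerated by selecting multiple prediction vectors per iteration. One plausible reading, shown purely as an assumption, is a batched variant of the loop above that takes the top `m` candidates by angle before recomputing the residual; `m` and all other names are hypothetical.

```python
import numpy as np

def vector_replacement_pruning(pred_vectors, y, k, m=4):
    """Batched sketch: take the m prediction vectors closest in angle to
    the current residual per iteration, cutting the iteration count."""
    y = y.astype(float)
    P = np.stack(pred_vectors)          # shape: (n_learners, n_samples)
    residual, output = y.copy(), np.zeros_like(y)
    remaining, selected = list(range(len(P))), []
    while len(selected) < k and remaining:
        R = P[remaining]
        cos = R @ residual / (np.linalg.norm(R, axis=1) *
                              np.linalg.norm(residual) + 1e-12)
        take = min(m, k - len(selected))
        picked = [remaining[i] for i in np.argsort(cos)[::-1][:take]]
        selected += picked
        remaining = [i for i in remaining if i not in picked]
        output += P[picked].sum(axis=0)
        residual = y - output           # residual updated once per batch
    return selected
```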
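For Lasso pruning, the abstract says that Lasso's feature selection ability is applied to the ensemble. A natural sketch, assuming each weak learner's prediction vector is treated as one feature column, is given below; the regularization strength `alpha` is an assumed parameter.

```python
import numpy as np
from sklearn.linear_model import Lasso

def lasso_pruning(pred_vectors, y, alpha=0.01):
    """Sketch: stack per-learner prediction vectors as feature columns and
    let the L1 penalty zero out the weak learners that are dropped."""
    X = np.column_stack(pred_vectors)   # shape: (n_samples, n_learners)
    model = Lasso(alpha=alpha, fit_intercept=False)
    model.fit(X, y)
    kept = np.flatnonzero(model.coef_)  # nonzero weight => learner kept
    return kept, model.coef_[kept]
```

A larger `alpha` drives more coefficients to zero and therefore prunes more learners, so in practice it would be tuned to reach a target ensemble size.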