Deep learning has been a prominent research topic in machine learning in recent years. As the depth and width of deep neural networks grow, the networks are often overparameterized and therefore prone to overfitting, so sparse-optimization-based regularization methods have been widely applied to deep neural networks. Because selecting the regularization parameter for such methods is time-consuming, we address this problem by combining the models obtained from several different regularization parameters, following ideas from ensemble learning. The main contributions of this paper are as follows.

We first introduce deep ReLU networks with Tℓ1 regularization and analyze the generalization bound and learning rate of an ensemble of such networks: under the assumption of independent and identically distributed samples, we establish the generalization bound of the ensemble algorithm for Tℓ1-regularized deep ReLU neural networks and derive the optimal learning rate. Building on this theory, we propose an ensemble learning algorithm for Tℓ1-regularized deep ReLU neural networks with a fixed range of regularization parameters. Since the appropriate range of regularization parameters may differ across datasets, we further propose an ensemble algorithm with an adaptive range of regularization parameters. Numerical experiments on public datasets show that, compared with the classical Tℓ1-regularized deep ReLU neural network whose parameters are selected by cross-validation, the two proposed algorithms achieve lower misclassification rates and shorter training times.
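The core idea, replacing cross-validated parameter selection with an ensemble over a grid of regularization parameters, can be sketched as follows. This is a minimal illustrative example, not the thesis' actual implementation: a toy logistic model stands in for a deep ReLU network, the Tℓ1 penalty is assumed to be the transformed ℓ1 form T_a(w) = (a+1)|w|/(a+|w|), and all function names and the λ grid are hypothetical.

```python
import numpy as np

def tl1_grad(w, a=1.0):
    """Subgradient of the transformed-L1 penalty T_a(w) = (a+1)|w| / (a+|w|)."""
    return np.sign(w) * (a + 1) * a / (a + np.abs(w)) ** 2

def train_member(X, y, lam, a=1.0, lr=0.1, steps=500, seed=0):
    """One ensemble member: a toy logistic model with Tl1 weight regularization
    (a stand-in for training a deep ReLU network at one lambda)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))                   # sigmoid outputs
        grad = X.T @ (p - y) / len(y) + lam * tl1_grad(w, a)
        w -= lr * grad
    return w

def ensemble_predict(X, members):
    """The ensemble step: average member probabilities, then threshold."""
    probs = np.mean([1.0 / (1.0 + np.exp(-X @ w)) for w in members], axis=0)
    return (probs >= 0.5).astype(int)

# A fixed grid of regularization parameters replaces cross-validated selection:
# every lambda contributes a model, and no grid search over held-out folds is run.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
members = [train_member(X, y, lam) for lam in (1e-3, 1e-2, 1e-1)]
acc = np.mean(ensemble_predict(X, members) == y)
```

Training one model per λ and averaging is the reason the method can be faster than cross-validation: every trained model is kept and used, whereas cross-validation trains many models per candidate λ and discards all but the winner.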