Deep echo state networks (DESNs) are widely used in time series prediction and related fields owing to their efficient temporal modeling and nonlinear processing abilities. The parameters of a DESN comprise structural parameters and weight parameters, and optimizing both can improve its prediction performance. However, current structural parameter optimization methods consume a significant amount of training time when applied to DESN, so the training process must be accelerated; existing acceleration methods are designed for feedforward neural networks and are not suited to the DESN's unique structure. In addition, current weight parameter optimization methods require many training samples and much training time when applied to DESN. These challenges make parameter optimization difficult for DESN and limit its prediction ability. This thesis studies parameter optimization methods for DESN covering both structural and weight parameters. The details are as follows:

1. To address the long training time of structural parameter optimization, this thesis proposes a structural parameter optimization method for DESN based on an improved Net2Net. First, the classical Net2Net algorithm is improved according to the characteristics of the DESN reservoir to accelerate the training of structural parameters. Second, a contribution-based pruning method is used to train a small-scale echo state network (ESN), providing a good starting point for the structural optimization process. Then, a meta-controller based on reinforcement learning is constructed to perform network conversion operations on the pruned ESN using the improved Net2Net algorithm. Finally, simulation results on three time series datasets show that, compared with existing structural optimization methods, the proposed method achieves better prediction performance and requires less training time.

2. To address the high cost of weight parameter optimization, this thesis proposes a weight
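The network conversion operations in the structural method build on Net2Net's function-preserving transformations. As a hedged illustration only (the thesis's reservoir-specific improvements are not reproduced here), the following sketch applies the classical Net2WiderNet idea to a single ESN reservoir: a duplicated unit copies the incoming weights of an existing unit, and that unit's outgoing and readout weights are halved between the original and the copy, so the widened network computes the same readout. All function and variable names are illustrative assumptions, not the thesis's implementation:

```python
import numpy as np

def widen_reservoir(W, W_in, W_out, j):
    """Net2WiderNet-style widening of an ESN reservoir: duplicate unit j.

    W     : (n, n) recurrent weight matrix
    W_in  : (n, m) input weight matrix
    W_out : (k, n) trained readout matrix
    Returns (n+1)-unit matrices with an unchanged input-output mapping
    (exact for a tanh reservoir started from equal unit states).
    """
    # New unit's incoming weights copy unit j's row.
    W2 = np.vstack([W, W[j:j + 1, :]]).astype(float)
    # Split unit j's outgoing weights evenly between j and its copy,
    # so every downstream unit still receives the same total drive.
    col = W2[:, j].copy() / 2.0
    W2[:, j] = col
    W2 = np.hstack([W2, col[:, None]])
    # Duplicate the input row; split the readout column the same way.
    W_in2 = np.vstack([W_in, W_in[j:j + 1, :]])
    W_out2 = np.hstack([W_out, W_out[:, j:j + 1] / 2.0])
    W_out2[:, j] /= 2.0
    return W2, W_in2, W_out2
```

Because the transformation preserves the network's function, training after widening can continue from the current solution instead of restarting from random weights, which is the source of the training-time savings that Net2Net-style methods exploit.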
parameter optimization method for DESN based on improved knowledge evolution. First, a weight-level splitting technique divides the weight parameters of DESN into two mutually exclusive subnetworks, a fitting subnetwork and a reset subnetwork, which reduces the number of training samples required. Second, the sailfish optimizer is used to train the fitting subnetwork, while a random-operator perturbation resets the other subnetwork, which reduces the training time. Finally, simulation results on three time series datasets show that, compared with existing weight optimization methods, the proposed method achieves better prediction performance and requires fewer training samples and less training time.
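The weight-level split in the second method can be pictured as a fixed binary mask over a weight matrix, in the spirit of knowledge evolution: mask entries equal to 1 select the fitting subnetwork, the complement selects the reset subnetwork. The sketch below is a hedged illustration only; a generic `train_fit` callback stands in for the sailfish optimizer, and all names are illustrative assumptions rather than the thesis's implementation:

```python
import numpy as np

def split_weights(shape, fit_ratio, rng):
    """Weight-level split: a binary mask marking the fitting subnetwork.

    Entries with mask == 1 belong to the fitting subnetwork; the
    complement (1 - mask) is the reset subnetwork. The two subnetworks
    are mutually exclusive and together cover every weight.
    """
    return (rng.random(shape) < fit_ratio).astype(float)

def evolve_generation(W, mask, train_fit, rng):
    """One generation of knowledge-evolution-style training.

    The fitting subnetwork is updated by the supplied optimizer
    callback (a stand-in for the sailfish optimizer); the reset
    subnetwork is re-randomized as a perturbation.
    """
    W_fit = train_fit(W * mask) * mask                  # optimize fitting subnet
    W_reset = rng.normal(size=W.shape) * (1.0 - mask)   # random reset perturbation
    return W_fit + W_reset
```

Because only the masked fitting subnetwork carries the learned solution forward, each generation optimizes fewer effective parameters, which is the intuition behind the reduced sample and time requirements.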