
Variable Fractional Order Gradient Descent Algorithm And Its Application

Posted on: 2024-05-20
Degree: Master
Type: Thesis
Country: China
Candidate: W P Lou
Full Text: PDF
GTID: 2530307145458824
Subject: Engineering

Abstract/Summary:
Optimization problems have always been a focal point of research across engineering fields. Most optimization methods rely on gradient information, and the gradient descent algorithm is a fundamental, widely used tool for solving optimization problems, playing an important role in many practical applications. However, the integer-order gradient descent method suffers from slow convergence. By incorporating fractional calculus, fractional-order gradient descent algorithms have been developed that, for certain orders, improve both convergence speed and accuracy. However, computing the fractional-order gradient is more expensive, and as the order increases the convergence speed improves while the convergence error tends to grow, forcing a trade-off between convergence speed and accuracy. This thesis therefore studies the variable fractional-order gradient descent algorithm and explores its convergence performance when applied to backpropagation (BP) neural networks and the least mean square (LMS) adaptive filtering algorithm. The main work is as follows:

(1) A novel Variable Fractional-Order Gradient Descent (VFOGD) algorithm is proposed. The algorithm approximates the gradient value by the dominant term of the Taylor expansion of the fractional derivative and folds the Gamma-function coefficient into the learning rate to reduce computational complexity. The fixed lower limit of integration in the fractional derivative is replaced with a variable initial instant of integration to improve convergence accuracy. In addition, the admissible order range is extended from (0, 1) to (0, 2), and an adaptive variable-order rule based on the iteration count is designed, which partially resolves the trade-off between convergence accuracy and speed found in traditional methods. Rigorous mathematical derivation shows that the proposed method converges to the true extremum point, establishing its theoretical feasibility. Benchmark experiments on four standard test functions compare the proposed method with integer-order, fixed fractional-order, and other variable-order methods; the variable and error iteration curves confirm that the method strengthens the convergence of fractional-order gradient descent.
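To make the update rule concrete, the following is a minimal Python sketch of the dominant-term form described above, x(k+1) = x(k) - mu * |x(k) - x(k-1)|^(1 - alpha_k) * f'(x(k)), in which the previous iterate plays the role of the variable initial instant of integration and the 1/Gamma(2 - alpha_k) coefficient is absorbed into the learning rate mu. The exponential order schedule, its direction (high order first for speed, decaying toward 1 for accuracy), and all parameter values are illustrative assumptions; the abstract does not give the thesis's exact iteration-based rule.

    import numpy as np

    def alpha_schedule(k, alpha_init=1.6, alpha_final=1.0, tau=50.0):
        # Hypothetical iteration-count rule (assumption): start with a high
        # order inside (0, 2) for fast early progress, then decay toward
        # alpha_final so the late iterations emphasize accuracy.
        return alpha_final + (alpha_init - alpha_final) * np.exp(-k / tau)

    def vfogd_minimize(grad, x0, mu=0.1, iters=300, eps=1e-8):
        # Dominant-term variable fractional-order update:
        #   x_{k+1} = x_k - mu * |x_k - x_{k-1}|^(1 - alpha_k) * grad(x_k)
        # The previous iterate x_{k-1} serves as the variable initial
        # instant of integration; 1/Gamma(2 - alpha_k) is folded into mu.
        x = np.asarray(x0, dtype=float)
        x_prev = x - 1.0  # unit seed so the first fractional factor is ~1
        for k in range(iters):
            a = alpha_schedule(k)
            frac = (np.abs(x - x_prev) + eps) ** (1.0 - a)  # element-wise
            x_prev, x = x, x - mu * frac * grad(x)
        return x

    # Example: minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3)
    x_star = vfogd_minimize(lambda x: 2.0 * (x - 3.0), x0=np.array([0.0]))

In the BP setting of work (2) below, the same element-wise update would replace the integer-order weight step in the fully connected and convolutional layers, with the order scheduled by training epoch rather than by iteration.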
(2) A BP neural network optimization method based on the VFOGD algorithm is designed, applicable to both fully connected and convolutional neural networks. In the fully connected and convolutional layers, the conventional integer-order gradient update is replaced by the improved fractional-order gradient, with the order adjusted autonomously according to the training epoch. The mean-squared-error loss curves show that the VFOGD-based optimization method reduces the initial loss faster than classical integer-order and fixed fractional-order methods. In the MNIST experiment with the fully connected network, with a variable-order factor of 1.6, the proposed method improves accuracy by 0.1%-3.79% on the training set and by 0.11%-3.01% on the test set compared with the integer-order and fixed fractional-order methods under different hidden-node configurations. In the MNIST experiment with the convolutional network, with a variable-order factor of 1.6, the accuracy reaches 99.75% on the training set and 99.12% on the test set. On the more complex CIFAR10 dataset with a convolutional network and a variable-order factor of 1.5, training accuracy improves by 1.46%-29.12% over the integer-order and fixed fractional-order methods, reaching 87.23%, and test accuracy improves by 0.09%-7.8%. These results show that the VFOGD-based backpropagation neural network optimization algorithm learns faster and more accurately.

(3) A variable fractional-order normalized LMS adaptive filtering algorithm (VFO-NLMS) is proposed based on the VFOGD algorithm. The method combines the variable fractional order with normalization, so that the adaptation rate varies with the magnitude of the current input signal, and adopts a variable fractional-order gradient iteration strategy to enhance convergence. In system identification experiments, the output error curve of the adaptive filter shows that VFO-NLMS converges faster; with a variable-order factor of 0.5, the error in estimating the transfer function of the unknown system is reduced by 11.04%-20.39% compared with the integer-order and fixed fractional-order methods. In signal equalization experiments, the output error of the VFO-NLMS adaptive filter is reduced by 6.98%-53.1% relative to the integer-order and fixed fractional-order methods, again indicating faster convergence. In speech denoising experiments, the mean squared error of the filter output is reduced by 97.68% relative to the initial noisy signal, an improvement of 0.4%-7.92% over the integer-order and fixed fractional-order methods. In summary, the VFO-NLMS algorithm achieves faster convergence and lower output error across multiple experimental scenarios, demonstrating its effectiveness and superiority.
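Since the abstract does not spell out the VFO-NLMS update, the sketch below combines the standard normalized-LMS step with the same element-wise fractional factor used in the VFOGD sketch. The regressor layout, the order schedule (here decaying from 1 toward the 0.5 variable-order factor reported for system identification), and all parameter values are assumptions for illustration.

    import numpy as np

    def vfo_nlms(x, d, taps=8, mu=0.5, eps=1e-6,
                 alpha_init=1.0, alpha_final=0.5, tau=500.0):
        # Assumed VFO-NLMS form: the NLMS step mu / (eps + ||u||^2) is
        # scaled element-wise by the fractional factor
        # |w_k - w_{k-1}|^(1 - alpha_k) with an iteration-dependent order.
        w = np.zeros(taps)
        w_prev = w - 1.0 / taps  # seed so the first factor is well defined
        e = np.zeros(len(x))
        for k in range(taps - 1, len(x)):
            u = x[k - taps + 1:k + 1][::-1]  # newest-first input regressor
            e[k] = d[k] - w @ u              # a-priori output error
            a = alpha_final + (alpha_init - alpha_final) * np.exp(-k / tau)
            frac = (np.abs(w - w_prev) + eps) ** (1.0 - a)
            w_prev, w = w, w + (mu / (eps + u @ u)) * e[k] * frac * u
        return w, e

    # Example: identify an unknown FIR system from noisy input/output data
    rng = np.random.default_rng(0)
    h_true = np.array([0.7, -0.4, 0.2, 0.1, 0.05, -0.03, 0.02, 0.01])
    x_in = rng.standard_normal(5000)
    d_out = np.convolve(x_in, h_true)[:len(x_in)] \
        + 0.01 * rng.standard_normal(len(x_in))
    w_est, err = vfo_nlms(x_in, d_out, taps=8)

Normalizing by the input power eps + u @ u is what lets the effective adaptation rate track the magnitude of the current input, while the fractional factor shrinks the step as the weights settle, trading early speed for low steady-state error.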
Keywords/Search Tags: Variable fractional differentiation, Gradient descent method, Convergence characteristics, BP neural network, LMS