
Research on Some Topics in Constrained Quaternion-Variable Convex Optimization Based on a Quaternion-Valued Recurrent Neural Network Approach

Posted on: 2020-04-18    Degree: Master    Type: Thesis
Country: China    Candidate: Y L Zheng    Full Text: PDF
GTID: 2370330578461322    Subject: Applied Mathematics
Abstract/Summary:
Generally, solving an optimization problem involves two steps. First, a reasonable mathematical model is established from the practical problem; it contains the objective function, the corresponding constraints, and appropriate optimization variables. Then the model is organized and simplified, and an appropriate optimization method is selected to solve it. Given a constrained quaternion nonlinear convex program, only the second step is required. However, classical optimization theory can run into trouble when the data contain errors or uncertain fuzzy numbers, or when the original program is difficult to solve directly. To deal with such issues, the optimization model and algorithm must be improved. In 1986, Tank and Hopfield first applied the Hopfield neural network to linear optimization problems; this neural-network approach differs from classical optimization theory. Since then, many scholars have developed other types of neural networks for real-valued and complex-valued optimization on the basis of the Hopfield network. The non-commutativity of quaternion multiplication makes quaternion-variable convex programming more complicated than its complex-valued counterpart, so research on quaternion-valued convex optimization remains relatively scarce. In this paper, a quaternion-valued one-layer recurrent neural network approach is proposed to solve constrained convex optimization problems with quaternion variables; it solves quaternion convex programs directly in the quaternion domain. Moreover, it preserves the integrity of the original problem structure and avoids the internal coupling introduced by real or complex decomposition. Concretely, the contributions of this dissertation are as follows:

As preparatory knowledge, Chapter 1 introduces the definitions and basic properties of quaternion algebra, the derivative of quaternion functions, and the generalized gradient.

Chapter 2 studies the generalized gradient of quaternion functions. First, the gradient of a quaternion function is proved to be the conjugate gradient, and the gradient is extended to the generalized gradient for the case where the partial derivatives do not exist. Then we show that the generalized gradient of a convex function is its subdifferential, which allows the generalized gradient to be well defined. Finally, the definition and properties of quaternion convex functions are given, together with their proofs.

Chapter 3 studies a quaternion-valued one-layer recurrent neural network approach for solving constrained convex optimization problems. First, based on the original convex program, we design a quaternion-valued one-layer recurrent neural network using the novel generalized Hamilton-real (GHR) calculus and the quaternion generalized gradient. Then, using chain rules and Lyapunov function theory, we show that the designed network is a stable system: from any initial state, the trajectory reaches the feasible region and converges to the equilibrium point in finite time. Next, by proof by contradiction, we show that the equilibrium point of the network is the optimal point of the original convex program. Finally, numerical simulations verify the accuracy and feasibility of the approach.
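The non-commutativity of quaternion multiplication, which makes the quaternion case harder than the complex one, can be checked directly from the Hamilton product. The following minimal Python/NumPy sketch (not part of the thesis; the function name `qmul` and the (w, x, y, z) component ordering are our own choices) verifies that ij = k while ji = -k:

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,   # real part
        w1*x2 + x1*w2 + y1*z2 - z1*y2,   # i component
        w1*y2 - x1*z2 + y1*w2 + z1*x2,   # j component
        w1*z2 + x1*y2 - y1*x2 + z1*w2,   # k component
    ])

i = np.array([0.0, 1.0, 0.0, 0.0])
j = np.array([0.0, 0.0, 1.0, 0.0])
print(qmul(i, j))  # i*j = k  -> [0. 0. 0. 1.]
print(qmul(j, i))  # j*i = -k -> [0. 0. 0. -1.]
```

Since qmul(i, j) != qmul(j, i), any gradient calculus built on quaternion products must track the order of factors, which is exactly what the GHR calculus used in Chapter 3 provides.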
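The general idea of a one-layer recurrent neural network for constrained convex optimization can be illustrated by a real-valued analogue (not the thesis's quaternion-valued network): a projection-type dynamical system dx/dt = -x + P(x - grad f(x)), whose equilibria are the constrained minimizers. The sketch below, with the made-up names `project` and `rnn_solve` and a simple box constraint as the feasible set, integrates these dynamics by the Euler method:

```python
import numpy as np

def project(x, lo, hi):
    # projection onto the feasible set, here the box [lo, hi]^n
    return np.clip(x, lo, hi)

def rnn_solve(grad_f, x0, lo, hi, dt=0.01, steps=5000):
    # Euler discretization of the network dynamics
    #   dx/dt = -x + P(x - grad f(x)),
    # whose equilibrium satisfies x = P(x - grad f(x)),
    # i.e. the first-order optimality condition.
    x = x0.copy()
    for _ in range(steps):
        x += dt * (-x + project(x - grad_f(x), lo, hi))
    return x

# example: minimize ||x - a||^2 over the box [0, 1]^3;
# the minimizer is the projection of a onto the box.
a = np.array([1.5, -0.3, 0.4])
grad_f = lambda x: 2.0 * (x - a)
x_star = rnn_solve(grad_f, np.zeros(3), 0.0, 1.0)
print(x_star)  # converges to [1.0, 0.0, 0.4]
```

The thesis's contribution is the quaternion-valued counterpart of such dynamics, where the gradient is replaced by the GHR-calculus generalized gradient and stability is established via a Lyapunov function rather than assumed from the real-valued theory.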
Keywords/Search Tags: quaternion-valued recurrent neural network, quaternion-valued convex optimization, quaternion generalized gradient, Lyapunov function, stability