
The Neural Network Method For The Absolute Value Equations And Its Sparse Solutions

Posted on: 2018-11-04    Degree: Master    Type: Thesis
Country: China    Candidate: X M Zhang    Full Text: PDF
GTID: 2310330515464366    Subject: Operational Research and Cybernetics
Abstract/Summary:
The advantages of using neural networks to solve such problems are well known: (1) a neural network solution based on a differential equation is differentiable and can be used in any subsequent calculation, whereas most other techniques provide only a discrete solution; (2) the neural network method for solving differential equations generalizes well; (3) the computational complexity of the neural network does not increase rapidly with the number of sampling points; (4) the neural network method admits parallel computation. In view of these advantages, this thesis mainly applies neural network methods to solve the absolute value equation Ax - |x| = b and to find its sparse solutions, where A is a given matrix and b is the corresponding vector.

For solving the absolute value equation by a neural network method, the difficulty is that the equation is non-differentiable, so reformulating it as an unconstrained optimization problem is not straightforward. We therefore smooth the absolute value term with one of two smooth functions: one is the Coherency function, and the other is a second smoothing function. We then establish a gradient neural network method for the smoothed problem and prove that the solution of the absolute value equation is an equilibrium point of the neural network model, and that the network is Lyapunov stable and asymptotically stable at this equilibrium point.
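As a minimal numerical sketch of a gradient network of this kind (not the thesis's exact model: the smoothing sqrt(x^2 + mu^2), the step size, and the test matrix below are all illustrative assumptions), one can discretize the gradient dynamics dx/dt = -grad E(x), with E(x) = 0.5*||Ax - phi_mu(x) - b||^2, by a forward-Euler scheme:

```python
import numpy as np

def smooth_abs(x, mu):
    # Smooth approximation of |x|; sqrt(x^2 + mu^2) is one common choice
    # (the thesis's own smoothing functions are not reproduced here).
    return np.sqrt(x**2 + mu**2)

def gnn_solve_ave(A, b, mu=1e-3, dt=1e-4, steps=300_000, tol=1e-7):
    """Forward-Euler discretization of the gradient dynamics
    dx/dt = -grad E(x),  E(x) = 0.5 * ||A x - phi_mu(x) - b||^2."""
    x = np.zeros(len(b))
    for _ in range(steps):
        r = A @ x - smooth_abs(x, mu) - b        # smoothed residual
        if np.linalg.norm(r) < tol:              # equilibrium reached
            break
        J = A - np.diag(x / smooth_abs(x, mu))   # Jacobian of x -> Ax - phi_mu(x)
        x -= dt * (J.T @ r)                      # Euler step along -grad E
    return x

# Illustrative instance: the singular values of A exceed 1, so the AVE
# A x - |x| = b has a unique solution for every b.
A = np.array([[4.0, 1.0], [1.0, 5.0]])
x_true = np.array([1.0, -2.0])
b = A @ x_true - np.abs(x_true)
x = gnn_solve_ave(A, b)   # converges to x_true up to the smoothing error
```

The step size dt plays the role of the integration step of the differential equation; it must be small enough that the Euler discretization stays stable near x = 0, where the curvature of the smoothed term is of order 1/mu.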
Numerical experiments show that the gradient neural network algorithm is effective for solving the absolute value equation. Finally, we also compare the solution errors and the solution times obtained with the two smooth functions, which is helpful in practical applications.

For the sparse solution of the absolute value equation by a neural network algorithm, previous results show that the minimum l1-norm method is effective. In this thesis, using the equivalence between the absolute value equation and the complementarity problem, we transform the absolute value equation into the fixed-point equation

x = (A+I)^{-1} { [(A+I)x - b - ((A-I)x - b)]_+ + b },

where [·]_+ denotes the projection operator. Solving the absolute value equation is then transformed into an l1-regularized projection minimization model. Applying a projection neural network algorithm to this model yields an approximate sparse solution of the problem. Finally, numerical experiments show that the projection neural network algorithm is effective and the accuracy of the solution is satisfactory.
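Note that the bracket in the fixed-point equation simplifies: (A+I)x - b - ((A-I)x - b) = 2x, and [2x]_+ = 2[x]_+, so the equation reads x = (A+I)^{-1}(2[x]_+ + b). As a sketch of this reformulation (the l1-regularized projection neural network itself is not reproduced here, and the test matrix is an illustrative assumption), a plain Picard iteration on this form recovers the solution whenever the map is a contraction, e.g. when ||(A+I)^{-1}|| < 1/2:

```python
import numpy as np

def ave_fixed_point(A, b, steps=500, tol=1e-10):
    """Picard iteration on the fixed-point form of the AVE,
    x = (A+I)^{-1} ( [(A+I)x - b - ((A-I)x - b)]_+ + b ),
    whose bracket simplifies to 2 [x]_+, with [.]_+ the projection
    onto the nonnegative orthant."""
    n = len(b)
    x = np.zeros(n)
    Ainv = np.linalg.inv(A + np.eye(n))
    for _ in range(steps):
        x_new = Ainv @ (2.0 * np.maximum(x, 0.0) + b)  # x <- (A+I)^{-1}(2[x]_+ + b)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustrative instance: ||(A+I)^{-1}|| ~ 0.23 < 1/2, so the iteration contracts.
A = np.array([[4.0, 1.0], [1.0, 5.0]])
x_true = np.array([1.0, -2.0])
b = A @ x_true - np.abs(x_true)
x = ave_fixed_point(A, b)   # recovers x_true
```

The fixed point satisfies (A+I)x = 2[x]_+ + b, i.e. Ax - (2[x]_+ - x) = Ax - |x| = b, confirming the equivalence with the original absolute value equation.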
Keywords/Search Tags: Neural network, Absolute value equations, Projection, Sparse solution, Smooth function