
Research on Optimization of the Reluplex Algorithm for Deep Neural Networks Based on Formal Verification

Posted on: 2022-11-10  Degree: Master  Type: Thesis
Country: China  Candidate: M Y Lu  Full Text: PDF
GTID: 2518306743974009  Subject: Cyberspace security
Abstract/Summary:
Formal verification is a method that uses mathematical language to prove whether a hardware or software system can, in theory, operate correctly. Verifying deep neural networks helps detect and improve overall properties of a network such as robustness and security, making applications of neural networks in industry, transportation, medicine, and other fields more secure, reliable, and efficient. It is therefore necessary to optimize neural network verification algorithms using methods based on formal verification.

Reluplex is one algorithm for verifying deep neural networks, but it still has many flaws and shortcomings: it can only verify networks that use the ReLU activation function, it leaves many timeout points that cannot be verified, and it costs a great deal of time and resources. This thesis modifies and optimizes the Reluplex algorithm to improve its efficiency and expand its verification capability. First, the algorithm is extended so that it can verify the ELU activation function; then optimizations such as multi-threaded parallelism, tightening of upper and lower bounds, and conflict analysis to undo case splits are added to reduce running time and resolve at least some of the timeout points. These optimizations are applied both to networks based on the ReLU activation function and to networks based on the ELU activation function, so that the optimization effect can be assessed through comparative analysis.

Finally, four sets of experimental data are obtained by testing the ReLU and ELU activation functions before and after optimization. The comparison shows that, once the optimization measures are adopted, verification speed improves for both ReLU and ELU networks, and some timeout points under the original algorithm are solved. The results also show that verification under the ELU activation function is faster than under ReLU, because ELU saves more neural network training time than ReLU and can correspondingly increase accuracy. These findings can support future research on the verification and optimization of deep neural networks.
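The abstract does not define the two activation functions it compares or the bound tightening it mentions. The following is a minimal Python sketch, not code from the thesis, of ReLU and ELU and of the interval reasoning that makes bound tightening useful: when a ReLU unit's input bounds have a fixed sign, its phase is determined and the case split can be skipped. The ELU parameter alpha = 1.0 is an assumption here; the abstract does not state its value.

```python
import math

def relu(x: float) -> float:
    # Piecewise linear; Reluplex verifies ReLU networks by
    # case-splitting each unit into its active (x >= 0) and
    # inactive (x < 0) phases.
    return max(0.0, x)

def elu(x: float, alpha: float = 1.0) -> float:
    # Identical to ReLU for x >= 0, but smooth and bounded below
    # by -alpha for x < 0; alpha = 1.0 is the common default
    # (an assumption, not stated in the abstract).
    return x if x >= 0.0 else alpha * (math.exp(x) - 1.0)

def tighten_relu_bounds(lo: float, hi: float) -> tuple[float, float]:
    # Interval bound tightening for one ReLU unit: if the input
    # interval [lo, hi] lies entirely on one side of zero, the
    # unit's phase is fixed and no case split is needed.
    return (max(0.0, lo), max(0.0, hi))

# Example: input bounds [-0.5, 2.0] straddle zero, so the output
# bounds tighten to [0.0, 2.0] but a case split is still required.
print(tighten_relu_bounds(-0.5, 2.0))  # (0.0, 2.0)
```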
Keywords/Search Tags: Formal verification, deep neural network, activation function, ReLU, ELU