As an important extension of the Boolean satisfiability (SAT) problem, model counting is a central reasoning task in fields ranging from artificial intelligence to formal verification, with a wide range of applications in probabilistic reasoning, neural network verification, automated planning, and other areas. The satisfiability problem asks whether a given propositional formula has a satisfying assignment, while the model counting problem asks for the number of satisfying assignments; its computational complexity is #P-complete, which makes model counting more challenging than satisfiability. With deepening academic research on model counting algorithms, growing demand from applications, and the impetus of the Model Counting Competition, dozens of model counting solvers have appeared in the community. The algorithms they employ fall into three main categories: search-based, compilation-based, and variable-elimination-based. Search-based methods extend the DPLL framework with smarter enumeration of partial assignments; compilation-based methods rely on an efficient knowledge compiler that translates the input formula into a target language, after which model counting is performed efficiently on the target-language representation; variable-elimination-based methods count models chiefly by carrying out a sequence of sums over the variables.

With the widespread use of model counting solvers, it has gradually become clear that no single solver outperforms all others on every instance of a large benchmark set; that is, there is no single dominant solver. Instead, different solvers excel on different instances, mainly because instances differ in their characteristics, and solvers built on different algorithms behave very differently on instances with different characteristics. If a suitable solver could be chosen for each instance according to its characteristics, the number of solved instances would rise substantially and the solving time would drop markedly. Motivated by this observation, this paper designs and implements SmartMC, a model counting solver based on solver selection. Its core idea is to compute the features of an instance and use a machine learning classifier to select the best solver according to those features. The features used by SmartMC start from 9 commonly used features that characterize propositional formulas, to which feature selection is applied; in addition, this paper introduces features specific to the model counting task.

This paper also investigates neural network solvers and proposes and implements BPMC, a neural network solver based on belief propagation, which returns an approximate model count of an instance in a very short time, with a solving time essentially independent of instance size. The basic principle of BPMC is as follows: once the propositional formula is represented as a factor graph, the normalization constant of that factor graph equals the number of models of the formula. Computing it directly is very difficult, however, so the belief propagation algorithm is used to propagate beliefs on the factor graph, yielding the marginal probabilities of its nodes; the Bethe free energy method is then used to derive an estimate of the normalization constant from these marginal probabilities. This paper combines BPMC with SmartMC: whenever SmartMC cannot solve an instance, BPMC supplies an approximate answer in a very short time, guaranteeing that SmartMC returns a result for any instance and further improving the completeness of SmartMC.

Two sets of experiments are designed. The first compares the neural network solver BPMC with the state-of-the-art approximate model counter ApproxMC and with the belief propagation neural network BPNN. The results show that the accuracy of BPMC is close to that of ApproxMC and better than that of BPNN, while BPMC requires far less solving time than ApproxMC and its solving time is hardly affected by instance size. The second set of experiments compares SmartMC, with the neural network solver BPMC disabled, against the state-of-the-art model counters SharpSAT-TD, ExactMC, Ganak, and DPMC. The results show that SmartMC outperforms current state-of-the-art model counting solvers in both the number of solved instances and the average solving time.
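To make the principle behind BPMC concrete, the sketch below (an illustration of the general technique only, not the authors' BPMC implementation; all names are illustrative) runs loopy belief propagation on the clause-variable factor graph of a small CNF in DIMACS-style literal notation and estimates the model count from the Bethe free energy. Because the clause potentials are 0/1 indicators, the Bethe energy term vanishes and the log normalization constant reduces to the entropy terms.

```python
import math
from itertools import product

def bp_count(n_vars, clauses, iters=100):
    """Estimate the model count of a CNF (tuples of DIMACS-style literals)
    via loopy belief propagation plus the Bethe free energy.
    Exact on tree-structured factor graphs; an approximation otherwise.
    Assumes propagation does not drive a message normalizer to zero
    (as can happen on unsatisfiable formulas)."""
    # Clause membership of each variable (its factor-graph neighbours).
    var_cls = {v: [] for v in range(1, n_vars + 1)}
    for a, cl in enumerate(clauses):
        for lit in cl:
            var_cls[abs(lit)].append(a)
    # Messages are normalized distributions [P(x=0), P(x=1)].
    m_vc = {(v, a): [0.5, 0.5] for v in var_cls for a in var_cls[v]}
    m_cv = {(a, abs(l)): [0.5, 0.5] for a, cl in enumerate(clauses) for l in cl}

    for _ in range(iters):
        # Clause -> variable: the clause only penalizes the value at which
        # its own literal is false AND every other literal is false.
        for a, cl in enumerate(clauses):
            for lit in cl:
                v = abs(lit)
                p_rest_false = 1.0
                for other in cl:
                    if abs(other) != v:
                        p_rest_false *= m_vc[(abs(other), a)][0 if other > 0 else 1]
                msg = [1.0, 1.0]
                msg[0 if lit > 0 else 1] -= p_rest_false
                s = msg[0] + msg[1]
                m_cv[(a, v)] = [msg[0] / s, msg[1] / s]
        # Variable -> clause: product of incoming messages from other clauses.
        for v, cls in var_cls.items():
            for a in cls:
                msg = [1.0, 1.0]
                for b in cls:
                    if b != a:
                        msg[0] *= m_cv[(b, v)][0]
                        msg[1] *= m_cv[(b, v)][1]
                s = msg[0] + msg[1]
                m_vc[(v, a)] = [msg[0] / s, msg[1] / s]

    xlogx = lambda p: p * math.log(p) if p > 0 else 0.0
    ln_z = 0.0
    # Clause-belief entropy terms.
    for a, cl in enumerate(clauses):
        weights = []
        for assign in product([0, 1], repeat=len(cl)):
            if any((assign[i] == 1) == (l > 0) for i, l in enumerate(cl)):
                w = 1.0
                for i, l in enumerate(cl):
                    w *= m_vc[(abs(l), a)][assign[i]]
                weights.append(w)
        z_a = sum(weights)
        ln_z -= sum(xlogx(w / z_a) for w in weights)
    # Variable-belief corrections, weighted by (degree - 1);
    # an unconstrained variable correctly contributes a factor of 2.
    for v, cls in var_cls.items():
        b = [1.0, 1.0]
        for a in cls:
            b[0] *= m_cv[(a, v)][0]
            b[1] *= m_cv[(a, v)][1]
        s = b[0] + b[1]
        ln_z += (len(cls) - 1) * (xlogx(b[0] / s) + xlogx(b[1] / s))
    return math.exp(ln_z)

if __name__ == "__main__":
    # (x1 or x2) and (not x1 or x3) has exactly 4 models over x1..x3.
    print(round(bp_count(3, [(1, 2), (-1, 3)])))  # -> 4
```

On this example the factor graph is a tree, so belief propagation converges to the exact marginals and the Bethe free energy recovers the exact count of 4; on loopy instances the same procedure yields only an approximation, which is the regime BPMC targets.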