Chapter

GRASP - A New Search Algorithm for Satisfiability

Authors: João P. Marques-Silva and Karem A. Sakallah
... The search procedure within DNN verifiers resembles similar procedures applied in SAT and SMT solving, and we seek to improve it by applying tried and tested techniques from those domains. In particular, we focus on Conflict-Driven Clause Learning (CDCL) [16,73,74]. There, the search procedure identifies subspaces of the search space which are "similar" to subspaces already traversed, and which were shown not to contain any satisfying assignments. ...
... Here we take the next step, and present a CDCL(T)-based DNN verifier. Conflict-Driven Clause Learning (CDCL) [16,73,74] enhances DPLL by adding learned conflict clauses to the formula being solved. Each such clause represents the negation of a partial assignment that caused unsatisfiability, and is hence implied by the original formula. ...
... The usefulness of such a clause is that it prevents the solver from revisiting a similar subspace of the search space, which is guaranteed to contain no satisfying assignments. When used within a SAT solver, CDCL will often significantly prune the resulting search tree, improving scalability considerably [16,73,74]. SMT solvers extensively use CDCL(T), the first-order extension of CDCL [13]. ...
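To make the notion in the excerpts above concrete, here is a minimal worked example of a learned clause (our illustration; the clauses are not from the cited papers). Suppose the formula contains (¬a ∨ b) and (¬a ∨ ¬b). Deciding a = 1 propagates b = 1 from the first clause and falsifies the second, so the partial assignment {a = 1} causes unsatisfiability; its negation is the learned clause (¬a), which is exactly the resolvent of the two clauses:

    (¬a ∨ b), (¬a ∨ ¬b)  ⊢  (¬a)     (resolution on b)

Adding (¬a) to the formula lets unit propagation exclude the whole a = 1 subspace on any later branch.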
Preprint
The widespread adoption of deep neural networks (DNNs) requires efficient techniques for safety verification. Existing methods struggle to scale to real-world DNNs, and tremendous efforts are being put into improving their scalability. In this work, we propose an approach for improving the scalability of DNN verifiers using Conflict-Driven Clause Learning (CDCL), an approach that has proven highly successful in SAT and SMT solving. We present a novel algorithm for deriving conflict clauses using UNSAT proofs, and propose several optimizations for expediting it. Our approach allows a modular integration of SAT solvers and DNN verifiers, and we implement it on top of an interface designed for this purpose. The evaluation of our implementation over several benchmarks suggests a 2X-3X improvement over a similar approach, with specific cases outperforming the state of the art.
... There have been tremendous advances in its resolution during the last two decades, and nowadays SAT solvers are used in industry to solve several challenging problems. The key to this great success lies in a very subtle combination of several features within the so-called CDCL (Conflict-Driven Clause Learning) [6][7][8][10] SAT solvers. The latter include conflict analysis with clause learning, efficient unit propagation through watched literals, dynamic branching/polarity heuristics and sporadic restarts. ...
... The most widespread algorithm today for solving SAT is known as CDCL (Conflict-Driven Clause Learning) [6][7][8][10]. The principle of CDCL can be summarized as follows: the algorithm performs a sequence of unit propagations until a fixed point is reached (i.e. ...
... Moreover, we integrated ECDCL with this optimization in one of the current top-performing SAT solvers, MapleLCMDistChronoBT (http://sat2018.forsyte.tuwien.ac.at/solvers/), in order to evaluate the impact on its performance (benchmarks: http://sat2018.forsyte.tuwien.ac.at/benchmarks/Main.zip and https://satcompetition.github.io/2020/downloads/sc2020-main.uri). The versions of the solvers used in these experiments were named in the same way as before, i.e. ...
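The propagate-until-a-fixed-point loop summarized in the excerpts above is simple to sketch. Below is a minimal, deliberately naive Python version (our illustration: it rescans all clauses instead of using watched literals, and the function name is ours):

    def unit_propagate(clauses, assignment):
        """Extend `assignment` by unit propagation to a fixed point.

        Clauses are DIMACS-style lists of non-zero ints (v means v=True,
        -v means v=False).  Returns the extended assignment as a dict,
        or None if some clause is falsified (a conflict).
        """
        assignment = dict(assignment)
        changed = True
        while changed:                       # repeat until a fixed point
            changed = False
            for clause in clauses:
                unassigned, satisfied = [], False
                for lit in clause:
                    val = assignment.get(abs(lit))
                    if val is None:
                        unassigned.append(lit)
                    elif val == (lit > 0):
                        satisfied = True     # clause already satisfied
                        break
                if satisfied:
                    continue
                if not unassigned:           # every literal is false
                    return None              # conflict
                if len(unassigned) == 1:     # unit clause forces a value
                    lit = unassigned[0]
                    assignment[abs(lit)] = lit > 0
                    changed = True
        return assignment

For example, unit_propagate([[1], [-1, 2]], {}) first forces variable 1 to true, which makes [-1, 2] unit and forces variable 2 to true.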
Article
Full-text available
The extension rule first introduced by G. Tseitin is a simple but powerful rule that, when added to resolution, leads to an exponentially stronger proof system known as extended resolution (ER). Despite the outstanding theoretical results obtained with ER, its exploitation in practice to improve SAT solvers' efficiency still poses some challenging issues. There have been several attempts in the literature aiming at integrating the extension rule within CDCL SAT solvers but the results are in general not as promising as in theory. An important remark that can be made on these attempts is that most of them focus on reducing the sizes of the proofs using the extended variables introduced in the solver. We adopt in this work a different view. We see extended variables as a means to enhance reasoning in solvers and therefore to give them the ability of reasoning on various semantic aspects of variables. Experiments carried out on the 2018 and 2020 SAT competitions' benchmarks show the use of the extension rule in CDCL SAT solvers to be practically beneficial for both satisfiable and unsatisfiable instances.
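For reference, the extension rule itself is easy to state (standard ER background, not specific to this paper): for a fresh variable x and existing literals ℓ1, ℓ2, one may add the clausal form of the definition

    x ↔ (ℓ1 ∧ ℓ2)  ≡  (¬x ∨ ℓ1) ∧ (¬x ∨ ℓ2) ∧ (x ∨ ¬ℓ1 ∨ ¬ℓ2)

Since x is fresh, the addition preserves satisfiability, yet resolution over the extended vocabulary can be exponentially shorter, e.g. for the pigeonhole formulas.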
... The Boolean Satisfiability Problem (SAT) consists of determining whether there exists an assignment of truth values to the variables of a given propositional logic formula that makes it evaluate to true. This problem, of great importance in computer science, has been a subject of special attention since the advent of modern SAT solvers based on the so-called CDCL (Conflict-Driven Clause Learning) procedure [4,7,8,10,13]. SAT is known to be NP-complete [1] and therefore very hard to solve (unless P = NP). ...
... Most sequential SAT solvers are based on the CDCL (Conflict-Driven Clause Learning) procedure which relies on various features heavily improved over the years. These features include fast unit propagation using watched literals [7,13], learning mechanisms [4], deterministic and randomized restart strategies [6,10,19], effective constraint database management and smart static and dynamic branching heuristics [2,9]. We briefly describe some of these features next. ...
... Clause learning: learning [4] is considered one of the most important features of CDCL SAT solvers. The idea here is to examine each conflict, produce a learned clause representing its cause, and store it in a learned-clause database. ...
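A compact sketch of the watched-literal scheme mentioned in the excerpts above (our simplified Python illustration: no trail or decision levels, clauses assumed to have at least two literals, and the class and method names are ours). The invariant is that each clause watches two of its literals and is revisited only when a watched literal becomes false:

    class WatchedClauses:
        """Two-watched-literal propagation (simplified sketch)."""

        def __init__(self, clauses):
            self.clauses = [list(c) for c in clauses]
            self.watch = {}                        # literal -> clause indices
            for i, c in enumerate(self.clauses):
                for lit in c[:2]:                  # watch the first two literals
                    self.watch.setdefault(lit, []).append(i)

        def value(self, assignment, lit):
            v = assignment.get(abs(lit))
            return None if v is None else v == (lit > 0)

        def assign(self, assignment, lit):
            """Make lit true; visit only the clauses watching -lit."""
            assignment[abs(lit)] = lit > 0
            units = []
            for ci in list(self.watch.get(-lit, [])):
                c = self.clauses[ci]
                if c[0] == -lit:                   # keep the false watch in slot 1
                    c[0], c[1] = c[1], c[0]
                if self.value(assignment, c[0]) is True:
                    continue                       # clause already satisfied
                for k in range(2, len(c)):         # search for a replacement watch
                    if self.value(assignment, c[k]) is not False:
                        c[1], c[k] = c[k], c[1]
                        self.watch[-lit].remove(ci)
                        self.watch.setdefault(c[1], []).append(ci)
                        break
                else:                              # no replacement found
                    if self.value(assignment, c[0]) is False:
                        return None, units         # conflict
                    units.append(c[0])             # c[0] is now forced
            return assignment, units

A real solver would additionally record the forcing clause of each implied unit, which is what conflict analysis later resolves on.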
Article
Full-text available
Search space splitting and portfolio are the two main approaches used in parallel SAT solving. Each of them has its strengths but also its weaknesses. Decomposition in search space splitting can help improve speedup on satisfiable instances, while competition in portfolio increases robustness. Many parallel hybrid approaches have been proposed in the literature, but most of them still cope with load-balancing issues that are the cause of a non-negligible overhead. In this paper, we describe a new parallel hybridization scheme based on both search space splitting and portfolio that does not require the use of load-balancing mechanisms (such as dynamic work stealing).
... First, a new efficient technique is developed, e.g., BDDs [8] or SAT solving with clause learning [25,28], and then new verification tools are built around it.) In this section, we justify our interest in PQE by listing some well-known problems that reduce to PQE. ...
... After assigning x1 = 1, x2 = 0 to satisfy C2, C3, the clause C1 is falsified. By conflict analysis [25], one derives the conflict clause C4 = y (obtained by resolving C1 with clauses C2 and C3). Adding C4 to C1 ∧ F makes C1 redundant in subspace y = 0. Note that C1 is not redundant in ∃X[C1 ∧ F] in subspace y = 0. Formula F is satisfiable in subspace y = 0 whereas C1 ∧ F is not. ...
... The algorithm of PrvRed is similar to that of a SAT solver [25]. PrvRed makes decision assignments and runs Boolean Constraint Propagation (BCP). ...
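The excerpt does not spell out C1, C2, C3; one concrete instantiation consistent with it (our assumption, for illustration only) is C1 = (¬x1 ∨ x2 ∨ y), C2 = (x1 ∨ y), C3 = (¬x2 ∨ y). The derivation of C4 = y then reads:

    resolve C1, C2 on x1:       (x2 ∨ y)
    resolve (x2 ∨ y), C3 on x2:  (y)  =  C4

Under x1 = 1, x2 = 0 this C1 is indeed falsified in the subspace y = 0, matching the excerpt.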
Preprint
We study a modification of the Quantifier Elimination (QE) problem called Partial QE (PQE) for propositional CNF formulas. In PQE, only a small subset of target clauses is taken out of the scope of quantifiers. The appeal of PQE is that many verification problems, e.g. equivalence checking and model checking, reduce to PQE and, intuitively, the latter should be much easier than QE. One can perform PQE by adding a set of clauses depending only on free variables that make the target clauses redundant. Proving redundancy of a target clause is done by derivation of a "certificate" clause implying the former. This idea is implemented in our PQE algorithm called START. It bears some similarity to a SAT-solver with conflict driven learning. A major difference here is that START backtracks when a target clause is proved redundant in the current subspace (a conflict being just one of backtracking conditions). We experimentally evaluate START on a practical problem. We use this problem to show that PQE can be much easier than QE.
... After assigning x1 = 1, x2 = 0 to satisfy C2, C3, the clause C1 is falsified. Using the standard conflict analysis [30], one derives a conflict clause C4 = y. Adding C4 to C1 ∧ G makes C1 redundant in subspace y = 0. ...
... The current target clause C_trg is set to C_pr. The main work is done in a while loop that is similar to the main loop of a SAT solver [30]. In particular, PrvRed uses the notion of a decision level. ...
... When BCP reports a backtracking condition, the Lrn procedure (line 11 of Fig. 2) generates a conflict clause C or a D-sequent S. Lrn generates a conflict clause when BCP returns a falsified clause C′ and every implied assignment used by Lrn to construct C is derived from a clause [30]. Adding C to F1 ∧ F2 makes the current target clause C_trg redundant in subspace a. Otherwise, Lrn generates a D-sequent S for C_trg. ...
Preprint
We consider a modification of the Quantifier Elimination (QE) problem called Partial QE (PQE). In PQE, only a small part of the formula is taken out of the scope of quantifiers. The appeal of PQE is that many verification problems, e.g. equivalence checking and model checking, reduce to PQE and the latter is much easier to solve than complete QE. Earlier, we introduced a PQE algorithm based on the machinery of D-sequents. A D-sequent is a record stating that a clause is redundant in a quantified CNF formula in a specified subspace. To make this algorithm efficient, it is important to reuse learned D-sequents. However, reusing D-sequents is not as easy as conflict clauses in SAT-solvers because redundancy is a structural rather than a semantic property. (So, a clause is redundant only in some subset of logically equivalent CNF formulas.) We address this problem by introducing a modified definition of D-sequents that facilitates their safe reusing. We also present a new PQE algorithm that proves redundancy of target clauses one by one rather than all at once as in the previous PQE algorithm. We experimentally show the improved performance of this algorithm. We demonstrate that reusing D-sequents makes the new PQE algorithm even more powerful.
... While traditionally, NP-hard problems were considered computationally intractable, today SAT solvers routinely and successfully solve instances of NP-hard problems from virtually all application domains, and in particular problem instances of industrial relevance [Var14]. Starting with the classic DPLL algorithm from the 1960s [DP60,DLL62], there have been a number of milestones in the evolution of SAT solving, but clearly one of the breakthrough achievements was the introduction of clause learning in the late 1990s, leading to the paradigm of conflict-driven clause learning (CDCL) [MS96,ZMMM01], the predominant technique of modern SAT solving. CDCL ingeniously combines a number of crucial ingredients, among them variable decision heuristics, unit propagation, clause learning from conflicts, and restarts (cf. ...
... While it is relatively easy to see that the classic DPLL branching algorithm [DP60,DLL62] exactly corresponds to tree-like resolution (where resolution derivations are in form of a tree), the relation between CDCL and resolution is far more complex. On the one hand, resolution proofs can be generated efficiently from traces of CDCL runs on unsatisfiable formulas [BKS04], a crucial observation being that learned clauses are derivable by resolution [BKS04,MS96]. The opposite simulation is considerably more difficult, with a series of works [BKS04,HBPV08,PD11,AFT11] culminating in the result that CDCL can efficiently simulate arbitrary resolution proofs, i.e., resolution and CDCL are equivalent. ...
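The resolution rule underlying both systems derives, from two clauses with a clashing variable, their resolvent:

    (C ∨ x), (D ∨ ¬x)  ⊢  (C ∨ D)

The observation cited above [BKS04, MS96] is that every clause a CDCL solver learns can be derived from existing clauses by a short sequence of such steps along the implication graph, which is why CDCL traces on unsatisfiable formulas yield resolution proofs.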
Article
Full-text available
QBF solvers implementing the QCDCL paradigm are powerful algorithms that successfully tackle many computationally complex applications. However, our theoretical understanding of the strength and limitations of these QCDCL solvers is very limited. In this paper we suggest to formally model QCDCL solvers as proof systems. We define different policies that can be used for decision heuristics and unit propagation and give rise to a number of sound and complete QBF proof systems (and hence new QCDCL algorithms). With respect to the standard policies used in practical QCDCL solving, we show that the corresponding QCDCL proof system is incomparable (via exponential separations) to Q-resolution, the classical QBF resolution system used in the literature. This is in stark contrast to the propositional setting, where CDCL and resolution are known to be p-equivalent. This raises the question of which formulas are hard for standard QCDCL, since Q-resolution lower bounds do not necessarily apply to QCDCL, as we show here. In answer to this question we prove several lower bounds for QCDCL, including exponential lower bounds for a large class of random QBFs. We also introduce a strengthening of the decision heuristic used in classical QCDCL, which does not necessarily decide variables in order of the prefix, but still allows learning asserting clauses. We show that with this decision policy, QCDCL can be exponentially faster on some formulas. We further exhibit a QCDCL proof system that is p-equivalent to Q-resolution. In comparison to classical QCDCL, this new QCDCL version adapts both decision and unit propagation policies.
... The verification problem is often formulated as a Boolean Satisfiability (SAT) problem in the form of a miter circuit, solved using state-of-the-art tools such as the Z3 Satisfiability Modulo Theories (SMT) solver [1]. SAT was the first problem shown to be NP-complete [2], and its solution often relies on heuristics, e.g. the widely used CDCL algorithm [3] in modern SAT solvers, which still suffers from exponential run-time in the worst case. ...
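For readers unfamiliar with the construction: a miter conjoins two circuits A and B on shared inputs, XORs their outputs, and asserts the XOR, so equivalence checking becomes an UNSAT check:

    miter(A, B) = CNF(A) ∧ CNF(B) ∧ (out_A ⊕ out_B)
    A ≡ B  ⟺  miter(A, B) is unsatisfiable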
Preprint
Full-text available
The use of Boolean Satisfiability (SAT) solvers for hardware verification incurs exponential run-time in several instances. In this work we propose an efficient quantum SAT (qSAT) solver for equivalence checking of Boolean circuits employing Grover's algorithm. The Exclusive-Sum-of-Products-based generation of the equivalent Conjunctive Normal Form clauses demands fewer qubits and minimizes the gates and depth of the quantum circuit interpretation. The consideration of reference circuits for verification, affecting Grover's iterations and quantum resources, is also presented as a case study. Experimental results are presented assessing the benefits of the proposed verification approach using the open-source Qiskit platform and an IBM quantum computer.
... After assigning x2 = 1 to satisfy C1, the clause C0 also becomes unit and a conflict occurs (to satisfy C0 and C2, one has to assign opposite values to x3). After a standard conflict analysis [7], the conflict clause K = y0 is obtained by resolving ...
Preprint
Full-text available
Earlier, we introduced Partial Quantifier Elimination (PQE). It is a generalization of regular quantifier elimination where one can take a part of the formula out of the scope of quantifiers. We apply PQE to CNF formulas of propositional logic with existential quantifiers. The appeal of PQE is that many problems like equivalence checking and model checking can be solved in terms of PQE, and the latter can be very efficient. The main flaw of current PQE solvers is that they do not reuse learned information. The problem here is that these PQE solvers are based on the notion of clause redundancy, and the latter is a structural rather than a semantic property. In this paper, we provide two important theoretical results that enable reusing the information learned by a PQE solver. Such reusing can dramatically boost the efficiency of PQE, like conflict clause learning boosts SAT solving.
... Significant progress in developing practical solvers for maximum satisfiability (MaxSAT) [3] (the optimization extension of the Boolean satisfiability (SAT) problem) has established MaxSAT as a viable choice for a declarative optimization paradigm, enabling efficient solving of various types of real-world combinatorial optimization problems. Leveraging the extraordinary success of SAT solvers [24,6,23,31,25] as "real-life NP oracles", together with non-trivial algorithmic advances, research on MaxSAT solving techniques has until recently mostly focused on complete (or exact) algorithms, yielding solvers which are guaranteed to provide provably optimal solutions given enough computational resources. However, due to intrinsic computational barriers to scaling and speeding up exact approaches in general, the development of practical incomplete (or inexact) MaxSAT solvers has recently gained significant traction [26,4,7,1,10]. ...
Chapter
Full-text available
Significant advances have recently been made in the development of increasingly effective inexact (or incomplete) search algorithms, particularly geared towards finding good though not provably optimal solutions fast, for the constraint optimization paradigm of maximum satisfiability (MaxSAT). One of the most successful recent approaches is a new type of stochastic local search in which a Boolean satisfiability (SAT) solver is used as a decision oracle for moving from one solution to another. In this work, we strive to extend the success of the approach to the more general realm of pseudo-Boolean optimization (PBO), where constraints are expressed as linear inequalities over binary variables. As a basis for the approach, we make use of recent advances in practical approaches to satisfiability checking of pseudo-Boolean constraints. We outline various heuristics within the oracle-based approach to anytime PBO solving, and show that the approach compares favorably in practice both to a recently proposed local search approach for PBO, which is in comparison a more traditional instantiation of the stochastic local search paradigm, and to a recent exact PBO approach when used as an anytime solver.
... These problems occur in various different domains, like planning [4], artificial intelligence [2], formal verification [5], automatic test-pattern generation [6], and more. Thus, in the past decades, several different methods for solving satisfiability problems have been developed [7][8][9][10][11]. Despite these efforts, to this day, no algorithm is known that can solve any instance of satisfiability problems in worst-case polynomial time. ...
Article
Full-text available
One way of solving 3sat instances on a quantum computer is to transform the 3sat instances into instances of Quadratic Unconstrained Binary Optimization (QUBO), which can be used as an input for the QAOA algorithm on quantum gate systems or as an input for quantum annealers. This mapping is performed by a 3sat-to-QUBO transformation. Recently, it has been shown that the choice of the 3sat-to-QUBO transformation can significantly impact the solution quality of quantum annealing: the solution quality can vary by up to an order of magnitude in the number of correct solutions received, depending solely on the 3sat-to-QUBO transformation. An open question is: what causes these differences in the solution quality when solving 3sat instances with different 3sat-to-QUBO transformations? To be able to conduct meaningful studies that assess the reasons for the differences in performance, a larger number of different 3sat-to-QUBO transformations would be needed. However, currently there are only a few known 3sat-to-QUBO transformations, and all of them were created manually by experts, who used time and clever reasoning to create these transformations. In this paper, we solve this problem by proposing an algorithmic method that is able to create thousands of new and different 3sat-to-QUBO transformations, and thus enables researchers to systematically study the reasons for the significant differences in the performance of different 3sat-to-QUBO transformations. Our algorithmic method is an exhaustive search procedure that exploits properties of 4×4-dimensional pattern QUBOs, a concept which has been used implicitly in the creation of 3sat-to-QUBO transformations before but was never described explicitly. In this paper, we thus also formally and explicitly introduce the concept of pattern QUBOs.
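As background for what such transformations look like, here is one classical per-clause construction (a Rosenberg-style quadratization with one ancilla per clause; our illustration, not the pattern-QUBO method of the paper). Writing ai = 1 − xi, the clause (x1 ∨ x2 ∨ x3) is falsified exactly when a1·a2·a3 = 1, and that cubic term is replaced by a quadratic penalty with ancilla w:

    H_clause = w·a3 + (a1·a2 − 2w·(a1 + a2) + 3w),   where ai = 1 − xi

Minimizing over w gives H_clause = a1·a2·a3, so the sum of these penalties over all clauses is a QUBO whose minimum is 0 exactly at the satisfying assignments.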
... Due to the generality of NP-complete problems [14], SAT solvers are powerful tools. Since 1962, the DPLL [15] method of backtracking search has served as the core tool in SAT solving, with the more recent development of Conflict-Driven Clause Learning (CDCL) [37] in 1996 aiding in optimizing the search for a satisfying model or contradiction indicating UNSAT. ...
Preprint
Full-text available
Protecting the confidentiality of private data and using it for useful collaboration have long been at odds. Modern cryptography is bridging this gap through rapid growth in secure protocols such as multi-party computation, fully-homomorphic encryption, and zero-knowledge proofs. However, even with provable indistinguishability or zero-knowledgeness, confidentiality loss from leakage inherent to the functionality may partially or even completely compromise secret values without ever falsifying proofs of security. In this work, we describe McFIL, an algorithmic approach and accompanying software implementation which automatically quantifies intrinsic leakage for a given functionality. Extending and generalizing the Chosen-Ciphertext attack framework of Beck et al. with a practical heuristic, our approach not only quantifies but maximizes functionality-inherent leakage using Maximum Model Counting within a SAT solver. As a result, McFIL automatically derives approximately-optimal adversary inputs that, when used in secure protocols, maximize information leakage of private values.
... In the past few decades, we have observed tremendous progress for practical problem-solving in NP-hard domains with wide applicability in, for example, circuit design Hong et al. [2010], hardware verification Gupta et al. [2006], and mathematical discovery Konev and Lisitsa [2014]. SAT solvers based on conflict-driven clause learning can solve instances with thousands of variables and clauses in seconds, which demonstrates surprising scaling performance despite SAT being an NP-complete task Silva and Sakallah [2003]. ...
Preprint
Despite the success of practical solvers in various NP-complete domains such as SAT and CSP as well as using deep reinforcement learning to tackle two-player games such as Go, certain classes of PSPACE-hard planning problems have remained out of reach. Even carefully designed domain-specialized solvers can fail quickly due to the exponential search space on hard instances. Recent works that combine traditional search methods, such as best-first search and Monte Carlo tree search, with Deep Neural Networks' (DNN) heuristics have shown promising progress and can solve a significant number of hard planning instances beyond specialized solvers. To better understand why these approaches work, we studied the interplay of the policy and value networks of DNN-based best-first search on Sokoban and show the surprising effectiveness of the policy network, further enhanced by the value network, as a guiding heuristic for the search. To further understand the phenomena, we studied the cost distribution of the search algorithms and found that Sokoban instances can have heavy-tailed runtime distributions, with tails both on the left and right-hand sides. In particular, for the first time, we show the existence of left heavy tails and propose an abstract tree model that can empirically explain the appearance of these tails. The experiments show the critical role of the policy network as a powerful heuristic guiding the search, which can lead to left heavy tails with polynomial scaling by avoiding exploring exponentially sized subtrees. Our results also demonstrate the importance of random restarts, as are widely used in traditional combinatorial solvers, for DNN-based search methods to avoid left and right heavy tails.
... Modern tools for the generation of fault-detecting tests are a combination of dedicated ATPG methods pioneered by the D-algorithm [11] and SAT-based algorithms [9], [10]. To make a generic SAT solver work in the ATPG setting, some extra work is done. ...
Preprint
Incompleteness of a specification Spec creates two problems. First, an implementation Impl of Spec may have some unwanted properties that Spec does not forbid. Second, Impl may break some desired properties that are not in Spec. In either case, Spec fails to expose bugs of Impl. In an earlier paper, we addressed the first problem above by a technique called Partial Quantifier Elimination (PQE). In contrast to complete QE, in PQE one takes out of the scope of quantifiers only a small piece of the formula. We used PQE to generate properties of Impl, i.e. those consistent with Impl. Generation of an unwanted property means that Impl is buggy. In this paper, we address the second problem above by using PQE to generate false properties, i.e. those that are inconsistent with Impl. Such properties are meant to imitate the missing properties of Spec that are not satisfied by Impl (if any). A false property is generated by modifying a piece of a quantified formula describing 'the truth table' of Impl and taking this piece out of the scope of quantifiers. By modifying different pieces of this formula, one can generate a "structurally complete" set of false properties. By generating tests detecting false properties of Impl, one produces a high-quality test set. We apply our approach to the verification of combinational and sequential circuits.
... Checking the satisfiability of a set of Boolean constraints in conjunctive normal form (CNF) is known as the SAT problem. A significant amount of research has been put into SAT solvers, as they have a wide range of direct applications (e.g., hardware verification); in particular, modern SAT solvers use the conflict-driven clause learning (CDCL) algorithm [99]. The CDCL algorithm was inspired by an earlier satisfiability-checking algorithm, the Davis-Putnam-Logemann-Loveland (DPLL) algorithm [100]. ...
Thesis
In this thesis, we describe and evaluate approaches for the efficient reasoning of real-world C programs using either Bounded Model Checking (BMC) or symbolic execution. We present three main contributions. First, we describe three new technologies developed in a software verification tool to handle real-world programs: (1) a frontend based on a state-of-the-art compiler, (2) a new SMT backend with support for floating-point arithmetic and (3) an incremental bounded model checking algorithm. These technologies are implemented in ESBMC, an SMT-based bounded model checker for C programs; results show that these technologies enable the verification of a large number of programs. Second, we formalise and evaluate the bkind algorithm: a novel extension to the k-induction algorithm that improves its bug-finding capabilities by performing backward searches in the state space. The bkind algorithm is the main scientific contribution of this thesis. It was implemented in ESBMC, and we show that it uses fewer resources compared to the original k-induction algorithm to verify the same programs without impacting the results. Third, we evaluate the use of SMT solvers in a state-of-the-art symbolic execution tool to reduce the number of false bugs reported to the user. Our SMT-based refutation of false bugs algorithm was implemented in the clang static analyser and evaluated on a large set of real-world projects, including the macOS kernel. Results show that our refutation algorithm can not only remove false bugs but also speed up the analysis when bugs are refuted. The algorithm does not remove any true bug and only introduces a 1% slowdown if it is unable to remove any bugs.
Article
Full-text available
The Conflict-Driven Clause Learning (CDCL) framework integrates multiple heuristic components to solve Boolean satisfiability (SAT) problems through synergistic cooperation. Understanding the characteristics of these components in the underlying architecture provides crucial insights for designing corresponding methods to enhance the performance of CDCL solvers. Although numerous studies from diverse perspectives have been conducted, there remains a need to develop efficient methods and algorithms to meet the requirements for enhancing the performance efficiency of SAT solving. In this paper, we introduce two fundamental innovations: deep restart, a strategic reset mechanism that clears variable activity states while preserving learned clauses and randomizing phases, and assignment coverage time (CoverT), a novel metric quantifying the minimum conflict count required to assign all variables at least once during search exploration. The CoverT metric provides unique insight into the characteristics of the instance structure, allowing dynamic adaptation of branching heuristics in our proposed Deep Restart-Enhanced Conflict-Driven Clause Learning algorithm framework (DR-CDCL). Experimental validation on the 2021-2023 SAT Competition benchmarks demonstrated statistically significant improvements: 14 additional solved instances for satisfiable cases (352 → 366, p < 0.05 via McNemar's test) and a 7.3% reduction in average runtime for SAT instances under the 5000 s timeout threshold. Notably, the performance trade-off analysis revealed that while deep restart enhances solution diversity for satisfiable instances, it introduced a 2.1% overhead on unsatisfiable proofs due to clause-learning pattern disruption, a phenomenon requiring further investigation. This work advances solver architecture design by establishing formal connections between exploratory search patterns and instance structural complexity. The implemented solution prototype and benchmark data are publicly available to facilitate reproducibility.
Article
Full-text available
Integrated circuit (IC) testing presents complex problems that for large circuits are exceptionally difficult to solve by traditional computing techniques. To deal with unmanageable time complexity, engineers often rely on human “hunches" and “heuristics" learned through experience. Training computers to adopt these human skills is referred to as machine intelligence (MI) or machine learning (ML). This survey examines applications of such methods to test analog, radio frequency (RF), digital, and memory circuits. It also summarizes ML applications to hardware security and emerging technologies, highlighting challenges and potential research directions. The present work is an extension of a recent paper from IEEE VLSI Test Symposium (VTS’21), and includes recent applications of artificial neural network (ANN) and principal component analysis (PCA) to automatic test pattern generation (ATPG).
Article
Full-text available
Quantified conflict-driven clause learning (QCDCL) is one of the main solving approaches for quantified Boolean formulas (QBF). One of the differences between QCDCL and propositional CDCL is that QCDCL typically follows the prefix order of the QBF for making decisions. We investigate an alternative model for QCDCL solving where decisions can be made in arbitrary order. The resulting system QCDCL^Any is still sound and terminating, but does not necessarily allow to always learn asserting clauses or cubes. To address this potential drawback, we additionally introduce two subsystems that guarantee to always learn asserting clauses (QCDCL^Uni-Any) and asserting cubes (QCDCL^Exi-Any), respectively. We model all four approaches by formal proof systems and show that QCDCL^Uni-Any is exponentially better than QCDCL on false formulas, whereas QCDCL^Exi-Any is exponentially better than QCDCL on true QBFs. Technically, this involves constructing specific QBF families and showing lower and upper bounds in the respective proof systems. We complement our theoretical study with some initial experiments that confirm our theoretical findings.
Article
Full-text available
Efficient and verifiable SAT proof checking is an important topic in logic-based AI as well as for trusted explanations from machine learning models. However, the development of efficient and verifiable methods often leads to advanced methods in interactive theorem provers or workarounds, such as untrusted proof-annotation phases, due to the difficulty of implementing efficient unit propagation. In this work-in-progress paper, we explore a way towards efficient and verifiable SAT proof checkers for the RUP proof format. In essence, we propose an architecture, based on several assumptions corresponding to well-known theorems in propositional logic, that allows C-like performance for the proof checker as well as a significant reduction in the verification effort. The architecture guarantees soundness of the resulting proof checker.
Chapter
We address the problem of variable and truth-value choice in modern search-based Boolean satisfiability (SAT) solvers depending on the problem domain. The SAT problem is the task of determining a truth-value assignment for the variables of a given Boolean formula under which the formula evaluates to true. The SAT problem is often used as a canonical representation of combinatorial problems in many domains of computer science, ranging from artificial intelligence to software engineering. Modern complete search-based SAT solvers represent a universal problem-solving tool which often provides higher efficiency than ad-hoc direct solving approaches. Many efficient variable and truth-value selection heuristics have been devised. Heuristics can usually be fine-tuned by single or multiple numerical parameters prior to executing the search process over the concrete SAT instance. In this paper, we present a machine learning approach that predicts the parameters of a heuristic from the underlying structure of a graph derived from the input SAT instance. Using this approach, we effectively fine-tune the SAT solver for a specific problem domain.
Article
The Boolean satisfiability problem (SAT) is a fundamental NP-complete decision problem in automated reasoning and mathematical logic. As evidenced by the results of SAT competitions, the performance of SAT solvers varies substantially between different SAT categories (random, crafted, and industrial). A suggested explanation is that SAT solvers may exploit the underlying structure inherent to SAT instances. There have been attempts to define the structure of SAT in terms of structural measures such as phase transition, backbones, backdoors, small-world, scale-free, treewidth, centrality, community, self-similarity, and entropy. Still, the empirical evidence of structural measures for SAT has been provided for only some SAT categories. Furthermore, the evidence has not been theoretically proven. Also, the impact of structural measures on the behavior of SAT solvers has not been extensively examined. This work provides a comprehensive study on structural measures for SAT that have been presented in the literature. We provide an overview of the works on structural measures for SAT and their relatedness to the performance of SAT solvers. Accordingly, a taxonomy of structural measures for SAT is presented. We also review in detail important applications of structural measures for SAT, focusing mainly on enhancing SAT solvers, generating SAT instances, and classifying SAT instances.
Article
Full-text available
This paper presents a new way to improve the performance of SAT-based bounded model checking on sequential and parallel procedures by exploiting relevant information identified through the characteristics of the original problem. This led us to design a new way of building interesting heuristics based on the structure of the underlying problem. The proposed methodology is generic and can be applied to any SAT problem. This paper compares the state-of-the-art approaches with two new heuristics for sequential procedures: structure-based and linear-programming heuristics. We extend this study and apply the above methodology to parallel approaches, especially to refine the sharing measure, which shows promising results.
Article
Full-text available
The (weighted) partial maximum satisfiability ((W)PMS) problem is an important generalization of the classic problem of propositional (Boolean) satisfiability with a wide range of real-world applications. In this paper, we propose an initialization and a diversification strategy to improve local search for the (W)PMS problem. Our initialization strategy is based on a novel definition of variables' structural entropy, and it aims to generate a solution that is close to a high-quality feasible one. Then, our diversification strategy picks a variable in two possible ways, depending on a parameter: continuing to pick variables with the best benefits, or focusing on a clause with the greatest penalty and then selecting variables probabilistically. Based on these strategies, we developed a local search solver dubbed ImSATLike, as well as a hybrid solver, ImSATLike-TT. Experimental results on (weighted) partial MaxSAT instances from recent MaxSAT Evaluations show that they generally outperform or match the performance of state-of-the-art local search and hybrid competitors, respectively. Furthermore, we carried out experiments to confirm the individual impact of each proposed strategy.
Article
The satisfiability problem (SAT) is one of the most famous problems in computer science. Traditionally, its NP-completeness has been used to argue that SAT is intractable. However, there have been tremendous practical advances in recent years that allow modern SAT solvers to solve instances with millions of variables and clauses. A particularly successful paradigm in this context is stochastic local search (SLS). In most cases, there are different ways of formulating the underlying SAT problem. While it is known that the precise formulation of the problem has a significant impact on the runtime of solvers, finding a helpful formulation is generally non-trivial. The recently introduced GapSAT solver [Lorenz and Wörz 2020] demonstrated a successful way to improve the performance of an SLS solver on average by learning additional information which logically follows from the original problem. Still, there were also cases in which the performance slightly deteriorated. This justifies in-depth investigations into how learning logical implications affects runtimes for SLS algorithms. In this work, we propose a method for generating logically equivalent problem formulations, generalizing the ideas of GapSAT. This method allows a rigorous mathematical study of the effect on the runtime of SLS SAT solvers. Initially, we conduct empirical investigations. If the modification process is treated as random, Johnson SB distributions provide a perfect characterization of the hardness. Since the observed Johnson SB distributions approach lognormal distributions, our analysis also suggests that the hardness is long-tailed. As a second contribution, we theoretically prove that restarts are useful for long-tailed distributions. This implies that incorporating additional restarts can further refine all algorithms employing the above-mentioned modification technique. Since the empirical studies compellingly suggest that the runtime distributions follow Johnson SB distributions, we also investigate this property on a theoretical basis. We succeed in proving that the runtimes for the special case of Schöning's random walk algorithm [Schöning 2002] are approximately Johnson SB distributed.
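The usefulness of restarts for long-tailed runtime distributions is easy to probe with a toy Monte Carlo experiment (our sketch in Python; a lognormal distribution stands in for the long-tailed runtimes discussed above, and the cutoff of 20 steps is arbitrary):

    import random
    import statistics

    def runtime_with_restarts(cutoff, draw):
        """Total work until some run finishes within `cutoff` steps."""
        total = 0.0
        while True:
            t = draw()
            if t <= cutoff:
                return total + t             # this run finished in time
            total += cutoff                  # abort at the cutoff, restart

    random.seed(0)
    draw = lambda: random.lognormvariate(2.0, 2.0)   # long-tailed runtimes

    plain = [draw() for _ in range(10_000)]
    restarted = [runtime_with_restarts(20.0, draw) for _ in range(10_000)]
    print(statistics.mean(plain), statistics.mean(restarted))

On this toy distribution the restarted mean lands well below the unrestarted one, mirroring the theoretical claim; for short-tailed distributions the same experiment shows no benefit.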
Article
Full-text available
With the continuous increase in the number of network layers, while network performance is constantly improving, the complexity of the network is also increasing exponentially, which limits the application scenarios of neural networks. To solve this problem, many network compression and acceleration methods have been proposed. Unstructured pruning of networks is a common method that generates compact networks by pruning individual parameters of the network. A common operation in pruning is to evaluate the importance of the network at a certain scale in order to allocate the pruning proportion. We propose a method that combines the importance of multiple scales to comprehensively evaluate the importance of parameters. Firstly, the network is pruned layer by layer, and the sum of the classification accuracy of the network after pruning and after training for one epoch is taken as the sensitivity of that layer. Secondly, at the filter level, a channel attention module is used to evaluate the importance of each channel of the filter. Thirdly, the KL (Kullback-Leibler) divergence is used to evaluate the importance of the parameters in the convolution kernel. Finally, the importance of each parameter is generated by multiplying the importance scores of the three scales, and the result is obtained after fine-tuning the network.
Article
Boolean satisfiability (SAT) plays a key role in diverse areas spanning planning, inference, data mining, testing and optimization. Apart from the classical problem of checking Boolean satisfiability, generating random satisfying assignments has attracted significant theoretical and practical interest over the past years. In practical applications, usually a large number of satisfying assignments for a given Boolean formula are needed, and their generation turns out to be a computationally hard problem in both theory and practice. In this work, we propose a novel approach to derive a large set of satisfying assignments from a given one in an efficient way. Our approach is based on the insight that flipping the truth values of properly chosen variables of a satisfying assignment can yield satisfying assignments without invoking computationally expensive SAT solving. We propose a derivation algorithm to discover such variables for each given satisfying assignment. Our approach is orthogonal to previous techniques for generating satisfying assignments and can be integrated into existing SAT samplers. We implement our approach as an open-source tool ESampler using two representative state-of-the-art samplers (QuickSampler and UniGen3) as the underlying satisfying-assignment generation engine. We conduct extensive experiments on various publicly available benchmarks and apply ESampler to solve Bayesian inference. The results show that ESampler can efficiently boost the sampling of satisfying assignments of both QuickSampler and UniGen3 on a large portion of the benchmarks and is at least comparable on the others. ESampler performs considerably better than QuickSampler and UniGen3, as well as another state-of-the-art sampler, SearchTreeSampler.
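The underlying insight can be illustrated in a few lines of Python (our sketch, not the actual derivation algorithm of ESampler): a single variable flip preserves satisfaction exactly when no clause is supported solely by that variable's literal.

    def flippable(clauses, model):
        """Variables whose single flip keeps every clause satisfied.

        `model` maps var -> bool and must satisfy `clauses` (DIMACS-style
        int lists).  A flip of v is safe iff no clause relies on v's
        literal as its only satisfying literal.  Exact for one flip at a
        time; simultaneous flips need rechecking.
        """
        safe = set(model)
        for clause in clauses:
            support = [lit for lit in clause
                       if model[abs(lit)] == (lit > 0)]   # satisfying literals
            if len(support) == 1:            # clause pinned by a single literal
                safe.discard(abs(support[0]))
        return safe

    clauses = [[1, 2], [-1, 3]]
    model = {1: True, 2: True, 3: True}
    print(flippable(clauses, model))         # {1, 2}: either flip stays SAT

Flipping any variable in the returned set yields a new satisfying assignment without a solver call; flipping several at once requires rechecking, since the flips may remove each other's support.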
Article
In this paper we revisit the topic of generalizing proof obligations in bit-level Property Directed Reachability (PDR). We provide a comprehensive study which (1) determines the complexity of the problem, (2) thoroughly analyzes limitations of existing methods, (3) introduces approaches to proof obligation generalization that have never been used in the context of PDR, (4) compares the strengths of different methods from a theoretical point of view, and (5) intensively evaluates the methods on various benchmarks from Hardware Model Checking as well as from AI Planning.
Article
The Electric Vehicle Routing Problem with Time Windows, Piecewise-Linear Recharging and Capacitated Recharging Stations aims to design minimum-cost routes for a fleet of electric vehicles subject to intra-route and inter-route constraints. Every vehicle is equipped with a rechargeable battery that depletes while it transports goods along its route. A vehicle must detour to a recharging station to recharge before draining its battery. To approximate a real recharging process, the amount of energy restored is modeled as a piecewise-linear function of the time spent recharging. Furthermore, each station has a small number of chargers, and hence, when and where a vehicle can recharge must be scheduled around the availability of a charger. This interaction between vehicles does not appear in classical vehicle routing problems and motivates the development of new methods that can exploit the joint routing and scheduling structure. This paper proposes a branch-and-cut-and-price algorithm that designates the routing to integer programming using Dantzig–Wolfe decomposition and the scheduling to constraint programming using logic-based Benders decomposition. Experimental results indicate that this hybrid method solves 34% of the instances with 100 customers.
Article
The Multi-Agent Path Finding problem aims to find a set of collision-free paths that minimizes the total cost of all paths. The problem is extensively studied in artificial intelligence due to its relevance to robotics, video games and logistics applications, but is seldom considered in the mathematical optimization community. This paper tackles the problem using a branch-and-cut-and-price algorithm that incorporates a shortest path pricing problem for finding paths for every agent independently and thirteen classes of constraints for resolving different types of conflicts. Experimental results show that this mathematical approach solves 2402 of 4430 instances compared to 2039 and 1939 by the state-of-the-art solvers Lazy CBS and CBSH2-RTC published in artificial intelligence venues.
Article
The Boolean satisfiability (SAT) methods are one of the efficient approaches used to solve the problems of Boolean matching and the equivalence checking of digital circuits. In combination with classic routing algorithms and optimization techniques, these methods demonstrate better results than the classic routing algorithms in terms of speed of operation and quality of results. In this paper, the modern practice of using SAT methods in computer-aided design systems for VLSI is analyzed. Examples of modern SAT approaches to the problems of routing and the formal equivalence checking of digital circuits' descriptions within technological mapping as a part of the FPGA design flow are considered. An algorithm for the detailed routing of the FPGA switching blocks using the satisfiability problem is developed and presented. The results of its work are demonstrated on the example of the programmable logic block of the integrated circuit 5400TP094 made in Russia. The block has the island architecture, where the configurable logic blocks and switching blocks form a regularly repeated layout template. The properties of the chosen classic architecture allow us to extend the scope of the presented algorithm to the entire class of island-style FPGAs. The algorithm is tested on the benchmark projects ISCAS-85, ISCAS-89, and LGSynth-89. The comparison of the developed SAT-based algorithm with the well-known routing algorithm Pathfinder is presented by the criteria of elapsed time and the achieved degree of routed nets in the switching blocks. It is shown that the considered Boolean satisfiability methods for the routing problem are capable of proving a circuit's unroutability, unlike the Pathfinder algorithm, whose results can only implicitly indicate it. The paper demonstrates that the application of a more efficient SAT solver significantly accelerates the work of the suggested detailed routing algorithm.
Chapter
Boolean satisfiability (SAT) has played a key role in diverse areas spanning planning, inferencing, data mining, testing and optimization. Apart from the classical problem of checking Boolean satisfiability, generating random satisfying assignments has attracted significant theoretical and practical interests over the years. For practical applications, a large number of satisfying assignments for a given Boolean formula are needed, the generation of which turns out to be a hard problem in both theory and practice. In this work, we propose a novel approach to derive a large set of satisfying assignments from a given one in an efficient way. Our approach is orthogonal to the previous techniques for generating satisfying assignments and could be integrated into the existing SAT samplers. We implement our approach as an open-source tool ESampler and conduct extensive experiments on real-world benchmarks. Experimental results show that ESampler performs better than three state-of-the-art samplers on a large portion of the benchmarks, and is at least comparable on the others, showcasing the efficacy of our approach.
Article
The paper describes the use of dual-rail MaxSAT systems to solve Boolean satisfiability (SAT), namely to determine if a set of clauses is satisfiable. The MaxSAT problem is the problem of satisfying the maximum number of clauses in an instance of SAT. The dual-rail encoding adds extra variables for the complements of variables, and allows encoding an instance of SAT as a Horn MaxSAT problem. We discuss three implementations of dual-rail MaxSAT: core-guided systems, minimal hitting set (MaxHS) systems, and MaxSAT resolution inference systems. All three of these can be more efficient than resolution and thus than conflict-driven clause learning (CDCL). All three systems can give polynomial-size refutations of the pigeonhole principle, the doubled pigeonhole principle and the mutilated chessboard principle. The dual-rail MaxHS MaxSAT system can give polynomial-size proofs of the parity principle. However, dual-rail MaxSAT resolution requires exponential-size proofs for the parity principle; this is proved by showing that constant-depth Frege augmented with the pigeonhole principle can polynomially simulate dual-rail MaxSAT resolution. Consequently, dual-rail MaxSAT resolution does not simulate cutting planes. We further show that core-guided dual-rail MaxSAT and weighted dual-rail MaxSAT resolution polynomially simulate resolution. Finally, we report the results of experiments with core-guided dual-rail MaxSAT and MaxHS dual-rail MaxSAT showing strong performance by these systems.
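Concretely, the dual-rail encoding described above works as follows (sketched from the description; this is the standard construction in the dual-rail MaxSAT literature). For each variable x introduce p_x ("x is 1") and n_x ("x is 0"), add the hard clause (¬p_x ∨ ¬n_x), make p_x and n_x soft unit clauses, and rewrite every original clause by replacing x with ¬n_x and ¬x with ¬p_x, e.g.

    (x ∨ ¬y ∨ z)  ↦  (¬n_x ∨ ¬p_y ∨ ¬n_z)

The rewritten clauses contain only negative literals and are therefore Horn, and a CNF over m variables is satisfiable exactly when m of the 2m soft units can be satisfied alongside the hard clauses.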
Article
We define and evaluate a new preprocessing technique for propositional model counting. This technique leverages definability, i.e., the ability to determine that some gates are implied by the input formula Σ. Such gates can be exploited to simplify Σ without modifying its number of models. Unlike previous techniques based on gate detection and replacement, gates do not need to be made explicit in our approach. Our preprocessing technique thus consists of two phases: computing a bipartition 〈I,O〉 of the variables of Σ where the variables from O are defined in Σ in terms of I, then eliminating some variables of O in Σ. Our experiments show the computational benefits which can be achieved by taking advantage of our preprocessing technique for model counting.
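A minimal example of the definability being exploited (our illustration): if Σ contains the Tseitin clauses of a gate o ↔ (a ∧ b),

    (¬o ∨ a), (¬o ∨ b), (o ∨ ¬a ∨ ¬b)

then o is defined in Σ in terms of I = {a, b}: every assignment to the other variables extends to at most one value of o, so eliminating o leaves the model count unchanged.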
Article
We provide the first proof complexity results for QBF dependency calculi. By showing that the reflexive resolution path dependency scheme admits exponentially shorter Q-resolution proofs on a known family of instances, we answer a question first posed by Slivovsky and Szeider in 2014 [37]. Further, we conceive a method of QBF solving in which dependency recomputation is utilised as a form of inprocessing. Formalising this notion, we introduce a new version of Q-resolution in which a dependency scheme is applied dynamically. We demonstrate the further potential of this approach beyond that of the existing static system with an exponential separation. Last, we show that the same picture emerges in an analogous approach to the universal expansion paradigm.
Article
Original and learnt clauses in Conflict-Driven Clause Learning (CDCL) SAT solvers often contain redundant literals. This may have a negative impact on solver performance, because redundant literals may deteriorate both the effectiveness of Boolean constraint propagation and the quality of subsequent learnt clauses. To overcome this drawback, we propose a clause vivification approach that eliminates redundant literals by applying unit propagation. The proposed clause vivification is activated before the SAT solver triggers some selected restarts, and only affects a subset of original and learnt clauses, which are considered to be more relevant according to metrics like the literal block distance (LBD). Moreover, we conducted an empirical investigation with instances coming from the hard combinatorial and application categories of recent SAT competitions. The results show that a significant number of additional instances are solved when the proposed approach is incorporated into five of the best performing CDCL SAT solvers (Glucose, TC_Glucose, COMiniSatPS, MapleCOMSPS and MapleCOMSPS_LRB). More importantly, the empirical investigation includes an in-depth analysis of the effectiveness of clause vivification. It is worth mentioning that one of the SAT solvers described here was ranked first in the main track of SAT Competition 2017 thanks to the incorporation of the proposed clause vivification. That solver was further improved in this paper and won the bronze medal in the main track of SAT Competition 2018.
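The vivification step itself can be sketched compactly (our Python illustration, reusing the unit_propagate routine from the earlier sketch; this is only the basic idea, without the LBD-based filtering or restart scheduling described above):

    def vivify(clauses, clause):
        """Try to shorten `clause` by unit propagation (basic sketch).

        Asserts the negation of its literals one by one over the other
        clauses; if propagation hits a conflict, the literals asserted
        so far already form an implied, shorter clause.
        """
        others = [c for c in clauses if c != clause]
        assignment, kept = {}, []
        for lit in clause:
            kept.append(lit)
            assignment[abs(lit)] = lit < 0        # assume this literal false
            assignment = unit_propagate(others, assignment)
            if assignment is None:                # conflict reached
                return kept                       # implied subclause
        return clause

For example, with the other clauses [[1, 2], [-2, 3]], vivifying [1, 3, 4] asserts ¬1, propagates 2 and then 3, and the subsequent assertion of ¬3 conflicts, so the clause shrinks to (1 ∨ 3).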
Article
Full-text available
This paper introduces a new efficient satisfiability-problem (SAT) solver, the negative-literal Van der Waerden numbers SAT solver (NegVanSAT). It is a modification of the well-known SAT solver MINISAT in which the constructor of the literals has been adjusted to start with the negated literals first, reducing the calculations needed to solve a problem. NegVanSAT is specifically designed for solving the satisfiability problem of finding Van der Waerden numbers, which are known to be very difficult to compute. Comparisons between MINISAT and the proposed NegVanSAT show that the latter outperforms MINISAT in finding many of them.
Conference Paper
The SAT problem is one of the basic problems of artificial intelligence and computer science. The Maple solver is a solver that specializes in SAT problems. In order to improve its efficiency, a decision-level-reward-based branching heuristic is proposed. Firstly, this paper introduces the solver's major framework and two excellent branching heuristics: the Variable State Independent Decaying Sum (VSIDS) decision heuristic and the Learning Rate Based (LRB) branching heuristic. Then, a new method named DLR is proposed, building on LRB by taking the decision-level rate into account. Finally, experimental results on different sets of instances indicate that the Maple solver with the DLR strategy outperforms the original version with the LRB strategy by reducing the number of conflicts and decisions.
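For context, the VSIDS scheme that LRB and DLR compete with can be sketched in a few lines of Python (textbook behavior, our illustration; real solvers implement the decay via a growing bump increment, the EVSIDS variant):

    class VSIDS:
        """Textbook VSIDS: bump activities on conflict, then decay."""

        def __init__(self, num_vars, decay=0.95):
            self.activity = {v: 0.0 for v in range(1, num_vars + 1)}
            self.decay = decay

        def on_conflict(self, conflict_vars):
            for v in conflict_vars:          # reward variables in the conflict
                self.activity[v] += 1.0
            for v in self.activity:          # let older activity fade
                self.activity[v] *= self.decay

        def pick_branch_var(self, assigned):
            free = [v for v in self.activity if v not in assigned]
            return max(free, key=self.activity.get) if free else None

Each conflict makes recently conflicting variables more attractive to branch on, while the decay lets stale activity fade.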
Article
We consider feasibility of linear integer problems in the context of verification systems such as SMT solvers or theorem provers. Although satisfiability of linear integer problems is decidable, many state-of-the-art implementations neglect termination in favor of efficiency. We present the calculus CutSat++ that is sound, terminating, complete, and leaves enough space for model assumptions and simplification rules in order to be efficient in practice. CutSat++ combines model-driven reasoning and quantifier elimination to the feasibility of linear integer problems.