Article

Graph-based algorithms for Boolean function manipulation

Authors: Randal E. Bryant

Abstract

A data structure is presented for representing Boolean functions and an associated set of manipulation algorithms. Functions are represented by directed, acyclic graphs in a manner similar to the representations introduced by C. Y. Lee (1959) and S. B. Akers (1978), but with further restrictions on the ordering of decision variables in the graph. Although, in the worst case, a function requires a graph where the number of vertices grows exponentially with the number of arguments, many of the functions encountered in typical applications have a more reasonable representation. The algorithms have time complexity proportional to the sizes of the graphs being operated on, and hence are quite efficient as long as the graphs do not grow too large. Experimental results are presented from applying these algorithms to problems in logic design verification that demonstrate the practicality of the approach.


... In this article, we provide a comprehensive overview and describe the key concepts of symbolic search for classical planning with the three expressive extensions (see Figure 1). More specifically, we describe theoretically and analyze empirically how symbolic search can support expressive model extensions with different symbolic data structures such as Binary Decision Diagrams (Bryant, 1986) or Edge-Valued Multi-Valued Decision Diagrams (Ciardo & Siminiceanu, 2002) in a unified framework. Based on this, we show that it is possible to support all model extensions simultaneously, resulting in optimal planning algorithms that support conditional effects, derived predicates with axioms, and state-dependent action costs. ...
... A decision diagram is called reduced if isomorphic subgraphs are merged and any node is eliminated whose two children are identical. For fixed variable orders, reduced and ordered decision diagrams are unique (Bryant, 1986; Bahar et al., 1997; Lai et al., 1996). Note that for EVBDDs and EVMDDs the corresponding edge values must be taken into account. ...
... The size |D| of a decision diagram D is the number of nodes in D. The size of a decision diagram strongly depends on the variable order, so that a good order can lead to an exponentially more compact decision diagram. For some functions the size of the corresponding decision diagram is exponential, independent of the underlying variable order (Bryant, 1986). ...
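As a concrete illustration of the two reduction rules described in the excerpts above, here is a minimal Python sketch (ours, not taken from any cited paper) of a hash-consing node constructor that keeps a diagram reduced by construction:

```python
# Illustrative sketch of the two ROBDD reduction rules:
# (1) drop a node whose two children are identical, and
# (2) share isomorphic subgraphs via a unique table (hash-consing).

class DD:
    def __init__(self):
        self.unique = {}              # (var, low, high) -> canonical node

    def node(self, var, low, high):
        if low == high:               # rule 1: redundant test, skip the node
            return low
        # rule 2: reuse an existing node if an isomorphic one was built
        return self.unique.setdefault((var, low, high), (var, low, high))

dd = DD()
a = dd.node('x', dd.node('y', 0, 1), dd.node('y', 0, 1))
b = dd.node('y', 0, 1)
assert a is b   # the 'x' node collapsed (rule 1) and 'y' is shared (rule 2)
```

Because every node is built through this constructor, two equivalent functions end up as the same object, which is what makes reduced ordered diagrams canonical for a fixed variable order.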
Article
In classical planning, the task is to derive a sequence of deterministic actions that changes the current fully-observable world state into one that satisfies a set of goal criteria. Algorithms for classical planning are domain-independent, i.e., they are not limited to a particular application and instead can be used to solve different types of reasoning problems. The main language for modeling such problems is the Planning Domain Definition Language (PDDL). Even though it provides many language features for expressing a wide range of planning tasks, most of today’s classical planners, especially optimal ones, support only a small subset of its features. The most widely supported fragment is lifted STRIPS plus types and action costs. While this fragment suffices to model some interesting planning tasks, using it to model more realistic problems often incurs a much higher modeling effort. Even if modeling is possible at all, solving the resulting tasks is often infeasible in practice, as the required encoding size increases exponentially. To address these issues, we show how to support more expressive modeling languages natively in optimal classical planning algorithms. Specifically, we focus on symbolic search, a state-of-the-art search algorithm that operates on sets of world states. We show how to extend symbolic search to support classical planning with conditional effects, axioms, and state-dependent action costs. All of these modeling features are expressive in the sense that compiling them away incurs a significant blow-up, so it is often necessary to support them natively. Except for blind (non-symbolic) search, our new symbolic search is the first optimal classical planning algorithm that supports these three modeling extensions in combination, and it even compares favorably to other state-of-the-art approaches that only support a subset of the extensions.
... Many search procedures for SAT-related problems (e.g. Analytic Tableaux [38], DPLL [14], circuit AllSAT [17]) and many formula compilers (e.g., d-DNNFs [11,13], OBDDs [4] and SDDs [12]) base their efficiency and effectiveness on the detection of partial truth assignments µ satisfying an input propositional formula φ, which allows one to state not only that φ is satisfiable, but also that all total assignments extending µ satisfy φ. In particular, when it comes to SAT-based problems requiring the complete enumeration of satisfying assignments (e.g. ...
... A few notable exceptions are the Dualiza procedure [27] and the procedures we described in [28,30,16]; also OBDD [4] and SDD [12] formula compilers implicitly use entailment to prune branches so as to guarantee canonicity (see below and §4). ...
... OBDDs [4] and SDDs [12] are subcases of d-DNNFs [13] which are canonical under some order condition; i.e., two equivalent subformulas φ1 and φ2 are encoded into the same OBDD or SDD, and as such are shared inside the DAG representation. The OBDD and SDD compilers typically build the encoding bottom-up, and they are able to encode φ|µ into ⊤ as soon as φ|µ becomes valid. ...
Preprint
Many procedures for SAT-related problems, in particular those requiring the complete enumeration of satisfying truth assignments, base their efficiency and effectiveness on the detection of (possibly small) partial assignments satisfying an input formula. Surprisingly, there seems to be no unique, universally agreed definition of formula satisfaction by a partial assignment in the literature. In this paper we analyze in depth the issue of satisfaction by partial assignments, raising a flag about some ambiguities and subtleties of this concept, and investigating their practical consequences. We identify two alternative notions that are implicitly used in the literature, namely verification and entailment, which coincide if applied to CNF formulas but differ and present complementary properties if applied to non-CNF or to existentially quantified formulas. We show that, although the former is easier to check and as such is implicitly used by most current search procedures, the latter has better theoretical properties, and can improve the efficiency and effectiveness of enumeration procedures.
... Later algorithmic advancements brought about the ability to perform circuit simulation much more efficiently in practical cases. One such advance was the development of a data structure called the Reduced Ordered Binary Decision Diagram (ROBDD) [5], which can greatly compress the Boolean description of digital circuits and allow direct manipulation of the compressed form. Software simulation may also play a vital role in the development of quantum hardware by enabling the modeling and analysis of large-scale designs that cannot be implemented physically with current technology. ...
... The goal of the work reported here is to develop a practical software means of simulating quantum computers efficiently on classical computers. We propose a new data structure called the Quantum Information Decision Diagram (QuIDD) which is based on decision diagram concepts that are well-known in the context of simulating classical computer hardware [6,2,5]. As we demonstrate, QuIDDs allow simulations of n-qubit systems to achieve run-time and memory complexities that range from O(1) to O(2^n), and the worst case is not typical. ...
... Moreover, exponential memory and runtime are required in many practical cases, making this data structure impractical for simulation of large logic circuits. To address this limitation, Bryant developed the Reduced Ordered BDD (ROBDD) [5], where all variables are ordered, and decisions are made in that order. A key advantage of the ROBDD is that variable-ordering facilitates an efficient implementation of reduction rules that automatically eliminate redundancy from the basic BDD representation and may be summarized as follows: ...
Preprint
Simulating quantum computation on a classical computer is a difficult problem. The matrices representing quantum gates, and the vectors modeling qubit states grow exponentially with an increase in the number of qubits. However, by using a novel data structure called the Quantum Information Decision Diagram (QuIDD) that exploits the structure of quantum operators, a useful subset of operator matrices and state vectors can be represented in a form that grows polynomially with the number of qubits. This subset contains, but is not limited to, any equal superposition of n qubits, any computational basis state, n-qubit Pauli matrices, and n-qubit Hadamard matrices. It does not, however, contain the discrete Fourier transform (employed in Shor's algorithm) and some oracles used in Grover's algorithm. We first introduce and motivate decision diagrams and QuIDDs. We then analyze the runtime and memory complexity of QuIDD operations. Finally, we empirically validate QuIDD-based simulation by means of a general-purpose quantum computing simulator QuIDDPro implemented in C++. We simulate various instances of Grover's algorithm with QuIDDPro, and the results demonstrate that QuIDDs asymptotically outperform all other known simulation techniques. Our simulations also show that well-known worst-case instances of classical searching can be circumvented in many specific cases by data compression techniques.
... To construct BDDs for the elements of a transition mapping →_T, an encoding has to be chosen to represent elements (q_k, u_k, q'_k) ∈ Q × U × Q. To construct singleton BDDs according to (6), one needs to separate the logical variables for the different parts of the elements, while also separating those used for the initial and final transition states q_k and q'_k. Therefore 2n + m variables are defined: z_{q,1}, . . . ...
... • Conjunction/disjunction of two BDDs B1 and B2 requires O(|B1||B2|) time, producing a BDD with the same size bound [6]. ...
... The BDD data structure is based on a reduced binary tree whose size, i.e. the number of nodes, varies not only with the number of elements it represents but also with the encodings and the evaluation order defined for the variables. Choices of evaluation order and encodings are therefore vital when using BDDs and deserve careful consideration, as the time and memory used by the logical operations depend on the size of the BDD structures involved [6]. ...
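The O(|B1||B2|) bound quoted above comes from Bryant's Apply recursion, which memoizes on pairs of nodes so that each pair is expanded at most once. A self-contained Python sketch (the node encoding is ours, for illustration only):

```python
# Apply for two BDDs sharing one variable order. Terminals are Python
# booleans; internal nodes are hash-consed tuples (var, low, high) whose
# integer variable labels respect the global order.
UNIQUE = {}

def make_node(var, low, high):
    if low == high:                       # reduction: drop redundant tests
        return low
    return UNIQUE.setdefault((var, low, high), (var, low, high))

def apply_op(op, u, v, memo=None):
    # Memoizing on (u, v) pairs bounds the work by O(|B1||B2|).
    memo = {} if memo is None else memo
    if isinstance(u, bool) and isinstance(v, bool):
        return op(u, v)                   # both terminals: apply the operator
    if (u, v) in memo:
        return memo[(u, v)]
    uvar = None if isinstance(u, bool) else u[0]
    vvar = None if isinstance(v, bool) else v[0]
    var = min(x for x in (uvar, vvar) if x is not None)
    u0, u1 = (u[1], u[2]) if uvar == var else (u, u)
    v0, v1 = (v[1], v[2]) if vvar == var else (v, v)
    res = make_node(var, apply_op(op, u0, v0, memo),
                         apply_op(op, u1, v1, memo))
    memo[(u, v)] = res
    return res

x1, x2 = make_node(1, False, True), make_node(2, False, True)
conj = apply_op(lambda a, b: a and b, x1, x2)    # BDD for x1 AND x2
disj = apply_op(lambda a, b: a or b, x1, x2)     # BDD for x1 OR x2
```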
Preprint
This paper presents a control synthesis algorithm for dynamical systems to satisfy specifications given in a fragment of linear temporal logic. It is based on an abstraction-refinement scheme with nonuniform partitions of the state space. A novel encoding of the resulting transition system is proposed that uses binary decision diagrams for efficiency. We discuss several factors affecting scalability and present some benchmark results demonstrating the effectiveness of the new encodings. These ideas are also being implemented on a publicly available prototype tool, ARCS, that we briefly introduce in the paper.
... The QuIDD is a variant of the reduced ordered binary decision diagram (ROBDD or BDD) data structure [7] applied to quantum circuit simulation [24,23]. Like other DD variants, it has all of the key properties of BDDs as well as a few other application-specific attributes (see Figure 2 for examples). ...
... The algorithms which manipulate DDs are just as important as the properties of the DDs. In particular, the Apply algorithm (see Figure 3) performs recursive traversals on DD operands to build new DDs using any desired unary or binary function [7]. Although originally intended for digital logic operations, Apply has been extended to linear-algebraic operations such as matrix addition and multiplication [2,8], as well as quantum-mechanical operations such as measurement and partial trace [24,23]. ...
... Although originally intended for digital logic operations, Apply has been extended to linear-algebraic operations such as matrix addition and multiplication [2,8], as well as quantum-mechanical operations such as measurement and partial trace [24,23]. The runtime and memory complexity of Apply is O(|A||B|), where |A| and |B| are the sizes in number of internal and terminal nodes of the DDs A and B, respectively [7]. Thus, the complexity of DD-based algorithms is tied to the compression achieved by the data structure. These complexity bounds are important for analyzing many of the algorithms presented in this work. ...
Preprint
Quantum computing promises exponential speed-ups for important simulation and optimization problems. It also poses new CAD problems that are similar to, but more challenging than, the related problems in classical (non-quantum) CAD, such as determining if two states or circuits are functionally equivalent. While differences in classical states are easy to detect, quantum states, which are represented by complex-valued vectors, exhibit subtle differences leading to several notions of equivalence. This provides flexibility in optimizing quantum circuits, but leads to difficult new equivalence-checking issues for simulation and synthesis. We identify several different equivalence-checking problems and present algorithms for practical benchmarks, including quantum communication and search circuits, which are shown to be very fast and robust for hundreds of qubits.
... Compressed SDDs contain OBDDs, and are regarded as a natural SDD class because of their canonicity: two compressed SDDs computing the same function are syntactically equal up to syntactic manipulations preserving polynomial size [7]. The restriction to compressed SDDs makes our result stronger, because general SDDs are believed (though not known) to be exponentially more succinct than compressed SDDs [2]. ...
... We separate compressed SDDs and OBDDs by a function, which we call the generalized hidden weighted bit function because, indeed, it contains the hidden weighted bit function (HWB) as a subfunction. HWB is perhaps the simplest function known to be hard on OBDDs [3]: it computes the subsets of {1, . . . , n} having size i and containing the number i, for i = 1, . . . ...
... It is well known that the hidden weighted bit function has exponential OBDD size [3]. ...
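The hidden weighted bit function mentioned in these excerpts is trivial to evaluate directly even though every OBDD for it is exponentially large; a small Python sketch of the standard definition:

```python
# HWB(x1..xn) = x_k, where k is the number of ones in the input
# (and HWB = 0 when k = 0). Easy to compute, yet exponential for OBDDs
# under every variable order.
def hwb(bits):
    k = sum(bits)
    return 0 if k == 0 else bits[k - 1]   # bits[k-1] is x_k (1-indexed)

assert hwb([0, 0, 0]) == 0
assert hwb([1, 0, 1]) == 0   # weight 2, so HWB = x2 = 0
assert hwb([1, 1, 0]) == 1   # weight 2, so HWB = x2 = 1
```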
Preprint
Introduced by Darwiche (2011), sentential decision diagrams (SDDs) are essentially as tractable as ordered binary decision diagrams (OBDDs), but tend to be more succinct in practice. This makes SDDs a prominent representation language, with many applications in artificial intelligence and knowledge compilation. We prove that SDDs are more succinct than OBDDs also in theory, by constructing a family of boolean functions where each member has polynomial SDD size but exponential OBDD size. This exponential separation improves a quasipolynomial separation recently established by Razgon (2013), and settles an open problem in knowledge compilation.
... In his seminal paper Bryant showed that ordered binary decision diagrams, or OBDDs for short, are well suited as data structure for Boolean functions [11]. Since some important functions have exponential OBDD size, many variants and extensions have been considered (for an extensive discussion see, e.g., the monograph of Wegener [32]). ...
... FBDDs (with some restrictions) and k-OBDDs, where k does not depend on the number of Boolean variables the represented function is defined on, allow polynomial time algorithms for important operations. OBDDs introduced by Bryant [11] are restricted FBDDs and restricted k-OBDDs. ...
... , G'_{ℓ(r)} have a common 1-input. Since all these OBDDs respect the same variable ordering, Bryant's apply algorithm [11] can be used to obtain an OBDD of size O(|G|^k) for the conjunction of the functions represented by G'_{ℓ(1)}, . . . , G'_{ℓ(r)} in time O(|G|^k). ...
Preprint
Sentential decision diagrams (SDDs) introduced by Darwiche in 2011 are a promising representation type used in knowledge compilation. The relative succinctness of representation types is an important subject in this area. The aim of the paper is to identify which kind of Boolean functions can be represented by SDDs of small size with respect to the number of variables the functions are defined on. For this reason the sets of Boolean functions representable by different representation types in polynomial size are investigated and SDDs are compared with representation types from the classical knowledge compilation map of Darwiche and Marquis. Ordered binary decision diagrams (OBDDs) which are a popular data structure for Boolean functions are one of these representation types. SDDs are more general than OBDDs by definition but only recently, a Boolean function was presented with polynomial SDD size but exponential OBDD size. This result is strengthened in several ways. The main result is a quasipolynomial simulation of SDDs by equivalent unambiguous nondeterministic OBDDs, a nondeterministic variant where there exists exactly one accepting computation for each satisfying input. As a side effect an open problem about the relative succinctness between SDDs and free binary decision diagrams (FBDDs) which are more general than OBDDs is answered.
... Reduced Ordered Binary Decision Diagrams (ROBDDs), and operations for manipulating them were originally developed by Bryant [1] to handle large Boolean functions efficiently. A BDD is a directed acyclic graph (DAG) with up to two outgoing edges per node, labeled "then" and "else". ...
... This property is critical for computing the dot-products required in matrix multiplication, where terminal values are multiplied (♭) to produce products that are then added (♯) to create the new terminal values of the resulting matrix. The matrix multiplication algorithm itself is a recursive procedure similar to the Apply function [1], but tailored to implement the dot-product. Another important issue in matrix multiplication is compression. ...
... The pseudo-code for the whole algorithm is presented in Figure 4. It has worst-case time and space complexity O(2^{2n}), but can be performed in O(1) or O(n) time and space complexity depending on how much block regularity can be exploited in the operands. As noted earlier, such compression is almost always achieved in the quantum domain. ...
Preprint
While thousands of experimental physicists and chemists are currently trying to build scalable quantum computers, it appears that simulation of quantum computation will be at least as critical as circuit simulation in classical VLSI design. However, since the work of Richard Feynman in the early 1980s little progress was made in practical quantum simulation. Most researchers focused on polynomial-time simulation of restricted types of quantum circuits that fall short of the full power of quantum computation. Simulating quantum computing devices and useful quantum algorithms on classical hardware now requires excessive computational resources, making many important simulation tasks infeasible. In this work we propose a new technique for gate-level simulation of quantum circuits which greatly reduces the difficulty and cost of such simulations. The proposed technique is implemented in a simulation tool called the Quantum Information Decision Diagram (QuIDD) and evaluated by simulating Grover's quantum search algorithm. The back-end of our package, QuIDD Pro, is based on Binary Decision Diagrams, well-known for their ability to efficiently represent many seemingly intractable combinatorial structures. This reliance on a well-established area of research allows us to take advantage of existing software for BDD manipulation and achieve unparalleled empirical results for quantum simulation.
... In contrast, reduced ordered binary decision diagrams (BDDs) [16] are very popular to represent large objects compactly, exploiting bitwise representation with automatic reduction mechanisms. However, they have barely been used as policy representations of controllers, due to several advantages of DTs over the BDD representations common in classical controller design. ...
... Binary decision diagrams (BDDs) [16] concisely represent Boolean functions and are widely used in symbolic verification [44] and circuit design [46]. In the context of controllers, they have been used to represent planning strategies [21], hybrid system controllers [40,56], to compress numerical controllers [26], as well as for small representations of controllers through determinization [63]. ...
Preprint
Full-text available
Safety-critical controllers of complex systems are hard to construct manually. Automated approaches such as controller synthesis or learning provide a tempting alternative but usually lack explainability. To this end, learned decision trees (DTs) have prevalently been used to obtain an interpretable model of the generated controllers. However, DTs do not exploit shared decision-making, a key concept exploited in binary decision diagrams (BDDs) to reduce their size and thus improve explainability. In this work, we introduce predicate decision diagrams (PDDs) that extend BDDs with predicates and thus unite the advantages of DTs and BDDs for controller representation. We establish a synthesis pipeline for efficient construction of PDDs from DTs representing controllers, exploiting reduction techniques for BDDs also for PDDs.
... While these approaches handle the transformations, they lack scalability for large-scale networks such as DCNs. The main reason is that they rely on Binary Decision Diagrams (BDDs) [7] to compute and manage the header spaces of Equivalence Classes (ECs). BDD treats packet headers just as sequences of bits and symbolically stores them. ...
Article
In current large-scale networks (e.g., datacenter networks), packet forwarding is dynamically customized beyond traditional shortest-path routing to meet various application demands. Such forwarding behavior is tremendously complex to manage and sometimes causes serious network failures. We present Graft, a new realtime data plane verification framework to verify complex forwarding behavior on large-scale networks. For scalable realtime verification, we first propose an optimized algorithm to efficiently compute and manage large packet header spaces and their forwarding paths. Second, we propose a data plane model and algorithms with formal network semantics to precisely model the customized forwarding behavior. We validate its effectiveness using synthetic and production datacenter networks. To the best of our knowledge, we are the first to verify customized forwarding behavior in production large-scale networks. For scalability, we show that Graft is 100x faster than prior works in the synthetic networks and 20000x faster in the production network. For expressiveness, we demonstrate that Graft is expressive enough to model the customized forwarding behavior by verifying the correctness of SRv6-based SFCs in the production network. Finally, we demonstrate that Graft verifies a real failure of a distributed NAT system in the production network.
... The combination of the above four stages is referred to as a round in AES encryption and decryption. Different key sizes require different numbers of rounds, and the relation between key sizes and rounds of AES operation is given in Table 1 [8,9]. ...
... Isomorphic and inverse isomorphic operations are performed the same as in the conventional CFA structure, as given in Eq. (8). Two different multiplication operations are performed with the output of the multiplicative inverse block preceding the inverse isomorphic mapping and affine transformation. These three operations are very similar to the operations performed in the standard Composite Field Arithmetic architecture. ...
Article
Full-text available
The major objective of this paper is to implement the Advanced Encryption Standard (AES) algorithm efficiently in terms of speed, area, and power consumption. Of the four stages of AES operation, the SubByte stage consumes more area, time, and power than the other stages. To design an efficient AES, it is necessary to design an efficient SubByte stage for the AES encryption and decryption process. In this research, a modified version of the SubByte stage is proposed and the results are compared with the SubByte stage proposed by Rijndael. The propagation delay, area, and power consumption of the proposed architecture are calculated in this research article. The proposed AES structure is implemented in the FPGAs Virtex 6 and Spartan 6. Implementation results show the improvement in speed, area, and power consumption of the proposed architecture over the conventional architecture.
... Then it is possible to remove one of these gates, and associate with the remaining gate the children of the removed one. In graphs, this operation is similar to the procedure used to glue the vertices when constructing a Reduced Ordered Binary Decision Diagram (ROBDD) [21] and can be performed effectively thanks to the use of hash tables. If the circuits S_f and S_f′ do not differ much, then when transitioning to LEC for these circuits in SAT form, it is reasonable to glue together all the gates for which this is possible in the sense described above. ...
... In the 1980s and 1990s, formal circuit verification in EDA usually employed methods based on Binary Decision Diagrams (BDDs) [21], [59]. Since the 2000s, however, BDD-based algorithms in hardware and software verification have been actively displaced by complete SAT solvers [60]. ...
Article
Full-text available
Many industrial verification problems are solved via reduction to CircuitSAT. It is often the case that the resulting SAT instances are very hard and require the use of parallel computing to be solved in reasonable time. The particularly relevant problem in this context is how best to plan the use of the computing resources, because SAT solvers’ runtime is well known to be hard to predict. In the present paper we propose two methods that employ knowledge about a circuit’s structure to partition a CircuitSAT instance into a specific number of simpler subproblems. A distinctive feature of the proposed partitioning methods is that they make it possible to estimate the hardness (e.g. the total runtime of a SAT solver on all subproblems) of a partitioning via the Monte Carlo method. In the experimental evaluation we apply these methods to hard CircuitSAT instances and compare their performance with the well-known Cube and Conquer approach. The proposed partitioning methods not only often outperform Cube and Conquer, but also show remarkably small variance in the runtime of a SAT solver on subproblems from a partitioning, thus making it possible to construct accurate estimations of the time required to process all subproblems using random samples of small size. As a consequence, we obtain an efficient stochastic estimation procedure which provides an additional opportunity to employ hyperparameter tuning methods to further increase SAT solver performance on (partitioned) hard SAT instances. We demonstrate the effectiveness of the proposed constructions by applying them to some problems associated with CircuitSAT, in particular Logical Equivalence Checking benchmarks, Automated Test Pattern Generation benchmarks, and the inversion problems of some cryptographic functions.
... Some commonly used data structures are the so-called binary decision diagrams (BDDs), the reduced ordered binary decision diagrams (ROBDDs) [15], and their variants, such as the zero-suppressed BDD (ZDD) [16]. The idea can be summarized as follows. ...
... The idea can be summarized as follows. Let S be some finite set and F a family of subsets of S. Then F can be represented by a Boolean function f : {0, 1}^{|S|} → {0, 1} whose evaluation indicates whether some given subset A ⊆ S belongs to F or not, i.e. f(A) = 1 if and only if A ∈ F. BDDs and ZDDs have been extensively studied: Donald Knuth gave two celebrated lectures about the subject [17,18], in which he mentioned that Bryant's paper [15] was the most-cited in the field for several years. BDDs and ZDDs have many applications, ranging from genetics [19] to data mining [20] and game theory [21]. ...
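A tiny Python sketch of the encoding just described, with example names of our own choosing: a family F of subsets of S becomes a characteristic function f on |S| Boolean variables, which is what a BDD or ZDD would then store compactly:

```python
# Encode a family F of subsets of S as a Boolean function on |S| variables:
# bit i of the input says whether the i-th element of S is in the subset.
from itertools import product

S = ["a", "b", "c"]
F = [{"a"}, {"a", "c"}]                  # the family to encode

def f(assignment):
    subset = {s for s, bit in zip(S, assignment) if bit}
    return subset in F                   # f(A) = 1 iff A is a member of F

members = [bits for bits in product([0, 1], repeat=len(S)) if f(bits)]
print(members)                           # [(1, 0, 0), (1, 0, 1)]
```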
Article
Full-text available
The problem of finding a minimum feedback vertex set (MFVS) in a directed graph has been known to be NP-hard for around 40 years: it is one of the problems listed in Karp’s famous 1972 paper. Several strategies to solve the MFVS problem, both exact and approximate, have been proposed. In particular, in 2000, Lin and Jou presented an exact algorithm based on eight graph contraction operators whose complexity is polynomial for a particular class of graphs called DOME-contractible graphs. This paper makes two contributions. First, we introduce a data structure called the union-cat tree that provides, in some cases, a compact representation of a family of constant-size subsets of a given finite set. Second, we extend Lin and Jou’s algorithm to compute the set of all MFVSs of any directed graph.
... More precisely, an n-bit Boolean function can be computed by using folded view with O(n) rounds of classical communication of O(n^6 log n) bits, while the algorithm in [30] computes the function in O(n^2) rounds with the same amount of classical communication (their second algorithm can compute a symmetric Boolean function with lower communication complexity, i.e., O(n^5 (log n)^2), and O(n^3 log n) rounds, but it requires that every party knows the topology of the network). From a technical viewpoint, folded view is a generalization of Ordered Binary Decision Diagrams (OBDD) [15], which are used in commercial VLSI CAD systems as data structures to process Boolean functions; folded view would be interesting in its own right. ...
... Note that the size of T^h_X(v) is exponential in h, which results in exponential time/communication complexity in n when we construct it if h = 2(n − 1). To reduce the time/communication complexity to something bounded by a polynomial, we create the new technique called folded view by generalizing Ordered Binary Decision Diagrams (OBDD) [15]. A folded view (f-view) of depth h is a vertex- and edge-labeled directed acyclic multigraph obtained by merging nodes at the same level in T^h_X(v) into one node if the subtrees rooted at them are isomorphic. ...
Preprint
This paper gives the first separation of quantum and classical pure (i.e., non-cryptographic) computing abilities with no restriction on the amount of available computing resources, by considering the exact solvability of a celebrated unsolvable problem in classical distributed computing, the "leader election problem" on anonymous networks. The goal of the leader election problem is to elect a unique leader from among distributed parties. The paper considers this problem for anonymous networks, in which each party has the same identifier. It is well-known that no classical algorithm can solve exactly (i.e., in bounded time without error) the leader election problem in anonymous networks, even if it is given the number of parties. This paper gives two quantum algorithms that, given the number of parties, can exactly solve the problem for any network topology in polynomial rounds and polynomial communication/time complexity with respect to the number of parties, when the parties are connected by quantum communication links.
... There are numerous tools implementing or incorporating control synthesis, such as PESSOA, SCOTS, CoSyMA, LTLMoP, and TuLiP; see [18], [24], [19], [7], and [32], respectively. Internally, they either use an explicit control law representation in table form or employ Reduced Ordered Binary Decision Diagrams, introduced by [4] and called RO-BDDs or simply BDDs, in an attempt to optimise the memory needed to store the synthesised control law. RO-BDDs are canonical, efficiently manipulable, and in many cases allow for compact data representation. ...
... Binary Decision Diagrams (BDDs), represented with rooted directed acyclic graphs, were introduced by [4] as a compact representation for Boolean functions ...
Preprint
Controller synthesis techniques based on symbolic abstractions appeal by producing correct-by-design controllers, under intricate behavioural constraints. Yet, being relations between abstract states and inputs, such controllers are immense in size, which makes them futile for embedded platforms. Control-synthesis tools such as PESSOA, SCOTS, and CoSyMA tackle the problem by storing controllers as binary decision diagrams (BDDs). However, due to redundantly keeping multiple inputs per state, the resulting controllers are still too large. In this work, we first show that choosing an optimal controller determinization is an NP-complete problem. Further, we consider the previously known controller determinization technique and discuss its weaknesses. We suggest several new approaches to the problem, based on greedy algorithms, symbolic regression, and (multi-terminal) BDDs. Finally, we empirically compare the techniques and show that some of the new algorithms can produce up to 85% smaller controllers than those obtained with the previous technique.
... A binary-decision-diagram (BDD) [36] is a succinct representation of a set of Boolean evaluations and, motivated by this, we examine the possibility of applying symbolic model checking via BDDs to verify consistency and stability. This way, we avoid the combinatorial explosion of evaluations. ...
... For instance the CUDD library [36] can be used to manipulate BDDs in MC-MAS. The first step can be implemented using the function "Cudd Xeqy" in CUDD, which constructs a BDD for the function x = y for two sets of BDD variables x and y. ...
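The Cudd_Xeqy function mentioned above builds a BDD for the function x = y over two vectors of Boolean variables. The same construction is just a conjunction of per-bit equivalences; here is a hedged sketch using the third-party Python dd package (our choice of library, since CUDD itself is a C library):

```python
# Build a BDD for x = y over two n-bit variable vectors as the conjunction
# of the per-bit equivalences (x_i <-> y_i), mirroring what Cudd_Xeqy
# computes. Requires the third-party `dd` package (pip install dd).
from dd.autoref import BDD

n = 3
bdd = BDD()
bdd.declare(*(f"x{i}" for i in range(n)), *(f"y{i}" for i in range(n)))

eq = bdd.true
for i in range(n):
    xi, yi = bdd.var(f"x{i}"), bdd.var(f"y{i}")
    eq &= (xi & yi) | (~xi & ~yi)        # x_i <-> y_i

# sanity check on x = y = 5 (binary 101): the BDD evaluates to true
assign = {f"x{i}": bool((5 >> i) & 1) for i in range(n)}
assign.update({f"y{i}": bool((5 >> i) & 1) for i in range(n)})
assert bdd.let(assign, eq) == bdd.true
```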
Preprint
Most autonomous robotic agents use logic inference to keep themselves to safe and permitted behaviour. Given a set of rules, it is important that the robot is able to establish the consistency between its rules, its perception-based beliefs, its planned actions and their consequences. This paper investigates how a robotic agent can use model checking to examine the consistency of its rules, beliefs and actions. A rule set is modelled by a Boolean evolution system with synchronous semantics, which can be translated into a labelled transition system (LTS). It is proven that stability and consistency can be formulated as computation tree logic (CTL) and linear temporal logic (LTL) properties. Two new algorithms are presented to perform realtime consistency and stability checks respectively. Their implementation provides us a computational tool, which can form the basis of efficient consistency checks on-board robots.
... A binary decision diagram (BDD for short) is a graphical representation of Boolean functions in a compressed form [36] [1] [12]. We follow the notation and terminology in Knuth's book [35]. ...
... [pseudocode excerpt from the solver's listing: if all variables are assigned values, report ν; if dl ≤ 0, halt; otherwise compute a blocking clause C from ν.] Once a solution is found, the solver is forced to restart from scratch with an extended CNF formula, not resuming the search. ...
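The blocking-clause loop sketched in this excerpt can be reproduced with an off-the-shelf incremental SAT solver; a hedged example using the third-party PySAT library (our choice, not the survey's implementation):

```python
# AllSAT by blocking clauses: after each model, add a clause that forbids
# exactly that total assignment, then ask the solver again on the extended
# CNF. (An incremental solver resumes its search rather than restarting
# from scratch, which is the weakness the excerpt points out.)
# Requires PySAT: pip install python-sat
from pysat.solvers import Glucose3

clauses = [[1, 2], [-1, -2]]             # example CNF: x1 XOR x2

with Glucose3(bootstrap_with=clauses) as solver:
    while solver.solve():
        model = solver.get_model()       # a total assignment, e.g. [1, -2]
        print(model)
        solver.add_clause([-lit for lit in model])   # block this model
```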
Preprint
All solutions SAT (AllSAT for short) is a variant of the propositional satisfiability problem. Despite its significance, AllSAT has been relatively unexplored compared to other variants. We thus survey and discuss major techniques of AllSAT solvers. We faithfully implement them and conduct comprehensive experiments using a large number of instances and various types of solvers, including one of the few publicly available ones. The experiments reveal the solvers' characteristics. Our implemented solvers are made publicly available so that other researchers can easily develop their own solver by modifying our code and compare it with existing methods.
... Minato's algorithm [30] is a recursive way to rewrite a Boolean formula as a prime-irredundant cover, which is very compact in general. The algorithm works recursively using formulas in three-valued logic, and Minato [30, Section 4.4] suggests an implementation of this algorithm using Binary Decision Diagrams [8] where a three-valued formula is simply bounded using two Boolean functions (f_low, f_high), and the algorithm generates an irredundant sum-of-products f′ such that f_low ⇒ f′ ⇒ f_high. In other words, f′ is generated as a disjunction of conjunctions of literals, such that no conjunct is unnecessary, and no literal can be removed from any conjunct. ...
Preprint
We consider the problem of the verification of an LTL specification φ on a system S given some prior knowledge K, an LTL formula that S is known to satisfy. The automata-theoretic approach to LTL model checking is implemented as an emptiness check of the product S ⊗ A_{¬φ} where A_{¬φ} is an automaton for the negation of the property. We propose new operations that simplify an automaton A_{¬φ} given some knowledge automaton A_K, to produce an automaton B that can be used instead of A_{¬φ} for more efficient model checking. Our evaluation of these operations on a large benchmark derived from the MCC'22 competition shows that even with simple knowledge, half of the problems can be definitely answered without running an LTL model checker, and the remaining problems can be simplified significantly.
... The logical formula in propositional logic can be converted into a representation by pre-computation, allowing various queries to be quickly answered on such converted representations. Typical target representations for knowledge compilation include Ordered Binary Decision Diagrams (OBDDs) (Bryant 1986), deterministic Decomposable Negation Normal Form (d-DNNF) (Darwiche 2001b), and Sentential Decision Diagrams (SDDs) (Darwiche 2011). A Negation Normal Form (NNF) (Darwiche 2001a) is a generalization of all these approaches and Boolean function representation classes can be considered as adding constraints, such as determinism and structured decomposability, to the NNF. ...
Preprint
A knowledge compilation map analyzes tractable operations in Boolean function representations and compares their succinctness. This enables the selection of appropriate representations for different applications. In the knowledge compilation map, all representation classes are subsets of the negation normal form (NNF). However, Boolean functions may be better expressed by a representation that is different from that of the NNF subsets. In this study, we treat tensor trains as Boolean function representations and analyze their succinctness and tractability. Our study is the first to evaluate the expressiveness of a tensor decomposition method using criteria from knowledge compilation literature. Our main results demonstrate that tensor trains are more succinct than ordered binary decision diagrams (OBDDs) and support the same polytime operations as OBDDs. Our study broadens their application by providing a theoretical link between tensor decomposition and existing NNF subsets.
... Following this work, DDs were adopted to represent data and operations for various applications, due to their ability to reduce both temporal and spatial costs. The first attempt to adapt this data structure for quantum computation was made in [35], where it was used to simulate a quantum circuit [13,36]. This adaptation is known as the reduced ordered binary decision diagram (ROBDD). ...
Article
Full-text available
Simulating quantum circuits efficiently on classical computers is crucial given the limitations of current noisy intermediate-scale quantum devices. This paper adapts and extends two methods used to contract tensor networks within the fast tensor decision diagram (FTDD) framework. The methods, called iterative pairing and block contraction, exploit the advantages of tensor decision diagrams to reduce both the temporal and spatial cost of quantum circuit simulations. The iterative pairing method minimizes intermediate diagram sizes, while the block contraction algorithm efficiently handles circuits with repetitive structures, such as those found in quantum walks and Grover’s algorithm. Experimental results demonstrate that, in some cases, these methods significantly outperform traditional contraction orders like sequential and cotengra in terms of both memory usage and execution time. Furthermore, simulation tools based on decision diagrams, such as FTDD, show superior performance to matrix-based simulation tools, such as Google tensor networks, enabling the simulation of larger circuits more efficiently. These findings show the potential of decision diagram-based approaches to improve the simulation of quantum circuits on classical platforms.
... In this work, we make use of the Apply operation on ADDs (Bryant 1986; Bahar et al. 1993). The Apply operation takes as input a binary operator ▷◁ and two ADDs ψ1, ψ2, and outputs an ADD ψ3 such that Func(ψ3) = Func(ψ1) ▷◁ Func(ψ2). ...
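The Apply operation on ADDs generalizes Bryant's BDD Apply by letting terminals carry arbitrary values; a self-contained Python sketch (the node encoding is ours, for illustration):

```python
# Apply for ADDs: terminals are ('T', value); internal nodes are
# ('N', var, low, high) with variables compared by their labels.
def add_apply(op, u, v, memo=None):
    memo = {} if memo is None else memo
    if u[0] == 'T' and v[0] == 'T':
        return ('T', op(u[1], v[1]))     # combine terminal values
    if (u, v) in memo:
        return memo[(u, v)]
    uvar = u[1] if u[0] == 'N' else None
    vvar = v[1] if v[0] == 'N' else None
    var = min(x for x in (uvar, vvar) if x is not None)
    u0, u1 = (u[2], u[3]) if uvar == var else (u, u)
    v0, v1 = (v[2], v[3]) if vvar == var else (v, v)
    low, high = add_apply(op, u0, v0, memo), add_apply(op, u1, v1, memo)
    res = low if low == high else ('N', var, low, high)
    memo[(u, v)] = res
    return res

# Func(psi3) = Func(psi1) + Func(psi2)
psi1 = ('N', 'x', ('T', 0), ('T', 1))    # the value of bit x
psi2 = ('N', 'y', ('T', 0), ('T', 2))    # twice the value of bit y
psi3 = add_apply(lambda a, b: a + b, psi1, psi2)
```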
Preprint
Model counting is a fundamental task that involves determining the number of satisfying assignments to a logical formula, typically in conjunctive normal form (CNF). While CNF model counting has received extensive attention over recent decades, interest in Pseudo-Boolean (PB) model counting is just emerging partly due to the greater flexibility of PB formulas. As such, we observed feature gaps in existing PB counters such as a lack of support for projected and incremental settings, which could hinder adoption. In this work, our main contribution is the introduction of the PB model counter PBCount2, the first exact PB model counter with support for projected and incremental model counting. Our counter, PBCount2, uses our Least Occurrence Weighted Min Degree (LOW-MD) computation ordering heuristic to support projected model counting and a cache mechanism to enable incremental model counting. In our evaluations, PBCount2 completed at least 1.40x the number of benchmarks of competing methods for projected model counting and at least 1.18x of competing methods in incremental model counting.
... The concept of debugging can also be applied to classical circuits. While automated circuit verification is a commonly employed method to test circuits for their correctness [27]–[29], several methods have been proposed to automatically find error causes in different types of circuits [30]–[34] once verification has discovered the presence of errors. ...
Preprint
Recent advancements in quantum computing software are gradually increasing the scope and size of quantum programs being developed. At the same time, however, these larger programs provide more possibilities for functional errors that are harder to detect and resolve. Meanwhile, debugging tools that could aid developers in resolving these errors are still barely existent and far from what we take for granted in classical design automation and software engineering. As a result, even if one manages to identify the incorrect behavior of a developed quantum program, detecting and resolving the underlying errors in the program remains a time-consuming and tedious task. Moreover, the exponential growth of the state space in quantum programs makes the efficient manual investigation of errors radically difficult even for respectively simple algorithms, and almost impossible as the number of qubits increases. To address this problem, this work proposes a debugging framework, available as an open-source implementation at https://github.com/cda-tum/mqt-debugger. It assists developers in debugging errors in quantum programs, allowing them to efficiently identify the existence of errors and diagnose their causes. Users are given the ability to place assertions in the code that test for the correctness of a given algorithm and are evaluated using classical simulations of the underlying quantum program. Once an assertion fails, the proposed framework employs different diagnostic methods to point towards possible error causes. This way, the debugging workload for quantum programs is drastically reduced.
... A BDD [25] is a compact and efficiently manipulable data structure for a Boolean function f : {0, 1}^n → {0, 1}. Its roots are obtained by the Shannon expansion of the cofactors of the function: if f_x and f_¬x denote the partial evaluation of f(x, . . .
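In standard notation, the Shannon expansion this excerpt appeals to reads as follows (the two cofactors are precisely the two children of the BDD node that tests x):

```latex
% Shannon expansion of f on variable x: restricting x to 1 and to 0
% yields the "high" and "low" cofactors stored below the node for x.
f(x, x_2, \dots, x_n) \;=\; x \cdot f|_{x=1} \;+\; \bar{x} \cdot f|_{x=0}
```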
Preprint
A recent trend in probabilistic inference emphasizes the codification of models in a formal syntax, with suitable high-level features such as individuals, relations, and connectives, enabling descriptive clarity, succinctness and circumventing the need for the modeler to engineer a custom solver. Unfortunately, bringing these linguistic and pragmatic benefits to numerical optimization has proven surprisingly challenging. In this paper, we turn to these challenges: we introduce a rich modeling language, for which an interior-point method computes approximate solutions in a generic way. While logical features easily complicate the underlying model, often yielding intricate dependencies, we exploit and cache local structure using algebraic decision diagrams (ADDs). Indeed, standard matrix-vector algebra is efficiently realizable in ADDs, but we argue and show that well-known optimization methods are not ideal for ADDs. Our engine, therefore, invokes a sophisticated matrix-free approach. We demonstrate the flexibility of the resulting symbolic-numeric optimizer on decision making and compressed sensing tasks with millions of non-zero entries.
... However, there are techniques to reduce the size of an automaton, which make it possible to handle the large automata that appear in practical applications. A well-known technique is binary decision diagrams (BDDs) [19]. Another technique is state-tree structures [20] or the method using extended finite-state machines and abstractions [21]. ...
Preprint
Complexity analysis becomes a common task in supervisory control. However, many results of interest are spread across different topics. The aim of this paper is to bring several interesting results from complexity theory and to illustrate their relevance to supervisory control by proving new nontrivial results concerning nonblockingness in modular supervisory control of discrete event systems modeled by finite automata.
... A key ingredient of our approach is a technique for solving the synthesis problem for a specification ϕ by composing solutions of synthesis problems corresponding to sub-formulas in ϕ. Since Boolean functions are often represented using DAG-like structures (such as circuits, AIGs [15], ROBDDs [1,7]), we assume w.l.o.g. that ϕ is given as a DAG. The DAG structure provides a natural decomposition of the original problem into sub-problems with a partial order of dependencies between them. ...
Preprint
Full-text available
Given a relational specification R(X, Y), where X and Y are sequences of input and output variables, we wish to synthesize each output as a function of the inputs such that the specification holds. This is called the Boolean functional synthesis problem and has applications in several areas. In this paper, we present the first parallel approach for solving this problem, using compositional and CEGAR-style reasoning as key building blocks. We show by means of extensive experiments that our approach outperforms existing tools on a large class of benchmarks.
... We extend the classical concept of reduction proposed by Bryant (1986) as a VPO for network models. Reduction is an operation applied to DDs that merges nodes which share isomorphic subgraphs. ...
Preprint
This paper provides a novel framework for solving multiobjective discrete optimization problems with an arbitrary number of objectives. Our framework formulates these problems as network models, in that enumerating the Pareto frontier amounts to solving a multicriteria shortest path problem in an auxiliary network. We design techniques for exploiting the network model in order to accelerate the identification of the Pareto frontier, most notably a number of operations to simplify the network by removing nodes and arcs while preserving the set of nondominated solutions. We show that the proposed framework yields orders-of-magnitude performance improvements over existing state-of-the-art algorithms on five problem classes containing both linear and nonlinear objective functions.
... [58,43,37]) are used routinely in the computer-aided design of integrated circuits and have been widely applied to find bugs in software, analyze embedded systems, and find security vulnerabilities. At the heart of these advances are computational proof engines such as Boolean satisfiability (SAT) solvers [50], Boolean reasoning and manipulation routines based on Binary Decision Diagrams (BDDs) [9], and satisfiability modulo theories (SMT) solvers [6]. ...
Preprint
Verified artificial intelligence (AI) is the goal of designing AI-based systems that have strong, ideally provable, assurances of correctness with respect to mathematically specified requirements. This paper considers Verified AI from a formal methods perspective. We describe five challenges for achieving Verified AI, and five corresponding principles for addressing these challenges.
... Burch and Dill's work has generated considerable interest in the use of uninterpreted functions to abstract data operations in processor verification. A common theme has been to adopt Boolean methods, either to allow integration of uninterpreted functions into symbolic model checkers [DPR98,BBCZ98], or to allow the use of Binary Decision Diagrams (BDDs) [Bry86] in the decision procedure [HKGB97,GSZAS98,VB98]. Boolean methods allow a more direct modeling of the control logic of hardware designs and thus can be applied to actual processor designs rather than highly abstracted models. ...
Preprint
The logic of equality with uninterpreted functions (EUF) provides a means of abstracting the manipulation of data by a processor when verifying the correctness of its control logic. By reducing formulas in this logic to propositional formulas, we can apply Boolean methods such as Ordered Binary Decision Diagrams (BDDs) and Boolean satisfiability checkers to perform the verification. We can exploit characteristics of the formulas describing the verification conditions to greatly simplify the propositional formulas generated. In particular, we exploit the property that many equations appear only in positive form. We can therefore reduce the set of interpretations of the function symbols that must be considered to prove that a formula is universally valid to those that are ``maximally diverse.'' We present experimental results demonstrating the efficiency of this approach when verifying pipelined processors using the method proposed by Burch and Dill.
... Orthogonally, we plan to improve the efficiency of the liveness-based garbage collector using heuristics such as limiting the depth of the DFA, merging nearly-equivalent states, and using better representations (for example BDDs [7]) and algorithms for automata manipulation. We also need to investigate the interaction of liveness with other collection schemes, such as incremental and generational collection. ...
Preprint
We consider the problem of reducing the memory required to run lazy first-order functional programs. Our approach is to analyze programs for liveness of heap-allocated data. The result of the analysis is used to preserve only live data---a subset of reachable data---during garbage collection. The result is an increase in the garbage reclaimed and a reduction in the peak memory requirement of programs. While this technique has already been shown to yield benefits for eager first-order languages, the lack of a statically determinable execution order and the presence of closures pose new challenges for lazy languages. These require changes both in the liveness analysis itself and in the design of the garbage collector. To show the effectiveness of our method, we implemented a copying collector that uses the results of the liveness analysis to preserve live objects, both evaluated (i.e., in WHNF) and closures. Our experiments confirm that for programs running with a liveness-based garbage collector, there is a significant decrease in peak memory requirements. In addition, a sizable reduction in the number of collections ensures that in spite of using a more complex garbage collector, the execution times of programs running with liveness and reachability-based collectors remain comparable.
... The main problem we solve is how to encode compactly the set of possible assignments to the variables in an LBM in a single formalism handling both MRFs of bounded tree width and PCFGs. The case-factor diagrams (CFDs) we introduce for that purpose are similar to binary decision diagrams (BDDs) [4]. CFDs differ from BDDs in two ways. ...
Preprint
We introduce a probabilistic formalism subsuming Markov random fields of bounded tree width and probabilistic context free grammars. Our models are based on a representation of Boolean formulas that we call case-factor diagrams (CFDs). CFDs are similar to binary decision diagrams (BDDs) but are concise for circuits of bounded tree width (unlike BDDs) and can concisely represent the set of parse trees over a given string under a given context free grammar (also unlike BDDs). A probabilistic model consists of a CFD defining a feasible set of Boolean assignments and a weight (or cost) for each individual Boolean variable. We give an inside-outside algorithm for simultaneously computing the marginal of each Boolean variable, and a Viterbi algorithm for finding the minimum cost variable assignment. Both algorithms run in time proportional to the size of the CFD.
... This disadvantage can be avoided by constructing a parsing tree of a formula, and storing in every node of the tree information about the states for which the formula has been checked and those for which it is true. The size of these sets may be as large as GS (the number of states in RG), but an ROBDD representation [14,16,17] can provide a compact representation of sets of states. The idea is discussed in detail in the description of the ROBDD-based checking algorithm [18]. ...
Preprint
Classical algorithms for evaluating temporal CTL formulas are constructed "bottom-up". A formula must be evaluated completely to give the result. In the paper, a new concept of "top-down" evaluation of temporal QsCTL (CTL with state quantifiers) formulas, called "Checking By Spheres", is presented. The new algorithm has two general advantages: the evaluation may be stopped under certain conditions in early steps of the algorithm (neither the whole formula nor the whole state space need be analyzed), and state quantification may be used in formulas (even if the range of a quantifier is not statically obtainable).
... Definition 1 (Ordered Binary Decision Diagram). [2] An ordered binary decision diagram on n binary variables X = {x1, . . . , xn} is a layered directed acyclic graph G(V, E) with n + 1 layers (some of which may be empty) and exactly one root. ...
Preprint
The paper introduces a new technique for compressing Binary Decision Diagrams in those cases where random access is not required. Using this technique, compression and decompression can be done in linear time in the size of the BDD, and compression will in many cases reduce the size of the BDD to 1-2 bits per node. Empirical results for our compression technique are presented, including comparisons with previously introduced techniques, showing that the new technique dominates on all tested instances.
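One way to see why linear-time compression is plausible when random access is not needed: nodes can be emitted in a bottom-up topological order, so every child reference points backwards and can be coded compactly. A hedged Python sketch of this idea (illustrative only, not the paper's actual encoding):

```python
# Serialize a BDD bottom-up so each node is visited once (linear time)
# and all child references point to already-emitted positions.
def serialize(root, nodes):
    """nodes maps id -> (var, low_id, high_id); ids 0 and 1 are terminals."""
    order, seen = [], {0, 1}

    def visit(u):
        if u in seen:
            return
        seen.add(u)
        var, lo, hi = nodes[u]
        visit(lo)
        visit(hi)                 # children first: reverse topological order
        order.append(u)

    visit(root)
    index, stream = {0: 0, 1: 1}, []
    for pos, u in enumerate(order, start=2):
        var, lo, hi = nodes[u]
        index[u] = pos
        stream.append((var, index[lo], index[hi]))   # back-references only
    return stream

nodes = {2: ('x2', 0, 1), 3: ('x1', 0, 2)}
print(serialize(3, nodes))        # [('x2', 0, 1), ('x1', 0, 2)]
```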
... Several approaches have been proposed to check an arithmetic circuit against its functional specification. Different variants of canonical, graph-based representations have been proposed, including Binary Decision Diagrams (BDDs) [14], Binary Moment Diagrams (BMDs) [15], [16], Taylor Expansion Diagrams (TEDs) [17], and other hybrid diagrams. While BDDs have been used extensively in logic synthesis, their application to verification of arithmetic circuits is limited by the prohibitively high memory requirement for complex arithmetic circuits, such as multipliers. ...
Preprint
Galois field (GF) arithmetic circuits find numerous applications in communications, signal processing, and security engineering. Formal verification techniques for GF circuits are scarce and limited to circuits with known bit positions of the primary inputs and outputs. They also require knowledge of the irreducible polynomial P(x), which affects the final hardware implementation. This paper presents a computer algebra technique that performs verification and reverse engineering of GF(2^m) multipliers directly from the gate-level implementation. The approach is based on extracting a unique irreducible polynomial in a parallel fashion and proceeds in three steps: 1) determine the bit position of the output bits; 2) determine the bit position of the input bits; and 3) extract the irreducible polynomial used in the design. We demonstrate that this method is able to reverse engineer GF(2^m) multipliers in m threads. Experiments performed on synthesized Mastrovito and Montgomery multipliers with different P(x), including NIST-recommended polynomials, demonstrate the high efficiency of the proposed method.
... Xeve takes as input the FSM model expressed as a set of Boolean equations in Blif format generated by the Esterel compiler. It makes use of a symbolic state-space construction algorithm based on Binary Decision Diagrams (BDDs) (Bryant 1986), which serve as the internal representation of the FSM model's reachable state space. Xeve provides two verification functions: minimising the number of states of the FSM model and checking the emission status of output signals. ...
Preprint
The software approach to developing Digital Signal Processing (DSP) applications brings some great features, such as flexibility, re-usability of resources, and easy upgrading of applications. However, it requires long and tedious test and verification phases because of the increasing complexity of the software. This implies the need for a software programming environment capable of putting together DSP modules and providing facilities to debug, verify, and validate the code. The objective of this work is to provide such facilities, namely simulation and verification, for developing DSP software applications. This led us to develop an extension toolkit, Epspectra, built upon Pspectra, one of the first toolkits available for designing basic software radio applications on standard PC workstations. In this paper, we first present Epspectra, an Esterel-based extension of Pspectra that makes the design and implementation of portable DSP applications easier. It allows a drastic reduction of testing and verification time while requiring relatively little expertise in formal verification methods. Second, we demonstrate the use of Epspectra, taking as an example the radio interface part of a GSM base station. We also present the verification procedures for three safety properties of the implementation programs, which have complex control paths and must obey strict scheduling rules. In addition, Epspectra guarantees that verification applies to the targeted application itself, since the same model is used for executable code generation and for formal verification.
... Similarly, we can use the second and third diagrams. Binary Decision Diagrams (BDDs) were introduced in [2] and are particularly compact decision diagrams, obtained using two reduction rules. The first rule identifies isomorphic subgraphs, i.e., we merge nodes that have the same label and the same children. ...
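Both reduction rules can be captured in a single hash-consing constructor. The sketch below follows the standard textbook formulation rather than any particular implementation; the function and table names are our own. A node that either rule would remove is simply never allocated:

    unique_table = {}   # (var, low, high) -> node, shared across the whole diagram

    def mk(var, low, high):
        """Return the canonical node testing `var` with children low/high."""
        # Second rule: a node whose two children are identical is redundant.
        if low is high:
            return low
        # First rule: merge isomorphic subgraphs -- same label, same children.
        key = (var, low, high)
        if key not in unique_table:
            unique_table[key] = ("node", var, low, high)
        return unique_table[key]

Building every node through mk keeps the diagram reduced at all times, which is what makes ROBDDs canonical for a fixed variable order.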
Preprint
Full-text available
Binary decision diagrams (BDDs) are widely used to mitigate the state-explosion problem in model checking. Zero-suppressed Decision Diagrams (ZDDs) are a variation of BDDs which omit variables that must be false, instead of omitting variables that do not matter. We use ZDDs to symbolically encode Kripke models used in Dynamic Epistemic Logic, a framework for reasoning about knowledge and information dynamics in multi-agent systems. We compare the memory usage of different ZDD variants for three well-known examples from the literature: the Muddy Children, the Sum and Product puzzle, and the Dining Cryptographers. Our implementation is based on the existing model checker SMCDEL and the CUDD library. Our results show that replacing BDDs with the right variant of ZDDs can significantly reduce memory usage. This suggests that ZDDs are a useful tool for model checking multi-agent systems.
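The difference between the two elimination rules can be stated in two constructors. This is a schematic comparison under our own naming, not SMCDEL's or CUDD's API:

    ZERO, ONE = ("leaf", 0), ("leaf", 1)

    def bdd_mk(var, low, high):
        # BDD rule: skip variables that do not matter (both branches agree).
        if low is high:
            return low
        return ("node", var, low, high)

    def zdd_mk(var, low, high):
        # ZDD rule: skip variables that must be false (the 1-branch is ZERO).
        if high is ZERO:
            return low
        return ("node", var, low, high)

For sparse structures, such as Kripke models where most atomic propositions are false at most worlds, the ZDD rule fires far more often, which is consistent with the memory savings reported in the paper.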
Chapter
Field-Programmable Gate Arrays (FPGAs) emerged in the mid-1980s, providing a way to produce complex custom hardware at one’s desk. The concept of hardware reconfigurability that was thus created is truly fascinating: An integrated circuit can be customized with no manufacturing steps whatsoever; all that is needed is loading an appropriate configuration into the FPGA’s memory. The key principle that made this possible was the use of stored-select multiplexers for connecting prefabricated logic blocks. However, this opened two fundamental questions of reconfigurable architecture design: (1) which logic blocks should be prefabricated and (2) how should the stored-select multiplexer network be organized.
Article
The landscape of Verilog toolchains for electronic design automation (EDA) is diverse, and their reliability is crucial, as errors can lead to significant debugging challenges and delays in development. Methodologies such as testing and formal verification have been applied to identify and eliminate defects in these toolchains. We propose a framework named VeriXmith to interconnect design tools involved in logic synthesis and simulation for cross-checking. These tools process circuit designs and produce outputs in different languages, such as Verilog netlists from synthesizers and C++ programs from simulators. Since these outputs represent the same circuit semantics, we can leverage this semantic consistency to verify the tools that translate one representation into another. Our approach involves creating semantics extractors to extend the range of circuit representations available for semantic equivalence checking by converting them into a canonical and comparable form. Additionally, we develop mutation operators for Verilog designs to introduce new data/control paths and language constructs, enhancing the diversity of circuit designs as test inputs. By validating semantic equivalence, our framework successfully identifies defects in existing Verilog toolchains. An exploratory experiment uncovers 31 previously unknown bugs in well-known open-source Verilog tools, including Verilator and Yosys.
Article
Full-text available
With the ongoing digitization, digital circuits have become increasingly present in everyday life. However, as circuits can be faulty, their verification is a challenging but essential task. In contrast to formal verification techniques, simulation techniques cannot fully guarantee the correctness of a circuit. However, due to the exponential complexity of the verification problem, formal verification can fail due to time or space constraints. To overcome this challenge, Polynomial Formal Verification (PFV) has recently been introduced. Here, it has been shown that several circuits and circuit classes can be formally verified in polynomial time and space. In general, these proofs have to be conducted manually, requiring a lot of time. However, in recent research, a method for automated PFV has been proposed, where a proof engine automatically generates human-readable proofs that show the polynomial size of a Binary Decision Diagram (BDD) for a given function. The engine analyses the BDD and finds a pattern, which is then proven by induction. In this article, we formalize the previously presented BDD patterns and propose algorithms for pattern detection, establishing new possibilities for automated proof generation for more complex functions. Furthermore, we show an exemplary proof that can be generated using the presented methods. This article is part of the theme issue ‘Emerging technologies for future secure computing platforms’.
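As a small illustration of the kind of statement such proofs establish, consider that the ROBDD of the conjunction x1 AND ... AND xn has exactly n internal nodes for every n. The following toy check is our own example, not the proof engine's output; it builds the canonical ROBDD by Shannon expansion with a unique table and counts its nodes:

    def robdd_size(f, n):
        """Number of internal nodes of the ROBDD of f over x1..xn.

        f maps a tuple of n Booleans to a Boolean; variables are tested
        in index order, and both reduction rules are applied via the
        unique table, so len(table) is the canonical ROBDD size.
        """
        table = {}

        def build(i, assignment):
            if i == n:
                return f(assignment)                # terminal 0 or 1
            low = build(i + 1, assignment + (False,))
            high = build(i + 1, assignment + (True,))
            if low == high:                         # redundant test: skip the node
                return low
            return table.setdefault((i, low, high), (i, low, high))

        build(0, ())
        return len(table)

    # The conjunction exhibits a linear pattern: one node per variable.
    for n in range(1, 10):
        assert robdd_size(lambda xs: all(xs), n) == n

This brute-force construction is exponential in n and only serves to exhibit the pattern; the point of PFV is precisely to prove such size bounds by induction instead of enumeration.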
Preprint
Recent work on weighted model counting has been very successfully applied to the problem of probabilistic inference in Bayesian networks. The probability distribution is encoded into a Boolean normal form and compiled to a target language, in order to represent local structure expressed among conditional probabilities more efficiently. We show that further improvements are possible, by exploiting the knowledge that is lost during the encoding phase and incorporating it into a compiler inspired by Satisfiability Modulo Theories. Constraints among variables are used as a background theory, which allows us to optimize the Shannon decomposition. We propose a new language, called Weighted Positive Binary Decision Diagrams, that reduces the cost of probabilistic inference by using this decomposition variant to induce an arithmetic circuit of reduced size.
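Concretely, the decomposition being optimized is the standard weighted Shannon expansion; in common weighted-model-counting notation (our choice of symbols, not necessarily the paper's), each step over a variable x splits the count as

    \[
      \mathrm{WMC}(f) \;=\; w(x)\cdot\mathrm{WMC}(f|_{x=1}) \;+\; w(\lnot x)\cdot\mathrm{WMC}(f|_{x=0}),
    \]

and a background-theory constraint that fixes x lets the compiler drop one of the two branches, which is the kind of saving the proposed language is built around.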
Article
An experimental three-value simulation system, composed of a series of computer programs, is designed for use as an engineering tool to verify large (5000-block) logic designs prior to the actual construction of hardware, and also ensures microprogram and logic compatibility in a microprogram-controlled machine. Besides the binary 1 and 0 values, the simulator uses an X value, which represents an unknown condition and is considered to occur during any transition from one binary value to another. The use of this third value to represent transitions allows the simulator to detect combinational hazards, critical races, and feedback oscillations in a machine design.
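The third value composes with 0 and 1 in the expected way. A minimal sketch, using our own encoding of the ternary tables with "X" for the unknown/transition value, shows why an X on one AND input is absorbed by a controlling 0 but propagates past a 1:

    X = "X"   # unknown / in-transition value

    def and3(a, b):
        """Ternary AND: a controlling 0 dominates, otherwise X propagates."""
        if a == 0 or b == 0:
            return 0
        if a == X or b == X:
            return X
        return 1

    def not3(a):
        return X if a == X else 1 - a

    # A gate whose inputs disagree during a transition yields X, so a hazard
    # shows up as an X reaching an output where a stable value was expected.
    assert and3(0, X) == 0 and and3(1, X) == X and not3(X) == X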
Article
A binary-decision program is a program consisting of a string of two-address conditional transfer instructions. The paper shows the relationship between switching circuits and binary-decision programs and gives a set of simple rules by which one can transform binary-decision programs into switching circuits. It then shows that, with regard to the computation of switching functions, the binary-decision programming representation is superior to the usual Boolean representation.
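A binary-decision program in this sense can be executed by a loop of conditional jumps. The following sketch uses our own encoding, with each instruction holding a tested variable and two successor addresses, and is only meant to illustrate the program form Lee describes:

    def run(program, inputs, start=0):
        """Execute a binary-decision program.

        `program` is a list of instructions (var, addr0, addr1): test
        variable `var` and transfer control to addr0 or addr1.  An
        address given as ("halt", v) terminates with output v.
        """
        pc = start
        while True:
            var, addr0, addr1 = program[pc]
            target = addr1 if inputs[var] else addr0
            if isinstance(target, tuple):     # ("halt", value)
                return target[1]
            pc = target

    # x AND y as a two-instruction program.
    prog = [
        ("x", ("halt", 0), 1),                # if x == 0 output 0, else go to 1
        ("y", ("halt", 0), ("halt", 1)),
    ]
    assert run(prog, {"x": 1, "y": 1}) == 1
    assert run(prog, {"x": 1, "y": 0}) == 0

Each input vector follows exactly one chain of transfers, which is the program-counter view of the root-to-terminal paths in the decision diagrams discussed above.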
Book
1. Introduction
1.1 Design Styles for VLSI Systems
1.2 Automatic Logic Synthesis
1.3 PLA Implementation
1.4 History of Logic Minimization
1.5 ESPRESSO-II
1.6 Organization of the Book
2. Basic Definitions
2.1 Operations on Logic Functions
2.2 Algebraic Representation of a Logic Function
2.3 Cubes and Covers
3. Decomposition and Unate Functions
3.1 Cofactors and the Shannon Expansion
3.2 Merging
3.3 Unate Functions
3.4 The Choice of the Splitting Variable
3.5 Unate Complementation
3.6 SIMPLIFY
4. The ESPRESSO Minimization Loop and Algorithms
4.0 Introduction
4.1 Complementation
4.2 Tautology
4.2.1 Vanilla Recursive Tautology
4.2.2 Efficiency Results for Tautology
4.2.3 Improving the Efficiency of Tautology
4.2.4 Tautology for Multiple-Output Functions
4.3 Expand
4.3.1 The Blocking Matrix
4.3.2 The Covering Matrix
4.3.3 Multiple-Output Functions
4.3.4 Reduction of the Blocking and Covering Matrices
4.3.5 The Raising Set and Maximal Feasible Covering Set
4.3.6 The Endgame
4.3.7 The Primality of c+
4.4 Essential Primes
4.5 Irredundant Cover
4.6 Reduction
4.6.1 The Unate Recursive Paradigm for Reduction
4.6.2 Establishing the Recursive Paradigm
4.6.3 The Unate Case
4.7 Lastgasp
4.8 Makesparse
4.9 Output Splitting
5. Multiple-Valued Minimization
6. Experimental Results
6.1 Analysis of Raw Data for ESPRESSO-IIAPL
6.2 Analysis of Algorithms
6.3 Optimality of ESPRESSO-II Results
7. Comparisons and Conclusions
7.1 Qualitative Evaluation of Algorithms of ESPRESSO-II
7.2 Comparison with ESPRESSO-IIC
7.3 Comparison of ESPRESSO-II with Other Programs
7.4 Other Applications of Logic Minimization
7.5 Directions for Future Research
References
Article
It is suggested that the economics of present large-scale scientific computers could benefit from a greater investment in hardware to mechanize multiplication and division than is now common. As a move in this direction, a design is developed for a multiplier which generates the product of two numbers using purely combinational logic, i.e., in one gating step. Using straightforward diode-transistor logic, it appears presently possible to obtain products in under 1 μsec, and quotients in 3 μsec. A rapid square-root process is also outlined. Approximate component counts are given for the proposed design, and it is found that the cost of the unit would be about 10 per cent of the cost of a modern large-scale computer.
Article
This paper describes a method for defining, analyzing, testing, and implementing large digital functions by means of a binary decision diagram. This diagram provides a complete, concise, "implementation-free" description of the digital functions involved. Methods are described for deriving these diagrams and examples are given for a number of basic combinational and sequential devices. Techniques are then outlined for using the diagrams to analyze the functions involved, for test generation, and for obtaining various implementations. It is shown that the diagrams are especially suited for processing by a computer. Finally, methods are described for introducing inversion and for directly "interconnecting" diagrams to define still larger functions. An example of the carry look-ahead adder is included.