Article

Algebraic Decision Diagrams and Their Applications

Authors: R. I. Bahar, E. A. Frohm, C. M. Gaona, G. D. Hachtel, E. Macii, A. Pardo, and F. Somenzi

Abstract

In this paper we present theory and experimental results on Algebraic Decision Diagrams. These diagrams extend BDDs by allowing values from an arbitrary finite domain to be associated with the terminal nodes of the diagram. We present a treatment founded in Boolean algebras and discuss algorithms and results in several areas of application: matrix multiplication, shortest path algorithms, and direct methods for numerical linear algebra. Although we report an essentially negative result for Gaussian elimination per se, we propose a modified form of ADDs which appears to circumvent the difficulties in some cases. We discuss the relevance of our findings and point to directions for future work.
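To make the data structure concrete, here is a minimal Python sketch (illustrative only, not the paper's implementation): a reduced, ordered ADD is a decision DAG whose terminals carry values from an arbitrary finite domain rather than just 0 and 1, with hash-consing providing sharing and a canonical form. All names are ours.

```python
# A minimal, illustrative ADD: a reduced, ordered decision DAG whose
# terminal nodes carry values from an arbitrary finite domain, not just
# 0/1 as in BDDs. Hash-consing via the `unique` table merges isomorphic
# subgraphs, which is what yields a canonical form.

unique = {}

def terminal(value):
    """A terminal node holding an arbitrary value."""
    return unique.setdefault(("term", value), ("term", value))

def node(var, lo, hi):
    """An internal node testing `var`, with else-child `lo` and then-child `hi`."""
    if lo == hi:                 # reduction rule: drop redundant tests
        return lo
    return unique.setdefault(("node", var, lo, hi), ("node", var, lo, hi))

def evaluate(n, env):
    """Evaluate the diagram under an assignment env: variable index -> bool."""
    while n[0] == "node":
        _, var, lo, hi = n
        n = hi if env[var] else lo
    return n[1]

# f(x0, x1) = 3 if x0 and x1, else 1 if x0, else 0: three distinct terminals
f = node(0, terminal(0), node(1, terminal(1), terminal(3)))
assert evaluate(f, {0: True, 1: True}) == 3
```

Because equal subgraphs are shared, the diagram for a function with few distinct values can be far smaller than its truth table.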




... A representation by minimal energy levels is symbolic in the sense of antichains [12], also used in the algorithm of Chatterjee et al. [10]. These algorithms are thus symbolic in the quantitative values but concrete in the representation of game states and transitions. ...
... Our second implementation is based on Algebraic Decision Diagrams (ADDs) [1], a special case of Multi-Terminal Binary Decision Diagrams (MTBDDs) [15]. ADDs have numbers as terminal nodes, instead of TRUE and FALSE in the case of BDDs, and provide efficient implementations of symbolic algebraic operations. ...
... Bryant [8] showed how Boolean operations on BDDs can be efficiently implemented, including logical connectives and existential and universal abstractions. ADDs [1] are a generalization of BDDs, such that the terminal nodes may take on values belonging to a set of constants different from 0 and 1, such as integers or reals. Another name for ADDs is Multi-Terminal Binary Decision Diagrams (MTBDDs) [15], which reflects their structure rather than their application for computations in algebras. ...
Preprint
Energy games, which model quantitative consumption of a limited resource, e.g., time or energy, play a central role in quantitative models for reactive systems. Reactive synthesis constructs a controller which satisfies a given specification, if one exists. For energy games, a synthesized controller satisfies not only the safety constraints of the specification but also the quantitative constraints expressed in the energy game. A symbolic algorithm for energy games, recently presented by Chatterjee et al., is symbolic in its representation of quantitative values but concrete in the representation of game states and transitions. In this paper we present an algorithm that is symbolic both in the quantitative values and in the underlying game representation. We have implemented our algorithm using two different symbolic representations for reactive games, Binary Decision Diagrams (BDD) and Algebraic Decision Diagrams (ADD). We investigate the commonalities and differences of the two implementations and compare their running times on specifications of energy games.
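The excerpts above mention Bryant's Apply operation and its ADD generalization. The following sketch (illustrative, not taken from any of the cited tools) shows the core idea: a binary operation on terminal values is pushed recursively through two diagrams that share one variable order.

```python
import operator

# An illustrative Apply for ADDs, generalizing Bryant's BDD Apply: any
# binary operation on terminal values (here +) is recursively pushed
# through two diagrams sharing one variable order. Nodes are plain tuples,
# ("term", value) or ("node", var, else_child, then_child).

def apply_op(op, f, g, memo=None):
    memo = {} if memo is None else memo
    key = (id(f), id(g))
    if key in memo:
        return memo[key]
    if f[0] == "term" and g[0] == "term":
        res = ("term", op(f[1], g[1]))           # combine leaf values
    else:
        vf = f[1] if f[0] == "node" else float("inf")
        vg = g[1] if g[0] == "node" else float("inf")
        v = min(vf, vg)                           # split on the earlier variable
        f0, f1 = (f[2], f[3]) if vf == v else (f, f)
        g0, g1 = (g[2], g[3]) if vg == v else (g, g)
        lo, hi = apply_op(op, f0, g0, memo), apply_op(op, f1, g1, memo)
        res = lo if lo == hi else ("node", v, lo, hi)
    memo[key] = res
    return res

x0 = ("node", 0, ("term", 0), ("term", 1))        # 0/1 indicator of variable 0
x1 = ("node", 1, ("term", 0), ("term", 1))
s = apply_op(operator.add, x0, x1)                # pointwise sum of the inputs
```

With `operator.add` the result counts the true variables; swapping in `min` or `max` gives the same recursion used for shortest-path and game-valuation computations.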
... The idea is that instead of saying stochastic actions have non-deterministic effects, DSp says stochastic actions have non-deterministic alternatives which are mutually observationally indistinguishable from the agent's perspective and each of which has a deterministic effect. In the coffee robot example, to express that action east might non-deterministically end up moving 1 or 2 units east, DSp uses two actions east(1) and east(2), which are interpreted as: the robot intends to move a unit east, but nature may select 1 or 2 as the outcome. ...
... E.g. R([east(2)] h = 2, D_dyn) := (∃y. east(2) = east(y) ∧ 2 = y + h ∨ ∀y. a = east(y) ∧ x = h) ≡ (h = 0), given D_dyn as in Example 1. ...
... Since W is uncountable, NORM just requires the sum here to be "well-defined"; see also [6] for details. Free variables are implicitly universally quantified. The □ has lower syntactic precedence than the connectives, and [·] has the highest priority. ...
Chapter
Full-text available
Belief-based programming is a probabilistic extension of the Golog programming language family, where every action and sensing could be noisy and every test refers to the subjective beliefs of the agent. Such characteristics make it rather suitable for robot control in a partially observable, uncertain environment. Recently, efforts have been made to provide formal semantics for belief programs and to investigate the hardness of verifying belief programs. Nevertheless, a general algorithm that actually conducts the verification is missing. In this paper, we propose an algorithm based on symbolic dynamic programming to verify belief programs, an approach that generalizes the dynamic programming technique for solving (partially observable) Markov decision processes, i.e. (PO)MDPs, by exploiting the symbolic structure in the solution of first-order (PO)MDPs induced by belief program execution.
... Quasimodo is specifically designed for easy extensibility to other backends to make it possible to experiment with a variety of symbolic data-structures. Quasimodo currently supports (i) BDDs [3,5,7], (ii) a weighted variant of BDDs [9,14], [19,Ch. 5], and (iii) Context-Free-Language Ordered Binary Decision Diagrams CFLOBDDs [11], a recent canonical representation of Boolean functions that has been shown to outperform BDDs in many quantum-simulation tasks. ...
... Quasimodo allows different backend data-structures to be used for representing quantum states. It comes with BDDs [3,5,7], a weighted variant of BDDs [9,14], [19,Ch. 5], and CFLOBDDs [11]. ...
... Binary Decision Diagrams (BDDs). Quasimodo provides an option to use Binary Decision Diagrams (BDDs) [3,5,7] as the underlying data-structure. A BDD is a data-structure used to efficiently represent a function from Boolean variables to some space of values (Boolean or non-Boolean). ...
Chapter
Full-text available
The simulation of quantum circuits on classical computers is an important problem in quantum computing. Such simulation requires representations of distributions over very large sets of basis vectors, and recent work has used symbolic data-structures such as Binary Decision Diagrams (BDDs) for this purpose. In this tool paper, we present Quasimodo, an extensible, open-source Python library for symbolic simulation of quantum circuits. Quasimodo is specifically designed for easy extensibility to other backends. Quasimodo allows simulations of quantum circuits, checking properties of the outputs of quantum circuits, and debugging quantum circuits. It also allows the user to choose from among several symbolic data-structures—both unweighted and weighted BDDs, and a recent structure called Context-Free-Language Ordered Binary Decision Diagrams (CFLOBDDs)—and can be easily extended to support other symbolic data-structures.
... We develop a novel SMC algorithm that is familybased, i.e. it can sample executions from all variants at once and keep track of the occurrence probability of these executions in any given variant. The effectiveness of this algorithm relies on Algebraic Decision Diagram (ADD) [2], a dedicated data structure that enables an orthogonal treatment of variability, stochasticity and property satisfaction. ...
... We model stochastic system behaviours as Discrete-Time Markov Chains (DTMCs). In such models, (1) the state space S of the system is countable, (2) time elapses in discrete steps and (3) the transitions between states T ⊆ S × S are stochastic. Hence, one can see a DTMC as a Kripke structure where each transition between two states has a probability of occurring at each discrete time step. ...
... Therefore, our work takes inspiration from the principles of Delahaye et al. [3,15] but develops a novel SMC approach to verify FDTMCs. The implementation of our algorithms relies on a dedicated data structure -based on Algebraic Decision Diagrams (ADDs) [2] -to account for the binary nature and relationships of FDTMCs parameters. Indeed, the advantage of our data structure over Delahaye's parametric approach is that ADDs can record (1) which variants can (or cannot) execute a given FDTMC path and (2) with which probability -and it can do so while keeping its structure concise as it accumulates more probability profiles. ...
Chapter
Full-text available
We propose a simulation-based approach to verify Variability-Intensive Systems (VISs) with stochastic behaviour. Given an LTL formula and a model of the VIS behaviour, our method estimates the probability for each variant to satisfy the formula. This allows us to learn the products of the VIS for which the probability stands above a certain threshold. To achieve this, our method samples VIS executions from all variants at once and keeps track of the occurrence probability of these executions in any given variant. The efficiency of this algorithm relies on Algebraic Decision Diagram (ADD), a dedicated data structure that enables orthogonal treatment of variability, stochasticity and property satisfaction. We implemented our approach as an extension of the ProVeLines model checker. Our experiments validate that our method can produce accurate estimations of the probability for the variants to satisfy the given properties.
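As a concrete reading of the DTMC definition quoted above, here is a toy sampler (an illustrative sketch, not the ProVeLines implementation): it draws one execution and tracks its occurrence probability, the per-variant quantity that the family-based algorithm records in ADDs.

```python
import random

# A toy DTMC: countable states, discrete steps, a probability on each
# transition. sample_path draws one execution and returns it together with
# its occurrence probability (the product of the chosen transition
# probabilities) -- here a single number, where a family-based SMC
# algorithm would keep one probability profile per variant in an ADD.

dtmc = {                                  # state -> list of (successor, prob)
    "s0": [("s1", 0.7), ("s2", 0.3)],
    "s1": [("s1", 0.5), ("s2", 0.5)],
    "s2": [("s2", 1.0)],                  # absorbing state
}

def sample_path(start, steps):
    state, path, prob = start, [start], 1.0
    for _ in range(steps):
        succs = dtmc[state]
        nxt, p = random.choices(succs, weights=[w for _, w in succs])[0]
        prob *= p
        state = nxt
        path.append(nxt)
    return path, prob

random.seed(0)
print(sample_path("s0", 4))
```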
... A DTProbLog program is converted into a compact form based on Algebraic Decision Diagrams (ADDs) (Bahar et al. 1997) with a process called knowledge compilation (Darwiche and Marquis 2002), extensively adopted in Probabilistic Logic Programming (De Raedt et al. 2007; Riguzzi 2022). ADDs are an extension of Binary Decision Diagrams (BDDs) (Akers 1978). BDDs are rooted directed acyclic graphs where there are only two terminal nodes, 0 and 1. Every inte ...
... present large search spaces, even if finding the optimal ordering of the variables that minimizes the size of the BDD is a complex task (Meinel and Slobodová 1994). In ADDs, leaf nodes may be associated with elements belonging to a set of constants (e.g., natural numbers) instead of only 0 and 1, which has proven effective in multiple scenarios (Bahar et al. 1997). Kiesel et al. (2022) introduced Second Level Algebraic Model Counting (2AMC), needed to solve tasks such as MAP inference (Shterionov et al. 2015) and decision theoretic inference (Van den Broeck et al. 2010) in Probabilistic Logic Programming, and inference in smProbLog (Totis et al. 2023) programs. These problems are characteriz ...
Article
Full-text available
Solving a decision theory problem usually involves finding the actions, among a set of possible ones, which optimize the expected reward, while possibly accounting for the uncertainty of the environment. In this paper, we introduce the possibility to encode decision theory problems with Probabilistic Answer Set Programming under the credal semantics via decision atoms and utility attributes. To solve the task, we propose an algorithm based on three layers of Algebraic Model Counting, that we test on several synthetic datasets against an algorithm that adopts answer set enumeration. Empirical results show that our algorithm can manage non-trivial instances of programs in a reasonable amount of time.
... A DTProbLog program is converted into a compact form based on Algebraic Decision Diagrams (ADDs) (Bahar et al. 1997) with a process called knowledge compilation (Darwiche and Marquis 2002), extensively adopted in Probabilistic Logic Programming (De Raedt et al. 2007; Riguzzi 2022). ADDs are an extension of Binary Decision Diagrams (BDDs) (Akers 1978). ...
... Several additional imposed properties, such as variable ordering, allow one to compactly represent large search spaces, even if finding the optimal ordering of the variables that minimizes the size of the BDD is a hard task (Meinel and Slobodová 1994). In ADDs, leaf nodes may be associated with elements belonging to a set of constants (for example, natural numbers) instead of only 0 and 1, which has proven effective in multiple scenarios (Bahar et al. 1997). Kiesel et al. (2022) introduced Second Level Algebraic Model Counting (2AMC), needed to solve tasks such as MAP inference (Shterionov et al. 2015) and decision theoretic inference (Van den Broeck et al. 2010) in Probabilistic Logic Programming, and inference in smProbLog (Totis et al. 2023) programs. ...
Preprint
Full-text available
Solving a decision theory problem usually involves finding the actions, among a set of possible ones, which optimize the expected reward, possibly accounting for the uncertainty of the environment. In this paper, we introduce the possibility to encode decision theory problems with Probabilistic Answer Set Programming under the credal semantics via decision atoms and utility attributes. To solve the task, we propose an algorithm based on three layers of Algebraic Model Counting, which we test on several synthetic datasets against an algorithm that adopts answer set enumeration. Empirical results show that our algorithm can manage non-trivial instances of programs in a reasonable amount of time. Under consideration in Theory and Practice of Logic Programming (TPLP).
... Given V = {p1, ..., pn} ⊆ PS a finite set of propositional symbols, we call a pseudo-Boolean function (PBF) over V a total function of type 2^V → Q. (The somewhat abusive expression "over V" must be understood as "over variables from V".) There are several ways to represent pseudo-Boolean functions, notably generalizations of BDDs; let us mention Algebraic Decision Diagrams (ADDs) [1], Semiring-Labelled Decision Diagrams (SLDDs) [13,28], Affine Algebraic Decision Diagrams (AADDs) [21], and Probabilistic Sentential Decision Diagrams (PSDDs) [16]. These languages are of varying succinctness (i.e., some are able to represent PBFs more compactly than others) and do not have the same efficiency for operations (such as summing PBFs or "forgetting" variables). ...
... We used a (heavily) modified version of the ADD package in pyddlib [6]. Let us now briefly introduce ADDs: Algebraic decision diagrams [1] are a generalization to non-Boolean values of the BDD language, in which the two terminal nodes 0 and 1 of BDDs are replaced by as many terminal nodes as necessary. More precisely, an ADD is a directed acyclic graph with one root, where every node is labelled with a symbol p ∈ PS and has two outgoing arcs, for then and else respectively. ...
Conference Paper
Full-text available
Probabilistic Dynamic Epistemic Logic (PDEL) is a formalism for reasoning about the higher-order probabilistic knowledge of agents, and about how this knowledge changes when events occur. While PDEL has been studied for its theoretical appeal, it was only ever applied to toy examples: the combinatorial explosion of probabilistic Kripke structures makes the PDEL framework impractical for realistic applications, such as card games. This paper is a first step towards the use of PDEL in more practical settings: in line with recent work applying ideas from symbolic model checking to (non-probabilistic) DEL, we propose a "symbolic" representation of probabilistic Kripke structures as pseudo-Boolean functions, which can be represented with several data structures of the decision diagram family, in particular Algebraic Decision Diagrams (ADDs). We show that ADDs scale much better than explicit Kripke structures, and that they allow for efficient symbolic model checking, even on the realistic example of the Hanabi card game, thus paving the way towards the practical application of epistemic planning techniques.
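The excerpt above defines PBFs and mentions operations such as summing and "forgetting" variables. The sketch below (illustrative, working on explicit tables rather than ADDs) pins down what forgetting means: summing the two instantiations of the forgotten variable.

```python
from itertools import product
from fractions import Fraction

# Two PBF operations on the explicit representation 2^V -> Q: tabulation
# and "forgetting" a variable by summing its two instantiations. ADD-based
# tools perform the same operations on shared DAGs instead of full tables.

def pbf_table(variables, fn):
    """Tabulate a PBF given as a Python function of an assignment dict."""
    return {bits: Fraction(fn(dict(zip(variables, bits))))
            for bits in product([0, 1], repeat=len(variables))}

def forget(table, variables, var):
    """Sum out `var`: (forget f)(u) = f(u[var:=0]) + f(u[var:=1])."""
    i = variables.index(var)
    rest, out = variables[:i] + variables[i + 1:], {}
    for bits, val in table.items():
        key = bits[:i] + bits[i + 1:]
        out[key] = out.get(key, Fraction(0)) + val
    return rest, out

vs = ["p", "q"]
f = pbf_table(vs, lambda m: 2 * m["p"] + m["q"])   # a small PBF over {p, q}
rest, g = forget(f, vs, "q")                        # g(p) = 4p + 1
```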
... However, factored probabilistic MOMDPs have not yet been proposed, to the best of our knowledge, probably because of the intricacy of reasoning with a mixture of a finite state subspace and an infinite belief state subspace due to the probabilistic model, contrary to the possibilistic case where both subspaces are finite. The famous algorithm SPUDD (Hoey et al. 1999) solves factored probabilistic MDPs by using symbolic functional representations of value functions and policies in the form of Algebraic Decision Diagrams (ADDs) (Bahar et al. 1997), which compactly encode real-valued functions of Boolean variables: ADDs are directed acyclic graphs whose nodes represent state variables and whose leaves are the function's values. Instead of updating states individually at each iteration of the algorithm, states are aggregated within ADDs and operations are symbolically and directly performed on ADDs over many states at once. ...
... Thus, a factored π-MOMDP can be defined with transition functions T_{a,i} = π(X_i | parents(X_i), a) for each action a and variable X_i. Each transition function can be compactly encoded in an Algebraic Decision Diagram (ADD) (Bahar et al. 1997). An ADD, as illustrated in Figure 3a, is a directed acyclic graph which compactly represents a real-valued function of binary variables, in which identical sub-graphs are merged and zero-valued leaves are not stored. ...
Article
Qualitative Possibilistic Mixed-Observable MDPs (pi-MOMDPs), generalizing pi-MDPs and pi-POMDPs, are well-suited models for planning under uncertainty with mixed observability, when transition, observation and reward functions are not precisely known and can be qualitatively described. Functions defining the model, as well as intermediate calculations, are valued in a finite possibilistic scale L, which induces a finite belief state space under partial observability, contrary to its probabilistic counterpart. In this paper, we propose the first study of factored pi-MOMDP models in order to solve large structured planning problems under qualitative uncertainty, or problems considered as qualitative approximations of probabilistic ones. Building upon the SPUDD algorithm for solving factored (probabilistic) MDPs, we conceived a symbolic algorithm named PPUDD for solving factored pi-MOMDPs. Whereas the leaves of SPUDD's decision diagrams may take as many values as there are states, since they are real numbers aggregated through additions and multiplications, PPUDD's leaves always remain in the finite scale L via min and max operations only. Our experiments show that PPUDD's computation time is much lower than that of SPUDD, Symbolic-HSVI and APPL for possibilistic and probabilistic versions of the same benchmarks under either total or mixed observability, while still providing high-quality policies.
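A toy one-step backup illustrating the abstract's point about leaf values, with hypothetical numbers: the probabilistic (+, x) aggregation creates fresh real values, while the possibilistic (max, min) aggregation can only ever produce values already in the finite scale L.

```python
# Hypothetical two-successor backup. SPUDD-style aggregation with (+, *)
# produces new reals, so ADD leaves proliferate; PPUDD-style aggregation
# with (max, min) stays inside the finite scale L by construction.

L = [0.0, 0.5, 1.0]                      # finite possibilistic scale

probs   = {"s1": 0.3, "s2": 0.7}         # probabilistic transition weights
poss    = {"s1": 0.5, "s2": 1.0}         # possibilistic weights, drawn from L
value_p = {"s1": 0.25, "s2": 0.8}        # real-valued estimates
value_l = {"s1": 0.5, "s2": 1.0}         # estimates confined to L

spudd_backup = sum(probs[s] * value_p[s] for s in probs)    # 0.635, a new real
ppudd_backup = max(min(poss[s], value_l[s]) for s in poss)  # 1.0, stays in L
assert ppudd_backup in L
```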
... Here we exploit this possibility and propose a novel QSP algorithm for quantum states represented by reduced ordered decision diagrams. Decision diagrams are directed acyclic graphs over a set of Boolean variables and a non-empty terminal set with exactly one root node [24]. Decision diagrams avoid redundancies and lead to a more compact representation of logic functions. ...
... Algebraic decision diagram. An Algebraic Decision Diagram (ADD) is the same as a BDD, except that its terminal nodes can have arbitrary values [24]. In other words, BDDs are ADDs whose terminal nodes have binary values. ...
Preprint
Loading classical data into quantum registers is one of the most important primitives of quantum computing. While the complexity of preparing a generic quantum state is exponential in the number of qubits, in many practical tasks the state to prepare has a certain structure that allows for faster preparation. In this paper, we consider quantum states that can be efficiently represented by (reduced) decision diagrams, a versatile data structure for the representation and analysis of Boolean functions. We design an algorithm that utilises the structure of decision diagrams to prepare their associated quantum states. Our algorithm has a circuit complexity that is linear in the number of paths in the decision diagram. Numerical experiments show that our algorithm reduces the circuit complexity by up to 31.85% compared to the state-of-the-art algorithm, when preparing generic n-qubit states with different degrees of non-zero amplitudes. Additionally, for states with sparse decision diagrams, including the initial state of the quantum Byzantine agreement protocol, our algorithm reduces the number of CNOTs by 86.61% to 99.9%.
... Listing 1 shows a simplified illustration of the core functionality enabled by DataScope: given an end-to-end ML pipeline and a utility (e.g., sklearn.metrics.accuracy), DataScope computes the Shapley value of each training example as its importance with respect to the given utility. ...
... One key result from this line of work is that, if we can construct certain polynomial-size data structures to represent our logical formula, then we can perform model counting in polynomial time. Among the most notable of such data structures are decision diagrams, specifically binary decision diagrams [41,12] and their various derivatives [6,62,40]. For our purpose in this paper, we use the additive decision diagrams (ADD), as detailed below. ...
Preprint
Full-text available
Developing modern machine learning (ML) applications is data-centric, of which one fundamental challenge is to understand the influence of data quality on ML training -- "Which training examples are 'guilty' in making the trained ML model predictions inaccurate or unfair?" Modeling data influence for ML training has attracted intensive interest over the last decade, and one popular framework is to compute the Shapley value of each training example with respect to utilities such as validation accuracy and fairness of the trained ML model. Unfortunately, despite recent intensive interest and research, existing methods only consider a single ML model "in isolation" and do not consider an end-to-end ML pipeline that consists of data transformations, feature extractors, and ML training. We present Ease.ML/DataScope, the first system that efficiently computes Shapley values of training examples over an end-to-end ML pipeline, and illustrate its applications in data debugging for ML training. To this end, we first develop a novel algorithmic framework that computes Shapley value over a specific family of ML pipelines that we call canonical pipelines: a positive relational algebra query followed by a K-nearest-neighbor (KNN) classifier. We show that, for many subfamilies of canonical pipelines, computing Shapley value is in PTIME, contrasting the exponential complexity of computing Shapley value in general. We then put this to practice -- given an sklearn pipeline, we approximate it with a canonical pipeline to use as a proxy. We conduct extensive experiments illustrating different use cases and utilities. Our results show that DataScope is up to four orders of magnitude faster over state-of-the-art Monte Carlo-based methods, while being comparably, and often even more, effective in data debugging.
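To fix ideas on the quantity being computed, here is a brute-force Shapley computation over a hypothetical three-example utility. DataScope's contribution is precisely avoiding this exponential enumeration for canonical pipelines; this sketch only pins down the definition.

```python
from itertools import permutations

# Brute-force Shapley values for a toy, hypothetical utility u over three
# training examples: each example's value is its average marginal
# contribution over all orders in which examples can be added.

examples = ("a", "b", "c")
u = lambda S: len({"a", "b"} & set(S)) / 2      # toy utility: wants a and b

def shapley(x):
    orders = list(permutations(examples))
    total = 0.0
    for order in orders:
        i = order.index(x)
        total += u(order[:i + 1]) - u(order[:i])   # marginal contribution
    return total / len(orders)

print({x: shapley(x) for x in examples})   # {'a': 0.5, 'b': 0.5, 'c': 0.0}
```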
... We stated earlier that some possible optimizations would be: 1) that zero-padding can be removed almost entirely by the work of [78], and 2) that kernel tiling algorithms can be used for improving speed [72]. However, it is worth considering whether algebraic decision diagrams (ADDs) [15] can be used in the discrete-reward setting for reducing space complexity. Current model-checking tools such as [64] employ decision diagrams such as multi-terminal binary decision diagrams (MTBDDs) to help circumvent state space explosion. ...
... Current model-checking tools such as [64] employ decision diagrams such as the multi-terminal binary decision diagrams (MTBDD) to help circumvent state space explosion. The paper [15] presents an algorithm for matrix multiplication when matrices are represented as ADDs, and the exact-power method for example requires not much more than that in terms of unique operations. ...
Preprint
Full-text available
Probabilistic model-checking is a field which seeks to automate the formal analysis of probabilistic models such as Markov chains. In this thesis, we study and develop the stochastic Markov reward model (sMRM), which extends the Markov chain with rewards as random variables. The model, having only recently been introduced, does not have much in the way of techniques and algorithms for its analysis. The purpose of this study is to derive such algorithms that are both scalable and accurate. Additionally, we derive the necessary theory for probabilistic model-checking of sMRMs against existing temporal logics such as PRCTL. We present the equations for computing first-passage reward densities, expected value problems, and other reachability problems. Our focus however is on finding strictly numerical solutions for first-passage reward densities. We solve for these by firstly adapting known direct linear algebra algorithms such as Gaussian elimination, and iterative methods such as the power method, Jacobi and Gauss-Seidel. We provide solutions for both discrete-reward sMRMs, where all rewards are discrete (lattice) random variables, and continuous-reward sMRMs, where all rewards are strictly continuous random variables, but not necessarily with continuous probability density functions (pdfs). Our solutions involve the use of the fast Fourier transform (FFT) for faster computation, and we adapt existing quadrature rules for convolution to obtain more accurate solutions, using rules such as the trapezoid rule, Simpson's rule or Romberg's method.
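The excerpts above refer to the matrix-multiplication algorithm of the present paper. The sketch below shows only the underlying block recursion, on dense NumPy arrays rather than ADDs: indexing a 2^n x 2^n matrix by row/column bits and recursing on the top bit yields the four block products an ADD package evaluates on shared diagram nodes.

```python
import numpy as np

# The block-recursive view behind ADD matrix multiplication (illustration
# only, not the ADD algorithm itself): splitting on the top index bit gives
# C_ij = A_i0 @ B_0j + A_i1 @ B_1j for each of the four blocks of C.

def block_mul(A, B):
    n = A.shape[0]                       # assumes square, power-of-two size
    if n == 1:
        return A * B
    h = n // 2
    C = np.empty_like(A)
    for i in (0, 1):                     # top row-index bit of C
        for j in (0, 1):                 # top column-index bit of C
            C[i*h:(i+1)*h, j*h:(j+1)*h] = (
                block_mul(A[i*h:(i+1)*h, 0:h], B[0:h, j*h:(j+1)*h]) +
                block_mul(A[i*h:(i+1)*h, h:n], B[h:n, j*h:(j+1)*h]))
    return C

A = np.arange(16.0).reshape(4, 4)
B = np.eye(4)
assert np.allclose(block_mul(A, B), A @ B)
```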
... The symbolic dynamic-programming approach we propose here is inspired by progress in weighted model counting, which is the problem of counting the (weighted) satisfying assignments of Boolean formulas. Dudek, Phan, and Vardi proposed in [14] an approach based on Algebraic Decision Diagrams, which are the quantitative variant of BDDs [4]. Dudek et al. pointed out that a monolithic approach is not likely to be scalable, and proposed a factored approach, ADDMC, analogous to the approach in [28], in which conjunction is done lazily and quantification eagerly. ...
Chapter
Full-text available
Inspired by recent progress in dynamic programming approaches for weighted model counting, we investigate a dynamic-programming approach in the context of Boolean realizability and synthesis, which takes a conjunctive-normal-form Boolean formula over input and output variables, and aims at synthesizing witness functions for the output variables in terms of the inputs. We show how graded project-join trees, obtained via tree decomposition, can be used to compute a BDD representing the realizability set for the input formulas in a bottom-up order. We then show how the intermediate BDDs generated during the realizability-checking phase can be applied to synthesizing the witness functions in a top-down manner. An experimental evaluation of a solver, DPSynth, based on these ideas demonstrates that our approach to Boolean realizability and synthesis has superior time and space performance over a heuristics-based approach using the same symbolic representations. We discuss the scalability advantage of the new approach, and also investigate our findings on the performance of the DP framework.
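The excerpt above summarizes ADDMC's discipline of lazy conjunction and eager quantification. The following sketch mimics that discipline on explicit factor tables (hypothetical weights; real implementations use ADDs for the same bookkeeping): each variable is summed out as soon as all factors mentioning it have been multiplied.

```python
from itertools import product

# "Conjoin lazily, project eagerly" on explicit pseudo-Boolean factors:
# a factor is (variable list, {assignment tuple: weight}).

def factor(vars_, fn):
    return (vars_, {bits: fn(dict(zip(vars_, bits)))
                    for bits in product([0, 1], repeat=len(vars_))})

def multiply(f, g):
    (fv, ft), (gv, gt) = f, g
    vs = fv + [v for v in gv if v not in fv]
    t = {}
    for bits in product([0, 1], repeat=len(vs)):
        m = dict(zip(vs, bits))
        t[bits] = ft[tuple(m[v] for v in fv)] * gt[tuple(m[v] for v in gv)]
    return (vs, t)

def sum_out(f, var):
    """Project `var` away by summing its two instantiations."""
    fv, ft = f
    i = fv.index(var)
    t = {}
    for bits, w in ft.items():
        key = bits[:i] + bits[i + 1:]
        t[key] = t.get(key, 0) + w
    return (fv[:i] + fv[i + 1:], t)

# WMC of (x or y) with literal weights w(x)=0.3, w(-x)=0.7, w(y)=0.6, w(-y)=0.4
wx = factor(["x"], lambda m: 0.3 if m["x"] else 0.7)
wy = factor(["y"], lambda m: 0.6 if m["y"] else 0.4)
cl = factor(["x", "y"], lambda m: 1 if (m["x"] or m["y"]) else 0)
f = sum_out(multiply(wx, cl), "x")   # x is fully multiplied in: project early
f = sum_out(multiply(wy, f), "y")
print(f[1][()])                      # 0.72 == 1 - 0.7*0.4
```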
... This parallel structure is much harder for humans to follow and therefore poses a challenge with regards to explainability. In the paper "Algebraic Aggregation of Random Forests: Towards Explainability and Rapid Evaluation" [17], the authors transform each tree in a random forest into a semantically equivalent algebraic decision diagram (ADD) [5] and leverage their algebraic properties to merge the trees into one singular ADD that faithfully captures the semantics of the entire forest. Structurally, the resulting ADD is equivalent to a single decision structure and uses only predicates from each singular tree, making it a comprehensible model. ...
Article
Full-text available
In this paper, we present the envisioned style and scope of the new topic "Explanation Paradigms Leveraging Analytic Intuition" (ExPLAIn) within the International Journal on Software Tools for Technology Transfer (STTT). The intention behind this new topic is to (1) explicitly address all aspects and issues that arise when trying to, if possible, reveal and then confirm hidden properties of black-box systems, or (2) to enforce vital properties by embedding them into appropriate system contexts. Machine-learned systems, such as Deep Neural Networks, are particularly challenging black-box systems, and there is a wealth of formal methods for analysis and verification waiting to be adapted and applied. The selection of papers in this first Special Section of ExPLAIn, most of which were co-authored by editorial board members, is an illustrative example of the style and scope envisioned: in addition to methodological papers on verification, explanation, and their scalability, case studies, tool papers, literature reviews, and position papers are also welcome.
... While there are frameworks which partially support symbolic evaluation of such functions, e.g. algebraic decision diagrams (Bahar et al. 1997), we are not aware of any implementation that would support all the operations required by BMA. We thus opted to enumerate the whole function table and re-encode it back into individual BDDs. ...
Article
Full-text available
Motivation: Boolean networks are a simple but efficient mathematical formalism for modelling complex biological systems. However, having only two levels of activation is sometimes not enough to fully capture the dynamics of real-world biological systems. Hence the need for multi-valued networks (MVNs), a generalization of Boolean networks. Despite the importance of MVNs for modelling biological systems, only limited progress has been made on developing theories, analysis methods, and tools that can support them. In particular, the recent use of trap spaces in Boolean networks has made a great impact on the field of systems biology, but to date no similar concept has been defined and studied for MVNs. Results: In this work, we generalize the concept of trap spaces in Boolean networks to MVNs. We then develop the theory and the analysis methods for trap spaces in MVNs. In particular, we implement all proposed methods in a Python package called trapmvn. Besides showing the applicability of our approach via a realistic case study, we also evaluate the time efficiency of the method on a large collection of real-world models. The experimental results confirm the time efficiency, which we believe enables more accurate analysis of larger and more complex multi-valued models. Availability and implementation: Source code and data are freely available at https://github.com/giang-trinh/trap-mvn.
... Our implementation heuristically reduces the number of semantically equivalent nodes of the ADSs. However, in contrast to Algebraic Decision Diagrams [5], which are known for their normal forms, we cannot guarantee canonicity here. ...
Article
Full-text available
In this paper, we present an algebraic approach to the precise and global verification and explanation of Rectifier Neural Networks, a subclass of Piece-wise Linear Neural Networks (PLNNs), i.e., networks that semantically represent piece-wise affine functions. Key to our approach is the symbolic execution of these networks that allows the construction of semantically equivalent Typed Affine Decision Structures (TADS). Due to their deterministic and sequential nature, TADS can, similarly to decision trees, be considered as white-box models and therefore as precise solutions to the model and outcome explanation problem. TADS are linear algebras, which allows one to elegantly compare Rectifier Networks for equivalence or similarity, both with precise diagnostic information in case of failure, and to characterize their classification potential by precisely characterizing the set of inputs that are specifically classified, or the set of inputs where two network-based classifiers differ. All phenomena are illustrated along a detailed discussion of a minimal, illustrative example: the continuous XOR function.
... Algebraic Decision Diagrams (ADDs), also known as Multi-Terminal BDDs, are a generalization of BDDs in which a Boolean formula can evaluate to values from some set [23]. Mathematically, ADDs represent Boolean functions f : 2^X → C, where X is a set of Boolean variables and C ⊂ R is a finite set of constants. ...
Preprint
Full-text available
This work introduces efficient symbolic algorithms for quantitative reactive synthesis. We consider resource-constrained robotic manipulators that need to interact with a human to achieve a complex task expressed in linear temporal logic. Our framework generates reactive strategies that not only guarantee task completion but also seek cooperation with the human when possible. We model the interaction as a two-player game and consider regret-minimizing strategies to encourage cooperation. We use a symbolic representation of the game to enable scalability. For synthesis, we first introduce value iteration algorithms for such games with min-max objectives. Then, we extend our method to regret-minimizing objectives. Our benchmarks reveal that our symbolic framework not only significantly improves computation time (up to an order of magnitude) but also scales to much larger instances of manipulation problems, with up to 2x the number of objects and locations of the state of the art.
... While SAT-based methods for model checking have become increasingly popular, doing reachability analysis with decision diagrams is still an important component of many state-of-the-art model checking tools, as can be seen in the Model Checking Contest (MCC) [30,40]. Symbolic reachability analysis with binary decision diagrams (BDDs) and other variants [6,20,27,41] is done by encoding both the initial system state S init and its transition relation R in the diagram. The set of reachable states S is then iteratively computed using the image operation [33], denoted by S.R, starting from S init . ...
Preprint
Full-text available
Saturation is considered the state-of-the-art method for computing fixpoints with decision diagrams. We present a relatively simple decision diagram operation called REACH that also computes fixpoints. In contrast to saturation, it does not require a partitioning of the transition relation. We give sequential algorithms implementing the new operation for both binary and multi-valued decision diagrams, and moreover provide parallel counterparts. We implement these algorithms and experimentally compare their performance against saturation on 692 model checking benchmarks in different languages. The results show that the REACH operation often outperforms saturation, especially on transition relations with low locality. In a comparison between parallelized versions of REACH and saturation we find that REACH obtains comparable speedups up to 16 cores, although it falls behind saturation at 64 cores. Finally, in a comparison with the state-of-the-art model checking tool ITS-tools we find that REACH outperforms ITS-tools on 29% of models, suggesting that REACH can be useful as a complementary method in an ensemble tool.
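The excerpt above describes symbolic reachability as iterated image computation. Below is a set-based sketch of that fixpoint loop, with Python sets standing in for the BDD-encoded state set S, initial set S_init, and transition relation R; `image(S, R)` corresponds to the symbolic operation written S.R above.

```python
# Reachability as a least fixpoint of the image operation. Real tools
# perform exactly this loop on BDDs/MDDs; sets make the logic visible.

def image(S, R):
    """Successors of S under the transition relation R."""
    return {t for (s, t) in R if s in S}

def reachable(S_init, R):
    S = set(S_init)
    frontier = set(S_init)
    while frontier:                      # iterate until nothing new appears
        frontier = image(frontier, R) - S
        S |= frontier
    return S

R = {(0, 1), (1, 2), (2, 0), (3, 4)}     # 3 -> 4 is unreachable from 0
assert reachable({0}, R) == {0, 1, 2}
```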
... IBAL [Pfeffer 2001] compiles to a factor graph; Fun [Borgström et al. 2011] compiles to factor graphs with gates [Minka and Winn 2008]. Dice [Holtzen et al. 2020] and BernoulliProb [Claret et al. 2013] compile to binary/algebraic decision diagrams [Bahar et al. 1997;Bryant 1986]. SPPL [Saad et al. 2021] compiles to sum-product networks [Poon and Domingos 2011], and FSPN [Stuhlmüller and Goodman 2012] generalizes sum-product networks to represent recursive dependencies. ...
Preprint
Recursive calls over recursive data are widely useful for generating probability distributions, and probabilistic programming allows computations over these distributions to be expressed in a modular and intuitive way. Exact inference is also useful, but unfortunately, existing probabilistic programming languages do not perform exact inference on recursive calls over recursive data, forcing programmers to code many applications manually. We introduce a probabilistic language in which a wide variety of recursion can be expressed naturally, and inference carried out exactly. For instance, probabilistic pushdown automata and their generalizations are easy to express, and polynomial-time parsing algorithms for them are derived automatically. We eliminate recursive data types using program transformations related to defunctionalization and refunctionalization. These transformations are assured correct by a linear type system, and a successful choice of transformations, if there is one, is guaranteed to be found by a greedy algorithm.
... In design automation for conventional circuits and systems, decision diagrams have been established since the 1990s (see, e.g., [15], [16]). There, it has been shown that the size of decision diagrams significantly depends on the order in which the variables (in the case of decision diagrams for quantum computing, the qubits) are encoded. ...
Chapter
Full-text available
Decision diagrams have proven to be a useful data structure in both conventional and quantum computing, in many cases compactly representing exponentially large data. Several approaches exist to further reduce the size of decision diagrams, i.e., their number of nodes. Reordering is one such approach, shrinking decision diagrams by changing the order of variables in the representation. In the conventional world, this approach is established and its availability taken for granted. For quantum computing, however, first approaches exist but have not yet been able to fully exploit a similar potential. In this paper, we investigate the differences between reordering decision diagrams in the conventional and the quantum world and, afterwards, unveil challenges that explain why reordering is much harder in the latter. A case study shows that, also for quantum computing, reordering may lead to improvements of several orders of magnitude in the size of the decision diagrams, but also requires substantially more runtime.
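A small experiment in the spirit of the observation above (illustrative; real packages count diagram nodes directly): for each prefix of a variable order we count the distinct non-constant cofactors of a function, a quantity that tracks ROBDD size, and compare two orders for the same function.

```python
from itertools import product

# Diagram size depends heavily on the variable order. For the function
# (a and b) or (c and d), keeping the pairs adjacent in the order yields
# fewer distinct cofactors than splitting them apart.

def count_cofactors(f, order):
    n, total = len(order), 0
    for level in range(n):
        prefix, rest = order[:level], order[level:]
        cofs = set()
        for vals in product([0, 1], repeat=level):
            env = dict(zip(prefix, vals))
            # tabulate the cofactor of f as a function of the remaining vars
            tab = tuple(f({**env, **dict(zip(rest, r))})
                        for r in product([0, 1], repeat=len(rest)))
            if len(set(tab)) > 1:        # ignore constant cofactors
                cofs.add(tab)
        total += len(cofs)
    return total

f = lambda m: (m["a"] and m["b"]) or (m["c"] and m["d"])
print(count_cofactors(f, ["a", "b", "c", "d"]))   # pairs adjacent: 5
print(count_cofactors(f, ["a", "c", "b", "d"]))   # pairs split: 7
```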
... Following the success of OBDDs, several variants of decision diagrams have been proposed (e.g., Minato 1993; Bahar et al. 1997). The Sentential Decision Diagram (SDD) (Darwiche 2011) is one such prominent variant of the OBDD. ...
Article
The Sentential Decision Diagram (SDD) is a prominent knowledge representation language that subsumes the Ordered Binary Decision Diagram (OBDD) as a strict subset. Like OBDDs, SDDs have canonical forms and support bottom-up operations for combining SDDs, but they are more succinct than OBDDs. In this paper we introduce an SDD variant, called the Zero-suppressed Sentential Decision Diagram (ZSDD). The key idea of ZSDD is to employ new trimming rules for obtaining a canonical form. As a result, ZSDD subsumes the Zero-suppressed Binary Decision Diagram (ZDD) as a strict subset. ZDDs are known for their effectiveness on representing sparse Boolean functions. Likewise, ZSDDs can be more succinct than SDDs when representing sparse Boolean functions. We propose several polytime bottom-up operations over ZSDDs, and a technique for reducing ZSDD size, while maintaining applicability to important queries. We also specify two distinct upper bounds on ZSDD sizes; one is derived from the treewidth of a CNF and the other from the size of a family of sets. Experiments show that ZSDDs are smaller than SDDs or ZDDs for a standard benchmark dataset.
Article
In Weighted Model Counting (WMC), we assign weights to literals and compute the sum of the weights of the models of a given propositional formula, where the weight of an assignment is the product of the weights of its literals. Current WMC solvers work on Conjunctive Normal Form (CNF) formulas. However, CNF is not a natural representation for human beings in many applications. Motivated by the stronger expressive power of Pseudo-Boolean (PB) formulas compared to CNF, we propose to perform WMC on PB formulas. Based on a recent dynamic programming algorithm framework called ADDMC for WMC, we implement a weighted PB counting tool PBCounter. We compare PBCounter with the state-of-the-art weighted model counters SharpSAT-TD, ExactMC, D4, and ADDMC, where the latter tools work on CNF with encoding methods that convert PB constraints into a CNF formula. The experiments on three domains of benchmarks show that PBCounter is superior to the model counters on CNF formulas.
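For reference, here is the WMC definition above spelled out by brute force on a two-variable formula with hypothetical weights; PBCounter and the other solvers compute the same quantity without enumerating assignments.

```python
from itertools import product

# WMC by enumeration: sum, over all models of the formula, of the product
# of the weights of the literals each model makes true.

w = {("x", 1): 0.3, ("x", 0): 0.7,       # weights of x and its negation
     ("y", 1): 0.6, ("y", 0): 0.4}
formula = lambda m: m["x"] or not m["y"]  # the formula x OR (NOT y)

wmc = 0.0
for bits in product([0, 1], repeat=2):
    m = dict(zip(["x", "y"], bits))
    if formula(m):
        wmc += w[("x", m["x"])] * w[("y", m["y"])]
print(wmc)   # 0.3*0.6 + 0.3*0.4 + 0.7*0.4 = 0.58
```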
Chapter
Explaining a classification made by tree-ensembles is an inherently hard problem that is traditionally solved approximately, without guaranteeing sufficiency or necessity. Abductive explanations were the first attempt to provide concise sufficient information: Given a sample, they consist of the minimal set of features that are relevant for the outcome. Inflated explanations are a refinement that additionally specify how much at least one feature must be altered in order to allow a change of the prediction. In this paper, we present the first algorithm for generating inflated explanations for gradient boosted trees, today’s de facto standard for tree-based classifiers. Key to our algorithm is a compilation approach based on algebraic decision diagrams. The impact of our approach is illustrated along a number of popular data sets.
Article
This paper presents a new data structure, called Weighted Context-Free-Language Ordered BDDs (WCFLOBDDs), a hierarchically structured decision diagram akin to Weighted BDDs (WBDDs) enhanced with a procedure-call mechanism. For some functions, WCFLOBDDs are exponentially more succinct than WBDDs. They are potentially beneficial for representing functions of type B^n → D, when a function's image V ⊆ D has many different values. We apply WCFLOBDDs in quantum-circuit simulation, and find that they perform better than WBDDs on certain benchmarks. With a 15-minute timeout, the number of qubits that can be handled by WCFLOBDDs is 1-64× that of WBDDs (and 1-128× that of CFLOBDDs, which are an unweighted version of WCFLOBDDs). These results support the conclusion that for this application, from the standpoint of problem size measured as the number of qubits, WCFLOBDDs provide the best of both worlds: performance roughly matches whichever of WBDDs and CFLOBDDs is better. (From the standpoint of running time, the results are more nuanced.)
Chapter
Full-text available
We present QReach, the first reachability analysis tool for quantum Markov chains, based on CFLOBDD decision diagrams (presented at CAV 2023). QReach provides a novel framework for finding reachable subspaces, as well as a series of model-checking subprocedures such as image computation. Experiments indicate its practicality in the verification of quantum circuits and algorithms. QReach is expected to play a central role in future quantum model checkers.
Article
Due to their quantitative nature, probabilistic programs pose non-trivial challenges for designing compositional and efficient program analyses. Many analyses for probabilistic programs rely on iterative approximation. This article presents an interprocedural dataflow-analysis framework, called NPA-PMA, for designing and implementing (partially) non-iterative program analyses of probabilistic programs with unstructured control-flow, nondeterminism, and general recursion. NPA-PMA is based on Newtonian Program Analysis (NPA), a generalization of Newton's method to solve equation systems over semirings. The key challenge for developing NPA-PMA is to handle multiple kinds of confluences in both the algebraic structures that specify analyses and the equation systems that encode control flow: semirings support a single confluence operation, whereas NPA-PMA involves three confluence operations (conditional, probabilistic, and nondeterministic). Our work introduces ω-continuous pre-Markov algebras (ωPMAs) to factor out common parts of different analyses; adopts regular infinite-tree expressions to encode probabilistic programs with unstructured control-flow; and presents a linearization method that makes Newton's method applicable to the setting of regular-infinite-tree equations over ωPMAs. NPA-PMA allows analyses to supply a non-iterative strategy to solve linearized equations. Our experimental evaluation demonstrates that (i) NPA-PMA holds considerable promise for outperforming Kleene iteration, and (ii) provides great generality for designing program analyses.
Article
Full-text available
Recently, decision trees (DT) have been used as an explainable representation of controllers (a.k.a. strategies, policies, schedulers). Although they are often very efficient and produce small and understandable controllers for discrete systems, complex continuous dynamics still pose a challenge. In particular, when the relationships between variables take more complex forms, such as polynomials, they cannot be obtained using the available DT learning procedures. In contrast, support vector machines provide a more powerful representation, capable of discovering many such relationships, but not in an explainable form. Therefore, we suggest to combine the two frameworks to obtain an understandable representation over richer, domain-relevant algebraic predicates. We demonstrate and evaluate the proposed method experimentally on established benchmarks.
Chapter
Weighted model counting (WMC) is an extension of propositional model counting with applications to probabilistic inference and other areas of artificial intelligence. In recent experiments, WMC algorithms perform similarly overall but with significant differences on specific subsets of benchmarks. A good understanding of the differences in the performance of algorithms requires identifying key characteristics that favour some algorithms over others. In this paper, we introduce a random model for WMC instances with a parameter that influences primal treewidth—the parameter most commonly used to characterise the difficulty of an instance. We then use this model to experimentally compare the performance of WMC algorithms c2d, Cachet, d4, DPMC, and miniC2D. Using these random instances, we show that the easy-hard-easy pattern is different for algorithms based on dynamic programming and algebraic decision diagrams than for all other solvers. We also show how all WMC algorithms scale exponentially with respect to primal treewidth and how this scalability varies across algorithms and densities. Finally, we combine insights from experiments involving both random and competition instances to determine how the best-performing WMC algorithm varies depending on clause density and primal treewidth. Keywords: Weighted model counting, Random model, Parameterised complexity.
Article
Recursive calls over recursive data are useful for generating probability distributions, and probabilistic programming allows computations over these distributions to be expressed in a modular and intuitive way. Exact inference is also useful, but unfortunately, existing probabilistic programming languages do not perform exact inference on recursive calls over recursive data, forcing programmers to code many applications manually. We introduce a probabilistic language in which a wide variety of recursion can be expressed naturally, and inference carried out exactly. For instance, probabilistic pushdown automata and their generalizations are easy to express, and polynomial-time parsing algorithms for them are derived automatically. We eliminate recursive data types using program transformations related to defunctionalization and refunctionalization. These transformations are assured correct by a linear type system, and a successful choice of transformations, if there is one, is guaranteed to be found by a greedy algorithm.
Chapter
Saturation is considered the state-of-the-art method for computing fixpoints with decision diagrams. We present a relatively simple decision diagram operation called Reach that also computes fixpoints. In contrast to saturation, it does not require a partitioning of the transition relation. We give sequential algorithms implementing the new operation for both binary and multi-valued decision diagrams, and moreover provide parallel counterparts. We implement these algorithms and experimentally compare their performance against saturation on 692 model checking benchmarks in different languages. The results show that the Reach operation often outperforms saturation, especially on transition relations with low locality. In a comparison between parallelized versions of Reach and saturation we find that Reach obtains comparable speedups up to 16 cores, although it falls behind saturation at 64 cores. Finally, in a comparison with the state-of-the-art model checking tool ITS-tools we find that Reach outperforms ITS-tools on 29% of models, suggesting that Reach can be useful as a complementary method in an ensemble tool.
Chapter
Full-text available
Stochastic games are a convenient formalism for modelling systems that comprise rational agents competing or collaborating within uncertain environments. Probabilistic model checking techniques for this class of models allow us to formally specify quantitative specifications of either collective or individual behaviour and then automatically synthesise strategies for the agents under which these specifications are guaranteed to be satisfied. Although good progress has been made on algorithms and tool support, efficiency and scalability remain a challenge. In this paper, we investigate a symbolic implementation based on multi-terminal binary decision diagrams. We describe how to build and verify turn-based stochastic games against either zero-sum or Nash equilibrium based temporal logic specifications. We collate a set of benchmarks for this class of games, and evaluate the performance of our approach, showing that it is superior in a number of cases and that strategies synthesised in a symbolic fashion can be considerably more compact.
Chapter
The field of machine learning focuses on computationally efficient, yet approximate algorithms. By contrast, the field of formal methods focuses on mathematical rigor and provable correctness. Despite their superficial differences, both fields offer mutual benefit. Formal methods offer methods to verify and explain machine learning systems, aiding their adoption in safety-critical domains. Machine learning offers approximate, computationally efficient approaches that let formal methods scale to larger problems. This paper gives an introduction to the track "Formal Methods Meets Machine Learning" (F3ML) and shortly presents its scientific contributions, structured into two thematic subthemes: one concerning formal-methods-based approaches for the explanation and verification of machine learning systems, and one concerning the employment of machine learning approaches to scale formal methods.
Article
Loading classical data into quantum registers is one of the most important primitives of quantum computing. While the complexity of preparing a generic quantum state is exponential in the number of qubits, in many practical tasks the state to prepare has a certain structure that allows for faster preparation. In this paper, we consider quantum states that can be efficiently represented by (reduced) decision diagrams, a versatile data structure for the representation and analysis of Boolean functions. We design an algorithm that utilizes the structure of decision diagrams to prepare their associated quantum states. Our algorithm has a circuit complexity that is linear in the number of paths in the decision diagram. Numerical experiments show that our algorithm reduces the circuit complexity by up to 31.85% compared to the state-of-the-art algorithm, when preparing generic n-qubit states with n^3 nonzero amplitudes. Additionally, for states with sparse decision diagrams, including the initial state of the quantum Byzantine agreement protocol, our algorithm reduces the number of controlled-nots by 86.61–99.9%.
Chapter
There is widespread use of zero-suppressed decision diagrams (ZDDs) in the design of logic circuits and in the efficient solution of combinatorial problems on sets, e.g., graph problems. ZDDs are especially efficient when there are relatively few elements compared to a large number of possibilities. In this chapter, we focus on circuit design, including circuits based on the irredundant sum-of-products form. In addition to completely specified Boolean functions, this chapter also considers incompletely specified functions. Upon completion of this chapter, the reader will have an understanding of how to implement ZDD procedures on the popular decision diagram package CUDD [39]. It is assumed that the reader is familiar with Boolean algebra. For example, completion of an undergraduate course in logic design is sufficient preparation for this chapter [37].
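A sketch of the zero-suppression rule the chapter builds on (illustrative Python, not the CUDD package it uses): a node whose then-child is the empty family is dropped entirely, which is why ZDDs stay small when set elements are scarce relative to the universe.

```python
# ZDD reduction in miniature. Nodes are ("node", var, else_child, then_child)
# tuples; the terminals are "empty" (no sets) and "base" (only the empty set).

def mk_zdd(var, lo, hi):
    if hi == "empty":    # zero-suppression: skip the node, keep the else-child
        return lo
    return ("node", var, lo, hi)

def sets_of(node, prefix=frozenset()):
    """Enumerate the family of sets a ZDD represents."""
    if node == "empty":
        return []
    if node == "base":
        return [prefix]
    _, var, lo, hi = node
    return sets_of(lo, prefix) + sets_of(hi, prefix | {var})

# the family {{1}, {1, 2}} over variables 1 < 2
fam = mk_zdd(1, "empty", mk_zdd(2, "base", "base"))
print(sets_of(fam))   # [frozenset({1}), frozenset({1, 2})]
```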
Chapter
We study several classes of symbolic weighted formalisms: automata (swA), transducers (swT) and visibly pushdown extensions (swVPA, swVPT). They combine the respective extensions of their symbolic and weighted counterparts, allowing a quantitative evaluation of words over a large or infinite input alphabet. We present properties of closure by composition, the computation of transducer-defined distances between nested words and languages, as well as a PTIME 1-best search algorithm for swVPA. These results are applied to solve in PTIME a variant of parsing over infinite alphabets. We illustrate this approach with a motivating use case in automated music transcription.
Article
Tensor networks have been successfully applied in simulation of quantum physical systems for decades. Recently, they have also been employed in classical simulation of quantum computing, in particular, random quantum circuits. This paper proposes a decision diagram style data structure, called TDD (Tensor Decision Diagram), for more principled and convenient applications of tensor networks. This new data structure provides a compact and canonical representation for quantum circuits. By exploiting circuit partition, the TDD of a quantum circuit can be computed efficiently. Furthermore, we show that the operations of tensor networks essential in their applications (e.g., addition and contraction) can also be implemented efficiently in TDDs. A proof-of-concept implementation of TDDs is presented and its efficiency is evaluated on a set of benchmark quantum circuits. It is expected that TDDs will play an important role in various design automation tasks related to quantum circuits, including but not limited to equivalence checking, error detection, synthesis, simulation, and verification.
Article
In "Towards Explainability in Machine Learning: The Formal Methods Way" [1], we illustrated last year how Explainable AI can profit from formal methods in terms of explainability. In fact, Explainable AI is a new branch of AI, directed at a finer-grained understanding of how the fancy heuristics and experimental fine-tuning of hyperparameters influence the outcomes of advanced AI tools and algorithms. We discussed the concept of "explanation," and showed how the stronger meaning of explanation in terms of formal models leads to a precise characterization of the phenomenon under consideration. We illustrated how, following the Algebraic Decision Diagram (ADD)-based aggregation technique originally established in Gossen and Steffen's work [2], we can produce precise information about, and an exact, deterministic prediction of, the outcome of a random forest consisting of 100 trees.
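An illustrative miniature of the aggregation step referenced above (our sketch, not the cited implementation): two decision stumps are merged pointwise into one decision structure whose leaves carry summed votes, the same Apply-style recursion a full ADD package would repeat per pair of trees in the forest.

```python
# Trees are ("leaf", votes) or ("node", predicate_id, else_t, then_t), with
# predicate ids drawn from one global, totally ordered pool so the merged
# structure tests each predicate at most once along any path.

def merge(t1, t2):
    if t1[0] == "leaf" and t2[0] == "leaf":
        return ("leaf", t1[1] + t2[1])             # accumulate class votes
    p1 = t1[1] if t1[0] == "node" else float("inf")
    p2 = t2[1] if t2[0] == "node" else float("inf")
    p = min(p1, p2)                                 # respect the global order
    l1, h1 = (t1[2], t1[3]) if p1 == p else (t1, t1)
    l2, h2 = (t2[2], t2[3]) if p2 == p else (t2, t2)
    lo, hi = merge(l1, l2), merge(h1, h2)
    return lo if lo == hi else ("node", p, lo, hi)

# two stump classifiers voting +1/-1 on predicates 0 and 1
ta = ("node", 0, ("leaf", -1), ("leaf", +1))
tb = ("node", 1, ("leaf", -1), ("leaf", +1))
forest = merge(ta, tb)   # one structure; leaves hold aggregate votes -2/0/+2
```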
Chapter
ω-regular energy games are two-player ω-regular games augmented with a requirement to avoid the exhaustion of a finite resource, e.g., battery or disk space. ω-regular energy games can be reduced to ω-regular games by encoding the energy level into the state space. As this approach blows up the state space, it performs poorly. Moreover, it is highly affected by the chosen energy bound denoting the resource's capacity. In this work, we present an alternative approach for solving ω-regular energy games, with two main advantages. First, our approach is efficient: it avoids the encoding of the energy level within the state space, and its performance is independent of the engineer's choice of the energy bound. Second, our approach is defined at the logic level, not at the algorithmic level, and thus allows solving ω-regular energy games by seamless reuse of existing symbolic fixed-point algorithms for ordinary ω-regular games. We base our work on the introduction of energy μ-calculus, a multi-valued extension of game μ-calculus. We have implemented our ideas and evaluated them. The empirical evaluation provides evidence for the efficiency of our work.
Chapter
Recent work in weighted model counting proposed a unifying framework for dynamic-programming algorithms. The core of this framework is a project-join tree: an execution plan that specifies how Boolean variables are eliminated. We adapt this framework to compute exact literal-weighted projected model counts of propositional formulas in conjunctive normal form. Our key conceptual contribution is the definition of gradedness on project-join trees, a novel condition requiring irrelevant variables to be eliminated before relevant variables. We prove that building graded project-join trees can be reduced to building standard project-join trees and that graded project-join trees can be used to compute projected model counts. The resulting tool ProCount is competitive with the state-of-the-art tools D4P, projMC, and reSSAT, achieving the shortest solving time on 131 of the 390 benchmarks (out of 849 in total) solved by at least one tool.
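The project-join machinery is the paper's contribution; the quantity it computes is easy to state directly. A brute-force sketch (ours, for definition only, exponential in the number of variables):

from itertools import product
from math import prod

# Brute-force literal-weighted *projected* model counting, to fix the
# definition the paper's graded project-join trees compute efficiently:
# sum, over assignments to the relevant variables X, of the product of
# the chosen literals' weights, where an X-assignment counts iff *some*
# assignment to the irrelevant variables Y satisfies the CNF.
def projected_wmc(clauses, X, Y, w):
    def sat(a):  # a: dict var -> bool
        return all(any(a[abs(l)] == (l > 0) for l in c) for c in clauses)
    total = 0.0
    for xs in product([False, True], repeat=len(X)):
        ax = dict(zip(X, xs))
        if any(sat({**ax, **dict(zip(Y, ys))})
               for ys in product([False, True], repeat=len(Y))):
            total += prod(w[v] if ax[v] else w[-v] for v in X)
    return total

# (x1 or y) and (x2 or not y); X = {x1, x2} relevant, y projected away.
clauses = [[1, 3], [2, -3]]          # vars: 1 = x1, 2 = x2, 3 = y
w = {1: 0.5, -1: 0.5, 2: 0.3, -2: 0.7}
print(projected_wmc(clauses, X=[1, 2], Y=[3], w=w))
# 0.65: every X-assignment except x1 = x2 = False extends to a model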
Chapter
Weighted model counting (WMC) is a powerful computational technique for a variety of problems, and it is especially common in probabilistic inference. However, the standard definition of WMC, which puts weights on literals, often forces WMC encodings to include additional variables and clauses just so that each weight can be attached to a literal. This paper complements previous work by considering WMC instances in their full generality and using recent state-of-the-art WMC techniques based on pseudo-Boolean function manipulation, which are competitive with the more traditional WMC algorithms based on knowledge compilation and backtracking search. We present an algorithm that transforms WMC instances into a format based on pseudo-Boolean functions while eliminating around 43% of variables on average across various Bayesian network encodings. Moreover, we identify sufficient conditions under which such variable removal is possible. Our experiments show significant improvement in WMC-based Bayesian network inference, outperforming the current state of the art.
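To make the contrast concrete, the sketch below (our illustration, not the paper's algorithm) computes a weighted model count with weights attached to arbitrary pseudo-Boolean factors rather than to literals, so no auxiliary variables are needed to encode, e.g., a conditional probability entry depending on two variables at once.

from itertools import product
from math import prod

# Sketch of the pseudo-Boolean view: weights attach to arbitrary
# functions of a few variables (factors), not just to single literals.
def wmc(variables, clauses, factors):
    total = 0.0
    for bits in product([False, True], repeat=len(variables)):
        a = dict(zip(variables, bits))
        if all(any(a[abs(l)] == (l > 0) for l in c) for c in clauses):
            total += prod(f(a) for f in factors)
    return total

# Toy Bayesian network: P(rain) = 0.2, P(wet | rain) = 0.9,
# P(wet | no rain) = 0.1; vars 1 = rain, 2 = wet; query: wet holds.
factors = [lambda a: 0.2 if a[1] else 0.8,
           lambda a: (0.9 if a[1] else 0.1) if a[2]
                     else (0.1 if a[1] else 0.9)]
print(wmc([1, 2], clauses=[[2]], factors=factors))  # P(wet) = 0.26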
Article
A number of product-line analysis approaches lift analyses such as type checking, model checking, and theorem proving from the level of single programs to the level of product lines. These approaches share concepts and mechanisms that suggest an unexplored potential for reuse of key analysis steps and properties, implementation, and verification efforts. Despite the availability of taxonomies synthesizing such approaches, there still remains the underlying problem of not being able to describe product-line analyses and their properties precisely and uniformly. We propose a formal framework that models product-line analyses in a compositional manner, providing an overall understanding of the space of family-based, feature-based, and product-based analysis strategies. It defines precisely how the different types of product-line analyses compose and inter-relate. To ensure soundness, we formalize the framework, providing mechanized specification and proofs of key concepts and properties of the individual analyses. The formalization provides unambiguous definitions of domain terminology and assumptions as well as solid evidence of key properties based on rigorous formal proofs. To qualitatively assess the generality of the framework, we discuss to what extent it describes five representative product-line analyses targeting the following properties: safety, performance, dataflow facts, security, and functional program properties.
Chapter
Full-text available
Recent advances have shown how decision trees are apt data structures for concisely representing strategies (or controllers) satisfying various objectives. Moreover, they also make the strategy more explainable. The recent tool dtControl provides pipelines to tools supporting strategy synthesis for hybrid systems, such as SCOTS and Uppaal Stratego. We present dtControl 2.0, a new version with several fundamentally novel features. Most importantly, the user can now provide domain knowledge to be exploited in the decision tree learning process and can also interactively steer the process based on dynamically provided information. To this end, we also provide a graphical user interface. It allows for inspection and re-computation of parts of the result, suggesting as well as receiving advice on predicates, and visual simulation of the decision-making process. Besides, we interface with model checkers of probabilistic systems, namely STORM and PRISM, and provide dedicated support for categorical enumeration-type state variables. Consequently, the controllers are more explainable and smaller.
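dtControl's own pipeline and domain-knowledge features are described in the chapter; the baseline idea of representing a controller as a decision tree can be sketched generically with scikit-learn (data and names invented):

from sklearn.tree import DecisionTreeClassifier, export_text

# Generic sketch (not dtControl's implementation): fit a decision tree
# to a synthesized controller's (state features -> chosen action) table.
states = [[0.0, 1.2], [0.5, 1.0], [2.0, 0.1], [2.5, 0.3]]  # e.g. pos, vel
actions = ['brake', 'brake', 'accelerate', 'accelerate']
tree = DecisionTreeClassifier(max_depth=3).fit(states, actions)
print(export_text(tree, feature_names=['pos', 'vel']))
# A single split ("pos <= 1.25") replaces the 4-row lookup table and is
# directly readable by an engineer.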
Conference Paper
Full-text available
This paper presents an original method to compare two synchronous sequential machines. The method consists of a breadth-first traversal of the product machine, during which symbolic expressions of its observable behaviour are computed. It uses formal manipulations of Boolean functions to avoid explicit state enumeration and state-graph construction. For this purpose, new algorithms on Boolean functions represented by Typed Decision Graphs have been defined.
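In explicit (non-symbolic) form, the check amounts to a breadth-first traversal of the product machine that fails on the first reachable state pair with differing outputs; the paper performs this traversal symbolically with Typed Decision Graphs instead. A minimal explicit sketch:

from collections import deque

# Explicit-state version of the equivalence check (the paper does the
# same traversal symbolically, without enumerating states). A machine is
# (initial_state, delta, out) with delta(q, a) -> q' and out(q) the
# observable output; `inputs` is the input alphabet.
def equivalent(m1, m2, inputs):
    (q1, d1, o1), (q2, d2, o2) = m1, m2
    seen, queue = {(q1, q2)}, deque([(q1, q2)])
    while queue:
        s1, s2 = queue.popleft()
        if o1(s1) != o2(s2):
            return False                      # observable difference found
        for a in inputs:
            nxt = (d1(s1, a), d2(s2, a))
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True

# Mod-4 counter observed modulo 2 vs. mod-2 counter: equivalent.
m1 = (0, lambda q, a: (q + a) % 4, lambda q: q % 2)
m2 = (0, lambda q, a: (q + a) % 2, lambda q: q)
print(equivalent(m1, m2, inputs=[0, 1]))      # True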
Article
In this paper, we discuss the use of binary decision diagrams to represent general matrices. We demonstrate that binary decision diagrams are an efficient representation for every special-case matrix in common use, notably sparse matrices. In particular, we demonstrate that for any matrix, the BDD representation can be no larger than the corresponding sparse-matrix representation. Further, the BDD representation is often smaller than any other conventional special-case representation: for the n × n Walsh matrix, for example, the BDD representation is of size O(log n). No other special-case representation in common use represents this matrix in space less than O(n²). We describe termwise, row, column, block, and diagonal selection over these matrices, standard and Strassen matrix multiplication, and LU factorization. We demonstrate that the complexity of each of these operations over the BDD representation is no greater than that over any standard representation. Further, we demonstrate that complete pivoting is no more difficult over these matrices than partial pivoting. Finally, we consider an example, the Walsh spectrum of a Boolean function.
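One way to see the O(log n) Walsh result: view an entry as a function of the row and column index bits and share equal subblocks. A hash-consed sketch (our illustration):

# Sketch of a decision-diagram matrix representation: an entry is a
# function of the row and column index bits, and equal subblocks are
# shared via a unique table. For the 2^k x 2^k Walsh matrix,
# H_k = [[H_{k-1}, H_{k-1}], [H_{k-1}, -H_{k-1}]], so every level
# contributes O(1) distinct nodes and the diagram has O(log n) size.
unique = {}
def node(level, blocks):               # blocks = (tl, tr, bl, br) subdiagrams
    key = (level, blocks)
    return unique.setdefault(key, key) # hash-consing: share equal blocks

def walsh(k, sign=1):
    if k == 0:
        return sign                    # terminal value: a matrix entry
    pos, neg = walsh(k - 1, sign), walsh(k - 1, -sign)
    return node(k, (pos, pos, pos, neg))

walsh(10)                              # the 1024 x 1024 Walsh matrix
print(len(unique), 'shared nodes')     # 20 nodes, versus ~10^6 dense entries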
Article
We describe the Harwell-Boeing sparse matrix collection, a set of standard test matrices for sparse matrix problems. Our test set comprises problems in linear systems, least squares, and eigenvalue calculations from a wide variety of scientific and engineering disciplines. The problems range from small matrices, used as counter-examples to hypotheses in sparse matrix research, to large test cases arising in large-scale computation. We offer the collection to other researchers as a standard benchmark for comparative studies of algorithms. The procedures for obtaining and using the test collection are discussed. We also describe the guidelines for contributing further test problems to the collection.
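SciPy still ships a reader for this format, so a matrix from the collection can be loaded directly (the file name below is a placeholder for a locally downloaded test matrix):

from scipy.io import hb_read

# Load a Harwell-Boeing test matrix into a SciPy sparse matrix, e.g.
# for benchmarking a sparse solver. The path is a placeholder.
A = hb_read('bcsstk01.rsa')
print(A.shape, A.nnz)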
Conference Paper
In this paper we present two symbolic algorithms to compute the steady-state probabilities for very large finite state machines. These algorithms, based on Algebraic Decision Diagrams (ADDs), an extension of BDDs that allows arbitrary values to be associated with the terminal nodes of the diagrams, determine the steady-state probabilities by regarding finite state machines as homogeneous, discrete-parameter Markov chains with finite state spaces, and by solving the corresponding Chapman-Kolmogorov equations. We have implemented two solution techniques: one is based on the Gauss-Jacobi iteration, and the other on simple matrix multiplication. We report the experimental results obtained for problems with over 10⁸ unknowns in irreducible form.
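In explicit form, the matrix-multiplication technique is power iteration on the stationarity equation π = πP; the NumPy sketch below is ours (the paper stores π and P as ADDs instead of dense arrays):

import numpy as np

# Explicit power-iteration sketch: iterate pi <- pi @ P until the
# steady state of the Markov chain is reached.
P = np.array([[0.9, 0.1, 0.0],     # row-stochastic transition matrix
              [0.4, 0.5, 0.1],
              [0.0, 0.2, 0.8]])
pi = np.full(3, 1 / 3)             # start from the uniform distribution
for _ in range(10_000):
    nxt = pi @ P
    if np.max(np.abs(nxt - pi)) < 1e-12:
        break
    pi = nxt
print(pi)                          # steady state: pi = pi @ P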
Conference Paper
The temporal logic model checking algorithm of E.M. Clarke et al. (ACM Trans. Prog. Lang. Syst., vol. 8, no. 2, pp. 244-263, 1986) is modified to represent a state graph using binary decision diagrams (BDDs). Because this representation captures some of the regularity in the state space of sequential circuits with data path logic, one is able to verify circuits with an extremely large number of states. This new technique is demonstrated on a synchronous pipelined design with approximately 5×10²⁰ states. The logic used to specify circuits is a propositional temporal logic of branching time, called CTL or Computation Tree Logic. The model checking algorithm handles full CTL with fairness constraints. Consequently, it is possible to handle a number of important liveness and fairness properties that would otherwise not be expressible in CTL. The method presented is not necessarily a replacement for brute-force state-enumeration methods but an alternative that may work efficiently when the brute-force methods fail.
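The algorithmic core is a fixed-point computation over sets of states, e.g., EF p (p is reachable) is the least fixed point of Z = p ∪ EX Z; the sketch below uses explicit sets where the paper uses BDDs:

# Explicit-set sketch of the fixed-point core (the paper's point is that
# representing these sets and the transition relation as BDDs makes the
# same computation feasible for ~10^20 states).
def pre(T, Z):
    # EX Z: states with at least one successor inside Z.
    return {s for (s, t) in T if t in Z}

def ef(T, p):
    # EF p: least fixed point of Z = p | EX Z (backward reachability).
    Z = set(p)
    while True:
        new = Z | pre(T, Z)
        if new == Z:
            return Z
        Z = new

T = {(0, 1), (1, 2), (2, 2), (3, 3)}   # transition relation as state pairs
print(ef(T, p={2}))                    # {0, 1, 2}: state 3 never reaches 2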
Conference Paper
The authors propose a novel method based on transition relations that only requires the ability to compute the BDD (binary decision diagram) for each component function fᵢ, and that outperforms O. Coudert's (1990) algorithm on most examples. The method offers a simple notational framework to express the basic operations used in BDD-based state enumeration algorithms in a unified way, and a set of techniques that can speed up range computation dramatically, including a variable ordering heuristic and a method based on transition relations.
Conference Paper
Algorithms are presented for finite state machine (FSM) verification and image computation which improve on the results of O. Coudert et al. (1989), giving 1-4 orders of magnitude speedup. Novel features include primary input splitting: this PODEM feature enlarges the search space but shortens the search due to implications. Another new feature, identical subtree recombination, is shown to be effective for iterative networks (e.g., serial multipliers). The free-variable recognition feature prevents unbalanced bipartitioning trees in tautological subspaces. Finally, reached-set pruning is significant when the image contains large numbers of previously reached states.
Article
The tableau approach to automated network design optimization via implicit, variable-order, variable-time-step integration, and adjoint sensitivity computation is described. In this approach, the only matrix operation required is that of repeatedly solving linear algebraic equations of fixed sparsity structure. Required partial derivatives and numerical integration are done at the branch level, leading to a simple input language, complete generality, and maximum sparsity of the characteristic coefficient matrix. The bulk of the computation and program complexity is thus located in the sparse matrix routines; described herein are the routines OPTORD and 1-2-3 GNSO. These routines account for the variability type of the matrix elements in producing machine code for the solution of Ax = b in nested iterations, for which a weighted sum of total operation count and round-off error incurred in the optimization is minimized.
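The pattern of repeatedly solving Ax = b with a fixed sparsity structure maps directly onto modern sparse libraries: factor once, then reuse the factorization for each iteration. A SciPy sketch (matrix invented):

import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# Factor a sparse matrix once and reuse the LU factors for many
# right-hand sides, the same division of labor the tableau approach
# assigns to its sparse-matrix routines.
A = csc_matrix(np.array([[4.0, 1.0, 0.0],
                         [1.0, 3.0, 1.0],
                         [0.0, 1.0, 2.0]]))
lu = splu(A)                        # symbolic + numeric factorization
for b in (np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])):
    print(lu.solve(b))              # cheap per-right-hand-side solve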
I is an identity for op, that is, a op I = I op a = a, ∀a ∈ S.

Figure 13. The Four Way LDU algorithm.

References
F.G. Gustavson, "Efficient algorithm to perform sparse matrix multiplication," IBM Technical Disclosure Bulletin, Vol. 20, No. 3, pp. 1262-1264, Aug. 1977.
O. Coudert, C. Berthet, and J.C. Madre, "Verification of sequential machines using Boolean functional vectors," IFIP International Workshop on Applied Formal Methods for Correct VLSI Design, 1989.