
In this paper we design minimum perfect hash functions on the basis of BDDs that represent all reachable states S ⊆ {0,1}^n. These functions are one-to-one on S and can be evaluated quite efficiently. Such hash functions are useful to perform search in a bitvector representation of the state space. The time to compute the hash value with standard operations on the BDD G is O(n|G|), the time to compute the inverse is O(n^2|G|). When investing O(n) bits per node, we arrive at O(|G|) preprocessing time and optimal time O(n) for ranking and unranking.
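The ranking scheme summarized in the abstract can be illustrated with a small sketch. The code below is a hypothetical stand-in, not the paper's implementation: it assumes a quasi-reduced BDD (every path tests all n variables in order) without negated edges, and memoizes satcount values per node so that, after O(|G|) preprocessing, rank and unrank each take O(n) time.

```python
TRUE, FALSE = "T", "F"  # terminal sentinels

class Node:
    """One BDD node; low/high are child nodes or terminals."""
    def __init__(self, var, low, high):
        self.var, self.low, self.high = var, low, high

_count = {}  # memoized satcount per node: O(|G|) preprocessing overall

def satcount(node):
    if node is FALSE:
        return 0
    if node is TRUE:
        return 1
    if id(node) not in _count:
        _count[id(node)] = satcount(node.low) + satcount(node.high)
    return _count[id(node)]

def rank(root, bits):
    """Lexicographic rank of a satisfying assignment among all of them."""
    r, node = 0, root
    for b in bits:
        if b:
            r += satcount(node.low)  # skip all assignments with this bit = 0
            node = node.high
        else:
            node = node.low
    return r

def unrank(root, r, n):
    """Inverse of rank: reconstruct the assignment with the given rank."""
    bits, node = [], root
    for _ in range(n):
        c0 = satcount(node.low)
        if r < c0:
            bits.append(0); node = node.low
        else:
            bits.append(1); r -= c0; node = node.high
    return bits

# Even-parity function over 3 bits: S = {000, 011, 101, 110}
e2, o2 = Node(2, TRUE, FALSE), Node(2, FALSE, TRUE)
e1, o1 = Node(1, e2, o2), Node(1, o2, e2)
root = Node(0, e1, o1)
```

On this toy set, rank maps 000, 011, 101, 110 to 0, 1, 2, 3 and unrank inverts the mapping, i.e. the pair realizes an invertible minimal perfect hash function on S.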


... A rank is a number uniquely representing a state, and the inverse process, called unranking, reconstructs the state from its rank. The approach advocated in this paper builds on the findings of [8], who showed that ranking and unranking of states in a state set represented as a BDD is possible in time linear in the length of the state vector (in binary representation). In other words, BDD ranking aims at the symbolic equivalent of constructing a perfect hash function in explicit-state space search [3]. ...

... We have implemented the pseudo-code algorithms for the CUDD library [21]. The proposal in [8] does not support negated edges. Negated edges, however, are crucial, since otherwise function complementation is not a constant time operation, at least for a BDD in a shared representation [17]. ...

... In contrast to standard satisfiability-count implementations (as in CUDD), this way we ensure that only satcount values of at most c are stored, where c is the satcount value of the root node. E.g., in Connect Four 7 × 6 with 85 binary variables (yielding 2^85 possible values), long integers are sufficient to store intermediate satcount values, which are all smaller than the satcount value of the entire reachable set (4,531,985,219,092). Figures 2 and 3 extend the proposal of [8] and show the ranking and unranking functions and thus realize an invertible minimal perfect hash function for f mapping an assignment s ∈ {0, 1}^n to a value r ∈ {0, …, C_f − 1}. ...
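The role of negated (complement) edges mentioned in this excerpt can be sketched as follows. This is a hypothetical illustration, not the CUDD code: edges are (node, complemented) pairs over a single TRUE terminal, again assuming no skipped variable levels. The satcount of a complemented edge over k remaining variables is simply 2^k minus the count of the plain edge, so complementing a function is a constant-time bit flip.

```python
TRUE = "T"  # single terminal, as in shared BDDs with complement edges

class Node:
    def __init__(self, var, low, high):
        # low/high are edges: (target node, complemented?) pairs
        self.var, self.low, self.high = var, low, high

def satcount(edge, k):
    """Satisfying assignments of `edge` over k remaining variables."""
    node, complemented = edge
    if node is TRUE:
        c = 2 ** k
    else:
        c = satcount(node.low, k - 1) + satcount(node.high, k - 1)
    return (2 ** k - c) if complemented else c

# f = x0 AND x1, built with complement edges (FALSE = complemented TRUE)
n1 = Node(1, (TRUE, True), (TRUE, False))
f = (Node(0, (TRUE, True), (n1, False)), False)
not_f = (f[0], True)  # complementation: flip one edge bit, O(1)
```

The counts come out as satcount(f) = 1 and satcount(¬f) = 2^2 − 1 = 3 without ever materializing a separate BDD for ¬f, which is why negated edges matter for the shared representation.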

For the exploration of large state spaces, symbolic search using binary
decision diagrams (BDDs) can save huge amounts of memory and computation time.
State sets are represented and modified by accessing and manipulating their
characteristic functions. BDD partitioning is used to compute the image as the
disjunction of smaller subimages.
In this paper, we propose a novel BDD partitioning option. The partitioning
is lexicographical in the binary representation of the states contained in the
set that is represented by a BDD and uniform with respect to the number of
states represented. The motivation for controlling the state set sizes in the
partitioning is to eventually bridge the gap between explicit and symbolic
search.
Let n be the size of the binary state vector. We propose an O(n) ranking and
unranking scheme that supports negated edges and operates on top of precomputed
satcount values. For the uniform split of a BDD, we then use unranking to
provide paths along which we partition the BDDs. In a shared BDD representation
the effort is O(n). The algorithms are fully integrated into the CUDD library
and evaluated in strongly solving general game playing benchmarks.
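The uniform split described above can be pictured with an explicit-state stand-in (a hypothetical sketch, not the CUDD integration): given the total state count C from the root's satcount and a desired number of parts k, unranking the k−1 boundary ranks i·C/k yields the lexicographic paths along which the BDD is cut. With a plain sorted list standing in for unranking:

```python
def boundary_ranks(C, k):
    """Ranks at which a set of C states is cut into k near-uniform parts."""
    return [i * C // k for i in range(1, k)]

# stand-in for a BDD-represented set: sorted binary state vectors
states = sorted(["0001", "0100", "0110", "1011", "1100", "1111"])
C, k = len(states), 3
cuts = boundary_ranks(C, k)  # ranks 2 and 4 for C = 6, k = 3
parts = [states[a:b] for a, b in zip([0] + cuts, cuts + [C])]
```

Each part holds C/k = 2 states; with real unranking the cut ranks are turned into BDD paths rather than list indices, so the partition is computed without enumerating states.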

... To gain processing and reasoning abilities, a robot should be able to match the knowledge with its inner symbolic representations. In [7], Tenorth et al. proposed the KNOWROB tool to address this need. KNOWROB is based on SWI Prolog and its API for semantic web which is used for accessing the Web Ontology Language-based ontologies using Prolog. ...

Power plant scheduling can be very time-consuming in long-term scenarios. At Stadtwerke München GmbH, a schedule covering 25 years in one-hour intervals takes several days to compute with the commercial solution BoFiT. In order to reduce this huge amount of time, new software has been developed. This paper describes the new system KEO and focuses on measures to reduce the CPU time. These measures are reduction of the planning horizon, parallelization of the calculation, and calculation with typical days. KEO significantly decreases the time required to generate a schedule, from 200 hours to roughly 21 minutes.

... This approach will have potential applications in action planning, general game playing, and model checking. The above algorithms are special cases of according ranking and unranking functions developed for BDDs [11]. For the sake of completeness, the according rank and unrank algorithms are shown in Algorithm 12 and Algorithm 13. ...

Abstract: In this paper, we propose an efficient method of solving one- and two-player combinatorial games by mapping each state to a unique bit in memory. In order to avoid collisions, a concise portfolio of perfect hash functions is provided. Such perfect hash functions then address tables that serve as a compressed representation of …

... The motivation of the partitioning pointing towards future work is that explicit search can be more space-efficient if perfect hash functions are available. With ranking and unranking as proposed by Dietzfelbinger and Edelkamp (2009) we can eventually connect a symbolic state space representation with BDDs and an explicit bitvector based exploration. The BDDs can serve as a basis for a linear-time ranking and unranking scheme. ...

This work combines recent advances in combinatorial search under memory limitation, namely bitvector and symbolic search. Bitvector search assumes a bijective mapping between state and memory addresses, while symbolic search compactly represents state sets. The memory requirements vary with the structure of the problem to be solved. The integration of the two algorithms into one hybrid algorithm for strongly solving general games initiates a BDD-based solving algorithm, which consists of a forward computation of the reachable state set, possibly followed by a layered backward retrograde analysis. If the main memory becomes exhausted, it switches to explicit-state two-bit retrograde search. We use the classical game of Connect Four as a case study, and solve some instances of the problem space-efficiently with the proposed hybrid search algorithm.

This work combines recent advances in AI planning under memory limitation, namely bitvector and symbolic search. Bitvector search assumes a bijective mapping between state and memory addresses, while symbolic search compactly represents state sets. The memory requirements vary with the structure of the problem to be solved. The integration of the two algorithms into one hybrid algorithm for strongly solving general games initiates a BDD-based solving algorithm, which consists of a forward computation of the reachable state set, possibly followed by a layered backward retrograde analysis. If the main memory becomes exhausted, it switches to explicit-state two-bit retrograde search. We use the classical game of Connect Four as a case study, and solve some instances of the problem space-efficiently with the proposed hybrid search algorithm. Copyright © 2014, Association for the Advancement of Artificial Intelligence.

As the capacity and speed of flash memories in form of solid state disks grow, they are becoming a practical alternative for standard magnetic drives. Currently, most solid-state disks are based on NAND technology and much faster than magnetic disks in random reads, while in random writes they are generally not. So far, large-scale LTL model checking algorithms have been designed to employ external memory optimized for magnetic disks. We propose algorithms optimized for flash memory access. In contrast to approaches relying on the delayed detection of duplicate states, in this work, we design and exploit appropriate hash functions to re-invent immediate duplicate detection. For flash memory efficient on-the-fly LTL model checking, which aims at finding any counterexample to the specified LTL property, we study hash functions adapted to the two-level hierarchy of RAM and flash memory. For flash memory efficient off-line LTL model checking, which aims at generating a minimal counterexample and scans the entire state space at least once, we analyze the effect of outsourcing a memory-based perfect hash function from RAM to flash memory. Since the characteristics of flash memories are different to magnetic hard disks, the existing I/O complexity model is no longer sufficient. Therefore, we provide an extended model for the computation of the I/O complexity adapted to flash memories that has a better fit to the observed behavior of our algorithms.


We present a simple and efficient external perfect hashing scheme (referred to as EPH algorithm) for very large static key sets. We use a number of techniques from the literature to obtain a novel scheme that is theoretically well-understood and at the same time achieves an order-of-magnitude increase in the size of the problem to be solved compared to previous "practical" methods. We demonstrate the scalability of our algorithm by constructing minimum perfect hash functions for a set of 1.024 billion URLs from the World Wide Web of average length 64 characters in approximately 62 minutes, using a commodity PC. Our scheme produces minimal perfect hash functions using approximately 3.8 bits per key. For perfect hash functions in the range {0,...,2n - 1} the space usage drops to approximately 2.7 bits per key. The main contribution is the first algorithm that has experimentally proven practicality for sets in the order of billions of keys and has time and space usage carefully analyzed without unrealistic assumptions.

The number of moves required to solve any state of Rubik's cube has been a matter of long-standing conjecture for over 25 years -- since Rubik's cube appeared. This number is sometimes called "God's number". An upper bound of 29 (in the face-turn metric) was produced in the early 1990's, followed by an upper bound of 27 in 2006. An improved upper bound of 26 is produced using 8000 CPU hours. One key to this result is a new, fast multiplication in the mathematical group of Rubik's cube. Another key is efficient out-of-core (disk-based) parallel computation using terabytes of disk storage. One can use the precomputed data structures to produce such solutions for a specific Rubik's cube position in a fraction of a second. Work in progress will use the new "brute-forcing" technique to further reduce the bound.

A perfect hash function (PHF) h: U → [0, m−1] for a key set S is a function that maps the keys of S to unique values. The minimum amount of space to represent a PHF for a given set S is known to be approximately 1.44n^2/m bits, where n = |S|. In this paper we present new algorithms for construction and evaluation of PHFs of a given set (for m = n and m = 1.23n), with the following properties:

1. Evaluation of a PHF requires constant time.
2. The algorithms are simple to describe and implement, and run in linear time.
3. The amount of space needed to represent the PHFs is within a factor of 2 of the information-theoretic minimum.

No previously known algorithm has these properties. To our knowledge, any algorithm in the literature with the third property either requires exponential time for construction and evaluation, or uses near-optimal space only asymptotically, for extremely large n. Thus, our main contribution is a scheme that gives low space usage for realistic values of n. The main technical ingredient is a new way of basing PHFs on random hypergraphs. Previously, this approach has been used to design simple PHFs with superlinear space usage.

Solid state disks based on flash memory are an apparent alternative to hard disks for external memory search. Random reads are much faster, while random writes are generally not. In this paper, we illustrate how this influences the time-space trade-offs for scaling semi-external LTL model checking algorithms that request a constant number of bits per state in internal, and full state vectors on external memory. We invent a complexity model to analyze the effect of outsourcing the perfect hash function from random access to flash memory. In this model a 1-bit semi-external I/O efficient LTL model checking algorithm is proposed that generates minimal counterexamples.

In this paper we describe a data structure for representing Boolean functions and an associated set of manipulation algorithms. Functions are represented by directed, acyclic graphs in a manner similar to the representations of Lee and Akers, but with further restrictions on the ordering of decision variables in the graph. Although a function requires, in the worst case, a graph of size exponential in the number of arguments, many of the functions encountered in typical applications have a more reasonable representation. Our algorithms are quite efficient as long as the graphs being operated on do not grow too large. We present performance measurements obtained while applying these algorithms to problems in logic design verification.

We consider the problem of storing a set S ⊆ Σ^k as a deterministic finite automaton (DFA). We show that inserting a new string σ ∈ Σ^k or deleting a string from the set S represented as a minimized DFA can be done in expected time O(k|Σ|), while preserving the minimality of the DFA. We then discuss an application of this work to reduce the memory requirements of a model checker based on explicit state enumeration. Keywords: Finite Automata, Verification, OBDDs, Sharing Trees, Data Compression. 1 Introduction: In this paper we consider the problem of reducing the memory requirements of model checkers such as Spin [4], that are based on an explicit state enumeration method. The memory requirements of such a model checker are dominated by state storage. Reached states are typically stored in a hash table, mostly for efficiency reasons. We will consider here a different method for storing the states, based on the on-the-fly construction of a deterministic finite state …
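As a rough picture of what is being stored (a hypothetical sketch, not the paper's algorithm): a set S ⊆ Σ^k can always be kept as a trie, which is an unminimized DFA; the cited contribution is additionally keeping the automaton minimized under insertions and deletions in expected O(k|Σ|) time.

```python
def insert(trie, s):
    """Add the fixed-length string s to the trie (an unminimized DFA)."""
    node = trie
    for ch in s:
        node = node.setdefault(ch, {})  # follow or create the transition
    node["$"] = True  # accepting marker at depth k

def member(trie, s):
    """Check whether s is in the stored set by walking the transitions."""
    node = trie
    for ch in s:
        if ch not in node:
            return False
        node = node[ch]
    return "$" in node

dfa = {}
for s in ["0011", "0101", "1100"]:
    insert(dfa, s)
```

A trie shares common prefixes only; the minimized DFA of the paper also shares common suffixes, which is where the additional compression comes from.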

In the previous chapter, we equated a CTL formula with the set of states in which the formula is true. We showed how the CTL operators can thus be characterized as fixed points of certain continuous functionals in the lattice of subsets, and how these fixed points can be computed iteratively. This provides us with a model checking algorithm for CTL, but requires us to build a finite Kripke model for our system and hence leads us to the state explosion problem. In this chapter, we will explore a method of model checking that avoids the state explosion problem in some cases by representing the Kripke model implicitly with a Boolean formula. This allows the CTL model checking algorithm to be implemented using well developed automatic techniques for manipulating Boolean formulas. Since the Kripke model is symbolically represented, there is no need to actually construct it as an explicit data structure. Hence, the state explosion problem can be avoided.
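The fixed-point computation described in this chapter summary can be shown with Python sets standing in for the symbolic (BDD) representation; the example below is a generic illustration, not the book's code. EF p is the least fixed point of Z = p ∪ pre(Z), reached by iterating from p until nothing changes:

```python
def EF(target, pre):
    """States from which some path reaches `target` (least fixed point)."""
    Z = set(target)
    while True:
        Z_next = Z | pre(Z)
        if Z_next == Z:
            return Z
        Z = Z_next

# toy Kripke structure: transitions s -> t
edges = {(0, 1), (1, 2), (3, 3)}

def pre(Z):
    """Predecessors of Z under the transition relation."""
    return {s for (s, t) in edges if t in Z}

reach_2 = EF({2}, pre)  # states that can reach state 2
```

With BDDs, Z and pre(Z) are Boolean formulas rather than enumerated sets, which is exactly how the state explosion is avoided.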

Preface Introduction 1. Introduction 2. BPs and Decision Trees (DTs) 3. Ordered Binary Decision Diagrams (OBDDs) 4. The OBDD Size of Selected Functions 5. The Variable-Ordering Problem 6. Free BDDs (FBDDs) and Read-Once BPs 7. BDDs with Repeated Tests 8. Decision Diagrams (DDs) Based on Other Decomposition Rules 9. Integer-Valued DDs 10. Nondeterministic DDs 11. Randomized BDDs and Algorithms 12. Summary of the Theoretical Results 13. Applications in Verification and Model Checking 14. Further CAD Applications 15. Application in Optimization, Counting, and Genetic Programming Bibliography Index.

The efficiency of A* search depends on the quality of the lower bound estimates of the solution cost. Pattern databases enumerate all possible subgoals required by any solution, subject to constraints on the subgoal size. Each subgoal in the database provides a tight lower bound on the cost of achieving it. For a given state in the search space, all possible subgoals are looked up in the pattern database, with the maximum cost over all lookups being the lower bound. For sliding-tile puzzles, the database enumerates all possible patterns containing N tiles and, for each one, contains a lower bound on the distance to move all N tiles into their correct final locations. For the 15-Puzzle, iterative-deepening A* with pattern databases (N = 8) reduces the total number of nodes searched on a standard problem set of 100 positions by over 1000-fold.
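The lookup scheme described above, taking the maximum over several pattern database entries, can be sketched on a toy one-dimensional puzzle. This is a hypothetical stand-in: the "abstract distance" below is just total tile displacement, where a real pattern database would store breadth-first distances computed in the abstract space.

```python
from itertools import permutations

GOAL = (0, 1, 2)  # goal cell of each of three tiles on a 3-cell line

def project(state, tiles):
    """Abstract a full state to the cells occupied by the pattern's tiles."""
    return tuple(state[t] for t in tiles)

def build_pdb(tiles):
    """Table: pattern placement -> admissible lower bound on solution cost."""
    return {cells: sum(abs(c - GOAL[t]) for t, c in zip(tiles, cells))
            for cells in permutations(range(3), len(tiles))}

# two overlapping patterns, one table each
PDBS = [(tiles, build_pdb(tiles)) for tiles in [(0, 1), (1, 2)]]

def h(state):
    """Heuristic: maximum over all pattern database lookups."""
    return max(pdb[project(state, tiles)] for tiles, pdb in PDBS)
```

For the reversed state (2, 1, 0), both lookups return 2, so h = 2; at the goal every lookup is 0. Taking the maximum keeps the combined heuristic admissible.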

The idea of using BDDs for optimal sequential planning is to reduce the memory requirements for the state sets as problem sizes increase. State variables are encoded in binary and ordered along their causal-graph dependencies. Sets of planning states are represented as Boolean functions, and actions are formalized as transition relations. This allows computing the successor state set, which contains all states reached by applying one action to the states in the input set. Iterating this process (starting with the representation of the initial state) yields a symbolic implementation of breadth-first search. This paper studies the causes of good and bad BDD performance by providing lower and upper bounds for BDD growth in various domains. Besides general applicability to planning benchmarks, our approach covers different cost models; it applies to step-optimal propositional planning as well as planning with additive action costs.
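The iterated image computation described here can be sketched with Python sets standing in for BDD-represented characteristic functions (a generic illustration, not the paper's planner): the image operator maps a whole state set to the set of all successors, and iterating it from the initial state yields symbolic breadth-first search.

```python
def symbolic_bfs(init, image):
    """Breadth-first reachability, operating on whole state sets at a time."""
    reachable = {init}
    frontier = {init}
    while frontier:
        frontier = image(frontier) - reachable  # successors not yet seen
        reachable |= frontier
    return reachable

# toy domain: 3-bit states; the actions flip bit 0 or rotate left
def image(states):
    succ = set()
    for s in states:
        succ.add(s ^ 1)                          # flip lowest bit
        succ.add(((s << 1) | (s >> 2)) & 0b111)  # rotate left by one
    return succ

R = symbolic_bfs(0, image)
```

With BDDs, the set difference and union above become Boolean operations on characteristic functions, so a frontier of exponentially many states can still be a small diagram.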

A number of researchers have proposed Cayley graphs and Schreier coset graphs as models for interconnection networks. New algorithms are presented for generating Cayley graphs in a more time-efficient manner than was previously possible. Alternatively, a second algorithm is provided for storing Cayley graphs in a space-efficient manner (log2(3) bits per node), so that copies could be cheaply stored at each node of an interconnection network. The second algorithm is especially useful for providing a compact encoding of an optimal routing table (for example, a 13 kilobyte optimal table for 64,000 nodes). The algorithm relies on using a compact encoding of group elements known from computational group theory. Generalizations of all of the above are presented for Schreier coset graphs.

We present a breadth-first search algorithm, two-bit breadth-first search (TBBFS), which requires only two bits for each state in the problem space. TBBFS can be parallelized in several ways, and can store its data on magnetic disk. Using TBBFS, we perform complete breadth-first searches of the original pancake problem with 14 and 15 pancakes, and the burned pancake problem with 11 and 12 pancakes, determining the diameter of these problem spaces for the first time. We also performed a complete breadth-first search of the subspace of Rubik's Cube determined by the edge cubies.
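The two-bit encoding can be sketched as follows (a hypothetical illustration, not TBBFS itself: real TBBFS packs four states per byte, streams the scans from disk, and writes each layer out instead of collecting it in RAM). Each state carries one of four codes, and every BFS layer is found by a full scan of the array:

```python
UNSEEN, CUR, NEXT, CLOSED = 0, 1, 2, 3  # four codes -> two bits per state

def two_bit_bfs(n_states, start, neighbors):
    """Layer sizes of a complete BFS using one small code per state."""
    code = bytearray(n_states)  # two bits suffice; a byte keeps the sketch simple
    code[start] = CUR
    layer_sizes = []
    while True:
        layer = [s for s in range(n_states) if code[s] == CUR]
        if not layer:
            return layer_sizes
        layer_sizes.append(len(layer))
        for s in layer:
            for t in neighbors(s):
                if code[t] == UNSEEN:
                    code[t] = NEXT  # discovered for the next layer
            code[s] = CLOSED
        for s in range(n_states):  # promote the next layer
            if code[s] == NEXT:
                code[s] = CUR

# 3-cube: states are bitstrings, neighbors differ in exactly one bit
sizes = two_bit_bfs(8, 0, lambda s: [s ^ (1 << i) for i in range(3)])
```

On the 3-cube the layer sizes from vertex 0 are 1, 3, 3, 1, i.e. the diameter of the space falls out of the scan for free, which is exactly how the pancake-problem diameters cited above were determined.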

In this paper we present a new symbolic algorithm for the classification, i.e., the calculation of the rewards for both players in case of optimal play, of two-player games with general rewards according to the Game Description Language. We show that it classifies all states using a number of images linear in the depth of the game graph. We also present an extension that uses this algorithm to create symbolic endgame databases and then performs UCT to find an estimate for the classification of the game.

Many different methods have been devised for automatically verifying finite state systems by examining state-graph models of system behavior. These methods all depend on decision procedures that explicitly represent the state space using a list or a table that grows in proportion to the number of states. We describe a general method that represents the state space symbolically instead of explicitly. The generality of our method comes from using a dialect of the Mu-Calculus as the primary specification language. We describe a model checking algorithm for Mu-Calculus formulas that uses Bryant's Binary Decision Diagrams (1986) to represent relations and formulas. We then show how our new Mu-Calculus model checking algorithm can be used to derive efficient decision procedures for CTL model checking, satisfiability of linear-time temporal logic formulas, strong and weak observational equivalence of finite transition systems, and language containment for finite ω-automata.

In this paper we establish c-bit semi-external graph algorithms, i.e., algorithms which need only a constant number c of bits per vertex in the internal memory. In this setting, we obtain new trade-offs between time and space for I/O-efficient LTL model checking. First, we design a c-bit semi-external algorithm for depth-first search. To achieve a low internal memory consumption, we construct a RAM-efficient perfect hash function from the vertex set stored on disk. We give a similar algorithm for double depth-first search, which checks for presence of accepting cycles and thus solves the LTL model checking problem. The I/O complexity of the search itself is proportional to the time for scanning the search space. For on-the-fly model checking we apply the iterative-deepening strategy known from bounded model checking.

The heuristics used for planning and search often take the form of pattern databases generated from abstracted versions of the given state space. Pattern databases are typically stored as lookup tables with one entry for each state in the abstract space, which limits the size of the abstract state space and therefore the quality of the heuristic that can be used with a given amount of memory. At the AIPS-2002 conference, Stefan Edelkamp introduced an alternative representation, called symbolic pattern databases, which, for the Blocks World, required two orders of magnitude less memory than a lookup table to store a pattern database. This paper presents experimental evidence that Edelkamp's result is not restricted to a single domain. Symbolic pattern databases, in the form of Algebraic Decision Diagrams, are one or more orders of magnitude smaller than lookup tables on a wide variety of problem domains and abstractions.

A Shannon C-type strategy program, VICTOR, is written for Connect-Four, based on nine strategic rules. Each of these rules is proven to be correct, implying that conclusions made by VICTOR are correct. Using VICTOR, strategic rules were found which can be used by Black to at least draw the game on each 7 × (2n) board, provided that White does not start at the middle column, as well as on any 6 × (2n) board. In combination with conspiracy-number search, search tables and depth-first search, VICTOR was able to show that White can win on the standard 7 × 6 board. Using a database of approximately half a million positions, VICTOR can play in real time against opponents on the 7 × 6 board, always winning with White. Published in 1988 as Report IR-163 by the Faculty of Mathematics and Computer Science at the Vrije Universiteit Amsterdam, The Netherlands. Also published in 1992 as Report CS 92-04 by the Faculty of General Sciences at the University of Limburg, Maastricht.

Ball, M., Holte, R.C.: The compression power of BDDs. In: ICAPS, pp. 2-11 (2008)

Botelho, F.C., Pagh, R., Ziviani, N.: Simple and space-efficient minimal perfect hash functions. In: Dehne, F., Sack, J.-R., Zeh, N. (eds.) WADS 2007. LNCS, vol. 4619, pp. 139-150. Springer, Heidelberg (2007)

Edelkamp, S., Kissmann, P.: Symbolic classification of general two-player games. In: Dengel, A.R., Berns, K., Breuel, T.M., Bomarius, F., Roth-Berghofer, T.R. (eds.) KI 2008. LNCS (LNAI), vol. 5243, pp. 185-192. Springer, Heidelberg (2008)

Edelkamp, S., Sanders, P., Simecek, P.: Semi-external LTL model checking. In: Gupta, A., Malik, S. (eds.) CAV 2008. LNCS, vol. 5123, pp. 530-542. Springer, Heidelberg (2008)