Article · PDF Available

On the complexity of VLSI implementations and graph representations of Boolean functions with application to integer multiplication


Abstract

Lower-bound results on Boolean-function complexity under two different models are discussed. The first is an abstraction of tradeoffs between chip area and speed in very-large-scale-integrated (VLSI) circuits. The second is the ordered binary decision diagram (OBDD) representation used as a data structure for symbolically representing and manipulating Boolean functions. The lower bounds demonstrate the fundamental limitations of VLSI as an implementation medium and of the OBDD as a data structure. It is shown that the same technique used to prove that any VLSI implementation of a single-output Boolean function has area-time complexity AT² = Ω(n²) also proves that any OBDD representation of the function has Ω(c^n) vertices for some c > 1, but that the converse is not true. An integer multiplier for word size n with outputs numbered 0 (least significant) through 2n−1 (most significant) is described. For the Boolean function representing either output i−1 or output 2n−i−1, where 1 ≤ i ≤ n, the following lower bounds are proved: any VLSI implementation must have AT² = Ω(i²), and any OBDD representation must have Ω(1.09^i) vertices.
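To make the OBDD side of this claim concrete, the following is a minimal, illustrative Python sketch (not taken from the paper). For a fixed variable order, the nodes labelled with a given variable in the reduced OBDD correspond one-to-one to the distinct subfunctions, obtained by fixing the earlier variables, that still depend on that variable; counting these level by level therefore yields the reduced OBDD size. The variable order used here (all bits of a, least significant first, then all bits of b) is an assumption of the sketch; the middle output bits should show the largest counts, in line with the Ω(1.09^i) bound.

```python
from itertools import product

def mult_output_bit(n, k):
    # Bit k of the product a * b for n-bit operands.  The variable order is an
    # assumption of this sketch: a_0 .. a_{n-1} (LSB first), then b_0 .. b_{n-1}.
    def f(bits):
        a = sum(bit << i for i, bit in enumerate(bits[:n]))
        b = sum(bit << i for i, bit in enumerate(bits[n:]))
        return (a * b >> k) & 1
    return f

def reduced_obdd_size(f, num_vars):
    # For a fixed variable order, the nodes labelled with variable `level`
    # correspond one-to-one to the distinct subfunctions (obtained by fixing the
    # earlier variables) that still depend on that variable.
    total = 0
    for level in range(num_vars):
        rest = num_vars - level - 1
        seen = set()
        for prefix in product((0, 1), repeat=level):
            lo = tuple(f(prefix + (0,) + s) for s in product((0, 1), repeat=rest))
            hi = tuple(f(prefix + (1,) + s) for s in product((0, 1), repeat=rest))
            if lo != hi:                      # skip nodes that would be reduced away
                seen.add((lo, hi))
        total += len(seen)
    return total

if __name__ == "__main__":
    n = 4
    for k in range(2 * n):
        size = reduced_obdd_size(mult_output_bit(n, k), 2 * n)
        print(f"n={n}, product bit {k}: {size} internal nodes")
```

The brute-force enumeration is only feasible for small n, but it is enough to observe the growth of the middle output bits that the lower bound formalizes.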
[Figure: two OBDDs for the same Boolean function over the variables a1, a2, a3, b1, b2, b3, with terminals T and F. Judging by the node labels, one diagram uses the interleaved variable order a1, b1, a2, b2, a3, b3 (6 internal nodes) and the other the order a1, a2, a3, b1, b2, b3 (14 internal nodes).]
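The two diagrams appear to be the textbook ordering-sensitivity example f = a1·b1 ∨ a2·b2 ∨ a3·b3 (an assumption; the function itself is not recoverable from the labels alone). A compact sketch, reusing the subfunction-counting idea from above, reproduces the effect for any n: the interleaved order needs 2n internal nodes, while the blocked order needs about 2^(n+1) − 2.

```python
from itertools import product

def f_pairs(n, order):
    # f = a1 b1 OR a2 b2 OR ... OR an bn, with inputs supplied in the order
    # given by `order` (a list of names such as "a1", "b3", ...).
    def f(bits):
        val = dict(zip(order, bits))
        return int(any(val[f"a{i}"] and val[f"b{i}"] for i in range(1, n + 1)))
    return f

def reduced_obdd_size(f, num_vars):
    # Same subfunction-counting idea as in the earlier sketch.
    total = 0
    for level in range(num_vars):
        rest = num_vars - level - 1
        seen = set()
        for prefix in product((0, 1), repeat=level):
            lo = tuple(f(prefix + (0,) + s) for s in product((0, 1), repeat=rest))
            hi = tuple(f(prefix + (1,) + s) for s in product((0, 1), repeat=rest))
            if lo != hi:
                seen.add((lo, hi))
        total += len(seen)
    return total

n = 3
interleaved = [v for i in range(1, n + 1) for v in (f"a{i}", f"b{i}")]
blocked = [f"a{i}" for i in range(1, n + 1)] + [f"b{i}" for i in range(1, n + 1)]
for name, order in (("interleaved a1,b1,a2,b2,...", interleaved),
                    ("blocked a1,...,an,b1,...,bn", blocked)):
    print(name, "->", reduced_obdd_size(f_pairs(n, order), 2 * n), "internal nodes")
```

For n = 3 this reports 6 versus 14 internal nodes, which matches the node counts in the figure above.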
... Proof that OBDD_< is not as succinct as TT_<: In the following, we demonstrate the existence of a function that can be represented in a polynomial-sized tensor train but not in a polynomial-sized OBDD. We consider the HWB function (Bryant 1991), defined as follows. Definition 9. Let x = {x_1, x_2, ... ...
Preprint
A knowledge compilation map analyzes tractable operations in Boolean function representations and compares their succinctness. This enables the selection of appropriate representations for different applications. In the knowledge compilation map, all representation classes are subsets of the negation normal form (NNF). However, Boolean functions may be better expressed by a representation that is different from that of the NNF subsets. In this study, we treat tensor trains as Boolean function representations and analyze their succinctness and tractability. Our study is the first to evaluate the expressiveness of a tensor decomposition method using criteria from knowledge compilation literature. Our main results demonstrate that tensor trains are more succinct than ordered binary decision diagrams (OBDDs) and support the same polytime operations as OBDDs. Our study broadens their application by providing a theoretical link between tensor decomposition and existing NNF subsets.
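As a rough illustration of what "tensor trains as Boolean function representations" means operationally, here is a small numpy sketch (not from the preprint): a function over n bits is stored as n cores G_k of shape (r_{k−1}, 2, r_k) with r_0 = r_n = 1, and f(x_1, ..., x_n) is the chained product of the matrix slices selected by the input bits. The rank-2 parity example is a standard toy construction chosen only to keep the sketch self-contained.

```python
import numpy as np
from itertools import product

def tt_eval(cores, bits):
    # cores[k] has shape (r_{k-1}, 2, r_k) with r_0 = r_n = 1; the function value
    # is the chained product of the matrix slices selected by the input bits.
    v = np.ones((1, 1))
    for core, b in zip(cores, bits):
        v = v @ core[:, b, :]
    return int(round(v[0, 0]))

# Hand-built rank-2 tensor train for parity(x1, x2, x3) = x1 XOR x2 XOR x3.
keep = np.eye(2)                          # bit = 0: keep the running parity
flip = np.array([[0., 1.], [1., 0.]])     # bit = 1: flip the running parity
first  = np.stack([keep[0], flip[0]])[np.newaxis, :, :]   # shape (1, 2, 2)
middle = np.stack([keep, flip], axis=1)                    # shape (2, 2, 2)
last   = np.stack([flip[:, :1], keep[:, :1]], axis=1)      # shape (2, 2, 1)

cores = [first, middle, last]
for bits in product((0, 1), repeat=3):
    assert tt_eval(cores, bits) == sum(bits) % 2
print("rank-2 tensor train for 3-bit parity verified on all inputs")
```

The succinctness comparison in the preprint is about how large the ranks r_k must become for a given function; the evaluation mechanism itself stays this simple.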
... Due to the use of BDDs however, the current approach is limited to functions for which an efficient BDD representation exists. For multipliers, for example, it has been proven that no BDD of polynomial size can be found [26]. As the principle for the pattern detection is also applicable to other DD types, the basic algorithms presented in this paper can be reused and adapted for other data structures in future work. ...
Article
Full-text available
With the ongoing digitization, digital circuits have become increasingly present in everyday life. However, as circuits can be faulty, their verification is a challenging but essential task. In contrast to formal verification techniques, simulation techniques fail to fully guarantee the correctness of a circuit. However, due to the exponential complexity of the verification problem, formal verification can fail due to time or space constraints. To overcome this challenge, recently Polynomial Formal Verification (PFV) has been introduced. Here, it has been shown that several circuits and circuit classes can be formally verified in polynomial time and space. In general, these proofs have to be conducted manually, requiring a lot of time. However, in recent research, a method for automated PFV has been proposed, where a proof engine automatically generates human-readable proofs that show the polynomial size of a Binary Decision Diagram (BDD) for a given function. The engine analyses the BDD and finds a pattern, which is then proven by induction. In this article, we formalize the previously presented BDD patterns and propose algorithms for the pattern detection, establishing new possibilities for the automated proof generation for more complex functions. Furthermore, we show an exemplary proof that can be generated using the presented methods. This article is part of the theme issue ‘Emerging technologies for future secure computing platforms’.
... We have not had to introduce any evolutionary constraints favoring minimal or simple circuits. To get more insight into this phenomenon, we analyze in this section in detail the case of multiplication. Several types of circuit construction problems for multipliers do become formally hard (in NP) for minimal circuit resources [38], and no scalable solutions are then expected. However, without any additional requirements, multiplication has hitherto become increasingly difficult to evolve in larger circuits, because correct solutions are lost in a large search space. ...
Preprint
We propose that genetic encoding of self-assembling components greatly enhances the evolution of complex systems and provides an efficient platform for inductive generalization, i.e. the inductive derivation of a solution to a problem with a potentially infinite number of instances from a limited set of test examples. We exemplify this in simulations by evolving scalable circuitry for several problems. One of them, digital multiplication, has been intensively studied in recent years, where hitherto the evolutionary design of only specific small multipliers was achieved. The fact that this and other problems can be solved in full generality employing self-assembly sheds light on the evolutionary role of self-assembly in biology and is of relevance for the design of complex systems in nano- and bionanotechnology.
... Symbolic model checking is most commonly applied on Boolean programs, avoiding many of the mentioned problems, especially those related to arithmetic. Computing multiplication with the standard representation, Binary Decision Diagrams [18], is exponential in the size of the representation [7]. Other representations were designed to remedy this deficiency, such as Binary Moment Diagrams [8] or Boolean Expression Diagrams [24]. ...
Preprint
A comprehensive verification of parallel software imposes three crucial requirements on the procedure that implements it. Apart from accepting real code as program input and temporal formulae as specification input, the verification should be exhaustive, with respect to both control and data flows. This paper is concerned with the third requirement, proposing to combine explicit model checking to handle the control with symbolic set representations to handle the data. The combination of explicit and symbolic approaches is first investigated theoretically and we report the requirements on the symbolic representation and the changes to the model checking process the combination entails. The feasibility and efficiency of the combination are demonstrated on a case study using the DVE modelling language and we report a marked improvement in scalability compared to previous solutions. The results described in this paper show the potential to meet all three requirements for automatic verification in a single procedure combining explicit model checking with symbolic set representations.
... where |x| = x_1 + · · · + x_n is the number of variables set to 1 in the input x, and x_0 := 0, which means that the output is 0 if x_1 + · · · + x_n = 0. HWB_n is an example of a function with a clear and simple structure; nevertheless, the OBDD size is exponential [12]. (See Figure 5 for restricted BDDs representing the function HWB.) ...
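A direct Python transcription of the quoted definition (1-based indexing of the variables is assumed):

```python
def hwb(bits):
    # Hidden weighted bit: return the input bit whose 1-based index equals the
    # number of ones in the input; return 0 for the all-zero input (x_0 := 0).
    weight = sum(bits)
    return 0 if weight == 0 else bits[weight - 1]

assert hwb((0, 1, 1, 0)) == 1   # weight 2, so the output is x_2 = 1
assert hwb((0, 0, 0, 0)) == 0   # weight 0, so the output is x_0 = 0
```

Despite this two-line definition, as the excerpt notes, the OBDD size of HWB_n is exponential for every variable order.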
Preprint
Sentential decision diagrams (SDDs) introduced by Darwiche in 2011 are a promising representation type used in knowledge compilation. The relative succinctness of representation types is an important subject in this area. The aim of the paper is to identify which kind of Boolean functions can be represented by SDDs of small size with respect to the number of variables the functions are defined on. For this reason the sets of Boolean functions representable by different representation types in polynomial size are investigated and SDDs are compared with representation types from the classical knowledge compilation map of Darwiche and Marquis. Ordered binary decision diagrams (OBDDs) which are a popular data structure for Boolean functions are one of these representation types. SDDs are more general than OBDDs by definition but only recently, a Boolean function was presented with polynomial SDD size but exponential OBDD size. This result is strengthened in several ways. The main result is a quasipolynomial simulation of SDDs by equivalent unambiguous nondeterministic OBDDs, a nondeterministic variant where there exists exactly one accepting computation for each satisfying input. As a side effect an open problem about the relative succinctness between SDDs and free binary decision diagrams (FBDDs) which are more general than OBDDs is answered.
Preprint
Full-text available
The nonlinear filter model is an old and well understood approach to the design of secure stream ciphers. Extensive research over several decades has shown how to attack stream ciphers based on this model and has identified the security properties required of the Boolean function used as the filtering function to resist such attacks. This led to the problem of constructing Boolean functions which provide adequate security and at the same time are efficient to implement. Unfortunately, over the last two decades no good solutions to this problem appeared in the literature. The lack of good solutions has effectively led to the nonlinear filter model becoming more or less obsolete. This is a big loss to the cryptographic design toolkit, since the great advantages of the nonlinear filter model are its simplicity, well understood security and the potential to provide low cost solutions for hardware oriented stream ciphers. In this paper, we revive the nonlinear filter model by constructing appropriate Boolean functions which provide the required security and are also efficient to implement. We put forward concrete suggestions of stream ciphers which are κ-bit secure against known types of attacks for κ = 80, 128, 160, 192, 224 and 256. For the 80-bit, 128-bit, and the 256-bit security levels, the circuits for the corresponding stream ciphers require about 1743.5, 2771.5, and 5607.5 NAND gates respectively. For the 80-bit and the 128-bit security levels, the gate count estimates compare quite well to the famous ciphers Trivium and Grain-128a respectively, while for the 256-bit security level, we do not know of any other stream cipher design which has such a low gate count.
Preprint
Full-text available
We describe two new classes of functions which provide the presently best known trade-offs between low computational complexity, nonlinearity and (fast) algebraic immunity. The nonlinearity and (fast) algebraic immunity of the new functions substantially improve upon those properties of all previously known efficiently implementable functions. Appropriately chosen functions from the two new classes provide excellent solutions to the problem of designing filtering functions for use in the nonlinear filter model of stream ciphers, or in any other stream ciphers using Boolean functions for ensuring confusion. In particular, for n ≤ 20, we show that there are functions in our first family whose implementation efficiencies are significantly lower than all previously known functions achieving a comparable combination of nonlinearity and (fast) algebraic immunity. Given positive integers ℓ and δ, it is possible to choose a function from our second family whose linear bias is provably at most 2^{−ℓ}, fast algebraic immunity is at least δ (based on a conjecture which is well supported by experimental results), and which can be implemented in time and space which is linear in ℓ and δ. Further, the functions in our second family are built using homomorphic friendly operations, making these functions well suited for the application of transciphering.
Article
Full-text available
In 1979 Thompson [1] reported that, under a suitable model for VLSI chips, the product AT² of chip area A and time T to compute the Fast Fourier Transform on n inputs must satisfy AT² = Ω(n²). His model accounts for the chip area used by wires as well as for computational elements. He extended these results in [2] and in addition examined the sorting problem. Brent and Kung [3] introduced a somewhat different model for VLSI chips in which the area occupied by wires and circuit elements is convex. They demonstrate that AT² = Ω(n²) to multiply two n-bit integers, a result obtained with the original model of Thompson by Abelson and Andreae [4]. Brent and Kung show that A = Ω(n) and they present algorithms that come close to meeting their lower bounds. Savage [5] obtained bounds of AT² = Ω(p⁴) with both models for p×p matrix multiplication, inversion and transitive closure. Algorithms previously given by Kung and Leiserson [6] and Guibas et al. [7] were shown to be optimal. Preparata and Vuillemin [8] subsequently introduced a family of optimal matrix multiplication algorithms for Ω(1) ≤ T ≤ O(n).
Article
A data structure is presented for representing Boolean functions and an associated set of manipulation algorithms. Functions are represented by directed, acyclic graphs in a manner similar to the representations introduced by C. Y. Lee (1959) and S. B. Akers (1978), but with further restrictions on the ordering of decision variables in the graph. Although, in the worst case, a function requires a graph where the number of vertices grows exponentially with the number of arguments, many of the functions encountered in typical applications have a more reasonable representation. The algorithms have time complexity proportional to the sizes of the graphs being operated on, and hence are quite efficient as long as the graphs do not grow too large. Experimental results are presented from applying these algorithms to problems in logic design verification that demonstrate the practicality of the approach.
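The flavour of this data structure and its manipulation algorithms can be conveyed by a short, illustrative reconstruction in Python (not Bryant's published code; terminals as plain booleans and integer variable indices are assumptions of the sketch): nodes are hash-consed through a unique table so that equal subgraphs are shared, and a memoised Apply combines two diagrams in time proportional to the product of their sizes.

```python
# Illustrative reconstruction only; not Bryant's published algorithms verbatim.
_unique = {}                              # (var, id(low), id(high)) -> node

class Node:
    __slots__ = ("var", "low", "high")
    def __init__(self, var, low, high):
        self.var, self.low, self.high = var, low, high

def mk(var, low, high):
    # Hash-consing: redundant tests are skipped and equal subgraphs are shared,
    # so structurally equal functions end up represented by the same object.
    if low is high:
        return low
    key = (var, id(low), id(high))
    if key not in _unique:
        _unique[key] = Node(var, low, high)
    return _unique[key]

def var_node(var):
    return mk(var, False, True)           # terminals are the plain booleans

def apply_op(op, f, g, memo=None):
    # Memoised Apply: each pair of sub-diagrams is combined at most once, so the
    # running time is proportional to the product of the two diagram sizes.
    memo = {} if memo is None else memo
    if isinstance(f, bool) and isinstance(g, bool):
        return op(f, g)
    key = (id(f), id(g))
    if key in memo:
        return memo[key]
    fv = f.var if isinstance(f, Node) else float("inf")
    gv = g.var if isinstance(g, Node) else float("inf")
    v = min(fv, gv)
    f0, f1 = (f.low, f.high) if fv == v else (f, f)
    g0, g1 = (g.low, g.high) if gv == v else (g, g)
    result = mk(v, apply_op(op, f0, g0, memo), apply_op(op, f1, g1, memo))
    memo[key] = result
    return result

# Example: a1 b1 OR a2 b2 with the interleaved order a1 < b1 < a2 < b2.
AND = lambda x, y: x and y
OR = lambda x, y: x or y
a1, b1, a2, b2 = (var_node(i) for i in range(4))
f = apply_op(OR, apply_op(AND, a1, b1), apply_op(AND, a2, b2))
print("nodes in the unique table (including intermediates):", len(_unique))
```

The point made in the abstract is exactly this: every operation runs in time proportional to the sizes of the graphs involved, so the approach is efficient whenever the graphs stay small.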
Article
The theory of switching circuits may be divided into two major divisions, analysis and synthesis. The problem of analysis, determining the manner of operation of a given switching circuit, is comparatively simple. The inverse problem of finding a circuit satisfying certain given operating conditions, and in particular the best circuit is, in general, more difficult and more important from the practical standpoint. A basic part of the general synthesis problem is the design of a two-terminal network with given operating characteristics, and we shall consider some aspects of this problem.
Conference Paper
Let M = {0, 1, 2, ..., m−1}, N = {0, 1, 2, ..., n−1}, and f: M × N → {0, 1} a Boolean-valued function. We will be interested in the following problem and its related questions. Let i ∈ M, j ∈ N be integers known only to two persons P1 and P2, respectively. For P1 and P2 to determine cooperatively the value f(i, j), they send information to each other alternately, one bit at a time, according to some algorithm. The quantity of interest, which measures the information exchange necessary for computing f, is the minimum number of bits exchanged in any algorithm. For example, if f(i, j) = (i + j) mod 2, then 1 bit of information (conveying whether i is odd) sent from P1 to P2 will enable P2 to determine f(i, j), and this is clearly the best possible. The above problem is a variation of a model of Abelson [1] concerning information transfer in distributive computations.
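The worked example in this abstract corresponds to the following one-bit protocol; writing P1 and P2 as Python functions is of course only an illustration.

```python
# One-bit protocol for f(i, j) = (i + j) mod 2, mirroring the example above.
def p1_message(i):
    return i % 2                 # the single bit P1 sends (whether i is odd)

def p2_output(message, j):
    return (message + j) % 2     # P2 can now determine f(i, j)

assert all(p2_output(p1_message(i), j) == (i + j) % 2
           for i in range(16) for j in range(16))
```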
Conference Paper
Increased use of Very Large Scale Integration (VLSI) for the fabrication of digital circuits has led to increased interest in complexity results on the inherent VLSI difficulty of various problems. Lower bounds have been obtained for problems such as integer multiplication [1,2], matrix multiplication [7], sorting [8], and discrete Fourier transform [9], all within VLSI models similar to one originally developed by Thompson [8,9]. The lower bound results all pertain to a space-time trade-off measure that arises naturally within this model. In this paper, we extend the model and the class of functions for which non-trivial bounds can be proved. In Section 2, we give a more general model than has been proposed previously. In Section 3 we show how to reduce the derivation of lower bounds within the model to a problem in distributed computing. In Section 4, we consider lower bounds for a number of predicates: n-input, 1-output functions (as contrasted with the n-input, n-output functions which have been studied previously). In Section 5, we show that previous lower bound results (for n-input, n-output functions) also apply even when the model is extended to allow nondeterminism, randomness, and multiple arrivals. Finally, the full details of the results presented here will appear in the final version of this paper.
Article
Exponential lower bounds on the complexity of computing the clique functions in the Boolean decision-tree model are proved. For one-time-only branching programs, large polynomial lower bounds are proved for k-clique functions if k is fixed, and exponential lower bounds if k increases with n. Finally, the hierarchy of the classes BP_d(P) of all sequences of Boolean functions that may be computed by d-times-only branching programs of polynomial size is introduced. It is shown constructively that BP_1(P) is a proper subset of BP_2(P).