Conference Paper

Optimal speedup of Las Vegas algorithms

Authors:
  • Michael Luby (BitRipple Inc.)
  • Alistair Sinclair
  • David Zuckerman

Abstract

Let A be a Las Vegas algorithm, i.e., a randomized algorithm that always produces the correct answer when it stops, but whose running time is a random variable. The authors consider the problem of minimizing the expected time required to obtain an answer from A using strategies that simulate A as follows: run A for a fixed amount of time t_1, then run A independently for a fixed amount of time t_2, and so on. The simulation stops if A completes its execution during any of the runs. Let S = (t_1, t_2, ...) be a strategy, and let ℓ_A = inf_S T(A, S), where T(A, S) is the expected running time of the simulation of A under strategy S. The authors describe a simple universal strategy S^univ with the property that, for any algorithm A, T(A, S^univ) = O(ℓ_A log ℓ_A). Furthermore, they show that this is the best performance that can be achieved, up to a constant factor, by any universal strategy.
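The universal strategy S^univ is the restart schedule now widely known as the Luby sequence, 1, 1, 2, 1, 1, 2, 4, 1, 1, 2, 4, 8, ...: run lengths are powers of two, repeated in a self-similar pattern (the citing snippets below refer to it as the "Luby series"). A minimal Python sketch of the standard recursive definition:

    def luby(i):
        # i-th term (1-indexed) of the Luby sequence: 1, 1, 2, 1, 1, 2, 4, ...
        k = 1
        while (1 << k) - 1 < i:                  # smallest k with i <= 2^k - 1
            k += 1
        if i == (1 << k) - 1:                    # i ends a complete block: emit 2^(k-1)
            return 1 << (k - 1)
        return luby(i - (1 << (k - 1)) + 1)      # otherwise recurse into the repeated prefix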
... It has been observed (and proved [LSZ93]) that in the absence of a priori knowledge about the cost landscape, scheduling such restarts according to a specific pattern boosts the performance of such algorithms in terms of expected time to find a solution. This has led to efficient algorithms for several problems, including SAT [PD07]. ...
... This has led to efficient algorithms for several problems, including SAT [PD07]. The major contribution of this chapter is an algorithmic scheme for distributing the multi-objective optimization effort among different values of λ according to the pattern suggested in [LSZ93]. The algorithm thus obtained is very efficient in practice. ...
... This practice of restarting in SLS optimization has been in use at least since [SKC93]. At the same time, there is a theory of restarts, formulated in [LSZ93], which applies to Las Vegas algorithms, defined for decision problems rather than optimization problems. Such algorithms are characterized by the fact that their run-time until giving a correct answer to the decision problem is a random variable. ...
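A Luby-scheduled restart wrapper for a Las Vegas algorithm is then a short loop. The sketch below uses the luby() function defined above; the las_vegas_step(cutoff) interface and the base multiplier are illustrative assumptions, not something prescribed by [LSZ93]:

    def run_with_luby_restarts(las_vegas_step, base=1):
        # Hypothetical interface: las_vegas_step(cutoff) runs the algorithm with
        # a fresh random seed for at most `cutoff` steps and returns a solution,
        # or None if the cutoff expired before a solution was found.
        i = 1
        while True:
            result = las_vegas_step(base * luby(i))
            if result is not None:
                return result
            i += 1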
Thesis
In this thesis we develop new techniques for solving multi-criteria optimization problems. These problems arise naturally in many (if not all) application domains where choices are evaluated according to different conflicting criteria (e.g., cost and performance). Unlike classical optimization, such problems generally do not admit a single optimum, but rather a set of incomparable solutions, also known as the Pareto front, which represents the best possible trade-offs between the conflicting objectives. The major contribution of the thesis is the development of algorithms for finding or approximating these Pareto solutions for hard combinatorial problems. Several problems of this kind arise naturally during the process of mapping and scheduling a software application onto a multi-core architecture such as P2012, currently under development by STMicroelectronics.
... The main advantage of doing so is that wrong decisions made at the very beginning of the search can be cancelled to avoid being stuck in a subpart of the search space. To this end, many restart schemes have been proposed [8], either static, such as those based on the Luby series [22,28], or dynamic, as in PicoSAT [5] or Glucose [3]. In this section, we focus on the latter, considering restart strategies based on the quality of learned constraints. ...
... As Pueblo [36] is heavily based on MiniSat [13], it most likely inherits its restart policy, even though no mention of this feature is made in [36] either. Regarding more recent solvers, Sat4j [25] implements PicoSAT's static and aggressive restart scheme [6] and RoundingSat [16] uses a Luby-based restart policy [22,28]. Note that a common point of these two strategies is that they do not take into account the constraints being considered, as they are both static policies. ...
Conference Paper
Current implementations of pseudo-Boolean (PB) solvers working on native PB constraints are based on the CDCL architecture which empowers highly efficient modern SAT solvers. In particular, such PB solvers not only implement a (cutting-planes-based) conflict analysis procedure, but also complementary strategies for components that are crucial for the efficiency of CDCL, namely branching heuristics, learned constraint deletion and restarts. However, these strategies are mostly reused by PB solvers without considering the particular form of the PB constraints they deal with. In this paper, we present and evaluate different ways of adapting CDCL strategies to take the specificities of PB constraints into account while preserving the behavior they have in the clausal setting. We implemented these strategies in two different solvers, namely Sat4j (for which we consider three configurations) and RoundingSat. Our experiments show that these dedicated strategies improve, sometimes significantly, the performance of these solvers on both decision and optimization problems.
... The main advantage of doing so is that wrong decisions made at the very beginning of the search can be cancelled to avoid being stuck in a subpart of the search space. Many restart schemes have been proposed [5], either static, such as those based on the Luby series [18,21], or dynamic, as in PicoSAT [4] or Glucose [3]. In this section, we focus on the latter, considering restart strategies based on the quality of learned constraints. ...
Presentation
Current implementations of pseudo-Boolean (PB) solvers are based on the CDCL architecture, which is the central component of modern SAT solvers. In addition to clause learning, this architecture comes with many strategies that help the solver find its way through the search space, either to a solution or to an unsatisfiability proof. Particularly important is the decision heuristic, but other features like learned clause deletion or restarts also matter. Currently, these strategies are mostly used "as is" in PB solvers, without considering the particular form of the PB constraints they deal with. In this paper, we introduce new ways of adapting these strategies to better take into account the specificities of such constraints, especially regarding their weights and propagation properties. In particular, our experiments show that carefully considering these criteria may have a significant impact on the performance of the solver.
... This information can be the set of learned clauses, the saved polarities, variable activities, etc. There are several restart policies used in modern SAT solvers, including the Luby restart, which makes use of the Luby series [3], and arithmetic and geometric policies, also based on series. Recently, a restart strategy based on the LBD (Literal Block Distance) score [19] has been introduced and implemented in the solver Glucose [25]. ...
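Glucose's dynamic policy can be sketched as follows; this is an illustrative reconstruction rather than Glucose's actual code, and the window of 50 learned clauses and the 0.8 factor are the commonly cited defaults, assumed here:

    from collections import deque

    class LbdRestartPolicy:
        # Restart when the short-term average LBD of learned clauses is
        # noticeably worse (higher) than the long-term average, i.e. when the
        # clauses currently being learned are of poor quality.
        def __init__(self, window=50, margin=0.8):
            self.recent = deque(maxlen=window)   # LBDs of the latest learned clauses
            self.total = 0.0                     # sum of all LBDs seen so far
            self.count = 0
            self.margin = margin

        def on_learned_clause(self, lbd):
            self.recent.append(lbd)
            self.total += lbd
            self.count += 1

        def should_restart(self):
            if len(self.recent) < self.recent.maxlen:
                return False                     # not enough data yet
            recent_avg = sum(self.recent) / len(self.recent)
            global_avg = self.total / self.count
            return recent_avg * self.margin > global_avg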
Article
Search space splitting and portfolio are the two main approaches used in parallel SAT solving. Each of them has its strengths but also its weaknesses. Decomposition in search space splitting can help improve speedup on satisfiable instances, while competition in portfolio increases robustness. Many parallel hybrid approaches have been proposed in the literature, but most of them still cope with load-balancing issues that are the cause of a non-negligible overhead. In this paper, we describe a new parallel hybridization scheme based on both search space splitting and portfolio that does not require the use of load-balancing mechanisms (such as dynamic work stealing).
... According to Ref. [28], this sequence can be chosen optimally if the runtime distribution of the algorithm is known. However, in most cases the runtime distribution is not known and also cannot be predicted. ...
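When the distribution is known, the optimal strategy of [LSZ93] restarts at a fixed cutoff t* minimizing E[min(T, t)] / Pr[T <= t], the expected total time of a restart-at-t strategy. A sketch that estimates t* from empirical runtime samples (the function name and the use of observed runtimes as candidate cutoffs are illustrative choices):

    import numpy as np

    def optimal_cutoff(samples):
        # For each candidate cutoff t, a restart-at-t strategy pays E[min(T, t)]
        # per attempt and succeeds per attempt with probability Pr[T <= t],
        # so its expected total time is E[min(T, t)] / Pr[T <= t].
        samples = np.sort(np.asarray(samples, dtype=float))
        n = len(samples)
        best_t, best_cost = None, np.inf
        for i, t in enumerate(samples):
            succ = i + 1                                          # runs finishing within t
            e_min = (samples[:succ].sum() + t * (n - succ)) / n   # E[min(T, t)]
            cost = e_min / (succ / n)
            if cost < best_cost:
                best_t, best_cost = t, cost
        return best_t, best_cost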
Article
This paper presents a new method to automate the sizing of analog circuits. The method emulates the manual design procedure. The sizing task is formulated as a constraint programming problem. Two new algorithms are introduced: first, a hierarchical structural analysis of functional blocks that automatically sets up the analytical equations for the sizing; and second, a heuristic to guide the branching process of a constraint programming solver. We achieve a reproduction of the manual proceeding of designers and, at the same time, an automation of the set-up of the design problem, reducing the set-up effort to seconds. The method is primarily designed for operational amplifiers. More than 20 circuits, including different variants of two-stage, folded-cascode, telescopic and symmetrical op-amps, are considered at this time. This paper presents a runtime comparison of the newly developed branching heuristic with a generic branching heuristic on 20 different circuits. Furthermore, sizing results for a two-stage op-amp with transistors in weak inversion and a folded-cascode op-amp with a common-mode feedback circuit are presented to show the effectiveness of the approach.
Article
The Fisher–Yates random shuffling (FRS) algorithm combined with the finite-difference time-domain (FDTD) method, referred to as FRS-FDTD, is proposed to construct a fine grid model for the forward simulation of ground-penetrating radar (GPR) in mixed media. First, the FDTD method was used to divide the coarse grid model into several fine grid models by conforming to the boundary conditions of different media, and the corresponding dielectric parameters were assigned to Yee cells in each fine grid model. Then, the FRS algorithm was used to scramble all Yee cells with equal probability randomly, and the array of scrambled Yee cells was recombined into a coarse grid model. Finally, the geoelectric model of mixed media was generated with the FDTD method and a GPR image excited by electromagnetic wave pulses was obtained. To explore the characteristic signals and dielectric properties of the GPR electromagnetic response in mixed media, the image entropy theory was used to describe the GPR image, and the waveform analysis and wavelet transform mode maximum (WTMM) methods were used to analyze the single-channel GPR signal of the mixed media. The results showed that the FRS-FDTD method can be used to construct a valid and stable fine grid model for simulating GPR in mixed media. The model effectively inhibits electromagnetic attenuation and energy dissipation, and the WTMM method explains the relative dielectric permittivity distribution of the mixed media. The findings of this study can be used as a theoretical basis for correcting radar parameters and interpreting images when GPR is applied to mixed media.
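The FRS step is the classic Fisher–Yates shuffle, which permutes an array uniformly at random; a minimal sketch (representing the Yee cells as an arbitrary Python list is an assumption of this illustration):

    import random

    def fisher_yates_shuffle(cells):
        # In-place Fisher-Yates shuffle: each of the len(cells)! permutations is
        # produced with equal probability, the property the FRS-FDTD scheme
        # relies on to scramble Yee cells with equal probability.
        for i in range(len(cells) - 1, 0, -1):
            j = random.randint(0, i)             # uniform index in [0, i]
            cells[i], cells[j] = cells[j], cells[i]
        return cells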
Thesis
Boolean SATisfiability has been used successfully in many applicative contexts. This is due to the capability of modern SAT solvers to solve complex problems involving millions of variables. Most SAT solvers have long been sequential and based on the CDCL algorithm. The emergence of many-core machines opens new possibilities in this domain. There are numerous parallel SAT solvers that differ by their strategies, programming languages, etc. Hence, comparing the efficiency of the theoretical approaches in a fair way is a challenging task. Moreover, the introduction of a new approach needs a deep understanding of the existing solvers' implementations. We present Painless: a framework to build parallel SAT solvers for many-core environments. Thanks to its genericity and modularity, it provides an implementation of the basics of parallel SAT solving, and it enables users to easily create their own parallel solvers based on new strategies. Painless allowed us to build and test existing strategies by combining solutions present in the literature. We were able to easily mimic the behaviour of three state-of-the-art solvers by factorising many parts of their implementations. The efficiency of Painless was highlighted by the fact that these implementations are at least as efficient as the original ones; moreover, one of our solvers won the SAT Competition 2018. Going further, Painless enabled us to conduct fair experiments with divide-and-conquer solvers, and allowed us to highlight original compositions of strategies that perform better than already known ones. We were also able to create and test new original strategies exploiting the modular structure of SAT instances.
Article
Answer-Set Programming (ASP) is a powerful and expressive knowledge representation paradigm with a significant number of applications in logic-based AI. The traditional ground-and-solve approach, however, requires ASP programs to be grounded upfront and thus suffers from the so-called grounding bottleneck (i.e., ASP programs easily exhaust all available memory and thus become unsolvable). As a remedy, lazy-grounding ASP solvers have been developed, but many state-of-the-art techniques for grounded ASP solving have not been available to them yet. In this work we present, for the first time, adaptations to the lazy-grounding setting of many important techniques, like restarts, phase saving, domain-independent heuristics, and learned-clause deletion. Furthermore, we investigate their effects, and in general observe a large improvement in solving capabilities, but we also uncover negative effects in certain cases, indicating the need for portfolio solving as known from other solvers.
Article
A common approach to global optimization is to combine local optimization methods with random restarts. Restarts have been used as a performance-boosting approach: they can be a means to avoid “slow progress” when exploiting a potentially good solution, and they can enable the discovery of multiple local solutions, thus improving the overall quality of the returned solution. A multi-start method is a way to integrate local and global approaches, where the global search itself can be used to restart a local search. Bayesian optimization methods aim to find global optima of functions that can only be point-wise evaluated by means of a possibly expensive oracle. We propose the stochastic optimization with adaptive restart (SOAR) framework, which uses the predictive capability of Gaussian process models to adaptively restart local search and intelligently select restart locations with current information. This approach attempts to balance exploitation with exploration of the solution space. We study the asymptotic convergence of SOAR to a global optimum, and empirically evaluate SOAR performance through a specific implementation that uses the Trust Region method as the local search component. Numerical experiments show that the proposed algorithm outperforms existing methodologies over a suite of test problems of varying dimension with a finite budget of function evaluations.
Article
Full-text available
We study strategies for converting randomized algorithms of the Las Vegas type into randomized algorithms with small tail probabilities.
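The generic amplification argument behind such conversions fits in one inequality (a hedged sketch of the standard argument, not necessarily the authors' exact construction):

    % Let p = Pr[T <= t] be the probability that a single run, truncated at
    % time t, succeeds. Then k independent truncated runs all fail with probability
    \[
      \Pr[\text{no success within } k \text{ runs}] = (1 - p)^k \le e^{-pk},
    \]
    % so k = ceil(ln(1/delta) / p) runs, i.e. total time O((t/p) log(1/delta)),
    % drive the failure probability below any target delta.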
Conference Paper
With random competition we propose a method for parallelizing arbitrary theorem provers. We can prove high efficiency (compared with other parallel theorem provers) of random competition on highly parallel architectures with thousands of processors. This method is suited for all kinds of distributed memory architectures, particularly for large networks of high-performance workstations, since no communication between the processors is necessary during run-time. On a set of examples we show the performance of random competition applied to the model elimination theorem prover SETHEO. Besides the speedup results for random competition, our theoretical analysis gives fruitful insight into the interrelation between search-tree structure, run-time distribution and the parallel performance of OR-parallel search in general.
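Random competition amounts to launching independent randomized runs of the same prover and returning the first answer, with no communication between runs. A minimal sketch with Python's standard library (prove(seed) is a hypothetical single-run entry point, not SETHEO's actual interface):

    from concurrent.futures import FIRST_COMPLETED, ProcessPoolExecutor, wait

    def random_competition(prove, seeds):
        # One independent randomized run per seed; the first run to finish wins.
        pool = ProcessPoolExecutor(max_workers=len(seeds))
        futures = [pool.submit(prove, seed) for seed in seeds]
        done, _ = wait(futures, return_when=FIRST_COMPLETED)
        result = next(iter(done)).result()
        # Drop runs that have not started yet (Python 3.9+); already-running
        # worker processes are not forcibly killed.
        pool.shutdown(wait=False, cancel_futures=True)
        return result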
OR-Parallel Theorem Proving with Random Competition
  • Wolfgang Ertel
Wolfgang Ertel. OR-Parallel Theorem Proving with Random Competition. Proceedings of Logic Programming and Automated Reasoning, St. Petersburg, July 1992, Springer Lecture Notes in AI Vol. 624, pp. 226–237.