## No full-text available

To read the full-text of this research,

you can request a copy directly from the author.


... SAT has been a pivotal problem in theoretical computer science ever since the advent of the Cook-Levin Theorem [7, 13], with applications in operations research, artificial intelligence and bioinformatics. Moreover, it continues to be studied under various specialized models, such as random-SAT, and in the context of building efficient SAT solvers for real-life scenarios. ...

... Moreover, it continues to be studied under various specialized models, such as random-SAT, and in the context of building efficient SAT solvers for real-life scenarios. In the complexity theoretic setting, we know that while 3SAT is NP-complete [7, 13], 2SAT can be solved in linear time [12, 9, 3]. Given the fundamental nature of 2SAT, in this paper, we consider the following question: What is the minimum amount of information needed to solve 2SAT in polynomial time? ...

... In particular, if each clause involves at most k literals, then the problem is classified as kSAT. It is well known that while 2SAT can be solved in linear time [12, 9, 3], kSAT for k ≥ 3 is NP-complete [7, 13]. A useful notion is that of clause types, where a clause type is defined as the unordered set of literals present in the clause. ...
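The linear-time solvability of 2SAT mentioned here is classically obtained via the implication-graph construction of Aspvall, Plass and Tarjan: a formula is satisfiable iff no variable lies in the same strongly connected component as its negation. As a hedged illustration (a minimal sketch, not the algorithm of any particular cited reference), in Python:

```python
def solve_2sat(n, clauses):
    """Decide a 2-CNF instance over variables 1..n via the implication graph.

    Each clause (a, b), with nonzero ints and negative meaning negation,
    encodes the implications (-a -> b) and (-b -> a).  The formula is
    satisfiable iff no variable shares a strongly connected component
    with its negation (Aspvall-Plass-Tarjan).
    """
    def node(lit):  # map literal to a node index in [0, 2n)
        return 2 * (abs(lit) - 1) + (1 if lit < 0 else 0)

    adj = [[] for _ in range(2 * n)]
    for a, b in clauses:
        adj[node(-a)].append(node(b))
        adj[node(-b)].append(node(a))

    # Iterative Tarjan SCC (avoids recursion limits on large instances).
    index = [None] * (2 * n)
    low = [0] * (2 * n)
    comp = [None] * (2 * n)
    counter = ncomp = 0
    stack, on_stack = [], [False] * (2 * n)
    for root in range(2 * n):
        if index[root] is not None:
            continue
        work = [(root, 0)]
        while work:
            v, pi = work[-1]
            if pi == 0:  # first visit: assign discovery index
                index[v] = low[v] = counter
                counter += 1
                stack.append(v)
                on_stack[v] = True
            descended = False
            for i in range(pi, len(adj[v])):
                w = adj[v][i]
                if index[w] is None:
                    work[-1] = (v, i + 1)
                    work.append((w, 0))
                    descended = True
                    break
                elif on_stack[w]:
                    low[v] = min(low[v], index[w])
            if descended:
                continue
            if low[v] == index[v]:  # v is the root of an SCC
                while True:
                    w = stack.pop()
                    on_stack[w] = False
                    comp[w] = ncomp
                    if w == v:
                        break
                ncomp += 1
            work.pop()
            if work:  # propagate low-link to parent
                u, _ = work[-1]
                low[u] = min(low[u], low[v])
    return all(comp[node(x)] != comp[node(-x)] for x in range(1, n + 1))
```

The graph has 2n nodes and 2m edges, and Tarjan's algorithm is linear in both, matching the cited linear-time bound.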

What is the minimum amount of information and time needed to solve 2SAT? When the instance is known, it can be solved in polynomial time, but is this also possible without knowing the instance? Bei, Chen and Zhang (STOC '13) considered a model where the input is accessed by proposing possible assignments to a special oracle. This oracle, on encountering some constraint unsatisfied by the proposal, returns only the constraint index. It turns out that, in this model, even 1SAT cannot be solved in polynomial time unless P=NP. Hence, we consider a model in which the input is accessed by proposing probability distributions over assignments to the variables. The oracle then returns the index of the constraint that is most likely to be violated by this distribution. We show that the information obtained this way is sufficient to solve 1SAT in polynomial time, even when the clauses can be repeated. For 2SAT, as long as there are no repeated clauses, in polynomial time we can even learn an equivalent formula for the hidden instance and hence also solve it. Furthermore, we extend these results to the quantum regime. We show that in this setting 1QSAT can be solved in polynomial time up to constant precision, and 2QSAT can be learnt in polynomial time up to inverse polynomial precision.

... Automation' (RSS' 16), 'Minimality and Trade-offs in Automated Robot Design' (RSS '17), and 'Autonomous Robot Design' (ICRA '18). 1 Naturally enough, the research is focused on the development of algorithmic tools to help answer such design questions. Several ideas have been proffered as useful ways to tackle robot design problems. ...

... The reader may also note that the cost function introduced in (2) depends, via (1), on enumerating satisfying assignments to Ψ. In light of the Cook-Levin theorem [4], [16], this implies that merely evaluating the cost of a design in this context is a hard algorithmic problem. This is perhaps not a surprise: The sorts of overlapping and interacting constraints that govern the validity of robot designs and the actions they enable are, in an informal sense, the essence of what makes problems NP-Complete. ...

Assuming one wants to design the most cost-effective robot for some task, how difficult is it to choose the robot’s actuators? This paper addresses that question in algorithmic terms, considering the problem of identifying optimal sets of actuation capabilities to allow a robot to complete a given task. We consider various cost functions which model the cost needed to equip a robot with some capabilities, and show that the general form of this problem is NP-hard, confirming what many perhaps have suspected about this sort of design-time optimization. As a result, for several questions of interest, having both optimality and efficiency of solution is unlikely. However, we also show that, for some specific types of cost functions, the problem is either polynomial time solvable or fixed-parameter tractable.

... There are many decision problems for which the complexity status is currently unknown. To say at least something about their relative complexity, Cook (1971) and Levin (1973) developed useful machinery, which has led to the definition of the class of NP-complete problems. ...

... Such a problem Q in NP is an NP-complete problem. Cook (1971) and Levin (1973) independently showed that there are NP-complete problems. They each proved that the unrestricted Boolean satisfiability problem (SAT) is NP-complete. ...

We discuss some claims that certain UCOMP devices can perform hypercomputation (compute Turing-uncomputable functions) or perform super-Turing computation (solve NP-complete problems in polynomial time). We discover that all these claims rely on the provision of one or more unphysical resources.


NP-Complete problems have an important attribute: if one NP-Complete problem can be solved in polynomial time, then all NP-Complete problems have polynomial solutions. The 3-CNF-SAT problem is NP-Complete, and the primary method of solving it checks all rows of the truth table, a task of time order Ω(2^n). This paper claims that, by changing the viewpoint on the problem, it is possible to decide in time O(n^10) whether a 3-CNF-SAT instance is satisfiable. In this paper, the values of all clauses are initially considered false. Under this presumption, each row of the truth table can be represented in string form in order to define the set of compatible clauses for each string. So, rather than processing the strings, their clauses are processed, implying that instead of 2^n strings only O(n^3) clauses are to be processed; therefore, the time and space complexity of the algorithm would be polynomial.
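For context, the exhaustive truth-table check that the abstract cites as the Ω(2^n) baseline can be sketched as follows. This is an illustrative Python sketch of the brute-force baseline only, not the paper's proposed polynomial method:

```python
from itertools import product

def brute_force_sat(n, clauses):
    """Exhaustive truth-table check of a CNF formula over variables 1..n.

    Clauses are tuples of nonzero ints; a negative literal -v means NOT v.
    Enumerates all 2^n assignments, hence the exponential baseline.
    """
    for bits in product([False, True], repeat=n):
        # A clause is satisfied if any literal matches its variable's value.
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):
            return True
    return False
```

For instance, `brute_force_sat(3, [(1, 2, 3), (-1, -2, -3)])` examines at most 2^3 assignments, whereas the abstract's claim is that only the O(n^3) possible 3-literal clause types need to be processed.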

... The P versus NP problem (which is also called the search problem) was formulated by Cook [1] and Levin [2]. Cook gave an official description of this problem for the Millennium Prize Problems [3]. ...

... The complexity of the predicate 3-SAT. The 3-SAT problem is described in [1,2]. In these papers, it was also proved that 3-SAT is NP-complete. ...

Classes of time and space complexity of Turing machines are defined, and relationships between them are discussed. New relationships between the defined complexity classes are described.

... See [2] as a general overview. Or consult the roots, [12], [10], [11], [5], [1], [7], [8], [9]. ...

We propose a polynomially bounded, in time and space, method to decide whether a given 3-SAT formula is satisfiable or not. The tools we use here are, in fact, very simple. We first decide satisfiability for a particular 3-SAT formula, called pivoted 3-SAT, and then show, via a plain transformation that preserves the polynomial boundaries, that 3-SAT formulas can be written as pivoted formulas.

... For the following proofs we use the well-known satisfiability problem SAT for propositional formulas. The problem SAT is NP-complete [14, 30]. Moreover, hardness for SAT holds even when restricted to propositional formulas that are in 3CNF. ...

In this paper we introduce a computational-level model of theory of mind (ToM) based on dynamic epistemic logic (DEL), and we analyze its computational complexity. The model is a special case of DEL model checking. We provide a parameterized complexity analysis, considering several aspects of DEL (e.g., number of agents, size of preconditions, etc.) as parameters. We show that model checking for DEL is PSPACE-hard, also when restricted to single-pointed models and S5 relations, thereby solving an open problem in the literature. Our approach is aimed at formalizing current intractability claims in the cognitive science literature regarding computational models of ToM.

... One important aspect of the satisfiability problem is that it is NP-complete. SAT was proven to be NP-complete by Cook in 1971 [2] and independently by Levin in 1973 [9] (known today as the Cook-Levin theorem), and to this day SAT remains the most researched NP-complete problem. The development of SAT solvers was incremental until the 2000s and early 2010s, when SAT was split into various subproblems [15], and efficient solvers were found for many of them (most notably Survey Propagation, which solves Random-SAT in quasipolynomial time [7]). ...

... Proof verification models and corresponding complexity classes ranging from NP, to IP, MIP and PCP greatly enrich the theory of computing. The class NP [14, 41, 31], one of the cornerstones of theoretical computer science, corresponds to the proof verification of a proof string by an efficient deterministic computer. Interactive models of proof verification were first proposed by Babai [4] and Goldwasser, Micali, and Rackoff [20]. ...

We present a protocol that transforms any quantum multi-prover interactive proof into a nonlocal game in which questions consist of a logarithmic number of bits and answers of a constant number of bits. As a corollary, this proves that the promise problem corresponding to the approximation of the nonlocal value to inverse polynomial accuracy is complete for QMIP*, and therefore NEXP-hard. This establishes that nonlocal games are provably harder than classical games without any complexity theory assumptions. Our result also indicates that gap amplification for nonlocal games may be impossible in general and provides negative evidence for the possibility of the gap amplification approach to the multi-prover variant of the quantum PCP conjecture.

... For example, given a string x, a Kolmogorov machine can build a binary tree over x and then move fast about x. Leonid Levin used a universal Kolmogorov machine to construct his algorithm for NP problems that is optimal up to a multiplicative constant [41,22]. The up-to-a-multiplicative-constant form is not believed to be achievable for the multitape Turing machine model popular in theoretical computer science. ...

... Boolean satisfiability (SAT) is a fundamental problem in mathematical logic and computing theory, and one of the first problems proven NP-complete [1], [2]. The study of SAT is significant in both theory and practice. ...

Boolean satisfiability (SAT) is a fundamental problem in computer science and one of the first proven $\mathbf{NP}$-complete problems. Although no theoretically polynomial time algorithm for SAT is known, many heuristic SAT methods have been developed for practical problems. For the sake of efficiency, various techniques have been explored: from discrete to continuous methods, from sequential to parallel programming, from constrained to unconstrained optimization, from deterministic to stochastic studies. In a certain sense, showing unsatisfiability is a main difficulty in finding an efficient algorithm for SAT. To address this difficulty, this article presents a linear algebra formulation for unsatisfiability testing, a procedure dramatically different from DPLL. It gives an affirmative answer to an open question posed by Kautz and Selman in their article "The State of SAT". The new approach provides a chance to disprove satisfiability efficiently, by resorting to a linear system that has no solution whenever the investigated formula is unsatisfiable. Theoretically, the method can be applied to test an arbitrary formula in polynomial time. It remains unclear whether it can also show satisfiability efficiently in the same way; if so, $\mathbf{NP}=\mathbf{P}$. In any case, the method is able to deliver a definite result for uniquely positive $3$-SAT in polynomial time; to the best of our knowledge, this constitutes the first polynomial time algorithm for that problem. The new formulation could thus provide a complementary choice to ad hoc methods for the SAT problem.

... K-SAT is NP-complete for all K ≥ 3 [34,51,52]. ...

A central goal in quantum computing is the development of quantum hardware and quantum algorithms in order to analyse challenging scientific and engineering problems. Research in quantum computation involves contributions from both physics and computer science, hence this article presents a concise introduction to basic concepts from both fields that are used in annealing-based quantum computation, an alternative to the more familiar quantum gate model. We introduce some concepts from computer science required to define difficult computational problems and to realise the potential relevance of quantum algorithms to find novel solutions to those problems. We introduce the structure of quantum annealing-based algorithms, as well as two examples of such algorithms for solving instances of the max-SAT and Minimum Multicut problems. An overview of the quantum annealing systems manufactured by D-Wave Systems is also presented.

... Cook [3] and independently Levin [6] proved that the CNF-satisfiability problem is NP-complete. This proof shows how every decision problem in the complexity class NP can be reduced to the Boolean satisfiability problem for formulas in conjunctive normal form. ...

This paper is devoted to the complexity of the Boolean satisfiability problem. We consider a version of this problem, where the Boolean formula is specified in the conjunctive normal form. We prove an unexpected result that the CNF-satisfiability problem can be solved by a deterministic Turing machine in polynomial time.

... Over the last 15 years, the merging of condensed matter physics and computational complexity theory has given rise to a new field of study known as quantum Hamiltonian complexity (Gharibian et al. 2014a; Osborne 2012). The cornerstone of this field is arguably the Kitaev et al. (2002) quantum version of the Cook-Levin theorem (Cook 1972; Levin 1973), which says that the problem of estimating the ground state energy of a local Hamiltonian is complete for the class Quantum Merlin Arthur (QMA), where QMA is a natural generalization of NP. Here, a k-local Hamiltonian is an operator H = ∑_i H_i acting on n qubits, such that each local Hermitian constraint H_i acts non-trivially on k qubits. ...

The study of ground state energies of local Hamiltonians has played a fundamental role in quantum complexity theory. In this article, we take a new direction by introducing the physically motivated notion of “ground state connectivity” of local Hamiltonians, which captures problems in areas ranging from quantum stabilizer codes to quantum memories. Roughly, “ground state connectivity” corresponds to the natural question: Given two ground states |Ψ〉 and |φ〉 of a local Hamiltonian H, is there an “energy barrier” (with respect to H) along any sequence of local operations mapping |Ψ〉 to |φ〉? We show that the complexity of this question can range from QCMA-complete to PSPACE-complete, as well as NEXP-complete for an appropriately defined “succinct” version of the problem. As a result, we obtain a natural QCMA-complete problem, a goal which has generally proven difficult since the conception of QCMA over a decade ago. Our proofs rely on a new technical tool, the Traversal Lemma, which analyzes the Hilbert space a local unitary evolution must traverse under certain conditions. We show that this lemma is essentially tight with respect to the length of the unitary evolution in question.

... For instance, given a language L, a concept class C and a concept c, the teacher should be able to compute the associated small teaching set S and the learner should identify c from it. To get a finite procedure we investigate the introduction of computational steps in the complexity function, inspired by Levin's Kt [21,22], namely: ...

We investigate the teaching of infinite concept classes through the effect of the learning bias (which is used by the learner to prefer some concepts over others and by the teacher to devise the teaching examples) and the sampling bias (which determines how the concepts are sampled from the class). We analyse two important classes: Turing machines and finite-state machines. We derive bounds for the biased teaching dimension when the learning bias is derived from a complexity measure (Kolmogorov complexity and minimal number of states respectively) and analyse the sampling distributions that lead to finite expected biased teaching dimensions. We highlight the existing trade-off between the bound and the representativeness of the sample, and its implications for the understanding of what teaching rich concepts to machines entails.

... , x n ), determine whether or not there exists an assignment of variables to Boolean values such that φ evaluates to TRUE. SAT was the first problem shown to be NP-complete [Coo71,Lev73]. ...

We describe an algorithm to solve the problem of Boolean CNF-Satisfiability when the input formula is chosen randomly. We build upon the algorithms of Schöning 1999 and Dantsin et al. in 2002. The Schöning algorithm works by trying many possible random assignments, and for each one searching systematically in the neighborhood of that assignment for a satisfying solution. Previous algorithms for this problem run in time $O(2^{n (1- \Omega(1)/k)})$. Our improvement is simple: we count how many clauses are satisfied by each randomly sampled assignment, and only search in the neighborhoods of assignments with abnormally many satisfied clauses. We show that assignments like these are significantly more likely to be near a satisfying assignment. This improvement saves a factor of $2^{n \Omega(\lg^2 k)/k}$, resulting in an overall runtime of $O(2^{n (1- \Omega(\lg^2 k)/k)})$ for random $k$-SAT.
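The sample-then-filter idea described above can be sketched in a few lines of Python. This is a hedged illustration only: `filtered_schoening`, the threshold fraction, and the try/flip budgets are illustrative choices for a toy demo, not the paper's exact parameters or analysis.

```python
import random

def schoening_walk(clauses, n, start, max_flips):
    """One Schöning-style local search: repeatedly pick an unsatisfied
    clause and flip a uniformly random literal occurring in it."""
    a = list(start)
    for _ in range(max_flips):
        unsat = [c for c in clauses
                 if not any(a[abs(l) - 1] == (l > 0) for l in c)]
        if not unsat:
            return a                      # satisfying assignment found
        lit = random.choice(random.choice(unsat))
        a[abs(lit) - 1] = not a[abs(lit) - 1]
    return None

def filtered_schoening(clauses, n, tries=200, threshold_frac=0.9):
    """Sketch of the described filter: sample random assignments, but only
    launch the local walk from those satisfying unusually many clauses."""
    m = len(clauses)
    for _ in range(tries):
        a = [random.random() < 0.5 for _ in range(n)]
        sat_count = sum(any(a[abs(l) - 1] == (l > 0) for l in c)
                        for c in clauses)
        if sat_count >= threshold_frac * m:   # "abnormally many" satisfied
            res = schoening_walk(clauses, n, a, 3 * n)
            if res is not None:
                return res
    return None
```

The filter discards cheap-to-test starting points that satisfy few clauses, spending the expensive neighborhood search only on promising samples, which is the source of the claimed savings.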

... The use of computational complexity theory to study the inherent difficulty of computational problems has proven remarkably fruitful over the last decades. For example, the theory of NP-completeness [Coo72,Lev73,Kar72] has helped classify the worst-case complexity of hundreds of computational problems which elude efficient classical algorithms. In the quantum setting, the study of a quantum analogue of NP, known as Quantum Merlin Arthur (QMA), was started in 1999 by the seminal "quantum Cook-Levin theorem" of Kitaev [KSV02], which showed that estimating the ground state energy of a given k-local Hamiltonian is QMA-complete for k ≥ 5. Since then, a number of physically motivated problems have been shown complete for QMA (see, e.g., [Boo14] and [GHLS14] for surveys), a number of which focus on estimating ground state energies of local Hamiltonians. ...

An important task in quantum physics is the estimation of local quantities for ground states of local Hamiltonians. Recently, [Ambainis, CCC 2014] defined the complexity class P^QMA[log], and motivated its study by showing that the physical task of estimating the expectation value of a local observable against the ground state of a local Hamiltonian is P^QMA[log]-complete. In this paper, we continue the study of P^QMA[log], obtaining the following results: (1) The P^QMA[log]-completeness result of [Ambainis, CCC 2014] above requires O(log(n))-local Hamiltonians and O(log(n))-local observables. Whether this could be improved to the more physically appealing O(1)-local setting was left as an open question. We resolve this question positively by showing that simulating even a single qubit measurement on ground states of 5-local Hamiltonians is P^QMA[log]-complete. (2) We formalize the complexity theoretic study of estimating two-point correlation functions against ground states, and show that this task is P^QMA[log]-complete. (3) P^QMA[log] is thought of as "slightly harder" than QMA. We give a formal justification of this intuition by exploiting the technique of hierarchical voting of [Beigel, Hemachandra, and Wechsung, SCT 1989] to show P^QMA[log] is in PP. This improves the known containment that QMA is in PP [Kitaev, Watrous, STOC 2000]. (4) A central theme of this work is the subtlety involved in the study of oracle classes in which the oracle solves a promise problem. In this vein, we identify a flaw in [Ambainis, CCC 2014] regarding a P^UQMA[log]-hardness proof for estimating spectral gaps of local Hamiltonians. By introducing a "query validation" technique, we build on [Ambainis, CCC 2014] to obtain P^UQMA[log]-hardness for estimating spectral gaps under polynomial-time Turing reductions.

... l(p) is the length of program p. In terms of the criteria shown in Section 3, the logical depth has the following properties (summarized in Tables 1 and 2). [31] and Kt complexity [21,32] have the same drawback as the logical depth, because they also focus on the computational feature and cannot capture the descriptive feature of complexity. (5) Computability: The logical depth of an object (string) ...

One of the most fundamental problems in science is to define the complexity of organized matter quantitatively, that is, organized complexity. Although many definitions have been proposed toward this aim in previous decades (e.g., logical depth, effective complexity, natural complexity, thermodynamic depth, effective measure complexity, and statistical complexity), there is no agreed-upon definition. The major issue with these definitions is that each captures only a single one of the three key features of complexity (descriptive, computational, and distributional): for example, effective complexity captures only the descriptive feature, logical depth only the computational, and statistical complexity only the distributional. In addition, some definitions are not computable, some are not rigorously specified, and each of them treats either probabilistic or deterministic forms of objects, but not both in a unified manner. This paper presents a new quantitative definition of organized complexity. In contrast to the existing definitions, this new definition simultaneously captures all three key features of complexity for the first time. In addition, the proposed definition is computable, is rigorously specified, and can treat both probabilistic and deterministic forms of objects in a unified, seamless manner. The proposed definition is based on circuits rather than Turing machines and ε-machines. We give several criteria required of organized complexity definitions and show that the proposed definition satisfies all of them.

... In other words, it asks whether the variables of a given Boolean formula can be consistently replaced by the values true or false in such a way that the formula evaluates to true. SAT was the first known NP-complete problem, as proved by Stephen Cook [2] and independently by Leonid Levin [3]. ...

In this paper it is shown that PSPACE is equal to the 4th level of the polynomial hierarchy. Many important consequences are also deduced; see arXiv:1411.0628.
The true quantified Boolean formula problem is indeed a generalisation of the Boolean Satisfiability Problem, where determining an interpretation that satisfies a given Boolean formula is replaced by deciding the existence of Boolean functions that make a given QBF a tautology. Such functions are called Skolem functions.
The essential idea is to skolemize, and then use additional formulas from the second level of the polynomial hierarchy inside the skolemized prefix to enforce that the Skolem variables indeed depend only on the universally quantified variables they are supposed to. However, some dependence is lost when the quantification is reversed. This is called the "XOR issue" in the paper, because the functional dependence can be expressed by means of an XOR formula. It is thus necessary to locate these XORs, but there is no need to locate all chains with XORs: any chain includes an XOR of only two variables. The latter can be done locally in each iteration (keep in mind the algebraic normal form (ANF)), when all arguments are specified, i.e. as a polynomial subroutine.
Relativization is circumvented via the well-known fact that PH = PSPACE iff second-order logic over finite structures gains no additional power from the addition of a transitive closure operator. The Boolean algebra involved is finite, and the exchange is possible due to the finitely many possibilities for the arguments. Hence the theorems with oracles are not applicable, since a random oracle is an arbitrary set; that is also why the polynomial hierarchy is infinite relative to a random oracle with probability 1.

... Of course, the use of computers in physics was nothing new (see Pang 2006 for a history of computers in physics). Two elements contributed to the emergence of a real "digital physics" (Fredkin 2003) in the 1980s: (1) the generalization (democratization) of personal computers in the beginning of the 1980s, and (2) the works of some scientists (Jaynes 1957; Zuse 1969; Levin 1973) that showed that physical systems can be described by computational simulations on the condition that they are compatible with principles of information theory, statistical thermodynamics and quantum mechanics. Progressively, physicists associated physical systems with computational processes founded on an information structure in which "classical matter/energy is replaced by information, while the dynamics are identified as computational processes" (Müller 2010, p. 5). ...

For the last three decades, physicists have been moving beyond the boundaries of their discipline, using their methods to study various problems usually instigated by economists. This trend, labeled ‘econophysics’, can be seen as a hybrid area of knowledge that exists between economics and physics. Econophysics did not spring from nowhere: the existing literature agrees that econophysics emerged in the 1990s, and historical studies on the field mainly deal with what happened during that decade. This article aims at investigating what happened before the 1990s by clarifying the epistemic background that might have paved the way to the emergence of econophysics. This historical exploration led me to highlight the active role played by the Santa Fe Institute in promoting interdisciplinary research on complexity in the 1980s. Precisely, by defining three research themes on economic complexity, the SFI defined a research agenda and a way of extending physics/biology to economics. This article offers a possible archaeology of econophysics, clarifying what could have contributed to the development of a particular episteme in the 1980s that eased the advent of econophysics in the 1990s.

... The following result is obtained by classical SAT being NP-complete [11,36]. ...

Dependence Logic was introduced by Jouko Väänänen in 2007. We study a propositional variant of this logic (PDL) and investigate a variety of parameterisations with respect to central decision problems. The model checking problem (MC) of PDL is NP-complete (Ebbing and Lohmann, SOFSEM 2012). The subject of this research is to identify a list of parameterisations (formula-size, formula-depth, treewidth, team-size, number of variables) under which MC becomes fixed-parameter tractable. Furthermore, we show that the number of disjunctions or the arity of dependence atoms (dep-arity) as a parameter both yield a paraNP-completeness result. Then, we consider the satisfiability problem (SAT) which classically is known to be NP-complete as well (Lohmann and Vollmer, Studia Logica 2013). There we are presenting a different picture: under team-size, or dep-arity SAT is paraNP-complete whereas under all other mentioned parameters the problem is FPT. Finally, we introduce a variant of the satisfiability problem, asking for a team of a given size, and show for this problem an almost complete picture.

How much time, space and/or hardware does an algorithm require? Such questions lead to surprising results: conceptual simplicity does not always go along with efficiency. A lot of quite natural questions remain open, e.g., the famous P \(=\) NP problem raised in 1970. The elementary model of finite automata, adequately tailored to diverse data structures, proves to be a flexible and powerful tool in the subject, whereas quantum computing opens astonishing perspectives. An elegant tool for proofs of lower bounds for time/space complexity is a totally different notion of complexity: Kolmogorov complexity, which measures information content.

This article examines the selection of a robot’s actuation and sensing hardware to minimize the cost of that design while ensuring that the robot is capable of carrying out a plan to complete a task. Its primary contribution is in the study of the hardness of reasonable formal models for that minimization problem. Specifically, for the case in which sensing hardware is held fixed, we show that this algorithmic design problem is NP-hard even for particularly simple classes of cost functions, confirming what many perhaps have suspected about this sort of design-time optimization. We also introduce a formalism, based on the notion of label maps, for the broader problem in which the design space encompasses choices for both actuation and sensing components. As a result, for several questions of interest, having both optimality and efficiency of solution is unlikely. However, we also show that, for some specific types of cost functions, the problem is either polynomial-time solvable or fixed-parameter tractable.
Note to Practitioners
—Despite the primary results being theoretical and, further, taking the form of bad news, this article still has considerable value to practitioners. Specifically, assuming that one has been employing heuristic or approximate solutions to robot design problems, this article serves as a justification for doing so. Moreover, it delineates some circumstances in which one can, in a sense, do better and achieve genuine optima with practical algorithms.

The ability to induce short descriptions of, i.e. compressing, a wide class of data is essential for any system exhibiting general intelligence. In all generality, it is proven that incremental compression – extracting features of data strings and continuing to compress the residual data variance – leads to a time complexity superior to universal search if the strings are incrementally compressible. It is further shown that such a procedure breaks up the shortest description into a set of pairwise orthogonal features in terms of algorithmic information.

We study the complexity of the validity problems of propositional dependence logic, modal dependence logic, and extended modal dependence logic. We show that the validity problem for propositional dependence logic is -complete. In addition, we establish that the corresponding problems for modal dependence logic and extended modal dependence logic coincide. We show containment in , whereas -hardness follows from the propositional case.

We study the complexity of problems solvable in deterministic polynomial time with access to an NP or Quantum Merlin-Arthur (QMA)-oracle, such as $P^{NP}$ and $P^{QMA}$, respectively. The former allows one to classify problems more finely than the Polynomial-Time Hierarchy (PH), whereas the latter characterizes physically motivated problems such as Approximate Simulation (APX-SIM) [Ambainis, CCC 2014]. In this area, a central role has been played by the classes $P^{NP[\log]}$ and $P^{QMA[\log]}$, defined identically to $P^{NP}$ and $P^{QMA}$, except that only logarithmically many oracle queries are allowed. Here, [Gottlob, FOCS 1993] showed that if the adaptive queries made by a $P^{NP}$ machine have a "query graph" which is a tree, then this computation can be simulated in $P^{NP[\log]}$. In this work, we first show that for any verification class $C\in\{NP,MA,QCMA,QMA,QMA(2),NEXP,QMA_{\exp}\}$, any $P^C$ machine with a query graph of "separator number" $s$ can be simulated using deterministic time $\exp(s\log n)$ and $s\log n$ queries to a $C$-oracle. When $s\in O(1)$ (which includes the case of $O(1)$-treewidth, and thus also of trees), this gives an upper bound of $P^{C[\log]}$, and when $s\in O(\log^k(n))$, this yields bound $QP^{C[\log^{k+1}]}$ (QP meaning quasi-polynomial time). We next show how to combine Gottlob's "admissible-weighting function" framework with the "flag-qubit" framework of [Watson, Bausch, Gharibian, 2020], obtaining a unified approach for embedding $P^C$ computations directly into APX-SIM instances in a black-box fashion. Finally, we formalize a simple no-go statement about polynomials (c.f. [Krentel, STOC 1986]): Given a multi-linear polynomial $p$ specified via an arithmetic circuit, if one can "weakly compress" $p$ so that its optimal value requires $m$ bits to represent, then $P^{NP}$ can be decided with only $m$ queries to an NP-oracle.

Circa 2006, Feder & Subi established that Barnette’s 1969 conjecture, which postulates that all cubic bipartite polyhedral graphs are Hamiltonian, is true if and only if the Hamiltonian cycle decision problem for this class of graphs is polynomial-time solvable (assuming P≠NP). Here, we bridge the truth of Barnette’s conjecture with the hardness of a related set of decision problems belonging to the Mod_kP complexity classes (not known to contain NP), where we are tasked with deciding whether an integer k fails to evenly divide the Hamiltonian cycle count of a cubic bipartite polyhedral graph. In particular, we show that Barnette’s conjecture is true if there exists a polynomial-time procedure for this decision problem when k can be any arbitrarily selected prime number. However, to illustrate the barriers to using this result to prove Barnette’s conjecture, we also show that the aforementioned decision problem is Mod_kP-complete for every odd k > 1, and more generally, that unless NP=RP, no polynomial-time algorithm can exist if k is not a power of two.

Switched Ethernet networks rely on the Spanning Tree Protocol (STP) to ensure cycle-free connectivity between nodes, by reducing the topology of the network to a spanning tree. The Multiple Spanning Tree Protocol (MSTP) allows providers to partition the traffic in the network and assign it to different virtual local area networks, each satisfying the STP. In this manner, it is possible to make more efficient use of the physical resources in the network. In this paper, we consider the traffic engineering problem of finding optimal designs of switched Ethernet networks implementing the MSTP, such that the worst-case link utilization is minimized. We show that this problem is NP-hard. We propose three mixed-integer linear programming formulations for this problem. Through a large set of computational experiments, we compare the performance of these formulations. Until now, the problem was almost exclusively solved with heuristics. Our objective here is to provide a first comparison of different models that can be used in exact methods.

We present a new quantum heuristic algorithm aimed at finding satisfying assignments for hard K-SAT instances using a continuous-time quantum walk that explicitly exploits the properties of quantum tunneling. Our algorithm uses a Hamiltonian H_A(F) which is specifically constructed to solve a K-SAT instance F. The heuristic iteratively reduces the Hamming distance between an evolving state |ψ_j⟩ and a state that represents a satisfying assignment for F. Each iteration consists of the evolution of |ψ_j⟩ (where j is the iteration number) under e^{-iH_A t}, a measurement that collapses the superposition, a check of whether the post-measurement state satisfies F and, if it does not, an update to H_A for the next iteration. The operator H_A describes a continuous-time quantum walk over a hypercube graph with potential barriers that cause an evolving state to scatter and mostly follow the shortest tunneling paths with the smallest potentials, leading to a state |s⟩ that represents a satisfying assignment for F. The potential barriers in the Hamiltonian H_A are constructed through a process that requires no prior knowledge of the satisfying assignments for the instance F. Due to the topology of H_A, each iteration is expected to reduce the Hamming distance between the post-measurement state and a state |s⟩. If the state |s⟩ has not been measured after n iterations (n being the number of logical variables in the instance F), the algorithm is restarted. Periodic measurements and quantum tunneling also offer the possibility of escaping local minima. Our numerical simulations show a success rate of 0.66 in measuring |s⟩ on the first run of the algorithm (i.e., without restarting after n iterations) on thousands of 3-SAT instances with 4, 6, and 10 variables and unique satisfying assignments.
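As a rough illustration of the iteration loop described in this abstract, the following toy simulation is a minimal sketch of my own reading, not the paper's construction: the choice of potential (barrier height proportional to the violated-clause count), the evolution time t, the barrier height, and the restart count are all illustrative assumptions, and the Hamiltonian is kept fixed across iterations rather than updated as in the paper.

```python
import numpy as np

def violated_count(x, clauses, n):
    """Number of clauses violated by the assignment encoded by integer x."""
    assign = [(x >> b) & 1 for b in range(n)]
    return sum(
        not any(assign[abs(l) - 1] == (1 if l > 0 else 0) for l in cl)
        for cl in clauses)

def walk_sat(clauses, n, t=1.5, barrier=4.0, restarts=100, seed=7):
    """Toy continuous-time quantum walk heuristic (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    N = 1 << n
    # Adjacency matrix of the n-dimensional hypercube (single bit flips).
    A = np.zeros((N, N))
    for x in range(N):
        for b in range(n):
            A[x, x ^ (1 << b)] = 1.0
    # Diagonal potential barriers on assignments violating clauses
    # (assumption: height proportional to the violated-clause count).
    H = -A + np.diag([barrier * violated_count(x, clauses, n)
                      for x in range(N)])
    # e^{-iHt} via eigendecomposition of the Hermitian H.
    w, P = np.linalg.eigh(H)
    U = (P * np.exp(-1j * t * w)) @ P.conj().T
    for _ in range(restarts):
        x = int(rng.integers(N))            # fresh random basis state
        for _ in range(n):                  # n iterations, then restart
            if violated_count(x, clauses, n) == 0:
                return [(x >> b) & 1 for b in range(n)]
            probs = np.abs(U[:, x]) ** 2    # evolve |x>, then measure
            x = int(rng.choice(N, p=probs / probs.sum()))
    return None

# 2-SAT instance over 3 variables with unique solution x1=1, x2=0, x3=1.
print(walk_sat([(1, 2), (1, -2), (-1, -2), (2, 3), (-2, 3)], 3))
```

Since solution states carry zero potential, the measured walker tends to drift toward them; the restart after n failed measurements mirrors the restart rule in the abstract.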

We present an algorithm for computing both functional dependency and unateness of combinational and sequential Boolean functions represented as logic networks. The algorithm uses SAT-based techniques from Combinational Equivalence Checking (CEC) and Automatic Test Pattern Generation (ATPG) to compute the dependency matrix of multi-output Boolean functions. Additionally, the classical dependency definitions are extended to sequential functions and a fast approximation is presented to efficiently yield a sequential dependency matrix. Extensive experiments show the applicability of the methods and the improved robustness compared to existing approaches.
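To make the dependency and unateness notions concrete, here is an exhaustive brute-force check for small combinational functions. This is a sketch of my own, not the paper's SAT-based CEC/ATPG method (which is what makes the computation scale): output y_i depends on input x_j iff some input vector exists where flipping x_j flips y_i, and unateness asks whether y_i only ever moves in one direction as x_j rises.

```python
from itertools import product

def dependency_and_unateness(f, n_in, n_out):
    """Exhaustively compare f across every pair of inputs differing in one bit."""
    depends   = [[False] * n_in for _ in range(n_out)]
    pos_unate = [[True] * n_in for _ in range(n_out)]  # y_i never falls as x_j rises
    neg_unate = [[True] * n_in for _ in range(n_out)]  # y_i never rises as x_j rises
    for bits in product((0, 1), repeat=n_in):
        for j in range(n_in):
            if bits[j] == 1:
                continue                   # visit each cube edge only once
            lo = f(bits)
            hi = f(bits[:j] + (1,) + bits[j + 1:])
            for i in range(n_out):
                if lo[i] != hi[i]:
                    depends[i][j] = True
                if lo[i] > hi[i]:          # output fell as x_j rose
                    pos_unate[i][j] = False
                if lo[i] < hi[i]:          # output rose as x_j rose
                    neg_unate[i][j] = False
    return depends, pos_unate, neg_unate

# Example: y0 = x0 AND x1, y1 = x0 OR NOT x2
f = lambda b: (b[0] & b[1], b[0] | (1 - b[2]))
dep, pos, neg = dependency_and_unateness(f, 3, 2)
print(dep)   # y0 depends on x0 and x1 only; y1 depends on x0 and x2 only
```

The SAT-based approach in the paper answers the same per-entry questions without enumerating all 2^n inputs, which is essential for the multi-output networks in the experiments.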

In this paper, we initiate a systematic study of the parameterised complexity in the field of Dependence Logics, which finds its origin in the Dependence Logic of Väänänen from 2007. We study a propositional variant of this logic (PDL) and investigate a variety of parameterisations with respect to the central decision problems. The model checking problem (MC) of PDL is NP-complete (Ebbing and Lohmann, SOFSEM 2012). The subject of this research is to identify a list of parameterisations (formula-size, formula-depth, treewidth, team-size, number of variables) under which MC becomes fixed-parameter tractable. Furthermore, we show that the number of disjunctions and the arity of dependence atoms (dep-arity) as parameters both yield paraNP-completeness results. Then, we consider the satisfiability problem (SAT), which is classically known to be NP-complete as well (Lohmann and Vollmer, Studia Logica 2013). Here we present a different picture: under team-size or dep-arity, SAT is paraNP-complete, whereas under all other mentioned parameters the problem is in FPT. Finally, we introduce a variant of the satisfiability problem, asking for teams of a given size, and show an almost complete picture for this problem.

We present a protocol that transforms any quantum multi-prover interactive proof into a nonlocal game in which questions consist of a logarithmic number of bits and answers of a constant number of bits. As a corollary, it follows that the promise problem corresponding to approximating the nonlocal value to inverse-polynomial accuracy is complete for QMIP*, and therefore NEXP-hard. This establishes that nonlocal games are provably harder than classical games, without any complexity-theoretic assumptions. Our result also indicates that gap amplification for nonlocal games may be impossible in general and provides negative evidence for the feasibility of the gap-amplification approach to the multi-prover variant of the quantum PCP conjecture.

Ker-I Ko was among the first people to recognize the importance of resource-bounded Kolmogorov complexity as a tool for better understanding the structure of complexity classes. In this brief informal reminiscence, I review the milieu of the early 1980’s that caused an up-welling of interest in resource-bounded Kolmogorov complexity, and then I discuss some more recent work that sheds additional light on the questions related to Kolmogorov complexity that Ko grappled with in the 1980’s and 1990’s.

We classify the computational complexity of the satisfiability, validity, and model-checking problems for propositional independence, inclusion, and team logic. Our main result shows that the satisfiability and validity problems for propositional team logic are complete for alternating exponential-time with polynomially many alternations.

In 2011, the TV quiz show Jeopardy featured two human champions competing against IBM’s Watson, an AI system designed for the event. Watson won the match by a large margin. Then, in 2017, another AI system, AlphaGo, beat the world’s Go champion with a creative move that was previously unknown, surprising Go experts. Popularized events such as these have made it obvious to a large audience that AI bears many implications for understanding what intelligence is; and given that those AI systems were the product of a partnership between mathematicians and computer scientists, it is also obvious that they bear specific implications for how mathematics itself is practiced in the current technological environment, called the Information Age or, equally, the Computer Age. If an AI system can be devised to come up with a truly intelligent move in the game of Go, previously unbeknownst to humans, then the question arises: can AI do creative mathematics? A positive answer does not seem to be beyond the realm of possibility.

Mathematical texts from across time indicate that the needs of the societies of different eras and different places have guided practices in mathematics itself and in how it was taught at school, responding to the needs and exigencies of each age. In some cases, even new discoveries were seen as part of a collaborative effort, rather than the product of individuals—the classic example being the Pythagoreans, who worked as a group to do mathematics (Heath 1921). A similar social attitude has emerged in the current Information Age, as evidenced by projects such as PolyMath, spearheaded by renowned mathematician Tim Gowers—a worldwide project involving mathematicians from all over the globe collaborating through the Internet to solve problems (Nielsen 2012). PolyMath started in 2009 when Gowers posted a famous problem on his blog, the density version of the Hales-Jewett theorem, asking people to help him find a proof for it. Seven weeks later, Gowers wrote that the problem had probably been solved, thanks to the many suggestions he had received. Since then, the PolyMath project has become a global collaborative project, recalling not only the ancient Pythagoreans but, in recent times, the Nicolas Bourbaki group of French mathematicians, who initially wanted to design updated textbooks for teaching contemporary mathematics in the post-World War II era under this pseudonym, rather than under the name of any one individual.

Computers and Intractability: A Guide to the Theory of NP-Completeness, by Michael R. Garey and David S. Johnson, was published 40 years ago (1979). Despite its age, many in the computational complexity community consider it the field’s most important book. NP-completeness is perhaps the single most important concept to come out of theoretical computer science. The book was written in the late 1970s, when problems solvable in polynomial time were linked to the concepts of efficient solvability and tractability, and the complexity class NP was defined to capture the concept of good characterization. Besides his contributions to the theory of NP-completeness, David S. Johnson also made important contributions to approximation algorithms and the experimental analysis of algorithms. This paper summarizes many of Johnson’s contributions to these areas and is an homage to his memory.

This paper takes the next step in developing the theory of average case complexity initiated by Leonid A. Levin. Previous works have focused on the existence of complete problems. We widen the scope to other basic questions in computational complexity. Our results include:
1. the equivalence of search and decision problems in the context of average-case complexity;
2. an initial analysis of the structure of distributional-NP (i.e., NP problems coupled with “simple distributions”) under reductions which preserve average polynomial time;
3. a proof that if all of distributional-NP is in average polynomial time then non-deterministic exponential time equals deterministic exponential time (i.e., a collapse in the worst-case hierarchy);
4. definitions and basic theorems regarding other complexity classes such as average log-space.
An exposition of the basic definitions suggested by Levin and suggestions for some alternative definitions are provided as well.

The theoretical hardness of machine teaching has usually been analyzed for a range of concept languages under several variants of the teaching dimension: the minimum number of examples that a teacher needs to provide so that the learner identifies the concept. However, for languages where concepts have structure (and hence size), such as Turing-complete languages, a low teaching dimension can be achieved at the cost of using very large examples, which are hard to process by the learner. In this paper we introduce the teaching size, a more intuitive way of assessing the theoretical feasibility of teaching concepts for structured languages. In the most general case of universal languages, we show that, by focusing on the total size of a witness set rather than its cardinality, we can teach all total functions that are computable within some fixed time bound. We complement the theoretical results with a range of experimental results on a simple Turing-complete language, showing how teaching dimension and teaching size differ in practice. Quite remarkably, we found that witness sets are usually smaller than the programs they identify, which is an illuminating justification of why machine teaching from examples makes sense at all.

The polynomial families complete for \(\textsf {VF}\), \(\textsf {VBP}\), and \(\textsf {VP}\) are model independent, i.e. they do not use a particular instance of a formula, ABP or circuit for characterising \(\textsf {VF}\), \(\textsf {VBP}\), or \(\textsf {VP}\), respectively.

Complex behaviour emerges from interactions between objects produced by different generating mechanisms. Yet to decode their causal origin(s) from observations remains one of the most fundamental challenges in science. Here we introduce a universal, unsupervised and parameter-free model-oriented approach, based on the seminal concept and the first principles of algorithmic probability, to decompose an observation into its most likely algorithmic generative models. Our approach uses a perturbation-based causal calculus to infer model representations. We demonstrate its ability to deconvolve interacting mechanisms regardless of whether the resultant objects are bit strings, space–time evolution diagrams, images or networks. Although this is mostly a conceptual contribution and an algorithmic framework, we also provide numerical evidence evaluating the ability of our methods to extract models from data produced by discrete dynamical systems such as cellular automata and complex networks. We think that these separating techniques can contribute to tackling the challenge of causation, thus complementing statistically oriented approaches.
