Chapter

Computational Complexity

... (cf. [52,62]) The total time complexity of an algorithm is defined as the sum of the time required to execute each step of the algorithm, expressed as a function of the input size. If an algorithm involves multiple steps or operations, the total time complexity is determined by the time required for the most time-consuming operation. ...
... (cf. [52,62]) The Space Complexity of an algorithm is the total amount of memory required to execute the algorithm, expressed as a function of the input size. This includes: ...
... (cf. [52,62]) Big-O notation is a mathematical concept used to describe the upper bound of the time or space complexity of an algorithm. Let f(n) and g(n) be functions that map non-negative integers to non-negative real numbers. ...
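For reference, the standard Big-O definition that these snippets cite can be written out in full as follows (textbook form, not a quotation from the chapter):

\[ f(n) = O(g(n)) \iff \exists\, c > 0 \;\exists\, n_0 \in \mathbb{N} \;\forall\, n \ge n_0 : \; f(n) \le c \cdot g(n). \]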
Chapter
Full-text available
A Plithogenic Graph is a mathematical framework that models multi-valued attributes in graphs by incorporating membership and contradiction functions, enabling a nuanced representation of complex relationships. This short paper develops algorithms for Plithogenic Graphs and Intuitionistic Plithogenic Graphs, and provides an in-depth analysis of their complexity and validity.
... The new approach we propose in order to prove the decidability of IDP is based on an algorithm to solve a restricted case of higher-order associative-commutative matching (AC-matching). To design this algorithm we use well-known results for solving systems of linear Diophantine equations (SLDE) [12,15,22,27], which we combine with a polynomial algorithm to solve the DO-ACM problem (Distinct Occurrences of AC-Matching) [8]. ...
... Our algorithm relies on solving a restricted case of the higher-order AC-matching problem that is used to decide the deduction relation. It is a combination of two standard algorithms: one for solving the DO-ACM problem [8], which has a polynomial bound in our case; and one for solving systems of Linear Diophantine Equations (SLDE), which is polynomial in Z [12,15,22,27]. Using this algorithm we prove that IDP is decidable in polynomial time with respect to the saturated set of terms, for locally stable theories with inverses. ...
... This AC-matching problem is solved using the DO-ACM (Distinct-Occurrences of AC-matching) [8], where every variable in the term being matched occurs only once. In addition, we also use a standard and polynomial time algorithm for solving SLDE over Z [12,15,22,27]. ...
Preprint
Full-text available
We present an algorithm to decide the intruder deduction problem (IDP) for a class of locally stable theories enriched with normal forms. Our result relies on a new and efficient algorithm to solve a restricted case of higher-order associative-commutative matching, obtained by combining the Distinct Occurrences of AC-matching algorithm and a standard algorithm to solve systems of linear Diophantine equations. A translation between natural deduction and sequent calculus allows us to use the same approach to decide the elementary deduction problem for locally stable theories. As an application, we model the theory of blind signatures and derive an algorithm to decide IDP in this context, extending previous decidability results.
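For orientation, the "standard algorithm to solve systems of linear Diophantine equations" invoked above reduces, in the single-equation case over Z, to the extended Euclidean algorithm. The Python sketch below illustrates only that standard building block; it is not the authors' AC-matching or IDP procedure, and the function names are ours.

# Sketch of the standard SLDE building block: solve a*x + b*y = c over the
# integers using the extended Euclidean algorithm (polynomial time).
# Illustrative only; not the paper's decision procedure.
def extended_gcd(a, b):
    # Returns (g, x, y) with g = gcd(a, b) and a*x + b*y = g.
    if b == 0:
        return (abs(a), 1 if a >= 0 else -1, 0)
    g, x, y = extended_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def solve_linear_diophantine(a, b, c):
    # Returns one integer solution (x, y) of a*x + b*y = c, or None if none exists.
    g, x, y = extended_gcd(a, b)
    if c % g != 0:
        return None
    return (x * (c // g), y * (c // g))

print(solve_linear_diophantine(6, 10, 8))  # (8, -4), since 6*8 + 10*(-4) = 8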
... The time and space complexities of the algorithm are often subjects of analysis. Computational complexity evaluates the resources, such as time and space, required by an algorithm to solve a problem as a function of input size, offering theoretical efficiency bounds [1,10,61,85]. ...
... (cf. [85,99]) The Total Time Complexity of an algorithm is defined as the sum of the time required to execute each step of the algorithm, expressed as a function of the input size. If an algorithm involves multiple steps or operations, the total time complexity is determined by the time required for the most time-consuming operation. ...
... (cf. [85,99]) The Space Complexity of an algorithm is the total amount of memory required to execute the algorithm, expressed as a function of the input size. This includes: ...
Chapter
Full-text available
Hypergraphs extend traditional graphs by allowing edges (known as hyperedges) to connect more than two vertices, rather than just pairs. This paper explores fundamental problems and algorithms in the context of SuperHypergraphs, an advanced extension of hypergraphs enabling modeling of hierarchical and complex relationships. Topics covered include constructing SuperHyperGraphs, recognizing SuperHyperTrees, and computing SuperHyperTree-width. We address a range of optimization problems, such as the SuperHypergraph Partition Problem, Reachability, Minimum Spanning SuperHypertree, and Single-Source Shortest Path. Furthermore, adaptations of classical problems like the Traveling Salesman Problem, Chinese Postman Problem, and Longest Simple Path Problem are presented in the SuperHypergraph framework.
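Purely as an illustration of the Single-Source Shortest Path task listed above, the Python sketch below adapts Dijkstra's algorithm under the simplifying assumption that traversing a hyperedge of weight w moves between any two of its vertices at cost w. This toy model and its names are ours, not the chapter's SuperHypergraph construction.

import heapq

def hyper_shortest_paths(vertices, hyperedges, source):
    # hyperedges: list of (weight, set_of_vertices); assumption: any two vertices
    # of a hyperedge are mutually reachable at that hyperedge's weight.
    dist = {v: float("inf") for v in vertices}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale queue entry
        for w, members in hyperedges:
            if u in members:
                for v in members:
                    if d + w < dist[v]:
                        dist[v] = d + w
                        heapq.heappush(heap, (d + w, v))
    return dist

print(hyper_shortest_paths({1, 2, 3, 4}, [(1, {1, 2, 3}), (2, {3, 4})], 1))
# {1: 0, 2: 1, 3: 1, 4: 3}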
... (cf. [75,85]) The Space Complexity of an algorithm is the total amount of memory required to execute the algorithm, expressed as a function of the input size. This includes: ...
... (cf. [75,85]) Big-O notation is a mathematical concept used to describe the upper bound of the time or space complexity of an algorithm. Let f(n) and g(n) be functions that map non-negative integers to non-negative real numbers. ...
... Definition 2.45. (cf. [75,85]) The Total Time Complexity of an algorithm is defined as the sum of the time required to execute each step of the algorithm, expressed as a function of the input size. ...
Preprint
Full-text available
In the study of uncertainty, concepts such as fuzzy sets [113], fuzzy graphs [79], and neutrosophic sets [88] have been extensively investigated. This paper focuses on a novel logical framework known as Upside-Down Logic, which systematically transforms truths into falsehoods and vice versa by altering contexts, meanings, or perspectives. The concept was first introduced by F. Smarandache in [99]. To contribute to the growing interest in this area, this paper presents a mathematical definition of Upside-Down Logic, supported by illustrative examples, including applications related to the Japanese language. Additionally, it introduces and explores Contextual Upside-Down Logic, an advanced extension that incorporates a contextual transformation function, enabling the adjustment of logical connectives in conjunction with flipping truth values based on contextual shifts. Furthermore, the paper introduces Indeterm-Upside-Down Logic and Certain Upside-Down Logic, both of which expand Upside-Down Logic to better accommodate indeterminate values. Finally, a simple algorithm leveraging Upside-Down Logic is proposed and analyzed, providing insights into its computational characteristics and potential applications.
... (cf. [52,62]) The total time complexity of an algorithm is defined as the sum of the time required to execute each step of the algorithm, expressed as a function of the input size. If an algorithm involves multiple steps or operations, the total time complexity is determined by the time required for the most time-consuming operation. ...
... (cf. [52,62]) The Space Complexity of an algorithm is the total amount of memory required to execute the algorithm, expressed as a function of the input size. This includes: ...
... (cf. [52,62]) Big-O notation is a mathematical concept used to describe the upper bound of the time or space complexity of an algorithm. Let f(n) and g(n) be functions that map non-negative integers to non-negative real numbers. ...
Preprint
Full-text available
A Plithogenic Graph is a mathematical framework that models multi-valued attributes in graphs by incorporating membership and contradiction functions, enabling a nuanced representation of complex relationships. This short paper develops algorithms for Plithogenic Graphs and Intuitionistic Plithogenic Graphs, and provides an in-depth analysis of their complexity and validity.
... Papadimitriou in [18], part II, Logic, Chapter 5, Corollary 6, page 111, and it should be ...
... In most books on computational complexity, the halting problem and the acceptance problem of Turing machines are stated, and their undecidability is proved with a diagonal method over Gödel words <M> of Turing machines; see for example C. Papadimitriou [18]. The decider machine D must have a higher logical order than the ones involved in its definition, that is, the machines M in the <M>. But if one of the M = D, then D must have a higher logical order than itself, which is a contradiction. ...
... The proof of the Time Hierarchy theorem is outlined below, based on the proof as it appears in the book by C. Papadimitriou [18], paragraph 7.2, page 143, and, in a slightly different form, in the book by M. Sipser [39], Theorem 9.10, page 369. ...
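For reference, the theorem referred to in these passages, in the form given by Sipser (Theorem 9.10), states that for every time-constructible function t there is a language decidable in time O(t(n)) that is not decidable in time o(t(n)/log t(n)); in particular,

\[ f(n)\,\log f(n) = o(g(n)) \;\Longrightarrow\; \mathrm{TIME}(f(n)) \subsetneq \mathrm{TIME}(g(n)) \]

for time-constructible g.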
Presentation
Full-text available
In this analysis we present the method by which B. Russell vanquishes both the syntactical and semantical antinomies in the logical system of Principia Mathematica, through his ramified logical orders and the restriction to predicative predicates (axiom of reducibility). We apply it to the diagonal argument of the Richard antinomy (concerning the validity of propositions), which is the twin of Gödel's diagonal arguments (concerning the provability of propositions), and we discuss the consequences of this for the work of Gödel. We also present the consequences of Russell's logical standards for the diagonal-method proofs of the time hierarchy theorem and of the acceptance problem of Turing machines, as well as the impact of Russell's logical standards on the solution of the P vs NP millennium problem. Next we prove the non-provability of P != NP within any finite-order, countable symbolic logic of set theory. We nevertheless present a proof, acceptable by the logical standards of B. Russell, that P != NP within the meta-mathematical approach of descriptive complexity, which is equivalent to a symbolic logic of transfinite logical order ω+n. We discuss both a modification of the Church thesis so as to allow meta-machines and a restriction of the Church thesis to practical algorithms that do not involve the infinite at all in the set of inputs or in run time and space (bounded run time and space).
... [103], [104], but also C. Papadimitriou [18], part II, chapters 4, 5). Each such logic, e.g. ...
... The sentence is the following (see e.g. [18], part II, Chapter 5, Example 5.11) ...
Presentation
Full-text available
This originated with a long lecture, available here: The millennium problem "Polynomial complexity versus non-deterministic polynomial complexity". What is the state of the art today? Ill-posed aspects of the problem? Example of a reasonable solution. Perspectives in the theory of computational complexity. https://www.researchgate.net/publication/376170410_The_millennium_problem_Polynomial_complexity_versus_non-deterministic_polynomial_complexity_What_is_the_state_of_the_art_today_Ill_posed_aspects_of_the_problem_Example_of_a_reasonable_solution_Perspec
... Observe that ε can be arbitrarily small (i.e., arbitrary precision). For the lower bound we reduce from TSP Cost, which is FP^NP-hard [23]. Given a TSP Cost instance (G, c), where G = (V, E) is a graph and c : E → Z is a cost function, we construct a game G such that ε-worstNE(G) corresponds to the value of the optimum tour. ...
... Proof. To show that the strong ε-improvement problem is NP-hard, we reduce from the Hamiltonian Path problem: given a directed graph G = (V, E), is there a path that visits each vertex exactly once? This problem is NP-hard [23]. We build a game G and fix β, ∆ and ε such that the strong ε-improvement problem returns yes if and only if Hamiltonian Path returns yes. ...
Preprint
Full-text available
Mechanism design is a well-established game-theoretic paradigm for designing games to achieve desired outcomes. This paper addresses a closely related but distinct concept, equilibrium design. Unlike mechanism design, the designer's authority in equilibrium design is more constrained; she can only modify the incentive structures in a given game to achieve certain outcomes without the ability to create the game from scratch. We study the problem of equilibrium design using dynamic incentive structures, known as reward machines. We use weighted concurrent game structures for the game model, with goals (for the players and the designer) defined as mean-payoff objectives. We show how reward machines can be used to represent dynamic incentives that allocate rewards in a manner that optimises the designer's goal. We also introduce the main decision problem within our framework, the payoff improvement problem. This problem essentially asks whether there exists a dynamic incentive (represented by some reward machine) that can improve the designer's payoff by more than a given threshold value. We present two variants of the problem: strong and weak. We demonstrate that both can be solved in polynomial time using a Turing machine equipped with an NP oracle. Furthermore, we also establish that these variants are either NP-hard or coNP-hard. Finally, we show how to synthesise the corresponding reward machine if it exists.
... In the absence of gravitational effects, computational resources scale as follows [86]: ...
... Ignore y ▷ y is the "certificate" in the NP model. This verifier simulates the deterministic Turing machine M, effectively ignoring the non-deterministic aspect of NP computation. The key insight is that if a problem can be solved deterministically in polynomial time, it can certainly be verified nondeterministically in polynomial time, a principle that holds in both classical and gravitational settings [86]. ...
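The construction described in this snippet is the textbook argument that any deterministic polynomial-time decider yields a polynomial-time verifier that simply ignores its certificate. A minimal sketch, with a toy stand-in for the machine M (our illustration, not the paper's formalism):

def decide_M(x):
    # Toy deterministic polynomial-time decider standing in for M:
    # accept strings with an even number of '1' characters.
    return x.count("1") % 2 == 0

def verifier(x, certificate):
    # Ignores the certificate entirely and just simulates M on x,
    # so membership is verified in deterministic polynomial time.
    return decide_M(x)

print(verifier("1010", "any certificate"))  # True: "1010" has two 1's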
Preprint
Full-text available
This work explores the interplay between gravity, information processing, and computational complexity across the universe. We present a framework integrating gravitational effects into computational complexity theory, revealing implications for computation in curved spacetime. By synthesizing principles from general relativity, quantum mechanics, and complexity theory, we introduce gravitationally modified complexity classes and derive constraints imposed by spacetime curvature on computational capacity. Our analysis uncovers phenomena such as gravitationally induced phase transitions in problem complexity and potential observer-dependent resolutions of issues like the P vs NP problem in extreme gravitational environments. We demonstrate that maximum computational capacity is linked to a system's energy content and local gravitational field, with quantum gravitational effects significant at smaller scales. The framework predicts novel phenomena, including gravitationally induced decoherence and potential enhancements to quantum computation in specific gravitational regimes. We explore cosmological implications, examining how the universe's expansion affects computational capacity and connecting the arrow of time to the growth of computational complexity. Grounded in rigorous theoretical and mathematical foundations, we propose experimental setups to test our predictions, ranging from Earth-based atomic clock experiments to satellite-based quantum computing tests and astronomical observations. This research suggests the universe may be inherently computational, with gravity shaping the informational landscape of reality. Our findings offer new perspectives on problems like the black hole information paradox and open technological possibilities, including gravity-assisted quantum algorithms and holographic quantum computation. By demonstrating how gravity shapes computation, this work provides a unified view of information processing in the universe and paves the way for a deeper understanding of the connections between spacetime structure, quantum mechanics, and computational complexity.
... • Themes: Graph Classes Hierarchy, Mathematical Structure of Graph Classes, Graph Parameters [73,76,118], Algorithms [42], Computational Complexity [15,104], Real-world Applications, Combinatorics [41,69,119]. ...
Chapter
Full-text available
A graph class consists of graphs that share common structural properties, defined by specific rules or constraints. This study focuses on Even-Hole-Free and Meyniel Graphs, analyzed within the frameworks of Fuzzy, Neutrosophic, Turiyam Neutrosophic, and Plithogenic Graphs. Even-Hole-Free Graphs lack induced cycles with an even number of vertices, ensuring that longer induced cycles are odd. Meyniel Graphs require every odd cycle with at least 5 vertices to have at least two chords, enhancing connectivity.
... Since the early days of computational complexity, there has been an extensive study of the approximation properties of optimization problems whose underlying decision problem is NP-complete. This study has shown that such optimization problems may have very different approximation properties, ranging from polynomial-time approximation schemes (e.g., Knapsack) to constant-approximation algorithms (e.g., Min Vertex Cover) to logarithmic-approximation algorithms (e.g., Min Set Cover), or even worse (e.g., Max Clique); see [14,29]. ...
Preprint
We embark on a study of the consistent answers of queries over databases annotated with values from a naturally ordered positive semiring. In this setting, the consistent answers of a query are defined as the minimum of the semiring values that the query takes over all repairs of an inconsistent database. The main focus is on self-join free conjunctive queries and key constraints, which is the most extensively studied case of consistent query answering over standard databases. We introduce a variant of first-order logic with a limited form of negation, define suitable semiring semantics, and then establish the main result of the paper: the consistent query answers of a self-join free conjunctive query under key constraints are rewritable in this logic if and only if the attack graph of the query contains no cycles. This result generalizes an analogous result of Koutris and Wijsen for ordinary databases, but also yields new results for a multitude of semirings, including the bag semiring, the tropical semiring, and the fuzzy semiring. We also show that there are self-join free conjunctive queries with a cyclic attack graph whose certain answers under bag semantics have no polynomial-time constant-approximation algorithm, unless P = NP.
... As an algorithmic problem, network alignment is clearly more challenging than sequence alignment, which can be solved by dynamic programming [16,17]. Already simpler problems such as matching two graphs by determining the largest common subgraph are NP-hard [18], which implies there is probably no polynomial-time algorithm. We have developed an efficient heuristic, by which network alignment is mapped onto a generalized quadratic assignment problem, which in turn can be solved by iteration of a linear problem [19]. ...
Preprint
Complex interactions between genes or proteins contribute a substantial part to phenotypic evolution. Here we develop an evolutionarily grounded method for the cross-species analysis of interaction networks by alignment, which maps bona fide functional relationships between genes in different organisms. Network alignment is based on a scoring function measuring mutual similarities between networks, taking into account their interaction patterns as well as sequence similarities between their nodes. High-scoring alignments and optimal alignment parameters are inferred by a systematic Bayesian analysis. We apply this method to analyze the evolution of co-expression networks between human and mouse. We find evidence for significant conservation of gene expression clusters and give network-based predictions of gene function. We discuss examples where cross-species functional relationships between genes do not concur with sequence similarity.
... In the same paper, the authors showed that the decision problem for a POMDP is PSPACE-complete and thus probably does not admit a polynomial-time algorithm. We prove that for all m ≥ 2, DEC-POMDP_m is NEXP-complete, and for all m ≥ 3, DEC-MDP_m is NEXP-complete, where NEXP = NTIME(2^(n^c)) (Papadimitriou, 1994). Since P ...
Preprint
Planning for distributed agents with partial state information is considered from a decision- theoretic perspective. We describe generalizations of both the MDP and POMDP models that allow for decentralized control. For even a small number of agents, the finite-horizon problems corresponding to both of our models are complete for nondeterministic exponential time. These complexity results illustrate a fundamental difference between centralized and decentralized control of Markov processes. In contrast to the MDP and POMDP problems, the problems we consider provably do not admit polynomial-time algorithms and most likely require doubly exponential time to solve in the worst case. We have thus provided mathematical evidence corresponding to the intuition that decentralized planning problems cannot easily be reduced to centralized problems and solved exactly using established techniques.
... , where QMA is the quantum analog of the NP complexity class in classical computing [Pap03]. There are several approximation algorithms for this problem, with the best approximation ratio for generic instances being 0.562 [Lee22]. ...
Article
Full-text available
In order to characterize and benchmark computational hardware, software, and algorithms, it is essential to have many problem instances on-hand. This is no less true for quantum computation, where a large collection of real-world problem instances would allow for benchmarking studies that in turn help to improve both algorithms and hardware designs. To this end, here we present a large dataset of qubit-based quantum Hamiltonians. The dataset, called HamLib (for Hamiltonian Library), is freely available online and contains problem sizes ranging from 2 to 1000 qubits. HamLib includes problem instances of the Heisenberg model, Fermi-Hubbard model, Bose-Hubbard model, molecular electronic structure, molecular vibrational structure, MaxCut, Max-k-SAT, Max-k-Cut, QMaxCut, and the traveling salesperson problem. The goals of this effort are (a) to save researchers time by eliminating the need to prepare problem instances and map them to qubit representations, (b) to allow for more thorough tests of new algorithms and hardware, and (c) to allow for reproducibility and standardization across research studies.
... We assume the reader to be familiar with the basic concepts of complexity theory [Pap94,BDG95]. Throughout the paper all logarithms are base 2. The following reduction types will be used in this paper. ...
Preprint
No P-immune set having exponential gaps is positive-Turing self-reducible.
... The point is that without an NP-completeness proof we would be trying the same thing without knowing it! " [40]. In the same spirit, these new conditional hardness results have cleared the polynomial landscape by showing that there really are not that many hard problems (for the recent background, see [48]). ...
Preprint
In recent years, significant progress has been made in explaining the apparent hardness of improving upon the naive solutions for many fundamental polynomially solvable problems. This progress has come in the form of conditional lower bounds -- reductions from a problem assumed to be hard. The hard problems include 3SUM, All-Pairs Shortest Path, SAT, Orthogonal Vectors, and others. In the (min,+)-convolution problem, the goal is to compute a sequence (c[i])_{i=0}^{n-1}, where c[k] = min_{i=0,...,k} {a[i] + b[k-i]}, given sequences (a[i])_{i=0}^{n-1} and (b[i])_{i=0}^{n-1}. This can easily be done in O(n^2) time, but no O(n^{2-ε}) algorithm is known for ε > 0. In this paper, we undertake a systematic study of the (min,+)-convolution problem as a hardness assumption. First, we establish the equivalence of this problem to a group of other problems, including variants of the classic knapsack problem and problems related to subadditive sequences. The (min,+)-convolution problem has been used as a building block in algorithms for many problems, notably problems in stringology. It has also appeared as an ad hoc hardness assumption. Second, we investigate some of these connections and provide new reductions and other results. We also explain why replacing this assumption with the SETH might not be possible for some problems.
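The definition above translates directly into the naive quadratic-time algorithm mentioned in the abstract; a short illustrative sketch:

def min_plus_convolution(a, b):
    # Naive O(n^2) (min,+)-convolution: c[k] = min over i of a[i] + b[k-i].
    n = len(a)
    c = [float("inf")] * n
    for k in range(n):
        for i in range(k + 1):
            c[k] = min(c[k], a[i] + b[k - i])
    return c

print(min_plus_convolution([0, 2, 5], [0, 1, 1]))  # [0, 1, 1]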
... Not only is an initial starting state (corresponding to one or more feasible solutions) hard to find, designing the mixing operator is also problematic. Even given a set of feasible solutions to an NP-complete problem, it is typically computationally difficult to find another [42], making it difficult to design a mixer that fully explores the feasible subspace. The situation here is somewhat subtle, with it being easy to show in the case of SAT that finding a second solution when given a first remains NP-complete, but for a Hamiltonian cycle on cubic graphs, given a first solution, a second is easy to find (but not a third). ...
Preprint
The next few years will be exciting as prototype universal quantum processors emerge, enabling implementation of a wider variety of algorithms. Of particular interest are quantum heuristics, which require experimentation on quantum hardware for their evaluation, and which have the potential to significantly expand the breadth of quantum computing applications. A leading candidate is Farhi et al.'s Quantum Approximate Optimization Algorithm, which alternates between applying a cost-function-based Hamiltonian and a mixing Hamiltonian. Here, we extend this framework to allow alternation between more general families of operators. The essence of this extension, the Quantum Alternating Operator Ansatz, is the consideration of general parametrized families of unitaries rather than only those corresponding to the time-evolution under a fixed local Hamiltonian for a time specified by the parameter. This ansatz supports the representation of a larger, and potentially more useful, set of states than the original formulation, with potential long-term impact on a broad array of application areas. For cases that call for mixing only within a desired subspace, refocusing on unitaries rather than Hamiltonians enables more efficiently implementable mixers than was possible in the original framework. Such mixers are particularly useful for optimization problems with hard constraints that must always be satisfied, defining a feasible subspace, and soft constraints whose violation we wish to minimize. More efficient implementation enables earlier experimental exploration of an alternating operator approach to a wide variety of approximate optimization, exact optimization, and sampling problems. Here, we introduce the Quantum Alternating Operator Ansatz, lay out design criteria for mixing operators, detail mappings for eight problems, and provide brief descriptions of mappings for diverse problems.
... For any set L ⊆ Σ*, the complement of L is defined as Σ* \ L, and the characteristic function of L is denoted by χ_L, i.e., χ_L(x) = 1 if x ∈ L, and χ_L(x) = 0 if x ∉ L. For the definition of relativized complexity classes and of oracle Turing machines, we refer to any standard text book on computational complexity (see, e.g., [Pap94,BDG88,HU79]). For any oracle Turing machine M and any oracle A, we denote the language of M^A by L(M^A), and we simply write L(M) if A = ∅. ...
Preprint
Ko [RAIRO 24, 1990] and Bruschi [TCS 102, 1992] showed that in some relativized world, PSPACE (in fact, ParityP) contains a set that is immune to the polynomial hierarchy (PH). In this paper, we study and settle the question of (relativized) separations with immunity for PH and the counting classes PP, C_{=}P, and ParityP in all possible pairwise combinations. Our main result is that there is an oracle A relative to which C_{=}P contains a set that is immune to BPP^{ParityP}. In particular, this C_{=}P^A set is immune to PH^A and ParityP^A. Strengthening results of Torán [J. ACM 38, 1991] and Green [IPL 37, 1991], we also show that, in suitable relativizations, NP contains a C_{=}P-immune set, and ParityP contains a PP^{PH}-immune set. This implies the existence of a C_{=}P^B-simple set for some oracle B, which extends results of Balcázar et al. [SIAM J. Comp. 14, 1985; RAIRO 22, 1988] and provides the first example of a simple set in a class not known to be contained in PH. Our proof technique requires a circuit lower bound for "exact counting" that is derived from Razborov's [Mat. Zametki 41, 1987] lower bound for majority.
... We assume that the reader is familiar with standard complexity-theoretic notions and notation. For more background, we refer to any standard textbook on computational complexity theory such as Papadimitriou's book [Pap94]. All completeness results in this paper are with respect to the polynomial-time many-one reducibility, denoted by ≤^p_m. To define the boolean hierarchy over NP, we use the symbols ∧ and ∨, respectively, to denote the complex intersection and the complex union of set classes. ...
Preprint
We prove that the exact versions of the domatic number problem are complete for the levels of the boolean hierarchy over NP. The domatic number problem, which arises in the area of computer networks, is the problem of partitioning a given graph into a maximum number of disjoint dominating sets. This number is called the domatic number of the graph. We prove that the problem of determining whether or not the domatic number of a given graph is exactly one of k given values is complete for the 2k-th level of the boolean hierarchy over NP. In particular, for k = 1, it is DP-complete to determine whether or not the domatic number of a given graph equals exactly a given integer. Note that DP is the second level of the boolean hierarchy over NP. We obtain similar results for the exact versions of generalized dominating set problems and of the conveyor flow shop problem. Our reductions apply Wagner's conditions sufficient to prove hardness for the levels of the boolean hierarchy over NP.
... For K ≥ 1 define: E^K TIME = ∪_{a∈N} TIME(exp_2^K(an)). Observe in particular that E^1 TIME = ∪_{a∈N} TIME(exp_2^1(an)) = ∪_{a∈N} TIME(2^{an}) = E (where E is the usual complexity class of this name, see e.g., [20, Ch. 20]). ...
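Here exp_2^K denotes the iterated exponential; under the usual convention (stated here for orientation and consistent with the E^1 TIME identity above, though the preprint's exact indexing should be checked there):

\[ \exp_2^1(n) = 2^n, \qquad \exp_2^{K+1}(n) = 2^{\exp_2^K(n)}, \qquad \mathrm{E}^K\mathrm{TIME} = \bigcup_{a \in \mathbb{N}} \mathrm{TIME}\big(\exp_2^K(an)\big). \]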
Preprint
Constructor rewriting systems are said to be cons-free if, roughly, constructor terms in the right-hand sides of rules are subterms of the left-hand sides; the computational intuition is that rules cannot build new data structures. In programming language research, cons-free languages have been used to characterize hierarchies of computational complexity classes; in term rewriting, cons-free first-order TRSs have been used to characterize the class PTIME. We investigate cons-free higher-order term rewriting systems, the complexity classes they characterize, and how these depend on the type order of the systems. We prove that, for every K ≥ 1, left-linear cons-free systems with type order K characterize E^K TIME if unrestricted evaluation is used (i.e., the system does not have a fixed reduction strategy). The main difference with prior work in implicit complexity is that (i) our results hold for non-orthogonal term rewriting systems with no assumptions on reduction strategy, (ii) we consequently obtain much larger classes for each type order (E^K TIME versus EXP^{K-1} TIME), and (iii) results for cons-free term rewriting systems have previously only been obtained for K = 1, and with additional syntactic restrictions besides cons-freeness and left-linearity. Our results are among the first implicit characterizations of the hierarchy E = E^1 TIME ⊊ E^2 TIME ⊊ ... Our work confirms prior results that having full non-determinism (via overlapping rules) does not directly allow for characterization of non-deterministic complexity classes like NE. We also show that non-determinism makes the classes characterized highly sensitive to minor syntactic changes like admitting product types or non-left-linear rules.
... We may ask whether there is a natural way to restrict the POTM model to recover feasibility. One way to do this is to introduce preservation of polynomial-time functions as an extrinsic or a semantic restriction [25]. This is the approach taken by Cook with his notion of intuitively feasible functionals [9]. ...
Preprint
This paper provides an alternate characterization of type-two polynomial-time computability, with the goal of making second-order complexity theory more approachable. We rely on the usual oracle machines to model programs with subroutine calls. In contrast to previous results, the use of higher-order objects as running times is avoided, either explicitly or implicitly. Instead, regular polynomials are used. This is achieved by refining the notion of oracle-polynomial-time introduced by Cook. We impose a further restriction on the oracle interactions to force feasibility. Both the restriction as well as its purpose are very simple: it is well-known that Cook's model allows polynomial depth iteration of functional inputs with no restrictions on size, and thus does not guarantee that polynomial-time computability is preserved. To mend this we restrict the number of lookahead revisions, that is the number of times a query can be asked that is bigger than any of the previous queries. We prove that this leads to a class of feasible functionals and that all feasible problems can be solved within this class if one is allowed to separate a task into efficiently solvable subtasks. Formally put: the closure of our class under lambda-abstraction and application includes all feasible operations. We also revisit the very similar class of strongly polynomial-time computable operators previously introduced by Kawamura and Steinberg. We prove it to be strictly included in our class and, somewhat surprisingly, to have the same closure property. This can be attributed to properties of the limited recursion operator: It is not strongly polynomial-time computable but decomposes into two such operations and lies in our class.
... Two problems are equivalent if a solution for one can be readily converted to a solution for the other (as defined in Boyd & Vandenberghe, 2004, §4.1.3); readers from computer science will recognize that our definitions are in the same spirit as the more formal ones from their field (see for example §10.3 of Papadimitriou, 1994, and §2.2 of Arora & Barak, 2009). Every canonicalization is a reduction. ...
Preprint
We describe a modular rewriting system for translating optimization problems written in a domain-specific language to forms compatible with low-level solver interfaces. Translation is facilitated by reductions, which accept a category of problems and transform instances of that category to equivalent instances of another category. Our system proceeds in two key phases: analysis, in which we attempt to find a suitable solver for a supplied problem, and canonicalization, in which we rewrite the problem in the selected solver's standard form. We implement the described system in version 1.0 of CVXPY, a domain-specific language for mathematical and especially convex optimization. By treating reductions as first-class objects, our method makes it easy to match problems to solvers well-suited for them and to support solvers with a wide variety of standard forms.
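For readers who have not used the library, a minimal CVXPY problem of the kind this rewriting system analyzes and canonicalizes looks like the following (a generic example of ours, not taken from the paper):

import cvxpy as cp

# Small convex problem; the reduction/canonicalization machinery described
# above runs behind prob.solve() to match the problem to a solver.
x = cp.Variable(2)
objective = cp.Minimize(cp.sum_squares(x - [1.0, 2.0]))
constraints = [x >= 0, cp.sum(x) <= 2]
prob = cp.Problem(objective, constraints)
prob.solve()

print(prob.status, x.value)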
... Combinatorial optimization is one of the foundational problems of computer science. Though in general such problems are NP-hard (Papadimitriou 2003), it is often the case that locally optimal solutions can be useful in practice. In clustering for example, a common objective is to divide a given set of examples into a fixed number of groups so as to minimize the distances between group members. ...
Preprint
We study the task of finding good local optima in combinatorial optimization problems. Although combinatorial optimization is NP-hard in general, locally optimal solutions are frequently used in practice. Local search methods however typically converge to a limited set of optima that depend on their initialization. Sampling methods on the other hand can access any valid solution, and thus can be used either directly or alongside methods of the former type as a way for finding good local optima. Since the effectiveness of this strategy depends on the sampling distribution, we derive a robust learning algorithm that adapts sampling distributions towards good local optima of arbitrary objective functions. As a first use case, we empirically study the efficiency in which sampling methods can recover locally maximal cliques in undirected graphs. Not only do we show how our adaptive sampler outperforms related methods, we also show how it can even approach the performance of established clique algorithms. As a second use case, we consider how greedy algorithms can be combined with our adaptive sampler, and we demonstrate how this leads to superior performance in k-medoid clustering. Together, these findings suggest that our adaptive sampler can provide an effective strategy to combinatorial optimization problems that arise in practice.
... In the rich classical history of the theory of computation, models of computation were typically compared to the Turing machine concept, which allows us to characterize their computational power in great detail [1,2]. If, however, one would like to ascribe "computational" capacity to processes and systems observed in nature, one is naturally pushed toward using dynamical systems notions as the natural framework, leaving open the problem of how to fit this approach into, or how to link it with, Turing computation. A paradigmatic class of systems that comprise in a generic manner both computational and dynamical system aspects are the cellular automata (CA). ...
Preprint
Cellular automata are both computational and dynamical systems. We give a complete classification of the dynamic behaviour of elementary cellular automata (ECA) in terms of fundamental dynamic system notions such as sensitivity and chaoticity. The "complex" ECA emerge to be sensitive, but not chaotic and not eventually weakly periodic. Based on this classification, we conjecture that elementary cellular automata capable of carrying out complex computations, such as needed for Turing-universality, are at the "edge of chaos".
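For concreteness, one synchronous update step of an elementary cellular automaton takes only a few lines; rule 110 is shown as a representative rule (our own illustration, not code from the preprint):

def eca_step(cells, rule=110):
    # One update of an ECA with the given Wolfram rule number (0-255),
    # using periodic boundary conditions. Each cell is 0 or 1.
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

state = [0, 0, 0, 1, 0, 0, 0]
for _ in range(3):
    state = eca_step(state)
    print(state)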
... where Q = ∃ if k is odd, and Q = ∀ if k is even. An introduction to the polynomial hierarchy and the classes Σ^p_k can be found in the book by Papadimitriou [35] or in the article by Jeroslow [27]. An introduction specifically in the context of bilevel optimization can be found in the article of Woeginger [38]. ...
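The alternating-quantifier characterization alluded to here is the standard one: a language L is in Σ^p_k if and only if there are a polynomial p and a polynomial-time decidable relation R such that

\[ x \in L \iff \exists y_1\, \forall y_2\, \exists y_3 \cdots Q\, y_k \; R(x, y_1, \ldots, y_k), \qquad |y_i| \le p(|x|), \]

with Q = ∃ if k is odd and Q = ∀ if k is even.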
Preprint
Recoverable robust optimization is a popular multi-stage approach, in which it is possible to adjust a first-stage solution after the uncertain cost scenario is revealed. We consider recoverable robust optimization in combination with discrete budgeted uncertainty. In this setting, it seems plausible that many problems become Σ^p_3-complete and therefore it is impossible to find compact IP formulations of them (unless the unlikely conjecture NP = Σ^p_3 holds). Even though this seems plausible, few concrete results of this kind are known. In this paper, we fill that gap of knowledge. We consider recoverable robust optimization for the nominal problems of Sat, 3Sat, vertex cover, dominating set, set cover, hitting set, feedback vertex set, feedback arc set, uncapacitated facility location, p-center, p-median, independent set, clique, subset sum, knapsack, partition, scheduling, Hamiltonian path/cycle (directed/undirected), TSP, k-disjoint path (k ≥ 2), and Steiner tree. We show that for each of these problems, and for each of three widely used distance measures, the recoverable robust problem becomes Σ^p_3-complete. Concretely, we show that all these problems share a certain abstract property and prove that this property implies that their robust recoverable counterpart is Σ^p_3-complete. This reveals the insight that all the above problems are Σ^p_3-complete 'for the same reason'. Our result extends a recent framework by Grüne and Wulf.
... First, as suggested in the article, these rules appear in a number of textbooks and well-known sources. In addition to being in the standard database book [AHV95], one can find some of the crucial ones in [Pap94, page 99], and many of the crucial ones also appear in the Wikipedia entry on first-order logic. We can remark that these rules are generally used when showing that first-order formulas can be rewritten into prenex normal form. ...
Preprint
Full-text available
A central computational task in database theory, finite model theory, and computer science at large is the evaluation of a first-order sentence on a finite structure. In the context of this task, the width of a sentence, defined as the maximum number of free variables over all subformulas, has been established as a crucial measure, where minimizing the width of a sentence (while retaining logical equivalence) is considered highly desirable. An undecidability result rules out the possibility of an algorithm that, given a first-order sentence, returns a logically equivalent sentence of minimum width; this result motivates the study of width minimization via syntactic rewriting rules, which is this article's focus. For a number of common rewriting rules (which are known to preserve logical equivalence), including rules that allow for the movement of quantifiers, we present an algorithm that, given a positive first-order sentence φ, outputs the minimum-width sentence obtainable from φ via application of these rules. We thus obtain a complete algorithmic understanding of width minimization up to the studied rules; this result is the first one, of which we are aware, that establishes this type of understanding in such a general setting. Our result builds on the theory of term rewriting and establishes an interface among this theory, query evaluation, and structural decomposition theory.
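A representative quantifier-movement rule of the kind referred to above (used when rewriting formulas into prenex normal form) is

\[ (\exists x\, \varphi) \wedge \psi \;\equiv\; \exists x\, (\varphi \wedge \psi), \qquad \text{provided } x \text{ does not occur free in } \psi. \]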
... some semantics σ ∈ Σ. For that, we assume familiarity with the basic concepts of computational complexity, in particular with the basic complexity classes P, NP and coNP [17]. Furthermore, we also consider the classes Σ^P_2 and Π^P_2. ...
Chapter
We introduce the notion of serialisation equivalence, which provides a notion of equivalence that takes the underlying dialectical structure of extensions in an argumentation framework into account. Under this notion, two argumentation frameworks are considered equivalent if they possess not only the same extensions wrt. some semantics but also the same serialisation sequences. A serialisation sequence is a decomposition of an extension into a series of minimal acceptable sets and essentially offers insight into the order in which arguments need to be brought forward to resolve the conflicts and to justify a particular position in the argumentation framework. We analyse serialisation equivalence in detail and show that it is generally more strict than standard equivalence and less strict than strong equivalence. Furthermore, we provide a full analysis of the computational complexity of deciding serialisation equivalence.
... Computational Complexity Theory focuses on classifying computational problems based on their intrinsic difficulty and the resources (such as time and space) required to solve them [4,115,311,558,908]. However, the full scope of algorithm complexity for uncertain graphs has not yet been fully uncovered. ...
Preprint
Full-text available
Graph theory is a fundamental branch of mathematics that studies networks consisting of nodes (vertices) and their connections (edges). Extensive research has been conducted on various graph classes within this field. Fuzzy Graphs and Neutrosophic Graphs are specialized models developed to address uncertainty in relationships. Intersection graphs, such as Unit Square Graphs, Circle Graphs, Ray Intersection Graphs, Grid Intersection Graphs, and String Graphs, play a critical role in analyzing graph structures. In this paper, we explore intersection graphs within the frameworks of Fuzzy Graphs, Intuitionistic Fuzzy Graphs, Neutrosophic Graphs, Turiyam Graphs, and Plithogenic Graphs, highlighting their mathematical properties and interrelationships. Additionally, we provide a comprehensive survey of the graph classes and hierarchies related to intersection graphs and uncertain graphs, reflecting the increasing number of graph classes being developed in these areas.
... The algorithm responds "yes" if the first oracle call returns "yes" and the second one returns "no". Our proof is based on a reduction of a DP-complete graph problem, known as Critical 3-colorability [11]: ...
Preprint
Full-text available
This paper studies the completeness of conjunctive queries over a partially complete database and the approximation of incomplete queries. Given a query and a set of completeness rules (a special kind of tuple generating dependencies) that specify which parts of the database are complete, we investigate whether the query can be fully answered, as if all data were available. If not, we explore reformulating the query into either Maximal Complete Specializations (MCSs) or the (unique up to equivalence) Minimal Complete Generalization (MCG) that can be fully answered, that is, the best complete approximations of the query from below or above in the sense of query containment. We show that the MCG can be characterized as the least fixed-point of a monotonic operator in a preorder. Then, we show that an MCS can be computed by recursive backward application of completeness rules. We study the complexity of both problems and discuss implementation techniques that rely on ASP and Prolog engines, respectively.
... The space occupied by the input and output neurons and by backpropagation, and the total space occupied by a dense layer, are expressed in bytes as sums of the gradient, weight, input, and output sizes (Eq. 11); the number of multiplication or addition operations per layer is expressed as a function of the filter and feature-map dimensions (Papadimitriou, 2003). ...
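The operation-count expression in this snippet did not survive extraction. For orientation only, a commonly used estimate of the multiply-add count of a single 2D convolutional layer is sketched below; this is our assumption about the kind of quantity the garbled formula expressed, not a reconstruction of the paper's equation.

def conv_layer_mult_adds(in_channels, out_channels, kernel_w, kernel_h, W, H):
    # Standard estimate for "valid" padding and stride 1:
    # each output value costs in_channels * kernel_w * kernel_h multiply-adds.
    out_w = W - kernel_w + 1
    out_h = H - kernel_h + 1
    per_output_value = in_channels * kernel_w * kernel_h
    return out_channels * out_w * out_h * per_output_value

print(conv_layer_mult_adds(3, 16, 3, 3, 224, 224))  # about 21.3 million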
Article
Full-text available
This research addresses the pressing challenge of enhancing processing times and detection capabilities in Unmanned Aerial Vehicle (UAV)/drone imagery for global wildfire detection, despite limited datasets. Proposing a Segmented Neural Network (SegNet) selection approach, we focus on reducing feature maps to boost both time resolution and accuracy, significantly advancing processing speeds and accuracy in real-time wildfire detection. This paper contributes to increased processing speeds enabling real-time detection capabilities for wildfire, increased detection accuracy of wildfire, and improved detection capabilities of early wildfire, by proposing a new direction for image classification of amorphous objects like fire, water, smoke, etc. We employ Convolutional Neural Networks (CNNs) for image classification, emphasizing the reduction of irrelevant features vital for deep learning processes, especially in live feed data for fire detection. Amid the complexity of live feed data in fire detection, our study emphasizes image feeds, highlighting the urgency to enhance real-time processing. Our proposed algorithm combats feature overload through segmentation, addressing challenges arising from diverse features like objects, colors, and textures. Notably, a delicate balance of feature map size and dataset adequacy is pivotal. Several research papers use smaller image sizes, compromising feature richness, which necessitates a new approach. We illuminate the critical role of pixel density in retaining essential details, especially for early wildfire detection. By carefully selecting the number of filters during training, we underscore the significance of higher pixel density for proper feature selection. The proposed SegNet approach is rigorously evaluated using a real-world dataset obtained by a drone flight and compared to the state-of-the-art literature. Keywords: Segment Neural Network, Machine Learning, Unmanned Aerial Vehicle, Drones, Convolution Neural Network, Wildfire, Detection, Computer Vision
... Performing arithmetic operations on extremely large numbers requires more processing power and time. Algorithms designed for standard-sized numbers may not scale efficiently to handle very large inputs [8]. The Java program was coded on a computer with the following specifications: ...
Article
Full-text available
The Goldbach Conjecture states that every even integer ≥ 4 can be written as a sum of two prime numbers. It is known to be true for all even numbers up to 4 × 10^18 [1]. Using the new formulation of a set of even numbers given in [9], and the fact that an even number in this formulation can be partitioned into all pairs of odd numbers [10], we present a computational algorithm that confirms the Strong Goldbach Conjecture holds true for all even numbers not larger than 9 × 10^18.
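A naive version of the kind of verification described, for a single even number, can be sketched as follows (illustrative only; the paper's algorithm for the full range up to 9 × 10^18 is far more optimized):

def is_prime(m):
    # Trial division; adequate for an illustration, far too slow for 10^18-scale work.
    if m < 2:
        return False
    if m % 2 == 0:
        return m == 2
    d = 3
    while d * d <= m:
        if m % d == 0:
            return False
        d += 2
    return True

def goldbach_pair(n):
    # Returns a pair of primes summing to the even number n >= 4, or None.
    if n % 2 != 0 or n < 4:
        return None
    if is_prime(n - 2):
        return (2, n - 2)
    for p in range(3, n // 2 + 1, 2):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

print(goldbach_pair(100))  # (3, 97)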
... Computational Complexity. We assume the reader to be familiar with the basic concepts of computational complexity theory (Arora & Barak, 2009; Papadimitriou, 1994). As usual, by P (polynomial time) we denote the class of all problems which can be solved via a deterministic polynomial-time Turing machine. As usual, we will call these problems tractable. ...
Article
Full-text available
This paper is a contribution to the research on dynamics in assumption-based argumentation (ABA). We investigate situations where a given knowledge base undergoes certain changes. We show that two frequently investigated problems, namely enforcement of a given target atom and deciding strong equivalence of two given ABA frameworks, are intractable in general. Notably, these problems are both tractable for abstract argumentation frameworks (AFs) which admit a close correspondence to ABA by constructing semantics-preserving instances. Inspired by this observation, we search for tractable fragments for ABA frameworks by means of the instantiated AFs. We argue that the usual instantiation procedure is not suitable for the investigation of dynamic scenarios since too much information is lost when constructing the abstract framework. We thus consider an extension of AFs, called cvAFs, equipping arguments with conclusions and vulnerabilities in order to better anticipate their role after the underlying knowledge base is extended. We investigate enforcement and strong equivalence for cvAFs and present syntactic conditions to decide them. We show that the correspondence between cvAFs and ABA frameworks is close enough to capture dynamics in ABA. This yields the desired tractable fragment. We furthermore discuss consequences for the corresponding problems for logic programs.
Book
Full-text available
The third volume of “Advancing Uncertain Combinatorics through Graphization, Hyperization, and Uncertainization” delves into groundbreaking developments in uncertain combinatorics and set theory. It highlights methodologies such as graphization, hyperization, and uncertainization, seamlessly integrating fuzzy, neutrosophic, soft, and rough set theories to address uncertainty in complex systems. These innovations bridge the gap between combinatorics and graph theory, resulting in novel structures like hypergraphs and superhypergraphs. The first chapter introduces Upside-Down Logic, a framework that reverses truths and falsehoods based on contextual shifts. The second chapter focuses on Local-Neutrosophic Logic, which incorporates the concept of locality to better model uncertainty. In the third chapter, advanced set-theoretic offsets are explored, while the fourth chapter extends Plithogenic Graphs to innovative structures such as Plithogenic OffGraphs. The fifth chapter expands matroid theory with the introduction of Neutrosophic Closure Matroids, which integrate uncertainty and indeterminacy. Finally, the sixth chapter examines graph parameters, including Superhypertree-width and Neutrosophic Tree-width, to deepen the understanding of graph characteristics in uncertain contexts. This volume establishes a comprehensive framework for addressing mathematical and real-world uncertainties, laying a strong foundation for future innovations in combinatorics, set theory, and graph theory.
Preprint
Full-text available
The study of SAT and its variants has provided numerous NP-complete problems, from which most NP-hardness results were derived. Due to the NP-hardness of SAT, adding constraints to either specify a more precise NP-complete problem or to obtain a tractable one helps better understand the complexity class of several problems. In 1984, Tovey proved that bounded-degree SAT is also NP-complete, thereby providing a tool for performing NP-hardness reductions even with bounded parameters, when the size of the reduction gadget is a function of the variable degree. In this work, we initiate a similar study for QBF, the quantified version of SAT. We prove that, like SAT, the truth value of a maximum degree two quantified formula is polynomial-time computable. However, surprisingly, while the truth value of a 3-regular 3-SAT formula can be decided in polynomial time, it is PSPACE-complete for a 3-regular QBF formula. A direct consequence of these results is that Avoider-Enforcer and Client-Waiter positional games are PSPACE-complete when restricted to bounded-degree hypergraphs. To complete the study, we also show that Maker-Breaker and Maker-Maker positional games are PSPACE-complete for bounded-degree hypergraphs.
Article
Neural symbolic knowledge graph (KG) reasoning offers a promising approach that combines the expressive power of symbolic reasoning with the learning capabilities inherent in neural networks. This survey provides a comprehensive overview of advancements, techniques, and challenges in the field of neural symbolic KG reasoning. The survey introduces the fundamental concepts of KGs and symbolic logic, followed by an exploration of three significant KG reasoning tasks: knowledge graph completion, complex query answering, and logical rule learning. For each task, we thoroughly discuss three distinct categories of methods: pure symbolic methods, pure neural approaches, and the integration of neural networks and symbolic reasoning methods known as neural-symbolic. We carefully analyze and compare the strengths and limitations of each category of methods to provide a comprehensive understanding. By synthesizing recent research contributions and identifying open research directions, this survey aims to equip researchers and practitioners with a comprehensive understanding of the state-of-the-art in neural symbolic KG reasoning, fostering future advancements in this interdisciplinary domain.
Chapter
The author will discuss different quantum protocols in this chapter for guided media and open space communication. The author examines already developed quantum protocols for photons and electrons. Earlier developed quantum protocols are the basis of recently developed quantum protocols. The author makes a study on quantum protocols with the help of quantum mechanics features such as entanglement, superposition, uncertainty principle, and no cloning.
Preprint
Full-text available
As practitioners seek to surpass the current reliability and quality frontier of monolithic models, Compound AI Systems consisting of many language model inference calls are increasingly employed. In this work, we construct systems, which we call Networks of Networks (NoNs), organized around the distinction between generating a proposed answer and verifying its correctness, a fundamental concept in complexity theory that we show empirically extends to Language Models (LMs). We introduce a verifier-based judge NoN with K generators, an instantiation of "best-of-K" or "judge-based" compound AI systems. Through experiments on synthetic tasks such as prime factorization, and core benchmarks such as the MMLU, we demonstrate notable performance gains. For instance, in factoring products of two 3-digit primes, a simple NoN improves accuracy from 3.7% to 36.6%. On MMLU, a verifier-based judge construction with only 3 generators boosts accuracy over individual GPT-4-Turbo calls by 2.8%. Our analysis reveals that these gains are most pronounced in domains where verification is notably easier than generation, a characterization which we believe subsumes many reasoning and procedural knowledge tasks, but doesn't often hold for factual and declarative knowledge-based settings. For mathematical and formal logic reasoning-based subjects of MMLU, we observe a 5-8% or higher gain, whilst no gain on others such as geography and religion. We provide key takeaways for ML practitioners, including the importance of considering verification complexity, the impact of witness format on verifiability, and a simple test to determine the potential benefit of this NoN approach for a given problem distribution. This work aims to inform future research and practice in the design of compound AI systems.
Article
Full-text available
Reaction systems are a formal model for computational processing in which reactions operate on sets of entities (molecules), providing a framework for dealing with qualitative aspects of biochemical systems. This paper is concerned with reaction systems in which entities can have discrete concentrations, and so reactions operate on multisets rather than sets of entities. The resulting framework allows one to deal with quantitative aspects of reaction systems, and a bespoke linear-time temporal logic allows one to express and verify a wide range of key behavioural system properties. In practical applications, a reaction system with discrete concentrations may only be partially specified, and the possibility of an effective automated calculation of the missing details provides an attractive design approach. With this idea in mind, the current paper discusses parametric reaction systems with parameters representing unknown parts of hypothetical reactions. The main result is a method aimed at replacing the parameters in such a way that the resulting reaction system, operating in a specified external environment, satisfies a given temporal logic formula. This paper provides an encoding of parametric reaction systems in SMT, and outlines a synthesis procedure based on bounded model checking for solving the synthesis problem. It also reports on the initial experimental results demonstrating the feasibility of the novel synthesis method.
Article
Showing that a problem is hard for a model of computation is one of the most challenging tasks in theoretical computer science, logic and mathematics. For example, it remains beyond reach to find an explicit problem that cannot be computed by polynomial size propositional formulas (PF). As a model of computation, logic programs (LP) under answer set semantics are as expressive as PF, and also NP-complete for satisfiability checking. In this paper, we show that the PAR problem is hard for LP, i.e., deciding whether a binary string contains an odd number of 1's requires exponential size logic programs. The proof idea is first to transform logic programs into equivalent boolean circuits, and then apply a probabilistic method known as random restriction to obtain an exponential lower bound. Based on the main result, we generalize a sufficient condition for identifying hard problems for LP, and give a separation map for a logic program family from a computational point of view, whose members are all equally expressive and share the same reasoning complexity.
Article
Facial recognition technology has been developed and widely used for decades. However, it has also raised privacy concerns and researchers' expectations for privacy-preserving facial recognition technologies. To provide privacy, detailed or semantic contents in face images should be obfuscated. However, face recognition algorithms have to be tailor-designed according to current obfuscation methods; as a result, the face recognition service provider has to update its commercial off-the-shelf (COTS) products for each obfuscation method. Meanwhile, current obfuscation methods lack a clearly quantified explanation. This paper presents a universal face obfuscation method for a family of face recognition algorithms using the global or local structure of eigenvector space. By specific mathematical explanations, we show that the upper bound of the distance between the original and obfuscated face images is smaller than the given recognition threshold. Experiments show that the recognition degradation is 0% for the global-structure-based method and 0.3%-5.3% for the local-structure-based method, respectively. Meanwhile, we show that even if an attacker knows the whole obfuscation method, he/she has to enumerate all the possible roots of a polynomial with an obfuscation coefficient, which makes it computationally infeasible to reconstruct the original faces. So our method shows good performance in both privacy and recognition accuracy without modifying recognition algorithms.
Article
Full-text available
What is computational explanation? Many accounts treat it as a kind of causal explanation. I argue against two more specific versions of this view, corresponding to two popular treatments of causal explanation. The first holds that computational explanation is mechanistic, while the second holds that it is interventionist. However, both overlook an important class of computational explanations, which I call limitative explanations. Limitative explanations explain why certain problems cannot be solved computationally, either in principle or in practice. I argue that limitative explanations are not plausibly understood in either mechanistic or interventionist terms. One upshot of this argument is that there are causal and non-causal kinds of computational explanation. I close the paper by suggesting that both are grounded in the notion of computational implementation.
Article
The problem of min-time coverage in constricted environments concerns the employment of networked robotic fleets for the support of routine inspection and service operations taking place in well-structured but constricted environments. In a series of previous works, we have provided a detailed definition of this problem, a Mixed Integer Programming (MIP) formulation for it, a formal analysis of its worst-case computational complexity, and additional structural results for its optimal solutions that also enable the solution of the problem through a partial relaxation of the original MIP formulation. The current work employs those past developments towards the development of a heuristic algorithm able to address larger problem instances that are not amenable to the previous methods. An accompanying set of numerical experiments demonstrates and assesses the computational advantages of this new method. Furthermore, the presented developments can function as building blocks for additional heuristic approaches to the considered problem; this potential is highlighted in the concluding part of the paper.