Computational Complexity Theory - Science topic
Questions related to Computational Complexity Theory
Can we apply theoretical computer science to prove theorems in mathematics?
Explore the unresolved question in computational complexity theory of whether every problem whose solution can be verified in polynomial time (NP) can also be solved in polynomial time (P), a question that impacts fields like cryptography and optimization.
Our answer is certain: YES. See
I am trying to calculate the computational complexity of an expression that includes these hyperbolic functions. I know how to calculate the computational complexity of the other parts of the expression, but I am facing difficulties with the complexity of the hyperbolic functions. What is the computational complexity of the sinh, cosh, and tanh functions? Can anyone explain, please? Thanks in advance.
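As an illustrative (not authoritative) way to reason about it: sinh, cosh, and tanh can each be computed from one or two exponentials plus a constant number of additions and divisions, so their asymptotic cost is essentially that of exp (O(1) at fixed machine precision; roughly the cost of an n-digit exp for n-digit arbitrary precision). A minimal Python sketch of that reduction:

import math

def sinh_via_exp(x: float) -> float:
    # sinh(x) = (e^x - e^(-x)) / 2: two exp calls plus O(1) arithmetic.
    return (math.exp(x) - math.exp(-x)) / 2.0

def cosh_via_exp(x: float) -> float:
    # cosh(x) = (e^x + e^(-x)) / 2: two exp calls plus O(1) arithmetic.
    return (math.exp(x) + math.exp(-x)) / 2.0

def tanh_via_exp(x: float) -> float:
    # tanh(x) = (e^(2x) - 1) / (e^(2x) + 1): a single exp call suffices.
    # (Not numerically robust for very large |x|; this is a cost sketch only.)
    e2x = math.exp(2.0 * x)
    return (e2x - 1.0) / (e2x + 1.0)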
During the addition of Na and Cl ions to the solvated system, the program threw an error stating: "no line with molecule 'SOL' found in the [molecules] section of file 'topol.top'".
However, the file topol.top does have that entry. Please suggest how to rectify the error.
Thanks in advance.
Regards,
Vinay
Should these polynomials be defined up to a bounded number of terms in their Taylor series?
I am looking for an analytical, probabilistic, statistical, or other way to compare the results of a number of different approaches applied to the same test model. These approaches can be different optimization techniques applied to a similar problem, or different types of sensitivity analysis applied to a design. I am looking for a metric (generic or application-specific) that can be used to compare the results in a more constructive and structured way.
I would like to hear if there is a technique that you use in your field, as I might be able to derive something for my problem.
Thank you very much.
Hello,
I am currently designing a toy model to calculate interatomic potentials; this toy model needs to scale better than a DFT simulation. Sadly, I have little knowledge of DFT simulations. I have an infinite repeating lattice (which I of course limit to just a few surrounding cells).
I found a paper online (https://www.nature.com/articles/nphys1370) and according to it DFT scales as O(N^3) (so cubically with the number of particles), but one thing is not entirely clear to me: does the DFT simulation scale cubically with the number of particles in the unit cell, or with all the particles in the simulation (including the ones that are part of the boundary condition)?
If you know the answer I would love to hear it (I might be understanding/reading it wrong)!
Can complexity theory completely or partially solve problems in mathematics?
Please, can anyone contribute on how I can use DEA to solve graph-algorithm problems such as network flow, project management, scheduling, routing, etc.?
Mainly, I need information on how to identify the input and output variables in this kind of problem (where there is no complete knowledge of the I/O).
I think I can identify my DMUs.
I shall be glad to receive contributions on an appropriate general DEA model approach for solving combinatorial optimization problems of this kind.
Thanks
I've recently read about a Reinforcement Learning (RL) agent with an LSTM controller overseeing an LSTM path-integration module receiving occasional visual input from a CNN (Banino et al., 2018).
Does the functionality gained by combining different NNs eventually flatten out? Is model standardization, bringing an air of component-based development (CBD) into NN architectures, for the best? Or are end-to-end implementations with higher integration to be preferred?
I need to know the computational complexity of two operations in terms of Big O notation:
(i) Element-wise division of two N×M matrices
(ii) Element-wise multiplication of two N×M matrices
In both cases N = M/2.
I have some idea about Big O notation. Please direct me to an article or tutorial so that I can derive the computational complexity of any algorithm on my own.
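As an illustration (toy matrices below are hypothetical): an element-wise multiplication or division of two N×M matrices touches each of the N·M entries exactly once, so both operations are O(N·M); with N = M/2 this is O(M²) = O(N²). A minimal Python sketch:

def elementwise_op(A, B, op):
    # A and B are N x M matrices given as lists of lists.
    # One scalar operation per entry => exactly N*M operations => O(N*M).
    N, M = len(A), len(A[0])
    return [[op(A[i][j], B[i][j]) for j in range(M)] for i in range(N)]

# Hypothetical 2x4 matrices (N = M/2):
A = [[1.0, 2.0, 3.0, 4.0], [5.0, 6.0, 7.0, 8.0]]
B = [[2.0, 2.0, 2.0, 2.0], [4.0, 4.0, 4.0, 4.0]]
product  = elementwise_op(A, B, lambda a, b: a * b)   # O(N*M)
quotient = elementwise_op(A, B, lambda a, b: a / b)   # O(N*M), same bound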
Does anybody know of an optimization tool which has a built-in spatial branch-and-bound solver?
Is there any definitive relationship between the two under certain circumstances?
Given a graph G and a finite list L(v) ⊆ ℕ for each vertex v ∈ V, the list-coloring problem asks for a list-coloring of G, i.e., a coloring f such that f(v) ∈ L(v) for every v ∈ V. The list-coloring problem is NP-complete for most graph classes. Can anyone please provide the related literature in which the list-coloring problem has been proved NP-complete for general graphs using a reduction (from a well-known NP-complete problem)?
We know there is an elementary cellular automaton (ECA) with 2 states (Rule 110) that is universal, i.e. Turing-complete. One-way cellular automata (OCAs) are a subcategory of ECAs where the next state only depends on the state of the current cell and one neighbor. I believe that one can make a universal OCA with 4 states fairly easily by simulating two cells of Rule 110 with each cell of the OCA. Does anyone know if it can be done with only three states? Thanks!
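For anyone who wants to experiment, here is a minimal Python sketch (my own illustration, not from the question) of one synchronous step of Rule 110 on a cyclic tape; the 4-state OCA construction mentioned above would pack two such cells into one:

def rule110_step(cells):
    # One synchronous update of Rule 110 on a cyclic tape of 0/1 cells.
    # The new state of cell i is bit (4*left + 2*center + right) of 110.
    n = len(cells)
    return [
        (110 >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

tape = [0, 0, 0, 1, 0, 0, 0, 0]
for _ in range(5):
    tape = rule110_step(tape)
    print(tape)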
Is it possible to compare the evapotranspiration from SEBAL algorithms with the potential evapotranspiration of the FAO Penman-Monteith method?
Suppose I have to solve a problem which consists of two NP-complete problems.
Now I would like to know what the complexity class of the new problem will be.
Can anyone suggest a paper regarding this topic?
Thank you in advance.
A TQBF is a Boolean formula with alternating existential and universal quantifiers. The Boolean formula here is in conjunctive normal form (CNF).
The goal of computational complexity is to classify algorithms according to their performance. If a problem is complex, with a multi-dimensional variable, will that affect the performance of the algorithm? Consider this:
we write T(n) using "big-O" notation to express an algorithm's runtime complexity. For example, the statement
T(n) = O(n²)
says that an algorithm has quadratic time complexity.
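As a concrete (hypothetical) illustration of such a quadratic-time algorithm: counting all ordered pairs of equal elements with two nested loops performs about n² comparisons, so T(n) = O(n²). A minimal Python sketch:

def count_equal_pairs(values):
    # Two nested loops over n elements => n*n comparisons => T(n) = O(n^2).
    n = len(values)
    count = 0
    for i in range(n):
        for j in range(n):
            if i != j and values[i] == values[j]:
                count += 1
    return count

print(count_equal_pairs([1, 2, 1, 3, 2]))   # 4 ordered equal pairs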
In what sense is "irreducibly complex" synonymous with "NP-complete"?
In what sense is "complicated" just "computationally intensive but solvable"?
I am studying one-way functions in detail, and in the standard literature it is known that there are three different kinds of these functions: deterministic, weak, and strong. I was wondering if there is another kind of one-way function in the classical literature?
In trying to approximate an answer to an NP-complete problem, a heuristic (e.g., particle swarm optimization, genetic algorithms) is used. Is there a study on which standard heuristic(s) is/are used in approximating NP-complete problems?
Let us assume a connected graph with a large number of nodes and unweighted edges. Also assume that complete information is not available (i.e., the adjacency matrix is not available); each node only has information about its neighbors. Now I want to find a path to a destination. How do I find a path which is shortest among the various possible paths?
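For reference, in an unweighted graph breadth-first search finds a shortest path in O(V+E) when the structure can be explored by querying neighbors; in the purely local setting the same computation is usually flooded over the network (distributed BFS). A minimal centralized Python sketch, where the neighbors(v) callback is a hypothetical stand-in for "a node only knows its neighbors":

from collections import deque

def bfs_shortest_path(source, target, neighbors):
    # BFS visits nodes in order of hop distance, so the first time we
    # reach `target` the recovered path is shortest in an unweighted graph.
    parent = {source: None}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        if v == target:
            path = []
            while v is not None:
                path.append(v)
                v = parent[v]
            return path[::-1]
        for w in neighbors(v):
            if w not in parent:
                parent[w] = v
                queue.append(w)
    return None  # target not reachable

# Usage with a hypothetical adjacency dict:
graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(bfs_shortest_path(0, 4, lambda v: graph[v]))   # e.g. [0, 1, 3, 4]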
Consider the decision problem where one is given a directed graph G=(V,A), a root r ∈ V, and a subset of its arcs T ⊆ A.
The problem consists of deciding whether there is a subgraph G'=(V,A') of G such that:
a. All vertices in V are reachable from r in G';
b. T ⊆ A', i.e., G' contains all arcs in T; and
c. G' has no cycles, i.e., G' is a DAG.
As an example, the answer is 'yes' for the graph attached with r=0 and T=∅, but it is 'no' with r=0 and T={(3,0)}.
Could someone please tell if this problem is in P or NP-Hard (or...)?
It seems the problem is in P for the case T=∅ (a DFS will do). Also, the "undirected version" of the problem seems to be in P: I guess it could be solved by Kruskal's algorithm for MST, processing the edges in T first. However, I am not sure about the general, directed case. It may be worth pointing out that we are looking for a DAG, and not necessarily an arborescence.
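To illustrate the T = ∅ observation above (my own sketch, with hypothetical variable names): if every vertex is reachable from r, the DFS tree itself is an acyclic subgraph satisfying conditions a-c, so a plain reachability check decides that case.

def all_reachable_from_root(vertices, arcs, r):
    # Depth-first search from r; for T = empty set the answer is 'yes'
    # exactly when every vertex is reachable (the DFS tree then satisfies a-c).
    adj = {v: [] for v in vertices}
    for (u, v) in arcs:
        adj[u].append(v)
    seen, stack = {r}, [r]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen) == len(vertices)

print(all_reachable_from_root([0, 1, 2, 3], [(0, 1), (1, 2), (0, 3), (3, 0)], 0))  # True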
Some related problems are the Feedback Arc Set problem and the Directed Steiner Tree problem, both NP-Hard. Unfortunately I could not think on reductions from these problems to this one, mainly due to the fact that all vertices must be present and reachable.
Maybe I am just missing something...?
Thank you very much in advance,
Ricardo.
In terms of graph isomorphism complexity classes, trees have a polynomial-time algorithm, while for directed acyclic graphs (DAGs) none is known so far.
What about polytrees (oriented trees)? These are DAGs whose underlying undirected graph is a tree. Unlike tree nodes, polytree nodes can have several parents.
Are polytrees in the isomorphism complexity class of DAGs or of trees, so far?
I have posted this question in mathoverflow and math.stackexchange with no answers so far.
For example, rough partial orders, rough pre-orders, or rough binary relations? But also rough algebraic structures such as semigroups, monoids, etc.?
I started studying Kolmogorov complexity today, and this question came to mind: is there any way to use LZW to do this? I'm looking for guidance for my studies.
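One common observation (offered as a pointer, not a full answer): Kolmogorov complexity itself is uncomputable, but the length of any lossless compression of a string, LZW included, gives a computable upper bound on it, up to an additive constant, and is often used as a practical surrogate. A minimal LZW encoder sketch in Python (my own illustration):

def lzw_compress(data: bytes) -> list[int]:
    # Classic LZW: the dictionary starts with all 256 single-byte strings
    # and grows as longer substrings are encountered.
    dictionary = {bytes([i]): i for i in range(256)}
    w, codes = b"", []
    for b in data:
        wb = w + bytes([b])
        if wb in dictionary:
            w = wb
        else:
            codes.append(dictionary[w])
            dictionary[wb] = len(dictionary)
            w = bytes([b])
    if w:
        codes.append(dictionary[w])
    return codes

# The compressed size (number of codes times bits per code) is a computable
# upper bound on the Kolmogorov complexity of `data`.
print(len(lzw_compress(b"abababababababab")), "codes emitted")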
I read this article [1] and found an interesting algorithm, Space Saving (SS), for finding top-k and frequent items, or heavy hitters. The authors describe a data structure called a stream summary. The SS algorithm is very simple, but the stream-summary data structure is a little difficult to implement. I am wondering, is it possible to design the SS algorithm without the stream-summary data structure? Can someone guide me?
[1] Metwally, Ahmed, Divyakant Agrawal, and Amr El Abbadi. "Efficient computation of frequent and top-k elements in data streams." Database Theory-ICDT 2005. Springer Berlin Heidelberg, 2005. 398-412.
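One possibility (my own sketch, not from the paper): the counting logic of Space Saving can be implemented with a plain hash map; the stream-summary structure only makes the "replace the item with the smallest counter" step O(1), whereas a dictionary scan makes it O(k) per eviction, with the same frequency estimates.

def space_saving(stream, k):
    # Keep at most k counters. When a new item arrives and all k slots are
    # full, overwrite the item with the smallest count and increment it.
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k:
            counters[item] = 1
        else:
            victim = min(counters, key=counters.get)   # O(k) eviction
            counters[item] = counters.pop(victim) + 1
    return counters   # over-estimates of the true frequencies

print(space_saving("abracadabra", 3))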
Knowing whether a problem is NP-hard or not is a delicate question. Although it is an important question for economics too, it is a kind of labyrinth for a non-specialist. Moreover, a small change of problem setting can turn an NP-hard problem into a tractable one. I want to know whether the problem defined below is NP-hard or not. It is a problem which lies between the (original) optimal assignment problem and the generalized assignment problem.
The optimal assignment problem can be formulated as follows.
Find an optimal assignment σ, a permutation of the set [N] = {1, 2, ..., N}, that maximizes the linear sum
∑_{i=1}^{N} a_{i,σ(i)},
where A = (a_{ij}) is a positive square matrix of order N. This problem can be reduced to the LP problem
Maximize ∑_{i=1}^{N} ∑_{j=1}^{N} a_{ij} x_{ij}   (1)
under the conditions
∑_{j=1}^{N} x_{ij} = 1 ∀ i = 1, ..., N;
∑_{i=1}^{N} x_{ij} = 1 ∀ j = 1, ..., N;
x_{ij} ≧ 0 ∀ i = 1, ..., N; j = 1, ..., N.
This is a classical LP problem. Birkhoff, von Neumann, Koopmans and others showed that this problem can be solved by the Hungarian method. The Birkhoff-von Neumann theorem assures that this LP problem is equivalent to the associated integer problem, i.e., the problem with the restrictions x_{ij} ∈ {0, 1}.
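For reference, problem (1) can be solved exactly in polynomial time with off-the-shelf routines; a minimal sketch using SciPy's Hungarian-style solver, with a toy payoff matrix of my own:

import numpy as np
from scipy.optimize import linear_sum_assignment

A = np.array([[4.0, 1.0, 3.0],
              [2.0, 0.0, 5.0],
              [3.0, 2.0, 2.0]])   # hypothetical payoff matrix a_ij, N = 3

rows, cols = linear_sum_assignment(A, maximize=True)   # optimal permutation sigma
print(list(zip(rows, cols)), A[rows, cols].sum())      # assignment and its value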
Now this problem can be generalized to the Generalized Assignment Problem (GAP), which is cited as NP-hard. (See for example "Generalized assignment problem" in Wikipedia.) A GAP is formulated as follows:
Maximize ∑_{i=1}^{M} ∑_{j=1}^{N} a_{ij} x_{ij}   (2)
subject to
∑_{j=1}^{N} w_{ij} x_{ij} ≦ t_i ∀ i = 1, ..., M;
∑_{i=1}^{M} x_{ij} = 1 ∀ j = 1, ..., N;
x_{ij} ∈ {0, 1} ∀ i = 1, ..., M; j = 1, ..., N,
where A = (a_{ij}) and W = (w_{ij}) are two positive rectangular matrices of size M × N and t = (t_i) is a positive vector of dimension M.
It is evident that problem (2) reduces to problem (1) when
w_{ij} = 1 ∀ i, j and t_i = 1 ∀ i.
Now let us set up a third problem, which is a maximization problem on a bipartite graph.
Let G = ([M] ∪ [N], E) where E ⊆ [M] × [N] (a bipartite graph). The problem is to
maximize ∑_{(i,j) ∈ E} a_{ij} x_{ij}   (3)
subject to
∑_{i: (i,j) ∈ E} x_{ij} = 1 ∀ j;
∑_{j: (i,j) ∈ E} x_{ij} = 1 ∀ i;
x_{ij} ≧ 0 ∀ (i, j) ∈ E.
If E is defined by
(i, j) ∈ E if and only if w_{ij} ≦ t_i,
we see that problem (3) lies between problems (1) and (2).
I wonder whether this problem (3) is also NP-hard, or whether it has an algorithm which terminates in polynomial time.
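This does not settle the complexity question, but for experimenting with instances of (3) the LP can be fed directly to an off-the-shelf solver. A minimal SciPy sketch, where the edge set E and the profits are hypothetical toy data:

import numpy as np
from scipy.optimize import linprog

M, N = 3, 3
A = np.array([[4., 1., 3.], [2., 0., 5.], [3., 2., 2.]])               # profits a_ij
E = [(i, j) for i in range(M) for j in range(N) if (i + j) % 2 == 0]   # toy edge set

c = np.array([-A[i, j] for (i, j) in E])      # linprog minimizes, so negate profits
A_eq = np.zeros((M + N, len(E)))
for k, (i, j) in enumerate(E):
    A_eq[i, k] = 1.0          # row constraint:    sum over j of x_ij = 1
    A_eq[M + j, k] = 1.0      # column constraint: sum over i of x_ij = 1
b_eq = np.ones(M + N)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.status, res.x if res.success else "infeasible")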
I created the generator matrix using the method in the paper "Efficient encoding of quasi-cyclic low-density parity-check codes". The parity bits of the generator matrix are decimal numbers. Why are the parity-bit parts of the generator matrix not 0 or 1?
I am focusing on the LLR computation of the codeword for soft-decision LDPC decoding. The decimal numbers make it difficult to obtain LLR information in a noisy channel.
Using EG(3, 2^2) in GF(2^6), I create the type-I QC-LDPC parity-check matrix H = [H1, H2, H3, H4, H5] with five parallel bundles of lines.
When I solve a problem and want to compare it with the analytical solution, how do I know which "N" to take, since lambda scales with N?
There are plenty of other reference materials posted in my questions and publications on ResearchGate. I do not think it is enough for someone to claim that the sequences I've found are pseudo-random and to offer that as a satisfying answer to the question posed here.
If indeed complexity of a sequence is reflected within the algorithm that generates the sequence, then we should not find that high-entropy sequences are easily described by algorithmic means.
I have a very well defined counter example.
Indeed, so strong is the example that the sound challenge is for another researcher to show how one of my maximally disordered sequences can be altered such that the corresponding change to measured entropy is a positive, non-zero value.
Recall that with arbitrarily large hardware registers and an arbitrarily large memory, our algorithm will generate arbitrarily large, maximally disordered digit sequences; no algorithm changes are required. It is a truly simple algorithm that generates sequences that always yield a measure of maximal entropy.
In what way are these results at odds, or in keeping, with Kolmogorov?
I have gone through some papers where it is mentioned that weights are assigned using a semiring. How are these semirings formed, i.e., under what operations are they formed? Does it depend on the application or on something else?
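One concrete and very common example in weighted automata and shortest-path problems is the tropical (min, +) semiring: "addition" is min, "multiplication" is +, with identities +∞ and 0; which semiring is chosen does depend on the application (probabilities use (+, ×), for instance). A minimal Python sketch of my own:

import math

class TropicalSemiring:
    # Tropical (min, +) semiring: oplus = min, otimes = +,
    # zero = +infinity (identity for min), one = 0 (identity for +).
    zero = math.inf
    one = 0.0

    @staticmethod
    def oplus(a, b):
        return min(a, b)

    @staticmethod
    def otimes(a, b):
        return a + b

# "Multiply" weights along a path, "add" over alternative paths.
w_path1 = TropicalSemiring.otimes(2.0, 3.0)       # path of cost 2 then 3 -> 5
w_path2 = TropicalSemiring.otimes(1.0, 7.0)       # path of cost 1 then 7 -> 8
print(TropicalSemiring.oplus(w_path1, w_path2))   # best alternative -> 5.0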
What is the best technique for Arabic named entity recognition: regular expressions, finite-state transducers, CFGs, HMMs, or CRFs?
Please, I have a question:
How will I know if an optimization problem is NP-hard or NP-complete?
(Many papers just say that the optimization problem they have is NP-hard and propose heuristics...)
If a graph depicts connectivity, what does it tell us about the space it segments?
Hello,
Does anyone know what W[1]-hard means in the context of parameterized complexity?
I am aware that the minimum (cardinality) vertex cover problem on cubic (i.e., 3-regular) graphs is NP-hard. Say a positive integer k > 2 is fixed. Has there been any computational complexity proof (aside from the 3-regular result, which is the case k = 3) showing that the minimum (cardinality) vertex cover problem on k-regular graphs is NP-hard (e.g., for 4-regular graphs)? Since k is fixed, you aren't guaranteed the cubic graph instances needed for the classic result I mentioned above.
Note that this problem would be straightforward to see is NP-hard from the result mentioned at the start if it were stated for regular graphs in general (since 3-regular is a special case); we don't get that when k is fixed.
Does anybody know of any papers that address the computational complexity of minimum (cardinality) vertex cover on a k-regular graph, when k is fixed/constant? I have been having difficulties trying to find papers that address this (in the slew of documents that cover the classic result of minimum (cardinality) vertex cover on cubic graphs being NP-hard.)
My goal is to locate a paper that addresses the problem for any k>2 (if it exists), but any details would be helpful.
Note: I also asked this question on CSTheory StackExchange. http://cstheory.stackexchange.com/questions/29175/minimum-vertex-cover-on-k-regular-graphs-for-fixed-k2-np-hard-proof
Thank you so much!
Does it imply that if the theory did not allow calculating values of the given quantity in reasonable time, then this theoretical quantity would not have a counterpart in physical reality? Particularly, does this imply that the wave functions of the Universe do not correspond to any element of physical reality, inasmuch as they cannot be calculated in any reasonable time? Furthermore, if the ‘computational amendment’ (mentioned in the paper http://arxiv.org/abs/1410.3664v1) to the EPR definition of an element of physical reality is important and physically meaningful, should we then exclude infeasible, i.e., practically useless, solutions from all the equations of physical theories?
I need an example to find out how this problem was solved.
The equation is as follows:
(∂^4 w/∂x^4 − ∂^4 w_0/∂x^4) + (P − (EA/2L)·∫(∂w/∂x − ∂w_0/∂x) dx)·∂^2 w/∂x^2 − f = 0
A reduced-order (RO) model results from the Galerkin decomposition, which is based on the following representation of the beam shape:
w(x) = ∑_i q_i·ϕ_i(x)
I attached the resulting equation.
I need to know the order of complexity and also how to calculate it.
Thanks in advance.
Is the same problem NP-complete for strong edge colored graphs and proper edge colored graphs?
Definitions:
1) An edge coloring is 'proper' if each pair of adjacent edges has different colors.
2) An edge coloring is 'vertex-distinguishing' if no two vertices have the same set of colors on the edges incident to them.
3) An edge coloring is 'strong' if it is both proper and vertex-distinguishing.
The problem should preferably be a problem in graph theory, with a tree as its input.
The question is regarding NP problems in data structures.
From my own research, I reached the conclusion that information about the behavior of the coefficients of the series expansion of the Riemann Xi function could lead to a solution to the Riemann Hypothesis. The proof would rely on the convexity of the modulus (squared) of the Riemann Xi function (see attached article draft).
What is known about these coefficients a_{2n}? How fast do they decrease, and what is their general behavior for large values of n?
Hi there,
I do some research on approximation algorithms for quadratic programming. I try to optimize a quadratic function with a polytope as feasible set (a QP in standard form, to define it briefly). The matrix of the quadratic term would be indefinite in the general case.
I already know Vavasis' algorithm [1] to approximate global minima of such QP's is polynomial time (provided that the number of negative eigenvalues of the quadratic term is a fixed constant). Recently, I found an algorithm by Ye [2], which yields a guaranteed 4/7-approximation of the solution of a quadratically constrained QP. Ye developed his algorithm starting from a positive semi-definite relaxation of the original problem.
I wonder if there are positive semidefinite programming relaxations of linearly constrained QPs that lead to similar approximation guarantees. Does anyone know of at least one paper in which such a technique is presented?
[1] S. A. Vavasis, Approximation algorithms for indefinite quadratic programming, Math. Prog. 57 (1992), pp. 279-311.
[2] Y. Ye, Interior point algorithms: theory and analysis, Wiley-Interscience (1997), pp. 325-332.
For example, x^r where both x and r are real numbers, in terms of the number of multiplications and additions required.
I would be very thankful for a detailed explanation or a link to the literature. Best regards.
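A common way to reason about this (a sketch, not a full answer): for real r and x > 0, x^r is usually computed as exp(r·ln x), i.e., one logarithm, one multiplication, and one exponential. At fixed machine precision each of these is O(1); for n-digit precision the cost is dominated by exp/ln, on the order of a few n-digit multiplications times a logarithmic factor, depending on the method. A minimal Python illustration:

import math

def real_power(x: float, r: float) -> float:
    # x^r = exp(r * ln x) for x > 0: one log, one multiply, one exp.
    if x <= 0.0:
        raise ValueError("this sketch only handles x > 0")
    return math.exp(r * math.log(x))

print(real_power(2.0, 0.5), math.sqrt(2.0))   # both about 1.41421356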
What are the general characteristics of a problem in PSPACE?
We use reduction to solve problem P1 by means of problem P2, such that a solution of P2 is also a solution of P1.
In a transformation, on the other hand, problem P1 is transformed into a simpler form so that solving P1 becomes easy.
So the solution set is the same in both cases.
In a data structures book I read that Shell sort is the most efficient and the most complex of the O(n²) class of sorting algorithms.
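For reference, a minimal Shell sort sketch in Python with Shell's original gap sequence n/2, n/4, ..., 1, whose worst case is O(n²) (other gap sequences improve on this):

def shell_sort(a):
    # Shell sort with the original gap sequence n/2, n/4, ..., 1.
    # Worst-case O(n^2) with these gaps.
    n = len(a)
    gap = n // 2
    while gap > 0:
        for i in range(gap, n):
            temp = a[i]
            j = i
            while j >= gap and a[j - gap] > temp:
                a[j] = a[j - gap]
                j -= gap
            a[j] = temp
        gap //= 2
    return a

print(shell_sort([5, 2, 9, 1, 5, 6]))   # [1, 2, 5, 5, 6, 9]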
I have thought of an algorithm that I think solves 3-SAT with high probability in polynomial time (DEA, the dual expression algorithm; see attachment). I have not been able to prove the polynomial running time beyond any doubt, but arguments based on Markov chain theory indicate it. I would appreciate any suggestions. Is anyone interested in testing this algorithm?
Is there anything faster than solving the dual simplex problem?
I have found a fast algorithm for the linear assignment problem, and I want to test it against other algorithms
For classes above NP (EXPTIME, EXPSPACE, etc.) we define complete problems in terms of polynomial-time reductions. I can understand that this is useful in the case that the class is equal to P, but that is highly unlikely. I think we should use corresponding limits for the reductions (polynomial-space reductions for EXPTIME, exponential-time reductions for EXPSPACE, etc.). This would probably increase the set of complete problems for each class.
What do you think?
From what I understand, the fixed word recognition problem is the question: given any string from any essentially non-contracting, context-sensitive language, can this string be generated by some given grammar rule set? I also understand that this problem is PSPACE-complete. I'm looking for papers that are directly related to this problem, especially in relation to computational complexity, such as its PSPACE-completeness.
Adding or removing some nodes and links without changing the maximum degree of G. Here the minimum function is linear with respect to its layer dimension.
I'm looking for weird and simple models of computation that can simulate a Turing machine with only a polynomial time slowdown. For example, I know of the 2-tag systems and bi-tag systems (Damien Woods and Turlough Neary, "Remarks on the computational complexity of small universal Turing machines", 2006). Do you know of other models like those?
I usually deal with MIP problems. I want to know which software is better, and in what aspects (such as solution time, number of nodes used, etc.), for MIP: the CPLEX solver in GAMS or IBM ILOG CPLEX?