Science topic

Computational Complexity Theory - Science topic

Explore the latest questions and answers in Computational Complexity Theory, and find Computational Complexity Theory experts.
Questions related to Computational Complexity Theory
  • asked a question related to Computational Complexity Theory
Question
2 answers
Can we apply the theoretical computer science for proofs of theorems in Math?
Relevant answer
Answer
The pumping lemma is a valuable theoretical tool for understanding the limitations of finite automata and regular languages. It is not used for solving computational problems directly but is important for proving non-regularity and understanding the boundaries of regular languages.
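To make this concrete, here is a short sketch (my own illustration, not part of the answer above) of the standard pumping-lemma argument that L = {a^n b^n : n ≥ 0} is not regular; p, x, y, z are the usual pumping-lemma symbols:
    Assume L is regular with pumping length p, and take w = a^p b^p ∈ L, so |w| ≥ p.
    Write w = xyz with |xy| ≤ p and |y| ≥ 1; then y = a^k for some k ≥ 1, since xy lies within the leading a's.
    Pumping once gives x y^2 z = a^{p+k} b^p, which has more a's than b's, so x y^2 z ∉ L.
    This contradicts the pumping lemma, hence L is not regular.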
  • asked a question related to Computational Complexity Theory
Question
2 answers
Explore the unresolved question in computational complexity theory of whether every problem whose solution can be verified in polynomial time (NP) can also be solved in polynomial time (P), impacting fields like cryptography and optimization.
Relevant answer
Answer
The question of whether P equals NP is one of the most famous unsolved problems in computational complexity theory. In this context, P refers to the set of problems that can be solved in polynomial time on a deterministic Turing machine, while NP refers to the set of problems for which a solution can be verified in polynomial time once it is given.
If P were to equal NP, it would imply that problems for which solutions can be verified quickly could also be solved quickly. This would have profound implications across various fields, including cryptography, optimization, artificial intelligence, and many other areas of computer science.
For example, in cryptography, the security of many encryption algorithms relies on the assumption that certain problems are computationally hard to solve. If P were to equal NP, it could potentially mean that these problems are not as hard to solve as previously believed, leading to significant vulnerabilities in cryptographic systems.
Similarly, in optimization, many real-world problems involve finding the best solution among a vast number of possible options. If P were to equal NP, it would revolutionize optimization algorithms, potentially enabling the efficient solution of complex optimization problems that are currently considered computationally intractable.
Despite decades of research and numerous attempts to solve the P vs. NP problem, no definitive answer has been found yet. The resolution of this question would not only have far-reaching implications for computer science but also impact our understanding of the inherent complexity of computational problems and the limits of efficient computation.
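To make the "verified in polynomial time" part concrete, here is a minimal sketch (my own illustration, not part of the answer above) of a polynomial-time verifier for CNF satisfiability: checking a proposed truth assignment against a formula takes time linear in the formula size, even though finding such an assignment is not known to be possible in polynomial time. The clause encoding (lists of signed integers) is just an illustrative convention.
    # Minimal sketch of an NP-style verifier for CNF-SAT (illustrative encoding).
    # A clause is a list of nonzero ints: +i means variable i, -i means its negation.
    def verify_cnf(clauses, assignment):
        """Check a candidate assignment (dict var -> bool) in time linear in the formula size."""
        return all(
            any(assignment[abs(lit)] == (lit > 0) for lit in clause)
            for clause in clauses
        )

    # Example: (x1 or not x2) and (x2 or x3)
    clauses = [[1, -2], [2, 3]]
    print(verify_cnf(clauses, {1: True, 2: False, 3: True}))    # True
    print(verify_cnf(clauses, {1: False, 2: False, 3: False}))  # False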
  • asked a question related to Computational Complexity Theory
Question
3 answers
Our answer is certain: YES. See
Relevant answer
Answer
π = 3.1415926535897932384626433832795… ⇒ π ≃ 3.14,
while 22/7 = 3.142857142857142857… = 3.(142857), with the block 142857 repeating, ⇒ 22/7 ≃ 3.14 ≃ π
  • asked a question related to Computational Complexity Theory
Question
2 answers
I am trying to calculate the computational complexity of an expression that includes these hyperbolic functions. I know how to calculate the computational complexity of other parts of the expression but facing difficulties in calculating the complexity of these hyperbolic functions. What is the computational complexity of sinh, cosh, and tanh functions? Can anyone explain, please? Thanks in advance.
Relevant answer
Answer
looks like the answer has been deleted?
  • asked a question related to Computational Complexity Theory
Question
2 answers
During the addition of Na and Cl ions to the system in solution, the program threw the error "no line with molecule 'SOL' found in the [molecules] section of file 'topol.top'",
while the file topol.top does have that entry. Please suggest how to rectify the error.
Thanks in advance.
Regards,
Vinay
Relevant answer
Answer
Hi,
If it still persists, just add a newline character around the SOL line in the topol.top file. Sometimes gmx behaves in a weird manner when reading SOL.
HTH.
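For reference, the [molecules] section that genion looks for normally sits at the very end of topol.top and looks roughly like the hedged sketch below; the molecule names and counts are made up for illustration, and the SOL count must match the number of water molecules actually present in the solvated coordinate file:
    [ molecules ]
    ; Compound        #mols
    Protein_chain_A   1
    SOL               10832   ; hypothetical count, must match the solvated .gro file
    NA                10
    CL                10
If such a block is present but the error persists, check that the section header and the SOL line are spelled exactly like this and terminated by newlines, as suggested in the answer above.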
  • asked a question related to Computational Complexity Theory
Question
14 answers
Should these polynomials be defined up to a bounded number of terms in their Taylor series?
Relevant answer
Answer
The question about avoidance of intersection of trajectories in state space has a different meaning for infinitesimal numbers and for bounded rationals. In the first case, distances among trajectories are infinitesimally small. This, however, implies that the energy/matter/information involved in the avoidance of any intersection is ill-defined, and since infinitesimal numbers involve infinite energy/matter/information for their execution, it turns out that the notion of distance becomes compromised. On the other hand, a bounded dense transitive set of trajectories provides a bounded, well-defined distance between trajectories, even though they intersect. The tricky point is that this bounded set has no dynamical center, because of the perpetual motion with bounded velocity, although it has a geometric one according to the Weierstrass theorem.
  • asked a question related to Computational Complexity Theory
Question
8 answers
I am looking for some analytical, probabilistic, statistical or other way to compare the results of a number of different approaches implemented on the same test model. These approaches can be different optimization techniques implemented on a similar problem, or different types of sensitivity analysis implemented on a design. I am looking for a metric (generic or application-specific) that can be used to compare the results in a more constructive and structured way.
I would like to hear if there is a technique that you use in your field, as I might be able to derive something for my problem.
Thank you very much.
Relevant answer
Answer
Hi! You might want to have a look at one of my publications: 10.1016/j.envsoft.2020.104800
I recently conducted a similar study where I applied three different sensitivity analysis methods to fire simulations and compared their results!
Cheers!
  • asked a question related to Computational Complexity Theory
Question
4 answers
Hello,
I am currently designing a toy model to calculate interatomic potentials; this toy model needs to scale better than a DFT simulation. Sadly, I have little knowledge of DFT simulations. I have an infinite repeating lattice (which I of course limit to just a few surrounding cells).
I found a paper online (https://www.nature.com/articles/nphys1370) and according to it DFT scales as O(N^3) (so cubically with the number of particles), but one thing is not entirely clear to me. Does the DFT simulation scale cubically with the number of particles in the unit cell, or with all the particles in the simulation (including the ones that are part of the boundary condition)?
If you know the answer I would love to hear (I might be understanding/reading it wrong)!
Relevant answer
Answer
In DFT, since all the so-called Kohn-Sham (KS) integrals are performed over functionals of the charge density, the size of the computation scales as O(N^3). Here, N represents the system size, which can be the number of atoms, electrons or basis functions. In practice, the exact value of N depends on the method used for solving the KS integrals. For example, if the so-called all-electron method is used, N would be the number of atomic-orbital basis functions used to construct the wave functions within the unit cell. On the other hand, for pseudo-potential methods (in which the contribution of core electrons is neglected), the number of valence electrons of each atom within the unit cell is relevant for N.
  • asked a question related to Computational Complexity Theory
Question
6 answers
Can complexity theory completely or partially solve problems in math?
Relevant answer
Answer
I read your notes, but I got nothing! You described some open problems. Complexity theory is useful in the presence of an algorithm to tackle the problem.
First, you need to show a concrete theory and then build your algorithm with a suitable time complexity to support your proofs.
We have nothing to do with complexity in the absence of the theory.
Best regards
  • asked a question related to Computational Complexity Theory
Question
2 answers
Please, can anyone contribute ideas on how I can use DEA to solve graph-algorithm problems such as network flow, project management, scheduling, routing, etc.?
Mainly, I need information on how to identify the input and output variables in these kinds of problems (where there is no complete knowledge of the I/O).
I think I can identify my DMUs.
I shall be glad to receive contributions on the appropriate general DEA model approach for solving combinatorial optimization problems of this kind.
Thanks
Relevant answer
Answer
DEA is generally applied to assess the relative performance of a set of decision-making units (DMUs) that consume inputs to produce outputs under a similar production technology. This may be valid whenever the systems under consideration fit within such a structure. In graph-related problems such as scheduling, routing, etc., the objectives are completely different. Although there is a flow of material over the network, which may suggest that a node can be assimilated to a DMU, here we are more concerned with finding an optimal route that satisfies constraints that may be as complex as the practical problem under study. In large-scale problems, one may think of DEA for building clusters of nodes so as to reduce the problem size and, hence, the related computational cost. I think this aspect is worth investigating.
  • asked a question related to Computational Complexity Theory
Question
4 answers
I've recently read about a Reinforcement Learning (RL) agent with an LSTM controller overseeing an LSTM path integration module receiving occasional visual input from a CNN (Banino et al., 2018).
Does the functionality gain of combining different NNs eventually flatten out? Is model standardization, bringing an air of component-based development (CBD) into NN architectures, for the best? Or are end-to-end implementations with higher integration values to be preferred?
Relevant answer
Answer
I agree that handcrafting components is going to be critical in the future. I think creating hybrid networks with both hand-crafted and learned components is going to be the next step. Since neural nets are too rigid post-training, hand-crafting some dynamic components would be the most ideal way to truly instill this.
  • asked a question related to Computational Complexity Theory
Question
7 answers
I need to know the computational complexity of two operations in terms of Big O notation:
(i) Elementwise division of two NxM matrices
(ii) Elementwise multiplication of two NxM matrices
In both cases N=M/2
Relevant answer
Answer
If you are really interested in the properties of algorithms, perhaps you should consult the bible of algorithms: The Art of Computer Programming by Donald Knuth. It's all there! ;-)
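For what it is worth, a short sketch of the counting argument (my own note, not part of the answer above): an elementwise operation touches each of the N·M entries exactly once, so both elementwise division and elementwise multiplication cost O(N·M) scalar operations; with the stated relation N = M/2 this is O(M^2), equivalently O(N^2). The NumPy calls below are only there to illustrate the entrywise nature of the operation.
    import numpy as np

    M = 8
    N = M // 2                       # the question states N = M/2
    A = np.random.rand(N, M)
    B = np.random.rand(N, M) + 1.0   # avoid division by zero

    C = A * B   # elementwise product: one multiplication per entry -> O(N*M)
    D = A / B   # elementwise division: one division per entry      -> O(N*M)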
  • asked a question related to Computational Complexity Theory
Question
5 answers
I have some idea about the Big O notation. Please direct me to any article or tutorial so that I can derive the computational complexity of any algorithm on my own
Relevant answer
Answer
This is not my field of research but I chanced on some useful resources on computational aspects in statistical processing as well as on approximate signal processing. All the best in your project.
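As a small worked example in the spirit of the question (my own illustration, not from the resources mentioned above): to derive the complexity of a concrete algorithm, count how often its innermost work is executed as a function of the input size n.
    # Counting pairs (i, j) with i < j in a list of length n.
    def count_ordered_pairs(xs):
        n = len(xs)
        count = 0
        for i in range(n):             # executes n times
            for j in range(i + 1, n):  # executes n-1-i times for each i
                count += 1             # total: n(n-1)/2 executions
        return count
    # n(n-1)/2 grows like n^2/2, so T(n) = O(n^2): quadratic time.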
  • asked a question related to Computational Complexity Theory
Question
3 answers
Does anybody know of an optimization tool which has a built in spatial branch and bound solver?
Relevant answer
Answer
Many nonconvex MINLPs can be solved easily nowadays with spatial branch-and-bound solvers.  For a recent review of optimization solvers that can solve this class of problems, see Kılınç, M. and N. V. Sahinidis, State-of-the-art in mixed-integer nonlinear programming, in T. Terlaky, M. Anjos and S. Ahmed (eds.), Advances and Trends in Optimization with Engineering Applications, MOS-SIAM Book Series on Optimization, SIAM, Philadelphia, 2017, pp. 273-292.
For freely available tools through the NEOS server, see https://neos-server.org/neos/solvers/index.html#minco
For some recent comparisons of MINLP solvers, see http://plato.asu.edu/ftp/minlp.html
  • asked a question related to Computational Complexity Theory
Question
2 answers
Is there any definitive relationship between the two under certain circumstance?
Relevant answer
Answer
Thanks for your attention to my questions. As a follow-up message, I report that I now have the answer to my earlier questions. It is not surprising that there are optimization problems that are neither in APX nor strongly NP-hard. In fact, there is no close connection between the separation boundary of strongly/weakly NP-hardness and that of approximability/in-approximability. For those who would like to have more details, please get in touch via individual communications.
  • asked a question related to Computational Complexity Theory
Question
6 answers
Given a graph G and a finite list L(v) ⊆ N for each vertex v ∈ V, the list-coloring problem asks for a list coloring of G, i.e., a coloring f such that f(v) ∈ L(v) for every v ∈ V. The list-coloring problem is NP-complete for most graph classes. Can anyone please provide the related literature in which the list-coloring problem has been proved NP-complete for general graphs using a reduction (from a well-known NP-complete problem)?
Relevant answer
Answer
Yes Sir, that is the trivial way of doing this.  Thanks
  • asked a question related to Computational Complexity Theory
Question
7 answers
We know there is an elementary cellular automata (ECA) with 2 states (Rule 110) that is universal, i.e. Turing-complete.  One-way cellular automata (OCA's) are a subcategory of ECA's where the next state only depends on the state of the current cell and one neighbor.  I believe that one can make a universal OCA with 4 states pretty easily by simulating two cells of Rule 110 with each cell of the OCA.  Does anyone know if it can be done with only three states?  Thanks!
Relevant answer
Answer
You might find this very interesting! I think it might be close to what you have been asking for. http://www.wolframscience.com/prizes/tm23/solved.html
  • asked a question related to Computational Complexity Theory
Question
2 answers
Is it possible to compare the evapotranspiration from the SEBAL algorithm with the potential evapotranspiration from FAO Penman-Monteith?
Relevant answer
Answer
Thanks for your useful help.
  • asked a question related to Computational Complexity Theory
Question
6 answers
Suppose I have to solve a problem which consists of 2 NP-complete problems.
Now I would like to know what will be the complexity class for the new problem?  
Can any one suggest me any paper regarding this topic? 
Thank you in advance. 
Relevant answer
Answer
That depends on what "more" means. :-) If "more" is a constant (finite) number then it means that you solve a finite number of NP-complete problems - which is still NP-complete.
  • asked a question related to Computational Complexity Theory
Question
2 answers
A TQBF is a boolean formula with alternating existential and universal quantifiers. The boolean formula here is in conjunctive normal form (CNF).
Relevant answer
Answer
Yes! My question was regarding the "exponential" with respect to the number of variables. (The question arose when I was trying a problem reduction from QBF.)
I understood that the question was irrelevant in any sense, as the number of disjunctive clauses in the CNF depends on the atomic propositions. I was about to delete the question and then saw your answer. Thank you for considering my question and taking the time to answer it.
  • asked a question related to Computational Complexity Theory
Question
6 answers
The goal of computational complexity is to classify algorithms according to their performance. If a problem is complex, with multi-dimensional variables, will that affect the performance of the algorithm? Consider this:
     T(n) with the "big-O" notation is used to express an algorithm's runtime complexity. For example, the statement
 T(n) = O(n²)
says that the algorithm has quadratic time complexity.
Relevant answer
Answer
Complexity increases as source data integrity and linearity becomes compromised or more complex in itself, thereby changing the "fit". It also increases with increased definition of actionable content within the data.
  • asked a question related to Computational Complexity Theory
Question
3 answers
In what sense is "irreducibly complex" synonymous with "NP-complete"?
In what sense is "complicated" just "computationally-intensive but solvable"?
Relevant answer
Answer
It appears to me that, in the paper you cited, Chaitin uses the phrase "irreducibly complex" as a synonym for "algorithmically random." This is quite different from computational complexity notions such as NP-completeness. Algorithmic randomness is defined in terms of Kolmogorov complexity. The prefix-free Kolmogorov complexity K(s) of a binary string s is defined to be the length of the shortest program that outputs s. Prefix-free means that no program is a prefix of another program. We assume a fixed optimal computer; changing optimal computers changes the Kolmogorov complexities of strings, but the change is bounded by a single constant for all strings. Results in Kolmogorov complexity are usually stated "up to an additive constant", just as results in analysis of algorithms are usually stated "up to a multiplicative constant" (big-O notation).
An infinite binary sequence x is algorithmically random if there is a constant b such that for all n, K(x[n]) >= n - b, where x[n] is the initial segment of x consisting of the first n bits of x. This means that the initial segments of x cannot be described using short programs: the programs must be almost as long as the strings themselves (shorter only by the additive constant b).
Algorithmically random sequences cannot be computable. A computable sequence x can be described by a program of finite length, so an initial segment x[n] can be described by the finite program plus a description of n, which can be given using fewer than 2log(n)+O(1) bits; this implies x is not algorithmically random. However, although algorithmically random implies not computable, the converse implication doesn't hold. For instance, if x = x0, x1, x2, ... is algorithmically random, then x0, 0, x1, 0, x2, 0, ... is not computable, yet it is not algorithmically random either, since an initial segment of length n only contains n/2 bits of information. Algorithmically random sequences are unpredictable in the sense that no program can predict the bits of the sequence with better than 50% accuracy.
  • asked a question related to Computational Complexity Theory
Question
1 answer
I am studying in detail one-way functions and in the standard literature it is known that there are 3 different kinds of these functions: deterministic, weak and strong. I was wondering if there is another kind of functions in classical literature? 
Relevant answer
Answer
In the classical literature, I do not know of another kind of one-way function.
  • asked a question related to Computational Complexity Theory
Question
17 answers
In trying to approximate an answer to an NP-complete problem, a heuristic (i.e. particle swarm, genetic algorithm) is used. Is there a study on what standard heuristic(s) is/are used in approximating NP-complete problems?
Relevant answer
Answer
In my view particle swarm and genetic algorithms are simply elaborate ways to explore some of the solution space (i.e. a fraction of the feasible region). There is very little theory guiding this search -- which is fine if you have absolutely no idea about how to tackle your underlying problem. But most of us, once we explore a problem, begin to gain some appreciation for the multivariate space which contains the solutions. Much better then, as hinted in the other answers, is to formulate easier-to-solve special cases, relaxations, etc. In other words exploit / explore the structure of the problem. That would also seem to lead to a better chance of having a publishable result -- e.g. as a result of finding a new facet defining inequality for example (basically chopping off more infeasible solutions). Good luck and let us know what you decide as a strategy!
  • asked a question related to Computational Complexity Theory
Question
5 answers
Let us assume a connected graph with a large number of nodes and no weights on the edges. Also assume that the entire information is not available (i.e., the adjacency matrix is not available). Each node has information about its neighbors only. Now I want to find a path to the destination. How do I find one path which is shortest among the various possible paths?
Relevant answer
Answer
In addition, consider searching from both sides. When two opposing vertices from different starting points meet, you can stop. For example, you can start from one side counting with positive numbers 1, 2, 3, 4, ... and from the other side with -1, -2, -3, -4, ... When an edge appears with a positive and a negative node on either side, you can construct the path by stepping back in descending and ascending order.
You can also use 2 marks instead of a single positive/negative.
Regards,
Joachim
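As a concrete companion to the description above, here is a minimal sketch of bidirectional breadth-first search (my own illustration, not Joachim's code), assuming neighbour information is available locally; a plain adjacency dict stands in for that local knowledge. This simplified version stops at the first meeting node, which yields a valid and usually shortest path; a careful implementation finishes the current level and keeps the meeting node with the smallest combined distance to guarantee minimality.
    from collections import deque

    def bidirectional_shortest_path(adj, source, target):
        """Unweighted path search by expanding BFS frontiers from both ends."""
        if source == target:
            return [source]
        par_s, par_t = {source: None}, {target: None}   # parent maps, also mark visited nodes
        q_s, q_t = deque([source]), deque([target])
        while q_s and q_t:
            # expand one full level of the smaller frontier
            if len(q_s) <= len(q_t):
                frontier, parents, other = q_s, par_s, par_t
            else:
                frontier, parents, other = q_t, par_t, par_s
            for _ in range(len(frontier)):
                u = frontier.popleft()
                for v in adj[u]:
                    if v not in parents:
                        parents[v] = u
                        frontier.append(v)
                    if v in other:              # the two searches met at v
                        return _join(v, par_s, par_t)
        return None                             # no path exists

    def _join(meet, par_s, par_t):
        left = []
        u = meet
        while u is not None:                    # walk back to the source
            left.append(u)
            u = par_s[u]
        left.reverse()
        u = par_t[meet]
        while u is not None:                    # walk forward to the target
            left.append(u)
            u = par_t[u]
        return left

    adj = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3, 5], 5: [4]}
    print(bidirectional_shortest_path(adj, 1, 5))   # e.g. [1, 2, 4, 5]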
  • asked a question related to Computational Complexity Theory
Question
11 answers
Consider the decision problem where one is given a directed graph G=(V,A), a root r ∈ V, and a subset of its arcs T ⊆ A.
The problem consists on deciding whether there is a subgraph G'=(V,A') of G such that:
a. All vertices in V are reachable from r in G';
b. T ⊆ A', ie, G' contains all arcs in T; and
c. G' has no cycles, ie, G' is a DAG.
As an example, the answer is 'yes' for the graph attached with r=0 and T=∅, but it is 'no' with r=0 and T={(3,0)}.
Could someone please tell if this problem is in P or NP-Hard (or...)?
It seems the problem is in P for the case T=∅ (a DFS will do). Also, the "undirected version" of the problem seems to be in P. I guess it could be solved by Kruskal's algorithm for MST, processing the edges in T first. However, I am not sure about the general, directed case. It may be worth pointing that we are looking for a DAG, and not necessary an arborescence.
Some related problems are the Feedback Arc Set problem and the Directed Steiner Tree problem, both NP-hard. Unfortunately I could not think of reductions from these problems to this one, mainly due to the fact that all vertices must be present and reachable.
Maybe I am just missing something...?
Thank you very much in advance,
Ricardo.
Relevant answer
Answer
Dear colleagues,
First, I must correct a mistake in my previous comment. Where it is written
“For each other component C, check if there is a vertex u of C such that there is an arc vu in G with v ∉ V(C), and such that your property holds for the component C taking u as root. If it holds for some u, it means that there is a DAG which contains all arcs of T ∩ E(C) and that all vertices of C are reachable from u. But it also means that there is a path from r to u.”
It should actually be read:
“For each other component C, let S(C) be the set of all vertices u of C such that there is an arc vu in G with v ∉ V(C), and let C' be the graph obtained by creating temporarily a supersource s_C with an arc to every vertex u of S(C). Then, check if your property holds for C' taking s_C as root. If it holds, it means that there is a DAG which contains all arcs of T ∩ E(C) and that all vertices of C are reachable from some vertex u of S(C). But it also means that all vertices of C are reachable from r, since there is a path from r to each vertex of S(C).”
The change above is necessary since there could be a case wherein all vertices of C are reachable from r, but still not reachable from a single vertex of S(C).
Let me clarify how you could check your property for C' taking s_C as root, given that C is strongly connected. Surely the arcs of T ∩ E(C) induce a DAG D, otherwise G would not have passed in the prechecking. If there is a path in C from some u in S(C) to some source x of D (a source of D is a vertex with indegree 0 in D) which passes by a vertex y ≠ x of D reachable from x in D, then this path cannot exist in the DAG we are willing to construct, for it would create a cycle. So, all you have to do is to check, for every source x of D, if there is a path in C' from s_C to x which does not pass by any non-source vertex y of D reachable from x in D. It is pretty easy to see that this can be done in quadratic total time: one could, for each source x of D, temporarily delete all vertices y reachable from x in D, and then run a DFS from s_C to check if x is still reachable from s_C in C' even after the deletion. Notice that, as C is strongly connected, only the sources of D need to be tested. Please do not forget to restore the deleted vertices after the DFS for each source x of D. I would not be surprised if the total time for all sources could also be linear, but I would need some further thinking to make up with something.
Marcelo, surely v is in an already verified component, since the components are taken following a topological sorting of the SCC-DAG of G.
Again, best regards.
  • asked a question related to Computational Complexity Theory
Question
4 answers
In terms of graph isomorphism complexity classes, Trees have a polynomial time algorithm and Directed Acyclic Graphs (DAG's) do not (so far).
What about Poly-trees (oriented trees)? These are DAG's where the possible paths from a node are all trees. Unlike tree nodes, Poly-tree nodes can have several parents.
Are polytrees in the isomorphism complexity class of DAG's or Trees, so far?
I have posted this question in mathoverflow and math.stackexchange with no answers so far.
Relevant answer
Answer
My first  thought: if two polytrees are isomorphic then their underlying trees must be isomorphic: this surely cuts the problem space down immensely.  I suspect that the methods which show that the isomorphism problem for trees is in P can be modified to work for polytrees too: something along the lines of 
a) find the center(s) of the underlying trees
b) consider the subtrees meeting at the center: either there are few of them, in which case divide and conquer again, or there are many, in which case the trees in question are small.
I don't immediately see how to implement this, but the fact that trees are in P strongly suggests to me that polytrees should be too.
Of course, you could just wait a little while and see if Babai and others manage to tighten his proof  that GI is almost in P to get it actually in P!
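For what it is worth, the "trees are in P" part rests on canonical forms; below is a small sketch of the classic AHU-style encoding for rooted trees (my own illustration, not part of the answer): two rooted trees are isomorphic exactly when their encodings match, and for unrooted trees one roots at the center(s) first. Whether and how this carries over to polytrees is exactly the open point of the thread.
    def canonical_form(tree, root):
        """AHU-style canonical encoding of a rooted tree given as an adjacency dict."""
        def encode(node, parent):
            children = [encode(c, node) for c in tree[node] if c != parent]
            children.sort()                  # the order of subtrees must not matter
            return "(" + "".join(children) + ")"
        return encode(root, None)

    # Two labelings of the same rooted tree get the same encoding:
    t1 = {1: [2, 3], 2: [1, 4, 5], 3: [1], 4: [2], 5: [2]}
    t2 = {9: [8, 7], 8: [9], 7: [9, 6, 5], 6: [7], 5: [7]}
    print(canonical_form(t1, 1) == canonical_form(t2, 9))   # True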
  • asked a question related to Computational Complexity Theory
Question
9 answers
for example, rough partial orders or rough pre-orders or rough binary relations? but also rough algebraic structures such as semigroups, monoids, etc?
Relevant answer
Answer
Thank you Dimiter, I see Rough Relations go back to Pawlak and there are several references in J. Stepaniuk, Rough Relations and Logics.
 I have been working on a version of rough graphs where you start with a graph and impose a kind of "equivalence relation on a graph" and I thought there must be similar things out there. My equivalence relations on graphs are relations in the sense of
Stell, J. G.(2015) Symmetric Heyting Relation Algebras with Applications to Hypergraphs. Journal of Logical and Algebraic Methods in Programming, vol 84 pp440-455.
where the relations are reflexive and transitive (in a straightforward way) but there is a weak kind of symmetry defined using the left converse operation in the above paper.
So I was especially looking for work which adopted the strategy that you get a "rough x" by finding a notion of "equivalence relation on an x" (or partition etc) and then get a way to approximate a "sub x" in terms of the "equivalence classes" which will themselves form an x and not merely a set. For x=hypergraph I think we can do all this.
  • asked a question related to Computational Complexity Theory
Question
1 answer
I started to study Kolmogorov complexity today, and this question came to mind. Is there any way to use LZW to do this? I'm looking for guidance for my studies.
Relevant answer
Answer
Mathematica claims to have the largest set of statistical tests.  I looked through Kolmogorov relatives and found nothing but data.  However, at the bottom of Mathematica pages related functions are suggested.  Here's a method that works with vectors: https://reference.wolfram.com/language/ref/SpearmanRankTest.html .  I suggest just looking through their functions as if it were a textbook.
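Since the thread is about using LZ-style compression to approximate Kolmogorov complexity, here is a minimal sketch of the usual trick (my own illustration, not from the answer above): the length of a losslessly compressed encoding is, up to an additive constant, an upper bound on the Kolmogorov complexity of the string, and compressor output length is often used as a practical stand-in (as in the normalized compression distance literature). zlib's DEFLATE is LZ77-based rather than LZW, so treat it as a stand-in for whichever compressor you prefer.
    import os
    import zlib

    def compressed_length(s: bytes, level: int = 9) -> int:
        """Length of the zlib-compressed encoding: a crude upper-bound proxy for K(s)."""
        return len(zlib.compress(s, level))

    low_complexity = b"ab" * 5000       # highly regular string, compresses well
    high_complexity = os.urandom(10000) # essentially incompressible with high probability

    print(compressed_length(low_complexity))   # small
    print(compressed_length(high_complexity))  # close to 10000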
  • asked a question related to Computational Complexity Theory
Question
4 answers
I read this article [1] and found an interesting algorithm, Space Saving (SS), for finding top-k and frequent items, or heavy hitters. The authors described a data structure called Stream Summary. The SS algorithm is very simple, but the Stream Summary data structure is a little bit difficult to implement. I am wondering whether it is possible to implement the SS algorithm without the Stream Summary data structure. Can someone guide me?
[1] Metwally, Ahmed, Divyakant Agrawal, and Amr El Abbadi. "Efficient computation of frequent and top-k elements in data streams." Database Theory-ICDT 2005. Springer Berlin Heidelberg, 2005. 398-412.
Relevant answer
Answer
.
By the way, the Stream Summary data structure is not so complicated; in order to understand it, you just have to remember the following facts:
  1. there are fewer distinct counter values than counters (and in the tail of the distribution, you may have a lot of counters sharing the same low values 1, 2, etc., so there may be far fewer distinct values than counters, especially for highly skewed distributions);
  2. items having the same counter value are treated in the same way by the algorithm (random tie-breaking for eviction of a minimum-value counter);
  3. the algorithm indeed only relies on a sorted list of values (sorting ex aequo counters is just a waste of time).
Therefore it makes sense
  1. to maintain a sorted list of values, and
  2. to attach a "bag" of items to their corresponding value,
... which is the basic structure of the Stream Summary
.
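On the "can SS be implemented without the Stream Summary" part: yes, at the cost of a slower eviction step. A hedged sketch of my own (not code from the paper) keeps the monitored counters in a plain dictionary and finds the minimum counter by scanning, which is O(k) per eviction instead of the O(1) the Stream Summary gives; a heap would bring this down to O(log k). The count and error bookkeeping follow the algorithm as described in Metwally et al.
    def space_saving(stream, k):
        """Space-Saving with a plain dict: counts[item] = (count, overestimation error)."""
        counts = {}
        for x in stream:
            if x in counts:
                c, e = counts[x]
                counts[x] = (c + 1, e)
            elif len(counts) < k:
                counts[x] = (1, 0)
            else:
                # evict the item with the minimum count (O(k) scan; Stream Summary makes this O(1))
                victim = min(counts, key=lambda i: counts[i][0])
                c_min = counts.pop(victim)[0]
                counts[x] = (c_min + 1, c_min)   # the new item inherits the minimum as its error
        return counts

    stream = list("abracadabra") * 3
    print(space_saving(stream, 3))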
  • asked a question related to Computational Complexity Theory
Question
11 answers
To know whether a problem is NP-hard or not is a delicate question. Although it is an important question for economics too, it is a kind of labyrinth for a non-specialist. Moreover, a small change of problem setting can change an NP-hard problem into a tractable problem. I want to know whether the problem defined below is NP-hard or not. It is a problem which lies between the (original) optimal assignment problem and the generalized assignment problem.
The optimal assignment problem can be formulated as follows.
Find an optimal assignment σ (a permutation of the set [N] = {1, 2, ..., N}) that maximizes the linear sum
             ∑_{i=1}^{N} a_{i,σ(i)},
where A = (a_ij) is a positive square matrix of order N. This problem can be reduced to the LP problem
      Maximize  ∑_{i=1}^{N} ∑_{j=1}^{N} a_ij x_ij                                                 (1)
under the conditions
             ∑_{j=1}^{N} x_ij = 1   ∀ i = 1, ..., N;
             ∑_{i=1}^{N} x_ij = 1   ∀ j = 1, ..., N;
             x_ij ≧ 0   ∀ i = 1, ..., N; j = 1, ..., N.
This is a classical LP problem. Birkhoff, von Neumann, Koopmans and others showed that this problem can be solved by the Hungarian method. The Birkhoff-von Neumann theorem assures that this LP problem is equivalent to the associated integer problem, i.e. the problem with the restrictions x_ij = 0, 1.
Now this problem can be generalized to the Generalized Assignment Problem (GAP), which is cited as an NP-hard problem. (See for example "Generalized assignment problem" in Wikipedia.) A GAP is formulated as follows:
      Maximize  ∑_{i=1}^{M} ∑_{j=1}^{N} a_ij x_ij                                       (2)
subject to
        ∑_{j=1}^{N} w_ij x_ij ≦ t_i   ∀ i = 1, ..., M;
        ∑_{i=1}^{M} x_ij = 1   ∀ j = 1, ..., N;
        x_ij ∈ {0, 1}   ∀ i = 1, ..., M; j = 1, ..., N,
where A = (a_ij) and W = (w_ij) are two positive rectangular matrices of size M × N and t = (t_i) is a positive vector of dimension M.
It is evident that problem (2) reduces to problem (1) when
             w_ij = 1  ∀ i, j   and   t_i = 1  ∀ i.
Now let us set a third problem, which is a maximization problem on a bipartite graph.
Let G = ([M] ∪ [N], E), where E ⊆ [M] × [N] (a bipartite graph). The problem is to
                        maximize    ∑_{(i, j) ∈ E} a_ij x_ij                 (3)
subject to
         ∑_{i: (i, j) ∈ E} x_ij = 1   ∀ j;
         ∑_{j: (i, j) ∈ E} x_ij = 1   ∀ i;
         x_ij ≧ 0   ∀ i, j.
If E is defined by
              (i, j) ∈ E  if and only if  w_ij ≦ t_i,
we see that problem (3) lies between problems (1) and (2).
I wonder whether this problem (3) is also NP-hard, or whether it has an algorithm which ends in polynomial time.
Relevant answer
I do not know the answer, but this is what I thought.
If you can build a deterministic algorithm to solve this problem in polynomial time, then it belongs to class P. If not, you can try to prove that this problem is NP-complete.
(Summarizing some theory:)
1. Prove that this problem is in NP.
a) Define a structure to represent a solution of this problem.
b) Build a non-deterministic Turing machine (randomized algorithm) to generate a solution.
c) Build a deterministic algorithm to verify whether the generated solution satisfies the problem.
2. Prove that this problem is NP-complete:
build a polynomial transformation from some problem π* ∈ NP-complete to your problem;
try to transform from SAT or TSP to this problem.
Also, I guess this is a combinatorial problem, and it looks like the Linear Ordering Problem (LOP).
I hope this helps in some way and you find an answer to your question.
  • asked a question related to Computational Complexity Theory
Question
5 answers
I created the generator matrix using the method in the paper "Efficient encoding of quasi-cyclic low-density parity-check codes". The parity bits of the generator matrix are decimal numbers. Why are the parity-bit parts of the generator matrix not 0 or 1?
I am focused on the LLR computation of the codeword for soft-decision LDPC decoding. Decimal numbers are difficult to turn into LLR information in a noisy channel.
Using EG(3,2^2) in GF(2^6), I create a type-I QC-LDPC parity matrix H=[H1,H2,H3,H4,H5] with five parallel line bundles.
Relevant answer
Answer
I am not sure what your question is. But, a decimal number might be the column shifting of an identity matrix  in the representation of a QC-LDPC parity matrix. For example,  9 means a cyclic shifting right  of the columns of an identity matrix by 9.
  • asked a question related to Computational Complexity Theory
Question
2 answers
When I solve a problem and want to compare it with the analytical solution, how do I know which "N" to take, as lambda scales with N?
Relevant answer
Answer
Even if you normalize them, the scaling will appear because of the non-uniformity of the grid. If you have 'N+1' points, you will get 'N+1' eigenvalues out of which 2/3rd of them will agree with the analytical solution and the remaining 1/3rd will scale as N^2 for a Laplacian.
I was able to work it out. Thanks for sharing though.
  • asked a question related to Computational Complexity Theory
Question
2 answers
There are plenty of other reference materials posted in my questions and publications on ResearchGate. I think it is not enough for someone to claim that the sequences I've found are pseudo-random, as their suggestion of a satisfying answer to the question here posed.
If indeed complexity of a sequence is reflected within the algorithm that generates the sequence, then we should not find that high-entropy sequences are easily described by algorithmic means.
I have a very well defined counter example.
Indeed, so strong is the example that the sound challenge is for another researcher to show how one of my maximally disordered sequences can be altered such that the corresponding change to measured entropy is a positive, non-zero value.
Recall that with arbitrarily large hardware registers and an arbitrarily large memory, our algorithm will generate arbitrarily large, maximally disordered digit sequences; no algorithm changes are required.  It is a truly simple algorithm that generates sequences that always yield a measure of maximal entropy.
In what way are these results at odds, or in keeping, with Kolmogorov?
Relevant answer
Answer
Anthony, there are other posts that I've made where it is determined that the sequence is a de Bruijn sequence. What is most important about my work is the speed with which such a sequence can be generated; for a 100-million-digit de Bruijn sequence, my software produces one in less than 30 minutes. Also, while random is not computable, it is also true that random is biased away from maximal disorder. Thanks for the reply.
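Since the thread is about generating de Bruijn sequences quickly, here is a compact sketch of the standard Lyndon-word ("FKM") construction for B(k, n); it is the textbook algorithm, not the (unpublished) method discussed above, and it runs in time linear in the output length.
    def de_bruijn(k: int, n: int) -> list:
        """Return one de Bruijn sequence B(k, n) over the alphabet {0, ..., k-1}, k >= 2."""
        a = [0] * (k * n)
        seq = []
        def db(t, p):
            if t > n:
                if n % p == 0:
                    seq.extend(a[1:p + 1])      # append the current Lyndon word
            else:
                a[t] = a[t - p]
                db(t + 1, p)
                for j in range(a[t - p] + 1, k):
                    a[t] = j
                    db(t + 1, t)
        db(1, 1)
        return seq

    s = de_bruijn(2, 4)
    print(len(s), "".join(map(str, s)))   # 16 digits; every 4-bit pattern occurs cyclically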
  • asked a question related to Computational Complexity Theory
Question
2 answers
I have gone through some papers where it is mentioned that weights are assigned using a semiring. How are these semirings formed, i.e., under which operations are they defined? Does it depend on the application or on something else?
Relevant answer
Answer
Oh, and concerning your question in the title: a weighted automaton is very much like a standard finite automaton. The only additional feature is that every transition has a weight (from the chosen semiring). The weight of a string is obtained by multiplying the weights of all the transitions that the automaton uses while reading this string. If the automaton is non-deterministic, there can be several possible computations (with possibly different weights) for the same input word. In this case, the resulting weights are summed up. This is how the two semiring operations, sum and product, are used.
For example: give every transition weight one. Then the weight of the word is the number of different accepting computations.
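A tiny sketch of the semantics described above (my own illustration): transitions carry weights from a semiring; the weight of a run is the product of its transition weights, and the weight of a word is the sum over all accepting runs. Using the ordinary (+, ×) semiring with all weights equal to 1 makes the automaton count accepting runs, as in the example in the answer.
    # Weighted automaton over the (+, *) semiring, given as:
    #   delta[(state, symbol)] -> list of (next_state, weight)
    def word_weight(delta, initial, finals, word):
        current = {initial: 1}                  # current[q] = summed weight of runs reaching q
        for symbol in word:
            nxt = {}
            for q, w in current.items():
                for q2, wt in delta.get((q, symbol), []):
                    nxt[q2] = nxt.get(q2, 0) + w * wt   # product along a run, sum over runs
            current = nxt
        return sum(w for q, w in current.items() if q in finals)

    # Nondeterministic example: two ways to read "ab" from state 0 into state 2, weight 1 each.
    delta = {(0, "a"): [(1, 1), (2, 1)], (1, "b"): [(2, 1)], (2, "b"): [(2, 1)]}
    print(word_weight(delta, 0, {2}, "ab"))   # 2 accepting runs -> weight 2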
  • asked a question related to Computational Complexity Theory
Question
3 answers
What is the best technique for Arabic named entity recognition: regular expressions, finite state transducers, CFG, HMM, or CRF?
Relevant answer
Answer
Thank you very much. I just wanted to enrich the discussion. For now, we are preparing a large-scale test collection from hadith books, to be announced on www.jarir.tn.
  • asked a question related to Computational Complexity Theory
Question
10 answers
Please, I have a question:
How will I know if an optimization problem is NP-hard or NP-complete?
(Many papers just say that the optimization problem they have is NP-hard and propose heuristics...)
Relevant answer
Answer
I know - I think there are too many papers that contribute only heuristic solution methods, with no quality guarantees. We DO want to find an optimal solution, and almost all heuristics provide a feasible solution with NO guarantee of being any good.
The basic technique to establish that an optimization problem is "difficult" in the language of complexity theory, is to show that you can convert (in polynomial time and space) a problem (problem P, say) that you already know is "difficult" to your problem, problem Q. We call it a "polynomial reduction", because we (1) state both problems P and Q in a common language, and (2) we show that any instance of the known difficult problem P can be converted in polynomial time AND space to our problem Q. We have therefore shown that among all problems in the class Q there exist difficult ones, and we are done. 
The perhaps first really good book that illustrates these conversions is by Garey and Johnson. Chapter one is here: http://udel.edu/~heinz/classes/2014/4-667/materials/readings/Garey%20and%20Johnson/Chapter%20One.pdf
  • asked a question related to Computational Complexity Theory
Question
11 answers
If a graph depicts connectivity, what does it tell us about the space it segments?
Relevant answer
Answer
Your question makes me think about mathematical morphology (the mathematical framework is based on lattices, but it is applied on grids and graphs). Connectivity is important, there are notions such as watersheds, it is related to segmentation (more precisely, it is a tool for some segmentation techniques), and so on. I give you the Wikipedia link, but there exist of course many academic sources (http://en.wikipedia.org/wiki/Mathematical_morphology).
  • asked a question related to Computational Complexity Theory
Question
2 answers
Hello,
Does anyone know what W[1]-hard means in the context of parameterized complexity?
Relevant answer
Answer
Thank you for your kind reply.
  • asked a question related to Computational Complexity Theory
Question
3 answers
I am aware that the minimum (cardinality) vertex cover problem on cubic (i.e., 3-regular) graphs is NP-hard. Say a positive integer k>2 is fixed. Has there been any computational complexity proof (aside from the 3-regular result; note this would be k=3) that shows the minimum (cardinality) vertex cover problem on k-regular graphs is NP-hard (e.g., 4-regular)? Since k is fixed, you aren't guaranteed the cubic graph instances needed to show the classic result I mentioned above.
Note that this problem would be straightforward to see is NP-hard from the result I mentioned at the start if we were to state that this were for any regular graph (since 3-regular is a special case), we don't get that when k is fixed.
Does anybody know of any papers that address the computational complexity of minimum (cardinality) vertex cover on a k-regular graph, when k is fixed/constant? I have been having difficulties trying to find papers that address this (in the slew of documents that cover the classic result of minimum (cardinality) vertex cover on cubic graphs being NP-hard.)
My goal is to locate a paper that addresses the problem for any k>2 (if it exists), but any details would be helpful.
Thank you so much!
Relevant answer
Answer
I see that there is a reduction proof on the CSTheory StackExchange website already. But if it is a reference that you need, here it is:
Fricke, G. H., Hedetniemi, S. T., Jacobs, D. P.,
Independence and irredundance in k-regular graphs.
Ars Combin. 49 (1998), 271–279.
Summary from MathSciNet: "We show that for each fixed k≥3, the INDEPENDENT SET problem is NP-complete for the class of k-regular graphs. Several other decision problems, including IRREDUNDANT SET, are also NP-complete for each class of k-regular graphs, for k≥6.''
Now, if the summary is correct, the authors prove that the decision version of the independent set problem is NP-complete for the class of k-regular graphs. Therefore, the optimization problem of finding a maximum independent set is NP-hard for the same class. And of course, the minimum vertex cover is the complement of the maximum independent set. Hope this helps.
  • asked a question related to Computational Complexity Theory
Question
96 answers
Does it imply that if the theory did not allow calculating values of the given quantity in reasonable time, then this theoretical quantity would not have a counterpart in physical reality? Particularly, does this imply that the wave functions of the Universe do not correspond to any element of physical reality, inasmuch as they cannot be calculated in any reasonable time? Furthermore, if the ‘computational amendment’ (mentioned in the paper http://arxiv.org/abs/1410.3664v1) to the EPR definition of an element of physical reality is important and physically meaningful, should we then exclude infeasible, i.e., practically useless, solutions from all the equations of physical theories?
Relevant answer
Answer
Charles, no, it is done for momentum measurement, I have given the formulas; of course, one needs the configuration of the measurement device as well as the full wave function for this too, and, again, this is not speculation but the prescription, the necessary consequence of the dBB equations, and this has been well known since Bohm's paper, because this extension from position measurement to all other quantum measurements was the main new result of Bohm's paper in comparison with de Broglie's original theory.
Only repetition of already rejected arguments, so it seems time to finish this discussion. It is already clear that you have no counterarguments against the points I have made, but are unwilling to accept anything. Bye.
  • asked a question related to Computational Complexity Theory
Question
2 answers
I need an example to find out how was this problem solved?
The equation is as follows:
" (∂⁴w/∂x⁴ − ∂⁴w₀/∂x⁴) + (P − (EA/2L)·∫(∂w/∂x − ∂w₀/∂x) dx)·∂²w/∂x² − f = 0 "
A reduced-order (RO) model results from the Galerkin decomposition, which is based on the following representation of the beam shape:
w(x) = qᵢ·ϕᵢ(x)
I attached the resulting equation.
Relevant answer
Answer
You can look at some of my papers on how the reduced model work.
  • asked a question related to Computational Complexity Theory
Question
5 answers
I need to know what is the order of complexity and also how to calculate it.
Thanks in advance.
Relevant answer
Answer
Thank you all for your help, but I think Dr. Breuer is right. My question was not clear enough! Please accept my apology.
Let me explain it, I am trying to solve a  linear equation Ax = B.
As matrix A is not a square matrix, I use x = (AᵀA)⁻¹AᵀB.
Different experiments result in different matrices A and B. Matrix A is always accurate, but matrix B is obtained from practical experiments and therefore is not accurate.
I know the accuracy of the answer depends on the condition number of the matrix AᵀA, and I am using the "cond()" command in Matlab to calculate it; I then want to use this number to find the most accurate answer.
Now I want to know the computational complexity of this procedure. How many operations are used to calculate it? (Matrix A is M by 2 and matrix B is M by 1.)
I hope this is clear enough.
Thanks in advance.
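A short sketch of the procedure being described, using NumPy (my own illustration, not the asker's MATLAB code). For an M-by-2 matrix A, forming AᵀA costs O(M) scalar operations (about 4M multiply-adds), conditioning/inverting the resulting 2-by-2 matrix is O(1), and forming AᵀB is again O(M), so the whole pipeline is linear in M; in practice a QR-based least-squares solve is numerically preferable to the explicit normal equations.
    import numpy as np

    M = 100
    A = np.random.rand(M, 2)                                     # accurate design matrix (M by 2)
    b = A @ np.array([1.5, -0.7]) + 0.01 * np.random.randn(M)    # noisy measurements (M by 1)

    AtA = A.T @ A                           # 2x2 normal-equations matrix, O(M) to form
    kappa = np.linalg.cond(AtA)             # condition number, O(1) for a 2x2 matrix
    x_ne = np.linalg.solve(AtA, A.T @ b)    # normal-equations solution
    x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)   # QR/SVD-based solve, numerically safer

    print(kappa, x_ne, x_ls)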
  • asked a question related to Computational Complexity Theory
Question
2 answers
Is the same problem NP-complete for strong edge colored graphs and proper edge colored graphs?
Definitions:
1) An edge coloring is 'proper' if each pair of adjacent edges have different colors.
2) An edge coloring is 'vertex distinguished' if no two vertices have the same set of colors of edges incident with them.
3) An edge coloring is 'strong' if it is both proper and vertex distinguishing.
Relevant answer
Answer
There is an obvious reduction from 3DM to the problem of finding a maximum heterochromatic matching in an edge-colored graph (represent the third gender in each triplet by colors).
Therefore, the problem of finding a maximum heterochromatic matching in an edge-colored graph is NP-complete.
Moreover, 3DM is NP-complete even when no two triplets intersect in more than one element. When we start from instances with this property, the reduction mentioned above yields only properly edge-colored graphs. Therefore, the same problem remains NP-complete for properly edge-colored bipartite graphs.
Assessing NP-completeness in the case of strong edge-colorings is more technical but  appears now to be downhill.
  • asked a question related to Computational Complexity Theory
Question
15 answers
The problem had better to be a problem in graph theory, with a tree as its input.
Relevant answer
Answer
Drawing trees hierarchically on a grid (with other aesthetics) as narrowly as possible is NP-complete. See: http://link.springer.com/article/10.1007/BF00289576
  • asked a question related to Computational Complexity Theory
Question
6 answers
The question is regarding NP problems in Data Structures.
Relevant answer
For instance, a polynomial reduction from SUDOKU into SAT is a function for computing in polynomial time a propositional formula from an incomplete grid of Sudoku such that the Sudoku grid is solvable iff the propositional formula is satisfiable.
"Reduction" is a metaphore because SUDOKU is then seen as a "sub-problem" of SAT. SAT is more general.
Thus, as SAT is NP, SUDOKU is also in NP.
Now, as SAT is also NP-hard, if you can build a polynomial reduction from SAT into, let say, COLORING, then it means that SAT is a "sub-problem" of COLORING. Thus, if SAT is already hard, COLORING has no choice to be harder than SAT. That is COLORING is NP-hard.
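As a concrete (and very standard) illustration of such a polynomial reduction, here is a small sketch of my own, in the spirit of the SUDOKU-to-SAT example above, encoding graph 3-COLORING into SAT: the variable for (vertex v, colour c) means "v gets colour c", the clauses say every vertex gets at least one colour (and at most one), and adjacent vertices never share a colour; the formula is satisfiable iff the graph is 3-colourable, and its size is polynomial in the graph size.
    def three_coloring_to_cnf(n_vertices, edges, n_colors=3):
        """Return CNF clauses (lists of signed ints) encoding c-colorability of the graph."""
        def var(v, c):                       # 1-based variable index for "vertex v has color c"
            return v * n_colors + c + 1
        clauses = []
        for v in range(n_vertices):
            clauses.append([var(v, c) for c in range(n_colors)])      # v gets some color
            for c in range(n_colors):                                 # at most one color
                for d in range(c + 1, n_colors):
                    clauses.append([-var(v, c), -var(v, d)])
        for (u, v) in edges:
            for c in range(n_colors):
                clauses.append([-var(u, c), -var(v, c)])              # endpoints differ on color c
        return clauses

    # Triangle: 3-colorable, so the resulting formula is satisfiable.
    print(len(three_coloring_to_cnf(3, [(0, 1), (1, 2), (0, 2)])))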
  • asked a question related to Computational Complexity Theory
Question
4 answers
From my own research, I reached the conclusion that information about the behavior of the coefficients of the series expansion of the Riemann Xi function could lead to a solution to the Riemann Hypothesis. The proof would rely on the convexity of the modulus (squared) of the Riemann Xi function (see attached article draft).
What is it known about these coefficients a2n, how fast do they decrease, or what is their general behavior for large values of n?
Relevant answer
Answer
Nice question. The first 30 coefficients are presented in the attachment. Since they are extremely complicated, I present only their numeric values. After a while they seem to decrease without a known pattern. I hope it will help.
  • asked a question related to Computational Complexity Theory
Question
4 answers
Hi there,
I do some research on approximation algorithms for quadratic programming. I try to optimize a quadratic function with a polytope as feasible set (a QP in standard form, to define it briefly). The matrix of the quadratic term would be indefinite in the general case.
I already know Vavasis' algorithm [1] to approximate global minima of such QP's is polynomial time (provided that the number of negative eigenvalues of the quadratic term is a fixed constant). Recently, I found an algorithm by Ye [2], which yields a guaranteed 4/7-approximation of the solution of a quadratically constrained QP. Ye developed his algorithm starting from a positive semi-definite relaxation of the original problem.
I wonder if there are PSDP relaxations of linearly constrained QP's that lead to similar approximation guarantees. Does anyone know at least one paper in which such a technique is posed?
[1] S. A. Vavasis, Approximation algorithms for indefinite quadratic programming, Math. Prog. 57 (1992), pp. 279-311.
[2] Y. Ye, Interior point algorithms: theory and analysis, Wiley-Interscience (1997), pp. 325-332.
Relevant answer
Answer
Here are some useful references:
1. Hoang Tuy, Convexity and Monotonicity in Global Optimization. In: Advances in Convex Analysis and Global Optimization, Nonconvex Optimization and Its Applications, Volume 54, 2001, pp. 569-594.
2. The chapters "6. Algorithms for Constructing Optimal on Volume Ellipsoids and Semidefinite Programming", "7. The Role of Ellipsoid Method for Complexity Analysis of Combinatorial Problems", and "8. Semidefinite Programming Bounds for Extremal Graph Problems" in the book N.Z. Shor, Nondifferentiable Optimization and Polynomial Problems, Nonconvex Optimization and Its Applications, Vol. 24, 1998, XVII, 396 p.
  • asked a question related to Computational Complexity Theory
Question
6 answers
Like x^r where both x and r are real numbers, in terms of the number of multiplications and additions required.
I will be very thankful for a detailed explanation or a link to the literature. BR
Relevant answer
Answer
Yes. For the complexity: as r tends to infinity, the value of int(r) also tends to infinity while fract(r) stays less than 1. So the complexity depends on int(r).
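A small sketch of the point being made (my own illustration): if x^r is evaluated by splitting r into its integer and fractional parts, the integer part can be handled with exponentiation by squaring in O(log ⌊r⌋) multiplications, while the fractional part is a single call to a fixed-precision routine (e.g. exp(fract(r)·ln x)), which costs O(1) at machine precision.
    import math

    def power_by_squaring(x: float, n: int) -> float:
        """x**n for integer n >= 0 using O(log n) multiplications."""
        result = 1.0
        while n > 0:
            if n & 1:
                result *= x
            x *= x
            n >>= 1
        return result

    def real_power(x: float, r: float) -> float:
        """x**r for x > 0 and r >= 0: integer part by squaring, fractional part via exp/log."""
        n = int(r)                  # int(r), as in the answer above
        f = r - n                   # fract(r), with 0 <= f < 1
        return power_by_squaring(x, n) * math.exp(f * math.log(x))

    print(real_power(2.0, 10.5), 2.0 ** 10.5)   # should agree closely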
  • asked a question related to Computational Complexity Theory
Question
5 answers
What are the general characteristics of a problem in PSPACE?
Relevant answer
Answer
I wouldn't say "why is PSPACE required". If you choose not to study it, that's entirely up to you; one wants to stand on the shoulder of giants when doing research. Theoreticians have figured out that problems sitting in PSPACE have certain properties. For example, problems in P and NP all are in PSPACE. It is of interest to those working on intractable problems that may be exhaustive in nature (may have an EXP like behaviour). Since we know EXPSPACE is not in PSPACE, showing it in PSPACE is very useful. Keep in mind that PSPACE means that there exists an algorithm for which the problem (its decision variant) can solved in a polynomial space with respect to the input size.
I think Timur's answer is also helpful.
  • asked a question related to Computational Complexity Theory
Question
2 answers
We use a reduction to solve a problem P1 using a problem P2, such that a solution of P2 is also a solution of P1.
In a transformation, a problem P1 is transformed into a simpler form so that solving P1 becomes easy.
So the solution set is the same in both cases.
Relevant answer
Answer
I'm not quite following. What you are saying in both cases is a reduction (at least to me), just rephrased. Let me expand on this:
In Theoretical Computer Science, we use this idea all the time. It is usually to illustrate the difficulty of solving one problem in relation to another; and provides a clear way to develop an algorithm.
Keep in mind that what I say below assumes the results are proven to be true as a premise:
1) What exactly do you mean by "simpler form"? Give a concrete example. Do you mean something like taking a graph and, using some intuition, making some kind of bipartite graph to say something about the original graph? I'd think something like this could technically be viewed as a reduction as well, since you just made a new instance that coincidentally may solve the same problem. Reductions can even occur within the same set of problem instances.
2) The solution set may not be the same between P1 and P2 because the instances are not really the same. Assuming you are talking about a reduction in the scope of something like an optimization problem, you would need to show how you take one instance of a problem and transform it into another one. Typically a reduction is taking an instance I of P1, creating an instance I' using I for problem P2, solving P2, then giving how to get the solution for I from the solution of I' (or their correspondence). In decision problems it is pretty straightforward (as all of them are 'yes'/'no' answers).
To me, it just sounds like a bit of different language. I've seen people call reductions other things like transformations. There are special kinds of reductions though, like a Karp reduction, or Turing reduction. At their heart is taking an instance, making a new instance from it, solving that new instance, and showing what it means for the original by proving a correspondence.
  • asked a question related to Computational Complexity Theory
Question
2 answers
In a data structure book I read that shell sort is the most efficient and the most complex of the O(n2) class of sorting algorithms.
Relevant answer
Answer
I guess you should start reading a different book. Catch a hold of Introduction to Algorithms by CLRS (search on google).
Apart from this, you can classify any algorithm according to two ways -
1. Ease of coding (complexity of coding NOT execution)
2. Complexity (efficiency) [1]
The most basic algorithms such as Bubble sort, Insertion sort or Selection sort are inefficient in nature (of the order of O(n^2) ) but very easy to code. Other algorithms such as Merge sort, Quick sort and heap sort can provide an efficient performance (of the order of O( n log n) ) in average or worst cases. However, they are trickier to code.
A little more efficient algorithms are Counting sort and Radix sort which provide O(n) performance for a trade off with memory space and several constraints. However, they are also not trivial to code.
Apart from these, other sorting techniques like Timsort, Shell sort, etc., are much less used because of difficulty in coding and almost the same performance as the other sorting techniques [2].
  • asked a question related to Computational Complexity Theory
Question
2 answers
see above
Relevant answer
Answer
Expand the standard reduction from 3-SAT to SUBSET SUM by adding an extra 1-bit column 2^c (in a way that 2^c doesn't interfere with the other bits); set to 1 the bit of that column for all the M addends of the subset sum; add M new dummy addends with only that 2^c bit set; and set as target sum the original target sum T plus M·2^c. As for the number of addends, pick K = M. In this way you can pick an arbitrary number of addends of the original subset-sum problem and use the dummy addends to reach exactly K addends (and the K·2^c part of the modified target sum T + K·2^c).
I hope that it is not a homework question ....
  • asked a question related to Computational Complexity Theory
Question
2 answers
I thought about an algorithm that I think solves 3 - SAT with high probability in polynomial time (DEA, the dual expression algorithm, see attachment). I have not been able to prove the polynomial time without any doubt, but arguments based on Markov chain theory indicate that fact. I would appreciate any suggestions. Is anyone interested in testing this algorithm?
Relevant answer
Answer
Thank you for your answer to my question from ResearchGate. Please read the last version of my article (SAT13.pdf posted on ResearchGate on my profile).
You assume that we update the x_i assignments every time we find a consistent active chain of dual variables. That is a false assumption.
We work with a solution path (which I also call the active chain of dual variables) through the 3-CNF clauses, and this active chain of dual variables is formed of 2-CNF clauses, only one for each 3-CNF clause. At any stage of the algorithm, this chain can have length N (the number of 3-CNF clauses) or less than N; this chain will grow in length, or decrease, depending on whether Test2CNF tells us that the current active chain of dual variables is consistent or not. I emphasize that we do not seek the solution in terms of the original variables unless we have all the 3-CNF clauses solved. That is the whole point: each state of the active chain of dual variables represents a whole class of x_i assignments. That is why, in a space of 2^n objects, we can find a certain object (the solution) in polynomial time with sufficiently high probability. My algorithm, if anything, is not simplistic, but it requires attention (and I do agree that it needs more work).
  • asked a question related to Computational Complexity Theory
Question
7 answers
Is there anything faster than solving the dual simplex problem?
Relevant answer
Answer
What is your metric for "fast". Time complexity or experimental results? That may help researchers answer your question.
  • asked a question related to Computational Complexity Theory
Question
2 answers
I have found a fast algorithm for the linear assignment problem, and I want to test it against other algorithms
Relevant answer
Answer
Thanks Bolivar,
but I want some code implementations to compare with my method, which I guess is O(n^2).
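If it helps as a baseline, SciPy ships a reference implementation of the linear assignment problem (a modified Jonker-Volgenant shortest-augmenting-path algorithm, roughly cubic in the matrix size); a minimal usage sketch with a made-up cost matrix follows, which you could time against your own O(n^2) method.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    rng = np.random.default_rng(0)
    cost = rng.random((500, 500))             # hypothetical dense cost matrix

    rows, cols = linear_sum_assignment(cost)  # optimal assignment (minimization)
    print(cost[rows, cols].sum())             # optimal total cost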
  • asked a question related to Computational Complexity Theory
Question
2 answers
For classes over NP (EXPTIME, EXPSPACE etc.) we define complete problems in terms of polynomial time reductions. I can understand that it is useful in cases of that class being equal to P, but it is highly unlikely. I think we should use corresponding limits for complexity classes. (polynomial space reduction for EXPTIME, exponential time reductions for EXPSPACE etc.). This will probably increase the size of complete problems for each class.
What do you think?
Relevant answer
Answer
In complexity theory, we consider a variety of notions of reduction. However, completeness is only meaningful if you use a class of reductions from a potentially smaller class. Every non-trivial problem in PSPACE is complete for PSPACE under PSPACE reductions: the reduction simply solves the problem in question and produces an instance with the same value. On the other hand, it makes sense to talk about problems complete for EXP under PSPACE reductions, although I do not know of any such problems that are not also known to be complete under P reductions.
  • asked a question related to Computational Complexity Theory
Question
6 answers
From what I understand, the fixed word recognition problem is the question: given any string from any essentially non-contracting, context-sensitive language, can this string be generated by some given grammar rule set? I also understand that this problem is PSPACE-complete. I'm looking for papers that are directly related to this problem, especially in relation to computational complexity, such as its PSPACE-completeness.
Relevant answer
  • asked a question related to Computational Complexity Theory
Question
3 answers
Adding or removing some nodes and links with out changing the maximum degree of G. Here the minimum function is linear with respect to its layer dimension.
Relevant answer
Answer
Example: 1. If G is a cycle of any finite length L, then we have to delete one vertex to obtain metric dimension d-1. 2. If G is a complete graph on N vertices, then we have to delete one vertex to obtain metric dimension d-1.
  • asked a question related to Computational Complexity Theory
Question
6 answers
I'm looking for weird and simple models of computation that can simulate a Turing machine with only a polynomial time slowdown. For example I know the 2-tag systems and bi-tag systems (Damien Woods and Turlough Neary, "Remarks on the computational complexity of small universal Turing machines", 2006). Do you know other models like those?
Relevant answer
Answer
Dear Marzio, I am not so sure what you are after: devices that can simulate a Turing machine with only a polynomial time slowdown look very much like TMs, so I do not think they are "weird". But there are devices like "programmed grammars with leftmost-3 derivation mode and unconditional transfer" that allow one to represent any recursively enumerable language, yet one can also show that there is no Turing machine that transforms such a device into an equivalent TM. That always looked quite weird to me, at least... (OK, I have worked with these devices (together with Frank Stephan), which always adds to the fun...)
Is this a sufficiently weird thought for your question?
Henning
  • asked a question related to Computational Complexity Theory
Question
6 answers
Flowshop scheduling complexity
Relevant answer
Answer
Hi Palakiti, I am sorry that I am not familiar with exact algorithms. Do heuristics (e.g., the NEH heuristic for PFSP) or metaheuristics (e.g. GA) fit your definition of "polynomial approx. algorithm"?
  • asked a question related to Computational Complexity Theory
Question
11 answers
I usually deal with MIP problems. I want to know which software is better, and in what aspects (such as solution time, nodes used, etc.), for MIP: the CPLEX solver in GAMS or IBM ILOG CPLEX?
Relevant answer
Answer
As far as I know, the solver (CPLEX) is the same in both suites. The main difference, to me, is the modelling tools around the solver in these suites. GAMS uses an AMPL-like modeling language, and CPLEX is one of the solvers it can use. OPL Studio and Concert Technology (IBM ILOG) use CPLEX (linear/integer/mixed-integer programming) or CP (constraint programming). Both CPLEX and CP belong to IBM. I think the choice should mostly depend on how familiar you are with one of those suites.