
# Theoretical Computer Science - Science topic


Questions related to Theoretical Computer Science

I have a new idea (by a combination of a well-known SDP formulation and a randomized procedure) to introduce an approximation algorithm for the vertex cover problem (VCP) with a performance ratio of $2 - \epsilon$.

You can see the abstract of the idea in the attached file, and the latest version of the paper at https://vixra.org/abs/2107.0045

I would be grateful for any informative suggestions.

We don't have a result yet, but what is your opinion on what it may be? For example: P = NP, P != NP, or P vs. NP is undecidable? Or, if you are not sure, it is perfectly fine to simply state: I don't know.

Given,

**p** is an automorphism of graph **G**. How do we verify it?

For example, one can use the adjacency matrix of **G**, but it does not always help. What are the other ways?
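Whatever invariant one uses, the definition itself can always be checked mechanically: p is an automorphism of G exactly when the adjacency matrix is invariant under relabelling by p. A minimal sketch (the 4-cycle and the permutations are my own toy example, not from the question):

```python
# Checking the definition directly: p is an automorphism of G iff
# A[p(u)][p(v)] == A[u][v] for every vertex pair (u, v).
def is_automorphism(adj, p):
    n = len(adj)
    return all(adj[p[u]][p[v]] == adj[u][v]
               for u in range(n) for v in range(n))

# Toy example: the 4-cycle 0-1-2-3-0. Rotating the cycle preserves adjacency;
# swapping only vertices 0 and 1 does not.
C4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]
print(is_automorphism(C4, [1, 2, 3, 0]))  # rotation -> True
print(is_automorphism(C4, [1, 0, 2, 3]))  # swap 0,1 only -> False
```

This brute-force check costs O(n^2) per candidate permutation; the hard part of the general question is finding automorphisms, not verifying them.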

Can we update the Turing test? It is about time. The Turing test, created in 1950, aims to differentiate humans from robots -- but using that test, we no longer can. Bots can easily beat a human in chess, Go, image recognition, voice calls, or, it seems, any test. We can no longer rely on the Turing test; we are not exceptional.

The relevant aspect of "playing better chess" is that chess is a model of a conversation, a give and take. It is unsettling that people have difficulty accepting this: passing as human is not the same as performing well in a conversation. Should a human find it "normal" that computers can frequently pass as a colleague, and not wonder about the intelligence of that colleague ... or smile? The Turing test has also become an intelligence test, and humans are using bots to beat humans, easily. This is another reason, in ethics, to deprecate this tool and look deeper.

See...

Preprint Consciousness: The 5th Dimension

Consciousness defies definition. We need to understand it, and we need a metric to measure it. Can trust provide both, even if in a limited fashion?


Given a tree or a graph, are there automatic techniques or models that can assign weights to nodes in the tree or graph, other than neural networks?

Given an undirected (weighted) graph, depicted in the attached diagram, which represents relationships between different kinds of drink, how best can I assign weights to the edges? Is there a technique I can use?

Are there techniques to automatically assign weights to the edges of weighted graphs, or to the links in a concept hierarchy? Assume that the scenario depicted here:

*https://cs.stackexchange.com/questions/90751/weight-assignment-in-graph-theory* is a form of a weighted graph. Are there ways weights can be assigned to each edge?

I would like to change the following linear programming model to restrict the decision variables to two integers, namely a and b (a<b):

minimize (1,1,...,1)' e

(Y-Zx) > -e

-(Y-Zx) > -e

where Y is an n-dimensional vector, Z is an n \times k matrix and x is a k-dimensional vector. e represents an n-dimensional vector of errors which needs to be minimized. In order to make sure that the x's can only take values equal to "a" or "b", I have added the following constraints, keeping the original LP formulation:

-a/(b-a) - (1/2)' + I/(b-a) x > -(E/(b-a) +(1/2)')

-(-a/(b-a) - (1/2)' + I/(b-a) x ) > -(E/(b-a) +(1/2)')

where I stands for a k \times k identity matrix and E is a k-dimensional vector of deviations which needs to be minimized (subsequently, the objective would be minimize (1,1...,1)' (e; E)).

But there is still no guarantee that the resulting optimal vector consists only of a's and b's. Is there any way to fix this problem? Is there any way to give a higher level of importance to the two latter constraints than to the two former ones?
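For what it's worth, no pure LP can guarantee x_i ∈ {a, b}: the feasible set of an LP is convex, so it contains every value between a and b. The standard exact remedy (a sketch, assuming a mixed-integer solver is available for the resulting model) is the substitution x_i = a + (b-a) z_i with binary z_i, which makes the restriction hold by construction rather than by penalty weighting:

```python
# Sketch of the exact change of variables x_i = a + (b - a) * z_i, z_i in {0, 1}.
# Every feasible x_i is then a or b automatically, so no constraint weighting is
# needed; the price is that the model becomes a MILP rather than an LP.
a, b = 2, 7  # example values with a < b (my own choice for illustration)

def x_from_z(z):
    """Map a 0/1 vector z to the corresponding decision vector x."""
    return [a + (b - a) * zi for zi in z]

print(x_from_z([0, 1, 1, 0]))  # -> [2, 7, 7, 2]
```

In the LP itself, every occurrence of x is replaced by a·1 + (b-a)·z, and z is declared integer with 0 ≤ z ≤ 1.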

I'm currently teaching our intro course on Theoretical Computer Science to first-year students, and I came up with the following simple proof that there are non-computable functions from **N** -> **N**:

It's a simple cardinality-based argument. The set of programs in any arbitrary language is countable: just enumerate by length-lexicographic order (shortest programs first, ASCIIbetically for programs of equal length). The set of functions from **N** -> **N** is uncountable (as the powerset of **N** is uncountable, in a pinch by a diagonalization argument, and hence the set of indicator functions alone is uncountable). So there are more functions than programs. Hence there are functions not computed by any program, hence non-computable functions.

My question: This is so simple that either it should be well known, or it has a fatal flaw I have overlooked. In the first case, can someone point me to a suitable reference? In the second: what is the flaw?
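The countability half of the argument can even be demonstrated concretely. A toy sketch of the length-lexicographic enumeration over a two-letter alphabet (any program alphabet works the same way, which is what makes the set of programs countable):

```python
from itertools import count, product

# Length-lexicographic ("shortlex") enumeration of all finite strings over an
# alphabet: first by length, then alphabetically within each length. Every
# string -- hence every program text -- appears at some finite position.
def shortlex(alphabet):
    for n in count(0):
        for tup in product(alphabet, repeat=n):
            yield "".join(tup)

gen = shortlex("ab")
print([next(gen) for _ in range(7)])  # -> ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']
```

The enumeration position of each string is the natural number witnessing countability.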

Looking through the literature, I realized all the proofs for NP-hardness of QIP are based on the claim that Binary Quadratic Integer Programming is NP-hard. Is that true?

For me, I am very sure it is solved. If you are interested, first download the program and run it. Then read my paper and think; then you may also be sure.

How to use the program

1. I believe that most people who download my program are professionals. So please leave a message with your contact details, and your opinions are welcome if you download my program. You can leave your message here or send it to my email: edw95@yahoo.com. Thanks a lot.

2. This program is an informal one, and it is not the quickest. But it includes my algorithm, it works correctly, and it works very well. It never fails.

3. How to use: if you have a 0-1 matrix representing a simple undirected graph with n vertices which has at least one Hamilton path from vertex 0 to vertex n-1, just press the “ReadMatrix” menu item to read and process it, then press the “Write the result” menu item to write the result to a new file; you will find a Hamilton path from vertex 0 to vertex n-1 in that file.

4. How to use: if you have an edge matrix representing a simple undirected graph with n vertices which has at least one Hamilton path from vertex 1 to vertex n, just press the “ReadEdges” menu item to read and process it, then press the “Write the result” menu item to write the result to a new file; you will find a Hamilton path from vertex 1 to vertex n in that file. If there is no such path, you get a message “no...”. The input file format is one edge per row, written “1,3” or “1 3”, meaning an edge from vertex 1 to vertex 3.

5. The maximum degree is 3. Though I am very sure my algorithm can handle undirected graphs of any degree, this program cannot. The maximum vertex number is 3000, because PC memory is limited.

6. I would like to thank Professor Alexander Chernosvitov very much. He and one of his students took a long time to write a program (different from mine) implementing my algorithm, and he gave me and my work a good review (see codeproject.com and researchgate.net). Thanks also to Mr. Xiaolong Wang. Before them, nobody trusted me. Some editors and reviewers, not smart enough, rejected me on this logic alone: for such a hard problem, Lizhi Du is not a famous man, so he cannot solve it. Some editors or reviewers do not use their brains and simply say: your paper is apparently wrong, or your paper cannot be understood. “Apparently wrong”, funny! I have studied it for many years, and it is “apparently wrong”! If a reviewer is really capable, uses his brain, and spends the time, he surely can understand my paper. If you think I am wrong, tell me where, and I will explain why it is not wrong. If you think my paper cannot be understood, tell me which part, and I will explain it. In my paper, in the Remarks, I explain how to understand my algorithm and proof. I think it is very clear.

7. I have studied this problem for many years and have put many versions of my paper on arXiv. Though the former versions had various problems, I am very sure the newest version of my paper is the final one and is surely correct. It may contain some small bugs due to my English, but this does not affect the correctness, and I can explain or revise such small bugs easily.

8. Surely I think I have proved NP=P and have solved the NP vs. P problem.

9. Thank you for spending your attention and time on my algorithm!

I want to build a kind of guessing game. I do not know the right name, but the concept of the game is: person 1 (P1) thinks of a name (of anything) and person 2 (P2) has to guess that name by asking as few questions as possible. For example:

p1: thinks something(Steve jobs)

p2: Are you talking about man?

p1: yes.

p2: Is he/she media guy?

p1: No

P2: is he tech personality?

p1: yes

p2: steve jobs.

p1: yes.

So P2 has to ask 4 questions. It could take even more, since the space of possible questions is unbounded. Now I want to model this scenario. My goal is to reduce the number of questions. Note that the number of candidate answers is limited, so the situation is not that broad.

I can think of a decision tree. But the question is, how can I decide where to split so that the length of each branch is small?

Any suggestion/reference will be appreciated.
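The usual greedy answer to "where to split" is the ID3-style rule: ask the yes/no question whose answer splits the remaining candidates most evenly, i.e. with maximal entropy, since each such question halves the search space in the best case. A minimal sketch (the candidate table and attribute names are my own toy data, not from the question):

```python
import math

# Toy candidate set: each name is described by yes/no attributes.
candidates = {
    "Steve Jobs":   {"man": True,  "media": False, "tech": True},
    "Oprah":        {"man": False, "media": True,  "tech": False},
    "Bill Gates":   {"man": True,  "media": False, "tech": True},
    "Ada Lovelace": {"man": False, "media": False, "tech": True},
}

def entropy(n_yes, n_no):
    """Shannon entropy of a yes/no split; maximal (1 bit) at a 50/50 split."""
    total = n_yes + n_no
    h = 0.0
    for n in (n_yes, n_no):
        if n:
            p = n / total
            h -= p * math.log2(p)
    return h

def best_question(remaining, questions):
    # Greedily pick the question with the most balanced split over the
    # remaining candidates (highest split entropy = highest information gain).
    return max(questions, key=lambda q: entropy(
        sum(remaining[c][q] for c in remaining),
        sum(not remaining[c][q] for c in remaining)))

print(best_question(candidates, ["man", "media", "tech"]))  # -> man
```

With a balanced split at every level, about log2(number of candidates) questions suffice, which is why this matches the decision-tree intuition above.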

Given a graph G and a finite list L(v) ⊆ N for each vertex v ∈ V, the list-coloring problem asks for a list-coloring of G, i.e., a coloring f such that f(v) ∈ L(v) for every v ∈ V. The list-coloring problem is NP-complete for most graph classes. Can anyone please provide literature in which the list-coloring problem has been proved NP-complete for general graphs using a reduction (from a well-known NP-complete problem)?

The attachment is my ENCUT test curve, in which the curve shows a parabolic shape rather than a tendency to flatten. Is the test result wrong? Can anybody tell me? Thank you all!

In particular, if yes, are all major algorithms like Apriori, FP-Growth and Eclat NP-hard, or only Apriori?

Who was the first to use the term mereotopology, and where in the literature does it appear first?

What do you consider the best quality journals to submit a theoretical rough sets paper to?

I was wondering whether it is possible to reach a consensus on, or to formalize, a general definition of the reachability property in graphs. If yes, what can its fragments be?

For instance: if I say that the following are sufficient to define reachability, would it be correct by construction? Or is there a way to formally prove that these are necessary and sufficient conditions?

Property 1: No self-connectivity: fragments should not be connected to themselves, to avoid infinite-length cycles in a Kripke structure.

Property 2: Link connectivity: two fragments or elements of the graph are reachable if they have link connectivity.

Property 3: Fixing source and destination, to avoid all other interpretations of logical reachability.

Property 4: Fixing the length bound, by fixing the number of intermediate nodes or fragments.

PS: please do not confuse bounded reachability with bounded model checking.

I have also uploaded the formal description of the properties. Which ones are necessary and sufficient, and is there a way to prove that they are necessary and sufficient?

For example, rough partial orders, rough pre-orders, or rough binary relations? And also rough algebraic structures such as semigroups, monoids, etc.?

I have a program with three classes (a machine class, a task class, and a main class). I want to assign tasks to VMs, and I want to say: if this VM was the last used VM, search again for a new VM.

my code is :

Task4.sortTList();
Task4 tempTask1 = Task4.taskList.getFirst();
Machine4.sortMList(tempTask1.getIns_count());
Machine4 mc1 = Machine4.machineList.getFirst();
mc1.getTasklist().add(tempTask1);

for (int ti = 1; ti < Task4.taskList.size(); ti++) {
    Task4 tempTask = Task4.taskList.get(ti);
    Task4 tempT = Task4.taskList.get(ti - 1);
    Machine4.sortMList(tempTask.getIns_count());
    Machine4 mc = Machine4.machineList.getFirst();
    // Note: testing getTasklist() == null and then calling getLast() on an
    // empty LinkedList is what throws the NoSuchElementException below;
    // an empty list is not null, so isEmpty() must be checked instead.
    if (mc.getTasklist().isEmpty())
        mc.getTasklist().add(tempTask);
    else if (mc.getTasklist().getLast() != tempT)
        mc.getTasklist().add(tempTask);
    else {
        // This machine ran the previous task, so give the task to the
        // next machine and restore the list order afterwards.
        Machine4.machineList.removeFirst();
        Machine4 second = Machine4.machineList.getFirst();
        second.getTasklist().add(tempTask);
        Machine4.machineList.addFirst(mc);
    }
}

but the Exception is:

Exception in thread "AWT-EventQueue-0" java.util.NoSuchElementException

at java.util.LinkedList.getLast(LinkedList.java:255)

Please help me.

I am working on the Dolphin data set. Two methods return different results: the method based on eigenvalues returns 15 as the number of connected components, while the method based on graph search (depth-first/breadth-first) returns 1. I am confused, since I am working on a clustering method to find communities in the data set. So which result is correct?
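One way to arbitrate between the two answers is to note that for an undirected graph the BFS/DFS count is exact by definition, while the spectral count (the multiplicity of eigenvalue 0 of the graph Laplacian) is exact only if near-zero eigenvalues are thresholded correctly. A minimal BFS cross-check on a toy graph (my own example, not the Dolphin data):

```python
from collections import deque

# Count connected components of an undirected graph by breadth-first search.
# For an undirected graph this count equals the multiplicity of eigenvalue 0
# of the Laplacian; a mismatch usually means the spectral method miscounted
# numerically tiny but nonzero eigenvalues as zeros (or vice versa).
def components(adj):
    seen, n_comp = set(), 0
    for start in adj:
        if start in seen:
            continue
        n_comp += 1                    # new, previously unvisited component
        queue = deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
    return n_comp

# Two edges {0,1} and {2,3}, plus an isolated vertex 4: three components.
adj = {0: [1], 1: [0], 2: [3], 3: [2], 4: []}
print(components(adj))  # -> 3
```

If this count disagrees with the eigenvalue method on the same adjacency data, the spectral tolerance (or a directed/undirected mismatch in the matrix) is the likely culprit.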

I have been developing the first-order reasoner RACE [1] for Attempto Controlled English ACE [2] that allows users to check the consistency of a set of ACE axioms, to deduce ACE theorems from ACE axioms and to answer ACE queries from ACE axioms.

RACE uses a set of auxiliary axioms to express context-independent knowledge like the relation between plural nouns and singular nouns, or the ordering relations of natural numbers. These auxiliary axioms are written in Prolog, which – having the power of a Turing machine – allows us to do practically any deduction. Thus the question is often not "Is this deduction correct?", but "Should RACE allow this deduction?".

In the following I would like to discuss a case where this question arises.

Using the power of Prolog I have extended RACE by auxiliary axioms that perform second-order deductions, concretely aggregation. Thus RACE can deduce

*John is a man. Johnny is a man. ⊢ There are two men.*

Adding a further axiom establishing that, in fact, Johnny is John, RACE fails.

*John is a man. Johnny is a man. Johnny is John. ⊬ There are two men.*

Thus I have a case of non-monotonic reasoning. (Note that RACE can still deduce that there is one man.)

My question to the community is "Should RACE allow for non-monotonic reasoning, or does non-monotonicity have consequences that could confuse RACE users in more complex cases?"

I would like to add “new lives” to Petri Net diagrams that were published between 1962 and 2014. The Petri Nets with “new lives” would combine the published Petri Net diagrams from the past with JavaScript code and supporting graphics to create interactive and dynamic Petri Nets in PDF. In other words, the revival creates token game versions of the Petri Net diagrams in PDF.

I am limiting the number of Petri Nets to a maximum of two per year.

**Question 1: Which Petri Net diagrams should I include? Why should I include them?**

Once I have finalized the list of Petri Net diagrams to revive, I hope to finish the work as soon as possible. So before I begin, I will be looking for volunteers who are interested in adding “new lives” to the Petri Net diagrams. I will create at least two token game versions myself. Thus I will be looking for a maximum of 102 volunteers, one person per token game version. If there are fewer than 102 volunteers, I will make up the difference.

**Question 2: Would you be interested in helping out? If so, please give me a shout.**

- john

What is a nuclear attribute? How does it differ from condition and decision attributes?

Assume we have two n-bit integers a and b, so the cost of a+b is n bit operations. Similarly, the cost of adding two 2n-bit integers is 2n bit operations. This is how we compute the cost of the operation in theory.

Is it also true in practice? I mean, maybe hardware engineers optimize the operation for large integers so as to speed up the performance of the machine.
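A rough way to see the practical picture is to time big-integer additions directly (a machine-dependent sketch; the timings it prints are illustrative, not guaranteed). Hardware adds a whole 64-bit word per instruction, so the linear n-bit cost is invisible for anything that fits in a few machine words and only emerges for integers spanning many words, as with arbitrary-precision ints:

```python
import timeit

# Compare adding a 1,000-bit and a 1,000,000-bit integer. The second should
# take roughly ~1000x longer per addition, matching the theoretical O(n) cost,
# because Python ints are stored as arrays of machine words added word by word.
for bits in (1_000, 1_000_000):
    a = (1 << bits) - 1                         # an n-bit integer, all ones
    assert (a + a).bit_length() == bits + 1     # sanity: n-bit + n-bit -> (n+1)-bit
    t = timeit.timeit(lambda: a + a, number=1_000)
    print(f"{bits:>9} bits: {t:.5f} s per 1000 additions")
```

For fixed-width types (e.g. 64-bit registers) the addition is a single-cycle instruction regardless of the value, which is the hardware optimization the question alludes to.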

After reading the result (attached) generated by running GAMESS, I found that it didn't run the TRANS program or the coupled cluster calculations.

Does anyone have experience with this? What is the input for the "TRANS" program?

Thank you.

In my view, a formal solution of ``P vs NP'' has been found, in the narrow sense.

I am aware that the minimum (cardinality) vertex cover problem on cubic (i.e., 3-regular) graphs is NP-hard. Say a positive integer k>2 is fixed. Has there been any computational complexity proof (aside from the 3-regular result; note this would be k=3) that shows the minimum (cardinality) vertex cover problem on k-regular graphs is NP-hard (e.g., 4-regular)? Since k is fixed, you aren't guaranteed the cubic graph instances needed for the classic result I mentioned above.

Note that while this problem would be straightforward to see is NP-hard from the result above if it were stated for arbitrary regular graphs (since 3-regular is a special case), we don't get that when k is fixed.

Does anybody know of any papers that address the computational complexity of minimum (cardinality) vertex cover on a k-regular graph, when k is fixed/constant? I have been having difficulties trying to find papers that address this (in the slew of documents that cover the classic result of minimum (cardinality) vertex cover on cubic graphs being NP-hard.)

**My goal is to locate a paper that addresses the problem for any k>2 (if it exists), but any details would be helpful.**

**Note: I also asked this question on CSTheory StackExchange. http://cstheory.stackexchange.com/questions/29175/minimum-vertex-cover-on-k-regular-graphs-for-fixed-k2-np-hard-proof**

Thank you so much!

I need to know the order of complexity and also how to calculate it.

Thanks in advance.

Is the same problem NP-complete for strong edge colored graphs and proper edge colored graphs?

Definitions:

1) An edge coloring is 'proper' if each pair of adjacent edges have different colors.

2) An edge coloring is 'vertex-distinguishing' if no two vertices have the same set of colors on the edges incident with them.

3) An edge coloring is 'strong' if it is both proper and vertex-distinguishing.

I am wondering if anybody can provide any handy resources (for a theoretical computer scientist) in relation to the convex cost flow problem? I have found texts (mostly my combinatorial optimization texts on my shelf), but they sparingly discuss the problem and its algorithmic properties, and the ones I've found so far take a very deep dive into it without explaining a whole lot or providing any examples. I get the formulation of the problem, but a bit more would be helpful.

I'm new to the problem, and wondering if anybody can suggest some good texts, or papers that do a good job of covering the problem, and some of the major algorithmic results for this problem (computational complexity, and algorithms primarily), or maybe applications of it being used to see how researchers have applied it to solve other problems in theoretical computer science, combinatorial optimization, or operations research.

If you have resources or suggestions, that would be helpful! Thank you so much, and have a beautiful day :).

Is there any way to construct an absolute execution flow for all programs, one that does not depend on the output? For example, if two conditional statements are used in a program, static analysis gives four possible execution paths, but dynamically only one is taken. So I want to know whether there is any way to build an absolute execution flow in the static analysis of a program, or any way to build a general execution flow of a program that is unaffected by the program's output.

When or where do we use both?

Thanks.

If I work with a quantitative method, should the output be a model? And if I use a mixed method, is a framework the suitable output? Is that right?

Some optimization problems are well known, but the instances or benchmarks used are not available. Generally, what can we do in this situation?

Most measures that I have seen, such as cosine similarity, measure similarity between two attributes. I'm not familiar with any measures that can be used (or extended) to compare the values of three or more attributes. Please comment.

We use reduction to solve a problem P1 using a problem P2, such that a solution of P2 is also a solution of P1.

In contrast, a problem P1 is transformed into a simpler form so that solving P1 becomes easy.

So the solution set is the same in both cases.

I am aware that a lot of work is going on regarding graphene materials. This material has good characteristics: it is rechargeable and consumes less energy.

Formal methods have mathematical foundations. They are based on algebra or finite state machines. Are they practical? Do they justify their costs?

Is it better to use lightweight implementations of formal methods in industrial projects to reduce costs and increase flexibility and practicality?

Nobody knows. It is known that we live in three dimensions: length, width and height.

In certain cases, for the description of complex processes, a considerably larger number of spatial dimensions is applied: four, six, ten, a hundred, a thousand, etc.

However, there is a contradiction between physical and geometrical measurements.

In the main classical cases, great researchers manage with only one dimension.

So, de Broglie's wave has only one dimension: length. But, after all, any physical object must have not only length but also height.

I am writing a new computing course and need some examples I can point students to. People talk about using functional programming, arguing that it allows programmers to build formally provable software. That is what I was told about 20 years ago, and I was wondering whether any real software has since been built in a functional language and then mathematically proven correct. Are there any examples I could point at? Particularly something people would have heard of, if possible.

Is there anything faster than the dual simplex method?

Say we have a set of variables X = {x_1, x_2, ..., x_n}. The goal is to sort the elements of X without knowing their values, using only a set of known inequalities such as (x_i < x_k). These inequalities come from a noisy source, so they contain wrong inequalities as well. Therefore we need an "approximate sorting algorithm". Can someone direct me to some sources of information on related topics?
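One simple baseline in this setting (my own illustration, not a definitive method) is a Copeland/Borda-style heuristic: rank each item by how many reported comparisons it "wins", which tolerates a small fraction of wrong inequalities because a single bad pair only shifts one count by one:

```python
from collections import Counter

# Approximate sorting from noisy pairwise comparisons: score each item by its
# number of reported "wins" (times it appeared as the larger element) and sort
# by that score. A few wrong inequalities perturb the scores only slightly.
def approx_sort(items, inequalities):
    # inequalities: list of pairs (i, k) meaning "x_i < x_k" was reported.
    wins = Counter()
    for smaller, larger in inequalities:
        wins[larger] += 1
    return sorted(items, key=lambda i: wins[i])

# True order a < b < c; the last reported pair ("c" < "a") is noise.
pairs = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "a")]
print(approx_sort(["a", "b", "c"], pairs))  # -> ['a', 'b', 'c'] despite the noise
```

The relevant literature keywords are "sorting with noisy comparisons", "rank aggregation" and the "minimum feedback arc set" formulation (delete the fewest inequalities to make the comparison graph acyclic).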

Organisation, in its omnipresence, preservation and dependence, is possibly the distinctive aspect of biological phenomena. Achieving a consensus about what organisation is, and how it can be used to generate biological explanations, would greatly benefit the development of (theoretical) biology.

General and scientific responses welcomed.

Some references point to Bellman's The Art of Dynamic Programming, but after a quick look, the book, under "Bottleneck problems", does not seem to contain an explicit description of the solution as is commonly seen in research papers or course material (such as Introduction to Algorithms by Cormen, Leiserson, Rivest and Stein) on the subject, unless I overlooked a few important equations.

Theta-combinator: \Theta \equiv (\lambda xy. y(xxy))(\lambda xy. y(xxy))

Y-combinator: Y \equiv (\lambda f. (\lambda x. f(xx))(\lambda x. f(xx)))

What's the difference between them? In what sense do you say Theta combinator is more powerful than the Y combinator?
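Both fixed-point combinators can be transcribed into a programming language to experiment with. A sketch in Python (note: Python is call-by-value, so the self-application must be eta-expanded to avoid infinite recursion, which strictly makes this the Z combinator, the call-by-value variant of Y):

```python
# Z combinator: the eta-expanded, call-by-value form of the Y combinator.
# Z f reduces to f applied to a delayed copy of itself, giving recursion
# without any named self-reference.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Factorial defined with no explicit recursion: the "rec" argument is the
# fixed point that Z supplies.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))  # -> 120
```

The Theta (Turing) combinator would be transcribed the same way; under eager evaluation it likewise needs eta-expansion before it terminates.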

In a reduction path, how can I distinguish beta-reduction from beta-equivalence of terms?

I need to measure a chaotic event based on entropy. I have the results of the experiment, but I see that entropy does not use real-time results to give clear information about the pattern formation; it is based on probability, if I have understood correctly. Does anybody have deeper information about this?

Go (the surrounding game) is an over 4,000-year-old strategy game between two players. It is played with black and white tokens (called stones) on a square board with a grid of intersections drawn on it. The goal of the game is to surround territory (intersections on the board) using the stones. You can also surround your opponent's stones to capture them and remove them from the board. The game ends when every intersection on the board clearly belongs to one player's territory, or when both players pass consecutively. The player that surrounds more territory than his opponent wins the game. Each surrounded intersection counts as one point of territory, and each captured stone reduces the opponent's territory by one point. The rules of Go are very simple and can be learned in just a few minutes, but perfecting your game might take a lifetime, as Go is very rich in strategy and the number of possible games outnumbers the number of atoms in the observable Universe.

Go being so complex, yet so simple, my question is whether computation (a universal Turing machine) can be simulated by it. If such a thing is possible, what about a Go game itself being simulated on such a machine?

The rules of Go:

The numerators of the dyadic rationals which approximate sqrt(2) to within precision 2^-k (within k bits of precision) can be given by a recursive function f: N -> N,

f(k) = least n [ n^2 > 2^(2k+1) ] for all k.

f(k):

2, 3, 6, 12, 23, 46, 91, 182, 363, 725, 1449, 2897, 5793, 11586, 23171, 46341, 92682, 185364, 370728,....

The idea of the function f is that for each k>0 it outputs the number of 1/2^k segments that fit into sqrt(2), plus one.

Since for each k, the value f(k) is the smallest integer such that f(k)/2^k > sqrt(2), we have

(f(k)-1)/2^k < sqrt(2) < f(k)/2^k for each k>0.

From this we obtain the error bound | f(k)/2^k - sqrt(2) | < 2^-k.

Conclusion:

Approximation of Sqrt(2) within precision 2^-k can be "represented" by a natural number f(k). To get the actual approximation by a dyadic rational just divide by 2^k.

Note:

This can be generalized to calculating the square root of any number and of any order.
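As an ordinary-computation sanity check before attempting a tiling construction (this is plain Python, not the requested Wang tile program), f can be evaluated via the integer square root: the least n with n^2 > m is isqrt(m) + 1.

```python
from math import isqrt

# f(k) = least n such that n^2 > 2^(2k+1), computed exactly with integer
# arithmetic: isqrt(m) is the largest s with s^2 <= m, so the answer is
# isqrt(2^(2k+1)) + 1.
def f(k):
    return isqrt(2 ** (2 * k + 1)) + 1

print([f(k) for k in range(10)])  # -> [2, 3, 6, 12, 23, 46, 91, 182, 363, 725]
```

The printed values reproduce the sequence listed above, and f(k)/2^k converges to sqrt(2) from above as the error bound predicts.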

QUESTION:

Can you give a Wang tile program for calculating the function f ?

The basic ingredients of an optimization problem are the set of instances or input objects, the set of feasible solutions or output objects associated with any instance, and the measure defined for any feasible solution. Most of the NPO problems that I know of have a discrete input space.

Can someone suggest a good syllabus for graph theory, advanced graph theory, and discrete mathematics for Computer Science and Engineering students?

Is it NP-complete? Hard to approximate? It's a maximum clique problem in a special graph.

The Halting problem can theoretically be solved using an Oracle machine, and so can N digits of Chaitin's Omega. However, all the digits of Chaitin's Omega cannot be decided by an Oracle in finite time, as the problem then takes on a discrete/real duality.

Additional comment: Robert Solovay seems to have written a paper, "When Oracles Do Not Help", that I was not able to find at the moment, but it would seem to indicate that such cases exist for computational learning through oracles/queries.

I'm currently writing a short essay on Formal Verification and Program Derivation, and I would like to include an example of an algorithm that seems to be correct under a good number of test cases, but fails for very specific ones. The idea is to present the importance of a logical, deductive approach to inferring the properties of an algorithm, in order to avoid such mistakes.

I can't quite think of a good example, though, because I'd like something simple that a layman can understand, but all the examples I came up with relate to NP-complete graph theory problems (usually an algorithm that is correct for a subset of the inputs, but not for the general problem).

Could someone help me think of a simpler example?
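One well-worn candidate, offered as a suggestion rather than anything from the question, is the integer-overflow bug in binary search's midpoint computation: `mid = (lo + hi) / 2` is mathematically correct and passes every ordinary test, yet fails once `lo + hi` exceeds the 32-bit range, i.e. only on arrays of a billion-plus elements. A sketch simulating 32-bit signed arithmetic in Python (which otherwise has unbounded ints):

```python
# The classic binary-search midpoint bug: (lo + hi) // 2 overflows 32-bit
# signed ints when lo + hi > 2**31 - 1, producing a negative "midpoint".
# Simulated here with explicit 32-bit wraparound.
def mid_overflowing(lo, hi):
    s = (lo + hi) & 0xFFFFFFFF        # keep only 32 bits, as C/Java int would
    if s >= 2**31:
        s -= 2**32                     # reinterpret the top bit as the sign
    return s // 2

def mid_safe(lo, hi):
    return lo + (hi - lo) // 2         # the standard overflow-free fix

lo, hi = 1, 2**31 - 1                  # a huge (simulated) array
print(mid_overflowing(lo, hi), mid_safe(lo, hi))  # -> -1073741824 1073741824
```

The appeal for a lay audience is that the algorithm is textbook-simple, the test suites all pass, and the failing input is easy to state but practically impossible to stumble on by random testing, which is exactly the case for deductive verification.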

This is a very interesting question: is the Internet representable as a Turing machine?

I'm doing research in formal languages and automata theory, in which I need to draw state diagrams for DFAs or NFAs. Currently I'm using MS Office PowerPoint tools.

I'm working on a problem where I would need an error correcting code of length at least 5616 bits (and preferably not much longer) that could achieve correct decoding even if 40% of the received word is erroneous. I have looked into some basic coding theory textbooks, but I have not found anything that would suit my purpose. Does such a code exist? If it does, what kind of code is it and can it be efficiently realized? If it doesn't, why not?

Any insight to the problem will be much appreciated.