Theoretical Computer Science - Science topic

Questions related to Theoretical Computer Science
  • asked a question related to Theoretical Computer Science
Question
6 answers
I ask because I recently learned that business is really not for me, and yet I don't want to be a teacher either.
I have a Ph.D. in graph theory and algorithms, and I really like doing theoretical research, but most theoretical research positions I can find require me to also lecture, and I've recently learned I'm terrible at that.
Relevant answer
Answer
Fabricio Kolberg, idle question: do you know what to change, and how?
  • asked a question related to Theoretical Computer Science
Question
1 answer
I would like to perform a literature review on augmented learning and learning-augmented algorithms for enhancing performance-guided surgery.
Relevant answer
Answer
1. **Define the Scope and Objectives**:
- Clearly define the objectives of your literature review. For example, you may want to focus on understanding the current state of research in using augmented learning and learning-augmented algorithms to enhance surgical performance and guidance.
- Determine the key aspects you want to cover, such as the specific applications of these techniques in the context of performance-guided surgery, the methodologies employed, the reported outcomes and benefits, as well as any challenges or limitations.
2. **Search and Gather Relevant Literature**:
- Identify relevant databases and search engines, such as PubMed, IEEE Xplore, ACM Digital Library, and Google Scholar, to search for peer-reviewed journal articles, conference proceedings, and other relevant publications.
- Use a combination of keywords, such as "augmented learning", "learning-augmented algorithms", "performance-guided surgery", "surgical guidance", "surgical decision support", etc., to conduct your searches.
- Ensure you include both recent and seminal publications in your search to capture the latest advancements as well as the foundations of the field.
3. **Review and Critically Analyze the Literature**:
- Carefully read and analyze the selected publications, focusing on the key aspects identified in the scope and objectives.
- Identify the main themes, methodologies, findings, and contributions reported in the literature.
- Assess the quality, validity, and reliability of the studies, and identify any gaps, inconsistencies, or areas that require further investigation.
4. **Synthesize the Findings**:
- Organize the literature review in a logical and coherent manner, potentially using a thematic or chronological approach.
- Synthesize the key insights, trends, and conclusions drawn from the literature, highlighting the potential applications, benefits, and limitations of using augmented learning and learning-augmented algorithms in performance-guided surgery.
5. **Identify Future Research Directions**:
- Based on your analysis of the literature, identify areas that require further research, such as specific surgical procedures or applications that could benefit from these techniques, methodological improvements, or the integration of these approaches with other emerging technologies.
- Provide recommendations for future research that could contribute to the advancement of this field and address the identified gaps.
6. **Structure and Write the Literature Review**:
- Organize your literature review into a well-structured document, including an introduction, background, review of the literature, synthesis of findings, and a conclusion.
- Use appropriate headings, subheadings, and transitions to ensure the flow and readability of your review.
- Properly cite the references using a consistent citation style, such as APA or IEEE.
Good luck; partial credit to AI for this answer.
  • asked a question related to Theoretical Computer Science
Question
1 answer
Why are top researchers all studying theoretical deep learning?
Relevant answer
Answer
Well, a "deep learning" search generates about 357,000,000 results (0.27 seconds).
On the other side, a "big data" search generates about 373,000,000 results (0.26 seconds), which is a slightly larger figure than the former.
So, people are interested in many current hot topics.
  • asked a question related to Theoretical Computer Science
Question
6 answers
Please, check my P=NP proof for errors:
Please reply with a comment if you find any errors and if you find none, too.
The proof uses logic (incompleteness of ZFC), algorithms accepting algorithms as arguments, reducing SAT to another NP-problem, inversions of bijections.
The proof does not present a practically feasible algorithm for an NP-complete problem (so I can't yet mine Bitcoin with it).
Relevant answer
Answer
I found an error in my proof.
  • asked a question related to Theoretical Computer Science
Question
1 answer
What problem is theoretical deep learning trying to solve?
Relevant answer
Answer
Tong Guo, theoretical deep learning is primarily focused on understanding the fundamental principles, limitations, and mathematical underpinnings of deep neural networks. It aims to address several key problems:
  1. Expressiveness and Representational Power: Theoretical deep learning investigates the expressive power of deep neural networks. It seeks to understand what functions can be approximated by deep networks, and how network architecture, depth, and width influence their representational capacity.
  2. Generalization: Generalization is a central problem in deep learning. Theoretical research aims to explain why deep networks generalize well to unseen data despite having many parameters. It explores concepts like overfitting, bias-variance trade-off, and the impact of network architecture on generalization.
  3. Optimization: Deep learning models are trained using optimization algorithms. Theoretical analysis delves into the properties of optimization landscapes, convergence guarantees, and the choice of optimization algorithms for different network architectures.
  4. Interpretability and Explainability: Understanding why deep networks make specific predictions is crucial. Theoretical work explores methods for interpreting and explaining the decisions made by deep models, especially in fields where model interpretability is critical, such as healthcare and finance.
  5. Robustness and Adversarial Attacks: Deep networks are vulnerable to adversarial attacks, where small perturbations to input data can lead to incorrect predictions. Theoretical research seeks to understand the causes of this vulnerability and develop methods for improving the robustness of models.
  6. Scalability: As deep learning models become larger and more complex, scalability becomes a concern. Theoretical research addresses issues related to training and deploying large models efficiently.
  7. Transfer Learning: Theoretical deep learning investigates how knowledge learned from one task or domain can be transferred to improve performance on related tasks or domains.
  8. Ethical and Fair AI: Theoretical research also delves into ethical considerations and fairness in deep learning, aiming to address issues related to bias, discrimination, and fairness in AI systems.
In summary, theoretical deep learning aims to provide a deeper understanding of the principles underlying deep neural networks, enabling researchers and practitioners to build more effective, interpretable, and robust AI systems. It serves as the theoretical foundation for the practical advances and applications of deep learning in various domains.
  • asked a question related to Theoretical Computer Science
Question
4 answers
The concept of a formal system, and its properties, appears frequently in many practical and theoretical components of computer science methods, tools, theories, etc.
But it is also frequent to find non-rigorous interpretations of "formal". For example, in several definitions of ontology, "formal" is understood as something that "a computer can understand".
Does a computer science specialist with a BSc need to know this concept? Are it and its properties useful to them?
Relevant answer
Answer
For me, some basic knowledge of formal systems is essential for undergraduate CS students, as it is a necessary part of a scientific education.
Without it, you might still learn a lot about software construction, but you would neither know nor understand its bedrock.
  • asked a question related to Theoretical Computer Science
Question
2 answers
Many publishers and journals display their average review time on their websites. However, the IEEE and ACM journals do not display this information. Does any of you have it?
Relevant answer
Answer
I agree with you that it depends on the editor and reviewers. However, wouldn't it be better to display this figure (as an average or an estimate) on the journal's website, so that authors can get a rough idea of the review time for an article?
  • asked a question related to Theoretical Computer Science
Question
2 answers
I have a new idea (by a combination of a well-known SDP formulation and a randomized procedure) to introduce an approximation algorithm for the vertex cover problem (VCP) with a performance ratio of $2 - \epsilon$.
You can see the abstract of the idea in attached file and the last version of the paper in https://vixra.org/abs/2107.0045
I would be grateful if anyone could give me informative suggestions.
  • asked a question related to Theoretical Computer Science
Question
62 answers
We don't have a result yet, but what is your opinion on what it may be? For example, P = NP, P != NP, or P vs. NP is undecidable? Or, if you are not sure, it is fine to simply state: I don't know.
Relevant answer
Answer
The answer is P=NP
  • asked a question related to Theoretical Computer Science
Question
5 answers
Given that p is an automorphism of graph G, how do we verify it?
For example, one can use the adjacency matrix of G, but that does not always help.
What are the other ways?
Relevant answer
Answer
If P is the permutation matrix corresponding to a permutation π of the vertex set of a graph G, and A is the adjacency matrix of the graph, then π is an automorphism of G iff PAPᵗ = A (where Pᵗ = P⁻¹ is the transpose and hence inverse of the permutation matrix P).
That is, π ∈ Aut(G) iff PA = AP: the permutation matrix commutes with the adjacency matrix.
This is probably what you have in mind.
But this has some other implications as well.
If x is any eigenvector of A corresponding to an eigenvalue λ, then:
APx = PAx ⇒
A(Px) = λ(Px) [since Ax = λx]
which means that Px (which, note, is non-zero) is also an eigenvector of A for the same eigenvalue. Thus, any permutation, when it acts on vectors via the permutation matrix, must permute eigenvectors in each respective eigenspace amongst themselves.
The converse is true too. Suppose the permutation matrix P maps each eigenvector x to another eigenvector corresponding to the same eigenvalue. Then P represents an automorphism of G. To see this, write A as
A = QΛQᵗ
where Λ is the diagonalisation of A [assuming G is an undirected graph and hence A is real and symmetric], and Q is the orthogonal diagonalising matrix whose columns are (independent) eigenvectors of A corresponding to the respective eigenvalues along the diagonal of Λ. Then
PAPᵗ = (PQ)Λ(PQ)ᵗ.
But PQ is also a matrix whose columns are eigenvectors of A corresponding to the respective eigenvalues along the diagonal of Λ. Since P and Q are orthogonal, so is PQ [so (PQ)ᵗ = (PQ)⁻¹]. Thus, the RHS of the above equation is A, and we have
PAPᵗ = A
which shows that P represents an automorphism of G.
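As a concrete illustration of the PAPᵗ = A test above, here is a minimal sketch (my own, assuming Python with NumPy; not from the original answer):
```python
import numpy as np

def is_automorphism(A, perm):
    """Check whether perm (mapping vertex j to perm[j]) is an automorphism
    of the graph with adjacency matrix A, via the test P A P^T == A."""
    n = len(perm)
    P = np.zeros((n, n), dtype=int)
    P[perm, np.arange(n)] = 1        # column j has its 1 in row perm[j]
    return np.array_equal(P @ A @ P.T, A)

# Example: the 4-cycle 0-1-2-3-0. The rotation [1, 2, 3, 0] preserves all
# edges, while swapping only vertices 0 and 1 does not.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
print(is_automorphism(A, [1, 2, 3, 0]))   # True
print(is_automorphism(A, [1, 0, 2, 3]))   # False
```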
  • asked a question related to Theoretical Computer Science
Question
13 answers
Can we update the Turing test? It is about time. The Turing test, created in 1950, aims to differentiate humans from robots -- but we cannot, using that test. Bots can easily beat a human in chess, Go, image recognition, voice calls, or, it seems, any test. We can no longer use the Turing test; we are not exceptional.
Relevant answer
Answer
The relevant aspect of "playing better chess" is that chess is a model of a conversation, a give and take. It is unsettling that people have difficulty accepting it; it is not a good performance in a conversation. Should a human find it "normal" that computers can pass as a colleague, frequently, and not wonder about the intelligence of that colleague... or smile? The Turing test has also become an intelligence test, and humans are using bots to beat humans, easily. This is another reason, in ethics, to deprecate this tool and look deeper.
  • asked a question related to Theoretical Computer Science
Question
38 answers
Consciousness defies definition. We need to understand it, and a metric to measure it. Can trust provide both, even if in a limited fashion?
Relevant answer
Answer
That Polanyi theorized some matters in error does not invalidate all of his effort.
In addition, we now know that free market economies, if they ever exist, are not wholly self-adjusting with respect to externalities, since free market economies are not closed systems in reality. That does not imply a single-authority system, so perhaps Polanyi should not be labelled as a binary thinker in this case.
It strikes me that Polanyi must certainly have meant truth in a fashion not corresponding to the Ed Gerck notion of truth being accessible to an AI.
I don't believe that successful chess-playing programs qualify as AI, although they do demonstrate that intelligence is not required to play chess successfully. I don't know about the more recent demonstration of Go mastery, but I suspect that it does not require an AI either. I concede the achievement of successful heuristic approaches and the computing power available to apply them beyond the capacities of human opponents.
I love playing computer adventure games of the highly-animated, cinematic form. My favorites are termed third-person shooters because one can observe and operate a character without being trapped behind the character's eyes. I am currently working through "Shadow of the Tomb Raider," a great demonstration of the genre. That the operation of non-player characters and other entities that appear to exhibit agency is sometimes claimed to be evidence of AI is not much evidence for that claim, whatever its appeal in popular culture.
  • asked a question related to Theoretical Computer Science
Question
3 answers
Given a tree or a graph, are there automatic techniques or models that can assign weights to its nodes, other than neural networks?
Relevant answer
Answer
In the case of Euclidean graphs you can use the Euclidean distance between nodes. You can also use random weights. Depending on the application you can use appropriate weights...
  • asked a question related to Theoretical Computer Science
Question
11 answers
Given an undirected (weighted) graph, depicted in the attached diagram, which represents relationships between different kinds of drinks, how best can I assign weights to the edges? Is there a technique that I can use?
Relevant answer
Answer
Use Structural Equation Modeling (SEM).
  • asked a question related to Theoretical Computer Science
Question
2 answers
Are there techniques to automatically assign weights on weighted graphs, or weights on links in a concept hierarchy? Assuming the scenario depicted here: https://cs.stackexchange.com/questions/90751/weight-assignment-in-graph-theory
is a form of a weighted graph, are there ways weights can be assigned to each edge?
Relevant answer
Answer
Hi Jesujoba,
AFAIK, this should be accomplished based on previous (a.k.a. background) knowledge, such as ranking the content using a certain aspect; otherwise, there is no meaning or logic behind such a process.
HTH.
Samer Sarsam, PhD.
  • asked a question related to Theoretical Computer Science
Question
13 answers
I would like to change the following linear programming model to restrict the decision variables to two integers, namely a and b (a<b):
minimize (1,1,...,1)' e
(Y-Zx) > -e
-(Y-Zx) > -e
where Y is an n-dimensional vector, Z is an n \times k matrix and x is a k-dimensional vector; e represents an n-dimensional vector of errors which needs to be minimized. In order to make sure that the x's can only take values equal to "a" or "b", I have added the following constraints, keeping the original LP formulation:
-a/(b-a) - (1/2)' + I/(b-a) x > -(E/(b-a) +(1/2)')
-(-a/(b-a) - (1/2)' + I/(b-a) x ) > -(E/(b-a) +(1/2)')
where I stands for a k \times k identity matrix and E is a k-dimensional vector of deviations which needs to be minimized (subsequently, the objective would be minimize (1,1...,1)' (e; E)).
But there is still no guarantee that the resulting optimal vector consists only of a's and b's. Is there any way to fix this problem? Is there any way to give a higher level of importance to the two latter constraints than to the two former ones?
Relevant answer
Answer
Dear Fatemeh,
Maybe I am getting too late into this discussion. What I would do is to solve the following problem P:
minimize (1,1,...,1)' e
(Y-Zx) > -e
-(Y-Zx) > -e
a <= x_i <= b for each vector variable component
and programming myself a simple branch and bound algorithm:
1. Solve P
2. check whether some variable x_i has a value that is different from a or b.
3. (branching) Say you find that x_t is a < x_t < b. Then solve the following two problems:
P1
minimize (1,1,...,1)' e
(Y-Zx) > -e
-(Y-Zx) > -e
x_t = a
P2
minimize (1,1,...,1)' e
(Y-Zx) > -e
-(Y-Zx) > -e
x_t = b
4. Perform step 3 on the solutions of all problems you have.
5. (bounding) . Stop branching if:
- the solution of a problem is infeasible
- all variables in the solution have values either a or b ("integer" solution)
- the solution of the problem still contains variables with values different to a or b, but the objective is worse than a previously found "integer" solution.
I know it is brute force, but it will keep the structure of your problem and guarantee what you want. And it is very easy to program.
Hope it helps.
Vlad
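A minimal sketch of this branch-and-bound recipe (my own illustration, assuming Python with SciPy's linprog; a and b are the two allowed values):
```python
import numpy as np
from scipy.optimize import linprog

def solve_lp(Y, Z, lo, hi):
    """LP: minimize sum(e) subject to -e <= Y - Zx <= e and lo <= x <= hi."""
    n, k = Z.shape
    c = np.concatenate([np.zeros(k), np.ones(n)])          # objective: sum of e
    A_ub = np.block([[Z, -np.eye(n)],                      #  Zx - e <=  Y
                     [-Z, -np.eye(n)]])                    # -Zx - e <= -Y
    b_ub = np.concatenate([Y, -Y])
    bounds = list(zip(lo, hi)) + [(0, None)] * n
    return linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)

def branch_and_bound(Y, Z, a, b, tol=1e-6):
    k = Z.shape[1]
    best_obj, best_x = np.inf, None
    stack = [(np.full(k, float(a)), np.full(k, float(b)))]  # per-node box bounds
    while stack:
        lo, hi = stack.pop()
        res = solve_lp(Y, Z, lo, hi)
        if not res.success or res.fun >= best_obj:          # infeasible or bounded out
            continue
        x = res.x[:k]
        frac = [i for i in range(k)
                if min(abs(x[i] - a), abs(x[i] - b)) > tol]
        if not frac:                                        # all x_i are (numerically) a or b
            best_obj = res.fun
            best_x = np.where(abs(x - a) < abs(x - b), a, b)
            continue
        t = frac[0]                                         # branch on the first offender
        lo1, hi1 = lo.copy(), hi.copy(); lo1[t] = hi1[t] = a   # child 1: force x_t = a
        lo2, hi2 = lo.copy(), hi.copy(); lo2[t] = hi2[t] = b   # child 2: force x_t = b
        stack.extend([(lo1, hi1), (lo2, hi2)])
    return best_obj, best_x
```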
  • asked a question related to Theoretical Computer Science
Question
5 answers
I'm currently teaching our intro course on Theoretical Computer Science to first-year students, and I came up with the following simple proof that there are non-computable functions from N->N:
It's a simple cardinality-based argument. The set of programs in any arbitrary language is countable - just enumerate by length-lexicographic order (shortest programs first, ASCIIbetically for programs of equal length). The set of functions from N->N is uncountable (as the powerset of N is uncountable - in a pinch by a diagonalization argument - and hence the set of indicator functions alone is uncountable). So there are more functions than programs. Hence there are functions not computed by any program, hence non-computable functions.
My question: This is so simple that either it should be well-known or that it has a fatal flaw I have overlooked. In the first case, can someone point me to a suitable reference? In the second: What is the flaw?
Relevant answer
Answer
Thanks. Yes, I use the Church/Turing hypothesis/definition of computable (with any Turing-complete language, though programs for UTM are most traditional). OK, I see why I need the Axiom of Choice. I don't think I need the powerset axiom - that's just a convenient shortcut. And yes, it's not a constructive proof.
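As a footnote to the cardinality argument above, the length-lexicographic enumeration is easy to demonstrate in class; a minimal sketch in Python, over a toy two-letter alphabet rather than full ASCII:
```python
from itertools import count, product

def programs(alphabet="ab"):
    """Yield all finite strings in length-lexicographic order: shortest
    first, alphabetically within each length. This enumeration witnesses
    that the set of programs over any finite alphabet is countable."""
    for length in count(0):
        for chars in product(alphabet, repeat=length):
            yield "".join(chars)

gen = programs()
print([next(gen) for _ in range(7)])   # ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']
```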
  • asked a question related to Theoretical Computer Science
Question
16 answers
Looking through the literature, I realized all the proofs for NP-hardness of QIP are based on the claim that Binary Quadratic Integer Programming is NP-hard. Is that true?
Relevant answer
Answer
Unconstrained Quadratic Integer Programming is strongly NP-hard even when the objective function is convex. The proof is by reduction from the Closest Vector Problem.
  • asked a question related to Theoretical Computer Science
Question
43 answers
For me, I am very sure it is solved. If you have interest, first download the program and run it. Then read my paper and think; then you may also be sure.
How to use the program
1. I believe that most people who download my program are professionals. So please leave a message with your contact details and your opinions if you download my program. You can leave your message here or at my email: edw95@yahoo.com. Thanks a lot.
2. This program is an informal one, and not the quickest one. But it implements my algorithm, works correctly, and works very well. It has never failed.
3. How to use: if you have a 0-1 matrix standing for a simple undirected graph with n vertices which has at least one Hamilton path from vertex 0 to vertex n-1, press the “ReadMatrix” menu item to read and calculate it, then press the “Write the result” menu item to write the result to a new file; you will find a Hamilton path from vertex 0 to vertex n-1 in the new file.
4. How to use: if you have an edge list standing for a simple undirected graph with n vertices which has at least one Hamilton path from vertex 1 to vertex n, press the “ReadEdges” menu item to read and calculate it, then press the “Write the result” menu item to write the result to a new file; you will find a Hamilton path from vertex 1 to vertex n in the new file. If there is no such path, you get a message “no...”. The input file format is one edge per row: “1,3” or “1 3” means an edge from vertex 1 to vertex 3.
5. The maximum degree is 3. Though I am very sure my algorithm can handle undirected graphs of any degree, this program cannot. The maximum vertex number is 3000, because PC memory is limited.
6. I would like to thank Professor Alexander Chernosvitov very much. He and one of his students took a long time to write a program (different from mine) implementing my algorithm, and he gave me and my work a good comment (see codeproject.com and researchgate.net). Mr. Wang Xiaolong did so as well. Before them, nobody trusted me. Some not-smart-enough editors and reviewers rejected me on this logic alone: for such a hard problem, Lizhi Du is not a famous man, so he cannot solve it. Some editors or reviewers do not use their brains and say: your paper is apparently wrong, or, your paper cannot be understood. "Apparently wrong": funny! I have studied it for many years, and it is "apparently wrong"! If a reviewer is really capable and uses his brain and spends his time, he surely can understand my paper. If you think I am wrong, tell me where it is wrong, and I will explain why it is not. If you think my paper cannot be understood, tell me what cannot be understood, and I will explain it. In my paper, in the Remarks, I explain how to understand my algorithm and proof. I think it is very clear.
7. I have studied this problem for many years and have put many versions of my paper on arXiv. Though the former versions had this or that problem, I am very sure the newest version of my paper is the final version and is surely correct. It may contain some little bugs due to my English, but these do not affect the correctness, and I can explain or revise them easily.
8. Surely I think I have proved NP=P and have solved the problem of NP vs. P.
9. Thank you for spending your attention and time on my algorithm!
Relevant answer
Answer
Apparently, Norbert Blum is convinced of a negative solution to the question. See "A Solution of the P versus NP Problem" up on arXiv.
  • asked a question related to Theoretical Computer Science
Question
1 answer
I want to build a kind of guessing game. I do not know the right name, but the concept of the game is: person 1 (P1) thinks of a name (of anything) and person 2 (P2) has to guess that name by asking as few questions as possible. For example:
P1: thinks of something (Steve Jobs)
P2: Are you talking about a man?
P1: Yes.
P2: Is he/she a media guy?
P1: No.
P2: Is he a tech personality?
P1: Yes.
P2: Steve Jobs?
P1: Yes.
So P2 has to ask 4 questions. It could take even more, since in general the space of possible answers is unbounded. Now I want to model this scenario. My goal is to reduce the number of questions. Note that in my setting the number of candidate answers is limited, so the situation is not that broad.
I can think of a decision tree. But the question is: how can I decide where to split so that the length of each branch will be small?
Any suggestion/reference will be appreciated.
Relevant answer
Answer
Maximize the entropy, i.e., the information gain, of every question.
Regards,
Joachim
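A minimal sketch of this idea (my own illustration in Python; the candidate list and questions are hypothetical): with a uniform prior over a finite candidate set, maximizing information gain means choosing the yes/no question whose split is closest to 50/50.
```python
import math

def entropy(p):
    """Binary entropy of a yes-probability p."""
    return 0.0 if p in (0.0, 1.0) else -p*math.log2(p) - (1-p)*math.log2(1-p)

def best_question(candidates, questions):
    """Pick the question maximizing expected information gain, which for a
    uniform prior over candidates means the most balanced yes/no split."""
    def gain(q):
        yes = sum(1 for c in candidates if q(c))
        return entropy(yes / len(candidates))
    return max(questions, key=gain)

# Toy example with hypothetical attributes.
people = [{"name": "Steve Jobs",   "tech": True,  "media": False},
          {"name": "Oprah",        "tech": False, "media": True},
          {"name": "Ada Lovelace", "tech": True,  "media": False},
          {"name": "A. Einstein",  "tech": False, "media": False}]
questions = [lambda c: c["tech"], lambda c: c["media"]]
q = best_question(people, questions)
print(sum(q(c) for c in people))   # 2: the chosen question splits the set 2 vs 2
```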
  • asked a question related to Theoretical Computer Science
Question
6 answers
Given a graph G and a finite list L(v) ⊆ N for each vertex v ∈ V, the list-coloring problem asks for a list-coloring of G, i.e., a coloring f such that f(v) ∈ L(v) for every v ∈ V. The list-coloring problem is NP-complete for most graph classes. Can anyone please provide the related literature in which the list-coloring problem has been proved NP-complete for general graphs using a reduction (from a well-known NP-complete problem)?
Relevant answer
Answer
Yes Sir, that is the trivial way of doing this.  Thanks
  • asked a question related to Theoretical Computer Science
Question
2 answers
The attachment is my ENCUT test curve, in which the curve shows a parabolic shape rather than a tendency to flatten. Is the test result wrong? Can anybody tell me? Thank you all!
Relevant answer
Answer
Does anyone know?
  • asked a question related to Theoretical Computer Science
Question
7 answers
In particular, if yes, then are all major algorithms like Apriori, FP-Growth, and Eclat NP-hard, or only Apriori?
Relevant answer
Answer
It is a good paper about the complexity of frequent itemset mining.
  • asked a question related to Theoretical Computer Science
Question
11 answers
Who was the first to use the term mereotopology, and where in the literature does it appear first?
Relevant answer
Answer
I have been unable to find any use of the term earlier than Simons 1987. It's worth noting that he used a hyphen: mereo-topology. But certainly it was being used unhyphenated by the mid-90s.
  • asked a question related to Theoretical Computer Science
Question
5 answers
What do you consider the best quality journals to submit a theoretical rough sets paper to?
Relevant answer
Answer
Selection of a journal invariably depends upon the quality, length, etc., of the paper. One can go to a journal's site to check the topics it covers. But this is now an ocean, so the titles can be analysed before searching.
If we need a SCOPUS-indexed journal, which may or may not have an impact factor, then there are several Inderscience journals.
If we need impact-factor journals, then I agree with Prof. Peters that journals like Information Sciences, Approximate Reasoning, Fuzzy Sets and Systems, and IJUFKS are quite good.
Also, there are several special issues of different journals on the specific topics of rough sets, granular computing, knowledge discovery, etc., to which papers can be communicated.
  • asked a question related to Theoretical Computer Science
Question
14 answers
I was wondering whether it is possible to reach a consensus on, or to formalize, a general definition of the reachability property in graphs. If yes, then what could its fragments be?
For instance: if I say that the following are sufficient to define reachability, would that be correct by construction? Or is there a way to formally prove that these are necessary and sufficient conditions?
Property 1: No self-connectivity: fragments should not be connected to themselves, to avoid infinite-length cycles in a Kripke structure.
Property 2: Link connectivity: two fragments or elements of the graph are reachable if they have link connectivity.
Property 3: Fixing source and destination, to avoid all other interpretations of logical reachability.
Property 4: Fixing the length of bounds, by fixing the number of intermediate nodes or fragments.
PS: please do not confuse bounded reachability with bounded model checking.
I have also uploaded the formal description of the properties. Which ones are necessary and sufficient, and is there a way to prove that they are necessary and sufficient?
Relevant answer
Answer
FO = first order, LFP = least fixed point; for the logic FO + LFP and how it defines reachability, see the book Descriptive Complexity by Immerman:
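For reference, one standard way to write reachability in FO + LFP (modulo notational variants) is:
\mathrm{Reach}(s,t) \equiv [\mathrm{lfp}_{R,x,y}\,(x = y \lor \exists z\,(E(x,z) \land R(z,y)))](s,t)
i.e., reachability is the least relation R satisfying R(x,y) ↔ (x = y ∨ ∃z (E(x,z) ∧ R(z,y))).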
  • asked a question related to Theoretical Computer Science
Question
9 answers
For example, rough partial orders, rough pre-orders, or rough binary relations? But also rough algebraic structures such as semigroups, monoids, etc.?
Relevant answer
Answer
Thank you Dimiter. I see rough relations go back to Pawlak, and there are several references in J. Stepaniuk, Rough Relations and Logics:
I have been working on a version of rough graphs where you start with a graph and impose a kind of "equivalence relation on a graph", and I thought there must be similar things out there. My equivalence relations on graphs are relations in the sense of
Stell, J. G.(2015) Symmetric Heyting Relation Algebras with Applications to Hypergraphs. Journal of Logical and Algebraic Methods in Programming, vol 84 pp440-455.
where the relations are reflexive and transitive (in a straightforward way) but there is a weak kind of symmetry defined using the left converse operation in the above paper.
So I was especially looking for work which adopted the strategy that you get a "rough x" by finding a notion of "equivalence relation on an x" (or partition etc) and then get a way to approximate a "sub x" in terms of the "equivalence classes" which will themselves form an x and not merely a set. For x=hypergraph I think we can do all this.
  • asked a question related to Theoretical Computer Science
Question
3 answers
I have a program with three classes (a machine class, a task class, and a main class). I want to assign tasks to VMs, and the rule is: if this VM was the last-used VM, search again for a new VM.
My code is:
Task4.sortTList();
Task4 tempTask1 = Task4.taskList.getFirst();
Machine4.sortMList(tempTask1.getIns_count());
Machine4 mc1 = Machine4.machineList.getFirst();
mc1.getTasklist().add(tempTask1);
for (int ti = 1; ti < Task4.taskList.size(); ti++) {
    Task4 tempTask = Task4.taskList.get(ti);
    Task4 tempT = Task4.taskList.get(ti - 1);   // the previously assigned task
    Machine4.sortMList(tempTask.getIns_count());
    Machine4 mc = Machine4.machineList.getFirst();
    // Bug fix: the task list is never null here, but it may be EMPTY, and
    // LinkedList.getLast() on an empty list throws NoSuchElementException
    // (the exception below). So test isEmpty() instead of comparing to null.
    if (mc.getTasklist().isEmpty())
        mc.getTasklist().add(tempTask);
    else if (mc.getTasklist().getLast() != tempT)   // this VM was not the last one used
        mc.getTasklist().add(tempTask);
    else {
        // The best VM was used last: skip it and assign to the next best VM.
        Machine4.machineList.removeFirst();
        Machine4 second = Machine4.machineList.getFirst();
        second.getTasklist().add(tempTask);
        Machine4.machineList.addFirst(mc);
    }
}
but the Exception is:
Exception in thread "AWT-EventQueue-0" java.util.NoSuchElementException
at java.util.LinkedList.getLast(LinkedList.java:255)
Please help me.
Relevant answer
Answer
Thanks a lot.
  • asked a question related to Theoretical Computer Science
Question
14 answers
I am working on the Dolphin data set. Two methods return different results: a method based on eigenvalues returns 15 as the number of connected components, while a method based on graph search (depth-first/breadth-first) returns 1. I am confused, since I am working on a clustering method to find communities in the data set. Which result is correct?
Relevant answer
Answer
You can find the connected components, then count them. As others have pointed out, you want to build on DFS or BFS. You can find an algorithm for this in the following paper (although, if you understand DFS or BFS, it's not complicated):
Hopcroft, J.; Tarjan, R. (1973). "Efficient algorithms for graph manipulation". Communications of the ACM 16 (6): 372–378
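A minimal sketch (my own, assuming Python with NumPy) comparing the two approaches may help explain the discrepancy: the graph-search count is exact, while the spectral count (multiplicity of the eigenvalue 0 of the Laplacian L = D - A) is only correct up to a numerical tolerance; counting "zero" eigenvalues with too loose a tolerance, or using the adjacency matrix instead of the Laplacian, can inflate the number.
```python
import numpy as np
from collections import deque

def components_bfs(A):
    """Exact count of connected components by breadth-first search."""
    n, seen, count = len(A), set(), 0
    for s in range(n):
        if s in seen:
            continue
        count += 1
        queue = deque([s]); seen.add(s)
        while queue:
            u = queue.popleft()
            for v in range(n):
                if A[u][v] and v not in seen:
                    seen.add(v); queue.append(v)
    return count

def components_spectral(A, tol=1e-8):
    """Count near-zero eigenvalues of the Laplacian L = D - A; the
    multiplicity of 0 equals the number of components, but the count
    depends on the chosen tolerance."""
    A = np.asarray(A, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    eig = np.linalg.eigvalsh(L)
    return int(np.sum(np.abs(eig) < tol))

# Two disjoint triangles: both methods agree on 2 components.
A = np.zeros((6, 6), int)
for i, j in [(0,1),(1,2),(0,2),(3,4),(4,5),(3,5)]:
    A[i][j] = A[j][i] = 1
print(components_bfs(A), components_spectral(A))   # 2 2
```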
  • asked a question related to Theoretical Computer Science
Question
5 answers
I have been developing the first-order reasoner RACE [1] for Attempto Controlled English ACE [2] that allows users to check the consistency of a set of ACE axioms, to deduce ACE theorems from ACE axioms and to answer ACE queries from ACE axioms.
RACE uses a set of auxiliary axioms to express context-independent knowledge like the relation between plural nouns and singular nouns, or the ordering relations of natural numbers. These auxiliary axioms are written in Prolog that – having the power of the Turing machine – allows us to practically do any deduction. Thus often the question is not "Is this deduction correct?", but "Should RACE allow for this deduction?".
In the following I would like to discuss a case where this question arises.
Using the power of Prolog I have extended RACE by auxiliary axioms that perform second-order deductions, concretely aggregation. Thus RACE can deduce
  John is a man. Johnny is a man. ⊢ There are two men.
Adding a further axiom establishing that, in fact, Johnny is John, RACE fails.
  John is a man. Johnny is a man. Johnny is John. ⊬ There are two men.
Thus I have a case of non-monotonic reasoning. (Note that RACE can still deduce that there is one man.)
My question to the community is "Should RACE allow for non-monotonic reasoning, or does non-monotonicity have consequences that could confuse RACE users in more complex cases?"
Relevant answer
Answer
Pierre, I agree that nonmon reasoning can be confusing, but not only for naive users.  Experts never take any kind of reasoning at face value.  They always ask follow-on questions:  What were the assumptions?  Where did the data come from?  How did you derive that answer from those assumptions and that data?
One advantage of FOL theorem provers is the ability to generate an explanation:  Show every step of the proof -- every axiom or data point and every inference.  With controlled NLs, each step can be translated to the same CNL that the user knows.
For nonmon reasoning, belief revision has the same advantage as FOL.  Each revision step can be translated to the CNL.  The final proof is a purely classical FOL proof that can be explained by the above method.
For complex proofs, the amount of detail can be overwhelming, but the expert needs that kind of detail.  Intelligent users who don't have much knowledge of FOL can learn FOL by studying the explanations.  Anyone who is both naive and lazy will always be confused.
  • asked a question related to Theoretical Computer Science
Question
1 answer
I would like to add “new lives” to Petri Net diagrams that were published between 1962 – 2014. The Petri Nets with “new lives” would combine the published Petri Net diagrams from the past with JavaScript codes and supporting graphics to create interactive and dynamic Petri Nets in PDF. In other words, the revival creates token game versions of the Petri Net diagrams in PDF.
I am limiting the number of Petri Nets to a maximum of two per year.
Question 1: Which Petri Net diagrams should I include? Why should I include them?
Once I have finalized the list of Petri Net diagrams to revive, I am hoping to finish the work as soon as possible. So before I begin, I will be looking for volunteers who are interested in adding “new lives” to the Petri Net diagrams. I will create at least two token game versions. Thus I will be looking for a maximum of 102 volunteers, one person per token game version. If there are fewer than 102, I will create the difference.
Question 2: Would you be interested in helping out? If so, please give me a shout.
- john
Relevant answer
Answer
Hi John,
Subject to your case scenario and system dynamics, the embedded Petri net system may be relevant.
Best regards 
  • asked a question related to Theoretical Computer Science
Question
2 answers
What is a nuclear attribute? And how does it differ from condition and decision attributes?
Relevant answer
Answer
I got the answer, sir... Thank you so much... In the paper I referred to, they mention a single-valued attribute as a nuclear attribute...
  • asked a question related to Theoretical Computer Science
Question
13 answers
Assume we have two n-bit integers a and b, so that cost(a+b) = n bit operations. Similarly, the cost of adding two 2n-bit integers will be 2n bit operations. This is how we compute the cost of an operation in theory.
Is it also true in practice? I mean, maybe hardware engineers optimize the operation for large integers so as to speed up the performance of the machine.
Relevant answer
Answer
I'm not sure whether you're asking about the time cost, or cost in gates, transistors, power?  And do you mean at the bit-level, or instruction-level?  And do you mean "as implemented in conventional processors", or do you want theoretical limits, given particular trade-offs?
For instance, 1-bit addition is simple enough (xor), but extending to multiple bits means propagating carry.  This introduces trade-offs: you could simply pipeline your add so a 32b operation takes ~32 pipe stages.  Or you can cram the propagation within a single cycle (meaning your 1-bit full-adder has to operate in < 1/32 cycle).  Or you can dedicate more transistors to propagate the carry faster.
So in this sense, you can't assign a very simple cost (in time, power, transistors) to even addition, since different trade-offs lead to different costs.  And even simple addition has a cost which grows at least linearly with bit-width (but maybe faster).  The same sort of analysis applies to functions like mul and div, though obviously they usually have a much steeper cost.  (And bitwise operations are always cheaper than add.)
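To make the linear carry-propagation depth concrete, here is a toy ripple-carry simulation (a sketch of mine in Python; real hardware is of course not specified this way):
```python
def ripple_carry_add(a_bits, b_bits):
    """Add two little-endian bit lists with a ripple-carry adder,
    counting full-adder stages to show the O(n) carry-propagation depth."""
    out, carry, stages = [], 0, 0
    for a, b in zip(a_bits, b_bits):
        out.append(a ^ b ^ carry)                 # sum bit of a full adder
        carry = (a & b) | (carry & (a | b))       # carry-out (majority)
        stages += 1                               # one stage per bit position
    out.append(carry)
    return out, stages

bits = lambda x, n: [(x >> i) & 1 for i in range(n)]
val  = lambda bs: sum(b << i for i, b in enumerate(bs))
s, depth = ripple_carry_add(bits(13, 8), bits(200, 8))
print(val(s), depth)   # 213 8  -- the depth grows linearly with bit-width
```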
  • asked a question related to Theoretical Computer Science
Question
39 answers
In my view, the formal solution of "P vs NP" has been found, in the narrow sense.
Relevant answer
Answer
I am reading your article to understand your above response. I will reply to you later.
  • asked a question related to Theoretical Computer Science
Question
3 answers
I am aware that the minimum (cardinality) vertex cover problem on cubic (i.e., 3-regular) graphs is NP-hard. Say a positive integer k>2 is fixed. Has there been any computational complexity proof (aside from the 3-regular result; note this would be k=3) that shows the minimum (cardinality) vertex cover problem on k-regular graphs is NP-hard (e.g., 4-regular)? Since k is fixed, you aren't guaranteed the cubic graph instances needed to show the classic result I mentioned above.
Note that this problem would be straightforward to see is NP-hard from the result I mentioned at the start if we were to state that this were for any regular graph (since 3-regular is a special case), we don't get that when k is fixed.
Does anybody know of any papers that address the computational complexity of minimum (cardinality) vertex cover on a k-regular graph, when k is fixed/constant? I have been having difficulties trying to find papers that address this (in the slew of documents that cover the classic result of minimum (cardinality) vertex cover on cubic graphs being NP-hard.)
My goal is to locate a paper that addresses the problem for any k>2 (if it exists), but any details would be helpful.
Thank you so much!
Relevant answer
Answer
I see that there is a reduction proof on the CSTheory StackExchange website already. But if it is a reference that you need, here it is:
Fricke, G. H., Hedetniemi, S. T., Jacobs, D. P.,
Independence and irredundance in k-regular graphs.
Ars Combin. 49 (1998), 271–279.
Summary from MathSciNet: "We show that for each fixed k≥3, the INDEPENDENT SET problem is NP-complete for the class of k-regular graphs. Several other decision problems, including IRREDUNDANT SET, are also NP-complete for each class of k-regular graphs, for k≥6.''
Now, if the summary is correct, the authors prove that the decision version of the independent set problem is NP-complete for the class of k-regular graphs. Therefore, the optimization problem of finding a maximum independent set is NP-hard for the same class. And of course, the minimum vertex cover is the complement of the maximum independent set. Hope this helps.
  • asked a question related to Theoretical Computer Science
Question
5 answers
I need to know what is the order of complexity and also how to calculate it.
Thanks in advance.
Relevant answer
Answer
Thank you all for your help, but I think Dr. Breuer is right: my question was not clear enough! Please accept my apology.
Let me explain. I am trying to solve a linear equation Ax = B.
As matrix A is not square, I use x = (AᵗA)⁻¹AᵗB.
Different experiments result in different matrices A and B. Matrix A is always accurate, but matrix B is obtained from practical experiments and therefore is not accurate.
I know the accuracy of the answer depends on the condition number of the matrix AᵗA, and I am using the "cond()" command in Matlab to calculate it; I then want to use this number to find the most accurate answer.
Now I want to know the computational complexity of this procedure. How many operations are used to calculate it? (Matrix A is M by 2 and matrix B is M by 1.)
I hope this is clear enough.
Thanks in advance.
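For illustration, a minimal sketch of the procedure (in Python with NumPy rather than Matlab; the data here is synthetic), with the dominant operation counts noted in comments:
```python
import numpy as np

M = 100
A = np.column_stack([np.ones(M), np.arange(M, dtype=float)])  # M x 2, exact
B = A @ np.array([1.0, 2.0]) + 0.01 * np.random.randn(M)      # M x 1, noisy

AtA = A.T @ A                  # forming A^T A:  O(M k^2) flops, with k = 2
AtB = A.T @ B                  # forming A^T B:  O(M k)
kappa = np.linalg.cond(AtA)    # SVD of a k x k matrix: O(k^3), constant here
x = np.linalg.solve(AtA, AtB)  # k x k solve:    O(k^3), constant here
print(kappa, x)
```
So for fixed k = 2 the whole procedure is O(M); the condition-number and solve steps are constant-size. Note also that cond(AᵗA) = cond(A)², so when A is ill-conditioned it is numerically safer to apply np.linalg.lstsq (or Matlab's backslash) to A directly.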
  • asked a question related to Theoretical Computer Science
Question
2 answers
Is the same problem NP-complete for strong edge colored graphs and proper edge colored graphs?
Definitions:
1) An edge coloring is 'proper' if each pair of adjacent edges have different colors.
2) An edge coloring is 'vertex distinguished' if no two vertices have the same set of colors of edges incident with them.
3) An edge coloring is 'strong' if it is both proper and vertex distinguishing.
Relevant answer
Answer
There is an obvious reduction from 3DM to the problem of finding a maximum heterochromatic matching in an edge-colored graph (represent the third gender in each triplet by colors).
Therefore, the problem of finding a maximum heterochromatic matching in an edge-colored graph is NP-complete.
Moreover, 3DM is NP-complete even when no two triplets intersect in more than one element. When we start from instances with this property, the reduction mentioned above yields only properly edge-colored graphs. Therefore, the same problem remains NP-complete for properly edge-colored bipartite graphs.
Assessing NP-completeness in the case of strong edge-colorings is more technical but now appears to be downhill.
  • asked a question related to Theoretical Computer Science
Question
7 answers
I am wondering if anybody can provide any handy resources (for a theoretical computer scientist) in relation to the convex cost flow problem?  I have found texts (mostly my combinatorial optimization texts on my shelf), but they sparingly discuss the problem and its algorithmic properties, and the ones I've found so far take a very deep dive into it without explaining a whole lot or providing any examples.  I get the formulation of the problem, but a bit more would be helpful.
 I'm new to the problem, and wondering if anybody can suggest some good texts, or papers that do a good job of covering the problem, and some of the major algorithmic results for this problem (computational complexity, and algorithms primarily), or maybe applications of it being used to see how researchers have applied it to solve other problems in theoretical computer science, combinatorial optimization, or operations research.
If you have resources or suggestions, that would be helpful!  Thank you so much, and have a beautiful day :).  
Relevant answer
Answer
Ah yes, I forgot earlier: Dimitri Bertsekas has made his network flows book available for free. It has good stuff on single-commodity network flows, dual methods in particular. Here is a link to download the book:
Again - good luck!
  • asked a question related to Theoretical Computer Science
Question
3 answers
Is there a way to derive an absolute execution flow for a program, one that does not depend on its output? For example, if two conditional statements are used in a program, statically it may have four execution flows, but dynamically only one. So I want to know whether there is any way to obtain an absolute execution flow in the static analysis of a program, or a general execution flow that is not affected by the program's output.
Relevant answer
Answer
Wikipedia has an acceptable overview of static program analysis at
which may give you the English magic words to clarify your question.  There is also an Urdu Wikipedia at
  • asked a question related to Theoretical Computer Science
Question
4 answers
When or where do we use both?
Thanks.
Relevant answer
Answer
They are based on different statistical distributions :-)
  • asked a question related to Theoretical Computer Science
Question
6 answers
If I work with a quantitative method, should the output be a model? And with a mixed method, is a framework suitable as the output? Is that right?
Relevant answer
Answer
Although it should ideally be "research driven", there are alternatives... for example, if you have large data sets that can be "mined" for the content they have hidden within them (e.g., by using cluster analysis or other data mining techniques)... obviously, it's not that simple, but both are places to begin your search... good luck...
  • asked a question related to Theoretical Computer Science
Question
9 answers
Some optimization problems are well known, but the instances or benchmarks used are not available. Generally, what can we do in this situation?
Relevant answer
Answer
Dear Dahi Abd Zakaria Abd El Moiz,
please have a look at the discussion "How to evaluate the quality of metaheuristic algorithms without any benchmarks?" on the link https://www.researchgate.net/post/How_to_evaluate_the_quality_of_metaheuristic_algorithms_without_any_benchmarks
and in particular on the link  http://www.fedulov.ge  as an application.
  • asked a question related to Theoretical Computer Science
Question
17 answers
Most measures that I have seen, such as cosine similarity, measure similarity between two attributes. I'm not familiar with any measures that can be used (or extended) to compare the values of three or more attributes. Please comment.
Relevant answer
Answer
What's wrong with Atkin's Simplicial Complex?
  • asked a question related to Theoretical Computer Science
Question
2 answers
As above.
Relevant answer
Answer
On the other hand, you need all paths to be disjoint: they may share nodes but not links. You may need to use graph search to enumerate paths, in order to see whether there is even a feasible solution. Is there a particular application? There are several such approaches in telecommunication, one being found through this link:
  • asked a question related to Theoretical Computer Science
Question
2 answers
We use reduction to solve problem P1 using problem P2, such that a solution of P2 is also a solution of P1.
In contrast, a problem P1 may be transformed into a simpler form so that solving P1 becomes easy.
So the solution set is the same in both cases.
Relevant answer
Answer
I'm not quite following. What you are saying in both cases is a reduction (at least to me), just rephrased. Let me expand on this:
In Theoretical Computer Science, we use this idea all the time. It is usually to illustrate the difficulty of solving one problem in relation to another; and provides a clear way to develop an algorithm.
Keep in mind that what I say below assumes the stated results are proven to be true as a premise:
1) What exactly do you mean by "simpler form"? Give a concrete example. Do you mean something like taking a graph and, using some intuition, making some kind of bipartite graph to say something about the original graph? I'd think something like this could technically be viewed as a reduction as well, since you just made a new instance that happens to solve the same problem. Reductions can even occur within the same set of problem instances.
2) The solution set may not be the same between P1 and P2 because the instances are not really the same. Assuming you are talking about a reduction in the scope of something like an optimization problem, you would need to show how you take one instance of a problem and transform it into another one. Typically a reduction is taking an instance I of P1, creating an instance I' using I for problem P2, solving P2, then giving how to get the solution for I from the solution of I' (or their correspondence). In decision problems it is pretty straightforward (as all of them are 'yes'/'no' answers).
To me, it just sounds like a bit of different language. I've seen people call reductions other things like transformations. There are special kinds of reductions though, like a Karp reduction, or Turing reduction. At their heart is taking an instance, making a new instance from it, solving that new instance, and showing what it means for the original by proving a correspondence.
  • asked a question related to Theoretical Computer Science
Question
2 answers
I am aware that a lot of work is going on regarding graphene as a material. This material has good characteristics: it is rechargeable and consumes less energy.
Relevant answer
Answer
Transistors can be made out of graphene for fast amplification, but for computing machinery, switches with nonlinear characteristics are needed. Therefore, ways are being investigated to induce band gaps.
Regards,
Joachim
  • asked a question related to Theoretical Computer Science
Question
21 answers
Formal methods have mathematical foundations. They are based on algebra or finite state machines. Are they practical? Do they justify their costs?
Is it better to use lightweight implementations of formal methods in industrial projects, to reduce costs and increase flexibility and practicality?
Relevant answer
Answer
I'd say it depends on what you are designing. If you look at it from a process perspective, the whole software development process is like a funnel, from vague requirements over a (hopefully) not so vague design to a concrete implementation.
While this is usually fine for systems where there is no "right" or "wrong" solution (the look and feel of a web shop, for example), other systems are much more restrictive. You don't want to be vague about alarm and shutdown criteria in the control software of a nuclear power plant or in aircraft control.
So I think using formal methods will be worthwhile in only some scenarios, while in others a semi-formal notation (such as UML) will suffice.
  • asked a question related to Theoretical Computer Science
Question
2 answers
see above
Relevant answer
Answer
Expand the standard reduction from 3-SAT to SUBSET SUM by adding an extra 1-bit column 2^c (in such a way that 2^c doesn't interfere with the other bits); set the bit of that column to 1 for all M addends of the subset sum; add M new dummy addends with only that 2^c bit set; and set as target sum the original target sum T plus M·2^c. For the number of addends, pick K = M. In this way you can pick an arbitrary number of addends of the original subset-sum problem and use the dummy addends to reach exactly K addends (and the K·2^c part of the modified target sum T + K·2^c).
I hope that it is not a homework question ....
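A minimal sketch of this instance transformation (my own illustration in Python; the toy instance at the end is hypothetical):
```python
from itertools import combinations

def add_cardinality_constraint(addends, T):
    """Given a SUBSET SUM instance (addends, T), build an instance that is
    solvable iff the original is, and whose solutions use EXACTLY K = M
    addends: tag every original addend with a high bit 2^c, add M dummy
    addends equal to 2^c alone, and raise the target by M * 2^c."""
    M = len(addends)
    c = max(sum(addends), T).bit_length() + 1    # 2^c is beyond any possible carry
    tag = 1 << c
    new_addends = [a + tag for a in addends] + [tag] * M
    return new_addends, T + M * tag, M           # instance plus required cardinality K

# Tiny brute-force check on a toy instance: 3 + 11 = 14.
addends, T = [3, 5, 9, 11], 14
new, newT, K = add_cardinality_constraint(addends, T)
ok = any(sum(s) == newT for s in combinations(new, K))
print(ok)   # True: pick {3+tag, 11+tag} plus two dummy tags
```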
  • asked a question related to Theoretical Computer Science
Question
7 answers
Nobody knows. It is known that we live in three dimensions: length, width and height.
In certain cases, for the description of complex processes, a considerably larger number of spatial dimensions is applied: four, six, ten, a hundred, a thousand, etc.
However, there is a contradiction between physical and geometrical dimensions.
In the main classical cases, great researchers manage with only one dimension.
So, de Broglie's wave has only one dimension: length. But, after all, any physical object must have not only a length but also a height.
Relevant answer
Answer
I am grateful to colleagues for the many and diverse answers on this matter, especially for the answer that the "height" of a wave is proportional to the square of its length.
  • asked a question related to Theoretical Computer Science
Question
6 answers
I am writing a new computing course and need some examples I can point students to. People talk about using functional programming, arguing that it will allow programmers to build formally provable software. That is what I was told about 20 years ago, and I was wondering whether any real software has since been built in a functional language and then mathematically proved to be correct. Are there any examples I could point at? Particularly something people would have heard of, if possible.
Relevant answer
Answer
It depends how you define "real industrial". Since verification is still very costly, the applications are rather limited to domains such as military or air/space transportation.
E.g., the IMHO most famous and largest project is the seL4 kernel. It was written in Haskell and proved with Isabelle/HOL (and refined in C). It was used by OK Labs, which was later bought by General Dynamics, the world's fifth-largest defence company (according to Wikipedia).
There are smaller projects in industrial companies. E.g., Daimler-Benz once used the OPAL language to develop a proven compiler for ECUs, but I am not aware that the compiler was actually used in non-prototype development (most ECU development is outsourced anyway).
  • asked a question related to Theoretical Computer Science
Question
7 answers
Is there anything faster than the dual simplex method?
Relevant answer
Answer
What is your metric for "fast". Time complexity or experimental results? That may help researchers answer your question.
  • asked a question related to Theoretical Computer Science
Question
7 answers
Say we have a set of variables X = {x_1, x_2, ..., x_n}. The goal is to sort the elements of X without knowing their values, using only a set of known inequalities such as (x_i < x_k). These inequalities come from a noisy source, so they contain some wrong inequalities as well. Therefore we need an "approximate sorting algorithm". Can someone direct me to some sources of information on related topics?
Relevant answer
Answer
From your explanation, U-statistics should work, but if you don't have judgments for all n(n-1)/2 pairs you may have a bias.
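As a simple baseline before any statistical machinery, here is a minimal sketch (my own illustration in Python): score each item by the number of noisy comparisons it wins, Borda/Copeland style, and sort by score. This tolerates a moderate fraction of wrong inequalities.
```python
import random

def approx_sort(items, less_than):
    """Approximately sort items given a noisy comparison oracle
    less_than(i, j) -> True if x_i < x_j (possibly wrong sometimes).
    Each item is scored by the number of comparisons it wins."""
    wins = {i: 0 for i in items}
    for i in items:
        for j in items:
            if i != j and less_than(i, j):
                wins[j] += 1                  # j beat i in this comparison
    return sorted(items, key=lambda i: wins[i])

# Simulated noisy oracle over hidden values, 10% error rate (toy example).
hidden = {i: v for i, v in enumerate([5, 1, 4, 2, 3, 0, 7, 6])}
def noisy(i, j):
    truth = hidden[i] < hidden[j]
    return truth if random.random() > 0.1 else not truth

print(approx_sort(list(hidden), noisy))   # usually close to the true order
```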
  • asked a question related to Theoretical Computer Science
Question
4 answers
Organisation, in its omnipresence, preservation and dependence, is possibly the distinctive aspect of biological phenomena. Achieving a consensus about what organisation is, and how it can be used to generate biological explanations, would greatly benefit the development of (theoretical) biology.
Relevant answer
Answer
I think you can find hints in René Thom (catastrophe theory and the growth of forms), Ilya Prigogine (self-organisation, dissipative structures) and the classical theory of entropy.
  • asked a question related to Theoretical Computer Science
Question
5 answers
General and scientific responses welcomed.
Relevant answer
Answer
Colin,
the emphasis is on fuzzy and associative computation.
  • asked a question related to Theoretical Computer Science
Question
2 answers
Some references point to Bellman's The Art of Dynamic Programming, but after a quick look, the book, under "Bottleneck problems", does not seem to contain an explicit description of the solution as is commonly seen in research papers or course material on the subject (such as Introduction to Algorithms by Cormen, Leiserson, Rivest and Stein), unless I overlooked a few important equations.
Relevant answer
Answer
@Afaq Ahmad: thank you for the link... My question was more historical than technical... I was interested in knowing who first came up with a description of the algorithm in a textbook or a book... I found somewhere that somebody referred to Bellman, but at first glance Bellman does not give an explicit description of the algorithm (though I haven't yet tried to link his differential-equation notation to the one we conventionally see in textbooks such as the one you linked)...
Has the algorithm become somewhat folklore? Most of us remember that Quicksort was first published by Hoare. Some of us were told that merge sort was a von Neumann trick... We sometimes link binary systems to p-adic systems, at least in the Western canon...
To be honest, I wouldn't even be surprised if the algorithm could be traced back to ancient Japan, China or India, due to its "abacus" nature...
  • asked a question related to Theoretical Computer Science
Question
5 answers
Theta-combinator: \Theta \equiv (\lambda xy. y(xxy))(\lambda xy. y(xxy))
Y-combinator: Y \equiv (\lambda f. (\lambda x. f(xx))(\lambda x. f(xx)))
What's the difference between them? In what sense do you say Theta combinator is more powerful than the Y combinator?
In a reduction path, how can I distinguish beta-reduction and beta-equivalence of terms?
Relevant answer
Answer
@Peter: Did you check the references that I gave? They contain the actual beta-reduction steps for \Theta, and show how Y is different. I don't see a problem there.
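For reference, the computational difference shows up directly in the reduction behaviour (standard material; A below is just a local abbreviation):
Let A \equiv \lambda xy. y(xxy), so \Theta \equiv AA. For any term F:
\Theta F \equiv AAF \to_\beta (\lambda y. y(AAy))F \to_\beta F(AAF) \equiv F(\Theta F),
so \Theta F actually beta-reduces to F(\Theta F) in two steps. By contrast,
Y F \to_\beta (\lambda x. F(xx))(\lambda x. F(xx)) \equiv W \to_\beta F(W),
so Y F reduces to F(W), never to the literal term F(Y F); we only have Y F =_\beta F(Y F) because both sides reduce to the common term F(W). That is exactly the distinction between a beta-reduction step and beta-equivalence along a reduction path.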
  • asked a question related to Theoretical Computer Science
Question
3 answers
I need to measure a chaotic event based on entropy. I have the results of the experiment, but I see that entropy does not use the real-time results to give clear information about the pattern formation; instead, if I have understood correctly, it is based on probabilities. Does anybody have deeper information about that?
Relevant answer
Answer
Thank you very much for your answer. So it is interesting to include the sampling rate in the entropy calculation to capture the modification of the system.
A further question is about the mathematical equation, so that I can combine the experimental results with the entropy in general. Everything I read about entropy computes a quantity from probabilities, but I couldn't find a metric that combines my measurements with the entropy to assess the attractors of the chaotic phenomenon I am trying to measure. Please, do you have any idea about that?
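If it helps, here is a minimal sketch of the usual bridge from raw measurements to an entropy value: discretise the sampled signal into bins and apply the Shannon formula to the empirical frequencies. The bin count, test signal and sampling rate below are illustrative assumptions; for characterising attractors, estimators such as sample entropy or permutation entropy are the more common tools.
import numpy as np

def shannon_entropy(series, bins=32):
    # Histogram the measurements; the normalised counts play the role
    # of the probabilities p_i in H = -sum_i p_i * log2(p_i).
    counts, _ = np.histogram(series, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                  # empty bins contribute 0 (0 log 0 = 0)
    return -np.sum(p * np.log2(p))

# Illustrative signal: a 5 Hz sine sampled at 1 kHz with additive noise.
t = np.arange(0.0, 1.0, 0.001)    # the sampling rate enters here
x = np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.randn(t.size)
print(shannon_entropy(x))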
  • asked a question related to Theoretical Computer Science
Question
12 answers
Go (the surrounding game) is an over-4000-year-old strategy game between two players. It is played with black and white tokens (called stones) on a square board with a grid of intersections. The goal of the game is to surround territory (intersections on the board) using the stones. You can also surround your opponent's stones to capture them and remove them from the board. The game ends when it is clearly decided to which player's territory each intersection belongs, or when both players pass consecutively. The player who surrounds more territory than his opponent wins the game. Each surrounded intersection counts as one point of territory and each captured stone reduces the opponent's territory by one point. The rules of Go are very simple and can be learned in just a few minutes, but perfecting your game might take a lifetime: Go is very rich in strategy, and the number of possible games exceeds the number of atoms in the observable Universe.
Go being so complex, yet so simple, my question is whether computation (a universal Turing machine) can be simulated by it. If such a thing is possible, what about a Go game itself being simulated on such a machine?
The rules of Go:
Relevant answer
Answer
Ulrich, I had something like this in mind when I was formulating my question. It's the proof that Go is PSPACE-hard: QBF formulas are encoded into a Geography game, which is then converted to a Go position. Maybe Turing computation, or some weaker form of computation, can be similarly encoded.
  • asked a question related to Theoretical Computer Science
Question
2 answers
The numerators for dyadic rationals which approximate sqrt(2) within precision 2^-k (within k bits of precision) can be given by a recursive function f: ℕ → ℕ
f(k) = least n [ n^2 > 2^(2k+1) ] for all k.
f(k):
2, 3, 6, 12, 23, 46, 91, 182, 363, 725, 1449, 2897, 5793, 11586, 23171, 46341, 92682, 185364, 370728,....
The idea of the function f is that for each k it outputs the number of segments of length 1/2^k that fit into sqrt(2), plus one.
Since for each k, the value f(k) is the smallest integer such that f(k)/2^k > sqrt(2), we have
(f(k) - 1)/2^k < sqrt(2) < f(k)/2^k for each k > 0.
From this we obtain the error bound | f(k)/2^k - sqrt(2) | < 2^-k.
Conclusion:
Approximation of Sqrt(2) within precision 2^-k can be "represented" by a natural number f(k). To get the actual approximation by a dyadic rational just divide by 2^k.
Note:
This can be generalized to calculating the square root of any number and of any order.
QUESTION:
Can you give a Wang tile program for calculating the function f ?
Relevant answer
Answer
"Can you give a Wang tile program for calculating the function f ?"
I think that a "real world example" would require *A LOT* of work :-). However, another approach, slightly different from the one suggested by Philon Nguyen, is: 1) build a universal Turing machine with Wang tiles, 2) write the equivalent TM program, 3) feed the Wang-tile UTM with it.
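Not a Wang tile program, but as a reference point, here is a direct implementation of f using exact integer arithmetic (a minimal sketch; it relies on math.isqrt, available from Python 3.8):
from math import isqrt

def f(k):
    # Least n with n^2 > 2^(2k+1). Since 2^(2k+1) is an odd power of 2,
    # it is never a perfect square, so the answer is isqrt(2^(2k+1)) + 1.
    return isqrt(2 ** (2 * k + 1)) + 1

print([f(k) for k in range(10)])
# [2, 3, 6, 12, 23, 46, 91, 182, 363, 725]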
  • asked a question related to Theoretical Computer Science
Question
8 answers
The basic ingredients of an optimization problem are the set of instances or input objects, the set of feasible solutions or output objects associated with any instance, and the measure defined for any feasible solution. Most of the NPO problems that I know of have discrete input spaces.
Relevant answer
Answer
I have worked on feature weighting (FW) as a continuous-search-space problem. Feature weighting assigns weights to the features of the data set before performing machine learning tasks (like classification). As reported by many authors, FW improves the performance of classification algorithms. When the weights are restricted to 0/1, the task becomes feature selection (FS) instead of feature weighting; feature selection is a discrete-search-space problem. For more information you can study this article:
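As a concrete illustration of the FW-vs-FS distinction (a hypothetical sketch using scikit-learn; the weight values are arbitrary, whereas in FW research they are what the search optimises):
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Feature weighting: scale each column by a weight before learning.
# Restricting w to {0, 1} would turn this into feature selection.
w = np.array([1.0, 0.2, 0.0, 0.7])
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr * w, y_tr)
print(clf.score(X_te * w, y_te))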
  • asked a question related to Theoretical Computer Science
Question
10 answers
Can someone suggest a good syllabus for graph theory, advanced graph theory and discrete mathematics for computer science engineering students?
Relevant answer
Answer
Basic concepts: Graphs and digraphs, incidence and adjacency matrices, isomorphism, the automorphism group.
Trees: Equivalent definitions of trees and forests, Cayley's formula, the Matrix-Tree theorem, minimum spanning trees.
Connectivity: Cut vertices, cut edges, bonds, the cycle space and the bond space, blocks, Menger's theorem.
Paths and cycles: Euler tours, Hamilton paths and cycles, theorems of Dirac, Ore, Bondy and Chvátal, girth, circumference, the Chinese Postman Problem, the Traveling Salesman Problem, diameter and maximum degree, shortest paths.
Matchings: Berge's theorem, perfect matchings, Hall's theorem, Tutte's theorem, König's theorem, Petersen's theorem, algorithms for matching and weighted matching (in both bipartite and general graphs), factors of graphs (decompositions of the complete graph), Tutte's f-factor theorem.
Extremal problems: Independent sets and covering numbers, Turán's theorem, Ramsey theorems.
Colorings: Brooks' theorem, the greedy algorithm, the Welsh-Powell bound, critical graphs, chromatic polynomials, girth and chromatic number, Vizing's theorem.
Graphs on surfaces: Planar graphs, duality, Euler's formula, Kuratowski's theorem, toroidal graphs, 2-cell embeddings, graphs on other surfaces.
Directed graphs: Tournaments, directed paths and cycles, connectivity and strongly connected digraphs, branchings.
Networks and flows: Flow cuts, the max-flow min-cut theorem, perfect square.
Selected topics: Dominating sets, the reconstruction problem, intersection graphs, perfect graphs, random graphs.
  • asked a question related to Theoretical Computer Science
Question
3 answers
Is it NP-complete? Hard to approximate? It's a maximum clique problem in a special graph.
Relevant answer
Answer
@Maximiliano, I am sure that this problem is pretty hard: all us coding theorists are not going to be replaced by computer programs overnight! Then proving it is another matter...
  • asked a question related to Theoretical Computer Science
Question
4 answers
The Halting problem can theoretically be solved using an oracle machine, and so can the first N digits of Chaitin's Omega. However, all the digits of Chaitin's Omega cannot be decided by an oracle in finite time, as the problem then takes on a discrete/real duality.
Additional comment: Robert Solovay seems to have written a paper, "When Oracles Do Not Help", that I was not able to find at the moment, but that would seem to indicate that such cases exist in computational learning through oracles/queries.
Relevant answer
Answer
Thanks for the link Pierre-Yves. I'll have a look at it and put my comments on the thread.
There are two definitions of an oracle machine that I am aware of:
The decision version returns true or false to any question. In my previous comment, I was referring to the decision version. If my memory serves, Turing's 1939 thesis ("Systems of Logic Based on Ordinals") introduced that version.
But even an oracle that can answer the question with a full answer would not be able to answer it for a non-recursively-enumerable real (at least assuming that we are limited to such a class): it cannot return the complete enumeration of digits (assuming finite time), and it cannot return a compact representation (assuming the Church-Turing thesis). Oracle machines are effectively not bound by computability, since they can solve the Halting problem as a decision problem; but here the representation would not be finite, and therefore the answer cannot be provided, other than by assuming, as a thought experiment, that the oracle can return an answer we cannot "understand".
Now my question was more about if there are simpler classes of problems that cannot be solved by an Oracle, without having to refer to the discrete/real duality.
A weaker version of the problem would be: what is the class of problems that cannot be solved by a Gentzen system.
These are the problematics of the Church-Turing thesis I wanted to ask about.
The distinction may be confusing, as the literature on the subject refers to different sets of hypotheses.
But, put shortly: any numbering (in the sense of Gödel) that can be ordered (in Gentzen's sense) has a practical implementation as an oracle machine, and is computable. Now, what are the problems that cannot be solved in such a way, that cannot be sufficiently ordered? Perhaps this weaker statement of the question is clearer.
Not an easy question to ask...
  • asked a question related to Theoretical Computer Science
Question
20 answers
I'm currently writing a short essay on Formal Verification and Program Derivation, and I would like to include an example of an algorithm that seems to be correct under a good number of test cases, but fails for very specific ones. The idea is to present the importance of a logical, deductive approach to infer the properties of an algorithm, in order to avoid such mistakes.
I can't quite think of a good example, though, because I'd like something simple that a layman can understand, but all the examples I came up with are related to NP-complete graph-theory problems (usually an algorithm that is correct for a subset of the inputs, but not for the general problem).
Could someone help me think of a simpler example?
Relevant answer
Answer
A folklore example is swapping the values of two (integer) references without using a local variable. Take a programming language with call-by-reference and the following function:
void swap(int &a, int &b) {
    a = a - b;   // value of a is a0 - b0
    b = a + b;   // value of b is (a0 - b0) + b0 = a0
    a = b - a;   // value of a is a0 - (a0 - b0) = b0
}
Assume a refers to the value a0 and b to b0. Then after a call to swap(a, b) the values referred to by a and b are swapped ... except if a and b are aliases for the same memory location, in which case the shared value ends up as 0.
This is more an example of the problems related to reasoning about programs in the presence of aliasing, but I think that it qualifies as an example of a tricky algorithm.
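For readers who want to see the failure concretely, here is the same trick transcribed into Python, where passing the same index twice reproduces the aliasing problem (an illustrative sketch, not part of the original answer):
def swap_in_place(arr, i, j):
    # The folklore three-step swap without a temporary variable.
    arr[i] = arr[i] - arr[j]
    arr[j] = arr[i] + arr[j]
    arr[i] = arr[j] - arr[i]

a = [3, 7]
swap_in_place(a, 0, 1)
print(a)   # [7, 3]  -- works whenever the two cells are distinct

b = [5]
swap_in_place(b, 0, 0)
print(b)   # [0]     -- aliasing silently destroys the value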
  • asked a question related to Theoretical Computer Science
Question
2 answers
This is a very interesting question: is the Internet representable as a Turing machine?
Relevant answer
Answer
In general, the more carefully a computational task is defined, and the better the task is understood, the more difficult it becomes to argue that it is not a Turing process. This idea is reflected in Von Neumann's (perhaps apocryphal) remark that "you tell me what it is that computers cannot do, and I will program a computer to do it." My opinion is that the burden of proof should fall on those who argue that a process is NOT a Turing process, to demonstrate why it is not. No one ever said that a Turing machine is the fastest way to execute an algorithm (parallel/distributed processes are obviously vastly faster), but I've never seen a convincing demonstration that a process (such as predicting stopping time) cannot be represented in the form of an algorithm that a Turing machine can read, and that such a process nevertheless exists.
  • asked a question related to Theoretical Computer Science
Question
7 answers
I'm doing research in formal languages and automata theory, and I need to draw state diagrams for DFAs and NFAs. Currently I'm using the MS Office PowerPoint drawing tools.
Relevant answer
Answer
JFLAP is a quick draw/simulate tool; LaTeX + TikZ is the best combination for drawing beautiful pictures (and there is an automata library too).
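For the LaTeX + TikZ route, a minimal sketch using the automata library looks like this (the two-state DFA, accepting strings with an odd number of a's, is just a placeholder example):
\documentclass{standalone}
\usepackage{tikz}
\usetikzlibrary{automata, positioning}
\begin{document}
\begin{tikzpicture}[shorten >=1pt, node distance=2.5cm, auto]
  \node[state, initial]   (q0)               {$q_0$};
  \node[state, accepting] (q1) [right=of q0] {$q_1$};
  \path[->]
    (q0) edge [bend left]  node {a} (q1)
    (q1) edge [bend left]  node {a} (q0)
    (q0) edge [loop above] node {b} ()
    (q1) edge [loop above] node {b} ();
\end{tikzpicture}
\end{document}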
  • asked a question related to Theoretical Computer Science
Question
12 answers
I'm working on a problem where I would need an error correcting code of length at least 5616 bits (and preferably not much longer) that could achieve correct decoding even if 40% of the received word is erroneous. I have looked into some basic coding theory textbooks, but I have not found anything that would suit my purpose. Does such a code exist? If it does, what kind of code is it and can it be efficiently realized? If it doesn't, why not?
Any insight to the problem will be much appreciated.
Relevant answer
Answer
I agree with Patrick: 40% correction cannot be achieved for that length if the errors are random. If the errors are not random, that is a different story, but in that case we need a model for the channel (such as a bursty channel), as others have pointed out.
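To make the impossibility concrete (a standard back-of-the-envelope argument, not specific to n = 5616): uniquely correcting t = 0.4n worst-case errors requires minimum distance d \geq 2t + 1 > 0.8n. The Plotkin bound states that a binary code of length n with d > n/2 has at most 2\lfloor d/(2d - n) \rfloor codewords; with d > 0.8n we get d/(2d - n) < 0.8/0.6 = 4/3, hence at most 2 codewords. So no non-trivial binary code of any length can uniquely correct a 40% adversarial error fraction; getting past the n/4 barrier requires list decoding or a restricted channel model.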