Theory of Computation - Science topic
Explore the latest questions and answers in Theory of Computation, and find Theory of Computation experts.
Questions related to Theory of Computation
Can we apply theoretical computer science to prove theorems in mathematics?
Greetings everyone,
I was wondering whether there are any interesting aspects of cellular automata as computational models in the strict(er) sense of the term:
- Formulate the problem into an initial state of the automaton
- Run n generations
- Transform the resulting state back into a solution that makes sense in terms of the original formulation
For example: a 2D cellular automaton sorting an array of numbers, or something more complicated. Note: I am not referring, for example, to a Turing machine built inside the Game of Life, where gliders can be seen as bits/wires etc., but rather to a rule that has been designed for such purposes.
Are there any interesting works/papers you could suggest?
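Not a pointer to a paper, but as a concrete illustration of the encode/run/decode pattern described above, here is a minimal sketch (the rule and naming are mine, not taken from the literature): odd-even transposition sort phrased as a one-dimensional cellular-automaton-style rule, where the array itself is the initial state and the sorted array is read off after n generations.

```python
# Encode: the array is the initial CA state. Run: n synchronous generations.
# Decode: the final state is the sorted array.
def step(cells, phase):
    nxt = list(cells)
    # compare-and-swap disjoint neighbouring pairs; the pairing alternates with the phase
    for i in range(phase % 2, len(cells) - 1, 2):
        if cells[i] > cells[i + 1]:
            nxt[i], nxt[i + 1] = cells[i + 1], cells[i]
    return nxt

def ca_sort(values):
    state = list(values)
    for t in range(len(state)):
        state = step(state, t)
    return state

print(ca_sort([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]
```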
Can complexity theory solve problems in mathematics, completely or partially?
I am not fully a molecular modelling researcher, so I am not able to understand this one. I have gone through J. Chem. Theory Comput. 2011, 7, 2498–2506 on this concept, but I did not get what exactly has to be computed.
Thanks.
I am simulating a system with Ca2+ ions using the 12-6-4 Li-Merz parameters [1] for divalent atoms. Is there a way to implement this 12-6-4 LJ model in LAMMPS?
Ref:
[1] Pengfei Li and Kenneth M. Merz, Jr. "Taking into Account the Ion-Induced Dipole Interaction in the Nonbonded Model of Ions", J. Chem. Theory Comput., 2014, 10, 289-297
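One commonly suggested workaround is sketched below, under the assumption that you tabulate the 12-6-4 potential yourself and read it with LAMMPS's pair_style table, typically layered with coul/long via pair_style hybrid/overlay. The C12/C6/C4 values here are placeholders, not the published Li–Merz parameters, and the table file format should be checked against the LAMMPS manual before use.

```python
# Tabulate E(r) = C12/r^12 - C6/r^6 - C4/r^4 and F(r) = -dE/dr for pair_style table.
import numpy as np

C12, C6, C4 = 1.0e5, 6.0e2, 1.0e2     # placeholder coefficients in your unit system
r = np.linspace(0.5, 12.0, 1000)      # distances [Angstrom]; avoid r = 0

E = C12 / r**12 - C6 / r**6 - C4 / r**4
F = 12 * C12 / r**13 - 6 * C6 / r**7 - 4 * C4 / r**5   # F = -dE/dr

with open("lj1264.table", "w") as fh:
    fh.write("LJ_12_6_4\n")
    fh.write(f"N {len(r)}\n\n")
    for i, (ri, ei, fi) in enumerate(zip(r, E, F), start=1):
        fh.write(f"{i} {ri:.6f} {ei:.6e} {fi:.6e}\n")

# In the LAMMPS input, something along these lines (check the manual for details):
#   pair_style hybrid/overlay table linear 1000 coul/long 12.0
#   pair_coeff 1 2 table lj1264.table LJ_12_6_4
```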
I am doing research on load balancing for web server clusters. Please suggest which simulator we can use for this.
For my dissertation work, I have been studying Constructivism, Critical Theory, and Computers.
I am curious what people think about this premise:
There is no single software program to use in the learning process; rather, students will program their own software as part of their learning process.
-----
Do you see this potential in all areas of school K-20? Or do you see it limited to particular slices?
Thanks for thinking about this! -- Bryan
I have studied this algorithm for many years and I am very sure it is correct. It will not take you too much time to understand it, because I show how to understand it in the abstract and remarks. This is my paper's website:
But now I have some problems that need your help and cooperation. As a coauthor, you can discuss them with me and I can explain any doubt you may have.
1) A top journal said my English expression is hard to follow, and suggested that I find a collaborator who is strong in algorithms and also strong in English paper writing.
2) A top journal said NP cannot equal P; their comment and my answer follow:
Their rejection email:
Although it remains a logical possibility that the P=NP? question has a positive answer, the overwhelming view of the research community is that its likelihood is negligible.
The sincerity and technical ability reflected in your submission led us to an initial effort to recruit a reviewer who might be willing to try identifying your work. We regret that we could not find such a reviewer. No competent reviewer we know of is willing to put in the effort to find a bug in a sophisticated, densely written 40-page paper.
My answer:
At present, most well-known authorities in this area tend to think that NP is not equal to P. It is absolutely certain that the authorities have no strong basis for this view, but the view seems to have been tacitly accepted by most people.
As a result, various academic papers that discuss NP, and especially NP-complete, problems often directly declare that there can be no polynomial-time algorithm. Such acquiescence is undoubtedly harmful.
A top international journal should be a responsible journal. To reject an algorithm paper, they should make sure that the algorithm is not innovative or is wrong. My paper is only an algorithm, and whether the algorithm is right or wrong is not difficult to determine. I can explain any doubts. In order to understand my algorithm, one only needs to understand the series of destroying edges. I say it again: one only needs to understand the series of destroying edges. This is a very feasible job. Why not read it carefully and then understand it? However, some top journals often reject papers based on guesses and assumptions rather than careful reading and understanding. Some journals reject my paper for two reasons: 1) It is impossible for you, an ordinary and unknown person, to solve such a difficult problem, so your paper must be wrong. But how many major problems have been solved in history where the solvers were not ordinary people before solving them? 2) Experts generally believe that NP is not equal to P, so your paper claiming that NP equals P cannot be correct. My answer is: only strictly proven conclusions are meaningful. Why do some experts always like to assert things? How many experts have made assertions in history, only for those assertions to be broken by new achievements? Let us briefly discuss why some experts assert that NP is not equal to P.
Two famous algorithm scientists wrote in one of their books [22] that so far a large number of NP-complete problems have arisen; because many excellent algorithm scientists and mathematicians have studied these problems for a long time without finding a polynomial algorithm, we tend to think that NP is not equal to P. Such an inference is logically untenable. From another point of view, among the many NP-complete problems, no algorithm scientist or mathematician has been able to prove that any one of them is exponential.
Lance Fortnow, the editor-in-chief of a famous ACM journal, wrote a review of P vs. NP [23], in which he argued that: 1) none of us really understands NP; 2) NP is unlikely to equal P; and 3) human beings cannot solve the problem in a short time (as explained above, this assertion is meaningless). To illustrate that NP is not likely to equal P, he described a very beautiful world under the premise that NP equals P: all parallel problems can be solved in polynomial time; all process problems, optimization problems, Internet paths, networking problems, etc., can quickly get the best solution; even the solution of hard number problems could be completed quickly in polynomial time, because solving any mathematical problem is actually a parallel, multi-branch, exponentially expanding process with perhaps only one correct path, which is itself an NP problem. So he thinks that if anyone proves NP = P, it means that he has solved all seven Millennium Prize problems. He did not say that if anyone proves NP = P, then that person could control the whole universe, although the evolution of the universe, including human intelligence activities, can theoretically be seen as a multi-branch, continuous parallel development process, and the real world we are in at this time is only one of its branches, or just one possibility of its evolution and development. Despite Mr. Lance Fortnow's authority (editor-in-chief of an internationally renowned journal), his argument is logically untenable. Even in his own article, he admits that even if NP = P is proved, it does not mean that we can get an efficient polynomial-time algorithm for any NP problem. Here I change his view slightly: if human beings had an unlimited non-deterministic Turing machine, the wonderful world he describes could indeed appear. What does it mean to have an unlimited non-deterministic Turing machine? It means that you have countless labourers who work for you on your own terms, without overlapping, along different branches. Imagine that you had countless mathematicians tackling a math problem in parallel along all possible directions (branches); what math problem would not be solved quickly? However, NP = P is not equivalent to having unlimited non-deterministic Turing machines. Logically, it is impossible for human beings to create unlimited non-deterministic Turing machines.
Hilbert, a great mathematician of the twentieth century, has a famous saying: we must know; we will know. It can be seen that, in essence, Hilbert agreed that NP equals P. Many mathematical problems in human history, including Hilbert's famous 23 problems, are constantly being solved. Is that not a confirmation that NP equals P?
From a heuristic point of view, any NPC problem can be reduced to any other NPC problem in polynomial time; that is to say, the distance between any two NPC problems is polynomial. This fact itself strongly suggests that NP problems have a unified solution law and difficulty, and that their solution difficulty should be of polynomial order of magnitude. The difference in an attribute value between any group of individuals in the objective world is usually of the same order of magnitude as the absolute value of the individual attribute. For example, an adult weighs on the order of 100 pounds, and the difference between a very fat man and a very thin man is also on the order of 100 pounds. Similarly, the weight of an ant is on the order of grams, and the difference between a big ant and a small ant is also on the order of grams. And so on. Of course, these are not strictly proven conclusions.
Anyway, I have studied this algorithm for many years and I am very sure I am correct. Remember Galois, É.? There are a lot of Cauchys and Fouriers in this world, but I believe that I can meet a Joseph Liouville.
Theory of Computation is a core subject of computer science. I am looking for study resources, including presentations, tutorials to solve, and question papers with guidelines to solve/verify solved problems. Thanks.
I teach computer programming (in high school) for the sole reason of increasing the impact of the future of the computer science industry. I stand by my theory that the computer science curriculum is missing the mark by focusing on computer science instead of building a foundation through teaching programming, so that when high school students gain the math skills needed to excel in CS they will be successful. I teach programming to 150 students a year with a waiting list, while most colleagues can't fill one class because of the prerequisite of a strong math foundation. I am interested in your project. Feel free to contact me.
I have just begun reading background theory on computational chemistry, and I am finding it really difficult to understand Density Functional Theory. Are there any suggestions or references to help me understand it?
I want to work on game theory, so I would like to know about recent work on this topic.
I've been reading the paper by J. R. Cheeseman, M. J. Frisch, "Basis Set Dependence of Vibrational Raman and Raman Optical Activity Intensities", J. Chem. Theory Comput., 7, (2011), 3323-3334. Moreover, as reported in the Gaussian 09 manual: "Raman and ROA intensities can be calculated separately from calculation of the force constants and normal modes, to facilitate using a larger basis for these properties as recommended."
I'm performing a frequency calculation with B3LYP/maug-cc-pVDZ and I want to perform a subsequent Raman computation with B3LYP/aug-cc-pVTZ using the previous checkpoint file. I've been tinkering with the keyword Polar=Raman but the job fails, complaining about missing input.
Is there anybody who can provide any suggestion and possibly a route card example to solve this problem?
Recursion theorem (paraphrasing Sipser, Introduction to the Theory of Computation):
Let T be a program that computes a function t: N x N --> N. There is a program R that computes a function r: N --> N, where for every w,
t(<R>, w) = r(w)
The proof is by construction. The program R thus constructed computes the same thing as program T with one input fixed at <R>. But what if the right side does not compute anything? In that event we can reason as follows:
T(<R>, w) halts <--> R(w) halts
and by modus tollens
T(<R>, w) does not halt <--> R(w) does not halt
In particular we have
R(w) halts --> T(<R>, w) halts
T(<R>, w) does not halt --> R(w) does not halt
The transposition is based on the table below
T(<R>, w) halts | R(w) halts
T               | T
F               | F
But what if the value in the right column is unknown and unknowable?
T(<R>, w) halts | R(w) halts
T               | T
?               | unknowable
F               | F
We know that there is no universal halting decider, so the value in the right column may be unknowable to a given decider. So what if the decider concludes that T(<R>,w) does not halt? Is there anything wrong with the table below?
T(<R>, w) halts | R(w) halts
T               | T
F               | unknowable
F               | F
In particular is there any compelling reason that if the value in the left column is F that the value in right column must also be F, i.e. that it cannot remain unknowable? It seems to me that if T(<R>, w) does not halt then it does not imply anything.
Is it possible for a decider to say [for some T] that T(<R>, w) does NOT halt yet remain agnostic about R(w) halting??
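For what it is worth, the construction in the theorem can be played with concretely. Below is a minimal Python sketch (my own assumption: source strings stand in for machine descriptions <R>, and exec() stands in for running a description), built with the usual quine trick so that r(w) really equals t(<R>, w).

```python
def t(desc, w):
    # an arbitrary computable T taking a description and an input
    return f"description length {len(desc)}, input {w!r}"

r_template = 'template = {0!r}\ndef r(w):\n    return t(template.format(template), w)'
r_source = r_template.format(r_template)   # <R>: the full source text of R

namespace = {"t": t}
exec(r_source, namespace)                  # "run" the description to obtain r

print(namespace["r"]("abc"))               # R(w)
print(t(r_source, "abc"))                  # t(<R>, w) -- identical output
```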
We know there is an elementary cellular automaton (ECA) with 2 states (Rule 110) that is universal, i.e. Turing-complete. One-way cellular automata (OCAs) are a subcategory of ECAs where the next state depends only on the state of the current cell and one neighbor. I believe that one can make a universal OCA with 4 states pretty easily by simulating two cells of Rule 110 with each cell of the OCA. Does anyone know if it can be done with only three states? Thanks!
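Not an answer to the 3-state question, but for experimenting with such constructions it helps to have the reference rule on hand; here is a minimal Rule 110 stepper in Python (naming is mine), against which a candidate OCA simulation can be checked cell by cell. Note that brute force alone will not settle the question: there are 3^9 = 19683 possible 3-state OCA local rules, and universality would still have to be argued for, not enumerated.

```python
def rule110_step(cells):
    # one synchronous update of elementary CA Rule 110, periodic boundaries
    rule = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}
    n = len(cells)
    return [rule[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

state = [0, 0, 0, 0, 1, 0, 0, 0]
for _ in range(4):
    state = rule110_step(state)
    print(state)
```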
I would like to know where one can apply the nearest-neighbor algorithm, for example in
image processing, robotics, DNA sequence analysis, etc.
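In case a runnable reference point helps frame those application areas, here is a tiny sketch of 1-nearest-neighbour classification in Python (toy data and names are mine); the same distance-based lookup is what gets reused on image patches, robot poses, or sequence feature vectors.

```python
def classify_1nn(train, query):
    # train: list of (feature_vector, label); query: feature_vector
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # return the label of the closest training example
    return min(train, key=lambda pair: dist2(pair[0], query))[1]

train = [((0.0, 0.0), "A"), ((1.0, 1.0), "B"), ((0.2, 0.1), "A")]
print(classify_1nn(train, (0.9, 0.8)))  # "B"
```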
I have implemented an algorithm for NFAs that takes the adjacency matrix as input, but I want to provide the automaton as a structure instead.
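One way to "get it by structure", sketched under my own naming conventions: keep the NFA as a transition mapping (state, symbol) -> set of successor states, and derive the adjacency matrix from that structure whenever the existing matrix-based algorithm still needs one.

```python
class NFA:
    def __init__(self, states, alphabet, delta, start, accept):
        self.states = list(states)        # e.g. ["q0", "q1"]
        self.alphabet = list(alphabet)    # e.g. ["a", "b"]
        self.delta = delta                # {(state, symbol): {successor states}}
        self.start = start
        self.accept = set(accept)

    def adjacency_matrix(self):
        # 1 in row i, column j iff some symbol leads from state i to state j
        idx = {s: i for i, s in enumerate(self.states)}
        n = len(self.states)
        mat = [[0] * n for _ in range(n)]
        for (s, _sym), targets in self.delta.items():
            for t in targets:
                mat[idx[s]][idx[t]] = 1
        return mat

nfa = NFA(["q0", "q1"], ["a", "b"],
          {("q0", "a"): {"q0", "q1"}, ("q1", "b"): {"q1"}}, "q0", ["q1"])
print(nfa.adjacency_matrix())  # [[1, 1], [0, 1]]
```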
My algorithm and .exe download for Hamilton cycles (paths); I need help with the English expression of the paper. The paper is on arXiv.
I took 10 years to study this problem and I am very sure I got the correct result. My English is not good, and the paper's style is not good either; please help. You can download my .exe file (written in VC++) to compute Hamilton paths; the help menu tells you how to use it. Can you help me?
For example, assume two entities P and Q, where we are using 'proof-by-contradiction' to validate P (by using Q).
Something like:
- if P then Q;
- Not Q. Hence not P
IMO, one can only use such a scenario when P and Q are existentially independent of each other.
In other words, can one use proof-by-contradiction in cases where Q's existence is dependent on P's validity?
For example, Q exists if and only if P exists.
In such case does the proof-by-contradiction still hold valid?
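For reference, the inference pattern itself (if P then Q; not Q; hence not P) is contraposition/modus tollens and is valid regardless of what P and Q are about; a minimal Lean 4 rendering is below. Whether an "existential dependence" of Q on P is faithfully captured by the implication P → Q is a separate modelling question that this schema does not settle.

```lean
-- from (P → Q) and ¬Q, conclude ¬P
theorem contraposition_example (P Q : Prop) (h : P → Q) (hq : ¬Q) : ¬P :=
  fun hp => hq (h hp)
```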
(OMPs in road networks, not Euclidean space.)
Mass scaling is a way of reducing the computational cost of an analysis. From tutorials I learned that mass scaling affects the critical time increment, but I couldn't find any source about how it reduces the computational cost.
We can also increase the load rate in order to reduce the computational cost, but if the material is rate-dependent, the load rate affects the results. Does mass scaling have negative effects like the load rate does?
If you can suggest any source that explains how the computational cost is reduced, and explain possible negative effects of mass scaling, I would appreciate it.
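A back-of-the-envelope sketch of the mechanism (my own illustrative numbers, not from any tutorial): in explicit integration the stable time increment scales roughly like the smallest element length divided by the dilatational wave speed c = sqrt(E/rho), so scaling the mass (density) by a factor f stretches the stable increment by sqrt(f) and cuts the number of increments needed for a fixed event duration, which is where the cost saving comes from. The price is added nonphysical inertia, analogous to the rate effects you mention for increased load rates.

```python
import math

E = 210e9        # Young's modulus [Pa] (steel-like, illustrative)
rho = 7850.0     # density [kg/m^3]
L_e = 1e-3       # smallest element length [m]
T = 0.01         # event duration to simulate [s]

for f in (1.0, 100.0, 10000.0):          # mass scaling factor applied to rho
    c = math.sqrt(E / (rho * f))         # dilatational wave speed
    dt = L_e / c                         # stable increment estimate
    print(f"f={f:>8}: dt ~ {dt:.3e} s, increments ~ {T / dt:,.0f}")
```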
My objective is to create and accumulate physical evidence, and to demonstrate irrefutable physical evidence, to prove that the existing definitions for software components and CBSE/CBSD are fundamentally flawed. Today no computer science textbook introducing software components and CBSD (component-based design for software products) presents the assumptions (i.e. first principles) that resulted in such flawed definitions for software components and CBSD.
In real science, anything not having irrefutable proof is an assumption. What are the undocumented scientific assumptions (or first principles) at the root of computer science that resulted in fundamentally flawed definitions for so-called software components and CBD (component-based design) for software products? Each of the definitions for each kind of so-called software component has no basis in reality and is in clear contradiction to the facts we know about the physical functional components used to achieve CBD of physical products. What are the undocumented assumptions that forced researchers to define properties of software components without giving any consideration to the reality and facts we all know about physical functional components and the CBD of physical products?
Except for computer science or software engineering textbooks introducing software components and CBSD (component-based design for software products), I believe the first chapter of a textbook in any other scientific discipline discusses the first principles at the root of that discipline. Each of the definitions and concepts of the discipline is derived by relying on the first principles and on observations (e.g. including empirical results), and by applying sound rational reasoning. For example, any textbook on basic science for school kids starts by teaching that "Copernicus discovered that the Sun is at the center". This is one of the first principles at the root of our scientific knowledge, so if it were wrong, a large portion of our scientific knowledge would end up invalid.
I asked countless experts why we need a different and new description (i.e. definitions and/or list of properties) for software components and CBSD, where the new description, properties and observations are in clear contradiction to the facts, concepts and observations we know about physical functional components and the CBD of large physical products (having at least a dozen physical functional components). I was given many excuses/answers, such as: software is different/unique, or it is impossible to invent software components equivalent to physical functional components.
All such excuses are mere undocumented assumptions. It is impossible to find any evidence that anyone ever validated these assumptions. Such assumptions must be documented, but no textbook or paper on software components even mentions the baseless assumptions relied upon to conclude that each kind of useful part is a kind of software component, for example, that reusable software parts are a kind of software component. CBD for software is then defined as using such fake components. Using highly reusable ingredient parts (e.g. plastic, steel, cement, alloy or silicon in wafers) is not CBD. If anyone asks 10 different experts for a definition/description of software components, he gets 10 different answers (without any basis in the reality we know about physical components). Only God has more mysterious descriptions, as if no one alive has ever seen a physical functional component.
The existing descriptions and definitions for so-called CBSD and so-called software components were invented out of thin air (based on wishful thinking) by relying on such undocumented myths. Today many experts defend the definitions by using such undocumented myths as if they were inalienable truths of nature, not much different from how researchers defended epicycles by relying on the assumption "the Earth is static" up until 500 years ago. Also, most of the concepts of CBSD and software components created during the past 50 years were derived by relying on such fundamentally flawed definitions of software components/CBSD (where the definitions, properties and descriptions are rooted in undocumented and unsubstantiated assumptions).
Is there any proof that it is impossible to invent real software components equivalent to the physical functional components for achieving real CBSD (CBD for software products), where real CBSD is equivalent to the CBD of large physical products (having at least a dozen physical functional components)? There exists no proof that such assumptions are accurate, so it is wrong to rely on such unsubstantiated assumptions. It is a fundamental error if such assumptions (i.e. first principles) are not documented.
I strongly believe such assumptions must be documented in the first chapters of each of the respective scientific disciplines, because doing so forces us to keep the assumptions on the radar of our collective consciousness and compels future researchers to validate them (i.e. the first principles), for example when technology makes sufficient progress for validating the assumptions.
I am not saying it was wrong to make such assumptions/definitions for software components 50 years ago. But it is a huge error not to document the assumptions relied upon for making such different and new definitions (by ignoring reality and known facts). Such assumptions may have been acceptable and true 50 years ago (when computer science and software engineering were in their infancy and assembly language and FORTRAN were leading-edge languages), but are such assumptions still valid? If each of the first principles (i.e. assumptions) is a proven fact, who proved it and where can I find the proof? Such information must be presented in the first chapters.
In real science, anything not having irrefutable proof is an assumption. Are such undocumented, unsubstantiated assumptions facts? Don't the computer science textbooks on software components need to document proof for such assumptions before relying on such speculative, unsubstantiated assumptions for defining the nature and properties of software components? All the definitions and concepts for software components and CBSD could be wrong if the undocumented and unsubstantiated assumptions end up having huge errors.
My objective is to provide physical evidence (i) to prove that it is possible to discover accurate descriptions of the physical functional components and of the CBD of large physical products (having at least a dozen physical functional components), and (ii) to prove that it is not hard to invent real software components (that satisfy the accurate description of the physical functional components) for achieving real CBSD (that satisfies the accurate description of the CBD of physical products), once the accurate descriptions are discovered.
It is nearly impossible to expose an error at the root of a deeply entrenched paradigm such as CBSE/CBSD (evolving for 50 years) or the geocentric paradigm (which evolved for 1000 years). For example, the assumption "the Earth is static" was considered an inalienable truth (not only of nature but also of God/the Bible) for thousands of years, but it ended up being a flaw and sidetracked the research efforts of countless researchers of the basic sciences into a scientific crisis. Now we know that no meaningful scientific progress would have been possible if that error had not been exposed. The only possible way to expose such an error is to show physical evidence, by finding a few experts who are willing to look at the physical evidence with an open mind, even if most experts refuse to see it.
I have a lot of physical evidence and am now in the process of building a team of engineers and the necessary tools for building software applications by assembling real software components for achieving real CBSD (e.g. for achieving the CBD-structure http://real-software-components.com/CBD/CBD-structure.html by using the CBD-process http://real-software-components.com/CBD/CBD-process.html). When our tools and team are ready, we should be able to build any GUI application by assembling real software components.
In real science, anything not having irrefutable proof is an assumption. Any real scientific discipline must document each of the assumptions (i.e. first principles) at its root before relying on them to derive concepts, definitions and observations (which are perceived to be accurate only if the assumptions are proven to be true): https://www.researchgate.net/publication/273897031_In_real_science_anything_not_having_proof_is_an_assumption_and_such_assumptions_must_be_documented_before_relying_on_them_to_create_definitionsconcepts
I tried to write papers and give presentations to educate people about the error, but none of them worked. I learned the hard way that this kind of complex paradigm shift can't happen in just a couple of hours of presentation or by reading 15- to 20-page papers. The only possible way left for me to expose the flawed first principles at the root of a deeply entrenched paradigm is to find experts willing to see physical evidence and show them the physical evidence: https://www.researchgate.net/publication/273897524_What_kind_of_physical_evidence_is_needed__How_can_I_provide_such_physical_evidence_to_expose_undocumented_and_flawed_assumptions_at_the_root_of_definitions_for_CBSDcomponents
So I am planning to work with willing customers to build their applications, which gives us a few weeks to even a couple of months to work with them to build their software by identifying "self-contained features and functionality" that can be designed as replaceable components to achieve real CBSD.
How can I find experts or companies willing to work with us to see the physical evidence, for example by allowing us to work with them to implement their applications as a CBD-structure? What kind of physical evidence would be compelling, if anyone were willing to give us a chance (at no cost to them, since we can work for free to provide compelling physical evidence)? I have failed so many times in this complex effort, so I am not sure what could work. Would this work?
Best Regards,
Raju
Is the same problem NP-complete for strong edge colored graphs and proper edge colored graphs?
Definitions:
1) An edge coloring is 'proper' if each pair of adjacent edges have different colors.
2) An edge coloring is 'vertex-distinguishing' if no two vertices have the same set of colors on the edges incident with them.
3) An edge coloring is 'strong' if it is both proper and vertex-distinguishing.
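To make the three definitions concrete, here is a small Python checker (graph and colouring representations are my own choice: edges as frozensets of endpoints, a colouring as a dict from edge to colour).

```python
from itertools import combinations

def is_proper(coloring):
    # adjacent edges (sharing a vertex) must receive different colours
    return all(c1 != c2
               for (e1, c1), (e2, c2) in combinations(coloring.items(), 2)
               if e1 & e2)

def is_vertex_distinguishing(coloring):
    # no two vertices may see the same set of incident edge colours
    palette = {}
    for edge, colour in coloring.items():
        for v in edge:
            palette.setdefault(v, set()).add(colour)
    sets = list(palette.values())
    return all(s != t for s, t in combinations(sets, 2))

def is_strong(coloring):
    return is_proper(coloring) and is_vertex_distinguishing(coloring)

# path a-b-c with two colours: proper and vertex-distinguishing, hence strong
colouring = {frozenset({"a", "b"}): 1, frozenset({"b", "c"}): 2}
print(is_strong(colouring))  # True
```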
Some books say that a function can be Turing-computed if and only if it is a mu-recursive function [1]. So the Ackermann function should be mu-recursive. Here, mu-recursive is defined from primitive recursive functions by composition or recursion schemes.
Meanwhile, it is proved that the Ackermann function is not a primitive recursive function, so it should not be mu-recursive, which contradicts the above proposition.
So, does the Ackermann function belong to the mu-recursive functions after all?
-----------------------------
[1] Thomas A. Sudkamp, Languages and Machines: An Introduction to the Theory of Computer Science, Third Edition, Pearson Education, Inc., 2006, pp. 415-416.
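For reference, the Ackermann function itself is easy to transcribe directly into code (only usable for very small arguments, since both its values and the recursion depth explode).

```python
def ackermann(m, n):
    # the standard two-argument Ackermann-Peter function
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))  # 9
print(ackermann(3, 3))  # 61
```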
The philosophy and causality can be denoted as:
Data Computers (Turing & von Neumann) vs. Knowledge Computers (ICIC, http://www.ucalgary.ca/icic/).
A fundamental observation is that humans do not reason at the binary level, because it is too distant and indirect from real-world entities/problems and their mental neurophysiological representation. It is also too inefficient for natural inference, because human knowledge and thought are not numbers but rather hyperstructures of concepts and behavioral processes.
Considering a finite automaton as a set of states with a well-defined transition function, how would one formally define the element 'state' in an automaton?
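One common formalisation, sketched below with my own example: an automaton is a 5-tuple (Q, Sigma, delta, q0, F), and a "state" is simply an element of the finite set Q, i.e. an unstructured label whose only meaning comes from how delta, q0 and F refer to it.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DFA:
    states: frozenset          # Q
    alphabet: frozenset        # Sigma
    delta: dict                # (state, symbol) -> state
    start: object              # q0, an element of Q
    accept: frozenset          # F, a subset of Q

    def accepts(self, word):
        q = self.start
        for a in word:
            q = self.delta[(q, a)]
        return q in self.accept

# states are just labels: "e" = even number of zeros seen, "o" = odd
even_zeros = DFA(frozenset({"e", "o"}), frozenset({"0", "1"}),
                 {("e", "0"): "o", ("o", "0"): "e",
                  ("e", "1"): "e", ("o", "1"): "o"},
                 "e", frozenset({"e"}))
print(even_zeros.accepts("0101"))  # True (two zeros)
```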
Right now, we are working on modelling arithmetic operations (+, -, /, *, !) towards simulating series as a means of computing values of real functions, in order to lay the groundwork for more serious research.
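As a minimal sketch of that idea (the function name and the choice of series are mine): exp(x) computed from its Taylor series using nothing but +, * and /, with each new term obtained from the previous one.

```python
import math

def exp_series(x, terms=30):
    total, term = 0.0, 1.0           # term holds x**k / k!
    for k in range(terms):
        total += term
        term = term * x / (k + 1)    # next term via one multiplication and one division
    return total

print(exp_series(1.0), math.exp(1.0))   # both approximately 2.718281828...
```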
I know what an NP-complete problem is and the procedure to prove it, but why do we have to prove that a problem is NP-complete or NP-hard? What was the need to define this whole new class?
If so, what is its complexity?
This is the equation:
f(n) = [ (4 ^ ( 4 ^ n)) * (3 ^ ( 4 ^ n)) * (2 ^ (4 ^ n)) * (1 ^ (4 ^ n)) ] / 4
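Note that the product collapses, since 4^x * 3^x * 2^x * 1^x = (4*3*2*1)^x = 24^x with x = 4^n, so f(n) = 24^(4^n) / 4. A quick exact evaluation in Python (illustration only, not an answer to the recursiveness question):

```python
def f(n):
    # exact integer arithmetic; 24**(4**n) is divisible by 4
    return 24 ** (4 ** n) // 4

for n in range(3):
    print(n, f(n))
# 0 -> 6, 1 -> 82944, 2 -> 24**256 / 4 (an integer with roughly 350 digits)
```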
What is the recursive function to decide whether a number (from the domain R, the set of real numbers) belongs to (or has an equivalent value in) the set of integers?
Starting from 1 and incrementing until you hit the number is not really what I am looking for, since it does not reveal how exactly the next incremented number is decided (or assumed) to be part of the set of integers.
Explicitly stating that 1, 2, 3, and so on all belong to N is not really a characteristic function. Is there any mathematical characteristic function that does not depend on such enumeration, but rather on some inherent property that differentiates elements of N from elements of R?
What is the Indicator function for set membership of integers?
I have a black-box function (whose definition is not known) producing some numbers (whose precision is not known).
How do I test whether a generated number is an integer or not?
I am looking for a computable definition of "integerness".
Ceil, Truncate, Floor, etc. do not really fit the bill.
What is a method to decide whether a given number (an integer) has an integer square root, without actually computing the square root?
For example, assume I have a method M that does this. Then it should behave like below:
M(16) should return true (since sq-root(16)==4 which is integer)
M(17) should return false (since sq-root(17) is not an integer)
and M should not actually compute the square-root to decide this.
Any literature or info on this?
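One standard compromise, sketched below (the mod-16 residue set is a well-known quadratic-residue filter, but double-check before relying on it): cheap modular tests reject most non-squares without any root extraction, and only the survivors need an exact integer-square-root check, since the filters are necessary but not sufficient.

```python
import math

SQUARES_MOD_16 = {0, 1, 4, 9}           # possible values of k*k mod 16

def maybe_square(n):
    # necessary condition only; rejects most non-squares without any root
    return n >= 0 and n % 16 in SQUARES_MOD_16

def is_square(n):
    if not maybe_square(n):
        return False
    r = math.isqrt(n)                    # exact confirmation still needs an integer root
    return r * r == n

print(is_square(16), is_square(17))      # True False
```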
I am interested in the study of continuous systems. Can anyone provide me with materials/links that deal extensively with the subject?
Basic causality:
- Numbers (for data, conventional) vs. Hyperstructures (for knowledge, contemporary)
- The domain of human reasoning is hyperstructures (H) of concepts and behavioral processes rather than numbers (N, R).
- Suitable mathematical means, known as denotational mathematics, are required for facilitating rigorous inference of knowledge and thought in H.
Paradigms of denotational mathematics [Wang, 2002-2013]:
- Semantic algebra, concept algebra, behavioral process algebra (RTPA), system algebra, inference algebra, visual semantic algebra (VSA), granular algebra, … (ICIC, http://www.ucalgary.ca/icic/)
For classes above NP (EXPTIME, EXPSPACE, etc.) we define complete problems in terms of polynomial-time reductions. I can see that this is useful in case such a class were equal to P, but that is highly unlikely. I think we should use reductions matched to each class (polynomial-space reductions for EXPTIME, exponential-time reductions for EXPSPACE, etc.). This would probably enlarge the set of complete problems for each class.
What do you think?
Software developers suggested formal software development methods and later found some shortcomings as well. Now it is suggested to go for RAD, Agile, etc., which concentrate on the problem and quickly produce the software with some compromise. The present trend is to develop software suitable to the business as soon as possible. There are pluses and minuses, but what might be the final conclusion?
There seem to be various numbers floating around. kT ln 2 seems to be one theoretical limit, put forward by John von Neumann and Rolf Landauer at different times. CMOS seems to be many orders of magnitude above that. Is there good data, or are there ideas, to close the gap between the CMOS energy per operation and the theoretical limit? When does CMOS bottom out?
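For orientation, the numbers are easy to reproduce; a quick Python check (the CMOS figure of ~1 fJ per operation is a rough illustrative assumption, not measured data).

```python
import math

k_B = 1.380649e-23        # Boltzmann constant [J/K]
T = 300.0                 # room temperature [K]
landauer = k_B * T * math.log(2)
print(f"Landauer limit at 300 K: {landauer:.2e} J per bit erased")   # ~2.9e-21 J

cmos_switch = 1e-15       # assumed ~1 fJ per logic operation, for illustration only
print(f"Illustrative CMOS / Landauer ratio: {cmos_switch / landauer:.1e}")  # ~3e5
```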
Please, what are the areas of research in theory of computation?
TMs compute algorithms and are the standard model of computation, often considered complete. Models of concurrency include the pi calculus, Petri nets, and actors. Unlike TMs that execute algorithms, concurrency and sequential interaction produce infinite behaviors and interleave inputs with outputs. What models bridge this gap?