Theory of Computation - Science topic

Explore the latest questions and answers in Theory of Computation, and find Theory of Computation experts.
Questions related to Theory of Computation
  • asked a question related to Theory of Computation
Question
2 answers
Can we apply theoretical computer science to prove theorems in mathematics?
Relevant answer
Answer
The pumping lemma is a valuable theoretical tool for understanding the limitations of finite automata and regular languages. It is not used for solving computational problems directly but is important for proving non-regularity and understanding the boundaries of regular languages.
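As a brief illustration of how the lemma is used in such non-regularity proofs (a standard textbook example, not taken from the question), here is a sketch in LaTeX showing that L = {a^n b^n : n >= 0} is not regular:

    % Claim: L = \{ a^n b^n : n \ge 0 \} is not regular.
    \begin{proof}
    Suppose $L$ were regular with pumping length $p$.
    Take $s = a^p b^p \in L$, so $|s| \ge p$.
    Any decomposition $s = xyz$ with $|xy| \le p$ and $|y| \ge 1$ forces $y = a^k$ for some $k \ge 1$.
    Then $xy^2z = a^{p+k} b^p \notin L$, contradicting the pumping lemma.
    Hence $L$ is not regular.
    \end{proof}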
  • asked a question related to Theory of Computation
Question
12 answers
Greetings everyone,
I was wondering whether there are any interesting aspects of cellular automata as computational models in the strict(er) sense of the term:
  1. Formulate the problem into an initial state of the automaton
  2. Run n generations
  3. Transform the resulting state back into a solution that makes sense in terms of the original formulation
For example: a 2D cellular automaton sorting an array of numbers, or something more complicated. Note: I am not referring, for example, to a Turing Machine designed in the Game of Life, where the gliders can be seen as bits/wires etc., but rather to a rule that has been designed for such purposes.
Are there any interesting works/papers you could suggest?
Relevant answer
Answer
There is a very important source covering various models, approaches, and theory of cellular automata, provided in the book:
Andrew Ilachinski: "Cellular Automata: A Discrete Universe", World Scientific (July 2001), DOI: 10.1142/4702.
I recommend reading this book, as it serves as a very well written advanced introduction to cellular automata.
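To make the encode-run-decode workflow from the question concrete, here is a minimal sketch (my own illustrative example, not taken from the book above) of a 1D two-phase rule that sorts an array, i.e. odd-even transposition expressed as a local cellular update:

    def step(cells, phase):
        """One generation: compare-and-swap disjoint neighbour pairs.
        phase 0 pairs indices (0,1), (2,3), ...; phase 1 pairs (1,2), (3,4), ...
        """
        nxt = cells[:]
        for i in range(phase, len(nxt) - 1, 2):
            if nxt[i] > nxt[i + 1]:
                nxt[i], nxt[i + 1] = nxt[i + 1], nxt[i]
        return nxt

    def ca_sort(values):
        # 1. encode the problem as the initial state (here: the array itself)
        state = list(values)
        # 2. run n generations, alternating the phase
        for gen in range(len(state)):
            state = step(state, gen % 2)
        # 3. transform the final state back into a solution (a sorted list)
        return state

    print(ca_sort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]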
  • asked a question related to Theory of Computation
Question
6 answers
Can complexity theory solve problems in mathematics, completely or partially?
Relevant answer
Answer
I read your notes, but I could not extract a concrete claim. You described some open problems. Complexity theory is useful only in the presence of an algorithm for tackling the problem.
First, you need to present a concrete theory and then build your algorithm with a suitable time complexity to support your proofs.
Complexity analysis has nothing to offer in the absence of the theory.
Best regards
  • asked a question related to Theory of Computation
Question
4 answers
I am not fully a molecular modelling researcher, so I am not able to understand this one. I have gone through J. Chem. Theory Comput. 2011, 7, 2498–2506 on this concept, but I did not understand what exactly has to be computed.
Thanks.
Relevant answer
Answer
Thank You.
  • asked a question related to Theory of Computation
Question
1 answer
I am simulating a system with Ca2+ ions using the 12-6-4 Li-Merz parameters [1] for divalent atoms. Is there a way to implement this 12-6-4 LJ model in LAMMPS?
Ref:
[1] Pengfei Li and Kenneth M. Merz, Jr. "Taking into Account the Ion-Induced Dipole Interaction in the Nonbonded Model of Ions", J. Chem. Theory Comput., 2014, 10, 289-297
Relevant answer
Answer
Hi,
I think you can use the 'table' pair style. You just have to create a file in which you tabulate your potential and force. Look here:
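Independently of the missing link, here is a rough sketch of how one might tabulate a 12-6-4 potential U(r) = C12/r^12 - C6/r^6 - C4/r^4 and its force into a file usable by the 'table' pair style (the coefficients below are placeholders, not the published Li-Merz values, and the exact header keywords and pair_style/pair_coeff syntax should be checked against the LAMMPS documentation):

    import numpy as np

    # Placeholder coefficients -- substitute the published 12-6-4 parameters.
    C12, C6, C4 = 1.0e4, 1.0e2, 1.0e1   # hypothetical values; units must match your system

    r = np.linspace(1.0, 12.0, 1000)                 # distance grid (Angstrom)
    U = C12/r**12 - C6/r**6 - C4/r**4                # pair energy
    F = 12*C12/r**13 - 6*C6/r**7 - 4*C4/r**5         # force, F = -dU/dr

    with open("ca_1264.table", "w") as fh:
        fh.write("CA_1264\n")            # keyword later referenced by pair_coeff
        fh.write(f"N {len(r)}\n\n")      # check header keywords against the manual
        for i, (ri, ui, fi) in enumerate(zip(r, U, F), start=1):
            fh.write(f"{i} {ri:.6f} {ui:.8e} {fi:.8e}\n")

In the LAMMPS input the table would then be referenced with something like "pair_style table linear 1000" and "pair_coeff I J ca_1264.table CA_1264" (again, verify the syntax in the manual).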
  • asked a question related to Theory of Computation
Question
14 answers
I am doing research on load balancing for web server clusters. Please suggest which simulator we can use for this.
Relevant answer
Answer
If you want to do research then just try this: https://github.com/lalithsuresh/absim/tree/table and paper: "C3: Cutting Tail Latency in Cloud Data Stores via Adaptive Replica Selection"
  • asked a question related to Theory of Computation
Question
7 answers
For my dissertation work, I have been studying Constructivism, Critical Theory, and Computers.
I am curious what people think about this premise:
There is no single software program to use in the learning process; rather, students will program their own software as part of their learning process.
-----
Do you see this potential in all areas of school K-20? Or do you see it limited to particular slices?
Thanks for thinking about this! -- Bryan
Relevant answer
Answer
So glad that what I wrote seems to have made sense to you. I see we are somewhat on similar lines. I do not yet think we have seen any final outcome of the use of digital ICTs. The next big, and somewhat quite disruptive, thing I expect is development in the direction of learning analytics and adaptive learning. This will begin to respect that learners need different amounts of time and different resources and relations in order to learn. Instead of time being the constant and learning the variable, learning can be constant and time variable. But this destroys school scheduling and organisation and calls for a reconstruction of social learning structures in organised education.
I leave a couple of attachments here to papers on RG, by Floridi, that I think you will like, related to my previous post - and a conference presentation of mine.
You write about constructivism - but I wonder if, in your thinking, you are closer to constructionism. Papert was a friend of Piaget (a constructivist) but became in the end a constructionist. There is a lot of difference here. Floridi sees it as the difference between the knowledge of the user and the knowledge of the maker, and blames the knowledge of the user, knowledge as a spectator sport, on Plato. Floridi sees it as a historic mistake - it is the maker of artefacts (whom Plato despised) who has the deeper knowledge - he or she knows how it works, can take the artefact apart, explain it and put it together again. Think of the difference between a car user and a car mechanic. Artefacts can also be non-physical, of course. See the attached papers "What the maker's knowledge could be" and "In defence of constructionism". I will also send you by RG mail a short text behind a paywall.
However, if you are doing a dissertation under supervision, deviations from the almost all-dominant constructivist dogma may not be problem-free. Just a heads-up...
  • asked a question related to Theory of Computation
Question
6 answers
I have studied this algorithm for many years and I am very sure it is correct. It will not take you much time to understand it, because I show how to understand it in the abstract and remarks. This is my paper's website:
But now I have some problems for which I need your help and cooperation. As a coauthor, you can discuss the paper with me and I can explain every possible doubt you may have.
1) A top journal said my English expression is hard to follow, and suggested that I find a collaborator who is strong in algorithms and also strong in English paper writing.
2) A top journal said NP cannot equal P; below are their comment and my answer:
Their rejection email:
Although it remains a logical possibility that the P=NP? question has a positive answer, the overwhelming view of the research community is that its likelihood is negligible.
The sincerity and technical ability reflected in your submission led us to make an initial effort to recruit a reviewer who might be willing to try reviewing your work. We regret that we could not find such a reviewer. No competent reviewer we know of is willing to put in the effort to find a bug in a sophisticated, densely written 40-page paper.
My answer:
At present, most well-known authorities in this area tend to think that NP is not equal to P. The authorities have no rigorous basis for this view, but it seems to have been tacitly accepted by most people.
As a result, various academic papers that talk about NP, and especially NP-complete, problems directly declare that there can be no polynomial-time algorithm. Such acquiescence is undoubtedly harmful.
A top international journal should be a responsible journal. To reject an algorithm paper, they should make sure that the algorithm is not innovative or is wrong. My paper is only an algorithm, and whether the algorithm is right or wrong is not difficult to determine. I can explain any doubts. In order to understand my algorithm, one only needs to understand the series of destroying edges. I say it again: one only needs to understand the series of destroying edges. This is a very feasible task. Why not read the paper carefully and then understand it? However, some top journals often reject papers based on guesses and assumptions rather than careful reading and understanding. Some journals reject my paper for two reasons: 1) It is impossible for you, an ordinary and unknown person, to solve such a difficult problem, so your paper must be wrong. But how many major problems in history were solved by people who were unknown before they solved them? 2) Experts generally believe that NP is not equal to P, so your paper claiming that NP equals P cannot be correct. My answer is: only strictly proven conclusions are meaningful. Why do some experts always like to assert things? How many assertions made by experts throughout history have later been overturned by new results? Let us briefly discuss why some experts assert that NP is not equal to P.
Two famous algorithm scientists wrote in one of their books [22] that a large number of NP-complete problems have arisen so far; because many excellent algorithm scientists and mathematicians have studied these problems for a long time without finding a polynomial algorithm, we tend to think that NP is not equal to P. Such inferences are logically untenable. From another point of view, among the many NP-complete problems, no algorithm scientist or mathematician has been able to prove that any of them requires exponential time.
Lance Fortnow, the editor-in-chief of a famous ACM journal, wrote a review of P vs. NP [23], in which he argued that: 1) none of us really understands NP, 2) NP is unlikely to equal P, and 3) human beings cannot solve the problem in a short time (as explained above, this assertion is meaningless). To illustrate that NP is not likely to equal P, he described a very beautiful world under the premise that NP equals P: all parallel problems can be solved in polynomial time; all process problems, optimization problems, Internet path and networking problems, etc., can quickly get the best solution; even the solution of hard mathematical problems can be completed quickly in polynomial time, because solving any mathematical problem is actually a parallel, multi-branch, exponentially expanding process with perhaps only one correct channel, which is actually an NP problem. So he thinks that if anyone proves NP = P, it means that they have solved all seven of the famous open problems. He did not go so far as to say that whoever proves NP = P could control the whole universe, although the evolution of the universe, including human intellectual activity, can theoretically be seen as a multi-branch, continuously parallel development process, and the real world we are in at this moment is only one of its branches, or just one possibility of its evolution and development. Despite Mr. Lance Fortnow's authority (editor-in-chief of an internationally renowned journal), his argument is logically untenable. Even in his own article, he admits that even if NP = P is proved, it does not mean that we can get an efficient polynomial-time algorithm for every NP problem. Here I change his view slightly: if human beings had an unlimited non-deterministic Turing machine, the wonderful world he describes could indeed appear. What does it mean to have an unlimited non-deterministic Turing machine? It means that you have countless workers who work for you on your own terms, without overlapping, along different branches. Imagine that you had countless mathematicians tackling a math problem in parallel along all possible directions (branches); what math problem would not be solved quickly? However, NP = P is not equivalent to having unlimited non-deterministic Turing machines. Logically, it is impossible for human beings to create unlimited non-deterministic Turing machines.
Hilbert, a great mathematician of the twentieth century, had a famous saying: we must know; we will know. It can be argued that Hilbert essentially believed that NP equals P. Many mathematical problems in human history, including Hilbert's famous 23 problems, are constantly being solved. Isn't that a confirmation that NP equals P?
From a heuristic point of view, any NPC problem can be reduced to any other NPC problem in polynomial time. That is to say, the "distance" between any two NPC problems is polynomial. This fact strongly suggests that NP problems have a unified law and difficulty of solution, and that this difficulty should be of polynomial order of magnitude. The difference in an attribute value between any group of individuals in the objective world is usually of the same order of magnitude as the absolute value of the attribute for an individual. For example, an adult weighs on the order of 100 pounds, and the difference between a very fat man and a very thin man is also on the order of 100 pounds. Similarly, the weight of an ant is on the order of a gram, and the difference between a big ant and a small ant is also on the order of a gram. And so on. Of course, these are not strictly proven conclusions.
Anyway, I have studied this algorithm for many years and I am very sure I am correct. Remember Évariste Galois? There are a lot of Cauchys and Fouriers in this world, but I believe that I can meet a Joseph Liouville.
Relevant answer
Answer
Dear Gokaran Shukla :
Thank you very much for your answer.
  • asked a question related to Theory of Computation
Question
13 answers
Theory of Computation is a core subject of computer science. I am looking for study material, including presentations, tutorials with exercises, and question papers with guidelines for solving and verifying problems. Thanks.
Relevant answer
Answer
  • asked a question related to Theory of Computation
Question
13 answers
I teach computer programming (in high school) for the purpose of increasing the future impact of the computer science industry. My position is that the computer science curriculum is missing the mark by focusing on computer science instead of building a foundation through teaching programming, so that when high school students gain the math skills needed to excel in CS they will be successful. I teach programming to 150 students a year, with a waiting list, while most colleagues cannot fill one class because of the prerequisite of a strong math foundation. I am interested in your project. Feel free to contact me.
Relevant answer
Answer
Raju Chiluvuri, that is part of my dilemma. I hear that all the time, so those are the students that teachers are recruiting. I am in a high school where, if I only recruited those students, I would have to compete with engineering, aerospace, and advanced math and science classes for students. So I take the other approach, for two reasons. First, I was never very good at math, and I love programming; it fulfills me and gives me joy. The other reason is that students who are not labeled as being "good" at math and science have more fight and determination in them. They don't give up if they don't get it at first. The students who graduate after several years in my program are foundationally sound enough to pick up almost any language, and they report back to me their success in college. So maybe I should frame the question a bit differently and add: should we teach a few students computer science, or many students to program? What would college professors rather see in their seats: more students with a solid foundation, or a few students who already excel in programming before they reach college?
  • asked a question related to Theory of Computation
Question
17 answers
I have just begun reading background theory on computational chemistry, and I am finding it really difficult to understand Density Functional Theory. Are there any suggestions or references to help me understand it?
Relevant answer
Answer
Try reading the following book; it explains things in an easy and smooth way:
Computational Materials Science: An Introduction, Second Edition, by June Gunn Lee
  • asked a question related to Theory of Computation
Question
3 answers
I want to work on game theory, so I would like to know about recent work on this topic.
Relevant answer
Answer
Dear Saifullah Khan,
One of the most popular and entertaining topics related to game theory is John Conway's Game of Life, which is based on a cellular automaton.
Best regards
  • asked a question related to Theory of Computation
Question
4 answers
I've been reading the paper by J. R. Cheeseman, M. J. Frisch, “Basis Set Dependence of Vibrational Raman and Raman Optical Activity Intensities”, J. Chem. Theory and Comput., 7, (2011), 3323-3334. Moreover, as reported in the gaussian09 manual: "Raman and ROA intensities can be calculated separately from calculation of the force constants and normal modes, to facilitate using a larger basis for these properties as recommended."
I'm performing a frequency calculation with B3LYP/maug-cc-pVDZ and I want to perform a subsequent Raman computation with B3LYP/aug-cc-pVTZ using the previous checkpoint file. I've been tinkering with the keyword Polar=Raman but the job fails, complaining about missing input.
Is there anybody who can provide any suggestion and possibly a route card example to solve this problem?
Relevant answer
Answer
Hi,
I hope you got an answer; otherwise, just use the following procedure.
The keyword for running a Raman calculation at a different wavelength in Gaussian 09 is
CPHF=RdFreq, which you add in the Route section. Then, leave one blank line after the molecular specification and specify the desired frequency in Hartrees, or in nm if you give it as a wavelength.
For example,
opt+freq CPHF=RdFreq
.
.
Molecular specification
580 nm
Thank you
  • asked a question related to Theory of Computation
Question
7 answers
Recursion theorem (paraphrasing Sipser, Introduction to the Theory of Computation):
Let T be a program that computes a function t: N x N --> N. There is a program R that computes a function r: N --> N, where for every w,
    t(<R>, w) = r(w)
The proof is by construction. The program R thus constructed computes the same thing as program T with one input fixed at <R>. But what if the right side does not compute anything? In that event we can reason as follows:
    T(<R>, w) halts <--> R(w) halts
and by modus tollens
    T(<R>, w) does not halt <--> R(w) does not halt
In particular we have
    R(w) halts --> T(<R>, w) halts
    T(<R>, w) does not halt --> R(w) does not halt
The transposition is based on the table below:
    T(<R>, w) halts    R(w) halts
          T                T
          F                F
But what if the value in the right column is unknown and unknowable?
    T(<R>, w) halts    R(w) halts
          T                T
          ?            unknowable
          F                F
We know that there is no universal halting decider, so the value in the right column may be unknowable to a given decider. So what if the decider concludes that T(<R>,w) does not halt? Is there anything wrong with the table below?
    T(<R>, w) halts    R(w) halts
          T                T
          F            unknowable
          F                F
In particular is there any compelling reason that if the value in the left column is F that the value in right column must also be F, i.e. that it cannot remain unknowable? It seems to me that if T(<R>, w) does not halt then it does not imply anything.
Is it possible for a decider to say [for some T] that T(<R>, w) does NOT halt yet remain agnostic about R(w) halting??
Relevant answer
Answer
To Peter Breuer:
* Given
    ~(∃x)(∃z)(Prf(x,z) & Diag(y,z))                       (U)
with one free variable y. Let the constant k be the Gödel number of U. We substitute k for the free variable y in U. We obtain
    ~(∃x)(∃z)(Prf(x,z) & Diag(k,z))                      (U’)
 
As a result of this construction Diag(k,z) is satisfied only by the Gödel number of U’. Instead of (U’) I write
 
    ~(∃x)(∃z)(Prf(x,z) & This(z))                          (G)
* I have proposed a semantics such that the following equivalence does NOT hold
    ~(Ex)Prf(x, <#G#>)  <=/=>  ~(∃x)(∃z)(Prf(x,z) & This(z))
 
It is based on the same principle as Gaifman’s solution of the liar paradox. http://www.columbia.edu/~hg17/gaifman6.pdf Given
 
Line 1: The sentence on line 1 is not true            (~T & ~F)
Line 2: The sentence on line 1 is not true            (T)
 
You cannot simply substitute one for the other. An even better example is
 
(1) This sentence is not true                                   (~T & ~F)
(2) “This sentence is not true” is not true             (T)
 
You cannot substitute (1) for (2).
 
* My question is not if a decider CAN reason that way but if he MUST reason that way. Semantics is relevant only to the extent that the decider should be sound, i.e. it should not say that something halts when it does not and vice versa. Beyond this we should not make any assumptions about semantics. So the question again is, is any decider compelled to say that R(w) does NOT halt just because it has said that T(<R>,w) does NOT halt? (We assume that T(<R>, w) actually does not halt.)
  • asked a question related to Theory of Computation
Question
7 answers
We know there is an elementary cellular automata (ECA) with 2 states (Rule 110) that is universal, i.e. Turing-complete.  One-way cellular automata (OCA's) are a subcategory of ECA's where the next state only depends on the state of the current cell and one neighbor.  I believe that one can make a universal OCA with 4 states pretty easily by simulating two cells of Rule 110 with each cell of the OCA.  Does anyone know if it can be done with only three states?  Thanks!
Relevant answer
Answer
You might find this very interesting! I think it might be close to what you have been asking for. http://www.wolframscience.com/prizes/tm23/solved.html
  • asked a question related to Theory of Computation
Question
6 answers
I would like to know where one can apply the nearest neighbor algorithm, e.g. in
image processing, robotics, DNA sequence analysis, etc.
Relevant answer
Answer
Nearest neighbor algorithms can also be used in image processing. The method determines the nearest neighboring pixels, for example based on the intensity values of the pixels.
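A minimal 1-nearest-neighbour sketch on pixel intensities (purely illustrative toy data, not a reference implementation):

    import numpy as np

    def nn_classify(pixel, labelled_intensities, labels):
        """Assign `pixel` the label of the labelled pixel with the closest intensity."""
        dists = np.abs(labelled_intensities - pixel)
        return labels[np.argmin(dists)]

    # toy training data: intensities with class labels (0 = background, 1 = object)
    train_intensity = np.array([12, 20, 200, 230])
    train_label = np.array([0, 0, 1, 1])

    print(nn_classify(180, train_intensity, train_label))  # -> 1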
  • asked a question related to Theory of Computation
Question
4 answers
I have implemented an algorithm for NFAs that takes the adjacency matrix as input, but I want to represent the automaton by a structure instead.
Relevant answer
Answer
JFLAP
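One way to hold an NFA "by structure" rather than as an adjacency matrix is a mapping from (state, symbol) to a set of successor states; here is a minimal sketch (state and symbol names are illustrative):

    from itertools import chain

    class NFA:
        def __init__(self, transitions, start, accepting):
            self.transitions = transitions   # dict: (state, symbol) -> set of states
            self.start = start
            self.accepting = accepting

        def accepts(self, word):
            current = {self.start}
            for symbol in word:
                current = set(chain.from_iterable(
                    self.transitions.get((q, symbol), set()) for q in current))
            return bool(current & self.accepting)

    # NFA for binary strings ending in "01"
    nfa = NFA(
        transitions={("q0", "0"): {"q0", "q1"},
                     ("q0", "1"): {"q0"},
                     ("q1", "1"): {"q2"}},
        start="q0",
        accepting={"q2"},
    )
    print(nfa.accepts("1101"))  # True
    print(nfa.accepts("110"))   # False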
  • asked a question related to Theory of Computation
Question
31 answers
My algorithm and .exe download for Hamiltonian cycles (paths); I need help with the English expression of the paper. The paper is on arXiv.
I took 10 years to study this problem and I am very sure I got the correct result. My English is not good, and the paper style is not good either, so please help. You can download my .exe file (written in VC++) to calculate Hamiltonian paths; the help menu tells you how to use it. Can you help me?
Relevant answer
Answer
Algorithms are never recognised without their details being exposed. There is no *secret trick* in software; it is pure determinism. So, you can either explain in detail exactly how the algorithm works (no hand-waving, like *he proves his algorithm in three ways*, as nobody is going to buy such vacuous argumentation), or you have nothing to offer.
  • asked a question related to Theory of Computation
Question
24 answers
For example, assume two entities P and Q, where we are using 'proof-by-contradiction' to validate P (by using Q).
Something like:
  • if P then Q;  
  • Not Q. Hence not P
IMO, one can only use such a scenario when P and Q are existentially independent of each other.
In other words, can one use proof-by-contradiction in cases where Q's existence is dependent on P's validity?
For example, Q exists if and only if P exists.
In such case does the proof-by-contradiction still hold valid?
Relevant answer
Answer
You should enlarge your question so that you can compare the proposition at hand with all the propositions, axiomatic truths and theorems that the system's memory consists of, checking whether they clash when processed or taken together. If the memory already contained only "truths", each "new" proposition could be selected or rejected by this means. But by doing this you probably shut yourself off from opportunities for radically "new" propositions that do not simply or immediately fit with the older ones... It is up to you whether to work in a backward-looking or a forward-looking, cyber-environmental mood, with has-been or with promising concepts.
  • asked a question related to Theory of Computation
Question
6 answers
(OMPs in road networks, not Euclidean space.)
  • asked a question related to Theory of Computation
Question
2 answers
Data stream classification
Relevant answer
Answer
Hi, 
You also have SAMOA https://samoa.incubator.apache.org for scalable stream machine learning on Apache Storm and other engines. I think some former collaborators on MOA are part of SAMOA. Another option is Spark's MLlib, which is starting to include some streaming ML algorithms. For now there is only linear regression https://spark.apache.org/docs/latest/mllib-linear-methods.html#streaming-linear-regression, and for unsupervised learning you have streaming k-means https://spark.apache.org/docs/latest/mllib-clustering.html#streaming-k-means, but MLlib is a very active project, so probably more models will appear in the future.
Greetings, 
Juan Rodriguez
  • asked a question related to Theory of Computation
Question
11 answers
Mass scaling is a way of reducing the computational cost of an analysis. From tutorials, I learnt that mass scaling affects the critical time increment, but I couldn't find any source explaining how it reduces the computational cost.
We can also increase the load rate in order to reduce the computational cost, but if the material is rate-dependent, the load rate affects the results. Does mass scaling have negative effects like the load rate does?
If you can suggest any source which explains how the computational cost is reduced, and if you can explain possible negative effects of mass scaling, I would appreciate it.
Relevant answer
Answer
Read the Abaqus documentation: Abaqus Analysis User's Manual, Section 6.3.3, Explicit dynamic analysis.
The time increment in an explicit analysis is calculated from a characteristic dimension of the smallest element and the wave speed. Sometimes it is possible to scale the mass of some small elements without altering the results; the total change of mass is reported in the status file, and if that is less than e.g. 1 percent and the smallest elements are not in critical regions, you should be fine!
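To make the link between mass scaling and cost concrete, a back-of-the-envelope sketch (not from the Abaqus manual; material values are illustrative): the stable increment is roughly the element length divided by the dilatational wave speed sqrt(E/rho), so scaling the density of the critical elements by a factor f scales the increment by sqrt(f) and reduces the number of increments accordingly.

    import math

    def stable_increment(L_e, E, rho):
        """Rough estimate of the explicit stable time increment:
        element characteristic length / dilatational wave speed."""
        c_d = math.sqrt(E / rho)
        return L_e / c_d

    E, rho, L_e = 210e9, 7850.0, 1e-3                    # steel-like values, 1 mm element
    dt0 = stable_increment(L_e, E, rho)
    dt_scaled = stable_increment(L_e, E, rho * 100.0)    # mass scaling factor 100
    print(dt0, dt_scaled, dt_scaled / dt0)               # ratio = sqrt(100) = 10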
  • asked a question related to Theory of Computation
Question
23 answers
My objective is to accumulate and demonstrate irrefutable physical evidence to prove that the existing definitions for software components and CBSE/CBSD are fundamentally flawed. Today no computer science textbook introducing software components and CBSD (component-based design for software products) presents the assumptions (i.e. first principles) that resulted in such flawed definitions.
In real science, anything not having irrefutable proof is an assumption. What are the undocumented scientific assumptions (or first principles) at the root of computer science that resulted in fundamentally flawed definitions for so-called software components and CBD (component-based design) for software products? Each of the definitions for each kind of so-called software component has no basis in reality and is in clear contradiction with the facts we know about the physical functional components used in the CBD of physical products. What are the undocumented assumptions that forced researchers to define the properties of software components without giving any consideration to the reality and facts we all know about physical functional components and the CBD of physical products?
Except for computer science or software engineering textbooks introducing software components and CBSD, I believe the first chapter of a textbook for any other scientific discipline discusses the first principles at the root of that discipline. Each of the definitions and concepts of the discipline is derived by relying on the first principles and observations (e.g. including empirical results) and by applying sound rational reasoning. For example, any textbook on basic science for school kids starts by teaching that "Copernicus discovered that the Sun is at the center". This is one of the first principles at the root of our scientific knowledge, so if it were wrong, a large portion of our scientific knowledge would end up invalid.
I asked countless experts why we need a different and new description (i.e. definitions and/or list of properties) for software components and CBSD, where the new description, properties and observations are in clear contradiction with the facts, concepts and observations we know about physical functional components and the CBD of large physical products (having at least a dozen physical functional components). I was given many excuses/answers, such as: software is different/unique, or it is impossible to invent software components equivalent to physical functional components.
All such excuses are mere undocumented assumptions. It is impossible to find any evidence that anyone ever validated these assumptions. Such assumptions must be documented, but no textbook or paper on software components even mentions the baseless assumptions relied on to conclude that each kind of useful part is a kind of software component (for example, that reusable software parts are a kind of software component). CBD for software is then defined as using such fake components. Using highly reusable ingredient parts (e.g. plastic, steel, cement, alloy or silicon in wafers) is not CBD. If anyone asks 10 different experts for a definition/description of software components, he gets 10 different answers (without any basis in the reality we know about physical components). Only God has more mysterious descriptions, as if no one alive has ever seen physical functional components.
The existing descriptions and definitions for so-called CBSD and so-called software components were invented out of thin air (based on wishful thinking) by relying on such undocumented myths. Today many experts defend the definitions by treating such undocumented myths as inalienable truths of nature, not much differently from how researchers defended epicycles by relying on the assumption "the Earth is static" up until 500 years ago. Also, most of the concepts of CBSD and software components created during the past 50 years were derived by relying on these fundamentally flawed definitions of software components and CBSD (where the definitions, properties and descriptions are rooted in undocumented and unsubstantiated assumptions).
Is there any proof that it is impossible to invent real software components equivalent to physical functional components for achieving real CBSD (CBD for software products), where real CBSD is equivalent to the CBD of large physical products (having at least a dozen physical functional components)? There exists no proof that such assumptions are accurate, so it is wrong to rely on them. It is a fundamental error if such assumptions (i.e. first principles) are not documented.
I strongly believe such assumptions must be documented in the first chapters of each of the respective scientific disciplines, because that keeps the assumptions on the radar of our collective consciousness and compels future researchers to validate them (i.e. the first principles), for example when technology makes sufficient progress for validating them.
I am not saying it was wrong to make such assumptions/definitions for software components 50 years ago. But it is a huge error not to document the assumptions on which such different and new definitions relied (by ignoring reality and known facts). Such assumptions may have been acceptable and true 50 years ago (when computer science and software engineering were in their infancy and assembly language and FORTRAN were leading-edge languages), but are they still valid? If each of the first principles (i.e. assumptions) is a proven fact, who proved it and where can I find the proof? Such information must be presented in the first chapters.
In real science, anything not having irrefutable proof is an assumption. Are such undocumented, unsubstantiated assumptions facts? Don't the computer science textbooks on software components need to document proof for such assumptions before relying on them to define the nature and properties of software components? All the definitions and concepts for software components and CBSD could be wrong if the undocumented and unsubstantiated assumptions turn out to contain huge errors.
My objective is to provide physical evidence (i) to prove that it is possible to discover accurate descriptions of physical functional components and the CBD of large physical products (having at least a dozen physical functional components), and (ii) to prove that it is not hard to invent real software components (that satisfy the accurate description of physical functional components) for achieving real CBSD (that satisfies the accurate description of the CBD of physical products), once the accurate descriptions are discovered.
It is very hard to expose an error at the root of any deeply entrenched paradigm such as CBSE/CBSD (evolving for 50 years) or the geocentric paradigm (which evolved for 1000 years). For example, the assumption "the Earth is static" was considered an inalienable truth (not only of nature but also of God/the Bible) for thousands of years, but ended up being a flaw and sidetracked the research efforts of countless researchers of basic sciences into a scientific crisis. Now we know that no meaningful scientific progress would have been possible if that error had not been exposed. The only possible way to expose such an error is to show physical evidence, even if most experts refuse to see it, by finding a few experts who are willing to look at the physical evidence with an open mind.
I have a lot of physical evidence and am now in the process of building a team of engineers and the necessary tools for building software applications by assembling real software components for achieving real CBSD (e.g. for achieving a CBD-structure http://real-software-components.com/CBD/CBD-structure.html by using the CBD-process http://real-software-components.com/CBD/CBD-process.html). When our tools and team are ready, we should be able to build any GUI application by assembling real software components.
In real science, anything not having irrefutable proof is an assumption. Any real scientific discipline must document each of the assumptions (i.e. first principles) at its root before relying on them to derive concepts, definitions and observations (which can be perceived as accurate only if the assumptions are proven to be true): https://www.researchgate.net/publication/273897031_In_real_science_anything_not_having_proof_is_an_assumption_and_such_assumptions_must_be_documented_before_relying_on_them_to_create_definitionsconcepts
I tried to write papers and give presentations to educate people about the error, but none of them worked. I learned the hard way that this kind of complex paradigm shift can't happen in just a couple of hours of presentation or by reading 15- to 20-page papers. The only possible way left for me to expose the flawed first principles at the root of a deeply entrenched paradigm is to find experts willing to see physical evidence and show them that evidence: https://www.researchgate.net/publication/273897524_What_kind_of_physical_evidence_is_needed__How_can_I_provide_such_physical_evidence_to_expose_undocumented_and_flawed_assumptions_at_the_root_of_definitions_for_CBSDcomponents
So I am planning to work with willing customers to build their applications, which gives us a few weeks or even a couple of months to work with them to build their software by identifying "self-contained features and functionality" that can be designed as replaceable components to achieve real CBSD.
How can I find experts or companies willing to work with us to see the physical evidence, for example by allowing us to work with them to implement their applications as a CBD-structure? What kind of physical evidence would be compelling to anyone willing to give us a chance (at no cost to them, since we can work for free to provide compelling physical evidence)? I have failed so many times in this complex effort, so I am not sure what could work. Does this work?
Best Regards,
Raju
Relevant answer
Answer
Raju,
"When any scientific discipline was in infancy, researchers are forced to make assumptions."
We are always making assumptions, infancy or not. You've mentioned the geocentric model of the universe. It is as valid an assumption as the non-geocentric model of the universe. Although I won't try it (I'm guessing it would take a while), it is probably possible to base our whole physics on the geocentric model with no significant loss of accuracy. It's numbers, and you can "engineer" your way to whatever assumption you want to believe. The reason the non-geocentric model is accepted is simply that it makes more intuitive sense. So common sense is the great decider.
Science is not the art of truth; it's the art of the accurate. Better science does not mean you are closer to the truth (you would have to know the truth to be able to claim that!); it just means you are able to model a phenomenon more accurately. Nowadays modern societies seem to elevate science almost to religious status (we keep assuming dogmas, only this time with university degrees). Personally I see it as a fork: I know it, I use it, and then I wash it for the next meal.
Beyond the philosophical ideas, I would suggest you define a few points open to debate. It's very complicated to debate such a broad subject; it's too vague. I've read your comments and I still can't put my finger on what exactly you are trying to reform (in good part maybe because I don't have enough skill in some of the areas you're touching).
  • asked a question related to Theory of Computation
Question
2 answers
Is the same problem NP-complete for strong edge colored graphs and proper edge colored graphs?
Definitions:
1) An edge coloring is 'proper' if each pair of adjacent edges have different colors.
2) An edge coloring is 'vertex distinguishing' if no two vertices have the same set of colors of edges incident with them.
3) An edge coloring is 'strong' if it is both proper and vertex distinguishing.
Relevant answer
Answer
There is an obvious reduction from 3DM to the problem of finding a maximum heterochromatic matching in an edge-colored graph (represent the third component of each triplet by colors).
Therefore, the problem of finding a maximum heterochromatic matching in an edge-colored graph is NP-complete.
Moreover, 3DM is NP-complete even when no two triplets intersect in more than one element. When we start from instances with this property, the reduction mentioned above yields only properly edge-colored graphs. Therefore, the same problem remains NP-complete for properly edge-colored bipartite graphs.
Assessing NP-completeness in the case of strong edge-colorings is more technical but  appears now to be downhill.
  • asked a question related to Theory of Computation
Question
44 answers
Some books say that a function can be Turing-computed if and only if it is a mu-recursive function [1]. So the Ackermann function should be mu-recursive. Here, mu-recursive functions are defined from primitive recursive functions by composition and recursion schemes.
Meanwhile, it has been proved that the Ackermann function is not a primitive recursive function, so it should not be mu-recursive, which contradicts the above proposition.
Then, does the Ackermann function belong to the mu-recursive functions after all?
-----------------------------
[1] Thomas A. Sudkamp, Languages and Machines: An Introduction to the Theory of Computer Science, Third Edition, Pearson Education, Inc., 2006, pp. 415-416.
Relevant answer
Answer
Primitive recursive functions are the "blue" functions in the book "Gödel, Escher, Bach". They correspond to functions that can be computed by programs without while-loops (for-loops are allowed, but NO WHILE). It can be proven that the Ackermann function is not a primitive recursive function.
If you add while-loops, you get the mu-recursive functions. They correspond intuitively to the computable functions and coincide with the functions that can be computed by Turing machines. The Ackermann function is a mu-recursive function.
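A minimal sketch of the Ackermann(-Peter) function, which is total and computable (hence mu-recursive) even though it is not primitive recursive:

    import sys
    sys.setrecursionlimit(100000)

    def ackermann(m, n):
        """Total and computable, but grows too fast to be primitive recursive."""
        if m == 0:
            return n + 1
        if n == 0:
            return ackermann(m - 1, 1)
        return ackermann(m - 1, ackermann(m, n - 1))

    print(ackermann(2, 3))  # 9
    print(ackermann(3, 3))  # 61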
  • asked a question related to Theory of Computation
Question
32 answers
The philosophy and causality can be denoted as:
Data Computers (Turing & von Neumann) vs. Knowledge Computers (ICIC, http://www.ucalgary.ca/icic/).
A fundamental observation is that humans do not reason at the binary level, because it is too distant and indirect from real-world entities/problems and their mental neurophysiological representation. It is also too inefficient for natural inference, because human knowledge and thought are not numbers but rather hyperstructures of concepts and behavioral processes.
Relevant answer
Answer
I think the field is still in a self-defining state, but essentially Cognitive Informatics is the informatics of cognitive systems.
  • asked a question related to Theory of Computation
Question
9 answers
Considering finite automata as sets of states with a well-defined transition function, how would one formally define the element 'state' in an automaton?
Relevant answer
Answer
It simply is a member of a finite set, nothing more. For simplicity, the natural numbers including 0 (up to n-1) are often used.
Informally, a state is a means of conveying information through time. In a synchronous product automaton there are two or more distinct sub-automata, each having its own state, say the first one out of set S1, the second automaton out of S2, and so on. The state of the complete product automaton can then be described as a tuple (s1, s2, ..., sn) with s1 a member of S1, s2 a member of S2, etc.
In analog computers, states are continuous. Continuous (real, complex) variables describe the content of integrators, e.g. the charge of capacitors.
Regards,
Joachim
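A small sketch of the synchronous product construction described in the answer above (the automata and names are illustrative):

    from itertools import product

    def product_automaton(S1, S2, delta1, delta2, alphabet):
        """States of the product are tuples (s1, s2); both components step together."""
        states = set(product(S1, S2))
        delta = {((s1, s2), a): (delta1[(s1, a)], delta2[(s2, a)])
                 for (s1, s2) in states for a in alphabet}
        return states, delta

    # two toy two-state automata over the alphabet {a}
    S1, S2 = {"p0", "p1"}, {"q0", "q1"}
    d1 = {("p0", "a"): "p1", ("p1", "a"): "p0"}
    d2 = {("q0", "a"): "q1", ("q1", "a"): "q0"}
    states, delta = product_automaton(S1, S2, d1, d2, {"a"})
    print(delta[(("p0", "q0"), "a")])  # ('p1', 'q1')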
  • asked a question related to Theory of Computation
Question
4 answers
Right now, we are working on modelling arithmetic operations (+, -, /, *, !) towards simulating series as a means of computing values of real functions, in order to lay the groundwork for more serious research.
Relevant answer
Answer
The first question I have is: have you had any success doing that? Reals can do some "funny things" to TMs when values are irrational. Is this symbolic computation (where you store the numbers as symbols, e.g. PI), or numeric (decimals)? I mean, we want the TM to eventually terminate at some point in order to properly model some kind of algorithmic procedure that is correct.
  • asked a question related to Theory of Computation
Question
7 answers
I know what an NP-complete problem is and the procedure to prove it, but why do we have to prove that a problem is NP-complete or NP-hard? What was the need to define this whole new class?
Relevant answer
Answer
Researchers believe NP-complete (decision) problems and NP-hard (optimization) problems are intractable. If you can show your problem of interest is NP-hard, then you have alluded to the computational difficulty of solving that problem. Problems such as SAT are widely believed to be very difficult to solve in polynomial time with respect to the input size.
Why is proving a problem is NP-hard or a decision problem is NP-complete important? There are many reasons (here are 3):
1. If you find a polynomial-time algorithm for a given NP-hard problem, then any problem that you derived your result from can also be solved in polynomial time. For example, if I found a polynomial-time algorithm for the makespan problem on identical parallel machines (with number of machines m=2), then I could also solve the partition problem in polynomial time. These reductions can be traced right down to the root. Why? Because in polynomial time I can turn any partition problem instance into a scheduling instance. That's why these reductions are useful (among many other reasons); it means "it is at least as hard". It is worth noting that NP-hard isn't the same as NP-complete. There are problems that are NP-hard but not NP-complete. For example, the Halting Problem is NP-hard but not NP-complete (because it is not in NP). There are other interesting implications you can draw from polynomial-time reductions.
2. It gives us the ability to categorize problems. We normally don't talk about algorithms then problems, we talk about algorithms with respect to problems. This is just another class among many others. This is a big part of Computational Complexity Theory.
3. It gives motivation to study approximation algorithms. Solving for exact optimal solutions can be very costly; approximation algorithms run in polynomial time and produce an approximate feasible solution. This also introduces the idea of hardness of approximation, a whole other realm of the P vs. NP problem, where the existence of certain types of approximation algorithms could imply P=NP.
Hope this helps!
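For concreteness, the PARTITION-to-makespan reduction mentioned in point 1 can be written out in a few lines (a standard textbook reduction, sketched here in LaTeX):

    % PARTITION: given $a_1,\dots,a_n \in \mathbb{N}$, is there $S \subseteq \{1,\dots,n\}$
    % with $\sum_{i \in S} a_i = \tfrac{1}{2}\sum_{i=1}^{n} a_i$?
    %
    % Reduction to makespan minimization on $m = 2$ identical machines:
    % create $n$ jobs with processing times $p_i = a_i$ and ask whether a schedule
    % with makespan $C_{\max} \le \tfrac{1}{2}\sum_{i} a_i$ exists.
    % Such a schedule exists iff the PARTITION instance is a yes-instance,
    % and the transformation is clearly polynomial in the input size.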
  • asked a question related to Theory of Computation
Question
18 answers
If so, what is its complexity?
Relevant answer
Answer
The answer: there is (almost certainly) no efficient algorithm. The reason is from complexity theory: most questions about regular expressions are at least PSPACE-hard (e.g., does a regular expression generate (or does an NFA accept) all strings over its alphabet). BUT: such questions are in PTIME for languages if presented by DFAs.
The point is that deciding such questions for NFAs (e.g. universality or equivalence) is PSPACE-hard, as proven by Meyer and Stockmeyer (and others) in the 1970s, while the NFA-to-DFA transformation itself can blow up exponentially.
So, unless by a miracle PSPACE = PTIME, there is a difference. So I am not surprised that proposed algorithms only reached halfway, there are deep theoretical reasons (requiring a major breakthrough) that one couldn't reach all the way.
  • asked a question related to Theory of Computation
Question
13 answers
This is the equation:
f(n) = [ (4 ^ ( 4 ^ n)) * (3 ^ ( 4 ^ n)) * (2 ^ (4 ^ n)) * (1 ^ (4 ^ n)) ] / 4
Relevant answer
Answer
To continue the pattern:
?..AAA
3.....AAC
4........ACT
4..........CTA
4............TAA
2..............AAG
4.................AGA
4...................GAA
1......................AAT
At this point, there are no more tiles that have the prefix AA.
AAC was one of three available choices at the time it was chosen.
AAG was one of two available choices at the time it was chosen.
AAT was the only choice available when it was chosen.
A similar pattern occurs for each of the four tiles that have a similar two-letter prefix combination; there are a total of 16 such prefixes.
If you generate this pattern for one symbol tiles, and also for two symbol tiles, you get a similar distribution, and can easily see the equation given above.
All you need do is to count the number of 4s, the number of 3s, the number of 2s and the number of 1s. The equation is thereby made easily apparent.
Thanks again for providing the simplification.
  • asked a question related to Theory of Computation
Question
10 answers
What is the recursive function to decide if a number (from domain R, the set of real numbers) belongs to (or has an equivalent value in) the set of integers?
Starting from 1 and incrementing until you hit the number is not really what I am looking for, since it does not reveal how exactly the next incremented number is decided (or assumed) to be part of the set of integers.
Explicitly stating that 1, 2, 3, and so on all belong to N is not really a characteristic function - is there any mathematical characteristic function that does not depend on such an enumeration, but rather on some inherent property that differentiates the elements of N from those of R?
Relevant answer
Answer
@Golpalakrishna: Independence of representation is too strong a restriction. If the representation allows the computation (decidability) of the fractional part, then you can use that. If not, I don't see any other way to decide whether a number is an integer, in general, than the one you don't like.
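A minimal sketch of the "use the fractional part" idea for exactly-represented inputs (this sidesteps the deeper point that for an arbitrary real-number representation the question may be undecidable):

    from fractions import Fraction

    def is_integer(x):
        """x is an integer iff its fractional part is zero; exact for ints,
        Fractions, and (binary) floats, which are all exactly representable."""
        return Fraction(x).denominator == 1

    print(is_integer(Fraction(6, 3)))  # True  (6/3 == 2)
    print(is_integer(2.5))             # False
    print(is_integer(7))               # True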
  • asked a question related to Theory of Computation
Question
5 answers
What is the Indicator function for set membership of integers?
I have a black-box function (whose definition is not known) producing some numbers (precision is not known).
How can I test whether the generated number is an integer or not?
I am looking for computable definition of "integerness".
Ceil, Truncate, Floor, etc. do not really fit the bill.
Relevant answer
Answer
Interesting. Could you please give a mathematical definition of such a ceil and floor?
I would like to see how they are able to work without worrying about ranges,
especially the "wipe off" part and how it takes care of arbitrary precision.
Thank you.
  • asked a question related to Theory of Computation
Question
39 answers
What is the method to decide if a given number (integer) is going to have an integer square-root or not (without actually computing the square-root)?
For example, assume I have a method M that does this. Then it should behave like below:
M(16) should return true (since sq-root(16)==4 which is integer)
M(17) should return false (since sq-root(17) is not an integer)
and M should not actually compute the square-root to decide this.
Any literature or info of this?
Relevant answer
Answer
There are 3 main solutions to this problem.
1. Binary search is simple to implement (as already mentioned). Given a pair of values (L,U) such that L^2 < n < U^2 (where n is the number to be tested), we compute ((L+U)/2)^2 and deduce that the solution is contained in an interval of half the size. This is polynomial-time (unlike some of the other solutions already stated).
2. Optimal complexity is obtained with Newton iteration. See Chapter 9 of the book by von zur Gathen and Gerhard. In particular, Section 9.5 discusses a 3-adic Newton iteration to compute integer square roots. This leads to a running time of O(M(n)) = O(log(n) loglog(n) logloglog(n)), the same as integer multiplication. But this is harder to implement.
3. A Monte-Carlo method is to reduce n modulo small primes p and compute the Legendre symbol (n/p) (which will always be 0 or 1 if n is an integer square). One would expect to test around log(n) primes to have good confidence that n is a square. I do not know a reference for a theoretical analysis of this algorithm.
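A minimal sketch of option 1 (binary search) that decides whether n is a perfect square using only integer arithmetic:

    def is_perfect_square(n):
        """Binary search for an integer r with r*r == n; O(log n) iterations."""
        if n < 0:
            return False
        lo, hi = 0, n + 1
        while lo < hi:
            mid = (lo + hi) // 2
            sq = mid * mid
            if sq == n:
                return True
            if sq < n:
                lo = mid + 1
            else:
                hi = mid
        return False

    print(is_perfect_square(16))  # True
    print(is_perfect_square(17))  # False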
  • asked a question related to Theory of Computation
Question
6 answers
I am interested in the study of continuous systems. Can anyone provide me with materials/links that deal extensively with the subject?
Relevant answer
Answer
As Peter Breuer indicated, there are quite different types of continuous models:
(1) ordinary differential equation models, where the system is just a function of time and the equations are of the form der(x) = f(x, y, ...). [der(x) is the time derivative, called dot in VHDL-AMS.]
They can be solved by a simple integrator, such as Simulink (which can also handle discrete events, i.e. hybrid systems).
(2) differential-algebraic equations. These models are used e.g. by Modelica.
(3) partial differential equations, used e.g. in fluid dynamics. They are usually solved by finite elements.
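As a tiny illustration of case (1) (a hand-rolled sketch, not how Simulink works internally), an explicit Euler integrator for der(x) = f(x):

    def euler(f, x0, dt, n_steps):
        """Explicit Euler integration of dx/dt = f(x)."""
        x, trajectory = x0, [x0]
        for _ in range(n_steps):
            x = x + dt * f(x)
            trajectory.append(x)
        return trajectory

    # dx/dt = -x with x(0) = 1; exact solution x(t) = exp(-t)
    traj = euler(lambda x: -x, 1.0, 0.01, 100)
    print(traj[-1])  # ~0.366, close to exp(-1)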
  • asked a question related to Theory of Computation
Question
5 answers
.
Relevant answer
Answer
K(x) represents the length of the shortest program that prints the string x on a universal Turing machine (say, a computer). You can compare it to the effort needed to memorize some text. A text in which each letter was randomly generated is hard to memorize and has large complexity. On the other hand, memorizing a full page containing only blanks is easy, and such a page can be printed by a short program. Of course, a randomly generated text has no useful information; the notion of useful information is subjective. (Although some attempts at a mathematical definition of an objective notion exist too; it is called the algorithmic minimal sufficient statistic.)
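For reference, the standard definition behind this description, with respect to a fixed universal Turing machine U, is

    K_U(x) = \min \{\, |p| : U(p) = x \,\}

and by the invariance theorem K_U(x) <= K_{U'}(x) + c for any other universal machine U', so the choice of U matters only up to an additive constant.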
  • asked a question related to Theory of Computation
Question
68 answers
Basic causality:
- Numbers (for data, conventional) vs. Hyperstructures (for knowledge, contemporary)
- The domain of human reasoning is hyperstructures (H) of concepts and behavioral processes rather than numbers (N, R).
- Suitable mathematical means known as denotational mathematics for facilitating rigorous inference of knowledge and thought in H are required.
Paradigms of denotational mathematics [Wang, 2002-2013]:
- Semantic algebra, concept algebra, behavioral process algebra (RTPA), system algebra, inference algebra, visual semantic algebra (VSA), granular algebra, … (ICIC, http://www.ucalgary.ca/icic/)
Relevant answer
You are right! It is very funny!
  • asked a question related to Theory of Computation
Question
2 answers
For classes above NP (EXPTIME, EXPSPACE, etc.) we define complete problems in terms of polynomial-time reductions. I can understand that this would be useful if such a class were equal to P, but that is highly unlikely. I think we should use correspondingly stronger reductions for these complexity classes (polynomial-space reductions for EXPTIME, exponential-time reductions for EXPSPACE, etc.). This would probably increase the number of complete problems for each class.
What do you think?
Relevant answer
Answer
In complexity theory, we consider a variety of notions of reduction. However, completeness is only meaningful if you use a class of reductions from a potentially smaller class. Every non-trivial problem in PSPACE is complete for PSPACE under PSPACE reductions: the reduction simply solves the problem in question and produces an instance with the same value. On the other hand, it makes sense to talk about problems complete for EXP under PSPACE reductions, although I do not know of any such problems that are not also known to be complete under P reductions.
  • asked a question related to Theory of Computation
Question
8 answers
Software developers suggested formal software development methods and later found some shortcomings as well. Now it is suggested to go for RAD, Agile, etc., which concentrate on the problem and quickly produce the software with some compromises. The present trend is to develop software suitable for the business as soon as possible. There are pluses and minuses, but what might be the final conclusion?
Relevant answer
Answer
If you mean "formal methods", you might be interested in reading:
IFIP workshop on Balancing Agility and Formalism in Software Engineering (October 2007), LNCS 5082, Springer-Verlag (December 2008).
S. Black, P.P. Boca, J.P. Bowen, J. Gorman, M. Hinchey: Formal versus Agile - Survival of the Fittest? IEEE Computer 12/9, pp. 37-45, 2009.
S. Gruner & B. Rumpe (eds.), Proceedings FM+AM'2010: 2nd International Workshop on Formal Methods and Agile Methods. Lecture Notes in Informatics, Vol. 179, GI Publ., September 2010.
S. Gruner (guest-ed.), Proceedings FM+AM'2009: 1st International Workshop on Formal Methods and Agile Methods. Innovations in Systems and Software Engineering, Vol. 6, Springer Verlag, March 2010.
  • asked a question related to Theory of Computation
Question
22 answers
There seem to be various numbers floating around. kTln2 seems to be one theoretical limit put forward by John von Neumann and Rolf Landauer at different times. CMOS seems to be many orders of magnitude above that. Is there good data or ideas to close the gap between the CMOS energy per operation and the theoretical limit? When does CMOS bottom out?
Relevant answer
Answer
Dear John, this is a very interesting and inspirational question.
(A) If I remember correctly, the lower bound on energy kTln2 (~2.8E-21 J @ 20°C) you mention is associated with the minimum heat generated during a logic operation due to the erasure of one bit of information. In general, heat is generated only in the case of "irreversible" logic operations (e.g. AND, OR). "Reversible" logic operations, however, can dissipate arbitrarily little heat (e.g. the inverter, Toffoli, Fredkin); hence, theoretically it is not necessary to consume energy for well-selected logic operations.
(B) I'm not sure that CMOS is a candidate for reaching the limit above. In CMOS, minimum-energy operation is currently achieved in the sub-threshold regime by optimizing the supply voltage Vdd [S. Hanson et al., Ultralow-voltage, minimum-energy CMOS. IBM J. Res. & Dev., Vol. 50, No. 4/5, July/September 2006]. The minimum-energy point is a trade-off between dynamic and static energy (e.g. the minimum energy of an inverter is ~10^-15 J at 130 nm), and leakage is the main limiting factor for minimizing energy consumption. Hence, in order to reduce leakage, methods improving channel control in the MOS transistor at lower supply voltage are necessary, i.e. transistor geometry and materials.
(C) The delay at the minimum-energy point for CMOS is usually much higher than the minimum achievable delay in the maximum-performance regime; hence, the number of operations per time unit is lower than would otherwise be possible. For processing a constant amount of work we need more gates in the minimum-energy regime than in the maximum-performance regime, and hence we have to consider additional (overhead) energy for information exchange between gates. Is energy per logic operation a representative metric in this case?
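A quick back-of-the-envelope check of the numbers quoted above (the ~fJ CMOS figure is the 130 nm order of magnitude mentioned in the answer):

    import math

    k_B = 1.380649e-23          # Boltzmann constant, J/K
    T = 293.15                  # 20 degrees C in kelvin
    landauer = k_B * T * math.log(2)
    print(landauer)             # ~2.8e-21 J per erased bit

    cmos_min_energy = 1e-15     # ~fJ per operation (130 nm inverter figure from the answer)
    print(cmos_min_energy / landauer)   # CMOS sits roughly 5-6 orders of magnitude above the limit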
  • asked a question related to Theory of Computation
Question
12 answers
Please, what are the areas of research in theory of computation?
Relevant answer
Answer
Allow me to answer the question with a question:
Does the theory of computation include theoretical work in interactive computing?
Allow me to suggest a "yes" answer to my question.
Finally, I would like to point out that, if services such as those provided by web sites are considered computational problems, then we have a class of computational problems that is clearly not Turing computable. (In the field called "Theory of Computation", which is equated with algorithmic computation, the Turing machine is widely considered to be unsurpassed.)
For a counterexample to this assumption, consider the service of counting hits on a web site and displaying the number of hits so far. It is clearly interactive, and the interactions (e.g., as evidenced by the number of hits) clearly depend on what is displayed at the web site, so that the "input" cannot be considered a single data item independent of the outputs displayed at the web site.
Now, this service is not provided by any Turing machine. The reason it cannot be provided is that a TM accepts one (possibly compound) input, then computes a function on the input, then emits output. A service, on the other hand, consists of a sequence of pairs, each consisting of an input and an output.
Therefore, either the theory of computation excludes interactive computation, or else it invites extension by people ready to define models capable of providing interactive services. Since the idea of a theory of computation that excludes interaction is too sad for me to dwell on, I am suggesting that this thread consider taking up the matter of extending the theory of computation with new models.
  • asked a question related to Theory of Computation
Question
49 answers
TMs compute algorithms and are the standard model of computation, often considered complete. Models of concurrency include the pi calculus, Petri nets, and actors. Unlike TMs that execute algorithms, concurrency and sequential interaction produce infinite behaviors and interleave inputs with outputs. What models bridge this gap?
Relevant answer
Answer
If you look at the interleaving model of concurrency, then you can model any concurrent system with a regular Turing machine, with the state set being a vector <q1, q2, ..., qk> if there are k components in the concurrent system, and the transition relation defined with interleaving semantics. If you consider true concurrency as well, you can do it with a little more effort. Multiple input tapes can be used to model the input streams, and multiple output tapes the output streams. The encoding becomes harder when threads can be spawned dynamically, but that can also be done by encoding the states of the new threads on tapes.
However, as has been mentioned, the TM model is usually used to model time and space complexity. So if your goal is to study models of concurrency with respect to their properties of modularity, congruence, refinement relations, and equivalences and preorders as proof methods for spec-to-implementation refinement, then TMs are not the convenient modelling paradigm; hence process algebras, Petri nets and other formalisms were invented to study the nature of concurrency, model composition, and refinement.