Questions related to History and Philosophy of Science
We are looking for volunteer translators who could translate 19th-century German texts into English.
- Do you read 19th-century Fraktur German?
- Your task would be to translate texts of 1,000 to 16,000 words from German to English.
If you are interested, please send me a message.
One of the central themes in the philosophy of formal sciences (or mathematics) is the debate between realism (sometimes misnamed Platonism) and nominalism (also called "anti-realism"), which has different versions.
In my opinion, what is decisive in this regard is the position adopted on the question of whether objects postulated by the theories of the formal sciences (such as the arithmetic of natural numbers) have some mode of existence independent of the language that we humans use to refer to them; that is, independent of linguistic representations and theories. The affirmative answer assumes that things like numbers or the golden ratio are genuine discoveries, while the negative one holds that numbers are not discoveries but human inventions: they are not entities but mere referents of a language whose postulation has been useful for various purposes.
However, I cannot see how an anti-realist or nominalist position can respond to these two realist arguments in the philosophy of mathematics. First, if numbers have no existence independent of language, how can one explain the metaphysical difference, which we call numerical, at a time before humans existed, when at t0 a certain space-time region contained what we call two dinosaurs and then at t1 what we call three dinosaurs? That seems to be a real metaphysical difference in the sense in which we use the word "numerical", and it does not even require human language, which suggests that number, quantity, etc., seem to be included in the very idea of an individual entity.
Secondly, if the so-called golden ratio (also expressed as the golden number and related to the Fibonacci sequence) is a human invention, how can it be explained that this relationship appears in various manifestations of nature, such as the shells of certain mollusks, the florets of sunflowers, waves, the structure of galaxies, the spiral of DNA, etc.? That seems to be a discovery and not an invention, a genuine mathematical discovery. And if it is, it seems something like a universal of which those examples are particular cases, perhaps in a Platonic-like sense, which suggests that mathematical entities express characteristics of the spatio-temporal world. However, this form of mathematical realism does not seem compatible with the version maintaining that the entities mathematical theories talk about exist outside of spacetime. That is to say, if mathematical objects bear to physical and natural objects the relationship that the golden ratio bears to those examples, then it seems that there must be a true geometry and that, ultimately, mathematical entities are not as far outside of spacetime as has been suggested. After all, not everything that exists in spacetime has to be material, as the social sciences well know when they refer to norms, values, or attitudes that are not. (I apologize for using a translator. Thank you.)
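The relation between the Fibonacci sequence and the golden ratio mentioned above can be made concrete: the ratios of consecutive Fibonacci numbers converge to φ = (1 + √5)/2, whatever one's stance on whether that fact is discovered or invented. A minimal sketch:

```python
# Ratios of consecutive Fibonacci numbers converge to the golden ratio
# phi = (1 + sqrt(5)) / 2 ≈ 1.618...
import math

def fib_ratios(n):
    """Return the first n ratios F(k+1)/F(k) of consecutive Fibonacci numbers."""
    a, b = 1, 1
    ratios = []
    for _ in range(n):
        a, b = b, a + b
        ratios.append(b / a)
    return ratios

phi = (1 + math.sqrt(5)) / 2
ratios = fib_ratios(30)
# The later ratios approach phi to within floating-point precision.
print(ratios[-1], phi)
```

The convergence is quite fast: after thirty terms the ratio agrees with φ to more than nine decimal places.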
In quantum mechanics we learn the famous uncertainty principle, which is probably the most important result of that branch.
In general relativity we learn that space and time are joined together as spacetime.
The measurement problem in QM arises from the uncertainty principle, and vice versa. Why is nothing analogous present in GR, not in the same form but as an analogue?
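As a numerical illustration of the uncertainty principle mentioned above (not an answer to the question), the following sketch checks that a Gaussian wave packet saturates the bound Δx·Δp = ħ/2. Units with ħ = 1 and the grid parameters are assumptions of the example:

```python
# Check numerically that a Gaussian wave packet is a minimum-uncertainty
# state: Delta_x * Delta_p = hbar / 2. We set hbar = 1.
import numpy as np

hbar = 1.0
sigma = 0.8                      # position-space width parameter (illustrative)
x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]

# Normalized Gaussian wave packet with position spread Delta_x = sigma
psi = (1.0 / (2 * np.pi * sigma**2))**0.25 * np.exp(-x**2 / (4 * sigma**2))
prob = psi**2

delta_x = np.sqrt(np.sum(x**2 * prob) * dx)      # <x> = 0 by symmetry

# For real psi with <p> = 0: <p^2> = hbar^2 * integral |dpsi/dx|^2 dx
dpsi = np.gradient(psi, dx)
delta_p = hbar * np.sqrt(np.sum(dpsi**2) * dx)

product = delta_x * delta_p                       # should be hbar / 2
print(delta_x, delta_p, product)
```

For any other (non-Gaussian) packet the product comes out strictly larger than ħ/2, which is the content of the principle.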
Is there a historical map of academic disciplines? What is the trend in how academic disciplines change (in number, nature, and label)?
I would be grateful if someone could recommend an article, book, handbook, or report about the historical map of disciplines and the history of academic disciplines.
What kind of scientific research dominates in the field of philosophy of science and research?
Please provide your suggestions for a question, problem, or research thesis on these issues: philosophy of science and research.
I invite you to the discussion.
Thank you very much.
1) There is a tradition in the philosophy of mathematics starting in the late 19th century and culminating in the foundational crisis at the beginning of the 20th century. Names here are Zermelo, Frege, Whitehead and Russell, Cantor, Brouwer, Hilbert, Gödel, Cavaillès, and some more. By that time mathematics was already focused on itself, separated from general rationalist philosophy and epistemology, from a philosophy of the cosmos and the spirit.
2) Stepping backwards in time, we have the great "rationalist" philosophers of the 17th to 19th centuries: Descartes, Leibniz, Malebranche, Spinoza, Hegel, proposing a global view of the universe in which the subject, trying to understand his situation, is immersed.
3) Taking another big step backwards in time, we have the philosophers of late antiquity and the beginning of our era (Greek philosophy, the Neoplatonist schools, oriental philosophies). These should not be left out of our considerations.
4) Returning to the 20th century, we see the founding, inside mathematics, of category theory (Eilenberg, Mac Lane, Lawvere, Grothendieck, …), which is in some sense a transversal theory inside mathematics. Among its basic principles are the notions of object, arrow, and functor, on which are then founded adjunctions, (co)limits, monads, and more evolved concepts.
Do you think these principles have significance a) for science, b) for the rationalist philosophies described before, and ultimately c) for more general philosophies of the cosmos?
Examples: the existence of an adjunction between two functors could have a meaning in physics, for instance. The existence of a natural-numbers object, known from topos theory, could have philosophical consequences (cf. Immanuel Kant, the antinomies of pure reason).
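To make the categorical notions above slightly more concrete, here is a minimal sketch (an illustration, not part of the question) of one of the simplest functors: the list construction, whose action on arrows is `fmap`. The functor laws, preservation of identity and of composition, can be checked on examples:

```python
# The "list" construction is a functor: it sends a type A to lists of A,
# and a function f : A -> B to fmap(f) : [A] -> [B].

def fmap(f, xs):
    """Lift an ordinary function f to act element-wise on lists."""
    return [f(x) for x in xs]

def identity(x):
    return x

def compose(f, g):
    return lambda x: f(g(x))

xs = [1, 2, 3]
f = lambda x: x + 1
g = lambda x: 2 * x

# Functor law 1: fmap(id) == id
law1 = fmap(identity, xs) == xs
# Functor law 2: fmap(f . g) == fmap(f) . fmap(g)
law2 = fmap(compose(f, g), xs) == fmap(f, fmap(g, xs))
print(law1, law2)
```

The same pattern, a mapping of objects plus a law-abiding mapping of arrows, is what the more evolved concepts (adjunctions, limits, monads) are built on.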
Is a "quantization of time" theory possible?
According to science, time is a physical parameter, but according to some philosophy it is an illusion. How can we define time? Can we quantize illusions?
Black hole thermodynamics and the Zeroth Law [1,2].
(a) Black hole temperature: T_H = hc³/(16π²GkM)
The LHS is intensive but the RHS is not; therefore, a violation of thermodynamics [1,2].
(b) Black hole entropy: S = πkc³A/(2hG)
The LHS is extensive but the RHS is neither intensive nor extensive; therefore, a violation of thermodynamics [1,2].
(c) Black holes do not exist [1-3].
Hawking leaves nothing of value to science.
[1] Robitaille, P.-M., "Hawking Radiation: A Violation of the Zeroth Law of Thermodynamics", American Physical Society (abstract), March 2018, http://meetings.aps.org/Meeting/NES18/Session/D01.3
[2] Robitaille, P.-M., "Hawking Radiation: A Violation of the Zeroth Law of Thermodynamics", American Physical Society (slide presentation), March 2018, http://vixra.org/pdf/1803.0264v1.pdf
[3] Crothers, S.J., "A Critical Analysis of LIGO's Recent Detection of Gravitational Waves Caused by Merging Black Holes", Hadronic Journal, Vol. 39, No. 3, 2016, pp. 271-302, http://vixra.org/pdf/1603.0127v5.pdf
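As a numerical aside (not part of the thermodynamic argument above), the quoted temperature formula T_H = hc³/(16π²GkM), which is equivalent to the standard T_H = ħc³/(8πGkM), can be evaluated directly. The constant values below are standard CODATA-style figures:

```python
# Evaluate the quoted black-hole temperature formula numerically.
import math

h = 6.62607015e-34      # Planck constant, J s
c = 2.99792458e8        # speed of light, m/s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
k = 1.380649e-23        # Boltzmann constant, J/K
M_sun = 1.989e30        # solar mass, kg (approximate)

def hawking_temperature(M):
    """T_H = h c^3 / (16 pi^2 G k M); note T_H scales as 1/M."""
    return h * c**3 / (16 * math.pi**2 * G * k * M)

# Doubling the mass halves the temperature, which is the mass
# dependence at issue in the intensive/extensive argument above.
T1 = hawking_temperature(M_sun)
T2 = hawking_temperature(2 * M_sun)
print(T1, T2)  # T1 is about 6e-8 K for a solar-mass black hole
```

Whatever one concludes about the zeroth-law argument, the formula's 1/M scaling is what makes the predicted temperatures of stellar-mass black holes so far unobservably small.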
I have posted a comment on André Orléan's "open letter" to the French Minister of Education (see my own first answer below). The letter and the background comments explain what is happening in France in the field of economics education. In the comment, I mentioned what had happened in Japan. An e-mail I received this morning tells me that a similar dispute is being repeated at University College London.
At the bottom of all these arguments lies the problem of how to interpret the status of neoclassical economics. Neoclassical economics now occupies the mainstream position and is trying to monopolize economics education and academic posts, whereas various heterodox economists are resisting this current, claiming the necessity of pluralism in economics education and research.
I have mentioned cases from three countries. There must be many similar stories in almost all countries. It would be wonderful if we could learn what is happening elsewhere. So my question is:
What is happening in your country?
Einstein’s geometrodynamics considers a 4-D spacetime geometry whose curvature is governed by mass. But the FLRW universe considers a 3-D space of curvature k (positive, zero, or negative) with time as an orthogonal coordinate. Hence it seems that standard cosmology, based on the FLRW spacetime, has strayed from the stated essence of general relativity.
The Schrödinger self-adjoint operator H is crucial for the current quantum model of the hydrogen atom: it essentially specifies the stationary states and their energies. Then there is the Schrödinger unitary evolution equation, which tells how states change with time, and in this evolution equation the same operator H appears. Thus H provides the "motionless" states, H gives the energies of these motionless states, and H is inserted into a unitary law of movement.
But this unitary evolution fails to explain or predict the physical transitions that occur between stationary states. Therefore, to fill the gap, the probabilistic interpretation of states was introduced. We then have two very different evolution laws: the deterministic unitary equation, and the random jumps between stationary states. The jumps openly violate the unitary evolution, and the unitary evolution does not allow the jumps; yet both are simultaneously accepted by quantum theory, creating a most uncomfortable state of affairs.
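The point that unitary evolution never produces jumps can be checked in a toy example (a minimal sketch, not the alternative theory proposed below): for a two-level Hamiltonian, a stationary state evolved by U(t) = exp(−iHt/ħ) only picks up a phase, so its occupation probability never changes. Units with ħ = 1 and the toy energy values are assumptions of the example:

```python
# For a diagonal Hamiltonian H = diag(E), the unitary evolution operator
# is U(t) = diag(exp(-i E t)) (with hbar = 1). An eigenstate of H is
# mapped to itself times a phase: no transitions ever occur.
import numpy as np

E = np.array([1.0, 2.0])          # toy energy levels (illustrative)
psi0 = np.array([1.0, 0.0])       # stationary state with energy E[0]

t = 0.7
U = np.diag(np.exp(-1j * E * t))  # exp(-i H t) for H = diag(E)
psi_t = U @ psi0

# |<psi0|psi_t>|^2 stays exactly 1: the state remains stationary.
overlap = abs(np.vdot(psi0, psi_t))**2
print(overlap)
```

This is precisely why the probabilistic jump postulate had to be added on top of the unitary law rather than derived from it.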
And what if the quantum evolution equation is plainly wrong? Perhaps there are alternative ways to use H.
Imagine a model, or theory, where the stationary states and energies remain the very ones specified by H, but with a continuous evolution different from the unitary one, in which an initial stationary state evolves deterministically into a final stationary state, with energy being continuously absorbed and radiated between the stationary energy levels. In this natural theory there is neither use nor need for a probabilistic interpretation. The natural model for hydrogen, comprising a space of states, an energy observable, and an evolution equation, is explained in
My question is: with this natural theory of atoms already elaborated, what are the chances of its acceptance by mainstream physics?
Professional scientists, in particular physicists and chemists, are well versed in the history of science, and modern communication hastens the diffusion of knowledge. Nevertheless, important scientific changes seem to require a lengthy process, including the disappearance of most leaders, as Max Planck noted: "They are not convinced; they die."
Scientists seem particularly conservative and incapable of admitting that their viewpoints are mistaken, as was once the case with the flat Earth, geocentrism, phlogiston, and other scientific misconceptions.
[I had heard of the Know-Nothing Party, but apparently the internet tells me that that was a disclaimer used by members of what became the American Party, which was anti-immigration in the mid-nineteenth century ... another area of discussion, though proponents today may often fall into the category for discussion here as well, but that is still a bit out-of-scope for this discussion.]
For historians and other history buffs out there, and those interested in current events, what do you see as the path that has been taken to arrive at popular anti-intellectual, anti-science views in politics? The rejection by some members of the US House of Representatives of corrections to (US) census undercounts, that is, the rejection of sampling statistics, comes to mind, in addition to the usual comments on climate change.
And are there any similar anti-intellectualism movements to be found in history anywhere in the world, including ancient history, which anyone would care to share? Can you draw any parallels?
Reasoned comments and historical evidence are requested. I do not intend to make further comments but instead wish to hear what applicable history lessons you may find interesting regarding this topic.
Does experimental science have limits as a discipline? Much actual knowledge is not the consequence of repeatable experiments. Regarding the sources of science: are they limited to experimentation? Can other disciplines, such as history, unique experiences, philosophy, etc., be more important for humankind?
In my studies many years ago, I came across the very influential thinker Alexander Bain. Most of his ideas are obsolete today, I know, but he was still an extremely influential person. I once skimmed through his autobiography, but I could not find any study by a modern scholar that places him in historical perspective. I thought this odd, considering who he was.
Does anyone know of any standard works on Bain? Nothing turned up on Amazon.
Language, as an expression of various kinds of "knowledge", is subject to continuous transformation. I would like to focus on one such transformation in the field of scientific research.
As science cannot critically verify its own assumptions, it is up to history, epistemology, philosophy, and the analysis of language to deepen the horizons of pre-understanding of each scientific proposition: the understanding of reality based on the assumptions and tradition of antecedent interpretations, which precedes direct experience of reality itself.
Popper was very attentive to the instrumental aspect of science (and therefore also to language), interested not in things in themselves but in their aspects verifiable through measurement. He therefore invited us not to interpret theories as descriptions, nor merely to use their results in practical applications. He recalled that, as "knowledge", science is nothing but a set of conjectures, highly informative guesses about the world, which, although not verifiable (i.e., such that their truth could be demonstrated), can be subjected to strict critical controls.
This is evident from various texts, and Popper emphasized these ideas in 'The Logic of Scientific Discovery': "Science is not a system of certain, or well-established, statements; nor is it a system which steadily advances towards a state of finality. Our science is not knowledge (episteme): it can never claim to have attained truth, or even a substitute for it, such as probability ...."
We do not know: we can only guess. Our guesses are guided by the unscientific, metaphysical faith in laws, in the regularities that we can uncover, discover.
This is an approach not exempt from ethical questions, because the operation has fluid boundaries. The borders can be crossed, leading to the possibility of manipulation and abuse of power against the very identity and autonomy of the persons involved.
With Bacon, we could describe our contemporary science, the method of reasoning that men today routinely apply to nature, as consisting of hasty, premature advances and of prejudices. "But, once advanced, none of our advances is upheld dogmatically. Our method of research is not to defend them, to prove how right we were; on the contrary, we try to overthrow them, using all the tools of our logical, mathematical, and technical 'baggage'".
Hence the maximal caution: "The old scientific ideal of episteme, of absolutely certain and demonstrable knowledge, has proved an idol.
The demand for scientific objectivity makes it inevitable that every assertion of science remains necessarily and forever tentative in status. The wrong view of science betrays itself in its craving to be right. For it is not the possession of knowledge, of irrefutable truth, that makes the man of science, but the persistent and critically anxious quest for truth".
[In this regard I consulted the following texts: H. R. Schlette, Philosophie, Theologie, Ideologie. Erläuterung der Differenzen, Cologne, 1968 (Italian transl. Morcelliana, Brescia, 1970, pp. 56, 78); G. Gismondi, "The critique of ideology in the discourse on the foundation of science", Relata Technica, 4 (1972), 145-156; Id., Criticism and Ethics in Scientific Research, Marietti, Torino, 1978.]
Hermeneutics, then, applied to language, human action, and ethics, allows us to articulate text and action. An action may be narrated because it is human life itself that deserves to be narrated; it presents possible narrative paths that the individual highlights while excluding others. Story and action also confirm the intersubjective dimension of human beings. The story thoroughly presents the three moments of ethical reflection: describing, narrating, and prescribing.
In his 1963 book "Little Science, Big Science", Derek de Solla Price shows science as a whole to have been growing exponentially for 400 years. He hypothesizes this to be the first part of a logistic curve. If his predictions were right, the growth of science should have started to decline by now. Are there recent measurements that can be compared to his 1963 estimates? And... was he right?
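Price's point, that a long exponential record is consistent with an eventual logistic saturation, rests on the fact that the early part of a logistic curve is practically indistinguishable from an exponential. A minimal sketch; the parameter values are illustrative, not Price's estimates:

```python
# Early on, a logistic curve with ceiling K tracks the exponential with
# the same growth rate r; only near K do the two diverge.
import math

def exponential(t, n0, r):
    return n0 * math.exp(r * t)

def logistic(t, K, n0, r):
    return K / (1 + (K / n0 - 1) * math.exp(-r * t))

n0, r, K = 1.0, 0.05, 1000.0   # hypothetical initial size, rate, ceiling

# At t = 20 the two curves nearly coincide; by t = 200 the logistic
# has saturated near its ceiling K while the exponential keeps growing.
early_gap = abs(exponential(20, n0, r) - logistic(20, K, n0, r))
late = logistic(200, K, n0, r)
print(early_gap, late)
```

This is also why the question is empirically hard: distinguishing "still exponential" from "logistic approaching its inflection" requires data close to the saturation regime.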
Through many discussions on ResearchGate, I have come to recognize that the majority of economists are still deeply influenced by Friedmanian methodology. One piece of evidence is that they take little care over the economic consistency and relevance of their models. They invest enormous time and effort in "empirical studies" and discuss the results, but they rarely question whether the basic theory on which their model rests is sensible. This ubiquitous tendency has grave effects on economics: neglect of theory and indulgence in empirics. I wonder why people do not discuss this state of economics. Economic science should restore a more suitable balance between theory and empirics.
It is clear that we should distinguish two levels of Friedmanian methodology.
(1) Friedman's methodology and thought as written in the texts, more specifically in his article "The Methodology of Positive Economics" (in Essays in Positive Economics, 1953).
(2) The methodology that is believed to be Friedman's thought.
Apparently, (2) is much more important for this question. I see dozens of papers that examine Friedmanian methodology based on his text. Many of them find that the widely spread understanding does not correctly reflect Friedman's original message. They may be right, but what matters is the widely held belief in the name of Milton Friedman.
Verificationism (according to Wikipedia) is an epistemological and philosophical position that regards a criterion of verification as necessary and sufficient for the acceptance or validation of a hypothesis, a theory, or a single statement or proposition. Essentially, verificationism says that a statement added to a scientific theory which cannot be verified is not necessarily false, but basically meaningless, because it cannot be tested against the empirical evidence of the facts. There could in fact be multiple internally logical statements explaining or interpreting a certain phenomenon, of which, in principle, only one can by definition be true.
Nonsense does not mean false; it only means that the statement's truth value cannot be decided, and so such a proposition can have no claim to be cognitive or foundational in a scientific theory. A proposition is defined as any statement that may be assigned a truth value (in classical logic, true or false). A statement to which no such value can be attributed is therefore devoid of verifiability and so, for this kind of epistemology, devoid of any sense, and finally to be eliminated as mere opinion or metaphysical proposition. Verificationism is usually associated with the logical positivism of the Vienna Circle, and in particular with one of its greatest exponents, Moritz Schlick, whose basic theses can be summarized as follows:
- Propositions with sense are those that can be verified empirically.
- Science, through the scientific method, is the cognitive activity par excellence, since it bases the truth of its propositions on this verificationist criterion.
- The propositions of metaphysics are meaningless, as they are based on illusory and unverifiable concepts. The propositions of metaphysics, says Carnap, express at most feelings or needs.
- The valid propositions are, as the English empiricist Hume claimed, the analytical ones, which express relationships between ideas (like mathematical propositions), and the propositions that express facts (such as the propositions of physics). Mathematics, like logic, expresses nothing about the world; it need not be empirically verifiable, but must serve to concatenate the verifiable and meaningful propositions among themselves, giving them the character of generality that contingent propositions lack.
- The purpose of philosophy is to perform a critique of knowledge in order to eliminate all nonsensical propositions that claim to be cognitive. The philosopher must be able to perform on language both a semantic analysis (the relation of language to reality) and a syntactic analysis (the relation of signs to one another).
Verificationism has as its structural basis the search for a connection between statements and experience, that is, the sensations that give meaning to them. This connection is called verification.
The epistemological attitude that gives rise to verificationism can be found throughout the history of philosophy and science, from Greek philosophy through Thomas Aquinas and William of Ockham, to English empiricism, positivism, and the empirio-criticism of Avenarius and Mach.
According to English empiricism (whose leading exponents can be considered Locke, Berkeley and Hume) the only source of knowledge is experience.
As Berkeley says, "the objects of human knowledge are either ideas actually imprinted on the senses, or ideas formed with the help of memory and imagination by compounding or dividing those perceived by the senses." So there is no way of formulating sentences or judgments other than from the data of experience, and the only way to verify their truth value is again through experience. Judgments based on data that cannot be verified through experience thus have no sense and are therefore to be rejected as unscientific.
A position that takes the consequences of empiricism seriously is Hume's version. Considering that only experience can provide the truth value of a proposition, he rejects all those that claim universal validity. A law becomes true only if verified; but once it has been verified through experience, nothing can guarantee that the experience will recur whenever similar conditions present themselves. The verification of an empirical proposition is always contingent, never necessary. It is therefore difficult for Hume to give a definitive foundation to science in the traditional sense, i.e., as a body of knowledge that is certain and necessary.
The sciences, says the positivist Comte, must seek the immutable laws of nature, and as such these laws must hold regardless of any contingent experience that displays them to the senses or that should occur whenever the law so provides.
Some positivists (proponents of a "strong" principle of verification) note, however, that the principle of verifiability makes some metaphysical judgments significant, such as "The soul is immortal": indeed there is a method of verification, namely to "wait a while and die". To prevent statements of this type from counting as meaningful, a stronger version of the principle of verifiability was devised. This states that a judgment has meaning only if it can be shown definitively true or false; i.e., there must be an experience that can exhibit this truth value.
This version is called strong because it excludes any knowledge that is not empirical or logical, and therefore excludes sense being given to any expression that is not the result of empirical knowledge or of logical deduction from empirical propositions. This version of verificationism would be criticized by some less radical positivists, such as Neurath and Carnap, for the simple reason that, if verification is necessary to give sense to a proposition, then the principle of verifiability itself must be verified, and this is not possible.
Numerous propositions of common use, whose meaning seems clear from the terms employed, are unverifiable, such as statements about the past or the future: "Churchill sneezed 47 times in 1949" or "Tomorrow it will rain." These propositions can in principle be verified, so a method of verification can be provided, and under the "weak" version of the principle of verifiability they are endowed with meaning; under the "strong" version, however, they are mere nonsense.
Assertions about the Absolute, and metaphysical assertions in general, are to be rejected, at least as propositions to which the positive verificationist method can be applied, even though this does not exclude their existence: trying to deny a metaphysical proposition has the same standing as trying to prove it. Metaphysical propositions are therefore set aside, unrebutted.
Comte rejects the so-called absolute empiricism, which states that any proposition that is not established by the facts is to be rejected as senseless and therefore not liable to be taken as a scientific proposition.
Special mention must be made of mathematics: for Comte, not a science but a language, and therefore the basis of every positive science. Mathematics, like logic, as the logical empiricists would later say, has the purpose of showing the connections between propositions in order to preserve their truth values, not to produce new ones. The propositions of mathematics are a priori truths; as such they cannot be verified and therefore say nothing about the world, but tell us how the world must be spoken of after it has been experienced.
Perhaps the best-known critique of the principle of verifiability is Popper's. Though its main critic, he never abandons the convictions set out in the positivist manifesto, nor the idea that science has a rational and deductive structure, though he describes that structure in ways other than Schlick's. In particular, the principle of verification, in both its weak and strong versions, is abolished and replaced by that of falsifiability. This principle is in fact an admission of the impossibility of science arriving at statements that can claim to be verified as they stand, and also a condemnation of the principle of induction insofar as it claims to provide a basis for formulating necessary laws. Popper says that billions of confirmations are not enough to establish that a given theory is certain, while a single falsification is enough to show that it is not true. Carnap's criterion of testability becomes the possibility of a statement being subjected to falsification, and the structure of science, as Hume had already stated, is such that it does not confirm a hypothesis; at most it falsifies it. The experiments to which the laws of science are subjected are useful when they try to falsify the laws' predictions, not when they try to verify them.
The criticisms that buried verificationism come from so-called post-positivist epistemology, whose leading exponents are Kuhn, Lakatos, and Feyerabend. In varying degrees, all three claim that a fact cannot be verified because bare facts do not even exist: they can only be represented within a theory already considered scientific. Therefore, there is no distinction between observational and theoretical terms, and even concepts considered basic to science do not possess the same meaning when framed within two different theories (think, for example, of the concept of mass for Newton and for Einstein). According to post-positivism, science itself is not empirical, because even its data are not empirically verifiable, and there is no criterion of significance; that is, it is not possible to separate a scientific statement from one belonging to other human activities.
Finally, we follow the position of Professor Franco Giudice, for whom, in "Testability and Meaning" (1936-1937), Rudolf Carnap recognizes that absolute verification in science is almost impossible. The criterion of significance must therefore change: the principle of verification must be replaced with the concept of confirmation; a proposition is significant if, and only if, it is confirmable. The verifiability of propositions then consists only of gradually increasing confirmations. Thus the acceptance or rejection of a proposition depends on the conventional decision to regard a given degree of confirmation as sufficient or insufficient. The meaning of a proposition had been determined by the conditions of its verification (the verification principle): a proposition is significant if, and only if, there is an empirical method for deciding whether it is true or false; if no such method is given, it is an insignificant pseudo-proposition.
Should hypotheses always be based on a theory? I will provide an example here without variable names. I am reading a paper in which the authors argue that X (an action) should be related to Y (an emotion). To support this argument, the authors suggest that when individuals engage in X, they are more likely to feel a sense of absorption and thus should experience Y. There is no theory here to support the relationship between X and Y. They are also not proposing absorption as a mediator; they are just using this variable to explain why X should lead to Y. Would this argument be stronger if I used a theory to support the relationship between X and Y? Can someone refer me to a research paper that emphasizes the need for theory-driven hypotheses? Thanks!
I am quite surprised that everybody says Galileo was the first to scientifically describe the relativity of motion, contrary to the fact that at least Copernicus did it earlier, and in quite explicit form:
"Every observed change of place is caused by a motion of either the observed object or the observer or, of course, by an unequal displacement of each. For when things move with equal speed in the same direction, the motion is not perceived, as between the observed object and the observer."
Nicholas Copernicus of Toruń, On the Revolutions of the Heavenly Spheres, 1543.
I am also surprised from time to time by statements that it was Galileo who proposed the heliocentric system.
It is an interesting aspect of the distortion of historical facts. Any thoughts, or other examples of similar injustice? Why does it happen?
This refers to the recent experiments of Radin et al.:
1) D. Radin, L. Michel, K. Galdamez, P. Wendland, R. Rickenbach and A. Delorme, Physics Essays, 25, 2, 157 (2012).
2) D. Radin, L. Michel, J. Johnston and A. Delorme, Physics Essays, 26, 4, 553 (2013).
These experiments show that observers can affect the outcome of a double-slit experiment, as evidenced by a definite change in the interference pattern.
This requires urgent attention from the scientific community, especially physicists.
If these observed effects are real, then we must have a scientific theory that can account for them.
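For background (and without taking a position on the claimed effect), the ideal two-slit interference pattern that such experiments monitor is easy to state: for slit separation d, wavelength λ, and far-field angle θ, the narrow-slit intensity is I(θ) ∝ cos²(πd·sinθ/λ). The parameter values below are illustrative:

```python
# Ideal two-slit (narrow-slit) far-field interference pattern.
import math

def two_slit_intensity(theta, d, lam):
    """Relative intensity at angle theta for slit separation d, wavelength lam."""
    return math.cos(math.pi * d * math.sin(theta) / lam) ** 2

d, lam = 1e-4, 650e-9   # illustrative: 0.1 mm slit separation, red laser light

# Central maximum at theta = 0; first minimum where d * sin(theta) = lam / 2.
theta_min = math.asin(lam / (2 * d))
print(two_slit_intensity(0.0, d, lam), two_slit_intensity(theta_min, d, lam))
```

A "change in the interference pattern" in such experiments is typically reported as a shift in fringe visibility, i.e., in how deep the minima of this curve are relative to the maxima.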
I'm interested in comparing Indigenous research methods with those of other ancient cultures. Indigenous research methods are relatively well documented for Australian Aboriginal peoples, New Zealand Maori, and North American Indians. I was hoping to locate examples of other non-Western (non-Eurocentric) research methods used by cultures such as those of China, Africa, South America, India, etc. For example, what methodology did the Chinese use to develop their knowledge of Chinese medicine? I realise these methods may not have been documented, or may be documented in a non-English language. Any leads would be helpful at this stage.
While scientific cosmology rarely occurs in the work of Karl Popper, it is nevertheless a subject that interested him. The problem now is whether the falsifiability criterion can be applied to cosmological theories.
For instance, certain ideas in cosmology have never been refuted, yet the same methods are used over and over despite their lack of observational support: for instance, the multiverse idea (often used in string theory) and the Wheeler-DeWitt equation (often used in quantum cosmology).
So do you think that Popperian falsifiability can be applied to cosmology as a science too? Your comments are welcome.
My objective is to create and accumulate physical evidence, and to demonstrate irrefutable physical evidence, proving that the existing definitions for software components and CBSE/CBSD are fundamentally flawed. Today, no computer science textbook introducing software components and CBSD (component-based design for software products) presents the assumptions (i.e. first principles) that resulted in such flawed definitions.
In real science, anything without irrefutable proof is an assumption. What are the undocumented scientific assumptions (or first principles) at the root of computer science that resulted in fundamentally flawed definitions for so-called software components and CBD (component-based design) for software products? Each definition for each kind of so-called software component has no basis in reality and clearly contradicts the facts we know about the physical functional components used in the CBD of physical products. What are the undocumented assumptions that led researchers to define the properties of software components without any consideration of the reality and facts we all know about physical functional components and the CBD of physical products?
Except for computer science or software engineering textbooks introducing software components and CBSD, I believe the first chapter of a textbook in any other scientific discipline discusses the first principles at the root of that discipline. Each definition and concept of the discipline is derived by relying on those first principles, on observations (including empirical results), and on sound rational reasoning. For example, any basic science textbook for school children starts by teaching that "Copernicus discovered that the Sun is at the center". This is one of the first principles at the root of our scientific knowledge; if it were wrong, a large portion of our scientific knowledge would be invalid.
I have asked countless experts why we need a different, new description (i.e. definitions and/or lists of properties) for software components and CBSD, when that new description and its properties and observations clearly contradict the facts, concepts, and observations we know about physical functional components and the CBD of large physical products (those having at least a dozen physical functional components). I was given many excuses, such as: software is different/unique, or it is impossible to invent software components equivalent to physical functional components.
All such excuses are mere undocumented assumptions. It is impossible to find any evidence that anyone ever validated them. Such assumptions must be documented, yet no textbook or paper on software components even mentions the baseless assumptions relied upon to conclude that each kind of useful part is a kind of software component; for example, that reusable software parts are a kind of software component. CBD for software is then defined as using such fake components. Using highly reusable ingredient parts (e.g. plastic, steel, cement, alloy, or silicon in wafers) is not CBD. If you ask ten different experts for a definition or description of software components, you get ten different answers, none with any basis in what we know about physical components. Only God has more mysterious descriptions, as if no one alive has ever seen a physical functional component.
The existing descriptions and definitions for so-called CBSD and so-called software components were invented out of thin air (based on wishful thinking) by relying on such undocumented myths. Today many experts defend the definitions by treating these undocumented myths as inalienable truths of nature, not much different from how researchers defended epicycles by relying on the assumption "the Earth is static" until 500 years ago. Moreover, most of the concepts of CBSD and software components created over the past 50 years were derived by relying on these fundamentally flawed definitions, whose properties and descriptions are rooted in undocumented and unsubstantiated assumptions.
Is there any proof that it is impossible to invent real software components equivalent to physical functional components for achieving real CBSD (CBD for software products), where real CBSD is equivalent to the CBD of large physical products (having at least a dozen physical functional components)? There is no proof that such assumptions are accurate, so it is wrong to rely on them. It is a fundamental error if such assumptions (i.e. first principles) are not documented.
I strongly believe such assumptions must be documented in the first chapters of each respective scientific discipline, because documenting them keeps the assumptions on the radar of our collective consciousness and compels future researchers to validate them (for example, when technology makes sufficient progress to do so).
I am not saying it was wrong to make the assumptions and definitions created for software components 50 years ago. But it is a huge error not to document the assumptions relied upon in making such different, new definitions (definitions that ignore reality and known facts). Those assumptions may have been acceptable and true 50 years ago, when computer science and software engineering were in their infancy and assembly language and FORTRAN were the leading-edge languages, but are they still valid? If each of the first principles (i.e. assumptions) is a proven fact, who proved it, and where can I find the proof? Such information must be presented in the first chapters.
In real science, anything without irrefutable proof is an assumption. Are such undocumented, unsubstantiated assumptions facts? Shouldn't computer science textbooks on software components document proof for these assumptions before relying on them to define the nature and properties of software components? All the definitions and concepts for software components and CBSD could be wrong if the undocumented, unsubstantiated assumptions turn out to contain huge errors.
My objective is to provide physical evidence (i) proving that it is possible to discover accurate descriptions of physical functional components and the CBD of large physical products (having at least a dozen physical functional components), and (ii) proving that, once those accurate descriptions are discovered, it is not hard to invent real software components (satisfying the accurate description of physical functional components) for achieving real CBSD (satisfying the accurate description of the CBD of physical products).
It is nearly impossible to expose an error at the root of a deeply entrenched paradigm, whether CBSE/CBSD (evolving for 50 years) or the geocentric paradigm (which evolved for over 1,000 years). For example, the assumption "the Earth is static" was considered an inalienable truth (not only of nature but also of God/the Bible) for thousands of years, yet it turned out to be a flaw that sidetracked the research efforts of countless researchers in the basic sciences into a scientific crisis. We now know that no meaningful scientific progress would have been possible if that error had not been exposed. The only possible way to expose such an error is to show physical evidence and, even if most experts refuse to look, to find the few experts who are willing to examine the physical evidence with an open mind.
I have a lot of physical evidence, and I am now building a team of engineers and the necessary tools for building software applications by assembling real software components to achieve real CBSD (e.g. achieving the CBD-structure http://real-software-components.com/CBD/CBD-structure.html by using the CBD-process http://real-software-components.com/CBD/CBD-process.html). When our tools and team are ready, we should be able to build any GUI application by assembling real software components.
In real science, anything without irrefutable proof is an assumption. Any real scientific discipline must document each of the assumptions (i.e. first principles) at its root before relying on them to derive concepts, definitions, and observations (which can be perceived as accurate only if the assumptions are proven true): https://www.researchgate.net/publication/273897031_In_real_science_anything_not_having_proof_is_an_assumption_and_such_assumptions_must_be_documented_before_relying_on_them_to_create_definitionsconcepts
I have tried writing papers and giving presentations to educate people about the error, but none of them worked. I learned the hard way that this kind of complex paradigm shift cannot happen in a couple of hours of presentation or by reading a 15-to-20-page paper. The only way left for me to expose the flawed first principles at the root of a deeply entrenched paradigm is to find experts willing to see physical evidence, and to show it to them: https://www.researchgate.net/publication/273897524_What_kind_of_physical_evidence_is_needed__How_can_I_provide_such_physical_evidence_to_expose_undocumented_and_flawed_assumptions_at_the_root_of_definitions_for_CBSDcomponents
So I am planning to work with willing customers to build their applications, which gives us a few weeks to a couple of months to work with them, building their software by identifying the "self-contained features and functionality" that can be designed as replaceable components to achieve real CBSD.
How can I find experts or companies willing to work with us and see the physical evidence, for example by allowing us to work with them to implement their applications as a CBD-structure? What kind of physical evidence would be compelling to anyone willing to give us a chance (at no cost to them, since we can work for free to provide compelling physical evidence)? I have failed many times in this complex effort, so I am not sure what could work. Could this approach work?
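To make the idea of "self-contained features designed as replaceable components" a little more concrete, here is a minimal, generic sketch in Python. It is not the poster's CBD-structure or CBD-process; all names (Component, Clock, Calendar, Application) are hypothetical illustrations of an application that depends only on a component interface, so that any component can be swapped for another conforming one.

```python
from abc import ABC, abstractmethod


class Component(ABC):
    """A hypothetical 'replaceable component': a self-contained unit of
    functionality hidden behind a stable interface."""

    @abstractmethod
    def render(self) -> str:
        """Produce this component's output."""


class Clock(Component):
    def render(self) -> str:
        return "<clock/>"


class Calendar(Component):
    def render(self) -> str:
        return "<calendar/>"


class Application:
    """The application depends only on the Component interface, so each
    component can be replaced without touching the rest of the code."""

    def __init__(self, components):
        self.components = list(components)

    def render(self) -> str:
        return "".join(c.render() for c in self.components)


app = Application([Clock(), Calendar()])
print(app.render())  # → <clock/><calendar/>
```

Swapping `Clock` for any other `Component` subclass requires no change to `Application`, which is the replaceability property the post is asking about.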
I am looking for information on the history of the development of statistical significance formulae, the mathematical calculations and why they were chosen.
I would also like to learn the same about effect size.
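While the question is about the history of these formulae, it may help to state concretely what two of them compute today. Below is a minimal sketch (with made-up data) of Welch's two-sample t statistic, a common significance statistic, and Cohen's d, a common effect-size measure; the function names are mine.

```python
from statistics import mean, stdev


def welch_t(a, b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)


def cohens_d(a, b):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_sd = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                 / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled_sd


# Hypothetical measurements from two groups.
a = [5.1, 4.9, 5.4, 5.0, 5.2]
b = [4.6, 4.8, 4.5, 4.9, 4.7]
print(round(welch_t(a, b), 2), round(cohens_d(a, b), 2))  # → 3.77 2.39
```

The point of the contrast: the t statistic grows with sample size even for a fixed difference, whereas d describes the magnitude of the difference itself, which is roughly why effect size was introduced alongside significance testing.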
It is known that physics is an empirical science, in the sense that all propositions should be verified by experiment. But Bertrand Russell once remarked that the principle of verifiability itself cannot be verified, and therefore cannot be considered a principle of science.
In a 1917 paper, Russell suggested sense-data as a way around the problem of verifiability in physics (http://selfpace.uconn.edu/class/ana/RussellRelationSenseData.pdf), but later he changed his mind; see http://www.mcps.umn.edu/philosophy/12_8savage.pdf
So what do you think? Is there a role for sense-data in the epistemology of modern physics?
Section II of “The fixation of belief”  opens dramatically with a one-premise argument—Peirce’s truth-preservation argument PTPA—concluding that truth-preservation is necessary and sufficient for validity: he uses ‘good’ interchangeably with ‘valid’. He premises an epistemic function and concludes an ontic nature.
The object of reasoning is determining from what we know something not known.
Consequently, reasoning is good if it gives true conclusions from true premises, and not otherwise.
Assuming Peirce’s premise for purposes of discussion, it becomes clear that PTPA is a formal fallacy: reasoning that concludes one of its known premises is truth-preserving without “determining” something not known. It is conceivable that Peirce’s conclusion be false with his premise true [1, pp. 19ff].
The above invalidation of PTPA overlooks epistemically important points that independently invalidate PTPA: nothing in the conclusion is about reasoning producing knowledge of the conclusion from premises known true: in fact, nothing is about premises known to be true, nothing is about conclusions known to be true, and nothing is about reasoning being knowledge-preservative.
The following is an emended form of PTPA.
One object of reasoning is determining from what we know something not known.
Consequently, reasoning is good if it gives knowledge of true conclusions not among the premises from premises known to be true, and not otherwise.
PTPA has other flaws. For example, besides being a formal non-sequitur, PTPA is also a petitio principii [1, pp. 34ff]. Peirce’s premise not only isn’t known to be true—which would be enough to establish question-begging—it’s false: reasoning also determines consequences of premises not known to be true [1, pp. 17f].
 JOHN CORCORAN, Argumentations and logic, Argumentation, vol. 3 (1989), pp. 17–43.
 CHARLES SANDERS PEIRCE, The fixation of belief, Popular Science Monthly, vol. 12 (1877), pp. 1–15.
Q1 Did Peirce ever retract PTPA?
Q2 Has PTPA been discussed in the literature?
Q3 Did Peirce ever recognize consequence-preservation as a desideratum of reasoning?
Q4 Did Peirce ever recognize knowledge-preservation as a desideratum of reasoning?
Q5 Did Peirce ever retract the premise or the conclusion of PTPA?
In The Nature of the Physical World, Eddington wrote:
The principle of indeterminacy. Thus far we have shown that modern physics is drifting away from the postulate that the future is predetermined, ignoring rather than deliberately rejecting it. With the discovery of the Principle of Indeterminacy its attitude has become definitely hostile.
Let us take the simplest case in which we think we can predict the future. Suppose we have a particle with known position and velocity at the present instant. Assuming that nothing interferes with it we can predict the position at a subsequent instant. ... It is just this simple prediction which the principle of indeterminacy expressly forbids. It states that we cannot know accurately both the velocity and the position of a particle at the present instant.
According to Eddington, then, we cannot predict the future of a particular particle beyond a level of accuracy related to the Planck constant (in QM we can predict only the statistics of results over similar particles). The outcome for a particular particle will fall within a range of possibilities, and this range can be predicted. But the specific outcome for a particular particle is, we might say, sub-causal, and not subject to prediction. So, is universal causality (the claim that every event has a cause, and that when the same cause is repeated the same result will follow) shown false, as Eddington holds?
In ancient times, mathematics was done through argumentation, discourse, and rhetoric. The thirteen books of Euclid's Elements contain no symbols describing the behavior of quantities at all, apart from labels for the geometric objects. The symbols of arithmetic (=, +, −, ×, ÷) were created only in the 15th to 17th centuries, which most people find hard to believe — you heard me right. The equality sign "=" first appeared in print in 1557 (Robert Recorde), "+" and "−" in 1489 (Johannes Widmann), the multiplication sign "×" in 1631 (William Oughtred), and the division sign "÷" in 1659 (Johann Rahn). How recent these symbols are runs contrary to most people's beliefs.
It is because of this lack of symbols that mathematics did not develop as fast as it has since symbols were introduced, making representation, the writing of expressions, and algebraic manipulation handy, enjoyable, and easy.
These things opened the way for mathematics to grow into a galaxy — a galaxy of mathematics. What is your take on this issue, and what is your expertise on the chronology of symbol creation and the advances mathematics made because of it?
The British astrophysicist, A.S. Eddington wrote (1928), interpreting QM, "It has become doubtful whether it will ever be possible to construct a physical world solely out of the knowable - the guiding principle of our macroscopic theories. ...It seems more likely that we must be content to admit a mixture of the knowable and the unknowable. ...This means a denial of determinism, because the data required for a prediction of the future will include the unknowable elements of the past. I think it was Heisenberg who said, 'The question whether from a complete knowledge of the past we can predict the future, does not arise because a complete knowledge of the past involves a self-contradiction.' "
Does the uncertainty principle imply, then, that particular elements of the world are unknowable, - some things are knowable, others not, as Eddington has it? More generally, do results in physics tell us something substantial about epistemology - the theory of knowledge? Does epistemology thus have an empirical basis or empirical conditions it must adequately meet?
I recently published my book "The Origin of Science", which can be downloaded at https://www.researchgate.net/profile/Louis_Liebenberg/publications/ I am interested in alternative theories on the origin of science, and in how this debate can lead to a better understanding of how our ability for scientific reasoning evolved.
Is it reasonable to use these terms?
A number of papers were published a long time ago but still receive many citations.
If it is possible, then is it predictable?
Can citations be a suitable measure for judging the useful age of a paper?
Which papers have a longer useful lifetime, or a later expiry date?
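One simple way to put a number on a paper's "useful age" is the citation half-life: the year by which half of all its citations have accrued. A minimal sketch, with entirely made-up citation data and a function name of my own choosing:

```python
def citation_half_life(citations_by_year):
    """Return the first year by which at least half of all citations
    have accrued. `citations_by_year` maps year -> citation count."""
    total = sum(citations_by_year.values())
    running = 0
    for year in sorted(citations_by_year):
        running += citations_by_year[year]
        if running * 2 >= total:
            return year


# Hypothetical citation record for an old but still-cited paper.
cites = {1995: 2, 1996: 5, 1997: 9, 2000: 10, 2010: 8, 2020: 6}
print(citation_half_life(cites))  # → 2000
```

A paper whose half-life sits far from its publication year, as in this toy example, is exactly the kind of long-lived paper the question asks about; whether that lifetime is predictable in advance is a separate empirical question.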
Thanks for your inputs.
Back in my 2nd semester, I still remember those bored faces trying to hide their yawns during lectures on the History of Science. But I found it unexpectedly interesting. Learning the manner of approach of ancient philosophers and naturalists was quite exciting. Yet it seems to me that most students neglect this valuable subject because their minds are preoccupied with the notion that most of the things taught in this course — ancient ways of thinking about the cosmos and the Earth, ancient perspectives on health and medicine — are almost obvious to everybody. What they miss is what they ought to learn: the hard work, the practices, the modes of approach, the determination and dedication in those days when everything seemed mysterious, when nothing was yet "apparent".
So what more can we learn from our forefathers? And how can this subject be popularized, especially among young people?