
# Axiom - Science topic

This group invites people from across the globe who like to ask questions that can provoke human minds.
Questions related to Axiom
Question
A common axiom says that everything has both advantages and disadvantages. This question seeks to examine the disadvantages of conflict for the quality of scientific research, in areas both directly and remotely affected.
@Stephen David Edwards, it's true that conflicts can affect the quality of research by disrupting collaboration among researchers.
Question
algebraic geometry
The projective plane satisfies the following axioms:
A. Any two distinct points are contained in a unique line.
B. Any two distinct lines intersect in a unique point.
C. There exist four distinct points, no three of which are collinear.
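These three axioms can be checked mechanically on the smallest projective plane, the Fano plane; a sketch in Python (the labelling of the seven points is arbitrary):

```python
from itertools import combinations

# The Fano plane: 7 points (0-6), 7 lines of 3 points each.
LINES = [{0,1,2}, {0,3,4}, {0,5,6}, {1,3,5}, {1,4,6}, {2,3,6}, {2,4,5}]
POINTS = set(range(7))

# Axiom A: any two distinct points lie on exactly one line.
for p, q in combinations(POINTS, 2):
    assert sum({p, q} <= l for l in LINES) == 1

# Axiom B: any two distinct lines meet in exactly one point.
for l1, l2 in combinations(LINES, 2):
    assert len(l1 & l2) == 1

# Axiom C: there exist four points, no three of them collinear.
def no_three_collinear(quad):
    return all(not set(t) <= l for t in combinations(quad, 3) for l in LINES)

assert any(no_three_collinear(q) for q in combinations(POINTS, 4))
print("all three axioms hold for the Fano plane")
```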
Question
P1: Ontology + Data = Knowledge Graph (KG)
P2: If a KG is the sum of these two summands, it follows:
C: An ontology is a framework for a KG
As a framework, an Ontology consists of individuals (data), classes, properties, relations and axioms.
Individual: Tom
Class: Interim project manager
Property: takes over the project
Relationship: Tom is the successor of Bernd
Axiom: Tom takes over the project from Bernd
KG: We ask a question and the knowledge graph makes connections to the individual elements of the Ontology. It brings them to life, so to speak.
If you compare the KG with our neural network, you can see similarities. If I ask Tom a question, he will use his neural network to answer this question and generate new ideas.
That's why Knowledge Graphs are also defined as a kind of semantic network.
Does the community agree with this approach? Please give feedback! Thanks!
" In computing, an ontology is then a concrete, formal representation — a convention—on what terms mean within the scope in which they are used (e.g., a given domain). Like all conventions, the usefulness of an ontology depends on how broadly and consistently it is adopted and how detailed it is. Knowledge graphs that use a shared ontology will be more interoperable. Given that ontologies are formal representations, they can further be used to automate entailment."
Extracted from
Aidan Hogan, Eva Blomqvist, Michael Cochez, Claudia D’amato, Gerard De Melo, Claudio Gutierrez, Sabrina Kirrane, José Emilio Labra Gayo, Roberto Navigli, Sebastian Neumaier, Axel-Cyrille Ngonga Ngomo, Axel Polleres, Sabbir M. Rashid, Anisa Rula, Lukas Schmelzeisen, Juan Sequeda, Steffen Staab, and Antoine Zimmermann. 2021. Knowledge Graphs. ACM Comput. Surv. 54, 4, Article 71 (May 2022), 37 pages. DOI:https://doi.org/10.1145/3447772
Ontology rules can be used for reasoning, in order to deduce implicit knowledge from existing KG data.
But an ontology itself can contain data (called individuals or instances or concrete objects). So what is the difference between an ontology and an ontology-based KG? Maybe data size.
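The Tom/Bernd ontology from the question can be sketched as a tiny triple store; the predicate names and the project name "ProjectX" are illustrative assumptions, not part of any standard vocabulary:

```python
# A toy knowledge graph as a set of (subject, predicate, object) triples,
# built from the ontology sketched in the question above.
triples = {
    ("Tom", "rdf:type", "InterimProjectManager"),  # class membership
    ("Tom", "takesOverProject", "ProjectX"),       # property assertion
    ("Tom", "successorOf", "Bernd"),               # relationship
}

def query(subject):
    """Return everything the graph 'knows' about a subject."""
    return {(p, o) for s, p, o in triples if s == subject}

print(query("Tom"))
```

Asking a question then amounts to traversing these connections, which is the sense in which the KG "brings the ontology to life".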
Question
What kinds of scientific research dominate in the field of new ideas and new concepts in science, in art, in business?
The new idea often contains something innovative in relation to what was previously invented, created, designed and manufactured.
New ideas take different forms of new solutions, new concepts, new models, new designs, new axioms, new directions of scientific thought, and many other forms that incorporate any aspects of novelty.
While conducting scientific research, new ideas, inventions, techniques, technologies, innovations, etc. are created. Thanks to this, the economy develops, and civilizational and cultural progress is realized. Therefore, it is necessary to create good standards and conditions, including financing for research projects.
New ideas initiate new trends. New trends usher in new eras. From the first to the fourth technological revolution, new technologies and innovations also play an increasingly important role in terms of new trends in the development of civilization.
Best wishes,
Dariusz Prokopowicz
Question
This is a question about Gödel numbering. As I understand it, the axioms of a system are mapped to a set of composite numbers. Is this really the case, so for example the 5 axioms of Euclidean plane geometry are mapped to 5 composite numbers? Does this also imply that theorems of the system are now composite numbers that depend on the composite numbers that were the targets of the map from the set of axioms, PLUS the elementary numbers that encode the logical operations, such as +, if..then, "there exists", etc.?
Jason Hadnot - the simple answer to your question is YES. The more complicated answer is that Gödel numbering is just a coding of sentences into numbers. Yes, as has been pointed out, there are many possible codings. The only thing that is really important is that every sentence gets a unique representation as a number. The point of such a coding is that everything you can do to manipulate sentences can now be done by number-theoretic functions. Schemas present no particular difficulty: provided we get the coding right, we can have a function for instantiating any instance of a schema, i.e. a function that takes a number representing a schema and some numbers representing the items to be instantiated, and delivers a number representing the instantiation.
One minor aside on the number of axioms: any finite list of axioms can, of course, be represented as a single conjunctive axiom, and so can be made to correspond to a single number under coding.
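One possible coding of the kind described above can be sketched in Python; the symbol table and the example sentence are illustrative choices, not Gödel's original assignment:

```python
# A minimal Gödel-style coding (one of many possible codings): assign each
# symbol a positive integer, then encode a sentence s1 s2 ... sn as
# 2^c(s1) * 3^c(s2) * 5^c(s3) * ..., using the n-th prime as the n-th base.
SYMBOLS = {"0": 1, "S": 2, "+": 3, "=": 4, "(": 5, ")": 6, "E": 7, "x": 8}

def primes():
    """Yield 2, 3, 5, 7, ... by trial division (fine for short sentences)."""
    n = 2
    while True:
        if all(n % d for d in range(2, int(n**0.5) + 1)):
            yield n
        n += 1

def godel_number(sentence):
    g = 1
    for p, sym in zip(primes(), sentence):
        g *= p ** SYMBOLS[sym]
    return g

# "0=0" encodes as 2^1 * 3^4 * 5^1 = 810; uniqueness of prime
# factorization lets us decode the sentence back from the number.
print(godel_number("0=0"))
```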
Question
Austrian-born mathematician, logician, and philosopher Kurt Gödel created in 1931 one of the most stunning intellectual achievements in history. His shocking incompleteness theorems, published when he was just 25, proved that within any axiomatic mathematical system there are propositions that cannot be proved or disproved from the axioms within the system. Such a system cannot be both complete and consistent.
The understanding of Gödel’s proof requires advanced knowledge of symbolic logic, as well as of Hilbert's and Peano's mathematics. Hilbert’s Program was a proposal by the German mathematician David Hilbert to ground all existing theories in a finite, complete set of axioms, and to provide a proof that these axioms were consistent. Ultimately, the consistency of all of mathematics would be reduced to basic arithmetic. Gödel’s 1931 paper proved that Hilbert’s Program is unattainable.
The book Gödel’s Proof by Ernest Nagel and James Newman provides a readable and accessible explanation of the main ideas and broad implications of Gödel's discovery.
Mathematicians, scholars and non-specialist readers are invited to offer their interpretations of Gödel's theory.
The idea is that you have to go outside the system in order to prove some results.
For instance, the proof of Fermat's Last Theorem needed tools like elliptic curves, foreign to plain integer arithmetic.
That is, if you mix a bunch of rules, there are also unexpected side effects: statements which might be true but cannot be proven. Some phenomena look random (e.g. the distribution of prime numbers).
Many other problems are still open.
Question
A difficulty in just starting research is that it can be hard even to know what it means to have property X, such as spacetime in physics. The question of avoiding circular reasoning naturally appears.
To avoid circularity, we suggest starting below the level of measurements, in the case of spacetime or property X, before a metric function is even introduced.
We then start with a simple topological space, divisible into types by type theory only, a notion lower than sets. In that space, the metric is not introduced ad hoc, but by requiring observable properties of the metric function.
The method, to be applied to property X, mutatis mutandis as done for spacetime:
1. describe all free observers,
2. impose that all agree on the interval,
3. make the interval a differential defined by a metric function, and
4. find the metric function that fits.
For example, because condition (2) includes as a special case that all such observers agree on the free speed of light (free as in vacuo), an experimental, undisputed fact, the determination of the interval ds² is fixed by nature as the arbiter, by physics, in spacetime.
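In the spacetime case, steps (1)-(4) together with condition (2) lead, as stated below, to the Minkowski interval; a sketch in standard notation (the signature convention here is one common choice, not forced by the argument):

```latex
% The interval all free observers agree on, in Cartesian coordinates:
ds^2 = c^2\,dt^2 - dx^2 - dy^2 - dz^2
% Agreement on the speed of light means ds^2 = 0 along light rays in
% every such frame; with homogeneity and isotropy this fixes the
% metric above up to an overall constant factor.
```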
Other arbiters are possible, such as cosmology, the mind, mathematics, or a historically-based sequence.
In physics, this produces the only answer possible in nature for ds², the expression for the correct metric to use, which provides the fusion of space and time in the interval ds², as already known to Minkowski and Einstein more than 100 years ago. In cosmology, with the Hubble flow, other answers are possible. The introduction of dark matter could be done this way.
The Lorentz transformation is then introduced as a consequence of this procedure, not as an axiom, and not before.
This simple, proper sequence of steps, exemplified for property X as spacetime, avoids the inconsistencies of Einstein's original treatment, and was later adopted by Einstein in formulating general relativity, as a curvature of the same spacetime.
As well-known, Shannon proposed a method for applying a mathematical form of logic called Boolean algebra to the design of relay switching circuits, creating the basis for his Information Theory (IT). This was his Master thesis and the start of our digital future.
Less well known, it did not work well in practice. Aging batteries, noise, and speed limited those circuits, due to the two-state nature of IT, Shannon's creation. We needed tri-state logic, soon made possible by Verilog, an IEEE standard, and success was reached. Noise was part of the model itself.
Quantum theory, however, has stayed two-state, and is therefore limited by... noise. For a tri-state quantum future, see
Question
Dear Friends,
Kindly allow me to ask you a very basic but important question. What is the basic difference between (i) scientific disciplines (e.g. physics, chemistry, botany, or zoology) and (ii) disciplines or branches of mathematics (e.g. calculus, trigonometry, algebra, and geometry)?
I feel that objective knowledge of the basic or primary difference between science and math is useful for imparting perfect and objective knowledge of science and math (and of their role in technological inventions and expansion).
Let me give my answer to start this debate:
Each branch of mathematics invents and uses a complementary, harmonious, and/or interdependent set of valid axioms as core first principles in the foundation for evolving and/or expanding an internally consistent paradigm for each of its branches (e.g. calculus, algebra, or geometry). If the foundation comprises a few inharmonious or invalid axioms in any branch, such invalid axioms create internal inconsistencies in the discipline (i.e. the branch of math). Internal consistency can be restored by fine-tuning the inharmonious axioms or by inventing new valid axioms to replace the invalid ones.
Each of the Scientific disciplines must discover new falsifiable basic facts and prove the new falsifiable scientific facts and use such proven scientific facts as first-principles in its foundation, where a scientific fact implies a falsifiable discovery that cannot be falsified by vigorous efforts to disprove the fact. We know what happened when one of the first principles (i.e. the Earth is static at the centre) was flawed.
Examples of basic proven scientific facts include: the Sun is at the centre; Newton’s 3 laws of motion; there exists a force of attraction between any two bodies having mass; the force of attraction decreases as the distance between the bodies increases; and increasing the mass of the bodies increases the force of attraction. Notice that I intentionally didn’t say directly and/or inversely proportional.
This kind of first principles provide foundation for expanding the BoK (Body of Knowledge) for each of the disciplines. The purpose of research in any discipline is adding more and more new first-principles and also adding more and more theoretical knowledge (by relying on the first-principles) such as new theories, concepts, methods and other facts for expanding the BoK for the prevailing paradigm of the discipline.
I want to find an answer to this question because software researchers insist that computer science is a branch of mathematics, and so they have been insisting that it is okay to blatantly violate scientific principles for acquiring scientific knowledge (i.e. knowledge that falls under the realm of science) that is essential for addressing technological problems of software, such as the software crisis and human-like computer intelligence.
If researchers of computer science insist that it is a branch of mathematics, I want to propose a compromise: the nature and properties of components for software, and the anatomy of CBE (component-based engineering) for software, were defined as axioms. Since the axioms are invalid, this resulted in an internally inconsistent paradigm for software engineering. I invented a new set of valid axioms by gaining valid scientific knowledge about components and CBE without violating scientific principles.
Even maths requires finding, testing, and replacing invalid axioms. I hope this compromise satisfies the computer scientists who insist that software is a branch of maths. It appears that software or computer science is a strange new kind of hybrid between science and maths, which I want to understand better (e.g. it may be useful for solving other problems such as human-like artificial intelligence).
Best Regards,
Raju Chiluvuri
Dear @Raju Chiluvuri
In my opinion, mathematics is the precursor to all the disciplines of science. And, in fact, mathematics is also a science.
Thanks!
Question
Is there an encyclopedia of all the branching mathematical axioms, together with various ways of proving different theorems based on those axioms?
Hello. Yours is a broad question, but I will try to give my contribution: NIST Digital Library of Mathematical Functions is a very rich source of information. Check it out: https://dlmf.nist.gov/
Question
I will be using the Axiom Genome-wide Human Origins 1 array. Is there a way to tell whether a sample was contaminated with DNA from another sample during the extraction stage (for example, excessive heterozygous calls)? Is there a way to eliminate alleles of the contaminant from the data? What is the minimum proportion of contaminating DNA that can be detected by most SNP arrays?
Hi,
It is best to take a look at the allele frequencies, because contamination would give low allelic frequencies.
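A rough version of the excess-heterozygosity screen mentioned in the question can be sketched as follows; the genotype coding and the 0.05 threshold are assumptions for illustration, not parameters of any actual array software:

```python
# A rough screen for sample contamination from biallelic SNP-array calls:
# cross-sample contamination inflates the heterozygous call rate.
# Genotype coding assumed here: 0 = AA, 1 = AB, 2 = BB, -1 = no call.
def het_rate(genotypes):
    called = [g for g in genotypes if g != -1]
    return sum(g == 1 for g in called) / len(called)

# Flag samples whose het rate sits far above the cohort's mean rate.
def flag_outliers(samples, threshold=0.05):
    rates = {name: het_rate(g) for name, g in samples.items()}
    mean = sum(rates.values()) / len(rates)
    return [n for n, r in rates.items() if r - mean > threshold]

samples = {
    "clean":   [0, 2, 1, 0, 2, 0, 1, 2],   # illustrative genotype vectors
    "suspect": [1, 1, 1, 1, 1, 0, 1, 1],
}
print(flag_outliers(samples))
```

In practice the threshold would be calibrated against the cohort's empirical heterozygosity distribution rather than fixed in advance.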
Question
This is axiomatic set theory. These axioms are needed for set theory and not for mathematics, so can we avoid them, since they involve the use of predicates and properties? Will experts guide in detail? Can their use be restricted by using a mapping rather than the notion of a property or predicate?
You mentioned in your question that "these axioms are needed for set theory and not for mathematics", but this is not a simple claim.
The separation and replacement axioms are needed for establishing many important results dear to mathematicians. For instance, separation is fundamentally necessary to prove the recursion theorem for natural numbers. Replacement, on the other hand, is necessary for establishing transfinite recursion.
(Note: one does not need separation, for it is provable using replacement.)
One can use functions directly in replacement axiom.
Replacement: if F is a function and A any set, then {F(x) | x in A} is a set.
However, the stronger version is provable using the other axioms of ZF and the weak version of replacement.
Strong replacement:
If F(x, y) is a property that behaves like a function, then for any set A, {F(x) | x in A} is a set.
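In first-order notation, writing φ for the property F above, the strong replacement schema reads:

```latex
% Replacement schema: if \varphi defines a class function,
% then its image on any set A is a set.
\forall x\,\forall y\,\forall y'\,
  \bigl(\varphi(x,y)\wedge\varphi(x,y')\rightarrow y = y'\bigr)
\;\rightarrow\;
\forall A\,\exists B\,\forall y\,
  \bigl(y\in B \leftrightarrow \exists x\,(x\in A \wedge \varphi(x,y))\bigr)
```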
********
In the chapter Constructible Sets of Set Theory: The Third Millennium Edition (Thomas Jech), Jech defines Gödel operations for building the constructible universe. This can be seen as a strategy for treating the generation of sets as operations. This may interest you.
About your question: I keep hearing that some subtheory of "hereditarily finite" set theory is OK that way, but I have only a fuzzy idea what it is and am too lazy to look it up ...
The axiomatization of hereditarily finite sets is known as the 'set-theoretic equivalent of Peano arithmetic'. These theories are very closely connected: they are bi-interpretable. The idea is: you remove the axiom of infinity, and say that every set is finite and that every member of each set is finite. In this theory, the axioms of separation and replacement become the equivalent of the axiom of induction in arithmetic.
Notably, it is known that ZF can provide a model construction for PA, and thus it can provide a model construction for this set theory. Using the completeness theorem for first-order logic, this means that ZF proves the consistency of this theory.
But this is even more general. ZF has a property called reflection. It means that ZF provides truth predicates for any set-sized part of itself. In particular, the class of hereditarily finite sets is a set in ZF. Therefore, ZF provides a truth predicate for it, i.e. ZF proves the consistency of hereditarily finite set theory.
Question
The answer must be yes; if all the principles of physics were known, then the mysteries of physics would have been answered, no? Or would they? Even with all the axioms of the natural numbers, there are mysteries in the counting numbers, according to Gödel. Does Gödel's result in mathematics and logic analogize to physics? How lost are we in physics?
>Does Gödel's result in mathematics and logic analogize to physics?
I would think it prevails over physics: any physics theory must first be logically self-consistent; only then may one ask whether it correctly maps reality or not.
However, Gödel's result leads us to think that we may run into trouble already when trying to create a complete, self-consistent logical system.
In fact, the situation is even worse: we have problems with self-consistency even in classical electromagnetism, e.g. as applied to the electron (as well described in the Feynman Lectures).
Question
In his Principia, in the Motte translation, Scholium at p. 77, he writes of time, in order to remove "certain prejudices": “Absolute, true and mathematical time, of itself, and from its own nature flows equably without regard to anything external, and by another name is called duration”.
In the Motte translation, p 506, Newton says: “... for whatever is not deduced from the phenomena is to be called an hypothesis; and hypotheses, whether metaphysical or physical, whether of occult qualities or mechanical, have no place in experimental philosophy."
Is it possible that he set aside the issue of time in order to work out the consequences of an absolute time axiom? That absolute time was for Newton a provisional hypothesis?
According to Aristotle, time is a register of motions: if there were no motion there would be no time; if time is eternal it is because motion is eternal, not the other way around; that which does not move, i.e. the first mover (and nothing else), is timeless, everything else exists 'in time' and if a thing existing in time is at rest it is only relatively to other things.
Question
Chalmers contemplated, in the Chinese room argument, both the connectionist and symbolic approaches in AI, as I have in the thread. Expanding upon the axiom 'Syntax is not sufficient for semantics', I suggest that, as presented in the diagram of the thread (attached here also), there is another error in Searle’s argument.
The neural network system drawn there is a complex distributed system, drawn to reflect accuracy in translating 1-gram rather than n-gram models; if taken as words, these would account for semantic interpretation (which would require another neural network).
I would like counterpoints which can refine the argument.
References
 Subsymbolic Computation and the Chinese Room by David J. Chalmers http://consc.net/papers/subsymbolic.pdf
Here is the attachment of the figure for easy reference
Question
How can we define biodiversity so that it can be measured and the extent of its degradation and the effectiveness of safeguarding measures can be known?
It seems to be complicated, if not impossible, to give an objective definition of biodiversity (animal, plant... in forests, oceans...) based on common sense axioms, other than simply counting the number of species present in a geographical area of interest. However, this definition is very restrictive since, for example, it does not take into account the number of individuals per species (a species may become endangered, without the criterion showing this). In addition, the interest of biodiversity also stems from the interactions between species, their complementarity for different functions in ecosystems.
What measures can be put in place? What is the evolution of concepts, ideas, literature on this subject?
Biodiversity is the variability among living organisms from all sources, including terrestrial, marine, and other aquatic ecosystems and the ecological complexes of which they are part; this includes diversity within species, between species, and of ecosystems. But the reality is that, to date, we have information about only about 3% of the total biodiversity of microorganisms.
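One standard way to go beyond a bare species count, as the question asks, is an abundance-weighted diversity index; a minimal sketch of the Shannon index (the example counts are illustrative):

```python
import math

# Shannon diversity index H = -sum(p_i * ln p_i): unlike a bare species
# count, it responds to how individuals are distributed across species.
def shannon_index(counts):
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in props)

# Two plots with the same species count (3) but different evenness:
even   = [100, 100, 100]   # H = ln 3, about 1.099
skewed = [298, 1, 1]       # two species nearly gone; H is about 0.045
print(shannon_index(even), shannon_index(skewed))
```

This captures exactly the failure mode mentioned in the question: a species can collapse toward extinction while the raw species count stays unchanged, yet H drops sharply.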
Question
The open mapping theorem is usually proved in most texts using Baire's category theorem, which depends upon the axiom of choice.
But if one studies differential calculus in Banach spaces, say as in Dieudonné's Foundations of Modern Analysis, the theorem is the first part of the inverse mapping theorem (as proved in Walter Rudin's classic Principles of Mathematical Analysis, whose proof carries over to the Banach space setting), since a continuous linear map is differentiable. This proof does not depend upon Baire's category theorem.
There is a long story of the dependence of Baire's theorem on AC (the axiom of choice). In fact, it goes back to the work of P. Bernays in 1942. Baire's theorem depends on a weaker form of AC, called the axiom of dependent choice (DC).
I am recommending a short Wikipedia article: https://en.wikipedia.org/wiki/Axiom_of_dependent_choice
To study these issues more in depth, start with the literature at the end of this article. About 99% of the mathematicians who use Baire's theorem, the Banach open mapping theorem, or the Banach-Steinhaus theorem really do not care whether the above depend on AC or DC. Nevertheless, connections with the foundations of mathematics are very important here.
Question
What is a geometry? What is required of a system of mathematical objects and propositions (axioms) so that they may be termed a geometry? What must the geometry reveal about the objects?
As an example, in a vector space with a bilinear form we can calculate the norm and the orientation.
Are there any conditions that a theory must satisfy in order to be called a geometry?
I apologize if my questions sound a bit unclear. I thank you all in advance for your help and cooperation.
Regards,
Zubair
Interesting and thought-provoking question. Many late nights in grad school, I am sure maybe with a few beers, ended up in discussions of similar questions. We know what algebra is. It is easy to define. Similarly for topology, the study of topological spaces invariant under homeomorphisms, i.e., a donut and a coffee cup with a one-loop handle are the same. Similarly for analysis (real and complex). But geometry causes some pause in coming up with a good definition.
The article by Sir Michael Atiyah pretty much hit it on the head. Geometry is more a way of perceiving and "visualizing" and mathematical intuition of a problem than a field. In modern mathematics often geometry is related to the ability to make "measurements" on underlying spaces and determine properties of elements on objects in various categories, e.g., length or angle between tangent vectors or parallel transport, curvature, etc., in the category of Riemannian manifolds, i.e., Riemannian geometry, where the underlying structure of the space supports such concepts as length, etc. For example there is no structure that support length in the category of Topological manifolds, you need a Riemannian metric to do so.
However, there is more to it than that. Today we are trying to teach machines to have "vision" so they can autonomously navigate in the world. That requires a basic understanding both of Euclidian geometry in three space along with projective geometry since light that hits the sensors arrives from a ray. This gave rise to a new field - computational commutative algebra and algebraic geometry.
So it seems that geometry is more of a concept that spreads across all of mathematics than a fundamental area of mathematics. For example, the geometry of attractors in a dynamical system led to things like the Smale horseshoe and fractals. There are three pillars of modern mathematics: topology, algebra, and analysis (real and complex). That is, any problem encountered requires proficiency in these areas to solve. That's why in most graduate schools the PhD qualifying exams require proficiency in those areas. However, there are "geometric" questions that arise in all those areas, with potentially the exception of algebra.
Question
I tend to consider mathematics the body of axioms, definitions, systems of logic, and their results. If you're not adding to this body, you're not doing mathematics.
Arithmetic, calculating, solving, and so on, just seem more like accounting than mathematics. If anything, I would call these things "calculus" after the original Latin root word, referring to a stone used for counting.
Maybe it's pedantic. But I don't think so. A lot of people who "hate math" really hate "calculus." And honestly, being good at mathematics, at constructing proofs, etc is so different from being good at working with numbers. I'm decent at the former. I'm horrible when it comes to the latter.
Ahmed Chahtou, among other things, it is frustrating that so many people believe that mathematics is all about numbers and calculations. That misconception is also unhealthy for the field.
Question
Dear Friends,
Isn’t it true that “paradigm” is one of the most useful and most used, or abused, terms in intellectual circles and discussions?
I have used the term often without fully comprehending its finer details. So I started searching for a comprehensive description in order to fully comprehend its meaning and gain deeper insight, but I could not reach that goal yet.
Hence, I decided to create one and share it here for debate and discussion, to improve my understanding by listening to different perspectives and gaining new insights. Let me briefly share my preliminary draft description and insights:
Question: What is a scientific or technological Paradigm?
Answer: A Paradigm is a complex perception of reality painted by a huge BoK (Body of Knowledge) comprising thousands of pieces of Knowledge such as individual observations, experiences, shared background axiomatic-assumptions, values, theories, postulates and prevailing climate of opinions or thought patterns of a very large community or group of persons subscribed to the paradigm.
A paradigm can become a deeply entrenched paradigm, only if it attracts a very large community or groups of practitioners and researchers for expanding the paradigm and they together accumulate a huge BoK by acquiring knowledge for decades or even centuries. Each piece of knowledge in the BoK for a deeply entrenched paradigm is consistent and/or congruent with all the other pieces of the knowledge in the BoK and overall perception of reality painted by the BoK.
The books and research publications for each discipline (e.g. botany, zoology, chemistry, virology, mycology, parasitology, and bacteriology, to name a few) comprise a huge BoK accumulated over decades, and this BoK paints a perception of reality, where that perception of reality is the “Paradigm”.
In other words, paradigm for a discipline is our understanding of the world or perception of reality painted by the BoK or Knowledge in text books and research papers. Every mature discipline must have a paradigm, which is nothing but a perception of reality painted by the BoK acquired and accumulated for the discipline.
Almost every discipline, including the soft sciences (e.g. sociology, political science, psychology, economics, or even each religion), has a BoK that paints a perception, which may be referred to as a paradigm. My understanding has a few gray or blurred patches, so I would like to hear other perspectives to improve clarity.
Our understanding of term “paradigm” can never be complete without knowing the state of Knowledge without a paradigm (e.g. during pre-paradigmatic state). The seminal and influential book “The Structure of Scientific Revolutions” By Thomas Kuhn (who coined the term “paradigm”) describes a period called pre-paradigmatic (or pre-science) state for each scientific discipline, when the scientific discipline is in its infancy (i.e. at the time of its inception).
During the pre-paradigmatic period, there exists a chaotic situation. There is a good summary of the chaotic state during the pre-paradigmatic (or pre-science) period for any discipline in this informative video, starting at 1 minute 16 seconds and lasting just two and a half minutes: https://www.youtube.com/watch?v=JQPsc55zsXA (the next video may also be interesting; it explains that creating a paradigm is essential to overcome such chaos: https://www.youtube.com/watch?v=sOGZEZ96ynI)
During the pre-paradigmatic (or pre-science) period it is very hard to acquire knowledge. So, a basic foundation for a paradigm is formed over time by accumulating various theories, axioms, and postulates that are created using reasoning and consensus, and by relying on background assumptions, observations, and the prevailing climate of opinion or thought patterns.
For example, the pre-paradigmatic (or pre-science) period for the basic sciences might be between the 4th century BC and the 1st century CE, during which many ancient philosophers (e.g. Plato, Aristotle, Pythagoras, and Archimedes) created the foundation for the first scientific paradigm. This unfortunately also comprised a flawed axiomatic assumption, or fallacy: that the Earth is static. Exposing the fallacy resulted in a scientific revolution.
Likewise, even modern scientific disciplines would have a pre-paradigmatic (or pre-science) period. For example, the pre-paradigmatic (or pre-science) period for computer science and software was approximately between the mid-1950s and the early 1970s.
For example, two NATO software engineering conferences, the first from 7th to 11th October 1968 and the second from 27th to 31st October 1969, defined (or coined) new terms such as “software engineering”, components, and assembling, where the conferences were attended by many influential thought leaders and researchers of computer science from almost all nations then engaged in computer science research.
Although they later became an integral part of our vocabulary, terms such as “software engineering” or “assembling” were perceived to be provocative or strange in 1968. There would be a period of transition from pre-science to normal science for such terms to become an integral part of our vocabulary.
Also, different groups may make the transition during different periods. It is also hard to know the exact duration of the transition, so my guess is that it happened between 1970 and 1975, but it certainly culminated in a paradigm before 1979.
A paradigm would slowly become more and more entrenched (1) as more and more pieces of knowledge are accumulated and added to the BoK (Body of Knowledge), and (2) as more and more practitioners and researchers become subscribers to the paradigm. I think, software paradigm also has fallacies injected during pre-science period. Exposing those fallacies should result in a revolution.
According to the book “The Structure of Scientific Revolutions” by Thomas Kuhn (who coined the term “paradigm”): if any new piece of knowledge or fact is proposed or discovered for a deeply entrenched or dominant paradigm, it will face fierce resistance from the practitioners of the paradigm, and they will try their best to suppress the new piece of knowledge (e.g. even by resorting to attacks) if it is not congruent with, but rather contradicts or is inconsistent with, the perception of reality painted by the BoK for the dominant paradigm.
Normal science solves puzzles that are posed by the prevailing paradigm but does not challenge the paradigm's basic axiomatic tenets or postulates. But in fact, "normal science" will suppress novelties which undermine its foundations (i.e. the basic axiomatic tenets or postulates). If the fundamental axiomatic postulates are fallacies and exposing the fallacies would result in a revolution.
Best Regards,
Raju Chiluvuri
In science and philosophy, a paradigm (/ˈpærədaɪm/) is a distinct set of concepts or thought patterns, including theories, research methods, postulates, and standards for what constitutes legitimate contributions to a field.
Question
The non-measurable set is formed by selecting one element from each equivalence class obtained from the relation x ~ y iff x-y is rational.
But we suggest not accepting this form of the axiom of choice applied to arbitrary collections of sets.
To form an arbitrary Cartesian product one needs an indexed family of sets; an indexing set is necessary.
In the above case it seems that the collection is not indexed: no indexing set and no explicit indexing map is given.
So we cannot form a non-measurable set, and thus the Banach–Tarski paradox is absent.
We do not work with arbitrary collections, but only with indexed families and explicit indexing maps.
How is the paradox to be avoided in measure theory?
The case of the plane, where the axiom of choice is not needed, is also open.
Let the indexing set be the set of equivalence classes for that equivalence relation, R/~. Then, if C is such an equivalence class, define A_C = C. This gives an indexed family {A_C} for C in R/~. An element of the product will then select one element from each class.
The point is that the axiom asserting that products of indexed families of non-empty sets are non-empty is equivalent to the usual axiom of choice.
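The indexing trick described above can be made concrete in a finite sketch (a toy analogue, not the real-number case: the set, the relation, and all names here are illustrative), where the family is indexed by the equivalence classes themselves and no choice axiom is needed:

```python
from itertools import product

# Finite analogue: partition a finite set into equivalence classes and
# index the family *by the classes themselves*, as suggested above.
S = range(12)
classes = {}
for x in S:
    classes.setdefault(x % 4, set()).add(x)   # x ~ y  iff  x - y ≡ 0 (mod 4)

# The indexed family {A_C}: keys are the classes (frozen, so hashable).
family = {frozenset(C): sorted(C) for C in classes.values()}

# For finite families no choice axiom is needed: the Cartesian product is
# non-empty, and each of its elements is a choice function.
keys = list(family)
some_choice = dict(zip(keys, next(iter(product(*(family[k] for k in keys))))))

# Each class receives exactly one representative drawn from itself.
assert all(rep in C for C, rep in some_choice.items())
```

In the finite case the product is provably non-empty in ZF alone; it is only for arbitrary infinite families that the non-emptiness of the product is equivalent to the axiom of choice.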
Question
By studying in steps what a flat plane is, this paper shows that only six axioms are necessary for 2-dimensional Euclidean geometry up to the Pythagorean Theorem: 1) the existence of stable space; 2) the existence of a straight line through any two points; 3) the existence of distance measurement between any two points; 4) the limitation of the space to 2 dimensions; 5) repeated equivalence; and 6) reflected equivalence.
I also use this to teach kids about math logic and math observation
This is the beginning of a worthwhile discussion.
Perhaps the followers of this thread would be interested in tackling variations of the Euclidean axioms from a physical space perspective. I suggest this because a view of the Euclidean axioms from a physical (tiny region, thick line) perspective rather than the abstract (point, line) perspective is more intuitive and more palatable.
Shape Descriptions and Classes of Shapes. A Proximal Physical Geometry Approach
Proximal Physical Geometry
and
Two Forms of Proximal Physical Geometry (arXiv:1608.06208v4)
Question
Dear all, I am searching for a comparison of CTT with IRT. Unfortunately, I mostly get an outline of both theories, but they are not compared. Furthermore, generally the "old" interpretation of CTT using axioms is used and not the correct interpretation provided by Zimmerman (1975) and Steyer (1989). The only comparison of both theories that I found was in Tenko Raykov's book "Introduction to Psychometric Theory". Does anybody know of any other sources?
Kind regards, Karin
Greg, thanks for your detailed reply. I know that it is always described in text books, but the equation X = T + E is not a model. Therefore, I do not agree with you concerning this point. Raykov (2011, p. 121) explains this quite clearly:
“In a considerable part of the literature dealing with test theory (in particular, on other approaches to test theory), the version of Equation (5.1) [X = T + E] for a given test is occasionally incorrectly referred to as the ‘CTT model’. There is, however, no CTT model (cf. Steyer & Eid, 2001; Zimmerman, 1975). In fact, for a given test, the CTT decomposition (5.1) of observed score as the sum of true score and error score can always be made (of course as long as the underlying mathematical expectation of observed score—to yield the true score—exists, as mentioned above). Hence, Equation (5.1) is always true. Logically and scientifically, any model is a set of assumptions that is made about certain objects (scores here). These assumptions must, however, be falsifiable in order to speak of a model. In circumstances where no falsifiable assumptions are made, there is also no model present. Therefore, one can speak of a model only when a set of assumptions is made that can in principle be wrong (but need not be so in an empirical setting). Because Equation (5.1) is always true, however, it cannot be disconfirmed or falsified. For this reason, Equation (5.1) is not an assumption but rather a tautology. Therefore, Equation (5.1)—which is frequently incorrectly referred to in the literature as ‘CTT model’—cannot in fact represent a model. Hence, contrary to statements made in many other sources, CTT is not based on a model, and in actual fact, as mentioned earlier, there is no CTT model.”
But there are models developed within the framework of CTT. If one posits assumptions about true scores and errors for a given set of observed measures (items), which assumptions can be falsified, then one obtains models. This is closely related to confirmatory factor analysis, because the CTT-based models can be tested using CFA. If one assumes unidimensionality and uncorrelated errors, this would be a model of tau-congeneric variables, because these assumptions can be tested.
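Raykov's point, that X = T + E is a tautology rather than a falsifiable model, can be illustrated with a small simulation (a sketch with made-up numbers, not drawn from the cited sources): if the true score is defined as the per-person expectation of the observed score, the decomposition holds exactly by construction and cannot fail.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 5 persons x 10,000 replications of an observed score X.
# T is defined *from* X as the per-person expectation, E is the remainder,
# so X = T + E holds by construction -- a tautology, not an assumption.
n_persons, n_reps = 5, 10_000
X = rng.normal(loc=rng.uniform(10, 20, size=(n_persons, 1)),
               scale=2.0, size=(n_persons, n_reps))

T = X.mean(axis=1, keepdims=True)    # empirical true score per person
E = X - T                            # error score: whatever is left over

# The decomposition is exact, and T and E are uncorrelated by construction.
assert np.allclose(X, T + E)
assert abs(np.mean(T * E)) < 1e-8
```

A testable model only appears once falsifiable restrictions are added, e.g. the tau-congeneric assumptions (unidimensionality, uncorrelated errors) mentioned above, which can be checked with CFA.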
Raykov, T. & Marcoulides, G.A. (2011). Introduction to Psychometric Theory. New York, NY: Routledge.
Steyer, R. & Eid, M. (2001). Messen und Testen (Measurement and Testing). Heidelberg: Springer.
Zimmerman, D. W. (1975). Probability spaces, Hilbert spaces, and the axioms of test theory. Psychometrika, 40, 395-412.
Kind regards,
Karin
Question
Some people consider mathematical axioms to be points of strength of mathematics; others say that axioms are points of weakness.
My question is: why? Is it so difficult to prove mathematical axioms?
An axiomatic system includes primitive statements that are postulated as facts and called axioms. Obviously, all axioms in the same system should be independent and consistent.
Talking about proving axioms is meaningless, unless we suppose that the axiom under consideration is not independent of the other axioms, in which case one can try to demonstrate that claim.
A famous example is the well-known set of geometric axioms of Euclid, where we have five axioms:
Quoting:
1. A line can be drawn from a point to any other point.
2. A finite line can be extended indefinitely.
3. A circle can be drawn, given a center and a radius.
4. All right angles are equal to one another.
5. If a line intersects two other lines such that the sum of the interior angles on one side of the intersecting line is less than the sum of two right angles, then the lines meet on that side and not on the other side. (also known as the Parallel Postulate).
In fact, the last axiom is equivalent to saying that the sum of the interior angles of any triangle is π (180 degrees).
Hundreds of mathematicians claimed that the fifth axiom is not independent and can be deduced from the first four axioms.
But all failed to prove that claim.
Those attempts were the foundation stone for the creation of new geometries, such as elliptic geometry, hyperbolic geometry, etc., and hundreds of non-Euclidean geometries in which the sum of the angles of a triangle is greater than π in some geometries and less than π in others.
Nowadays, facts and observations about the surrounding universe suggest that our universe is non-Euclidean. So trying to prove axioms may lead to new creative ideas that change the whole axiomatic system into a new, more efficient model. Why not?
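The non-Euclidean angle sums mentioned above can be checked numerically (a sketch using an octant triangle on the unit sphere, chosen here only for convenience): in elliptic-type geometry the interior angles of a triangle sum to more than π.

```python
import numpy as np

def angle_at(A, B, C):
    """Interior angle at vertex A of the spherical triangle ABC: the angle
    between the great-circle arcs A->B and A->C, computed from the tangent
    vectors of those arcs at A."""
    tb = B - np.dot(A, B) * A      # component of B orthogonal to A
    tc = C - np.dot(A, C) * A
    tb /= np.linalg.norm(tb)
    tc /= np.linalg.norm(tc)
    return np.arccos(np.clip(np.dot(tb, tc), -1.0, 1.0))

# Octant triangle: vertices on the three coordinate axes of the unit sphere.
A, B, C = np.eye(3)

total = angle_at(A, B, C) + angle_at(B, A, C) + angle_at(C, A, B)
assert np.isclose(total, 1.5 * np.pi)   # 270 degrees, exceeding 180
```

The excess over π is proportional to the triangle's area, which is why angle sums distinguish elliptic, Euclidean, and hyperbolic geometries.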
Best regards
Question
The well-known Zermelo theorem states that every set can be well-ordered. Since an arbitrary well-ordering is a linear ordering, the following corollary follows from this theorem:
(A) An arbitrary set can be linearly ordered.
It is well known that Zermelo's theorem is equivalent to the axiom of choice.
Question: Can Corollary (A) be proved without the axiom of choice?
If you can make sense of the following thread:
(I'm having a hard time with it, but thanks for the question; it had me review a pile of long-forgotten concepts!)
Question
The axiom of choice is debatable: it leads to paradoxes such as the well-ordering theorem, which is intuitively false.
Increasingly, people working in fields like constructive analysis or computer science tend to believe it is false.
It is mostly used for results about a whole class of objects; if a particular instance is given, the result can be proved without this axiom.
Some results, like "every field has an algebraic closure", are strictly unnecessary:
one can take a field and a specific polynomial and construct its splitting field, so Galois theory can still be done.
Yes, the Tychonoff theorem will be false, and we had better live with this fact.
The existence of a complete orthonormal set will no longer hold; but when one computes Fourier coefficients, all but countably many are zero.
Will the Hahn–Banach theorem still hold for separable spaces?
In any case, we can do mathematics mostly under a separability assumption.
Why carry an axiom one of whose consequences, the well-ordering theorem, has to be false, and another of whose consequences leads to the Banach–Tarski paradox?
The axiom only simplifies reasoning, in that we can assume maximal ideals exist, or that the dual space of a Banach space is non-trivial, etc.
With the exception of the Tychonoff theorem, we really use the axiom only as a convenient blanket for a class of objects.
Maybe we need to add extra assumptions to theorems, but that is better than carrying a wrong axiom.
I used to believe the axiom, in the sense that since a product of three sets has more elements than a product of two sets, an arbitrary Cartesian product of non-empty sets must be non-empty.
But while the product of two sets is defined using the notion of an ordered pair, the product of more sets is defined using the notion of a mapping, which itself depends on the notion of the product of two sets. Herein lies the point which changed my intuition.
The theorem that every vector space has a Hamel basis is a useless theorem: there is no use for a Hamel basis of an infinite-dimensional space, and it has to be uncountable. Equally dispensable is the theorem that every field has an algebraic closure: in a specific instance one constructs the splitting field of a given polynomial, and C is an algebraically closed field containing R. "A complete orthonormal set exists" is not much needed, since all but countably many Fourier coefficients must vanish. "Maximal ideals exist" is a theorem one cannot resist, and one will be sad to discard the Tychonoff theorem with all its consequences for functional analysis; but one still has the dual of C(a,b), and l_q is the dual of l_p. So is AC really needed? A good use of the countable axiom of choice is to produce, for a cluster point, a sequence converging to it. Why carry an axiom whose consequence, the well-ordering theorem, is counterintuitive and non-constructive and so should be treated as false? Yes, I was an intuitive believer in AC at one time, but as I have explained in my question, an arbitrary Cartesian product is defined not using tuples but mappings, and so my intuition got corrected. We do not want the Banach–Tarski paradox either; and even if the axiom of choice is true in the form stated in my other question (an arbitrary product of an indexed family of non-empty sets is non-empty), one cannot form a non-measurable set unless we have an explicit indexing of the equivalence classes of the partition.
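The distinction drawn above, the product of two sets as a set of ordered pairs versus a general product as a set of mappings from the index set, can be made concrete in a finite sketch (toy sets; no choice axiom is involved in the finite case):

```python
from itertools import product

A, B, C = {1, 2}, {'x', 'y'}, {True, False}

# Definition via ordered pairs: A x B as a set of 2-tuples.
pairs = {(a, b) for a in A for b in B}
assert len(pairs) == len(A) * len(B)

# Definition via mappings: the product of an indexed family {S_i} is the
# set of functions f with f(i) in S_i for every index i.  Here each
# function is represented as a dict keyed by the index set {0, 1, 2}.
family = {0: A, 1: B, 2: C}
mappings = [dict(zip(family, values))
            for values in product(*family.values())]

assert len(mappings) == len(A) * len(B) * len(C)
assert all(f[i] in family[i] for f in mappings for i in family)
```

For infinitely many factors only the mapping definition is available, which is exactly where the intuition carried over from ordered pairs breaks down.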
Question
While going through a paper for review, I found that the authors claim that any finite set can be ordered using Zorn's Lemma and the Axiom of Choice.
I am unable to see how this can be concluded. Can someone enlighten me in this connection?
Typo?
In ZF, any finite set with n elements can be counted in n! ways.
Therefore it can be totally ordered in n! ways.
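The finite case needs no choice principle at all; the counting claim above can be sketched directly (with an arbitrary 4-element example set):

```python
from itertools import permutations
from math import factorial

S = {'a', 'b', 'c', 'd'}           # a finite set with n = 4 elements
orderings = list(permutations(S))  # every total (linear) ordering of S

# There are exactly n! total orderings, each listing every element once.
assert len(orderings) == factorial(len(S))
assert all(set(p) == S for p in orderings)
```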
Question
It is suggested that bureaucratic phenomena may be visualized and rationalized by relating administrative structures and processes metaphorically to structural equivalents in nature. Examples: Max Weber's "iron cage" vs. the Faraday cage and cage rearing; Ohm's law to describe hierarchical stress as the product of management power and staff resistance; organizational deficiencies vs. lattice defects in solids and metastases in oncology; order/disorder/chaos vs. the 2nd law of thermodynamics. Physical formulae and axioms are much more concise than social science prose. They remind us of the famous laws of C. Northcote Parkinson and Laurence J. Peter.
Of course, the physics underlying our biology and our environmental contexts are fundamental for understanding life. As biological organisms interacting with our environments, we form part of a complex and dynamic system from which emerge our cognition, behavior, and extended networks of shared symbolic associations which we project onto ourselves, other people, and the rest of the environment. When you take it down to the level of the cell, the application of physical laws doesn't require metaphorical associations. At any rate, thinking metaphorically is inevitable, as understanding one thing in terms of another is a fundamental aspect of human cognition. If we step back and try to see the big picture, the dynamics of social behavior in an institutional setting should come into focus.
Question
To this day, cladograms are assumed as mathematical postulates that explain the evolution of living beings; they are said to be the least speculative reasoning, and so are assumed as an axiom. I am writing a book and I need to explain this with concrete demonstrations, as is done in physics.
Thank you very much friend
Question
As far as I could find, partial fields were first introduced in 1996, but they still play an important role when it comes to matroid representation. I lately read a paper about skew partial fields and matroid representation over them, which generalizes the representability of matroids over any skew field. I'd like to know if there are any theories besides matroid theory that are related to partial fields, if somebody can give me any clue.
I haven't seen the term partial field show up at all, other than the definition you have already presented.
If you are familiar with the notion of an abstract Witt ring, which was introduced in the early 80s, then you have seen a way in which a partial field appears naturally. An ordered triple (G,Q,q), called a quaternionic structure, consists of a group G which additively generates a ring, a set Q, and a map q from the Cartesian product of G with itself surjectively onto Q. This, with some effort, produces a ring with G as its multiplicative group of units. If you are interested in such things from a historical standpoint, then you should read "Abstract Witt Rings" by Murray Marshall. If you'd like to see some combinatorial results involving quaternionic structures and the Witt ring, then you should consider reading my thesis.
Question
Dunn has proved that for an inconsistent field with (a) a pair of classically distinct real numbers x,y identified, x=y, and (b) the resulting theory closed under the laws of classical fields, we have that every real number is provably identical to every other: (∀r,s)(r=s). The proof is simple: from x=y we have 0=(y-x); both sides can then be multiplied by any factor we like (using the classical inverse of y-x) to get 0=r and 0=s for any r,s. Hence by Leibniz's Law r=s. This is avoidable only by restricting functional substitution. How then is it prevented in the present construction? Especially in light of the (unproved) result of the preservation of function values P8? Is it because the only inconsistency contemplated in this construction is simply adding ~(x=y) for some classically identical x=y? If so, doesn't this restrict its usefulness? There are of course inconsistent constructions which prevent Dunn's argument, such as Mortensen's.
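The collapse argument summarized above can be written out step by step (a sketch assuming y - x retains its classical inverse in the closed theory):

```latex
\begin{align*}
  x = y
    &\;\Longrightarrow\; 0 = y - x
       && \text{subtract } x \\
    &\;\Longrightarrow\; 0 \cdot r(y-x)^{-1} = (y-x)\, r\, (y-x)^{-1}
       && \text{multiply both sides} \\
    &\;\Longrightarrow\; 0 = r
       && \text{field laws, for arbitrary } r \\
    &\;\Longrightarrow\; r = s
       && \text{since } r = 0 = s \text{, by Leibniz's Law}
\end{align*}
```

Blocking the argument thus requires refusing either the inverse of y - x or unrestricted functional substitution.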
Inconsistent Mathematics
C.E. Mortensen
Sounds like it has relations to the Archimedean axioms....
Question
In the standard proof of the Hilbert projection theorem the axiom of countable choice (denoted by CC) is used. I wonder whether there is a model of ZF + the negation of CC in which the Hilbert projection theorem fails for Hilbert spaces that are not finite-dimensional. Perhaps there are experts in both functional analysis and set theory who can answer my question easily. I would be grateful for their hint.
There is a close interplay between the foundational theorems of functional analysis and some form of the axiom of choice or its equivalents, e.g. Zorn's Lemma. This led to a "constructive" school in the 1970s that sought to establish the results of functional analysis (Hahn–Banach, Open Mapping, etc.) without use of the axiom of choice in any form.
I was discussing this topic the other day, and when I went poking around to see the status of this approach I found the following reference, which might bear on your question.
Question
In neutrosophic sets all three measures (truth, falsehood, indeterminacy) are independent; how does one affect another in decision making? For example, in intuitionistic fuzzy sets, if the membership of an element increases, then the sum of the other two measures (non-membership and hesitation) will certainly decrease.
This is quite interesting, RK Mohanty; I was wondering if there is a particular paper you can mention that concerns this issue. I immediately noticed that, as soon as one tries to build a probabilistic model involving only 3 outcomes, based on some parameter c on [0,1] (the unit interval) that maps to a 3-tuple of numbers <x,y,z> with x,y,z in [0,1], i.e. f: c -> <x,y,z>, where P(A)+P(B)+P(C)=1, one wants that for any value of c, A occurs if
x < P(A), B occurs if y < P(B), and C occurs if z < P(C), where A, B, C are mutually exhaustive and disjoint, which requires that
(A) at most one outcome occurs (mutual exclusivity),
(B) one outcome does indeed occur (exhaustiveness),
(C) the mechanism is somewhat uniformly distributed,
(D) all values of x,y,z in [0,1] are attained for some value of c (surjectivity),
(E) there is no swapping of the values x,y,z for any specific c depending on which outcome occurs, and no swapping of the outcomes' geometric locations on the circumference of a circle (where the c value denotes a position on said circumference),
(F) there is no hard-coding of a whole lot of tuples f(c) = <1,0,0>, with or without swapping,
(G) any outcome is possible for any c value, depending on the probability value of that outcome, except perhaps for the very few tuples <1,0,0>, <0,0,1>, <0,1,0>,
(H) the map is defined in a probability-value-independent fashion,
(I) and it is otherwise non-ad-hoc: it does not have, for every (or at least a great number of) tuples <x,y,z>, a 1 or a 0 for one of x,y,z while the others are not, with or without swapping of values or of geometric location depending on the outcome and (in some indirect sense) on the probability values or inequalities.
In fact, other than a very few ad-hoc sets of values (and these generally involve some swapping depending on which outcome occurs, and sometimes at least a single one or zero),
I do not think one can even find a single 3-tuple of numbers <x,y,z> that works for all probability values and does not involve a 1 or 0 for any of x,y,z, in the same sense as in the two-outcome case.
That is, the two-outcome case works perfectly well, except for the somewhat trivial case where P(A)=x, P(B)=y exactly; there f(c) = <c, 1-c> works regardless of the probability values and satisfies the above constraints: A occurs if P(A) > c, B occurs if P(B) > 1-c, c ranges over all numbers in the unit interval, and P(A)+P(B)=1. One cannot get anywhere near this with three variables.
I do not think one can find, for three outcomes, even a single 3-tuple of numbers satisfying the above constraints for all values that P(A), P(B), and P(C) could take, at least up to the case where P(A)=x, P(B)=y, P(C)=z with none of x,y,z equal to 1 or 0, while satisfying:
(A) mutual exclusivity,
(B) mutual exhaustiveness,
(G) sensitivity: some probability values for A, B, C allow that A and only A occurs, a distinct set of probability values for B ensures that B and only B could occur, and likewise for C,
(F) non-triviality: no hard-coding of even a single one or zero into the numbers x,y,z, i.e. x,y,z ≠ 0 and x,y,z ≠ 1,
(E) no outcome-dependent or value-dependent swapping of x,y,z, of functions, or of outcome locations on the circle,
(H) probability-value invariance of the set of numbers, and likewise no swapping dependent on probability values or inequalities.
Unless, that is, one puts in certain zeros or ones, or has weird values for disjunctions, or swaps the values depending on which outcome occurs or does not occur (whether or not said swapping involves ones or zeros; and even then, that makes it probability-value dependent), so as to ensure that one and only one outcome occurs while allowing that it could be any of the three outcomes depending on the probability values.
At least not in the same sense that it works for two outcomes (i.e., other than the case where P(A)=x, P(B)=1-x).
For three outcomes one can sometimes get a mechanism that ensures mutual exclusivity but not exhaustiveness, or exhaustiveness but not exclusivity, but generally not both, even with swapping; to make it work perfectly generally requires at least one 1 or 0 in said swapping, and swapping is in itself a probability-value dependency. And with more than three outcomes, one has to ask which value is held fixed and which one swaps, else there is not even a leg to stand on.
If these x+y+z must, in any particular case or in all cases, sum to some specific number, it is clear that it is greater than 1 and smaller than or equal to 2, generally around sqrt(2)/2; but even that hardly works, even if they vary.
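The two-outcome construction f(c) = <c, 1-c> invoked above can be checked by simulation (a sketch; the value of `p_a` is an arbitrary assumption for illustration):

```python
import random

random.seed(1)

def outcome(c, p_a):
    """Two-outcome rule from the discussion: f(c) = (c, 1 - c).
    A occurs iff c < P(A); B occurs iff 1 - c < P(B) = 1 - P(A),
    i.e. iff c > P(A).  Exclusive and exhaustive except when c == P(A)."""
    occurs_a = c < p_a
    occurs_b = (1 - c) < (1 - p_a)
    return occurs_a, occurs_b

p_a = 0.3            # assumed example probability for outcome A
trials = [outcome(random.random(), p_a) for _ in range(100_000)]

# Exactly one outcome per trial (the boundary c == p_a has probability zero).
assert all(a != b for a, b in trials)

# The frequency of A approximates P(A), for any choice of p_a.
freq_a = sum(a for a, _ in trials) / len(trials)
assert abs(freq_a - p_a) < 0.01
```

The same construction demonstrably cannot be extended to a third coordinate without the ones, zeros, or outcome-dependent swapping criticized above, which is the author's point.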
Question
According to the definition of equality between functions, f: R->R (f(x)=2 for all x)
is not equal to g: R->N (g(x)=2 for all x). But they are equal according to the axiom of extensionality (they are sets, and this axiom must hold for them).
So we have two definitions of "equality" here that are not logically equivalent. This is indigestible for me.
@Amir
What is indigestible to you is in fact caused by a widespread sloppiness in mathematical textbooks. Arguing only within pure mathematics and set theory (as opposed to computer science and type theory), there are actually two definitions of the concept of a function:
1. A set f of ordered pairs (x,y) such that (x,y) \in f and (x,y') \in f implies y=y'.
Such a set f determines sets dom(f) and range(f) as:
dom(f) := {x | there exists y such that (x,y) \in f }
range(f) := { y | there exists x such that (x,y) \in f }
2. A 3-tuple (X,Y,f) of sets such that f is a function in the sense of 1, with the additional requirement that dom(f) = X and range(f) \subseteq Y.
(I assume that you know or will lookup the LaTex notation for the basic symbols of set theory)
If one works with the second definition (as most of my analysis books do), one has, together with a function f in the sense of 1, the set Y. This set is not determined by f, since it is only required to comprise range(f). Of course, X is determined by f as dom(f). Y, although not determined by f, is often said to be the co-domain of f.
Since both definitions work by specifying certain types of sets they imply a definition of equality for functions by the definition of equality in sets (that is by extensionality).
In case of the definition 2. two functions (X,Y, f ) and (X',Y', f' ) are equal iff
X=X' and Y=Y' and f=f' (everything understood as equality of sets).
Perhaps it helps to make all these notions  crystal clear if one applies them to your functions f and g:
f = (R,R,h),  g=(R,N,h), h:={(x,2)| x \in R}
Obviously dom(h) = R, range(h) = {2},  co-dom(f) = R, co-dom(g) = N
Of course, f and g are different since they have different co-domains. The notation in your example already makes clear that the objects under consideration are functions in sense 2.
In several places I found the proposal  to speak of functions in case 1 and of mappings in case 2.  This is perfectly reasonable and would solve the problem.
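The two senses can be mimicked in a small sketch, representing a sense-1 function as its graph (a set of pairs) and a sense-2 mapping as a triple with an explicit co-domain (finite stand-ins replace R and N here, purely for illustration):

```python
# Sense 1: a function is just its graph, a set of (x, y) pairs.
X = range(5)                         # finite stand-in for the domain R
h = frozenset((x, 2) for x in X)     # graph of the constant function 2

# Sense 2: a mapping is a triple (domain, co-domain, graph).
f = (frozenset(X), 'R', h)           # co-domain tag stands in for the set R
g = (frozenset(X), 'N', h)           # same graph, different co-domain

# By extensionality on graphs alone, f and g have the *same* function...
assert f[2] == g[2]

# ...but as triples (mappings) they are different objects.
assert f != g
```

This is exactly the asker's puzzle: extensionality compares the graphs, while the textbook definition compares the triples.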
Question
(C1) to (C5) axioms can be found in below paper.
Sarma, I. Ramabhadra, et al. "Convergence axioms on dislocated symmetric spaces." Abstract and Applied Analysis. Hindawi Publishing Corporation, 2014.
In the same paper, please refer to Example 10. It is proved there that C5 holds and C2 holds, but C1 does not hold.
Question
Dear colleagues,
I managed to solve the problem of which fingerprint scanner is optimal for acquiring minutiae data from a newborn baby's fingerprint.
The results of the qualitative research are quite interesting, and it is possible to achieve this with 100% accuracy.
In my case this confirmed the biometric fingerprint axiom that fingerprint minutiae are formed prenatally, from the 7th month.
We felt proud after finishing this task, since we provide enough factual data for patenting a device that will guarantee the ID and maternity of every newborn baby with 100% certainty and eliminate human error.
Has anyone done similar research?
Thank you.
Proud!
Question
In 1999 I published a book, Axiomatic Theory of Economics
Since then I have found that economists who have not read even the simplified exposition will invoke the name Kurt Gödel when dismissing my theory.
I know who Gödel is, but I do not see what the foundations of mathematics have to do with me.  I rely only on widely accepted calculus and real analysis results that should be familiar to any practicing engineer.  The antipathy I get from economists has nothing to do with number theory – most of them would be hard pressed to even define a prime – it is all about me stating my assumptions clearly before proving my theorems.
So my question is:
How should I respond to people who invoke Gödel’s name when dismissing my work?
I am reminded of Van Helsing holding up a cross to Dracula, except for economists it is Gödel’s Incompleteness Theorems that ward off the evil logician.
Have other people at Research Gate faced similar criticism?  How did you respond?
FYI  I am NOT a follower of Gerard Debreu.  I have my own theory.  Something else that I have noticed about economists is that they are incapable of recognizing that it is possible to have more than one axiomatic theory that purports to describe the same phenomena.  I have found it impossible to disabuse economists of the belief that Debreu (who was parroting Bourbaki) fully defines the axiomatic method.
Economists claim that the practice of deductive logic rises or falls with the fortunes of this one man, regardless of what axioms the practitioner is using.  I reply that, since Debreu lost all of his followers in 1974 when his theory went down in flames, accusing me (who was eight years old at the time) of having ever been a follower is actually a straw man attack.
I agree with many of the above comments to the effect that Gödel is just a red herring in this context. A supplementary consideration might be to invoke a notion I would assume economists are familiar with, namely "satisficing". An axiomatic system need not attempt to capture everything, not even many things that would be easily capturable if one cared enough to capture them. One may simply want a system that is good enough for the purposes at hand. And such a pragmatic approach can itself be justified on economic grounds.
Question
Recently, some researchers have been working on the infinite structure of matroids.
The independence axioms of matroids play a primary role in the definition of codes over GF(q), as can be seen with representable matroids. Now, can we give a logical definition of this notion for the codes related to infinite matroids?
Hi Hossein.
Section 2.6 seems to touch on the issue of infinite matroids being representable over a field.
If that field is GF(q), one is close to your question, I assume.
Question
Given an algebraic system based on polarities on the sphere: a pair of opposite points and their equator constitute a basic element. (Add the equator to Riemann's unification of opposite points in elliptic geometry.) Two elements determine a resulting element of the same set. This is a partial binary operation with two axioms: ab = ba; (ab)(ac) = a. I call any set of this type a projective sphere. (Cf. Baer's finite projective planes and Devidé's plane pre-projective geometries.) From these axioms a number of important properties can be deduced. For example, if the set has at least two elements a and b, then xx cannot be properly defined for the whole set, because (ab)(ab) = a while (ba)(ba) = b, and ab = ba would then force a = b, a contradiction. This means that in the general case xx must remain undefined, as with division by zero in fields. However, if a smooth curve is given on the sphere, or an oval in a finite set, then the operation xx CAN be partially defined for the elements of the curve or of the oval, as the tangent at the given point. Example: given the oval of the four reflexive (self-conjugate) elements in a 13-element finite sphere, the derivative consists of the same four elements. Another example: given the basic elements on the sphere with homogeneous coordinates, take the circle with center (1,0,0) and radius pi/4, given by elements (1,√(1-c^2),c); its derivative is the curve given by elements (-1,√(1-c^2),c). In this interpretation, the derivative does not represent the number indicating the slope of a straight line, but a set of geometric objects of the same type as those out of which the original curve is made. Also, this means that every smooth curve evokes a geometry of its own, defined specifically for the given curve.
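The impossibility of defining xx globally follows in a few steps from the two axioms (setting c := b in the second axiom):

```latex
\begin{align*}
  (ab)(ab) &= a  && \text{by } (ab)(ac) = a \text{ with } c := b,\\
  (ba)(ba) &= b  && \text{by the same axiom with } a \text{ and } b \text{ swapped},\\
  ab &= ba       && \text{commutativity},\\
  \Rightarrow\quad a &= (ab)(ab) = (ba)(ba) = b.
\end{align*}
```

So any two elements would coincide; hence on a sphere with at least two elements, xx must stay undefined in general, and can only be defined partially (e.g. along a curve, as the tangent).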
Dear Geng, thank you for your empathic words and wishes. I am stubborn and ignorant enough to keep on my way (but look forward to any comment or correction), and leave to posterity to decide if there's a method in the madness.
Dear Anatolij, at the moment I try to focus on curves on the sphere and their tangents. I want to determine the homogeneous coordinates of a tangent, and to express some of its properties in the language of the algebraic system above. On the first picture attached there is a bigger yellow circle of radius 60° with tangents and a smaller yellow circle of 30°, its polar circle, which represents the derivative of the first circle. The green circle in the middle is of radius 45°. It coincides with its derivative, but its elements are not self-conjugated. I added the respective expressions for the circles in homogeneous coordinates. For the second picture, I begin with a definition: In this algebraic system, elements a and b are conjugated if at least one element x exists for which ax=b. It can be proved that in this case element y also exists for which by=a (symmetric relation), and there is more than one x or y with this property. The expressions on the second picture describe the property of polar reciprocity (if a pole is conjugated to another polar, then the other pole is conjugated to the other polar): If there is x for which (ab)x=(cc)(dd), then there is y for which (cd)y=(aa)(bb). Here aa, bb, cc, dd represent the tangents to elements a, b, c, d, resp.. If we define spherical ellipse with constant spherical sums from the foci (ellipse is the same as hyperbola, parabola is a special case), then I think that the same statement is valid for ellipses. Can this statement be generalized for other spherical curves or at least certain arcs of these? Another question: Can we replace the definition of aa as tangent to element a with another: a*a* is the normal line to element a? Can we save some form of polar reciprocity in this case? For circles we get degenerate case with all normals being concurrent in the center; but what happens with non-circle ellipses?
Question
What is needed is only scalar measurability, that is, measurability of |f|, for forming the L1 and L2 spaces.
Since issues of measurability depend upon the axiom of choice, it is advisable to avoid these issues as far as possible.
It is well known that in any integration theory (Daniell, Henstock–Kurzweil, Lebesgue, Bochner) each absolutely integrable real-valued function is measurable.
But in the Daniell–Mikusiński or Henstock–Kurzweil integral one does not need prior measurability for discussing the integral.
L1 can be defined as the space of absolutely integrable mappings, and L2 as the space of mappings f such that |f|² is integrable.
These questions are pertinent also in reference to vector measures.