Questions related to Axiom
A common axiom says that everything has both advantages and disadvantages. This question seeks to examine the disadvantages of conflict for the quality of scientific research, both in areas directly affected and in areas remotely affected.
P1: Ontology + Data = Knowledge Graph (KG)
P2: If a KG is the sum of these two summands, it follows:
C: An ontology is a framework for a KG
As a framework, an ontology consists of individuals, classes, properties, relations and axioms.
Individual: Tom
Class: interim project manager
Property: takes over the project
Relationship: Tom is the successor of Bernd
Axiom: Tom takes over the project from Bernd
KG: We ask a question and the knowledge graph makes connections between the individual elements of the ontology. It brings them to life, so to speak.
If you compare the KG with our neural network, you can see similarities. If I ask Tom a question, he will use his neural network to answer this question and generate new ideas.
That's why Knowledge Graphs are also defined as a kind of semantic network.
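A minimal sketch of this idea, using plain Python tuples as (subject, predicate, object) triples rather than a real triple store (the names and predicates are just the Tom/Bernd example above, spelled as illustrative identifiers):

```python
# A tiny knowledge graph built from the ontology example above.
triples = {
    ("Tom", "instanceOf", "InterimProjectManager"),
    ("Tom", "takesOverProjectFrom", "Bernd"),
    ("Tom", "successorOf", "Bernd"),
}

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the given pattern (None acts as a wildcard)."""
    return {
        (s, p, o) for (s, p, o) in triples
        if subject in (None, s) and predicate in (None, p) and obj in (None, o)
    }

# "Who is Tom the successor of?" -- the graph connects the ontology's elements.
print(query(subject="Tom", predicate="successorOf"))
```

Asking a question is then just pattern matching over the triples, which is the sense in which the KG "brings the ontology to life".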
Does the community agree with this approach? Please give feedback! Thanks!
What kind of scientific research dominates in the field of new ideas and new concepts in science, in art, and in business?
The new idea often contains something innovative in relation to what was previously invented, created, designed and manufactured.
New ideas take different forms of new solutions, new concepts, new models, new designs, new axioms, new directions of scientific thought, and many other forms that incorporate any aspects of novelty.
While conducting scientific research, new ideas, inventions, techniques, technologies, innovations, etc. are created. Thanks to this, the economy develops, and civilizational and cultural progress is realized. Therefore, it is necessary to create good standards and conditions, including financing for research projects.
Please reply; I invite you to join the discussion.
This is a question about Gödel numbering. As I understand it, the axioms of a system are mapped to a set of composite numbers. Is this really the case? For example, are the five axioms of Euclidean plane geometry mapped to five composite numbers? Does this also imply that theorems of the system are now composite numbers that depend on the composite numbers assigned to the axioms, plus the elementary numbers that encode the logical operations, such as +, if...then, "there exists", etc.?
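As a toy illustration of the encoding (the particular symbol codes below are arbitrary choices for demonstration, not Gödel's original assignment): each symbol gets a small number, and a formula of length n becomes the product of the first n primes raised to those codes, so every formula's Gödel number is composite and uniquely decodable by prime factorization.

```python
def first_primes(n):
    """Return the first n prime numbers by trial division."""
    ps, k = [], 2
    while len(ps) < n:
        if all(k % p for p in ps):
            ps.append(k)
        k += 1
    return ps

# Arbitrary symbol codes, for illustration only.
CODE = {'0': 1, 'S': 3, '=': 5, '+': 7}

def godel_number(formula):
    """Encode a symbol string as the product of p_i ** code(symbol_i)."""
    g = 1
    for p, sym in zip(first_primes(len(formula)), formula):
        g *= p ** CODE[sym]
    return g

# '0=0' -> 2**1 * 3**5 * 5**1 = 2430
print(godel_number('0=0'))
```

Because the factorization is unique, the map is injective, and statements *about* formulas become statements about the arithmetic of these numbers, which is the core trick behind the incompleteness proof.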
Austrian-born mathematician, logician, and philosopher Kurt Gödel created in 1931 one of the most stunning intellectual achievements in history. His shocking incompleteness theorems, published when he was just 25, proved that within any axiomatic mathematical system there are propositions that cannot be proved or disproved from the axioms within the system. Such a system cannot be both complete and consistent.
The understanding of Gödel’s proof requires advanced knowledge of symbolic logic, as well as Hilbert and Peano mathematics. Hilbert’s Program was a proposal by German mathematician David Hilbert to ground all existing theories to a finite, complete set of axioms, and provide a proof that these axioms were consistent. Ultimately, the consistency of all of mathematics could be reduced to basic arithmetic. Gödel’s 1931 paper proved that Hilbert’s Program is unattainable.
The book Gödel’s Proof by Ernest Nagel and James Newman provides a readable and accessible explanation of the main ideas and broad implications of Gödel's discovery.
Mathematicians, scholars and non-specialist readers are invited to offer their interpretations of Gödel's theory.
A difficulty in just starting a research program is that it can be hard to even know what it means to have property X, such as spacetime in physics. The question of avoiding circular reasoning naturally appears.
To avoid circularity, we suggest starting below the level of measurements, in the case of spacetime or of property X generally, before a metric function is even introduced.
We then start with a simple topological space, divisible into types by type theory only, a notion lower than sets. In that space, the metric is not introduced ad hoc, but by imposing observable requirements on the metric function.
The method, to be applied to property X, mutatis mutandis as done for spacetime:
- describe all free observers,
- impose that all agree on the interval,
- make the interval a differential defined by a metric function, and
- find the metric function that fits.
For example, because condition (2) includes as a special case the requirement that all such observers agree on the speed of light in vacuo, an experimental and undisputed fact, the determination of the interval ds² is fixed with nature as the arbiter, by physics, in spacetime.
Other arbiters are possible, such as cosmology, the mind, mathematics, or a historically-based sequence.
In physics, this produces the only answer possible in nature for ds², the expression for the correct metric to use, which provides the fusion of space and time in the interval ds², as already known to Minkowski and Einstein more than 100 years ago. In cosmology, with the Hubble flow, other answers are possible. The introduction of dark matter could be done this way.
The Lorentz transformation is then introduced as a consequence of this procedure, not as an axiom, and not beforehand.
This simple, proper sequence of steps, exemplified here for property X as spacetime, avoids the inconsistencies of Einstein's original treatment, and it matches the approach Einstein later adopted in formulating general relativity, as a curvature of the same spacetime.
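The claimed invariance of the interval can at least be checked numerically. Below is a sketch (with c = 1 and one spatial dimension, both simplifying assumptions of mine, not part of the post above) verifying that a Lorentz boost leaves ds² = −dt² + dx² unchanged:

```python
import math

def boost(t, x, v):
    """Lorentz boost with velocity v, in units where c = 1."""
    g = 1.0 / math.sqrt(1.0 - v * v)
    return g * (t - v * x), g * (x - v * t)

def interval2(t, x):
    """Squared interval ds^2 = -dt^2 + dx^2 (signature -,+)."""
    return -t * t + x * x

t, x, v = 2.0, 1.5, 0.6
tb, xb = boost(t, x, v)
# The interval is the same in both frames, up to floating-point error.
assert abs(interval2(t, x) - interval2(tb, xb)) < 1e-12
print(interval2(t, x), interval2(tb, xb))
```

This numerical agreement is of course only a consistency check, not a derivation; the post's point is that the invariance is what *selects* the Minkowski metric in the first place.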
Kindly allow me to ask a very basic but important question. What is the basic difference between (i) scientific disciplines (e.g. physics, chemistry, botany, zoology, etc.) and (ii) branches of mathematics (e.g. calculus, trigonometry, algebra, geometry, etc.)?
I feel that objective knowledge of the basic or primary difference between science and math is useful for imparting accurate and objective knowledge of science and math (and their role in technological invention and expansion).
Let me give my answer to start this debate:
Each branch of mathematics invents and uses a complementary, harmonious and/or interdependent set of valid axioms as core first principles, forming the foundation for evolving and/or expanding an internally consistent paradigm for that branch (e.g. calculus, algebra, or geometry). If the foundation comprises a few inharmonious or invalid axioms, those invalid axioms create internal inconsistencies in the discipline (i.e. the branch of math). Internal consistency can be restored by fine-tuning the inharmonious axioms or by inventing new valid axioms to replace the invalid ones.
Each scientific discipline must discover new falsifiable basic facts, prove them, and use such proven scientific facts as first principles in its foundation, where a scientific fact is a falsifiable discovery that cannot be falsified despite vigorous efforts to disprove it. We know what happened when one of the first principles (i.e. that the Earth is static at the centre) was flawed.
Examples of basic proven scientific facts include: the Sun is at the centre; Newton's three laws of motion; there exists a force of attraction between any two bodies having mass; the force of attraction decreases as the distance between the bodies increases; and increasing the mass of the bodies increases the force of attraction. Notice that I intentionally did not say directly or inversely proportional.
These kinds of first principles provide the foundation for expanding the BoK (Body of Knowledge) of each discipline. The purpose of research in any discipline is to keep adding new first principles, and to add more theoretical knowledge built on those first principles, such as new theories, concepts, methods, and other facts, thereby expanding the BoK of the prevailing paradigm of the discipline.
I want to find an answer to this question because software researchers insist that computer science is a branch of mathematics, and so they have been insisting that it is okay to blatantly violate scientific principles when acquiring scientific knowledge (i.e. knowledge that falls under the realm of science) that is essential for addressing technological problems of software, such as the software crisis and human-like computer intelligence.
If researchers of computer science insist that it is a branch of mathematics, I want to propose a compromise: the nature and properties of components for software, and the anatomy of CBE (component-based engineering) for software, were defined as axioms. Since those axioms are invalid, the result is an internally inconsistent paradigm for software engineering. I invented a new set of valid axioms by gaining valid scientific knowledge about components and CBE without violating scientific principles.
Even maths requires finding, testing, and replacing invalid axioms. I hope this compromise satisfies computer scientists who insist that software is a branch of maths. It appears that software, or computer science, is a strange new kind of hybrid between science and maths, which I want to understand better (this may be useful for solving other problems, such as human-like artificial intelligence).
Is there an encyclopedia of all the branching mathematical axioms, together with various ways of proving different theorems based on those axioms?
I will be using the Axiom Genome-wide Human Origins 1 array. Is there a way to tell whether a sample was contaminated with DNA from another sample during the extraction stage (for example, by excessive heterozygous calls)? Is there a way to eliminate alleles of the contaminant from the data? What is the minimum proportion of contaminating DNA that can be detected by most SNP arrays?
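One common first screen is exactly the one hinted at in the question: flag samples whose heterozygous call rate is far above the cohort mean. A minimal sketch (the 'AA'/'AB'/'BB' genotype encoding and any threshold are my assumptions for illustration; real thresholds are array- and pipeline-specific):

```python
def het_rate(genotypes):
    """Fraction of heterozygous calls among non-missing genotypes.

    genotypes: iterable of 'AA', 'AB', 'BB', or None for no-calls.
    """
    called = [g for g in genotypes if g is not None]
    if not called:
        return 0.0
    return sum(g == 'AB' for g in called) / len(called)

# A sample whose het rate is many standard deviations above the
# cohort mean is a contamination candidate.
sample = ['AA', 'AB', 'BB', 'AB', None, 'AB']
print(het_rate(sample))
```

Note that this only *detects* likely contamination; reliably removing the contaminant's alleles from array genotype calls is generally not possible, since each call mixes both DNA sources, which is why contaminated samples are usually excluded rather than corrected.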
This is axiomatic set theory. These axioms are needed for set theory, not for mathematics generally. So can we avoid them, since they involve the use of predicates and properties? Will experts guide in detail? Can their use be restricted by using a mapping rather than the notion of a property or predicate?
The answer must be yes; if all the principles of physics were known, then the mysteries of physics would have been answered, no? Or would they? Even with all the axioms of the natural numbers, there are mysteries in the counting numbers, according to Gödel. Does Gödel's result in mathematics and logic carry over by analogy to physics? How lost are we in physics?
In his Principia, in the Motte translation, Scholium at p. 77, he writes of time, in order to remove "certain prejudices": “Absolute, true and mathematical time, of itself, and from its own nature flows equably without regard to anything external, and by another name is called duration”.
In the Motte translation, p 506, Newton says: “... for whatever is not deduced from the phenomena is to be called an hypothesis; and hypotheses, whether metaphysical or physical, whether of occult qualities or mechanical, have no place in experimental philosophy."
Is it possible that he set aside the issue of time in order to work out the consequences of an absolute time axiom? That absolute time was for Newton a provisional hypothesis?
Chalmers contemplated the Chinese room argument for both the connectionist and symbolic approaches in AI, as I have in this thread. Expanding on the axiom "syntax is not sufficient for semantics", I would comment that, as presented in the diagram of the thread (attached here also), there is another error in Searle's argument.
The neural network system drawn there is a complex distributed system reflecting accuracy in translating a 1-gram model, not an n-gram model; if the units were taken as words, that would account for semantic interpretation (which would be another neural network).
I would like counterpoints that can refine the argument.
 Subsymbolic Computation and the Chinese Room by David J. Chalmers http://consc.net/papers/subsymbolic.pdf
How can we define biodiversity so that it can be measured and the extent of its degradation and the effectiveness of safeguarding measures can be known?
It seems complicated, if not impossible, to give an objective definition of biodiversity (animal, plant... in forests, oceans...) based on common-sense axioms, other than simply counting the number of species present in a geographical area of interest. However, this definition is very restrictive since, for example, it does not take into account the number of individuals per species (a species may become endangered without the criterion showing it). In addition, the interest of biodiversity also stems from the interactions between species and their complementarity for different functions in ecosystems.
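One standard way to go beyond a bare species count is the Shannon diversity index, which weights each species by its relative abundance; a minimal sketch:

```python
import math

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over species abundances."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Two species with even abundances: H' = ln 2.
print(shannon_index([50, 50]))
# Same species richness, but one species nearly gone: much lower H'.
print(shannon_index([99, 1]))
```

The second call illustrates the concern in the question: a richness-only count sees no difference between the two communities, while an abundance-weighted index does. Interactions between species still require further, separate measures.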
What measures can be put in place? What is the evolution of concepts, ideas, literature on this subject?
The open mapping theorem is usually proved in most texts using Baire's category theorem, which depends upon the axiom of choice.
But if one studies differential calculus in Banach spaces, say as in Dieudonné's Foundations of Modern Analysis, the theorem is the first part of the inverse mapping theorem (as proved in Walter Rudin's classic Principles of Mathematical Analysis, whose proof carries over to the Banach space setting), since a continuous linear map is differentiable. This proof does not depend upon Baire's category theorem.
What is a geometry? What is required of a system of mathematical objects and propositions(axioms) so that they may be termed a geometry? What must the geometry reveal about the objects?
As an example, in a vector space with a bilinear form we can calculate the norm and the orientation.
Are there any conditions that a theory must satisfy in order to be called a geometry?
I apologize if my questions sound a bit unclear. I thank you all in advance for your help and cooperation.
I tend to consider mathematics the body of axioms, definitions, systems of logic, and their results. If you're not adding to this body, you're not doing mathematics.
Arithmetic, calculating, solving, and so on, just seem more like accounting than mathematics. If anything, I would call these things "calculus" after the original Latin root word, referring to a stone used for counting.
Maybe it's pedantic, but I don't think so. A lot of people who "hate math" really hate "calculus." And honestly, being good at mathematics, at constructing proofs, etc., is so different from being good at working with numbers. I'm decent at the former; I'm horrible when it comes to the latter.
Isn't it true that "paradigm" is one of the most deeply useful, and most used or abused, terms in intellectual circles and discussions?
I have used the term often without fully comprehending its finer details. So I started searching for a comprehensive description to fully comprehend its meaning and gain deeper insights, but I have not reached that goal yet.
Hence, I decided to create one and share it here for debate and discussion, to improve my understanding by listening to different perspectives and gaining new insights. Let me briefly share my preliminary draft description and insights:
Question: What is a scientific or technological Paradigm?
Answer: A paradigm is a complex perception of reality painted by a huge BoK (Body of Knowledge) comprising thousands of pieces of knowledge, such as individual observations, experiences, shared background axiomatic assumptions, values, theories, postulates, and the prevailing climate of opinion or thought patterns of a very large community or group of persons subscribed to the paradigm.
A paradigm can become deeply entrenched only if it attracts a very large community or group of practitioners and researchers to expand it, and together they accumulate a huge BoK by acquiring knowledge over decades or even centuries. Each piece of knowledge in the BoK of a deeply entrenched paradigm is consistent and/or congruent with all the other pieces of knowledge in the BoK and with the overall perception of reality painted by the BoK.
The books and research publications of each discipline (e.g. botany, zoology, chemistry, virology, mycology, parasitology, and bacteriology, to name a few) comprise a huge BoK accumulated over decades, and the BoK paints a perception of reality; that perception of reality is the "paradigm".
In other words, the paradigm of a discipline is our understanding of the world, or perception of reality, painted by the BoK, i.e. the knowledge in textbooks and research papers. Every mature discipline must have a paradigm, which is nothing but the perception of reality painted by the BoK acquired and accumulated for the discipline.
Almost every discipline, including the soft sciences (e.g. sociology, political science, psychology, economics, or even each religion), has a BoK that paints a perception, which may be referred to as a paradigm. My understanding has a few gray or blurred patches, so I would like to hear other perspectives to improve clarity.
Our understanding of the term "paradigm" can never be complete without knowing the state of knowledge without a paradigm (e.g. during the pre-paradigmatic state). The seminal and influential book "The Structure of Scientific Revolutions" by Thomas Kuhn (who coined the term "paradigm") describes a pre-paradigmatic (or pre-science) period for each scientific discipline, when the discipline is in its infancy (i.e. at the time of its inception).
During the pre-paradigmatic period, a chaotic situation exists. There is a good summary of this chaotic pre-paradigmatic (or pre-science) state for any discipline in this informative video, starting at 1 minute 16 seconds and lasting about two and a half minutes: https://www.youtube.com/watch?v=JQPsc55zsXA (the next video may also be interesting; it explains that creating a paradigm is essential to overcome such chaos: https://www.youtube.com/watch?v=sOGZEZ96ynI)
During the pre-paradigmatic (or pre-science) period it is very hard to acquire knowledge. So a basic foundation for a paradigm forms over time by accumulating various theories, axioms, and postulates that are created using reasoning and consensus, relying on background assumptions, observations, and the prevailing climate of opinion or thought patterns.
For example, the pre-paradigmatic (or pre-science) period for the basic sciences might lie between the 4th century BC and the 1st century CE, during which many ancient philosophers (e.g. Plato, Aristotle, Pythagoras, and Archimedes) created the foundation for the first scientific paradigm. Unfortunately, that foundation also comprised a flawed axiomatic assumption, or fallacy: that the Earth is static. Exposing the fallacy resulted in a scientific revolution.
Likewise, even modern scientific disciplines have had a pre-paradigmatic (or pre-science) period. For example, the pre-paradigmatic period for computer science and software was approximately between the mid-1950s and the early 1970s.
For example, two NATO software engineering conferences, the first from 7 to 11 October 1968 and the second from 27 to 31 October 1969, defined (or coined) new terms such as "software engineering", components, and assembling. The conferences were attended by many influential thought leaders and researchers of computer science from almost all the nations engaged in computer science research at that time.
Although they became an integral part of our vocabulary, terms such as "software engineering" or "assembling" were perceived as provocative or strange in 1968. There would be a period of transition from pre-science to normal science before such terms became an integral part of our vocabulary.
Also, different groups may make the transition during different periods. It is hard to know the exact duration of the transition; my guess is that it happened between 1970 and 1975, but it certainly culminated in a paradigm before 1979.
A paradigm slowly becomes more and more entrenched (1) as more and more pieces of knowledge are accumulated and added to the BoK (Body of Knowledge), and (2) as more and more practitioners and researchers subscribe to the paradigm. I think the software paradigm also has fallacies injected during its pre-science period. Exposing those fallacies should result in a revolution.
According to "The Structure of Scientific Revolutions" by Thomas Kuhn (who coined the term "paradigm"): if any new piece of knowledge or fact is proposed or discovered within a deeply entrenched or dominant paradigm, it faces fierce resistance from the practitioners of the paradigm, and they try their best to suppress the new piece of knowledge (even by resorting to attacks) if it is not congruent with, but contradicts or is inconsistent with, the perception of reality painted by the BoK of the dominant paradigm.
Normal science solves puzzles posed by the prevailing paradigm but does not challenge the paradigm's basic axiomatic tenets or postulates. In fact, "normal science" will suppress novelties that undermine its foundations (i.e. the basic axiomatic tenets or postulates). If the fundamental axiomatic postulates are fallacies, exposing the fallacies will result in a revolution.
The non-measurable set is formed by selecting one element from each equivalence class obtained from the relation x ~ y iff x − y is rational.
But we suggest that we do not accept this form of the axiom of choice, applied to an arbitrary collection of sets.
To form an arbitrary Cartesian product one needs an indexed family of sets, and an indexing set is necessary.
In the above case, it seems the collection is not indexed: there is no indexing set and no explicit indexing map.
So we cannot form a non-measurable set, and thus the Banach–Tarski paradox is absent.
We do not work with arbitrary collections, but only with indexed families and explicit indexing maps.
How can the paradox be avoided in measure theory?
Whether there is a plane where the axiom of choice is not needed is also an open question.
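For reference, the Vitali construction being questioned above, written out as a standard sketch (this is the textbook formulation, not the poster's own):

```latex
% Equivalence relation on [0,1]:
x \sim y \iff x - y \in \mathbb{Q}.
% A choice set V picks one representative from each class; its rational
% translates are pairwise disjoint and satisfy
[0,1] \;\subseteq\; \bigcup_{q \in \mathbb{Q} \cap [-1,1]} (V + q) \;\subseteq\; [-1,2],
% so translation invariance and countable additivity would force
1 \;\le\; \sum_{q} \lambda(V) \;\le\; 3,
% which no single value of \lambda(V) can satisfy.
```

The choice of V is exactly the step the post objects to: no indexing set or explicit indexing map for the family of equivalence classes is exhibited.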
By studying in steps what a flat plane is, this paper shows that only six axioms are necessary for 2-dimensional Euclidean geometry up to the Pythagorean theorem: 1) the existence of stable space; 2) the existence of a straight line through any two points; 3) the existence of distance measurement between any two points; 4) the limitation of the space to 2 dimensions; 5) repeated equivalence; and 6) reflected equivalence.
I also use this to teach kids math logic and math observation.
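For classroom use, axiom 3 (distance measurement between any two points) together with the Pythagorean theorem can be illustrated numerically; a minimal sketch:

```python
import math

def dist(p, q):
    """Euclidean distance between points p and q: the Pythagorean theorem
    applied to the horizontal and vertical legs between them."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

# The classic 3-4-5 right triangle.
print(dist((0, 0), (3, 4)))
```

Kids can check the 3-4-5 (and 5-12-13) triangles by hand against the function's output, connecting the axioms to a concrete computation.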
Dear all, I am searching for a comparison of CTT with IRT. Unfortunately, I mostly get an outline of both theories, but they are not compared. Furthermore, generally the "old" interpretation of CTT using axioms is used and not the correct interpretation provided by Zimmerman (1975) and Steyer (1989). The only comparison of both theories that I found was in Tenko Raykov's book "Introduction to Psychometric Theory". Does anybody know of any other sources?
Kind regards, Karin
Some people consider mathematical axioms to be points of strength of mathematics; others say that axioms are points of weakness.
My question is: why? Is it so difficult to prove mathematical axioms?
The well-known Zermelo theorem states that every set can be well-ordered. Since any well-ordering is a linear ordering, the following corollary follows from this theorem:
(A) An arbitrary set can be linearly ordered.
It is well known that Zermelo's theorem is equivalent to the axiom of choice.
Question: Can Corollary (A) be proven without the axiom of choice?
The axiom of choice is debatable. It leads to paradoxes like the well-ordering theorem, which is intuitively false.
Increasingly, people working in fields like constructive analysis or computer science tend to believe it is false.
It is mostly used for results about a whole class of objects. If a particular instance is given, the result can be proved without this axiom.
Some results, like "every field has an algebraic closure", are strictly not necessary:
one can take a field and a specific polynomial and construct its splitting field. So Galois theory can still be done.
Yes, Tychonoff's theorem will be false, and we had better live with this fact.
The existence of a complete orthonormal set will not be true. But when one computes Fourier coefficients, all but countably many are zero.
Will the Hahn–Banach theorem for separable spaces still hold?
In any case, we can do mathematics mostly under a separability assumption.
Why carry an axiom one of whose consequences, the well-ordering theorem, has to be false, and another of whose consequences leads to the Banach–Tarski paradox?
The axiom only simplifies reasoning, in that we can assume maximal ideals exist, or that the dual space of a Banach space is nonempty, etc.
With the exception of Tychonoff's theorem, we really use the axiom as a convenient blanket for a class of objects.
Maybe we need to add extra assumptions to theorems, but that is better than carrying a wrong axiom.
I used to believe the axiom, in the sense that since a product of three sets has more elements than a product of two sets, an arbitrary Cartesian product of nonempty sets must be nonempty.
But while the product of two sets is defined using the notion of an ordered pair, the product of more sets is defined using the notion of a mapping, which itself depends on the notion of the product of two sets. Herein lies the point which changed my intuition.
While going through a paper for review, I found that the authors claim that any finite set can be ordered by using Zorn's lemma and the axiom of choice.
I am unable to see how this can be concluded. Can someone enlighten me in this connection?
It is suggested that bureaucratic phenomena may be visualized and rationalized by relating administrative structures and processes metaphorically to structural equivalents in nature. Examples: Max Weber's "iron cage" vs. the Faraday cage and cage rearing; Ohm's law to describe hierarchical stress as the product of management power and staff resistance; organizational deficiencies vs. lattice defects in solids and metastases in oncology; order/disorder/chaos vs. the 2nd law of thermodynamics. Physical formulae and axioms are much more concise than social science prose. They remind us of the famous laws of C. Northcote Parkinson and Laurence J. Peter.
To this day, cladograms are assumed as mathematical postulates that explain the evolution of living beings. They are said to be the least speculative reasoning, and so are assumed as an axiom. I am writing a book, and I need to explain this with concrete demonstrations, as is done in physics.
As far as I could find, they (partial fields) were first introduced in 1996, but they still play an important role in matroid representation. I lately read a paper about skew partial fields and matroid representation over them, which generalizes the representability of matroids over any skew field. I would like to know whether there are any theories besides matroid theory that are related to partial fields. Could somebody give me a clue?
Dunn has proved that in an inconsistent field with (a) a pair of classically distinct real numbers x, y identified, x = y, and (b) the resulting theory closed under the laws of classical fields, every real number is provably identical to every other: for all r, s, r = s. The proof is simple: from x = y we have 0 = (y − x); then both sides can be multiplied by any factor we like to get 0 = r and 0 = s for any r, s. Hence, by Leibniz's law, r = s. This is avoidable only by restricting functional substitution. How, then, is it prevented in the present construction, especially in light of the (unproved) result on the preservation of function values, P8? Is it because the only inconsistency contemplated in this construction is simply adding ~(x=y) for some classically identical x=y? If so, doesn't this restrict its usefulness? There are, of course, inconsistent constructions which prevent Dunn's argument, such as Mortensen's.
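Spelled out, the collapse argument sketched above uses the classical invertibility of y − x (available because x and y are classically distinct):

```latex
x = y
\;\Rightarrow\; 0 = y - x
\;\Rightarrow\; 0 = (y - x) \cdot \frac{r}{y - x} = r
\quad \text{for any } r,
```

so 0 = r and 0 = s for arbitrary r, s, whence r = s. Every step is an instance of functional substitution, which is why restricting substitution is the only escape within a classically closed theory.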
In the standard proof of the Hilbert projection theorem, the axiom of countable choice (denoted CC) is used. I wonder whether there is a model of ZF + the negation of CC in which the Hilbert projection theorem fails for Hilbert spaces that are not finite-dimensional. Perhaps there are experts in both functional analysis and set theory who can answer my question easily. I would be grateful for their hint.
In neutrosophic sets all three measures (truth, indeterminacy, falsehood) are independent; how does one affect another in decision making? For example, in the case of intuitionistic fuzzy sets, if the membership of an element increases, then certainly the sum of the other two measures (non-membership and hesitation) will decrease.
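The contrast can be made concrete in a short sketch (the membership values below are arbitrary examples of mine): in an intuitionistic fuzzy set the three measures are coupled by membership + non-membership + hesitation = 1, while a neutrosophic triple (T, I, F) only requires each component to lie in [0, 1] independently.

```python
def valid_intuitionistic(mu, nu):
    """Intuitionistic fuzzy: hesitation h = 1 - mu - nu must be >= 0,
    so raising mu necessarily pushes nu + h down."""
    return 0.0 <= mu <= 1.0 and 0.0 <= nu <= 1.0 and mu + nu <= 1.0

def valid_neutrosophic(t, i, f):
    """Neutrosophic: T, I, F are independent; their sum may reach 3."""
    return all(0.0 <= v <= 1.0 for v in (t, i, f))

print(valid_intuitionistic(0.9, 0.3))    # invalid: 0.9 + 0.3 > 1
print(valid_neutrosophic(0.9, 0.8, 0.7)) # valid: components are independent
```

This is exactly why, in neutrosophic decision making, changing one measure imposes no formal constraint on the other two; any coupling must come from the aggregation operators chosen, not from the set definition itself.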
According to the definition of equality between functions, f: R -> R (f(x) = 2 for all x)
is not equal to g: R -> N (g(x) = 2 for all x). But they are equal according to the axiom of extensionality (they are sets, and this axiom must hold for them).
So we have two definitions of "equality" here that are not logically equivalent. This is indigestible for me.
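The tension can be made explicit: extensionality compares only the graphs (sets of ordered pairs), while the typed definition compares (graph, domain, codomain) triples.

```latex
% As bare sets of ordered pairs, extensionality gives
\operatorname{graph}(f) \;=\; \{(x, 2) : x \in \mathbb{R}\} \;=\; \operatorname{graph}(g).
% But if a function is defined as the triple (graph, domain, codomain), then
(\Gamma, \mathbb{R}, \mathbb{R}) \;\neq\; (\Gamma, \mathbb{R}, \mathbb{N}),
\qquad \text{where } \Gamma = \{(x, 2) : x \in \mathbb{R}\}.
```

On the first reading the two functions are equal; on the second they are not. There is no contradiction, because the two notions of equality are being applied to different set-theoretic objects (a graph vs. a triple encoding the graph together with its codomain).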
Axioms (C1) to (C5) can be found in the paper below.
Sarma, I. Ramabhadra, et al. "Convergence axioms on dislocated symmetric spaces." Abstract and Applied Analysis. Hindawi Publishing Corporation, 2014.
I managed to solve the problem of which fingerprint scanner is optimal for data acquisition of minutiae from a newborn baby's fingerprint.
The results of the qualitative research are quite interesting, and it is possible to achieve 100% accuracy.
In my case this confirmed the fingerprint biometry axiom that fingerprint minutiae are formed prenatally, from the 7th month.
We felt proud after finishing this task, since we provided enough factual data for patenting a device that will guarantee baby ID and maternity for every newborn baby with 100% accuracy and eliminate human error.
Has anyone done similar research?
In 1999 I published a book, Axiomatic Theory of Economics.
Since then I have found that economists who have not read even the simplified exposition will invoke the name Kurt Gödel when dismissing my theory.
I know who Gödel is, but I do not see what the foundations of mathematics have to do with me. I rely only on widely accepted calculus and real analysis results that should be familiar to any practicing engineer. The antipathy I get from economists has nothing to do with number theory – most of them would be hard pressed to even define a prime – it is all about me stating my assumptions clearly before proving my theorems.
So my question is:
How should I respond to people who invoke Gödel’s name when dismissing my work?
I am reminded of Van Helsing holding up a cross to Dracula, except for economists it is Gödel’s Incompleteness Theorems that ward off the evil logician.
Have other people at Research Gate faced similar criticism? How did you respond?
FYI I am NOT a follower of Gerard Debreu. I have my own theory. Something else that I have noticed about economists is that they are incapable of recognizing that it is possible to have more than one axiomatic theory that purports to describe the same phenomena. I have found it impossible to disabuse economists of the belief that Debreu (who was parroting Bourbaki) fully defines the axiomatic method.
Economists claim that the practice of deductive logic rises or falls with the fortunes of this one man, regardless of what axioms the practitioner is using. I reply that, since Debreu lost all of his followers in 1974 when his theory went down in flames, accusing me (who was eight years old at the time) of having ever been a follower is actually a straw man attack.
Recently, some researchers have worked on the infinite structure of matroids.
The independence axioms of matroids play a primary role in the definition of codes over GF(q), as one can see for representable matroids. Now, can we have a logical definition of this point for the codes related to infinite matroids?
Given an algebraic system based on polarities on the sphere. A pair of opposite points and their equator constitute a basic element. (Add the equator to Riemann's unification of opposite points in elliptic geometry.) Two elements determine a resulting element of the same set. This is a partial binary operation with two axioms: ab = ba; (ab)(ac) = a. I call any set of this type a projective sphere. (Cf. Baer's finite projective planes and Devidé's plane pre-projective geometries.)

From these axioms a number of important properties can be deduced. For example, if the set has at least two elements a and b, then xx cannot be properly defined for the whole set, because (ab)(ab) = a = (ba)(ba) = b, a contradiction. This means that in the general case xx must remain undefined, as with the case of division by zero in fields.

However, if a smooth curve is given on the sphere, or an oval in a finite set, then the xx operation CAN partially be defined for the elements of the curve or of the oval, as the tangent to the given point. Example: given the oval of the four reflexive (self-conjugated) elements in a 13-element finite sphere, the derivative consists of the same four elements. Another example: given the basic elements on the sphere with homogeneous coordinates, take the circle with center (1,0,0) and radius pi/4, given by elements (1, √(1-c^2), c); its derivative is the curve given by elements (-1, √(1-c^2), c).

In this interpretation, the derivative does not represent the number indicating the slope of a straight line, but a set of the same type of geometric objects out of which the original curve is made. This also means that every smooth curve evokes a geometry of its own, defined specifically for the given curve.
What is needed is only scalar measurability, that is, measurability of |f|, for forming the L¹ and L² spaces.
Since issues of measurability depend upon the axiom of choice, it is advisable to avoid these issues as far as possible.
It is well known that in any integration theory (Daniell, Henstock–Kurzweil, Lebesgue, Bochner) each absolutely integrable real-valued function is measurable.
But in the Daniell–Mikusiński or Henstock–Kurzweil integral, one does not need prior measurability to discuss the integral.
L¹ can be defined as the space of absolutely integrable mappings, and L² as the space of mappings such that |f|² is integrable.
These questions are pertinent also in reference to vector measures.
UML is a very popular modelling technique and tool. Formal specifications are based on mathematics. Formal verification is done by model checking, by proving axioms, and by algebra-based methods. How can we integrate formal verification into UML specifications? Is it feasible to make this integration?
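One lightweight way to connect the two is to extract the state machine underlying a UML statechart and check properties on it directly, which is the essence of model checking. A toy sketch (the states and transitions below are invented for illustration, not produced by any real UML tool):

```python
from collections import deque

# Hypothetical transition relation extracted from a UML statechart.
TRANSITIONS = {
    'Idle':    ['Running'],
    'Running': ['Paused', 'Done'],
    'Paused':  ['Running'],
    'Done':    [],
}

def reachable(start):
    """Breadth-first exploration of the reachable state space."""
    seen, queue = {start}, deque([start])
    while queue:
        s = queue.popleft()
        for t in TRANSITIONS[s]:
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return seen

# Safety property: no deadlock, i.e. every reachable non-terminal
# state has at least one outgoing transition.
assert all(TRANSITIONS[s] or s == 'Done' for s in reachable('Idle'))
```

In practice this translation step is what dedicated toolchains automate; the point of the sketch is only that a UML statechart already *is* a formal object on which such checks are well-defined, so the integration is feasible in principle.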