Questions related to Mathematics
I have transient flow across a pipe system and want to prevent backflow in it. Could someone please suggest resources for modelling the valve mathematically?
Terry Tao blogged about this unfortunate event. Kindly share it within the mathematical community to raise awareness.
In a recent paper presented at the 51st Annual Iranian Mathematics Conference, entitled "Notes on maximal subrings of rings of continuous functions", we give some
properties of maximal subrings of some classes of subrings of C(X). However, we could not answer the following two important questions in this context.
1. Is every maximal subring of C(X) unit-free (i.e., whenever R is a maximal subring of C(X) and
f is an element of R with empty zero-set, then f is a unit of R)?
2. Is every maximal subring of C(X) uniformly closed (i.e., closed under uniform topology on C(X))?
I would be very delighted if you could let me know your opinion about any ideas for approaching the answers to these questions.
Teaching Mathematics at school to all students is commonly justified by the opinion that it improves their problem-solving skills and "makes them smarter" (whichever measure is implied by this). I wonder a few things about this:
1) Is there clear empirical support for this opinion? Does that evidence settle the direction of causality between learning maths and cognitive ability? Recommendations of good literature about this would be appreciated too.
2) Do the abilities students develop improve performance on problems that are not explicitly mathematical? For example, learning the volumes of 3D shapes could improve spatial navigation.
3) And importantly, are these improvements particularly due to teaching maths? E.g. for the previous example - wouldn't learning world maps in a geography class or spatial maze tasks develop spatial navigation more efficiently than learning calculation of volumes?
I came across the term pseudo-inverse Laplacian / generalized inverse Laplacian. What is the impact of the pseudo-inverse Laplacian on a graph, for both directed and undirected graphs?
What is the procedure for calculating the orthonormal basis of a matrix, which is equivalent to requiring that [(1/sqrt(N)) * [N×1 matrix] * (transpose of the orthonormal basis)] is an orthogonal matrix?
In the reference paper the pseudo-inverse is calculated for a bipartite graph, while my case involves both directed and undirected graphs.
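For the undirected case, a minimal NumPy sketch may show what the pseudo-inverse of a graph Laplacian gives you (the 4-cycle graph is an illustrative choice; for a directed graph one would build L from the out-degree matrix and adjacency instead, and L is then generally not symmetric):

```python
import numpy as np

# Undirected 4-cycle: adjacency matrix and Laplacian L = D - A
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

# Moore-Penrose pseudoinverse; for a connected undirected graph it
# encodes effective resistances: R_ij = L+_ii + L+_jj - 2 L+_ij
L_plus = np.linalg.pinv(L)

# Sanity checks of two Penrose conditions
assert np.allclose(L @ L_plus @ L, L)
assert np.allclose(L_plus @ L @ L_plus, L_plus)

# Effective resistance between adjacent nodes 0 and 1 of the 4-cycle:
# edge of resistance 1 in parallel with a 3-edge path, so 3/4
R_01 = L_plus[0, 0] + L_plus[1, 1] - 2 * L_plus[0, 1]
```

This is why the pseudo-inverse shows up in random-walk hitting times and resistance distances; for directed graphs the interpretation is less standard and depends on the Laplacian variant chosen.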
Our knowledge of the world begins not with matter, but with perception. There are no physical quantities independent of the observer. All physical quantities used to describe Nature refer to the observer. Moreover, different observers can take into account the same sequence of events in different ways. Consequently, each observer assumes a “stay” in his physical world, which is determined by the context of his own observations.
If mathematics and physics, which describe the surrounding reality, are effective human creations, then we must consider the relationship between human consciousness and reality. Undoubtedly, the existing unprecedented scientific and technological progress will continue. However, if there is a limit to this progress, the rate of discovery will slow down. This remark is especially important for artificial intelligence, which seeks to create a truly super intelligent machine.
My dissertation topic is "Perceptions of mathematical conceptions depending on Myers-Briggs Type Indicator personalities". It can include methods of solving, understanding, and visualizing mathematical problems, etc. I would like to know whether there is any research connecting the MBTI with solving university mathematics, for example methods of solving linear algebra problems depending on MBTI type, because I have struggled to find anything similar.
I am interested in the practical uses of mathematical minimal surfaces in engineering design (such as gyroids, Schwarz surfaces, etc.). Can anyone provide good examples, particularly in art and design (or nature-based design)? I am particularly interested in examples that have been physically created and used for some practical purpose or as art. However, I am having a hard time finding published work on this topic that is not purely computational.
If anyone has published work in this area, please consider sharing it in this thread so we can get a good discussion going.
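As a concrete starting point for discussion: in design and fabrication work the gyroid is usually handled through its trigonometric level-set approximation rather than the exact minimal surface. A short sketch (the 64³ sampling grid is an arbitrary choice):

```python
import numpy as np

def gyroid(x, y, z):
    """Level-set approximation of the gyroid minimal surface."""
    return (np.sin(x) * np.cos(y)
            + np.sin(y) * np.cos(z)
            + np.sin(z) * np.cos(x))

# Sample one periodic cell [0, 2*pi)^3
t = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
X, Y, Z = np.meshgrid(t, t, t, indexing="ij")
G = gyroid(X, Y, Z)

# The surface g = 0 splits the cell into two congruent labyrinths,
# so the solid fraction of {g > 0} is about one half; thickening the
# shell |g| < c is the usual route to printable lattice material.
solid_fraction = np.mean(G > 0)
```

Meshing this level set (e.g. with marching cubes) is what most of the 3D-printed gyroid lattices in the literature are built from.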
Information processing and communication are described by a tri-state system in classical systems such as FPGAs, ICs, CPUs, and others, in multiple applications programmed in Verilog, an IEEE standard. This has replaced the Boolean algebra of the two-state system indicated by Shannon, also in gate construction with physical systems. The primary reason, in my opinion, is dealing more effectively with noise.
Although, constructionally, a three-state system can always be embedded in a two-state system, efficiency and scalability suffer. This should be more evident in quantum computing, as explained in the preprint
As new evidence accumulates, including from modern robots interacting with humans in complex cyber-physical systems, this question asks first whether only the mathematical nature is evident as a description of reality, while a physical description is denied. Thus, ternary logic should replace the physical description of choices, with a possible third truth value, which one already faces in physics, biology, psychology, and life, needing more than a coin toss to represent choices.
The physical description of "heads or tails" is denied in favor of opening up to a third possibility, and so on, to as many possibilities as needed. Are we no longer black or white, but accept a blended reality as well?
I wish to develop a peer mentoring model based on content knowledge and pedagogical content knowledge in mathematics, as well as technological skills. Unfortunately, I can't find a standardized test/assessment tool to determine their competence level in each domain. Hopefully some of you can help me find a link or a way to locate an assessment tool? Thank you in advance.
I am asking you, experts in cybersecurity and mathematics, whether computer virology (that of Cohen and Andler) is still an active field of research. That research was done in France at Inria around 2000-2010. I do not see anyone continuing it, and I cannot tell whether this is a dead branch or not. I am interested in it in order to understand malware and detect its behavior.
Thank you very much for your precious help!
I have written two articles about a generalization of multiple zeta values and multiple zeta star values. I also presented applications of this generalization, including partition identities, polynomial identities, a generalization of the Faulhaber formula, as well as MZV identities. If you are interested, check them out on my profile and give me your opinion.
Math relates numbers to other numbers. But this is insufficient for physics. Physics models which include cause and effect are more useful and result in better human understanding. For example, current models are time-reversible. If cause and effect were part of the calculation, the model would not be time-reversible without another equation showing how energy (entropy) is expended. Further, any proposed model that was only mathematical manipulation would not be considered physics.
The parameters are:
1. Stator phase resistance (Rs).
2. Inductances (Ld & Lq).
3. Flux linkage established by magnets (V·s)
4. Inertia (J)
5. Viscous damping coefficient (F).
6. Number of pole pairs
7. Generator speed (wm)
In this link, shifted functions are defined as r*(x-o)/100 (where r is the original range) to keep the range within 100. But for optimizing the functions, should I generate the values of x in [-100, 100] or in [o-100, o+100]? If the function is shifted by the vector o, then the respective ranges should also change, because if o < -100 or o > 100, the global optimum will not fall within the range. And even if I generate the o values within [-100, 100], the function would be shifted within the range rather than lying in the range where it is well defined.
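For what it's worth, here is a sketch of the convention I believe these shifted benchmarks use (the shifted-sphere function and the o ∈ [-80, 80]^d range are assumptions to check against the linked definition): candidates are still sampled in the original box, and only the function is shifted, with o drawn strictly inside the box so the optimum stays feasible.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 5

# Hypothetical shifted-sphere benchmark: the global optimum sits at the
# shift vector o, drawn INSIDE the search box [-100, 100]^d (a common
# convention is o in [-80, 80]^d) so the optimum never leaves the box.
o = rng.uniform(-80.0, 80.0, dim)

def shifted_sphere(x):
    z = np.asarray(x) - o        # shift first, then evaluate
    return float(np.sum(z * z))

# Candidates are generated in the ORIGINAL box [-100, 100]^d;
# the sampling range is not shifted, only the function is.
x = rng.uniform(-100.0, 100.0, dim)
value_at_candidate = shifted_sphere(x)
value_at_optimum = shifted_sphere(o)
```

Under this convention the dilemma disappears: because o is restricted to lie inside the box, searching [-100, 100]^d always covers the optimum.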
- An understanding of theories about how people learn, and the ability to apply these theories in teaching mathematics, is one of the primary requirements for effective teaching of mathematics. A large number of scientists have studied mental development and the nature of learning in different ways, and this has resulted in various theories of learning.
I would like to invite you to submit both original research and review articles to the Special Issue on "Modern Applications of Numerical Linear Algebra" organised by Mathematics (IF=1.747) ISSN 2227-7390. For more details see https://mdpi.com/si/74727.
This is my very first encounter with a functional equation of this kind, and the methods of series solutions and differential equations are not of much help. I am asking for the solution to this problem, or at least the mathematical prerequisites needed to understand a solution.
This function is asymptotically zero at both ±infinity, positive otherwise, and has a peak near zero.
If I am correct, then this is the frequency distribution of the numbers produced by the sequence x_{n+1} = a ln(x_n²), as long as the generated sequence is chaotic and sufficiently long (the value of a is usually limited to 0.2 to 1.3; the positive and negative signs of a are essentially immaterial, except that the sequence is negated after the first term). The sequence is seeded with a number of roughly the same order of magnitude as 1, on either the positive or negative side, excluding values that eventually lead to zero or infinity in the sequence. The sequence is allowed to proceed, and the frequency distribution of its values is recorded, from which a continuous probability distribution may be guessed numerically but not found analytically. The expression for the continuous probability distribution comes to me from the following reasoning:
- Suppose the probability distribution is given by y = f(x). If I consider an infinitesimally thin strip of width dx around x, then it contains a fraction f(x) dx of all the points used to construct the probability distribution. When this fraction of points is passed through another step of the recurrence x_{n+1} = a ln(x_n²), the fraction of points involved must be unchanged. That is, when x is substituted with a ln(x²), the infinitesimal strip area, which changes to f(a ln(x²)) d(a ln(x²)), must be numerically equal to f(x) dx; thus the functional equation is postulated.
- I am not entirely sure about this reasoning, and experts are welcome to point out any fault in it and show the correct equation if I am mistaken.
Please see my related question https://www.researchgate.net/post/Can-you-figure-out-Chaos-of-the-recurrence-x-n-1lnx-n2 for further details.
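A minimal numerical sketch of the experiment described above (a = 0.8 and the seed 0.5 are arbitrary choices within the stated ranges):

```python
import numpy as np

# Iterate x_{n+1} = a * ln(x_n^2) and histogram the visited points to
# approximate the invariant density f(x) numerically.
a = 0.8          # within the stated range (0.2, 1.3)
x = 0.5          # seed of order of magnitude 1, as described above
samples = []
for _ in range(100_000):
    x = a * np.log(x * x)
    if not np.isfinite(x):        # guard against orbits hitting ln(0)
        break
    samples.append(x)
samples = np.array(samples)

# Normalized empirical density: this is the numerically guessed
# distribution against which the postulated functional equation
# f(a ln x^2) d(a ln x^2) = f(x) dx can be checked.
density, edges = np.histogram(samples, bins=200, density=True)
```

Comparing this histogram before and after one extra application of the map is one concrete way to test whether the invariance reasoning above holds.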
Considering the majority of the population, it is as if they do not know what their strengths are. This is perhaps due to their considerably lower exposure to various possibilities, and as a result the potential suggested by visible indicators remains under-exploited. This is to be calculated mathematically, based on the following generic formula, which could be deepened further by incorporating finer-grained referrals and parameters identified as essential during the study.
Average calculated potential vs actual harvested potential vs differential potential = under or over harnessed potential
Over harnessed potential to be analyzed in terms of negative or positive impact in achieving socioeconomic equilibrium, so should be recommended to calculate in case of under harvested potential as well.
The above study should reflect socioeconomic loss vs gain due to under or overutilization of the human resource.
I would highly appreciate your view on the above.
For a function, the sign of the second derivative (and, if it is zero, the even/odd index of the lowest higher-order derivative that does not vanish) is usually enough to detect whether an extreme point is a maximum, minimum, or saddle point, provided the first derivative is zero. For a functional (NOT a function), the Euler-Lagrange equation plays the role of the first "derivative". However, once the Euler-Lagrange equation is solved, how does one find out whether this function (the solution of the differential equation) corresponds to a minimum, maximum, or saddle "point" of the functional?
Unfortunately, the nature of the extremum of a functional is usually declared "beyond the scope" of most preliminary/introductory resources (I have not checked all). How difficult is that mathematics, and what are the prerequisites for understanding what is involved in finding the nature of a functional extremum?
Please note my knowledge on variational calculus, integral equations and transformations as well as group theory and advanced differential geometry is rudimentary.
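As a pointer rather than a full answer: the object playing the role of the second derivative is the second variation, and the usual tests are the Legendre and Jacobi conditions, covered in introductory calculus-of-variations texts (e.g. Gelfand and Fomin). In standard notation:

```latex
% Second variation of J[y] = \int_a^b F(x, y, y')\,dx at an extremal y,
% for a perturbation h with h(a) = h(b) = 0:
\delta^2 J[h] = \int_a^b \Bigl( F_{yy}\, h^2
              + 2\, F_{y y'}\, h\, h'
              + F_{y' y'}\, (h')^2 \Bigr)\, dx .

% Legendre's necessary condition for a weak minimum along the extremal:
F_{y' y'}\bigl(x, y(x), y'(x)\bigr) \ge 0 \quad \text{on } [a, b].
```

The strengthened Legendre condition (strict inequality) together with Jacobi's condition (no conjugate points in (a, b]) gives sufficiency for a weak minimum; for a maximum the signs flip, and an indefinite second variation indicates a saddle. The prerequisites are essentially ordinary differential equations and the basics of the calculus of variations, not advanced functional analysis.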
I am in the process of writing a paper for publication on behavioral mathematics. I would highly appreciate it if you could kindly provide links to some related publications/research work. This is needed to synchronize different entities based on behavioral inclusivity and exclusivity.
Let T denote the circle group, that is, the multiplicative group of all complex numbers with absolute value 1. Let f : T → T be a (sequentially) continuous map such that f(z²) = f(z)² for all z ∈ T. Then there is an integer k such that f(z) = z^k for all z ∈ T.
I am in the process of writing a paper for publication on evolving mathematics. I would highly appreciate it if you could kindly provide links to some related publications/research work. It will have different mutually inclusive models of migration, such as one based on exclusive triggers of the two (or many) models to be evolved, to facilitate merging/migration.
I am in the process of writing a paper for publication on behavioral mathematics. I would highly appreciate it if you could kindly provide links to some related publications/research work. It will take a top-down study approach based on a hypothetical application.
How can one predict the remaining useful life (RUL) of a used aeroengine, at the engine and component level?
Are there any standard mathematical relations for RUL?
I am on a quest to solve how a cell repairs itself through encoding-decoding of proteins. Is there any link to genetic algorithms to solve age old questions such as aging and how we heal?
I have calculated EVI2 using landsat 7 surface reflectance images and I am getting values above 1.25 (mathematical maximum for EVI2 based on the formula) in my study area (heavily vegetated). I get a range between 0.2 and 1.8. Many publications stipulate -1 to 1, especially based on MODIS data. I also did a check with Landsat 7 TOA images, and I get ranges from -1 to 1, as the publications say. Does this mean something is wrong with the Landsat 7 surface reflectance images, or should values above 1.25 still be okay?
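A quick arithmetic check may help separate a product problem from a scaling problem. With reflectances in [0, 1] the two-band formula is capped at 1.25, but if the surface-reflectance scale factor is not applied (an assumption to verify for your specific product, e.g. the 1/10000 scaling of Landsat 7 LEDAPS surface reflectance), the "+1" term becomes negligible and values approaching 2.5 become possible, which would explain a range like 0.2 to 1.8:

```python
# Two-band EVI2 (Jiang et al., 2008): 2.5 * (NIR - Red) / (NIR + 2.4*Red + 1)
def evi2(nir, red):
    return 2.5 * (nir - red) / (nir + 2.4 * red + 1.0)

# With reflectances correctly scaled to [0, 1], red = 0 maximizes EVI2,
# and 2.5 * nir / (nir + 1) <= 1.25, attained only at nir = 1.
capped = evi2(1.0, 0.0)            # 1.25, the mathematical maximum

# If raw scaled integers are fed in instead of reflectance, the
# bound no longer applies (hypothetical digital numbers below):
unscaled = evi2(4000.0, 500.0)     # exceeds 1.25
scaled = evi2(0.4, 0.05)           # same bands after applying 1/10000
```

So values above 1.25 are a strong hint that the scale factor was not applied before the index calculation, rather than a defect in the surface-reflectance product itself.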
The Computer Aided Design (CAD) subject deals with the backend mathematical calculations that happen in a 3D design.
I want to ask whether I can get good resources that explain the mathematical approach behind the Adaptive Model Predictive Control (AMPC) MATLAB toolbox.
I am not able to find the mathematical analysis behind this toolbox, even on the MathWorks webpage.
Mathematical programming is the best optimization tool, with many years of strong theoretical background. It has also been demonstrated that it can efficiently solve complex optimization problems on the scale of one million design variables, and the methods are very reliable. Besides, there is mathematical proof of the existence of the solution and the globality of the optimum.
However, in cases where there are discontinuities in the objective function, there can be problems because the problem becomes non-differentiable. Methods such as subgradients have been proposed to handle this. However, I cannot find many papers in the state of the art of engineering optimization on discontinuous optimization using mathematical programming. Engineers mostly use metaheuristics for such cases.
Can all problems with discontinuities be solved with mathematical programming? Is it easy to implement sub-gradients for large scale industrial problems? Do they work in non-convex problems?
A simple example of such a function is attached here.
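Since the attached function is not reproduced here, a toy stand-in may still be useful: a minimal subgradient-descent sketch on the nondifferentiable convex function f(x) = |x - 3| with a diminishing step size, which is the basic pattern the subgradient literature describes:

```python
import math

def f(x):
    """Nondifferentiable convex objective with its kink at x = 3."""
    return abs(x - 3.0)

def subgradient(x):
    # Any element of the subdifferential of |x - 3| is acceptable
    if x > 3.0:
        return 1.0
    if x < 3.0:
        return -1.0
    return 0.0          # 0 lies in the subdifferential at the kink

# Subgradient method with diminishing step 1/sqrt(k); unlike gradient
# descent, f need not decrease every iterate, so we track the best value.
x = 0.0
best = f(x)
for k in range(1, 2001):
    x -= subgradient(x) / math.sqrt(k)
    best = min(best, f(x))
```

This scales to large problems in the convex case, but for non-convex discontinuous objectives subgradient methods only offer local guarantees, which is presumably why metaheuristics remain popular there.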
Problem: 5 minutes of play are worth more than an hour of study
Knowing that: G = Game, S = Study, 1 hour = 60 min.
The mathematical formula that defines the statement is: 5 × G > 60 × S. The quantitative ratio of the minutes expressed in the formula can be simplified: 60 ÷ 5 = 12; therefore the simplified formula is: G > 12 × S.
So, 1 minute of play is worth more than 12 minutes of study. Or it can be said that game G is worth more than 12 times study S.
Therefore, the quantitative value of physical objects (or of spatial and / or temporal quantities) must be calculated differently from the qualitative value of human life experiences.
Explain why it is possible___________________________________________________________________
(Exercise based on Fausto Presutti's Model of PsychoMathematics).
I only have one sample. I want to find out whether there is a significant difference between BSED-Math students' perceptions of face-to-face and online mathematics learning.
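If the same students rated both modes, this is a paired design, and one hedged starting point is a paired-samples t-test (the scores below are invented placeholders for illustration; a Wilcoxon signed-rank test is the usual non-parametric fallback for Likert-type data):

```python
import numpy as np
from scipy import stats

# Hypothetical perception scores from the SAME students, once for
# face-to-face and once for online learning (paired observations).
face_to_face = np.array([4, 5, 3, 4, 4, 5, 3, 4, 5, 4], dtype=float)
online       = np.array([3, 4, 3, 3, 4, 4, 2, 3, 4, 3], dtype=float)

# One sample of students measured under two conditions -> paired t-test
t_stat, p_value = stats.ttest_rel(face_to_face, online)
```

A p-value below the chosen alpha (commonly 0.05) would indicate a significant difference between the two sets of perceptions; with only one group, an independent-samples t-test would not be appropriate.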
If it makes easier, assume that f is continuous on [0,∞).
Specifically, I know that there are everywhere-discontinuous solutions f of the given equation. I also know how to prove that, if f is continuous at 0, then f(x)=0 for all x∈ℝ. But I don't know what the assumption of continuity of f at a non-zero point gives.
In several discussions, I have often come across questions on the "mathematical meaning of the various signal processing techniques" such as the Fourier transform, short-time Fourier transform, Stockwell transform, wavelet transform, etc., as to what the real reason is for choosing one technique over another for certain applications.
Apparently, the ability of these techniques to overcome the shortcomings of each other in terms of time-frequency resolution, noise immunity, etc. is not the perfect answer.
I would like to know the opinion of experts in this field.
In recent years, many new heuristic algorithms have been proposed in the community. However, it seems that they all follow a similar concept and have similar benefits and drawbacks. Also, for large-scale problems with high computational cost (real-world problems), it would be inefficient to use an evolutionary algorithm. These algorithms produce different designs in single runs, so they appear unreliable. Besides, heuristics have no mathematical background.
I think that the hybridization of mathematical algorithms and heuristics will help to handle real-world problems. They may be effective in cases in which the analytical gradient is unavailable and the finite difference is the only way to take the gradients (the gradient information may contain noise due to simulation error). So we can benefit from gradient information, while having a global search in the design domain.
There are some hybrid papers in the state-of-the-art. However, some people think that hybridization is the loss of the benefits of both methods. What do you think? Can it be beneficial? Should we improve heuristics with mathematics?
In lands with ancient plain sediments, the courses of rivers change dramatically over time, owing to easy movement and to the rivers reaching an advanced geomorphic stage.
Are there mathematical arrays for digital processing, such as spectral or spatial enhancements or special filters, that can detect buried historical rivers?
- How was the importance of the zeta function discovered?
- Why does the zeta function contain so much information?
- What other areas of mathematics does it relate to ?
- Are there any books on the RH ?
- I've heard something about a connection with quantum physics – what's that about?
- Isn't there a connection with cryptography? Would a proof compromise the security of Internet communications and financial transactions?
- What are the Extended Riemann Hypothesis, Generalised RH?
In the definition of a group, several authors include the Closure Axiom but several others drop it. What is the real picture? Does the Closure Axiom still have importance once it is given that 'o' is a binary operation on the set G?
I am considering sending my research about Sophie Germain primes and its relation to primes of the form prime(a) + prime(b) + 1 = prime(c) and prime(b) - prime(a) - 1 = prime(c).
Mainly you have to send mathematical research, but research from other sciences is accepted too. I don't know the level of the contest, but my hope is that my research has a deep relation with the work of Sophie Germain.
Do you have any recommendations on how to present my work, and on how to write to those responsible for the prize?
For any given function f : [a, b] → R, there exists a sequence of polynomial functions converging to f at each point where f is continuous. (Note that we did not ask the convergence to be uniform).
This paper studies a proof of the Collatz conjecture for certain sets of sequences of odd numbers with infinitely many elements. These sets are then generalized to the set containing all positive odd integers. This extension is claimed to prove the full conjecture, using the concept of mathematical induction.
You can find the paper here:
Preprint: Collatz Theorem. Available from: https://www.researchgate.net/publication/330358533_Collatz_Theorem [accessed Dec 21, 2020].
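For readers who want to spot-check claims like this numerically, a small sketch of the Collatz map and its stopping time:

```python
def collatz_stopping_time(n):
    """Number of steps for n to reach 1 under n -> n/2 (even), 3n+1 (odd)."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# Verify that every odd starting value below 10^4 reaches 1.
# This is numerical evidence only, of course, not a proof.
all_reach_one = all(collatz_stopping_time(n) >= 0
                    for n in range(1, 10_000, 2))
```

Checking any claimed inductive family against a brute-force run like this is a cheap sanity test before reading the proof in detail.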
I am looking for a research paper about the mathematical or computational modelling of protein oxidation (caused by reactive oxygen species). I would really appreciate it if someone could help me with this.
I am looking for any book/article reference on the mathematical description of the zero normal flux boundary condition for the shallow water equations. My concern is, for a near-shore case, why it is obvious to have zero normal flux. Physically it makes sense: we have a near-shore case, and on the boundary there is no flow in the normal direction. But how can one explain it mathematically using the continuity equation? The continuity equation suggests that $\partial h/\partial t + u\,\partial h/\partial x = 0$. If we take steady flow, then it is clear to me how to get the zero normal flux condition. But what if the first term is not zero? Or do we say that at the boundary the flow is always steady?
A question about the loss of hyperbolicity in nonlinear PDEs: when complex eigenvalues appear, what is the effect on the flow? I understand that we do not have general existence results in this case, but is it only the mathematical tools that are lacking, or can we exhibit physical phenomena of instability?
Given the presented scatter plot, it looks like there is a relationship between X and Y in my data. Unfortunately, simple nonlinear curves cannot describe this relationship. I guessed some equations like Y = a*X^b + c and Y = a*exp(b*lnX) that might describe the relationship, but it seems that they are not the perfect ones.
I am able to do the analysis in MATLAB, SPSS and Excel if you have any suggestion to solve the problem.
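One observation: Y = a*exp(b*lnX) simplifies algebraically to a*X^b, so the two guesses differ only by the constant c. A hedged sketch of fitting the three-parameter power law with SciPy (the synthetic x_data/y_data below stand in for your data):

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b, c):
    """Three-parameter power law: Y = a * X^b + c."""
    return a * np.power(x, b) + c

# Synthetic data with known parameters (a=2, b=0.5, c=1) plus noise;
# replace x_data and y_data with the real measurements.
rng = np.random.default_rng(1)
x_data = np.linspace(1.0, 10.0, 50)
y_data = 2.0 * x_data**0.5 + 1.0 + rng.normal(0.0, 0.05, x_data.size)

# p0 gives the solver a reasonable starting point
params, covariance = curve_fit(model, x_data, y_data, p0=(1.0, 1.0, 0.0))
a_hat, b_hat, c_hat = params
```

If the fit still looks systematically off, the residual pattern (not just R²) usually suggests whether an extra term, a different asymptote, or a piecewise model is needed.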
I want to determine the success rate of a personnel selection instrument (interview, assessment center, ...) depending on the validity of the instrument itself, the selection ratio, and the base rate.
Thanks in advance for your answers!
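A sketch of one common model for exactly this question (the Taylor-Russell setup): treat predictor and criterion as standard bivariate normal with correlation equal to the validity, derive cutoffs from the selection ratio and base rate, and compute the success rate among those selected. The example values below are assumptions for illustration:

```python
from scipy.stats import norm, multivariate_normal

def success_rate(validity, selection_ratio, base_rate):
    """Fraction of SELECTED candidates who turn out to be successful,
    under a bivariate-normal model of predictor X and criterion Y."""
    x_cut = norm.ppf(1.0 - selection_ratio)   # predictor cutoff
    y_cut = norm.ppf(1.0 - base_rate)         # criterion cutoff
    joint = multivariate_normal(mean=[0.0, 0.0],
                                cov=[[1.0, validity],
                                     [validity, 1.0]])
    # P(X > x_cut, Y > y_cut) via inclusion-exclusion on the joint CDF
    p_both = (1.0 - norm.cdf(x_cut) - norm.cdf(y_cut)
              + joint.cdf([x_cut, y_cut]))
    return p_both / selection_ratio

# With zero validity the instrument adds nothing: success = base rate
baseline = success_rate(0.0, 0.3, 0.5)
# A validity of 0.5 should lift the success rate above the base rate
improved = success_rate(0.5, 0.3, 0.5)
```

The classic Taylor-Russell tables tabulate exactly this quantity; the Naylor-Shine tables answer the related question of the expected criterion score of those selected.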
If we are given that (x-2)(x-3)=0 and 0·0=0, then we can conclude that x=2 and x=3 simultaneously, because x-2=0 and x-3=0 holding simultaneously is consistent with 0·0=0. However, this leads to a contradiction, namely x=2=3. So, generally, we exclude this option while finding the roots of an equation and consider that only one of the factors can be zero at a time, i.e., all the roots are mutually exclusive. In other words, we consider 0·0 to be not equal to 0.
Now, if we are given that x=0 and asked to find x^2, then certainly we conclude that x^2=0. It is trivial to observe that this conclusion is reached through the following process: x^2 = x·x = 0·0 = 0. That is, we need to consider 0·0=0 to make this simple conclusion.
Therefore, while in the first case we have to consider 0·0 not equal to 0 to avoid a contradiction, in the second case we have to consider 0·0=0 to reach the conclusion. So the question arises whether 0·0 is equal to 0 or not. As far as I know, mathematical truths are considered universal. However, in the present discussion it appears to me that whether 0·0 is 0 or not is used as per requirement. Is that legitimate in mathematics?
My dear friends, I am asking whether some of your students are interested in applying for a postdoctoral position in China with me. Here are the link and details!
How does one get access to the Mizar Mathematical Library (MML)? This refers to the Mizar system for the formalisation and automatic checking of mathematical proofs, based on Tarski-Grothendieck set theory (mizar.org).
Any decision-making problem when precisely formulated within the framework of mathematics is posed as an optimization problem. There are so many ways, in fact, I think infinitely many ways one can partition the set of all possible optimization problems into classes of problems.
1. I often hear people label metaheuristic and heuristic algorithms as general algorithms (I understand what they mean), but I wonder: can we apply these algorithms to arbitrary optimization problems from any class? More precisely, can we adjust/re-model any optimization problem in a way that permits us to attack it with the algorithms in question?
2. Then I thought: if we assume the answer to 1 is yes, then by extending the argument we could also reformulate any given problem to be attacked by any algorithm we desire (of course, at a cost), and then it is just a useless tautology.
I'm looking for different insights :)
I am now struggling with a question.
Let's assume there is a given line or a given arbitrary function defined on the z=0 plane. Now I twist the plane into a nonlinear 3D surface that can be represented by any given continuous and differentiable equations. How can I represent this line or function by analytical equations now?
You could think of this as "a straight line on a waving flag".
Much appreciated if you have any idea or suggested publications.
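One hedged way to make this concrete: if the deformed plane is available as a parametric surface S(u, v), then any curve v = g(u) drawn in the flat sheet becomes the space curve t ↦ S(t, g(t)) by simple substitution. A sketch with an assumed "waving flag" surface z = sin(u):

```python
import numpy as np

def surface(u, v):
    """Assumed parametrization S: R^2 -> R^3 of the waving flag."""
    return np.array([u, v, np.sin(u)])

def embedded_line(t, m=2.0, c=1.0):
    """The flat-plane line v = m*u + c pushed onto the surface."""
    return surface(t, m * t + c)

# Sample the embedded curve along the parameter t
ts = np.linspace(0.0, 2.0 * np.pi, 100)
curve = np.array([embedded_line(t) for t in ts])
```

The caveat is that this treats (u, v) as material coordinates of the sheet; if the physical deformation must be isometric (no stretching, as for real cloth), the parametrization itself has to be arc-length preserving, which constrains the admissible S.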
Can you help me create a sinc-type source in ADS?
I found a mathematical function that plays this role (picture 1), but I do not know how to use it.
Thank you in advance.
NO. No one on Earth can claim to "own the truth" -- not even the natural sciences. And mathematics has no anchor on Nature.
With physics, the elusive truth becomes the object itself, which physics trusts using the scientific method, as fairly as humanly possible and as objectively (friend and foe) as possible.
With mathematics, on the other hand, one must trust using only logic, and the most amazing thing has been how much Nature as seen by physics (the Wirklichkeit) follows the logic as seen by mathematics (without necessarily using Wirklichkeit), and vice versa. This implies that something is true in Wirklichkeit iff (if and only if) it is logical.
Also, any true rebuffing of a "fake controversy" (i.e., fake because it was created by the reader willingly or not, and not in the data itself) risks coming across as sharply negative. Thus, rebuffing of truth-deniers leads to ...affirming truth-deniers. The semantic principle is: before facing the night, one should not counter the darkness but create light. When faced with a "stone thrown by an enemy" one should see it as a construction stone offered by a colleague.
But everyone helps. The noise defines the signal. The signal is what the noise is not. To further put the question in perspective, in terms of fault-tolerant design and CS, consensus (aka,"Byzantine agreement") is a design protocol to bring processors to agreement on a bit despite a fraction of bad processors behaving to disrupt the outcome. The disruption is modeled as noise and can come from any source --- attackers or faults, even hardware faults.
Arguing, in turn, would risk creating a fat target for bad-faith or for just misleading references, exaggerations, and pseudo-works. As we see rampant on RG, even on porous publications cited as if they were valid.
Finally, arguing may bring in the ego, which is not rational and may tend to strengthen the position of a truth-denier. Following Pascal, people tend to be convinced better by their own-found arguments, from the angle that they see (and there are many angles to every question). Pascal thought that the best way to defeat the erroneous views of others was not by facing it but by slipping in through the backdoor of their beliefs. And trust is higher as self-trust -- everyone tends to trust themselves better and faster, than to trust someone else.
What is your qualified opinion? This question considered various options and offers a NO as the best answer. Here, to be clear, "truth-denial" is to be understood as one's own "truth" -- which can be another's "falsity", or not. An impasse is created, how to best solve it?
Could somebody please elaborate on how to calculate exergy destruction in kW? From Aspen HYSYS I obtained the mass exergy in kJ/kg, and I don't know how to compute the destruction from it in HYSYS. If somebody has a mathematical calculation with an example, please share it with me. I know how to do it in Aspen Plus, but I need a mathematical or Aspen HYSYS solution.
Thanks in anticipation.
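On the units question alone, a minimal sketch (all stream values below are hypothetical, not from any particular HYSYS case): specific exergy in kJ/kg times mass flow in kg/s is already a rate in kW, since 1 kJ/s = 1 kW. For an adiabatic unit with no work interaction, the destruction then follows from the exergy balance:

```python
# Hypothetical single-stream unit (e.g. a valve): exergy destruction
# is inlet exergy rate minus outlet exergy rate when there is no
# heat or work crossing the boundary.
m_dot = 2.5                  # kg/s, stream mass flow
ex_in = 120.0                # kJ/kg, specific (mass) exergy at inlet
ex_out = 95.0                # kJ/kg, specific (mass) exergy at outlet

Ex_in_kW = m_dot * ex_in     # kJ/s = kW
Ex_out_kW = m_dot * ex_out   # kJ/s = kW
Ex_dest_kW = Ex_in_kW - Ex_out_kW
```

For units with heat or work streams, the same balance simply gains the corresponding exergy-of-heat and power terms; the kJ/kg values HYSYS reports only need to be multiplied by the stream mass flows.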
As we know, the computational complexity of an algorithm is the amount of resources (time and memory) required to run it.
If I have an algorithm that implements mathematical equations, how can I estimate or calculate the computational complexity of these equations: the number of computational operations and the amount of memory used?
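One practical approach is to count the dominant operations as a function of the input size and match the growth rate to a complexity class. A toy sketch contrasting naive polynomial evaluation with Horner's rule:

```python
def naive_ops(n):
    """Operation count for evaluating sum of a_k * x^k term by term:
    term k costs k multiplications plus 1 addition -> O(n^2) total."""
    return sum(k + 1 for k in range(n + 1))

def horner_ops(n):
    """Horner's rule: one multiply and one add per coefficient -> O(n)."""
    return 2 * n

# Empirical growth check: doubling n should roughly quadruple the
# naive count (quadratic growth) but only double Horner's (linear).
ratio = naive_ops(2000) / naive_ops(1000)
```

Memory is estimated the same way: count the live intermediate values as a function of the input size. Timing the implementation at several sizes and fitting the growth curve is the usual empirical cross-check.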
The VPL formula can be given here as:
Can anyone please explain whether the value of the term (Pmin - P) in the above formula is in degrees or radians?
I have data in which the relationship between two parameters seems to fit a model with two oblique asymptotes. Does anyone have an idea what type of function I should use? Please find attached a screenshot of the data. I appreciate any help.
In the education discipline, several leadership theories have been discussed, but no mathematical foundations are available to estimate them. More specifically, how can I differentiate (in terms of mathematical expressions) the several leadership styles in decision-making problems, so that I can identify the better one and so that decision makers would be comfortable applying it to their industrial/managerial/organizational situation? We may assume that the problem is part of fuzzy decision making / intelligent systems / artificial intelligence / soft systems.
The leaders are the manager of an industry/organization/corporate house, a government ministry, the agents of a marketing system, or the representatives of customers of a particular product in a supply-chain management problem.
Besides rigorous proofs of Fermat's last theorem, there are relatively simple approaches to arrive at the same conclusion. One of the simple proofs is by Pogorsky, available at http://vixra.org/abs/1209.0099.
There is also a website called www.fermatproof.com which gives an alternative proof, and also a review paper by P. Schrorer at : http://www.occampress.com/fermat.pdf.
Another numerical experiment was performed by me around eight years ago (2006), which showed that if we define k=(a^n+b^n)/c^n, where a, b, c form a Pythagorean triple (like 3, 4, 5 or 6, 8, 10), then k=1 if and only if n=2. It seems that we can generalize Fermat's last theorem not only for n>2 but also for n<2. But of course my numerical experiment is not intended to be a rigorous proof. Our paper is available at http://vixra.org/pdf/1404.0402v1.pdf, based on the 2006 version of the article.
So, do you know of other simple proofs of Fermat's Last Theorem? Your comments are welcome.
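The numerical observation above is easy to reproduce. For a Pythagorean triple 0 < a, b < c with a² + b² = c², the ratio k(n) = (aⁿ + bⁿ)/cⁿ = (a/c)ⁿ + (b/c)ⁿ is a strictly decreasing function of n (each base is below 1), so it crosses 1 exactly once, at n = 2 — which is the content of the experiment, though of course not a proof of the theorem itself:

```python
# For a Pythagorean triple (a, b, c), examine k(n) = (a**n + b**n) / c**n.
# Since a/c < 1 and b/c < 1, k is strictly decreasing in n,
# so k(n) = 1 at exactly one point: n = 2.
def k(a, b, c, n):
    return (a ** n + b ** n) / c ** n

for n in [1, 1.5, 2, 2.5, 3]:
    print(n, k(3, 4, 5, n))  # k > 1 for n < 2, k = 1 at n = 2, k < 1 for n > 2
```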
What is the importance of the golden ratio in nature and mathematics? Why is the golden ratio sometimes called the "divine proportion" by mathematicians?
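One concrete mathematical fact behind the golden ratio's ubiquity: ratios of consecutive Fibonacci numbers converge to φ = (1 + √5)/2 ≈ 1.618, which is one reason φ appears wherever Fibonacci-like growth does (e.g. phyllotaxis and branching patterns). A minimal numerical check:

```python
import math

# Ratios of consecutive Fibonacci numbers converge to the golden ratio phi.
phi = (1 + math.sqrt(5)) / 2
a, b = 1, 1
for _ in range(30):
    a, b = b, a + b
print(b / a)  # very close to phi after 30 steps
```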
Six Nobel Prizes are awarded each year, one in each of the following categories: literature, physics, chemistry, peace, economics, and physiology & medicine. However, mathematics, a subject mankind cannot do without, is a strange omission and has remained excluded to this day. The same is true of accounting. Since 1901, doyens such as Albert Einstein, Marie Curie, and Ernest Hemingway have been honored with the prestigious Nobel. Do you think it is time to rethink this?
Kindly allow me to ask a very basic but important question. What is the basic difference between (i) scientific disciplines (e.g. physics, chemistry, botany, or zoology) and (ii) branches of mathematics (e.g. calculus, trigonometry, algebra, and geometry)?
I feel that objective knowledge of the basic or primary difference between science and mathematics is useful for imparting accurate and objective knowledge of both science and mathematics (and of their role in technological invention and expansion).
Let me give my answer to start this debate:
Each branch of mathematics invents and uses a complementary, harmonious, and/or interdependent set of valid axioms as core first principles in the foundation for evolving and/or expanding an internally consistent paradigm for that branch (e.g. calculus, algebra, or geometry). If the foundation comprises even a few inharmonious or invalid axioms, those invalid axioms create internal inconsistencies in the discipline (i.e. the branch of mathematics). Internal consistency can be restored by fine-tuning the inharmonious axioms or by inventing new valid axioms to replace the invalid ones.
Each of the scientific disciplines must discover new falsifiable basic facts, prove them, and use such proven scientific facts as first principles in its foundation, where a scientific fact is a falsifiable discovery that has withstood vigorous efforts to disprove it. We know what happened when one of the first principles (i.e. that the Earth is static at the centre) was flawed.
Examples of basic proven scientific facts include: the Sun is at the centre; Newton's three laws of motion; there exists a force of attraction between any two bodies having mass; the force of attraction decreases as the distance between the bodies increases; and increasing the mass of the bodies increases the force of attraction. Notice that I intentionally did not say directly or inversely proportional.
These kinds of first principles provide the foundation for expanding the BoK (Body of Knowledge) of each discipline. The purpose of research in any discipline is to add more and more new first principles, and also more and more theoretical knowledge built on those first principles, such as new theories, concepts, methods, and other facts, thereby expanding the BoK of the discipline's prevailing paradigm.
I want to find an answer to this question because software researchers insist that computer science is a branch of mathematics, and so they have been insisting that it is acceptable to blatantly violate scientific principles when acquiring scientific knowledge (i.e. knowledge that falls under the realm of science) that is essential for addressing technological problems in software, such as the software crisis and human-like computer intelligence.
If researchers in computer science insist that it is a branch of mathematics, I want to propose a compromise: the nature and properties of components for software, and the anatomy of CBE (component-based engineering) for software, were defined as axioms. Since those axioms are invalid, they resulted in an internally inconsistent paradigm for software engineering. I invented a new set of valid axioms by gaining valid scientific knowledge about components and CBE without violating scientific principles.
Even mathematics requires finding, testing, and replacing invalid axioms. I hope this compromise satisfies the computer scientists who insist that software is a branch of mathematics. It appears that software, or computer science, is a strange new hybrid of science and mathematics, which I want to understand better (this may be useful for solving other problems, such as human-like artificial intelligence).
I am doing a linear regression research assignment in which I have to study how mathematics scores and gender (independent variables) affect natural history scores (dependent variable). I am not sure whether I am interpreting gender's dummy variable (female = 1, male = 0) correctly in the coefficients table.
Am I right in interpreting that females score, on average, 10.9 points lower in natural history than males, holding mathematics scores constant?
Thank you in advance.
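Yes, with female = 1 and male = 0, a coefficient of −10.9 means females score on average 10.9 points lower than males at the same mathematics score. A minimal sketch with synthetic data (the −10.9 effect, the 0.6 slope, and the score ranges are assumptions chosen only to mirror your setup) shows the coefficient recovering exactly this interpretation:

```python
import numpy as np

# Synthetic data: natural history = 20 + 0.6*math - 10.9*female + noise.
rng = np.random.default_rng(1)
n = 500
math_score = rng.uniform(40, 100, n)
female = rng.integers(0, 2, n)  # dummy: 1 = female, 0 = male
nat_hist = 20 + 0.6 * math_score - 10.9 * female + rng.normal(0, 3, n)

# OLS via least squares: columns are [intercept, math score, female dummy].
X = np.column_stack([np.ones(n), math_score, female])
beta, *_ = np.linalg.lstsq(X, nat_hist, rcond=None)
print(beta)
# beta[2] is the average female-minus-male difference in natural history
# scores at the same mathematics score (close to -10.9 here).
```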
In fact, it is the fundamental defects in the work of "quantitative cognition of infinite things" that have been troubling people for thousands of years. But I am taking a different path from many people.
1. I analyse and study the defects in the existing classical infinite theory system, disclosed by the suspended "infinite paradox symptom clusters" in analysis and set theory, from different perspectives and with a different conclusion: abandon the unscientific (mistaken) concepts of "potential infinite and actual infinite" in the existing classical infinite theory system and adopt the new concepts of "abstract infinite and the carriers of abstract infinite"; in particular, replace the unscientific (mistaken) "actual infinite" concept with the new concept of "carriers of abstract infinite", and develop a new infinite theory system with "mathematical carriers of abstract infinite and their related quantitative cognizing operation theory system". From now on, human beings need no longer be entangled in "potential infinite versus actual infinite", but can spare no effort to develop "infinite carrier theory" and a comprehensive, scientific cognition of the various contents related to the "mathematical carrier of the abstract infinite concept".
2. Abstract concept and abstract concept carrier theory, the new infinite theory system, carrier theory, the infinite mathematical carrier gene, the infinite mathematical carrier scale, ... The development of this basic theory determines the construction of "quantum mathematics" based on the new infinite theory system.
3. Two days ago I uploaded to RG 《On the Quantitative Cognitions to "Infinite Things" (IX) ------- "The Infinite Carrier Gene", "The Infinite Carrier Measure" And "Quantum Mathematics"》, introducing "quantum mathematics". My work is not about fixing tiny defects here and there (such as the CH theory above) but about carrying out quantitative cognitions of all kinds of infinite mathematical things with "quantum mathematics", based on the new infinite theory system.
According to my studies (presented in some of my papers), the harmonic series is a vivid modern example of Zeno's paradox. It is an important case in the research on the syndrome of infinity-related paradoxes in present set theory and analysis, which are based on the unscientific classical infinite theory system.
All the existing (suspended) infinity-related paradoxes in present set theory and analysis are typical logical contradictions.
The revolution in the foundation of the infinite theory system determines the construction of "quantum mathematics" based on the new contents discovered in the new infinite theory system: the infinite mathematical carrier, the infinite mathematical carrier gene, the infinite mathematical carrier measure, ... in the new infinite carrier theory. So the "quantum mathematics" mentioned in my paper is different from quantum logic and quantum algebras.
According to my studies (presented in some of my papers), "non-standard analysis and transfinite numbers" are both infinity-related constructs of the unscientific classical infinite theory system based on the trouble-making "potential infinite and actual infinite": non-standard analysis is equivalent to standard analysis, while the transfinite is an odd idea of "more infinite, more more infinite, more more more infinite, ...".
Mathematics differs from the sensory sciences in that it draws its subject matter from structural construction and the abstraction of quantities, while the other sciences rely on the description of actually existing sensory objects.
What do you think?
How long does it take for a journal indexed in the Emerging Sources Citation Index to get an Impact Factor? What is the future of journals indexed in the Emerging Sources Citation Index?
I am interested in the personalization of learning based on profiles, more specifically in mathematics.
Do you know any relevant references?
The fact that an electron can have only discrete energy levels is obtained by solving the Schrödinger equation with boundary conditions, which is a mathematical derivation.
Physically, what makes the electron possess only certain energies?
Or is there any physical insight, explanation, or intuition that can arrive at the same conclusion (without the math) that an electron can have only discrete energy levels inside a potential well?
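The usual physical picture is standing waves: the wavefunction must vanish at the walls, so only waves whose half-wavelengths fit a whole number of times into the well survive, and each allowed wavelength corresponds to one allowed energy. A minimal numerical sketch (units with ħ = m = 1 and well width L = 1, chosen only for illustration) makes this visible: discretizing the infinite square well and diagonalizing the Hamiltonian yields exactly the discrete n²-spaced spectrum:

```python
import numpy as np

# Infinite square well, finite differences: H = -(1/2) d^2/dx^2 with
# psi(0) = psi(L) = 0 enforced by truncating the grid at the walls.
N, L = 500, 1.0
dx = L / (N + 1)
H = (np.diag(np.full(N, 1.0 / dx**2))          # diagonal of -(1/2) d^2/dx^2
     + np.diag(np.full(N - 1, -0.5 / dx**2), 1)
     + np.diag(np.full(N - 1, -0.5 / dx**2), -1))

E = np.linalg.eigvalsh(H)[:4]                  # four lowest energy levels
exact = np.array([1, 4, 9, 16]) * np.pi**2 / 2  # E_n = n^2 pi^2 / (2 L^2)
print(E)  # matches the exact discrete levels: only certain energies appear
```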