Questions related to Mathematics
Pursuing research in the mathematical sciences through physical problem-solving is an art. More science and engineering students can be attracted to learning mathematics in the process, and to applying mathematical tools to solving real-world problems.
I mean something strictly mathematical and not an algorithmic routine.
The function f(n) produces
I need the function f(n) to remove the zeros and produce:
1, 2, 5, 1, 3, 8, 9, ...
Fermat's last theorem was finally solved by Wiles using mathematical tools that were wholly unavailable to Fermat.
Do you believe
A) that we have not actually solved Fermat's theorem the way it was supposed to be solved, and that we must still look for Fermat's original solution, still undiscovered, or
B) that Fermat actually made a mistake, and that his "wonderful" proof, which he did not have the necessary space to fully set forth, was in fact mistaken or flawed, and that we were obsessed for centuries with his last "theorem" when in fact he himself had not really proved it at all?
Why does the absorbance vary in a linear fashion, whereas the transmittance varies in an exponential fashion?
Can this be explained mathematically/intuitively or are these results based on experimentation?
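It can be explained mathematically rather than only experimentally: each thin slab of sample absorbs a fixed *fraction* of the light entering it, so intensity decays multiplicatively (exponentially) with path length, while absorbance, defined as A = -log10(T), undoes the exponential and is therefore linear. A small numerical sketch (the absorptivity and concentration values are made up for illustration):

```python
import numpy as np

# Beer-Lambert sketch: T is exponential in path length, A = -log10(T) is linear.
epsilon, conc = 0.8, 1.0        # illustrative molar absorptivity and concentration
lengths = np.linspace(0, 5, 6)  # path lengths

T = 10.0 ** (-epsilon * conc * lengths)  # transmittance: exponential decay
A = -np.log10(T)                          # absorbance: recovers epsilon*c*l, linear

# Second differences of a linear sequence vanish; those of T do not.
print(np.allclose(np.diff(A, 2), 0))
```

The same reasoning in differential form: dI = -k I dx gives I = I0 e^(-kx), and taking the logarithm linearizes it.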
How can I prove mathematically that an optimal feasible solution is found at a boundary point?
I also want to ask why the feasible region must be a convex set. Is there a mathematical proof of this?
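For intuition (not a proof), a linear objective over a convex polyhedron attains its optimum, when one exists, at an extreme point; a quick sketch with SciPy's linprog on a toy LP (the constraint values below are made up) shows the solver landing exactly on the vertex where two constraints are active:

```python
import numpy as np
from scipy.optimize import linprog

# Toy LP: maximize x + y subject to x + 2y <= 4, 3x + y <= 6, x, y >= 0.
# linprog minimizes, so the objective is negated.
res = linprog(c=[-1, -1], A_ub=[[1, 2], [3, 1]], b_ub=[4, 6],
              bounds=[(0, None)] * 2)

# The optimum is the vertex where x + 2y = 4 and 3x + y = 6 intersect,
# i.e. (1.6, 1.2): a boundary point, not an interior one.
print(res.x)
```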
- Hello. I am struggling with a problem. I can measure two ratios of three independent normal random variables with nonzero means and variances: Z1=V1/V0, Z2=V2/V0, where V0~N(m0,s0), V1~N(m1,s1), V2~N(m2,s2). These are measurements of the speeds of a vehicle. Now I need to estimate the means and variances of these ratios. We can see that a ratio of normals is of Cauchy type, with no mean or variance, but it has analogues in the form of location and scale. Are there mathematical relations between mean and location, and between variance and scale? Can we approximate the Cauchy by a normal? I have heard that if we limit the estimated value we can obtain a mean and variance.
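One standard route, when the denominator's mean is far from zero (as for vehicle speeds), is the delta-method approximation: the ratio is then approximately normal with mean m1/m0 and variance s1^2/m0^2 + m1^2 s0^2/m0^4. A simulation sketch with made-up parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)
m0, s0 = 10.0, 0.5   # denominator well away from zero: ratio behaves ~normally
m1, s1 = 5.0, 0.4

V0 = rng.normal(m0, s0, 200_000)
V1 = rng.normal(m1, s1, 200_000)
Z = V1 / V0

# Delta-method (first-order Taylor) approximation of the ratio's moments;
# valid only when P(V0 near 0) is negligible, otherwise the exact moments
# do not exist (the Cauchy-type heavy tails take over).
mean_approx = m1 / m0
var_approx = s1**2 / m0**2 + m1**2 * s0**2 / m0**4

print(Z.mean(), mean_approx)
print(Z.var(), var_approx)
```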
At the moment, I have a nonlinear dataset with X- and Y-axis values. I want to come up with a nonlinear equation that represents most of the values of this function, without using a moving average filter.
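One common option is a least-squares fit of an assumed model family instead of any smoothing filter. A minimal sketch with synthetic data (the cubic model and noise level are assumptions for illustration; pick the family that matches your data):

```python
import numpy as np

# Least-squares fit of a nonlinear (here: cubic polynomial) model to
# scattered (x, y) data, with no moving-average smoothing involved.
rng = np.random.default_rng(1)
x = np.linspace(0, 4, 80)
y = 0.5 * x**3 - 2 * x**2 + x + rng.normal(0, 0.3, x.size)  # synthetic data

coeffs = np.polyfit(x, y, deg=3)   # least-squares cubic fit
y_hat = np.polyval(coeffs, x)

rmse = np.sqrt(np.mean((y - y_hat) ** 2))
print(coeffs, rmse)
```

For non-polynomial models, scipy.optimize.curve_fit does the same job for an arbitrary parametric function.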
I have a problem. Measurements show the opposite of what convention assumes. It occurred in geotechnics, but could affect all material modelling branches.
I tested soil specimens. Convention interprets materials as systems where force is applied (input) and deformation is created (output). So our task is: decode how much deformation a certain loading (force) will generate.
After 6 years of testing, I noticed the convention is misleading. Reaction force behaves as a function of deformation, not the other way round. Stiffness hysteresis loop shape, size and position stabilize within deformation amplitudes. You can control the shape, size and position of stiffness loops using deformation amplitude. All applied deformation values have finite answers, unlike the "infinite displacement" paradox...
It's a big problem. All software is designed to model deformation as a function of applied force, but in reality force (reaction) behaves as a function of deformation: F=f(U), not U=f(F). It could be that we are stuck in a paradigm where deformation is modelled as a function of force.
The observations (empirical evidence) pointed me to an abandoned theory from 40 years ago (strain-space plasticity, by P. J. Yoder, 1980). His theory seems to be not only compatible with the observed physical properties, but also compatible with GPU-parallel computation (there were no GPUs in the '80s... so "parallel spring systems" in FEM caught no one's attention).
So, we have something that is both:
1. Potentially more physically correct
2. For the first time, makes elasto-plastic FEM supercomputer compatible.
I am stuck building robots for industrial projects at the faculty, for tests which are meant to provide "quick profit" to the faculty. "Fundamental" research is not funded. I tried applying for a radical-research EU grant... the topic is way too radical for them.
All observations were made in spare time: evenings, weekends, at times using life savings... I tried showing the test results to renowned experts. They become red in the face, angry, and say "I have not seen anything like it". After an hour of questions, they find no flaws in the testing machines. And leave, never to be heard from again.
The theory of P. J. Yoder was defended in public defenses multiple times in the '80s. It seems "mathematically equivalent", as in proven able to do "the same" as the convention does, without anyone ever testing what such a "reversal of coordinate space" (strain envelopes instead of stress envelopes) would imply for the interpretation of material properties. No one found flaws in it mathematically; no one ever proved it wrong. But it was forgotten, ignored and abandoned.
I tried asking industry for an opinion too. Industry asks for code compatible with existing software (return on investment), and I alone cannot code a full software package. Frankly, I would rather keep testing and try to prove my assumptions wrong. But the more I test, the more anomalies and paradoxes are observed, exposed and resolved on the topic.
What is the "antidote" in such a situation? Tests show the convention wrong; nobody finds any mistakes; which leads to silence and being ignored.
I have encountered a problem with the Zernike index while studying Zernike polynomials. The problem is mathematically equivalent to the following:
For each given n, let m = -n, -n+2, -n+4, ..., n.
Given j, for example j=16, how do I calculate n and m?
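Under the OSA/ANSI single-index convention, j = (n(n+2) + m)/2, and this can be inverted in closed form: n = ceil((-3 + sqrt(9 + 8j))/2), then m = 2j - n(n+2). A sketch (note that other orderings, e.g. Noll or Fringe, index the polynomials differently):

```python
import math

def osa_index_to_nm(j: int) -> tuple[int, int]:
    """Invert the OSA/ANSI single index j = (n*(n+2) + m) / 2.

    Other conventions (Noll, Fringe) order the Zernike polynomials
    differently; adapt the formulas accordingly.
    """
    n = math.ceil((-3 + math.sqrt(9 + 8 * j)) / 2)  # radial order
    m = 2 * j - n * (n + 2)                          # azimuthal frequency
    return n, m

print(osa_index_to_nm(16))  # (5, -3) under the OSA/ANSI convention
```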
I was working to check the validity of a mathematical equation. In doing so, I have obtained a large data set (>50,000 values) experimentally. As the mathematical equation gives only a single value, I was wondering whether there is any way to compare that data set to the mathematical equation. Based on that comparison, I am willing to assign a constant that, when applied (+, -, x, /) to the mathematical equation, results in values justifying the experimental data set. There could be more than one constant, as the range of the data set is quite large compared to the values obtained through the mathematical equation.
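One simple starting point: for a single additive or multiplicative constant, the least-squares-optimal value has a closed form (the mean of the residuals, or the ratio of means). A sketch with synthetic data standing in for the real measurements (the model value and the factor of 2 are made up):

```python
import numpy as np

# The equation predicts one value `model`; the experiment gives many.
# Fit one additive and one multiplicative correction constant by least squares.
rng = np.random.default_rng(2)
model = 3.7                                       # hypothetical single-valued prediction
data = model * 2.0 + rng.normal(0, 0.2, 50_000)   # synthetic measurements

c_add = data.mean() - model   # least-squares additive constant: model + c_add
c_mul = data.mean() / model   # least-squares multiplicative constant: model * c_mul

print(c_add, c_mul)
```

If the data set's spread is large, comparing the residual spread to the measurement noise tells you whether a single constant is enough or the data should be split into regimes, each with its own constant.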
I have a basic mathematical question.
I designed a controller that can guarantee that the variable z remains zero or bounded for all time, where z = s - r. From z being zero or bounded, can I conclude that s and r are also bounded?
Any help will be appreciated.
Knowing the orthometric height, latitude and longitude of one point, and the reduced level, latitude and longitude of a second point, what is the mathematical expression to compute the orthometric correction to be applied to the reduced level of the second point to get its corresponding orthometric height?
Currently the only proof of Fermat's Last Theorem is very complex and certainly not the proof that Fermat had in mind.
I wonder if it is possible to use a method that drastically
simplifies Wiles' theory, a theory that has received many honors from the entire mathematical community.
I want to solve a partial differential equation in terms of x and t (space and time). As far as I know, one of the most useful methods for solving PDEs is separation of variables; well-explained examples of this method are the wave equation, the heat equation, diffusion, and so on.
The wave equation is u_tt = c^2 u_xx;
in other words, the second derivative of displacement with respect to time equals the second derivative of displacement with respect to space multiplied by a constant, or vice versa.
However, my equation is not like that, and the derivatives are multiplied by each other, for example: u_xx = (1 + u_x) u_tt.
I am wondering how to solve this equation.
I will be thankful to hear any idea.
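Separation of variables generally fails for such quasilinear equations (the factor (1 + u_x) couples space and time). A pragmatic fallback is numerical time-stepping: rearranging to u_tt = u_xx / (1 + u_x) allows an explicit finite-difference scheme. The sketch below is illustrative only (made-up initial condition, fixed u = 0 boundaries, no stability or accuracy analysis):

```python
import numpy as np

# Explicit finite-difference sketch for u_xx = (1 + u_x) * u_tt,
# rearranged as u_tt = u_xx / (1 + u_x). Small amplitude keeps 1 + u_x > 0.
nx, nt = 101, 400
dx, dt = 1.0 / (nx - 1), 0.001
x = np.linspace(0, 1, nx)

u_prev = 0.05 * np.sin(np.pi * x)   # assumed initial displacement
u_curr = u_prev.copy()              # assumed zero initial velocity

for _ in range(nt):
    u_xx = (u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]) / dx**2
    u_x = (u_curr[2:] - u_curr[:-2]) / (2 * dx)
    u_tt = u_xx / (1.0 + u_x)       # the rearranged PDE
    u_next = u_curr.copy()          # endpoints stay at the boundary value 0
    u_next[1:-1] = 2 * u_curr[1:-1] - u_prev[1:-1] + dt**2 * u_tt
    u_prev, u_curr = u_curr, u_next

print(np.abs(u_curr).max())
```

Analytically, equations of this form are sometimes attacked with a hodograph or similarity transformation rather than separation of variables.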
How can we determine the mathematical properties of a high-dimensional benchmark problem?
Take, for example, the n-dimensional Rosenbrock function. It is a unimodal function with localized behaviour in 2 dimensions (we can tell this from the 2D plot). However, if the dimension is taken greater than 3, the function shows multimodal behaviour. In short, the mathematical characteristics may change as the dimension changes. How can we determine the mathematical characteristics of a function in different dimensions using a small number of analysis results obtained in the design space?
The n-dimensional Rosenbrock function is shared via the link below.
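One cheap probe for dimension-dependent multimodality is multi-start local optimization: if local searches started from different points converge to distinct minima, the function is multimodal in that dimension. For the Rosenbrock function, a second, local minimum is reported in the literature near (-1, 1, ..., 1) for moderate n; a sketch:

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):
    # n-dimensional Rosenbrock function
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

n = 6
print(rosenbrock(np.ones(n)))   # global minimum value 0 at (1, ..., 1)

# Start a local optimizer near the reported second basin and inspect
# where it converges; comparing outcomes across starts probes modality
# without any global analysis.
x0 = np.array([-1.0] + [1.0] * (n - 1))
res = minimize(rosenbrock, x0, method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-12, "maxiter": 20000})
print(res.x, res.fun)
```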
The French mathematician Pierre de Fermat (1601-1665), conjectured that the equation
x^n +y^n =z^n has no solution in positive integers x, y and z if n is a positive integer >= 3.
He wrote in the margin of his personal copy of Bachet's translation of Diophantus' Arithmetica: "I have discovered a truly marvellous demonstration of this proposition that this margin is too narrow to contain".
Many researchers believe that Fermat did not find a demonstration of his proposition, but some others think there is a proof and that Fermat's claim is right.
The search for a solution of the equation x^n + y^n = z^n split into two directions.
The first is oriented to searching for a solution for a specific value of the exponent n, and the second is more general, oriented to finding a solution for any value of the exponent n.
- The Babylonians (570-495 BC) studied the equation x^2+y^2=z^2 and found the solution (3,4,5).
- The Arabic mathematician Al-Khazin studied the equation x^3+y^3=z^3 in the X century, and his work is mentioned in a philosophical book by Avicenna in the XI century.
- A defective proof of FLT was given before 972 by the Arab mathematician Alkhodjandi.
- The Arab Mohamed Beha Eddin ben Alhossain (1547-1622) listed, among the problems remaining unsolved from former times, that of dividing a cube into two cubes (see the image of the Arabic manuscript from the British Museum; Problem No. 4, in red, at line 8 from the top).
- Fermat (1601-1665), Euler (1707-1783) and Dirichlet (around 1825) solved the equation for n = 4, 3 and 5 respectively.
- In 1753, Leonhard Euler presented a proof for x^3+y^3=z^3
- Fermat found a proof for x^4 + y^4 = z^4 using his famous "infinite descent". This method combines proof by contradiction with proof by backward induction.
- Dirichlet (in 1825) solved the equation x^5+y^5=z^5 .
- Sophie Germain (in 1823) proved a general result for primes p such that 2p+1 is also prime:
let p be prime with 2p+1 also prime; then x^p + y^p = z^p has no solution in positive integers x, y, z none of which is divisible by p.
- In the XIX century, E. Kummer continued the work of Gauss and innovated by using the numbers of cyclotomic fields, introducing the concept of "ideal prime factors".
- Andrew Wiles, a professor at Princeton University, provided an indirect proof of Fermat's conjecture in two articles published in the May 1995 issue of the Annals of Mathematics.
Andrew Wiles solved a high level problem in modular forms about elliptic curves and the consequence is a solution for FLT. Thanks to the results of Andrew Wiles, we know that Fermat’s Last Theorem is true.
I think this opens a space for mathematicians to search for proofs of FLT comprehensible to a normal student of mathematics, and maybe to find new concepts or ideas. This result should imply a direct proof of FLT.
In this paper, I would like to suggest a direct proof of FLT using mathematical concepts (parity of numbers, forward and backward induction, ...) and tools of Fermat's era.
This direct proof is comprehensible to a student of mathematics and to lovers of mathematics.
Ablation studies are quite common in the machine learning literature, especially in work on neural networks and deep learning. When a new architecture or method is proposed, the authors often perform an ablation study in which they remove the sub-components of their method that they feel are important, one at a time, and study how the performance of the method changes, to learn the actual importance of each sub-component.
I find this approach quite useful for research on optimization algorithms, specifically heuristic algorithms, whose properties may not be easy or even possible to prove mathematically.
However, I have only seen ablation studies, or the term "ablation study" (it might be called something else in other fields), in machine learning research. Would it be appropriate to include one in papers on mathematical optimization algorithms, or in other areas of research in general?
Physical insight can suggest a principle which in turn leads to mathematical modeling. Examples might include Newton extending the falling of an apple toward the Earth's center to the Moon falling around the Earth, and Einstein's free-falling elevator compared to the effect of gravity.
But mathematics can also give a powerful clue to a physical principle. Well known examples include Planck's 1900 or so work on energy packets implying discontinuous amounts of energy, and Dirac’s prediction of the positron.
From 2005 to 2008 studying networks, I found that the mean path length successfully scaled a lexical network, and the mathematical result implied some physical principles about the distribution of energy in a network, which by a circuitous route led to the idea of the principle of dimensional capacity around 2019.
History, consistent with my own experience, suggests physical principles can imply the mathematics and vice versa.
Your views? Do you have examples of each?
I need a journal published in the German language in the fields of mathematics and programming, indexed in Scopus Q1. Any suggestions?
I am interested in test anxiety and would like to dig into the seminal meta-analysis by Ray Hembree published in 1988. Does anyone have access to the bibliography file containing the list of studies included in the meta-analysis?
Any help would be greatly appreciated!
Meta-analysis (I have access)
Hembree, R. (1988). Correlates, causes, effects, and treatment of test anxiety. Review of Educational Research, 58(1). https://doi.org/10.3102/00346543058001047
Bibliography (I do not have access)
Hembree, R. (1987). A bibliography of research on academic test anxiety. Adrian, MI: Adrian College, Department of Mathematics. (ERIC Document Reproduction Service No. ED 281 779)
Was Heisenberg a Third-Rate Natural Philosopher because he denied the reality of micro-objects that cannot be tracked by humans? Has this misled physics for 100+ years?
Surely, because we have created a whole civilization from the manipulation of electrons, especially digital electronics, "LOOKING" at an object is NOT a requirement for its existence? Electrons interact with everything, so surely that is quite sufficient, eh?
Heisenberg was educated in the classical philosophy of Aristotelian classicism in the archaic German education system. He failed to think for himself, substituting a Platonic idealist view of mathematics as superior to our imaginative/operational view of reality.
Some, including Morris Kline in a book dealing with uncertainty in the foundations of mathematics,
indicate that each definition in mathematics references some other definition or definitions;
but that this leads backwards to undefined terms, which must be there. Perhaps there are other problems: whatever you are defining may not even exist.
Given the historical roots of math, it is not so clear. Math ought also to be based on other disciplines: counting, land measurement and so on.
Hence definitions may better hinge on a descriptive, intuitive account that spares us the work of looking up every other definition made in mathematics. This is a great barrier to understanding a lot of work these days: the huge number of definitions one must absorb.
This would best guide our intuition to see if the results are logical. But common practice these days is far from it: straight into having to prove everything, and knowing everything about very narrow specialties, meaningless to most outsiders.
What is your opinion? Should we reform this?
The temptation in every remodeling of the teacher-training study plans that I have experienced, when faced with a strong new point of interest, has been to introduce a subject that caters for it. Thus environmental education, for example, has been in the curriculum of Spain for more than 25 years.
Now the SDGs are the focus of attention, and the university is committed to developing them. The challenge is to do it from the established curricular subjects.
Educational mathematics is developed in three or four subjects of each training curriculum, and has to embrace both developing the teacher's competence to teach school mathematics and the purpose of working on the SDGs. How can we do it?
I post this publication
The 3x + 1 Problem and its Generalizations
by Jeffrey C. Lagarias
The American Mathematical Monthly, Volume 92, 1985 - Issue 1, pp. 3 - 23
and the following introductory video by Veritasium (which in my view is extremely well-produced -- in particular the adopted graphics are superlative)
as I consider the links between this mathematical problem (the Collatz Conjecture) to physics to be noteworthy.
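For readers new to the problem, the iteration itself is tiny; the famous starting value 27 already shows the wild, physics-like trajectories the video discusses (111 steps, peaking at 9232, before collapsing to 1):

```python
def collatz_trajectory(n: int) -> list[int]:
    """Iterate n -> n/2 (n even) or 3n+1 (n odd) until reaching 1."""
    seq = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        seq.append(n)
    return seq

traj = collatz_trajectory(27)       # the classic long example
print(len(traj) - 1, max(traj))     # 111 steps, peak value 9232
```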
Here we discuss one of the famous unsolved problems in mathematics, the Riemann hypothesis. We construct a student's vision of this hypothesis and ask questions that may be of help to researchers and scientists.
I am getting immersed in the Kervaire-Milnor formula, each term of which connects with a different field of mathematics: "The number of differential structures on the (4k − 1)-dimensional sphere is given by a quantity that is the product of three quantities: Elementary factor × 'Non-J Classes' × Numerator of B_{2k}/2k". Information read in https://people.math.harvard.edu/~mazur/papers/slides.Bartlett.pdf
My research is focused on how to connect the Turán moments and the coefficients of the Jensen polynomials for the entire function Xi, as I have noticed some valuable results. I would like to find more articles or information about the Bernoulli numbers appearing in topics of topology which could involve the Turán moments and Jensen coefficients as well.
Thanks in advance!
Is there a free academic online resource that can be used to look up and reference basic concepts and theorems in mathematics? I am essentially looking for a mathematical analogue of the Stanford Encyclopedia of Philosophy or the APA Dictionary of Psychology, one that can be cited in thesis work and the like, and that is suitable for researchers from other fields. (So basic information, not cutting-edge detailed research or whole discussions of current questions.)
That is, it should be trustworthy and considered "respectable" enough to be cited in academic work. This would rule out Wikipedia, even though its articles on the subject are usually trustworthy, or things like GitHub. But it should also be a freely accessible source. (Alternatively, if publishers like Springer host something similar that is accessible through university subscriptions, that might work for my purpose too.)
I know you can often find correct answers to basic questions by googling, but the issue is that it is a bit difficult to gauge sources and content as a non-mathematician. Thanks in advance!
I want to compute Ornstein-Uhlenbeck models using the OUwie() function in R. For this I'm using a set of 100 phylogenetic trees, so that each tree goes into one OUwie model, resulting in 100 models.
The output of OUwie gives estimates for a trait optimum and the corresponding SE.
Now I want to describe this output over all 100 trees. For the estimates it is easy to simply give the average value, but as the standard error depends on the sample, I am not sure whether I can give the mean, whether I should give a range, or whether there is another (mathematically correct) way to communicate the information I get out of these models.
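One defensible option, treating each tree as one plausible realization of the phylogeny, is to pool the way multiple-imputation results are pooled (Rubin's rules): total variance = mean within-model variance + between-model variance of the estimates. A sketch with made-up numbers standing in for the 100 OUwie outputs:

```python
import numpy as np

# Rubin-style pooling across models: the pooled SE reflects both the
# per-tree uncertainty (SEs) and the disagreement between trees.
rng = np.random.default_rng(3)
estimates = rng.normal(2.0, 0.1, 100)   # hypothetical optimum estimate per tree
ses = rng.uniform(0.05, 0.15, 100)      # hypothetical SE per tree

pooled_est = estimates.mean()
within = np.mean(ses ** 2)                       # average within-tree variance
between = estimates.var(ddof=1)                  # between-tree variance
pooled_se = np.sqrt(within + (1 + 1 / len(estimates)) * between)

print(pooled_est, pooled_se)
```

Simply averaging the SEs would ignore the between-tree spread, which is exactly the phylogenetic uncertainty the 100 trees are meant to capture.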
I am a homeschooling parent who came to realize that my young children (in grades 1 and 3 this year) are into mathematics for its beauty. They are amazed at the fact that adding two whole numbers in any order yields the same result (commutativity), grouping when adding may be helpful (associativity), that for particular numbers n, x, and y, we have that n = x^2 = y^3: 1 = 1^2 = 1^3; 64 = 8^2 = 4^3, etc. They are mesmerized by the fact that 1 = 1, 3 = 1 + 2, 6 = 1 + 2 + 3, 10 = 1 + 2 + 3 + 4, ...; special sequences of numbers such as (in this case) triangular numbers (which they know as "step shapes"). They don't take for granted that 1n = n for any integer (I don't think they are ready for real numbers in general). They can see that division is akin to factoring: 12/4 = 3 because 3 and 4 are factors of 12 (though they are not too up on the vocabulary yet)... They can see informally that a - b = - (b - a): 8 - 5 = 3 implies that 5 - 8 = -3.
I hope I am fortunate enough to see them through their entire grade school through high school learning.
My concern is: why do educators/teachers and parents debate so much about basic skills versus critical reasoning, as if these things, removed from the context of the ACTUAL beautiful ideas in mathematics, will pull school children into understanding mathematics any more than knowing words alone is enough to make a great writer? Why do we either (a) think that it is all basic skills, or (b) pose word problems about so-called everyday stuff that are more social studies than mathematics? How about focusing on mathematics as a subject, as we do Language Arts? We do not reduce Language Arts to writing letters and business contracts in which grammar and syntax are important. We want children to enjoy the beauty of language; shouldn't we do the same with mathematics?
We use ALEKS-PPL for placement and co-requisite support for our math courses. Our institution is interested in using something similar for English composition: placement + adaptive learning co-requisite support. What is out there for us to review?
With the purpose of finding out the pedagogical skills mathematics teachers use to identify and address students' mathematics anxiety.
We substitute y = e^(rx) when solving constant-coefficient linear differential equations. What is the reason behind this? Is there a proof?
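The short reason: the exponential is an eigenfunction of d/dx, so substituting y = e^(rx) turns the differential equation into a polynomial (characteristic) equation in r, whose roots give a basis of solutions. A SymPy sketch with an illustrative ODE y'' + 3y' + 2y = 0:

```python
import sympy as sp

x, r = sp.symbols("x r")
a, b = 3, 2                        # illustrative ODE: y'' + 3y' + 2y = 0
y = sp.exp(r * x)

# Substitute y = e^(rx); every derivative just multiplies by r.
lhs = sp.diff(y, x, 2) + a * sp.diff(y, x) + b * y
char_poly = sp.simplify(lhs / y)   # e^(rx) never vanishes, so divide it out

print(char_poly)                   # characteristic polynomial in r
print(sp.solve(char_poly, r))      # roots give solutions e^(-x), e^(-2x)
```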
Good research is based on good relationship between the mentor or supervisor and the scholar. What are the qualities a supervisor or mentor must have to have a healthy and friendly environment in the laboratory?
In mathematics, some proofs are not convincing, since the assumption fails within the proof.
If equality is changed to "approximately equal to", then the proof becomes more nearly perfect, but uniqueness cannot be guaranteed.
On page 291 of Introduction to Real Analysis, Fourth Edition, by Bartle and Sherbert, the proof of the uniqueness theorem is explained.
That proof is not perfect.
Reason: initially epsilon is assumed to be positive, and so not equal to zero. Before the conclusion of the proof, epsilon is treated as zero and the two limits are written as equal.
The equality cannot hold, since epsilon is not zero. One can only write that the two limits are approximately equal.
"Epsilon is arbitrary" never implies that epsilon is zero.
I hope the authors and other mathematicians will notice this error and change it in new editions.
What research is there (if any), about gifted students frequently getting easy questions on academic tests/exams INCORRECT and harder more challenging ones CORRECT – particularly in mathematics? Does anyone have any empirical evidence as to why this might occur? Thank you.
Every integer can be expressed as a sum of three triangular numbers.
Given two integers N=a+b+c and M=x+y+z, how can we express the product N*M as a sum of three triangular numbers? Example:
11=0+1+10 and 13=0+3+10; the product 11*13=143=10+28+105.
How can we write the product N*M = F(a,b,c,x,y,z) + G(a,b,c,x,y,z) + H(a,b,c,x,y,z) as a sum of three triangular numbers?
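I am not aware of a simple closed form for F, G, H, but a representation of any product can always be found by brute force, since Gauss's "Eureka" theorem guarantees one exists. A sketch (note a number can have several representations; 143 = 1 + 6 + 136 as well as 10 + 28 + 105):

```python
def triangulars_up_to(n):
    """All triangular numbers T_k = k(k+1)/2 not exceeding n."""
    ts, k = [], 0
    while k * (k + 1) // 2 <= n:
        ts.append(k * (k + 1) // 2)
        k += 1
    return ts

def three_triangulars(n: int):
    """Return one representation of n as a sum of three triangular numbers."""
    ts = triangulars_up_to(n)
    tset = set(ts)
    for a in ts:
        for b in ts:
            if n - a - b in tset:
                return a, b, n - a - b
    return None  # unreachable for n >= 0, by Gauss's theorem

print(three_triangulars(143))
```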
I have dug into a theory established by D. Loeb of the University of Bordeaux, and found an interesting way to represent the things you have (classical sets) and the things you do not have and might need, want, or be ready to acquire (a new form of set with a negative number of elements).
This fits very well into the extension of matrices of sets which I needed to develop; see why and how in the references below.
I would be thrilled to know if you have use cases where this model of classical sets of what you have, and new negative sets of what you do not have, may help.
Please share your input!
Presentation Matrices of Sets, complete tutorial with use cases
D. Loeb, Sets with a negative number of elements, Advances in Mathematics, 91 (1992), 64-74. https://core.ac.uk/download/pdf/82474206.pdf
In a mathematical analysis of piezoelectric models, we use the terms "the electrically open and short circuit case" for which we have some mathematical definition as well.
What is the physical significance of these boundary conditions in making sensors and other devices?
It is well confirmed that nonlinear evolution equations (NLEEs) are widely used as the leading mathematical equations for describing physical issues in many branches of physics. But they are very difficult to solve. Recently, many researchers have proposed various types of solution methods. Unfortunately, most of the solution methods introduce additional free parameters. In my opinion, solutions of NLEEs with additional free parameters are not useful for further verification in the laboratory. That is why my discussion topic is: "What is the best solution method for nonlinear evolution equations?"
Although Bhaskara II, in his Siddhanta Shiromani, criticized a Buddhist school of astrology which held that the Earth is moving, I have not seen any mention of texts from the Buddhist astrological or mathematical tradition in recent studies. Vedic and Jain sources are well known.
Are there any primary textual resources on Buddhist astrology/mathematics?
Dear experts and colleagues,
Hello, all! I recently received a reviewer's comment stating that the proposed network measures from graph-theoretical analyses could be correlated by mathematical definition.
So I ran the correlational analyses, and global efficiency, characteristic path length, and mean clustering coefficient (global measures) turned out to be correlated with each other.
I understand that global efficiency is inversely related to characteristic path length, but I am quite confused about how characteristic path length can be inversely correlated with mean clustering coefficient, while mean clustering coefficient is positively correlated with global efficiency.
Or does this suggest that something is wrong with my data?
Briefly explaining my study, it is a neurological study using MR images (diffusion tensor imaging) to explore the structural networks.
Any feedback and discussion are welcomed here!
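The reviewer's point can be reproduced on purely synthetic graphs: as connection density varies, efficiency rises, characteristic path length falls, and clustering tracks density, so the three measures co-vary by construction, not only in empirical data. A NetworkX sketch (random graphs, illustrative sizes and densities):

```python
import networkx as nx
import numpy as np

# Built-in co-variation of global graph measures across random graphs
# of increasing density.
eff, cpl, clu = [], [], []
for p in np.linspace(0.15, 0.6, 12):
    G = nx.erdos_renyi_graph(60, p, seed=int(p * 1000))
    if not nx.is_connected(G):
        continue  # path length is undefined on disconnected graphs
    eff.append(nx.global_efficiency(G))
    cpl.append(nx.average_shortest_path_length(G))
    clu.append(nx.average_clustering(G))

print(np.corrcoef(eff, cpl)[0, 1])   # negative: efficiency vs path length
print(np.corrcoef(clu, eff)[0, 1])   # positive: clustering vs efficiency
```

Comparing your empirical correlations to such a density-matched null model is one way to argue which part of the correlation is "by definition" and which is biological signal.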
Hi, I have to compute the inverse of a square matrix (20x20) in Excel. To make sure the calculation is right, I knew that multiplying the matrix by its inverse should yield an identity matrix, which has a diagonal of ones while all the other cells are zeros. But when I multiply the matrix by the inverse, the result is a matrix with a diagonal of ones where not all the other cells are zeros (some have values like 1, 2, etc.). Is this OK, or what is my likely mistake? Thanks in advance.
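It is not OK: for a correctly computed inverse, off-diagonal residues should be tiny rounding noise (around 1e-12), not values of order 1 or 2. Such large residues usually point to an ill-conditioned matrix or a copy/paste or range error in the spreadsheet formula. A sketch of the check in NumPy (the test matrix is made up to be well-conditioned):

```python
import numpy as np

# Residual check for a matrix inverse: A @ A_inv should equal the identity
# up to rounding noise, and the condition number tells you how much noise
# to expect.
rng = np.random.default_rng(5)
A = rng.normal(size=(20, 20)) + 20 * np.eye(20)  # well-conditioned example
A_inv = np.linalg.inv(A)

residual = A @ A_inv - np.eye(20)
print(np.abs(residual).max())    # tiny for a well-conditioned matrix
print(np.linalg.cond(A))         # large condition number => distrust the inverse
```

If the condition number of your actual matrix is very large, every inversion routine (Excel's MINVERSE included) will amplify rounding error, and the off-diagonal residues grow accordingly.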
In classical times there was a concept of pure science that was produced entirely in the intellect.
More recently sciences were developed by testing the intellectual product with empirical data. These sciences are not regarded as being pure.
Mathematics, often regarded as a pure science, was for most of history based on postulates of geometry that could not be proven. Then came relativity and other geometries. In the past century there was considerable effort to reformulate mathematics on a firmer set-theoretic basis. Math is now regarded as somewhat purer than before, while producing two generations of graduating students in some countries who are not able to do simple arithmetic.
Fortunately I had some excellent teachers who explained the two systems and why both were needed. Other teachers presented Gödel's incompleteness theorems.
In academic settings there seems to be a difference of opinions about whether or not math is a science, and whether or not it is pure.
Is Mathematics A Science?
About two years ago, I submitted a manuscript to a reputed journal. After a couple of months of peer review, the response was that a major revision had been requested. I made the necessary adjustments and resubmitted. The journal's editor then responded that my manuscript required minor revision; the decision was "Revise for Editor Only", and he claimed that the revision would be quick and would not undergo the entire review process. Again, I made the required edits to make the manuscript acceptable for publication, and resubmitted it. It is now day 120 and the status is "With Editor". In fact, I sent two emails to the editorial team asking for a status update. Their response to both emails was the same: that they had contacted the editor to accelerate the process.
Dear readers, I need a piece of advice: what should I do as a next step?
Thank you in advance,
I am combining fish abundance and biomass data collected from five locations which have utilised different transect dimensions:
15 x 4 m (60 m2; from one location)
30 x 5 m (150 m2; from three locations)
50 x 4 m (200 m2; from one location)
When pooling data collected from transects of differing sizes, should you standardise your data down (to the smallest dimensions) or up (to the largest)? Or in this case, perhaps standardise according to the most commonly used dimensions (30 x 5 m/150 m2)?
To me it seems as though there would be no difference, but I was wondering whether there is some mathematical explanation for why one way might be preferable over the others?
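Arithmetically, one common alternative sidesteps the up/down choice entirely: convert every transect to density (individuals per m^2), after which transect area drops out of the comparison. A minimal sketch with made-up counts:

```python
# Standardizing transects of different sizes to a common density,
# instead of rescaling counts to any one reference area.
transects = [
    {"site": "A", "area_m2": 60.0, "count": 18},    # 15 x 4 m
    {"site": "B", "area_m2": 150.0, "count": 51},   # 30 x 5 m
    {"site": "C", "area_m2": 200.0, "count": 64},   # 50 x 4 m
]

for t in transects:
    t["density_per_m2"] = t["count"] / t["area_m2"]

print([t["density_per_m2"] for t in transects])
```

The caveat is ecological rather than arithmetic: detectability and species accumulation can vary with transect size, so identical densities from a 60 m^2 and a 200 m^2 transect are not automatically equivalent samples.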
The VPL formula can be given here as:
Can anyone please explain whether the value from the term (Pmin-P) in the above formula is in degrees or radians?
In engineering design, there are usually only a few data points or low-order moments available, so it is meaningful to fit a relatively accurate probability density function to guide the design. What are the methods for fitting probability density functions from small amounts of data or low-order statistical moments?
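The simplest family of such methods is moment matching: pick a parametric density and solve for its parameters from the known moments. A sketch fitting a gamma density from only the mean and variance (the moment values are assumed for illustration); maximum-entropy densities and the Pearson system generalize this idea to third and fourth moments:

```python
# Method-of-moments fit of a gamma density from mean and variance only.
mean, var = 4.0, 2.0           # assumed low-order moments from the design data

shape = mean**2 / var           # gamma shape parameter k
scale = var / mean              # gamma scale parameter theta

# Sanity check: the fitted gamma reproduces the input moments,
# since E[X] = k*theta and Var[X] = k*theta^2.
print(shape, scale, shape * scale, shape * scale**2)
```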
Hello. I want to ask how I can translate the "yes" or "no" weightings of a criterion into numbers in AHP calculations.
For example, my goal is to select the most suitable construction material vendor (alternative) according to the set criteria, including "ASTM Standards" along with 4 other criteria.
This criterion means: "Is the material vendor supplying the material according to the construction standard ASTM, or not?" So the answer given for this by the alternative (vendor) would be a "Yes" or a "No", not a value.
First, I sent a questionnaire to the experts asking for pairwise comparison weightings of one criterion versus another, and I got a great response with a consistency of 0.9. Acceptable!
But when I asked the vendors to provide their responses, they put numerical values on the first four criteria, and I built the comparison matrix for these vendors (alternatives) for each criterion using simple division (the value of one vendor divided by that of a second vendor). This gives a consistency ratio of exactly zero, because I found the alternatives' weightings using exact numerical comparison. I get numerical weighting values to find ranks, so it works fine for me.
But when it comes to the 5th criterion (ASTM Standards), the responses are in "Yes" or "No" terms! My question is: what numerical values should I give to a Yes and a No so that I can get their comparison by dividing one by the other?
For instance, if I use Yes = 1 and No = 0, the system hangs, because anything divided by zero gives an infinite answer, and it doesn't proceed further.
If I use a scale of 1 to 9, saying Yes = 9 and No = 1, it gives me good results. But I have no reason to use 1 or 9, for it could be any numerical value, say 1000 for Yes and 1 for No.
Please suggest a reasonable value, with a citation if possible.
Thanks in advance.
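One workaround often used with qualitative criteria is not to divide raw Yes/No scores at all, but to enter "Yes vs. No" directly as a pairwise judgment on Saaty's 1-9 intensity scale, and then extract priorities from the principal eigenvector as usual. A minimal sketch (Python/NumPy); the intensity 7 ("very strong preference" of a Yes vendor over a No vendor) is an illustrative assumption, not a prescription:

```python
import numpy as np

def ahp_priorities(A):
    """Priority weights (principal eigenvector) and consistency ratio
    for a pairwise comparison matrix, following Saaty's AHP."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    lam_max = vals[k].real
    CI = (lam_max - n) / (n - 1) if n > 1 else 0.0
    RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}  # Saaty's random index
    CR = CI / RI[n] if RI.get(n, 0.0) > 0 else 0.0
    return w, CR

# Illustrative assumption: a "Yes" vendor is very strongly preferred (7)
# over a "No" vendor on Saaty's 1-9 scale.
A = [[1, 7],
     [1/7, 1]]
w, CR = ahp_priorities(A)   # w ≈ [0.875, 0.125]; CR = 0 (a 2x2 matrix is always consistent)
```

The advantage is that the chosen intensity is defensible via Saaty's published scale definitions rather than an arbitrary Yes/No encoding, and the priorities stay bounded (no division by zero).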
Sometimes we use the term "zero time" in a formulation, but are we sure it is really 0? Maybe it is 0.000...1. Is there a "zero" time at all (can we stop time)? Similarly, when we say v = 0, are we sure?
On the other hand:
1/0 = infinity. Well then, what is "infinity"? How does it behave in all the other equations?
Does infinity - infinity = 0?
Does 1 + infinity = infinity?
What if, instead of zero, we use the number closest to zero: the monad, the basic thing that constitutes the universe, i.e., everything? Gottfried Leibniz, in his essay "Monadology," suggested that the fundamental unit of all things is the monad. He intended the monad to have some of the attributes of the atom, but with important differences: the monads Leibniz proposed are indivisible, indissoluble, and have no extension or shape, yet from them all things were made. He called them "the true atoms of nature." At the same time, each monad mirrored the universe. If we use the monad instead of zero, every equation works.
I think science says "everything originated from one basic thing."
Boolean and ZFC logic are constructed from fundamentally different foundations. Modern (20th-century) mathematics appears to be dominated by the ZFC approach. I am not sure it is widely understood that the two approaches are incompatible. Has this incompatibility been studied?
I have performed cropping attacks in visual (pictorial) form, but I want to perform this test in mathematical form, i.e., I want to compute quantitative values for cropping-attack analysis in MATLAB.
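The usual way to make a cropping attack quantitative is to simulate the crop and then report objective metrics such as PSNR of the attacked image and normalized correlation (NC) between the original and extracted watermark. The metric definitions below are standard; the zero-fill cropping convention is one common choice, and the sketch is in Python/NumPy rather than MATLAB (the formulas translate directly):

```python
import numpy as np

def crop_attack(img, top, left, height, width):
    """Simulate a cropping attack: keep only the window
    [top:top+height, left:left+width] and zero-fill the rest
    (zero-filling is one common convention in watermarking tests)."""
    out = np.zeros_like(img)
    out[top:top+height, left:left+width] = img[top:top+height, left:left+width]
    return out

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio between reference and attacked image (dB)."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else float(10 * np.log10(peak**2 / mse))

def normalized_correlation(a, b):
    """NC between the original and the extracted watermark (1.0 = identical)."""
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical usage: crop away the bottom half of a flat test image
img = np.full((64, 64), 128, dtype=np.uint8)
attacked = crop_attack(img, 0, 0, 32, 64)
quality = psnr(img, attacked)
```

In MATLAB the same analysis maps onto `immse`/`psnr` and `corr2` over the cropped arrays.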
I am trying to write the motivation section for the linearization of a non-linear mathematical formulation. I claim the linearized model will be simpler and easier to solve, but of course I need a reference for that! Any suggested book chapters or articles that discuss this point, please?
Thanks in advance...
STEM was the main theme of the ASTE 2019 international conference, with at least 8 posters, 27 oral presentations, and 3 workshops promoting STEM classrooms, STEM instruction/teaching, STEM lessons, STEM summer camps, STEM clubs, and STEM schools, without providing a conceptualization or operational definition of what STEM is. Some presentations advocated the integration of the disciplines, but the examples provided were mostly "inquiry" and "engineering design" practices that did not in fact differ from the kind of poorly conceptualized, epistemologically incongruent hands-on/minds-off classroom activities.
Therefore, it is worth considering:
(1) Why do we call it STEM if it does not differ from practices applied for decades (e.g., inquiry, hands-on activities)?
(2) What benefits (if any) can this STEMinification mindset/trend bring to science education and its research?
Dear colleagues, please point me to materials on the mathematics of Dropout regularization in ANNs. I understand perfectly well how the algorithm works in practice, but I don't quite understand the mathematics.
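The core of the mathematics is small: dropout multiplies each activation by an independent Bernoulli mask, and the "inverted" variant also divides by the keep probability so that the expected activation is unchanged, which is why no rescaling is needed at test time. A minimal NumPy sketch of that expectation-preserving property:

```python
import numpy as np

def dropout_forward(x, keep_prob, rng):
    """Inverted dropout: each unit is kept with probability keep_prob
    and scaled by 1/keep_prob, so that E[output] = x elementwise."""
    mask = (rng.random(x.shape) < keep_prob) / keep_prob
    return x * mask

rng = np.random.default_rng(0)
x = np.ones(100_000)
y = dropout_forward(x, keep_prob=0.8, rng=rng)
# y.mean() is close to 1.0: the 1/keep_prob scaling preserves the expectation
```

Beyond this, the deeper theory interprets dropout as approximate model averaging over an exponential ensemble of subnetworks, and (for linear models) as an adaptive L2-type regularizer.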
I know that an infant's brain can repair itself when damaged, but why doesn't the same happen in adults after a stroke or brain injury?
I would really appreciate it if anyone could suggest mathematical techniques for developing a correlation for a dependent variable W that has to be defined as W = f(x, y, z), where the individual functions are power laws: W = a·x^b, W = c·y^d, and W = e·z^f.
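A standard way to combine three single-variable power laws is the product form W = k·x^b·y^d·z^f, which becomes linear after taking logarithms: ln W = ln k + b ln x + d ln y + f ln z. Ordinary least squares on the logged data then recovers all exponents at once. A minimal sketch (NumPy; the synthetic data and the "true" coefficients are illustrative assumptions):

```python
import numpy as np

# Hypothetical noiseless data following a combined power law
rng = np.random.default_rng(1)
n = 200
x, y, z = rng.uniform(1.0, 10.0, size=(3, n))
W = 2.5 * x**0.7 * y**-0.3 * z**1.2

# ln W = ln k + b ln x + d ln y + f ln z  ->  ordinary least squares
A = np.column_stack([np.ones(n), np.log(x), np.log(y), np.log(z)])
coef, *_ = np.linalg.lstsq(A, np.log(W), rcond=None)
k, b, d, f = np.exp(coef[0]), coef[1], coef[2], coef[3]
# Recovers k = 2.5, b = 0.7, d = -0.3, f = 1.2 (exactly, since the data is noiseless)
```

Note that least squares on logged data minimizes relative (multiplicative) error; if absolute error matters, a nonlinear fit of the product form directly is the alternative.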
In general, in mathematics we work with groups whose elements are numbers; that is, we generally work with constants.
For example: for any x from R, every element of R we choose is a constant, and the same holds for the complex numbers: any z from C is a constant.
My question is: can we find a group (or groups) whose elements are variables rather than constants? For example, a group G such that for any x from G, x is not a constant but another variable.
And if such a group exists, can you give examples?
Thank you!
While working in both software packages, after loading the training and validation data for predicting a single output from several input variables (say 10), the software delivers an explicit mathematical equation for future prediction of the specified parameter, but it skips some of the input variables (say 2 or 3, or maybe more). What criteria do these software packages use behind the scenes for picking the most influential parameters while building a mathematical predictive model?
Austrian-born mathematician, logician, and philosopher Kurt Gödel produced in 1931 one of the most stunning intellectual achievements in history. His shocking incompleteness theorems, published when he was just 25, proved that within any consistent axiomatic mathematical system rich enough to express arithmetic there are propositions that cannot be proved or disproved from the axioms of the system. Such a system cannot be both complete and consistent.
Understanding Gödel's proof requires advanced knowledge of symbolic logic, as well as of Hilbert's and Peano's mathematics. Hilbert's Program was a proposal by German mathematician David Hilbert to ground all existing theories in a finite, complete set of axioms, and to provide a proof that these axioms were consistent. Ultimately, the consistency of all of mathematics would be reduced to basic arithmetic. Gödel's 1931 paper proved that Hilbert's Program is unattainable.
The book Gödel’s Proof by Ernest Nagel and James Newman provides a readable and accessible explanation of the main ideas and broad implications of Gödel's discovery.
Mathematicians, scholars and non-specialist readers are invited to offer their interpretations of Gödel's theory.
When I observed mathematics education among preservice teachers at my university, I found that what students learn in their mathematics classes does not match what is needed in school mathematics classrooms. That led me to initiate design research: a Hypothetical Learning Trajectory containing a sequence of learning processes related to elementary school mathematics. I am considering starting with observations in elementary math classes to identify misconceptions, then jointly reviewing the issues on campus, conducting literature reviews, designing lessons to address the identified problems, implementing them in class, and evaluating the results. The final result would be a Local Instructional Theory on Learning about Teaching Mathematics that helps prospective teachers overcome the misconceptions that occur in elementary mathematics classes. I would welcome many suggestions for conducting this research.
In 2010, Dr. Khmelnik found a suitable method for solving the Navier-Stokes equations and published his results in a book. In 2021, the sixth edition of his book was released; it is attached to this question for downloading. It is worth mentioning here that the Clay Mathematics Institute has included the problem of solving the Navier-Stokes equations in its list of seven important Millennium Problems. Why are the Navier-Stokes equations so important?