Questions related to Mathematics
I need a good weather probability calculator. I would like to calculate the probability of a day being, e.g., 10 degrees Celsius above the average. Has anybody got good research/formulas?
Which distribution is assumed in the probability calculation? The normal one?
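A common first-pass model does assume a normal distribution of daily temperatures around the climatological mean. Under that assumption the exceedance probability is a one-line calculation; the mean and standard deviation below are made-up illustrative values, not real climate data.

```python
import math

mean_temp = 15.0   # long-term average temperature for that calendar day, in °C (assumed)
sd_temp = 4.0      # day-to-day standard deviation, in °C (assumed)
threshold = mean_temp + 10.0

z = (threshold - mean_temp) / sd_temp            # here: 2.5 standard deviations
p_exceed = 0.5 * math.erfc(z / math.sqrt(2.0))   # P(T > threshold) under normality
print(f"P(T > {threshold} C) = {p_exceed:.4f}")
```

With real data, `mean_temp` and `sd_temp` would be estimated from the historical record for that calendar day. A Gaussian is often a reasonable first approximation for mid-latitude daily temperatures, but the tails can be heavier, so checking the empirical distribution first is advisable.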
I am thinking of the vector as a point in multidimensional space. The Mean would be the location of a vector point with the minimum squared distances from all of the other vector points in the sample. Similarly, the Median would be the location of the vector point with the minimum absolute distance from all the other vector points.
Conventional thinking would have me calculate the mean vector as the vector formed from the arithmetic mean of all the vector elements. However, there is a problem with this method: if we are working with a set of unit vectors, the result would not itself be a unit vector. So conventional thinking would have me normalize the result into a unit vector. But how would that method apply to other, non-unit vectors? Should we divide by the arithmetic mean of the vector magnitudes? And when calculating the median, should we divide by the median of the vector magnitudes?
Do these methods produce a result that is mathematically correct? If not, what is the correct method?
I have no mathematical experience and no statistician to help me.
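The geometric intuition in the question is correct, and the two notions genuinely differ. The component-wise arithmetic mean is exactly the point minimising the sum of squared distances to all the points; the point minimising the sum of unsquared distances is called the geometric median, which generally has no closed form but can be computed by Weiszfeld's algorithm. Normalising either result to unit length is a separate question (the "mean direction" of directional statistics) and is not needed for general vectors. A self-contained sketch, with made-up points:

```python
import math

def mean_vector(points):
    """Component-wise arithmetic mean: minimises the sum of SQUARED distances."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def geometric_median(points, n_iter=200, eps=1e-9):
    """Weiszfeld's algorithm: an iteratively re-weighted mean that converges
    to the point minimising the sum of (unsquared) Euclidean distances."""
    y = mean_vector(points)
    for _ in range(n_iter):
        weights = [1.0 / max(math.dist(p, y), eps) for p in points]  # guard d == 0
        w = sum(weights)
        y_new = [sum(wi * p[i] for wi, p in zip(weights, points)) / w
                 for i in range(len(y))]
        if math.dist(y_new, y) < eps:
            break
        y = y_new
    return y

pts = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0], [10.0, 10.0]]
m = mean_vector(pts)        # pulled strongly toward the outlier (10, 10)
g = geometric_median(pts)   # stays near the cluster of three points
```

This also shows why the geometric median is preferred when robustness to outliers matters: the single distant point drags the mean far more than the median.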
Is it possible to decompose a conditional probability of three or more elements (i.e. events) into conditional probabilities of only two elements or marginal probabilities of one element? Knowing this decomposition would help in solving higher-order Markov chains mathematically. I also know that this decomposition can be obtained if we add the assumption of conditional independence.
To make it concrete, here is a negative example:
Notice that the RHS still contains a conditional probability with three elements, P(a,b|c).
Assuming conditional independence given c, we have P(a,b|c) = P(a|c)·P(b|c). Thus, the conditional probability decomposition becomes
My question is whether this type of decomposition into one- or two-element conditional probabilities is possible without making assumptions. If it is a genuinely unsolvable problem, then at least we know that the assumption of conditional independence is a must.
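A quick numerical check of the role of the assumption: the chain rule alone gives the exact identity P(a,b|c) = P(a|b,c)·P(b|c) with no assumptions, but the first factor still conditions on two events. The full reduction to two-element terms holds only under conditional independence, as this toy joint distribution (all values made up) confirms:

```python
import itertools

# Toy joint distribution p(a, b, c) over binary events, constructed so that
# A and B are conditionally independent given C: p(a,b,c) = p(c) p(a|c) p(b|c).
p_c = [0.3, 0.7]
p_a_c = [[0.2, 0.8], [0.6, 0.4]]   # p_a_c[c][a]
p_b_c = [[0.5, 0.5], [0.1, 0.9]]   # p_b_c[c][b]

joint = {(a, b, c): p_c[c] * p_a_c[c][a] * p_b_c[c][b]
         for a, b, c in itertools.product((0, 1), repeat=3)}

def p_ab_given_c(a, b, c):
    """P(a, b | c) computed directly from the joint distribution."""
    pc = sum(v for (_, _, z), v in joint.items() if z == c)
    return joint[(a, b, c)] / pc

# Under conditional independence, the three-element conditional factors into
# two two-element terms for every (a, b, c):
ok = all(abs(p_ab_given_c(a, b, c) - p_a_c[c][a] * p_b_c[c][b]) < 1e-12
         for a, b, c in itertools.product((0, 1), repeat=3))
```

For a joint distribution not built this way, the same check fails, which is consistent with the conclusion in the question: without conditional independence (or some other structural assumption), the decomposition is not available in general.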
During the lecture, the lecturer mentioned the properties of frequentist estimators, as follows:
Unbiasedness is only one of the frequentist properties — arguably, the most compelling from a frequentist perspective and possibly one of the easiest to verify empirically (and, often, analytically).
There are however many others, including:
1. Bias-variance trade-off: we would consider optimal an estimator with little (or no) bias, but we would also value one with small variance (i.e. more precision in the estimate). So when choosing between two estimators, we may prefer one with very little bias and small variance to one that is unbiased but has large variance;
2. Consistency: we would like an estimator to become more and more precise and less and less biased as we collect more data (technically, when n → ∞).
3. Efficiency: as the sample size increases indefinitely (n → ∞), we expect an estimator to become increasingly precise (i.e. its variance to reduce to 0, in the limit).
Why do frequentist estimators have these kinds of properties, and can we prove them? I think these properties can also be applied to many other statistical approaches.
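These are properties of particular estimators, provable case by case, rather than axioms of frequentism itself. For example, consistency of the sample mean follows from the law of large numbers, and its variance σ²/n can be checked by simulation. A minimal sketch using Uniform(0, 1) draws (variance 1/12, so the sd of the mean should shrink like 1/√n):

```python
import random
import statistics

random.seed(0)

def sd_of_sample_mean(n, reps=2000):
    """Empirical standard deviation of the mean of n Uniform(0,1) draws."""
    means = [statistics.fmean(random.random() for _ in range(n))
             for _ in range(reps)]
    return statistics.stdev(means)

sd_small = sd_of_sample_mean(10)     # theory: sqrt(1/12)/sqrt(10), about 0.091
sd_large = sd_of_sample_mean(1000)   # theory: about 0.0091, i.e. 10x smaller
```

The same simulation style works for checking unbiasedness (compare the average of the estimates to the true parameter) and for comparing the bias-variance trade-off between two candidate estimators.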
Please provide the detailed mathematical calculations for the measurement of uranium, thorium and potassium, together with their daughter progenies.
For correlation analysis between body composition variables and blood hormone levels,
I have data from two different blood analysis methods, ECLIA and CLIA, reported in the same units.
I wonder whether there is any statistical or mathematical error in running a correlation analysis using the data together.
If that is not valid, is there any suggestion for using these data together in statistical analysis?
Thank you very much.
Famous mathematicians have so far failed to prove Riemann's Hypothesis, even though the Clay Mathematics Institute offers a prize of one million dollars for a proof.
A proof of Riemann's Hypothesis would allow us to understand better the distribution of prime numbers among all numbers, and would also allow its official application in quantum physics. However, many famous scientists still reject the use of Riemann's Hypothesis in quantum physics, as I read in an article in Quanta Magazine.
Why is this Hypothesis so difficult to prove? And is the zeta extension really useful for physics, and especially for quantum physics? Are quantum scientists using the wrong mathematical tools when applying Riemann's Hypothesis? Is Riemann's Hypothesis announcing "the schism" between abstract mathematics and physics? Can anyone propose a disproof of Riemann's Hypothesis based on physical facts?
Here is the link to the article of Natalie Wolchover:
The zeros of the Riemann zeta function can also be affected by the use of rearrangements when trying to find an image under the extension, since the Lévy–Steinitz theorem can apply when fixing a and b.
Suppositions or axioms should be made before trying to use the extension, depending on the scientific field where it is demanded, and we should be sure whether all the possible methods (rearrangements of series terms) give the same image for a known s = a + ib.
You should also know that the Lévy–Steinitz theorem was formulated in 1905 and 1913, whereas Riemann's Hypothesis was formulated in 1859. This means that Riemann, who died in 1866, and even the famous Euler, never knew the Lévy–Steinitz theorem.
Everybody is eager to see the next winner of a Millennium Prize. Please share all your incomplete work on the Millennium Prize Problems of the Clay Mathematics Institute in order to collaborate on the solutions.
I am actually using a different nabla operator, which I demonstrated mathematically in my published work: "A thesis about Newtonian mechanics rotations and about differential operators".
This demonstrated differential tool enables a different treatment of the Millennium Problem on the Navier-Stokes equations when the coordinates are not Cartesian. Furthermore, I proved that the field of velocities is not equiprojective.
I also suggest many ideas about the complexity of P problems in my article, which ends with a contradiction.
I will be waiting for your collaboration.
Say I have a satellite image of known dimensions. I also know the size of each pixel. The coordinates of some pixels are given to me, but not all. How can I calculate the coordinates for each pixel, using the known coordinates?
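If the image is north-up (no rotation) and the pixel size is constant, a single known pixel is enough: every other coordinate follows by offsetting in pixel steps. Here is a sketch under exactly those assumptions; the reference pixel, UTM coordinates and 30 m pixel size are hypothetical. For a rotated or skewed image, one would instead fit a full six-parameter affine transform from at least three known pixels.

```python
def pixel_to_coord(row, col, ref_row, ref_col, ref_x, ref_y,
                   px_width, px_height):
    """Map pixel (row, col) to map coordinates, assuming a north-up image:
    each column step moves east by px_width and each row step moves south
    by px_height (same units as the reference coordinates)."""
    x = ref_x + (col - ref_col) * px_width
    y = ref_y - (row - ref_row) * px_height
    return x, y

# Hypothetical example: pixel (100, 100) sits at UTM (500000, 4649776)
# and each pixel is 30 m x 30 m.
x, y = pixel_to_coord(0, 0, 100, 100, 500000.0, 4649776.0, 30.0, 30.0)
```

With several known pixels, the same relation can be checked for consistency: if the recovered offsets disagree, the image is rotated or the pixel size varies, and the affine fit is the appropriate generalisation.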
Any idea why the solution of the attached equation is always zero at r=0? It seems simple at first look; however, when you start solving it, you will see a black-hole-like sink which forces the solution to zero at r=0 (it should not be). I used the method of separation of variables. I would be happy if you could suggest another method or discuss the reasons.
I also attached the graph of the solution, showing the black hole-like sink.
Hello Friends and Colleagues,
Can anyone suggest a mathematical book that would help me build my own mathematical equations and functions? I want to convert real-life problems (natural sciences) into mathematical formulations.
Note that I have basic knowledge of mathematics.
Thanks in advance,
I would like to know how to locate damage on a blade mathematically.
Usually a frequency analysis of the blade and a comparison with a similar healthy one help to detect a defect or damage on a blade. However, how could someone locate that damage?
Thank you for your time.
I'm struggling to understand the method followed in the analysis below. Can someone please explain how the author obtained the values of Δ_1 and K_1 that justify the analysis?
I have tried to isolate Δ and K by setting Equation (B8) equal to zero, but I have failed to obtain similar conditions.
P.S.: I'm new to mathematical modelling, so I really need to understand what's going on here. Thanks.
When solving mathematical equations or systems of equations, there are things to consider, such as the universe of discourse or the domain.
Some equations may have no solutions or be impossible to solve at all. If one or more solutions exist for an equation or system of equations, there are regions or intervals containing the solution(s). Such considerations are important when applying numerical methods.
What are basins of attractions?
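A basin of attraction is the set of starting points from which an iterative method (or a dynamical system) converges to a particular fixed point or root. The classic illustration is Newton's method on z³ = 1: each of the three cube roots of unity has its own basin, and the boundaries between them are fractal. A minimal sketch, with start points chosen by me for illustration:

```python
def newton_root(z, n_iter=50):
    """Iterate Newton's method for f(z) = z^3 - 1 from the start point z."""
    for _ in range(n_iter):
        z = z - (z**3 - 1) / (3 * z**2)
    return z

# Three start points, each lying in the basin of a different cube root of 1
roots = [newton_root(z0) for z0 in (1.5 + 0j, -1 + 1j, -1 - 1j)]
```

Colouring each point of a grid in the complex plane by which root it converges to produces the familiar fractal picture of the three basins, which is why knowing the basin structure matters before trusting a numerical root-finder's answer.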
The dimensioned physical constants (G, h, c, e, me, kB ...) can be considered fundamental only if the units they are measured in (kg, m, s ...) are independent. The 2019 redefinition of the SI base units resulted in 4 physical constants being assigned exact values, and this confirmed the independence of their associated SI units. However, there are anomalies which occur in certain combinations of these constants that suggest a mathematical (unit number) relationship (kg -> 15, m -> -13, s -> -30, A -> 3, K -> 20), and as these are embedded in the constants, they are easy to test; the results are consistent with CODATA precision. Statistically, therefore, can these anomalies be dismissed as coincidence?
We begin by assigning geometrical objects instead of numerical values to the Planck units; these objects can then be combined Lego-style to form more complex objects, from electrons to planets, while still retaining the underlying attributes (of mass, length, time...). Solving the constants using this approach provides evidence for a unit relationship.
For convenience, the article has been transcribed to this wiki site.
Some general background to the physical constants.
I am aware of the facts that every totally bounded metric space is separable, and that a metric space is compact iff it is totally bounded and complete, but I want to know whether every totally bounded metric space is locally compact. If not, give an example of a metric space that is totally bounded but not locally compact.
Follow this question on the given link
I am trying to model a business scenario mathematically for my research paper, but I do not have the required skill set. What is a legitimate way to find and get help? Are there any online sources or paid services? Do I need to add the expert as a co-author? What types of solutions exist?
Dear Colleagues, a recent trend in Fractional Calculus is the introduction of more and more new fractional derivatives and integrals and the study of classical equations and models with these operators. Thus, we have to think about and answer questions like "What are the fractional integrals and derivatives?", "What are their decisive mathematical properties?", "What fractional operators make sense in applications and why?", etc. These and similar questions have remained mostly unanswered until now. To provide an independent platform for discussion of these trends in the current development of FC, the SI "Fractional Integrals and Derivatives: "True" versus "False"" (https://www.mdpi.com/journal/mathematics/special_issues/Fractional_Integrals_Derivatives2021) has been initiated. In this SI, some important papers have already been published. However, you are welcome to share your viewpoint with the scientific community. Contributions to this SI devoted both to new fractional integrals and derivatives and their justification, and those containing constructive criticism of these concepts, are welcome.
If interested, you can send different articles on Piaget's theory in the learning of mathematics through play.
My article is about the relationship between playing and increasing intelligence via mathematics. It is a longitudinal study conducted over the last four years, with a pre-test/post-test design and a control group. The results are remarkable. The assessment tool in this research was the Stanford-Binet test. I am submitting to a Q1-ranked journal. I would be grateful if someone could help me.
A study by Po-Shen Loh shows that quadratic equations can be solved using a clever trick that reduces guessing and formula cramming. Kindly follow this link to see examples of quadratic equations solved by Po-Shen Loh: https://www.youtube.com/results?search_query=po+shen+loh+quadratic .
Could this idea be applied in class? If so, could it be generalized to other topics in mathematics?
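For anyone wanting to try it in class, a minimal sketch of the trick: for x² + bx + c = 0 the two roots average to -b/2, so write them as -b/2 ± u and find u from the product of the roots, (-b/2)² - u² = c. No memorised quadratic formula is needed. (The function name and example values below are mine, not Loh's.)

```python
import cmath

def solve_quadratic_loh(b, c):
    """Solve x^2 + b x + c = 0 by Po-Shen Loh's argument:
    the roots are -b/2 ± u, and their product (-b/2)^2 - u^2 must equal c."""
    m = -b / 2.0                 # midpoint (average) of the two roots
    u = cmath.sqrt(m * m - c)    # half the distance between them
    return m + u, m - u

r1, r2 = solve_quadratic_loh(-5, 6)   # x^2 - 5x + 6: midpoint 2.5, u = 0.5
```

Using `cmath` means complex-root cases work unchanged, which is itself a nice classroom talking point: the midpoint stays real while u becomes imaginary.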
I have formulated the mathematical equation of the vibration problem. The resulting equation is a coupled nonlinear second-order ODE. Could anyone suggest how to solve it using MATLAB?
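Since the actual equations are in the attachment, here is only the general recipe, which is the same one MATLAB's `ode45` relies on: rewrite each second-order equation as two first-order equations and integrate the coupled first-order system. The sketch below is in Python with a hand-rolled RK4 integrator and a made-up placeholder system (two weakly coupled nonlinear oscillators); substitute your own right-hand sides. In MATLAB, the identical state-vector function is what you pass to `ode45`.

```python
def rhs(t, s):
    """State s = [x, x', y, y']; returns [x', x'', y', y''].
    The right-hand sides below are placeholders, not the asker's equations."""
    x, vx, y, vy = s
    ax = -x - 0.1 * x**3 + 0.05 * (y - x)   # x'' = f1(x, y)
    ay = -y + 0.05 * (x - y)                # y'' = f2(x, y)
    return [vx, ax, vy, ay]

def rk4(rhs, s, t, dt, n_steps):
    """Classic 4th-order Runge-Kutta integration of s' = rhs(t, s)."""
    for _ in range(n_steps):
        k1 = rhs(t, s)
        k2 = rhs(t + dt/2, [si + dt/2*ki for si, ki in zip(s, k1)])
        k3 = rhs(t + dt/2, [si + dt/2*ki for si, ki in zip(s, k2)])
        k4 = rhs(t + dt, [si + dt*ki for si, ki in zip(s, k3)])
        s = [si + dt/6*(a + 2*b + 2*c + d)
             for si, a, b, c, d in zip(s, k1, k2, k3, k4)]
        t += dt
    return s

# Integrate from t = 0 to t = 10 with initial state x=1, x'=0, y=0, y'=0
final = rk4(rhs, [1.0, 0.0, 0.0, 0.0], 0.0, 0.01, 1000)
```

In MATLAB the equivalent is `[t, s] = ode45(@rhs, [0 10], [1 0 0 0])` with `rhs` written as the same four-component state function.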
I am working on RIS-aided communication. Whichever paper I go through, they present complicated mathematics, especially optimization problems, which seem unsolvable at first. But then I see they use techniques I have never seen anywhere. Can anyone from a wireless communications background tell me how you approach and derive that sort of mathematics?
I'm currently working on a Data Science project to optimize the prices of products for one of the biggest supermarket chains in Mexico.
One of the things that we are working on, is finding the price elasticity of demand of such products. What we usually do, is that, apart from fitting an XGBoost model for predicting sales, we fit a linear regression, and we get the elasticity from the coefficient corresponding to the price (the slope).
However, it is obvious that linear regression is sometimes a poor fit for the data, not to mention that the execution times are much longer, since it requires running XGBoost and LR separately (which is not good considering that there are thousands of products to model).
Because of this, it occurred to me that we could use numerical differentiation to find the price elasticity. After all, calculating a numerical derivative is much faster than fitting another model.
However, I'm not sure if this is mathematically correct, since the data does not come from a function.
So the question would be, is this mathematically correct? Does it make sense?
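Differentiating the fitted model numerically is legitimate: the elasticity is then a property of the model, not of the raw data, so its validity depends on how well (and how smoothly) the model fits. One caveat worth flagging: tree ensembles such as XGBoost predict piecewise-constant functions, so an infinitesimal step gives derivatives that are 0 or huge spikes; with trees, use a finite price change (e.g. ±5%) or differentiate a smooth surrogate instead. A sketch with a toy constant-elasticity demand curve standing in for the trained predictor:

```python
def predict_demand(price):
    """Stand-in for the trained model: a toy demand curve Q = 1000 * P^-1.5,
    whose true point elasticity is -1.5 everywhere."""
    return 1000.0 * price ** -1.5

def price_elasticity(model, price, h=1e-4):
    """Point price elasticity via a central finite difference:
    elasticity = (dQ/dP) * (P / Q)."""
    q = model(price)
    dq_dp = (model(price + h) - model(price - h)) / (2 * h)
    return dq_dp * price / q

e = price_elasticity(predict_demand, 10.0)   # should recover about -1.5
```

With a smooth model the central difference has O(h²) error, so this recovers the elasticity essentially exactly; for an XGBoost predictor, replace `h=1e-4` with something like `h = 0.05 * price` and interpret the result as an arc elasticity.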
I am researching in a topic related to philosophy and teaching methods. Please, could you point out me if are there any sources on dialectics or contradiction in mathematics education?
Thank you for your answers.
Please look at the text of the section on random walks, from page 9 to formula 4.7, where you will find mathematical calculations justifying the probabilistic interpretation of the Riemann zeta function.
Preprint Chaotic dynamics of an electron
I will be glad if researchers and professors answer my question with mathematical formulas or explanations. Thank you so much.
Hello, good evening. I would like to ask: is there a questionnaire aligned with the set of indicators on Mathematical Competence given by Sir Turner?
I want to understand the mathematics of the fluorescence process in terms of excitation and emission wavelengths. I want to develop a general mathematical model with certain specific parameters and, without employing a spectrometer, obtain the emission spectrum mathematically.
Hello, this is my first post on this site. I'm an undergraduate student doing some Raman spectroscopy of CVD-grown graphene strained over silicon dioxide nanospheres. I notice that D and G' peaks show up in some measurements. The process of transferring the graphene onto the silicon-dioxide-nanosphere-coated silicon chips is, I would think, far from perfect, as it certainly interferes with the structure of the graphene: there are rips and tears across the sample, as well as impurities and other defects. There are, however, "pristine" regions that show only G and 2D peaks.
On a loosely related tangent, I'm interested in how the molecular symmetry of graphene plays a role in its Raman spectra, and how that can be expressed mathematically. I wonder if perhaps the mathematical description of graphene in terms of group theory can possibly help explain the redshifts that occur in strained graphene versus unstrained graphene. If anyone has some advice or things to read about that, please let me know!
I have tried the basic method myself, including incomplete ionization, to figure out the depletion width; however, I failed miserably because of many mathematical roadblocks. I was wondering if this has been done in the literature before and I just missed it.
If anyone can help me in this regard then I would be very grateful. Thanks.
Please, is there any mathematical function that relates the SMD (D32) to the mean diameter (D50)? I actually understand what each of them represents.
I am primarily interested in 2-player combinatorial games with perfect information. Useful wiki links are below.
In statistics, Cramér's V is a measure of association between two nominal variables, giving a value between 0 and 1 (inclusive). It was first proposed by Harald Cramér (1946).
Many papers I have come across consider a threshold value of 0.15 (sometimes even 0.1) as meaningful, giving hints of a low association between the variables being tested. Do you have any reference, mathematical foundation, or explanation for why this threshold is relevant?
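For reference, Cramér's V is just the chi-square statistic rescaled into [0, 1], which is one reason any fixed cut-off such as 0.15 is a reporting convention rather than a mathematical fact. A self-contained sketch of the computation, with a made-up 2×2 contingency table:

```python
import math

def cramers_v(table):
    """Cramér's V from a 2-D contingency table given as a list of lists:
    V = sqrt( chi2 / (n * (min(r, c) - 1)) )."""
    row_totals = [sum(r) for r in table]
    col_totals = [sum(c) for c in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    k = min(len(row_totals), len(col_totals)) - 1
    return math.sqrt(chi2 / (n * k))

v = cramers_v([[30, 10], [10, 30]])   # symmetric example table
```

Because V is a monotone transform of chi-square per observation, the same V can correspond to very different p-values depending on n, which is another argument for treating thresholds like 0.15 as rules of thumb only.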
Is there a way to model the influence of pH on the electroosmotic EDL potential and velocity field inside the flat microchannel mathematically?
I'm solving a nonlinear second-order equation using the finite difference method. To calculate the value at any desired node, the three preceding nodes must be known; however, from the boundary conditions only one of these nodes is obvious, and the other two values still need to be known. It must be noted that there are plenty of guesses for the values of these nodes which lead to a compatible response.
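One standard way around unknown starting values in a marching scheme is the shooting method: treat the missing initial data as parameters, integrate forward, and adjust the parameters until the far boundary condition is satisfied. Below is a minimal sketch on a placeholder boundary-value problem (y'' = -y with y(0) = 0 and y(π/2) = 1, whose exact solution is y = sin x, so the missing initial slope should come out as 1); the actual equation, grid, and boundary values would of course be yours.

```python
import math

def integrate(slope0, n=1000):
    """March y'' = -y from x = 0 with y(0) = 0 and y'(0) = slope0,
    using the explicit midpoint method on the system y' = v, v' = -y.
    Returns y at x = pi/2."""
    y, v = 0.0, slope0
    h = (math.pi / 2) / n
    for _ in range(n):
        ym, vm = y + h/2 * v, v + h/2 * (-y)   # half-step (midpoint) state
        y, v = y + h * vm, v + h * (-ym)
    return y

# Bisection on the unknown initial slope so that y(pi/2) hits the target 1.0
lo, hi = 0.0, 2.0
for _ in range(60):
    mid = (lo + hi) / 2
    if integrate(mid) < 1.0:
        lo = mid
    else:
        hi = mid
slope = (lo + hi) / 2   # converges to 1.0, matching y = sin x
```

For nonlinear problems the residual may not be monotone in the guess, in which case a root-finder such as the secant method (or MATLAB's `fzero`) replaces the bisection; the alternative to shooting is to solve all the grid equations simultaneously with Newton's method.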
The 2023 ranking is available through the following link:
QS ranking is relatively familiar in scientific circles. It ranks universities based on the following criteria:
1- Academic Reputation
2- Employer Reputation
3- Citations per Faculty
4- Faculty Student Ratio
5- International Students Ratio
6- International Faculty Ratio
7- International Research Network
8- Employment Outcomes
- Are these parameters enough to measure the superiority of a university?
- What other factors should also be taken into account?
Please share your personal experience with these criteria.
Here I have attached the journal article and the mathematical solution of the Rayleigh number. I need to plot the graph exactly as in Figure 3 of the article, but I don't know the command. I have tried using this command,
plot(subs(N1 = 0.5, N3 = 2, N5 = 1.5, a1 = 1, a2 = 3, a3 = 1, Q = 10, R), a = 0 .. 10, 11 .. 22);
but the result shows only a single-line 2D graph. I hope someone can help me with this. Thank you in advance.
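The Maple call above substitutes a single parameter set, which is why only one curve appears; a Figure-3 style plot needs one curve per parameter set. If it helps to see the workflow outside Maple, here is a hedged Python sketch: substitute the parameters, evaluate R on a grid of wavenumbers a, then plot or minimise. The expression for R below is only a placeholder of the classic (π² + a²)-type marginal-stability form; the actual R must be taken from the paper.

```python
import math

def R(a, Q=10.0):
    """PLACEHOLDER dispersion relation of the classic free-boundary form,
    with a magnetic-field-like term scaled by Q. Replace with the paper's R."""
    return ((math.pi**2 + a**2)**3
            + math.pi**2 * Q * (math.pi**2 + a**2)) / a**2

a_grid = [0.05 * i for i in range(1, 401)]    # wavenumbers a in (0, 20]
values = [R(a) for a in a_grid]
a_crit = a_grid[values.index(min(values))]    # minimiser: the critical wavenumber
```

To reproduce a multi-curve figure, repeat the evaluation for each parameter set (e.g. several Q values) and plot each `values` list against `a_grid` with matplotlib's `plot`; in Maple, the analogous fix is passing a list of substituted expressions to a single `plot` call.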
I am currently working on the use of metacognitive abilities to improve teacher proficiency of teaching mathematics in Primary schools. I am looking for international collaborators from Japan, Germany, Singapore, Netherlands, USA, Canada and Australia.
I plan to divide my long research article (simulation + mathematical analysis) into two parts, but I am clueless about how to do this. (I cannot separate the simulation from the mathematical analysis.)
I have a few questions regarding this:
1) Do I need to show the common mathematics in both parts?
2) Can the introduction be the same?
3) Can some explanations remain the same in both parts?
Can someone give me a reference to an article that was divided into two parts?
I asked people about their knowledge & Interest.
Most had no knowledge but high interest. Now I want to describe this relationship using statistical methods.
I used a Likert scale, so the data are ordinal.
Using a Spearman's rank test I get a positive correlation with good significance.
But I don't really understand this result. I expected a negative correlation, since the interest and knowledge frequency distributions have opposite slopes. Does the test look at individual pairs, so that those with higher knowledge may also have higher interest?
PS: How could I form a mathematical equation to describe the relationship (regression)?
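Yes: Spearman's rho is computed from the paired ranks person by person, not from the two marginal frequency distributions, so "mostly low knowledge" and "mostly high interest" marginals can coexist with a positive within-person association. A self-contained sketch with made-up Likert data illustrating exactly that:

```python
import math

def ranks(xs):
    """Ranks of xs, with tied values receiving their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                      # extend the block of ties
        avg = (i + j) / 2 + 1           # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the paired ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / math.sqrt(vx * vy)

knowledge = [1, 1, 1, 2, 2, 2, 3, 3, 1, 2]   # made up: mostly low
interest  = [3, 4, 3, 4, 5, 4, 5, 5, 3, 4]   # made up: mostly high
rho = spearman(knowledge, interest)          # positive despite the marginals
```

On the PS: with an ordinal outcome, the standard regression counterpart is ordinal (proportional-odds) logistic regression rather than ordinary least squares; fitting interest on knowledge that way would give a modelled version of the same positive association.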
In my previous question I suggested using the ResearchGate platform to launch large-scale spatio-temporal comparative research projects.
The following is the description of one of the problems of pressing importance for humanitarian and educational sectors.
For the last several decades there has been a gradual loss in the quality of education at all levels. We can observe that our universities are progressively turning into entertainment institutions, where student parties, musical and sporting activities are valued more highly than studying in a library or working on painstaking calculations.
In 1998 Vladimir Arnold (1937-2010), one of the greatest mathematicians of our time, in his article "Mathematical Innumeracy Scarier Than Inquisition Fires" (newspaper "Izvestia", Moscow), stated that the power players didn't need all people to be able to think and analyze; they needed only "cogs in machines" serving their interests and business processes. He also wrote that American students didn't know how to sum simple fractions. Most of them add the numerators and denominators of one fraction to those of the other; by their understanding, 1/2 + 1/3 equals 2/5. Vladimir Arnold pointed out that with this kind of education, students can't think, prove and reason; they are easy to turn into a crowd, easily manipulated by cunning politicians, because they don't usually understand the causes and effects of political acts. I would add that this process is quite understandable and expected, because computers, the internet and the consumer-society lifestyle (with its continuous rush for more and newer commodities, which we are induced to regard as healthy behavior) have wiped out young people's skills in elementary logic and their eagerness to study hard. And this is exactly what consumer economics and its bosses, the owners of international businesses and local magnates, need.
I recall a funny incident that happened in Kharkov (Ukraine). One Biology student was asked what “two squared” was. He answered that it was the number 2 inscribed into a square.
The level and scale of the educational and intellectual decline described here can easily be measured with the help of the ResearchGate platform. It would be appropriate to test students' logical abilities, instead of using the guess-the-answer tests which have taken over all the universities within the framework of the Bologna Process in its victorious march across the territories of the former Soviet states. Many people remember that the Soviet education system was one of the best in the world. I have therefore suggested the following tests:
1. In Nikolai Bogdanov-Belsky's (1868-1945) painting "Oral Accounting at Rachinsky's People's School" (1895), one can see boys in a village school at a mental arithmetic lesson. Their teacher, Sergei Rachinsky (1833-1902), the school headmaster and also a professor at Moscow University in the 1860s, offered the children the following exercise as a mental calculation (http://commons.wikimedia.org/wiki/File:BogdanovBelsky_UstnySchet.jpg?uselang=ru):
(10² + 11² + 12² + 13² + 14²) / 365 = ?
Nineteenth-century peasant children in bast shoes ("lapti") were able to solve such a task mentally. This year, in September, this very exercise was given to senior high school pupils and first-year university students majoring in Physics and Technology in Kyiv (the capital of Ukraine), and no one could solve it.
2. An exercise of the famous mathematician Johann Carl Friedrich Gauss (1777-1855): calculate mentally the sum of the first one hundred positive integers:
1+2+3+4+…+100 = ?
3. Albrecht Dürer’s (1471-1528) magic square (http://en.wikipedia.org/wiki/Magic_square)
The German Renaissance painter was amazed by the mathematical properties of the magic square, first described in Europe in Spanish (1280s) and Italian (14th-century) manuscripts. He used the image of the square as a detail in his engraving Melencolia I, created in 1514, and included the numbers 15 and 14 in his magic square:
16 3 2 13
5 10 11 8
9 6 7 12
4 15 14 1
Ask your students to find regularities in this magic square. If this exercise seems hard, you can offer them the Lo Shu square (c. 2200 BC), a simpler variant: a magic square of order three (the minimal non-trivial case):
4 9 2
3 5 7
8 1 6
4. Summing up of simple fractions.
According to Vladimir Arnold's popular articles, in the era of computers and the Internet this test is becoming an absolute obstacle for more and more students all over the world. Any exercises of the following type will be appropriate for this part:
3/7 + 7/3 = ? and 5/6 + 7/15=?
I think these four tests will be enough. All of them test logical skills, unlike the tests created under the Bologna Process.
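For anyone who wants to check the answer keys before handing these out, all four exercises can be verified in a few lines (exercise 1 evaluates to exactly 2, which is what made it suitable for mental arithmetic, and Gauss's sum follows from pairing 1+100, 2+99, ... into 50 pairs of 101):

```python
from fractions import Fraction

# 1. Rachinsky's mental-arithmetic exercise
s = (10**2 + 11**2 + 12**2 + 13**2 + 14**2) / 365          # 730 / 365 = 2

# 2. Gauss's sum of the first hundred positive integers
gauss = 100 * 101 // 2                                      # 5050

# 3. Duerer's magic square: every row, column and main diagonal sums to 34
durer = [[16, 3, 2, 13], [5, 10, 11, 8], [9, 6, 7, 12], [4, 15, 14, 1]]
row_sums = {sum(r) for r in durer}
col_sums = {sum(c) for c in zip(*durer)}
diag = sum(durer[i][i] for i in range(4))

# 4. The fraction sums, computed exactly
f1 = Fraction(3, 7) + Fraction(7, 3)     # 58/21
f2 = Fraction(5, 6) + Fraction(7, 15)    # 13/10
```

The `Fraction` arithmetic mirrors exactly the common-denominator procedure students are expected to carry out by hand, so it doubles as a worked solution for exercise 4.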
Dear colleagues, professors and teachers,
You can offer these tasks to the students at your colleges and universities and share the results here, on the ResearchGate platform, so that we can all see the landscape of the wretchedness and misery resulting from neoliberal economics and globalization.
Complex systems are becoming one of the most useful tools in the description of observed natural phenomena across all scientific disciplines. You are welcome to share hot topics from your own area of research.
Nowadays, no one can encompass all scientific disciplines. Hence, it would be useful to all of us to know hot topics from various scientific fields.
Discussions about various methods and approaches applied to describe emergent behavior, self-organization, self-repair, multiscale phenomena, and other phenomena observed in complex systems are highly encouraged.
Philosophers of science typically recognize two kinds of values in scientific practice: (1) epistemic (or theoretical, or cognitive) virtues, like accuracy, testability, empirical support, etc, and (2) ethical (or social, or regulative) norms, like justice, egalitarianism, openness, etc. Of course, the strict separation of these categories is open to disagreement.
Are there values or norms (of either kind) that are unique to mathematics? Rigour (or provability) is one possibility; computability is another. Can you think of others? Do values play the same kind of role in math as in the natural sciences?
There are instructions on the steps for running 'align' in a command-line way in the following explanation found online (https://pymol.org/dokuwiki/doku.php?id=command:align). However, I could not find the mathematics or details of how PyMOL runs 'align' through the click-button way in the GUI. Does anyone know the mathematics or details behind it? Thanks a lot!
One of my research questions is about the level of democratic practices in the high school mathematics classroom. When examining the normality of the democratic-practices variable, it turned out that it is not normally distributed. With normal samples, we use one-sample t-tests with critical values that fit our assumptions about the level of democratic practice. For example, if we want to consider three levels, we divide (5-1)/3 = 1.33. This lets us consider scores below 2.33 weak, scores between 2.33 and 3.66 medium, and scores above 3.66 high. This serves in the one-sample t-test, where the test value is 2.33 or 3.66, to verify the level of the variable.
My question is: How can we do that in case of non-normal distribution?
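With a non-normal distribution, a nonparametric one-sample test against the same cut-off plays the role of the one-sample t-test: the sign test, or the more powerful Wilcoxon signed-rank test (e.g. `scipy.stats.wilcoxon` after subtracting the cut-off), compares the sample median to the hypothesised value 2.33 or 3.66. A minimal sketch of the sign test, with made-up scores:

```python
import math

def sign_test_p(sample, hypothesized_median):
    """Two-sided sign test: under H0 each observation falls above the
    hypothesised median with probability 1/2 (binomial tail probability);
    observations exactly equal to the median are dropped."""
    diffs = [x - hypothesized_median for x in sample if x != hypothesized_median]
    n = len(diffs)
    k = sum(1 for d in diffs if d > 0)          # count of positive signs
    tail = min(k, n - k)
    p_one = sum(math.comb(n, i) for i in range(tail + 1)) / 2**n
    return min(1.0, 2 * p_one)

scores = [4.1, 3.9, 4.4, 3.7, 4.0, 4.2, 3.8, 4.5, 3.95, 4.3]   # made up
p = sign_test_p(scores, 3.66)   # all 10 scores above the cut-off
```

Since all ten made-up scores exceed 3.66, the two-sided p-value is 2/1024, and one would conclude the group's median sits in the "high" band. The Wilcoxon signed-rank test uses the magnitudes of the differences as well as their signs, so it is usually preferred when the differences are at least roughly symmetric.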
I would like to know the effect of temperature and pressure on the density of Fe2O3 nanoparticles, and if there are some tables or equations that describe the change mathematically.
Combining more than one higher quality can yield less quality. How do we express this mathematically? For example, consider 2x² - 4x² + 6x². Notice that one expression has one term and the other has two. Taking derivatives, 4x² gives 8x and 6x² gives 12x; their sum is 20x. However, if we instead take the squares without differentiating, we get 4² = 16 and 6² = 36. So when more than one higher attribute comes together, it has fewer attributes after differentiation. This is another way of looking at the derivative. Thank you. I did not pull this out of thin air: there were a lot of stars in the Paris Saint-Germain football team; however, when playing together, many qualities turned into few qualities. Thanks.
How can all these discrepancies be explained mathematically?
I am working with a scale that could be considered to have 11 dichotomous (0-1) and polytomous (0-2, 0-3, and 0-5) items OR 20 sub-items (dichotomous and some polytomous). The sample size is sound (> 700 subjects).
A) If I do an exploratory analysis* with 11 items on half of the (randomized) sample, I get 2 factors. The confirmatory analysis** with the other half confirms the two factors, presenting good adequacy indices values. Some authors also obtained 2 factors, either with sim