Questions related to Mathematics
My dear friends, I am asking whether some of your students might be interested in applying for a postdoctoral position in China with me. Here are the link and details!
I am looking for a method, algorithm, or line of reasoning that can help determine numerically whether a function is differentiable at a given point.
To give a clearer picture: say that, while solving a fluid flow problem using CFD, I obtain some scalar field along a line, with a graph similar to y = |x| (take the x-axis to be the line along which the scalar field is drawn, and let the origin be a grid point, say P).
So I know that at grid point P the function is not differentiable. But how can I check this numerically? I thought of using directional derivatives, but could not decide along which directions to compare (the line in the example is just for illustration).
Ideally, when P is surrounded by 8 grid points, the field may be differentiable along certain directions and not along others. Any suggestions?
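A minimal numerical heuristic (my own sketch, not a standard algorithm): compare one-sided difference quotients as the step shrinks; if the forward and backward slopes keep disagreeing, the point is a candidate kink. On grid data you would build the one-sided slopes from neighboring grid values along each of the 8 directions through P (interpolating where needed) and flag the directions where they disagree.

```python
# Hedged sketch: flag a likely non-differentiable point by comparing
# one-sided difference quotients as the step size h shrinks. The
# tolerance and step sizes are illustrative assumptions; on discrete CFD
# grid data, f(p +/- h) would come from neighboring grid values.

def one_sided_slopes(f, p, h):
    fwd = (f(p + h) - f(p)) / h
    bwd = (f(p) - f(p - h)) / h
    return fwd, bwd

def looks_non_differentiable(f, p, hs=(1e-2, 1e-3, 1e-4), tol=1e-3):
    # If forward and backward slopes disagree consistently as h -> 0,
    # the point is a candidate kink (like y = |x| at x = 0).
    gaps = [abs(fw - bw) for fw, bw in (one_sided_slopes(f, p, h) for h in hs)]
    return min(gaps) > tol

print(looks_non_differentiable(abs, 0.0))            # kink: slopes -1 vs +1
print(looks_non_differentiable(lambda x: x * x, 0.0))  # smooth at 0
```

For y = |x| the gap between the one-sided slopes stays at 2 no matter how small h gets, while for a smooth function it shrinks with h; that contrast is what the test exploits.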
Take, for example, the concept of a minimal flow, that is, a gradient vector field whose level surfaces are minimal surfaces. Then the globally minimal flow, evolving toward an absolutely minimal state, could be compared with the quantum vacuum, and locally minimal flows could be compared with fields and particles. At the same time, it is clear that the space in which this minimal flow moves must be chosen correctly.
I have searched a lot on Google and YouTube for a step-by-step explanation of the Finite Element Method. All of them throw a bunch of equations and mathematical terms at you without explaining why, or where they came from.
Would you please suggest a good book or an article that clearly explains FEM?
Electromagnetic (EM) waves have attracted great interest among scientists and engineers for centuries. And this interest seems to be on the rise, in view of the new applications of EM waves being explored and developed, particularly at newer and higher frequencies.
The propagation characteristics of an EM wave depend, to a large extent, on its frequency (or wavelength). And when an EM wave interacts with an object or material, it undergoes reflection, refraction, scattering, attenuation, diffraction, and/or absorption. Each of these effects is dependent on the frequency of the EM wave, because the size of the wavelength relative to the object or material assumes great significance.
Due to the huge range of frequencies employed in various applications these days, EM waves undergo a variety of different effects. This sometimes confuses the scientific community, as it is often unclear which effect dominates at which frequency.
Thus a single mathematical formula (or a small set of formulae) could be of great help if the different effects listed above, and their relative weights, could be read off at different frequencies. This would be a great boon to young scientists and engineers, as it would simplify things, particularly for the mathematically minded.
I am hoping for a global overview of mathematical giftedness and its support in school and/or at the extracurricular level. What programmes/opportunities are offered?
By dynamical systems, I mean systems that can be modeled by ODEs.
For linear ODEs, we can investigate stability via eigenvalues, and for nonlinear systems (as well as linear ones) we can use Lyapunov stability theory.
I want to know: is there any other method for investigating the stability of dynamical systems?
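As a concrete baseline alongside Lyapunov theory, the eigenvalue test for linear (or linearized) systems is one line of code; other tools worth looking up include LaSalle's invariance principle, contraction analysis, and numerically estimated Lyapunov exponents. A sketch:

```python
# A minimal sketch of linearized stability analysis: for x' = A x, the
# origin is asymptotically stable iff every eigenvalue of A has negative
# real part. The matrices below are illustrative examples.
import numpy as np

def is_asymptotically_stable(A):
    return bool(np.all(np.linalg.eigvals(A).real < 0))

A_stable = np.array([[0.0, 1.0], [-2.0, -3.0]])  # x'' + 3x' + 2x = 0, eigenvalues -1, -2
A_center = np.array([[0.0, 1.0], [-1.0, 0.0]])   # undamped oscillator, eigenvalues +/- i

print(is_asymptotically_stable(A_stable))  # True
print(is_asymptotically_stable(A_center))  # False (marginal case, test is inconclusive)
```

For nonlinear systems, the same test applied to the Jacobian at an equilibrium gives local conclusions only; the marginal case (zero real part) is exactly where Lyapunov or center-manifold methods take over.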
1. If p_j is the nearest neighbor of p_i, then p_i-p_j is a Delaunay edge.
2. In a 3D set of points, suppose we know that consecutive points, i.e. p_i and p_(i+1), are nearest neighbors.
3. The 3D points do not form a straight line.
Then every Delaunay tessellation (3D) has at least 2 nearest-neighbor edges.
Is my assumption true? If not, can you please explain the possible exceptions?
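A hedged numerical spot check of point 1 (it is a classical result that, in general position, the nearest-neighbor graph is a subgraph of the Delaunay triangulation): using scipy's Delaunay on random 3D points, every point's nearest neighbor should share a Delaunay edge with it.

```python
# Empirical spot check, not a proof: on random 3D points, verify that each
# point is joined to its nearest neighbor by a Delaunay edge.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
pts = rng.random((30, 3))
tri = Delaunay(pts)

# Collect all Delaunay edges from the tetrahedra.
edges = set()
for simplex in tri.simplices:
    for i in range(4):
        for j in range(i + 1, 4):
            a, b = sorted((int(simplex[i]), int(simplex[j])))
            edges.add((a, b))

# For every point, its nearest neighbor should share a Delaunay edge.
ok = True
for i in range(len(pts)):
    d = np.linalg.norm(pts - pts[i], axis=1)
    d[i] = np.inf
    j = int(np.argmin(d))
    if tuple(sorted((i, j))) not in edges:
        ok = False
print(ok)
```

Degenerate configurations (cospherical or collinear points) are the usual source of exceptions, which is presumably why your point 3 excludes collinearity.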
Any decision-making problem, when precisely formulated within the framework of mathematics, is posed as an optimization problem. There are many ways, in fact I think infinitely many, in which one can partition the set of all possible optimization problems into classes.
1. I often hear people label metaheuristic and heuristic algorithms as general-purpose algorithms (I understand what they mean), but I keep wondering: can we apply these algorithms to arbitrary optimization problems from any class? More precisely, can we adjust or re-model any optimization problem in a way that permits us to attack it with the algorithms in question?
2. Then I thought: if the answer to 1 is yes, then by extending the argument we could re-formulate any given problem to be attacked by any algorithm we desire (of course at a cost), and the label becomes a useless tautology.
I'm looking for different insights :)
I am currently studying the effect of atrophy of a muscle on the clinical outcome of joint injury. There is another muscle that was previously well established to have an effect on clinical outcome, and the two muscles are closely related. The aim of the study is to shed some light on the previously ignored muscle, to see whether anything can be done to improve clinical outcomes in that respect.
While doing univariate analysis, I wasn't sure whether I should include the previously established muscle as well; when I included it in the multiple linear regression model, the initially significant primary variable became non-significant. I wondered whether this could be due to collinearity, but the VIF was not high enough to indicate significant collinearity between the two variables (GVIF^(1/(2*Df)) = 1.359987).
My question is: should these 2 variables be included in the same model if they are highly correlated (clinically and statistically) but were not found to be collinear, or should they be evaluated separately?
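For what it's worth, a sketch of how the plain (non-generalized) VIF is computed, with synthetic stand-in data. Note that two predictors can be correlated enough to swap significance without crossing the usual VIF alarm thresholds (5 or 10), because collinearity inflates standard errors continuously; a value around 1.36 corresponds to only mild inflation.

```python
# Hedged sketch: variance inflation factors computed by hand,
# VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing predictor j
# on the remaining predictors. The data below are synthetic stand-ins.
import numpy as np

def vif(X):
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return out

rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
x2 = 0.5 * x1 + rng.normal(size=200)   # moderately correlated with x1
x3 = rng.normal(size=200)
vifs = vif(np.column_stack([x1, x2, x3]))
print(vifs)  # all modest, in the same range as a GVIF-type value of ~1.36
```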
Good evening all;
We are looking for literature on the mixed integer formulation of water distribution problems using Multi objective optimization methods.
I know lots of composers have created works around mathematical constructs such as the Fibonacci sequence. I would like to learn if any composers have used mathematical constructs in their music to represent journeys.
I am trying to make S-parameter (transmission) measurements using a Tektronix DSA8300 oscilloscope. Initially, the S-parameter files were generated in linear magnitude format. Now the transmission S-parameter files are coming out of the oscilloscope in dB format; perhaps the machine settings have changed.
1) Kindly point me to the appropriate setting in the Tektronix DSA8300 so that the data are exported in linear magnitude rather than dB format.
2) Alternative mathematical ways to convert the data back to linear magnitude format are also appreciated.
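Regarding 2): whichever format the instrument exports, the magnitude conversion is lossless and can be done after the fact. S-parameters are wave-amplitude ratios, so the 20·log10 convention applies:

```python
# Post-hoc conversion of S-parameter magnitudes between dB and linear.
# S-parameters are ratios of wave amplitudes, so the 20*log10 convention
# applies (10*log10 would be for power ratios).
import math

def db_to_linear(db):
    return 10.0 ** (db / 20.0)

def linear_to_db(mag):
    return 20.0 * math.log10(mag)

print(db_to_linear(-6.0206))  # ~0.5 (half amplitude)
print(linear_to_db(1.0))      # 0.0 (unity transmission)
```

Applying `db_to_linear` column-wise to the magnitude column of the exported Touchstone-style file recovers the original linear data exactly.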
Charles Sanders Peirce regarded mathematics as “the only one of the sciences which does not concern itself to inquire what the actual facts are, but studies hypotheses exclusively” (RLT, 114). Since, by contrast, “[w]e must begin with all the prejudices which we actually have when we enter upon the study of philosophy” (CP 5.265), the presuppositionless status of mathematics makes it more primitive than anything found in philosophy. Given that phenomenology falls under philosophy (CP 1.280), we get the result that mathematics is prior to phenomenology.
Yet, Peirce also held that “every deductive inference is performed, and can only be performed, by imagining an instance in which the premises are true and observing by contemplation of the image that the conclusion is true” (NEM III/2, 968).
We thus have two conflicting arguments:
On the one hand, one could argue that mathematics is prior to phenomenology because mathematics makes even fewer presuppositions than phenomenology.
On the other hand, one could argue that phenomenology is prior to mathematics because whatever happens during mathematical inquiry must perforce appear before (some)one.
Peirce's pronouncements notwithstanding, it is not obvious to me why the first argument should trump the second. In fact, I find considerations about the inevitability of appearing in mathematics to be decisive.
What do you think?
Hi researchers, I have a problem with the mathematical formulation of a multi-objective model for solving the RFID network planning problem. Do you have any courses, documents, or information that could help me build my mathematical model for RFID network optimization deployed in a body area network? I have not yet chosen the multi-objective optimization approach or algorithm; I am still formulating the problem mathematically.
How do I obtain the mathematical expression for the "limiting current density used to reduce Fe3+ (A/m2)"? In practice, how do I find i(Fe) in
i(c) = i(Cu) + i(Fe)?
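A hedged starting point, assuming the Fe3+ reduction runs at its mass-transfer limit (symbols: n electrons transferred, here n = 1 for Fe3+ + e- → Fe2+; F the Faraday constant; D the diffusivity of Fe3+; C_b its bulk concentration; δ the diffusion-layer thickness; k_m = D/δ the mass-transfer coefficient):

```latex
i_{\mathrm{Fe}} \;\approx\; i_{\lim} \;=\; \frac{n F D C_b}{\delta} \;=\; n F k_m C_b,
\qquad k_m = \frac{D}{\delta}
```

and then i(Fe) also follows by difference from the measured total, i(Fe) = i(c) - i(Cu), which provides a consistency check on the mass-transfer estimate.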
Is there an encyclopedia of all the branching mathematical axioms, together with various ways of proving different theorems based on those axioms?
As you may know, there are various mathematical tools and techniques that can be combined or hybridized with heuristic techniques to address their entrapment in local minima and their convergence issues. I know two such techniques, chaos theory and the Lévy distribution, as I have used them to increase the convergence speed of the Gravitational Search Algorithm (GSA). So my question is: can you name and briefly explain other mathematical techniques that can be combined with optimization algorithms to make them fit for solving complex real-world problems?
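As a concrete example of the Lévy ingredient mentioned above, here is a sketch of Mantegna's algorithm, a standard way to draw heavy-tailed Lévy step lengths for Lévy-flight perturbations (beta is the stability index; 1.5 is a common choice in the metaheuristics literature):

```python
# Hedged sketch of Mantegna's algorithm for Levy-stable step lengths,
# as used in Levy-flight-based metaheuristics (e.g. Cuckoo Search).
import math
import random

def levy_step(beta=1.5):
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = random.gauss(0, sigma_u)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

random.seed(0)
steps = [levy_step() for _ in range(1000)]
# Heavy tail: occasional steps far larger than the typical one, which is
# what lets a search escape local minima.
print(max(abs(s) for s in steps))
```

Other ingredients commonly hybridized with metaheuristics, which I can vouch for from the literature, include opposition-based learning, quantum-inspired update rules, and local-search memetic phases.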
The master Paul Erdős said, "Mathematics may not be ready for such problems."
Terence Tao recently proposed a new and advanced approach to this conjecture and concluded that "almost all orbits of the Collatz map attain almost bounded values".
The Collatz conjecture is infamous and very hard to solve.
Take any positive integer. If it is even, divide it by 2. If it is odd, multiply it by 3 and add 1. Whatever the answer, repeat the same operations on the result.
Suppose the number is 5; then the operations will be as follows: 5, 16, 8, 4, 2, 1, 4, 2, 1.
Suppose the number is 7; then the operations will be as follows: 7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1, 4, 2, 1.
The conjecture has been verified by computer for numbers as big as 10^18, and it holds trivially for all powers of 2. This is easily checked: 128, 64, 32, 16, 8, 4, 2, 1, 4, 2, 1.
How does any positive integer reach some power of 2, so as to enter the loop 4, 2, 1?
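The iteration described above can be implemented directly; a quick sketch reproducing the example sequences:

```python
# Direct implementation of the Collatz iteration described above.

def collatz(n):
    seq = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        seq.append(n)
    return seq

print(collatz(5))  # [5, 16, 8, 4, 2, 1]
print(collatz(7))  # [7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1]
```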
We claim that every positive integer has a special number equal to a multiple of it; when the 3n+1 operation is performed on that multiple, it leads to some power of 2.
N = 1 gives special multiple 5 = 5 × 1.
N = 3 gives special multiple 21 = 3 × 7.
N = 5 gives special multiple 85 = 17 × 5.
The numbers in the set (1, 5, 21, 85, 341, ...) are called Collatz numbers.
So we can claim that the Collatz conjecture is almost solved.
A careful reading of The Absolute Differential Calculus by Tullio Levi-Civita (Blackie & Son Limited, 50 Old Bailey, London, 1927), together with Plato's cosmology, strongly suggests that gravity is actually real-world mathematics. In other words, is gravitation a pure experimental mathematics?
In the preprint
W.-H. Li and F. Qi, A further generalization of the Catalan numbers and its explicit formula and integral representation, Authorea Preprints (2020), available online at https://doi.org/10.22541/au.159844115.58373405
I derived the two integral formulas shown in the attached picture.
(1) Do you know the existence of these two integral formulas? Please give concrete and explicit references containing these two integral formulas.
(2) Can you find direct and elementary proofs for these two integral formulas?
Why do we plot absorbance vs. wavelength although there is no direct formula between them? I would also like to know whether there is any direct or indirect relation between the molar extinction coefficient and wavelength. I am trying to generate a theoretical plot of absorbance vs. wavelength for single-layer MoS2 using a Python program, so I need a mathematical formula for the calculation.
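The missing link is the Beer-Lambert law: absorbance depends on wavelength only through the wavelength-dependent molar extinction coefficient, A(λ) = ε(λ)·c·l. There is no universal closed form for ε(λ); for MoS2 you would substitute tabulated or computed values. A sketch with placeholder numbers:

```python
# Sketch: absorbance spectrum from the Beer-Lambert law,
# A(wavelength) = epsilon(wavelength) * c * l. The epsilon values below
# are hypothetical placeholders, not MoS2 data; substitute measured or
# DFT-derived epsilon(wavelength) for a real plot.

wavelengths_nm = [400, 450, 500, 550, 600, 650, 700]
epsilon = [9000, 7000, 5000, 4500, 6500, 8000, 3000]  # L mol^-1 cm^-1 (illustrative)
c = 1e-4   # mol/L, illustrative concentration
l = 1.0    # cm, path length

absorbance = [e * c * l for e in epsilon]
for wl, A in zip(wavelengths_nm, absorbance):
    print(wl, round(A, 3))
```

So the wavelength dependence of the plot is entirely carried by ε(λ); the extinction coefficient is the material's spectrum, rescaled by concentration and path length.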
The literature on public (and some school students') understanding of science and mathematics shows many have problems decoding relatively simple information, concepts and data such as from graphs. In the UK, and many other countries, the public have been exposed to unprecedented amounts of information, ideas, scientific findings, formulae, graphs and so on that purport to provide understanding of the global COVID-19 pandemic, so as to presumably advise on risk and guide personal decisions and influence behaviour. But what are the implications of this massive shift in communication for public understanding in general and for future science and mathematics education in schools?
FTIR technology is considered the most advanced for the detection of adulterants in milk. Is there any mathematical relation that describes the relationship between the amount of an adulterant in milk and the FTIR absorbance? Please suggest any research articles on this or related areas.
Hello fellow scientists,
I wish to determine the dissociation constant (KD) of a DNA polymerase binding dsDNA. I won't disclose which DNA polymerase it is, because this is unpublished work. I have done some binding assays in agarose gels, but due to the poor sensitivity of the available dyes I had to visualize the relative binding stoichiometrically, and I could not simply set the protein or DNA concentration around the expected KD.
Previous work in our lab determined a KD = 20 nM for our DNA polymerase binding a 33-mer locked double-stranded DNA hairpin. The purpose of using something so complicated was kinetics assays.
However, I am using a 13-mer dsDNA construct, because my goal is to crystallize the DNA complex and a 33-mer is just too large! My supervisor has advised me not to assume that the KD is actually 20 nM for my small dsDNA construct.
I am interested in using Isothermal Titration Calorimetry (ITC) mainly to determine the KD of my protein binding this 13-mer dsDNA construct. I would titrate my dsDNA into a fixed concentration of protein. I could guess that the KD is 20 nM, but I don't know for sure.
I have heard that to determine the KD you need some estimate of it, then scan ligand concentrations above and below that value, measure the response to get a curve of response vs. ligand concentration, and fit the KD mathematically; essentially it is the inflection point of the binding curve.
However, that advice doesn't tell me: if the KD is, say, 20 nM, what should the fixed concentration of my protein be? (I have appreciable amounts of 100 µM protein; I am a crystallographer, so excess protein isn't an issue.) What are the maximum and minimum ligand concentrations I should scan? What if the KD is far worse than predicted and is actually 1 µM? What fixed protein concentration and what range of ligand concentrations should I use then?
Is there a way to measure the KD with one fixed protein concentration and a wide range of ligand concentrations, regardless of whether the KD is 20 nM or 1 µM? Is that possible?
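A hedged planning sketch (assumptions: 1:1 binding, illustrative concentrations): with the exact quadratic isotherm you can simulate candidate titrations before committing sample. For ITC specifically, a common rule of thumb is to keep the Wiseman c-value (c = [P]total / KD) roughly between 1 and 1000; a fixed protein concentration of a few µM would give c of a few hundred for KD = 20 nM and still c of a few for KD = 1 µM, covering both scenarios.

```python
# Hedged sketch: the exact 1:1 binding isotherm (the "quadratic"
# equation), which stays valid when the fixed species is near or above
# KD. Useful for simulating what a titration would look like.
import math

def fraction_bound(P, L, Kd):
    # P: total protein (fixed), L: total ligand (titrated), units as Kd.
    s = P + L + Kd
    PL = (s - math.sqrt(s * s - 4 * P * L)) / 2.0
    return PL / P

# Example: 1 uM protein titrated with dsDNA, two candidate KDs (in uM).
for Kd in (0.02, 1.0):  # 20 nM vs 1 uM
    curve = [round(fraction_bound(1.0, L, Kd), 3) for L in (0.25, 0.5, 1.0, 2.0, 4.0)]
    print(Kd, curve)
```

Comparing the two printed curves shows why one titration design can distinguish the scenarios: with KD = 20 nM the binding is essentially stoichiometric (sharp break at 1:1), while with KD = 1 µM the curve rises gradually, and the fit recovers KD from that curvature.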
NO. No one on Earth can claim to "own the truth" -- not even the natural sciences. And mathematics has no anchor on Nature.
With physics, the elusive truth becomes the object itself, which physics trusts using the scientific method, as fairly as humanly possible and as objectively (friend and foe) as possible.
With mathematics, on the other hand, one must trust using only logic, and the most amazing thing has been how much the Nature as seen by physics (the Wirklichkeit) follows the logic as seen by mathematics (without necessarily using Wirklichkeit) -- and vice-versa. This implies that something is true in Wirklichkeit iff (if and only if) it is logical.
Also, any true rebuffing of a "fake controversy" (i.e., fake because it was created by the reader willingly or not, and not in the data itself) risks coming across as sharply negative. Thus, rebuffing of truth-deniers leads to ...affirming truth-deniers. The semantic principle is: before facing the night, one should not counter the darkness but create light. When faced with a "stone thrown by an enemy" one should see it as a construction stone offered by a colleague.
But everyone helps. The noise defines the signal. The signal is what the noise is not. To further put the question in perspective, in terms of fault-tolerant design and CS, consensus (aka,"Byzantine agreement") is a design protocol to bring processors to agreement on a bit despite a fraction of bad processors behaving to disrupt the outcome. The disruption is modeled as noise and can come from any source --- attackers or faults, even hardware faults.
Arguing, in turn, would risk creating a fat target for bad faith, or for merely misleading references, exaggerations, and pseudo-works; we see this rampant on RG, where even porous publications are cited as if they were valid.
Finally, arguing may bring in the ego, which is not rational and may tend to strengthen the position of a truth-denier. Following Pascal, people tend to be better convinced by arguments they found themselves, from the angle that they see (and there are many angles to every question). Pascal thought that the best way to defeat the erroneous views of others was not to confront them head-on but to slip in through the backdoor of their beliefs. And trust is highest as self-trust: everyone tends to trust themselves more readily than they trust someone else.
What is your qualified opinion? This question considered various options and offers a NO as the best answer. Here, to be clear, "truth-denial" is to be understood as one's own "truth" -- which can be another's "falsity", or not. An impasse is created, how to best solve it?
My question is: is there any mathematical or empirical way, other than experiments, to prove that, given a dataset containing noisy signals y(t) [Y = X + N] and another dataset containing the noise N, a generator will be able to produce clean signals X̂ from a random noise vector z?
I have a query regarding data transformation if anyone can provide any guidance please?
I was wondering whether, generally, it is possible to transform a variable's raw data twice, using 2 different methods, for the purposes of 2 different tests? I will give a little background to my study first. I have a variable for 'Adverse Childhood Experiences' (ACE) containing 1 score per participant. N = 113; however, 65 of these are 0 values and 3 are missing data, which I believe is disrupting my data considerably. I understand that it is not advised to simply remove the cases that read 0 just because there are many (however, if you recommend otherwise, please let me know why).
Useful to note here is that this variable has a skewness of 1.943, and because of this, I have made the decision to transform it.
I am carrying out a path analysis with 1 IV, one DV and 2 mediators. In the first instance I am carrying out a t-test (IV: gender, DV: ACE score), and in the second instance a linear regression (IV: age, DV: ACE score), to determine whether age and gender need to be included in my path analysis as covariates. In order to meet the assumptions of the t-test (namely, normal distribution across both levels of the IV, male and female), I transformed the raw ACE data using Tukey's formula, which brought the skewness below 1 for each IV level. But when I go on to carry out the linear regression and aim to meet the assumption of approximately normally distributed residuals, the assumption is not met on the Tukey-transformed ACE data. I have carried out a number of other transformations on the raw ACE data, and the only one for which the regression residuals are normally distributed is a Log10 transformation.
My question is this: am I able to carry out the t-test with the Tukey transformed variable data, and then the linear regression with the Log10 transformed data? Or is it the case that I need to use the same transformed data for each stage of the analysis (ie. both Tukey or both Log10 for t-test and linear regression and then the same onward path analyses?)
If it is the case that I will need to use the Log10 ACE data to go back and carry out the gender t-test, it is useful to note here that I have done this already and when inspecting the Log10 transformed ACE data across the gender variable descriptives table the results come out very strange - for example, N for males goes down from 15 to 6, and N for females goes down from 115 to 59, and there are outliers, where there were none in the Tukey transformed data descriptives, so it is confusing me a little.
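One possible explanation for the shrinking N, offered as a hedged guess since I cannot see the data: log10(0) is undefined, so every zero score silently becomes missing under a plain Log10 transform, whereas log10(x + 1) retains all cases. The sketch below uses made-up, zero-inflated scores to illustrate:

```python
# Hedged illustration: a plain log10 transform drops zero scores
# (log10(0) is undefined), while log10(x + 1) keeps every case. The
# scores are fabricated, mimicking a zero-inflated ACE-type variable.
import math

scores = [0] * 65 + [1, 1, 2, 2, 3, 3, 4, 5, 6, 8] * 4 + [10] * 5

def skewness(xs):
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    s3 = sum((x - m) ** 3 for x in xs) / n
    return s3 / s2 ** 1.5

plain_log = [math.log10(x) for x in scores if x > 0]  # zeros silently lost
log_plus1 = [math.log10(x + 1) for x in scores]       # all cases retained

print(len(scores), len(plain_log), len(log_plus1))
print(round(skewness(scores), 2), round(skewness(log_plus1), 2))
```

If your software reports a reduced N after Log10, checking for listwise deletion of the zero cases would be my first diagnostic; log10(x + 1) still tames the skew without discarding them.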
Any guidance welcome!
In mathematical ecology, a recent trend is predator-induced fear in prey, an indirect effect of the predator on the prey. My question is: how would a prey population fear an infected predator? Is an infected predator capable of inducing the same level of fear as a healthy one? Any efforts in this direction will be appreciated.
I am dealing with data with several features, many of which are highly correlated with each other as well as with the dependent variable.
In my research on this topic, I found that multicollinearity is harmful in regression problems and may prevent ending up with a good model. I got the suggestion that if features are highly correlated, we should remove some of them using the VIF criterion.
But, logically, when I think of removing correlated features from my analysis: how can I expect a better model when I am not using all the available information?
Is there any logical or mathematical explanation available for the above question?
Also, I think each feature is somehow related to the others (maybe nonlinearly); in that case, do we have a problem of multicollinearity?
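A small simulation may make the trade-off concrete (synthetic data, my own illustration): with two nearly duplicated predictors, collinearity destroys no information for prediction, but OLS cannot decide how to split the shared effect, so individual coefficients become wildly unstable. That is why removal (or ridge regression / PCA) is advised for interpretation, even though little predictive information is discarded.

```python
# Hedged sketch of why collinearity hurts: with two nearly identical
# predictors, OLS coefficient estimates swing wildly across resamples
# even though the fitted predictions stay accurate.
import numpy as np

rng = np.random.default_rng(42)
b1_estimates = []
for _ in range(200):
    x1 = rng.normal(size=100)
    x2 = x1 + 0.05 * rng.normal(size=100)   # almost a copy of x1
    y = 1.0 * x1 + 1.0 * x2 + rng.normal(size=100)
    X = np.column_stack([np.ones(100), x1, x2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    b1_estimates.append(float(beta[1]))

# The true coefficient on x1 is 1, but estimates scatter widely around it.
print(round(float(np.mean(b1_estimates)), 2), round(float(np.std(b1_estimates)), 2))
```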
The new Education Act (LOMLOE) now being prepared in Spain intends to make Mathematics an optional subject. The Mathematics Institute has issued a manifesto arguing for the importance of Mathematics in society, and in favour of keeping Mathematics a compulsory subject in high school. If you agree, please sign the manifesto at the link below (it is in Spanish; I don't remember whether there is an English version):
There is also a petition at change.org:
Thank you very much in advance.
In the midst of, or after, Covid-19: any suggestions or articles on how best to implement blended teaching to optimize the teaching and learning of mathematics?
For typical dose-response assays, our lab usually uses steady-state intervals to define the difference between control and tested compound. For those assays, we typically use the slope (angular coefficient) or end-point value of a given curve (within steady state) to estimate the percentage of inhibition, or even kinetic constants.
Now we have been working with an enzyme with a strongly sigmoidal time-course reaction (Hill n = 3). How can I mathematically compare curves between control and inhibited reactions, or calculate constants?
If anyone could point me to a good theoretical reference or literature examples, I would be very thankful.
Stay safe, everyone.
Many informal settlements have insufficient capacity to forecast, check, handle and reduce disaster risk. These communities face a growing range of challenges including economic hardship, technological and social impediments, urbanisation, under-development, wildfire, climate change, flooding, drought, geological hazards and the impact of epidemics such as HIV/AIDS and COVID-19, sometimes termed ‘the burden of disease’. The inability of these communities to withstand adversities affects the sustainability of initiatives to develop them.
This is a question I would have asked during my master's degree research on resilience in disasters. I would like to know other researchers' opinions, as I would like to answer this question properly in a different research-related topic.
I am studying mass-spring-damper systems with Coulomb friction. There are many discussions of simulating such systems using numerical methods and of the problems that arise due to the discontinuous excitation, but I wanted to know whether an analytical solution exists. To be mathematically precise, I am trying to solve the following analytically:
m*(d2x/dt2) + c*(dx/dt) + k*x = F*sign(dx/dt)
where the sign function is defined as:
sign(var) = 0 if var = 0
sign(var) = 1 if var > 0
sign(var) = -1 if var < 0
Note: I am aware that such systems can be treated as piecewise-linear systems, but I want to know whether a general solution exists that solves the problem without breaking it into a number of mini-problems.
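Not the closed-form solution asked for, but a numerical companion for checking candidate solutions (sign convention assumed here: friction opposes motion, i.e. the right-hand side is taken as -F·sgn(x')): integrating the system exhibits the hallmark Coulomb behavior of stopping in finite time inside the stick band |k·x| ≤ F.

```python
# Hedged numerical check via classical RK4 of
#   m x'' + c x' + k x = -F * sgn(x')
# (friction assumed to oppose motion). All parameter values are
# illustrative. Note RK4 does not model sticking exactly; near rest the
# solution chatters with tiny velocity instead of stopping identically.

def sgn(v):
    return 0.0 if v == 0 else (1.0 if v > 0 else -1.0)

def simulate(m=1.0, c=0.1, k=4.0, F=0.5, x0=2.0, v0=0.0, dt=1e-3, T=20.0):
    def acc(x, v):
        return (-c * v - k * x - F * sgn(v)) / m
    x, v = x0, v0
    for _ in range(int(T / dt)):
        k1x, k1v = v, acc(x, v)
        k2x, k2v = v + dt / 2 * k1v, acc(x + dt / 2 * k1x, v + dt / 2 * k1v)
        k3x, k3v = v + dt / 2 * k2v, acc(x + dt / 2 * k2x, v + dt / 2 * k2v)
        k4x, k4v = v + dt * k3v, acc(x + dt * k3x, v + dt * k3v)
        x += dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        v += dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return x, v

x_end, v_end = simulate()
print(x_end, v_end)  # mass ends up nearly at rest, offset from x = 0
```

The residual offset at rest is the physically expected signature of Coulomb friction: any final position with |k·x| ≤ F is an equilibrium, which is also why a single global closed-form formula is elusive and the literature resorts to the piecewise half-cycle solutions you mention.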
Much has been said about the differences between physics and mathematics, but less attention has been paid to the differences between physics and chemistry.
The question is: where does physics end and chemistry begin?
I am interested in solving a mathematical problem (MILP) using evolutionary algorithms, but I am confused about which one to choose as a beginner in programming. Please suggest an algorithm that is easy to implement and gives good results.
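If a simple starting point helps, here is a bare-bones genetic algorithm on a toy bounded integer problem. The problem, operators, and parameters are all illustrative assumptions; for serious MILP work, exact branch-and-bound solvers are the usual first choice, with evolutionary methods as a fallback for hard or black-box instances.

```python
# Hedged sketch: minimal genetic algorithm (elitism + crossover +
# integer mutation) minimizing (x-7)^2 + (y+3)^2 over an integer box.
import random

random.seed(3)
LO, HI = -20, 20

def fitness(ind):
    x, y = ind
    return (x - 7) ** 2 + (y + 3) ** 2  # optimum at (7, -3)

def mutate(ind):
    # Nudge each gene by -1, 0, or +1, clipped to the box.
    return tuple(min(HI, max(LO, g + random.choice((-1, 0, 1)))) for g in ind)

def crossover(a, b):
    # Uniform crossover: pick each gene from either parent.
    return tuple(random.choice(pair) for pair in zip(a, b))

pop = [(random.randint(LO, HI), random.randint(LO, HI)) for _ in range(30)]
for _ in range(100):
    pop.sort(key=fitness)
    elite = pop[:10]                      # keep the 10 best unchanged
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(20)]

best = min(pop, key=fitness)
print(best, fitness(best))
```

The same skeleton (encode, evaluate, select, vary) carries over to differential evolution or particle swarm variants, which many beginners also find approachable.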
What are the mathematical equations used to assess the environmental impact using some biological criteria in green algae?
Quadratic equations with complex roots are considered unsolvable in secondary schools. This limitation is due to the lack of a topic addressing complex numbers in the Nigerian secondary school mathematics curriculum.
Is it okay to introduce the idea of complex numbers so as to enable students to solve a wider range of questions?
This question was raised by a student I coach, when I told him that some quadratic equations do not have solutions in the realm of real numbers!
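For the classroom, the point is easy to demonstrate: the quadratic formula itself does not change, only the number system does. A small sketch (Python's built-in cmath handles the complex square root):

```python
# Illustration: the quadratic formula works unchanged once complex
# numbers are allowed. x^2 + x + 1 = 0 has no real roots, but it has two
# complex conjugate ones.
import cmath

def quadratic_roots(a, b, c):
    d = cmath.sqrt(b * b - 4 * a * c)   # works even when b^2 - 4ac < 0
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

r1, r2 = quadratic_roots(1, 1, 1)
print(r1, r2)                 # -0.5 +/- (sqrt(3)/2) i
print(abs(r1 * r1 + r1 + 1))  # ~0: it really satisfies the equation
```

Substituting a root back into the polynomial and seeing it vanish is often the most convincing step for students meeting complex numbers for the first time.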
I understand that we can produce that number in MATLAB by evaluating exp(1), or possibly exp(sym(1)) for the exact representation. But e is a very common constant in mathematics, as important as pi to some scholars; so, after all these many versions of MATLAB, why haven't they recognized this valuable constant yet and shown some appreciation by defining it as an individual constant, rather than requiring the exp function?
UPDATE: The values of the variables that I am currently concerned with are:
While trying to solve a circuit equation, I stumbled onto a type of Liénard equation, but I am unable to solve it analytically:
x'' + a(x - 1)x' + x = V    (1)
where the dash (') represents differentiation w.r.t. time (t).
With the substitution y = x - V and w(y) = y', it is converted into a first-order equation:
w·w' + a(y + V - 1)·w + y = 0    (2)
where the dash (') now represents differentiation w.r.t. y.
If I then substitute z = ∫ -a(y + V - 1) dy, the equation is converted into an Abel equation of the second kind:
w·w' - w = f(z)    (3)
with differentiation w.r.t. z. From here it gets more and more complicated.
I would like to solve equation (1) by some other method, or to finish the approach I started. Kindly help me solve this.
Thank you for your time.
I am interested in including the inverse piezoelectric effect in my GaN HEMT simulation. Sentaurus Device provides a special feature that lets me update the stress field by invoking the mechanics solver (Sentaurus Interconnect). But I don't have confidence in the results I got, because, from a mathematical point of view, solving the inverse piezoelectric effect is just a simple matrix multiplication (AB = C). However, the final matrix C I got was very strange: some components of C should be zero, but they are not. I was wondering whether anyone has encountered the same situation?
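As a hedged cross-check of the matrix algebra (the symmetry pattern comes from standard wurtzite 6mm piezoelectric theory; the numerical coefficients are approximate literature values, not Sentaurus defaults): for GaN only e15, e31, e33 are nonzero, so a purely biaxial in-plane strain must produce zero in-plane response components.

```python
# Sanity check: wurtzite (6mm) GaN piezoelectric e-matrix acting on a
# biaxial in-plane strain. Coefficients (C/m^2) are approximate
# literature values, used here only as assumptions.
import numpy as np

e15, e31, e33 = -0.30, -0.49, 0.73
e = np.array([
    [0.0, 0.0, 0.0, 0.0, e15, 0.0],
    [0.0, 0.0, 0.0, e15, 0.0, 0.0],
    [e31, e31, e33, 0.0, 0.0, 0.0],
])

# Voigt strain [exx, eyy, ezz, eyz, exz, exy]: biaxial in-plane strain.
S = np.array([0.01, 0.01, -0.005, 0.0, 0.0, 0.0])
P = e @ S   # piezoelectric polarization response (C/m^2)
print(P)    # only the z-component should be nonzero
```

If the simulator returns nonzero x or y components for such a strain, that points to a coordinate-system mismatch (crystal axes vs. simulation axes) or a misloaded coefficient tensor rather than physics.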
Heat transfer problem: what mathematical calculations apply to estimate the net current yield in a (TEG HV3-based) thermoelectric panel system where the source emits 5 MJ/h at 100 °C, and cooling is achieved by blowing air at 28 °C, as depicted in the attached drawing? How would you mathematically define a function to maximize net electricity yield by controlling blower speed? What heat-exchange design tips are advisable? Thank you.
I cannot find mathematical problem-solving skills that I can use as my dependent variable. Can you please suggest mathematical problem-solving skills that could serve as the dependent variable? My research topic is "a comparison of students' mathematical problem-solving skills taught by guided practice versus a problem-solving approach".
Many of the tools I have seen and used were designed for measuring performance in a particular topic of mathematics. I am looking for a tool that can capture one's general mathematical thinking skills.
There is a new proof of the 3n+1 problem!
The paper is available.
Perhaps there is a flaw in the proof.
What is your opinion?
The Collatz conjecture (the famous 3n+1 problem):
We construct a sequence of integers starting with an integer n = a_0.
If a_j is even, the next number is a_(j+1) = a_j / 2.
If a_j is odd, the next number is a_(j+1) = 3·a_j + 1.
Example, n = 6:
6, 3, 10, 5, 16, 8, 4, 2, 1
The Collatz conjecture: for every positive starting integer, the sequence always ends in 4, 2, 1.
The MATLAB ANN toolbox provides a platform to simulate the following relationship using different training algorithms (LM, SCG, and so on):
Ypredicted = function(Xi)    (1)
Then Ypredicted is plotted against Yobserved to validate the ANN.
However, is there a way to obtain the mathematical function behind relationship (1)? I know it must be a highly complex, nonlinear function, but is there a specific way to get it?
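For a standard one-hidden-layer network (tansig hidden layer, purelin output), the fitted function is in fact explicit: y = b2 + LW·tanh(b1 + IW·x), wrapped in the toolbox's mapminmax input/output scalings; MATLAB's genFunction can also export a trained network as standalone code. A sketch with made-up placeholder weights (on the MATLAB side the stored parameters are net.IW{1,1}, net.LW{2,1}, net.b{1}, net.b{2}):

```python
# Hedged sketch: the closed-form function computed by a one-hidden-layer
# tansig/purelin network, y = b2 + LW * tanh(b1 + IW * x). The weight
# values below are placeholders, not from any trained model.
import numpy as np

IW = np.array([[0.5, -1.2], [2.0, 0.3]])  # hidden x input weights
b1 = np.array([0.1, -0.4])                # hidden biases
LW = np.array([[1.5, -0.7]])              # output x hidden weights
b2 = np.array([0.2])                      # output bias

def net(x):
    return b2 + LW @ np.tanh(b1 + IW @ x)

y = net(np.array([0.3, -0.5]))
print(y)
```

So the function is not mysterious, just long: for H hidden neurons it is a sum of H scaled tanh terms, which is why writing it out symbolically is rarely more interpretable than the weight matrices themselves.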
I think many know the idea, due to Jules Henri Poincaré, that the laws of physics can be formally rewritten as space-time curvature, or as a new geometry alone, without forces. This is because the laws of physics and the laws of geometry are only verified together in experiment, so we may arbitrarily choose either one.
Do you know any works or researchers who realized this idea? I understand that it is just speculation, as it has not been proved experimentally for any force except gravitation.
Do you know works where Newton's three laws are rewritten as pure space-time curvature, 5D space curvature, or the like, without FORCES? Kaluza-Klein theory covers only electricity.
We would like to follow the progress in students’ learning through the development of their mathematical skills and are looking thus for an appropriate classification of learners related to their achievements.
Dear all :
I need to solve the following integral (attached as an image file)
The context is on the calculation of View Factors in Radiation Heat Transfer
I worked out this expression from the general formula, working out my configuration of two bodies using cylindrical coordinates for one of the bodies and spherical coordinates for the other.
But I'm not sure whether I ended up with a well-defined integral, since I used two different coordinate systems in the same problem.
I used cylindrical coordinates for one dA and spherical coordinates for the other dA, yet both dA belong to the same integral.
Hopefully someone out there can give me some help !
Regards and Thank you !
Hi. I'm currently working on my master's thesis. I will analyse Grade 2 and 3 (elementary level) maths textbooks and teachers' guides to discover how teaching and learning materials promote students' conceptual understanding in mathematics, especially in the area of number and place-value concepts. This will be completely desk-based research.
In order to analyse the materials, should I use someone else's framework, or is it okay to create criteria of my own to judge by? I'd prefer to make my own, but I'm not sure whether I can do that.
I would appreciate any advice you could give me.
Distribution dominates policy. Resources are injected at a point and distribute across communities. Can the distribution be almost instantaneous as in the heat equation? Do the resources morph and distribute, drawing in the wave equation? Where is stability (cf. Laplace equation)? These seem basic questions of public policy. Yet, the big three rarely feature in scholarship on public policy. Why?
I would like to fit a curve to my steady-state anisotropy results in Origin, which revealed weak binding between two proteins. I tried the quadratic equation, but it turned out to be intended for strong interactions, so I would like to re-evaluate my results. I tried to find information about fits for weak binding, but could not find anything detailed.
Thank you in advance!
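One hedged option (outside Origin): when binding is weak enough that ligand depletion is negligible, anisotropy data are often fitted to a simple one-site hyperbolic isotherm instead of the tight-binding quadratic. A minimal sketch with `scipy.optimize.curve_fit`; all concentrations, parameter values, and names here are illustrative assumptions, not your data:

```python
# Hedged sketch: fitting steady-state anisotropy to a one-site hyperbolic
# isotherm, valid when [L]_free ~ [L]_total (weak binding, no depletion).
import numpy as np
from scipy.optimize import curve_fit

def hyperbolic(L, r_free, r_bound, Kd):
    # r(L) = r_free + (r_bound - r_free) * L / (Kd + L)
    return r_free + (r_bound - r_free) * L / (Kd + L)

# Synthetic example data (replace with your measured anisotropies).
L = np.array([0.0, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0])  # ligand, uM
rng = np.random.default_rng(0)
r_obs = hyperbolic(L, 0.10, 0.25, 15.0) + rng.normal(0, 0.001, L.size)

# Initial guesses for (r_free, r_bound, Kd).
popt, pcov = curve_fit(hyperbolic, L, r_obs, p0=[0.1, 0.3, 10.0])
r_free_fit, r_bound_fit, Kd_fit = popt
```

If the fitted Kd comes out comparable to or larger than your highest titration point, the plateau is poorly constrained and the Kd estimate should be treated with caution.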
Hello, I am a beginner in PARAFAC, and I am following Murphy et al. (2013) and using the "drEEM" toolbox to process my data. However, I came up against some questions while dealing with the data and applying the method, and I really hope that I can get some advice here! Thanks!
- I am running RANDINITANAL to obtain the least-squares model, but it seems that two components in my dataset often appear together, and PARAFAC doesn't always decompose them into separate components. The output of a 100-run RANDINITANAL for the 6- and 7-component models shows a 69% and 91% chance, respectively, that PARAFAC will treat them as individual components. However, the runs that didn't decompose them nearly always have smaller SSEs (though the relative difference in SSE is only about 1%) and are therefore chosen as the least-squares model. I've read that "There is no way to say, from the decomposition, whether component one is rightfully the first, second or fifth component" in the online PARAFAC tutorial "Interactive introduction to multi-way analysis in MATLAB" (Bro, 1998). What about the difficulty for PARAFAC to resolve a particular combination of components during the random initialization? Is it normal for PARAFAC to resolve a combination of components more easily but with a higher SSE? Should I simply use the "LSmodel" output from RANDINITANAL?
- Some low-signal samples are included in my dataset, and the contours of the corrected EEMs, especially those with low signal, look very fragmented. I think this is why I am getting some abruptly changing excitation and emission spectra, and I suspect these spectra are also making my model extremely difficult to validate in split-half analysis. Since I can't really distinguish the true fluorescence signal from the noise (blank subtraction is done in FDOMCORRECT), removing faulty parts with ZAP or SUBDATASET might not be suitable. I've been thinking about smoothing my dataset; however, the instructions for the function EEM_SMOOTH in the R package "staRdom" mention that smoothing is not advised in PARAFAC analysis. Are there any other options for processing this kind of low-signal sample?
- I've read that the scores (concentrations) and loadings (spectra) of a component are "only determined up to a scaling" (Andersen and Bro, 2003); for example, multiplying the excitation spectrum by 2 and dividing the emission spectrum by 2 at the same time doesn't change the contribution to the model. What about the relative magnitudes between components? Do they have any mathematical (or physical, or chemical) interpretation? I ask because the abruptly changing spectra mentioned above sometimes feature peaks (or spikes) with greater values than the spectra of other normal-looking components.
If any further explanation for my questions is needed, please let me know. If any of my questions is too basic, or there is any literature I need to read before continuing, please let me know, too.
Thanks for reading and I really appreciate your time!
While most governments try to hide the facts and manipulate statistics about COVID-19 for political, economic, or simply foolish reasons, many physicians and scientists are currently working on finding cures for COVID-19. I am curious whether there is any center/platform for drawing on experts from different areas of research in this fight.
To be clearer, let me ask this:
I work in a biomedical engineering department. My colleagues, our students, and I are familiar with optimization, data analysis, artificial intelligence, time-series analysis, modeling, control and …
I hope there might be a center which can provide some data, plus some tasks, so that we can do some real and useful research and play a part in this fight.
Just a thought: maybe a proper deep neural network could suggest the best combination of drugs according to the available history.
My question is about the direct fight; I don't mean helping with, e.g., producing masks, clothing and …
I have been receiving complaints from different universities in Germany that master's students from countries like India, Nepal, and Pakistan can't cope with the standard of mathematics, and that the majority of students struggle to pass such courses. Since Indian engineering colleges/universities are divided into IITs, NITs, government colleges, private engineering colleges and so on, the differences in the quality of education are so large that it is difficult for foreign universities to define qualification criteria in the admissions process. For example, the grades of such students look quite good on a mark sheet, but the students' performance in maths exams at a German university/FH is poor. Are there any criteria for judging the potential of students during the admissions process? Any recommendations for tackling this problem? How are other universities in Germany addressing this issue?
Different mathematical techniques are used for regionalization. For example, in different references the authors regionalize areas across a country according to different climates.
I know quantum computers have solved problems that would take an exponential amount of time on classical computers. But have they solved a problem that would take an infinite amount of time on a classical computer?
If this has happened, did it employ quantum indeterminacy?
If this hasn't happened, is there a proof that it can't happen?
I am working on parental beliefs about mathematics and its teaching and learning, and want to investigate in which ways parents support their children with their mathematics education. I am focusing on early secondary school (11-12-year-old students).
In the literature, fractal derivatives provide many physical insights and geometrical interpretations, but I am wondering where we can apply this particular derivative appropriately. Please refer me to references or examples because I am very interested to learn more about new derivatives and their applications!! I greatly appreciate all the brilliant efforts in this discussion!!
I am writing a paper assessing the unidimensionality of multiple-choice mathematics test items. The scoring of the test was based on right or wrong answers, which implies that the data are on a nominal scale. Some earlier research studies that I have consulted used exploratory factor analysis, but with my limited experience in data management, I think factor analysis may not work. Unidimensionality is one of the assumptions for dichotomously scored items in IRT. I need professional guidance, please: if possible, the software and the manual.
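One crude numerical screen (not a substitute for proper IRT software or for factor analysis on tetrachoric correlations, which are usually preferred for dichotomous items): inspect the eigenvalues of the inter-item correlation matrix. A dominant first eigenvalue suggests a single underlying trait. A minimal sketch on synthetic single-trait 0/1 data; all sizes and parameters are illustrative:

```python
import numpy as np

# Hedged sketch: eigenvalue screen for unidimensionality of 0/1 item scores.
# Phi (Pearson) correlations understate association between dichotomous
# items; tetrachoric correlations are preferred in practice, but the
# eigenvalue logic is identical.
rng = np.random.default_rng(42)
n_persons, n_items = 500, 10
ability = rng.normal(size=n_persons)            # a single latent trait
difficulty = rng.normal(size=n_items)
# Rasch-style probability of a correct answer, then simulated 0/1 scores.
p_correct = 1.0 / (1.0 + np.exp(-(ability[:, None] - difficulty[None, :])))
scores = (rng.random((n_persons, n_items)) < p_correct).astype(int)

R = np.corrcoef(scores, rowvar=False)           # item-by-item phi matrix
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]  # descending eigenvalues
ratio = eigvals[0] / eigvals[1]                 # large ratio -> one dominant factor
```

A first-to-second eigenvalue ratio well above the others (some authors use 3:1 or more as a rule of thumb) is commonly read as evidence of essential unidimensionality; dedicated IRT packages report more defensible indices.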
As applied to physics, the source is a mathematically described process and the target is one without a mathematically described process, or without one known to the student. Analogy can suggest a mathematical model to a researcher. Analogy assists the student by demonstrating that knowledge already acquired can help in understanding a new subject. Thus analogy can be both an investigative tool and a pedagogical tool. John Holland, in his book Emergence: From Chaos to Order, attributes the source-target characterization to Maxwell (p. 210), but I have thus far not been able to locate Maxwell's employment of that characterization. Maxwell spoke about analogy as a useful pedagogical tool in an 1870 address to the Mathematical and Physical Sections of the British Association, included in his collected works, volume 2, page 215. At page 219, analogy "is not only convenient for teaching science in a pleasant and easy manner, but the recognition of the formal analogy between the two systems of ideas leads to a knowledge of both, more profound than could be obtained by studying each system separately."
Do you know the origin of the source-target analogy?
Regarding the assumptions of linear regression: can I replace scatter plots with mathematical equations in my article, and claim that there is a linear relationship between two variables on the basis of those equations? I want to conserve space in my article by not presenting scatter plots. Please advise. Thank you.
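One hedged numerical alternative that can be reported in place of a scatter plot is a RESET-style check: refit the model with an added quadratic term and report how little it improves the fit. A minimal sketch on synthetic data (illustrative only; it complements rather than fully replaces visual inspection, since it only probes one kind of nonlinearity):

```python
import numpy as np

# Hedged sketch: a simple numeric linearity diagnostic. If the relation is
# linear, adding an x^2 regressor should barely improve R^2.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 200)
y = 2.0 + 0.5 * x + rng.normal(0, 0.3, x.size)   # truly linear toy data

X_lin = np.column_stack([np.ones_like(x), x])
X_quad = np.column_stack([np.ones_like(x), x, x * x])

def r_squared(X, y):
    # Ordinary least squares fit, then coefficient of determination.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    tss = (y - y.mean()) @ (y - y.mean())
    return 1.0 - (resid @ resid) / tss

gain = r_squared(X_quad, y) - r_squared(X_lin, y)  # ~0 when linear
```

Reporting the linear-model R² together with a negligible R² gain from the quadratic term is one compact, equation-based way to support a linearity claim; a formal version is Ramsey's RESET test.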
As a founder of the mathematization of dialectical logic, the author has constructed a mathematical expression for dialectical logic and proved that formal logic is only a special case of dialectical logic; see the preprint titled "Mathematical foundation for dialectical logic" in my profile. Do you accept the author's viewpoints in this field?
Generally, QSR is considered the parameter for location-based services, while end-to-end delay, number of hops, etc. are parameters for routing. Why does combining routing with a location service enhance the performance of these parameters?
There is a contradiction between the natural width of the energy transition, which is determined by the lifetime at the corresponding energy level and the spectral line width of the radiation line, which is determined by the duration of the wave train.
For example, the Mössbauer transition has a lifetime on the order of 2 years, while the interaction time at the receiving end is about 10^(-10) s.
Mathematically, this interaction is expressed by the Feynman diagram of the electron - electron interaction, which integrates over the internal photon line, which, together with the delta functions of the vertex parts, limits the photon spectrum.
By the way, the same paradox applies to any other type of collision.
So, does the electromagnetic field really exist?
Hi everyone! Greetings from Munich!
It appears in my mediation analysis that X is negatively related to M, and M is positively related to Y. I also find a significant negative indirect effect of X on Y through M. But since M is defined as a perceived benefit, I am currently struggling with the interpretation of this indirect effect.
Mathematically, of course, this indirect effect makes sense, since "(-) x (+) = (-)", but can I interpret this by saying the benefit is overridden, or is it rather that the benefit "backfires" on Y, and thus a negative indirect effect is found?
Many thanks in advance!
Is a transport model (e.g. a logit model or ...) based on statistical data and fieldwork, based merely on mathematical theories, or both?
If I want to define a model (e.g. a new model in freight transportation), what steps should I take? What kinds of data should I gather?
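A hedged sketch of why the answer is usually "both": the logit functional form comes from random-utility theory, but its coefficients only become a usable model once they are estimated from observed (field) choice data. A minimal binary mode-choice example on synthetic data, with every name and value illustrative:

```python
import numpy as np
from scipy.optimize import minimize

# Hedged sketch: binary logit mode choice. Theory supplies the form
#   P(car) = 1 / (1 + exp(-(b0 + b1 * dt))),
# where dt is the bus-minus-car travel-time difference; the data supply
# the coefficients b0, b1 via maximum likelihood.
rng = np.random.default_rng(7)
dt = rng.normal(0, 10, 1000)                  # travel-time difference, minutes
true_b = np.array([0.2, 0.15])                # "unknown" coefficients
p_car = 1.0 / (1.0 + np.exp(-(true_b[0] + true_b[1] * dt)))
choice = (rng.random(1000) < p_car).astype(float)   # 1 = chose car

def neg_log_lik(b):
    # Negative log-likelihood of the observed choices under the logit model.
    p = 1.0 / (1.0 + np.exp(-(b[0] + b[1] * dt)))
    eps = 1e-12                                # guard against log(0)
    return -np.sum(choice * np.log(p + eps) + (1 - choice) * np.log(1 - p + eps))

res = minimize(neg_log_lik, x0=np.zeros(2))
b0_hat, b1_hat = res.x
```

In practice the "synthetic" block is replaced by survey or revealed-preference data (observed mode choices plus travel times, costs, etc.), which is exactly the field-work component of model building.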
I am trying to justify the use of AT instead of UTAUT for my paper on teacher challenges faced when using technology to teach mathematics...
While many modern causal models do not seem to adhere to Laplace's demon (strict determinism), which treated error factors as merely unknown causes, they also do not always address the issue of freedom and responsibility sufficiently. While it is acknowledged that the human element (as far as intervention is concerned) might involve an exogenous factor (perhaps a "transcendent cause", in Neoplatonic terms), posing a problem for the equilibrium of an otherwise deterministic system, the models themselves might seem relevant for systems that are independent of human intervention, e.g. artificial intelligence. But that evokes ethical questions, especially regarding whether the formalism of such models can totally ignore the question of responsibility, or whether they should really be resolving it. In more practical terms, can a machine be constructed, based on a causal model, that can correctly predict and make right moral decisions for humans?
To me it is mostly a story.
There is, at the outset, a puzzle about some natural phenomena, perhaps encountered by inadvertence.
Then some other process exhibits a similar pattern. The question becomes is there some reason, perhaps based on the thermodynamics of the two systems, that connects them?
This takes the curious inquirer into a conceptual forest, or overgrown garden, path obscured, looking for a common principle. When a principle is discerned, there are more questions.
Does the pattern appear elsewhere?
Is there a more fundamental principle underlying the first principle discerned?
Does a principle, even more fundamental, connect all the different phenomena sharing a kind of pattern? Does the same pattern appear but in subtle ways in other phenomena?
Can the phenomena be modeled? What assumptions are extraneous to arriving at a common model? What is the minimal set of assumptions?
Many more paths and tangles appear.
Can the winding path so obscure at the outset be reduced to a set of logical statements that resemble in their appearance mathematical deduction? Never finally, but at least provisionally?
But first, there is a story.
How do you regard physics?
In the literature about quantization schemes, people tend to use Weyl ordering a lot.
Although it enjoys some desirable properties, like sending real functions to self-adjoint operators or Schwartz functions to trace-class operators, we know that these features are not unique to Weyl ordering.
Is there any deep reason (mathematical or physical) to prefer Weyl-ordered quantizers?
I'm attempting to create nonlinear metamaterial structures in COMSOL, and I don't know how to measure second-harmonic generation.
How do I verify that frequency x goes into the structure and generates frequency 2x?
Thanks for any help.
Why do many scientists use the term "mathematical model"?
If you have a certain phenomenon and you want to model it, you describe its (more or less approximate) behaviour by applying to it laws which can be physical, chemical, economic, geometrical and so on, depending on the phenomenon.
Mathematics is only a tool for describing these laws, so you should speak of physical, chemical, economic, geometrical ... models, and not of mathematical ones.
Most of the models I encounter in my research are physical models because, to build them up, the laws of physics are used.
Each time I hear the term mathematical model, my nose gets wrinkled.
What is your opinion ?
I want to conduct qualitative research on mathematics teaching and learning during the COVID-19 pandemic in several specific areas of Indonesia. But I don't have any idea how to start, because I'm not good at qualitative research and have little experience designing such studies. I am very open to joint research.
I'm a final-semester student in BS Mathematics, and my research interest is mathematical biology. Could you recommend the best SEIQR ODE model for stability analysis and optimal control? I want to carry out stability analysis and optimal control for our province's real data, so please recommend relevant papers.
Thank you so much.
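As a hedged starting point (one common SEIQR structure, not a claim about the "best" model for any particular province; compartments, rates, and values below are illustrative assumptions), the system can be integrated with `scipy.integrate.solve_ivp`:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hedged sketch of one common SEIQR compartmental structure:
#   S' = -beta*S*I/N
#   E' =  beta*S*I/N - sigma*E
#   I' =  sigma*E - (gamma + delta)*I      (delta = quarantine rate)
#   Q' =  delta*I - lam*Q                  (lam = release/recovery from Q)
#   R' =  gamma*I + lam*Q
def seiqr(t, y, beta, sigma, gamma, delta, lam, N):
    S, E, I, Q, R = y
    new_inf = beta * S * I / N
    return [-new_inf,
            new_inf - sigma * E,
            sigma * E - (gamma + delta) * I,
            delta * I - lam * Q,
            gamma * I + lam * Q]

N = 1_000_000
y0 = [N - 10, 0, 10, 0, 0]                      # 10 initial infectives
# Illustrative rates: beta, sigma, gamma, delta, lam (per day), then N.
params = (0.5, 1 / 5, 1 / 10, 1 / 7, 1 / 14, N)
sol = solve_ivp(seiqr, (0, 300), y0, args=params, rtol=1e-8, atol=1e-6)
final = sol.y[:, -1]                            # state at t = 300 days
```

For stability analysis the same right-hand side gives the Jacobian at the disease-free equilibrium (hence R0 = beta / (gamma + delta) for this structure), and for optimal control a time-dependent control is typically inserted into beta or delta.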
Is there any alternative topic/theory/mathematical foundation to compressed sensing (CS) theory?
CS theory came after the Nyquist criterion; is there any theory that surpasses CS theory?
I want to create an animation to insert into my math presentation (e.g. a ball hitting a wall, deforming, and bouncing back: just an example). Is there any free, easy-to-use software (preferably for Mac OS X) to do that?
Which one is the best? I know how to create some animations in MATLAB and Mathematica, but this is different: I don't want to code the whole scene as functions.
STEM was the main theme of the ASTE 2019 international conference, with at least 8 posters, 27 oral presentations, and 3 workshops promoting STEM classrooms, STEM instruction/teaching, STEM lessons, STEM summer camps, STEM clubs, and STEM schools, without providing a conceptualization or operational definition of what STEM is. Some presentations advocated the integration of the disciplines, but the examples provided were mostly "inquiry" and "engineering design" practices that in fact did not differ from the kind of poorly conceptualized, epistemologically incongruent hands-on/minds-off classroom activities.
Therefore, it is worth considering:
(1) Why do we call it STEM if it does not differ from practices applied for decades (e.g., inquiry, hands-on activities)?
(2) What benefits (if any) can this STEMification mindset/trend bring to science education and its research?
I wish to shift multiple lines or curves (up to 25) so that they are superimposed on one another. This will enable me to see clearly the points or regions where any one of the curves deviates from the others. In this procedure I also want to be able to vary or determine the region or range of superimposition (overlay) of the curves. What mathematical function or formula can enable me to do that?
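One common numerical recipe, sketched under the assumption that all curves are sampled on a common uniform x-grid: estimate each curve's shift against a chosen reference by cross-correlation, then roll it into register. Restricting the correlation to a chosen x-window gives control over the region of overlay. All names below are illustrative:

```python
import numpy as np

# Hedged sketch: horizontal alignment of sampled curves by cross-correlation.
def best_lag(y_ref, y):
    # Lag (in samples) at which y best matches y_ref; means are removed so
    # a constant vertical offset does not bias the estimate.
    y_ref = y_ref - y_ref.mean()
    y = y - y.mean()
    corr = np.correlate(y, y_ref, mode="full")
    return int(np.argmax(corr)) - (len(y_ref) - 1)

x = np.linspace(0, 10, 500)
curve_a = np.exp(-((x - 4.0) ** 2))      # reference: peak at x = 4
curve_b = np.exp(-((x - 6.0) ** 2))      # same shape, peak at x = 6

lag = best_lag(curve_a, curve_b)         # positive: curve_b sits to the right
aligned_b = np.roll(curve_b, -lag)       # shift curve_b back into register
```

With 25 curves, pick one as the reference and repeat `best_lag`/`np.roll` for each of the others; for sub-sample precision the discrete lag can be refined by parabolic interpolation of the correlation peak or by fitting on an interpolated grid.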