Science topics: Mathematics

Science topic

# Mathematics - Science topic

Mathematics, Pure and Applied Math

Questions related to Mathematics

Mathematics differs from the empirical sciences in that it draws its subject matter from structural constructions and the abstraction of quantities, while the other sciences rely on describing actual, sensory objects that already exist.

What do you think?

The subject of Computer-Aided Design (CAD) deals with the back-end mathematical calculations that happen in a 3D design.

Hello,

I am interested in the personalization of learning based on profiles, more specifically in mathematics.

Do you know any relevant references?

Thank you

Quran = Key + Message

The Key has 139 letters, 124 diacritics (harakat), and 56 letter dots.

139 + 124 = 263 = 56th prime number.

Letter diacritics and dots always existed mathematically before they were added to the Quran's Text.

Number of letters in Surat Al-Fatiha = 139

Number of diacritics on the letters of Surat Al-Fatiha = 124

Number of letter dots in Surat Al-Fatiha = 56

139 + 124 = 263, which is the 56th prime number (the number of dots)!

The Quran's diacritics and dots existed mathematically before they were added to the Quranic script.

The fact that an electron can have only discrete energy levels is obtained by solving the Schrödinger equation with boundary conditions, which is a mathematical derivation.

Physically, what makes the electron possess only certain energies?

Or is there any physical insight, explanation, or intuition that can arrive at the same conclusion (without math), namely that an electron can have only discrete energy levels inside a potential well?
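A standard illustrative case (not from the question itself) is the infinite square well: imposing the boundary conditions ψ(0) = ψ(L) = 0 on the Schrödinger-equation solutions forces a discrete spectrum:

```latex
\psi_n(x) = \sqrt{\frac{2}{L}}\,\sin\!\left(\frac{n\pi x}{L}\right),
\qquad
E_n = \frac{n^2\pi^2\hbar^2}{2mL^2}, \quad n = 1, 2, 3, \dots
```

Physically, the standing-wave picture captures the same constraint: only waves that fit a whole number of half-wavelengths between the walls survive, which is arguably the closest thing to a math-free intuition.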

Much has been said about the differences between physics and mathematics, but less attention has been paid to the differences between physics and chemistry.

The question is: where does the work of physics end and that of chemistry begin?

Given a fixed volume where the relative humidity and temperature are known, how can you estimate how much water vapor will condense for a given temperature decrease? I suspect it has to do with the dew-point temperature, but I'm having trouble finding the mathematical relations.
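You are right that it comes down to the dew point, or equivalently to saturation vapor density. Here is a sketch using the Magnus approximation for saturation vapor pressure and the ideal-gas law; the constants are the standard Magnus coefficients, and the model assumes the air ends up saturated at the lower temperature:

```python
import math

def sat_vapor_pressure(T_c):
    """Saturation vapor pressure in Pa (Magnus formula, T in deg C)."""
    return 611.2 * math.exp(17.62 * T_c / (243.12 + T_c))

def condensed_water(volume_m3, rh, T1_c, T2_c):
    """Mass of water (kg) that condenses when moist air cools from T1 to T2.

    Water vapor is treated as an ideal gas (R_v = 461.5 J/(kg K)); if the
    initial vapor density exceeds the saturation density at T2, the excess
    condenses.
    """
    Rv = 461.5
    rho1 = rh * sat_vapor_pressure(T1_c) / (Rv * (T1_c + 273.15))  # initial vapor density
    rho2_max = sat_vapor_pressure(T2_c) / (Rv * (T2_c + 273.15))   # max vapor density at T2
    return max(0.0, rho1 - rho2_max) * volume_m3

# Example: 1 m^3 of air at 25 C and 80% RH, cooled to 10 C
print(condensed_water(1.0, 0.80, 25.0, 10.0) * 1000, "g")
```

If no condensation occurs (the air stays below saturation at T2), the function returns zero, which is the dew-point criterion in disguise.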

Hi every one,

Here I have a problem in MATLAB. When I try to solve the following equation for PI in the photo (tau in the code), MATLAB gives this error:

**Warning: Unable to find explicit solution. For options, see help.** I attached the question and the code below (in the code, I rewrote PI from the photo as tau).

If you have any idea to solve this problem, analytically or numerically, I will be happy to hear it out.

*NOTE:*

*> PI_0.1(X,t) = tau*

*> X = [x(t),y(t),psi(t)]^T;*

**PROBLEM: Find tau in terms of X and t that solves the mentioned equation.**

Thanks in advance,

Arash.

**code:**

______________________________________

______________________________________

clc; clear;

% Symbolic variables: state (x, y, psi), time t, and the unknown tau
syms x y psi tau t

% Constants
c1 = 1; c2 = 1.5; lambda = 0.1;

% Reference trajectory and its derivatives
x_r(tau) = 0.8486*tau - 0.6949;
y_r(tau) = 5.866*sin(0.1257*tau + pi);
psi_r(tau) = 0.7958*sin(0.1257*tau - pi/2);
x_r_dot = 0.8486;
y_r_dot(tau) = 0.7374*cos(0.1257*tau + pi);
psi_r_dot(tau) = 0.1*cos(0.1257*tau - pi/2);

% Terms of the equation
phrase1 = c1/2*(cos(psi)*(x - x_r) + sin(psi)*(y - y_r))*(cos(psi)*x_r_dot + sin(psi)*y_r_dot);
phrase2 = c1/2*(-sin(psi)*(x - x_r) + cos(psi)*(y - y_r))*(-sin(psi)*x_r_dot + cos(psi)*y_r_dot);
phrase3 = 0.5*(psi - psi_r)*psi_r_dot;

% Full equation; we want eq == 0 solved for tau
eq = -2*(1-lambda)^2*(phrase1 + phrase2 + phrase3) - 2*lambda^2*(t - tau)

sol = solve(eq == 0, tau, 'IgnoreAnalyticConstraints', 1)

______________________________________

______________________________________
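Since `solve` cannot find a closed form, a numerical fallback is to evaluate the same expression for concrete state values, bracket a sign change, and refine the root. A sketch in Python with SciPy (the numeric constants are copied from the MATLAB code above; the sample values of x, y, psi, t are placeholders):

```python
import numpy as np
from scipy.optimize import brentq

c1, lam = 1.0, 0.1

def residual(tau, x, y, psi, t):
    """Left-hand side of the equation; a root in tau solves eq == 0."""
    x_r = 0.8486 * tau - 0.6949
    y_r = 5.866 * np.sin(0.1257 * tau + np.pi)
    psi_r = 0.7958 * np.sin(0.1257 * tau - np.pi / 2)
    x_r_dot = 0.8486
    y_r_dot = 0.7374 * np.cos(0.1257 * tau + np.pi)
    psi_r_dot = 0.1 * np.cos(0.1257 * tau - np.pi / 2)
    p1 = c1 / 2 * (np.cos(psi) * (x - x_r) + np.sin(psi) * (y - y_r)) * (np.cos(psi) * x_r_dot + np.sin(psi) * y_r_dot)
    p2 = c1 / 2 * (-np.sin(psi) * (x - x_r) + np.cos(psi) * (y - y_r)) * (-np.sin(psi) * x_r_dot + np.cos(psi) * y_r_dot)
    p3 = 0.5 * (psi - psi_r) * psi_r_dot
    return -2 * (1 - lam) ** 2 * (p1 + p2 + p3) - 2 * lam ** 2 * (t - tau)

# Scan for a sign change, then solve for tau at sample state values
x, y, psi, t = 1.0, 0.5, 0.1, 2.0
taus = np.linspace(-50, 50, 2001)
vals = [residual(tau, x, y, psi, t) for tau in taus]
for a, b, fa, fb in zip(taus, taus[1:], vals, vals[1:]):
    if fa * fb < 0:
        root = brentq(residual, a, b, args=(x, y, psi, t))
        print("tau =", root)
        break
```

In MATLAB the analogous step would be a numeric solver such as `vpasolve` or `fzero` after substituting numbers for x, y, psi, and t.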

Hello,

I am doing research on HVLD detection capability.

From your experience, is there some mathematical formula to prove that HVLD machines can detect holes regardless of size, or some other way to prove it?

Thanks in advance !

A question related to our cultural indebtedness to our mathematical forbears.

I am doing a research proposal and need answers on my topic. Information must be from 2015-2020; relevant articles, please.

Any bibliographic recommendations on the problem of routing vehicles with multiple depots and homogeneous capacities? Fewer than 10 nodes.
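Not a bibliography, but since the instances are tiny (fewer than 10 nodes), exact brute force is feasible and can serve as a baseline against any heuristic. A sketch assuming one vehicle per depot and ignoring capacity limits (the labels and coordinates are made up):

```python
from itertools import permutations
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_len(depot, order, pts):
    """Length of the closed tour depot -> order of customers -> depot."""
    route = [depot, *order, depot]
    return sum(dist(pts[a], pts[b]) for a, b in zip(route, route[1:]))

def best_route(depot, customers, pts):
    """Cheapest tour over all visiting orders (exact for small n)."""
    if not customers:
        return 0.0, ()
    best = min(permutations(customers), key=lambda o: tour_len(depot, o, pts))
    return tour_len(depot, best, pts), best

def solve_mdvrp(depots, customers, pts):
    """Exact multi-depot VRP by enumerating every customer-to-depot assignment."""
    best_cost, best_plan = float("inf"), None
    for mask in range(len(depots) ** len(customers)):
        groups, m = {d: [] for d in depots}, mask
        for c in customers:
            groups[depots[m % len(depots)]].append(c)
            m //= len(depots)
        cost = sum(best_route(d, g, pts)[0] for d, g in groups.items())
        if cost < best_cost:
            best_cost = cost
            best_plan = {d: best_route(d, g, pts)[1] for d, g in groups.items()}
    return best_cost, best_plan

pts = {"D1": (0, 0), "D2": (10, 0), "A": (1, 1), "B": (2, -1), "C": (9, 1), "E": (8, -1)}
cost, plan = solve_mdvrp(["D1", "D2"], ["A", "B", "C", "E"], pts)
print(cost, plan)
```

For the standard formulations and benchmarks, the multi-depot VRP (MDVRP) literature is the keyword to search under.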

L'Huilier's theorem, i.e., the calculation of the spherical excess of the "spherical triangle" formed between unit vectors on the unit sphere, gives the area. But how can this formula be explained from a purely plane-trigonometry standpoint (i.e., without assuming any prerequisite knowledge of spherical trigonometry)? The solid angle can be found by the rules of spherical trigonometry, and I am well aware of that. I want to introduce this problem to anyone with knowledge of plane trigonometry but no knowledge of spherical trigonometry.
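For reference, L'Huilier's theorem gives the spherical excess E (equal to the area on the unit sphere) purely from the side arcs a, b, c:

```latex
\tan\frac{E}{4}
  = \sqrt{\tan\frac{s}{2}\,\tan\frac{s-a}{2}\,\tan\frac{s-b}{2}\,\tan\frac{s-c}{2}},
\qquad s = \frac{a+b+c}{2}
```

Since a, b, c can be computed from the unit vectors with plane trigonometry alone (e.g. cos a = u·v), the formula itself is usable without spherical trigonometry; explaining *why* it holds without spherical trigonometry is the harder part of the question.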

According to a report published by UNESCO, 0.1% of the global population (in 2013) were researchers. Does anybody know the current numbers?

What are the mathematical expressions and equations used for designing the antipodal structure of an antenna?

I hope for a global overview on mathematical giftedness and its support in school and/or on an extracurricular level. What programmes/opportunities are offered?

Quantum computing is the field that focuses on quantum computation and information processing: the mathematical and physical theory behind it, the engineering required to realize circuits and algorithms in hardware, contingent issues such as the whole "compute chain" (from software engineering to quantum machine code and onward to the physical architecture), and device/hardware issues such as thermal, electro-optical, and nano-engineering.

My question is how quantum computing is related to artificial intelligence?

Can anyone suggest application(s) for the *$R_{\alpha}$, $R_{\beta}$ and $R_{m}$ functions* in the mathematical or applied sciences? These were recently introduced in the following research paper:

H. M. Srivastava et al., *A family of theta-function identities based upon combinatorial partition identities and related to Jacobi's triple-product identity*, Mathematics **8**(6) (2020), Article ID 918, 1-14.

Here I just want to know about the actual parameters for measuring the happiness of a person. With the help of these parameters, a neural network could be built and trained to achieve maximum happiness. I am also expecting some better approaches from the scholars.

Dear colleagues,

I am looking for a practical guide presenting non-parametric tests, intended for students with little or no mathematical background, with SAS or R code if possible.

Thank you.

Good research is based on good relationship between the mentor or supervisor and the scholar. What are the qualities a supervisor or mentor must have to have a healthy and friendly environment in the laboratory?

I have found a beautiful technique to solve math problems such as:

- Goldbach’s conjecture
- Riemann hypothesis

The technique uses the notions of regular languages. The complexity class that contains all the regular languages is REG. Moreover, these mathematical proofs are based on the fact that if some unary language belongs to NSPACE(S(log n)), then the binary version of that language belongs to NSPACE(S(n)), and vice versa. The complexity class NSPACE(f(n)) is the set of decision problems that can be solved by a nondeterministic Turing machine M using space f(n), where n is the length of the input.

We prove there are non-regular languages that define mathematical problems. Indeed, if those math problems are not true, then they have a finite or infinite number of counterexamples (the complement languages contain the counterexample elements). However, we know every finite language is regular. Therefore, those languages are true or they have an infinite number of counterexamples, because if they have a finite number of counterexamples, then the complement language should be in REG, that is, this complement must be a regular language. Indeed, we show some mathematical problems cannot have a finite number of counterexamples using the complexity result, that is, we demonstrate their complement languages cannot be regular. In this way, we prove these problems should be true or they have an infinite number of counterexamples as the remaining only option.

See more in my notions:

Take, for example, such a concept as a minimum flow, that is, a gradient vector field, the level surfaces of which are the minimum surfaces. Then the globally minimal flow, evolving to an absolutely minimal state, could be compared with a quantum vacuum, and the locally minimal flow could be compared with fields and particles. At the same time, it is clear that it is necessary to correctly choose the space in which this minimum flow moves.

My dear friends, I am asking whether some of your students are interested in applying for a postdoctoral position in China with me. Here are the link and details!

Hello all,

I am looking for a method, algorithm, or logic that can help determine numerically whether a function is differentiable at a given point.

To give a clearer perspective: say that while solving a fluid-flow problem using CFD, I obtain some scalar field along a line, with a graph similar to y = |x| (take the x-axis to be the line along which the scalar field is drawn, and the origin to be a grid point, say P).

So I know that at grid point P the function is not differentiable. But how can I check this numerically? I thought of using directional derivatives but couldn't decide along which directions to compare (the line in the example is just for explanation).

Ideally, when P is surrounded by 8 grid points, the field may be differentiable along certain directions and not along others. Any suggestions?
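One simple numeric test along a chosen direction: compare the left and right one-sided difference quotients at P and watch how their gap behaves as the step shrinks. For a smooth function the gap shrinks like O(h); at a kink it converges to the finite jump in the derivative. A sketch:

```python
import numpy as np

def slope_jump(f, x0, h):
    """Gap between right and left one-sided slopes of f at x0.

    Smooth function: gap ~ h * |f''(x0)|, so it vanishes as h -> 0.
    Kink (like |x| at 0): gap converges to the derivative jump (here 2).
    """
    left = (f(x0) - f(x0 - h)) / h
    right = (f(x0 + h) - f(x0)) / h
    return abs(right - left)

for h in (1e-2, 1e-4, 1e-6):
    print(h, slope_jump(np.abs, 0.0, h), slope_jump(np.square, 0.0, h))
```

On a fixed CFD grid you cannot take h to zero, but comparing the jump on the coarse grid against a refined grid (or against the local second-difference scale) plays the same role; for the 8-neighbor case, repeat the test along each of the four grid directions through P.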

Thanks

What types of board games for mathematical literacy make learning and teaching fun?

I searched a lot on Google and YouTube for a step-by-step explanation of the Finite Element Method. They all throw out a bunch of equations and mathematical terms without explaining why, or where they came from.

Would you please suggest a good book or an article that clearly explains FEM?

Thanks

Electromagnetic (EM) waves have invoked a lot of interest among scientists and engineers over centuries. And this interest seems to be on the rise, in view of new applications of EM waves being explored and developed, particularly at newer and higher frequencies.

The propagation characteristics of an EM wave depend, to a large extent, on its frequency (or wavelength). And when an EM wave interacts with an object/material, it undergoes reflection, refraction, scattering, attenuation, diffraction, and/or absorption. Each of these effects is dependent on the frequency of the EM wave, because the size of the wavelength (relative to the object/material) assumes great significance.

And due to the huge range of frequencies of EM waves employed in various applications these days, they undergo a variety of different effects. This confuses the scientific community sometimes as it is often unclear as to which effect is more dominant at what frequency.

Thus a single mathematical formula (or a small set of formulae) would be of great help if the different effects (as listed above) and their relative weights could be read off at different frequencies. This would be a great boon to young scientists and engineers, as it would simplify things, particularly for those who are mathematically minded.

By dynamical systems, I mean systems that can be modeled by ODEs.

For linear ODEs we can investigate stability via eigenvalues, and for nonlinear systems (as well as linear ones) we can use Lyapunov stability theory.

I want to know: is there any other method to investigate the stability of dynamical systems?
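Beyond those two there are, for example, the Routh-Hurwitz criterion, LaSalle's invariance principle, contraction analysis, and input-to-state stability. For the linear case, the eigenvalue test and the Lyapunov-equation test can both be checked numerically; a sketch for a damped oscillator:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])   # x' = A x, a damped oscillator

# Eigenvalue test: asymptotically stable iff all Re(lambda) < 0
stable = bool(np.all(np.linalg.eigvals(A).real < 0))

# Lyapunov test: solve A^T P + P A = -Q with Q > 0; stable iff P > 0
P = solve_continuous_lyapunov(A.T, -np.eye(2))
P_pos_def = bool(np.all(np.linalg.eigvals(P) > 0))

print(stable, P_pos_def)
```

The Lyapunov route generalizes: V(x) = xᵀPx is then a Lyapunov function, which is the bridge to the nonlinear methods mentioned above.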

Given:

1. If 𝑝𝑗 is the nearest neighbor of 𝑝𝑖, then 𝑝𝑖-𝑝𝑗 is a Delaunay edge.

2. In a 3D set of points, we know that consecutive points, i.e., 𝑝𝑖 and 𝑝𝑖+1, are nearest neighbors.

3. The 3D points do not form a straight line

Assumption:

Each Delaunay tessellation (3D) has at least 2 nearest-neighbor edges.

Is my assumption true? If not can you please explain to me the possible exceptions?

Thanks,

Pranav

Any decision-making problem when precisely formulated within the framework of mathematics is posed as an optimization problem. There are so many ways, in fact, I think infinitely many ways one can partition the set of all possible optimization problems into classes of problems.

1. I often hear people label meta-heuristic and heuristic algorithms as general algorithms (I understand what they mean). But can we apply these algorithms to arbitrary optimization problems from any class? More precisely, can we adjust/re-model any optimization problem in a way that permits us to attack it with the algorithms in question?

2. Then I thought: **if we assume that the answer to 1 is yes**, then by extending the argument we can re-formulate any given problem to be attacked by any algorithm we desire (of course, at a cost), and the claim becomes just a useless tautology.

I'm looking for different insights :)

Thanks.

I am currently studying the effect of atrophy of a muscle on the clinical outcome of joint injury. There is actually another muscle that was previously well established to have an effect on clinical outcome, and both these 2 muscles are closely related. The aim of the study was to shed some light on the previously ignored muscle to see if there is anything that can be done to help improve clinical outcomes in that aspect.

While doing univariate analysis, I wasn't sure if I should include the previously established muscle as well. When I included it in the multiple linear regression model, the initially significant primary variable became insignificant. I thought this could be due to collinearity, but the VIF value was not high enough to show significant collinearity between the two variables (GVIF^(1/(2*Df)) = 1.359987).

My question is: should these 2 variables be included in the same model if they are both highly correlated (clinically and mathematically) but were not determined to be collinear, or should they be evaluated separately?
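For what it's worth, VIF can be computed directly from its definition to double-check the GVIF output. A self-contained sketch on synthetic data (the correlation strength is made up):

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of X (n_samples x n_features).

    VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing column j
    on the remaining columns plus an intercept.
    """
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(0)
a = rng.normal(size=500)
b = 0.8 * a + 0.6 * rng.normal(size=500)   # strongly correlated with a (r ~ 0.8)
X = np.column_stack([a, b])
print(vif(X))   # around 2.8, below the usual cutoff of 5-10
```

Note that a modest VIF, as here, is consistent with your situation: two predictors can share enough variance to swap significance in a joint model without formally flagging as collinear.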

Good evening all;

We are looking for literature on the mixed integer formulation of water distribution problems using Multi objective optimization methods.

Thanks

Nasiru Abdullahi

Mathematics Department

NDA Kaduna

I know lots of composers have created works around mathematical constructs such as the Fibonacci sequence. I would like to learn if any composers have used mathematical constructs in their music to represent journeys.

Dear all,

I am trying an S-parameter (transmission) measurement using a Tektronix DSA8300 oscilloscope. Initially, the S-parameter files were generated in linear-magnitude format. Now the S-parameter transmission files are coming out of the oscilloscope in dB format; the machine settings seem to have changed.

1) Kindly suggest the appropriate settings in the Tektronix DSA8300 oscilloscope to receive the data in linear-magnitude rather than dB format.

2) Alternative mathematical ways to convert the data to linear-magnitude format would be appreciated as well.
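On point 2: for magnitude data the conversion is just the 20·log10 rule (use 10·log10 instead if your files hold power ratios). A sketch:

```python
import numpy as np

def db_to_linear_mag(s_db):
    """Convert S-parameter magnitude from dB to linear (voltage-ratio convention)."""
    return 10.0 ** (np.asarray(s_db) / 20.0)

def linear_to_db(s_lin):
    """Inverse conversion: linear magnitude back to dB."""
    return 20.0 * np.log10(np.asarray(s_lin))

print(db_to_linear_mag([-6.0206, 0.0, -20.0]))  # ~ [0.5, 1.0, 0.1]
```

So the recorded dB files lose nothing; the linear magnitudes can always be recovered in post-processing.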

best thanks

Charles Sanders Peirce regarded mathematics as “the only one of the sciences which does not concern itself to inquire what the actual facts are, but studies hypotheses exclusively” (RLT, 114). Since, by contrast, “[w]e must begin with all the prejudices which we actually have when we enter upon the study of philosophy” (CP 5.265), the presuppositionless status of mathematics makes it more primitive than anything found in philosophy. Given that phenomenology falls under philosophy (CP 1.280), we get the result that mathematics is prior to phenomenology.

Yet, Peirce also held that "every deductive inference is performed, and can only be performed, by imagining an instance in which the premises are true and *observing* by contemplation of the image that the conclusion is true" (NEM III/2, 968). We thus have two conflicting arguments:

On the one hand, one could argue that mathematics is prior to phenomenology because mathematics makes even fewer presuppositions than phenomenology.

On the other hand, one could argue that phenomenology is prior to mathematics because whatever happens during mathematical inquiry must perforce appear before (some)one.

Peirce's pronouncements notwithstanding, it is not obvious to me why the first argument should trump the second. In fact, I find considerations about the inevitability of appearing in mathematics to be decisive.

What do you think?


Hi researchers, I have a problem with the mathematical formulation of a multi-objective model for solving the RFID network planning problem. Do you have any courses, documents, or information that could help me build my mathematical model for optimizing an RFID network deployed in a body-area network? I haven't chosen the multi-objective optimization approach or algorithm yet; I am still formulating the problem mathematically.

How do I obtain the mathematical expression for the "limiting current density used to reduce Fe³⁺ (A/m²)"? That is, how do I find i(Fe) in:

i(c) = i(Cu) + i(Fe)
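As a hedged sketch (whether this model fits your cell is an assumption): in the usual Nernst diffusion-layer picture, the mass-transfer-limited current density for reducing Fe³⁺ (Fe³⁺ + e⁻ → Fe²⁺, so n = 1) is

```latex
i_{\mathrm{Fe}} = i_{\lim}
  = \frac{n\, F\, D_{\mathrm{Fe}^{3+}}\, C_{\mathrm{Fe}^{3+}}}{\delta}
```

where F is the Faraday constant, D the diffusion coefficient, C the bulk concentration, and δ the diffusion-layer thickness. Alternatively, if i(c) and i(Cu) are measured, i(Fe) follows by difference: i(Fe) = i(c) - i(Cu).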

Is there an encyclopedia of all the branching mathematical axioms, together with various ways of proving different theorems based on those axioms?

As you may know, there are different mathematical tools and techniques that we can combine or hybridize with heuristic techniques to address their entrapment in local minima and their convergence issues. I know two such techniques, chaos theory and Lévy distributions, as I have used them to increase the convergence speed of the Gravitational Search Algorithm (GSA). So, my question is: can you name and briefly explain other mathematical techniques that we can combine with optimization algorithms to make them fit for solving complex real-world problems?

Thank you.

The master Paul Erdős said, "Mathematics may not be ready for such problems."

Terence Tao recently proposed a new and advanced approach to this conjecture and concluded that "almost all orbits of the Collatz map attain almost bounded values".

The Collatz conjecture is infamous and very hard to solve.

Take any positive integer; if it is even, divide it by 2. If it is odd, multiply the number by 3 and add 1. Whatever the answer, repeat the same operations on the result.

Suppose the number is 5; then the sequence is: 5, 16, 8, 4, 2, 1, 4, 2, 1, ...

Suppose the number is 7; then the sequence is: 7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1, 4, 2, 1, ...

The conjecture has been verified by computer for numbers as big as 10^18, and it holds trivially for all powers of 2, as is easily checked: 128, 64, 32, 16, 8, 4, 2, 1, 4, 2, 1.

How does any positive integer reach some power of 2, in order to enter the loop 4, 2, 1?

We claim that any positive integer N has a special number equal to a multiple of N: when the operation 3n+1 is performed on that multiple, it yields a power of 2.

N=1 gives special multiple 5=5*1.

3*5+1=16=2^4

N=3 gives special multiple 21=3*7

3*21+1=64=2^6

N=5 gives special multiple 85=17*5

3*85+1=256=2^8

The set (1, 5, 21, 85, 341, ...) are called Collatz numbers; they are exactly the numbers of the form (4^k - 1)/3, for which 3n + 1 = 4^k.

So we can claim that the Collatz conjecture is almost solved.
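The example sequences and the claimed Collatz numbers are easy to check by computer:

```python
def collatz(n):
    """Collatz orbit of n, stopping at the first 1."""
    seq = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        seq.append(n)
    return seq

print(collatz(5))   # [5, 16, 8, 4, 2, 1]
print(collatz(7))   # [7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1]

# The numbers (4^k - 1)/3 satisfy 3n + 1 = 4^k, a power of 2
print([(4**k - 1) // 3 for k in range(1, 6)])   # [1, 5, 21, 85, 341]
```

Of course, verifying that every N has such a "special multiple" whose orbit actually passes through it is the hard, open part of the claim.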

A careful reading of The Absolute Differential Calculus by Tullio Levi-Civita (Blackie & Son Limited, 50 Old Bailey, London, 1927), together with Plato's cosmology, strongly suggests that gravity is actually real-world mathematics. In other words: is gravitation a pure experimental mathematics?

In the preprint

W.-H. Li and F. Qi,

*A further generalization of the Catalan numbers and its explicit formula and integral representation*, Authorea Preprints (2020), available online at https://doi.org/10.22541/au.159844115.58373405

I concluded two integral formulas, indicated in the picture.

(1) Do you know the existence of these two integral formulas? Please give concrete and explicit references containing these two integral formulas.

(2) Can you find direct and elementary proofs for these two integral formulas?

Why do we plot absorbance vs. wavelength although there is no direct formula between them? I also want to know whether there is any direct or indirect relation between the molar extinction coefficient and wavelength. I am trying to generate a theoretical plot of absorbance vs. wavelength for single-layer MoS2 using a Python program, so I need a mathematical formula for the calculation.
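There is in fact a direct formula: the Beer-Lambert law ties absorbance to wavelength through the wavelength dependence of the molar extinction coefficient,

```latex
A(\lambda) = \varepsilon(\lambda)\, c\, \ell
```

so at fixed concentration c and path length ℓ, an absorbance-vs-wavelength plot is just ε(λ) rescaled. The law itself does not predict the λ-dependence; for single-layer MoS2 one normally takes ε(λ) (or the complex dielectric function) from published optical data and computes A(λ) from it.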

The literature on public (and some school students') understanding of science and mathematics shows many have problems decoding relatively simple information, concepts and data such as from graphs. In the UK, and many other countries, the public have been exposed to unprecedented amounts of information, ideas, scientific findings, formulae, graphs and so on that purport to provide understanding of the global COVID-19 pandemic, so as to presumably advise on risk and guide personal decisions and influence behaviour. But what are the implications of this massive shift in communication for public understanding in general and for future science and mathematics education in schools?

FTIR technology is considered the most advanced for the detection of adulterants in milk. Is there any mathematical relation that describes the relationship between the amount of adulterant in milk and the absorbance from FTIR? Please suggest any research articles on this or related areas.

NO. No one on Earth can claim to "own the truth" -- not even the natural sciences. And mathematics has no anchor on Nature.

With physics, the elusive truth becomes the object itself, which physics trusts using the scientific method, as fairly as humanly possible and as objectively (friend and foe) as possible.

With mathematics, on the other hand, one must trust using only logic, and the most amazing thing has been how closely Nature as seen by physics (the Wirklichkeit) follows the logic as seen by mathematics (without necessarily using Wirklichkeit), and vice versa. This implies that something is true in Wirklichkeit iff (if and only if) it is logical.

Also, any true rebuffing of a "fake controversy" (i.e., fake because it was created by the reader willingly or not, and not in the data itself) risks coming across as sharply negative. Thus, rebuffing of truth-deniers leads to ...affirming truth-deniers. The semantic principle is: before facing the night, one should not counter the darkness but create light. When faced with a "stone thrown by an enemy" one should see it as a construction stone offered by a colleague.

But everyone helps. The noise defines the signal. The signal is what the noise is not. To further put the question in perspective, in terms of fault-tolerant design and CS, consensus (aka,"Byzantine agreement") is a design protocol to bring processors to agreement on a bit despite a fraction of bad processors behaving to disrupt the outcome. The disruption is modeled as noise and can come from any source --- attackers or faults, even hardware faults.

Arguing, in turn, would risk creating a fat target for bad-faith or for just misleading references, exaggerations, and pseudo-works. As we see rampant on RG, even on porous publications cited as if they were valid.

Finally, arguing may bring in the ego, which is not rational and may tend to strengthen the position of a truth-denier. Following Pascal, people tend to be convinced better by their own-found arguments, from the angle that they see (and there are many angles to every question). Pascal thought that the best way to defeat the erroneous views of others was not by facing it but by slipping in through the backdoor of their beliefs. And trust is higher as self-trust -- everyone tends to trust themselves better and faster, than to trust someone else.

What is your qualified opinion? This question considered various options and offers a NO as the best answer. Here, to be clear, "truth-denial" is to be understood as one's own "truth" -- which can be another's "falsity", or not. An impasse is created, how to best solve it?

Hello fellow scientists,

I wish to determine the dissociation constant (K_D) of a DNA polymerase binding dsDNA. I won't disclose what the DNA polymerase is because it is unpublished work. I have done some binding assays in agarose gels, but due to the poor sensitivity of the available dyes I had to visualize the relative binding stoichiometrically, and I could not simply set the protein or DNA concentration around the expected K_D.

Previous work in our lab determined a K_D = 20 nM for our DNA polymerase binding a 33-mer locked double-stranded DNA hairpin. The purpose of using something so complicated was for kinetics assays. However, I am using a 13-mer dsDNA construct because my goal is to crystallize the DNA complex, and a 33-mer is just way too large! My supervisor has advised me not to believe that the K_D is actually 20 nM for my small dsDNA construct.

I am interested in using isothermal titration calorimetry (ITC) mainly to calculate the K_D of my protein binding this 13-mer dsDNA construct. I would titrate my dsDNA into a fixed concentration of protein. I could guess that the K_D is 20 nM, but I don't actually know for sure.

I have heard that to determine the K_D you need some estimate of it, then scan ligand concentrations above and below the K_D and measure the response to get a curve of response vs. ligand concentration; the K_D is then fit mathematically, or is basically the inflection point of the binding curve.

However, that advice doesn't tell me: if the K_D is, say, 20 nM, what should the fixed concentration of my protein be? (I have appreciable amounts of 100 µM protein because I am a crystallographer, so excessive protein isn't an issue.) Over what minimum and maximum range should I scan the ligand concentrations? What if the K_D is much worse than we predicted and is actually 1 µM? What fixed concentration of protein should I use then, and what min and max ligand concentrations?

Is there a way to measure the K_D with a certain fixed concentration of protein and a huge range of ligand concentrations, regardless of whether the K_D is 20 nM or 1 µM? Is that possible?

The accepted practice is to validate a model used in any study. What validation methods are available, and how would we confirm one superior to the others?

My question is: is there any mathematical or empirical way to prove the following? Given a dataset containing noisy signals y(t) [Y = X + N] and another dataset containing noise N, and we want a generator to produce clean signals X̂: how can one prove, other than by experiments, that the generator will be able to generate clean signals from a random noise vector z?

I have a query regarding data transformation if anyone can provide any guidance please?

I was wondering if, generally, it is possible to transform a variable's raw data twice, using 2 different methods, for the purpose of 2 different tests? I will provide you with a little background to my study first. I have a variable for 'Adverse Childhood Experiences' containing 1 score per participant. N = 113; however, 65 of these are 0 values and 3 are missing data - which I believe is disrupting my data considerably. I understand that it is not advised to simply remove the cases that read 0 just because there are many (however, if you recommend otherwise please let me know if so and why).

Useful to note here is that this variable has a skewness of 1.943, and because of this, I have made the decision to transform it.

I am carrying out a path analysis with 1 IV, one DV and 2 mediators. In the first instance I am carrying out a t-test (IV - gender, DV - ACE score) and then in the second instance I am carrying out a linear regression (IV age, DV - ACE score), to understand whether age and gender need to be included in my path analysis as covariates. In order to meet the assumptions of the t-test (namely, normal distribution across both levels of the IV: male and female) I have transformed the raw ACE data this using Tukey's formula, which brought the skewness to < 1 for each IV level - great. But then when I go to carry out the linear regression, and aim to meet the assumption of approx. normal distribution of residuals, the assumption is not met on the Tukey transformed ACE data. I have carried out a number of other transformations on the raw ACE data and the only one where the residuals are normally distributed for the regression is through a Log10 transformation.

My question is this: am I able to carry out the t-test with the Tukey-transformed variable data, and then the linear regression with the Log10-transformed data? Or is it the case that I need to use the same transformed data for each stage of the analysis (i.e., both Tukey or both Log10 for the t-test and linear regression, and then the same onward path analyses)?

If it is the case that I will need to use the Log10 ACE data to go back and carry out the gender t-test, it is useful to note here that I have done this already and when inspecting the Log10 transformed ACE data across the gender variable descriptives table the results come out very strange - for example, N for males goes down from 15 to 6, and N for females goes down from 115 to 59, and there are outliers, where there were none in the Tukey transformed data descriptives, so it is confusing me a little.
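As an aside on mechanics (not on the two-transform question itself): with many zero scores, log10(x + 1) is a common way to keep zeros defined while reducing right skew. A toy sketch with made-up ACE-like data:

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(1)
# Toy stand-in for an ACE-like score: many zeros plus a right-skewed tail
ace = np.concatenate([np.zeros(65), rng.poisson(3, size=45) + 1.0])

s_raw = skew(ace)
s_log = skew(np.log10(ace + 1.0))   # the +1 keeps the 0 scores defined
print("raw skew:", round(s_raw, 3), "| log10(x+1) skew:", round(s_log, 3))
```

If the drop in N you observed after transforming came from log10 of zeros producing missing values, the +1 offset is the usual fix; the descriptives should then retain every case.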

Any guidance welcome!

Thank you

In mathematical ecology, a recent trend is predator-induced fear in prey, an indirect effect of the predator on the prey. My question is: how would the prey population be afraid of an infected predator? Is an infected predator capable of inducing the same level of fear as a healthy one? Any efforts in this direction will be appreciated.

Hi all,

I am dealing with data with several features, and many of them are highly correlated with each other as well as with the dependent variable.

In my research on this topic, I found that multicollinearity is harmful for regression problems and may not yield a good model. I got the suggestion that if features are highly correlated, we should remove them using the VIF criterion.

But logically, when I think of removing correlated features from my analysis, how can I expect a better model when I am not considering all the available information?

Is there any logical or mathematical explanation available for this question?

Also, I think each feature is somehow related to the others (maybe nonlinearly); in that case, do we have a problem of multicollinearity?

The new Education Act (LOMLOE) that is now being prepared in Spain intends to make Mathematics an optional subject. The Mathematics Institute has issued a manifest that argues about the importance of Mathematics in society, and in favour of keeping Mathematics as a compulsory subject in high school. If you agree with this, please sign the manifest at the link below (the manifest is in Spanish; I don't remember if there is an English version):

There is also a petition at change.org:

Thank you very much in advance.

Hebert

In the midst of, or after, Covid-19: any suggestions or articles on how best to implement blended teaching to optimize the teaching and learning of mathematics?

I need a documented answer with a mathematical derivation, please.

Hello

For typical dose-response assays, our lab usually uses steady-state intervals to define the difference between control and tested compound. For those assays, we typically use the angular coefficient or the end-point value of a given curve (within steady state) to estimate the percentage of inhibition, or even kinetic constants.

Now we are working with an enzyme with a strongly sigmoidal time-course reaction (Hill n = 3). How can I mathematically compare curves between control and inhibited reactions, or calculate constants?

If anyone could point me to a good theoretical reference or literature examples, I would be very thankful.
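One common approach is to fit each time course to a Hill-type progress curve and compare the fitted parameters (amplitude, half-time, Hill n) between control and inhibited reactions. A sketch on synthetic data (the model and parameter values are illustrative, not your enzyme's):

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(t, vmax, k, n):
    """Sigmoidal progress curve: signal vs time with Hill-type shape."""
    return vmax * t**n / (k**n + t**n)

# Synthetic 'control' time course with n = 3, plus measurement noise
t = np.linspace(0.5, 60, 40)
rng = np.random.default_rng(0)
y = hill(t, 100.0, 15.0, 3.0) + rng.normal(0, 1.0, t.size)

popt, pcov = curve_fit(hill, t, y, p0=[90, 10, 2])
vmax, k, n = popt
print(f"vmax={vmax:.1f}, half-time={k:.1f}, hill n={n:.2f}")
```

Fitting the inhibited curve the same way lets you compare, e.g., the half-times or amplitudes with their standard errors (from `pcov`), rather than relying on a steady-state window that sigmoidal kinetics may not provide.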

Stay all safe

Many informal settlements have insufficient capacity to forecast, check, handle and reduce disaster risk. These communities face a growing range of challenges including economic hardship, technological and social impediments, urbanisation, under-development, wildfire, climate change, flooding, drought, geological hazards and the impact of epidemics such as HIV/AIDS and COVID-19, sometimes termed ‘the burden of disease’. The inability of these communities to withstand adversities affects the sustainability of initiatives to develop them.

This is a question I would have asked during my masters degree research on Resilience in Disasters. I would like to know the opinions of other researchers as I would like to properly answer this question in a different research-related topic.

I am studying mass-spring-damper systems with Coulomb friction. There are multiple discussions on simulating such systems using numerical methods and the problems that arise due to the discontinuous excitation, but I wanted to know whether an analytical solution exists. To be mathematically precise, I am trying to solve the following analytically.

m*(d2x/dt2) + c*(dx/dt) + k*x = F*sign(dx/dt)

where the sign function is defined as:

sign(var) = 0 if var = 0

sign(var) = 1 if var > 0

sign(var) = -1 if var < 0

Note: I am aware that such systems can be treated as piecewise-linear systems, but I want to know whether a general solution exists that solves the problem without breaking it into a number of sub-problems.
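To my knowledge there is no closed-form solution covering all phases at once; the standard analytical route is exactly the piecewise one mentioned above. As a numerical baseline for checking any candidate solution, here is a sketch with `solve_ivp` using the equation exactly as posed (all parameter values hypothetical; note that physical Coulomb friction usually enters with the opposite sign, as -F*sign(x')):

```python
import numpy as np
from scipy.integrate import solve_ivp

# hypothetical parameters
m, c, k, F = 1.0, 0.1, 4.0, 0.5

def rhs(t, s):
    x, v = s
    # m*x'' + c*x' + k*x = F*sign(x'), as posed in the question
    # (np.sign(0) == 0, matching the stated definition of sign)
    return [v, (F * np.sign(v) - c * v - k * x) / m]

# small max_step so the solver does not step blindly over sign changes
sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0], max_step=1e-2, rtol=1e-8)
print(sol.y[0, -1])  # displacement at t = 20
```

An event-based integration (stopping exactly where x' = 0 and restarting) is the cleaner numerical analogue of the piecewise analytical treatment.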

I am interested in solving a mathematical problem (MILP) using evolutionary algorithms but, as a beginner in programming, I am confused about which one to choose. Please suggest an algorithm that is easy to implement and gives good results.

Thanks
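For a beginner, SciPy's differential evolution is probably the easiest population-based method to try: its `integrality` option (SciPy >= 1.9) handles integer variables, and constraints can be folded in as penalties. A sketch on a hypothetical toy MILP (maximize 3x + 2y subject to x + y <= 4 with integer x, y in [0, 4]):

```python
import numpy as np
from scipy.optimize import differential_evolution

def objective(v):
    """Penalized minimization form of the toy MILP (maximization negated)."""
    x, y = v
    penalty = 1e3 * max(0.0, x + y - 4)  # penalty for violating x + y <= 4
    return -(3 * x + 2 * y) + penalty

res = differential_evolution(
    objective,
    bounds=[(0, 4), (0, 4)],
    integrality=[True, True],  # restrict both variables to integers
    seed=1,
)
print(res.x, -res.fun)  # expected optimum: x=4, y=0, objective value 12
```

For large or tightly constrained MILPs, dedicated solvers (branch-and-bound based, e.g. via `scipy.optimize.milp` or PuLP) will usually outperform any evolutionary method; evolutionary algorithms are most useful when the objective is nonlinear or black-box.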

What are the mathematical equations used to assess the environmental impact using some biological criteria in green algae?

Quadratic equations with complex roots are considered unsolvable in secondary schools. This limitation is due to the absence of a topic addressing complex numbers in the Nigerian secondary school mathematics curriculum.

Is it okay to introduce the idea of complex numbers so as to enable students to solve a wider range of questions?

This question was raised by a student I coach when I told him that some quadratic equations have no solutions in the realm of real numbers!
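For illustration, the familiar quadratic formula extends unchanged once the square root is taken over the complex numbers; Python's `cmath` module shows this directly:

```python
import cmath

def quadratic_roots(a, b, c):
    """Both roots of a*x^2 + b*x + c = 0, complex roots included."""
    d = cmath.sqrt(b * b - 4 * a * c)  # complex square root of the discriminant
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

# x^2 + 2x + 5 = 0 has discriminant -16, so no real roots
r1, r2 = quadratic_roots(1, 2, 5)
print(r1, r2)  # the conjugate pair -1 + 2j and -1 - 2j
```

The point for students is that the formula they already know keeps working; only the number system is enlarged.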

I understand that we can produce that number in MATLAB by evaluating exp(1), or possibly using exp(sym(1)) for the exact representation. But e is a very common constant in mathematics, as important as pi to some scholars, so after all these versions of MATLAB, why haven't they recognized this valuable constant and shown some appreciation by defining it as an individual constant, rather than requiring the exp function?

UPDATE: The values of the variables that I am currently concerned with are:

a~65

V~3.887

While trying to solve a circuit equation, I stumbled onto a type of Liénard equation, but I am unable to solve it analytically.

x'' + a(x-1)x' + x = V ------------------------- (1)

where the dash (') represents differentiation with respect to time (t).

With the substitution y = x - V and w(y) = y', it is converted into the first-order equation

w*w' + a(y+V-1)w + y = 0 ---------------------- (2)

where the dash (') now represents differentiation with respect to y.

If I substitute z = ∫ -a(y+V-1) dy, the equation is converted into an Abel equation of the second kind:

w*w' - w = f(z) -------------------- (3), with differentiation now with respect to z.

It gets more and more complicated.

I would like to solve equation (1) by some other method, or by continuing the method I started. Kindly help in solving this.

Thank you for your time.
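Since the Abel form rarely admits a closed-form solution, a practical fallback is to integrate equation (1) numerically. Note that with a ≈ 65 the system is stiff (the damping term a(x-1)x' is large whenever x is away from 1), so an implicit solver is advisable. A sketch with the values quoted above, starting from rest (initial conditions assumed):

```python
import numpy as np
from scipy.integrate import solve_ivp

a, V = 65.0, 3.887  # values from the question

def lienard(t, s):
    """State s = (x, x'); rewrites x'' + a*(x-1)*x' + x = V as a first-order system."""
    x, v = s
    return [v, V - x - a * (x - 1.0) * v]

# stiff for large a, so use the implicit Radau method; long horizon because the
# slow relaxation toward the equilibrium x = V has time constant ~ a*(V-1)
sol = solve_ivp(lienard, (0.0, 2000.0), [0.0, 0.0], method="Radau",
                rtol=1e-8, atol=1e-10)
print(sol.y[0, -1])  # should approach the equilibrium x = V
```

Since the equilibrium x = V lies in the region x > 1 where the damping coefficient a(x-1) is large and positive, the trajectory relaxes to x = V rather than oscillating, which a linearization about x = V confirms (u'' + a(V-1)u' + u = 0 is heavily overdamped).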

I am interested in including the inverse piezoelectric effect in my GaN HEMT simulation. Sentaurus Device provides a special feature that allows me to update the stress field by invoking the mechanics solver (Sentaurus Interconnect), but I don't have confidence in the results I got, because from the mathematical point of view, solving the inverse piezoelectric effect is just a simple matrix multiplication (AB = C). However, the final matrix C I got was very strange: some components of C should be zero, but they are not. I was wondering whether anyone has encountered the same situation.
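For a sanity check against the solver output, the converse piezoelectric strain can be computed by hand as S = dᵀE. For wurtzite (6mm) symmetry such as GaN, the d matrix has a fixed zero pattern in Voigt notation, so a purely c-axis field must produce only normal strains (S1, S2, S3). A sketch for checking that zero pattern; the coefficient magnitudes below are rough placeholders, not calibrated GaN values:

```python
import numpy as np

# Piezoelectric strain matrix for wurtzite (6mm) symmetry, Voigt notation,
# units pm/V -- magnitudes are hypothetical placeholders.
d = np.array([
    [0.0,  0.0,  0.0, 0.0, 3.1, 0.0],   # d15 couples E1 to shear strain S5
    [0.0,  0.0,  0.0, 3.1, 0.0, 0.0],   # d24 = d15
    [-1.6, -1.6, 3.1, 0.0, 0.0, 0.0],   # d31, d32 (= d31), d33
])
E = np.array([0.0, 0.0, 1.0e6])          # field along the c-axis, V/m

strain = d.T @ E * 1e-12                 # converse effect: S = d^T E (pm/V -> m/V)
print(strain)  # only S1, S2, S3 should be non-zero for a c-axis field
```

If the solver's strain output has non-zero shear components for a c-axis field, either the crystal orientation in the setup is rotated relative to the simulation axes (in which case the non-zeros are a coordinate-transformation effect, not an error) or the material matrix is mis-specified.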

Heat transfer problem: what mathematical calculations apply to estimate the net current yield in a (TEG HV3-based) thermoelectric panel system where the source emits 5 MJ/h at 100 °C and cooling is achieved by blowing air at 28 °C, as depicted in the attached drawing? How would you mathematically define a function to maximize net electricity yield by controlling blower speed? What heat-exchanger design tips are advisable? Thank you.
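One way to frame the blower-speed question is as a scalar optimization: net yield = TEG electrical output minus blower power, both modelled as functions of airflow. The sketch below uses entirely hypothetical model coefficients and functional forms; the real ones must come from the TEG HV3 datasheet and the measured heat-exchanger characteristics:

```python
from scipy.optimize import minimize_scalar

T_hot = 100.0   # source side, deg C (from the question)
T_amb = 28.0    # cooling air, deg C (from the question)

def cold_side_temp(flow):
    """Cold-plate temperature drops toward ambient as airflow rises (assumed model)."""
    return T_amb + (T_hot - T_amb) * 0.5 / (1.0 + 2.0 * flow)

def teg_power(flow):
    """TEG output assumed proportional to dT^2 (typical Seebeck-module behaviour)."""
    dT = T_hot - cold_side_temp(flow)
    return 0.005 * dT**2

def blower_power(flow):
    """Fan power grows roughly with the cube of the flow rate (fan affinity laws)."""
    return 0.8 * flow**3

# maximize net power = TEG output - blower draw over the feasible flow range
res = minimize_scalar(lambda q: -(teg_power(q) - blower_power(q)),
                      bounds=(0.0, 5.0), method="bounded")
print(f"optimal flow = {res.x:.3f}, net power = {-res.fun:.2f} W")
```

The qualitative structure is general: TEG output rises with airflow but saturates, while blower draw grows roughly cubically, so the net-yield curve has an interior maximum worth tracking with a simple controller.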

I cannot find mathematical problem-solving skills that I can call my dependent variable. Can you please suggest mathematical problem-solving skills that could serve as a dependent variable? My research topic is "a comparison of students' mathematical problem-solving skills taught by guided practice and by a problem-solving approach".

Many of the tools I saw and used were designed for measuring performance in a particular topic of mathematics. I am looking for a tool that can capture one's general mathematical thinking skills.

Hello,

there is a new proof of the 3n+1 problem!

The paper is available.

Perhaps there is a flaw in the proof.

What is your opinion?

The Collatz conjecture (the famous 3n+1 problem):

we construct a sequence of integers starting with the integer n = a_0.

If a_j is even, the next number is a_(j+1) = a_j/2.

If a_j is odd, the next number is a_(j+1) = 3*a_j + 1.

Example n = 6:

6, 3, 10, 5, 16, 8, 4, 2, 1

The Collatz conjecture: the sequence for every positive starting integer always ends in the cycle 4, 2, 1.
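For readers who want to experiment, the iteration is only a few lines of Python:

```python
def collatz(n):
    """Collatz sequence from a positive integer n down to 1."""
    seq = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1  # halve if even, else 3n + 1
        seq.append(n)
    return seq

print(collatz(6))  # [6, 3, 10, 5, 16, 8, 4, 2, 1], matching the example above
```

Of course, such experiments only verify individual starting values; the conjecture itself concerns all positive integers and remains unproved.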

MATLAB's ANN toolbox provides a platform to simulate the following relationship using different algorithms (LM, SCG, and so on):

**Y**predicted = function(**X**i) (1)

Then a plot of **Y**predicted against **Y**observed validates the ANN work. However, is there a way to obtain the mathematical function behind relationship (1)? I know it has to be a highly complex and nonlinear function, but is there any specific way to get that function?
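One answer: the trained network itself is the explicit function, namely a composition of the layers' affine maps and activation functions. With the weights exported from MATLAB (net.IW, net.LW, net.b), the formula for a one-hidden-layer tansig/purelin network can be written out directly. A sketch with hypothetical weights:

```python
import numpy as np

# Hypothetical weights of a one-hidden-layer network (2 inputs, 3 tansig hidden
# units, 1 linear output) -- in MATLAB these come from net.IW, net.LW and net.b.
W1 = np.array([[0.5, -1.2], [2.0, 0.3], [-0.7, 0.9]])
b1 = np.array([0.1, -0.4, 0.2])
W2 = np.array([[1.5, -0.8, 0.6]])
b2 = np.array([0.05])

def predict(x):
    """Explicit closed form: y = W2 * tanh(W1 * x + b1) + b2."""
    return W2 @ np.tanh(W1 @ x + b1) + b2

print(predict(np.array([1.0, 2.0])))
```

Note that MATLAB's toolbox also applies input/output normalization by default (mapminmax), so the complete written-out formula includes those affine pre- and post-transformations as well. The formula is exact but, as you suspect, rarely interpretable for large networks.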

I want to relate real-time applications to a newly defined topology that I have named Tiny Topology.

Dear Sirs,

I think many know the idea, due to Jules Henri Poincaré, that the laws of physics can be formally rewritten as space-time curvature, or as a new geometry alone, without forces. This is because the laws of physics and the laws of geometry are only verified together in experiment, so we can arbitrarily choose one of them.

Do you know any works or researchers who realized this idea? I understand that it is just speculation, as it has not been proved experimentally for any force except gravitation.

Do you know works where Newton's three laws are rewritten as pure space-time curvature, 5D space curvature, or the like, without FORCES? Kaluza-Klein theory covers only electromagnetism.

I realized that students do not understand the integral conceptually.

We would like to follow the progress of students' learning through the development of their mathematical skills and are thus looking for an appropriate classification of learners related to their achievements.

Perhaps one of the hardest subjects in distance-education settings is mathematics. Without an iPad, everything is very hard. Do you have suggestions for online question-solving platforms that I can use in distance-education courses?

Dear all:

I need to solve the following integral (attached as an image file). The context is the calculation of view factors in radiation heat transfer.

I worked out this expression from the general formula, working out my configuration of two bodies in cylindrical coordinates (for one of the bodies) and spherical coordinates (for the other).

But I'm not sure whether I ended up with a well-defined integral, since I used two different coordinate systems in the same problem.

I used cylindrical coordinates for one dA and spherical coordinates for the other dA; however, both dA are part of the same integral.

Hopefully someone out there can give me some help!

Regards, and thank you!

Hi. I'm currently working on my master's thesis. I will analyse Grade 2 and Grade 3 (elementary level) maths textbooks and teachers' guides to discover how teaching and learning materials promote students' conceptual understanding in mathematics, especially in the area of number and place-value concepts. This will be completely desk-based research.

In order to analyse the materials, should I use someone else's framework, or is it okay to create my own criteria for judging? I'd prefer to make my own, but I'm not sure whether I can do that.

I would very much appreciate any advice you could give me.

Distribution dominates policy. Resources are injected at a point and distributed across communities. Can the distribution be almost instantaneous, as in the heat equation? Do the resources morph as they distribute, as in the wave equation? Where is stability (cf. the Laplace equation)? These seem like basic questions of public policy, yet the big three equations rarely feature in scholarship on public policy. Why?
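As a toy illustration of the first of these questions, a point injection of a resource under the heat equation spreads smoothly while the total is conserved. A minimal finite-difference sketch (all quantities dimensionless and hypothetical, purely to make the diffusion analogy concrete):

```python
import numpy as np

# Explicit finite-difference scheme for the 1D heat equation u_t = D * u_xx.
n, D, dx, dt = 101, 1.0, 1.0, 0.2   # dt <= dx^2 / (2D) for stability
u = np.zeros(n)
u[n // 2] = 100.0                    # resource injected at a single point

for _ in range(500):
    # Jacobi-style update of the interior; boundaries held at zero
    u[1:-1] += D * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

print(u.max(), u.sum())  # the peak flattens while the total stays (nearly) conserved
```

The wave-equation analogue would transport the injection outward as fronts rather than smearing it, and the Laplace equation describes the eventual steady state; the same three behaviours one might look for in resource distribution.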

I would like to fit a curve to my steady-state anisotropy results in Origin, which revealed weak binding between two proteins. I tried a quadratic equation, but it turned out that it is used for strong interactions, so I would like to re-evaluate my results. I tried to find information about fits for weak binding but could not find detailed information.

Thank you in advance!
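When binding is weak (Kd much larger than the concentration of the fixed, labelled partner), the quadratic tight-binding equation reduces to the simple hyperbolic 1:1 model, which can be fitted directly in Origin or in code. A sketch with hypothetical titration data (parameter names and values are illustrative only):

```python
import numpy as np
from scipy.optimize import curve_fit

def binding_1to1(L, r0, rmax, Kd):
    """Hyperbolic 1:1 model; valid when Kd >> fixed-partner concentration."""
    return r0 + (rmax - r0) * L / (Kd + L)

# hypothetical titration: ligand concentration (uM) vs steady-state anisotropy
L = np.array([0.5, 1, 2, 5, 10, 20, 50, 100, 200])
r = np.array([0.052, 0.053, 0.056, 0.064, 0.075, 0.090, 0.112, 0.127, 0.137])

popt, _ = curve_fit(binding_1to1, L, r, p0=[0.05, 0.15, 20.0])
r0, rmax, Kd = popt
print(f"Kd = {Kd:.1f} uM")
```

One caveat for weak binders: if the titration cannot reach saturation, rmax and Kd become strongly correlated in the fit, so the reported Kd should be treated as a lower-confidence estimate unless the upper plateau is at least approached.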

Hello, I am a beginner in PARAFAC, and I am following Murphy et al. (2013) and using “drEEM” toolbox to process my data. However, I came up against some questions while dealing with the data and applying the method, and I really hope that I can get some advice from here! Thanks!

- I am running RANDINITANAL to obtain the least-squares model, but it seems that two components in my dataset often appear together, and PARAFAC doesn't always decompose them into separate components. The output of a 100-run RANDINITANAL for the 6- and 7-component models shows a 69% and 91% chance, respectively, that PARAFAC will treat them as individual components. However, the runs that didn't decompose them nearly always have smaller SSEs (though the relative difference in SSE is only about 1%) and so will be chosen as the least-squares model. I've read that "There is no way to say, from the decomposition, whether component one is rightfully the first, second or fifth component" in the online PARAFAC tutorial "Interactive introduction to multi-way analysis in MATLAB" (Bro, 1998). What about the difficulty for PARAFAC of resolving a particular combination of components during the random initialization? Is it normal for PARAFAC to resolve one combination of components more easily but with a higher SSE? Should I simply use the "LSmodel" output from RANDINITANAL?
- Some low-signal samples are included in my dataset, and the contours of the corrected EEMs, especially those with low signal, look very fragmented. I think this is why I am getting some abruptly changing excitation and emission spectra, and I think these abrupt spectra are also making my model extremely difficult to validate in split-half analysis. Since I can't really distinguish the true fluorescence signal from the noise (blank subtraction is done in FDOMCORRECT), removing faulty parts using ZAP or SUBDATASET might not be suitable. I've been thinking about smoothing my dataset; however, the instructions for the function EEM_SMOOTH in the R package "staRdom" mention that smoothing is not advised in PARAFAC analysis. I'm wondering whether there are any other options for processing this kind of low-signal sample?
- I've read that the scores (concentrations) and loadings (spectra) of a component are "only determined up to a scaling" (Andersen and Bro, 2003); for example, multiplying the excitation spectrum by 2 and dividing the emission spectrum by 2 at the same time doesn't change the contribution to the model. What about the relative magnitudes between components? Do they have any mathematical (or physical, or chemical, perhaps?) interpretation? I am asking because the abruptly changing spectra mentioned above sometimes feature peaks (or spikes) with greater values than the spectra of other normal-looking components.

If any further explanation of my questions is needed, please let me know. If any of my questions is too basic, or there is any literature I need to read before continuing, please let me know too.

Thanks for reading and I really appreciate your time!

Is there any criterion for evaluation or content analysis of early-childhood mathematics textbooks?

While most governments try to hide the facts and manipulate statistics about COVID-19 for political, economic, or other reasons, many physicians and scientists are currently working on finding cures for COVID-19. I am curious whether there is any center/platform that draws on experts from different areas of research in this fight.

To be clearer, let me ask this:

I work in a biomedical engineering department. My colleagues, our students, and I are familiar with optimization, data analysis, artificial intelligence, time-series analysis, modeling, control, and more.

I hope there might be a center which can provide some data, plus some tasks, so we can do some real and useful research and have a share in this fight.

Just a thought: maybe a proper deep neural network could suggest the best combination of drugs according to the available history.

-----------------------

P.S.

My question is about the direct fight. I don't mean helping by, e.g., producing masks, clothing, and so on.

I have been receiving complaints from different universities in Germany that master's students from countries like India, Nepal, and Pakistan can't cope with the standard of mathematics, and that the majority of students struggle to pass such courses. Since Indian engineering colleges/universities are divided into IITs, NITs, government colleges, private engineering colleges, and so on, the differences in the quality of education are so large that it is difficult for foreign universities to define qualification criteria in the admission process. E.g., the grades of such students look pretty good on a mark sheet, but the students' performance in maths exams at a German university/FH is poor. Are there any criteria for assessing the potential of students during the admission process? Any recommendations for tackling this problem? How are other universities in Germany addressing this issue?

Different mathematical techniques are being used for regionalization. For example, in different references the authors regionalize areas across a country under different climates.

Is Euclid's Elements still worth studying, or has modern geometry progressed far enough to render the text a historical work and nothing more?

I am currently doing a research project on this topic. Any suggestions on academic articles and research papers are welcome.

I know quantum computers have solved problems that would take an *exponential* amount of time on classical computers. But have they solved a problem that would take an *infinite* amount of time on a classical computer? If this has happened, did it employ quantum indeterminacy?

If this hasn't happened, is there a proof that it can't happen?

I am working on parental beliefs about mathematics and its teaching and learning, and want to investigate the ways in which parents support their children with their mathematics education. I am focusing on early secondary school (11-12-year-old students).

In the literature, fractal derivatives provide many physical insights and geometric interpretations, but I am wondering where this particular derivative can be applied appropriately. Please refer me to references or examples, because I am very interested in learning more about new derivatives and their applications! I greatly appreciate all the brilliant efforts in this discussion!

I am writing a paper assessing the unidimensionality of multiple-choice mathematics test items. The scoring of the test was based on right or wrong answers, which implies that the data are on a nominal scale. Some earlier research studies that I have consulted used exploratory factor analysis, but with my little experience in data management, I think factor analysis may not work. Unidimensionality is one of the assumptions for dichotomously scored items in IRT. Please, I need professional guidance, and if possible the software and the manual.
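One rough, software-light check is to examine the eigenvalues of the inter-item correlation matrix: a single dominant first eigenvalue is consistent with unidimensionality. (For dichotomous items, tetrachoric rather than phi correlations are the more rigorous input; the R package psych provides them.) A sketch on simulated data, where the responses are driven by a single latent ability by construction:

```python
import numpy as np

# Simulated 0/1 score matrix: 200 examinees x 10 items driven by one ability
# (a 1PL-style response model; all values hypothetical).
rng = np.random.default_rng(42)
ability = rng.normal(size=(200, 1))
difficulty = rng.normal(size=(1, 10))
prob = 1.0 / (1.0 + np.exp(-(ability - difficulty)))   # P(correct | ability)
scores = (rng.random((200, 10)) < prob).astype(int)

# Eigenvalues of the 10x10 inter-item (phi) correlation matrix, sorted descending.
eig = np.sort(np.linalg.eigvalsh(np.corrcoef(scores.T)))[::-1]
print(eig[0] / eig[1])  # a ratio well above 1 suggests one dominant dimension
```

For the formal analysis, dedicated IRT/dimensionality software (e.g. the R packages mirt or psych) implements the standard procedures with documentation and manuals.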

As applied to physics, the source is a mathematically described process and the target is one without a mathematically described process, or without one known to the student. Analogy can suggest a mathematical model to a researcher. Analogy assists the student by demonstrating that knowledge already acquired can help in understanding a new subject. Thus analogy can be both an investigative tool and a pedagogical tool. John Holland, in his book Emergence: From Chaos to Order, attributes the source-target characterization to Maxwell (p. 210), but I have not thus far been able to locate Maxwell's use of that characterization. Maxwell spoke about analogy as a useful pedagogical tool in an 1870 address to the Mathematical and Physical Sections of the British Association, included in his collected works, volume 2, page 215. At page 219: analogy 'is not only convenient for teaching science in a pleasant and easy manner, but the recognition of the formal analogy between the two systems of ideas leads to a knowledge of both, more profound than could be obtained by studying each system separately.'

Do you know the origin of the source-target analogy?

Dear researchers,

Regarding the assumptions of linear regression: can I replace scatter plots with mathematical equations in my article and claim that there is a linear relationship between two variables on the basis of the equations? I want to conserve space in my article by not presenting scatter plots. Please advise. Thank you.

I want to construct a model for the world COVID-19 data.