

# Applied Mathematics - Science topic

Applied Mathematics is a common forum for all branches of applicable mathematics.

Questions related to Applied Mathematics

I am interested in comparing two time-varying correlation series. Is there a statistically appropriate method for making this comparison?

Thank You
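One minimal sketch of a comparison (illustrative only; it assumes the two series are rolling correlations of equal length, and the placeholder data below are made up): apply the Fisher z-transform so the values are roughly normal, then compare the transformed series, keeping in mind that rolling correlations are autocorrelated.

```r
# Minimal sketch: compare two time-varying correlation series r1 and r2.
set.seed(1)
r1 <- tanh(rnorm(100, 0.3, 0.1))   # placeholder rolling-correlation series in (-1, 1)
r2 <- tanh(rnorm(100, 0.4, 0.1))

z1 <- atanh(r1)                    # Fisher z-transform
z2 <- atanh(r2)
d  <- z1 - z2                      # pointwise difference on the z-scale

# naive paired t-test on the transformed series (ignores the serial
# dependence that rolling correlations usually have, so interpret with care)
t.test(d)

# a block bootstrap of the mean difference is one way to respect that dependence
library(boot)
tsboot(d, statistic = mean, R = 2000, l = 10, sim = "fixed")
```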

Assume we have a program with different instructions. Due to some limitations in the field, it is not possible to test all the instructions. Instead, assume we have tested 4 instructions and calculated their rank for a particular problem.

the rank of Instruction 1 = 0.52

the rank of Instruction 2 = 0.23

the rank of Instruction 3 = 0.41

the rank of Instruction 4 = 0.19

Then we calculated the similarity between the tested instructions using

**cosine similarity** (after converting the instructions from text to vectors using a machine-learning instruction embedding). The question: is it possible to create a mathematical formula that combines the rank values and the similarity between instructions, so that, given an

**untested instruction**, we can calculate, estimate, or predict its rank based on its similarity with a tested instruction? For example, we measure the similarity between instruction 5 and instruction 1. Is it possible to calculate the rank of instruction 5 from its similarity with instruction 1? Is it possible to create a model or mathematical formula? If yes, then how?
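A minimal sketch of one such formula (hypothetical similarity values; in practice they would come from the embedding vectors): estimate the rank of the untested instruction as a similarity-weighted average of the tested ranks, which is essentially kernel or k-NN regression in the embedding space.

```r
# Minimal sketch: similarity-weighted estimate of the rank of instruction 5.
ranks <- c(0.52, 0.23, 0.41, 0.19)   # ranks of tested instructions 1..4 (from the question)

# cosine similarities between instruction 5 and instructions 1..4
# (placeholder values; in practice computed from the embedding vectors)
sims  <- c(0.90, 0.35, 0.60, 0.20)

w     <- pmax(sims, 0)               # ignore negative similarities
rank5 <- sum(w * ranks) / sum(w)     # weighted average of the tested ranks
rank5                                # predicted rank of instruction 5
```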

Hi all, I want to solve a system of simultaneous equations in which some equations are cubic and some are quadratic. How can I solve such a system? The solution should include all combinations of solutions, i.e., the positive as well as the negative ones.
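In case a numerical route helps, here is a minimal sketch under an assumed example system (not the asker's equations): run a Newton-type solver from many random starting points and keep the distinct converged roots, which picks up negative as well as positive real solutions. An algebraic method (resultants or Groebner bases) would be needed to guarantee that no solution is missed.

```r
# Minimal sketch for the assumed example system
#   x^3 + y^2 - 4 = 0
#   x^2 + y  - 2 = 0
library(nleqslv)

f <- function(v) {
  x <- v[1]; y <- v[2]
  c(x^3 + y^2 - 4,
    x^2 + y   - 2)
}

set.seed(1)
starts <- matrix(runif(200, -5, 5), ncol = 2)                 # 100 random starting points
sols   <- t(apply(starts, 1, function(s) nleqslv(s, f)$x))    # one root attempt per start
ok     <- apply(sols, 1, function(v) all(abs(f(v)) < 1e-8))   # keep converged roots only
unique(round(sols[ok, , drop = FALSE], 6))                    # distinct real solutions found
```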

During the lecture, the lecturer mentioned the following frequentist properties of estimators.

**Unbiasedness** is only one of the frequentist properties; arguably, it is the most compelling from a frequentist perspective and possibly one of the easiest to verify empirically (and, often, analytically).

There are however many others, including:

1. **Bias-variance trade-off**: we would consider optimal an estimator with little (or no) bias, but we would also value one with small variance (i.e. more precision in the estimate). So, when choosing between two estimators, we may prefer one with very little bias and small variance to one that is unbiased but has large variance.

2. **Consistency**: we would like an estimator to become more and more precise and less and less biased as we collect more data (technically, as n → ∞).

3. **Efficiency**: as the sample size increases indefinitely (n → ∞), we expect an estimator to become increasingly precise (i.e. its variance to reduce to 0 in the limit).

Why does the frequentist approach have these kinds of properties, and can we prove them? I think these properties can be applied to many other statistical approaches.
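To make the first two properties concrete, here is a small simulation sketch (my own illustrative example with N(0, sigma^2 = 4) data): the variance estimator with divisor n is biased but consistent, while the divisor n - 1 version is unbiased.

```r
# Empirical bias of two variance estimators under repeated sampling.
set.seed(123)
sigma2 <- 4
sim <- function(n, reps = 10000) {
  est <- replicate(reps, {
    x <- rnorm(n, 0, sqrt(sigma2))
    c(biased   = sum((x - mean(x))^2) / n,   # divisor n
      unbiased = var(x))                     # divisor n - 1
  })
  rowMeans(est) - sigma2                     # empirical bias of each estimator
}
sim(10)     # clear negative bias for the divisor-n estimator at small n
sim(1000)   # the divisor-n bias shrinks as n grows (consistency)
```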

**Applying mathematical knowledge in research models:** This question has been on my mind for a long time. Can advanced mathematics and applied mathematics solve all the problems in modelling research? In particular, for the formula derivations in the theoretical part of a model, can the analysis conclusions be obtained through multiple derivations or other methods? You may also have read some mathematics-related publications yourself, and you have to admire the mystery of mathematics.

Please, I need, if available, some important research papers which relate the theory of dynamical systems to climate change. Also, in general, I know there are a lot of published research articles that relate dynamical systems to many applications. But are there papers that research centres and governments rely on before taking any action? I mean, are there papers, especially on climate change and the environment, which are not only theoretical but have practical applications?

I am aware of the facts that every totally bounded metric space is separable and that a metric space is compact iff it is totally bounded and complete, but I wanted to know whether every totally bounded metric space is locally compact or not. If not, then give an example of a metric space that is totally bounded but not locally compact.


Well,

I am a very curious person. During Covid-19 in 2020, working with coded data and taking only the last name, I noticed in my country that people with certain surnames were more likely to die than others (and this pattern has remained unchanged over time). Using mathematical ratio and proportion, inconsistencies were found by performing a "conversion" so that all surnames had the same weighting. The rest was a simple exercise in probability and statistics that revealed this controversial fact.

Of course, what I did was a shallow study, just a data mining exercise, but it has been something that caught my attention, even more so when talking to an Indian researcher who found similar patterns within his country about another disease.

In the context of pandemics (for the end of these and others that may come)

I think it would be interesting to have a line of research involving different professionals such as data scientists; statisticians/mathematicians; sociology and demographics; human sciences; biological sciences to compose a more refined study on this premise.

Some questions still remain:

What if we could have such answers? How should Research Ethics be handled? Could we warn people about care? How would people with certain last names considered at risk react? And the other way around? From a sociological point of view, could such a recommendation divide society into "superior" or "inferior" genes?

What do you think about it?

A few years ago, in a conversation that remained unfinished, a statistics expert told me that percentage variables should not be taken as a response variable in an ANOVA. Does anyone know if this is true, and why?

Dear All,

I am planning to do a Ph.D. in applied mathematics but am not able to decide on the area to work in. Can anyone suggest a good option to go with?

Dear researchers

Do you know a journal in the field of applied mathematics or chemistry-mathematics in which publication of an article is free and the outcome of the review is announced within 3 months at most?

please contact me by the following:

Thank you very much.

Best regards

Hi

I have a huge dataset for which I'd like to assess the independence of two categorical variables (x,y) given a third categorical variable (z).

My assumption: I have to do the independence test for each unique "z", and if even one of these tests rejects the null hypothesis (independence), it is rejected for the whole dataset.

Results: I have done Chi-Sq, Chi with Yates correction, Monte Carlo and Fisher.

- Chi-Sq is not a good method for my data due to the sparse contingency table

- Yates and Monte Carlo show rejection of the null hypothesis

- For Fisher, all the p-values are equal to 1

1) I would like to know if there is something I'm missing or not.

2) I have already discarded the "z"s that have DOF = 0. If I keep them how could I interpret the independence?

3) Why does Fisher's test result in a p-value of 1 all the time?

4) Any suggestion?

#### Apply Fisher exact test

fish = fisher.test(cont_table,workspace = 6e8,simulate.p.value=T)

#### Apply Chi^2 method

chi_cor = chisq.test(cont_table,correct=T); ### Yates correction of the Chi^2

chi = chisq.test(cont_table,correct=F);

chi_monte = chisq.test(cont_table,simulate.p.value=T, B=3000);
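A minimal sketch of the per-stratum procedure described above (it assumes a data frame `dat` with columns `x`, `y`, `z`, and uses the same base-R tests as in the posted code):

```r
# For each level of z, test independence of x and y, then adjust for multiplicity.
pvals <- sapply(split(dat, dat$z), function(s) {
  tab <- table(factor(s$x), factor(s$y))           # drop unused levels in this stratum
  if (any(dim(tab) < 2)) return(NA_real_)           # stratum with zero degrees of freedom
  fisher.test(tab, simulate.p.value = TRUE, B = 3000)$p.value
})
p.adjust(pvals[!is.na(pvals)], method = "holm")     # overall rejection if any stays < 0.05
```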

**Dear colleagues, we know that getting a new research paper published can be a challenge for a new researcher. It is even more challenging when considering the risk of refusal that comes from submitting a new paper to a journal that is not the right fit. We can also mention that some journals require an article processing charge (APC) but have a policy allowing them to waive fees on request at the discretion of the editor; however, we underline that we want to publish a new research paper without an APC!**

**So, what do you suggest?**

We are certainly grateful for your recommendations.
Kind regards!

*Abdelaziz Hellal, Mohamed Boudiaf University of M'sila, Algeria.*

I have registered for a conference to be held in Singapore from 9-10 September 2019.

I want to ask if this event is a real event.

Name of event: International Conference on Applied Mathematics and Science (ICAMS-19)

Organization: WRF CONFERENCE

Date: 9th-10th SEP 2019

Best regards

I have previously conducted laboratory experiments on a photovoltaic panel under the influence of artificial soiling in order to be able to obtain the short circuit current and the open-circuit voltage data, which I analyzed later using statistical methods to draw a performance coefficient specific to this panel that expresses the percentage of the decrease in the power produced from the panel with the increase of accumulating dust. Are there any similar studies that relied on statistical analysis to measure this dust effect?

I hope I can find researchers interested in this line of research and that we can do joint work together!

**Article link:**

A tunable clock source will consist of a PLL circuit like the Si5319, configured by a microcontroller. The input frequency is fixed, e.g. 100 MHz. The user selects an output frequency with a resolution of, say, 1 Hz. The output frequency will always be lower than the input frequency.

The problem: the two registers of the PLL circuit which determine the ratio "output frequency/input frequency" are only 23 bits wide, i.e. the upper limit of both numerator and denominator is 8,388,607. As a consequence, when the user sets the frequency to x, the rational number x/10^8 has to be reduced or approximated.

If the greatest common divisor (GCD) of x and 10^8 is >= 12, then the solution is obvious. If not, the task is to find the element of the Farey sequence F_8388607 that is closest to x/10^8. This can be done by descending from the root along the left half of the Stern-Brocot tree. However, this tree, with all elements beyond F_8388607 pruned away, is far from balanced, resulting in a maximum number of descending steps in excess of 4 million; no problem on a desktop computer, but a bit slow on an ordinary microcontroller.

F_8388607 has about 21*10^12 elements, so a balanced binary tree with these elements as leaves would have a depth of about 45. But since such a tree cannot be stored in the memory of a microcontroller, the numerator and denominator of the sought Farey element have to be calculated somehow during the descent. This task is basically simple in the Stern-Brocot tree, but I don't know of any solution in any other tree.

Do you know of a fast algorithm for this problem, maybe working along entirely different lines?

Many thanks in advance for any suggestions!
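For what it is worth, here is a hedged sketch of the usual way to compress the Stern-Brocot descent: each coefficient of the continued-fraction expansion of x/10^8 corresponds to a whole run of identical turns in the tree, so the best 23-bit ratio is reached in a few dozen integer steps instead of millions. This is illustrative R rather than drop-in firmware, and it assumes the output/input ratio is below 1, so the numerator bound follows from the denominator bound.

```r
# Best rational approximation of p/q with denominator <= N (here N = 8388607),
# via continued fractions: the candidates are the last convergent that fits and
# the largest admissible semiconvergent; the closer of the two wins.
best_ratio <- function(p, q, N = 8388607) {
  h0 <- 0; k0 <- 1      # convergent h[n-2]/k[n-2]
  h1 <- 1; k1 <- 0      # convergent h[n-1]/k[n-1]
  a <- p; b <- q
  while (b != 0) {
    t  <- a %/% b
    h2 <- t * h1 + h0; k2 <- t * k1 + k0
    if (k2 > N) break                        # next convergent would not fit
    h0 <- h1; k0 <- k1; h1 <- h2; k1 <- k2
    r <- a %% b; a <- b; b <- r
  }
  if (b == 0) return(c(h1, k1))              # exact reduced fraction already fits
  j  <- (N - k0) %/% k1                      # largest admissible semiconvergent step
  h2 <- h0 + j * h1; k2 <- k0 + j * k1
  # compare |h2/k2 - p/q| with |h1/k1 - p/q| using exact integer cross-multiplication
  if (abs(h2 * q - k2 * p) * k1 < abs(h1 * q - k1 * p) * k2) c(h2, k2) else c(h1, k1)
}

best_ratio(31415927, 1e8)   # numerator/denominator for an output of 31.415927 MHz
```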

**Dear researchers**

**As we know, a new type of derivative has recently been introduced which depends on two parameters, namely the fractional order and the fractal dimension. These derivatives are called fractal-fractional derivatives and are divided into three categories with respect to the kernel: power-law kernel, exponential-decay kernel, and generalized Mittag-Leffler kernel.**

**The power and accuracy of these operators in simulations have motivated many researchers to use them in modelling different diseases and processes.**

**Are there any researchers working on these operators, in particular on equilibrium points, sensitivity analysis, and local and global stability?**

**If you would like to collaborate with me, please contact me by the following:**

**Thank you very much.**

**Best regards**

**Sina Etemad, PhD**

Dear all,

What can you say about the journal " Italian journal of pure and applied Mathematics"?

I am studying integral transforms (Fourier, Laplace, etc.) in order to apply them to physics problems. However, it is difficult to find books that have enough exercises and their answers. I have found that the Russian authors in particular have excellent books with a lot of exercises and their solutions.

Greetings,

Ender

Journal of Industrial & Management Optimization (JIMO) is an open access journal. You pay a substantial amount to publish a paper. When you go to the website of its publisher, American Institute of Mathematical Sciences (AIMS Press), it seems that it is not really based in the United States. I am not sure if it is a legitimate professional organization or if it is a predatory publisher. They have a large number of open access journals. On the other hand, their handling of papers is terrible: extremely slow and low-tech, which is not typical for predatory journals. It may take 13 months to get an editorial rejection, for instance. Furthermore, they don't have an online submission system with user profiles on it, you just submit the paper on a website, and they give you a URL to check your paper's status, which makes your submission open to anyone who has the URL. It has an impact factor of 1.3, which makes me puzzled. Any comments on this organization and the journal will be appreciated.

*Why is a Proof to Fermat's Last Theorem so Important?*

*I have been observing an obsession among mathematicians, logicians and number theorists with providing a "proof of Fermat's Last Theorem". Many intend to publish these papers in peer-reviewed journals. Publishing your findings is good, but the problem is that a lot of the papers aimed at providing a proof of Fermat's Last Theorem are erroneous, and the authors don't seem to realize that.*

*So*

*Why is the proof of Fermat's Last Theorem so important that a huge chunk of mathematicians are obsessed with providing the proof, and failing miserably?*

*What are the practical applications of this theorem?*

*Note: I am not against the theorem or the research that is going on around the theorem, but it seems to be an addiction. That is why I thought of asking this question.*

Hello everyone,

Could you recommend courses, papers, books or websites about modeling language and formalization?

Thank you for your attention and valuable support.

Regards,

Cecilia-Irene Loeza-Mejía

Dear colleagues,

I would like to ask anybody who works with neural networks to check my loop for the test sample.

I have 4 sequences (with the goal of predicting prov; monthly data, 22 values in each sequence) and I would like to construct the forecast for each next month using a training sample of 5 months.

That means I need to shift by one month each time, with 5 elements:

train<-1:5, train<-2:6, train<-3:7, ..., train<-17:21. So I need to get 17 columns as the output result.

The loop is:

shift <- 4

number_forecasts <- 1

d <- nrow(maxmindf)

k <- number_forecasts

for (i in 1:(d - shift + 1))

{

The code:

require(quantmod)

require(nnet)

require(caret)

prov=c(25,22,47,70,59,49,29,40,49,2,6,50,84,33,25,67,89,3,4,7,8,2)

temp=c(22,23,23,23,25,29,20,27,22,23,23,23,25,29,20,27,20,30,35,50,52,20)

soil=c(676,589,536,499,429,368,370,387,400,423,676,589,536,499,429,368,370,387,400,423,600,605)

rain=c(7,8,2,8,6,5,4,9,7,8,2,8,6,5,4,9,5,6,9,2,3,4)

df=data.frame(prov,temp,soil,rain)

mydata<-df

attach(mydata)

mi<-mydata

scaleddata<-scale(mi$prov)

normalize <- function(x) {

return ((x - min(x)) / (max(x) - min(x)))

}

maxmindf <- as.data.frame(lapply(mydata, normalize))

go<-maxmindf

forecasts <- NULL

forecasts$prov <- 1:22

forecasts$predictions <- NA

forecasts <- data.frame(forecasts)

# Training and Test Data

trainset <- maxmindf()

testset <- maxmindf()

#Neural Network

library(neuralnet)

nn <- neuralnet(prov~temp+soil+rain, data=trainset, hidden=c(3,2), linear.output=FALSE, threshold=0.01)

nn$result.matrix

plot(nn)

#Test the resulting output

#Test the resulting output

temp_test <- subset(testset, select = c("temp","soil", "rain"))

head(temp_test)

nn.results <- compute(nn, temp_test)

results <- data.frame(actual = testset$prov, prediction = nn.results$net.result)

}

minval<-min(x)

maxval<-max(x)

minvec <- sapply(mydata,min)

maxvec <- sapply(mydata,max)

denormalize <- function(x,minval,maxval) {

x*(maxval-minval) + minval

}

as.data.frame(Map(denormalize,results,minvec,maxvec))

Could you tell me please,what can i add in trainset and testset (with using loop) and how to display all predictions using a loop so that the results are displayed with a shift by one with a test sample of 5?

I am very grateful for your answers
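A minimal sketch of how the rolling-window loop could be organised (assumptions: `maxmindf` and `mydata` are defined as in the posted code, the window is 5 months, and each iteration forecasts the following month; with only 5 training rows `neuralnet` may fail to converge, so this is a structural sketch rather than a tuned model):

```r
# Rolling 5-month window: train on months i..i+4, forecast month i+5.
library(neuralnet)

window <- 5
n      <- nrow(maxmindf)                        # 22 rows
preds  <- rep(NA_real_, n)                      # one slot per month

for (i in 1:(n - window)) {                     # i = 1 .. 17
  trainset <- maxmindf[i:(i + window - 1), ]                 # rows i .. i+4
  testset  <- maxmindf[i + window, , drop = FALSE]           # the next month

  nn <- neuralnet(prov ~ temp + soil + rain, data = trainset,
                  hidden = c(3, 2), linear.output = FALSE,
                  threshold = 0.01, stepmax = 1e6)           # may not converge on 5 rows

  out <- compute(nn, testset[, c("temp", "soil", "rain")])
  preds[i + window] <- out$net.result
}

# back-transform the normalised predictions to the original scale of prov
denorm <- function(x, minval, maxval) x * (maxval - minval) + minval
preds_prov <- denorm(preds, min(mydata$prov), max(mydata$prov))
data.frame(month = 1:n, actual = mydata$prov, prediction = preds_prov)
```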

I want to develop a hybrid SARIMA-GARCH model for forecasting monthly rainfall data. The data are split into 80% for training and 20% for testing. I initially fit a SARIMA model for rainfall and found that the residuals of the SARIMA model are heteroscedastic. To capture the information left in the SARIMA residuals, a GARCH model of order (p=1, q=1) is applied to the residual part. But when the data are forecasted I get a constant value. I tried different model orders for the GARCH and I still get a constant value. I have attached my code; kindly help me resolve it. Where have I made a mistake in the code, or does some other CRAN package have to be used?

library("tseries")

library("forecast")

library("fGarch")

setwd("C:/Users/Desktop")                        # Setting of the work directory

data<-read.table("data.txt")                     # Importing data

datats<-ts(data,frequency=12,start=c(1982,4))    # Converting data set into time series

plot.ts(datats)                                  # Plot of the data set

adf.test(datats)                                 # Test for stationarity

diffdatats<-diff(datats,differences=1)           # Differencing the series

datatsacf<-acf(datats,lag.max=12)                # Obtaining the ACF plot

datapacf<-pacf(datats,lag.max=12)                # Obtaining the PACF plot

auto.arima(diffdatats)                           # Finding the order of the ARIMA model

datatsarima<-arima(diffdatats,order=c(1,0,1),include.mean=TRUE)   # Fitting of ARIMA

forearimadatats<-forecast(datatsarima,h=12)      # Forecasting using the ARIMA model

plot(forearimadatats)                            # Plot of the forecast

residualarima<-resid(datatsarima)                # Obtaining residuals

archTest(residualarima,lag=12)                   # Test for heteroscedasticity

# Fitting of the ARIMA-GARCH model

garchdatats<-garchFit(formula = ~ arma(2,0)+garch(1,1), data = datats, cond.dist = c("norm"), include.mean = TRUE, include.delta = NULL, include.skew = NULL, include.shape = NULL, leverage = NULL, trace = TRUE, algorithm = c("nlminb"))

# Forecasting using the ARIMA-GARCH model

forecastgarch<-predict(garchdatats, n.ahead = 12, trace = FALSE, mse = c("uncond"), plot=FALSE, nx=NULL, crit_val=NULL, conf=NULL)

plot.ts(forecastgarch)                           # Plot of the forecast
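Separately from the posted script (which fits the GARCH directly to the series), here is a hedged sketch of the residual-GARCH step as described in the question's text: the SARIMA part supplies the mean forecast, and a GARCH(1,1) fitted to the SARIMA residuals supplies the time-varying variance. Function names follow the stats and fGarch packages, and `datatsarima` is the SARIMA fit from the code above.

```r
# Hedged sketch: SARIMA mean forecast + GARCH(1,1) fitted to the SARIMA residuals.
library(fGarch)

mean_fc   <- predict(datatsarima, n.ahead = 12)$pred          # SARIMA mean forecast

res       <- resid(datatsarima)                                # SARIMA residuals
garch_res <- garchFit(~ garch(1, 1), data = res,
                      include.mean = FALSE, trace = FALSE)     # GARCH on the residuals
vol_fc    <- predict(garch_res, n.ahead = 12)$standardDeviation

hybrid_point <- mean_fc                                        # point forecast
hybrid_lower <- mean_fc - 1.96 * vol_fc                        # approximate 95% band
hybrid_upper <- mean_fc + 1.96 * vol_fc
```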

A comprehensive way to find the concentration of random (unknown) solutions would bring benefits for health, industry, technology and commerce. Although the Beer-Lambert law is a solution, there are cases where epsilon is unknown (for example, a Coca-Cola drink or a cup of coffee). In these cases, proper alternative ways of determining concentration should be suggested.

I am trying to solve the differential equation. I was able to solve it when the function P is constant and independent of r and z. But I am not able to solve it further when P is a function of r and z or function of r only (IMAGE 1).

Any general solution for IMAGE 2?

Kindly help me with this. Thanks

Multinomial or ordered choice: which one is applicable?

The complete flow equations for a third grade flow can be derived from the differential representation of the stress tensor. Has anyone ever obtained any results, experimentally or otherwise, that indicate the space-invariance (constancy) of the velocity gradient, especially for 1D shear flow in the presence of constant wall-suction velocity? Under what conditions were the results obtained?

Grubbs's test and Dixon's test are widely applied in the field of hydrology to detect outliers, but the drawback of these statistical tests is that they need the dataset to be approximately normally distributed. I have rainfall data for 113 years, and the dataset is non-normally distributed. What are the statistical tests for finding outliers in non-normally distributed datasets, and what values should we use to replace the outliers?
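Since the question asks about distribution-free alternatives, here is a minimal sketch (placeholder data and conventional cut-offs of my own choosing) of two common robust screens; flagged values are usually inspected or winsorised rather than mechanically replaced.

```r
# Minimal sketch: robust outlier screens for a non-normal series.
rain <- rgamma(113, shape = 2, scale = 50)        # placeholder annual values

# IQR (boxplot) rule
q       <- quantile(rain, c(0.25, 0.75))
iqr     <- diff(q)
out_iqr <- rain < q[1] - 1.5 * iqr | rain > q[2] + 1.5 * iqr

# MAD rule (robust z-score)
rz      <- abs(rain - median(rain)) / mad(rain)
out_mad <- rz > 3.5

which(out_iqr); which(out_mad)                    # indices of flagged years
```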

Consider the powerful central role of differential equations in physics and applied mathematics.

In the theory of ordinary differential equations and in dynamical systems we generally consider smooth or C^k class solutions. In partial differential equations we consider far more general solutions, involving distributions and Sobolev spaces.

I was wondering, what are the best examples or arguments that show that restriction to the analytic case is insufficient ?

What if we only consider ODEs with analytic coefficients and only consider analytic solutions, and likewise for PDEs? Here by "analytic" I mean real maps which can be extended to holomorphic ones. How would this affect the practical use of differential equations in physics and science in general? Is there an example of a differential equation arising in physics (excluding quantum theory!) which only has C^k or smooth solutions and no analytic ones?

It seems we could not even have an analytic version of the theory of distributions, as there could be no test functions: there are no non-zero analytic functions with compact support.

Is Newtonian physics analytic ? Is Newton's law of gravitation only the first term in a Laurent expansion ? Can we add terms to obtain a better fit to experimental data without going relativistic ?

Maybe we can consider that the smooth category is used as a convenient approximation to the analytic category. The smooth category allows perfect locality. For instance, we can consider that a gravitational field dies off outside a finite radius.

Cosmologists usually consider space-time to be a manifold (although with possible "singularities"). Why a manifold rather than the adequate smooth analogue of an analytic space ?

Space = regular points, Matter and Energy = singular points ?

Is the reciprocal of the inverse tangent $\frac{1}{\arctan x}$ a (logarithmically) completely monotonic function on the right-half line?

If $\frac{1}{\arctan x}$ is a (logarithmically) completely monotonic function on $(0,\infty)$, can one give an explicit expression of the measure

$\mu(t)$ in the integral representation in the Bernstein-Widder theorem for $f(x)=\frac{1}{\arctan x}$? These questions have been stated in detail at https://math.stackexchange.com/questions/4247090

Hello Researchers,

Say that I have

**'p'** number of variables and **'m'** number of *constraint equations* between these variables. Therefore, I must have **'p - m'** independent variables, and the remaining variables can be related to the independent ones through the *constraint equations*. Is there any rationale for selecting these **'p - m'** independent variables from the available **'p'** variables?

Uses in applied mathematics and computer sciences

Once I obtain the Riccati equations to solve the moment equations, I can't find the values of the constants. How can I obtain the values of these constants? Have these values already been reported for titanium dioxide?

The Riemannian metric g satisfies

g(P_X Y, W) = -g(Y, P_X W),

where P_X Y denotes the tangential part of (\nabla_X J) Y = P_X Y + Q_X Y.

Can we impose that condition on a Norden manifold?

Assuming we have a piece of timber C16 - 100x100x1000mm and we apply UDL + a point load at middle point on it (parallel to the fibre) as shown below, how much will the timber compress between the force and the concrete surface ?

I have attached a sketch as well. Please see below.

If you could show a detailed calculation would be much appreciated. Thank you!

I am coding a multi-objective genetic algorithm; it can predict the Pareto front accurately for multi-objective functions with a convex Pareto front. But for non-convex Pareto fronts it is not accurate, and the predicted Pareto points are clustered at the ends of the Pareto front obtained from the MATLAB genetic algorithm. Can anybody suggest some techniques to solve this problem? Thanks in advance.

The attached pdf file shows the results from different problems

*This paper is a project to build a new function. I will propose a form of this function and let people help me to develop the idea of this project, and at the same time we will try to apply this function in other sciences such as quantum mechanics, probability, electronics ...*

The threats that global warming has recently posed to humans in many parts of the world have led us to continue this debate.

So the main question is that what actions need to be taken to reduce the risk of climate warming?

Reducing greenhouse gases now seems an inevitable necessity.

In this part, in addition to the aforementioned main question, other specific well-known subjects from the previous discussion are revisited. Please support or refute the following arguments in a **scientific** manner.

% -----------------------------------------------------------------------------------------------------------%

% ---------------- *** Updated Discussions of Global Warming (section 1) *** ---------------%

The rate of increase of the earth's mean temperature has almost doubled compared with 60 years ago; this is a fact (Goddard Institute for Space Studies, GISS, data). Still, a few questions regarding the physical processes associated with global warming remain unanswered or at least need more clarification. So the causes and the prediction of this trend are open questions. The most common subjects are listed below:

1) "The greenhouse effect increases the temperature of the earth, so we need to diminish emissions of CO2 and other air pollutants." The logic behind this reasoning is that the effects of other factors, like the sun's activity (solar wind contribution), the earth's rotation and orbit, ocean CO2 uptake, volcanic activity, etc., are not as important as the greenhouse effect. **Is the ocean passive in the aforementioned scenario?**

2) Two major turbulent fluids, the oceans and the atmosphere, interact with each other, and each has a different circulation timescale; for the oceans it ranges from years to millennia, which affects heat exchange. The ocean is not in equilibrium with the sun instantaneously. For example, the North Atlantic Ocean circulation is quasi-periodic with a recurrence period of about 7 kyr. So climate change has always occurred. Does the timescale of the crucial players (NAO, AO, oceans, etc.) affect the results?

3) The energy of the atmospheric system, including absorption and re-emission, is about 200 W/m2; what percentage of this budget does CO2 contribute (2% or more?), and does it therefore have just a minor effect or not?

4) The climate system is a multi-factor process, and there exist natural modes of temperature variation. How do anthropogenic CO2 emissions push the natural temperature variations out of balance?

6) Some weather and climate models that are based on the primitive equations are able to reproduce reliable results. Are the available models able to predict future decadal variability exactly? How large is the uncertainty of the results? An increase in CO2 apparently leads to a higher mean temperature due to radiative transfer.

7) How is global warming related to extreme weather events?

Some of the consequences of global warming are frequent rainfall, heat waves, and cyclones. If we accept global warming as an effect of anthropogenic fossil fuels, how can we stop the increasing trend of the temperature anomaly and switch to clean energies?

8) What are the roles of sun activities coupled with Milankovitch cycles?

9) What is the role of politicians in sounding the alarm about the danger of global warming? How sensitive are scientists to these decisions?

10) How long is CO2's residence time in the atmosphere? To answer this question precisely, we need a good understanding of the CO2 cycle.

11) Clean energy reduces toxic buildups and harmful smog in air and water. So, how urgent are building renewable energy generation and demanding clean energy?

% -----------------------------------------------------------------------------------------------------------%

% ---------------- *** Discussions of Global Warming (section 2) *** ---------------%

Warming of the climate system in recent decades is unequivocal; nevertheless, in addition to a few scientific articles that identify greenhouse gases and human activity as the main causes of global warming, the debate is still not over, and some opponents claim that these effects have only a minor influence on human life. Some relevant topics/criticisms about global warming, its causes and consequences, the UN's Intergovernmental Panel on Climate Change (IPCC), etc. are put up for discussion and debate:

1) All the greenhouse gases (carbon dioxide, methane, nitrous oxide, chlorofluorocarbons (CFCs), hydro-fluorocarbons, including HCFCs and HFCs, and ozone) account for about a tenth of one percent of the atmosphere. Based on the Stefan-Boltzmann law in basic physics, if you consider the earth, with its albedo (a measure of the reflectivity of a surface), to be in thermal balance, that is, the power radiated from the earth in terms of its temperature = the solar flux at the earth's cross-section, you get Te = (1 - albedo)^0.25 * Ts * sqrt(Rs/(2*Rse)), where Te (Ts) is the temperature at the surface of the earth (Sun), Rs is the radius of the Sun, and Rse is the radius of the earth's orbit around the Sun. This simplified equation shows that Te depends on these four variables: albedo, Ts, Rs, Rse. Just a 1% variation in the Sun's activity leads to a variation of the earth's surface temperature by about half a degree (a short numerical check of this formula is sketched after the sub-questions below).

1.1) Is the Sun's surface (photosphere layer) temperature (Ts) constant?

1.2) How much is the uncertainty in measuring the Sun's photosphere layer temperature?

1.3) Is solar irradiance spectrum universal?

1.4) Is the earth's orbit around the sun (Rse) constant?

1.5) Is the radius of the Sun (Rs) constant?

1.6) Is the largeness of albedo mostly because of clouds or the man-made greenhouse gases?

So the sensitivity of global mean temperature to variation of tracer gases is one of the main questions.
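A short numerical check of the formula in item 1, using standard textbook values as assumptions (Ts ~ 5772 K, Rs ~ 6.957e8 m, Rse ~ 1.496e11 m, albedo ~ 0.3):

```r
# Effective temperature from the balance formula quoted in item 1.
Te <- function(albedo, Ts, Rs, Rse) (1 - albedo)^0.25 * Ts * sqrt(Rs / (2 * Rse))

Te(0.30, 5772, 6.957e8, 1.496e11)          # about 255 K effective temperature
# a 1% increase in solar output (flux ~ Ts^4, so Ts scaled by 1.01^0.25)
# shifts Te by roughly 0.6 K, i.e. on the order of the half-degree mentioned above
Te(0.30, 5772 * 1.01^0.25, 6.957e8, 1.496e11) - Te(0.30, 5772, 6.957e8, 1.496e11)
```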

2) A realistic climate model is essentially a coupled non-linear chaotic system; that is, it is not appropriate for long-term prediction of future climate states. So which types of models are appropriate?

3) Dramatic temperature oscillations were possible within a human lifetime in the past. So there is nothing to worry about. What is wrong with the scientific method applied to extract temperature oscillations in the past from Greenland ice cores or shifts in types of pollen in lake beds?

4) IPCC Assessment Reports,

IPCC's reports are known as some of the reliable sources of climate change, although some minor shortcomings have been observed in them.

4.1) "What is Wrong With the IPCC? Proposals for a Radical Reform" (Ross McKitrick):

The IPCC has provided several climate-change Assessment Reports during the last decades. Is a radical reform of the IPCC necessary, or should we take all the IPCC alarms seriously? What is wrong with Ross's argument? The models that are used by the IPCC have already captured a few of the crudest features of climate change.

4.2) The sort of typical issues of IPCC reports:

- The summary reports focus on those findings that support the human interference theory.

- Some arguments are based on this assumption that the models account for most major sources of variation in the global mean temperature anomaly.

- "Correlation does not imply causation": in some Assessment Reports, results were obtained from correlation methods instead of investigating the downstream effects of interventions or from double-blind controlled trials; however, the conclusions are reported with a stated level of uncertainty.

4.3) Nongovernmental International Panel on Climate Change (NIPCC) also has produced some massive reports to date.

4.4) Is the NIPCC a scientific or a politically biased panel? Can NIPCC climate reports be trusted?

4.5) What is wrong with their scientific methodology?

5) Changes in the earth's surface temperature cause changes in upper level cirrus and consequently radiative balance. So the climate system can increase its cooling processes by these types of feedbacks and adjust to imbalances.

6) What is your opinion about political intervention and its effect upon direction of research budget?

I really appreciate all the researchers who have had active participation with their constructive remarks in these discussion series.

% -----------------------------------------------------------------------------------------------------------%

% ---------------- *** Discussions of Global Warming (section 3) *** ---------------%

In this part other specific well-known subjects are revisited. Please support or refute the following arguments in a **scientific** manner.

1) Still there is no convincing theorem, with a "very low range of uncertainty", to calculate the response of the climate system, in terms of the averaged global surface temperature anomalies, to the total feedback factors and greenhouse-gas changes. In the classical formula applied in the models, a small variation in positive feedbacks leads to considerable changes in the response (temperature anomaly), while a big variation in negative feedbacks causes just small variations in the response.

2) NASA satellite data from the years 2000 through 2011 indicate the Earth's atmosphere is allowing far more heat to be emitted into space than computer models have predicted (i.e. Spencer and Braswell, 2011, DOI: 10.3390/rs3081603). Based on this research "the response of the climate system to an imposed radiative imbalance remains the largest source of uncertainty. It is concluded that atmospheric feedback diagnosis of the climate system remains an unsolved problem, due primarily to the inability to distinguish between radiative forcing and radiative feedback in satellite radiative budget observations." So the contribution of greenhouse gases to global warming is exaggerated in the models used by the U.N.’s Intergovernmental Panel on Climate Change (IPCC). What is wrong with this argument?

3) Ocean Acidification

Ocean acidification is one of the consequences of CO2 absorption in the water and a main cause of severe destabilisation of the entire oceanic food chain.

4) The IPCC reports, which are based on a range of model outputs, suffer somewhat from a range of uncertainty because the models are not able to implement appropriately a few large-scale natural oscillations such as the North Atlantic Oscillation, El Nino, the Southern Ocean oscillation, the Arctic Oscillation, the Pacific decadal oscillation, deep ocean circulations, the Sun's surface temperature, etc. The problem with correlating historical observations of the global averaged surface temperature anomalies with greenhouse-gas forcings is that the correlation is not compared with all other natural sources of temperature variability. Nevertheless, the IPCC has provided a probability for most statements. How can the models be improved further?

5) If we look at the micro-physics of carbon dioxide, theoretically a certain amount of heat can be trapped in it as increased molecular kinetic energy, by increasing the vibrational and rotational motions of CO2, but nothing prevents it from escaping into space. After a specific relaxation time, the energetic carbon dioxide returns to its rest state.

6) Some alarmists claim there exists a scientific consensus among scientists. Nevertheless, even if this claim is true, asking scientists to vote on global warming caused by human-made greenhouse-gas sources does not make sense, because scientific issues are not settled by consensus; indeed, the appeal to majority/authority is a fallacy, not a scientific approach.

% ---------------- *** Discussions of Global Warming (section 4) *** ---------------%

In this part, in addition to new subjects, I have highlighted some of the responses from previous sections for further discussion. Please leave your comments to support or weaken any of the following statements:

1) @Harry ten Brink recapitulated a summary of a proof that CO2 is such an important Greenhouse component/gas. Here is a summary of this argument:

"a) Satellites' instruments measure the radiation coming up from the Earth and Atmosphere.

b) The emission of CO2 at the maximum of the terrestrial radiation at 15 micrometer.

b1. The low amount of this radiation emitted upwards means that the "back-radiation" towards the Earth is high.

b2. Put differently, the emission is from a high altitude in the atmosphere, and with more CO2 the emission is from an even higher altitude where it is cooler. That means that the emission upwards is less. This is called in meteorology a "forcing", because it implies that less radiation/energy is emitted back into space compared to the energy coming in from the sun.

The atmosphere warms until the energy going out equals the solar radiation coming in. That is a summary of the greenhouse effect."

At first glance, this reasoning seems plausible. It is based on the assumptions that the contribution of CO2 is not negligible and that other gases like N2O or ozone have a minor effect. The structure of this argument is supported by an article by Schmidt et al., 2010:

By using the Goddard Institute for Space Studies (GISS) ModelE radiation module, the authors claim that "water vapor is the dominant contributor (∼50% of the effect), followed by clouds (∼25%) and then CO2 with ∼20%. All other absorbers play only minor roles. In a doubled CO2 scenario, this allocation is essentially unchanged, even though the magnitude of the total greenhouse effect is significantly larger than the initial radiative forcing, underscoring the importance of feedbacks from water vapour and clouds to climate sensitivity."

The following notions probably will shed light on the aforementioned argument for better understanding the premises:

Q1) Is there any observational data to support the overall upward/downward IR radiation because of CO2?

Q2) How can we separate practically the contribution of water vapor from anthropogenic CO2?

Q3) What are the deficiencies of the (GISS) ModelE radiation module, if any?

Q4) Some facts, causes, data, etc relevant to this argument, which presented by NASA, strongly support this argument (see: https://climate.nasa.gov/evidence/)

Q5) Stebbins et al. (1994) showed that there exists "A STRONG INFRARED RADIATION FROM MOLECULAR NITROGEN IN THE NIGHT SKY" (thanks to @Brendan Godwin for mentioning this paper). As dry air is more than 78% nitrogen, the contribution of this element is not negligible either.

2) The mean global temperature is not the best diagnostic for studying the sensitivity to global forcing, because, given a change in this mean value, it is almost impossible to attribute it to global forcing. The zonal and meridional distributions of heat flux and temperature are not uniform on the earth, so although the mean temperature value is useful, we need a plausible map of the spatial variation of temperature.

3) "The IPCC model outputs show that the equilibrium response of mean temperature to a doubling of CO2 is about 3C while by the other observational approaches this value is less than 1C." (R. Lindzen)

4) What is the role of the thermohaline circulation (THC) in global warming (or the other way around)? It is known that during Heinrich events and Dansgaard-Oeschger (DO) millennial oscillations, the climate was subject to a number of rapid cooling and warming episodes at a rate much greater than what we see in recent decades. In the literature, these events were most probably associated with north-south shifts in the convection location of the THC. The formation speed of North Atlantic Deep Water (NADW) affects the northerly advection velocity of the warm subtropical waters that would normally heat/cool the atmosphere of Greenland and western Europe.

I really appreciate all the researchers who have participated in this discussion with their useful remarks, particularly Harry ten Brink, Filippo Maria Denaro, Tapan K. Sengupta, Jonathan David Sands, John Joseph Geibel, Aleš Kralj, Brendan Godwin, Ahmed Abdelhameed, Jorge Morales Pedraza, Amarildo de Oliveira Ferraz, Dimitris Poulos, William Sokeland, John M Wheeldon, Michael Brown, Joseph Tham, Paul Reed Hepperly, Frank Berninger, Patrice Poyet, Michael Sidiropoulos, Henrik Rasmus Andersen, and Boris Winterhalter.

%%-----------------------------------------------------------------------------------------------------------%%

Can an elliptic crack (small enough to remain a single entity, with **no internal pressure or shear force**) inside an isotropic material (no boundary effect) be expanded in its own plane under **externally applied shearing stresses** only?

If yes, how did you show that? Do we have experimental evidence for the process?

There are many numerical techniques for obtaining approximate solutions of fractional-order boundary value problems in which the order of the differential equation is a constant fractional number. If we assume that the order of the BVP is a continuous function of time, then is there any numerical technique for obtaining approximate solutions of such a variable-order fractional BVP?

In the maintenance optimization context, researchers use a structure such that the renewal reward theorem applies, and they use this theorem to minimize the long-run cost rate criterion in the maintenance optimization problem.

However, in the real world, creating such structures that lead to the renewal reward theorem may not be possible. I am looking for ways to deal with these problems.

Bests

Hasan

I want to work on solving differential equations using artificial neural networks. I saw some papers working on closed-form solutions, but will that be a good idea? In that way it is not possible to deal with real data, which may be discrete. Is there any paper that solves differential equations with an ANN in a fully numerical way?

Let T, S ∈ L(X) be two non-null and non-compact operators.

Denote by χ the Hausdorff measure of noncompactness.

I'd like to know if this inequality is true or false:

χ(T/S) ≤ χ(T)/χ(S)

Thanks.

I am looking for some interesting areas of research in mathematics or mathematical physics for undergraduate students. I am in my 3rd year, and I've taken some basic courses such as linear algebra, advanced calculus, mathematical methods, applied mathematics, and ODEs. What do you suggest to me?

In my modest opinion, this paper provides a very simple proof of Fermat's Last Theorem using methods that were available in Fermat's day. If correct, it follows that this proof squashes any criticism against Fermat's claims.

It took 358 years (1637-1994) to get Professor Wiles' complicated proof, but this paper shows that one could achieve this in a simpler manner. I am sure this proof is flawless as I have gone through it very thoroughly.

I think Fermat had the proof.

It looks like we have climbed a high mountain to look for something that is right at the foot of the mountain.

**Fermat's claim:** there do not exist integers x, y, z greater than unity for which, for any n > 2, the equation

x^n + y^n = z^n

has a solution. He went on to say:

*"I have discovered a truly marvelous proof of this, which this margin is too narrow to contain."*

Hi everyone,

In engineering design, there are usually only a few data points or low-order moments available, so it is meaningful to fit a relatively accurate probability density function to guide engineering design. What are the methods for fitting probability density functions from small amounts of data or low-order statistical moments?

Best regards

Tao Wang
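As one concrete instance of fitting from low-order moments, here is a minimal sketch (assumed moment values, two-parameter gamma family, plain method of moments); maximum-entropy densities are the usual generalisation when more moments are available.

```r
# Method-of-moments fit of a gamma density from the first two moments.
m1 <- 5.0                  # assumed sample mean
m2 <- 30.0                 # assumed sample second raw moment
v  <- m2 - m1^2            # implied variance
shape <- m1^2 / v
rate  <- m1 / v
curve(dgamma(x, shape = shape, rate = rate), from = 0, to = 20,
      ylab = "density", main = "Gamma fitted by method of moments")
```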

Some graphs plotted by dedicated experimental-setup software need to be replotted in a different format or scale, or for various other reasons. Often the underlying data in tabular form are not available. Can you please suggest the best tool for extracting graph points in such cases?

Some journals give reviewers 60 days, others give 40 days, 30 days, or 20 days to review a paper. MDPI journals give only 10 days, but it can be extended if the reviewer needs more time. In my opinion, 10 days might be too short, but 60 days is excessive. Allowing 60 days for a peer review is adding to the response time unnecessarily, and disadvantaging the authors. I can thoroughly review a paper in a day (if I dedicate myself to it), or two at most. A reviewer should only accept a review request if they are not too busy to do it in the next 10 to 20 days. I have encountered situations in which a reviewer agrees to the review, but does not submit the review at the end of 60 days, wasting those valuable 60 days from the author. What do you think the allowed time for reviewers should be?

Journals with a review time of 2-4 weeks and a publication time of < 6 months; impact factor > 1.

I would like to work on quantum gravity, but general relativity is not complete, so I want to work on GRT first. I am a beginner in this subject, and GRT fails in a few aspects. Can anyone suggest research papers to me? Please send me your answers.

In 2010, Dr. Khmelnik found a suitable method of solving the Navier-Stokes equations and published his results in a book. In 2021, the sixth edition of his book was released, which is attached to this question for downloading. Here it is worth mentioning that the Clay Mathematics Institute has included the problem of solving the Navier-Stokes equations in its list of seven important millennium problems. Why are the Navier-Stokes equations so important?

Are there any conditions under which the difference between two matrices, i.e. **A** - **B**, will be invertible? In particular, I have a positive definite matrix **A**, but **B** is a square matrix, not necessarily symmetric. However, **B** has the form **M P**^{-1} **N**, with **P** a square invertible matrix and **M** and **N** arbitrary matrices of appropriate dimensions.

There is a reference to such an embedding in a 2005 article of Nigel Kalton on Rademacher decoupling. I've failed to figure out how to construct this embedding and am wondering if anyone can give me a hint or a reference.

**Mathematics** *is the basis of the exact sciences. The development of mathematics consists, among other things, in the fact that new phenomena of the surrounding world, which until recently were described only from a humanistic perspective, are also interpreted in mathematical terms.*

However, is it possible to write down the essence of artistic creativity in mathematical models and create a pattern model for **creating works of art, creative solutions and innovative inventions**? If that were possible, then **artificial intelligence** could be programmed to create works of art, creative solutions and innovative inventions. Will it be possible in the future?

Do you agree with my opinion on this matter?

In view of the above, I am asking you the following question:

Will mathematics help to improve **artificial intelligence** so that it achieves the human qualities of **artistic creativity and innovation**?

Please reply

I invite you to the discussion

Best wishes

I would like to post this question to clarify my doubts, as two different answers seem to be correct. Two different experts, (i) faculty members in an applied mathematics department and (ii) faculty in a pure mathematics department, have different opinions. Question: find the limits of integration for the double integral over R, where R is the region in the first quadrant (i) bounded by x = 1, y = 1 and y^2 = 4x; (ii) bounded by x = 1, y = 0 and y^2 = 4x.

I am an engineer/researcher developing code for finding feature parameters of irregularly shaped 2D closed curves. For example, a square has 4 corners and one width/height, a circle has infinitely many corners and one radius, and an oval has infinitely many corners and 2 lengths (major and minor). So I was wondering if we could have various measures for real-life curves, as found in US cardiac images, such as the width of the ventricles. Any help would be deeply appreciated.

Thanks,

Sushma

If I have 2 nodes, each with given coordinates (x, y), can I calculate the distance between the nodes using an algorithm, for example Dijkstra's or A*?

I would like to know if the SUPG method has any advantages over the least squares finite element method?

Thank you for your reply.

**Need a heuristic for an assignment problem**. I want to use it to allocate tasks to 2 or more vehicles, so they can work on the same network. The heuristic should be easy to implement, for example not a GA.

**NOTE:** the allocation of tasks can be, for example, vehicle 1 picks up goods from node A to B and vehicle 2 picks up from C to D.

I have an array of x, y, z coordinates of a 2D surface in 3D space, built in Ansys HFSS or AutoCAD. How can I get an analytical approximation of this surface by formulas?
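One simple way to obtain an explicit formula is a least-squares polynomial surface fit. A minimal sketch, assuming the exported points sit in a data frame `pts` with columns x, y, z:

```r
# Fit a low-order polynomial surface z = f(x, y) and read off its coefficients.
fit <- lm(z ~ poly(x, y, degree = 3, raw = TRUE), data = pts)
summary(fit)                     # coefficients of the polynomial surface

# evaluate the fitted surface on a regular grid
grid <- expand.grid(x = seq(min(pts$x), max(pts$x), length.out = 50),
                    y = seq(min(pts$y), max(pts$y), length.out = 50))
grid$z <- predict(fit, newdata = grid)
```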

I have shared a picture of three parameters, 1. change in temperature, 2. change in relative humidity, 3. change in pressure, and the respective error value for each.

From the attached data (picture and Excel file), I need to find the error value for different input parameters.

If

1.Change in Temperature = 1°C

2. Change in Relative Humidity = 1%

3. Change in Pressure = 1mbar

What is the error value?

If

1.Change in Temperature = 2°C

2. Change in Relative Humidity = 2%

3. Change in Pressure = 2mbar

What is the error value?

If

1.Change in Temperature = 4°C

2. Change in Relative Humidity = 3%

3. Change in Pressure = 2mbar

What is the error value?

**Is it possible to find the error value mathematically? Please tell me the way to calculate it using a calculator or Python programming.**

When creating & optimizing mathematical models with multivariate sensor data (i.e. 'X' matrices) to predict properties of interest (i.e. dependent variable or 'Y'), many strategies are recursively employed to reach "suitably relevant" model performance which include ::

>> preprocessing (e.g. scaling, derivatives...)

>> variable selection (e.g. penalties, optimization, distance metrics) with respect to RMSE or objective criteria

>> calibrant sampling (e.g. confidence intervals, clustering, latent space projection, optimization..)

Typically and contextually, for calibrant sampling, a **top-down** approach is utilized, i.e., from a set of 'N' calibrants, subsets of calibrants may be added or removed depending on the "requirement" or model performance. The assumption here is that a large number of datapoints or calibrants are available to choose from (collected *a priori*).

Philosophically and technically, how does the **bottom-up** pathfinding approach for calibrant sampling, or "searching for ideal calibrants" in a design space, manifest itself? This is particularly relevant in chemical and biological domains, where experimental sampling is constrained.

E.g., given a smaller set of calibrants, how does one robustly approach the addition of new calibrants *in silico* to the calibrant space to make more "suitable" models? (Simulated datapoints can then be collected experimentally for addition to the calibrant space after modelling, for the next iteration of modelling.)

:: Flow example ::

**N calibrants** -> build & compare models -> model iteration 1 -> **addition of new calibrants (N+1)** -> build & compare models -> model iteration 2 -> and so on -> acceptable performance ~ acceptable experimental datapoints collectable -> acceptable model performance

As we know, there are many papers in the literature trying to derive or explain the fine structure constant from theory. Two interesting papers are by Gilson and by Stephen Adler (see http://lss.fnal.gov/archive/1972/pub/Pub-72-059-T.pdf); other papers are mostly based on speculation or numerology.

In this regard, in December 2008 I attended a seminar at Moscow State University, Moscow. The topic of that seminar was the relation between fundamental constants. Since the seminar was presented in Russian, which I don't understand, I asked a friend about the presenter, and my friend said that the presenter was Prof. Anosov. I only had a glimpse of his ideas: he tried to describe the fine structure constant using Shannon entropy. I put some of his ideas in my notebook, but that notebook is now lost.

I have tried searching Google and arxiv.org to find out if there is a paper describing a similar idea, i.e. deriving the fine structure constant from Shannon entropy, but I cannot find any. So if you know of that paper by Anosov, or of someone else discussing the relation between the fine structure constant and Shannon entropy, please let me know. Or perhaps you can explain to me the basic ideas.

The aim is to find the signal value at x_0 from the signal values at x_i, i = 1, ..., N using Kriging, given as Z(x_0) = sum_i w_i Z(x_i).

After fitting a non-decreasing curve to the empirical variogram, we solve the following equation to find the weights w_i:

A w = B,

where A is a padded matrix containing the Cov(x_i, x_j) terms and B is a vector containing the Cov(x_i, x_0) terms.

In my simulation setup, the weights often have negative values (which is non-intuitive). Am I missing any step? As per my understanding, the choice of curve-fitting function affects A. Weights are positive only if A is positive-definite. Is there a way to ensure that A is positive-definite?
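For reference, here is a minimal sketch of the padded ordinary-kriging system with an authorized (positive-definite) exponential covariance model; the locations and parameters are made up. Note that even with a positive-definite model some weights can legitimately come out negative (the screening effect), so negative weights are not by themselves a sign of error.

```r
# Ordinary kriging weights for a small 1-D example with an exponential covariance.
xi <- c(0.0, 1.0, 2.5, 4.0)       # assumed sample locations
x0 <- 1.7                          # prediction location
sill <- 1.0; range_par <- 2.0
cov_exp <- function(h) sill * exp(-abs(h) / range_par)

n <- length(xi)
C <- outer(xi, xi, function(a, b) cov_exp(a - b))
# padded system: Lagrange-multiplier row/column enforces sum(w) = 1
A <- rbind(cbind(C, 1), c(rep(1, n), 0))
B <- c(cov_exp(xi - x0), 1)
w <- solve(A, B)
w[1:n]                             # kriging weights (can still be negative)
sum(w[1:n])                        # equals 1 by construction
```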

I am considering distributing N kinds of different parts among M different countries, and I want to know the "most probable" pattern of distribution. My question is in fact ambiguous, because I am not very sure how I can distinguish types or patterns.

Let me give an example. If I were to distribute 3 kinds of parts to 3 countries, the set of all distribution is given by a set

{aaa, aab, aac, aba, abb, abc aca, acb, acc, baa, bab, bac, bba, bbb, bbc, bca, bcb, bcc, caa, cab, cac, cba, cbb, cbc, cca, ccb, ccc}.

The number of elements is of course 3^3 = 27. I may distinguish three types of patterns:

(1) One country receives all parts:

aaa, bbb, ccc 3 cases

(2) One country receives 2 parts and another country receives 1 part:

aab, aac, aba, abb, aca, acc, baa, bab, bba, bbc, bcb, bcc, caa, cac, cbb, cbc, cca, ccb 18 cases

(3) Each country receives one part respectively:

abc, acb, bac, bca, cab, cba 6 cases

These types may correspond to the partitions of the integer 3, with the condition that the number of summands must not exceed 3 (in general, M). In fact, 3 has three partitions:

3, 2+1, 1+1+1

In the above case of 3×3, the number of types was the number of partitions of 3 (which is often noted p(n)). But I have to consider the case when M is smaller than N.

If I am right, the number of "different types" of distributions is the number of partitions of N with the number of summands less than M+1. Let us denote it as

p*(N, M) = p( N | the number of summands must not exceed M. )

N.B. The * is added in order to avoid confusion with p(N, M), which is the number of partitions with summands smaller than M+1.

Now,

**my question is the following**: *Which type (a partition among p*(N, M)) has the greatest number of distributions?*

Are there any results already known? If so, would you kindly teach me a paper or a book that explains the results and how to approach to the question?

A typical case that I want to know about is N = 100, M = 10. In this simple case, is it most probable that each country receives 10 parts? But I am also interested in cases where M and N are small, for example when M and N are less than 10.
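For experimenting with small cases, here is a minimal sketch (it assumes the CRAN package `partitions`): for every partition of N into at most M parts it counts the distributions of the N distinct parts over the M distinct countries with that occupancy pattern, and reports the pattern with the largest count. For N = 100, M = 10 the same code runs, but the number of partitions to enumerate is large.

```r
# Count distributions per partition type and find the most probable pattern.
library(partitions)

count_type <- function(lambda, M) {
  lambda <- lambda[lambda > 0]
  k <- length(lambda)
  # multinomial coefficient for the parts x arrangements over the countries
  log_count <- lfactorial(sum(lambda)) - sum(lfactorial(lambda)) +
               lfactorial(M) - lfactorial(M - k) -
               sum(lfactorial(table(lambda)))
  exp(log_count)
}

most_probable <- function(N, M) {
  P <- restrictedparts(N, M)               # partitions of N into at most M parts
  counts <- apply(P, 2, count_type, M = M)
  list(partition = P[, which.max(counts)], count = max(counts), total = M^N)
}

most_probable(3, 3)    # partition 2+1, with 18 of the 27 distributions
most_probable(10, 5)   # a slightly larger example
```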