Science topic

# Applied Mathematics - Science topic

Applied Mathematics is a common forum for all branches of applicable mathematics
Questions related to Applied Mathematics
Question
Dear All,
I am planning to do a Ph.D. in applied mathematics but am unable to decide on an area. Can anyone suggest a good option to go with?
Numerical analysis, PDEs, or biomathematics?
Question
Hi
I have a huge dataset for which I'd like to assess the independence of two categorical variables (x,y) given a third categorical variable (z).
My assumption: I have to run the independence test for each unique "z", and if even one of these tests rejects the null hypothesis (independence), it is rejected for the whole dataset.
Results: I have done Chi-Sq, Chi with Yates correction, Monte Carlo and Fisher.
- Chi-squared is not a good method for my data due to the sparse contingency tables
- Yates and Monte Carlo reject the null hypothesis
- For Fisher, all the p values are equal to 1
1) I would like to know whether there is something I'm missing.
2) I have already discarded the "z"s that have DOF = 0. If I keep them, how should I interpret the independence?
3) Why does Fisher's test give p-value = 1 every time?
4) Any suggestion?
```r
#### Apply Fisher's exact test
fish <- fisher.test(cont_table, workspace = 6e8, simulate.p.value = TRUE)
#### Apply the Chi^2 method
chi_cor <- chisq.test(cont_table, correct = TRUE)   # Yates correction of the Chi^2
chi <- chisq.test(cont_table, correct = FALSE)
chi_monte <- chisq.test(cont_table, simulate.p.value = TRUE, B = 3000)
```
Hello Masha,
Why not use the Mantel-Haenszel test across all the z-level 2x2 tables for which there is some data? This allows you to estimate the aggregate odds ratio (and its standard error), thus you can easily determine whether a confidence interval includes 1 (no difference in odds, and hence, no relationship between the two variables in each table) or not.
That seems simpler than having to run a bunch of tests and, by doing so, increasing the aggregate risk of a Type I error (false positive).
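The suggestion above can be sketched in a few lines. As a hedged illustration, here is a minimal pure-Python version of the Cochran-Mantel-Haenszel statistic (in R this is `mantelhaen.test`), with made-up strata — each stratum standing for one fixed value of z:

```python
import math

def mantel_haenszel(tables, correction=True):
    """Cochran-Mantel-Haenszel test over a list of 2x2 tables [[a, b], [c, d]].

    Returns (chi-squared statistic, p-value, Mantel-Haenszel pooled odds ratio)."""
    expected, variance, a_sum = 0.0, 0.0, 0.0
    or_num, or_den = 0.0, 0.0
    for (a, b), (c, d) in tables:
        n = a + b + c + d
        a_sum += a
        expected += (a + b) * (a + c) / n                   # expected count of cell a
        variance += (a + b) * (c + d) * (a + c) * (b + d) / (n * n * (n - 1))
        or_num += a * d / n                                 # pooled odds-ratio terms
        or_den += b * c / n
    diff = abs(a_sum - expected) - (0.5 if correction else 0.0)
    chi2 = diff * diff / variance
    p = math.erfc(math.sqrt(chi2 / 2))                      # chi-squared survival, 1 df
    return chi2, p, or_num / or_den

# two hypothetical strata (values of z), purely for illustration
tables = [([10, 20], [15, 5]), ([8, 12], [14, 6])]
chi2, p, or_mh = mantel_haenszel(tables)
```

A confidence interval for the pooled odds ratio excluding 1 then corresponds to rejecting conditional independence, as described above.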
Question
Consider the powerful central role of differential equations in physics and applied mathematics.
In the theory of ordinary differential equations and in dynamical systems we generally consider smooth or C^k class solutions. In partial differential equations we consider far more general solutions, involving distributions and Sobolev spaces.
I was wondering: what are the best examples or arguments showing that restriction to the analytic case is insufficient?
What if we only considered ODEs with analytic coefficients and only analytic solutions, and likewise for PDEs? Here by "analytic" I mean real maps which can be extended to holomorphic ones. How would this affect the practical use of differential equations in physics and science in general? Is there an example of a differential equation arising in physics (excluding quantum theory!) which only has C^k or smooth solutions and no analytic ones?
It seems we could not even have an analytic version of the theory of distributions, as there could be no test functions: there are no non-zero analytic functions with compact support.
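The standard illustration of that obstruction is the smooth bump construction. Starting from

```latex
f(x) = \begin{cases} e^{-1/x}, & x > 0, \\ 0, & x \le 0, \end{cases}
```

every derivative of f vanishes at 0, so its Taylor series there is identically zero while f is not: f is smooth but not analytic at 0. Products such as x \mapsto f(x)\,f(1-x) then give smooth test functions supported in [0,1], which is exactly what the analytic category cannot supply.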
Is Newtonian physics analytic? Is Newton's law of gravitation only the first term in a Laurent expansion? Can we add terms to obtain a better fit to experimental data without going relativistic?
Maybe we can consider that the smooth category is used as a convenient approximation to the analytic category. The smooth category allows perfect locality. For instance, we can consider that a gravitational field dies off outside a finite radius.
Cosmologists usually consider space-time to be a manifold (although with possible "singularities"). Why a manifold rather than the adequate smooth analogue of an analytic space ?
Space = regular points, Matter and Energy = singular points ?
Question
Dear colleagues, we know that getting a new research paper published can be a challenge for a new researcher. It is even more challenging considering the risk of rejection that comes from submitting a paper to a journal that is not the right fit. We can also mention that some journals require an article processing charge (APC) but have a policy allowing them to waive fees on request at the discretion of the editor; however, we want to publish a new research paper without an APC!
So, what do you suggest?
We are certainly grateful for your recommendations. Kind regards!
Abdelaziz Hellal Mohamed Boudiaf M'sila, University, Algeria.
It depends on the value of your paper. If it contains elements of novelty, of interest for researchers in the domain of nonlinear PDE, then I recommend:
Journal of Differential Equations
Nonlinear Analysis
or
Partial Differential Equations in Applied Mathematics.
Question
I have registered for a conference to be held in Singapore from 9-10 September 2019.
I want to ask whether this event is real.
Name of event: International Conference on Applied Mathematics and Science (ICAMS-19)
Organization: WRF CONFERENCE
Date: 9th-10th SEP 2019
Best regards
Hello Usama,
I would not spend any time looking at that 'conference'.
I quote from that page,
"The main focus of conference is to improve the , accelerate the translation of leading edge discovery at research level"
Ungrammatical, misspelled.
But the main issue I have with it is that there is no topic.
What on Earth is this conference about?
An event so broad in scope cannot hope to be useful in attracting an audience that would actually be beneficial to its attendees.
The link to "Upcoming conference" does not work. The telephone number is a British one (+44) but the WhatsApp number is not.
If you can tell me your field of interest, I might be able to suggest some useful meetings. My background is in aerospace systems, material testing for oil & gas firms, and respiratory medical devices.
Question
I previously conducted laboratory experiments on a photovoltaic panel under artificial soiling in order to obtain short-circuit current and open-circuit voltage data, which I later analyzed with statistical methods to derive a performance coefficient for this panel expressing the percentage decrease in the power produced by the panel as dust accumulates. Are there any similar studies that relied on statistical analysis to measure this dust effect?
I hope I can find researchers interested in this line of research and that we can do joint work together!
Dear Dr Younis
Find attached:
1-(1) (PDF) Spatial Management for Solar and Wind Energy in Kuwait (researchgate.net)
2-(1) (PDF) Cost and effect of native vegetation change on aeolian sand, dust, microclimate and sustainable energy in Kuwait (researchgate.net)
regards
Ali Al-Dousari
Question
A tunable clock source will consist of a PLL circuit like the Si5319, configured by a microcontroller. The input frequency is fixed, e.g. 100 MHz. The user selects an output frequency with a resolution of, say, 1 Hz. The output frequency will always be lower than the input frequency.
The problem: the two registers of the PLL circuit which determine the ratio "output frequency/input frequency" are only 23 bits wide, i.e. the upper limit of both numerator and denominator is 8,388,607. As a consequence, when the user sets the frequency to x, the rational number x/10^8 has to be reduced or approximated.
If the greatest common divisor (GCD) of x and 10^8 is >= 12, the solution is obvious. If not, the task is to find the element of the Farey sequence F_8388607 that is closest to x/10^8. This can be done by descending from the root along the left half of the Stern-Brocot tree. However, this tree, with all elements beyond F_8388607 pruned away, is far from balanced, resulting in a maximum number of descending steps in excess of 4 million; no problem on a desktop computer, but a bit slow on an ordinary microcontroller.
F_8388607 has about 21*10^12 elements, so a balanced binary tree with these elements as leaves would have a depth of about 45. But since such a tree cannot be stored in the memory of a microcontroller, the numerator and denominator of the sought Farey element have to be calculated somehow during the descent. This task is basically simple in the Stern-Brocot tree, but I don't know of any solution in any other tree.
Do you know of a fast algorithm for this problem, maybe working along entirely different lines?
Many thanks in advance for any suggestions!
Now I have tested the idea of jumping downwards in the Stern-Brocot tree. The idea is based on the observation that long paths from the root to a Farey element always seem to contain long runs of steps directed exclusively to the left or exclusively to the right. As long as the direction is constant, the values added to the numerator and denominator of the current node are also constant. Therefore, the numerator and denominator can each be incremented by their constant step multiplied by a chosen jump width.
To determine the largest possible jump width, bitwise successive approximation is used in this first approach. The result is quite satisfactory:
With an input frequency of 100 MHz, and the output frequency in the range 1 Hz to 100 MHz - 1 Hz (at the extrema, the approximation by Farey elements is poor, of course), the sum of the passes through the outer loop (movement through the tree) and through the inner loop (determining the maximum jump width) never exceeds 386. Attached is my C source. Compared to the maximum number of single steps, this is an improvement of 4 orders of magnitude.
While this approach solves my practical problem, I would still be interested in other solutions because sometimes it's amazing how a problem can be tackled in different ways.
Question
Dear researchers
As we know, a new type of derivative has recently been introduced which depends on two parameters, a fractional order and a fractal dimension. These derivatives are called fractal-fractional derivatives and are divided into three categories with respect to the kernel: power-law kernel, exponential-decay kernel, and generalized Mittag-Leffler kernel.
The power and accuracy of these operators in simulations have motivated many researchers to use them in the modeling of different diseases and processes.
Are there any researchers working with these operators on equilibrium points, sensitivity analysis, and local and global stability?
Thank you very much.
Best regards
Yes I am
Question
Dears,
What can you say about the journal " Italian journal of pure and applied Mathematics"?
The good news is that this journal is still indexed. Your observation has to do with the fact that:
-The journal publishes only twice a year (and the first issue of 2022 has just been published)
-The journal is most likely rather slow in providing the metadata that Scopus needs for proper indexing
The bad news is that when I had a closer look they do indeed, as said by Karrar Q. Al-Jubouri and yourself, take a long time to publish accepted papers. So if you can wait patiently this is just a fine journal, but under any time pressure you had better go for another one.
Best regards.
Question
I am aware of the facts that every totally bounded metric space is separable and that a metric space is compact iff it is totally bounded and complete, but I wanted to know whether every totally bounded metric space is locally compact. If not, please give an example of a metric space that is totally bounded but not locally compact.
Euh... the closed unit ball of L^2 is not totally bounded: since L^2 is complete, a closed, totally bounded set would be compact, and the ball is not compact. The open unit ball is not totally bounded either, since its closure is not compact.
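A concrete example answering the original question (a standard fact, added here for completeness): take

```latex
X \;=\; \mathbb{Q} \cap [0,1] \subset \mathbb{R}.
```

X is totally bounded, being a subset of the totally bounded set [0,1], but it is not locally compact: every neighbourhood of a point of X contains a sequence converging in \mathbb{R} to an irrational number, and such a sequence is Cauchy with no subsequence converging in X.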
Question
Journal of Industrial & Management Optimization (JIMO) is an open access journal. You pay a substantial amount to publish a paper. When you go to the website of its publisher, American Institute of Mathematical Sciences (AIMS Press), it seems that it is not really based in the United States. I am not sure if it is a legitimate professional organization or if it is a predatory publisher. They have a large number of open access journals. On the other hand, their handling of papers is terrible: extremely slow and low-tech, which is not typical for predatory journals. It may take 13 months to get an editorial rejection, for instance. Furthermore, they don't have an online submission system with user profiles on it, you just submit the paper on a website, and they give you a URL to check your paper's status, which makes your submission open to anyone who has the URL. It has an impact factor of 1.3, which makes me puzzled. Any comments on this organization and the journal will be appreciated.
Norbert Tihanyi, one little warning: if you check whether a particular journal is mentioned in Beall's list, you should not only check the journal title in the stand-alone journal list (https://beallslist.net/standalone-journals/) but also the publisher behind it (if any). In this case the publisher is not mentioned in Beall's list (https://beallslist.net/). Anis Hamza, I suppose you mean the ISSN; this journal, with ISSN 1547-5816 and/or E-ISSN 1553-166X, is mentioned in Scopus (https://www.scopus.com/sources.uri?zone=TopNavBar&origin=searchbasic) and in Clarivate's Master Journal List (https://mjl.clarivate.com/home).
Back to your question, it is somewhat diffuse. There are signs that you are dealing with a questionable organization:
-Contact info renders in Google a nice residence but does not seem to correspond to an office and I quote “The American Institute of Mathematical Sciences is an international organization for the advancement and dissemination of mathematical and applied sciences.” https://www.aimsciences.org/common_news/column/aboutaims
-Both websites https://www.aimsciences.org/ and http://www.aimspress.com/ function more or less okay but not flawlessly
-The journal "Journal of Industrial & Management Optimization (JIMO)" is somewhat vague about the APC. It positions itself as hybrid (with an APC of 1800 USD), but all papers I checked can be read open access (although not all carry a CC or similar license). It mentions something like free open access when an agreement is signed with your institution, but how much this costs is unclear
-No problem by itself, but the majority of authors are from China, which makes you wonder about the "American" Institute…
-The editing is, well… sober
On the other hand, it looks like, and I quote, "AIMS is a science organization with two independent operations: AIMS Press (www.aimspress.com) and the American Institute of Mathematical Sciences (AIMS) (www.aimsciences.org)." AIMS Press is focused on open access journals, while the journals published by AIMS (www.aimsciences.org) are/used to be subscription-based journals, pretty much like Springer has their BioMed Central (BMC) journal portfolio and Bentham has their Bentham Open division.
Facts are:
-AIMS ( www.aimsciences.org ), more than 20 of their journals are indexed in SCIE and indexed in Scopus as well (under the publisher’s name: American Institute of Mathematical Sciences)
-AIMS Press (www.aimspress.com ), four journals are indexed in SCIE and thus have an impact factor and 14 journals are indexed in Clarivate’s ESCI. 7 journals are indexed in Scopus.
-AIMS Press, 20 of their journals are members of DOAJ
-Journal of Industrial & Management Optimization (JIMO) https://www.aimsciences.org/journal/1547-5816 is indexed in Clarivate's SCIE (impact factor 1.801, see the enclosed file for the latest JCR report) and is Scopus indexed with CiteScore 1.8 https://www.scopus.com/sourceid/12900154727.
-For the papers I checked, the time between received and accepted varies between 6 and 9 months, with an additional 3-4 months before publication (it is, well, not fast, but not unusual)
So, overall, I think that the publisher has quite some credibility and it might be worthwhile to consider.
Best regards.
Question
Why is a Proof to Fermat's Last Theorem so Important?
I have been observing an obsession among mathematicians, logicians, and number theorists with providing a "proof of Fermat's Last Theorem". Many intend to publish these papers in peer-reviewed journals. Publishing your findings is good, but the problem is that many of the papers aiming to prove Fermat's Last Theorem are erroneous, and the authors don't seem to realize that.
So
Why is the proof of Fermat's Last Theorem so important that a huge number of mathematicians are obsessed with providing one and failing miserably?
What are the practical applications of this theorem?
Note: I am not against the theorem or the research going on around it, but it seems to be an addiction. That is why I thought of asking this question.
Muneeb Faiq, the situation has changed and there should be no fear when standing before FLT.
Question
Hello everyone,
Could you recommend courses, papers, books or websites about modeling language and formalization?
Thank you for your attention and valuable support.
Regards,
Cecilia-Irene Loeza-Mejía
Kindly check also the following very good RG link:
Question
Dear collegues.
I would like to ask anybody who works with neural networks to check my loop for the test sample.
I have 4 sequences (with the goal of predicting prov; monthly data, 22 points in each sequence) and I would like to construct a forecast for each next month using a training sample of 5 months.
That means I need to shift by one month each time, with 5 elements:
train<-1:5, train<-2:6, train<-3:7, ..., train<-17:21. So I need to get 17 columns as the output.
The loop and the code:

```r
require(quantmod)
library(neuralnet)

prov <- c(25,22,47,70,59,49,29,40,49,2,6,50,84,33,25,67,89,3,4,7,8,2)
temp <- c(22,23,23,23,25,29,20,27,22,23,23,23,25,29,20,27,20,30,35,50,52,20)
soil <- c(676,589,536,499,429,368,370,387,400,423,676,589,536,499,429,368,370,387,400,423,600,605)
rain <- c(7,8,2,8,6,5,4,9,7,8,2,8,6,5,4,9,5,6,9,2,3,4)
mydata <- data.frame(prov, temp, soil, rain)

normalize <- function(x) (x - min(x)) / (max(x) - min(x))
maxmindf <- as.data.frame(lapply(mydata, normalize))

window <- 5                  # training sample size (months)
d <- nrow(maxmindf)
predictions <- rep(NA, d)

for (i in 1:(d - window)) {  # i = 1 -> train on 1:5, predict 6; ...; i = 17 -> train on 17:21, predict 22
  trainset <- maxmindf[i:(i + window - 1), ]
  testset  <- maxmindf[i + window, , drop = FALSE]
  nn <- neuralnet(prov ~ temp + soil + rain, data = trainset,
                  hidden = c(3, 2), linear.output = FALSE, threshold = 0.01)
  predictions[i + window] <- compute(nn, testset[, c("temp", "soil", "rain")])$net.result
}

# Denormalize the predictions back to the original scale of prov
minval <- min(mydata$prov)
maxval <- max(mydata$prov)
results <- data.frame(actual = mydata$prov,
                      prediction = predictions * (maxval - minval) + minval)
```
Could you please tell me what to put in trainset and testset (using the loop) and how to display all the predictions with the loop, so that the results come out shifted by one with a training sample of 5?
Question
I want to develop a hybrid SARIMA-GARCH model for forecasting monthly rainfall data. The data is split into 80% for training and 20% for testing. I initially fit a SARIMA model for rainfall and found that the residuals of the SARIMA model are heteroscedastic. To capture the information left in the SARIMA residuals, GARCH is applied to model the residual part. A GARCH model of order (p=1, q=1) is applied. But when the data is forecasted I get a constant value. I tried different GARCH model orders and still get a constant value. I have attached my code; kindly help me resolve it. Where have I made a mistake in the coding, or does some other CRAN package have to be used?
```r
library(tseries)
library(forecast)
library(fGarch)

setwd("C:/Users/Desktop")                     # set the working directory
datats <- ts(data, frequency = 12, start = c(1982, 4))   # convert the data set (assumed already loaded) into a time series
plot.ts(datats)                               # plot of the data set
diffdatats <- diff(datats, differences = 1)   # difference the series
acf(datats, lag.max = 12)                     # ACF plot
pacf(datats, lag.max = 12)                    # PACF plot
auto.arima(diffdatats)                        # find the order of the ARIMA model
datatsarima <- arima(diffdatats, order = c(1, 0, 1), include.mean = TRUE)   # fit the ARIMA model
forearimadatats <- forecast(datatsarima, h = 12)   # forecast with the ARIMA model
plot(forearimadatats)                         # plot of the forecast
residualarima <- resid(datatsarima)           # obtain the residuals
Box.test(residualarima^2, lag = 12, type = "Ljung-Box")   # test the squared residuals for ARCH effects
# Fit GARCH(1,1) to the ARIMA residuals (not to the raw series)
garchdatats <- garchFit(formula = ~ garch(1, 1), data = residualarima,
                        cond.dist = "norm", include.mean = TRUE, trace = TRUE)
# Forecast the residuals and add them back to the ARIMA mean forecast;
# on its own, the GARCH mean forecast is (nearly) constant, which explains the flat output
forecastgarch <- predict(garchdatats, n.ahead = 12, mse = "uncond", plot = FALSE)
plot.ts(forearimadatats$mean + forecastgarch$meanForecast)   # combined forecast
```
It happens to everyone at the beginning; this is how we learn. I would advise you to check your theory and code line by line. It will work for sure.
Question
A comprehensive way to find the concentration of arbitrary solutions would bring benefits to health, industry, technology, and commerce. Although the Beer-Lambert law is a solution, there are cases where epsilon is unknown (for example, a Coca-Cola drink or a cup of coffee). In these cases, proper alternative ways of determining concentration should be suggested.
Question
I am trying to solve the differential equation. I was able to solve it when the function P is constant and independent of r and z, but I am not able to solve it further when P is a function of r and z, or of r only (IMAGE 1).
Any general solution for IMAGE 2?
Kindly help me with this. Thanks
Check out this paper, which uses a Laplace transformation for solving nonlinear nonhomogeneous partial differential equations.
The solutions are not trivial, unlike the case where your coefficients and the pressure P are constant.
Hopefully it helps.
Question
Multinomial or ordered choice. Which one is applicable?
multinomial logistic regression model is best
Question
The complete flow equations for a third grade flow can be derived from the differential representation of the stress tensor. Has anyone ever obtained any results, experimentally or otherwise, that indicate the space-invariance (constancy) of the velocity gradient, especially for 1D shear flow in the presence of constant wall-suction velocity? Under what conditions were the results obtained?
Academic resources on Fluid Mechanics are provided on
SINGLE PHASE AND MULTIPHASE TURBULENT FLOWS (SMTF) IN NATURE AND ENGINEERING APPLICATIONS | Jamel Chahed | 3 publications | Research Project (researchgate.net)
Question
Grubbs's test and Dixon's test are widely applied in the field of hydrology to detect outliers, but their drawback is that they need the dataset to be approximately normally distributed. I have rainfall data for 113 years and the dataset is non-normally distributed. What statistical tests can find outliers in non-normally distributed datasets, and what values should we substitute in place of the outliers?
Hello Kabbilawsh,
If you believed your sample data accurately represented the target population, you could: (a) run a simulation study of random samples from such a population; and (b) identify exact thresholds for cases (either individual data points or sample means or medians, depending on which better fit your research situation) at whatever desired level of Type I risk you were willing to apply.
If you don't believe your sample data accurately represent the target population, you could invoke whatever distribution you believe to be plausible for the population, then proceed as above.
On the other hand, you could always construct a Chebychev confidence interval for the mean at whatever confidence level you desired, though this would then identify thresholds beyond which no more than 100 - CI% of sample means would be expected to fall, no matter what the shape of the distribution. This, of course, would apply only to samples of 2 or more cases, not to individual scores.
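The Chebyshev interval mentioned above can be sketched directly: for any distribution whatsoever, at most 1/k^2 of the probability lies more than k standard deviations from the mean, so a distribution-free 95% interval uses k = sqrt(1/0.05). A small sketch with hypothetical rainfall summary statistics:

```python
import math

def chebyshev_interval(mean, sd, n, confidence=0.95):
    """Distribution-free interval for the sample mean:
    P(|Xbar - mu| >= k * sigma / sqrt(n)) <= 1 / k**2, so choose k = sqrt(1/(1-confidence))."""
    k = math.sqrt(1.0 / (1.0 - confidence))
    half = k * sd / math.sqrt(n)
    return mean - half, mean + half

# hypothetical annual rainfall summary: mean 950 mm, sd 180 mm, 113 years of data
lo, hi = chebyshev_interval(950.0, 180.0, 113)
```

As noted above, the bound is conservative (k is about 4.47 instead of the normal-theory 1.96) precisely because it assumes nothing about the distribution's shape.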
Question
Is the reciprocal of the inverse tangent $\frac{1}{\arctan x}$ a (logarithmically) completely monotonic function on the right-half line?
If $\frac{1}{\arctan x}$ is a (logarithmically) completely monotonic function on $(0,\infty)$, can one give an explicit expression of the measure $\mu(t)$ in the integral representation in the Bernstein--Widder theorem for $f(x)=\frac{1}{\arctan x}$?
These questions are stated in detail at https://math.stackexchange.com/questions/4247090
It seems that a correct proof for this question has been announced at arxiv.org/abs/2112.09960v1.
Qi’s conjecture on logarithmically complete monotonicity of the reciprocal of the inverse tangent function
Question
Hello Researchers,
Say that I have 'p' number of variables and 'm' number of constraint equations between these variables. Therefore, I must have 'p - m' independent variables, and the remaining variables can be related to the independent ones through the constraint equations. Is there any rationale for selecting these 'p - m' independent variables from available 'p' variables?
Bob Senyange Sir and Victor Krasnoshchekov Sir, thank you for your comments.
Question
I am coding a multi-objective genetic algorithm. It predicts the Pareto front accurately for multi-objective functions with a convex Pareto front, but for non-convex Pareto fronts it is not accurate, and the predicted Pareto points are clustered at the ends of the Pareto front obtained from the MATLAB genetic algorithm. Can anybody suggest techniques to solve this problem? Thanks in advance.
The attached pdf file shows the results for different problems.
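Clustering at the ends of a non-convex front is the classic failure mode of weighted-sum scalarization: on a concave front the weighted sum is minimized at the extremes for every weight. A hedged toy sketch (objectives f1 = x, f2 = 1 - x^2, made up for illustration) contrasting it with the Tchebycheff scalarization, which does reach interior front points:

```python
def f1(x):
    return x

def f2(x):
    return 1.0 - x * x        # together with f1: a non-convex (concave) Pareto front

grid = [i / 1000.0 for i in range(1001)]   # decision variable x in [0, 1]

def argmin(scalarize):
    return min(grid, key=scalarize)

w = 0.5
# weighted sum: concave in x, so the minimum sits at an endpoint of the front
x_ws = argmin(lambda x: w * f1(x) + (1 - w) * f2(x))
# Tchebycheff scalarization (ideal point at the origin): balances the two objectives
x_tch = argmin(lambda x: max(w * f1(x), (1 - w) * f2(x)))
```

Sweeping w with the weighted sum only ever returns the two endpoints, reproducing the clustering you describe; sweeping w with the Tchebycheff form traces the whole front. Decomposition-based or dominance-based selection (e.g. MOEA/D with Tchebycheff, or NSGA-II-style non-dominated sorting with crowding distance) applies the same idea inside a GA.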
Your question provoked me to write a preprint,
NON-CONVEX FUNCTIONS: A GENERALIZATION OF JENSEN'S INEQUALITY,
which I posted recently on ResearchGate.
If all goes well, I want to write another preprint on a similar theme.
Question
Once I obtain the Riccati equations to solve the moments equation, I can't find the values of the constants. How can I obtain the values of these constants? Have these values already been reported for titanium dioxide?
You may possibly mean the method of moments (deriving moments of mass) to solve a set of Smoluchowski coagulation equations as described in, e.g., "A Kinetic View of Statistical Physics" (Chapter 5) by Krapivsky et al.?
You should definitely provide more details and/or the mentioned equations themselves; a lot of different expressions are called Smoluchowski equations.
Question
The Riemannian metric g satisfies
g(P_X Y, W) = -g(Y, P_X W),
where P_X Y denotes the tangential part of (\nabla_X J) Y = P_X Y + Q_X Y.
Can we impose that condition on a Norden manifold?
Bhowmik Subrata Here $(M_{2n},\phi)$ is an almost complex manifold with Norden metric $g$; then we call $(M_{2n},\phi,g)$ an almost Norden manifold, and if $\phi$ is integrable, we say that $(M_{2n},\phi,g)$ is a Norden manifold. In addition, $H=\mu I$ is used mainly in statistical physics, where $H$ is a $g$-symmetric operator.
Question
Assuming we have a piece of C16 timber, 100x100x1000 mm, and we apply a UDL plus a point load at the mid-point (parallel to the fibre) as shown below, how much will the timber compress between the force and the concrete surface?
I have attached a sketch as well. Please see below.
If you could show a detailed calculation would be much appreciated. Thank you!
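Without the attached sketch this can only be a hedged estimate, but for pure axial shortening parallel to the grain the elastic formula is delta = N*L/(E*A). A minimal sketch, assuming the nominal EN 338 value E0,mean = 8000 N/mm^2 for C16 and a hypothetical 10 kN load (the UDL contribution and any bearing/perpendicular-to-grain effects are ignored):

```python
# all units: N and mm
E = 8000.0          # C16 mean modulus of elasticity parallel to grain, N/mm^2 (EN 338)
A = 100.0 * 100.0   # cross-sectional area of the 100x100 mm member, mm^2
L = 1000.0          # member length, mm
N = 10_000.0        # hypothetical axial load (10 kN) - replace with the actual design load

delta = N * L / (E * A)   # elastic axial shortening, mm
```

For a UDL acting along the member's own axis, the axial force varies along the length and the shortening becomes the integral of N(x)/(E*A) dx, i.e. half the value a constant end load of the same total would give.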
Kindly find my working attached. If you need more tutorials you can follow me on ResearchGate; I can teach you for free.
Question
This paper is a project to build a new function. I will propose a form for this function and let people help me develop the idea of this project; at the same time we will try to apply this function in other sciences such as quantum mechanics, probability, electronics…
Are you sure you have defined your function correctly?
1. Usually z = x + iy. But in your function z is in the limit, thus appearing both in the arguments and in what the integral is computed against. If z is not x + iy, the function is not a function of (x, y).
2. What do you mean by the limit? Do you want to compute the case z -> 0?
Question
The threats that global warming has recently posed to humans in many parts of the world have led us to continue this debate.
So the main question is: what actions need to be taken to reduce the risk of climate warming?
Reducing greenhouse gases now seems an inevitable necessity.
In this part, in addition to the aforementioned main question, other specific well-known subjects from the previous discussion are revisited. Please support or refute the following arguments in a scientific manner.
% -----------------------------------------------------------------------------------------------------------%
% ---------------- *** Updated Discussions of Global Warming (section 1) *** ---------------%
The rate of increase of the earth's mean temperature is almost twice what it was 60 years ago; this is a fact (Goddard Institute for Space Studies, GISS, data). Still, a few questions regarding the physical processes associated with global warming remain unanswered or at least need more clarification, so the causes and the prediction of this trend are open questions. The most common subjects are listed below:
1) "The greenhouse effect increases the temperature of the earth, so we need to diminish emissions of CO2 and other air pollutants." The logic behind this reasoning is that the effects of other factors, such as the sun's activity (solar wind contribution), the earth's rotation and orbit, ocean CO2 uptake, volcanic activity, etc., are not as important as the greenhouse effect. Is the ocean passive in this scenario?
2) Two major turbulent fluids, the oceans and the atmosphere, interact with each other, and each has a different circulation timescale; for the oceans it ranges from years to millennia, which affects heat exchange. The system is not in equilibrium with the sun instantaneously. For example, the North Atlantic Ocean circulation is quasi-periodic with a recurrence period of about 7 kyr. So climate change has always occurred. Do the timescales of the crucial players (NAO, AO, the oceans, etc.) affect the results?
3) The energy of the atmospheric system, including absorption and re-emission, is about 200 W/m². What percentage of this budget does CO2 contribute (2% or more?), and does it therefore have just a minor effect or not?
4) The climate system is a multi-factor process, and there exist natural modes of temperature variation. How do anthropogenic CO2 emissions push the natural temperature variations out of balance?
6) Some weather and climate models based on primitive equations are able to reproduce reliable results. Are the available models able to predict future decadal variability exactly? How large is the uncertainty of the results? An increase in CO2 apparently leads to a higher mean temperature due to radiative transfer.
7) How is global warming related to extreme  weather events?
Some of the consequences of global warming are frequent rainfall, heat waves, and cyclones. If we accept  global warming as an effect of anthropogenic fossil fuels, how can we stop the increasing trend of temperature anomaly and switching to clean energies?
8) What are the roles of sun activities coupled with Milankovitch cycles?
9) What are the roles of politicians to alarm the danger of global warming? How much are scientists sensitive to these decisions?
10) What is CO2's residence time in the atmosphere? To answer this question precisely, we need a good understanding of the carbon cycle.
11) Clean energy reduces toxic buildup and harmful smog in air and water. So, how urgent is building renewable energy generation and creating demand for clean energy?
% -----------------------------------------------------------------------------------------------------------%
% ---------------- *** Discussions of Global Warming (section 2) *** ---------------%
Warming of the climate system in recent decades is unequivocal; nevertheless, although a number of scientific articles identify greenhouse gases and human activity as the main causes of global warming, the debate is not over, and some opponents claim that these effects have only a minor impact on human life. Some relevant topics and criticisms about global warming, its causes and consequences, the UN's Intergovernmental Panel on Climate Change (IPCC), etc. are put up for discussion and debate:
1) All the greenhouse gases (carbon dioxide, methane, nitrous oxide, chlorofluorocarbons (CFCs), hydrofluorocarbons, including HCFCs and HFCs, and ozone) account for about a tenth of one percent of the atmosphere. Based on the Stefan–Boltzmann law of basic physics, if you consider the earth, with its albedo (a measure of the reflectivity of a surface), to be in thermal balance, that is, the power radiated from the earth at its temperature equals the solar flux through the earth's cross section, you get Te = (1-albedo)^0.25 * Ts * sqrt(Rs/(2*Rse)), where Te (Ts) is the temperature at the surface of the earth (Sun), Rs is the radius of the Sun, and Rse is the radius of the earth's orbit around the Sun. This simplified equation shows that Te depends on four variables: albedo, Ts, Rs, Rse. Just a 1% variation in the Sun's activity leads to a variation of the earth's surface temperature of about half a degree.
1.1) Is the Sun's surface (photosphere layer) temperature (Ts) constant?
1.2) How much is the uncertainty in measuring the Sun's photosphere layer temperature?
1.3) Is solar irradiance spectrum universal?
1.4) Is the earth's orbit around the sun (Rse) constant?
1.5) Is the radius of the Sun (Rs) constant?
1.6) Is the largeness of albedo mostly because of clouds or the man-made greenhouse gases?
So the sensitivity of the global mean temperature to variations of trace gases is one of the main questions.
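The balance formula in point 1 can be checked numerically. A minimal Python sketch, assuming standard nominal values (albedo ≈ 0.3, Ts ≈ 5778 K, Rs ≈ 6.957e8 m, Rse ≈ 1.496e11 m; these numbers are conventional textbook values, not taken from the post itself):

```python
import math

def effective_temperature(albedo, Ts, Rs, Rse):
    """Radiative-balance temperature: Te = (1-albedo)^0.25 * Ts * sqrt(Rs/(2*Rse))."""
    return (1 - albedo) ** 0.25 * Ts * math.sqrt(Rs / (2 * Rse))

# Assumed nominal values (illustrative):
albedo = 0.30          # Earth's Bond albedo
Ts = 5778.0            # K, solar photosphere temperature
Rs = 6.957e8           # m, solar radius
Rse = 1.496e11         # m, Earth-Sun distance

Te = effective_temperature(albedo, Ts, Rs, Rse)
print(f"Effective temperature: {Te:.1f} K")           # ~255 K

# A 1% increase in solar flux scales Te by 1.01**0.25 (since Te ~ flux^0.25)
dTe = Te * (1.01 ** 0.25 - 1)
print(f"Te change for 1% flux increase: {dTe:.2f} K")  # ~0.6 K
```

This reproduces the "about half a degree per 1% solar variation" figure quoted above, since Te scales with the fourth root of the incoming flux.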
2) A faithful climate model is essentially a coupled non-linear chaotic system; as such, it is not suitable for long-term prediction of future climate states. So which types of models are appropriate?
3) Dramatic temperature oscillations occurred within a human lifetime in the past, so (the argument goes) there is nothing to worry about. What is wrong with the scientific methods used to extract past temperature oscillations from Greenland ice cores or from shifts in pollen types in lake beds?
4) IPCC Assessment Reports,
IPCC reports are regarded as among the reliable sources on climate change, although some minor shortcomings have been observed in them.
4.1) "What is Wrong With the IPCC? Proposals for a Radical Reform" (Ross McKitrick):
The IPCC has provided several climate-change Assessment Reports during recent decades. Is a radical reform of the IPCC necessary, or should we take all the IPCC alarms seriously? What is wrong with Ross's argument? The models used by the IPCC have already captured a few of the crudest features of climate change.
4.2) Some typical issues with IPCC reports:
- The summary reports focus on those findings that support the human interference theory.
- Some arguments are based on this assumption that the models account for most major sources of variation in the global mean temperature anomaly.
- "Correlation does not imply causation": in some Assessment Reports, results are obtained from correlation methods rather than by investigating the downstream effects of interventions or by controlled trials; however, the conclusions are stated with a reported level of uncertainty.
4.3) The Nongovernmental International Panel on Climate Change (NIPCC) has also produced massive reports to date.
4.4) Is the NIPCC a scientific or a politically biased panel? Can NIPCC climate reports be trusted?
4.5) What is wrong with their scientific methodology?
5) Changes in the earth's surface temperature cause changes in upper-level cirrus and consequently in the radiative balance. So the climate system can increase its cooling through these kinds of feedbacks and adjust to imbalances.
6) What is your opinion about political intervention and its effect upon direction of research budget?
I really appreciate all the researchers who have had active participation with their constructive remarks in these discussion series.
% -----------------------------------------------------------------------------------------------------------%
% ---------------- *** Discussions of Global Warming (section 3) *** ---------------%
In this part other specific well-known subjects are revisited. Please support or refute the following arguments in a scientific manner.
1) There is still no convincing theory, with a "very low range of uncertainty", for calculating the response of the climate system, in terms of the averaged global surface temperature anomaly, to the total feedback factors and changes in greenhouse gases. In the classical formula applied in the models, a small variation in positive feedbacks leads to considerable changes in the response (temperature anomaly), while a large variation in negative feedbacks causes only small variations in the response.
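The asymmetry described in point 1 can be illustrated with the classical zero-dimensional feedback formula dT = dT0 / (1 - f), where dT0 is the no-feedback response and f the total feedback factor (the formula and the numbers below are an assumption of this illustration, not taken from the post):

```python
# Classical feedback amplification: response diverges as f -> 1 from below,
# but saturates slowly on the negative-feedback side.
def response(dT0, f):
    """Temperature response for no-feedback response dT0 and feedback factor f < 1."""
    return dT0 / (1.0 - f)

dT0 = 1.2  # K, assumed no-feedback response to doubled CO2

# A small change in a net-positive feedback produces a large change in response:
print(response(dT0, 0.6), response(dT0, 0.7))    # 3.0 K -> 4.0 K
# An equal-sized change on the negative side barely moves the response:
print(response(dT0, -0.6), response(dT0, -0.7))  # 0.75 K -> ~0.71 K
```

This is exactly the asymmetry the post describes: near f = 1 the response is very sensitive to f, while for strongly negative f it is not.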
2) NASA satellite data from the years 2000 through 2011 indicate the Earth's atmosphere is allowing far more heat to be emitted into space than computer models have predicted (i.e. Spencer and Braswell, 2011, DOI: 10.3390/rs3081603). Based on this research "the response of the climate system to an imposed radiative imbalance remains the largest source of uncertainty. It is concluded that atmospheric feedback diagnosis of the climate system remains an unsolved problem, due primarily to the inability to distinguish between radiative forcing and radiative feedback in satellite radiative budget observations." So the contribution of greenhouse gases to global warming is exaggerated in the models used by the U.N.’s Intergovernmental Panel on Climate Change (IPCC). What is wrong with this argument?
3) Ocean Acidification
Ocean acidification is one of the consequences of CO2 absorption in seawater and a main cause of severe destabilisation of the entire oceanic food chain.
4) The IPCC reports, which are based on a range of model outputs, suffer from a range of uncertainty because the models cannot appropriately represent several large-scale natural oscillations, such as the North Atlantic Oscillation, the El Niño–Southern Oscillation, the Arctic Oscillation, the Pacific Decadal Oscillation, deep ocean circulations, the Sun's surface temperature, etc. The problem with correlating historical observations of the globally averaged surface temperature anomaly with greenhouse-gas forcings is that this is not compared with all the other natural sources of temperature variability. Nevertheless, the IPCC has provided a probability for most statements. How can the models be improved further?
5) If we look at the micro-physics of carbon dioxide, theoretically a certain amount of heat can be trapped in it as increased molecular kinetic energy, through increased vibrational and rotational motion of CO2, but nothing prevents that energy from escaping into space: after a specific relaxation time, the energized carbon dioxide returns to its ground state.
6) Some alarmists claim that there exists a scientific consensus among scientists. Even if this claim is true, asking scientists to vote on whether global warming is caused by man-made greenhouse gas sources does not make sense, because scientific issues are not settled by consensus; indeed, the appeal-to-majority/authority fallacy is not a scientific approach.
% -----------------------------------------------------------------------------------------------------------%
% ---------------- *** Discussions of Global Warming (section 4) *** ---------------%
In this part, in addition to new subjects, I have highlighted some of the responses from previous sections for further discussion. Please leave your comments to support or weaken any of the following statements:
1) @Harry ten Brink recapitulated a summary of a proof that CO2 is such an important Greenhouse component/gas. Here is a summary of this argument:
"a) Satellites' instruments measure the radiation coming up from the Earth and Atmosphere.
b) The emission of CO2 at the maximum of the terrestrial radiation at 15 micrometer.
b1. The low amount of this radiation emitted upwards: means that "back-radiation" towards the Earth is high.
b2. Else said the emission is from a high altitude in the atmosphere and with more CO2 the emission is from an even higher altitude where it is cooler. That means that the emission upwards is less. This is called in meteorology a "forcing", because it implies that less radiation /energy is emitted back into space compared to the energy coming in from the sun.
The atmosphere warms so the energy out becomes equals the solar radiation coming in. Summary of the Greenhouse Effect."
At first glance, this reasoning seems plausible. It is based on the assumptions that the contribution of CO2 is not negligible and that other gases like N2O or ozone have only a minor effect. The structure of this argument is supported by an article by Schmidt et al., 2010:
By using the Goddard Institute for Space Studies (GISS) ModelE radiation module, the authors claim that "water vapor is the dominant contributor (∼50% of the effect), followed by clouds (∼25%) and then CO2 with ∼20%. All other absorbers play only minor roles. In a doubled CO2 scenario, this allocation is essentially unchanged, even though the magnitude of the total greenhouse effect is significantly larger than the initial radiative forcing, underscoring the importance of feedbacks from water vapour and clouds to climate sensitivity."
The following questions will probably shed light on the aforementioned argument and its premises:
Q1) Is there any observational data to support the overall upward/downward IR radiation because of CO2?
Q2) How can we separate practically the contribution of water vapor from anthropogenic CO2?
Q3) What are the deficiencies of the (GISS) ModelE radiation module, if any?
Q4) Some facts, causes, data, etc. relevant to this argument, presented by NASA, strongly support it (see: https://climate.nasa.gov/evidence/)
Q5) Stebbins et al. (1994) showed that there exists "A STRONG INFRARED RADIATION FROM MOLECULAR NITROGEN IN THE NIGHT SKY" (thanks to @Brendan Godwin for mentioning this paper). As dry air is more than 78% nitrogen, the contribution of this gas may not be negligible either.
2) The mean global temperature is not the best diagnostic for studying the sensitivity to global forcing, because, given a change in this mean value, it is almost impossible to attribute it to global forcing. The zonal and meridional distributions of heat flux and temperature are not uniform over the earth, so although the mean temperature value is useful, we also need a plausible map of the spatial variation of temperature.
3) "The IPCC model outputs show that the equilibrium response of mean temperature to a doubling of CO2 is about 3C while by the other observational approaches this value is less than 1C." (R. Lindzen)
4) What is the role of the thermohaline circulation (THC) in global warming (or the other way around)? It is known that during Heinrich events and Dansgaard–Oeschger (DO) millennial oscillations, the climate was subject to a number of rapid coolings and warmings at rates much greater than what we see in recent decades. In the literature, these events are most often associated with north–south shifts in the convection location of the THC. The formation rate of North Atlantic Deep Water (NADW) affects the northward advection velocity of the warm subtropical waters that would normally heat/cool the atmosphere of Greenland and western Europe.
I really appreciate all the researchers who have participated in this discussion with their useful remarks, particularly Harry ten Brink, Filippo Maria Denaro, Tapan K. Sengupta, Jonathan David Sands, John Joseph Geibel, Aleš Kralj, Brendan Godwin, Ahmed Abdelhameed, Jorge Morales Pedraza, Amarildo de Oliveira Ferraz, Dimitris Poulos, William Sokeland, John M Wheeldon, Michael Brown, Joseph Tham, Paul Reed Hepperly, Frank Berninger, Patrice Poyet, Michael Sidiropoulos, Henrik Rasmus Andersen, and Boris Winterhalter.
%%-----------------------------------------------------------------------------------------------------------%%
Question
Can an elliptic crack (small enough to remain a single entity, with no internal pressure or shear force) inside an isotropic material (no boundary effect) be expanded in its own plane under externally applied shearing stresses only?
If yes, how did you show that? Do we have experimental evidence for the process?
In the present modelling, the elliptical crack under arbitrary loading is represented by a continuous distribution of infinitesimal dislocations. The conditions under which the crack can be expanded in its own plane are investigated. We show that under applied shearing stresses parallel to the plane of the loop, an expansion is not feasible. Please see "Elliptical crack under arbitrarily applied loading: dislocation, crack-tip stress and crack extension force" in our contributions in Research Gate.
Question
There are many numerical techniques for obtaining approximate solutions of fractional-order boundary value problems in which the order of the differential equation is a constant fractional number. If we assume instead that the order of the BVP is a continuous function of time, is there any numerical technique for obtaining approximate solutions of such a variable-order fractional BVP?
You can also read the following paper
On solutions of variable-order fractional differential equations
DOI:10.11121/ijocta.01.2017.00368
Question
In the maintenance optimization context, researchers use a structure that leads to the renewal reward theorem, and they use this theorem to minimize the long-run cost rate criterion in the maintenance optimization problem.
However, in the real world, creating structures that lead to the renewal reward theorem may not be possible. How can one deal with these problems?
Bests
Hasan
If you want to, yes. Why not?
Question
I want to work on solving differential equations using artificial neural networks. I saw some papers working on closed-form solutions, but is that a good idea? In that approach it is not possible to deal with real data, which may be discrete. Is there any paper that solves differential equations with an ANN in a fully numerical way?
A package for automating this can be found in Julia called NeuralPDE.jl.
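For a fully numerical route in Python, here is a minimal collocation sketch in the style of Lagaris et al.: a tiny tanh network whose parameters are fitted so that a trial solution satisfies y' = -y, y(0) = 1. Everything here (network size, optimizer, test equation) is an illustrative assumption, not any package's actual API:

```python
import numpy as np
from scipy.optimize import minimize

# Collocation points on [0, 2] for the ODE y' = -y, y(0) = 1 (exact: exp(-t)).
t = np.linspace(0.0, 2.0, 40)

def net(t, p):
    # One hidden layer with 5 tanh units; p packs (input weights, biases, output weights).
    w, b, v = p[:5], p[5:10], p[10:15]
    return np.tanh(np.outer(t, w) + b) @ v

def trial(t, p):
    # Trial form y(t) = 1 + t*N(t) automatically satisfies the initial condition.
    return 1.0 + t * net(t, p)

def loss(p):
    h = 1e-4
    dydt = (trial(t + h, p) - trial(t - h, p)) / (2 * h)  # central difference
    return np.mean((dydt + trial(t, p)) ** 2)             # residual of y' = -y

p0 = 0.1 * np.random.default_rng(0).standard_normal(15)
res = minimize(loss, p0, method="BFGS")

err = np.max(np.abs(trial(t, res.x) - np.exp(-t)))
print(f"max abs error vs exp(-t): {err:.3e}")
```

This is entirely numerical: no closed-form solution is assumed, and the collocation points could be replaced by measured data on a discrete grid.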
Question
Let T, S ∈ L(X) be two non-null and non-compact operators.
Denote by χ the Hausdorff measure of noncompactness.
I'd like to know if this inequality is true or false:
χ(T/S) ≤ χ(T)/χ(S)
Thanks.
False
Question
I am looking for some interesting areas of research in mathematics or mathematical physics for undergraduate students. I am in my 3rd year, and I've taken some basic courses such as linear algebra, advanced calculus, mathematical methods, applied mathematics, and ODEs... What do you suggest?
ODEs and PDEs can be solved numerically. You can go through the link - https://www.researchgate.net/post/Mathematical_operators_and_applications
Question
In my modest opinion, this paper provides a very simple proof of Fermat's Last Theorem using methods that were available in Fermat's day. If correct, this proof squashes any criticism of Fermat's claims.
It took 358 years (1637-1994) to get Professor Wiles' complicated proof, but this paper shows that one could achieve this in a simpler manner. I am sure this proof is flawless as I have gone through it very thoroughly.
I think Fermat had the proof.
It looks like we have climbed a high mountain to look for something that is right at the foot of the mountain.
Fermat's claim: there do not exist integers x, y, z greater than unity such that, for any n > 2, the equation
x^n + y^n = z^n
has a solution. He went on to say:
"I have discovered a truly marvelous proof of this, which this margin is too narrow to contain."
Good job,
Cheers
Andrea Rossi
Question
Hi everyone,
In engineering design, there are usually only a few data points or low order moments, so it is meaningful to fit a relatively accurate probability density function to guide engineering design. What are the methods of fitting probability density functions through small amounts of data or low order statistical moments?
Best regards
Tao Wang
A good explanation was given by Chao Dang.
Best regards
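On the question above: with only low-order moments available, one standard route is moment matching, since the maximum-entropy density on the real line for a given mean and variance is Gaussian; with even a handful of data points, maximum likelihood can fit other families. A hedged sketch (the sample values are invented for illustration):

```python
import numpy as np
from scipy import stats

# Small assumed sample (illustrative, not from the post)
data = np.array([2.1, 2.4, 1.9, 2.6, 2.2, 2.0, 2.5])

# Moment matching: with only the first two moments known, the maximum-entropy
# density on the real line is the normal distribution.
mu, sigma = data.mean(), data.std(ddof=1)
fitted = stats.norm(mu, sigma)
print(f"normal fit: mu={mu:.3f}, sigma={sigma:.3f}")

# With a few data points, maximum likelihood can fit other families,
# e.g. a lognormal for strictly positive quantities:
shape, loc, scale = stats.lognorm.fit(data, floc=0)
print(f"lognormal fit: shape={shape:.3f}, scale={scale:.3f}")
```

For higher-order moments (skewness, kurtosis), families such as Pearson or Johnson systems are the usual moment-based choices.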
Question
Some graphs plotted by dedicated experimental-setup software need to be replotted in a different format or scale, or for various other reasons. Often the underlying data in tabular form are not available. Can you please suggest the best tool for extracting graph points in such cases?
Question
Some journals give reviewers 60 days, others give 40 days, 30 days, or 20 days to review a paper. MDPI journals give only 10 days, but it can be extended if the reviewer needs more time. In my opinion, 10 days might be too short, but 60 days is excessive. Allowing 60 days for a peer review is adding to the response time unnecessarily, and disadvantaging the authors. I can thoroughly review a paper in a day (if I dedicate myself to it), or two at most. A reviewer should only accept a review request if they are not too busy to do it in the next 10 to 20 days. I have encountered situations in which a reviewer agrees to the review, but does not submit the review at the end of 60 days, wasting those valuable 60 days from the author. What do you think the allowed time for reviewers should be?
15 days is enough....
Question
Journals with a review time of 2-4 weeks and a publication time of <6 months. Impact factor >1.
Engineering with computers IF=7.9
Soft computing IF=3.6
Question
I would like to work on quantum gravity, but general relativity is not complete, so I want to work on GRT first. I am a beginner in this subject, and GRT fails in a few aspects. Can anyone suggest research papers? Please send me your answers.
Today I read a quote from Einstein (1934) that the properties of all the basic fields (universal "background" fields) represent the properties of space itself. I was a bit flabbergasted, because it was Einstein who in 1920 stated that the theory of general relativity (gravitation) doesn't exist in a universe without matter, a concept developed in 2011 by Erik Verlinde (emergent Newtonian gravity).
The consequence of an emergent force field is the absence of an “independent” field structure that is only dedicated to this force field. It means that gravity – no matter if it is GR or Newtonian gravity – is mediated by one of the existing universal “back ground” fields. In other words, at the moment matter is created in the universe there emerges a field we have termed “gravitation” and it is mediated by the Higgs field, the electric field or the magnetic field.
The exchange of energy between decreased scalars of the Higgs field and the local electric field is determined by Planck's constant (the Higgs mechanism). But the magnetic field is a vector field and cannot exchange energy; a vector field only determines the direction of the transfer of energy. That is why the electric field and the magnetic field are corresponding fields: the electric field generates a local quantum, and the local quantum creates a vector within the magnetic field, and vice versa. The duration between the start of the increase of the local energy, the beginning of the flow of a fixed amount of energy, and the moment the quantum has the energy of Planck's constant is termed "quantum time". Thus quantum time is a constant.
But if the field of gravitation is mediated by one or two of these universal fields the “holy grail” of quantum gravity already exists. Because we know the properties of these universal fields.
If Einstein’s curved spacetime is mediated by the magnetic field GR is equal to Newtonian gravity but space isn’t curved at all (because the curvature of space represents nothing more than the magnitudes of the vectors).
If GR is mediated by the electric field, curved space is not a real curvature but a resultant "curvature", because the electric field is a topological field with a discrete structure that is responsible for the creation of quanta, the fixed amounts of energy (Planck's constant).
The last possibility is the Higgs field. Unfortunately the Higgs field is nearly totally flat in the whole universe (vacuum space). Only rest mass itself forces scalars of the flat Higgs field to decrease their magnitude. Therefore it is impossible that space itself is curved like GR predicts.
The main law in physics – the law of conservation of energy – is restricted to the electric field if there exists no matter in the universe. The consequence is that all the vectors of the corresponding magnetic field in the universe are conserved too (a more fundamental conservation law than the conservation of momentum because momentum is directly related to detectable phenomena). So if matter is created in the universe a new situation is born. The decreased scalar(s) of the Higgs field doesn’t exchange vectors but at the same moment the force of gravitation emerges. The only sensible solution to this problem is that the force of gravitation is equal to the “lost” vectors of the magnetic field because the total amount of vectors in the universe is conserved. So at the end Newton was right although the vectors of the force of gravitation are influencing matter as a push force.
With kind regards, Sydney
Question
In 2010, Dr. Khmelnik found a suitable method of solving the Navier-Stokes equations and published his results in a book. In 2021, the sixth edition of his book was released; it is attached to this question for download. It is worth mentioning that the Clay Mathematics Institute has included the problem of solving the Navier-Stokes equations in its list of seven Millennium Prize Problems. Why are the Navier-Stokes equations so important?
I finally could check the PDF, Prof. Aleksey Anatolievich Zakharenko
Dr. Khmelnik uses a variational principle to solve the NS equation, which is very powerful indeed.
He also discusses and gives examples & a reason for turbulence.
I know that the solution of NS is a non-linear problem that involves several modes and that it depends on the source.
However, my knowledge of the foundations of NS is limited to a few linear/non-linear problems in non-equilibrium gas dynamics & MHD solved by the method Prof. Miguel Hernando Ibanez had.
Thank you for sharing the link. I recovered my account.
Question
Are there any conditions under which the difference between two matrices i.e. A-B will be invertible? In particular, I have a positive definite matrix A but B is a square matrix not necessarily symmetric. However, B has the form MP-1N with P as a square invertible matrix and M and N as arbitrary matrices of appropriate dimensions.
I think the question is not well posed.
That B is square is not an additional condition!
Also, A must be square; otherwise it is meaningless to ask about the inverse.
In any case, A - B is a square matrix, and det(A - B) ≠ 0 ensures the invertibility of A - B.
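Numerically, the invertibility of A - B with B = M P⁻¹ N can simply be checked; a hedged sketch with made-up matrices of the stated form (in floating point, the smallest singular value is a more reliable test than the determinant):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# Illustrative matrices matching the question's structure (all values invented):
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)                      # symmetric positive definite
M, N = rng.standard_normal((n, n)), rng.standard_normal((n, n))
P = rng.standard_normal((n, n)) + n * np.eye(n)  # invertible (diagonally dominated)
B = M @ np.linalg.solve(P, N)                    # M P^{-1} N without forming inv(P)

# A - B is invertible iff det(A - B) != 0; numerically, compare the smallest
# singular value against the largest rather than computing the determinant.
s = np.linalg.svd(A - B, compute_uv=False)
invertible = s[-1] > 1e-10 * s[0]
print("smallest singular value:", s[-1], "invertible:", invertible)
```

A sufficient analytic condition would require more structure (e.g. a bound on the spectral norm of B below the smallest eigenvalue of A), which the question as posed does not provide.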
Question
There is a reference to such an embedding in a 2005 article of Nigel Kalton on Rademacher Decoupling. I've failed to figure out how to construct this embedding and am wondering if anyone can give me a hint or a reference.
Thank you so much for all comments.
Regards,
Question
Mathematics is the basis of the exact sciences. The development of mathematics consists, among other things, in the fact that new phenomena of the surrounding world, which until recently were described only from a humanistic perspective, are also interpreted in mathematical terms.
However, is it possible to capture the essence of artistic creativity in mathematical models and create a template for producing works of art, creative solutions and innovative inventions? If that were possible, then artificial intelligence could be programmed to create works of art, creative solutions and innovative inventions. Will it be possible in the future?
Do you agree with my opinion on this matter?
In view of the above, I am asking you the following question:
Will mathematics help to improve artificial intelligence so that it will achieve human qualities of artistic creativity and innovation?
I invite you to the discussion
Best wishes
Dear Stan Sykora, Boris Pérez-Cañedo, Baidaa Mohammed Ahmed,
Thank you for answering the above question and participating in this discussion.
Regards,
Dariusz Prokopowicz
Question
1. I would like to post this question to clarify my doubts, as two different answers seem to be correct. Two different experts ((i) a faculty member in an applied mathematics department and (ii) a faculty member in a pure mathematics department) have different opinions. Question: Find the limits of integration in the double integral over R, where R is the region in the first quadrant (i) bounded by x=1, y=1 and y^2=4x; (ii) bounded by x=1, y=0 and y^2=4x.
If we consider your problems as MCQ problems (I) and (ii) for undergraduate students, the correct answers are :
(a) for problem(i) and (d) for problem(ii).
PS. Observe that we have several correct choices in each case, but they are not included in the offered choices.
Regards
Question
What is the average time complexity of the simplex method for solving linear programming? Is it polynomial or logarithmic?
The simplex algorithm has exponential worst-case time complexity. In 1972, Klee and Minty proved this with an example (the Klee–Minty cube). On the other hand, the simplex algorithm behaves like a polynomial-time algorithm when solving real-life problems.
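In practice, small LPs like the made-up one below are solved essentially instantly; a hedged sketch with SciPy's `linprog` (note that `method="highs"` dispatches to the HiGHS solvers rather than a classical simplex tableau):

```python
from scipy.optimize import linprog

# Illustrative LP (invented numbers): minimize -x1 - 2*x2
# subject to x1 + x2 <= 4, x1 + 3*x2 <= 6, x1, x2 >= 0.
res = linprog(c=[-1.0, -2.0],
              A_ub=[[1.0, 1.0], [1.0, 3.0]],
              b_ub=[4.0, 6.0],
              bounds=[(0, None), (0, None)],
              method="highs")
print(res.x, res.fun)  # optimum at the vertex (3, 1), objective -5
```

The optimum lies at a vertex of the feasible polytope, which is the geometric fact the simplex method exploits; the exponential Klee–Minty behaviour arises only on specially constructed polytopes that force the method to visit exponentially many vertices.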
Question
I am an engineer/researcher developing code for finding feature parameters of irregularly shaped 2D closed curves. For example, a square has 4 corners and one width/height, a circle has infinitely many "corners" and one radius, and an oval has infinitely many "corners" and 2 lengths, major and minor. So I was wondering if we could have various measures for real-life curves as found in cardiac ultrasound images, such as the width of the ventricles. Any help would be deeply appreciated.
Thanks,
Sushma
following
Question
If I have 2 nodes, each with given coordinates (x, y), can I calculate the distance between the nodes using an algorithm, for example Dijkstra's or A*?
Please read the paper "Energy and Wiener Index of the Total Graph over the Ring Zn". In that paper I calculated the distance between two nodes.
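For context on the question above: Dijkstra and A* compute shortest paths along graph edges, which is only needed when travel is constrained to a network. If the two nodes just have plane coordinates and the straight-line distance is wanted, the Euclidean formula suffices. A minimal sketch:

```python
import math

def euclidean(p, q):
    """Straight-line distance between two nodes given as (x, y) coordinate pairs."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

print(euclidean((0, 0), (3, 4)))  # 5.0
```

In A*, this same Euclidean distance is typically used as the admissible heuristic that guides the graph search.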
Question
I would like to know if the SUPG method has any advantages over the least squares finite element method?
Dear Zmour,
It can be better for convection–diffusion–reaction problems. My opinion is a little different: the least-squares method has better control of the streamline derivative than SUPG.
Ashish
Question
I need a heuristic for an assignment problem, to be used for allocating tasks to 2 or more vehicles operating on the same network. The heuristic should be easy to implement, so for example not a GA.
NOTE: a task allocation can be, for example, vehicle 1 picks up goods from node A to B and vehicle 2 picks up from C to D.
I can't understand the problem, either. At first, it sounded like a vehicle routing problem. But then, when you mention shelves, goods placed on them and a corresponding coordinate system, it sounds like optimizing warehouse operations. Are you trying to schedule the movements of forklifts in a warehouse? Optimizing an automated material handling system? You need to provide more information so that the problem is understood by everyone here.
Question
I have an array of x, y, z coordinates of a 2D surface in 3D space, built in Ansys HFSS or AutoCAD. How can I get an analytical approximation of this surface in terms of formulas?
Mathematica and Matlab are good for this purpose. but in general I prefer Mathematica because of its ease of use.
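A plain least-squares polynomial fit is often enough for such exported point clouds; a hedged NumPy sketch (synthetic points stand in for the CAD export, and the quadratic basis is one assumed choice among many):

```python
import numpy as np

# Least-squares fit of z = c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2
# to scattered (x, y, z) samples.
rng = np.random.default_rng(0)
x, y = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)
z = 1.0 + 0.5 * x - 2.0 * y + 3.0 * x * y       # assumed "true" surface

A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
print(np.round(coeffs, 3))   # recovers [1.0, 0.5, -2.0, 0.0, 3.0, 0.0]
```

For surfaces that are not globally polynomial, piecewise fits such as spline or radial-basis-function interpolation (e.g. `scipy.interpolate`) are the usual next step.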
Question
I shared the picture of three parameters 1.Change in Temperature, 2. Change in Relative Humidity, 3. Change in Pressure and respective error value for that.
From the attached data(picture and excel file attached), I need to find the Error value for different input parameter.
If
1.Change in Temperature = 1°C
2. Change in Relative Humidity = 1%
3. Change in Pressure = 1mbar
What is the error value?
If
1.Change in Temperature = 2°C
2. Change in Relative Humidity = 2%
3. Change in Pressure = 2mbar
What is the error value?
If
1.Change in Temperature = 4°C
2. Change in Relative Humidity = 3%
3. Change in Pressure = 2mbar
What is the error value?
Is it possible to find the error value mathematically? Please tell me how to calculate it using a calculator or Python programming.
If you don't have the original model that this error term comes from, then you can't get the exact answer you're after. However, you could approximate it by fitting a regression model, e.g. fit a least-squares model to find the best set of parameters a, b, c, d for the equation error = a*temp + b*humid + c*pressure + d.
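A minimal sketch of the suggested regression; the table values below are invented stand-ins for the attachment (columns: dT in °C, dRH in %, dP in mbar, error):

```python
import numpy as np

# Invented calibration rows standing in for the attached table.
data = np.array([
    [1, 1, 1, 0.30],
    [2, 2, 2, 0.60],
    [4, 3, 2, 0.85],
    [3, 1, 2, 0.55],
    [2, 4, 1, 0.70],
])
X = np.column_stack([data[:, :3], np.ones(len(data))])  # [dT, dRH, dP, 1]
a, b, c, d = np.linalg.lstsq(X, data[:, 3], rcond=None)[0]

def predict(dT, dRH, dP):
    """Predicted error for a new combination of input changes."""
    return a * dT + b * dRH + c * dP + d

print(round(predict(1, 1, 1), 3))
print(round(predict(2, 2, 2), 3))
print(round(predict(4, 3, 2), 3))
```

With the real table substituted in, `predict` answers each of the "what is the error value?" cases; if the residuals are large, a model with interaction or quadratic terms may be needed.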
Question
When creating & optimizing mathematical models with multivariate sensor data (i.e. 'X' matrices) to predict properties of interest (i.e. dependent variable or 'Y'), many strategies are recursively employed to reach "suitably relevant" model performance which include ::
>> preprocessing (e.g. scaling, derivatives...)
>> variable selection (e.g. penalties, optimization, distance metrics) with respect to RMSE or objective criteria
>> calibrant sampling (e.g. confidence intervals, clustering, latent space projection, optimization..)
Typically & contextually, for calibrant sampling, a top-down approach is utilized, i.e., from a set of 'N' calibrants, subsets of calibrants may be added or removed depending on the "requirement" or model performance. The assumption here is that a large number of datapoints or calibrants are available to choose from (collected a priori).
Philosophically & technically, how does the bottom-up pathfinding approach for calibrant sampling or "searching for ideal calibrants" in a design space, manifest itself? This is particularly relevant in chemical & biological domains, where experimental sampling is constrained.
E.g., Given smaller set of calibrants, how does one robustly approach the addition of new calibrants in silico to the calibrant-space to make more "suitable" models? (simulated datapoints can then be collected experimentally for addition to calibrant-space post modelling for next iteration of modelling).
:: Flow example ::
N calibrants -> build & compare models -> model iteration 1 -> addition of new calibrants (N+1) -> build & compare models -> model iteration 2 -> so on.... ->acceptable performance ~ acceptable experimental datapoints collectable -> acceptable model performance
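The bottom-up flow above can be prototyped with a simple space-filling rule; a hedged Python sketch (the greedy max-min-distance criterion is one assumed choice, and model-based criteria such as prediction variance or D-optimality could be swapped in at the marked line):

```python
import numpy as np

rng = np.random.default_rng(0)
calibrants = rng.uniform(0, 1, (5, 3))    # small initial calibrant set in design space
candidates = rng.uniform(0, 1, (200, 3))  # in-silico candidate pool

def add_calibrant(calibrants, candidates):
    """Greedily pick the candidate farthest from its nearest existing calibrant."""
    d = np.linalg.norm(candidates[:, None, :] - calibrants[None, :, :], axis=2)
    best = d.min(axis=1).argmax()          # most "under-covered" candidate
    return np.vstack([calibrants, candidates[best]]), best

for _ in range(3):                         # N -> N+3, one model iteration each
    calibrants, idx = add_calibrant(calibrants, candidates)
print(calibrants.shape)                    # grew from (5, 3) to (8, 3)
```

Each selected in-silico point would then be measured experimentally and fed back before the next model iteration, matching the N -> N+1 -> ... loop described above.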
Dear Sindhuraj Mukherjee,
I suggest you to see links and attached files on topic.
Best regards
Question
As we know, there are many papers in the literature trying to derive or explain the fine structure constant from theory. Two of the interesting papers are by Gilson and by Stephen Adler (see http://lss.fnal.gov/archive/1972/pub/Pub-72-059-T.pdf); other papers are mostly based on speculation or numerology.
In this regard, in December 2008 I attended a seminar at Moscow State University on the relation between fundamental constants. Since the seminar was presented in Russian, which I don't understand, I asked a friend about the presenter, and my friend said it was Prof. Anosov. I only had a glimpse of his ideas: he tried to describe the fine structure constant in terms of Shannon entropy. I put some of his ideas in my notebook, but that notebook is now lost.
I have tried to search Google and arxiv.org for a paper describing a similar idea, i.e. deriving the fine structure constant from Shannon entropy, but I cannot find any. So if you know of that paper by Anosov, or of someone else discussing the relation between the fine structure constant and Shannon entropy, please let me know. Or perhaps you can explain the basic ideas to me.
Hello. If you are interested, please see this new paper on ResearchGate which derives the fine structure constant from trace dynamics and the exceptional Jordan algebra:
Question
The aim is to find the signal value at x0 from the signal values at xi, i = 1,...,N, using kriging, given as Z(x0) = sum(wi Z(xi)).
After fitting a non-decreasing curve to the empirical variogram, we solve the following equation to find the weights wi:
Aw = B,
where A is the padded matrix containing the Cov(xi, xj) terms and B is the vector containing the Cov(xi, x0) terms.
In my simulation setup, the weights often take negative values (which is non-intuitive). Am I missing a step? As per my understanding, the choice of curve-fitting function affects A, and the weights are positive only if A is positive definite. Is there a way to ensure that A is positive definite?
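For reference, here is a minimal numeric sketch of the ordinary kriging system Aw = B described above, with assumed 1-D sample locations and an assumed exponential covariance model (the sill and correlation-length values are arbitrary). Note that the Lagrange-padded system forces the weights to sum to 1 but does not force them to be non-negative; clustered samples can produce negative weights (the screen effect) even when the covariance function is positive definite:

```python
import numpy as np

def cov(h, sill=1.0, corr_len=3.0):
    """Assumed exponential covariance model."""
    return sill * np.exp(-np.abs(h) / corr_len)

xi = np.array([0.0, 1.0, 1.2, 5.0])    # sample locations (illustrative)
x0 = 2.0                               # estimation point

n = len(xi)
A = np.ones((n + 1, n + 1))            # padded matrix: last row/col enforce sum(w) = 1
A[:n, :n] = cov(xi[:, None] - xi[None, :])
A[-1, -1] = 0.0
B = np.append(cov(xi - x0), 1.0)

w = np.linalg.solve(A, B)[:n]          # kriging weights
print(w, w.sum())                      # sum(w) == 1 by construction
```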
I think the problem is that the sample space of the variables has not been taken into account. Kriging in any of its forms (simple, ordinary, ...) was devised for real random variables supported on the whole real line, going at least conceptually from minus infinity to plus infinity, and endowed with the usual Euclidean geometry. In such a case negative weights would be no problem. If you expect positive estimates, then the variable is not supported on the whole real line, and you need to take this into account. In the field of compositional data analysis you can find tools to address this problem. The essential steps are to determine the natural scale of your data, to find appropriate orthonormal coordinates for your data, and to perform the estimation in the resulting representation of your data.
Question
I am considering distributing N kinds of different parts among M different countries, and I want to know the "most probable" pattern of distribution. My question is in fact ambiguous, because I am not very sure how to distinguish types or patterns.
Let me give an example. If I were to distribute 3 kinds of parts to 3 countries, the set of all distribution is given by a set
{aaa, aab, aac, aba, abb, abc aca, acb, acc, baa, bab, bac, bba, bbb, bbc, bca, bcb, bcc, caa, cab, cac, cba, cbb, cbc, cca, ccb, ccc}.
The number of elements is of course 3³ = 27. I may distinguish three types of patterns:
(1) One country receives all parts:
aaa, bbb, ccc 3 cases
(2) One country receives 2 parts and another country receives 1 part:
aab, aac, aba, abb, aca, acc, baa, bab, bba, bbc, bcb, bcc, caa, cac, cbb, cbc, cca, ccb 18 cases
(3) Each country receives one part respectively:
abc, acb, bac, bca, cab, cba 6 cases
These types correspond to the partitions of the integer 3 under the condition that the number of summands must not exceed 3 (in general, M). In fact, 3 has three partitions:
3, 2+1, 1+1+1
In the above 3×3 case, the number of types was the number of partitions of 3 (often denoted p(n)). But I also have to consider the case when M is smaller than N.
If I am right, the number of "different types" of distributions is the number of partitions of N with the number of summands less than M+1. Let us denote it as
p*(N, M) = p( N | the number of summands must not exceed M. )
N.B. The * is added to avoid confusion with p(N, M), which is the number of partitions with summands smaller than M+1.
Now, my question is the following:
Which type (a partition among p*(N, M)) has the greatest number of distributions?
Are there any results already known? If so, would you kindly teach me a paper or a book that explains the results and how to approach to the question?
A typical case that I want to understand is N = 100, M = 10. In this case, is it most probable that each country receives 10 parts? But I am also interested in cases where M and N are small, for example both less than 10.
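A brute-force answer for small cases can be computed directly. The sketch below (standard multinomial counting, nothing specific to any published result) enumerates the partitions of N into at most M parts and counts, for each partition, the number of distributions realizing it: a multinomial in N for grouping the parts, times the number of ways to assign the block sizes, including empty blocks, to the M distinct countries:

```python
from math import factorial
from collections import Counter

def partitions(n, max_parts, max_val=None):
    """Partitions of n into at most max_parts parts (non-increasing)."""
    if max_val is None:
        max_val = n
    if n == 0:
        yield []
        return
    if max_parts == 0:
        return
    for first in range(min(n, max_val), 0, -1):
        for rest in partitions(n - first, max_parts - 1, first):
            yield [first] + rest

def count_distributions(p, n, m):
    """Number of ways to give n distinct parts to m distinct countries
    so that the occupancy pattern equals the partition p."""
    ways_parts = factorial(n)
    for k in p:
        ways_parts //= factorial(k)          # multinomial(n; p)
    occ = Counter(p)
    occ[0] = m - len(p)                      # countries receiving nothing
    ways_countries = factorial(m)
    for c in occ.values():
        ways_countries //= factorial(c)      # distinct country assignments
    return ways_parts * ways_countries

N, M = 3, 3
best = max(partitions(N, M), key=lambda p: count_distributions(p, N, M))
print(best, count_distributions(best, N, M))
```

For N = 3, M = 3 this finds [2, 1] as the most frequent pattern; the same loop works for larger N and M, though the number of partitions grows quickly.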
Thank you, Luis Daniel Torres Gonzalez, for your contribution. My question does not ask for the probability distribution; it asks what the most probable "pattern" is when we distribute N items among M boxes. I illustrated the meaning of "pattern" with examples, but it seems that was not sufficient. Please read Romeo Meštrović's comments posted above in March 2019.
Question
I'm stuck on how to apply MCR to IR spectroscopy data and on which constraints to use. There are four types of constraints: the equality constraint, the unimodality constraint, non-negativity, and the closure constraint. Please help me proceed.
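For what it's worth, here is a minimal sketch of MCR-ALS with only the non-negativity constraint, applied to synthetic data (the matrix sizes, the two-component system, and the use of SciPy's nnls are all illustrative assumptions, not a recommendation for your specific IR data):

```python
import numpy as np
from scipy.optimize import nnls

def mcr_als_nonneg(D, C0, n_iter=100):
    """Alternating least squares for D ~= C @ S (samples x channels),
    with non-negativity enforced on both C and S via NNLS."""
    C = C0.copy()
    for _ in range(n_iter):
        # update spectra S, one channel at a time
        S = np.column_stack([nnls(C, D[:, j])[0] for j in range(D.shape[1])])
        # update concentrations C, one sample at a time
        C = np.vstack([nnls(S.T, D[i, :])[0] for i in range(D.shape[0])])
    return C, S

# synthetic two-component data
rng = np.random.default_rng(0)
C_true = rng.random((20, 2))
S_true = rng.random((2, 50))
D = C_true @ S_true

C_est, S_est = mcr_als_nonneg(D, rng.random((20, 2)))
rel = np.linalg.norm(D - C_est @ S_est) / np.linalg.norm(D)
print(f"relative reconstruction error: {rel:.2e}")
```

Closure would be added by rescaling the rows of C to a constant sum after each update; unimodality and equality constraints similarly act as projections applied inside the loop.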
Muhammad Ali, thank you for the article recommendation.
Question
The physical postulates of general relativity are conveniently stated using the mathematical vocabulary of tensor analysis. Tensor analysis is rigorous and does not require visual aids but when the goal is to visualize, or get an "intuitive feel", we think of a surface contained within a Euclidean space. Is it possible that space-time literally is a surface in a Euclidean space?
Dear L. D. Edmonds. In principle the spacetime of general relativity is a 4D pseudo-Riemannian manifold that does not necessarily have to sit inside a 4D physical space or a 5D spacetime. However, for visualization purposes, manifolds of dimension n endowed with a non-Euclidean, non-Lorentzian metric (non-flat manifolds) are usually represented as embedded in a Euclidean space of dimension n+1. The question then arises whether the spacetime of general relativity is actually, physically, a 4D spacetime embedded in a 5D Euclidean space. By extension, extra-dimensional theories, such as Kaluza-Klein theory, M-theory, and others, could in fact be theories of one dimension more than the dimensions they consider: Kaluza-Klein theory would then be a 6D theory, with 5 spacetime dimensions, and M-theory would be 12D.
Question
If we have three different data domains (e.g. security, AI, and sport) and we performed 3 different case studies or experiments (one per domain), estimating the precision, recall, and F-measure for each, how can we estimate the overall precision, recall, and F-measure of the model? Is a simple mean average suitable, or should we use F1 or a p-value? Which one is better?
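One common answer (not from the question itself) is to distinguish macro-averaging, i.e. averaging the per-experiment scores, from micro-averaging, i.e. pooling the raw counts first. A sketch with made-up counts:

```python
# macro vs micro averaging of precision/recall/F1 across three
# hypothetical experiments (the counts below are made up)
experiments = [
    # (true positives, false positives, false negatives)
    (80, 10, 20),   # e.g. security
    (50, 25, 25),   # e.g. AI
    (90, 5, 5),     # e.g. sport
]

def prf(tp, fp, fn):
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    return p, r, 2 * p * r / (p + r)

# macro average: compute P/R/F1 per experiment, then average them
macro = [sum(m) / 3 for m in zip(*(prf(*e) for e in experiments))]

# micro average: pool the counts first, then compute once
tp, fp, fn = (sum(c) for c in zip(*experiments))
micro = prf(tp, fp, fn)

print("macro P/R/F1:", [round(x, 3) for x in macro])
print("micro P/R/F1:", [round(x, 3) for x in micro])
```

Macro weights each domain equally; micro weights each instance equally, so large experiments dominate. A p-value answers a different question (significance, not effect size).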
Question
I am currently trying to classify images using a HOG descriptor and a KNN classifier in C++.
The size of the feature vectors obtained with HOG depends on the dimensions of my images, and all of them have different dimensions!
How can I make the size of the feature vector independent of the image dimensions (without adapting the number of cells and blocks), or how can I use the KNN classifier with differently sized feature vectors?
You can normalize all the images to the same size and then apply the HOG descriptor.
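The suggestion above can be illustrated with a toy, library-free sketch (the question is in C++, but the idea is the same; the 64×64 target size, 8-pixel cells, and 9 orientation bins are illustrative, and the descriptor below is a heavily simplified stand-in for a real HOG with block normalization):

```python
import numpy as np

def resize_nn(img, size=(64, 64)):
    """Nearest-neighbour resize to a fixed size (stand-in for cv::resize)."""
    h, w = img.shape
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    return img[np.ix_(rows, cols)]

def hog_like(img, cell=8, nbins=9):
    """Per-cell histograms of gradient orientation (simplified HOG)."""
    img = resize_nn(img.astype(float))
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)        # unsigned gradients
    bins = np.minimum((ang / np.pi * nbins).astype(int), nbins - 1)
    feats = []
    for i in range(0, img.shape[0], cell):
        for j in range(0, img.shape[1], cell):
            feats.append(np.bincount(bins[i:i+cell, j:j+cell].ravel(),
                                     weights=mag[i:i+cell, j:j+cell].ravel(),
                                     minlength=nbins))
    f = np.concatenate(feats)
    return f / (np.linalg.norm(f) + 1e-12)

rng = np.random.default_rng(0)
imgs = [rng.integers(0, 256, (h, w)) for h, w in [(120, 80), (200, 300)]]
feats = [hog_like(im) for im in imgs]
print([f.size for f in feats])   # same length for both: (64/8)^2 * 9 = 576
```

With equal-length vectors, any standard KNN (e.g. brute-force Euclidean distance) applies directly.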
Question
The definition I have learned of the determinant of a matrix is a set of instructions for calculating its value. This definition can be put in equation form if we keep track of odd and even permutations of products of matrix elements, but that is a set of instructions appended to the equation. Is there an equation for the determinant of an n×n matrix that is self-contained, in the sense of not requiring an accompanying set of instructions?
We can calculate the determinant of a square matrix A using the traces of A and of its powers.
Examples:
n = 2: det A = (1/2)[(tr A)^2 - tr(A^2)]
n = 3: det A = (1/6)[(tr A)^3 - 3 (tr A) tr(A^2) + 2 tr(A^3)]
A general formula is available.
Best regards
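These identities (consequences of Newton's identities applied to the characteristic polynomial) can be checked numerically, for example:

```python
import numpy as np

rng = np.random.default_rng(1)

A = rng.random((2, 2))
t1, t2 = np.trace(A), np.trace(A @ A)
assert np.isclose(np.linalg.det(A), 0.5 * (t1**2 - t2))

B = rng.random((3, 3))
t1, t2, t3 = np.trace(B), np.trace(B @ B), np.trace(B @ B @ B)
assert np.isclose(np.linalg.det(B), (t1**3 - 3 * t1 * t2 + 2 * t3) / 6)

print("both trace formulas agree with numpy.linalg.det")
```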
Question
I need guidance on the analytical model and calculation for shape (cross-section) optimization of an Euler-Bernoulli cantilever column under buckling with a point load, possibly using variational methods and the Rayleigh quotient.
I found a similar case in Haftka, Raphael & Gurdal, Zafer & Kamat, Manohar, Elements of Structural Optimization, 2nd revised ed., but with different boundary conditions; I have attached scans of the procedure used in the text for reference.
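As a starting point (not the procedure from Haftka et al.), the Rayleigh quotient for the clamped-free column, P = integral(EI (v'')^2) / integral((v')^2), can be evaluated numerically for an admissible trial deflection; with the first-mode shape v = 1 - cos(pi x / 2L) and uniform EI it recovers the Euler load P_cr = pi^2 EI / (4 L^2). The uniform values below are illustrative; shape optimization would let I(x) vary under a volume constraint:

```python
import numpy as np

E, I, L = 1.0, 1.0, 1.0                    # illustrative uniform properties
x = np.linspace(0.0, L, 2001)
v = 1.0 - np.cos(np.pi * x / (2.0 * L))    # trial deflection (clamped-free)
vp = np.gradient(v, x)                     # v'
vpp = np.gradient(vp, x)                   # v''

# Rayleigh quotient: P = integral(EI v''^2 dx) / integral(v'^2 dx)
# (the grid spacing cancels, so plain sums suffice)
P = np.sum(E * I * vpp**2) / np.sum(vp**2)
print(P, np.pi**2 * E * I / (4 * L**2))    # estimate vs. exact Euler load
```

Because the trial function here is the exact buckling mode, the quotient matches the exact load up to finite-difference error; for any other admissible shape it gives an upper bound.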
Question
As we know, computational complexity of an algorithm is the amount of resources (time and memory) required to run it.
If I have an algorithm that implements mathematical equations, how can I estimate or calculate the computational complexity of these equations: the number of arithmetic operations performed and the amount of memory used?
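One practical sketch (an illustration, not a general method): write the equation as an expression and count its arithmetic operations by walking the syntax tree; the number of distinct variables gives a rough proxy for the memory needed. Here this is done with Python's standard ast module:

```python
import ast

def count_ops(expr_src):
    """Count arithmetic operations and collect the variables used
    in a Python-syntax expression string."""
    tree = ast.parse(expr_src, mode="eval")
    ops = sum(isinstance(node, (ast.BinOp, ast.UnaryOp))
              for node in ast.walk(tree))
    names = {node.id for node in ast.walk(tree)
             if isinstance(node, ast.Name)}
    return ops, names

ops, names = count_ops("x**2 + 2*x*y + y**2")
print(ops, sorted(names))   # 6 operations over variables x and y
```

For an algorithm with loops, multiply the per-iteration operation count by the iteration count to get the asymptotic cost, e.g. O(n) additions for a length-n sum.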
Question
for the approximate solution of different types of differential equations"
Question
Is there a way to transform a similarity matrix from a high-dimensional space to a low-dimensional one while preserving the same knowledge? For example, the attached matrix.
To reduce the dimensionality of the space you can use Tensorized Random Projections:
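As a simpler relative of the tensorized variant, a plain Gaussian random projection illustrates the idea: rows of the similarity matrix are mapped to a lower dimension while pairwise inner products are approximately preserved, per the Johnson-Lindenstrauss lemma. The matrix and target dimension below are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.random((100, 100))
S = (S + S.T) / 2.0                      # a made-up symmetric similarity matrix

k = 20                                   # target (lower) dimension
R = rng.normal(size=(S.shape[1], k)) / np.sqrt(k)
S_low = S @ R                            # each row now lives in R^k

print(S_low.shape)
```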
Question