Science topics: Mathematical Methods
Science topic
Mathematical Methods - Science topic
Explore the latest questions and answers in Mathematical Methods, and find Mathematical Methods experts.
Questions related to Mathematical Methods
We suppose,
i- The starting point is to replace the complex PDE for the complex wave function Ψ (the probability amplitude) with a real PDE for Ψ^2.
The PDE for Ψ^2, which is the probability density of finding the quantum particle in the space-time volume element dx dy dz dt, is assumed to be exactly equal to the energy density of the quantum particles.
ii- Consequently, the solution of the Bmatrix chain follows the same procedure for solving the heat diffusion equation explained above in the last question:
How to invert a 2D and 3D Laplacian matrix without using MATLAB iteration or any other conventional mathematical method?
iii-The third and final step is to take the square root of the solution for Ψ^2 as the solution for Ψ itself.
As simple as that!
In at least two of my papers, the name Eduardo Almaraz appears (on this website) instead of my name, Elena Almaraz Luengo. The papers are:
- Elena Almaraz Luengo; Antonio Gómez Corral. Number of infections suffered by a focal individual in a two-strain SIS-model with partial cross-immunity. Mathematical Methods in the Applied Sciences. 42 - 12, pp. 4318 - 4330. Wiley, 07/05/2019. https://doi.org/10.1002/mma.5652
- Elena Almaraz Luengo; Antonio Gómez Corral. On SIR-models with Markov-modulated events: length of an outbreak, total size of the epidemic and number of secondary infections. Discrete and Continuous Dynamical Systems Series B. 23 - 6, pp. 2153 - 2176. (Estados Unidos de América): AMER INST MATHEMATICAL SCIENCES-AIMS, 06/2018
How can this be corrected on this webpage?
Thank you in advance.
Dra. Elena Almaraz Luengo
Hello, Dear scientific community,
I want to delineate hydrothermal alteration zones using an RGB band combination on ASTER data. I have already consulted the literature on this topic and found that 4-6-8 is the most relevant band combination for alteration and lithology discrimination, but I want to know whether there is any mathematical method to calculate and select the most appropriate band combination.
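One commonly cited statistical criterion for this is the Optimum Index Factor (OIF), which ranks three-band subsets by total variance relative to inter-band correlation. A minimal sketch (the band arrays and their shapes are illustrative assumptions, not tied to any specific ASTER product):

```python
import numpy as np
from itertools import combinations

def optimum_index_factor(bands):
    """OIF for a 3-band subset: sum of the band standard deviations
    divided by the sum of the absolute pairwise correlations."""
    stds = [b.std() for b in bands]
    corrs = [abs(np.corrcoef(a.ravel(), b.ravel())[0, 1])
             for a, b in combinations(bands, 2)]
    return sum(stds) / sum(corrs)

def rank_combinations(band_stack):
    """Rank every 3-band combination of an (n_bands, H, W) stack by OIF."""
    n = band_stack.shape[0]
    scores = {combo: optimum_index_factor([band_stack[i] for i in combo])
              for combo in combinations(range(n), 3)}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

A higher OIF suggests a combination carrying more independent information; it is a statistical heuristic, not a guarantee of geological relevance, so it complements rather than replaces the literature-based choice.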
I am working on the detection of seismic P-waves, so I have been reading about different types of mathematical methods. Recently I was working on the AR-AIC method, but I found some inconsistencies between my results and those of the main article, so I am looking for another MATLAB code against which to evaluate my own code.
We assume that there is a complex story deeply hidden behind double and triple integration.
This is the reason why the current definition of double and triple integration is incomplete.
Brief,
Time is part of an inseparable block of space-time and therefore,
geometric space is the other part of the inseparable block of space-time.
In other words, you can perform integration using the x-t space-time unit with wide applicability and excellent speed and accuracy.
On the other hand, the classical mathematical methods of integration using the FDM technique in the geometric Cartesian space x alone can still be applied, but only in special cases, and their results can be expected to meet with only limited success.
As H.-J. Bunge defined, crystallographic orientation refers to how the crystallites in a volume of crystal (the crystal coordinate system) are positioned relative to a fixed reference (the specimen coordinate system). There is a trending pattern in the orientations that are present and a propensity for the occurrence of certain orientations. This crystallographic orientation of the crystallites within the polycrystalline aggregate is known as preferred orientation or, more concisely, texture [1].
If all possible orientations of the crystallites occur with equal frequency, the orientation dependence will disappear on average. As W. A. Dollase defined, an axially symmetric flat-plate sample can be treated as composed of effective rod- or disk-shaped crystallites. This shape effect is also referred to as preferred orientation [2].
[1] Bunge, H.J. Texture Analysis in Material Science. In Mathematical Methods; Butterworth & Co.: Oxford, UK, 1982; pp. 1–5.
[2] Dollase, W. A. Correction of Intensities for Preferred Orientation in Powder Diffractometry: Application of the March Model. J. Appl. Crystallogr. 1986, 19, 267.
Calculating the optical gap.
Please, is there a mathematical method to calculate the optical gap directly from the transmittance, without resorting to the graphical method (the slope)?
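One way to avoid the graphical slope extrapolation is to do the Tauc analysis numerically: compute the absorption coefficient from T and the film thickness, fit the linear portion of (αhν)^r against hν by least squares, and take the x-intercept. A sketch under simplifying assumptions (reflection neglected, direct-allowed transition r = 2, and a crude automatic choice of the fitting window):

```python
import numpy as np

def tauc_gap(wavelength_nm, transmittance, thickness_cm, r=2):
    """Estimate the optical gap (eV) numerically, replacing the graphical
    slope extrapolation with a least-squares fit of the Tauc relation.
    Assumes reflection losses are negligible and r = 2 (direct-allowed)."""
    E = 1239.84 / wavelength_nm                       # photon energy, eV (hc in eV*nm)
    alpha = -np.log(transmittance) / thickness_cm     # absorption coefficient, 1/cm
    y = (alpha * E) ** r                              # Tauc quantity (alpha*h*nu)^r
    mask = y > 0.7 * y.max()                          # crude pick of the steep edge
    slope, intercept = np.polyfit(E[mask], y[mask], 1)
    return -intercept / slope                         # x-intercept = optical gap
```

The "top 30% of y" window is an arbitrary stand-in for a proper linear-region selection, which in practice should be validated against the data.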
I came across this question (attached below). I tried to solve it but got stuck in the part where we need to calculate the inverse transform. I found a solution to this question (also attached below), but in the encircled portion I could not see how they took a 2x factor out and changed the limits from (-∞, +∞) to [0, +∞). I know that we can change the limits this way only if the integrand is an even function, and we can take the limits [0, +∞) only when the exponent is of the form y = ax². But here the term inside the exponential is of the form y = ax² + bx + icx, where i is the imaginary unit, so the lower limit should apparently change to some [m, +∞) in place of [0, +∞). Also, the 2x factor should then not appear, because the lower limit changes from 0 to m and the graph is not symmetric about the x = 0 axis.
I will be highly grateful if you can kindly clarify my doubt or let me know where I am making a mistake in understanding the question.
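For reference, the standard completing-the-square manipulation resolves exactly this doubt: the limits and the factor of 2 apply to the shifted variable, not to the original x. For Re a > 0 the shift by a complex constant is justified by Cauchy's theorem, since the integrand is entire and decays in the relevant strip:

```latex
\int_{-\infty}^{\infty} e^{-(ax^2+bx+icx)}\,dx
  = e^{\frac{(b+ic)^2}{4a}}\int_{-\infty}^{\infty}
      e^{-a\left(x+\frac{b+ic}{2a}\right)^{2}} dx
  = e^{\frac{(b+ic)^2}{4a}}\int_{-\infty}^{\infty} e^{-au^{2}}\,du
  = 2\,e^{\frac{(b+ic)^2}{4a}}\int_{0}^{\infty} e^{-au^{2}}\,du
  = e^{\frac{(b+ic)^2}{4a}}\sqrt{\frac{\pi}{a}} .
```

After the substitution u = x + (b+ic)/(2a), the integrand e^{-au²} is even in u, so halving the range to [0, +∞) and doubling is legitimate; no limit of the form [m, +∞) ever arises in the shifted variable.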


There are many methods based on stochastic arithmetic, and also on floating-point arithmetic. What are the advantages and disadvantages of the mathematical methods based on stochastic arithmetic?
Scientists and engineers solve physical and mathematical models containing tens and hundreds of SI variables, taking into account an even larger number of potential interaction effects between them, using super-powerful computers and advanced mathematical methods. But this requires significant time and financial resources. In addition, with the current rapid development of science, the recognition of truly revolutionary discoveries is complicated by the difficulty of identifying them (the idea of the incommensurability of scientific theories): supporters of each paradigm see the world in their own way because of their scientific training and previous experience. They use different conceptual frameworks and different ideas about scientific standards.
Is there a limit to our knowledge?
Hi all,
I have attached an image of Excel data in which the problem is to predict the value of u using any mathematical method. Please share the procedure or method to be followed to solve the problem. Thank you in advance.
Most MCDM problems use subjective weights mainly obtained from AHP.
These weights are developed from preferences, quantified using a dubious table (in the opinion of many researchers) that has been criticized for decades.
They are also obtained under the aggravating circumstance that the DM works only with the criteria, without considering the different projects, WHICH THEY MUST EVALUATE, according to the AHP hierarchy.
That is, in comparing, say, environment and disposable income, the DM decides by intuition that the first is more important than the second, and assigns a numerical value to that preference using values from the above-mentioned table.
Then this assumed weight, for it is not a weight but a trade-off value, is used to select alternatives. On what grounds? None are given.
Consequently, the DM decides that said preference is valid for everything, since he has no reference other than a single objective for which this preference will be used.
Not much thinking is needed to conclude that this process is invalid, because a preference may apply well to a certain project but not to others, or may even differ when comparing alternatives within a scenario.
Now the DM, using mathematical methods (the eigenvalue method or the geometric mean), determines priorities for each criterion, after his estimates are subjected to the verdict of a formula, because the method demands that his estimates MUST be consistent within a 10% tolerance, that is, that they satisfy transitivity.
That is, if criterion A is 3 times more important than criterion B (A = 3B), and B = 2C, then A = 6C.
Now, one wonders: why must his estimates be transitive?
Nobody knows. But what is worse is that the AHP method assumes that said transitivity MUST also be satisfied by the real-world problem, ignoring that, in general, the world is intransitive.
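For readers unfamiliar with the mechanics under discussion, here is a minimal sketch of the eigenvalue method and the 10% consistency check, applied to a perfectly transitive matrix built from A = 3B, B = 2C. The RI values are Saaty's published random indices; nothing here endorses the method, it only shows what it computes:

```python
import numpy as np

# Saaty's Random Index values for matrix sizes n = 1..10
RI = [0.0, 0.0, 0.58, 0.90, 1.12, 1.24, 1.32, 1.41, 1.45, 1.49]

def ahp_priorities(M):
    """Principal-eigenvector priorities and consistency ratio (CR)
    for a positive reciprocal pairwise-comparison matrix M."""
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    eigvals, eigvecs = np.linalg.eig(M)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                          # normalized priority vector
    lam = eigvals[k].real                 # principal eigenvalue
    ci = (lam - n) / (n - 1)              # consistency index
    cr = ci / RI[n - 1] if RI[n - 1] else 0.0
    return w, cr                          # CR <= 0.10 is deemed "acceptable"

# Perfectly transitive example: A = 3B, B = 2C, hence A = 6C
M = [[1, 3, 6],
     [1/3, 1, 2],
     [1/6, 1/2, 1]]
w, cr = ahp_priorities(M)
```

For this consistent matrix CR = 0 and the priorities are exactly (6, 2, 1)/9; real elicited matrices rarely are consistent, which is precisely the point being debated above.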
And now the question:
On what ground, on what theory, on what relationship is it assumed that those preferences from a DM, or from a group of DMs, are valid for the real world?
As a matter of fact, a very well-known theorem, the ‘Arrow’s Impossibility Theorem’, says the opposite.
I have posted this question several times. Is there anybody who can give a RATIONAL answer?
Your response and discussion will be greatly appreciated.
Thank you
Nolberto Munier
What mathematical methods are helpful for correlating laboratory values with production results in extrusion?
Gaussian noise and speckle noise are examples of linear noise, whereas salt-and-pepper noise and uniform impulse noise are nonlinear noise. Gaussian noise can be expressed in terms of its mean and variance, while speckle noise can be modeled by random values multiplied by the pixel values. In the nonlinear group, salt-and-pepper noise corrupts the image's pixels randomly and sparsely, changing some pixels to bright or dark. Uniform impulse noise is characterized by replacing a portion of the image pixel values with random values. The switching bilateral filter (SBF) is one of the most effective filters proposed for mixed-noise removal in images. The performance of the SBF is controlled by two parameters, the radiometric weight (sR) and the spatial weight (sS). The appropriate values of these parameters depend on three important factors: the level of noise, the type of noise, and the type of image. Generally, most noise removal techniques can perform noise reduction only when the image is contaminated with a single type of noise [6]. Nevertheless, images may now be corrupted by more than one type of noise, and the earlier noise removal techniques cannot remove such mixed noise efficiently.
Papers:
Y. He, Y. Zheng, Y. Zhao, Y. Ren, J. Lian and J. Gee, "Retinal Image Denoising via Bilateral Filter with a Spatial Kernel of Optimally Oriented Line Spread Function", Computational and Mathematical Methods in Medicine, Vol. 2017, Feb. 2017.
K. Langampol, W. Lee, and V. Patanavijit, "The switching bilateral denoising performance influence of spatial and radiometric variance", ECTI-CON 2016, Jul. 2016.
M. K. Raijada, D. Patel and P. Prajapati, "A review paper on image quality assessment metrics", JETIR, Vol. 2, Jan. 2015.
W. Phummara, K. Langampol, W. Lee, and V. Patanavijit, "An optimal performance investigation for bilateral filter under four different image types", SITIS-2015, pp. 34-41, 2015.
I am trying to design a caching strategy for ICN-based IoT that will use content popularity to decide what to cache. Could someone kindly tell me a good way to measure popularity? A link to a research article or any reference to a mathematical method would be highly appreciated.
Thanks in advance.
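In the absence of a specific reference, one simple and widely used measure is the request frequency of each content name over a sliding window (request streams in ICN/IoT caching studies are often modeled as Zipf-distributed). A sketch with an illustrative window size:

```python
from collections import Counter, deque

class PopularityEstimator:
    """Sliding-window request-frequency estimate of content popularity.
    The window size is an illustrative tuning choice."""

    def __init__(self, window=1000):
        self.window = deque(maxlen=window)   # most recent request names
        self.counts = Counter()

    def record(self, name):
        # decrement the count of the request that is about to fall out
        if len(self.window) == self.window.maxlen:
            old = self.window[0]
            self.counts[old] -= 1
            if self.counts[old] == 0:
                del self.counts[old]
        self.window.append(name)
        self.counts[name] += 1

    def popularity(self, name):
        """Fraction of recent requests asking for this content name."""
        return self.counts[name] / max(len(self.window), 1)
```

A cache policy can then keep the items whose popularity exceeds a threshold, or evict the least popular one, which is essentially windowed LFU.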
In the author's interpretation, we consider concepts and methods of science such as science, knowledge, model, gnosticism and agnosticism, Ashby's principle, facts, empirical regularity, empirical law, scientific law, and others. We have formulated the main problem of science, concluding that the cognitive abilities of a human are limited and do not provide effective knowledge from very large volumes of data. The solution to this problem is to look for ways of automating scientific research. Traditionally, information-measuring systems and automated systems for scientific research (ASNI) are used for this. However, the mathematical methods used in these systems impose strict, impracticable requirements on the source data, which dramatically reduces the effectiveness and applicability of these systems in practice. Instead of subjecting the source data to impracticable requirements (such as normality of the distribution, absolute accuracy, complete replication of all combinations of factor values, and their full independence and additivity), automated system-cognitive analysis (ASC-analysis) offers (without any pre-processing) to make sense of the data and thereby convert it into information, and then to convert this information into knowledge by applying it to achieve targets (i.e., for control) and to solve problems of classification, decision support, and meaningful empirical research of the modeled subject area. ASC-analysis is a systematic analysis considered as a method of scientific cognition. It is a highly automated method of scientific cognition that has its own developed and constantly improving software tool, an intellectual system called "Eidos". The "Eidos" system has been developed in a generic setting, independent of any domain, and can be applied in all subject areas in which people apply their natural intelligence.
The "Eidos" system is a tool of cognition which greatly extends the possibilities of natural intelligence, just as microscopes and telescopes multiply the possibilities of vision (but only where that underlying capability exists). The study proposes a new view of models: the phenomenological meaningful model, currently represented only by system-cognitive models, which sits between empirical and theoretical knowledge. The "Eidos" system is considered as a tool for automating the process of cognition, providing meaningful synthesis of phenomenological models directly on the basis of empirical data.
Scope
Biomedical imaging has emerged as a major technological platform for disease diagnosis, clinical research, drug development and other related domains because it is non-invasive and produces multi-dimensional data. An abundance of imaging data is collected, but this wealth of information has not been utilized to its full extent. Therefore, there is a need for biomedical image analysis techniques that are accurate, reproducible and generalizable to large-scale datasets. Identification of imaging patterns in an anatomical site or an organ at the macroscopic or microscopic scale may guide the characterization of abnormalities and estimation of disease risk, the analysis of large-scale clinical datasets, and the assessment of intervention therapy techniques.
This special track solicits original, good-quality papers on delineating, identifying and characterizing novel biomedical imaging patterns. These methods may aim at segmentation, identification and quantification of anatomies, characterization of their properties, and classification of disease. The approaches may be applied to radiological imaging such as CT, MRI and ultrasound, or to imaging at the cellular scale such as microscopy and digital pathology techniques.
Topics
Topics of interest include, but are not limited to, the following areas:
Machine/Deep Learning for Computer-aided Diagnosis and Prognosis
Biomedical Image Segmentation and Registration
Radiomics, Immunotherapies, Digital Pathology
Cell Segmentation and Tracking
- Let us say there are factors X1, X2, X3, X4, ..., Xn, each factor depends on i (i = 1, 2, 3, 4, ...) other factors, and there are N = 25 factors of manufacturing performance.
If, let us say,
- X1 = f(X2, X3, X4, X5, ..., Xn) (i = n)
- X2 = f(X1, X3, X4, X5, ..., Xn) (i = n)
- X3 = f(X1, X2, X5, X6, ..., Xn) (i = 14)
- and so on.
- I want to know the level of dependence of each factor on every other factor, e.g., what the dependence of factor X1 on factor X2 is, and how large the dependence of X2 on X1 is. The level of dependence of X1 on X2 and that of X2 on X1 are different. How can such a model be analyzed using statistical methods? Please refer to the figure for further clarity.
- Please suggest some statistical/mathematical method or technique. This problem concerns the factors influencing manufacturing performance, but these factors also influence each other to a large extent, so basically if one parameter changes, it automatically affects all the other performance parameters.
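One established technique for exactly this asymmetric-influence setting is DEMATEL, which turns a matrix of expert-rated direct influences into a total-relation matrix capturing the direct plus indirect dependence of each factor on every other. A sketch (the 0-4 rating scale and the small example matrix are illustrative):

```python
import numpy as np

def dematel(D):
    """DEMATEL total-relation matrix from a direct-influence matrix D,
    where D[i, j] is the expert-rated direct influence of factor i on
    factor j (e.g. on a 0-4 scale). T[i, j] is the total (direct plus
    indirect) influence of i on j, which is generally asymmetric."""
    D = np.asarray(D, dtype=float)
    s = max(D.sum(axis=1).max(), D.sum(axis=0).max())
    N = D / s                                    # normalized matrix
    T = N @ np.linalg.inv(np.eye(len(D)) - N)    # N + N^2 + N^3 + ...
    prominence = T.sum(axis=1) + T.sum(axis=0)   # overall involvement
    relation = T.sum(axis=1) - T.sum(axis=0)     # net cause (+) / effect (-)
    return T, prominence, relation
```

Here T[i, j] and T[j, i] quantify the two different dependence levels asked about (X1 on X2 versus X2 on X1), while `prominence` and `relation` give the usual cause-effect summaries of each factor.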

What is a suitable method to forecast solar irradiance? What are the advantages of using the ANN method compared with other statistical methods?
Mathematical and computational methods are expected to play a major role in nanoscience and nanoengineering. Mathematics and computation can provide effective theory and simulations for analysis and interpretation of experimental results, model-based prediction of nanoscale phenomena, and design and control of nanoscale systems.
When talking about transmittance, scholars mostly use the transmittance at 550 nm to represent the sample's transmittance. Why is that? In UV-visible absorption spectroscopy, is it scientific and reasonable to choose the transmittance at 550 nm as representative? Or is it necessary to calculate the overall transmittance of the sample by integration or other mathematical methods?
Thanks for your answers. Are there any suggestions for relevant literature?
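If a single number is wanted for the whole visible range rather than T at 550 nm, one common choice is the mean transmittance obtained by integrating T(λ) over 380-780 nm (a luminous transmittance would additionally weight by the photopic response V(λ)). A sketch:

```python
import numpy as np

def average_transmittance(wavelength_nm, T, lo=380.0, hi=780.0):
    """Mean transmittance over [lo, hi] nm by trapezoidal integration,
    as an alternative to quoting T at 550 nm alone."""
    m = (wavelength_nm >= lo) & (wavelength_nm <= hi)
    wl, t = wavelength_nm[m], T[m]
    area = np.sum((t[1:] + t[:-1]) / 2.0 * np.diff(wl))  # trapezoid rule
    return area / (wl[-1] - wl[0])
```

The 550 nm convention persists because it sits near the peak of the eye's photopic sensitivity, so the single-wavelength value and the integrated value often agree for spectrally flat samples; for strongly colored samples they can differ substantially.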
If other mathematical methods are used to solve the resulting equations of zeroth order, first order, second order, ..., especially for providing solutions to PDEs, can we then regard the homotopy perturbation, regular perturbation and singular perturbation methods as stand-alone approximate analytical methods?
I am working on the Ouvroir de littérature potentielle (Oulipo), the French literary group that includes Queneau, Perec, Calvino, etc. My goal is to find a contemporary production that draws on their mathematical method.
Any idea about equations and mathematical methods that can be derived to develop predictive models that will allow scaling up results from membrane bench scale experiments to large scale applications?
Many thanks
Can anyone suggest a book or paper on detecting developing faults using IEC 61850-9-2 based current and voltage values?
The main aim is to evaluate the kinematic disorders in multiple sclerosis patients by using an intelligent mathematical method.
My question here is to find out whether there is any paper discussing Borel summability for divergent power series of non-analytic but smooth (real-valued) functions. I assume that the divergent power series converges only at the origin x = 0, so that one obtains a formal power series in that case.
Most engineers work with nonlinear systems rather than linear systems, but the mathematical methods for nonlinear systems are more difficult.
We are interested in applying mathematical methods from optimization, machine learning and experiment design to enable scientific discovery and new industrial applications of SEM/TEM and related imaging techniques.
To measure dehydrogenase activity, the amount of formazan (TPF, mg) formed per unit weight of soil over a certain time is determined.
In different studies, data are given for 1 minute, 1 hour, or 24 hours, and the soil weights also differ (1 g or 10 g). Is it possible to bring these results into a single form for comparison, by converting one of them or using some mathematical method?
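If the reaction can be assumed to be roughly linear over the incubation time (a real simplification that should be checked against the kinetics), the reported values can be rescaled to a common basis such as mg TPF per 1 g of soil per 24 h:

```python
def to_common_units(tpf_mg, soil_g, hours):
    """Convert a reported dehydrogenase activity (mg TPF formed by
    `soil_g` grams of soil in `hours` hours) to the common basis
    mg TPF per 1 g soil per 24 h, assuming the formation rate is
    roughly constant over the incubation time."""
    return tpf_mg / soil_g * (24.0 / hours)

# e.g. 0.5 mg TPF from 10 g soil in 1 h corresponds to
# 1.2 mg TPF per g per 24 h on this linear assumption
```

Where the linearity assumption fails (substrate depletion over 24 h, lag phases over 1 minute), the short- and long-incubation studies are not strictly comparable and the conversion should be treated as an order-of-magnitude estimate only.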
Hello everyone! We're currently working on a research about improving pharmaceutical supply chains using lean tools. In this work, we've encountered a problem that should be solved before we proceed. The problem/question is: is there any (mathematical) method or any other tools to determine the relationship between variable lead time and waste reduction (or the effect of variable lead time on waste reduction)? How to find one? Any advice and thoughts would be appreciated!
Partial discharge in gas has some symptoms such as chemical reactions and sonic and electromagnetic emissions. Which numerical software is appropriate for these kinds of simulations and can address all of the physical effects due to partial discharge?
There are different methods to determine the mean river width in sections. As you know, river channels are not of a unique shape, and I cannot calculate the width with the mathematical methods we usually use for regular shapes. I can calculate the area covered by the channel (A) and the length of the flow (L) in the study section; can I use the simple equation w = A/L?
Regards.
Hello, I am looking for a mathematical method to classify the range of some indicators. I mean, when I have some quantitative indicators, how can I determine whether they are low, moderate, or high? Can you help me?
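One simple, purely data-driven convention is to cut the sample at its terciles and label the thirds low/moderate/high; where domain-specific thresholds exist, they should be preferred over sample quantiles. A sketch:

```python
import numpy as np

def classify_low_mod_high(values, low_q=1/3, high_q=2/3):
    """Label each indicator value low/moderate/high by its position
    relative to the sample's tercile cut points. This is one simple
    convention; the cut points can be replaced by expert thresholds."""
    values = np.asarray(values, dtype=float)
    lo, hi = np.quantile(values, [low_q, high_q])
    return ["low" if v <= lo else "high" if v > hi else "moderate"
            for v in values]
```

A standardization-based alternative (e.g. z-scores with cutoffs at ±1) gives similar labels when the data are roughly symmetric, but quantile cuts are more robust to skewed indicators.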
Let A generate a strongly continuous semigroup T(t) on X, let B from U to X_{-1} be an admissible control operator for T(t), let C from X_1 to Y be an admissible observation operator for T(t), and let K from X to U be a linear operator. Consider the state feedback control law
u(t)=Kx(t)+v(t).
Under what conditions on K is the closed-loop system well posed?
What is the best mathematical method to smooth 2D positional data (i.e., [x, y]) of soccer players and calculate their acceleration from it?
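A common choice in tracking work is a Savitzky-Golay filter, which smooths and differentiates in a single pass. A sketch using SciPy (the window length and polynomial order are tuning choices, not recommendations; they should be validated against the noise level of the tracking system):

```python
import numpy as np
from scipy.signal import savgol_filter

def acceleration_from_xy(xy, fs, window_s=0.5, polyorder=3):
    """Smooth 2D position data (n_samples x 2) and differentiate it twice
    with a Savitzky-Golay filter; fs is the sampling rate in Hz.
    window_s and polyorder are illustrative tuning choices."""
    dt = 1.0 / fs
    win = int(round(window_s * fs)) | 1          # window length must be odd
    ax = savgol_filter(xy[:, 0], win, polyorder, deriv=2, delta=dt)
    ay = savgol_filter(xy[:, 1], win, polyorder, deriv=2, delta=dt)
    return np.hypot(ax, ay)                      # acceleration magnitude
```

Compared with finite differences followed by separate smoothing, the polynomial fit inside each window suppresses the noise amplification that double differentiation otherwise causes.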
I am new to data analysis and am trying to identify and delete outliers from the original data when doing a T-squared analysis.
I have 150 data points in a vector (150 rows and 1 column) which contain 10 outliers.
I would like to remove the outliers from these data in order to prepare them
for ANN training.
I analyzed the data with the T-squared method, and the plot clearly shows the outliers in the normalized data,
but I would like a way to mathematically delete these detected outliers from the original data (not the normalized data).
Any suggestions or ideas to help me do this are greatly appreciated.
Thanks in advance for your help.
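For a single variable, Hotelling's T² reduces to the squared standardized distance, so detection on the normalized data maps one-to-one back to the original rows: flag the indices there and drop the same indices from the raw vector. A sketch (the cutoff T² = 9, i.e. |z| > 3, is an illustrative choice):

```python
import numpy as np

def remove_outliers_tsq(x, t2_cut=9.0):
    """Univariate Hotelling T-squared screening. For one variable, T^2
    reduces to ((x - mean) / std)^2. Points above the cutoff (default 9,
    i.e. |z| > 3) are dropped from the ORIGINAL data, not from a
    normalized copy; only the flagged indices come from normalization."""
    x = np.asarray(x, dtype=float).ravel()
    t2 = ((x - x.mean()) / x.std(ddof=1)) ** 2
    keep = t2 <= t2_cut
    return x[keep], np.where(~keep)[0]     # cleaned data, outlier indices
```

Because extreme outliers inflate the mean and standard deviation, an iterative pass (or robust estimators such as the median and MAD) may be needed when the outliers are numerous.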
Dear Researchers
I am solving a set of non-linear equations both numerically and analytically. The numerical method is based on the 4th-order Runge-Kutta scheme. Under some conditions (chaos and bifurcations), the analytical method yields multiple solutions; however, the numerical method converges to only one of them. What is the way to steer the numerical method so as to extract all possible solutions?
I appreciate your time and look forward to your answers.
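When coexisting solutions (multiple attractors) are expected, the usual numerical remedy is a multi-start sweep: integrate the same RK4 scheme from many initial conditions and collect the distinct final states, since any single run can only land in one basin of attraction. A sketch on a scalar bistable example, dx/dt = x - x^3, whose stable equilibria are x = -1 and x = +1:

```python
import numpy as np

def rk4_step(f, x, dt):
    """One classical 4th-order Runge-Kutta step for dx/dt = f(x)."""
    k1 = f(x); k2 = f(x + dt / 2 * k1)
    k3 = f(x + dt / 2 * k2); k4 = f(x + dt * k3)
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def find_attractors(f, x0_grid, dt=0.01, steps=2000, tol=1e-3):
    """Multi-start strategy: integrate from a grid of initial conditions
    and keep the distinct states the trajectories settle to."""
    finals = []
    for x0 in x0_grid:
        x = float(x0)
        for _ in range(steps):
            x = rk4_step(f, x, dt)
        if not any(abs(x - a) < tol for a in finals):
            finals.append(x)
    return sorted(finals)
```

For chaotic regimes the "final state" is a set rather than a point, so the clustering step would compare trajectory statistics (e.g. Poincaré sections) instead of end values; continuation methods are the complementary tool for tracking solution branches through bifurcations.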
You need to read definitions 1-9 first, and then decide whether or not Lemma 1 is correct. I think I have found a counterexample, with five objects and two attributes, but maybe I got the definitions wrong.
The example is:
o1(0, 0) in class 1
o2(0, 0) in class 2
o3(0, 1) in class 3
o4(0, 1) in class 4
o5(1, 0) in class 5
In that case, the sets of reducts with the usual decision and with the generalized decision (using definition 8, i.e., REDpos) are not the same, as far as I can tell, thus contradicting Lemma 1.
During the last quarter of the twentieth century, the electromagnetic force binding atoms and molecules and the weak nuclear force governing the decay of radioactive matter were merged into a single theory asserting them to be different manifestations of one and the same force: the "electroweak" force. Crucial parts of this theory have been confirmed at the world's most powerful accelerators at CERN and Fermilab, and concerted efforts are now under way to extend this unified theory to include the strong nuclear force that binds elementary particles within nuclei. Furthermore, although scientists are unsure at this time how, in turn, to incorporate the fourth known force (gravity) into this comprehensive theory, we have reason to suspect that Einstein's dream is nearing: to understand all the forces of Nature as different aspects of a single, fundamental force.
In this article, following the experimental observations, the Maxwell equations of electromagnetism are generalized to the gravitational field.
The strong interaction or strong force is today understood to represent the interactions between quarks and gluons as detailed by the theory of quantum chromodynamics (QCD). The strong force is the fundamental force mediated by gluons, acting upon quarks, antiquarks, and the gluons themselves.
This article shows how like-charged particles absorb each other at very small distances. Generally, two like-charged particles produce binding energy at small distances. This view is based on CPH theory and is a continuation of the work on gravitons and virtual photons.
Article Graviton and virtual photons
I am modelling an organic FET. By making some empirical adjustments to the conventional Si MOSFET current equations, I got a good fit in the linear regime of the output characteristics, but when it goes into the saturation regime, the slope of the curve becomes discontinuous. I am looking for a mathematical method to solve this problem; your suggestions would be appreciated. Thanks in advance.
I am new to this and do not know exactly what to do, other than rheometer determination.
Hello everyone,
I need fast mathematical methods to calculate (or estimate) eigenvalues/eigenvectors of large-scale matrices (just above 100,000 dimensions). Could you please point me to some state-of-the-art references on this issue?
I want to thank you in advance for your kind support.
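For symmetric/Hermitian matrices of that size, the standard approach is a Krylov-subspace method (Lanczos, or LOBPCG) applied to a sparse representation; SciPy's `eigsh` wraps the ARPACK implementation. A sketch on a 100,000-dimensional sparse matrix (the tridiagonal Laplacian is an illustrative stand-in for your matrix):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Lanczos-type methods compute a few extreme eigenpairs without ever
# forming a dense 100000 x 100000 array, which would need ~80 GB.
n = 100_000
# illustrative example: 1-D discrete Laplacian, stored sparsely
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")

# six eigenvalues nearest 0 (here, the six smallest); shift-invert
# (sigma=0) greatly accelerates convergence for clustered eigenvalues
vals, vecs = eigsh(A, k=6, sigma=0, which="LM")
```

Shift-invert requires a sparse factorization of A - sigma*I, which is cheap for banded matrices but can be expensive for general sparsity patterns; in that case plain `which="SA"`/`"LA"` without `sigma`, or LOBPCG with a preconditioner, is the usual fallback. If the matrix is dense and unstructured, randomized SVD methods are the main alternative.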
I am interested in working on shock wave phenomena in supersonic flows.
Is there any mathematical method by which a shock wave can be detected in such flows?
I want to interpolate the velocity field in a seismic fault zone. Which physical/crustal model could I combine it with, apart from mathematical methods like kriging?
Ratios, in addition to frontier analysis, are among the statistical/mathematical methods used for measuring efficiency.
In many isolated communities there may be plenty of choice but limited availability of energy resources. If you want to plan the rational use of these resources in a possible energization of the community, it is necessary to do so through mathematical methods. There are many mathematical methods applicable to the isolated rural context.
I have some morphometric measurements (carapace width, carapace length, weight, etc.) of about 1100 adult crabs. I would like to know whether there is any method to calculate the age structure of these crabs. Any mathematical methods? Do you have any publications on this topic?
I have modeled the OFDM technique, so I can involve the appropriate advanced mathematical methods at any stage to get the best detection.
I am looking for the latest advanced mathematical methods to use in the OFDM system, especially in the detection process.
I have modeled the OFDM system so that it can take in data at any stage I want.
Is there an advanced mathematical model to serve this technology?
Just as we make use of the Gaussian function in infinite dimensions.
The fundamental dynamical equations (Newtonian, Lagrangian and Hamiltonian mechanics, Maxwell's equations, the Schrödinger equation) are energy conservation equations and correspond to the first law of thermodynamics. Time-reversal invariance is the basic symmetry of these equations, and it is usually considered a criterion for reversibility: T-symmetry means reversible.
However, the first law of thermodynamics is also time-reversal invariant, and irreversibility is a conclusion of the second law, not of the first law, such that T-symmetry of the first law cannot serve as a criterion for the irreversibility of the second law.
Similar to that we cannot derive irreversibility from the first law of thermodynamics, in my opinion, “the fundamental structure of theoretical physics might be divided into the two main lines according to different laws, one being the first law physics, the other one being the second law physics”[1]. The conservation equations (CPT-symmetry) correspond to the first law physics, and irreversibility corresponds to the second law physics. T-symmetry of the conservation equations (the first law physics) cannot be considered as a criterion for irreversibility of the second law (physics). It is the symmetry of the equations, we are not sure if it is also the symmetry of the dynamic phenomena themselves.
Then how do we prove the conclusion that the fundamental dynamic processes are reversible?