# Mathematical Modelling

When validating a mathematical model in system identification, what are the best error indices for comparing the model against data?
For example FIT, VAF, MSE, RMSE: which is the best?
Jose A. Ramos · Nova Southeastern University

Ernesto:

In my experience, FIT, VAF, RMSE/MSE in that order. MSE is problematic in the sense that two models with very different performance can give equal MSE.
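For reference, a minimal sketch of how these indices are commonly computed in system identification (the FIT below is the normalized-root-mean-square fit, as used e.g. by MATLAB's System Identification Toolbox; the toy data are illustrative only):

```python
import numpy as np

def fit_percent(y, yhat):
    # FIT (%): 100 * (1 - ||y - yhat|| / ||y - mean(y)||)
    return 100.0 * (1.0 - np.linalg.norm(y - yhat) / np.linalg.norm(y - np.mean(y)))

def vaf_percent(y, yhat):
    # VAF (%): 100 * (1 - var(y - yhat) / var(y))
    return 100.0 * (1.0 - np.var(y - yhat) / np.var(y))

def rmse(y, yhat):
    # root-mean-square error, in the units of y
    return np.sqrt(np.mean((y - yhat) ** 2))

# toy example: a model output that tracks the data closely
t = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * t)
yhat = y + 0.05 * np.cos(7 * t)   # small structured model error
print(fit_percent(y, yhat), vaf_percent(y, yhat), rmse(y, yhat))
```

Note that VAF discards the mean of the error while FIT penalizes it, which is one reason the two can rank models differently.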

• What is the difference between convex and non-convex optimization problems?
How do we know whether a function is convex or not?
What are the different commands used in MATLAB to solve these types of problems?
António Manuel Abreu Freire Diogo · University of Coimbra

I agree partially with Luís M. C. Simões. I suppose he is speaking about classic optimization techniques such as Linear Programming and Nonlinear Programming.

What are the real world applications of Nondeterministic Finite Automata (NFA)?

Can anyone help me by giving some real-life examples (other than compiler design) where the idea of an NFA can be implemented? If I can find the shortest string accepted by any NFA, what kind of real-life problems can I solve with it?

Ellis D. Cooper · Endicott College

I have invented an extremely simple language called "timing machines" which includes finite automata, stochastic automata, and randomly timed automata as special cases. My forthcoming book (Cognocity Press, 2014, to appear on Amazon) provides examples from biology (ion channels), chemistry (stochastic, à la D. T. Gillespie), physics (stochastic Petri nets), mathematics, and computer science (including a model of Node.js asynchronous programming with event loop and callbacks).

Has anyone already applied a physiologically based pharmacokinetic two-compartment model for a single oral dose or i.p. injection of pristine SWCNT?
I successfully implemented a two-compartment (gut-blood) mathematical model for a single oral dose (20 mg/kg) to simulate the SWCNT distribution in the blood, using the intestinal absorption constant (Ka = 0.033 min^-1 = 1.98 h^-1), the volume of distribution (Vd = D/C0, experimentally determined), and the volume of the gut compartment (V1 = 82.5 ml/kg). The methodology might be further applied to build a two-compartment (peritoneum-blood) PBPK model for a single i.p. drug injection, if the peritoneum-blood permeation constant is established.
Lawrence Margulies · University of Guelph

I haven't.
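For anyone attempting this, the gut-blood model described in the question can be sketched as a pair of first-order ODEs. The dose and Ka below come from the question; the elimination constant `ke` and `Vd` are placeholder assumptions for illustration, not values from any study:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-compartment (gut -> blood) model for a single oral dose.
D  = 20.0    # dose, mg/kg (from the question)
ka = 1.98    # intestinal absorption rate constant, h^-1 (= 0.033 min^-1)
ke = 0.30    # first-order elimination rate constant, h^-1 (ASSUMED)
Vd = 500.0   # volume of distribution, ml/kg (ASSUMED; should be D/C0 from data)

def rhs(t, y):
    A_gut, A_blood = y
    return [-ka * A_gut,                 # first-order absorption out of the gut
            ka * A_gut - ke * A_blood]   # into blood, minus first-order elimination

sol = solve_ivp(rhs, (0.0, 24.0), [D, 0.0], dense_output=True,
                rtol=1e-8, atol=1e-10)
t = np.linspace(0.0, 24.0, 200)
C = sol.sol(t)[1] / Vd * 1000.0          # blood concentration, ug/ml

# Bateman equation: closed-form solution of the same two-compartment model
C_exact = (D * ka / (Vd * (ka - ke))) * (np.exp(-ke * t) - np.exp(-ka * t)) * 1000.0
```

The closed-form Bateman curve is a useful cross-check on the numerical solution before extending the model to the peritoneum-blood case.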

How can data envelopment analysis (DEA) theory be modeled in nonlinear instead of linear one?
Is it possible to develop a model of DEA in nonlinear functions instead of linear?
Gary Paul Martin Simpson · Aston University

Variable Returns to Scale (VRS) models already cope with nonlinear relationships between the inputs and outputs. Also, the variables used as inputs or outputs could already be transformed (for example, the log of population rather than population). What sort of non-linearity are you thinking of?

How to segment a medical image using Markov Random field?
Gayathri S P · Gandhigram Rural Institute

What are the limits of measurement in science?
When I was in high school, Bohr's atom of shells and s and p orbitals was introduced in chemistry. The realization was automatic that the world was explained according to theory verified by experiment. Through college and graduate school, looking for a more complete explanation, theory is challenged, but it is never brought to the question "what is an electron or proton, if they have mass but are visible only in the sense that they emit light energy as photons that also have mass; spots of light in orbit around nuclei, the atom a solar system in miniature?" Physicists will say this is not the picture they have evolved, but all that remains is the image of equations on a chalkboard, at best 'the image of things of a particle nature in alternation with things of a light nature'. Can a pieced-together, stepwise reality of this nature be accepted? In the Feynman quote below, pieces are added that can break any of the established laws, since "they are not directly observable" or do not affect "causality". In this same sense, though, neither electrons, protons, photons nor atoms are directly observable, and their causal effects are but a matter of humanly constructed theory and similarly based experimental apparatus. The possibility exists that theory and theory-based apparatus entail one another, and all that might be gotten is that the real universe is identical in this respect, i.e. existence entails the experienced universe and vice versa.
"You found out in the last lecture that light doesn't go only in straight lines; now, you find out that it doesn't go only at the speed of light! It may surprise you that there is an amplitude for a photon to go at speeds faster or slower than the conventional speed, c." "These virtual photons, however, do not violate causality or special relativity, as they are not directly observable and information cannot be transmitted causally in the theory." (from "Varying c in quantum theory", http://en.wikipedia.org/wiki/Variable_speed_of_light)
Paul Hubert Vossen · Duale Hochschule Baden-Württemberg Mannheim

A long time ago, out of pure necessity, psychologists started to worry about measurement in psychology (and closely related disciplines). They have built up an amazing stock of knowledge about, and models of, measurement, which may equally well be applied in other disciplines. Unfortunately, I suspect that most of these highly sophisticated developments are unknown outside a rather small circle of experts, probably because no one would expect such deep theorizing in psychology. I am sure that many physicists and economists could learn a lot about measurement in their respective disciplines by having a closer look at this field.

P.S.: However, it may be that physics deals with a very peculiar sort of phenomena for which only one very special measurement theory and its related procedures suffice. How fortunate! See, e.g., the books and other works of B. Roy Frieden.

Must I derive the equations for the natural frequencies of a complex-shaped horizontal-axis wind turbine blade in order to use the Buckingham Pi theorem?

I want to use the Buckingham Pi theorem to relate a scale model of the blade to the prototype blade.
I began by stating that the natural frequency is a function of Young's modulus, blade length, cross-sectional area, moment of inertia, and density: fn = f(E, L, A, I, rho). I then obtained the dimensionless Pi groups that relate the natural frequency of the scale model to that of the prototype.

Is this the correct procedure, or should I derive the natural frequency equations first?

Bastian Schmandt · Technische Universität Hamburg-Harburg

Hello Mr. Hesham,

the Pi theorem of Buckingham is a formalism for data reduction and scaling. It is applicable when all influential quantities and their dimensions are known. If more quantities than necessary are included, non-dimensional groups with no impact are generated. You do not have to know the mathematical equations in order to apply the Pi theorem; thus, you can scale the results from measurements even when the dimensional equations are unknown or unsolvable. In practice, a five-point checklist is usually used to identify the physical quantities of impact. However, a priori knowledge of the respective equations or first principles can be useful for identifying the influential quantities.

Best regards

Bastian
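As a concrete illustration of applying the Pi theorem without deriving the governing equations: for geometrically similar beams (where A scales as L^2 and I as L^4), the list fn = f(E, L, A, I, rho) collapses to the single group Pi = fn · L · sqrt(rho/E), and keeping Pi equal between model and prototype gives the frequency scaling directly. All material numbers below are illustrative assumptions, not blade data:

```python
import numpy as np

# Buckingham Pi scaling of blade natural frequency for geometrically similar
# blades: the dimensionless group Pi = f_n * L * sqrt(rho / E) is invariant.
def predict_prototype_fn(fn_model, L_model, E_model, rho_model,
                         L_proto, E_proto, rho_proto):
    # equate Pi between model and prototype and solve for the prototype frequency
    c_model = np.sqrt(E_model / rho_model)   # material wave speed of the model
    c_proto = np.sqrt(E_proto / rho_proto)
    return fn_model * (L_model / L_proto) * (c_proto / c_model)

# ILLUSTRATIVE numbers: 0.3 m ABS model blade vs 7 m glass-fibre prototype
fn_m = 120.0   # assumed measured model natural frequency, Hz
fn_p = predict_prototype_fn(fn_m, L_model=0.3, E_model=2.3e9, rho_model=1050.0,
                            L_proto=7.0, E_proto=25.0e9, rho_proto=1900.0)
print(fn_p)
```

Note that this treats each orthotropic material as if a single effective modulus governed the mode of interest, which is exactly where doubts about material dissimilarity enter.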

How does the size factor affect a dimensionless Pi theorem analysis between a 7 m wind blade and a 0.3 m scale model?

I am working on developing a 0.30 m scale model of a wind blade for a prototype of 7 m length.

I want to predict the deflection of the 7 m blade to validate an FE analysis. Because I cannot manufacture this prototype, I need to apply the Buckingham Pi theorem to get the dimensionless ratio for the deflection.

The material of the 7 m blade is an orthotropic fibre-glass composite, and the material of the scale model is orthotropic ABS plastic.
Some professors said they have doubts about whether this can work, due to the size factor and the difference in materials.

Shiuh-Hwa Shyu · WuFeng University

Since you have an FE package for the analysis, why bother with the Pi theorem? Use the full scale in your FE model.

Does anyone know of codes or software tools to perform discrete (time series) chaos analysis?
I will have recorded acoustic files and desire to perform chaos analysis on the acoustic data. I am also looking to analyze the signals for energy in time-frequency such as using a Gabor Transform technique that has been done on bat echolocation signals, phase space, Lyapunov exponent, capacity dimension, etc. Anyway, does anyone know of where to find the codes or software tools such as in Matlab or some other source to conduct the analysis? Any current literature related to the algorithms would also be appreciated. Any suggestions are welcome as well. Thank you.
Nikos Lazarides · University of Crete

Dear Barry,

I would agree with Babalola about the TISEAN package for the analysis of chaotic time-series. Here is a link:

http://www.mpipks-dresden.mpg.de/~tisean/TISEAN_2.1/docs/indexf.html

Of course, the other sources mentioned in the other answers may also be valuable. Depending on your experience, you might like to look first at something simpler, in particular for the calculation of Lyapunov exponents from a time series. In that case, the old and famous paper by Wolf et al., Physica D 16, 285 (1985), would be very useful; it contains both a comprehensive summary of the theory and the numerical code.

Concerning the Gabor transformation etc., you could take a look at

http://ltfat.sourceforge.net/doc/gabor/dgt_code.php

the large time-frequency analysis toolbox, although I have not used it myself.

Regards,

Nick
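Before tackling experimental data, it can help to test a Lyapunov-exponent estimator on a system whose answer is known exactly. A minimal sketch in the spirit of Wolf et al., for a one-dimensional map rather than an embedded time series (the logistic map at r = 4, whose exponent is exactly ln 2):

```python
import numpy as np

# Largest Lyapunov exponent of the logistic map x -> r x (1 - x), estimated as
# the orbit average of the local stretching log|f'(x)|.  For r = 4 the exact
# value is ln 2, a handy sanity check before moving to experimental series.
def lyapunov_logistic(r, x0=0.2, n_transient=1000, n_iter=200_000):
    x = x0
    for _ in range(n_transient):                 # discard the transient
        x = r * x * (1.0 - x)
    s = 0.0
    for _ in range(n_iter):
        s += np.log(abs(r * (1.0 - 2.0 * x)))    # |f'(x)| = |r (1 - 2x)|
        x = r * x * (1.0 - x)
    return s / n_iter

print(lyapunov_logistic(4.0))   # close to ln(2) = 0.6931...
```

For measured signals, the delay-embedding step (TISEAN, or Wolf's published code) replaces the known derivative used here.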

What are the best ways to do the mathematical modelling of the Notch and Wnt pathways?

I need your opinion on algorithms like the Genetic Algorithm and others.

Gregory Francois · University of Applied Sciences and Arts Western Switzerland

I am also very much of this opinion:

"but what makes a model 'good' is the fact that you learn something from it."

This is a message that is hard to convey to many engineers or physicists, since many of them have a kind of binary view of modeling. I mean, a lot of people are convinced that if your model does not include everything, it cannot be informative or useful. This is wrong, in my opinion.

Modeling is always performed with "a goal", and models should only be judged with respect to the achievement of this goal. If you need a model for designing a PID controller, then an open-loop step response followed by the identification of a low-order linear model will be enough. If your goal is optimization, then the model needs to be adequate for optimization, in that it should let you predict the conditions of optimality of your system (see e.g. the adequacy conditions in the papers of Forbes and Marlin). You may also want to validate or refute assumptions; in such a case your model should be built around identifiable parameters whose identified values will help you decide. Finally, if you want to predict accurately the behavior of several measured quantities, you need to develop your model with this in mind. This is, in my opinion, what detailed models are intended for (although models that are too detailed and complex often lead to good fitting but poor predictions); the best approach is always to focus on the outputs (i.e. the measured quantities) that you are really interested in.

What do you think of model simplicity?
People often use mathematical models to address a physical or biological problem. Likewise, they often claim their models are "simple" even when they are non-trivial to many readers. Any views on what you consider the key features of a "simple model"? Is a mathematical model considered "simple" because of its ease of implementation within a specific program, or does its simplicity have more to do with the number of estimable parameters and quantifiable variables? I am curious what you think!
Jimmy Omony · University of Groningen

Nice point of view, James!

Under-parameterizing models for data from a special situation?

Dear all,

I have been thinking about the following: I have a set of models which share the characteristic that each has two rate parameters and another parameter with a similar interpretation in all models. Each model is then combined with a different model of a mechanism, to explain aspects that are not covered by the common part. The data I have for comparing these different mechanisms is a special case for the common part of all models (not related to the mechanism I'm interested in): both rate parameters have to be equal, and the third parameter has to be zero.

So my question is: when comparing the models (transdimensional MCMC) to figure out the best mechanism, should I use the simpler parametrization (with only one rate parameter, and disregarding the parameter which is zero), since my data represents this special case?

Or should I use the full model (which, by the way, is worse than the simplified model on this special-case data in terms of the Bayes factor)?

I know the question is a bit abstract; however, I feel that adding concrete model details would rather confuse the issue. Since I'm new to this kind of analysis, maybe this is a common case with a common solution? Although I couldn't find anything yet.

I'm very interested in your opinions!

Adam L MacLean · University of Oxford

Hi,

From what you describe (and without details of the models), I think you could use the full model rather than the reduced one and incorporate the knowledge you have about the system into your prior, if you then wish to test these models on new data. I.e., if in a preliminary experiment a parameter is measured to be zero, set the prior over this parameter to a point mass at 0. You could change this to a normal prior centred at 0 if you want to account for the uncertainty in the parameter.

Cheers,
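For the simplest (conjugate normal-normal) case, the effect of such a prior can be computed in closed form. This toy sketch, with assumed noise levels and simulated data, shows how tightening the prior width tau interpolates between the full model and the "parameter fixed at zero" special case:

```python
import numpy as np

# Conjugate normal-normal update: a parameter theta with prior N(0, tau^2),
# observed through n noisy measurements y_i ~ N(theta, sigma^2).  Shrinking
# tau pulls the posterior towards the "known to be zero" case; tau -> 0
# recovers fixing the parameter at zero exactly.
def posterior(y, sigma, tau):
    n = len(y)
    prec = 1.0 / tau**2 + n / sigma**2       # posterior precision
    mean = (np.sum(y) / sigma**2) / prec     # prior mean is 0, so data term only
    return mean, np.sqrt(1.0 / prec)

rng = np.random.default_rng(0)
y = rng.normal(0.3, 1.0, size=50)            # simulated data with true theta = 0.3

m_wide, s_wide = posterior(y, sigma=1.0, tau=10.0)    # weak prior: follows the data
m_tight, s_tight = posterior(y, sigma=1.0, tau=0.05)  # strong prior: shrunk to ~0
print(m_wide, m_tight)
```

In the transdimensional MCMC setting the same shrinkage logic applies, just without the closed form.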

Are we judging researchers' production correctly?
New Index for Quantifying an Individual's Scientific Research Output http://arxiv.org/abs/1305.6026
Ismat Beg · Lahore School of Economics

Judging researchers' production is an ill-posed problem. What we can do is make the best possible evaluation and then keep on improving it whenever we have further information.

Can the Ebola virus be considered as an epidemic on a scale-free network?

Some researchers have begun to consider epidemics on scale-free networks. Does the Ebola virus belong to this situation? See Abramson (2001).

And what are its implications for Ebola spread prediction and modelling?

Victor Christianto · University of New Mexico

Thanks, Michael. Best wishes

How do we model the kinetics of a material with multiple components?
As I am trying to modify the Arrhenius equation for processes that do not occur at constant temperature, I was also trying to add another factor: what if the material undergoing a single process (i.e. thermal degradation) has hundreds of components? Do I model the process as if it had only one activation energy, or do I consider it as different compounds undergoing the same process at different activation energies? How do I modify the Arrhenius equation in this situation? Or does it matter?
Peter Simon · Slovak University of Technology in Bratislava

Description of the kinetics of multicomponent systems is very difficult, since the components do not react as individuals; the reactions can mutually affect each other. Consider the case where the temperature dependence of the rates of the individual reactions obeys the Arrhenius equation. In my paper JTAC 88 (2007) 709, DOI: 10.1007/s10973-006-8140-y, it is shown that the Arrhenius equation is the worst approximation of the temperature dependence of complex processes whose individual steps obey the Arrhenius law. Try applying non-Arrhenian kinetics.
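A small numerical sketch of this point: even a sum of independent, parallel Arrhenius steps is itself not Arrhenius, which shows up as a temperature-dependent apparent activation energy. The pre-exponential factors and activation energies below are illustrative, not fitted values:

```python
import numpy as np

# A process made of parallel first-order steps, each individually Arrhenius:
#   k_eff(T) = sum_i A_i * exp(-Ea_i / (R T))
# The sum is NOT Arrhenius: ln k_eff vs 1/T is curved, so a single fitted
# activation energy is only a local approximation.
R = 8.314                                     # J mol^-1 K^-1
A = np.array([1.0e10, 5.0e12, 2.0e14])        # pre-exponential factors (illustrative)
Ea = np.array([60e3, 90e3, 120e3])            # activation energies, J/mol (illustrative)

T = np.linspace(400.0, 800.0, 200)
k_eff = (A[None, :] * np.exp(-Ea[None, :] / (R * T[:, None]))).sum(axis=1)

# apparent activation energy Ea_app(T) = -R d(ln k)/d(1/T); constant only if Arrhenius
lnk, invT = np.log(k_eff), 1.0 / T
Ea_app = -R * np.gradient(lnk, invT)
print(Ea_app.min() / 1e3, Ea_app.max() / 1e3)  # kJ/mol: varies with T
```

The apparent activation energy drifts from the lowest-barrier channel at low T towards a weighted average at high T, which is the curvature the paper above refers to.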

How does one develop better cell models to characterize common diseases and complex traits?

Whole genome sequencing and genome-wide association studies (GWAS) clearly demonstrate the complexity of common diseases and in the majority of cases risk cannot be defined by a single gene mutation. As a result of the wealth of genetic data, modeling of complex traits has become ever more challenging.

I would be very interested in your opinion on where we should be going in designing better cell models and at the end better animal models. Needless to say that monocausal approaches are dated, even though most of our attempts to understand the molecular underpinnings of diseases are based on monocausal models, but the important question is how complex diseases could be modeled in a more appropriate way? Your opinion is most valued.

Peter Kloehn · University College London

I certainly agree that mathematical and statistical modelling will be indispensable to better define how genetic variance may affect disease pathways and complex traits. However, systems approaches often fall short of delivering viable hypotheses, particularly when genes or gene pathways are poorly described. This may result in hypotheses that are skewed towards well-described genes.

My question regarding better models to characterize common diseases and complex traits is related to experimental models, rather than data mining. For instance, induced pluripotent stem cells (iPS) cells, isolated from tissues of patients and healthy controls may be a superior model to the single gene approach we are generally pursuing. I would be interested to understand whether there are other options to 'custom-design' model systems.

What are good resources for learning mathematics and statistics with life sciences examples?

What are good resources for learning mathematics and statistics for the life sciences, with applied examples, for someone with basic knowledge?

Are there any free books with an easy guide?

Miles P Davenport · University of New South Wales

I don't know of a book that covers both mathematics (modelling?) and statistics well, but would suggest:

Modeling the Dynamics of Life, F. R. Adler,

and

Intuitive Biostatistics, H. Motulsky [the latter is associated with GraphPad Prism].

How do I learn Robotics, Mathematical modelling and implementation from the very basics?

Hello everybody, my respectful wishes to all.

I am Parthiban; I did my postgraduate studies in the area of embedded systems.

I have some knowledge of microcontroller programming (both in assembly and C) and of digital electronics.

I want to do my research in the area of mathematical modelling, implementation and control of nonlinear systems such as robots and quadcopters, using fuzzy logic, neural networks or genetic algorithms,

but I know nothing about the above-mentioned areas.

So I request you all to please provide your valuable guidance on learning about robotics, quadcopter modelling and implementation, fuzzy logic, neural networks and genetic algorithms.

Especially, first, the mathematical modelling of linear control systems and then, more importantly, the mathematical modelling of nonlinear control systems.

Narasim Ramesh · Sri Jagadguru Chandrasekaranathaswamiji Institute of Technology

IMHO, you have enough background to begin work. I suggest you first identify an application by observing problems being faced by society, e.g.:

1. last-kilometre safety for women;

2. identification of pests in storage;

3. a robot for ensuring the safety of children at the school gate;

4. robots to carry gas cylinders up stairs;

5. robots for receiving outpatients for doctors;

6. robots to monitor crowds;

etc. Then it will be clear what you need when you look at what has already been done, so developments and improvements can be identified, which will require the specific skills you mentioned. Otherwise, I think there is too much material out there, which is distracting.

Good luck.

Cheers.

What are the recent and well-behaved methods in solving a stochastic mathematical model?

There are methods such as the sample average approximation method and the progressive hedging algorithm for solving mixed-integer stochastic optimization problems.

Ralf Gollmer · University of Duisburg-Essen

Addendum: The notion of a two- or multistage stochastic problem does not refer to the time horizon of a dynamic problem, though in many cases the choice of the stages is influenced by time, since it is assumed that the data realizations are revealed over time. In these problems it is assumed that part of the variables has to be decided without any knowledge of the realization; afterwards, as part of the data realization becomes known, more variables have to be decided, each time assuming knowledge of the data realizations already revealed, but in a nonanticipative way (i.e. without supposed knowledge of the other data), based only on knowledge of the possible scenarios.

Thus your problem does not belong to that class of problems, and none of the methods for two- or multistage stochastic optimization is applicable.

Can anyone help with a Particle Swarm Optimization algorithm?
Can anyone help me with a PSO algorithm? Is there any C/C++ program available?
Mohammad Reza Garoosi · Khaje Nasir Toosi University of Technology

You can use the MATLAB function provided by Ebbesen et al. (2012) in the following article:

Ebbesen, S., Kiwitz, P. & Guzzella, L. (2012), A Generic Particle Swarm Optimization Matlab Function, American Control Conference, Montreal, Canada.
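If a ready-made MATLAB function is not an option, the algorithm itself is short enough to write directly. A minimal global-best PSO sketch (in Python rather than the requested C/C++, but the structure translates line by line; the inertia and acceleration constants are the commonly used defaults):

```python
import numpy as np

# Minimal global-best particle swarm optimizer for a function f on [lo, hi]^dim.
def pso(f, dim, n_particles=30, iters=300, lo=-5.0, hi=5.0,
        w=0.72, c1=1.49, c2=1.49, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))     # positions
    v = np.zeros((n_particles, dim))                # velocities
    pbest = x.copy()                                # personal bests
    pbest_f = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)].copy()            # global best position
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # inertia + cognitive pull (own best) + social pull (swarm best)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, np.min(pbest_f)

best_x, best_f = pso(lambda z: np.sum(z**2), dim=5)   # sphere function test
print(best_x, best_f)
```

The sphere function is the standard smoke test: the minimizer should end up very close to the origin.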

How can I implement two sets of differential equations with different time scales (seconds and milliseconds) simultaneously in MATLAB R2013a Simulink?
We have two sets of differential equations, for neuron dynamics and for astrocyte Ca2+ oscillations, but with different time scales (Di Garbo 2009). Is it correct that, to simulate these sets simultaneously, one must use integrators with a coefficient of 1000 to rescale one set?
Fabio Mavelli · Università degli Studi di Bari Aldo Moro

Of course you can approximate the ODE set by decoupling the equations, but my suggestion is to first try a MATLAB numerical procedure for stiff ODE sets, which could solve your problem easily.
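A sketch of that suggestion using SciPy's implicit (stiff) solver rather than MATLAB: a toy system with a seconds-scale and a milliseconds-scale variable is integrated in one call, with no manual rescaling by 1000 (the system itself is an illustrative stand-in, not the Di Garbo model):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two coupled variables whose time scales differ by a factor of 1000
# (seconds vs milliseconds): slow decays at 1 s^-1, fast relaxes onto
# the slow variable at 1000 s^-1.  An implicit solver handles both at once.
def rhs(t, y):
    slow, fast = y
    return [-slow, 1000.0 * (slow - fast)]

sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0], method="Radau",
                rtol=1e-8, atol=1e-10)
slow, fast = sol.y
# after the millisecond transient, the fast variable rides on the slow manifold
print(abs(fast[-1] - slow[-1]))
```

MATLAB's stiff solvers (ode15s, ode23s) play the same role as `method="Radau"` here.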

Does anyone know of any mathematical models for chickenpox with vaccination?
A mathematical model for vaccination of chickenpox.
Jimmy Omony · University of Groningen

Hi Stephen,

Below are a couple of papers that should be helpful to you. I read them last week and found them very insightful.

http://www.sciencedirect.com/science/article/pii/S0264410X02001809

I am not sure whether they address your specific interest, but at least they should provide you with a few tips and clues on how to go about your question.

Good luck!

What are the mathematical models that fit the impedance spectrum?
How many models do we have for fitting impedance data? I want to know all of them; can anyone help me on this topic? Suggestions of websites and PDFs are appreciated, as are the models which you yourselves have used.
Thank you.
What is the difference among deterministic, stochastic and hybrid models?
How does one develop a hybrid model?
John W. Kern · Kern Statistical Services, Inc., University of Wyoming, Montana State University
I agree with Barry, except that I would add that in many if not most situations the "known" first principles and the necessary "known" parameters are less well understood than one would like, and in nearly all nontrivial situations at least one parameter is free, requiring estimation from data, and therefore an uncertain "stochastic" setting. For example, we commonly use deterministic models for groundwater flow, but we almost never know the distribution of the hydraulic conductivity field. When this uncertainty is ignored, a deterministic approach results; when efforts are made to account for it, a stochastic model results.

In my experience, the primary reason that deterministic models are used in these incompletely specified situations has to do with the difficulty of developing rigorous inference for complex computational models. For example, in the field of climate change, circulation models are enormously computationally intensive, so it is difficult to iterate in the parameter space to identify maximum-likelihood parameter estimates. Models are therefore "calibrated" essentially by hand with a small number of model runs, and optimal parameter sets are virtually never found by hand calibration.

This stumbling block has led to the area of model emulation, where statistical models are fitted to deterministic model inputs and outputs, resulting in a computationally efficient version of the original model which can be used to develop statistical inference through Monte Carlo simulation or other approaches.
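A toy sketch of the deterministic-versus-stochastic distinction described above, using one-dimensional Darcy flow with an assumed lognormal hydraulic conductivity (all numbers illustrative):

```python
import numpy as np

# Deterministic Darcy flux q = K * dH / L becomes stochastic once the hydraulic
# conductivity K is treated as a lognormal random variable (its commonly assumed
# field distribution) rather than a single "known" value; Monte Carlo sampling
# propagates that uncertainty through the model.
rng = np.random.default_rng(42)
dH, L = 2.0, 100.0                     # head drop (m) over distance (m), illustrative
K = rng.lognormal(mean=np.log(1e-4), sigma=1.0, size=100_000)  # m/s
q = K * dH / L                         # specific discharge, m/s

q_det = 1e-4 * dH / L                  # "plug in the median K" deterministic answer
print(np.median(q), np.mean(q), q_det)
```

Note the asymmetry: the stochastic median matches the deterministic answer, but the stochastic mean exceeds it, which is exactly the kind of information the deterministic run silently discards.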
Does anyone know a good source on scaling-up models/techniques for chemical pilot plants to industrial scale?
I am trying to build a process model for a bioenergy system. In the literature there are plenty of lab-scale and pilot-plant models for this process, but much less industrial (commercial) scale data. I am looking for a book or a paper discussing what options we have to mathematically scale a pilot-plant model up to the industrial level. Thanks!
Andreas Bode · BASF New Business
There is also a classical book from the chemical engineering side, by Zlokarnik: Scale-up in Chemical Engineering. There you may find scale-up rules for different types of equipment.
How can I add a vaccination compartment in a chickenpox mathematical model?
vaccination compartment
Konstantin K. Avilov · Russian Academy of Sciences
It depends on the model that you are modifying and on the way the vaccine works (and also how it is applied).

In the simplest case of an ODE-based model (like the classical SIR models) with life-long 100% protection for vaccinated people (who are vaccinated at birth), the vaccination (or, more precisely, vaccinated) compartment is an additional compartment that accumulates a given fraction of the newborns entering the model population.
If the vaccine protection is not life-long, a decay rate may be added, which moves a fraction of the vaccinated compartment's population into the susceptible compartment. If the vaccine gives only partial protection, the vaccinated compartment acts like another susceptible compartment, but with a lower infection rate. And so on...

In the case of age-structured models (and age structured vaccination schemes) the situation becomes more technically complicated, although the main principle remains the same.
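For the simplest ODE case described above (a fraction p of newborns vaccinated at birth, with life-long full protection), the vaccinated compartment can be sketched as follows; all rate values are illustrative, not fitted to chickenpox data:

```python
import numpy as np
from scipy.integrate import solve_ivp

# SIR extended with a vaccinated compartment V: a fraction p of newborns enters
# V instead of S and never becomes infected.  mu is the birth/death rate,
# beta the transmission rate, gamma the recovery rate (all illustrative).
beta, gamma, mu, p = 0.5, 0.2, 0.02, 0.7

def rhs(t, y):
    S, I, R, V = y
    N = S + I + R + V
    dS = (1 - p) * mu * N - beta * S * I / N - mu * S   # unvaccinated newborns in
    dI = beta * S * I / N - gamma * I - mu * I
    dR = gamma * I - mu * R
    dV = p * mu * N - mu * V                            # vaccinated newborns in
    return [dS, dI, dR, dV]

sol = solve_ivp(rhs, (0, 400), [0.99, 0.01, 0.0, 0.0],
                rtol=1e-8, atol=1e-10, t_eval=np.linspace(0, 400, 401))
S, I, R, V = sol.y
print(S[-1], I[-1], R[-1], V[-1])
```

With this coverage the vaccinated fraction approaches p, and (for these illustrative rates) the effective reproduction number falls below one, so the infection dies out.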
What is the physical interpretation of local and global stability of a disease free or endemic equilibrium in disease modelling?
How can you explain the local and global stability of DFE or EE to a non-mathematician?
Konstantin K. Avilov · Russian Academy of Sciences
That's pretty simple, in my opinion.
Local stability of an equilibrium point means that if you put the system somewhere near the point, it will move itself to the equilibrium point in some time. Global stability means that the system will come to the equilibrium point from any possible starting point (i.e., there is no "nearby" condition).

In an even more physical interpretation, it could be put like this:
If the DFE or EE is locally stable, then all epidemiological situations not very different from the given stable equilibrium will (with time) evolve to (or transform into) the equilibrium point. It also means that the equilibria are stable to small perturbations: if you push the situation a bit away from the equilibrium point, it will return on its own (from the physicist's point of view, this means the equilibrium may be a stable situation in real life, because the real world is always somewhat noisy).

Global stability of an equilibrium point in this case may be described as "the inevitable fate of the epidemic process regardless of its starting situation". But a caveat should be added that this "inevitability" holds only as long as the world strictly follows the underlying mathematical model of the epidemic process.

For a still more lay (or medical) audience, all these things may easily be interpreted in terms of "adding or removing a small number of infectious people", but I believe that would make the central idea less clear.
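The "push it a bit and it returns" picture of local stability is easy to demonstrate numerically. A sketch with an SIR model with births and deaths (illustrative parameters), perturbed away from its endemic equilibrium:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Numerical illustration of local stability: push an SIR-with-demography system
# a little away from its endemic equilibrium and watch it return.  S and I are
# population fractions; mu is the birth/death rate (all values illustrative).
beta, gamma, mu = 0.5, 0.2, 0.02
S_star = (gamma + mu) / beta                    # endemic equilibrium
I_star = mu * (1.0 - S_star) / (beta * S_star)

def rhs(t, y):
    S, I = y
    return [mu - beta * S * I - mu * S,
            beta * S * I - (gamma + mu) * I]

y0 = [S_star + 0.05, I_star + 0.02]             # a small "push" off the equilibrium
sol = solve_ivp(rhs, (0, 2000), y0, rtol=1e-9, atol=1e-12, t_eval=[0, 2000])
d0 = np.hypot(y0[0] - S_star, y0[1] - I_star)   # initial distance from equilibrium
d1 = np.hypot(sol.y[0, -1] - S_star, sol.y[1, -1] - I_star)  # final distance
print(d0, d1)
```

The trajectory spirals back (damped epidemic oscillations), so the final distance is a tiny fraction of the initial push, which is exactly the local-stability statement in numbers.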