- Viktor Szabo added an answer: What boundary conditions should we use when modeling blood flow in coronary arteries?
We are doing one-dimensional modelling of coronary blood flow. Is there any publication on the boundary conditions we should use at both the inlet and the outlet? Thank you very much for your help.
Thank you very much, Alin, for this useful material!
- Joao A. N. Filipe added an answer: Does anyone have experience with relaxation as a modelling strategy?
Given a PDE model with constraints restricting the range of values of model quantities, how does one apply the idea of relaxation to such a model? Any ideas or references on relaxation in modelling will be appreciated.
Following on your reply re ODEs of 19 days ago. I focus on a very simple form of relaxation by assuming that the logistic equation, du/dt = a*u*(k-u), is related to your problem because its solutions do not exceed k if u is initially below k. Here a, k > 0 are parameters; k is often known as the 'carrying capacity'. If we add an extra term with time-varying per capita rate v(t), we can recast the equation as a logistic model with a time-varying carrying capacity and thus a new long-term bound on u. Specifically: du/dt = a*u*(k-u) + v(t)*u, with v(t) > 0. This can be rewritten as: du/dt = a*u*((k + v(t)/a) - u) = a*u*(q(t) - u), where q(t) = k + v(t)/a is a time-varying carrying capacity. Unlike the basic logistic equation, this ODE is unlikely to have an exact solution (unless v(t) is special, such as a constant), but numerical solution should be straightforward. There are many options for the function v(t), depending on the specific application. Here is an article exploring the case where q(t) also obeys a logistic equation; in this case, there is a form of relaxation if q increases with time.
P.S. Meyer and J.H. Ausubel, Carrying Capacity: A Model with Logistically Varying Limits, Technological Forecasting and Social Change 61(3):209-214, 1999.
This specific choice of v(t) incorporates a time-scale parameter characterising the pace of the relaxation, which may be relevant. This is just a concrete example to illustrate a way of thinking about modelling relaxation; many other basic models, and modifications thereof to incorporate relaxation, would be possible.
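To make this concrete, here is a minimal numerical sketch of du/dt = a*u*(q(t) - u); the logistic form chosen for q(t) and all parameter values are illustrative assumptions, not taken from the paper:

```python
# Numerical sketch of du/dt = a*u*(q(t) - u), where the carrying
# capacity q(t) itself relaxes logistically from k toward a higher
# limit Q (assumed illustrative values; fixed-step RK4 integration).
import math

def q(t, k=1.0, Q=2.0, b=0.5):
    # logistic relaxation of the carrying capacity from k toward Q
    return Q / (1.0 + (Q / k - 1.0) * math.exp(-b * t))

def f(t, u, a=1.0):
    return a * u * (q(t) - u)

def rk4(u0, t0, t1, n):
    h = (t1 - t0) / n
    t, u = t0, u0
    for _ in range(n):
        k1 = f(t, u)
        k2 = f(t + h / 2, u + h / 2 * k1)
        k3 = f(t + h / 2, u + h / 2 * k2)
        k4 = f(t + h, u + h * k3)
        u += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return u

u_end = rk4(0.1, 0.0, 40.0, 4000)
print(u_end)  # approaches the long-term carrying capacity Q = 2
```

The solution first saturates near the initial capacity k and then relaxes upward as q(t) grows, which is the behaviour described above.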
- Antonio Luiz Pacifico added an answer: What is the procedure for using a distorted model with the Buckingham pi theorem?
I have a model that is distorted relative to the prototype: the dimensional scale for thickness is not the same as for the other geometrical dimensions.
I understand there is a procedure for using a distorted model to predict prototype behaviour, but I have searched many papers and books and have not found a clear general procedure for this case.
Thanks in advance.
In these cases you have to know, beforehand, the phenomena you are studying/testing. This is necessary because in distorted models you will have to adopt hypotheses that allow you to disregard some dimensionless groups. For example, in wind tunnel tests a well-known hypothesis is that "in the absence of thermal and Coriolis effects and for a specified flow system, whose boundary conditions are expressed non-dimensionally in terms of a characteristic length L and velocity U, the turbulent flow structure is similar at all sufficiently high Reynolds numbers" (Townsend, 1956, apud Snyder, W. H., Guideline for fluid modeling of atmospheric diffusion, EPA Report 600/8-81-009, April 1981). Under these conditions you can work with models by matching the other dimensionless groups but not the Reynolds number. To do so, it is crucial to write all the differential equations that govern your phenomena, normalize them, and find the nondimensional groups involved. Then make a sensitivity analysis on these equations in order to find what you can disregard. It is not an easy task; it demands knowledge of the phenomena and, probably, several hypothesis tests.
- Fateme Seihani Parashkouh added an answer: How can data envelopment analysis (DEA) be modeled with nonlinear instead of linear functions? Is it possible to develop a DEA model with nonlinear functions instead of linear ones?
I think you can use the Charnes and Cooper transformation (Tone 2001) in the reverse direction.
- Joao A. N. Filipe added an answer: Does anyone know of any mathematical models for chickenpox with vaccination? A mathematical model for vaccination against chickenpox.
A textbook explaining modelling approaches, with additional references: Keeling and Rohani, Modeling Infectious Diseases in Humans and Animals, Princeton Univ. Press, Chapter 8.
- Stefan Gross added an answer: How can I properly calculate the Akaike information criterion for data with unclear sample size?
The situation is as follows:
An experiment measured the concentration of a certain chemical in cells at various times after exposure. The results were normalized by the concentration at time zero, so they are fractions. Millions of cells were combined for the assay. The data points are means and SEMs based on several repeat experiments. I am interested in fitting some models to these data and comparing them by AIC, but I am not sure how to calculate AIC properly because the sample size is unclear. I would appreciate suggestions from anyone interested!
See http://en.wikipedia.org/wiki/Akaike_information_criterion (bullet point 3) for the formula for AICc.
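For concreteness, a small sketch of the RSS-based AIC/AICc computation under the usual Gaussian-error assumption; the numbers are hypothetical, and n is exactly the sample size whose choice the question is about:

```python
# AIC / AICc from a least-squares fit, assuming i.i.d. Gaussian errors:
#   AIC  = n * ln(RSS / n) + 2k
#   AICc = AIC + 2k(k + 1) / (n - k - 1)
# Here n is the sample size (the crux of the question) and k counts the
# fitted parameters plus one for the error variance.
import math

def aic_c(rss, n, k):
    aic = n * math.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

# hypothetical comparison of two fits to the same n = 8 points
print(aic_c(rss=0.40, n=8, k=3))
print(aic_c(rss=0.35, n=8, k=4))  # the extra parameter's penalty can outweigh the better fit
```

Note how sensitive the correction term is to n when n is small, which is why the unclear sample size matters so much here.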
- Mohammed Lamine Moussaoui added an answer: Any ideas about the stability of two solutions in a nonlinear system? Suppose a nonlinear system has at least 2 positive periodic solutions in a bounded domain. If one can show that the system is globally asymptotically stable, does that contradict the previous statement? If not, what can one say about the other solution? Can anybody suggest relevant references?
Dear Santanu Biswas,
To be convergent, the solution has to be stable and consistent (see the Lax equivalence theorem). The choice of the discretization step(s) defines a domain of stability; this gives you several solutions for each choice. But the solution must be unique, as stated by Fletcher in his book.
- Samuel Alizon added an answer: Can someone assist me with some notes on modelling HIV vertical transmission? I am an MPhil student. I need material to understand the concepts better for my research. Papers and notes are welcome.
- Bilombo Raoul added an answer: What are the real-world applications of nondeterministic finite automata (NFA)?
Can anyone help me with some real-life examples (other than compiler design) where the idea of an NFA can be applied? If I can find the shortest string accepted by any NFA, what kind of real-life problems can I solve with it?
In automated management, finding strategies that optimize time or energy is a fundamental question. It is a matter of determining an optimal command corresponding to an optimal trajectory, an optimal energy, or an optimal time. A simple industrial example is train management: one can determine strategies to minimize the energy spent by a train on a journey, or to minimize the duration of a journey; in short, optimization of a system (the train) in motion. Several such examples can be observed in industry.
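Since the question mentions finding the shortest string accepted by an NFA, here is a small sketch of how that can be done with a breadth-first search over subsets of states (the toy NFA and its transition table below are made up for illustration):

```python
# Shortest string accepted by an NFA, via breadth-first search over
# subsets of states (the on-the-fly subset construction).
from collections import deque

def shortest_accepted(states, alphabet, delta, start, accept):
    # delta: dict mapping (state, symbol) -> set of next states
    start_set = frozenset([start])
    queue = deque([(start_set, "")])
    seen = {start_set}
    while queue:
        current, word = queue.popleft()
        if current & accept:
            return word
        for sym in alphabet:
            nxt = frozenset(s2 for s in current
                            for s2 in delta.get((s, sym), ()))
            if nxt and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, word + sym))
    return None  # the NFA accepts nothing

# toy NFA accepting exactly the strings that contain "ab"
delta = {
    (0, "a"): {0, 1}, (0, "b"): {0},
    (1, "b"): {2},
    (2, "a"): {2}, (2, "b"): {2},
}
print(shortest_accepted({0, 1, 2}, "ab", delta, 0, {2}))  # "ab"
```

Because BFS explores words in order of length, the first accepting subset reached yields a shortest accepted string; this is the same idea used in practice to produce shortest counterexamples in model checking.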
- Muthu Ramesh Babu added an answer: What is the difference between convex and non-convex optimization problems? How do we know whether a function is convex or not?
What are the different commands used in MATLAB to solve these types of problems?
Convex problems have only one local optimum, which is also the global optimum. Whether that point is a maximum or a minimum is determined by the second-order derivative of the function (for a twice-differentiable function of one variable, convexity means the second derivative is nonnegative; in several variables, that the Hessian is positive semidefinite). Non-convex problems, on the other hand, can have multiple local optima.
- Jose A. Ramos added an answer: Mathematical models for system identification: what are the best error indices for validating a model against data? For example FIT, VAF, MSE, RMSE: which is best?
In my experience, FIT, VAF, RMSE/MSE, in that order. MSE is problematic in the sense that two models with very different performance can give equal MSE.
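For reference, a sketch of these indices with their usual definitions (conventions vary slightly between toolboxes, so check the definition your toolbox actually uses):

```python
# Common validation indices from system identification. Usual forms:
#   FIT  = 100 * (1 - ||y - yhat|| / ||y - mean(y)||)
#   VAF  = 100 * (1 - var(y - yhat) / var(y))
#   RMSE = sqrt(mean((y - yhat)^2)),  MSE = RMSE^2
import math

def fit(y, yhat):
    ybar = sum(y) / len(y)
    num = math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)))
    den = math.sqrt(sum((a - ybar) ** 2 for a in y))
    return 100.0 * (1.0 - num / den)

def vaf(y, yhat):
    e = [a - b for a, b in zip(y, yhat)]
    ebar = sum(e) / len(e)
    ybar = sum(y) / len(y)
    var_e = sum((x - ebar) ** 2 for x in e) / len(e)
    var_y = sum((a - ybar) ** 2 for a in y) / len(y)
    return 100.0 * (1.0 - var_e / var_y)

def rmse(y, yhat):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

y = [1.0, 2.0, 3.0, 4.0, 5.0]
yhat = [1.1, 1.9, 3.2, 3.8, 5.1]
print(fit(y, yhat), vaf(y, yhat), rmse(y, yhat))
```

The definitions also show the answer's point about MSE: a prediction with a constant bias and an unbiased but noisy prediction can share the same MSE while FIT and VAF separate them, since VAF subtracts the mean of the error.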
- Lawrence Margulies added an answer: Has anyone already applied a physiologically based pharmacokinetic two-compartment model for a single oral dose or i.p. injection of pristine SWCNT? I successfully implemented a two-compartment (gut-blood) mathematical model for a single oral dose (20 mg/kg) to simulate the SWCNT distribution in the blood, using the intestinal absorption constant (Ka = 0.033 min^-1/1.98 hrs^-1), the volume of distribution (Vd = D/C0, experimentally determined), and the volume of the gut compartment (V1 = 82.5 ml/kg). The methodology might be further applied to build a two-compartment (peritoneum-blood) PBPK model for a single i.p. drug injection if the peritoneum-blood permeation constant is established.
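A minimal sketch of such a gut-blood model; Ka is taken from the post above, while the elimination constant, volume of distribution and dose used here are illustrative assumptions only:

```python
# Sketch of a gut-blood two-compartment model of the kind described:
#   dA_gut/dt   = -Ka * A_gut
#   dA_blood/dt =  Ka * A_gut - ke * A_blood,   C_blood = A_blood / Vd
# Ka is from the post; ke, Vd and the dose below are illustrative
# assumptions. Simple fixed-step Euler integration.
Ka = 1.98    # 1/h, intestinal absorption constant (from the post)
ke = 0.5     # 1/h, assumed first-order elimination constant
Vd = 5.0     # L/kg, assumed volume of distribution
dose = 20.0  # mg/kg, single oral dose

dt, t_end = 0.001, 12.0
a_gut, a_blood, peak = dose, 0.0, 0.0
for _ in range(int(t_end / dt)):
    a_gut, a_blood = (a_gut - Ka * a_gut * dt,
                      a_blood + (Ka * a_gut - ke * a_blood) * dt)
    peak = max(peak, a_blood / Vd)
print(peak)  # peak blood concentration
```

This linear system also has the closed-form Bateman solution, so the numerical sketch can be checked analytically before extending it to the peritoneum-blood case.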
- Gayathri S P added an answer: How to segment a medical image using Markov random fields?
Sorry for the late reply, and thank you for your response.
- Paul Hubert Vossen added an answer: What are the limits of measurement in science? When I was in high school, Bohr's atom of shells and s and p orbitals was introduced in chemistry. The realization was automatic that the world was explained according to theory that was verified by experiment. Through college and graduate school, looking for a more complete explanation, theory is challenged, but the question is never raised: what is an electron or proton, if they have mass but are visible only in the sense that they emit light energy as photons that also have mass? Spots of light in orbit around nuclei, the atom a solar system in miniature? Physicists will say this is not the picture they have evolved, but all that remains is the image of equations on a chalkboard, at best the image of things of a particle nature in alternation with things of a light nature. Can a pieced-together stepwise reality of this nature be accepted? In the Feynman quote below, pieces are added that can break any of the established laws: "they are not directly observable" or affect "causality". In this same sense, though, neither electrons, protons, photons nor atoms are observable, and their causal effects are but a matter of humanly constructed theory and similarly based experimental apparatus. The possibility exists that theory and theory-based apparatus entail one another, and all that might be gotten is that the real universe is identical in this respect, i.e. existence entails the experienced universe and vice versa.
"You found out in the last lecture that light doesn't go only in straight lines; now, you find out that it doesn't go only at the speed of light! It may surprise you that there is an amplitude for a photon to go at speeds faster or slower than the conventional speed, c. These virtual photons, however, do not violate causality or special relativity, as they are not directly observable and information cannot be transmitted causally in the theory." (from "Varying c in quantum theory", http://en.wikipedia.org/wiki/Variable_speed_of_light)
Long ago, out of pure necessity, psychologists started to worry about measurement in psychology (and closely related disciplines). They have built up an amazing stock of knowledge about, and models of, measurement, which may equally well be applied in other disciplines. Unfortunately, I suspect that most of these highly sophisticated developments are unknown outside a rather small circle of experts, probably because no one would expect such deep theorizing in psychology. I am sure that many physicists and economists could learn a lot about measurement in their respective disciplines by having a closer look at this field.
P.S.: However, it may be that physics deals with a very peculiar sort of phenomena for which only one very special measurement theory and related procedures suffice. How fortunate! See e.g. the books and other works of B. Roy Frieden.
- Bastian Schmandt added an answer: Must I derive the equations for the natural frequencies of a complex-shaped horizontal-axis wind turbine blade in order to use the Buckingham Pi theorem?
I want to use the Buckingham Pi theorem to relate a similitude of the blade to the prototype blade.
I began by stating that the natural frequency is a function of Young's modulus, blade length, cross-sectional area, moment of inertia, and density:
fn = f(E, L, A, I, rho). I then obtained the dimensionless pi groups that relate the natural frequency of the similitude to that of the prototype.
Is this the correct procedure, or should I derive the natural frequency equations first?
Hello Mr. Hesham,
the Pi theorem of Buckingham is a formalism for data reduction and scaling. It is applicable when all influential quantities and their dimensions are known. If more quantities than necessary are included, non-dimensional groups with no impact are generated. You do not have to know the mathematical equations in order to apply the Pi theorem; thus, you could scale the results from measurements even when the dimensional equations are unknown or unsolvable. Usually a five-point list is used to identify the physical quantities of impact. However, a priori knowledge of the respective equations or first principles might be useful in order to identify the influential quantities.
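To illustrate the bookkeeping, here is a sketch that verifies the dimensionlessness of one pi group for the blade problem in the question above (the particular group is the standard beam-vibration one; the Pi theorem itself only fixes how many groups there are, not their form):

```python
# Dimensional bookkeeping for the blade natural-frequency problem.
# Each quantity is an (M, L, T) exponent vector; with 6 quantities and
# 3 base dimensions, Buckingham's theorem gives 6 - 3 = 3 pi groups.
# One classical group for beam-like vibration is
#   Pi1 = fn * L^2 * sqrt(rho * A / (E * I))
# and we can verify it is dimensionless by summing exponents.
from fractions import Fraction

dims = {           # (M, L, T)
    "fn":  (0, 0, -1),
    "E":   (1, -1, -2),
    "L":   (0, 1, 0),
    "A":   (0, 2, 0),
    "I":   (0, 4, 0),
    "rho": (1, -3, 0),
}

def combine(powers):
    # powers: dict quantity -> exponent; returns the total (M, L, T)
    total = [Fraction(0)] * 3
    for name, p in powers.items():
        for i in range(3):
            total[i] += Fraction(p) * dims[name][i]
    return tuple(total)

pi1 = combine({"fn": 1, "L": 2, "rho": Fraction(1, 2),
               "A": Fraction(1, 2), "E": Fraction(-1, 2),
               "I": Fraction(-1, 2)})
print(pi1)  # (0, 0, 0): dimensionless
```

This matches the answer's point: the exponent algebra needs only the dimensions of the influential quantities, not the governing equations themselves.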
- Shiuh-Hwa Shyu added an answer: How does the size factor affect a dimensionless Pi-theorem analysis between a 7 m wind blade and a 0.3 m similitude?
I am working on a 0.30 m similitude of a 7 m prototype wind blade.
I want to predict the deflection of the 7 m blade to validate an FE analysis. Because I cannot manufacture this prototype, I need to apply the Buckingham Pi theorem to get the dimensionless ratio for the deflection.
The material of the 7 m blade is orthotropic fibreglass composite, and the material of the similitude is orthotropic ABS plastic.
Some professors said they have doubts about the possibility of this working due to size factor and difference in materials.
Since you have an FE package for the analysis, why bother with the Pi theorem? Use the full scale in your FE model.
- Nikos Lazarides added an answer: Does anyone know of codes or software tools to perform discrete (time series) chaos analysis? I will have recorded acoustic files and want to perform chaos analysis on the acoustic data. I am also looking to analyze the signals for energy in time-frequency, such as using a Gabor transform technique that has been applied to bat echolocation signals, as well as phase space, Lyapunov exponents, capacity dimension, etc. Does anyone know where to find the codes or software tools, such as in Matlab or some other source, to conduct the analysis? Any current literature on the algorithms would also be appreciated. Any suggestions are welcome as well. Thank you.
I would agree with Babalola about the TISEAN package for the analysis of chaotic time-series. Here is a link:
Of course, the other sources mentioned in the other answers may also be valuable. Depending on your experience, you would perhaps like to look first at something simpler, in particular for the calculation of Lyapunov exponents from a time series. In that case, the old and famous paper by Wolf et al., Physica D 16, 285 (1985), would be very useful; it contains both a comprehensive summary of the theory and the numerical code.
Concerning the Gabor transformation etc., you could take a look at the Large Time-Frequency Analysis Toolbox (LTFAT), although I have not used it myself.
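Before applying such packages to measured data, it can help to test the machinery on a map with a known exponent; here is a sketch for the logistic map, whose largest Lyapunov exponent at r = 4 is exactly ln 2:

```python
# Largest Lyapunov exponent of a known chaotic map, as a warm-up before
# applying time-series methods (Wolf et al. 1985) to measured data.
# For the logistic map x -> r*x*(1-x) with r = 4 the exact value is ln 2.
import math

def lyapunov_logistic(r=4.0, x0=0.1, n=200000, burn=1000):
    x = x0
    for _ in range(burn):  # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1.0 - 2.0 * x)))  # log |f'(x)|
        x = r * x * (1.0 - x)
    return total / n

lam = lyapunov_logistic()
print(lam)  # close to ln 2 ~ 0.6931
```

With known dynamics the derivative is available analytically; the point of the Wolf et al. algorithm is to estimate the same quantity when only a measured time series is available.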
- Gregory Francois added an answer: What are the best ways to do mathematical modelling of the Notch and Wnt pathways?
I need your opinion on algorithms like genetic algorithms and others.
I am also very much of this opinion:
"but what makes a model 'good' is the fact that you learn something from it."
This is a message that is hard to convey to many engineers or physicists since many of them have a kind of binary view on modeling. I mean a lot of people are convinced that if your model does not include everything, it cannot be informative or useful. This is wrong in my opinion.
Modeling is always performed with "a goal", and models should only be judged w.r.t. the achievement of this goal. If you need a model for designing a PID controller, then an open-loop step response followed by the identification of a low-order linear model will be enough. If your goal is to do optimization, then the model needs to be adequate for optimization in that it should let you predict the conditions of optimality of your system (see e.g. the adequacy conditions in the papers of Forbes and Marlin). You may also want to validate or refute assumptions; in such a case your model should be built around identifiable parameters whose identified values will help you decide. Finally, if you want to predict accurately the behavior of several measured quantities, you need to develop your model with this in mind. This is in my opinion what detailed models are intended for (although being too detailed and complex often leads to good fitting but poor predictions), the best being always to focus on the outputs (i.e. the measured quantities) that you are really interested in.
- Jimmy Omony added an answer: What do you think of model simplicity? People often use mathematical models to address a physical/biological problem. Likewise, they often claim their models are "simple" even when they are non-trivial to many readers. Any views on what you consider the key features of a "simple model"? Is a mathematical model considered "simple" because of its ease of implementation within a specific program, or does its simplicity have something to do with the number of estimable parameters and quantifiable variables? I am curious what you think!
Nice point of view, James!
- Adam L MacLean added an answer: Under-parameterizing models for data from a special situation?
I have been thinking about the following: I have a set of models which share the characteristic that they each have two rate parameters and another parameter that has a similar interpretation in all models. Now each model is combined with a different model of a mechanism to explain aspects that are not covered by the common part. The data I have for comparing these different mechanisms is a special case for the common part of all models (not related to the mechanism I'm interested in): both rate parameters have to be equal and the third parameter has to be zero.
So my question is now: when comparing the models (transdimensional MCMC) to figure out the best mechanism, should I use the simpler parametrization (with only one rate parameter, disregarding the parameter which is zero), since my data represent this special case?
Or should I use the full model (which, by the way, is worse in terms of the Bayes factor when compared against the simplified model on this special-case data)?
I know the question is a bit abstract; however, I feel adding concrete model details would rather confuse the issue. Since I'm new to this kind of analysis, maybe this is a common case with a common solution, although I couldn't find anything yet.
I'm very interested in your opinions!
Thanks in advance,
From what you describe (and without details of the models), I think you could use the full model rather than the reduced one and incorporate the knowledge you have about the system into your prior, if you then wish to test these models on new data. I.e., if in a preliminary experiment a parameter is measured to be zero, set the prior over this parameter to a point mass at 0. You could change this to a normal prior centred at 0 if you want to account for the uncertainty in the parameter.
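A toy sketch of the difference between a tight and a broad prior centred at 0, using a conjugate normal-normal update (all numbers are illustrative, not from the poster's models):

```python
# Encoding "this parameter was measured to be ~0" as a normal prior
# centred at 0, then updating it with new data (conjugate normal
# likelihood with known noise variance). All numbers illustrative.
def posterior(mu0, var0, data, noise_var):
    # normal prior N(mu0, var0); likelihood N(theta, noise_var) per point
    n = len(data)
    precision = 1.0 / var0 + n / noise_var
    mean = (mu0 / var0 + sum(data) / noise_var) / precision
    return mean, 1.0 / precision

data = [0.12, -0.05, 0.08, 0.02]

# tight prior at 0 (strong previous evidence) vs a broad prior
tight = posterior(0.0, 0.01, data, noise_var=0.04)
broad = posterior(0.0, 10.0, data, noise_var=0.04)
print(tight, broad)  # tight prior keeps the posterior mean closer to 0
```

In the limit var0 -> 0 the prior becomes the point mass at 0 mentioned above, and the data can no longer move the parameter, which is the trade-off between the two suggestions.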
- Ismat Beg added an answer: Are we judging researchers' production correctly? New Index for Quantifying an Individual's Scientific Research Output http://arxiv.org/abs/1305.6026
Judging researchers' production is an ill-posed problem. What we can do is to have the best possible evaluation and then keep on improving it whenever we have further information.
- Victor Christianto added an answer: Can the Ebola virus be considered as an epidemic in a scale-free network?
Some researchers have begun to consider epidemics in scale-free networks. Does the Ebola virus belong to this situation? See Abramson (2001).
And what are the implications for Ebola spread prediction and modelling?
Thanks, Michael. Best wishes.
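As a sketch of what an epidemic on a scale-free network looks like computationally, here is a pure-Python SIR simulation on a preferential-attachment (Barabasi-Albert style) network; the parameters are illustrative and not fitted to Ebola data:

```python
# SIR spread on a scale-free contact network built by preferential
# attachment. Parameter values are illustrative, not fitted to Ebola.
import random

def ba_network(n, m=2, seed=1):
    rng = random.Random(seed)
    targets = list(range(m))            # initial nodes
    edges = {i: set() for i in range(n)}
    repeated = []                       # node list weighted by degree
    for v in range(m, n):
        chosen = set()
        while len(chosen) < m:          # pick m distinct targets
            pool = repeated if repeated else targets
            chosen.add(rng.choice(pool))
        for u in chosen:
            edges[v].add(u)
            edges[u].add(v)
            repeated.extend([u, v])
    return edges

def sir(edges, beta=0.3, gamma=0.1, steps=100, seed=2):
    rng = random.Random(seed)
    state = {v: "S" for v in edges}
    state[0] = "I"                      # seed the infection at a hub
    for _ in range(steps):
        new = dict(state)
        for v, s in state.items():
            if s == "I":
                for u in edges[v]:
                    if state[u] == "S" and rng.random() < beta:
                        new[u] = "I"
                if rng.random() < gamma:
                    new[v] = "R"
        state = new
    return state

state = sir(ba_network(500))
print(sum(s == "R" for s in state.values()))  # final epidemic size
```

The hallmark of the scale-free case, as in Abramson-style studies, is the role of high-degree hubs: seeding or immunizing them changes the outbreak size far more than for a random network.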
- Peter Simon added an answer: How do we model the kinetics of a material with multiple components? As I am trying to modify the Arrhenius equation for processes that do not occur at constant temperature, I was also trying to add another factor: what if the material undergoing a single process (i.e. thermal degradation) has hundreds of components? Do I model the process as if it has only one activation energy, or do I consider different compounds undergoing the same process at different activation energies? How do I modify the Arrhenius equation in this situation? Or does it matter?
Describing the kinetics of multicomponent systems is very difficult, since the components do not react as individuals; the reactions can mutually affect each other. Consider that the temperature dependence of the rate of each individual reaction obeys the Arrhenius equation. In my paper JTAC 88 (2007) 709, DOI: 10.1007/s10973-006-8140-y, it is shown that the Arrhenius equation is the worst approximation of the temperature dependence of complex processes whose individual steps obey the Arrhenius law. Try to apply non-Arrhenian kinetics.
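A numerical illustration of why a multicomponent system is non-Arrhenian overall: the sum of several Arrhenius terms is not itself of Arrhenius form, so the apparent activation energy drifts with temperature (the channel parameters below are made up):

```python
# Parallel first-order reaction channels with different activation
# energies: the effective rate is a sum of Arrhenius terms, and the
# apparent activation energy drifts with temperature.
import math

R = 8.314  # J/(mol K)

def k_eff(T, channels=((1e10, 80e3), (1e12, 120e3), (1e13, 150e3))):
    # each channel: (pre-exponential factor A, activation energy Ea)
    return sum(A * math.exp(-Ea / (R * T)) for A, Ea in channels)

def apparent_Ea(T, dT=1.0):
    # Ea_app = R * T^2 * d ln k / dT  (finite-difference estimate)
    dlnk = math.log(k_eff(T + dT)) - math.log(k_eff(T - dT))
    return R * T * T * dlnk / (2 * dT)

print(apparent_Ea(400.0), apparent_Ea(800.0))
# the apparent Ea changes with temperature: one Arrhenius fit fails
```

At low temperature the lowest-barrier channel dominates, while at high temperature the others contribute, so a single straight line on an Arrhenius plot cannot represent the mixture, which is the point of the answer above.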
- Peter Kloehn added an answer: How does one develop better cell models to characterize common diseases and complex traits?
Whole genome sequencing and genome-wide association studies (GWAS) clearly demonstrate the complexity of common diseases and in the majority of cases risk cannot be defined by a single gene mutation. As a result of the wealth of genetic data, modeling of complex traits has become ever more challenging.
I would be very interested in your opinion on where we should be going in designing better cell models and, ultimately, better animal models. Needless to say, monocausal approaches are dated, even though most of our attempts to understand the molecular underpinnings of diseases are based on monocausal models; the important question is how complex diseases could be modeled in a more appropriate way. Your opinion is most valued.
I certainly agree that mathematical and statistical modelling will be indispensable to better define how genetic variance may affect disease pathways and complex traits. However, systems approaches often fall short of delivering viable hypotheses, particularly when genes or gene pathways are poorly described. This may result in hypotheses that are skewed towards well-described genes.
My question regarding better models to characterize common diseases and complex traits relates to experimental models rather than data mining. For instance, induced pluripotent stem (iPS) cells, isolated from tissues of patients and healthy controls, may be a superior model to the single-gene approach we are generally pursuing. I would be interested to understand whether there are other options to 'custom-design' model systems.
- Miles P Davenport added an answer: What are good resources for learning mathematics and statistics with life sciences examples?
What are good resources for learning mathematics and statistics for the life sciences, with applied examples, assuming only basic knowledge? Are there any free books with an easy guide?
I don't know of a book that covers both mathematics (modelling?) and statistics well, but would suggest:
Modeling the Dynamics of Life, F. R. Adler.
Intuitive Biostatistics, H. Motulsky [the latter is associated with GraphPad Prism].
- Narasim Ramesh added an answer: How do I learn robotics, mathematical modelling and implementation from the very basics?
Hello everybody, my respectful wishes to all.
I am Parthiban; I did my postgraduate studies in the area of embedded systems.
I have some idea about microcontroller programming (both in assembly and C) and digital electronics.
I want to do my research in the area of mathematical modelling of nonlinear systems, and the implementation and control of robots and quadcopters using fuzzy logic, neural networks or genetic algorithms,
but I know nothing about the above-mentioned areas.
So I request you all to please provide your valuable guidance to me to learn about robotics, Quadcopter modelling and implementation, Fuzzy logic, Neural networks, Genetic algorithm.
Especially, first about Mathematical modelling of Linear control systems and then more importantly about Mathematical modelling of Non-Linear control systems.
Please help me: please provide useful study materials and research papers that describe the above concepts in an easily understandable way, and please share your suggestions in this research area.
IMHO you have enough background to begin work. I suggest you first identify an application by observing problems faced by society, e.g.:
1. last-kilometre safety for women
2. identification of pests in storage
3. a robot for ensuring the safety of children at school gates
4. robots to carry gas cylinders up stairs
5. robots for receiving outpatients for doctors
6. robots to monitor crowds
Then it will be clear what you need when you look at what has already been done, so developments and improvements can be identified, which will require the specific skills you mentioned. Otherwise, I think there is too much material out there, which is distracting.
- Ralf Gollmer added an answer: What are recent, well-established methods for solving a stochastic mathematical model?
There are methods such as the sample average approximation method and the progressive hedging algorithm for solving mixed-integer stochastic optimization problems.
Addendum: the notion of a two- or multistage stochastic problem is not the time horizon of a dynamic problem, though in many cases the choice of stages is influenced by time, since it is assumed that the data realizations are revealed over time. In these problems it is assumed that part of the variables has to be decided without any knowledge of the realization; afterwards, as part of the data realization becomes known, more variables have to be decided, each time assuming knowledge of the data realizations already revealed, but in a nonanticipative way (i.e. without supposed knowledge of the other data), based only on knowledge of the possible scenarios.
Thus your problem does not belong to that class of problems, and none of the methods for two- or multistage stochastic optimization is applicable.
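For readers whose problem does fit the two-stage mold, here is a toy sketch of the sample average approximation on a newsvendor-style problem (all numbers illustrative): the first-stage decision is made before the uncertainty is revealed, and SAA replaces the true distribution by a finite scenario sample.

```python
# Sample average approximation (SAA) for a toy two-stage problem:
# choose order quantity q now (first stage); demand d is revealed;
# recourse is to sell min(q, d). Profit = p*min(q, d) - c*q.
import random

def saa_newsvendor(price=5.0, cost=2.0, n_scenarios=2000, seed=3):
    rng = random.Random(seed)
    demands = [rng.uniform(50.0, 150.0) for _ in range(n_scenarios)]

    def avg_profit(q):
        return sum(price * min(q, d) - cost * q for d in demands) / len(demands)

    candidates = [q / 2 for q in range(100, 301)]  # grid from 50 to 150
    return max(candidates, key=avg_profit)

q_star = saa_newsvendor()
print(q_star)  # near the (p-c)/p = 0.6 quantile of demand, i.e. ~110
```

The scenario sample stands in for the unknown distribution, and the optimal q converges to the critical-ratio quantile as the number of scenarios grows.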
- Mohammad Reza Garoosi added an answer: Can anyone help with a particle swarm optimization algorithm? Can anyone help me with a PSO algorithm? Is there any C/C++ program available?
You can use the MATLAB function provided by Ebbesen et al. (2012) in the following article:
Ebbesen, S., Kiwitz, P. & Guzzella, L. (2012), A Generic Particle Swarm Optimization Matlab Function, American Control Conference, Montreal, Canada.
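If a reference implementation is all that is needed, the algorithm itself is short; here is a minimal inertia-weight PSO sketch in plain Python (an illustration of the standard formulation, not the Ebbesen et al. code):

```python
# Minimal particle swarm optimization (PSO), inertia-weight form.
import random

def pso(f, dim=2, n_particles=30, iters=200, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5, seed=4):
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]            # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

sphere = lambda x: sum(v * v for v in x)
best, best_val = pso(sphere)
print(best_val)  # close to 0
```

The same structure translates almost line for line to C or C++ if that is the target language.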
- Fabio Mavelli added an answer: How can I simulate two sets of differential equations with different time scales (seconds and milliseconds) simultaneously in MATLAB R2013a Simulink? We have two sets of differential equations for neuron dynamics and astrocyte Ca2+ oscillations, but with different time scales (Di Garbo 2009). Is it correct that to simulate these sets simultaneously one must use integrators with a coefficient of 1000 to rescale one set?
Of course you can approximate the ODE set by decoupling the equations, but my suggestion is to first try a MATLAB numerical procedure for stiff ODE sets, which could easily solve your problem.
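A tiny illustration of why stiff methods matter here: with rates differing by a factor of 1000, an explicit step sized for the slow variable blows up on the fast one, while an implicit step does not (a scalar toy system, not the Di Garbo model):

```python
# Toy system with two time scales a factor of 1000 apart:
#   dx/dt = -x        (slow, "seconds")
#   dy/dt = -1000*y   (fast, "milliseconds")
h = 0.01      # step chosen for the slow variable
steps = 200

# explicit (forward) Euler: the fast variable sees |1 - 1000*h| = 9 > 1
x, y = 1.0, 1.0
for _ in range(steps):
    x, y = x + h * (-x), y + h * (-1000.0 * y)
explicit_y = y            # grows without bound

# implicit (backward) Euler: y_{n+1} = y_n / (1 + 1000*h), stable
x, y = 1.0, 1.0
for _ in range(steps):
    x, y = x / (1.0 + h), y / (1.0 + 1000.0 * h)
implicit_y = y
print(abs(explicit_y), implicit_y)  # huge vs essentially zero
```

This is what MATLAB's stiff solvers (e.g. ode15s) handle automatically: the step size can follow the slow dynamics without the fast modes destabilizing the integration, so no manual 1000x rescaling is needed.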