
# Stochastic - Science topic

Questions related to Stochastic
Question
Or can we say that if a graph is balanced, its weight matrix is doubly stochastic?
It indicates nothing about the directed version.
Question
Assume there are m patients. Each patient records his/her pain level on an ordinal scale (from "not at all" to "excruciating") at time points t_1, t_2, ..., t_{n_i}, where the number of time points n_i may differ across patients. Suppose we want to know whether patients are improving over time. Is there any existing literature on testing that?
Question
1. We know that the stochastic monotonicity condition in the one-dimensional case is as follows: sum_{j>=k} q(j|i) <= sum_{j>=k} q(j|i+1) for all i, k in the state space S such that k is not equal to i+1, where q(j|i) is the transition rate associated with the continuous-time Markov chain. The QUESTION is: what is the analogous condition in a two-dimensional case?
What about it? The opening statement is wrong, that’s all. The fact that the labels of the states are integers doesn’t imply anything about dimensionality. It implies that the states are countable. And that’s all that’s needed.
Question
Hello,
Can you help me regarding MATLAB solver for a system of stochastic delay differential equations? To be specific the system is delay differential equation where parametric stochasticity is used.
Thank you
Hi,
I found this (attached); it was developed to integrate a specific system of SDDEs with 2 coupled oscillators, where the delay is in the coupling. You can extract the method and test it against the results obtained by E. Buckwar, but note that she uses an Itô formulation, whereas here it is more of an Euler formulation; please tell me if that is OK.
I can't find the author, except for this address:
mw@eml.cc 2013
Regards
Julien
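In the meantime, a minimal Euler-Maruyama step for a scalar SDDE may help as a starting point. Everything below is illustrative, not taken from the attached code: the drift/diffusion functions, parameters, the constant pre-history, and the assumption that the delay is an integer multiple of the step.

```python
import numpy as np

def euler_maruyama_sdde(f, g, x0, tau, T, dt, seed=0):
    """Euler-Maruyama for dX = f(X(t), X(t-tau)) dt + g(X(t)) dW.
    Assumes tau is an integer multiple of dt; history X(t) = x0 for t <= 0."""
    rng = np.random.default_rng(seed)
    n = int(round(T / dt))
    lag = int(round(tau / dt))
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        x_lag = x[k - lag] if k >= lag else x0   # constant pre-history
        dW = rng.normal(0.0, np.sqrt(dt))        # Brownian increment
        x[k + 1] = x[k] + f(x[k], x_lag) * dt + g(x[k]) * dW
    return x

# example: linear delayed negative feedback with multiplicative noise
path = euler_maruyama_sdde(f=lambda x, xd: -0.5 * xd,
                           g=lambda x: 0.1 * x,
                           x0=1.0, tau=1.0, T=10.0, dt=0.01)
```

Note this is the (explicit) Euler-Maruyama scheme interpreted in the Itô sense, which is the formulation Buckwar's convergence results use; parametric stochasticity can be encoded through f and g.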
Question
Hello,
I am trying to estimate a stochastic frontier model in the panel data set using stata13. In doing so, I prefer to use the TRE model. Even though I am aware that this model allows estimating the frontier model and the inefficiency determinants at a time(following a single step), I am not sure if I am following the correct procedure.
Could someone help me?
sfpanel ln y ln x1 lnx2 .....lnxn, model (tre) dist(hnormal) usigma(z1,z2,z3...zn) vsigma()
The assumption here is z1....zn are inefficiency determinants and also considering heteroscedasticity at the same time.
Thank you!
Many thanks, Abebayehu Girma Geffersa. It was two years ago and, as you rightly said, I followed the procedure that I mentioned here, and it is the correct approach. Akbar Muhammad Soetopo, please follow the same procedure that I wrote here as well. For a detailed answer, please read this paper: Belotti, F., Daidone, S., Ilardi, G., & Atella, V. (2013). Stochastic frontier analysis using Stata. The Stata Journal, 13(4), 719-758.
Question
I have a multi-stage stochastic programming model with 3 groups of variables: the first group takes values at the beginning of the planning horizon, before the first realization, does not change until the end of the horizon, and has no t index (binary and continuous); the second group are "here and now" variables that take values before each realization and are continuous; the third group are "wait and see" variables that take values after each realization (binary and continuous). The model is an SMINLP. I converted it to an SMILP through linearization and solved it with the CPLEX solver on a small number of scenarios. I now want to assume a continuous distribution for the stochastic parameter, generate a large number of scenarios by sampling, and run a decomposition algorithm. Which is more efficient for this model: nested Benders decomposition or progressive hedging?
If anyone has experience, thank you in advance for your help.
Question
I am trying to generate a streamflow series within a stochastic framework. I would like to find some criteria to statistically compare the generated and observed series.
Most importantly, if I am comparing different generators, can I apply such criteria to get an accuracy value for each generator?
Regards,
AIC and BIC can be two options. Generally, you can develop an arbitrary criterion based on the weight of observations (time points) and the definition of distance (between observation and expectation). For example, you can define distance as the square or absolute value of difference.
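As an illustration of such an arbitrary criterion, here is a minimal sketch; the equal weighting and the choice between squared and absolute distance are just the examples mentioned above, not a standard.

```python
import numpy as np

def criterion(observed, generated, distance="square", weights=None):
    """Weighted mean of pointwise distances between observed and
    generated series; lower score = generator closer to observations."""
    obs = np.asarray(observed, float)
    gen = np.asarray(generated, float)
    w = np.ones_like(obs) if weights is None else np.asarray(weights, float)
    w = w / w.sum()                               # normalize observation weights
    d = (obs - gen) ** 2 if distance == "square" else np.abs(obs - gen)
    return float(np.sum(w * d))

obs = np.array([1.0, 2.0, 3.0, 4.0])
gen_a = np.array([1.1, 1.9, 3.2, 3.8])            # close to observations
gen_b = np.array([0.5, 2.5, 2.5, 4.5])            # farther away
score_a = criterion(obs, gen_a)
score_b = criterion(obs, gen_b)
```

The same scoring function can then be applied to each candidate generator to rank them, keeping the weights and distance definition fixed across generators.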
Question
I am looking for a scientific field or real-life subject where I can use some convex analysis tools like Stochastic Gradient Descent or unidimensional optimization methods. Any suggestions?
Convex optimization theory has an important aspect: the duality gap. This gap occurs when the constraints are badly posed. To search for such problems, we use the analysis of orders of smallness of infinitesimal quantities.
Question
I ran a Battese and Coelli 1995 sfpanel model in Stata 12.1 of the following translog equation
sfpanel lny lnl lnm lne lnk lnksq lnlsq lnesq lnklnl lnmsq lnklne lnklnm lnllne lnllnm lnelnm year, model(bc95) dist(tn) emean( for for5 for10 for15 for20 for25 exp_firm firm_size) ort(o)
Aimed at establishing the effect of FDI on efficiency and productivity at firm level.
I wish to estimate TFP whose components are Technical Change(TC), Technical Efficiency Change (TEC) and Scale Efficiency Change (SEC).
1. Can this be done directly in Stata?
2. And what is the syntax considering the 4-input translog equation?
Thanks.
Hello, I would like to know the command to decompose total factor productivity into technical efficiency and technological progress.
Question
Dear all, I would like to test the half-normal distribution (of the error component from stochastic frontier models) against alternative distributions. Does anybody have experience doing this in Stata? Could anybody point me towards relevant literature? Thanks in advance.
Dear Andrew, I will try that. Thank you very much.
Question
I wish to transform data from long-memory stochastic volatility models to short-memory volatility models to visualize the effects. Is there any transformation protocol to shift data from a long-memory stochastic model to a short-memory volatility model, or is there no mathematical relationship between the two models? Further, can MATLAB be used for such a transformation?
As far as I know, filtering the long-memory component out of the data is possible in the frequency domain, but transforming long-memory to short-memory data in the time domain seems to be an open and interesting research question.
Question
Dear fellows
I am trying to estimate cost efficiencies using a Fourier-flexible stochastic cost frontier, but I can't find Stata commands for this. Kindly assist.
This article published in the The Stata Journal (2012) talks about Stochastic frontier analysis using Stata
Question
Hello,
I am currently working on sensitivity analysis in the context of AHP. I use the online tool BPMSG from Goepel, maybe someone here knows it. However, I have a problem with the traceability of the results. Let's assume that there are exactly 3 criteria in the AHP (C1,C2,C3). Then I would like to know how the final value for an alternative (a1) results if one of the criteria changes in weighting, right?
I'll just say C1 decreases by x. However, the value x that is taken away from C1 must be distributed to C2 and C3. I just wonder which method is used to do this. Is x simply distributed equally to C2 and C3 or does this happen according to the share of C2 or C3 in the sum of C2 and C3?
When I do that, I get the following for the remaining two criteria:
(C1-x) = New C1
(C2 + (C2 / (C2 + C3)) * x) = New C2
(C3 + (C3 / (C2 + C3)) * x) = New C3
Unfortunately, however, I do not know if this is correct. If I multiply the criteria with the corresponding values of alternative a1 and combine the whole thing to a final value, I can calculate the same again with the other alternatives. When I compare the graphs to see how big x has to be to change the final prioritization of the alternatives, I always get the wrong values compared to the online tool. Therefore I would like to know if the redistribution of the weights is correct.
I hope someone can help me despite the long question. Thanks a lot!
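For what it's worth, the proportional redistribution described in the question can be sketched in code as follows. Whether the BPMSG tool actually uses this rule is exactly the open question, so treat this as one candidate rule, not the tool's documented behavior.

```python
def redistribute(weights, idx, x):
    """Decrease weights[idx] by x and redistribute x to the other
    criteria proportionally to their current shares, so the weights
    still sum to the same total."""
    w = list(weights)
    others = [i for i in range(len(w)) if i != idx]
    total_others = sum(w[i] for i in others)
    w[idx] -= x
    for i in others:
        w[i] += (w[i] / total_others) * x
    return w

# example with three criteria C1=0.5, C2=0.3, C3=0.2 and x=0.1:
w = redistribute([0.5, 0.3, 0.2], idx=0, x=0.1)
# C2 gets 0.3/0.5 of x, C3 gets 0.2/0.5 of x
```

Comparing the alternative scores under this rule against an equal-split rule (x/2 to each remaining criterion) should reveal which convention the online tool follows.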
Kindly visit..
Question
Is this correct? In stochastic mathematics, the arithmetic mean and standard deviation are important. In fuzzy mathematics and interval reasoning, the arithmetic mean and standard deviation are not always important.
Please see the following paper:
Comparison of deterministic, stochastic and fuzzy logic uncertainty modelling for capacity extension projects of DI/WFI pharmaceutical plant utilities with variable/dynamic demand
Question
Hello everybody
According to the attached snapshot from the appendix of this article,
my question is: if we already have the transition probability matrix P, how can we numerically calculate v̂ = α_0 (I − P)^{-1} as the limit of the recurrence v(t+1) = v(t) P + α_0?
I have got the full text of your book. It looks great.
Thank you very much for your fruitful comments.
Regards,
Ahmed
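A minimal numerical check of that recurrence is sketched below. The matrix P and vector α_0 are made-up examples; convergence of the iteration assumes the spectral radius of P is below 1 (e.g. P substochastic with some leakage).

```python
import numpy as np

P = np.array([[0.5, 0.2],
              [0.1, 0.6]])          # substochastic: spectral radius < 1
alpha0 = np.array([1.0, 0.5])

# iterate v(t+1) = v(t) P + alpha0 until it stops changing
v = np.zeros_like(alpha0)
for _ in range(1000):
    v_next = v @ P + alpha0
    if np.max(np.abs(v_next - v)) < 1e-12:
        break
    v = v_next

# closed form: v = alpha0 (I - P)^{-1}
direct = alpha0 @ np.linalg.inv(np.eye(2) - P)
```

The iterate and the closed form agree to numerical precision, which is the sense in which v̂ is the limit of the recurrence.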
Question
Hello Everyone,
I am recently working on the topic " Deep Reinforcement Learning in production scheduling".
Most recently I am working on the simulation of the environment, the state and action engineering process.
The uprising question for me here is: how are uncertainty and constraints taken into account in a RL project?
In stochastic optimization there is a probability distribution of several factors which are taken into account. Also the solution space is restricted with constraints.
How are these factors considered in model-free RL? Is the uncertainty included in the simulation of the environment? Are the state or action space restricted through constraints?
Best regards,
Christoph
Dear Christoph,
when you use a model-free reinforcement learning algorithm like Q-learning, you do not need to model the environment, but you still need to define the action space and state space. I recommend the MushroomRL library or the gym library in Python
to simulate your environment. For more information, you could review these documents:
MushroomRL — MushroomRL 1.7.0 documentation
Making a custom environment in gym | by Ashish Poddar | Medium
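To make the environment idea concrete, here is a toy scheduling environment written in the gym style. The class name, reward values, and the penalty-based constraint handling are all illustrative assumptions; in a real project you would subclass gym.Env and declare observation_space and action_space with gym.spaces.

```python
import numpy as np

class TinySchedulingEnv:
    """Toy production-scheduling environment in the gym style:
    state = remaining processing times of n jobs, action = which job
    to run next. Invalid actions (already-finished jobs) are penalized,
    which is one simple way to encode constraints in model-free RL;
    stochasticity would enter through randomized processing times."""
    def __init__(self, proc_times=(3, 2, 4)):
        self.initial = np.array(proc_times, float)

    def reset(self):
        self.remaining = self.initial.copy()
        return self.remaining.copy()

    def step(self, action):
        if self.remaining[action] <= 0:           # constraint violation
            return self.remaining.copy(), -10.0, False, {}
        self.remaining[action] -= 1.0
        done = bool((self.remaining <= 0).all())
        reward = 1.0 if done else -0.1            # encourage finishing fast
        return self.remaining.copy(), reward, done, {}

env = TinySchedulingEnv()
obs = env.reset()
obs, r, done, info = env.step(0)                  # run job 0 for one step
```

Uncertainty can be injected inside step (e.g. random job durations), so the agent only ever sees its effects through transitions and rewards, which is exactly the model-free setting.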
Question
I want to design different wind-speed scenarios for a day, i.e., for 24 hours, to do stochastic optimization. Can anyone please provide code or files?
thanks
Question
Let me consider harmonic oscillations x'' + x = f_t(t), where f_t(t) describes a thermal stochastic force with <f_t(t) f_t(t')> \sim \delta(t - t'). I have already understood that I can switch from this differential equation to integral equations, where the integrals can be taken in the Itô or Stratonovich sense. Which one corresponds to the real physical situation?
Another case: escape probability. If I want to numerically extract the mean lifetime of a classical particle in a potential well under thermal fluctuations, what kind of stochastic modeling should I use?
All this and more is described in van Kampen's book "Stochastic Processes in Physics and Chemistry".
Question
Hi!
I have the following scientific problem: there is a nonlinear SDE
dX = (a(t)+b*(X-X0))*X*dt+c*X*dB
If a = 0 and b is a negative number, then the negative feedback always "pushes" the path of X back toward X0. I noticed that the mean of the X paths shows oscillation-like behavior with a characteristic frequency. My question: what is the name of this phenomenon? What should I search for? Is there a book or article about this? I know that stochastic oscillators exist, but those are second-order ordinary differential equations perturbed by Brownian motion, which is not what I am looking for.
Thank you very much for your help!
Tamas Hajas
Write the equation in a form more useful for calculating anything:
dX/dt=(a(t)+b(X-X0))X+cXη(t)
where η(t) is the noise.
Now write a(t)X+b(X-X0)X=-dU(X)/dX, which shows that this equation describes the motion of a particle in a potential U(X)=-(1/2)(a(t)-bX0)X^2-(b/3)X^3, in the presence of multiplicative noise.
So, for a(t)=0, the term that’s quadratic in X, does describe oscillations with frequency squared bX0, with the cubic term describing escape from the well.
However the multiplicative noise complicates things.
If we divide out by X and set Y(t)=ln X, the noise for Y is additive.
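A quick Euler-Maruyama experiment along these lines lets one inspect the mean path directly; all parameter values below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, X0, c = 0.0, -1.0, 1.0, 0.2        # example parameters, b < 0
dt, n_steps, n_paths = 0.01, 2000, 500

X = np.full(n_paths, 1.5)                 # start away from X0
means = []
for _ in range(n_steps):
    dB = rng.normal(0.0, np.sqrt(dt), n_paths)
    # dX = (a + b (X - X0)) X dt + c X dB, Euler-Maruyama step
    X = X + (a + b * (X - X0)) * X * dt + c * X * dB
    means.append(X.mean())
```

Plotting `means` against time shows how the ensemble mean relaxes toward X0 and whether any oscillatory transient is present for a given parameter set.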
Question
There are many methods based on stochastic arithmetic and also on floating-point arithmetic. What are the advantages and disadvantages of mathematical methods based on stochastic arithmetic?
Dear Prof. Imtiaz Husain
Thank you for your answer. I have some papers about applications of stochastic arithmetic, but I need to know more, and real, applications.
Question
Is stochastic gradient descent an incremental machine learning method? How does stochastic gradient descent relate to incremental machine learning/ online learning?
Also, I am new to incremental ML and need to build my knowledge about it .. Any good beginners references you can recommend will be appreciated :-)
Stochastic gradient descent is a very popular and common algorithm used in various machine learning methods; most importantly, it forms the basis of neural network training. In this article, I have tried my best to explain it in detail, yet in simple terms. I highly recommend going through linear regression before proceeding with this article.
Gradient, in plain terms means slope or slant of a surface. So gradient descent literally means descending a slope to reach the lowest point on that surface.
Regards,
Shafagat
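A minimal sketch of stochastic gradient descent on a one-feature linear regression may help connect it to the online-learning question: each update uses a single sample, so the same loop works whether the samples sit in memory or arrive one at a time. Data, learning rate, and epoch count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1))
y = 3.0 * X[:, 0] + 1.0 + 0.1 * rng.normal(size=200)   # true w = 3, b = 1

w, bias, lr = 0.0, 0.0, 0.05
for epoch in range(30):
    for i in rng.permutation(len(X)):      # visit samples in random order
        pred = w * X[i, 0] + bias
        err = pred - y[i]
        w -= lr * err * X[i, 0]            # gradient of squared error, one sample
        bias -= lr * err
```

Because the update never needs the full data set, dropping the outer epoch loop and feeding each new observation once gives the incremental/online variant of the same algorithm.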
Question
Visscher, M. L. (1973). Welfare-Maximizing Price and Output with Stochastic Demand: Comment. American Economic Review, 63(1), 224-229.
The derivation of formula (4) in Visscher (1973) involves differentiating a double integral. How are formulas (5) and (6) obtained? The attached file gives my attempted derivation, but I can't get the same result as equation (5) in the paper. I hope you can help me point out what went wrong in my solution process. Thank you!
Equations (5) and (6) are obtained from eq. (4) by differentiation. The tricky part is that the variables P and Z also appear in the limits of integration, not only in the integrands, so Leibniz's rule for differentiating under the integral sign applies and the calculation is a bit more involved. I'll try to verify them.
Question
Is there any rational formulation for the velocity slip boundary conditions for the stochastic Landau-Lifshitz Navier-Stokes Equations?
YES, I think you can use an alternative approach, such as a saddle-point approximation. See e.g.,
Question
Dear all,
Could anybody provide the names of researchers and literature in the area of stochastic calculus and stochastic integration (Itô integration)?
1. Introduction to Stochastic Calculus for Finance: A New Didactic Approach (Lecture Notes in Economics and Mathematical Systems)
Dieter Sondermann
2. From Measures to Itô Integrals (AIMS Library of Mathematical Sciences)
Ekkehard Kopp
3. Stochastic Integration by Parts and Functional Itô Calculus
Vlad Bally, Lucia Caramellino, Rama Cont (auth.), Frederic Utzet, Josep Vives (eds.)
Question
I have gone through Adrian's papers on this topic, but details are still missing. Linear stochastic estimation is a powerful tool for vortex packet identification.
There are some really good papers on this topic aside from the Adrian paper you are referring to:
The first paper, by José Hugo Elsas and L. Moriconi, has the details you are looking for.
There is another paper by Jonathan H. Tu that you can see
You can also read the dissertation by Yangzi Huang of Syracuse University to read more about vortex detection:
Question
What is a stochastic optimization problem, and what is a combinatorial optimization problem?
Also, how can I identify whether the problem I am working on is a continuous or a discrete optimization problem?
Discrete optimization would be something like the classic Traveling Salesman problem - you are finding a sequence of discrete points that satisfies some optimization criteria. Continuous optimization involves finding a set of extreme points on a continuous hypersurface that is defined by a continuous cost function. Hamiltonian and Lagrangian approaches from physics are very old forms of this type of optimization.
Online search engines provide a treasure-trove of hits on stochastic and deterministic approaches. Give it a whirl!
Question
Most optimization methods use upper- and lower-bound constraints to handle this issue. At the same time, each variable can significantly affect the search direction of the optimization method toward its optimal value. Thus, resetting variables to the lower and/or upper bounds delays finding the optimal values in each iteration.
Dear Dr. Cenk and Dr. Ghosh,
Thank you so much for your response. In fact, I have developed a new strategy which can handle this issue practically and return optimal solutions within feasible ranges rather than at the upper and/or lower bounds. The results showed excellent performance of the proposed algorithm. The MATLAB code will be released in the near future.
Best regards,
Hussein
Question
Hi all,
Does anyone have an idea about how to make a stochastic based crack distribution model which we can use further in the finite element model to predict the growth?
Several stochastic crack-distribution models have been researched in recent years. However, some fundamental papers in this area still influence current research, as their models are simple, elegant, and effective. Some of them are:
1. A stochastic theory of fatigue crack propagation (Y. K. Lin, J. N. Yang, AIAA Journal, 1985)
2. Stochastic modeling of fatigue crack dynamics for on-line failure prognostics (A. Ray, S. Tangirala, IEEE Transactions on Control Systems …, 1996)
3. A study of stochastic fatigue crack growth modeling through experimental data (W. F. Wu, C. C. Ni, Probabilistic Engineering Mechanics, 2003)
4. A stochastic model for the growth of matrix cracks in composite laminates (A. S. D. Wang, P. C. Chou, S. C. Lei, Journal of Composite …, 1984)
5. A simple second order approximation for stochastic crack growth analysis (J. N. Yang, S. D. Manning, Engineering Fracture Mechanics, 1996)
6. Numerical modelling of concrete cracking based on a stochastic approach (P. Rossi, S. Richer, Materials and Structures, 1987) [https://link.springer.com/article/10.1007/BF02472579]
7. Stochastic modeling of fatigue crack growth (K. Ortiz, A. S. Kiremidjian, Engineering Fracture Mechanics, 1988)
I hope these articles will be of some help to your research.
Question
For orientation: I am looking for theories that can serve as the basis of a goal-based investment approach. So far I have found some approaches using
• Portfolio Optimization with Mental Accounts (POMA)
• Asset Liability Management (ALM)
• Stochastic Dominance (SD)
Do you have any other suggestions?
Which approach is most common and accepted?
Based on the substitution of personal goals for market goals - these are more theories from the field of behavioral finance. As we say: a tit in the hand is better than a crane in the sky!
Question
Hi everyone, the references below are written in LaTeX, and I need help identifying which bibliography style they use. I need the style used and the packages that support it.
Thank you
H. Crauel and F. Flandoli, Attractors for random dynamical systems, Probab. Th. Relat. Fields 100 (1994) 365–393.
H. Kunita, Stochastic Flows and Stochastic Differential Equations (Cambridge Univ. Press, 1990).
R. Lefever and J. Turner, Sensitivity of a Hopf bifurcation to external multiplicative noise, in Fluctuations and Sensitivity in Nonequilibrium Systems, eds. W. Horsthemke and D. K. Kondepudi (Springer-Verlag, 1984), pp. 143-49.
P. Ruffino, "Rotation numbers for stochastic dynamical systems", PhD thesis, University of Warwick, 1995.
Thanks @Ette Etuk
Question
Elections, not just voting, can become trustworthy.
Additional methods, beyond the ones that made voting trustworthy (choose your favorite), can be applied. Elections are not used to choose the captain of a ship, said Socrates, 2500 years ago. One can use technology to help. One has to verify competence, reciprocity, and (for fairness) anonymity.
Although voting is deterministic (all ballots are counted), information can be treated stochastically using information theory. Error considerations, including faults, attacks, and threats by adversaries, can be explicitly included. The influence of errors may be corrected to achieve an election outcome error as close to zero as desired (error-free), with AI providing many copies of results without voter identity. A voting method to do so is explained at https://www.researchgate.net/publication/286459956_The_Witness-Voting_System
How about the next step, fair elections? Can one learn from the events in the US in Jan/6?
One needs to be more attentive and not just believe politicians.
For example, according to France and Brazil, the airplane was invented by Santos Dumont, who flew around the Eiffel Tower, and though he won the International prize for it, this is not accepted in the US, it is not even mentioned.
I see that other countries do not believe in "America First." The leadership stands divided.
Some are just grateful to be born and live here, hopeful to make contributions. China is passing the US on 5G. In short, the more one advances, the more opportunities for others to contribute and have success.
Here, elections must be based on trust, not just voting. As Stalin said, "who counts the votes controls the voting." Many countries don't even consider the US a democracy, because there is no direct vote here yet -- no popular vote. The president is elected indirectly. Maybe yesterday's problem will do away with the slave-era Electoral College. We will see.
Second, from size, India has a bigger claim to "largest democracy."
Question
Instead of manually tuning an algorithm's parameters, it is recommended to use automatic algorithm-configuration software, mostly because it has been shown to increase the algorithm's performance manyfold. However, there are differences among the proposed configuration tools, and beside those listed in (Eiben & Smit, 2011) it is important to gather experiences from researchers. I would like to hear how one decides on the stopping criteria, on parameter values, or on heuristic steps within a stochastic algorithm... there are so many questions.
As you mentioned, parameter-tuning studies for a metaheuristic are quite important. Researchers should determine proper control parameters for their optimization problem to increase the success of the algorithm. However, many researchers use the algorithm parameters suggested by the developers, as tuning can be a time-consuming trial-and-error task. I agree that self-adaptive versions of these algorithms can increase both effectiveness and performance compared to the original versions; however, they can require the definition of extra parameters in the algorithm. In my own work, I prefer to use the original versions of the algorithms together with a parameter-tuning study. Besides, I use two termination criteria: a predefined maximum number of generations and a tolerance value. If the algorithm achieves a misfit value less than the tolerance, it stops before reaching the maximum number of generations. Sometimes I also take into account a number of successive generations: for instance, if the solution does not improve during the last 30 generations, I stop the algorithm. This relatively decreases the high computational cost caused by the many executions of the forward equation, which is the biggest drawback of global optimization compared to derivative-based approaches for high-dimensional optimization problems.
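The three stopping rules mentioned in this thread (a generation budget, a tolerance on the misfit, and a no-improvement patience window) can be sketched generically; all names and values below are hypothetical.

```python
import random

def run_heuristic(objective, candidate_gen, max_gens=500, tol=1e-6, patience=30):
    """Generic stopping logic for a stochastic search: stop at max_gens,
    when the best misfit drops below tol, or when no improvement is seen
    for `patience` consecutive generations."""
    best = float("inf")
    stale = 0
    for gen in range(max_gens):
        value = objective(candidate_gen())
        if value < best - 1e-12:      # strict improvement
            best, stale = value, 0
        else:
            stale += 1
        if best < tol or stale >= patience:
            break
    return best, gen + 1

# toy example: random search on f(x) = x^2 over [-1, 1]
random.seed(0)
best, used = run_heuristic(objective=lambda x: x * x,
                           candidate_gen=lambda: random.uniform(-1, 1))
```

In a real metaheuristic the candidate generator would be the population update, but the termination logic stays the same, and `patience` is the knob that trades forward-model evaluations against solution quality.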
Question
I am trying to understand parametric weather generator models for the purpose of downscaling GCM data. This is my first time working on this topic, and I am finding it hard to understand how exactly these weather generators are used. Do you use a package in software like R, or are there pre-existing weather generator programs that can be used directly to generate results? While there is literature on the theory of parametric weather generators, I cannot find any resources that help with understanding the implementation. Any resources or explanation regarding this will be helpful!
Question
Hello, I would like to know any experiences about stochastic optimization software or possibility to prepare it in Python for example. I am looking for an optimization process in Revenue Management. Many thanks for your answers. MP
In Python you can use the scikit-learn library, or you can use other software such as Maple or GAMS. It depends on your needs.
Question
There has been rich literature describing how to reformalize stochastic MPC in linear systems into deterministic optimization. However for nonlinear system, propagation of uncertainty through time seems very complicated, so most literature I find solve nonlinear stochastic MPC with probabilistic constraints using sampling based approaches. I am new to nonlinear stochastic MPC, so I am wondering whether there's any existing library/toolbox for nonlinear stochastic MPC with probabilistic constraints? Or if there's no such convenient tools, based on your practical experience, which paper do you recommend me so that I can implement the algorithms myself?
1- Stochastic Model Predictive Control: An Overview and Perspectives for Future Research
DOI: 10.1109/MCS.2016.2602087
2- Stochastic model predictive control — how does it work?
By: Tor Aksel N. Heirung and Joel A. Paulson
3- Stochastic Nonlinear Model Predictive Control with Probabilistic Constraints, by Ali Mesbah, Stefan Streif, Rolf Findeisen, and Richard D. Braatz
Question
Please, I need to know the best metaheuristic and hybrid metaheuristic algorithms (other than the genetic algorithm) for finding optimal or near-optimal solutions of stochastic scheduling problems.
All metaheuristic optimization approaches are alike on average in terms of their performance. Extensive research in this field shows that an algorithm may be the topmost choice for some classes of problems while being an inferior selection for other types. On the other hand, since most real-world optimization problems have different needs and requirements that vary from industry to industry, there is no universal algorithm or approach that can be applied to every circumstance, and it therefore becomes a challenge to pick the right algorithm that sufficiently suits these essentials.
For more information on this subject, refer to page 40 of the following article:
Question
Please kindly suggest some good FEM software for the numerical solution of both time-dependent and time-independent stochastic PDEs.
For software refer following article...
Question
I would be grateful for suggestions to solve the following problem.
The task is to fit a mechanistically-motivated nonlinear mathematical model (4-6 parameters, depending on version of assumptions used in the model) to a relatively small and noisy data set (35 observations, some likely to be outliers) with a continuous numerical response variable. The model formula contains integrals that cannot be solved analytically, only numerically. My questions are:
1. What optimization algorithms (probably with stochasticity) would be useful in this case to estimate the parameters?
2. Are there reasonable options for the function to be optimized except sum of squared errors?
Dear;
You can solve the integrals numerically, either by implementing a quadrature yourself or by using software.
Regards
Question
In a population dynamical system, the non-explosion property is often not good enough but the property of ultimate boundedness is more desired. The conditions for the ultimate boundedness are much more complicated than the conditions for the non-explosion.
Question
In the article "Random Ordinary Differential Equations" (Journal of Differential Equations 7, 538-553, 1970), J. L. Strand refers to his doctoral thesis, "Stochastic Ordinary Differential Equations" (University of California, Berkeley, 1968), and also to the doctoral thesis "Random Ordinary Differential Equations" (University of California, Berkeley, 1968) by R. Edsinger.
Do you know how to get these two doctoral theses or does anyone have these two papers?
see
Random Ordinary Differential Equations and Their Numerical Solution, by Xiaoying Han and Peter E. Kloeden. Springer Singapore, 2017.
Random Differential Equations in Science and Engineering, by T. T. Soong (ed.). Academic Press, 1973.
Question
Stochastic mortality models (SMM) have often been used by insurance companies to model mortality risk in general, but I want to apply and restrict them to childhood mortality. How do I go about it?
I hope so too. Thanks Dr. @Hom Nath Chalise.
Question
Dual control in stochastic optimal control suggests that the control input has a probing action that affects, with non-zero probability, the uncertainty in the measurement of the state.
That means we have an error covariance matrix at each state.
The same matrix is also used by the Kalman filter, so how is dual control different?
Dear Sandeep,
These are the dual objectives to be achieved; in particular, a major difficulty consists in resolving the exploration/exploitation (E/E) trade-off.
Best regards
Question
I'm working on comparing latent space representations of image patches which are encoded as multivariate normal distributions over the respective latent space.
Which metrics - besides (symmetric) KL-divergence, Hellinger distance and Bhattacharyya distance - exist to measure the distance between multivariate normal distributions, ideally fulfilling the mathematical definitions of a metric?
Second, from what I've noticed, Hellinger distance has a very small "window of sensitivity" - meaning that if I compute the similarities between encoded distributions, I get small values for identical image patches and values close to 1 for everything else, while symmetric KL divergence covers a wider range of values and also measures small distance between non-identical input. Any ideas on this?
You might try looking at the Continuous Ranked Probability Score (CRPS) . You would need the version of this that compares two probability distributions, and would also need to extend the definition from 1 to multiple dimensions. Essentially it is just the integral of the squared difference between the two cumulative distributions functions. I don't know if there are formulae for univariate or multivariate normal distributions. In the univariate case, it is an extension of the mean absolute error.
Otherwise, you might look through the list of distance measures at https://en.wikipedia.org/wiki/Statistical_distance .
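A rough sketch of that CRPS-style idea for univariate normals follows: a grid approximation of the integral of the squared CDF difference. The integration limits and grid size are arbitrary choices, and extending to multiple dimensions is left open, as discussed above; note also that the square root of this quantity is an L2 metric on the CDFs.

```python
import math

def norm_cdf(x, mu, sigma):
    """CDF of N(mu, sigma^2) via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def cdf_distance(mu1, s1, mu2, s2, lo=-10.0, hi=10.0, n=2000):
    """Midpoint-rule approximation of the integral of the squared
    difference between the two CDFs over [lo, hi]."""
    h = (hi - lo) / n
    total = 0.0
    for k in range(n):
        x = lo + (k + 0.5) * h
        total += (norm_cdf(x, mu1, s1) - norm_cdf(x, mu2, s2)) ** 2 * h
    return total

d_same = cdf_distance(0.0, 1.0, 0.0, 1.0)   # identical distributions
d_near = cdf_distance(0.0, 1.0, 0.5, 1.0)
d_far  = cdf_distance(0.0, 1.0, 3.0, 1.0)
```

Unlike the Hellinger behavior described in the question, this measure keeps growing with the mean separation rather than saturating immediately, which may give the wider sensitivity window you are after.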
Question
Hello
I was wondering if there is a big enough data set available to estimate the stochastic distribution of Covid-19-related symptoms in relation to location and time. This could be helpful for detecting variations/mutations of the disease and supporting diagnosis.
Thank you.
You can find some dataset in the following link:
All the best.
Question
I am using the Benchmarking package in R to analyze the energy efficiency of firms by stochastic and deterministic methods. I am getting DMU scores greater than, equal to, or less than 1. Can anyone help me in this regard?
Nice Dear Anup Kumar Yadava
Question
Hello All,
I am working on stochastic upscaling of damage in concrete microstructures. Can someone provide a UMAT for the Mazars damage model, or any resource to help me create the UMAT on my own?
Thank you.
Vasav Dubey
Texas A&M University
Texas, USA
Find it here; Good luck!
Question
I want to study stochastic finite element, Can any one introduce a straight forward reference in simple method?
You would find plenty of references online. However, I studied a lecture note by Gordan Žitkovic several years ago and found it really helpful. I am attaching a copy with this answer; please find it attached. I hope it helps!
Question
Euler method and the Milstein method
The Euler scheme with Brownian increments and step T/n strongly converges to the underlying diffusion at rate n^{-1/2} for the sup norm over [0,T] in every L^p such that X_0 \in L^p.
The Milstein scheme with Brownian increments and step T/n strongly converges to the underlying diffusion at rate n^{-1} for the sup norm over [0,T] in every L^p such that X_0 \in L^p.
Hence the Milstein scheme converges faster, BUT, first, it requires smoother drift and diffusion coefficients; second, it is not simulable when the dimension of the Brownian motion is greater than 1, because in higher dimensions it involves the simulation of Lévy areas, which is not possible at reasonable computational cost.
Finally, both schemes have the same weak rate of convergence O(1/n), i.e. the rate of convergence of
\E f(X_T) - \E f(\bar X_T),
under various regularity or uniform ellipticity assumptions on the coefficients of the diffusion and/or the function f. If I may, all this is detailed and proved in my book
Numerical Probability: An Introduction with Applications to Finance,
Universitext, Springer.
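As an illustration of the strong rates discussed above, here is a Python/NumPy sketch comparing the two schemes on geometric Brownian motion, chosen because its exact solution is known (the parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, X0, T = 0.06, 0.5, 1.0, 1.0
n, paths = 64, 20000
h = T / n

# Shared Brownian increments so all three solutions use the same path.
dW = rng.normal(0.0, np.sqrt(h), size=(paths, n))
W_T = dW.sum(axis=1)
exact = X0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * W_T)

Xe = np.full(paths, X0)   # Euler scheme
Xm = np.full(paths, X0)   # Milstein scheme
for i in range(n):
    dw = dW[:, i]
    Xe = Xe + mu * Xe * h + sigma * Xe * dw
    # Milstein adds the correction 0.5 * b b' * (dW^2 - h), with b(x) = sigma x.
    Xm = Xm + mu * Xm * h + sigma * Xm * dw + 0.5 * sigma**2 * Xm * (dw**2 - h)

err_euler = np.mean(np.abs(Xe - exact))
err_milstein = np.mean(np.abs(Xm - exact))
print(err_euler, err_milstein)   # Milstein's strong error is markedly smaller
```

Note the diffusion here is one-dimensional, which is exactly the case where Milstein is simulable without Lévy areas.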
Question
We are working on a large number of building-related time series data sets that display various degrees of 'randomness'. When plotted, some display recognisable diurnal or seasonal patterns that can be correlated to building operational regimes or the weather (i.e. heating energy consumption with distinct signatures at hourly and seasonal intervals). However, some appear to be completely random (lift data that contains a lot of random noise).
Does anyone know if an established method exists that can be deployed on these data sets and provide some 'quantification' or 'ranking' of how much stochasticity exists in each data set?
No, there is nothing precisely like that.
"Random" is what we can not explain or predict (for whatever reason; it does not matter if there is no such possible explanation or if we are just not aware of one).
The model uses some predictors (known to us; like the time of the day, the wether conditions including the day in the year, etc.) and makes a prediction of the response (the energy consumption) - the response value we should expect, given the corresponding values of the predictors. You can see the model as a mathematical formula of the predictor values. The formula contains parameters that make the model flexible and adjustable to observed data (think of an intercept and a slope of a regression line, or the frequency and amplitude of a sinusoidal wave).
The deviation of observations from these expected values are called residuals. They are not explained by the model and are thus considered "random". This randomness is mathematically handled by a probability distribution: we don't say that a particular resudual will be this or that large; instead we give a probability distribution (more correctly, we give the probability distribution of the response, conditional on the predictors). Using this probability model allows us to find the probability of the observed data (what is called the likelihood) given any combination of chosen values of the model parameters. Usually, we "fit" these parameters to maximize this likelihood (-> maximum likelihood estimates).
Thus, given a fitted model (on a given set of observations), we have a (maximized) likelihood (which depends on the data and on the functional model and on the probability model).
This can be used to compare different models. One might just see which of the models has the largest (maximized) likelihood. There are a few practical problems, because models with more parameters can get higher likelihood s just because they are more flexible - not more "correct". This is tried to be accounted for in by giving penalties for the model flexibility. This leads to the formulation of different information criteria (AIC, BIC, DIC and alikes, that all differ in the way the penalties are counted).
So, after that long post, you may look for such ICs to compare different models. The limitation remains that the models are all compared only on the data that was used to fit them, without guarantee that they will behave similar for new data. So if you have enough data it might be wise to fit the models using only a subset of the available data and then check how well these models predict the rest of the data. It does not really matter how you quantify this; I would plot the differences of the models side-by-side in a boxplot or a scatterplot.
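To make the IC-plus-holdout idea concrete, a minimal Python/NumPy sketch on synthetic data (a hypothetical "building energy" series with a diurnal component; the models, sample sizes and noise level are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(240.0)                     # hourly time stamps, 10 days
y = 5 + 0.01 * t + 2 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.5, t.size)

train, test = t < 192, t >= 192          # fit on 8 days, validate on 2

def fit_eval(X):
    beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
    rss = np.sum((y[train] - X[train] @ beta) ** 2)
    n, k = train.sum(), X.shape[1]
    aic = n * np.log(rss / n) + 2 * k    # Gaussian-likelihood AIC
    holdout = np.mean((y[test] - X[test] @ beta) ** 2)
    return aic, holdout

X_lin = np.column_stack([np.ones_like(t), t])    # trend-only model
X_diu = np.column_stack([np.ones_like(t), t,     # trend + diurnal wave
                         np.sin(2 * np.pi * t / 24),
                         np.cos(2 * np.pi * t / 24)])

aic1, ho1 = fit_eval(X_lin)
aic2, ho2 = fit_eval(X_diu)
print(aic1, ho1)
print(aic2, ho2)   # lower AIC and hold-out error -> more structure explained
```

The gap between the two AICs (and hold-out errors) is one usable "quantification" of how much of a series is explainable structure rather than noise: for a genuinely random series like the lift data, the richer model should buy almost nothing.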
Question
Once we have selected the appropriate model specification and estimated the panel stochastic production frontier model, what robustness checks are required before the results are used for discussion? Since I am using Stata version 12.1, I would appreciate it if anyone knows the Stata command as well.
Question
I develop vehicular emissions inventories at street level and I would like to run dispersion models for scientific applications with my outputs. After some literature research, I have found some models like
However, it seems that there are many other models.
Do you know more models?
How was your experience?
Many thanks!
You can use MESO-NH with SURFEX to model pollutant dispersion. The advantage of MESO-NH is that it simulates turbulence well.
Question
I have a stochastic mixed-integer non-linear model that is difficult to solve. So I want to write the constraints with a uniform distribution in the sub-model that is found in the whole model. What are the steps for this? Thanks
I have been using LINGO for a while, and yes, a stochastic mixed-integer non-linear model is difficult to solve. Are you sure that writing the constraints with a uniform distribution in the sub-model will solve the problem?
There is a quite useful LINGO library which you can access from the LINGO website for free. There are also some examples of writing submodels if you look at the SCM planning problem. You might find your exact answer there.
Question
I am struggling over the search for studies showing that the age of an individual has an impact on the increased prevalence of rheumatoid diseases. Wouldn't that be rather a stochastic issue (the longer the time, the higher the chance)? While I could imagine how increased incidences of cancer correlate with age, or that infections result in increased mortality with age, I am struggling to see how cellular age results in autoimmune disorders. I read about thymic atrophy/less T cell variability, changes in B cell responses, alterations in Tfh cell reactivity and alterations in phagocyte functions (less phagocytosis, less radical synthesis), but does that necessarily lead to an increased incidence of autoimmunity? And if autoimmunity is positively correlated with age, shouldn't the overall response during a particular period be stronger in old organisms than the same disease of comparable period in younger organisms?
UPDATE (March2020): I would say that depending on the autoimmune disease, age shows a positive or negative correlation. However, I am still looking for publications with valid in vivo models showing the age effect on the disease outcome or not.
That is interesting regarding the poor Treg function in RAG1ko‘s. Sorry that the paper was of no help to you, as I said it was a quick google search but is obviously 2013! Would you be able to let me know what you find out?
Question
Dear Researchers,
I have time series data & I’m running OLS regression on Stata.
I want to de-trend a variable while taking into consideration that the trend is stochastic not linear. This is the code I found while searching but I’m not sure if it treats the trend as stochastic:
reg x time
predict x_detrended, resid
In fact, when I plotted the detrended series (x_detrended) with the time variable (quarter in my case) to see the trend, the extracted trend seems to be linear not stochastic. I’m attaching the graph below.
So, I want to know the Stata command that detrends the series given a stochastic rather than linear trend.
I will appreciate your help.
Thank you very much
Actually I am not an expert in Stata, but I suggest you use seasonal differencing to detrend your data.
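Not Stata, but the underlying point can be illustrated in Python/NumPy: regressing on time only removes a deterministic linear trend, whereas differencing removes a stochastic trend (unit root). Synthetic data, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
y = np.cumsum(0.1 + rng.normal(0, 1, n))   # random walk with drift: stochastic trend

# Regressing on time (like `reg x time` + residuals) removes only a *linear*
# trend; the residuals of a random walk stay highly autocorrelated.
t = np.arange(n, dtype=float)
beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), t]), y, rcond=None)
resid = y - (beta[0] + beta[1] * t)

# First differencing removes the stochastic trend directly.
dy = np.diff(y)

def lag1_autocorr(x):
    x = x - x.mean()
    return (x[:-1] @ x[1:]) / (x @ x)

print(lag1_autocorr(resid))   # close to 1: the stochastic trend survives
print(lag1_autocorr(dy))      # close to 0: trend removed
```

In Stata the differencing itself is just `gen dx = D.x` after `tsset`, with `S24.x` or similar for seasonal differencing.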
Question
Suppose there is a regression model y = Ax + e in which A is stochastic, and the objective function is the Huber function. As you see in the attached file, if A is deterministic, equation (4-58) reduces to (4-60); but if it is not, what happens to equation (4-58)?
I would appreciate it if anyone could tell me how to solve it, or could suggest papers or books.
Thank you
Question
stochastic differential equation
dX(t)=sin(X(t))dt +dW(t)
OK, but I have no constructive idea for getting the second moment.
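Lacking a closed form, the second moment can at least be estimated numerically; a minimal Euler–Maruyama Monte Carlo sketch in Python (step size, horizon and path count chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(3)
T, n, paths = 1.0, 200, 50000
h = T / n
X = np.zeros(paths)            # X(0) = 0

# Euler-Maruyama for dX = sin(X) dt + dW
for _ in range(n):
    X = X + np.sin(X) * h + rng.normal(0.0, np.sqrt(h), paths)

second_moment = np.mean(X**2)
print(second_moment)   # E[X_T^2] estimate; exceeds 1 since sin(X) pushes X away from 0
```

By symmetry of the drift (sin is odd), E[X_T] = 0; and since d E[X^2]/dt = 1 + 2 E[X sin X] with X sin X ≥ 0 near the origin, the second moment at T = 1 exceeds the Brownian value of 1.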
Question
I am looking to solve a Two-Stage Stochastic Mixed-Integer optimization problem in GAMS for a pre- and - post-disaster resource allocation problem.
Hi
Here is my email
Question
Dear friends,
I have a problem with the derivation of the expected value of a stochastic variable. My question is whether the following relation is correct:
if \tau(t) is a stochastic variable following a Bernoulli distribution, does
\frac{d}{dt} E\{\tau(t)\} = E\{ \frac{d \tau(t)}{dt} \}
hold? And if this relation is not correct, why not?
Isn't this line
MR: "\tau(t) = ( \sigma(t) +\bar \sigma -\bar \sigma)*d(t). "
equivalent to
"\tau(t) = \sigma(t) *d(t)."
?
Probably, some parentheses are missing, aren't they?
Question
There are two types of mathematical models: Deterministic and Stochastic.
Explain them with proper examples.
Deterministic models involve no probabilities; stochastic models do.
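A toy Python sketch of the distinction, using exponential growth as the deterministic model and a geometric-Brownian-motion counterpart as the stochastic one (all parameter values, including the noise level 0.3, are arbitrary):

```python
import numpy as np

r, x0, T, h = 0.5, 10.0, 5.0, 0.01
steps = int(T / h)

# Deterministic model: dx/dt = r x -> exactly the same answer every run.
x_det = x0 * np.exp(r * T)

# Stochastic counterpart: dX = r X dt + 0.3 X dW (Euler-Maruyama simulation).
def one_run(seed):
    g = np.random.default_rng(seed)
    X = x0
    for _ in range(steps):
        X += r * X * h + 0.3 * X * g.normal(0.0, np.sqrt(h))
    return X

runs = [one_run(s) for s in range(5)]
print(x_det)    # identical on every run
print(runs)     # a different outcome on each run
```

The deterministic model returns one number; the stochastic model returns a distribution of outcomes, of which each run is one sample.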
Question
In two-stage stochastic optimization, why do I find that the optimization problem has equations for the first stage and equations for the second stage, but the two groups are solved simultaneously? I thought we first solve the first-stage equations, then take the results and substitute them into the second-stage equations (a new problem) - or is there something I have overlooked?
Also, why don't we combine the equations of the first and second stages if they are solved simultaneously?
My case study is a power system with renewable energy uncertainty: when I make day-ahead decisions for the power dispatch, the dispatch of each generator is computed (first-stage decisions); then, after the realization of the uncertain events (renewable output), redispatch is done in the second stage using reserves, possibly with some load shedding.
My question is how the decisions of the two stages are made simultaneously, as I see in some papers. Why don't we optimize the first stage, take the results, apply them in the second-stage problem and run it again? And if the optimization of the two stages is done simultaneously, why are the equations (constraints) of the two stages not combined, given that I see separate first-stage and second-stage constraints?
I agree that both decisions should be considered together. I know of several large power producers that do just that. I wrote the original scheduling software for several of TVA's plants and more that was used at the Load Control Center. I have performed many historical simulations of this sort and recently published a book on the subject (https://www.amazon.com/dp/B07YJ1JFLS ) The book will be free on the Tuesday after Christmas. The software (including several plant models) is free here http://dudleybenton.altervista.org/software/index.html More of the power plant models can be seen here http://dudleybenton.altervista.org/projects/Power%20Plants/PowerPlants.html Appendix A of the book has a whole list of links to publications you can download free with a description of each. Appendices B, C, and D explain how to get weather data for anywhere on Earth and build that into the model. I have provided spreadsheet models for several plants that go out and grab the latest weather predictions, estimate the capacity for multiple units, factor in the ramp rates and demand, then calculate the advantage of bringing units up or down, switching from simple cycle to combined cycle, and turning duct firing on or off, including how this might impact their emissions quotas. Of course, these last are proprietary, but I'd be glad to get you started building your own.
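To see how the two stages really are one combined problem (the scenario-based "deterministic equivalent"), here is a toy dispatch example in Python/SciPy; all numbers - demand, costs, scenarios, probabilities - are made up:

```python
import numpy as np
from scipy.optimize import linprog

# Scenarios for renewable output w_s with probabilities p_s; fixed demand d.
w = np.array([30.0, 10.0, 0.0])
p = np.array([0.5, 0.3, 0.2])
d, gen_cost, shed_cost = 80.0, 10.0, 40.0

# Decision vector v = [x, y_1, y_2, y_3]:
#   x   = first-stage dispatch, decided *before* w is known (here-and-now),
#   y_s = second-stage recourse (reserve/shedding) in scenario s (wait-and-see).
# Objective: first-stage cost plus probability-weighted second-stage cost.
c = np.concatenate([[gen_cost], shed_cost * p])

# One balance constraint per scenario: x + w_s + y_s >= d,
# written as -x - y_s <= -(d - w_s) for linprog.
A_ub = np.array([[-1.0, -1.0,  0.0,  0.0],
                 [-1.0,  0.0, -1.0,  0.0],
                 [-1.0,  0.0,  0.0, -1.0]])
b_ub = -(d - w)

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, 100)] + [(0, None)] * 3)
print(res.x[0], res.fun)   # here-and-now dispatch x and minimal expected cost
```

The first- and second-stage constraints sit in the same constraint matrix and are solved simultaneously; what makes it "two-stage" is only that x is shared across all scenarios while each y_s is scenario-specific. Solving the first stage alone and then fixing it would ignore the expected recourse cost and generally give a worse x.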
Question
For stochastic adaptive systems, it is important to consider local ergodicity, Markov chains, local Monte Carlo simulation and the maximum entropy method to be more effective.
Under the conditions of local controllability and local ergodicity, the possibility of local stabilizability follows for the linearized case.
Question
I have a model-free method for scenario generation to be used in a stochastic optimization problem. But the problem is that after scenario generation I want to assign a probability to each scenario. How can I do this?
@ Paresh Date thanks for your reply, but I have generated the scenarios from historical data using a neural network, so I don't have their probability distribution.
I only have some scenarios with equal probability of occurrence. How can I assign a probability to each scenario? Or can I perform scenario reduction and then assign a probability to each new scenario?
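One common, model-free way to do exactly that reduction is to cluster the equiprobable scenarios and let each cluster's relative size become the probability of its representative. A sketch in Python/SciPy (synthetic scenarios; the cluster count k = 10 is arbitrary):

```python
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(5)
scenarios = rng.normal(size=(1000, 24))   # e.g. 1000 equiprobable daily profiles

k = 10
centroids, labels = kmeans2(scenarios, k, minit='++', seed=7)

# Each centroid becomes a reduced scenario; its probability is the share
# of the original (equiprobable) scenarios assigned to it.
probs = np.bincount(labels, minlength=k) / len(scenarios)
print(probs, probs.sum())
```

The reduced set (centroids, probs) can then be fed to the stochastic program in place of the full scenario set. More principled scenario-reduction schemes (e.g. forward selection under a Wasserstein-type distance) follow the same idea of reallocating probability mass to the retained scenarios.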
Question
Hello! I want to solve a stochastic PDE-constrained optimization problem and so I want to use an existing FEM open-source package (Python/C/C++)  to solve the following stochastic PDE constraint:
F(u,z;w) := - \nabla . ( k(w) \nabla u)  - z = 0
where u is the state variable, z is the control variable and w the stochastic variable. I want to use the adjoint approach to compute the gradient of the objective function, i.e., the gradient of \tilde{J}(z) = J(u(z),z). Therefore, I hope that the FEM solver can provide the derivative of F(u,z) w.r.t. u and z.
I have found the dolfin-adjoint package but I find it not suitable for my stochastic application. Could you please help me with your advice?
I stumbled over this old post during research and thought it might be helpful for someone finding this post:
NGSolve has built in functionality accessible from Python for this since this summer, it was presented at the usermeeting, resources are here:
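For intuition on the adjoint calculation itself, here is a minimal 1-D finite-difference sketch in Python/NumPy (not dolfin-adjoint or NGSolve; a single fixed realization of k(w), a hypothetical target state, and a finite-difference check of one gradient component):

```python
import numpy as np

# 1-D sketch of the adjoint gradient for min_z J(u(z)) with the PDE
# constraint -(k u')' = z on (0,1), u(0) = u(1) = 0.
N = 50
h = 1.0 / (N + 1)
x = np.linspace(h, 1 - h, N)
k = 1.0 + 0.5 * np.sin(np.pi * x)      # one fixed realization of k(w)

# Finite-difference stiffness matrix A (k evaluated at cell midpoints).
km = np.interp(x - h / 2, x, k)
kp = np.interp(x + h / 2, x, k)
A = (np.diag(km + kp) - np.diag(kp[:-1], 1) - np.diag(km[1:], -1)) / h**2

u_d = np.sin(np.pi * x)                # desired state (made up)
z = np.ones(N)                         # current control

def J(z):
    u = np.linalg.solve(A, z)
    return 0.5 * h * np.sum((u - u_d) ** 2)

# Lagrangian L = J + lam^T (A u - z) gives the adjoint system
#   A^T lam = -h (u - u_d)   and the gradient  dJ/dz = -lam.
u = np.linalg.solve(A, z)
lam = np.linalg.solve(A.T, -h * (u - u_d))
grad = -lam

# Sanity check against a central finite difference in one component.
e = np.zeros(N); e[10] = 1e-6
fd = (J(z + e) - J(z - e)) / 2e-6
print(grad[10], fd)   # should agree closely
```

In the stochastic setting this computation would be repeated per sample/quadrature point of w (e.g. Monte Carlo or stochastic collocation), and the per-sample gradients averaged to get the gradient of the expected objective.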
Question
In my project, I deal with stochastic data via a Markov model over a short study period, hence I will run a simulation to reach a sufficient sample size. However, my question is how I can do that, and what is the best approach to estimate the transition probability matrix? Some of the papers use a norm, but I do not understand how they did that.
The simulation of a Markov chain is described, for example, in my book "Probability and Stochastic Modeling", pp. 135-136.
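A minimal Python/NumPy sketch of both steps - simulating a chain from a known transition matrix, then recovering the matrix by counting observed transitions and normalising rows (the maximum-likelihood estimator); the 3-state matrix is made up:

```python
import numpy as np

rng = np.random.default_rng(6)
P = np.array([[0.7, 0.2, 0.1],     # true transition matrix
              [0.3, 0.5, 0.2],
              [0.1, 0.3, 0.6]])

# Simulate a chain of length n from P.
n, state = 20000, 0
chain = np.empty(n, dtype=int)
for t in range(n):
    chain[t] = state
    state = rng.choice(3, p=P[state])

# MLE of the transition matrix: count transitions, normalise each row.
counts = np.zeros((3, 3))
for a, b in zip(chain[:-1], chain[1:]):
    counts[a, b] += 1
P_hat = counts / counts.sum(axis=1, keepdims=True)

print(np.max(np.abs(P_hat - P)))   # estimation error shrinks as n grows
```

The "norm" used in some papers is presumably just this kind of error measure, e.g. max |P_hat - P| or a matrix norm of the difference, used to report how close the estimate is to a reference matrix.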
Question
Chemical reaction systems can be modeled through a deterministic or stochastic approach. However, it is common to introduce new variables in order to perform dynamic analysis in deterministic systems. Could these reduced forms be directly used to derive stochastic equations, or is it necessary to start from the original expressions? It is common for many concentrations to be normalized after these procedures. Does this affect the construction of stochastic models?
I also recommend these two references (below) that explain the relation between microscopic (scale) model, mesoscopic (scale) model, and the macroscopic (scale) model.
1-Pavliotis, Grigoris, and Andrew Stuart. Multiscale methods: averaging and homogenization. Springer Science & Business Media, 2008.
2- Lachowicz, Mirosław. "Microscopic, mesoscopic and macroscopic descriptions of complex systems." Probabilistic Engineering Mechanics 26.1 (2011): 54-60.
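As a concrete example of the stochastic (mesoscopic) approach, a minimal Gillespie SSA simulation in Python for the irreversible conversion A -> B, whose deterministic counterpart is dA/dt = -cA (all parameters arbitrary). Note the SSA works with molecule *counts*, which is why normalized deterministic variables cannot in general be reused directly:

```python
import numpy as np

rng = np.random.default_rng(7)
c, A0, T = 1.0, 200, 3.0   # rate constant, initial count, time horizon

def gillespie():
    """One SSA trajectory of A -> B; returns the count of A at time T."""
    t, A = 0.0, A0
    while A > 0:
        rate = c * A                      # total propensity
        t += rng.exponential(1.0 / rate)  # exponential waiting time
        if t > T:
            break
        A -= 1                            # fire the reaction A -> B
    return A

samples = np.array([gillespie() for _ in range(2000)])
print(samples.mean(), A0 * np.exp(-c * T))   # SSA mean vs deterministic ODE
```

For this linear system the SSA mean matches the ODE exactly; for nonlinear kinetics the two generally differ, which is one reason stochastic models should be built from the original (count-based) propensities rather than from reduced or normalized deterministic forms.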
Question
I've already estimated the technical efficiency model, technical inefficiency effects model, and got the values of log likelihood, σ2, γ, and I just want to test the hypotheses concerning the model parameters.
Attached is an example of a table containing the results of a stochastic frontier model (Cobb-Douglas production function), in addition to the results of the technical inefficiency effects model (10 business environment variables (Zi)). Could you please, given the estimated parameters in table (1) tell me how to test the following hypothesis:
1. Cobb-Douglas is not the appropriate production function form
(H0: β3 = β4 = β5 = 0)
2. No Technical inefficiency effects
(H0: γ = δ1 = δ2 = …= δ10 = 0)
3. No stochastic inefficiency
(H0: γ = 0)
4. No joint inefficiency variables
(H0: δ1 = δ2 = …= δ10 = 0)
Thank you very much in advance for this valuable help.
Hi,
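These hypotheses are normally tested with generalized likelihood-ratio tests, LR = -2(lnL_restricted - lnL_unrestricted), where each restricted model is re-estimated with the relevant H0 imposed. A minimal Python/SciPy sketch with hypothetical log-likelihood values (replace them with the ones from your output; note that for hypotheses involving gamma = 0 the statistic follows a *mixed* chi-square, so Kodde-Palm critical values should be used instead of the plain chi-square shown here):

```python
from scipy.stats import chi2

# Hypothetical log-likelihoods (replace with your estimates):
llf_restricted = -310.4     # model with H0 imposed, e.g. beta3 = beta4 = beta5 = 0
llf_unrestricted = -298.7   # full model
q = 3                       # number of restrictions under H0

LR = -2.0 * (llf_restricted - llf_unrestricted)
crit = chi2.ppf(0.95, df=q)   # 5% critical value
p_value = chi2.sf(LR, df=q)
print(LR, crit, p_value)      # reject H0 if LR > crit
```

The same computation, with the appropriate q and restricted log-likelihood, covers all four hypotheses listed above.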
Question
Can one use the L-shaped method for solving a two-stage stochastic program with binary variables in the first stage and integer and continuous variables in the second stage?
Dr. Pegman,
Thank you for your answer. This paper is rather a short but helpful paper to understand the integer L-shaped method in a special case where we have binary variables in the first stage and continuous and integer variables in the second stage. This paper validates my approach to solve my problem. However, it does not provide many details. I have also looked at Chapter 5 of the "Introduction to Stochastic Programming" textbook which explains the method in more detail.
I would appreciate it if you could recommend more references to learn how to apply this method.
Thank you,
Maryam
Question
As you know, in stochastic programming it is common to consider data with a normal distribution. Why normal? What properties does the normal distribution have that make it important?
Due to the central limit theorem (CLT), the normal distribution is seen in a broad range of cases, where multiple random variables are present. This is due to the quite reduced set of conditions that is sufficient for the CLT to be applied. So the normal distribution may be the default assumption in many cases.
Note: There are other limit theorems that extend the idea to more exotic conditions. Therefore, you need to scrutinize your setup to check the validity of CLT.
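A quick Python/NumPy illustration of the CLT at work (the uniform summands and sample sizes are chosen arbitrarily):

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(8)

# Averages of 100 independent uniform(0,1) draws, repeated 50,000 times,
# are approximately normal even though each summand is far from normal.
means = rng.uniform(0, 1, size=(50000, 100)).mean(axis=1)

# The CLT predicts mean 1/2 and standard deviation sqrt(1/12)/sqrt(100).
print(means.mean(), means.std())
print(skew(means), kurtosis(means))   # both near 0 for a normal shape
```

This is why aggregated quantities (demands, returns, measurement errors) are so often modelled as normal in stochastic programs, even when the underlying individual contributions are not.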
Question
Stochastic gradient descent uses only a single training example to calculate the gradient and update the parameters.
Batch gradient descent calculates the gradient over the whole dataset and performs just one update at each iteration.
Mini-batch gradient descent is a variation of stochastic gradient descent where, instead of a single training example, a mini-batch of samples is used. It is one of the most popular optimization algorithms.
It is the first basic type of gradient descent in which we use the complete dataset available to compute the gradient of cost function. As we need to calculate the gradient on the whole dataset to perform just one update, batch gradient descent can be very slow and is intractable for datasets that don’t fit in memory.
Batch gradient descent turns out to be a slower algorithm. So, for faster computation, we prefer to use stochastic gradient descent. The first step of the algorithm is to randomize the whole training set. Then, to update each parameter, we use only one training example per iteration to compute the gradient of the cost function. As it uses one training example per iteration, this algorithm is faster for larger data sets. With SGD one might not achieve the same accuracy, but the computation of results is faster.
The mini-batch algorithm is the most favorable and widely used algorithm, giving precise and faster results by using a batch of 'm' training examples. In the mini-batch algorithm, rather than using the complete data set, in every iteration we use a set of 'm' training examples, called a batch, to compute the gradient of the cost function. Common mini-batch sizes range between 50 and 256, but can vary for different applications.
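The three variants described above differ only in the batch size, so they can be sketched in one short Python/NumPy function for linear regression (synthetic data; the learning rate and epoch count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(9)
n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
true_w = np.array([2.0, -3.0])
y = X @ true_w + rng.normal(0, 0.1, n)

def minibatch_gd(batch_size, lr=0.05, epochs=50):
    w = np.zeros(2)
    for _ in range(epochs):
        idx = rng.permutation(n)          # shuffle the training set each epoch
        for start in range(0, n, batch_size):
            b = idx[start:start + batch_size]
            grad = 2 * X[b].T @ (X[b] @ w - y[b]) / len(b)  # MSE gradient
            w -= lr * grad
    return w

w_sgd  = minibatch_gd(batch_size=1)       # stochastic: one example per update
w_mini = minibatch_gd(batch_size=64)      # mini-batch
w_full = minibatch_gd(batch_size=n)       # batch: whole dataset per update
print(w_sgd, w_mini, w_full)              # all approach [2, -3]
```

Batch size n gives one smooth update per epoch, batch size 1 gives many noisy updates, and mini-batch sits in between - which is why it is usually the practical default.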
Question
If so, what does this say about having a high number of references?
If not, then are recommendations predictable? If so, how? If not, why not? Are stochastic signals being a basis of the human brain a reason to say there is no hope in predicting the work you do here?
Dear Dr.,
Certainly - many books and research papers attract numerous recommendations owing to the number of scientific citations, their effective applications, and accurate information that opens up a large field in the future. In short, ensuring that scientific research and books receive more recommendations requires more work and continued development.
Question
I solved a mixed-integer nonlinear program using the genetic algorithm in the Global Optimization Toolbox of MATLAB. However, the minimized objective value was different when I solved the problem by running the algorithm several times.
A genetic algorithm is a stochastic algorithm, so it may give a different solution at each run, but I think that I am dealing with a premature convergence issue in my case.
Premature convergence is probably due to loss of diversity in the population but the MATLAB genetic algorithm for MINLPs overrides any crossover and mutation functions.
Is it appropriate to obtain multiple solutions by running the algorithm several times and choose the best solution among them?
I believe premature convergence happens when your population stops improving, i.e., the best individual has the same values over multiple generations. So, if your algorithm was running for 30 generations and it stopped improving by generation 20, it is a case of premature convergence - provided you know that there are better solutions for the problem.
In the case you don't know the best solution to the problem, the only option is to run the algorithm multiple times with different parameters and observe the behavior of the population.
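A hedged illustration of the multi-start strategy in Python/SciPy, using differential evolution (another population-based stochastic optimizer, not MATLAB's GA) on the multimodal Rastrigin function:

```python
import numpy as np
from scipy.optimize import differential_evolution

def rastrigin(x):
    # Classic multimodal test function; global minimum 0 at the origin.
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

bounds = [(-5.12, 5.12)] * 5

# Same problem, different random seeds: each run is an independent restart.
results = [differential_evolution(rastrigin, bounds, seed=s,
                                  maxiter=300, tol=1e-8, polish=True)
           for s in range(5)]

best = min(results, key=lambda r: r.fun)
print([round(r.fun, 4) for r in results])   # run-to-run variation
print(best.fun)                             # keep the best of the restarts
```

Taking the best of several independent runs is a legitimate and standard practice for stochastic optimizers; reporting the spread across runs alongside the best value also makes the result more credible, and it is a simple diagnostic for premature convergence.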
Question
For given samples X1...Xn, which follow the distribution F with parameters p1,...,pm, one may use the maximum likelihood method to estimate p1,...,pm. Technically, the method is just an optimization problem in an m-dimensional space.
In some cases, it is irritating to obtain different estimated values of p1,...,pm than the one obtained if one estimates only p1 (assuming this parameter does not depend on p2,...,pm). E.g. the joint estimators (\mu, \sigma) may differ from \mu when it is estimated alone. Since \mu is reserved for the expected value, which one is "correct": the joint-\mu or the single-\mu?
1) if the joint one is correct, then any single estimator should be doubted.
2) if the single one is correct, then joint-model does not have any values. So one should always choose univariate model.
In some cases, the joint-\mu does not make any sense. The interpretation that \mu is an expected value is somehow no longer valid in a joint model.