# Stochastic Processes - Science topic

Processes that incorporate some element of randomness, used particularly to refer to a time series of random variables.
Questions related to Stochastic Processes
Question
Exploring how P-completion deepens insights into a stochastic process's natural filtration and its associated information structure.
Hi, P-completion in stochastic processes refers to augmenting the underlying probability space and the process's natural filtration with all P-null sets, so that the filtration satisfies the "usual conditions" of stochastic analysis. This concept has significant implications for understanding stochastic processes and their information structures. Here's how:
Enhanced Understanding of Information Structures:
Information Accessibility: Incomplete stochastic processes may not fully reveal the underlying information structure. P-completion allows for a more comprehensive understanding of the available information at any point in time.
Prediction and Estimation: With P-completion, predictions and estimations based on the process become more reliable, as the completion accounts for all probabilistically possible outcomes.
Modeling and Analysis:
Robustness in Modeling: P-completion makes models more robust by ensuring that they account for all possible scenarios, including those with small probabilities.
Accuracy in Analysis: Completing a process in this manner allows for more accurate analysis, particularly in scenarios where the tail events (events with low probability but high impact) are significant.
Financial Applications:
Risk Management: In finance, P-completion is crucial for accurate risk modeling. It helps in understanding the full range of potential market behaviors, which is essential for pricing derivatives and managing portfolio risks.
Option Pricing: Market completeness (the replicability of all contingent claims) is a distinct notion from the P-completion of a filtration, but completed filtrations satisfying the usual conditions are the standard setting for option pricing models, ensuring that pricing and hedging arguments are well posed.
Handling of Incomplete Information:
Filling Gaps: P-completion provides a framework for dealing with incomplete information, allowing for the construction of a probabilistically complete picture from partial data.
Decision Making Under Uncertainty: This process is crucial in decision-making scenarios where information is incomplete or arrives sequentially.
Impact on Statistical Inference:
Parameter Estimation: In statistical inference, P-completion influences the estimation of parameters by accounting for all possible data realizations.
Hypothesis Testing: It impacts the robustness of hypothesis testing by ensuring that tests consider all probabilistically relevant scenarios.
Algorithmic Implications:
Algorithm Development: In algorithmic stochastic processes, P-completion can impact the development of algorithms by ensuring they work under a comprehensive set of probabilistic scenarios.
Computational Efficiency: While P-completion provides a more complete view, it may also introduce computational challenges, especially in complex or high-dimensional spaces.
Question
Sumudu Transform:
The Sumudu transform is a generalized version of the Laplace and Fourier transforms. It has been used in diverse fields such as signal processing, image analysis, and mathematical biology. In recent years, the Sumudu transform has been applied to study the fractal properties of different systems. The fractal dimension is a measure of the complexity and self-similarity of fractal sets. The Sumudu transform can be used to calculate the fractal dimension of different objects and systems. The basic idea is to use the scaling properties of the Sumudu transform to obtain a relation between the fractal dimension and the scaling exponent of the Sumudu transform. This relation can then be used to calculate the fractal dimension of different systems. For example, the Sumudu transform has been used to study the fractal dimension of fractional Brownian motion, which is a self-similar stochastic process that is often used as a model for natural phenomena such as turbulence. The fractal dimension of fractional Brownian motion can be obtained by analyzing the scaling properties of its Sumudu transform. In general, the Sumudu transform can be used to study the fractal properties of different systems by providing a new way to analyze their scaling properties.
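The Sumudu-transform route is not shown here, but the underlying scaling idea can be illustrated numerically: for (fractional) Brownian motion the graph's fractal dimension is D = 2 − H, so estimating the Hurst exponent H from how increment variances scale with the lag gives D. A minimal sketch for ordinary Brownian motion (H = 1/2), with all sizes and lags chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ordinary Brownian motion: cumulative sum of i.i.d. Gaussian increments.
n = 2 ** 16
path = np.cumsum(rng.standard_normal(n))

# Variance of increments at lag k scales like k^(2H);
# regress log-variance on log-lag to read off 2H.
lags = np.array([1, 2, 4, 8, 16, 32, 64])
variances = [np.var(path[lag:] - path[:-lag]) for lag in lags]
slope, _ = np.polyfit(np.log(lags), np.log(variances), 1)

H = slope / 2      # Hurst exponent, ~0.5 for Brownian motion
D = 2 - H          # fractal (box-counting) dimension of the graph, ~1.5
print(f"H ~ {H:.2f}, D ~ {D:.2f}")
```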
Caputo fractional derivatives:
Caputo fractional derivatives are a type of fractional derivative that take into account the initial conditions of a system. They are often used in modeling complex systems with anomalous diffusion, such as in fractals or porous media. In these systems, the fractal dimension plays a key role in determining the behavior of the system over time. The fractal dimension describes how the system fills space, and can be thought of as a measure of how complex and irregular the system is. When modeling these systems using Caputo fractional derivatives, the fractal dimension can be incorporated into the derivative itself, allowing for a more realistic and accurate representation of the system's behavior. This is done by replacing the usual order of differentiation with a fractional order that depends on the fractal dimension.
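To make the non-integer order concrete, here is a sketch of the standard L1 finite-difference scheme for the Caputo derivative of order 0 < α < 1, checked against the known closed form D^α t² = 2 t^(2−α)/Γ(3−α); the grid size and the choice α = 0.5 are arbitrary:

```python
import math
import numpy as np

def caputo_l1(f_vals, h, alpha):
    """L1 finite-difference approximation of the Caputo derivative
    of order 0 < alpha < 1 at the last grid point."""
    n = len(f_vals) - 1
    k = np.arange(n)
    b = (k + 1) ** (1 - alpha) - k ** (1 - alpha)   # L1 weights
    df = np.diff(f_vals)[::-1]                      # f_{n-k} - f_{n-k-1}
    return (b * df).sum() / (math.gamma(2 - alpha) * h ** alpha)

alpha, T, n = 0.5, 1.0, 2000
t = np.linspace(0.0, T, n + 1)
approx = caputo_l1(t ** 2, T / n, alpha)
exact = 2 * T ** (2 - alpha) / math.gamma(3 - alpha)  # known closed form
print(approx, exact)
```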
Fractal nonlocal derivatives:
Fractal nonlocal derivatives in fractal dimension refer to a mathematical concept which uses fractal geometry to define a nonlocal derivative operator. This operator is used to describe the behavior of a function on a fractal set, where traditional calculus may not apply because the fractal set has a non-integer dimension. The idea behind fractal nonlocal derivatives is that the derivative of a function at a point on a fractal set is not just dependent on nearby points, but also on the global behavior of the function on the fractal set. This concept is important for understanding the behavior of complex systems that exhibit self-similarity and can be modeled using fractal geometry. The use of fractal nonlocal derivatives has applications in fields such as physics, finance, and biology, where the behavior of systems on fractal sets is of interest. It is also an active area of research in mathematics, as it allows for the development of new tools to study and understand the behavior of functions on fractal sets.
Fractal differential equations:
Fractal differential equations are an important tool in studying fractals. These equations are formulated in terms of fractional calculus, an extension of classical calculus that deals with non-integer powers of differentiation and integration. Fractal differential equations are used to model physical, biological, and engineering systems that exhibit fractal behavior. The term "fractal dimension" refers to the concept of measuring the complexity of a fractal object. It is a non-integer dimension, typically expressed as a real number between 1 and 2 for most fractals. Fractal differential equations can be formulated in terms of this dimension, allowing researchers to study the behavior of fractals in a more systematic way. One example of a fractal differential equation is the so-called fractal heat equation. This equation describes how heat diffuses through a fractal medium, such as a fractal network of blood vessels or airways. Another example is the fractal wave equation, which describes the propagation of waves (such as light or sound) through a fractal medium. Fractal differential equations have many applications in science and engineering. They have been used to model the behavior of porous materials, the electrical properties of fractal networks, and the dynamics of fluid flow through fractal geometries, among other things. In general, fractal differential equations provide a unique and powerful tool for understanding the complex behavior of fractal systems.
Fractional stochastic systems:
Fractional stochastic systems in fractal dimension are systems that exhibit both fractal geometry and randomness through the use of fractional calculus. Fractional calculus deals with non-integer orders of differentiation and integration, which enables modeling of phenomena that exhibit anomalous diffusion and memory effects. Fractal geometry pertains to objects that are self-similar at different scales, and characterized by a fractal dimension, which is a non-integer number between its topological and metric dimension. Examples of fractional stochastic systems in fractal dimension could include the modeling of rainfall patterns, which exhibit fractal properties due to the self-similarity of the precipitation clusters, and can also be characterized as random processes. Another example is financial market modeling, which can be approached through fractional Brownian motion, a fractional diffusion process that can capture long-term dependence and volatility clustering of stock price time series. The study of fractional stochastic systems in fractal dimension is an interdisciplinary field that combines mathematics, physics, and engineering, among others. It has diverse applications in various fields, such as signal processing, medical imaging, geophysics, and materials science, to name a few.
Fractal Picard iteration: Fractal Picard iteration is a mathematical method used to find the fixed points of a self-similar mapping or contraction mapping. It involves repeatedly applying the mapping to an initial guess while keeping track of the intermediate results. The resulting sequence of iterates usually converges to the fixed point, which is the point that maps to itself under the mapping. This method is especially useful for analyzing the behavior of fractals, which are objects that exhibit self-similarity at different scales. Fractal Picard iteration can be used to compute the attractors of fractal functions or to generate fractal patterns. The procedure involves dividing the domain into smaller subdomains that are related by contractions. Each subdomain is then mapped to a smaller subset of the domain, which is then recursively subdivided and mapped again. The process is repeated several times until a self-similar pattern emerges. Fractal Picard iteration is a powerful tool in mathematics, computer science, and physics, among other fields. It has many applications, including image compression, data analysis, and the modeling of complex systems such as turbulence and chaos.
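The core mechanism described above, repeatedly applying a contraction until the fixed point emerges, can be sketched in a few lines; cos is used here only as a convenient example of a contraction near its fixed point:

```python
import math

def picard_iterate(f, x0, tol=1e-12, max_iter=10_000):
    """Repeatedly apply a contraction f until successive iterates settle."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("did not converge")

# cos is a contraction near its fixed point (|cos'| < 1 there),
# so the iterates converge to the unique solution of x = cos(x).
fixed = picard_iterate(math.cos, 1.0)
print(fixed)
```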
Fractional differential equations:
Fractional differential equations in fractal dimension are mathematical models that describe the behavior of systems with fractal geometry using fractional calculus. In these equations, the order of the derivative is non-integer, and thus they are a powerful tool for modeling phenomena that exhibit complex, non-linear behavior. Fractal geometry is characterized by structures that exhibit self-similarity at different scales. Fractional differential equations in fractal dimension allow us to model complex systems that exhibit this self-similarity, and to study their behavior over different scales. Such equations have applications in physics, biology, finance, and engineering. They are used, for example, in modeling the behavior of porous materials, in predicting the spread of infectious diseases, in predicting the behavior of financial markets, and in modeling the conduction of heat in materials.
Yes, there are various resources available that provide more details on fractal analysis. Here are a few recommended materials to explore fractal analysis further:
1. Books:
   - "Fractals and Chaos: An Illustrated Course" by Paul S. Addison
   - "Fractal Geometry: Mathematical Foundations and Applications" by Kenneth Falconer
   - "The Fractal Geometry of Nature" by Benoit B. Mandelbrot
2. Research papers:
   - "Fractal Analysis: Definition, Quantification, and Interpretation" (Weierstrass Institute for Applied Analysis and Stochastics, WIAS)
   - "Fractal Analysis: A Brief Overview" by A. Krzywicki and B. Trzeciak
3. Online courses:
   - Coursera offers courses on fractal analysis, such as "Fractals and Scaling" and "Fractals and Dynamical Systems in MATLAB."
   - edX provides courses like "Introduction to Fractals and Fractional Calculus" and "Fractals and Scaling in Finance."
4. Scholarly journals:
   - "Fractals" is a journal dedicated to fractal analysis and related topics.
   - "Chaos, Solitons & Fractals" is another journal that covers various aspects of fractal analysis.
These resources should provide you with a comprehensive understanding of fractal analysis, including its principles, techniques, and applications.
Question
In structural stochastic analysis, the input can be modeled as a random field or a stochastic process. In the discretized setting, there is randomness between any two input discretization points. Can the output also be characterized directly as a random field or stochastic process? For the output, however, the values at two discrete points appear deterministic because of the constraints of the physical system.
Yes, they can be used for uncertainty modeling and characterization. However, I still have some doubts. Take the input elastic-modulus random field as an example: in one sample, one element has modulus A1, another A2, and so on up to An, and A1...An are constrained only by the mean, variance, and correlation structure of the random field; there is randomness between any two elements. For the corresponding output response, however, once one element's response is B1, the other element's response is B2, not B3...Bn, because the fixed physical system determines it. So even though the output is random across input samples, it is deterministic given a single sample. This seems different from the input.
Question
I have looked over the internet and could not find a satisfactory answer. What is a "concrete" (i.e., in applications outside of mathematics) solution of a (definite) stochastic integral, and how does one find such a solution? Recall that in stochastic integration the result (which we can eventually apply) is not a number, nor another stochastic process, but a random variable. So how do we obtain it, and how do we find or approximate its probability distribution? Should we integrate ALL the realizations of the integrated process, or some of them, to obtain a statistical sample of the solution, or something else? Where can I find this problem properly elaborated, or who can explain this? Jerzy F.
These lectures: https://irfu.cea.fr/Phocea/file.php?class=page&file=678/QFT-IRFU1.pdf might be a good place to start.
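One concrete answer is Monte Carlo: simulate many realizations of the driving process, form the Itô sum along each, and the resulting sample approximates the distribution of the random variable. A sketch for ∫₀ᵀ W dW, whose closed form (W_T² − T)/2 from Itô's formula lets us check the discretization; path and step counts are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
T, n_steps, n_paths = 1.0, 1000, 20_000
dt = T / n_steps

# Simulate many Brownian paths and form the Ito sum for each realization.
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.cumsum(dW, axis=1)
W_left = np.hstack([np.zeros((n_paths, 1)), W[:, :-1]])  # left endpoints
integral = (W_left * dW).sum(axis=1)                     # Ito sum per path

# Ito's formula gives the closed form (W_T^2 - T) / 2 for comparison;
# the sample `integral` approximates the distribution of the integral.
closed_form = (W[:, -1] ** 2 - T) / 2
print(np.mean(np.abs(integral - closed_form)))  # small discretization error
```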
Question
What other similar graphical approaches/tools do you know when we attempt to depict the degradation state or reliability performance of a system, aside from Markov chain and Petri net?
(Any relevant references are welcome.)
What I did on the job (portraying the maintenance process from Plan to Approve to Schedule to Work to Closeout) was make a "bubble chart" with arrows from stage to stage (including skips and reversals, such as the plan was not approved and kicked back to planning) with arrows and the average time to go on the path and the number of packages to go on the path in a given time frame (such as a month).
With modern graphics, one could actually animate with ants going from mound to mound I suspect.
Question
I am pricing a financial instrument. I have simulated interest rate paths which follow the CIR process (see attached). Now I want to find the probability of each path. This is under the assumption that we can treat each path as a state and infer a transition matrix P and a stationary distribution pi, s.t.
pi * P = pi
I would very much appreciate if any one with knowledge of these stochastic processes point me to a resource such as a paper, R or Python implementation code or any advice on how I could proceed.
Guy Mélard thanks for the information. I will take a look
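A minimal sketch of the state-discretisation idea, with entirely hypothetical CIR parameters: simulate a long path with an Euler scheme, bin the rate level into states, count one-step transitions to estimate P, and take the stationary distribution as the left eigenvector of P for eigenvalue 1 (pi P = pi):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate one long CIR path with a (full-truncation) Euler scheme:
# dr = kappa*(theta - r) dt + sigma*sqrt(r) dW
kappa, theta, sigma, dt, n = 1.0, 0.04, 0.1, 1 / 252, 100_000
r = np.empty(n)
r[0] = theta
for t in range(1, n):
    r_pos = max(r[t - 1], 0.0)
    r[t] = (r[t - 1] + kappa * (theta - r_pos) * dt
            + sigma * np.sqrt(r_pos) * np.sqrt(dt) * rng.standard_normal())

# Discretize the level into states and count one-step transitions.
n_states = 10
edges = np.quantile(r, np.linspace(0, 1, n_states + 1))
states = np.clip(np.searchsorted(edges, r, side="right") - 1, 0, n_states - 1)
P = np.zeros((n_states, n_states))
for a, b in zip(states[:-1], states[1:]):
    P[a, b] += 1
P /= P.sum(axis=1, keepdims=True)

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()
print(pi)
```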
Question
Nowadays, mainstream psychology uses mathematical statistics to study psychology. This is a research method for groups, and its conclusions cannot be applied to individuals, yet the main object of psychological research is the individual. So how do you study individuals? Are time series and stochastic processes suitable for this?
wish you success
Question
I am already using YUIMA package for estimating stochastic differential equations.
I am wondering which software/packages researchers use for estimating SDEs (other than YUIMA).
Julia differential equation solvers have high order and adaptive methods (https://diffeq.sciml.ai/stable/tutorials/sde_example/). But more importantly, these implementations are compatible with automatic differentiation, making it easy to do things like gradient descent for doing parameter estimation and model calibration. Tutorials of this can be found in the Julia SciML libraries, such as https://sensitivity.sciml.ai/dev/sde_fitting/optimization_sde/. The other advantage of course is performance, where the Julia performance over Python and MATLAB solvers is a few orders of magnitude (https://benchmarks.sciml.ai/html/MultiLanguage/wrapper_packages.html)
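Whatever the package, the basic estimation idea for a simple SDE from discrete observations can be sketched directly (an illustrative Ornstein-Uhlenbeck example with arbitrary parameters, not tied to YUIMA or the Julia libraries): the OU transition is exactly AR(1), so the parameters can be recovered by regressing consecutive observations.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate an Ornstein-Uhlenbeck process dX = theta*(mu - X) dt + sigma dW
# using its exact AR(1) transition.
theta, mu, sigma, dt, n = 2.0, 1.0, 0.5, 0.01, 100_000
a = np.exp(-theta * dt)
noise_sd = sigma * np.sqrt((1 - a ** 2) / (2 * theta))
x = np.empty(n)
x[0] = mu
for t in range(1, n):
    x[t] = mu + (x[t - 1] - mu) * a + noise_sd * rng.standard_normal()

# Estimate (theta, mu, sigma) by regressing x[t] on x[t-1] and
# inverting the AR(1) relations.
X, Y = x[:-1], x[1:]
a_hat, b_hat = np.polyfit(X, Y, 1)            # Y ~ a_hat * X + b_hat
theta_hat = -np.log(a_hat) / dt
mu_hat = b_hat / (1 - a_hat)
resid = Y - (a_hat * X + b_hat)
sigma_hat = resid.std() * np.sqrt(2 * theta_hat / (1 - a_hat ** 2))
print(theta_hat, mu_hat, sigma_hat)
```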
Question
Dear researchers,
I am working on formulating hydrological model when runoff(output variable) is available at monthly time-step while rainfall(input variable) is at daily time-step.
I firstly wanted to explore mathematical models and techniques that can be used here. I have found MIDAS regression method, which forms relationship between mixed frequency data variables (output at monthly time step and input at daily time step). But the problem is variables in hydrological models are at the same time step. So that technique will not work, because the MIDAS model will have relation between variables sampled at different frequency.
So can anyone suggest relevant literature, in which both output and input variables of model are related at high frequency (say daily) but the model is learning through low frequency (monthly) output data and high frequency (daily) input data.
Can you use daily input data to forecast daily output data, then cumulate to monthly? You will be smoothing output forecasts, but observed output is already smoothed. Because output data is only available monthly, daily variation in output is not observable.
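For a purely linear daily model the suggestion above can be pushed further: aggregating the daily regressors to monthly sums lets the daily weights be fitted against monthly totals directly. A synthetic sketch in which the 3-day response weights are entirely hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic setup: daily rainfall drives daily runoff through a 3-day
# linear response, but runoff is only OBSERVED as monthly totals.
n_days, month_len = 720, 30
rain = rng.gamma(2.0, 2.0, n_days)
true_w = np.array([0.5, 0.3, 0.1])        # hypothetical response weights
lagged = np.column_stack([np.roll(rain, k) for k in range(3)])
lagged[:3] = 0.0                          # drop the wrap-around rows
runoff_daily = lagged @ true_w
monthly = runoff_daily.reshape(-1, month_len).sum(axis=1)

# Because the daily model is linear, summing the lagged regressors over
# each month lets us fit the daily weights from monthly targets.
X_monthly = lagged.reshape(-1, month_len, 3).sum(axis=1)
w_hat, *_ = np.linalg.lstsq(X_monthly, monthly, rcond=None)
print(w_hat)   # recovers the daily weights from monthly totals
```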
Question
The birth and death probabilities are p_i and q_i respectively, and 1-(p_i+q_i) is the probability of no change in the process. Zero ({0}) is an absorbing state and the state space is {0,1,2,...}. What are the conditions for {0} to be recurrent (positive or null)? Is the set {1,2,3,...} transient? What can we say about the duration of the process until absorption, and about the stationary distribution if it exists?
Every comment is appreciated.
Since {0} is absorbing, it is trivially recurrent (indeed positive recurrent: once entered, the chain stays there). Provided p_i > 0 and q_i > 0 for all i >= 1, every state in {1,2,...} is transient, because from any such state there is positive probability of reaching the absorbing state 0 before returning.
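For completeness, a sketch of the classical birth-death criterion (cf. Karlin and Taylor), in the notation of the question and assuming p_i, q_i > 0 for i >= 1:

```latex
% Absorption at 0 is certain from every initial state if and only if
\sum_{n=1}^{\infty} \prod_{i=1}^{n} \frac{q_i}{p_i} = \infty .
% When this sum is finite, the chain escapes to infinity with positive
% probability, and the only stationary distribution is the point mass
% at the absorbing state 0.
```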
Question
I have a short time series (5 observations) and would like to know both the best approach for modelling said data and the most reliable predictive option?
The data is a stochastic process, recording the amount of 'green space' converted from natural environment to built form [in m2 per km2]. There is no auto-corrrelation or seasonality, but the data is non-stationary [and cannot be coerced through differencing etc]. I have modelled the data using a Dynamic Linear Model, but the forecast predictions are not particularly reliable, therefore I wondered whether I had taken the wrong approach and there were appropriate alternatives?
I have also tried an ARIMA, but have similar issues to the DLM.
Regards
John
For extremely short time series, simpler models work much better. Since there is no way of validating your model with held-out samples due to data scarcity, you can use AIC as a proxy to select the best of the simpler models. From your description, even a random walk or an AR(1) process could stand out as a good candidate. The auto.arima() function in R's forecast package also selects simpler models for short time series. For further reference, have a look at this blog post by Rob J Hyndman: https://robjhyndman.com/hyndsight/short-time-series/
Wish you best luck!
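The AIC-based screening described above can be sketched on a toy series (the five observations below are hypothetical, and the residual/parameter-count conventions are one reasonable choice among several):

```python
import numpy as np

y = np.array([12.0, 15.0, 14.0, 18.0, 21.0])   # hypothetical 5 observations

def aic(resid, k):
    """Gaussian AIC from one-step residuals with k fitted parameters."""
    n = len(resid)
    return 2 * k + n * np.log(np.mean(resid ** 2))

# Candidate one-step predictors, all evaluated on the last 4 points.
resid_mean = y[1:] - np.mean(y)            # constant-mean model (k=1)
resid_rw = y[1:] - y[:-1]                  # random walk: predict last value (k=0)
phi = np.polyfit(y[:-1], y[1:], 1)         # AR(1) with intercept (k=2)
resid_ar1 = y[1:] - np.polyval(phi, y[:-1])

scores = {"mean": aic(resid_mean, 1),
          "rw": aic(resid_rw, 0),
          "ar1": aic(resid_ar1, 2)}
print(min(scores, key=scores.get), scores)
```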
Question
The origin of gravitation, the origin of electric charge, and the fundamental structure of physical reality are resolved, but these facts are not yet added to common knowledge. Also, the structure of photons is resolved and the beginning of the universe is explained. A proper definition of what a field is and how a field behaves has been given. These facts are explained in .
This model still leaves some open questions. The model does not explain the role of massive bosons. It does not explain the existence of generations of fermions. The HBM also does not provide an explanation for the fine details of the emission and absorption of photons. The model does not give a reason for the existence of the stochastic processes that generate the hopping paths of elementary particles. The model does not explain in detail how color confinement works. It also does not explain how neutral elementary particles can produce deformation. The referenced booklet treats many of its open questions in sections that carry this title.
The model suggests that we live in a purely mathematical model. This raises deep philosophical questions.
In other words, the Hilbert Book Model Project is far from complete. The target of the project was not to deliver a theory of everything. Its target was to dive deeper into the crypts of physical reality and to correct flaws that were adopted into accepted physical theories. Examples of these flaws are the Big Bang theory, the explanation of black holes, the description of the structure of photons, and the description of the binding of elementary particles into higher-order modules.
The biggest discovery of the HBM project is the fact that it appears possible to generate a self-creating model of physical reality that after a series of steps shows astonishing resemblance to the structure and the behavior of observed physical reality.
A major result is also that all elementary particles and their conglomerates are recurrently regenerated at a very fast rate. This means that apart from black holes, all massive objects are continuously regenerated. This conclusion attacks the roots of all currently accepted physical theories. Another result is that the generation and binding of all massive particles are controlled by stochastic processes that own a characteristic function. Consequently the Hilbert Book Model does not rely on weak and strong forces that current field theories apply.
The HBM explains gravity at the level of quantum physics and thus bridges the gap between quantum theory and current gravitation theories.
The Hilbert Book Model shows that mathematicians can play a crucial role in the further development of theoretical physics. The HBM hardly affects applied physics. It does not change much in the way that observations of physical phenomena will be described.
Lee Smolin has an opinion about “Which fundamental questions about physical reality are still open?”:
1. The problem of quantum gravity: Combine general relativity and quantum theory into a single theory that can claim to be the complete theory of nature.
2. The foundational problems of quantum mechanics: Resolve the problems in the foundations of quantum mechanics, either by making sense of the theory as it stands or by inventing a new theory that does make sense.
3. The unification of particles and forces: Determine whether or not the various particles and forces can be unified in a theory that explains them all as manifestations of a single, fundamental entity.
4. The tuning problem: Explain how the values of the free constants in the standard model of particle physics are chosen in nature.
5. The problem of cosmological mysteries: Explain dark matter and dark energy. Or, if they don't exist, determine how and why gravity is modified on large scales. More generally, explain why the constants of the standard model of cosmology, including the dark energy, have the values they do.
With kind regards, Sydney
Question
It is known that the FPE gives the time evolution of the probability density function of the stochastic differential equation.
I could not find any reference that relates the PDF obtained from the FPE to the trajectories of the SDE.
For instance, suppose the solution of the FPE corresponding to an SDE converges to the pdf \delta_{x_0} asymptotically in time.
Does this mean that all the trajectories of the SDE converge to x_0 asymptotically in time?
The Fokker-Planck equation can be treated as the so-called forward Kolmogorov equation for a certain diffusion process.
To derive a stochastic equation for this diffusion process it is very useful to know the generator of the process. To find the form of the generator, consider the PDE dual to the Fokker-Planck equation, which is called the backward Kolmogorov equation. The elliptic operator in the backward Kolmogorov equation coincides with the generator of the required diffusion process. Let me give you an example.
Assume that you consider the Cauchy problem for a Fokker-Planck-type equation
u_t=Lu, u(0,x)=u_0(x),
where Lu(t,x)=[A^2(x)u(t,x)]_{xx}-[a(x)u(t,x)]_x.
The dual equation is h_t+L^*h=0, where L^*h= A^2(x)h_{xx}+a(x)h_x.
As a result the required diffusion process x(t) satisfies the SDE
dx(t)=a(x(t))dt+A(x(t))dw(t), x(0)= \xi,
where w(t) is a Wiener process and \xi is a random variable independent of w(t) with distribution density u_0(x).
You may see the book Bogachev V.I., Krylov N.V., Röckner M., Shaposhnikov S.V. "Fokker-Planck-Kolmogorov equations"
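On the original question: the FPE governs the law of the process, so convergence of the density means convergence in distribution, not pathwise convergence of every trajectory. A small sketch for an Ornstein-Uhlenbeck SDE (parameters arbitrary), where the empirical distribution of many simulated trajectories approaches the stationary FPE solution N(0, A²/(2θ)) even though individual paths keep fluctuating:

```python
import numpy as np

rng = np.random.default_rng(5)

# dx = -theta*x dt + A dw. The stationary FPE solution is Gaussian
# N(0, A^2/(2*theta)); individual trajectories never settle down.
theta, A, dt, n_steps, n_paths = 1.0, 0.8, 0.01, 1500, 20_000
x = rng.normal(3.0, 0.1, n_paths)      # start far from equilibrium
for _ in range(n_steps):
    x += -theta * x * dt + A * np.sqrt(dt) * rng.standard_normal(n_paths)

stationary_var = A ** 2 / (2 * theta)
print(x.mean(), x.var(), stationary_var)
```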
Question
Consider an experiment in which we prepare pairs of electrons. In each trial, one of the two electrons - let's name it the 'herald' - is sent to a detector C, and the other - let's name it 'signal' - to a detector D. The wave-function of the signal is therefore
(1) |ψ> = ψ(r) |1>,
i.e. in each trial of the experiment, when the detector C clicks, we know that a signal-electron is in the apparatus. Indeed, the detector D will report its detection.
Now, let's consider that the signal wave-packet is split into two copies which fly away from one another, one toward the detector DA, the other to the detector DB,
(2) |ψ> = (1/√2) ψA(r) |1>A + (1/√2) ψB(r) |1>B.
We know that the probability of getting a click in DA (DB) is ½, but in a given trial of the experiment we can't predict which one of DA and DB would click.
Then, let's ask ourselves what happens at a detector, for instance DA. The 'thing' that lands on the detector has all the properties of the type of particle named 'electron', i.e. mass, charge, spin, lepton number, etc. But, unlike the case of equation (1), the intensity of the wave-packet is now 1/2. It is not an 'entire' electron. Imagine that a series of quickly interchanging frames is projected on a screen. The picture in one frame seems to be a table, but it is replaced very quickly by a blank frame, and so on. Can we then say what we saw on the screen? A table, or blank?
The situation of the detector is quite analogous. So, will the detector report a detection, or will remain silent? What is your opinion?
For a deeper analysis see
Dear Mazen,
You wanted me to reply to your question, but I have nothing to say.
"The particle didn't know all forces exist in space, but space itself know that, and know the particle itself, so when the particle appears at some point, the space (which is the second player that make the motion) can do (based on some internal mechanism) the sum-over-all-trajectories for this particle to give the particle (which is the first player that make the motion) the opportunity to exist in some specific points in space and time with different preferences (and this what I mean by "space gates")."
Exactly as you say that the space knows all sort of things, I can say that between my door and the door of my neighbor, exists a galaxy. You can say whatever you want, there is no limitation to that.
Question
What stage of flood damage management should I take?
- Identify the 50-, 100-, 250-, and 500-year floodplains. The 250-year flood probably includes the upper confidence limit of the 100-year flood. The 500-year flood should include the area affected by both the upper confidence limit of the 100-year flood and the effects of urbanization, hydrologic modifications from levees, dam failure, etc.
- Assume any road crossing not designed for at least the 100-year flood is likely to fail at least once in a 100-year life.
- Identify facilities and structures in GIS that are located within this area.
- Identify available flood data, historical flood stage marks, and stream types with and without floodplain (entrenchment).
- If the area has potential for hurricanes, tropical cyclones, tidal surge zones, or sea level rise, adjust accordingly.
- Many facilities can be designed to limit losses if flooded, so these considerations could be made when designing, mitigating, or retrofitting facilities and structures, or when installing management measures such as lowering dams before major storms to reduce the potential for failure.
- Areas in the vicinity of the Ring of Fire, or historically subject to the effects of geologic plate adjustments, might want to consider tsunami.
- At some point, evacuation and life protection become more important than damage considerations.
Question
Hello everyone,
I want to prove existence and uniqueness for an SDE (stochastic differential equation) which depends on a time parameter, a Lévy process, and ω.
The problem is found in the book "David Applebaum: Lévy Processes and Stochastic Calculus, 2nd edition" on page 375.
I have proved the existence of a solution of such an SDE under the Lipschitz and growth conditions via Theorem 6.2.3, but I don't know how to show uniqueness.
Does anyone have ideas or hints for showing uniqueness?
Thanks and best wishes
Actually, under the Lipschitz and growth conditions, uniqueness is proved by Gronwall's lemma. Assume to the contrary that there exist two solutions of the SDE and estimate their difference in a suitable norm. Finally, apply Gronwall's lemma to verify that the difference is zero.
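A sketch of this standard argument, assuming a Lipschitz constant K and suppressing the jump-term estimates (which follow Applebaum's Section 6.2):

```latex
% Two solutions X, Y with the same initial condition satisfy
X_t - Y_t = \int_0^t \big(b(X_s)-b(Y_s)\big)\,ds
          + \int_0^t \big(\sigma(X_s)-\sigma(Y_s)\big)\,dW_s
          + \text{(jump terms)} .
% Squaring, taking expectations, and using Cauchy--Schwarz, the Ito
% isometry, and |b(x)-b(y)| + |\sigma(x)-\sigma(y)| \le K|x-y| gives
\mathbb{E}\,|X_t - Y_t|^2 \;\le\; C(K,T) \int_0^t \mathbb{E}\,|X_s - Y_s|^2\,ds .
% Gronwall's lemma with zero inhomogeneity:
f(t) \le a + C\!\int_0^t f(s)\,ds \;\Longrightarrow\; f(t) \le a\,e^{Ct},
% and here a = 0, so E|X_t - Y_t|^2 = 0 for all t, i.e. X = Y a.s.
```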
Question
Hi everybody,
I am modelling a process that is the product of two stochastic processes. Is there any way to estimate the parameters of each of these processes separately?
Without any additional information about the factors, it is simply IMPOSSIBLE.
Thus, if any inference is expected, some additional structure of the processes is to be known.
For instance, if the real-valued processes X and Y are known to be independent, stationary, and Gaussian, then their product is stationary too, but not Gaussian, and there is a chance of finding procedures that detect e.g. some features of their two correlation functions. Obviously, one cannot separate their scales.
More precisely, for two independent Gaussian variables X and Y with distributions N(m,v) and N(n,u) respectively (v and u denoting variances), the basic moments of X*Y are:
E(X*Y) = E(X)*E(Y) = m*n, E{(X*Y)^2} = E(X^2)*E(Y^2) = (m^2+v)(n^2+u),
hence the variance equals D^2(X*Y) = v*u + u*m^2 + v*n^2.
As we see, some additional knowledge is needed if m and n are to be found.
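The moment formulas above are easy to verify by Monte Carlo (the parameter values below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)

m, v = 1.5, 0.4      # X ~ N(m, v), with v the variance
n_, u = -0.5, 0.9    # Y ~ N(n_, u)
N = 2_000_000
X = rng.normal(m, np.sqrt(v), N)
Y = rng.normal(n_, np.sqrt(u), N)

# Derived above: Var(X*Y) = v*u + u*m^2 + v*n^2, E(X*Y) = m*n.
var_formula = v * u + u * m ** 2 + v * n_ ** 2
var_mc = (X * Y).var()
print(var_mc, var_formula)
```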
Question
I want to improve the specification performance of my MEMS Gyro, As we know, the measurement errors of a MEMS gyroscope usually contain deterministic errors and stochastic errors. I just focus on stochastic part and so we have:
y(t) = w(t)+b(t)+n(t)
where:
{w(t) is "True Angular Rate"}
{b(t) is "Bias Drift"}
{n(t) is "Measurement Noise"}
The bias drift and other noises are usually modeled in a filtering system to compensate the gyroscope outputs and improve accuracy. To achieve a considerable noise reduction, another approach is to model both the true angular rate and the bias drift as the system state vector when designing a Kalman filter.
Now, if I want to model the true angular rate, how could I do this? I just have a real dynamic test of the gyro that includes the above terms, and I don't know how to determine the parameters required by the different models (such as random walk, first-order Gauss-Markov, or AR) for modeling the true angular rate from an unknown true angular rate signal.
You can also model the scaling errors and angular displacement, so the full model would be
y(t) = S R w(t) + b(t) + n(t),
where S is the matrix of scaling factors and R is the matrix of angular displacement. However, in practice the biggest contributor to the error is the bias b(t). Errors due to scaling and angular displacement are nowadays usually low, because the manufacturing quality of gyro sensors is now quite good.
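On the modelling question: in a static (zero-rate) test, everything recorded is b(t) + n(t), which is how the stochastic part is usually identified (e.g. by Allan-variance or autocorrelation fitting); identifying a model for w(t) itself needs a known reference motion such as a rate table. A hedged Python sketch of the static case, with made-up parameter values (a real gyro's tau, sigma_b, sigma_n would come from your test data), using a first-order Gauss-Markov bias drift:

```python
import math
import random
import statistics

def simulate_gyro(N=200_000, dt=0.01, tau=1.0, sigma_b=0.02,
                  sigma_n=0.05, seed=1):
    """Static-gyro sketch: y = w + b + n with w = 0 (no rotation),
    b a first-order Gauss-Markov bias drift, n white measurement noise.
    All parameter values are illustrative, not from a real datasheet."""
    rng = random.Random(seed)
    phi = math.exp(-dt / tau)                  # GM propagation factor
    q = sigma_b * math.sqrt(1.0 - phi * phi)   # driving-noise std
    b, bias, y = 0.0, [], []
    for _ in range(N):
        b = phi * b + q * rng.gauss(0.0, 1.0)  # bias drift update
        bias.append(b)
        y.append(b + rng.gauss(0.0, sigma_n))  # w(t) = 0 in a static test
    return y, bias

y, bias = simulate_gyro()
bias_var = statistics.pvariance(bias)          # ~ sigma_b^2 at stationarity
```

Fitting such a model to your own static record (estimating phi and q from the data) then gives the bias states for the Kalman filter design described above.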
Question
I am eager to study stochastic processes and their application in finance. As I am a student in economics, the concepts are completely unfamiliar to me. Any help would be appreciated. Can anyone suggest an introductory textbook?
I think this is a very introductory book that everyone will enjoy :)
Thomas Mikosch
Question
I am going to develop a queueing model in which riders and drivers arrive with inter-arrival time exponentially distributed.
All the riders and drivers arriving in the system will wait for some amount of time until being matched.
The matching process will pair one rider with one driver and takes place every Δt unit of time (i.e., Δt, 2Δt, 3Δt, ⋯). Whichever side outnumbers the other, its exceeding portion will remain in the queue for the next round of matching.
The service follows the first come first served principle, and how they are matched in particular is not in the scope of this problem and will not affect the queue modelling.
I tried to formulate it as a double-ended queue, where the state indicates the excess number in the system.
However, this formulation didn't incorporate the factor Δt, so it is not in a batch-service fashion. I have no clue how to formulate this Δt (somewhat like a buffer) into the model.
Can you please explain the matching process in more detail? The enclosed picture is the standard random walk with two competing, independent, exponentially distributed waiting times: if type 1 wins, we go one step to the right; in the opposite case, we go to the left. No service is sketched, so as precise a description of the service as possible is needed. Now, the main doubt is caused by the lack of interpretation of negative positions: isn't it the difference between the numbers of arrived riders and drivers?
Also, writing these words:
GQ: "The matching process will pair one rider with one driver and takes place every Δt unit of time (i.e., Δt, 2Δt, 3Δt, ⋯). Whichever side outnumbers the other, its exceeding portion will remain in the queue for the next round of matching. "
it is not explained what "outnumbers" means. My English is too weak to understand the context. Can this be explained in simpler words, like this:
The state is characterized by the current values of two numbers: the riders and the drivers in the waiting room. At the matching instant k·Δt, both numbers decrease by the minimum of the two (hence one becomes zero) . . . .
Note that this is a kind of guess at what you meant!
Joachim
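Under that reading (both queue lengths drop by their minimum at each matching instant k·Δt), the model is straightforward to simulate. A Python sketch with illustrative rates λ_r > λ_d, so the rider queue drifts upward while the driver queue stays near zero:

```python
import math
import random

def poisson_draw(rng, lam):
    """Knuth's multiplication method for a Poisson(lam) draw."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def simulate_matching(lam_r=1.0, lam_d=0.8, dt=1.0, rounds=10_000, seed=7):
    """Batch matching every dt: min(riders, drivers) pairs leave, the
    excess carries over to the next round (rates are illustrative)."""
    rng = random.Random(seed)
    qr = qd = 0
    tot_r = tot_d = 0
    for _ in range(rounds):
        qr += poisson_draw(rng, lam_r * dt)   # rider arrivals in one interval
        qd += poisson_draw(rng, lam_d * dt)   # driver arrivals in one interval
        matched = min(qr, qd)                 # FCFS pairing at t = k*dt
        qr -= matched
        qd -= matched
        tot_r += qr
        tot_d += qd
    return tot_r / rounds, tot_d / rounds

avg_riders, avg_drivers = simulate_matching()
```

Because exponential inter-arrival times give Poisson counts per interval, only the counts per Δt window matter here; the single state variable qr − qd recovers the double-ended-queue picture, with the sign telling which side is in excess.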
Question
One of the main stability theories for stochastic systems is stochastic Lyapunov stability theory; it is analogous to Lyapunov theory for deterministic systems.
The main idea is that, for the stochastic system
dx = f(x)dt + g(x)dw_t,
the differential operator LV (the infinitesimal generator applied to the Lyapunov function) must be negative definite.
There is another assumption in this theory:
f(0) = g(0) = 0,
which implies that at the equilibrium point (here x_e = 0) the disturbance vanishes automatically.
What I want to know is: is this a reasonable assumption?
I.e., in an engineering context, is it reasonable to assume that the disturbance will vanish at the equilibrium point?
From my practical experience, if f(0) = 0, then g(0) does not equal 0, because of sensor noise.
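A worked scalar example makes the role of this assumption concrete. Take multiplicative noise, so that g(0) = 0 holds by construction, and the candidate function V(x) = x²:

```latex
dx = -a\,x\,dt + \sigma\,x\,dw_t, \qquad V(x) = x^2,
\qquad
LV(x) = V_x(x)\,f(x) + \tfrac{1}{2}\,V_{xx}(x)\,g(x)^2
      = -2a\,x^2 + \sigma^2 x^2 = (\sigma^2 - 2a)\,x^2 .
```

So LV is negative definite iff σ² < 2a: the multiplicative disturbance is tolerated only while its gain is small relative to the restoring drift. By contrast, with additive noise g(x) = σ (the sensor-noise situation mentioned in this thread), g(0) ≠ 0 and the origin is no longer an equilibrium of the stochastic system; one then typically weakens the goal from asymptotic stability to boundedness in probability (sometimes called noise-to-state stability).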
Question
It seems that by solving the stationary form of the forward Fokker-Planck equation we can find the equilibrium solution of a stochastic differential equation.
Is the above statement true? Is it a conventional way to find the equilibrium solution of an SDE? And do SDEs always have an equilibrium solution?
The question about the equilibrium solution, and probably also many other questions concerning the Fokker-Planck equation, are answered in the book by H. Risken, The Fokker-Planck Equation: Methods of Solution and Applications. For this, search the keyword "detailed balance", as suggested above. The question about stability of SDEs is discussed, for example, here: http://www6.cityu.edu.hk/ma/ws2010/doc/mao_notes.pdf
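As a concrete illustration of the stationary-FPE route: for the Ornstein-Uhlenbeck SDE dx = -θx dt + σ dW, setting the time derivative in the FPE to zero gives a Gaussian stationary density with variance σ²/(2θ), and a long simulated path reproduces it. A Python sketch with illustrative values θ = 1, σ = 0.5:

```python
import math
import random
import statistics

def ou_long_run(theta=1.0, sigma=0.5, dt=0.01, N=200_000, seed=3):
    """Euler-Maruyama run of dx = -theta*x dt + sigma dW.  The stationary
    Fokker-Planck solution is N(0, sigma^2/(2*theta)), so the long-run
    sample variance should approach 0.125 for these parameter values."""
    rng = random.Random(seed)
    x, xs = 0.0, []
    sq = math.sqrt(dt)
    for _ in range(N):
        x += -theta * x * dt + sigma * sq * rng.gauss(0.0, 1.0)
        xs.append(x)
    return statistics.pvariance(xs[N // 10:])  # drop burn-in, then estimate

var_hat = ou_long_run()
```

Note this SDE has a stationary density because the drift is restoring; an SDE with no confining drift (e.g. plain Brownian motion) has no normalizable stationary FPE solution, which answers the last part of the question.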
Question
For deterministic systems, by defining a proper terminal constraint, terminal cost, and local controller, we can prove the recursive feasibility and stability of a nonlinear system under model predictive control. For stochastic nonlinear systems it is impossible to do that, since we do not have bounded sets for the states.
What is the framework for establishing the recursive feasibility and stability of MPC for stochastic nonlinear systems?
Bounded uncertainty is a robust approach, not a stochastic approach.
Question
When we need to solve the Fokker-Planck equation (Kolmogorov forward equation) with finite differences, we need to solve it in a bounded domain (regardless of the dimension of the FPE). For a more accurate solution, which kind of boundary condition should be considered?
1. Natural boundary condition: a Dirichlet-type boundary condition, where the value of the probability at the boundaries equals zero.
2. Reflecting boundary condition: which I think is a Robin-type boundary condition, where the flux at the boundaries is zero?
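In 1-D both options are easy to compare with a flux-form scheme: writing p_t = -dJ/dx with J = -θxp - D p_x and setting the two end fluxes to zero implements the reflecting condition and conserves probability mass exactly, whereas the Dirichlet "natural" condition (p = 0 at the ends) slowly absorbs mass. A hedged Python sketch for the Ornstein-Uhlenbeck FPE (illustrative θ = 1, D = 0.25, whose stationary density is Gaussian with variance D/θ):

```python
import math

def fpe_reflecting(theta=1.0, D=0.25, L=4.0, M=101, dt=5e-4, steps=6000):
    """Explicit finite-difference solution of the 1-D Fokker-Planck equation
    p_t = d/dx(theta*x*p) + D*p_xx on [-L, L], written in flux form
    p_t = -dJ/dx with J = -theta*x*p - D*dp/dx.  Zero boundary fluxes give
    the reflecting condition and conserve total probability."""
    h = 2.0 * L / (M - 1)
    xs = [-L + i * h for i in range(M)]
    p = [math.exp(-x * x) for x in xs]            # arbitrary initial density
    s = sum(p) * h
    p = [v / s for v in p]                        # normalise to mass 1
    for _ in range(steps):
        J = [0.0] * (M + 1)                       # J[0] = J[M] = 0: reflecting
        for i in range(M - 1):                    # flux at interface i+1/2
            xm = 0.5 * (xs[i] + xs[i + 1])
            pm = 0.5 * (p[i] + p[i + 1])
            J[i + 1] = -theta * xm * pm - D * (p[i + 1] - p[i]) / h
        p = [p[i] - dt / h * (J[i + 1] - J[i]) for i in range(M)]
    return xs, p, h

xs, p, h = fpe_reflecting()
mass = sum(p) * h                                  # stays 1 to rounding error
var = sum(pi * x * x for pi, x in zip(p, xs)) * h  # relaxes towards D/theta
```

With the explicit scheme, dt must satisfy D·dt/h² < 1/2 for stability; the values above respect that. Which condition is "more accurate" depends on the underlying process: use reflecting boundaries if probability genuinely cannot leave the domain, and the vanishing (natural) condition if the domain is just a large truncation of an unbounded state space.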
Question
In mathematics, a delay differential equation is a type of differential equation in which the derivative of the unknown function at a certain time depends on the values of the function at earlier times, while a stochastic differential equation is a differential equation in which one or more of the terms is a stochastic process, so that the solution is itself a stochastic process. Please share your valuable ideas on how to distinguish between delay differential equations and stochastic differential equations.
Thanks for sharing some papers on the above, A.M. Abdallah.
Question
(This question is not mine, I copied here an issue raised by Hans van Leunen in one of my threads. I think it is worth of a separate discussion.)
Here are some of the problems Hans van Leunen posed:
• Why does the squared modulus of the wavefunction describe a probability density distribution? Probability density of what?
• What is the relation of these distributions with fields?
• How do they interact? Do you know the mathematics that describes these interactions?
• Obviously, physical reality is ruled by stochastic processes and not by all kinds of strange forces and force carriers.
I would add a problem of myself
• for which particles we may speak of a quantum field, and which ones are "too much classical" to be represented by a field? For instance, for atoms we may speak of a field, or only for elementary particles?
I trust that Hans would explain his views, and I also hope that the thread won't remain only mathematical. I mean, I think that it would be interesting to discuss the meaning of the concept of field in QFT.
André,
You will have little problem imagining a one-dimensional normal distribution, the famous bell-shaped function. Now take a two-parameter normal distribution. It has an elliptic or round, rotationally symmetric bell shape. Next take a three-parameter normal distribution. It means that the parameter space is three-dimensional, or four-dimensional with time included. The shape is a bell in four or five dimensions, one more than you are used to with the usual bell. The value of the bell function is still defined in real numbers. The squared modulus of the wavefunction will not differ much from this three-parameter normal distribution.
This function specifies the probability of detecting the owner of the wave function at the location and time of the parameter value. For a point-like object it can be the location where the object actually was, is or will be. The problem is that this smooth distribution in fact describes the location density distribution of a swarm of locations that may be hop landing locations. The swarm may reflect the hop landing locations of a cycle of a stochastic hopping path of a point-like object. The mystery lies thus in the fact that the wavefunction is a smooth function, while it may represent a swarm of discrete locations. The location density distribution may have a Fourier transform. That makes the location density distribution a wave package. Usually moving wave packages disperse. However, not this one because the swarm that the location density distribution describes may be recurrently regenerated and that regenerates the location density distribution.
Question
In process control in engineering, in many situations we need to control a system under a performance index (optimal control) while the system is exposed to uncertainty (parameter uncertainty, disturbance, or noise), and sometimes we need constraints on the states of the system.
There are two approaches: robust optimal control and stochastic optimal control.
When we use robust optimal control (because some bounds on the uncertainty are known), we consider the worst-case scenario; we can use optimal control, and hard constraints on the states can be satisfied. I think this is a practical approach.
On the other hand, when we cannot specify bounds on the uncertainty but the probability distribution of the uncertainty is known, we must use stochastic optimal control. In this case hard constraints cannot be defined, and we should use chance constraints, meaning each constraint is satisfied with some level of probability.
Now my question is: is such a definition practical in real-world applications, and is it really applied in industry?
Most of the constraints are for safety. For example, we want the temperature of a boiler to be bounded; it is dangerous if the temperature is only bounded with some probability. So I want to know: is the chance constraint a practical definition in real-world engineering applications?
Dear Dr. Nieuwenhuis , Dr.Lafifi and Dr. Mahrouf
What I want to know here is this: I believe the chance constraint is accepted by theorists, but what about experimentalists? Do they really use it?
For an engineer, does a chance constraint really have a meaning?
Is it really used in process control, or in engineering approaches generally, nowadays?
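One reason chance constraints do get used in practice is that, under a distributional assumption, they reduce offline to ordinary hard constraints on the nominal state. A standard scalar illustration (assuming Gaussian uncertainty, purely for concreteness): if the boiler temperature satisfies T = T̄ + w with w ~ N(0, σ²), then

```latex
\Pr\left(T \le T_{\max}\right) \ge 1 - \varepsilon
\quad\Longleftrightarrow\quad
\bar{T} \le T_{\max} - \Phi^{-1}(1-\varepsilon)\,\sigma ,
```

so for ε = 0.01 the controller enforces the deterministic tightened constraint T̄ ≤ T_max − 2.33σ. Seen this way, the chance constraint is a principled rule for choosing the safety back-off on the nominal trajectory, rather than a licence to violate the limit with probability ε; with unbounded (e.g. Gaussian) noise, some nonzero violation probability is unavoidable anyway, and the ε makes that trade-off explicit.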
Question
I have historic time series of 40 years of many weather variables. Call each variable's time series A, B, C ... Z for simplicity.
I want to use the full 40-year time series for training, with the intention of generating synthetic stochastic time series.
Now I can use simple Markov chain or Monte Carlo approaches for individual variables with great success. However, the relationships between the variables will not be maintained.
I need all variables to relate, such that A has a strong connection to B, but not to C etc.
So when I stochastically generate A, I want that to influence B and not C.
What is the best method to simulate complex inter-dependencies?
Stretch goal: how can this be done in Python 3?
Thanks for any and all help!
Best,
Jamie
Yes, there is definitely some correlation going on in many of these plots, so I think copulas could work to some degree of accuracy. As regards the exact setup, I do not really follow how the variables are supposed to tie together. Generally, copulas can handle many variables, so it could be easier to include even seemingly uncorrelated ones, just in case there is some small correlation. But, then again, I do not really follow all the variables here.
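Since Python 3 was requested: a minimal Gaussian-copula sketch in pure Python, assuming two historical series hist_a and hist_b (the names and parameter values are placeholders). Correlated standard normals are mapped to uniforms and then through the empirical inverse CDFs, so each synthetic variable keeps its own marginal while the pair keeps roughly the chosen dependence; pairs meant to be unrelated (like A and C in the question) would simply get rho ≈ 0:

```python
import math
import random

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def gaussian_copula_pair(hist_a, hist_b, rho, n, seed=0):
    """Draw n synthetic (A, B) pairs: marginals follow the historical
    samples (empirical inverse CDF), dependence is a Gaussian copula
    with correlation rho.  hist_a / hist_b are placeholder inputs."""
    rng = random.Random(seed)
    sa, sb = sorted(hist_a), sorted(hist_b)
    c = math.sqrt(1.0 - rho * rho)
    out = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + c * rng.gauss(0.0, 1.0)   # correlated normals
        u1, u2 = norm_cdf(z1), norm_cdf(z2)       # -> dependent uniforms
        a = sa[min(int(u1 * len(sa)), len(sa) - 1)]
        b = sb[min(int(u2 * len(sb)), len(sb) - 1)]
        out.append((a, b))
    return out

def _pearson(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((x - mu) * (y - mv) for x, y in zip(u, v))
    vu = sum((x - mu) ** 2 for x in u)
    vv = sum((y - mv) ** 2 for y in v)
    return cov / math.sqrt(vu * vv)

# demo with synthetic "history": Gaussian A, skewed (exponential) B
_rng = random.Random(1)
hist_a = [_rng.gauss(10.0, 2.0) for _ in range(2000)]
hist_b = [_rng.expovariate(0.5) for _ in range(2000)]
pairs = gaussian_copula_pair(hist_a, hist_b, rho=0.8, n=5000)
corr_ab = _pearson([a for a, _ in pairs], [b for _, b in pairs])
```

For more than two variables, rho becomes a correlation matrix and the correlated normals come from its Cholesky factor; temporal structure (the Markov-chain part) would be layered on separately, e.g. by generating the copula innovations for each day.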
Question
The universe must expand; otherwise temporary local deformations would not be perceived as attractive. The same mechanism that locally pumps volume into the field will expand that field. The local addition starts spreading over the field. The mechanism is implemented by spherical pulse responses.
Stochastic processes generate the pulses. They create mass out of nothing. Here mass stands for local deformation. This deformation quickly fades away. The processes produce a continuous stream of massive objects that dilute into the increasing volume of the universe. This means that mass is a very transient property. That property must recurrently be regenerated. That is why all elementary particles are recurrently regenerated by regenerating their constituents, which are spherical pulse responses.
The embedding of the separate Hilbert spaces in which the elementary particles reside into a non-separable Hilbert space drives the stochastic processes. Thus the volume stream comes from content of the separable Hilbert spaces that is added to the non-separable Hilbert space.
Physics ignores the existence of shock fronts. They are solutions of the wave equation and exist only in odd dimensions. The spherical shock fronts integrate over time into the Green's function of the carrier field. The Green's function owns some volume. The actuator of the spherical shock front infuses this volume into the carrier. Consequently, the field deforms locally. The volume spreads over the full field. This diminishes the deformation, but the added volume stays in the field and expands it. A stochastic process recurrently regenerates the spherical pulse responses. This makes the deformation persistent and establishes an ongoing expansion of the field, which is our living space. See: "Nature's Basic Dark Quanta"; http://vixra.org/abs/1712.0241
Question
I need help in understanding the role of (random) sampling in the implementation of a control system in Simulink. I need a basic, general example to visualize the role of the sampler in a control system, and the way it can be programmed (to be random, event-triggered, etc.).
Any help in this regard is very much appreciated
Hi Samira,
Referring to the Examples 9.3 and 9.4 in Prof. Lewis' book (Optimal and Robust Estimation: With an Introduction to Stochastic Control Theory, 2e), the attached MATLAB example (m-file) shows how to simulate a stochastic control system.
Hope this helps!
Question
I need some advice, because I have to create a mathematical model based on a Markov stochastic process to indicate where a human step will land in space. I have to start from this:
Pr( S(t1) = S1 | S(t0) = S0 ) = Pr(step length, step frequency) · Pr(change in step length, change in step frequency)
The first probability in the product is deterministic, while the second is based on empirical estimates. I have to make this product explicit using factors such as the step length, the step frequency, and the weight of the person.
Thanks so much. You've been really kind. Now I try to look at what you told me and then I hope I can create this model.
Question
I am looking for real data that I can use for seismic signal segmentation analysis with any stochastic process.
Sagiru, do you mean a seismic trace, for example, extracted from a 3D volume?
What do you understand by "segmentation"? Is it the concept the authors use in the attached paper?
Question
Hi,
I am trying to find the steady-state solution of a stochastic differential equation
dy = Ay dt + B1 y dV1 + B2 y dV2,
where A, B1 and B2 are operators, and dV1 and dV2 are colored noises.
Is there any way, or any literature, where the steady-state solution (dy/dt = 0) of such a stochastic differential equation has been worked out? Your help will be appreciated.
Dear Arif, see the attached pdf, Chap. 4, Section 4.5. Gianluca
Question
For stochastic processes, how can we define a region in which the state trajectories of an SDE always remain? Of course, because of the unboundedness of the uncertainty, it is not possible to define such a region outright (as in the robust method), but intuitively there should be a pair (region, probability) such that the state trajectories will not leave this region with the corresponding probability. How can we define and calculate this region and its probability for an SDE?
Thanks a lot, Dr. Domsta and Dr. Prykhodko. My problem is an SDE whose diffusion part is driven by Brownian motion, so using the corresponding Fokker-Planck equation (FPE) of the SDE is, theoretically, a brilliant idea. I have used it for a 1-D SDE, but because the FPE is a PDE whose dimension depends on the order of the SDE, solving such a PDE is computationally cumbersome, especially in the field of optimal control, where online computation is essential. I thought there might be a relation between the hitting-time probability of an SDE, the Lipschitz constants of its drift and diffusion functions, and the initial PDF of the state, because the Lipschitz constants somehow represent the growth rate of the states of an SDE.
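When the FPE is too expensive, the pair (region, probability) can at least be estimated by brute-force Monte Carlo over sample paths. A hedged Python sketch for an illustrative scalar SDE (not your system; every value here is made up): estimate P(|X_t| < R for all t ≤ T) by counting Euler-Maruyama paths that never leave the region:

```python
import math
import random

def stay_probability(R=2.0, T=1.0, dt=0.001, paths=1000, seed=5):
    """Monte Carlo estimate of P( |X_t| < R for all t <= T ) for the
    illustrative scalar SDE dX = -X dt + 0.5 X dW, started at X_0 = 1,
    using Euler-Maruyama paths."""
    rng = random.Random(seed)
    sq = math.sqrt(dt)
    stayed = 0
    nsteps = int(T / dt)
    for _ in range(paths):
        x, inside = 1.0, True
        for _ in range(nsteps):
            x += -x * dt + 0.5 * x * sq * rng.gauss(0.0, 1.0)
            if abs(x) >= R:
                inside = False
                break
        stayed += inside
    return stayed / paths

p_stay = stay_probability()
```

Sweeping R then traces out the (region, probability) pairs you describe. This scales to higher-dimensional SDEs where the FPE grid does not, at the cost of Monte Carlo error of order 1/sqrt(paths), so it is better suited to offline analysis than to the online optimal-control loop you mention.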
Question
In which software can I run stochastic dominance analysis?
R
While R is free, the learning curve is very steep. If you survive, this will be a valuable skill. R can be downloaded at https://cran.r-project.org/
I would suggest RStudio as a user interface to make using R easier. You need to download and install R, then RStudio (https://www.rstudio.com/).
That is the simple part. There are a large number of online resources for learning R, a large number of books that can also help, and online support sites where you can ask questions. However, this is not an approach to try if you need the results tomorrow. I do not know whether R has a special package for doing stochastic dominance analysis. What I do know is that a sufficiently skilled user can write a program in R that will do this. How long it takes you will depend on your skill in computer programming, your current knowledge of statistical methods, your ability to find someone at your institute who already knows R and is willing to help you, and the hours per day that you can devote to learning R.
DataCamp, Coursera, and edX are some sites that offer free classes in R. These were helpful for me. I only used the free versions, but these sites also offer paid services.
Question
I have a question regarding simulating the following 1-D stochastic differential equation in R using the Sim.DiffProc package:
dx1 = (b1*x1 − d1*x1) dt + sqrt(b1*x1 + d1*x1) dW1(t)
I have taken this equation from the book Modeling with Ito Stochastic Differential Equations by E. Allen. In the drift and diffusion parts of the equation, b1 and d1 are model parameters representing birth and death rates (for a single-population approximation of a compartment model of two interacting populations). The relevant lines of my code are as follows (note that I've used thetas to represent the parameters in my code):
Code (1):
> fx <- expression( theta1*x1 - theta2*x1 ) ## drift part
> gx <- expression( (theta3*x1 + theta4*x1)^0.5 ) ## diffusion part
> fitmod <- fitsde(data=mydata, drift=fx, diffusion=gx, start = list(theta1=1,
+ theta2=1, theta3=1, theta4=1), pmle="euler")
Or should I model it like this
Code (2):
> fx <- expression( theta1*x1 - theta2*x1 )
> gx <- expression( (theta1*x1 + theta2*x1)^0.5 )
> fitmod <- fitsde(data=mydata, drift=fx, diffusion=gx, start = list(theta1=1,
+ theta2=1), pmle="euler")
I am not clear whether to use the four parameters theta1, theta2, theta3, theta4 as in Code (1), or only the two parameters theta1 and theta2 as in Code (2), because in the original model the parameters b1 and d1 (birth and death rates) appearing in the deterministic part are the same as those appearing in the diffusion part.
I can't find a single example in the Sim.DiffProc package documentation where a parameter is repeated the way I have done in Code (2).
Thanking in anticipation and best regards.
I would use Code (2) above, with the two parameters theta1 and theta2.
Also, it is very easy to code this directly without using any packages by applying the Euler-Maruyama approximation method (which is described in E. Allen's book).
Also, see the book by Linda J. S. Allen, which has all the code for the example problems given in the book, so you may copy it into R directly and run it:
Linda J.S. Allen, An Introduction to Stochastic Processes with Applications to Biology, Second Edition
Also, the following papers contain more examples of somewhat more complicated stochastic differential equations which have been solved in MATLAB (similarly to R) using the Euler-Maruyama approximation:
A.S. Ackleh and S. Hu, Comparison between Stochastic and Deterministic Selection-Mutation Models. Mathematical Biosciences and Engineering, 4(2007), 133-157.
A.S. Ackleh, K. Deng and Q. Huang, Stochastic Juvenile-Adult Models with Application to a Green Tree Frog Population. Journal of Biological Dynamics, 5(2011), 64-83.
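Following the suggestion to code the Euler-Maruyama scheme directly, here is a hedged sketch for the equation in question, dX = (b1 − d1)X dt + sqrt((b1 + d1)X) dW, written in Python rather than R and with illustrative (not fitted) parameter values. Simulating the model like this is a useful check on whatever parameters fitsde returns:

```python
import math
import random

def birth_death_em(b1=0.6, d1=0.5, x0=50.0, T=5.0, dt=0.002, seed=0):
    """One Euler-Maruyama path of dX = (b1-d1)X dt + sqrt((b1+d1)X) dW.
    Parameter values are illustrative.  X is clipped at 0 so the square
    root stays real (the extinction boundary of the population model)."""
    rng = random.Random(seed)
    x, sq = x0, math.sqrt(dt)
    for _ in range(int(T / dt)):
        x += (b1 - d1) * x * dt \
             + math.sqrt((b1 + d1) * x) * sq * rng.gauss(0.0, 1.0)
        x = max(x, 0.0)
    return x

finals = [birth_death_em(seed=s) for s in range(200)]
mean_final = sum(finals) / len(finals)
expected = 50.0 * math.exp((0.6 - 0.5) * 5.0)   # E[X_T] = x0*exp((b1-d1)*T)
```

The ensemble mean tracks x0·exp((b1−d1)T), the deterministic skeleton, which is a quick sanity check that the drift and diffusion were entered consistently — the same check applies to the two-theta parameterisation in Code (2).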
Question
Hi !
I am working on stochastic resonance (SR). I want to find the SNR before and after the stochastic resonance model is applied, because I want to compute the SNR gain. Can anyone help me?
Hello,
I am not sure what question you came across, but I would refer you to one of my papers, as follows. I hope it helps.
Phys Rev E Stat Nonlin Soft Matter Phys. 2005 Aug;72(2 Pt 1):021902
Signal-to-noise ratio gain of a noisy neuron that transmits subthreshold periodic spike trains.
Good luck
Yanmei
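A common recipe for SNR gain is to apply the same estimator to the input and output records: the periodogram power in the bin at the driving frequency divided by the average background power, with gain = SNR_out / SNR_in. A pure-Python sketch on a synthetic noisy tone (all values illustrative; it does not include an SR model itself, only the measurement):

```python
import math
import random

def snr_estimate(x, k0):
    """Crude SNR estimate: DFT power in bin k0 over the mean power of
    the other bins (the noise floor).  Pure-Python O(N^2) DFT, so keep
    the record length modest."""
    N = len(x)
    powers = []
    for k in range(1, N // 2):
        re = sum(x[n] * math.cos(2 * math.pi * k * n / N) for n in range(N))
        im = sum(x[n] * math.sin(2 * math.pi * k * n / N) for n in range(N))
        powers.append((re * re + im * im) / N)
    signal = powers[k0 - 1]
    noise = [pw for k, pw in enumerate(powers, start=1) if abs(k - k0) > 1]
    return signal / (sum(noise) / len(noise))

def noisy_tone(amp, sigma, N=512, k0=32, seed=9):
    """Sinusoid sitting exactly on DFT bin k0, plus white Gaussian noise."""
    rng = random.Random(seed)
    return [amp * math.sin(2 * math.pi * k0 * n / N) + rng.gauss(0.0, sigma)
            for n in range(N)]

snr_low_noise = snr_estimate(noisy_tone(1.0, 0.5), 32)
snr_high_noise = snr_estimate(noisy_tone(1.0, 2.0), 32)
```

For the SR experiment, you would compute snr_estimate on the record entering the bistable (or neuronal) system and on the record leaving it, at the same driving frequency, and report the ratio; averaging the periodogram over several records reduces the variance of both estimates.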
Question
I am trying to discretize a stochastic process of two variables that are weakly correlated, and I would like to calibrate the correlation coefficient using data.
Hi Carlos,
There are many ways, so first explain what kind of simplicity you have in mind.
1. By generators of pseudo-random numbers; e.g. the normal distribution is one of the simplest, since the result corresponds to a p.d. with a density; the algorithm is based on the formulas:
step 1: generate two independent X := NORM(0,1), AUX := NORM(0,1)
step 2: calculate (X, Y) := (X, \sqrt{1 - \rho^2} \cdot AUX + \rho \cdot X)
2. With the use of a simple density function, e.g.
f(x,y) = 0.25 + B \cdot x \cdot y (with suitable B), |x|, |y| \le 1
etc.
Best
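The two-step recipe in point 1 can be sketched directly in Python, together with a quick check that the empirical correlation comes out at ρ (ρ = 0.3 here is an arbitrary illustrative value):

```python
import math
import random

def correlated_pair(rho, n=100_000, seed=2):
    """Step 1: X, AUX ~ N(0,1) independent.
    Step 2: Y := sqrt(1 - rho^2)*AUX + rho*X, so that corr(X, Y) = rho."""
    rng = random.Random(seed)
    xs, ys = [], []
    c = math.sqrt(1.0 - rho * rho)
    for _ in range(n):
        x, aux = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        xs.append(x)
        ys.append(c * aux + rho * x)
    return xs, ys

def pearson(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

xs, ys = correlated_pair(rho=0.3)
r_hat = pearson(xs, ys)
```

Calibration then amounts to computing the sample correlation of the data and plugging it in as ρ; for weakly correlated variables the sampling error of that estimate is roughly (1 − ρ²)/√n, which tells you how much data the calibration needs.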
Question
Doing spatial analysis of ecological data, taking environmental and spatial variables and partitioning the variance, one obtains fractions associated with the environment, environment plus space, and space only, among others. It is not clear to me, from the literature, what the role of these fractions is in explaining or suggesting stochastic or neutral processes structuring the community. I know it has to be interpreted carefully, but I still do not get it completely. Any references for reading?
Hi Sara,
don't put too much weight on this partitioning. Depending on which method you choose, the results can be quite different. See Gilbert & Bennett (2010) for a critique:
Gilbert, B., & Bennett, J. R. (2010). Partitioning variation in ecological communities: do the numbers add up? Journal of Applied Ecology, 47(5), 1071–1082. http://doi.org/10.1111/j.1365-2664.2010.01861.x
cheers
Question
I think there is an analogy between the field of linguistics, as a part of information science, and the field of modern statistical theory (thermodynamics). The calculations can be done via the Maximum Entropy principle.
Dear Danilo,
decades ago, an expert in linguistics, Zipf, stated that 80% of modern every-day speech in English is made just of 20% of the words in the dictionary. This observation strongly resembles the remark of an Italian sociologist of the first decades of the XX century, Vilfredo Pareto, who remarked that 20% of the population enjoys 80% of the income in every society. I wonder how this kind of behaviour in systems which are usually not investigated by physicists (linguistics, sociology) can be described with the help of Maximum Entropy principle. Of course it is possible to invoke non-Gibbs entropies, like e.g. Tsallis' entropy, But then the arbitrariness in the choice of the definition of the entropy of interest is just a mask of our ignorance of the dynamics underlying the system.
Let me quote a well-known conundrum. Tsallis' q-entropy is a maximum when the system follows a power law (q = 1 corresponds to a Gaussian law, just as in the maximisation of Gibbs' entropy), and the Zipf-Pareto law is retrieved for a particular value of q. You can even postulate that the system relaxes to a maximum-entropy state according to a kinetic equation which generalises the familiar Fokker-Planck equation. Should the latter hold, it would lead the system, through a Markovian process, towards a maximum of Gibbs' entropy and a Gaussian law. But then the detailed structure of the required kinetic equation depends on the choice of q. The point is, nobody knows how to derive q from first principles, except for very restricted examples...
Question
It would be interesting to explore sequences in mother-infant dyad interactions; however, it seems that no one uses lag sequential analysis in Observer, but rather calculates Markov matrices, etc.
Dear Agnes Kata Szerafin,
I hope the following links may be useful for you.
There are some references related to mother-child interactions and sequential analysis. The link to the list of references is given below.
Question
A continuum equation is used to analyze a model of stochastic surface growth based on the Poisson distribution. In this stochastic growth, the initially flat surface keeps getting rougher as time proceeds, but the correlation length is always zero during the growth process. I am not able to understand why the correlation length is always zero.
I can tell you about the Asymmetric Simple Exclusion Process (ASEP) and related stochastic particle processes, which also have a surface-growth representation. There the stationary evolution sees independent GRADIENTS of the surface (which are just the one-site occupation numbers in the particle picture) at any fixed moment in time, so there is no correlation length. BUT: the real interest is in the space-time correlations E(gradient at time 0, space 0 × gradient at time t, space x). These are far from trivial. Maybe you can take a look at such a quantity in your model?
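The zero correlation length can also be seen directly in the simplest Poisson growth model, random deposition: each particle falls on an independently chosen column, so the columns grow as independent counters — the roughness grows like t^(1/2), yet neighbouring heights never become correlated. A small Python check of the neighbour correlation (lattice size and particle count are arbitrary):

```python
import random
import statistics

def random_deposition(L=400, deposits=400_000, seed=4):
    """Random deposition: each particle lands on a uniformly random
    column, so the L column heights evolve independently and the
    lateral correlation length stays zero while the width grows."""
    rng = random.Random(seed)
    h = [0] * L
    for _ in range(deposits):
        h[rng.randrange(L)] += 1
    return h

h = random_deposition()
mean_h = statistics.fmean(h)
num = sum((h[i] - mean_h) * (h[(i + 1) % len(h)] - mean_h)
          for i in range(len(h)))
den = sum((x - mean_h) ** 2 for x in h)
neighbour_corr = num / den   # empirical correlation of adjacent columns
```

Any model that adds a surface-relaxation or diffusion rule couples the columns and makes this number grow away from zero; its absence is exactly what "correlation length zero" means here.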
Question
How can I find a stationary Markov perfect equilibrium by numerical methods in a stochastic dynamic game where the action space and state space are both uncountable and infinite, but bounded?
I'm studying a paper that includes the model of a stochastic game; I attach it to this message. Thank you for any help.
Hi again Roya,
Perhaps you need some general info on Stochastic Dynamic Programming as well. If you are unaware of this methodology, I think you need some introductory text to make it possible to understand George Stoica's paper. Again; the best of luck
Question
What is the best programming language for game theory?
I want to know how I can represent a stochastic process in a programming language and find the solution with it. I study stationary Markov perfect equilibria in discounted stochastic games and want to implement a model in a programming language, but I'm not familiar with the programs in this field.
Standard choices would be MATLAB or Python. Maybe R.
However, I would encourage you to consider Julia. Julia is an up-and-coming language looking to replace MATLAB. Its syntax is quite similar to other scripting languages, so it's easy to learn. One of its key features is that its loops are much faster. I do a lot of programming for stochastic equations and have found that Julia can be much faster, since in many cases nonlinear stochastic evolution cannot be fully vectorized (and hence you have to resort to loops, killing performance in languages which aren't compiled with type stability the way C or Julia are).
Question
I've been flipping through some literature on deriving the instantaneous state probabilities of a semi-Markov stochastic process. However, I haven't been able to lay my hands on a worked example involving non-exponential transitions. Could you please help with links to relevant references, published or unpublished? Thanks!
A basic reference is R. A. Howard (1971), Dynamic Probabilistic Systems, Vol. II, J. Wiley.
Question
Hello everybody,
I have three basic questions about AWGN. I always assumed them to be trivial, but I have not a clear answer for them if somebody asks me.
1st: Does white Gaussian noise always mean wide-sense stationary noise? (Equivalently: do I have to specify that I mean wide-sense stationary noise even when it is already white Gaussian?)
2nd: Does Gaussian noise exist that is not white? (Equivalently: can colored noise also have a Gaussian distribution?)
And this takes me to another question, which seems to be the hardest to answer:
3rd: How is the spectral content of a stochastic process related to its statistical distribution?
Hi Luis,
1) White implies WSS.
2) As Kenneth said, white noise is a fictitious assumption; real noise is in fact colored. For instance, in an array system, the noise at the output of each array element (when the elements are close together) cannot be assumed to be white.
3) Spectral properties give mean and variance information. Full stochastic properties require higher-order moments.
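Point 2 is easy to demonstrate numerically: passing white Gaussian noise through a linear filter colours its spectrum (samples become correlated) while leaving the amplitude distribution Gaussian, since linear combinations of jointly Gaussian variables are Gaussian. A quick Python check using the excess kurtosis (≈ 0 for a Gaussian) and the lag-1 autocorrelation (filter taps are arbitrary):

```python
import random
import statistics

def excess_kurtosis(xs):
    """0 for a Gaussian; deviations flag a non-Gaussian amplitude law."""
    m = statistics.fmean(xs)
    v = statistics.pvariance(xs)
    return sum((x - m) ** 4 for x in xs) / (len(xs) * v * v) - 3.0

rng = random.Random(8)
white = [rng.gauss(0.0, 1.0) for _ in range(100_000)]
# colour the noise with a 3-tap moving-average filter: the spectrum is
# no longer flat, but each output sample is a linear combination of
# Gaussians and hence is still Gaussian
colored = [0.5 * white[i] + 0.3 * white[i - 1] + 0.2 * white[i - 2]
           for i in range(2, len(white))]
m = statistics.fmean(colored)
v = statistics.pvariance(colored)
lag1 = sum((colored[i] - m) * (colored[i + 1] - m)
           for i in range(len(colored) - 1)) / ((len(colored) - 1) * v)
ek = excess_kurtosis(colored)    # ~0: amplitude law is still Gaussian
```

This also illustrates point 3: "Gaussian" constrains the amplitude distribution, "white" constrains the autocorrelation/spectrum, and the two properties can be varied independently.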
Question
I am wondering whether there is any easy way to find the p-value in distribution fitting of a General Renewal Process (with power function), especially for the Kijima-2 model. Restoration factors are generally between 0 and 1, and the models cannot be reduced to non-homogeneous Poisson or perfect renewal processes.
Hello Onur,
Really important question. I hope you will forgive the brevity of my comments; please read the three references I indicate.
Although my interests are public health related, before getting in the specifics of the model itself, I would first consider these points:
The p-value inferential mechanism is not well-understood and thus worth summarizing. Under the null, the coefficient of the linear model is zero, tested against two other possibilities: a positive or a negative coefficient, given the data. The p-value (probability value) is the probability of obtaining a positive or negative estimated coefficient or other quantity when the null hypothesis is true. It is not the probability that the null is true. The interpretation of p-values requires knowing:
• The distribution function (which we take to be continuous, and thus a density function), normalized so that the area under it is exactly 1.00. You have that distribution.
• A choice between a one-sided and a two-sided test (the two-sided test rules in two alternatives; the one-sided test rules in only one). This is something you develop from theory (it seems that you are not doing data mining, as you have a very clear idea of the stochastic process you intend to deal with).
• A test statistic, calculated from the data, that determines the probability associated with the p-value itself.
• A stated theoretical p-value threshold (the probability level that, according to the researcher, rules out the null as being due to chance).
BUT: Sterne and Smith (2001) state that:
If only positive findings are published, then they may be mistakenly considered to be of importance rather than being the necessary chance results produced by the application of criteria for meaningfulness based on statistical significance. As many studies contain long questionnaires collecting information on hundreds of variables, and measure a wide range of potential outcomes, several false positive findings are virtually guaranteed. The high volume and often contradictory nature of medical research findings, however, is not only because of publication bias. A more fundamental problem is the widespread misunderstanding of the nature of statistical significance, and hence the use of the p-value alone.
It follows that the p-value tells only about one third of the story. I suggest you consider the fuller picture by also looking at Type 1 and Type 2 error rates (a and b below denote the probabilities alpha and beta).
Decision rules that extend the p-value are as follows, comparing the null H0 (there is no effect) against the alternative HA (the effect occurs):
If H0 is true: rejecting it is a Type 1 error, with probability a; accepting it is correct, with probability 1 - a.
If HA is true: accepting H0 is a Type 2 error, with probability b; rejecting H0 is correct, with probability 1 - b (the power of the test).
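As a concrete illustration of these quantities, here is a minimal sketch; the estimated coefficient, standard error, and true effect size are hypothetical numbers, and a large-sample normal (z) approximation is assumed:

```python
import math

def norm_sf(x):
    """Upper-tail probability of the standard normal distribution."""
    return 0.5 * math.erfc(x / math.sqrt(2))

# Hypothetical estimated coefficient and standard error from a linear model
beta_hat, se = 0.42, 0.15
z = beta_hat / se

# Two-sided p-value: probability of a |z| at least this large when H0 is true
p_two_sided = 2 * norm_sf(abs(z))

# Type 1 / Type 2 error rates for a two-sided z-test at level alpha = a,
# against a hypothetical true coefficient of 0.30
alpha = 0.05
z_crit = 1.959963984540054          # upper alpha/2 quantile of N(0,1)
true_effect = 0.30
power = norm_sf(z_crit - true_effect / se) + norm_sf(z_crit + true_effect / se)
beta_error = 1 - power              # probability b of a Type 2 error
```

The point of the sketch is that the p-value alone says nothing about b; the same data can carry a small p-value and still come from a badly underpowered design.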
My comments are short, but the idea should be useful after a bit of work. I would encourage you to read these references before relying on the p-value alone. These papers give you the remainder of the full answer:
Sterne JAC and Smith GD, Sifting the Evidence - What is wrong with significance tests? BMJ 322:226-231 (2001)
Ioannidis JPA, Why Most Published Research Findings Are False, PLoS Medicine 2(8): e124 (Aug 2005) http://dx.doi.org/10.1371%2Fjournal.pmed.0020124
Goodman S and Greenland S, Assessing the Unreliability of the Medical Literature: A Response to "Why Most Published Research Findings Are False", Johns Hopkins University, Dept. of Biostatistics Working Paper No. 135, 2007.
Best regards and thanks for stimulating some thinking!
Paolo
Question
Chong et al in Cell 158 (pp. 314-326) explain a possible mechanism for stochastic transcription bursting in bacteria.
Is there any evidence for stochastic transcription in mammalian cells? Are there any validated mechanisms for this process?
One piece of evidence for stochasticity comes from the heterogeneity measured via single-cell methods like single-cell RNA-seq and single-cell qPCR. However, to claim that this shows stochasticity you have to assume that the cells are truly all alike, so that the heterogeneity is a proxy for the stochasticity. This can be a big assumption. The only true way around this is to measure gene expression temporally, and so methods that require destroying the cells, like RNA-seq and qPCR, are limited in this respect.
Given this problem, really measuring stochasticity has to be done with live microscopy. Mammals are much harder than other organisms because you cannot easily live-image the embryo. I linked the paper "Stochastic NANOG fluctuations allow mouse embryonic stem cells to explore pluripotency", which showed stochasticity in mouse ESCs, though in culture.
However, if we broaden our scope there is evidence in other living organisms. A strong piece of evidence comes from "Probing the Limits to Positional Information" (linked) which found stochasticity in the Bicoid gradient in Drosophila.
We can even go as far as live zebrafish. While not a transcript, there is evidence for stochasticity in the signaling molecule retinoic acid. I am an author on an eLife paper, "Noise modulation in retinoic acid signaling sharpens segmental boundaries of gene expression in the embryonic zebrafish hindbrain", where we have direct measurements of the stochasticity of the retinoic acid gradient using a special type of microscopy called FLIM. A related study showed that there is downstream (spatial) stochasticity in Hox/Krox expression in the hindbrain, linked as "Noise drives sharpening of gene expression boundaries in the zebrafish hindbrain". Given the degree of conservation of early developmental processes between fish and mammals, one can infer that there must be similar amounts of stochasticity in mammals.
----------------------------------------------------
Regarding mechanisms / where this all comes from, one place to start is Ekstrom's comment of stochastic vs chaotic; I linked "Lineage correlations of single cell division time as a probe of cell-cycle dynamics". They say that some of what is seen as stochastic is actually due to chaos. However, their method (essentially measuring the dimension of the phase space and showing that it is low-dimensional, to argue that it is not stochastic) is similar to the one that has been widely used to show things like Jackson Pollock's work being a fractal, along with many other interesting examples... so it may give false positives. It's a really tough question, and this is just the start of unraveling it. And since this only looks at cell division, it only shows that one portion of the underlying "noise" may be chaos instead of inherent stochasticity. I hope to see more research like this in the future.
But other mechanisms? People give theoretical reasons but I don't know of any specific validations. For example, many believe that at least some of the decay of mRNA transcripts is fundamentally stochastic but again I don't know of a specific validation of this.
Question
How can I simulate an ARMA process in Matlab?
ARMA (Autoregressive Moving Average Model)
Hi
I advise you to look at these documents. You will find what you need.
I hope that helps; let us know if you have any other questions or need more details.
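In the meantime, here is a minimal sketch of direct ARMA(p, q) simulation. It is written in Python/NumPy rather than Matlab, but the recursion is the same one Matlab's filter function implements (e.g. filter([1 theta], [1 -phi], randn(n,1)) for an ARMA(1,1), or the arima/simulate functions of the Econometrics Toolbox); the parameter values below are arbitrary examples:

```python
import numpy as np

def simulate_arma(phi, theta, n, sigma=1.0, burn_in=500, seed=0):
    """Simulate x_t = sum_i phi_i x_{t-i} + e_t + sum_j theta_j e_{t-j},
    with e_t ~ N(0, sigma^2), discarding an initial burn-in."""
    rng = np.random.default_rng(seed)
    p, q = len(phi), len(theta)
    total = n + burn_in
    e = rng.normal(0.0, sigma, total)
    x = np.zeros(total)
    for t in range(total):
        ar = sum(phi[i] * x[t - 1 - i] for i in range(p) if t - 1 - i >= 0)
        ma = sum(theta[j] * e[t - 1 - j] for j in range(q) if t - 1 - j >= 0)
        x[t] = ar + e[t] + ma
    return x[burn_in:]

# Example: an ARMA(1,1) with phi = 0.6, theta = 0.3
x = simulate_arma(phi=[0.6], theta=[0.3], n=1000)
```

The burn-in discards the transient from the zero initial condition so that the returned sample is approximately stationary.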
With best regards
Question
Let x(t) be a stochastic process. It may be continuous or discrete - I don't want to define it here.
Let x(t) be stationary, then its autocorrelation is defined as R(r) = E[x(t) x(t+r)].
The transition probability can be defined as
P(t, x0; t+r, x1) = prob(x(t) = x0 and x(t+r) = x1).
(for stationary process we may put t=0)
Are there any known relations between R(r) and P(t, x0; t+r, x1)?
Not in such a direct form. There is a relationship between the transition matrix P and the correlation but it is much more subtle; it has to do with the contracting spectrum of P.
More precisely, if X denotes the Markov process with stochastic matrix $P$, i.e. $P(x,y) = Prob(X(t+1)=y | X(t)=x)$, $\pi$ an invariant probability and $f \in \ell^2(\pi)$, then the correlation $R_f(r) = E(f(X(t+r)) f(X(t))) - E(f(X(t+r))) E(f(X(t))) = (f, (P-\Pi)^r f)$, where $(\cdot, \cdot)$ denotes the scalar product in $\ell^2(\pi)$ and $\Pi$ is the stochastic matrix having all its rows equal to the invariant measure $\pi$. (Your question is the special case corresponding to the function $f(x)=x$.) Asymptotically, as $r\to\infty$, the correlation $R_f(r)$ can be estimated through the spectral radius of the matrix $P-\Pi$, i.e. $\lim_{r\to\infty} \|P^r-\Pi\|^{1/r}$.
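For what it's worth, the identity above is easy to check numerically; here is a small sketch (the 3-state matrix P and the function f are arbitrary choices):

```python
import numpy as np

# An arbitrary 3-state stochastic matrix, P[x, y] = Prob(X(t+1)=y | X(t)=x)
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# Invariant probability pi: left eigenvector of P for eigenvalue 1
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()

f = np.array([0.0, 1.0, 2.0])     # f(x) = x, the special case in the question
Pi = np.tile(pi, (3, 1))          # stochastic matrix with every row equal to pi

r = 4
# Direct covariance E[f(X(t+r)) f(X(t))] - E[f(X(t+r))] E[f(X(t))]
direct = pi @ (f * (np.linalg.matrix_power(P, r) @ f)) - (pi @ f) ** 2
# Spectral form (f, (P - Pi)^r f), the scalar product taken in l^2(pi)
spectral = pi @ (f * (np.linalg.matrix_power(P - Pi, r) @ f))
```

The two numbers agree because $(P-\Pi)^r = P^r - \Pi$ for $r \ge 1$ (using $P\Pi = \Pi P = \Pi^2 = \Pi$).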
Question
I've solved the Kuramoto model for 100 oscillators and have calculated the phase of each oscillator (theta) at each time step. Using these phases, I now need to generate two or more signals and measure the synchronization among them as a function of coupling strength with my own method, to check the capability of my method.
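One simple way to do this (a sketch; the array shapes and the test phases are assumptions) is to map each phase to a signal such as sin(theta), and to benchmark your measure against the standard Kuramoto order parameter:

```python
import numpy as np

def phases_to_signals(theta):
    """Map phases theta[t, k] (time x oscillator) to signals s_k(t) = sin(theta_k(t))."""
    return np.sin(theta)

def order_parameter(theta):
    """Kuramoto order parameter r(t) = |(1/N) sum_k exp(i theta_k(t))|, between 0 and 1."""
    return np.abs(np.exp(1j * theta).mean(axis=-1))

# Sanity checks: identical phases give r = 1, evenly spread phases give r ~ 0
sync = np.zeros((5, 100))                                        # all oscillators in phase
spread = np.tile(np.linspace(0, 2 * np.pi, 100, endpoint=False), (5, 1))
```

Feeding your own synchronization measure the signals from phases_to_signals and comparing it with order_parameter across coupling strengths gives a direct check of its capability.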
Dear Elman
thanks a lot for your response, help and guidance
Best Wishes
Question
I need to find a benchmark paper which develops a lattice model of a jump-diffusion process. There are several papers in the finance literature that construct lattice versions of jump-diffusions, but my case is special: the jump size is constant instead of being a random variable. I think this point creates a problem for me. I would be very glad if you could suggest any paper in this respect.
Thank you,
Fikri
I can suggest a diffusion model due to Karlin & Taylor (First Course book), which I have sigma-adapted with respect to the information structure using Lagrangean actions and market potential actions, and for which I have studied the string-theoretic relativistic properties of the dynamic equilibrium; this has wide applications in econophysics, quantitative finance, and stock-market engineering physics. If you like, you can take a look at my papers on string theory and stock markets and on the use of Haag's theorem on my RG page. The second paper uses a jump-diffusion quantum scaling model of nanostructure information and particle flow, engineering the original diffusion-controlled second-walk Markov model of Radner & Rothschild (1975, Journal of Economic Theory). SKM QC
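Whether or not a lattice reference turns up, the constant-jump-size case is easy to prototype by Monte Carlo, which can then serve as a benchmark for any lattice you build. A sketch, assuming geometric Brownian motion plus a Poisson-driven constant log-jump J (all parameter values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical parameters; J is the *constant* jump size (in log-price terms)
mu, sigma, lam, J = 0.05, 0.2, 0.5, -0.1
S0, T, n_steps, n_paths = 100.0, 1.0, 252, 20_000
dt = T / n_steps

z = rng.normal(size=(n_paths, n_steps))
n_jumps = rng.poisson(lam * dt, size=(n_paths, n_steps))  # jump counts per step
# Log-price increments: diffusion part plus J per jump
increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z + J * n_jumps
S_T = S0 * np.exp(increments.sum(axis=1))

# For these dynamics, E[S_T] = S0 * exp((mu + lam*(e^J - 1)) * T)
expected = S0 * np.exp((mu + lam * (np.exp(J) - 1)) * T)
```

The closed-form mean gives a quick convergence check for both the Monte Carlo and any candidate lattice discretization.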
Question
Researchers with expertise in time series analysis and stochastic processes: what is the difference between white noise and iid noise?
Dear Sourav,
a noise sequence (et) is a white noise sequence if
• the expectation of each element is zero, E(et) = 0
• the variance of each element is finite, Var(et) < infinity
• the elements are uncorrelated, Cor(et, es) = 0
The noise sequence would be iid noise if, in addition, the elements are not just uncorrelated but also independent. Therefore every iid noise is also white noise, but the reverse holds only for Gaussian white noise sequences (for a Gaussian sequence, uncorrelated implies independent).
Best, Stephan
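A concrete example of white noise that is not iid is an ARCH(1) sequence: it is uncorrelated with constant unconditional variance, yet the squares are strongly dependent. A quick numerical sketch (parameter values arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
a0, a1 = 0.5, 0.5                 # ARCH(1): e_t = sigma_t z_t, sigma_t^2 = a0 + a1 e_{t-1}^2
z = rng.normal(size=n)
e = np.zeros(n)
sig2 = a0 / (1 - a1)              # start from the unconditional variance
for t in range(n):
    e[t] = np.sqrt(sig2) * z[t]
    sig2 = a0 + a1 * e[t] ** 2

def lag1_corr(x):
    """Sample lag-1 autocorrelation."""
    x = x - x.mean()
    return float((x[:-1] * x[1:]).mean() / x.var())

r_e = lag1_corr(e)                # near 0: the sequence is white
r_e2 = lag1_corr(e ** 2)          # clearly positive: the sequence is not iid
```

A large magnitude of r_e2 alongside a negligible r_e is exactly the "white but dependent" situation that distinguishes the two definitions.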
Question
It seems that in the history of climate science most scientists spoke of the climate as stochastic until about the mid-1980s, when a shift occurred and the climate was more often described as chaotic. Obviously, the shift reflects the wish of climate scientists to enable prediction and mathematical capture, but it appears to be wishful thinking at best. So, as a survey, what does the RG community think about this fundamental conundrum in the sciences?
Setting aside everything that is nonlinear and difficult to treat, and that both statisticians and modellers who prefer deterministic (primitive or conceptual) modelling have failed to properly describe in their terms, what remains are linear, statistically solvable problems. Bravo! If I lost a key in the dark, I would look for it in the light circle of the street lamp - and probably come to the conclusion that I did not lose a key at all. Luisiana, have a look around to see that the environment we live in is full of structures that emerge from elementary rules of local interaction but can hardly be expected to exist at this level of description. If one wants to label it this way, it appears to be the "unreasonable effectiveness of mathematics in the natural sciences" (Wigner), which bears this higher level of structuring and description - among the mathematical disciplines notably number theory, which is 'unreasonably effective' in describing synchronization phenomena (low rational frequency relationships, for example, at the top of the Farey tree of rational numbers). Cf. the paper of Lagarias ("Number Theory and Dynamical Systems"), which I like to cite in this context.
Concerning GCMs, they contain all the ingredients needed, but - you're certainly right in this respect - may become intractable (or at least difficult to handle) when chaotic dynamics, homoclinic or heteroclinic orbits, etc. emerge. The "solution" of tightening the screws that parameterizations offer until the dynamics become tractable just leads to the problem I have quoted (as an example) of inadequate monsoon simulation. I can demonstrate this with the "small" GCM that I ran in the past in order to better understand these dynamics (a Mintz-Arakawa GCM). I learned a lot when using this model, even from the many failures (instabilities) I had to struggle with. I could not have developed my conceptual view of boreal summer monsoon dynamics without experimenting with that 'old-fashioned' model. So, at least for me, it was very helpful to have such a tool at hand (one that could be run even on a 386 PC at the beginning of the 1990s).
A nonlinear system of equations admits the corresponding linear solutions as well, and if the situation (initial and boundary conditions, parameter values) is right, it will find them - but by reducing it to linearity and cancelling the nonlinear terms, you will never see the nonlinear solutions, of course, and thus perhaps come to the conclusion ... that the key has not been lost :-) (my apologies). The problems I quoted with present-day large GCMs appear (to me) to be a result of "tightening the screws" until the models become tractable (an economic decision as well), thereby losing the interesting nonlinear dynamics that at least qualitatively correspond to the behaviour of mother nature.
Kind regards,
Peter
Question
GARCH is an abbreviation of Generalized Auto Regressive Conditional Heteroskedasticity. Let Xt be a stochastic process which can be decomposed as
Xt = σt・Wt (t = 0, 1, 2, ...).
Here {Wt} is a sequence of random variables with zero mean and constant variance (for example, standard normal N(0,1)), and Ws and Wt are statistically independent if s ≠ t. The coefficient σt is a sequence of positive numbers determined, in the simplest case, by
σt^2 = a0 + a1 Xt-1^2 + b1 σt-1^2 (a0, a1, b1 > 0).
Can this process be interpreted as an instance of a martingale? I am interested in the case a1 + b1 = 1.
I have only an elementary knowledge of stochastic process and martingale. I do not even know if this is a well known fact or not. Please explain me plainly.
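A quick way to see the martingale structure is by simulation: since Xt = σt·Wt with σt known from the past, E[Xt | past] = σt·E[Wt] = 0, so Xt is a martingale difference and its partial sums form a martingale. A sketch (note: to keep the simulation numerically tame, the parameters are chosen just inside the a1 + b1 = 1 boundary, at which the unconditional variance becomes infinite; the martingale-difference argument itself does not depend on this choice):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
a0, a1, b1 = 0.05, 0.05, 0.90     # a1 + b1 = 0.95, just inside the IGARCH boundary
w = rng.normal(size=n)
x = np.zeros(n)
sig2 = a0 / (1 - a1 - b1)         # start at the unconditional variance
for t in range(n):
    x[t] = np.sqrt(sig2) * w[t]
    sig2 = a0 + a1 * x[t] ** 2 + b1 * sig2

# Martingale-difference check: X_t is uncorrelated with its own past,
# e.g. the lag-1 autocorrelation is near zero
lag1 = float(np.corrcoef(x[:-1], x[1:])[0, 1])
# The cumulative sum S_t = X_1 + ... + X_t is then a martingale
S = np.cumsum(x)
```

The squares x**2, by contrast, are strongly autocorrelated; that dependence is what the GARCH recursion models, and it is compatible with the martingale-difference property of x itself.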
Dear George,
thank you again. I will try to understand the paper you mentioned. It seems it is necessary to make a preliminary study before approaching the paper.
In the paper, I found a lecture note by van der Vaart on time series. I will start with this lecture note, because I lack systematic training. As it runs to more than 200 pages, it will take me a lot of time. When I have finished the major part of the note, if I still have some questions to ask, I will come back again.
Yoshinori
Question
Let X be a random variable with values on the (classical) Wiener space whose law is absolutely continuous w.r.t. the reference Wiener measure (and satisfying a proper notion of "mean zero" and "unit variance"). Let X_1,...X_n be i.i.d. copies of X. Does the density (w.r.t the Wiener measure) of (X_1+...+X_n)/ \sqrt{n} converge in L^1 to 1?
Thank you very much. I'll try to look at it.
Alberto
Question
Imagine a process that generates an item every Tg seconds on average, with a standard deviation of stdev_Tg seconds. Let N be the random number of items generated during a fixed time interval T. If stdev_Tg = 0, then N = T/Tg is constant. Is there an analytical expression for the average of N if stdev_Tg > 0? And for stdev_N? What would be the distribution of N as a function of the distribution of Tg?
If you are willing to assume that the inter-arrival times of consecutive items form a sequence of iid random variables, then your question becomes a standard basic question in Renewal Theory. The expected number of items, m(t), produced/arriving up to a fixed time t depends on the distribution F of the individual inter-arrival times and can be calculated as follows:
m(t) = SUM (from 1 to infinity) of F_n (t)
where F_n is the cumulative distribution function of the sum of n inter-arrival times (which is F convolved with itself n times). In general, it is hard to find explicit expressions for F_n, and hence difficult to compute m(t) explicitly. It is however a consequence of the Law of Large Numbers that as t tends to infinity, the ratio t/N(t) converges (almost surely) to the expectation of the inter-arrival time, A, so that for large t, N(t) is with high probability close to t/A.
A similar, more complicated expression (in terms of F_n) can be given for the variance of N(t).
I recommend looking up Wikipedia and other sources on elementary renewal theory.
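The asymptotics can also be checked by simulation: renewal theory gives E[N(t)] ≈ t/A and Var[N(t)] ≈ t·s²/A³ for large t, where A and s are the mean and standard deviation of the inter-arrival time. A sketch using Gamma-distributed inter-arrival times (the distribution and the parameter values are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(7)
T = 1000.0                         # length of the observation window
mean_gap, sd_gap = 2.0, 1.0        # mean A and std dev s of the inter-arrival time
shape = (mean_gap / sd_gap) ** 2   # Gamma parameters matching that mean and sd
scale = sd_gap ** 2 / mean_gap

n_runs = 2000
counts = np.empty(n_runs)
for i in range(n_runs):
    # Generate more gaps than could possibly fit in [0, T], then count arrivals
    gaps = rng.gamma(shape, scale, size=int(3 * T / mean_gap))
    arrivals = np.cumsum(gaps)
    counts[i] = np.searchsorted(arrivals, T)   # N(T) = number of arrivals by time T

approx_mean = T / mean_gap                     # renewal approximation, here 500
approx_var = T * sd_gap**2 / mean_gap**3       # renewal approximation, here 125
```

Comparing counts.mean() and counts.var() against the two approximations shows how quickly the asymptotic formulas become accurate for a window containing many renewals.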
Question
How can one simulate a stationary Gaussian process through its spectral density?
Thank you for the answers. Let me state my question more precisely: how can one simulate the following stationary Ornstein-Uhlenbeck Gaussian process,
dX_t = -\alpha X_t dt + dB_t, where X_0 = \int_0^\infty e^{\alpha s} dB_s and B is a fractional Brownian motion?
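For the question as originally posed (simulation through the spectral density), here is a sketch of the harmonic-superposition method for the standard, Brownian-driven OU process, whose covariance c·e^{-α|τ|} has two-sided spectral density S(f) = 2cα/(α² + (2πf)²). The grid sizes and truncation frequency below are arbitrary accuracy choices, and the fractional case would need the fBm-driven spectral density instead:

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, c = 1.0, 0.5                       # target covariance C(tau) = c*exp(-alpha*|tau|)

def S(f):
    """Two-sided spectral density of the OU covariance above."""
    return c * 2 * alpha / (alpha**2 + (2 * np.pi * f) ** 2)

t = np.linspace(0.0, 50.0, 1001)          # sampling times
K, fmax = 2000, 40.0                      # frequency grid; truncating at fmax is an approximation
f = (np.arange(K) + 0.5) * (fmax / K)
df = fmax / K

# Harmonic superposition: X(t) = sum_k sqrt(2 S(f_k) df) (A_k cos + B_k sin)(2 pi f_k t)
A = rng.normal(size=K)
B = rng.normal(size=K)
amp = np.sqrt(2 * S(f) * df)
phase = 2 * np.pi * np.outer(f, t)
X = (amp * A) @ np.cos(phase) + (amp * B) @ np.sin(phase)
```

Each frequency contributes an independent random phase/amplitude, so the result is Gaussian with variance sum(amp**2) ≈ 2∫S df = C(0) = c, and its empirical autocovariance approximates c·e^{-α|τ|}.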
Question
I have been searching for a canonical notation for describing large queueing-system networks. I know the Kendall notation is used for describing queueing systems; however, it does not allow me to describe large queueing networks for the purpose of establishing a taxonomy of these networks.
Could anybody please let me know whether a canonical notation for describing large queueing networks exists?
Question
Hello, I am currently working on models where energy can be produced using either a clean or dirty technology and investment (in knowledge) reduces the average cost of the clean technology or backstop. A steady state involves using both the dirty and clean technologies when their marginal costs are equal.
I am thinking of including a stochastic process for the change in energy prices, such that investment in the backstop is feasible only when energy prices are above a certain level (that is to say, investment in knowledge now reduces the future average cost of the backstop, but there is also a huge fixed cost in actually using it). Theoretically, I believe this would involve switching back and forth between clean and dirty technologies. I am looking for any ideas on how to model this. I am attaching my recent publication (basically, my current model including the stochasticity I described).
I am interested in collaborating! any ideas?
Supratim
Hi Joaquim, thanks for your answer! I was not aware of the Baum-Welch algorithm but will definitely look into it.
I would definitely be in touch if I need more help regarding the stochastic derivation of my model.
Supratim
Question
How can one construct an approximate continuous state space for a Petri net system, and is it possible to construct ordinary differential equations (ODEs) for a Petri net system?
You can have a look at the polyhedral techniques used for Petri nets with dense time (and also for timed automata) in tools such as Romeo (for Petri nets) or UPPAAL (for timed automata).
You can also get some inspiration from interval-based computing (I think IDDs - interval decision diagrams - exploit such features).
These are just my "two cents".
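For the ODE part of the question: under a continuous (fluid) interpretation with an assumed mass-action rate semantics, a Petri net with incidence matrix C and transition rate vector v(m) yields dm/dt = C·v(m). A toy sketch (the two-place net and the rate constants are made up):

```python
import numpy as np

# Continuous (fluid) approximation of a small Petri net:
# places P1, P2; transitions t1: P1 -> P2 and t2: P2 -> P1,
# with assumed mass-action rates k1*m1 and k2*m2.
pre = np.array([[1, 0],    # pre[i, j]: tokens place i feeds into transition j
                [0, 1]])
post = np.array([[0, 1],   # post[i, j]: tokens transition j deposits in place i
                 [1, 0]])
C = post - pre             # incidence matrix
k = np.array([0.3, 0.1])

def rates(m):
    """One rate per transition (mass-action semantics)."""
    return k * m

# Forward-Euler integration of dm/dt = C @ rates(m)
m = np.array([10.0, 0.0])  # initial (continuous) marking
dt = 0.01
for _ in range(10_000):
    m = m + dt * (C @ rates(m))
```

Because every column of C sums to zero, the total marking is conserved, and the ODE relaxes to the balance point k1·m1 = k2·m2 (here m = [2.5, 7.5]); stochastic Petri net semantics would instead treat the same rates as propensities of a jump process.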
Question
Start with a reflection domain that is a polygon A1 A2 ... An, with n vertices.
A simple computation using the formula on line 7, page 57 shows that if d(An, An-1) --> 0, then P(B5(0)=An) - P(B5(0)=An-1) --> 0 as n --> infinity,
and P(B5(t+s) ∈ E | B5(0)=An) - P(B5(t+s) ∈ E | B5(0)=An-1) --> 0 as n --> infinity,
which allows one to derive the distribution of the reflected process in a bounded domain with smooth boundaries.
Thanks
Bernard Bellot
Why don't you upload the article for easy visibility?
Question
The WCE method introduces a spectral method based on tensor products of orthonormal Hermite polynomials as a basis in the Lp space of a stochastic process; finding the expectation and variance is among its advantages. Are there any other stochastic bases (apart from wavelet ones and operational matrices on polynomials)?
Sorry, I have no idea; this is not in my area of expertise
Question
I am looking for seminal papers on this topic. The issue is that, from my point of view, mathematical expectations and correlation moments (second-order moments) differ across time resolutions: annual data is often IID, while monthly data exhibits an autocorrelation function of simple Markovian type, and daily data usually has a more complex correlation structure.
I need references that could help clarify this point. Some of my colleagues argue for probabilistic self-similarity of hydrological processes, so I think we have some common misunderstanding of the concepts here.
Here is a nice paper that describes the multifractal nature of rainfall at  temporal scales of 15 mins and 1 day:
Another paper is by Dr. Praveen Kumar which describes how to quantify fractal behavior in hydrological processes:
I hope you find the following references useful.
Question
Linear quadratic Gaussian (LQG) control has been studied extensively for relatively low-order stochastic processes. However, for highly random, partially observable processes, LQG control gives poor performance, especially for nonlinear MIMO systems. Could controllers for such processes be developed?
Thanks, Blug,
I read the references; however, I think you're right. I've been trying different methods. To describe the problem more clearly: we must consider not only the control action but also the coupling between the different states, and all these states are noisy. So, for this type of process, what are the recommended procedures for designing the controller?
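As a baseline before tackling the nonlinear, noisy MIMO case, it can help to verify the linear machinery on a toy system: the controller gain of LQG (which reduces to LQR under full state feedback) comes from iterating the discrete-time Riccati recursion. A sketch with made-up system and cost matrices:

```python
import numpy as np

# Hypothetical discrete-time double integrator with control on the velocity
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)              # state cost
R = np.array([[1.0]])      # control cost

# Iterate the Riccati recursion to its fixed point
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # feedback gain u = -K x
    P = Q + A.T @ P @ (A - B @ K)

closed = A - B @ K
rho = max(abs(np.linalg.eigvals(closed)))  # spectral radius < 1 means stable
```

For the noisy, coupled case one would pair this gain with a Kalman filter (the separation principle), and the poor nonlinear performance you describe is typically attacked by replacing this fixed linearization with scheduled or iterated ones (e.g. extended/unscented filtering with relinearized LQR), which this linear sketch does not cover.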
Question
I am working on multichannel EEG data obtained from scalp electrodes of meditating and non-meditating subjects. We want to quantify the changes that occur in one's brain signals during meditation.
I have preprocessed the signals by bandpass filtering, normalization, and artifact removal by wavelet thresholding. After that, I segmented the data from each channel (we have 64 channels per subject and 64000 samples per channel, the sampling frequency being 256 Hz). I considered 1-second (i.e., 256-sample) segments with 50 percent overlap, so in total we have 499 segments per channel per subject.
Then I decomposed each of the segments using wavelet decomposition and calculated statistics such as the mean, variance, kurtosis, and skewness from each band per segment per channel per subject. But I am unable to form a feature vector that I can input into a classifier. Please help.
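The usual fix is simply to flatten everything computed for one segment into a single row, so the classifier sees one vector per segment. A sketch where the statistics array is a random stand-in for the real wavelet-band features, and the number of bands (5) is an assumption:

```python
import numpy as np

# Hypothetical container: stats[s, c, b, m] = statistic m (mean, variance,
# kurtosis, skewness) of wavelet band b, channel c, segment s
n_seg, n_chan, n_band, n_stat = 499, 64, 5, 4
rng = np.random.default_rng(0)
stats = rng.normal(size=(n_seg, n_chan, n_band, n_stat))  # stand-in for real features

# One feature vector per segment: flatten channels x bands x statistics
X = stats.reshape(n_seg, -1)                 # shape (499, 64*5*4) = (499, 1280)

# z-score each feature column so no statistic dominates by scale
X = (X - X.mean(axis=0)) / X.std(axis=0)
```

Each row of X, paired with a label (meditating / non-meditating), can then go straight into any standard classifier; with 1280 features per segment, a dimensionality-reduction or feature-selection step is usually worth adding.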
Our experience suggests that cross-power spectral density measures (which are the Fourier transform of the cross-correlations between the signals received from two distinct electrodes) are much more indicative of brain state than just power spectral density measure derived from a single electrode.
Question
n(t) is a white Gaussian process.
T1 and T2 are the periods at which n(t) is sampled, giving the sequences N1 = n(m·T1) and N2 = n(m·T2).
S1(f) and S2(f) are the spectra of N1 and N2, respectively.
What is the relation or difference between S1(f) and S2(f)? Are they identical?
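One way to explore this numerically: decimating an iid (discrete white) Gaussian sequence by different factors yields again iid sequences, so the periodogram, plotted against normalized frequency, is flat at the noise variance in both cases. A sketch, where integer decimation factors stand in for the two sampling periods T1 and T2:

```python
import numpy as np

rng = np.random.default_rng(9)
n = 2**18
noise = rng.normal(size=n)       # discrete-time stand-in for white Gaussian n(t)
N1 = noise[::2]                  # "sampling" with period T1 = 2
N2 = noise[::5]                  # "sampling" with period T2 = 5

def avg_periodogram(x, nseg=256):
    """Crude Welch-style estimate: average periodogram over segments."""
    segs = x[: len(x) // nseg * nseg].reshape(-1, nseg)
    p = np.abs(np.fft.rfft(segs, axis=1)) ** 2 / nseg
    return p.mean(axis=0)

S1 = avg_periodogram(N1)
S2 = avg_periodogram(N2)
# Both estimates are flat at the noise variance: subsampling an iid sequence
# gives another iid sequence, so in normalized frequency the two spectra agree.
```

Note the caveat: in physical frequency (hertz) the two flat spectra occupy different Nyquist bands, 1/(2·T1) versus 1/(2·T2), which is where a difference between S1(f) and S2(f) would show up for a band-limited rather than ideal white process.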