Science topic
Stochastic Processes - Science topic
Processes that incorporate some element of randomness, used particularly to refer to a time series of random variables.
Questions related to Stochastic Processes
Sumudu Transform:
The Sumudu transform is an integral transform closely related to the Laplace transform (the two are connected by a simple change of variables). It has been used in diverse fields such as signal processing, image analysis, and mathematical biology. In recent years, the Sumudu transform has been applied to study the fractal properties of different systems. The fractal dimension is a measure of the complexity and self-similarity of fractal sets.
The Sumudu transform can be used to calculate the fractal dimension of different objects and systems. The basic idea is to use the scaling properties of the Sumudu transform to obtain a relation between the fractal dimension and the scaling exponent of the Sumudu transform. This relation can then be used to calculate the fractal dimension of different systems.
For example, the Sumudu transform has been used to study the fractal dimension of fractional Brownian motion, which is a self-similar stochastic process that is often used as a model for natural phenomena such as turbulence. The fractal dimension of fractional Brownian motion can be obtained by analyzing the scaling properties of its Sumudu transform. In general, the Sumudu transform can be used to study the fractal properties of different systems by providing a new way to analyze their scaling properties.
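A minimal sketch (not the Sumudu-transform approach itself, but a common alternative built on the same scaling idea): simulate fractional Brownian motion, estimate its Hurst exponent H from the scaling of its squared increments, and use the fact that the graph of a one-dimensional fBm path has fractal dimension D = 2 - H. The grid size, the chosen H, and the lag range are illustrative assumptions.

```python
import numpy as np

# Hedged sketch (not the Sumudu-transform approach itself): simulate fractional
# Brownian motion via a Cholesky factorization of its covariance, then estimate
# the Hurst exponent H from the scaling of mean squared increments. The graph of
# a 1-D fBm path has fractal dimension D = 2 - H.
rng = np.random.default_rng(0)
H_true, n = 0.7, 500
t = np.arange(1, n + 1) / n

# Covariance of fBm: C(s, t) = 0.5 * (s^{2H} + t^{2H} - |t - s|^{2H})
C = 0.5 * (t[:, None]**(2 * H_true) + t[None, :]**(2 * H_true)
           - np.abs(t[:, None] - t[None, :])**(2 * H_true))
path = np.linalg.cholesky(C + 1e-12 * np.eye(n)) @ rng.standard_normal(n)

# Scaling law: E|B(t + k*dt) - B(t)|^2 ~ k^{2H}
lags = np.arange(1, 21)
msq = [np.mean((path[k:] - path[:-k])**2) for k in lags]
H_est = np.polyfit(np.log(lags), np.log(msq), 1)[0] / 2
print(f"estimated H = {H_est:.2f}, graph fractal dimension D = {2 - H_est:.2f}")
```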
Caputo fractional derivatives:
Caputo fractional derivatives are a type of fractional derivative defined so that the initial conditions of a system can be specified in the usual way, in terms of integer-order derivatives. They are often used in modeling complex systems with anomalous diffusion, such as fractals or porous media.
In these systems, the fractal dimension plays a key role in determining the behavior of the system over time. The fractal dimension describes how the system fills space, and can be thought of as a measure of how complex and irregular the system is.
When modeling these systems using Caputo fractional derivatives, the fractal dimension can be incorporated into the derivative itself, allowing for a more realistic and accurate representation of the system's behavior. This is done by replacing the usual order of differentiation with a fractional order that depends on the fractal dimension.
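As a concrete illustration of working with Caputo derivatives numerically, here is a minimal sketch of the widely used L1 finite-difference approximation for an order alpha in (0, 1); the test function and step size are illustrative assumptions, and the result is checked against the known Caputo derivative of f(t) = t.

```python
import numpy as np
from math import gamma

# Minimal sketch of the L1 finite-difference approximation of the Caputo
# derivative of order alpha in (0, 1) at the last grid point. The exact
# Caputo derivative of f(t) = t is t^{1-alpha} / Gamma(2 - alpha).
def caputo_l1(f_vals, dt, alpha):
    n = len(f_vals) - 1
    j = np.arange(n)
    weights = (n - j)**(1 - alpha) - (n - j - 1)**(1 - alpha)
    return dt**(-alpha) / gamma(2 - alpha) * np.sum(weights * np.diff(f_vals))

alpha, dt = 0.5, 1e-3
t = np.arange(0.0, 1.0 + dt, dt)
print(caputo_l1(t, dt, alpha), 1.0**(1 - alpha) / gamma(2 - alpha))
```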
Fractal nonlocal derivatives:
Fractal nonlocal derivatives in fractal dimension refer to a mathematical concept which uses fractal geometry to define a nonlocal derivative operator. This operator is used to describe the behavior of a function on a fractal set, where traditional calculus may not apply because the fractal set has a non-integer dimension.
The idea behind fractal nonlocal derivatives is that the derivative of a function at a point on a fractal set is not just dependent on nearby points, but also on the global behavior of the function on the fractal set. This concept is important for understanding the behavior of complex systems that exhibit self-similarity and can be modeled using fractal geometry.
The use of fractal nonlocal derivatives has applications in fields such as physics, finance, and biology, where the behavior of systems on fractal sets is of interest. It is also an active area of research in mathematics, as it allows for the development of new tools to study and understand the behavior of functions on fractal sets.
Fractal differential equations:
Fractal differential equations are an important tool in studying fractals. These equations are formulated in terms of fractional calculus, an extension of classical calculus that deals with non-integer powers of differentiation and integration. Fractal differential equations are used to model physical, biological, and engineering systems that exhibit fractal behavior.
The term "fractal dimension" refers to the concept of measuring the complexity of a fractal object. It is a non-integer dimension, typically expressed as a real number between 1 and 2 for most fractals. Fractal differential equations can be formulated in terms of this dimension, allowing researchers to study the behavior of fractals in a more systematic way.
One example of a fractal differential equation is the so-called fractal heat equation. This equation describes how heat diffuses through a fractal medium, such as a fractal network of blood vessels or airways. Another example is the fractal wave equation, which describes the propagation of waves (such as light or sound) through a fractal medium.
Fractal differential equations have many applications in science and engineering. They have been used to model the behavior of porous materials, the electrical properties of fractal networks, and the dynamics of fluid flow through fractal geometries, among other things. In general, fractal differential equations provide a unique and powerful tool for understanding the complex behavior of fractal systems.
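Since the fractal dimension discussed above can itself be estimated numerically, here is a small, hedged sketch of box counting applied to a Sierpinski triangle generated by the chaos game; the point count and box sizes are illustrative assumptions, and the expected dimension is log 3 / log 2, about 1.585.

```python
import numpy as np

# Hedged illustration: estimate the box-counting (fractal) dimension of a
# Sierpinski triangle generated by the chaos game. Expected value: log(3)/log(2).
rng = np.random.default_rng(1)
vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
pts = np.zeros((200_000, 2))
for i in range(1, len(pts)):
    pts[i] = (pts[i - 1] + vertices[rng.integers(3)]) / 2

sizes = [2**k for k in range(2, 8)]            # boxes per unit length
counts = [len(set(map(tuple, np.floor(pts * s).astype(int)))) for s in sizes]
slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
print(f"box-counting dimension ≈ {slope:.3f}")
```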
Fractional stochastic systems:
Fractional stochastic systems in fractal dimension are systems that exhibit both fractal geometry and randomness, treated through fractional calculus. Fractional calculus deals with non-integer orders of differentiation and integration, which enables the modeling of phenomena that exhibit anomalous diffusion and memory effects. Fractal geometry pertains to objects that are self-similar at different scales and are characterized by a fractal dimension, a non-integer number that typically exceeds the object's topological dimension.
Examples of fractional stochastic systems in fractal dimension could include the modeling of rainfall patterns, which exhibit fractal properties due to the self-similarity of the precipitation clusters, and can also be characterized as random processes. Another example is financial market modeling, which can be approached through fractional Brownian motion, a fractional diffusion process that can capture long-term dependence and volatility clustering of stock price time series.
The study of fractional stochastic systems in fractal dimension is an interdisciplinary field that combines mathematics, physics, and engineering, among others. It has diverse applications in various fields, such as signal processing, medical imaging, geophysics, and materials science, to name a few.
Fractal Picard iteration:
Fractal Picard iteration is a mathematical method used to find the fixed points of a self-similar or contraction mapping. It involves repeatedly applying the mapping to an initial guess while keeping track of the intermediate results. For a contraction mapping, the resulting sequence of iterates converges (by the Banach fixed-point theorem) to the fixed point, i.e., the point that maps to itself under the mapping.
This method is especially useful for analyzing the behavior of fractals, which are objects that exhibit self-similarity at different scales. Fractal Picard iteration can be used to compute the attractors of fractal functions or to generate fractal patterns.
The procedure involves dividing the domain into smaller subdomains that are related by contractions. Each subdomain is then mapped to a smaller subset of the domain, which is then recursively subdivided and mapped again. The process is repeated several times until a self-similar pattern emerges.
Fractal Picard iteration is a powerful tool in mathematics, computer science, and physics, among other fields. It has many applications, including image compression, data analysis, and the modeling of complex systems such as turbulence and chaos.
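A minimal sketch of plain Picard (fixed-point) iteration for a contraction mapping, which is the building block described above; the example map cos(x) on [0, 1], the tolerance, and the starting point are illustrative assumptions.

```python
import math

# Minimal sketch of Picard (fixed-point) iteration for a contraction mapping.
# On [0, 1], f(x) = cos(x) is a contraction, so the iterates converge to the
# unique fixed point x* ≈ 0.739.
def picard(f, x0, tol=1e-12, max_iter=1000):
    x = x0
    for _ in range(max_iter):
        x_new = f(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

print(picard(math.cos, 0.5))
```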
Fractional differential equations:
Fractional differential equations in fractal dimension are mathematical models that describe the behavior of systems with fractal geometry using fractional calculus. In these equations, the order of the derivative is non-integer, and thus they are a powerful tool for modeling phenomena that exhibit complex, non-linear behavior.
Fractal geometry is characterized by structures that exhibit self-similarity at different scales. Fractional differential equations in fractal dimension allow us to model complex systems that exhibit this self-similarity, and to study their behavior over different scales.
Such equations have applications in physics, biology, finance, and engineering. They are used, for example, in modeling the behavior of porous materials, in predicting the spread of infectious diseases, in predicting the behavior of financial markets, and in modeling the conduction of heat in materials.
In structural stochastic analysis, the input can be modeled as a random field or a stochastic process. After discretization, there is still randomness between two adjacent input points. Can the output also be characterized directly as a random field or stochastic process? For the output, however, the values between two discrete points appear to be deterministic because of the constraints imposed by the physical system.
I was looking over the internet and could not find a satisfactory answer: what is a "concrete" (i.e., applicable outside of mathematics) solution of a (definite) stochastic integral, and how does one find such a solution? Recall that in stochastic integration the result of the integration (which we eventually want to apply) is not a number, nor another stochastic process, but a random variable. So how do we obtain it, and how do we find or approximate its probability distribution? Should we integrate all the realizations of the integrated process, or only some of them, to obtain a statistical sample of the solution, or something else? Where can I find this problem properly elaborated, or who can explain this? Jerzy F.
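One hedged, concrete way to read the question: each realization of the integrand gives one realization (a number) of the stochastic integral, and Monte Carlo over many realizations gives a sample from which the distribution of the integral can be approximated. A minimal sketch for the Ito integral of W against dW, whose exact pathwise value is (W_T^2 - T)/2; path counts and step sizes are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: approximate the Ito integral I = ∫_0^T W dW by Monte Carlo.
# Each simulated path gives one realization (a number) of I; across paths we
# obtain a sample of the random variable I, whose exact pathwise value is
# (W_T^2 - T) / 2 with variance T^2 / 2.
rng = np.random.default_rng(0)
n_paths, n_steps, T = 5_000, 1_000, 1.0
dt = T / n_steps
dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
W = np.cumsum(dW, axis=1)
W_left = np.hstack([np.zeros((n_paths, 1)), W[:, :-1]])   # left endpoint (Ito)
I = np.sum(W_left * dW, axis=1)                           # one sample per path

exact = (W[:, -1]**2 - T) / 2
print("mean ≈ 0:", I.mean(), " variance ≈ T^2/2:", I.var())
print("mean |I - exact| (shrinks with the step size):", np.mean(np.abs(I - exact)))
```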
What other similar graphical approaches/tools do you know when we attempt to depict the degradation state or reliability performance of a system, aside from Markov chain and Petri net?
(Any relevant references are welcome.)
Thank you in advance.
I am doing pricing of a financial instrument. I have simulated interest rate paths which follow the CIR process (see attached). Now, I want to find the probability of each path. This is under the assumption that we can treat each path as a state and infer a transition matrix P and a stationary distribution pi, such that
pi * P = pi
I would very much appreciate if any one with knowledge of these stochastic processes point me to a resource such as a paper, R or Python implementation code or any advice on how I could proceed.
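A hedged sketch of one common variant of this idea: instead of treating each whole path as a state, discretize the simulated rate level into bins ("states"), estimate P by counting one-step transitions, and obtain the stationary distribution pi (satisfying pi * P = pi) as the left eigenvector of P for eigenvalue 1. The bin count and the mean-reverting placeholder paths are assumptions; the real simulated CIR paths would be plugged in instead.

```python
import numpy as np

# Hedged sketch: discretize the rate level into bins ("states"), count
# one-step transitions to estimate P, and take the stationary distribution pi
# (pi @ P = pi) as the left eigenvector of P for eigenvalue 1.
rng = np.random.default_rng(0)

def empirical_transition_matrix(paths, n_states=10):
    edges = np.quantile(paths, np.linspace(0, 1, n_states + 1))
    states = np.digitize(paths, edges[1:-1])          # values in 0..n_states-1
    P = np.zeros((n_states, n_states))
    for row in states:
        np.add.at(P, (row[:-1], row[1:]), 1)
    return P / P.sum(axis=1, keepdims=True)

def stationary_distribution(P):
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    return pi / pi.sum()

# Placeholder paths (mean-reverting around 3%); replace with the CIR paths.
paths = np.full((100, 500), 0.03)
for k in range(1, 500):
    paths[:, k] = paths[:, k - 1] + 0.5 * (0.03 - paths[:, k - 1]) * 0.01 \
                  + 0.005 * rng.standard_normal(100)

P = empirical_transition_matrix(paths)
print(stationary_distribution(P))
```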
Nowadays, mainstream psychology uses mathematical statistics to study psychology. This is a research method for groups, and its conclusions cannot be applied to individuals, yet the main object of psychological research is the individual. So how do we study individuals? Are time series and stochastic processes suitable for this?
I am already using YUIMA package for estimating stochastic differential equations.
I am wondering which software/packages researchers use for estimating SDEs (other than YUIMA).
Dear researchers,
I am working on formulating a hydrological model in which runoff (the output variable) is available at a monthly time step while rainfall (the input variable) is at a daily time step.
I first wanted to explore mathematical models and techniques that can be used here. I have found the MIDAS regression method, which forms a relationship between mixed-frequency data variables (output at a monthly time step and input at a daily time step). But the problem is that the variables in hydrological models are at the same time step, so that technique will not work directly, because a MIDAS model relates variables sampled at different frequencies.
So can anyone suggest relevant literature in which both the output and input variables of the model are related at high frequency (say, daily), but the model learns from low-frequency (monthly) output data and high-frequency (daily) input data?
The birth and death probabilities are p_i and q_i respectively, and (1 - (p_i + q_i)) is the probability of no change in the process. Zero ({0}) is an absorbing state and the state space is {0, 1, 2, ...}. What are the conditions for {0} to be recurrent (positive or null)? Is the set {1, 2, 3, ...} transient? What can we say about the duration of the process until absorption, and about the stationary distribution if it exists?
Every comment is appreciated.
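For reference, a hedged sketch of the standard gambler's-ruin-type criterion for this chain (assuming p_i, q_i > 0 for all i >= 1):

```latex
% Standard gambler's-ruin-type criterion, assuming p_i, q_i > 0 for all i >= 1.
\[
  \rho_0 = 1, \qquad \rho_k = \prod_{j=1}^{k} \frac{q_j}{p_j}, \quad k \ge 1 .
\]
% Absorption at 0 is certain from every initial state i >= 1 if and only if
\[
  \sum_{k=1}^{\infty} \rho_k = \infty ;
\]
% otherwise the absorption probability from state i is
\[
  \mathbb{P}_i(\text{hit } 0) \;=\; \frac{\sum_{k \ge i} \rho_k}{\sum_{k \ge 0} \rho_k} \;<\; 1 .
\]
% In either case every state in {1, 2, 3, ...} is transient, the absorbing
% state 0 is trivially positive recurrent, and the point mass at 0 is a
% stationary distribution.
```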
I have a short time series (5 observations) and would like to know both the best approach for modelling the data and the most reliable predictive option.
The data is a stochastic process recording the amount of 'green space' converted from natural environment to built form [in m2 per km2]. There is no auto-correlation or seasonality, but the data is non-stationary [and cannot be made stationary through differencing, etc.]. I have modelled the data using a Dynamic Linear Model, but the forecast predictions are not particularly reliable, so I wondered whether I had taken the wrong approach and whether there are appropriate alternatives.
I have also tried an ARIMA, but have similar issues to the DLM.
I just wondered if anyone had any advice?
Regards
John
It is known that the FPE gives the time evolution of the probability density function of the solution of a stochastic differential equation.
I could not find any reference that relates the PDF obtained from the FPE to the trajectories of the SDE.
For instance, suppose the solution of the FPE corresponding to an SDE converges to the density \delta_{x0} asymptotically in time.
Does it mean that all the trajectories of the SDE will converge to x0 asymptotically in time?
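A hedged numerical illustration of the distinction, using the Ornstein-Uhlenbeck SDE as a stand-in (parameters are illustrative assumptions): the Fokker-Planck solution converges to the Gaussian stationary density, and the ensemble of trajectories reproduces that density, but each individual trajectory keeps fluctuating rather than converging pointwise. If instead the FPE solution converged to a delta at x0 (which requires the noise to vanish there), convergence of the law to that delta would give convergence of the trajectories to x0 in probability, though not automatically almost surely.

```python
import numpy as np

# Hedged illustration with dX = -theta*X dt + sigma dW: the Fokker-Planck
# equation has stationary density N(0, sigma^2 / (2*theta)); the ensemble of
# trajectories reproduces that density at large times, while each individual
# trajectory keeps fluctuating and does not converge pointwise.
rng = np.random.default_rng(0)
theta, sigma, dt, n_steps, n_paths = 1.0, 0.5, 1e-3, 5_000, 10_000
x = np.zeros(n_paths)
for _ in range(n_steps):                       # Euler-Maruyama
    x += -theta * x * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

print("empirical variance:", x.var(),
      " FPE stationary variance:", sigma**2 / (2 * theta))
```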
The origin of gravitation, the origin of electric charge and the fundamental structure of physical reality are resolved, but these facts are not yet added to common knowledge. Also the structure of photons is resolved and the beginning of the universe is explained. A proper definition of what a field is and how a field behaves has been given. These facts are explained in the referenced booklet.
This model still leaves some open questions. The model does not explain the role of massive bosons. It does not explain the existence of generations of fermions. The HBM also does not provide an explanation for the fine details of the emission and absorption of photons. The model does not give a reason for the existence of the stochastic processes that generate the hopping paths of elementary particles. The model does not explain in detail how color confinement works. It also does not explain how neutral elementary particles can produce deformation. The referenced booklet treats many of its open questions in sections that carry this title.
The model suggests that we live in a purely mathematical model. This raises deep philosophical questions.
In other words, the Hilbert Book Model Project is far from complete. The target of the project was not to deliver a theory of everything. Its target was to dive deeper into the crypts of physical reality and to correct flaws that have been adopted into accepted physical theories. Examples of these flaws are the Big Bang theory, the explanation of black holes, the description of the structure of photons, and the description of the binding of elementary particles into higher-order modules.
The biggest discovery of the HBM project is the fact that it appears possible to generate a self-creating model of physical reality that after a series of steps shows astonishing resemblance to the structure and the behavior of observed physical reality.
A major result is also that all elementary particles and their conglomerates are recurrently regenerated at a very fast rate. This means that, apart from black holes, all massive objects are continuously regenerated. This conclusion attacks the roots of all currently accepted physical theories. Another result is that the generation and binding of all massive particles are controlled by stochastic processes that own a characteristic function. Consequently, the Hilbert Book Model does not rely on the weak and strong forces that current field theories employ.
The HBM explains gravity at the level of quantum physics and thus bridges the gap between quantum theory and current gravitation theories.
The Hilbert Book Model shows that mathematicians can play a crucial role in the further development of theoretical physics. The HBM hardly affects applied physics. It does not change much in the way that observations of physical phenomena will be described.
Consider an experiment in which we prepare pairs of electrons. In each trial, one of the two electrons - let's name it the 'herald' - is sent to a detector C, and the other - let's name it 'signal' - to a detector D. The wave-function of the signal is therefore
(1) |ψ> = ψ(r) |1>,
i.e. in each trial of the experiment, when the detector C clicks, we know that a signal-electron is in the apparatus. Indeed, the detector D will report its detection.
Now, let's consider that the signal wave-packet is split into two copies which fly away from one another, one toward the detector DA, the other to the detector DB,
(2) |ψ> = 2^(-1/2) ψ_A(r) |1>_A + 2^(-1/2) ψ_B(r) |1>_B.
We know that the probability of getting a click in DA (DB) is ½, but in a given trial of the experiment we can't predict which one of DA and DB would click.
Then, let's ask ourselves what happens in a detector, for instance DA. The 'thing' that lands on the detector has all the properties of the type of particle named 'electron', i.e. mass, charge, spin, lepton number, etc. But, in contrast to the case in equation (1), the intensity of the wave-packet is now 1/2. It's not an 'entire' electron. Imagine that a series of frames is projected on a screen, interchanging very quickly. The picture in one frame seems to be a table, but it is replaced very quickly by a blank frame, and so on. Then, can we say what we saw on the screen? A table, or a blank?
The situation of the detector is quite analogous. So, will the detector report a detection, or will it remain silent? What is your opinion?
For a deeper analysis see
What stage of flood damage management should I take?
Hello everyone,
I want to prove the existence and uniqueness of the solution of an SDE (stochastic differential equation) which depends on a time parameter, a Lévy process, and omega.
The problem is found in the book "David Applebaum: Levy Processes and Stochastic Calculus, 2nd edition" on page 375.
I actually proved the existence of a solution of such an SDE under Lipschitz and growth conditions via Theorem 6.2.3, but I don't know how to show the uniqueness.
Does anyone have ideas or hints for showing the uniqueness?
Thanks and best wishes
Hi everybody,
I am modelling a process that is the product of two stochastic processes. Is there any way to estimate the parameters of each of these processes separately?
I want to improve the performance of my MEMS gyro against its specification. As we know, the measurement errors of a MEMS gyroscope usually contain deterministic errors and stochastic errors. I focus only on the stochastic part, so we have:
y(t) = w(t)+b(t)+n(t)
where:
{w(t) is "True Angular Rate"}
{b(t) is "Bias Drift"}
{n(t) is "Measurement Noise"}
The bias drift and other noises are usually modeled in a filtering system to compensate for the outputs of gyroscope to improve accuracy. In order to achieve a considerable noise reduction, there's another solution that the true angular rate and bias drift are both modeled to set as the system state vector to design a KF.
Now, if I want to model the true angular rate, how could I do this? I only have a real dynamic test of the gyro that includes the above terms, and I don't know how to determine the parameters required by the different models (such as random walk, first-order Gauss-Markov, or AR) for modeling the true angular rate from an unknown true angular rate signal.
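A hedged sketch of one common first step, Allan-deviation analysis, which is often used to read off angle random walk, bias instability, and rate random walk parameters from the slopes of log(sigma) versus log(tau); the synthetic test signal, sampling rate, and averaging times below are illustrative assumptions, to be replaced by the recorded gyro output.

```python
import numpy as np

# Hedged sketch of Allan-deviation analysis for a gyro rate signal. `rate`
# would be the recorded gyro output; here it is a synthetic placeholder.
def allan_deviation(rate, fs, m_list):
    taus, adev = [], []
    for m in m_list:
        n_blocks = len(rate) // m
        if n_blocks < 3:
            break
        block_means = rate[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        taus.append(m / fs)
        adev.append(np.sqrt(0.5 * np.mean(np.diff(block_means) ** 2)))
    return np.array(taus), np.array(adev)

rng = np.random.default_rng(0)
fs, n = 100.0, 200_000
# White measurement noise plus a slowly drifting random-walk bias:
rate = 0.05 * rng.standard_normal(n) + 1e-5 * np.cumsum(rng.standard_normal(n))
m_list = np.unique(np.logspace(0, 4, 40).astype(int))
taus, adev = allan_deviation(rate, fs, m_list)
print(np.c_[taus, adev])
```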
I am eager to study stochastic processes and their applications in finance. As I am a student of economics, the concepts are completely unfamiliar to me. Any help would be appreciated. Can anyone suggest an introductory textbook?
I am going to develop a queueing model in which riders and drivers arrive with exponentially distributed inter-arrival times.
All the riders and drivers arriving in the system will wait for some amount of time until being matched.
The matching process will pair one rider with one driver and takes place every Δt unit of time (i.e., Δt, 2Δt, 3Δt, ⋯). Whichever side outnumbers the other, its exceeding portion will remain in the queue for the next round of matching.
The service follows the first come first served principle, and how they are matched in particular is not in the scope of this problem and will not affect the queue modelling.
I tried to formulate it as a double-ended queue, where the state indicates the excess number (of riders or drivers) in the system.
![image](https://i.stack.imgur.com/teSyW.png)
However, this formulation does not incorporate the factor Δt, so it is not in a batch-service fashion. I have no clue how to formulate this Δt (somewhat like a buffer) into the model.
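A hedged discrete-event sketch of the system as described (Poisson arrivals on both sides, pairwise FCFS matching at multiples of Δt, surplus carried over); the arrival rates, Δt, and horizon are illustrative assumptions. Such a simulation at least gives a baseline against which an analytical batch-matching model can be checked.

```python
import numpy as np

# Hedged discrete-event sketch of the batched matching queue: riders and
# drivers arrive as independent Poisson processes; every dt_match time units
# the two queues are matched pairwise (FCFS) and the surplus side carries over.
rng = np.random.default_rng(0)
lam_rider, lam_driver, dt_match, horizon = 1.0, 0.8, 2.0, 10_000.0

def poisson_arrivals(rate, horizon):
    t, times = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate)
        if t > horizon:
            return np.array(times)
        times.append(t)

riders = poisson_arrivals(lam_rider, horizon)
drivers = poisson_arrivals(lam_driver, horizon)
q_r = q_d = 0                      # carried-over surplus on each side
for k in np.arange(dt_match, horizon + dt_match, dt_match):
    q_r += np.sum((riders > k - dt_match) & (riders <= k))
    q_d += np.sum((drivers > k - dt_match) & (drivers <= k))
    matched = min(q_r, q_d)        # pairwise matching at this epoch
    q_r, q_d = q_r - matched, q_d - matched

# With unequal rates the surplus side grows roughly linearly in time.
print("final rider surplus:", q_r, " final driver surplus:", q_d)
```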
One of the main stability theories for stochastic systems is stochastic Lyapunov stability theory, which is analogous to Lyapunov theory for deterministic systems.
The main idea is that, for the stochastic system
dx = f(x) dt + g(x) dw_t,
the differential operator LV (the infinitesimal generator applied to the Lyapunov function) must be negative definite (see the sketch below).
There is another assumption in this theory:
f(0) = g(0) = 0,
and this implies that at the equilibrium point (here x_e = 0) the disturbance vanishes automatically.
What I want to know is: is this a reasonable assumption?
That is, in an engineering context, is it reasonable to assume that the disturbance will vanish at the equilibrium point?
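For reference, a sketch of the operator in question, written under the usual assumptions (V twice continuously differentiable, standard Ito diffusion):

```latex
% Sketch of the operator in question for dx = f(x) dt + g(x) dw_t, assuming V
% is twice continuously differentiable (standard Ito formula):
\[
  \mathcal{L}V(x) \;=\; \frac{\partial V}{\partial x}(x)\, f(x)
  \;+\; \tfrac{1}{2}\operatorname{tr}\!\Big( g(x)^{\mathsf{T}}
        \frac{\partial^{2} V}{\partial x^{2}}(x)\, g(x) \Big).
\]
% With f(0) = g(0) = 0 the origin is an equilibrium of the stochastic system
% and the noise intensity vanishes there; this is what makes almost-sure
% asymptotic stability of x_e = 0 possible in the first place.
```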
It seems that by solving the stationary form of the forward Fokker-Planck equation we can find the equilibrium (stationary) solution of a stochastic differential equation.
Is the above statement true? Is it a conventional way to find the equilibrium solution of an SDE? And do SDEs always have an equilibrium solution?
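A hedged sketch of the standard answer in the scalar case, assuming vanishing probability flux at the boundaries:

```latex
% Hedged sketch for the scalar case dX = f(X) dt + g(X) dW with vanishing
% probability flux at the boundaries: the stationary Fokker-Planck equation
% 0 = -d/dx [f p] + (1/2) d^2/dx^2 [g^2 p] integrates to
\[
  p_{\mathrm{s}}(x) \;=\; \frac{C}{g^{2}(x)}
  \exp\!\left( \int^{x} \frac{2 f(y)}{g^{2}(y)}\, dy \right),
\]
% provided the right-hand side is normalizable. When it is not (for example,
% Brownian motion on the whole real line), the SDE has no stationary density,
% so an equilibrium solution does not always exist.
```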
For deterministic systems, by defining a proper terminal constraint, terminal cost, and local controller, we can prove the recursive feasibility and stability of a nonlinear system under model predictive control. For stochastic nonlinear systems it is impossible to do that in the same way, since we do not have bounded sets for the states.
What is the framework for establishing the recursive feasibility and stability of MPC for stochastic nonlinear systems?
When we need to solve the Fokker-Planck equation (Kolmogorov forward equation) with finite differences, we need to solve it on a bounded domain (regardless of the dimension of the FPE). For a more accurate solution, which kind of boundary condition should be considered?
1- Natural boundary condition:
which is a Dirichlet-type boundary condition;
the value of the probability density at the boundaries equals zero.
2- Reflecting boundary condition:
which I think is a Robin-type boundary condition,
in which the flux at the boundaries is zero?
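For reference, a hedged sketch in the scalar case that makes the two options explicit in terms of the probability flux:

```latex
% Hedged sketch in the scalar case. For
% d/dt p = -d/dx [f p] + (1/2) d^2/dx^2 [g^2 p] the probability flux is
\[
  J(x,t) \;=\; f(x)\, p(x,t) \;-\; \tfrac{1}{2}\,
  \frac{\partial}{\partial x}\big[ g^{2}(x)\, p(x,t) \big].
\]
% Natural (Dirichlet-type) condition:  p = 0 on the boundary.
% Reflecting condition:  J = 0 on the boundary, which mixes p and its
% derivative (hence Robin type) and conserves the total probability.
```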
In mathematics, a delay differential equation is a type of differential equation in which the derivative of the unknown function at a certain time depends on the values of the function at previous times, while a stochastic differential equation is a differential equation in which one or more of the terms is a stochastic process, so that the solution is itself a stochastic process. Therefore, please share your valuable ideas on "How to distinguish between delay differential equations and stochastic differential equations?"
(This question is not mine, I copied here an issue raised by Hans van Leunen in one of my threads. I think it is worth of a separate discussion.)
Here are some of the problems Hans van Leunen posed:
- Why does the squared modulus of the wavefunction describe a probability density distribution? Probability density of what?
- What is the relation of these distributions with fields?
- How do they interact? Do you know the mathematics that describes these interactions?
- obviously physical reality is ruled by stochastic processes and not by all kinds of strange forces and force carriers.
I would add a problem of myself
- for which particles may we speak of a quantum field, and which ones are "too classical" to be represented by a field? For instance, may we speak of a field for atoms, or only for elementary particles?
I trust that Hans would explain his views, and I also hope that the thread won't remain only mathematical. I mean, I think that it would be interesting to discuss the meaning of the concept of field in QFT.
In process control in engineering, in many situations we need to control a system under a performance index (optimal control), where the system is exposed to uncertainty (parameter uncertainty, disturbances, or noise), and sometimes we need constraints on the states of the system.
There are two approaches: robust optimal control and stochastic optimal control.
When we use robust optimal control (because some bounds on the uncertainty are known), we consider the worst-case scenario; we can use optimal control and hard constraints on the states can be satisfied. I think this is a practical approach.
On the other hand, when we cannot specify bounds on the uncertainty but the probability distribution of the uncertainty is known, we must use stochastic optimal control. In this case, hard constraints cannot be defined, and we should use the notion of chance constraints, meaning the constraint is satisfied with some level of probability.
Now my question is: is such a definition practical in real-world applications, and is it really applied in industry?
Most of the constraints are for safety. For example, we want the temperature of a boiler to be bounded; it is dangerous if the temperature of the boiler is only bounded with some probability. So I want to know: is the chance constraint a practical notion in real-world engineering applications?
I have historic time series of 40 years of many weather variables. Call each variable's time series A, B, C ... Z for simplicity.
I want to use all 40 year time series for training with the intention of reproducing stochastic and synthetic time series.
Now I can use simple Markov chain or Monte Carlo approaches for individual variables with great success. However, the relationships between the variables will not be maintained.
I need all variables to relate, such that A has a strong connection to B, but not to C etc.
So when I stochastically generate A, I want that to influence B and not C.
What is the best method to simulate complex inter-dependencies?
Stretch goal: how can this be done in Python 3??
Thanks for any and all help!
Best,
Jamie
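A hedged sketch of one simple option: fit a first-order vector autoregression (VAR(1)) to the joint daily series, which preserves the lag-one auto- and cross-correlations when generating synthetic data. The dummy data, the variable count, and the zero-mean/deseasonalized assumption are illustrative; the real 40-year record (columns = variables A, B, C, ...) would replace them.

```python
import numpy as np

# Hedged sketch: fit a VAR(1) model to the joint series and simulate synthetic
# data from it, so that cross-correlations between variables are preserved.
rng = np.random.default_rng(0)

def fit_var1(data):
    X, Y = data[:-1], data[1:]
    A, *_ = np.linalg.lstsq(X, Y, rcond=None)        # Y ≈ X @ A
    resid = Y - X @ A
    return A, np.cov(resid, rowvar=False)

def simulate_var1(A, cov, x0, n_steps):
    L = np.linalg.cholesky(cov)
    out = [x0]
    for _ in range(n_steps):
        out.append(out[-1] @ A + L @ rng.standard_normal(len(x0)))
    return np.array(out)

# Dummy stand-in for the historical record (three correlated variables):
data = rng.multivariate_normal([0, 0, 0],
                               [[1, .8, .1], [.8, 1, .1], [.1, .1, 1]], 14_600)
A, cov = fit_var1(data)
synth = simulate_var1(A, cov, data[-1], 3_650)
print(np.corrcoef(synth, rowvar=False).round(2))     # A-B link kept, A-C weak
```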
The universe must expand, otherwise temporary local deformations would not be perceived as attractive. The same mechanism that locally pumps volume into the field will expand that field. The local addition starts spreading over the field. The mechanism is implemented by spherical pulse responses.
Stochastic processes generate the pulses. They create mass out of nothing. Here mass stands for local deformation. This deformation quickly fades away. The processes produce a continuous stream of massive objects that dilute into the increasing volume of the universe. This means that mass is a very transient property. That property must recurrently be regenerated. That is why all elementary particles are recurrently regenerated by regenerating their constituents, which are spherical pulse responses.
The embedding of the separate Hilbert spaces in which the elementary particles reside, into a non-separable Hilbert space drives the stochastic processes. Thus the volume stream comes from content of the separable Hilbert spaces that is added to the non-separable Hilbert space.
I need help in understanding the role of (random) sampling in implementation of a control system in Simulink. I need a basic, general example to visualize the role of the sampler in a control system, and the way it can be programmed (to be random/event-triggered etc).
Any help in this regard is very much appreciated
Thank you in advance
I need advice because I have to create a mathematical model based on a Markov stochastic process to indicate where a human step will be in space. I have to start from this:
Pr(S(t1) = S1 | S(t0) = S0) = Pr(step length, step frequency) * Pr(change in step length, change in step frequency)
The first probability in the product is deterministic while the second is based on empirical estimates. I have to make this product explicit using factors such as the step length, the step frequency, and the weight of the person.
I am looking for real data that I can use for seismic signal segmentation analysis using any stochastic process.
Hi
I am trying to find the steady-state solution of a stochastic differential equation
dy = A y dt + B1 y dV1 + B2 y dV2
where A, B1, and B2 are operators, and dV1 and dV2 are colored noises.
Is there any way, or any literature, in which the steady-state solution (dy/dt = 0) of a stochastic differential equation has been found? Your help will be appreciated.
For stochastic processes, how can we define a region in which the state trajectories of an SDE always remain? Of course, because of the unboundedness of the uncertainty, it is not possible to define such a region exactly (as in the robust method), but intuitively there should be a pair (region, probability) such that the state trajectories will not leave this region with the corresponding probability. How can we define and calculate this region and its probability for an SDE?
In which software can I run a stochastic dominance analysis?
I have a question regarding simulating the following 1D stochastic differential equation in R using the Sim.DiffProc package:
dx1 = (b1*x1 − d1*x1) dt + sqrt(b1*x1 + d1*x1) dW1(t)
I have taken this equation from the book Modeling with Ito Stochastic Differential Equations by E. Allen. In the drift and diffusion parts of the equation, b1 and d1 are model parameters representing birth and death rates (for a single-population approximation of a two-interacting-populations compartment model). The relevant lines of my code are as follows (note that I've used thetas to represent the parameters in my code):
Code (1):
fx <- expression( theta[1]*x1 - theta[2]*x1 )        ## drift part
gx <- expression( (theta[3]*x1 + theta[4]*x1)^0.5 )   ## diffusion part
fitmod <- fitsde(data = mydata, drift = fx, diffusion = gx,
                 start = list(theta1 = 1, theta2 = 1, theta3 = 1, theta4 = 1),
                 pmle = "euler")
Or should I model it like this?
Code (2):
fx <- expression( theta[1]*x1 - theta[2]*x1 )
gx <- expression( (theta[1]*x1 + theta[2]*x1)^0.5 )
fitmod <- fitsde(data = mydata, drift = fx, diffusion = gx,
                 start = list(theta1 = 1, theta2 = 1),
                 pmle = "euler")
I am not clear whether to use theta[1], theta[2], theta[3], theta[4] as in the first version above, or whether I should code it using only the parameters theta[1] and theta[2] (as in the second version), because in the original model the parameters b1 and d1 (birth and death rates) appearing in the drift part are the same as those appearing in the diffusion part.
I cannot find a single example in the Sim.DiffProc package documentation where a parameter is repeated in the way I have done in the second version.
Thanking in anticipation and best regards.
Saad Sharjeel.
Hi!
I am working on stochastic resonance (SR). I want to find the SNR before and after the stochastic resonance model is applied, because I want to compute the SNR gain. Can anyone help me?
I am trying to discretize a stochastic process of two variables that are weakly correlated and I would like to calibrate the correlation coefficient using data.
When doing spatial analysis of ecological data, taking environmental and spatial variables and performing variance partitioning, one obtains fractions that are associated with the environment, environment plus space, and space only, among others. It is not clear to me from the literature what the role of these fractions is in explaining or suggesting stochastic or neutral processes structuring the community. I know it has to be interpreted carefully, but I still do not get it completely. Any references for reading?
I think that there is an analogy between the field of linguistics, as part of information science, and the field of modern statistical theory (thermodynamics). The calculations can be done via the maximum entropy principle.
It would be interesting to explore sequences in mother-infant dyad interactions; however, it seems that no one uses lag sequential analysis in Observer, but rather calculates Markov matrices, etc.
A continuum equation is used to analyze a model of stochastic surface growth based on the Poisson distribution. In this stochastic growth, the flat surface keeps becoming rougher as time proceeds, but the correlation length is always zero during the stochastic growth process. I am not able to understand why the correlation length will always be zero.
How can I find a stationary Markov perfect equilibrium by numerical methods in a stochastic dynamic game when the action space and the state space are both uncountably infinite but bounded?
I'm studying a paper that includes the model of a stochastic game. I attach it to this message. Thank you for any help.
What is the best programming language for game theory?
I want to know how I can represent a stochastic process in a programming language and find the solution with it. I study stationary Markov perfect equilibria in discounted stochastic games and want to implement a model in a programming language, but I'm not familiar with the programs in this field.
I've been flipping through some literature on deriving the instantaneous state probabilities of a semi-Markov stochastic process. However, I haven't been able to lay hands on a worked example involving non-exponential transitions. Could you please help with links to relevant references, published or unpublished? Thanks!
Hello everybody,
I have three basic questions about AWGN. I always assumed them to be trivial, but I have not a clear answer for them if somebody asks me.
1st. Does white Gaussian noise always mean wide-sense stationary noise? (Another equivalent form of this question: do I have to specify that I mean wide-sense stationary noise even when it is already white Gaussian?)
2nd. Does Gaussian noise even exist without being white? (Another equivalent form of this question: can a colored noise also have a Gaussian distribution?)
And this takes me to another question, and it seems to be the hardest to answer:
3rd. How is the spectral content of a stochastic process related to its statistical (amplitude) distribution?
Thank you in advance.
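A hedged numerical illustration relevant to questions 2 and 3: passing white Gaussian noise through a linear filter produces colored noise whose marginal distribution is still Gaussian, since linear operations preserve Gaussianity; the filter choice and sample size are illustrative assumptions.

```python
import numpy as np
from scipy import signal, stats

# Hedged illustration: filtering white Gaussian noise with a linear filter
# changes its spectrum (it becomes colored) but the marginal distribution
# stays Gaussian, since linear operations preserve Gaussianity. Spectral
# content and amplitude distribution are therefore largely independent.
rng = np.random.default_rng(0)
white = rng.standard_normal(100_000)
b, a = signal.butter(4, 0.1)                 # low-pass filter -> colored noise
colored = signal.lfilter(b, a, white)

print("white   skew %+.3f  excess kurtosis %+.3f" %
      (stats.skew(white), stats.kurtosis(white)))
print("colored skew %+.3f  excess kurtosis %+.3f" %
      (stats.skew(colored), stats.kurtosis(colored)))
f, Pxx = signal.welch(colored)               # PSD is no longer flat
print("PSD ratio (low/high frequencies):", Pxx[:10].mean() / Pxx[-10:].mean())
```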
I am wondering whether there is any easy way to find the p-value in distribution fitting of a general renewal process (with a power-law function), especially for the Kijima-II model. Restoration factors are generally between 0 and 1, so the models cannot be reduced to non-homogeneous Poisson or perfect renewal processes.
Chong et al in Cell 158 (pp. 314-326) explain a possible mechanism for stochastic transcription bursting in bacteria.
Is there any evidence for stochastic transcription in mammalian cells? Are there any validated mechanisms for this process?
Let x(t) be a stochastic process. It may be continuous or discrete - I don't want to define it here.
Let x(t) be stationary, then its autocorrelation is defined as R(r) = E[x(t) x(t+r)].
The transition (two-point joint) probability can be defined as
P(t, x0; t+r, x1) = Prob(x(t) = x0 and x(t+r) = x1).
(for stationary process we may put t=0)
Are there any known relations between R(r) and P(t, x0; t+r, x1)?
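One relation that always holds, sketched here with P interpreted as the two-point (joint) density in the continuous-state case:

```latex
% One relation that always holds, writing P as the two-point (joint) density
% in the continuous-state case (replace the integrals by sums for a discrete
% state space):
\[
  R(r) \;=\; \mathbb{E}[x(t)\, x(t+r)]
       \;=\; \iint x_0\, x_1\, P(t, x_0;\, t+r, x_1)\; dx_0\, dx_1 .
\]
% So R(r) is the second mixed moment of the two-point distribution: it can
% always be computed from P, but P is generally not recoverable from R(r)
% alone (except in special families such as Gaussian processes).
```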
Markov chains are a rather popular mathematical tool in econometric studies, but are there any studies about using them in actual economic activities?
I've solved the Kuramoto model for 100 oscillators and have calculated the phase of each oscillator (theta) at each time step. Now, using these phases (thetas), I need to generate two or more signals to measure the synchronization among them as a function of coupling strength with my own method, in order to check the capability of my method.
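A hedged sketch of one straightforward construction: turn the phases into oscillator signals, e.g. s_i(t) = sin(theta_i(t)), and compute the Kuramoto order parameter as a reference synchronization measure to compare against the custom method; the placeholder phases below are an assumption standing in for the solver output.

```python
import numpy as np

# Hedged sketch: turn the solved phases theta[i, t] into oscillator signals
# and compute the Kuramoto order parameter r(t) as a reference measure.
rng = np.random.default_rng(0)
n_osc, n_steps = 100, 2_000
theta = np.cumsum(rng.normal(0.1, 0.01, (n_osc, n_steps)), axis=1)  # placeholder

signals = np.sin(theta)                            # one signal per oscillator
r_t = np.abs(np.exp(1j * theta).mean(axis=0))      # order parameter in [0, 1]
print("time-averaged order parameter:", r_t.mean())
```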
I need to find a benchmark paper which develops a lattice model of a jump-diffusion process. Actually, there are several papers in the finance literature which create a lattice form of jump-diffusion, but my case is special: the jump size is constant instead of being a random variable. I think this point creates a problem for me. I would be very glad if you could suggest any paper in this respect.
Thank you,
Fikri
Researchers having expertise in time series analysis and stochastic processes: what is the difference between white noise and i.i.d. noise?
It seems in the history of climate science that most scientists spoke of the climate as being stochastic until about the mid-1980s, when a shift occurred and the climate was more often described as being chaotic. Obviously, the shift reflects the wish of climate scientists to enable prediction and mathematical capture, but it appears to be wishful thinking, at best. So, as a survey, what does the RG community think about this fundamental conundrum in the sciences?
GARCH is an abbreviation of Generalized Auto Regressive Conditional Heteroskedasticity. Let Xt be a stochastic process which can be decomposed as
Xt = σt・Wt (t = 0, 1, 2, ...).
Here {Wt} is a sequence of random variables with zero mean and constant variance (for example a normal distribution N(0,1)), and Ws and Wt are statistically independent if s ≠ t. The coefficient σt is a sequence of positive numbers determined, in the simplest case, by
σt^2 = a0 + a1 Xt-1^2 + b1 σt-1^2  (a0, a1, b1 > 0).
Can this process be interpreted as an instance of a martingale? I am interested in the case a1 + b1 = 1.
I have only an elementary knowledge of stochastic processes and martingales. I do not even know whether this is a well-known fact or not. Please explain it to me plainly.
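A hedged sketch of one standard observation relevant to the question (assuming Wt is independent of the past and E|Xt| is finite):

```latex
% Hedged sketch, assuming W_t is independent of the past (F_{t-1} denotes the
% information up to time t-1) and E|X_t| < infinity: since sigma_t is
% determined by F_{t-1},
\[
  \mathbb{E}[X_t \mid \mathcal{F}_{t-1}] \;=\; \sigma_t\, \mathbb{E}[W_t] \;=\; 0 ,
\]
% so {X_t} is a martingale difference sequence and the partial sums
% S_t = X_1 + \dots + X_t form a martingale, while X_t itself is not one.
% The condition a_1 + b_1 = 1 (the IGARCH case) affects whether the
% unconditional variance is finite, not this martingale-difference property.
```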
Let X be a random variable with values on the (classical) Wiener space whose law is absolutely continuous w.r.t. the reference Wiener measure (and satisfying a proper notion of "mean zero" and "unit variance"). Let X_1,...X_n be i.i.d. copies of X. Does the density (w.r.t the Wiener measure) of (X_1+...+X_n)/ \sqrt{n} converge in L^1 to 1?
Imagine a process that generates an item every Tg seconds on average, with a standard deviation stdev_Tg seconds. Let N be the random number of items generated during a fixed time interval T. If stdev_Tg = 0, then N = T/Tg is constant. Is there an analytical expression to compute the average of N if stdev_Tg > 0? And stdev_N? What would the distribution of N be as a function of the distribution of Tg?
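A hedged sketch using standard renewal-theory asymptotics, assuming i.i.d. inter-generation times and T large compared with Tg:

```latex
% Hedged sketch using standard renewal-theory asymptotics, assuming i.i.d.
% inter-generation times and T large compared with T_g:
\[
  \mathbb{E}[N] \;\approx\; \frac{T}{T_g}, \qquad
  \operatorname{Var}[N] \;\approx\; \frac{T\,\sigma_{T_g}^{2}}{T_g^{3}} ,
\]
% and N is approximately normal with this mean and variance (renewal central
% limit theorem). For short intervals T the exact distribution of N depends
% on the full law of the inter-generation times, not only on its mean and
% standard deviation.
```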
How can one simulate a stationary Gaussian process through its spectral density?
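A hedged sketch of the classical spectral-representation (random-phase) method; the target PSD, frequency range, and grid sizes below are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of the spectral-representation (random-phase) method:
# superpose cosines with amplitudes sqrt(2*S(f_k)*df) and independent uniform
# phases. The result has (approximately) the target one-sided PSD S(f), is
# stationary, and is close to Gaussian by the CLT when many bins contribute.
rng = np.random.default_rng(0)

def simulate_from_psd(S, f_max, n_freq, t):
    df = f_max / n_freq
    f = (np.arange(n_freq) + 0.5) * df               # frequency grid
    phases = rng.uniform(0.0, 2.0 * np.pi, n_freq)
    amps = np.sqrt(2.0 * S(f) * df)
    return (amps * np.cos(2.0 * np.pi * np.outer(t, f) + phases)).sum(axis=1)

S = lambda f: 1.0 / (1.0 + (2.0 * np.pi * f) ** 2)   # target one-sided PSD
t = np.linspace(0.0, 100.0, 5_001)
x = simulate_from_psd(S, f_max=3.0, n_freq=800, t=t)

f_grid = np.linspace(0.0, 3.0, 10_000)
print("sample variance:", x.var(),
      " target variance:", np.trapz(S(f_grid), f_grid))
```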
I have been searching for a canonical notation for describing large queueing-system networks. I know the Kendall notation is used for describing queueing systems; however, the Kendall notation does not allow me to describe large queueing-system networks with the purpose of establishing a taxonomy of these networks.
Could anybody please let me know whether there exists a canonical notation for describing large queueing-system networks?
Hello, I am currently working on models where energy can be produced using either a clean or dirty technology and investment (in knowledge) reduces the average cost of the clean technology or backstop. A steady state involves using both the dirty and clean technologies when their marginal costs are equal.
I am thinking of including a stochastic process for changes in energy prices such that investment in the backstop is feasible only when energy prices are above a certain level (that is to say, investment in knowledge now reduces the future average cost of the backstop, but there is also a huge fixed cost in actually using the backstop). Theoretically, I believe that this would involve switching back and forth between clean and dirty technologies. I am looking for any ideas on how to model this. I am attaching my recent publication (which basically includes stochasticity, as I said, in my current model).
I am interested in collaborating! any ideas?
Supratim
How can one construct an approximate continuous state space for a Petri net system, or is it possible to construct ordinary differential equations (ODEs) for a Petri net system?
Start with a reflection domain that is a polygon A1 A2 ... An, with n vertices.
A simple computation using the formula on line 7, page 57, shows that if d(An, An-1) --> 0, then P(B5(0) = An) - P(B5(0) = An-1) --> 0 as n --> infinity,
and P(B5(t+s) ∈ E | B5(0) = An) - P(B5(t+s) ∈ E | B5(0) = An-1) --> 0 as n --> infinity,
which allows one to derive the distribution of the reflected process in a bounded domain with smooth boundaries.
Thanks
Bernard Bellot
The WCE (Wiener chaos expansion) method introduces a spectral method based on tensor products of orthonormal Hermite polynomials as a basis in the Lp space of a stochastic process; easily finding the expectation and variance is among its advantages. Are there any other stochastic bases (apart from wavelet ones and operational matrices of polynomials)?
I am looking for seminal papers on this topic. The issue is that, from my point of view, mathematical expectations and correlation (second-order) moments differ across time resolutions: annual data are often i.i.d., monthly data exhibit an autocorrelation function of simple Markovian type, but daily data usually have a more complex correlation structure.
I need references that could help to clarify this point. Some of my colleagues argue for probabilistic self-similarity of hydrological processes, so I think we have some common misunderstanding of the concepts here.
How should one treat an infinite Levy measure?
Linear quadratic Gaussian (LQG) control has been studied extensively for relatively low-order stochastic processes. However, for highly random, unobservable processes, LQG control gives poor performance, especially for nonlinear MIMO systems. So, could controllers for such processes be developed?
Actually, I am working on multichannel EEG data obtained from scalp electrodes of meditating and non-meditating subjects. We want to quantify the changes that occur in one's brain signals when one meditates.
I have preprocessed the signals by bandpass filtering, normalization, and artifact removal by wavelet thresholding. After that, I have segmented the data set of each channel (we have 64 channels per subject and 64,000 samples per channel, the sampling frequency being 256 Hz). I have considered 1-second (i.e. 256-sample) segments with 50 percent overlap, so in total we have 499 segments per channel per subject.
Then I decomposed each of the segments using wavelet decomposition and calculated statistics such as the mean, variance, kurtosis, and skewness from each band per segment per channel per subject. But I am unable to form a feature vector that I can input into a classifier. Please help.
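A hedged sketch of one common arrangement: one row per (subject, segment) sample and one column per (channel, band, statistic) feature; the subject count, band count, and the random placeholder array are illustrative assumptions standing in for the computed wavelet-band statistics.

```python
import numpy as np

# Hedged sketch: one row per (subject, segment) sample and one column per
# (channel, band, statistic) feature, ready for a standard classifier.
n_subjects, n_channels, n_segments, n_bands, n_stats = 10, 64, 499, 5, 4
rng = np.random.default_rng(0)
feats = rng.random((n_subjects, n_channels, n_segments, n_bands, n_stats))

# Feature matrix: samples = subject-segments, features = channel-band-stats
X = feats.transpose(0, 2, 1, 3, 4).reshape(n_subjects * n_segments, -1)
y = np.repeat(np.arange(n_subjects) % 2, n_segments)  # e.g. meditator vs not
print(X.shape, y.shape)        # (4990, 1280) and (4990,)
```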
n(t) is a white Gaussian process.
T1 and T2 are the periods at which n(t) is sampled,
N1=n(m \cdot T1), N2=n(m \cdot T2).
S1(f) and S2(f) are the spectrum of N1 and N2 respectively.
What is the relation or difference between S1(f) and S2(f)? Are they identical?
I am facing a problem regarding the stationary distribution of mobility models in revising my paper.
In a general sense, my question is like this.
- Given a stochastic process {Xt: t >= 0} such that the initial state X0 is uniformly distributed in the state space and, for all t > 0, Xt = X0, namely, the process does not evolve with time. For this specific process, can we say that {Xt} has a stationary distribution that is uniformly distributed in the state space?
Hello,
Maybe this is a very easy question, maybe not. I have a time-discrete stochastic process X = (Xt). Each Xt has a different pdf, so the process is not i.i.d. All Xt have the same sample space, and the pdfs are constructed from the same sample size. I do not want the joint probability function; I want the pdf of all realisations of all Xt pooled together, as if there were no difference in time. How can I get this "summed-up" pdf from the separate pdfs?
Thanks in advance
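A hedged sketch of one standard way to read the question, assuming each time point contributes the same number of realisations:

```latex
% Hedged sketch, assuming each time point contributes the same number of
% realisations: pooling the samples of X_1, ..., X_T is equivalent to drawing
% from the equal-weight mixture of the individual densities,
\[
  f_{\mathrm{pooled}}(x) \;=\; \frac{1}{T} \sum_{t=1}^{T} f_{X_t}(x) ,
\]
% i.e. the "summed-up" pdf is the average of the separate pdfs (with weights
% proportional to the sample sizes if these differ between time points).
```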