Science topic

Markov Processes - Science topic

Explore the latest questions and answers in Markov Processes, and find Markov Processes experts.
Questions related to Markov Processes
  • asked a question related to Markov Processes
Question
2 answers
Dear Researchers,
Is there any tool available for BDMP (Boolean logic Driven Markov Processes)? If so, kindly give details on how to get it.
Relevant answer
Answer
The TREE module of the GRIF software suite is also based on Boolean logic driven Markov processes:
- the logic is provided by a fault tree
- the leaves are modelled by small Markov processes
A free demo version is available at https://grif.totalenergies.com/en/grif.
A description of this kind of BDMP is given in:
Best regards
  • asked a question related to Markov Processes
Question
6 answers
When a Markov matrix is multiplied by an input to get an output, how are the inputs defined, and how do they differ from the matrix coefficients? I want to predict the presence of something based on previous data.
Relevant answer
Answer
You can input any one of the states; the transition matrix itself is independent of your input.
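To make this concrete, here is a minimal Python sketch (the two-state matrix is made up for illustration): the "input" is a probability vector over states, while the matrix coefficients are conditional transition probabilities.

```python
import numpy as np

# Matrix coefficients: P[i, j] = probability of moving from state i to state j.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Input: a probability vector over states, e.g. "currently in state 0".
x = np.array([1.0, 0.0])

# Output: the distribution over states one step later.
print(x @ P)        # [0.9, 0.1]
print(x @ P @ P)    # two steps ahead, for prediction from previous data
```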
  • asked a question related to Markov Processes
Question
10 answers
Dear researchers,
Pardon me for being a novice here. In the attached image, eq. 3.1 represents the transition matrix (that part is clear). I am not able to comprehend eq. 3.2, alpha*P = alpha, or the equations that follow.
I have the P matrix as an outcome of one of my projects. How should I calculate alpha and its elements a1, a2, a3, etc.?
I would request some of your valuable guidance and help.
Relevant answer
Answer
Hi Mritunjay,
One of the properties of (well-behaved) Markov chains is that the transition structure becomes stationary in the limit: P^n tends to a stationary matrix A that no longer changes with time. This A is essentially the "GLOBAL" state transition matrix, while P is a "LOCAL" version of the same.
This can be explained as follows: for a starting probability vector of states Pi_0, the subsequent probability vectors are Pi_1 = Pi_0*P, Pi_2 = Pi_0*P^2, and in general Pi_n = Pi_0*P^n. In the limit, Pi_n = Pi_0*A = Alpha.
For the next state, Pi_(n+1) = Pi_0*P^n*P = Pi_0*P^(n+1). Since P^(n+1) also tends to A in the limit, Pi_(n+1) = Pi_0*A = Pi_n.
Equation 3.2 then reads: Alpha*P = Pi_0*P^n*P = Pi_0*P^(n+1) = Pi_0*A = Alpha.
Mathematically, in the example shown, the values of a_1, ..., a_n are computed analytically. In a real situation this is not done; instead, taking a sufficiently long data record when estimating P ensures approximate statistical stationarity, i.e. the estimated P is effectively A (with very small variations).
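To see this numerically, here is a minimal Python sketch (NumPy assumed; the 3-state matrix is made up for illustration) that finds Alpha both by repeated multiplication and as the left eigenvector of P for eigenvalue 1:

```python
import numpy as np

# A made-up 3-state transition matrix (rows sum to 1).
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])

# Power iteration: propagate any starting distribution until it stops changing.
alpha = np.array([1.0, 0.0, 0.0])
for _ in range(1000):
    alpha = alpha @ P
print(alpha)                      # stationary distribution Alpha

# Cross-check: Alpha is the left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
alpha_eig = np.real(v[:, np.argmax(np.real(w))])
alpha_eig /= alpha_eig.sum()      # normalise so the entries sum to 1
print(alpha_eig)                  # matches, and alpha @ P == alpha
```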
Hopefully this helps, and you can follow the subsequent discussion.
You can read my paper, " Online Discovery and Classification of Operational Regimes from an Ensemble of Time Series Data" which has a description on how Markov modelling is actually used for real data.
Cheers
  • asked a question related to Markov Processes
Question
4 answers
I am trying to estimate the most likely number of clusters K using the MCMC (Markov Chain Monte Carlo Inference of Clusters from Genotype Data) function in the Geneland R package by Gilles Guillot et al. I am a little bit confused when it comes to the varnpop and freq.model arguments.
In the package reference manual https://cran.r-project.org/web/packages/Geneland/Geneland.pdf  one may read:
varnpop = TRUE *should not* be used in conjunction with freq.model = "Correlated"
On the other hand, another manual http://www2.imm.dtu.dk/~gigu/Geneland/Geneland-Doc.pdf gives an example of MCMC usage that looks like this:
MCMC(coordinates=coord, geno.dip.codom=geno, varnpop=TRUE, npopmax=10, spatial=TRUE, freq.model="Correlated", nit=100000, thinning=100, path.mcmc="./")
I am not sure how to reconcile these two contradictory pieces of information; any suggestions?
Relevant answer
Answer
Dear all, I am experiencing the same problem with the correlated allele frequencies model in Geneland: either the MCMC does not converge, or it converges with too many, implausible clusters. The uncorrelated frequencies model, however, seems to perform well...
Has there been any advance on this question since 2018? Cheers.
  • asked a question related to Markov Processes
Question
7 answers
I'm trying to find an open-source sentiment analysis program based on hidden Markov models. Even a program that is not open source would be great, provided it has good documentation.
Thank you.
  • asked a question related to Markov Processes
Question
4 answers
Hello everybody.
The reward is necessary to tell the machine (agent) which state-action pairs are good and which are bad.
Please help me understand the behaviour of the discount factor on the reward in reinforcement learning.
What I don't understand is why a discounted reward is necessary. Why should it matter whether a good state is reached sooner rather than later?
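To illustrate why timing matters, here is a minimal sketch (the rewards and discount factor are made up): with a discount factor gamma < 1, the same reward is worth less the later it arrives, which is exactly what makes reaching good states sooner preferable.

```python
# Discounted return: G = sum_t gamma^t * r_t
def discounted_return(rewards, gamma):
    return sum(gamma**t * r for t, r in enumerate(rewards))

gamma = 0.9
early = [0, 0, 10]          # reward of 10 arrives at step 2
late  = [0, 0, 0, 0, 10]    # same reward arrives at step 4

print(discounted_return(early, gamma))  # 10 * 0.9**2 = 8.1
print(discounted_return(late, gamma))   # 10 * 0.9**4 ≈ 6.56
```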
Relevant answer
Answer
Hi . I will definitely try to read these slides. Thanks a lot
  • asked a question related to Markov Processes
Question
3 answers
I have more than 400 different events that occur over two years; some of them occur 4000 times, others no more than 50 times. The events are not equally spaced in time, e.g., one at 15:00:02, another at 15:10:45 and another at 15:45:56. I cannot ensure that these events are independent; they may be related. I want to analyse these events and try to find a pattern.
Type of data:
{timestamp1, event A (string value)}, {timestamp2, event B (string value)} -> between timestamp1 and timestamp2 there is no event, and the gaps are not equally distributed. Event A may or may not influence event B.
I would like to know what type of methodology I can apply. I have been reading about SAX time series, Markov models, hidden Markov models, DTW (dynamic time warping), time warping for discrete events, renewal process models and continuous-time Markov processes. However, I think these algorithms don't fit my problem.
I hope someone can help me. Thanks in advance.
Relevant answer
Answer
If the data differ in distribution, a spectral analysis can be adopted using the spectral density function, a consistent estimate of the spectral density, or the periodogram.
  • asked a question related to Markov Processes
Question
2 answers
I am going to develop a queueing model in which riders and drivers arrive with exponentially distributed inter-arrival times.
All riders and drivers arriving in the system wait for some amount of time until they are matched.
The matching process pairs one rider with one driver and takes place every Δt units of time (i.e., at Δt, 2Δt, 3Δt, ⋯). Whichever side outnumbers the other, its excess remains in the queue for the next round of matching.
The service follows the first-come, first-served principle; how the pairs are matched in particular is outside the scope of this problem and does not affect the queue modelling.
I tried to formulate it as a double-ended queue, where the state indicates the excess number in the system.
However, this formulation does not incorporate the factor Δt, so it is not a batch-service model. I have no clue how to build this Δt (somewhat like a buffer) into the model.
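As a sanity check for any analytical formulation, the Δt batch matching is straightforward to simulate. A minimal sketch, with made-up arrival rates, matching interval and horizon:

```python
import random

random.seed(1)
lam_r, lam_d = 1.0, 0.8    # made-up arrival rates for riders and drivers
dt, horizon = 2.0, 1000.0  # matching interval and simulation length

def poisson_arrivals(rate, t_end):
    """Exponential inter-arrival times => Poisson arrival process."""
    t, times = 0.0, []
    while True:
        t += random.expovariate(rate)
        if t > t_end:
            return times
        times.append(t)

riders = poisson_arrivals(lam_r, horizon)
drivers = poisson_arrivals(lam_d, horizon)

# At each matching epoch k*dt, match min(#riders, #drivers) waiting pairs
# (FCFS); the excess on the longer side carries over to the next round.
qr = qd = i = j = 0
t, matched = dt, 0
while t <= horizon:
    while i < len(riders) and riders[i] <= t: qr += 1; i += 1
    while j < len(drivers) and drivers[j] <= t: qd += 1; j += 1
    m = min(qr, qd)
    matched += m
    qr -= m; qd -= m
    t += dt

print(matched, qr, qd)  # matches made, leftover riders/drivers
```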
Relevant answer
Answer
Can you please explain the matching process in more detail? The enclosed picture is the standard random walk with two competing, exponentially distributed, independent waiting times: if type 1 wins, we go one step to the right; in the opposite case, we go one step to the left. No service is sketched, so as precise a description of the service as possible is needed. The main doubt is caused by the lack of an interpretation of negative positions: is it not the difference between the numbers of arrived riders and drivers?
Also, regarding these words:
GQ: "The matching process will pair one rider with one driver and takes place every Δt unit of time (i.e., Δt, 2Δt, 3Δt, ⋯). Whichever side outnumbers the other, its exceeding portion will remain in the queue for the next round of matching."
it is not explained which side "outnumbers" the other. Can this be explained in simpler words, like this:
The state is characterized by the current values of two numbers: the riders and the drivers in the waiting room. At the matching instant k times Δt, both numbers decrease by the minimum of the two (hence one becomes zero) . . . .
Note that this is only a guess at what you meant!
Joachim
  • asked a question related to Markov Processes
Question
2 answers
Hello all ,
I have four stochastic matrices, and I want to estimate (or measure) the correlation between them.
Can the chi-squared test for independence be used? Tests based on distances, or what kind of tests?
Thanks
Relevant answer
Answer
Thank you very much Joachim Domsta and Paul Marcoux for your responses.
Yes, in fact I have a data set from three financial markets, on which I ran an ARDL (cointegration) analysis and found a long-term relationship among them. I then discretised the data into four states for each variable, which gave a new sequential data set; I fitted a Markov chain model to each new set, and finally three stochastic matrices were estimated. So, following your suggestion, dear Joachim Domsta and Paul Marcoux: is a chi-squared test sufficient to estimate the independence (correlation) between these matrices?
Thanks
  • asked a question related to Markov Processes
Question
4 answers
Without data collection, how can the properties of a Markov process be determined using a known probability distribution?
Relevant answer
Answer
The general solution at infinity is either an ergodic process (mixing across all cells), mixing across sets of subcells, or absorption (the process halts at a finite time). Each of these follows from the characteristics of the Chapman-Kolmogorov transition matrix in an obvious way.
  • asked a question related to Markov Processes
Question
8 answers
I have a system that is a Markov chain. It is in principle possible to calculate the transition matrix, given a set of parameters of the system. The calculation may take long, but it can be done for some reasonable parameter sets. Sometimes, however, the transition matrix I get is too big to be managed by my computer (it requires too much memory; I have not tried saving it to the hard drive). Are there any methods for algorithmically manipulating large Markov chains "piece by piece", so to speak? Or are there other methods for calculating various properties of systems that correspond to very large Markov chains?
In my case I get absorbing Markov chains, and I would like to calculate properties such as the probability of absorption or the expected number of steps until absorption. Of course, there are appropriate formulae, but they are useless if you cannot carry out the computation.
Relevant answer
Answer
You may consider software packages like SPNP to automatically generate and solve large Markov chains. Alternatively, use hierarchical/fixed-point iterative methods via a software package such as SHARPE. Many real-life large problems are solved with these methods in my latest book: Reliability and Availability Modeling, Cambridge University Press, 2017.
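As a complement, the absorbing-chain formulae mentioned in the question can often be made computable by keeping the matrix sparse and solving linear systems instead of forming the fundamental matrix N = (I - Q)^(-1) explicitly. A minimal sketch, assuming SciPy, with a tiny made-up chain standing in for a large one:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

# Canonical form: Q = transitions among transient states,
# r = transitions from transient states to the single absorbing state.
Q = sparse.csr_matrix(np.array([[0.5, 0.3],
                                [0.2, 0.6]]))
r = np.array([0.2, 0.2])

n = Q.shape[0]
I = sparse.identity(n, format="csr")

# Expected number of steps to absorption: solve (I - Q) t = 1
# rather than inverting (I - Q).
t = spsolve(I - Q, np.ones(n))

# Absorption probabilities: solve (I - Q) b = r (here trivially 1,
# since there is only one absorbing state).
b = spsolve(I - Q, r)

print(t)  # expected steps from each transient state
print(b)  # probability of ending in the absorbing state
```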
  • asked a question related to Markov Processes
Question
3 answers
I am trying to locate the specific source code for the SAS program mentioned in the paper below, on page 102:
Unfortunately the site is no longer working and the authors are not contactable. If anyone can help, I would be most grateful.
Relevant answer
Answer
Good question, thank you.
  • asked a question related to Markov Processes
Question
2 answers
Knowing the structure will assist me in modelling situations where risk is a major factor.
Relevant answer
Answer
Usually the reward depends on the current state and action, but it can also depend on the future state.
  • asked a question related to Markov Processes
Question
3 answers
Deep learning has proven to be beneficial for complex tasks such as classifying images. However, this approach has mostly been applied to static datasets.
Let's say we have a dataset of action sequences. There are 10 different actions, but for simplicity assume only two, a1 and a2. The data are not stationary: for some time we have one distribution of action-sequence probabilities, and then another. For my task an RBM can model the distribution of stationary subsequences very well. However, I want to switch to a new RBM as soon as the distribution properties change. I also want to be able, after training, to find the cluster (RBM?) that a given sequence of actions fits best. Is that realistic?
My rough idea is that when the current RBM's performance begins to degrade significantly, we switch to a new model. We can choose the right RBM by best performance, and if none is good enough (it isn't clear how to measure this correctly), we create a new one.
Do you have any ideas or hints, maybe some working alternatives?
Relevant answer
Answer
If you would consider algebraic (not statistically based) segmentation of a non-stationary time series, you could take a look at: R. Palivonaite, K. Lukoseviciute, M. Ragulskis, "Algebraic segmentation of short nonstationary time series based on evolutionary prediction algorithms", Neurocomputing, 2013, vol. 121, pp. 354-364.
  • asked a question related to Markov Processes
Question
2 answers
I am trying to use the Gilbert-Elliott correlated error model, a Markov model, in LTE-A to calculate the probability of packet loss. Like other researchers, I use a simplified equation to calculate this probability.
Is there any technical issue that prevents this traditional method from being used in LTE-A?
  • asked a question related to Markov Processes
Question
14 answers
One of the reviewers of our paper is of the view that "counting the number of non-deterministic transitions doesn't give you any quantitative information about time or throughput", whereas PRISM has a whole MDP benchmark suite that deals mostly with (network) performance: http://www.prismmodelchecker.org/benchmarks/props-mdp.php
Do you agree or disagree with this comment? A good research paper or book supporting or refuting it would be particularly appreciated.
Relevant answer
Answer
Great. Thanks a lot for your detailed answers and resolving the confusion. 
  • asked a question related to Markov Processes
Question
5 answers
Pavement condition deteriorates over time. Pavement maintenance decisions depend on the condition of the pavements and the availability of funds. Predicting the future condition of pavements is necessary to manage pavement rehabilitation.
Relevant answer
Answer
Thanks, Professor Pavlov. I think both suggestions are valuable; however, Professor Reza's seems to relate more closely to the situation. I appreciate the input from both of you. I do recommend that others take a look at the multi-attribute objective value function; I have used it in decision analysis of alternative solutions.
  • asked a question related to Markov Processes
Question
4 answers
There is some confusion in defining the repair rate. In Richard Brown's book, Power Distribution Reliability, it is said that the repair rate is the inverse of the repair duration, which is defined as the mean time needed to repair the system AFTER it fails (page 164). But in the Markov process example (page 204), he uses the repair rate as the rate of transition from the switched states to the normal state (1b to 0 and 2b to 0), not from the failed states to the normal state (1a to 0 and 2a to 0). I have attached a screenshot from his book.
Relevant answer
Answer
Hi Johanno,
Frankly, in the material you have shown there is no statement about the repair rate. Basically, you are right: one should take into account the sum of the two times for the passage from state nb to state na, which (for n = 1) equals
T_1 = 1/sigma_1 + 1/mu_1,
so the rate, according to the author's definition, equals the inverse value, i.e.
repair rate_1 = sigma_1 * mu_1 / (sigma_1 + mu_1).
It is very hard to guess why the sigmas are omitted. Suggestion: ask Prof. Brown :)
Best regards.
  • asked a question related to Markov Processes
Question
8 answers
I am trying to analyse the dynamic changes in the pattern of trade specialization using transition matrices, but I have no idea how to estimate the initial probabilities and the subsequent one-step or n-step transition matrices for four states. My data consist of a revealed comparative advantage index. In particular, I am trying to understand and apply the method used in the paper attached herewith.
Relevant answer
Answer
OK! The easiest way is to assume a uniform probability distribution and estimate the probability as T(y,a,x) = (number of times executing "a" from state "x" resulted in "y") / (total number of times action "a" was executed from state "x"). For example, you apply a 2-newton force to a block of wood placed at position 0 metres and repeat this experiment 100 times. 78 times out of 100 the final position of the block is 3 metres, 10 times it is 3.2 metres, and 12 times it is 2.8 metres. Then the transition probabilities of the three outcomes are 0.78, 0.10 and 0.12, respectively. Note that the data you need are the outcomes of the 100 trials (or however many trials have been carried out).
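A minimal Python sketch of this counting estimator, with the trial data made up to match the example above:

```python
from collections import Counter, defaultdict

def estimate_transitions(trials):
    """trials: list of (state, action, next_state) observations."""
    counts = defaultdict(Counter)
    for x, a, y in trials:
        counts[(x, a)][y] += 1
    # Relative frequency of each outcome per (state, action) pair.
    return {
        (x, a): {y: n / sum(c.values()) for y, n in c.items()}
        for (x, a), c in counts.items()
    }

# 100 pushes of the block from position 0 with a 2 N force:
trials = ([(0, "push2N", 3.0)] * 78 +
          [(0, "push2N", 3.2)] * 10 +
          [(0, "push2N", 2.8)] * 12)

print(estimate_transitions(trials))
# {(0, 'push2N'): {3.0: 0.78, 3.2: 0.1, 2.8: 0.12}}
```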
  • asked a question related to Markov Processes
Question
2 answers
The measurement data are the steering angle and the steering angle velocity, and from these data the driving manoeuvre should be recognized (the manoeuvres are lane change to the left and right, turn left and right, and lane keeping).
Relevant answer
Answer
Hi,
It would be worth reading: "A Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models ".
Regards,
  • asked a question related to Markov Processes
Question
4 answers
Please suggest some good methods.
Relevant answer
Answer
Hi, please check my papers, for example:
"Unsupervised Segmentation of Random Discrete Data Hidden With Switching Noise Distributions",
or Wojciech Pieczynski's papers dealing with this issue (the titles usually contain "nonstationary").
Best,
  • asked a question related to Markov Processes
Question
2 answers
In the perfect integrate-and-fire model of a single neuron there is no leak channel. We set a threshold 'theta' for spike generation; excitatory inputs arrive at rate 'r_e', and each excitatory input increases the membrane potential by a uniform amplitude 'A'. To simplify the question, we set A = 1.
In this case: input rate = r_e, output rate = r_e*A/theta = r_e/theta,
so the generation of output is a gamma process.
BUT, in a real neuron, there should be both excitatory and inhibitory input to keep the network balanced.
So each inhibitory input cancels an excitatory input. Excitatory and inhibitory inputs both follow Poisson distributions, and in the perfect model they cause effects of exactly the same amplitude A, but in opposite directions.
In this case, let's assume that between two spikes there are 2k+m inputs in total: k+m excitatory and k inhibitory, so the net input is m, which raises the membrane potential to threshold.
When computing the transition probability of the new neuron model, we set the probability of an increase to r_e/(r_e+r_i) and the probability of a decrease to r_i/(r_e+r_i). This gives a binomial distribution.
The resulting transition probability is the product of a Poisson probability (to generate 2k+m inputs in total) and a binomial probability (to have k+m excitatory inputs and k inhibitory inputs).
I was wondering whether I could simplify the process by drawing the net input (m spikes) from a single Poisson process, i.e. with a new net rate r_e - r_i,
so that we would not need transition probabilities (a Markov process) to describe the passage from one spike to the next; this would again make the problem a pseudo-excitatory-input-only perfect model.
Is this applicable? My supervisor said I should not cancel an excitatory input with an inhibitory one, because subtracting one Poisson process from another does not yield a Poisson process. But if I cancel randomly (with the right number), is it not again a Poisson process?
Relevant answer
Answer
The process you describe is called a birth-and-death process and has been studied extensively, e.g. in the context of queueing theory. More recently, I believe it has been used by Mike Shadlen, among others, to account for neurons that integrate to a threshold in order to make decisions.
You can't reduce this to a Poisson process whose rate is the difference. While the averages match, the distribution would be substantially narrower, so the firing probabilities would be much lower. As an example, think about a process with r_e = r_i. Your equivalent Poisson process would have a rate of 0 and thus be identically 0, but the model with two separate processes of excitation and inhibition would still have a positive probability of firing.
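To illustrate that last point numerically, here is a minimal Monte-Carlo sketch (the rates, threshold and horizon are made up): with r_e = r_i, the "net-rate" Poisson process never fires, while the birth-and-death model still reaches threshold.

```python
import random

random.seed(0)
r_e = r_i = 5.0        # balanced excitation and inhibition (made-up rates)
theta, T = 10, 1000.0  # threshold and simulation horizon

def fires(r_e, r_i, theta, T):
    """Birth-and-death walk: +1 at rate r_e, -1 at rate r_i; fire at theta."""
    v, t = 0, 0.0
    while True:
        t += random.expovariate(r_e + r_i)  # time to next input event
        if t > T:
            return False
        v += 1 if random.random() < r_e / (r_e + r_i) else -1
        if v >= theta:
            return True

trials = 1000
print(sum(fires(r_e, r_i, theta, T) for _ in range(trials)) / trials)
# close to 1 for long T, whereas a Poisson process with rate
# r_e - r_i = 0 would never fire at all
```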
  • asked a question related to Markov Processes
Question
2 answers
I want to develop a Markov decision process Model where risk is factor in model formulation.
Relevant answer
Answer
Hello Kizito.
Given your question, I think your doubt lies on the double stochastic layer of a HMM.
...The first layer (hidden) is just an ordinary Markov Chain. Its outputs in time (transient states) will be used as probabilities for its emissions in time (visible states).
One possible approach is to use "risk" as probabilities for the emissions according to your initial setup.
Next, I leave some useful links so you can go further.
A good page to understand the basics (also they have some code): http://www.shokhirev.com/nikolai/abc/alg/hmm/hmm.html
A good way to create your models and use then in practice: R packages: "HMM", hmm.discn, HiddenMarkov
  • asked a question related to Markov Processes
Question
8 answers
If Markov and martingale are both properties of Brownian motion, what then is a Wiener process? And how does the Markov property differ from the martingale property?
Relevant answer
Answer
In most sources, Brownian motion and the Wiener process are the same thing. In some sources, however, the Wiener process is the standard Brownian motion, while a general Brownian motion is of the form αW(t) + β.
A Brownian motion, or Wiener process, is both a Markov process and a martingale. These two properties are very different; in fact, they have little in common. A random process X_t, adapted to a filtration F_t, is a martingale with respect to the filtration if the conditional expectation of the increment satisfies E(X_t - X_s | F_s) = 0 for all t > s.
The same process has the Markov property if
E(X_t | F_s) = E(X_t | X_s) for all t > s (given the present, the future distribution is independent of the past).
  • asked a question related to Markov Processes
Question
1 answer
X(t) = A(t)N, where:
A(t) is a binary Markov process;
N is a white Gaussian variable.
Is the product of A(t) and N a spherically invariant process? Thanks
Relevant answer
Answer
Maybe the attached paper should be useful for answering your question.
  • asked a question related to Markov Processes
Question
2 answers
For the M/M/1 queueing-inventory system with NO more than one outstanding replenishment order, Professor Daduna et al. gave deep insight by providing the necessary and sufficient conditions for a product-form solution. In practice, queueing-inventory systems with more than one outstanding replenishment order are common. How can I model them?
Relevant answer
Answer
Here is a quick, but maybe not the best, answer:
Consider, for example, the "classical" inventory system with an (r,Q) policy and one outstanding replenishment: the inventory states are {0, 1, 2, ..., R}.
Here the system can recognize whether an order has been triggered: this is the case when the inventory state is less than the fixed parameter r.
For multiple replenishment orders we can add an additional component to the state: the number of outstanding orders.
In terms of the paper, we will have the new environment states
K := {(k1, k2) | k1 = number of outstanding orders, k2 = number of items in the inventory}.
The matrix V can be defined in such a way that the system keeps trying to replenish as long as k1 stays positive. After each replenishment, the inventory level changes and k1 is decremented.
  • asked a question related to Markov Processes
Question
7 answers
Hi, I am doing research on ad hoc networks and want to analyse the behaviour of nodes. I have therefore planned to use a Markov chain to model the states of the nodes. If anyone is doing related research, please share your comments.
Relevant answer
Answer
If it is node classification, you may use MATLAB.
  • asked a question related to Markov Processes
Question
1 answer
Do you think it is a good or bad idea to use a partially observable Markov decision process (POMDP) planner instead of a plan library in the belief-desire-intention (BDI) architecture? The advantage of a POMDP planner could be that the generated plans apply in more situations; the disadvantage is the complexity of POMDP planning. It would be nice to allow the BDI architecture to deal with partially observable domains.
Relevant answer
Answer
The proceedings of the Adaptive Agents and Multi-Agent Systems conference regularly include articles on this subject. For instance, see: http://link.springer.com/chapter/10.1007/978-3-540-32274-0_17
  • asked a question related to Markov Processes
Question
10 answers
Otherwise, here is my specific problem (I would also be interested in collaborating):
Is there any way to represent a probability distribution over a set of exclusive hypotheses (say, a set of classes or system states w_1, w_2, ..., w_K such that
p(w_1) + p(w_2) + ... + p(w_K) = 1) by a single parameter, in a way that preserves continuity? This is possible via some parametric functions in the case K = 2. Is it possible for higher values of K?
Finding a continuous bijection between R and [0,1]^p would solve the problem.
Relevant answer
Answer
It seems to me you are looking for space-filling curves; they are indeed continuous.
The problem when using them for proximity queries (which seems to be what you are aiming for) is that two points close on the curve are indeed close in the representation space (continuity...), but the reverse is not true: because of the space-filling behaviour the curve "makes many turns", and two points close in the representation space can be mapped very far apart on the curve (just look at the figures; it is obvious).
Space-filling curves have nevertheless been used for proximity queries (finding approximate nearest neighbours of a given point, for instance, or testing whether two points are close in representation space), but at the cost of using many different space-filling curves simultaneously (see the attached reference).
For a commonly used space-filling curve, see the Z-curve.
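As a concrete illustration of the Z-curve idea, here is a minimal sketch of 2D Morton (Z-curve) encoding; the 16-bit resolution is an arbitrary choice, and mapping a probability vector onto grid coordinates is a separate step left out here:

```python
def morton_encode(x: int, y: int, bits: int = 16) -> int:
    """Interleave the bits of x and y to get the Z-curve index."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)
        z |= ((y >> i) & 1) << (2 * i + 1)
    return z

# Two points close in the plane usually get close Z indices...
print(morton_encode(3, 4), morton_encode(3, 5))   # 37, 39
# ...but not always: crossing a major cell boundary jumps far on the curve.
print(morton_encode(7, 7), morton_encode(8, 8))   # 63, 192
```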
  • asked a question related to Markov Processes
Question
13 answers
I'm looking for code to estimate time-varying transition probabilities in Markov-switching GARCH models for an empirical study.
Relevant answer
Answer
Well, I have programmed both a Markov-switching GARCH and an MS model with time-varying transition probabilities, though not at the same time. I assume that in your program you have a coefficient for each of the transition probabilities p_11 and p_22 (with p_12 = 1 - p_11 and p_21 = 1 - p_22).
Replace each of them by a cumulative distribution function (e.g. of the normal distribution): p_11 = cdf(a + b_1*F_1 + ... + b_n*F_n), where F_1, ..., F_n are the n macroeconomic factors.
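A minimal sketch of that link function, assuming SciPy is available; the intercept, loadings and factor values are made up:

```python
import numpy as np
from scipy.stats import norm

def tvp(a, b, F):
    """Time-varying transition probability via a probit link."""
    return norm.cdf(a + np.dot(b, F))

a, b = 1.2, np.array([0.5, -0.3])  # made-up intercept and factor loadings
F_t = np.array([0.8, 1.1])         # macro factors observed at time t

p11 = tvp(a, b, F_t)
p12 = 1.0 - p11                    # rows of the transition matrix sum to 1
print(p11, p12)
```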
  • asked a question related to Markov Processes
Question
7 answers
I want to know whether a Markov process far from equilibrium corresponds to a non-equilibrium thermodynamic process, or whether the two merely have something in common.
Relevant answer
Answer
Actually, a time-homogeneous, ergodic Markov chain rigorously converges to a unique probability distribution, which can be fixed, e.g., by the detailed balance condition. If this distribution is chosen to be the Boltzmann distribution, the MC converges to the Gibbs equilibrium measure, i.e. thermal equilibrium. You can also choose this distribution to be something completely different, e.g. a steady-state non-equilibrium distribution corresponding to some physical system. This is how you make the connection to non-equilibrium thermodynamics.
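For illustration, here is a minimal Metropolis sketch (a made-up two-level system) whose acceptance rule enforces detailed balance with respect to the Boltzmann distribution, so the chain relaxes to thermal equilibrium:

```python
import math
import random

random.seed(0)
E = [0.0, 1.0]   # made-up energies of a two-level system
beta = 1.0       # inverse temperature

state, counts = 0, [0, 0]
for _ in range(100_000):
    proposal = 1 - state  # propose the other level
    # Metropolis acceptance: satisfies detailed balance for exp(-beta*E)
    if random.random() < math.exp(-beta * (E[proposal] - E[state])):
        state = proposal
    counts[state] += 1

# Empirical occupation vs Boltzmann weights
Z = sum(math.exp(-beta * e) for e in E)
print([c / sum(counts) for c in counts])
print([math.exp(-beta * e) / Z for e in E])
```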