
# Markov Chains - Science topic

Explore the latest questions and answers in Markov Chains, and find Markov Chains experts.
Questions related to Markov Chains
• asked a question related to Markov Chains
Question
Hey members, I'm running quantile regression with panel data using Stata, and I find that there are two options:
1- Robust quantile regression for panel data: standard, using (qregpd)
2- Robust quantile regression for panel data with MCMC: using adaptive Markov chain Monte Carlo
Can anyone please explain the use of MCMC? How can I analyse the output of robust quantile regression for panel data with MCMC? Thanks
• asked a question related to Markov Chains
Question
This is a continuous-time, continuous-space Markov chain. If possible I would like to do this in the R programming language. I have attached a sample data file named NDP.
Thank you.
A similar approach to the one Dr Cheng suggested is discussed in the book shown in the attached screenshot, available for download from the Z-Library. Best wishes, David Booth
• asked a question related to Markov Chains
Question
The Shannon entropy rate can be calculated for a hidden Markov model (HMM), but is it possible to do the same for a Markov chain?
If it's finite, ergodic, and stationary, then yes: a Markov chain has an entropy rate, the average information gained in one step. If H(i) is the entropy associated with all transitions outward from state i taken in one step, then the Markov chain entropy rate is Sum over i of P(i)H(i), where P(i) is the stationary probability of state i.
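The formula in the answer above can be sketched numerically. A minimal Python example, with an invented two-state chain (the matrix values are made up for illustration):

```python
import numpy as np

# Invented two-state transition matrix; rows sum to 1.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Stationary distribution pi: left eigenvector of P for eigenvalue 1, normalised.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
pi /= pi.sum()

# H(i): entropy (in bits) of the transitions out of state i.
safe_P = np.where(P > 0, P, 1.0)          # log2(1) = 0, so zero entries drop out
H = -(P * np.log2(safe_P)).sum(axis=1)

# Entropy rate = sum_i pi(i) * H(i).
rate = float(pi @ H)
```

For this matrix the stationary distribution is (5/6, 1/6), so the rate mixes a low-entropy state with a fair-coin state.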
• asked a question related to Markov Chains
Question
Dear Researchers,
Is there any tool available for BDMP? If so, kindly give details of how to get it.
The module TREE of the GRIF software suite is also based on Boolean-logic driven Markov processes:
- logic provided by a fault tree
- leaves modelled by small Markov processes
A free demo version is available at https://grif.totalenergies.com/en/grif.
A description of this kind of BDMP is given in:
Best regards
• asked a question related to Markov Chains
Question
Let me find an expert to discuss the possibilities of using the Markov chain in fuel price modelling.
• asked a question related to Markov Chains
Question
I am familiar with the concept of stochastic ordering for two random variables and how we can say whether a Markov matrix is stochastically monotone. What I'm interested in is whether there is a concept for ranking two separate Markov matrices.
To illustrate, suppose we have two stochastically monotone Markov matrices A and B which preserve the ordering x≿y. Under what circumstances (if any) can we say that matrix A is preferred to matrix B in stochastic order?
Note: The definitions I am using are from this slide deck: http://polaris.imag.fr/jean-marc.vincent/index.html/Slides/asmta09.pdf
It depends on what you mean by two different matrices. If the two matrices are derived from two different data sets, then no, the comparison is completely meaningless. If your matrices use the same data but different sets of model parameters, then of course you can compare them, since this is simply a test of the fit of one set of parameters vs. another.
• asked a question related to Markov Chains
Question
Dear all,
I am looking for any helpful resources on Markov chain Monte Carlo simulation. A PDF, book, Stata do-file or R script would be a great help: any starting point where I can learn MCMC quickly.
I would be really happy if someone could share a Stata do-file or R scripts for Markov chain Monte Carlo.
Thank you very much
Not about MCMC per se, but worth looking: https://czekster.github.io/markov/
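Not an R script or Stata do-file, but the core idea of MCMC fits in a few lines. A minimal random-walk Metropolis sampler in Python, with an invented target (a standard normal) and step size, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def log_target(x):
    # Log-density of the target up to a constant; here a standard normal.
    return -0.5 * x * x

# Random-walk Metropolis: propose x + N(0, step^2), accept with prob min(1, ratio).
x, step, samples = 0.0, 1.0, []
for _ in range(20000):
    prop = x + step * rng.standard_normal()
    if np.log(rng.random()) < log_target(prop) - log_target(x):
        x = prop
    samples.append(x)

samples = np.array(samples[5000:])  # discard burn-in
```

The retained draws should have mean near 0 and standard deviation near 1; the same accept/reject skeleton underlies the samplers inside Stata and R packages.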
• asked a question related to Markov Chains
Question
I have 30 years of export data and I have to do a Markov chain analysis.
It depends on the sort of data. I see no reason why you could not put all 30 years into a matrix and do an analysis on them. It would be simplest to start with a one-step Markov chain, where the current year depends only on the last year, the last year only on the next-to-last, and so on over all thirty years.
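The count-based estimate a one-step chain needs can be sketched as follows; the three-state yearly sequence here is invented for illustration:

```python
import numpy as np

# Hypothetical yearly export states, e.g. 0 = "down", 1 = "stable", 2 = "up".
states = [0, 1, 1, 2, 2, 1, 0, 1, 2, 2, 2, 1, 1, 0, 1]

n = 3
counts = np.zeros((n, n))
for a, b in zip(states, states[1:]):   # count one-step transitions
    counts[a, b] += 1

# Maximum-likelihood estimate: normalise each row of counts.
row_sums = counts.sum(axis=1, keepdims=True)
P = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
```

Each row of P is then the estimated distribution of next year's state given this year's.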
• asked a question related to Markov Chains
Question
Hello everybody
According to what is mentioned in the attached snapshot from the appendix of this article:
My question is, if we already have the transition probability matrix P, how can we calculate numerically v̂ = α0(I − P)^(−1), as the limit of the recurrence v(t+1) = v(t)P + α0?
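Both routes can be sketched numerically. The recurrence converges when the spectral radius of P is below 1 (e.g. a substochastic P); the matrix and α0 below are invented for illustration:

```python
import numpy as np

# Invented substochastic matrix (spectral radius < 1) and initial vector.
P = np.array([[0.5, 0.2],
              [0.1, 0.6]])
alpha0 = np.array([1.0, 2.0])

# Direct route: solve v_hat (I - P) = alpha0, i.e. (I - P)^T v_hat^T = alpha0^T.
v_direct = np.linalg.solve((np.eye(2) - P).T, alpha0)

# Same limit via the recurrence v(t+1) = v(t) P + alpha0.
v = np.zeros(2)
for _ in range(200):
    v = v @ P + alpha0
```

After a few hundred iterations the recurrence agrees with the direct solve to machine precision.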
I have got the full text of your book. It looks great.
Regards,
Ahmed
• asked a question related to Markov Chains
Question
i) What kind of objective may be achieved by applying Markov chain analysis?
ii) How should the data be arranged?
First, you need to understand the transition probability matrix.
• asked a question related to Markov Chains
Question
I would like to have a deeper insight into the Markov chain, its origin, and its application in information theory, machine learning and automata theory.
Yes: whilst a Markov chain is a finite state machine, it is distinguished by its transitions being stochastic, i.e. random, and described by probabilities.
Kind Regards
Qamar Ul Islam
• asked a question related to Markov Chains
Question
So, I am wanting to take network data from various time periods of the same network and map it in a way that will allow for clear representation of each time period. For example, taking network data for a 2013-2015 time period, a 2015-2017 time period and then a 2020-2021 time period. Importantly, the amount of data will be different and the time periods won't strictly be even (i.e. there might be a 1 year gap between time period a and b and then 5 years between b and c). Also, I think it is important to highlight that these would be 'time-slices' of the same overall network, rather than distinct, unrelated networks.
I am hoping to present these different time-slices in one or two ways. First, I am wanting to be able to place a series of network maps next to each other based on temporal data to show how the network is changing over time. Second, if possible, I would like to produce a video/animation that shows the network changing over time.
I have been doing lots of reading on possible ways to achieve this type of analysis. I have been looking into using stochastics, specifically Markov models or Stochastic Actor-Oriented Modelling (SAOM), both of which I have seen used for similar projects. The only problem is that my maths is good but needs some work before I could comfortably use these approaches, so if stochastics is the way forward, any suggestions for good tutorials?
I have also been looking into Social Sequence Analysis, specifically the use of Network Methods, as outlined by Cornwell (https://www.cambridge.org/core/books/social-sequence-analysis/network-methods-for-sequence-analysis/FFF842AF37364167E23AD03E50650336) which seems promising.
I feel as though I am reading a lot and getting a bit lost in all of the literature. Any advice would be greatly appreciated!
I thought it would be worth commenting on this in case anyone comes across this question and wants an answer.
After more extended reading I came across temporal network theory and it has solved all of my problems. If you are wanting to do something similar to what I described in my original question then I strongly recommend reading Holme and Saramäki's book Temporal Network Theory (https://www.springer.com/gp/book/9783030234942) and then look at this excellent tutorial about how to create temporal networks in R: https://programminghistorian.org/en/lessons/temporal-network-analysis-with-r or this guide to Teneto if you're a Python person: https://teneto.readthedocs.io/en/latest/what_is_tnt.html
• asked a question related to Markov Chains
Question
Is it possible to analyse 30 years of data with Markov chain analysis using any software? It is usually easy to analyse 10 years of data in the LINGO software, so for 30 years of data, what would the method be, or is there other software?
But Sir, I wanted to ask about the export data: is analysis through a Markov chain possible to see the direction of trade for 30 years of data? Mohamed-Mourad Lafifi
• asked a question related to Markov Chains
Question
Hi everyone. I took a basic course on Markov chains, and know a little about Monte Carlo simulations and methods. But I never got to the part about spreadsheets.
If anyone can direct me to a few non-technical, not-too-hard-to-read books on Monte Carlo simulations I would be grateful.
The following are some interesting books:
- Introducing Monte Carlo Methods with R, by Christian Robert and George Casella (Springer-Verlag New York, 2010)
- The Monte Carlo Simulation Method for System Reliability and Risk Analysis, by Enrico Zio (Springer-Verlag London, 2013)
- Handbook of Monte Carlo Methods (Wiley Series in Probability and Statistics), by Dirk P. Kroese, Thomas Taimre and Zdravko I. Botev (Wiley, 2011)
- Essentials of Monte Carlo Simulation: Statistical Methods for Building Simulation Models, by Nick T. Thomopoulos (Springer-Verlag New York, 2013)
- Simulation and the Monte Carlo Method, by Reuven Y. Rubinstein and Dirk P. Kroese (Wiley, 2017)
Best of luck
• asked a question related to Markov Chains
Question
I have data for a seven-year monthly time series and want to apply Markov chains.
You could probably start with an easier method: exponential smoothing.
• asked a question related to Markov Chains
Question
I am looking to apply supervised machine learning analytics to a dataset that shows the location-based movement of patients in a house. The dataset needs to show the indoor movements of patients, and it may have numerical or categorical values.
An example of this is the demonstration of the time the person woke up, any chores/activities the patient did, the time the patient went to sleep, etc. I would prefer the dataset to be labeled defining the abnormal and normal occasions.
Ideally, I will be applying supervised machine learning algorithms to the patient's movements. For future work unsupervised machine learning analytics can be applied.
Should you know of any of the specific types of datasets, please let me know.
Thanks
Hi Donna, thanks for this. However, I have been looking for the data collected from these types of sensors. There are usually motion detection sensors applied to patients' houses to collect information based on when there is motion detected in each room.
This type of data is usually found on Kaggle or UCI machine learning websites, but there is not anything similar available.
• asked a question related to Markov Chains
Question
Could anyone provide me with an example or some references?
According to Wikipedia [1,2], the FGR is a formula that describes the probability of a transition per unit time, from one energy eigenstate of a quantum system to a group of energy eigenstates in a continuum, as a result of a weak time-independent perturbation.
Wikipedia and Hyperphysics links [1,2] state that FGR is applicable when the final state is discrete if there is some decoherence in the process, like relaxation or collision of the atoms, in which case the density of states is replaced by the reciprocal of the decoherence bandwidth. Otherwise, the final state should be a continuum.
Topics covered by the FGR are diverse: optical transitions, scattering cross-sections, decay rates in nuclear reactions, and impurity scattering in solids.
Although the rule is named after Prof. E. Fermi, it is historically important to point out that Profs. P. Dirac and W. Heitler [1] contributed most to its development.
In this RG thread, there is a brief summary of how to calculate the probabilities:
• asked a question related to Markov Chains
Question
When a Markov matrix is multiplied by an input to get an output, how are the inputs defined and how are they different from the matrix coefficients? I want to predict the presence of something based on previous data.
You can input any one of the states; the matrix itself is independent of your input.
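In other words, the "input" is a probability distribution (row vector) over the states, while the matrix coefficients are the fixed transition probabilities. A quick sketch with an invented two-state matrix:

```python
import numpy as np

# Invented transition matrix: rows index the current state, columns the next.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# Input: where the system is now. A known state is a one-hot row vector...
x = np.array([1.0, 0.0])

# ...and the prediction for the next step is x @ P (still a distribution).
x_next = x @ P
```

Starting from state 0, the prediction is simply the first row of P.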
• asked a question related to Markov Chains
Question
Dear researchers,
Pardon me for being a novice here. In the attached image, eq. 3.1 represents the transition matrix (it's pretty clear). I am not able to comprehend eq. 3.2, alpha*P = alpha, or the further equations.
I have the P matrix with me as an outcome of one of my projects. How should I calculate alpha and the elements a1, a2, a3, ... etc.?
I would request some of your valuable guidance and help.
Hi Mritunjay,
One of the properties of Markov chains is that, if approximated properly, the state transition matrix becomes stationary: in the limit, the state transition probability matrix (P) tends to a stationary matrix (A) which does not change as a function of time. This A is essentially the "GLOBAL" state transition matrix, while P is a "LOCAL" version of the same.
This can be explained as follows: for a starting probability vector of states (Pi_0), you get the subsequent probability vector as Pi_1 = Pi_0*P. Similarly, Pi_2 = Pi_0*P*P, and Pi_n = Pi_0*P^n = Pi_0*A = Alpha.
Now for any subsequent state, Pi_(n+1) = Pi_0*P^n*P. The last two terms (P^n*P = P^(n+1)) equal A, as that is the limit. So Pi_(n+1) = Pi_0*A = Pi_n in the limit.
Explaining equation 3.2: Alpha*P = Pi_0*P^n*P = Pi_0*P^(n+1) = Pi_0*A = Alpha.
Mathematically, in the example shown, the values of a_1, ..., a_n are computed analytically. In a real situation this is not done. Instead, taking sufficient data (length) when modelling P ensures a sort of statistical stationarity, i.e. the P is actually A (with very small variations).
Hopefully this helps, and you can follow the subsequent discussion.
You can read my paper, "Online Discovery and Classification of Operational Regimes from an Ensemble of Time Series Data", which has a description of how Markov modelling is actually used for real data.
Cheers
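The limit described in the answer above is easy to check numerically: raising P to a high power gives a matrix whose rows are all equal to alpha. A sketch with an invented two-state matrix:

```python
import numpy as np

# Invented regular transition matrix.
P = np.array([[0.8, 0.2],
              [0.3, 0.7]])

# Power iteration: P^n converges to a matrix A whose rows all equal alpha.
A = np.linalg.matrix_power(P, 100)
alpha = A[0]
```

For this P, alpha = (0.6, 0.4), and one can verify alpha @ P = alpha, which is exactly eq. 3.2.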
• asked a question related to Markov Chains
Question
I am trying to estimate the most likely number of K using the MCMC (Markov chain Monte Carlo inference of clusters from genotype data) function in the Geneland R package by Gilles Guillot et al. I am a little bit confused when it comes to the varnpop and freq.model arguments.
In the package reference manual https://cran.r-project.org/web/packages/Geneland/Geneland.pdf  one may read:
varnpop = TRUE *should not* be used in conjunction with freq.model = "Correlated"
On the other hand, another manual http://www2.imm.dtu.dk/~gigu/Geneland/Geneland-Doc.pdf recommends an example of MCMC usage which looks like this:
MCMC(coordinates=coord, geno.dip.codom=geno, varnpop=TRUE, npopmax=10, spatial=TRUE, freq.model="Correlated", nit=100000, thinning=100, path.mcmc="./")
I am not sure how to reconcile these two contradictory pieces of information, any suggestions?
Dear all, I am experiencing the same problem with the correlated allele frequencies model in Geneland: either the MCMC does not converge, or it converges with too many, not credible, clusters. However, the uncorrelated frequencies model seems to perform well...
Has there been any advance on this question since 2018? Cheers.
• asked a question related to Markov Chains
Question
Hello,
Currently, I am dealing with synthetic generation of wind speed data using a Markov chain. After forming the transition and cumulative transition matrices, it is said that uniform random numbers are generated between 0 and 1. These random values are compared with the elements of the cumulative probability transition matrix, and then synthetic data are generated using an equation. But I don't understand the actual calculation steps. Can someone show me the detailed steps with an example? It would be really helpful if anyone could give me some ideas regarding the calculation steps.
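The sampling step described (comparing a uniform random number with a row of the cumulative transition matrix) can be sketched as follows; the three-state matrix is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented 3-state transition matrix (e.g. low / medium / high wind-speed bins).
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])
C = np.cumsum(P, axis=1)  # cumulative transition matrix; each row ends at 1

state, chain = 0, [0]
for _ in range(10000):
    u = rng.random()  # uniform random number in [0, 1)
    # Next state = first column whose cumulative probability exceeds u
    # (min() guards against float round-off in the last cumulative entry).
    state = min(int(np.searchsorted(C[state], u, side="right")), len(P) - 1)
    chain.append(state)

freq = np.bincount(chain, minlength=3) / len(chain)
```

Over a long run the empirical state frequencies approach the chain's stationary distribution; mapping each state index back to a wind-speed bin (or bin midpoint) gives the synthetic series.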
Dear Zhao Guoxi,
A transition probability matrix P is defined to be a doubly stochastic matrix if each of its columns sums to 1. That is, not only does each row sum to 1 because P is a stochastic matrix; each column also sums to 1. Thus, for every column j of a doubly stochastic matrix, we have ∑_i p_ij = 1…
Best regards
• asked a question related to Markov Chains
Question
I would like to predict land use demand using the Markov Chain (MC) model.
Could somebody provide a simple approach, like software that can make a prediction with the MC model, or any other relatively simple way to manage the MC model?
Batu
Try the "Land Change Modeler" software to predict land use demand using the Markov chain.
• asked a question related to Markov Chains
Question
Hello,
In TreeAge, each individual Markov node can be selected one at a time and Markov cohort reports can be generated. From these reports at individual nodes, a survival curve can be generated.
In our model, a decision tree with two arms eventually results in multiple Markov models for each arm (ie, each arm ends in about 6 Markov nodes each, for a total of 12 Markov nodes in the whole model). Notably, the Markov chains within each arm are all clones of a single Markov chain (ie, they all have the same health states). Is there a way to generate a survival curve for an arm as a whole, instead of individual Markov nodes within the arms, thereby allowing us to compare the arms?
Thanks,
David
Not an answer to your query. But if you haven't found the solution, wouldn't it be sufficient to compare the survival curves of two nodes from each arm, which are at the same level?
• asked a question related to Markov Chains
Question
Biological behaviors require intricate coordination between different components, including sensorimotor pathways, muscle groups, and the skeleton. Some behaviors of a specific organism can be identified. If their locomotion patterns are state-dependent, it might be suitable to use a Markov model to simulate them. I am wondering what is worth noting during Markov modeling of locomotion.
And are there any other ideas for modeling biological movements?
Suresh Babu Thanks so much for your kind suggestions. I shall check the relevant papers.
As for the modeling strategies for locomotion patterns, do you have any good advice alternatively?
• asked a question related to Markov Chains
Question
Now I have a task to classify the imbalanced time series datasets using ML classifiers, such as Logistic Regression, Decision Tree, SVM, and KNN. I am not allowed to use the Deep Learning tools, such as CNN and RNN. The time series data is measurements of the Force-Displacement Curve from a production line. The dataset is extremely imbalanced (minority class: majority class= 1:100). So I want to use Data Augmentation techniques to enlarge the size of the minority class, in order to optimize the performance of the classifiers and avoid overfitting.
I have tried many tools in feature space, such as oversampling, undersampling, SMOTE, ADASYN and so on, but their performance is not ideal. I wish to generate synthetic time series data using data augmentation techniques based on the initial data. Similar to image recognition, I have also tried the existing methods that have been applied to images, such as scaling, rotation, and jittering, but they are also not very useful.
So I want to ask if there are any other DA techniques to generate synthetic time series data. I have only some initial ideas, such as using DTW, the Fourier transform, Markov chains and so on, but no papers or code about applying them.
Can anyone help me? I really appreciate your help. Thank you!
one simple way is to use class weights in training
• asked a question related to Markov Chains
Question
Model game as a discrete-time Markov chain
A Markov chain is a mathematical system that experiences transitions from one state to another according to certain probabilistic rules. The defining characteristic of a Markov chain is that no matter how the process arrived at its present state, the possible future states are fixed. So yes, model games are discrete Markov chains :)
• asked a question related to Markov Chains
Question
For stochastic adaptive systems, it is important to consider local ergodicity, Markov chains, local Monte Carlo simulation and the maximum entropy method to be more effective.
Under the conditions of local controllability and local ergodicity, the possibility of local stabilizability follows for the linearized case.
• asked a question related to Markov Chains
Question
I have more than 400 different events that occur over two years; some of them can occur 4000 times and others no more than 50 times. These events are not equally spaced in time, e.g., one at 15:00:02, another at 15:10:45 and another at 15:45:56. I cannot ensure that these events are independent; they may be related. I want to analyse these events and try to find a pattern.
Type of data:
{timestamp1, event A (string value)}, {timestamp2, event B (string value)} -> Between timestamp1 and timestamp2 there is no event, and the gaps are not equal. Event A could influence event B or not.
I would like to know what type of methodology I can apply. I have been reading about SAX time series, Markov models, hidden Markov models, DTW (dynamic time warping), time warping for discrete events, renewal process models and continuous-time Markov processes. However, I think these algorithms don't fit my problem.
I hope that someone can help me. Thanks in advance.
If the data differ in distribution, a spectral analysis of the data can be adopted using the spectral density function, a consistent estimate of the spectral density function, or the periodogram.
• asked a question related to Markov Chains
Question
In my project, I deal with stochastic data via a Markov model over a short study period, hence I will run a simulation to reach a sufficient sample size. My question is how I can do that, and what is the best way to estimate the transition probability matrix? Some of the papers use a norm, but I do not understand how they did that.
The simulation of a Markov chain is described, for example, in my book "Probability and Stochastic Modeling", pp. 135-136.
• asked a question related to Markov Chains
Question
I want to use the concept of the Markov chain in vehicular ad-hoc networks. Which scenarios of a vehicular ad-hoc network can be modelled with the help of the Markov chain process?
Both can be tried
Start discrete first.
In digital networks, time can be taken as discrete.
• asked a question related to Markov Chains
Question
When performing Markov chain experiments, the transition probabilities must be obtained to model the problem. What is the best way of calculating these probabilities?
Thanks to all
• asked a question related to Markov Chains
Question
Markov chain analysis is possible in which statistical software?
Thanks
My packages known as SHARPE and SPNP can do that for you. Not just steady state but transient, cumulative transient and derivatives of these can be computed for very large Markov chains. Also Markov chains can be generated starting with a high level description.
• asked a question related to Markov Chains
Question
This map was produced by the CA-Markov module to predict the future land use map of a study area. The input land use maps were from 2004 and 2008, with 9 land use types. Three of them (housing, commercial and industrial) had 10 classes of probability of growth (created using the weight-of-evidence method), with class 1 the least probability and class 10 the highest probability of growth. The other 6 classes were just extracted from the 2008 land use map (as they were). So the Markov chain transitional area matrix, the basis land use map (the 2008 land use map) and a raster group file of 9 classes (as explained) were the other input files, and the module was asked to project 4 years ahead (to 2012).
Why is the result like this?
Thank you
I am also facing the same problem. How can I resolve that error?
• asked a question related to Markov Chains
Question
Does anyone have suggestions for books on Markov chains, possibly covering topics including matrix theory, classification of states, main properties of absorbing, regular and ergodic finite Markov chains?
Hi,
I suggest you see the links and attached file on this topic.
Lectures on finite Markov chains - IME-USP
General mixing time bounds for finite Markov chains via the absolute ...
Finite Markov Chains – Quantitative Economics
A brief introduction to Markov chains – Towards Data Science
Best regards
• asked a question related to Markov Chains
Question
I am going to develop a queueing model in which riders and drivers arrive with exponentially distributed inter-arrival times.
All the riders and drivers arriving in the system will wait for some amount of time until being matched.
The matching process will pair one rider with one driver and takes place every Δt unit of time (i.e., Δt, 2Δt, 3Δt, ⋯). Whichever side outnumbers the other, its exceeding portion will remain in the queue for the next round of matching.
The service follows the first come first served principle, and how they are matched in particular is not in the scope of this problem and will not affect the queue modelling.
I tried to formulate it as a double-ended queue, where the state indicates the excess number in the system.
However, this formulation doesn't incorporate the factor Δt, so it is not in a batch-service fashion. I have no clue how to formulate this Δt (somewhat like a buffer) into the model.
Can you please explain the process of matching in more detail? The enclosed picture is the standard random walk with two competing, exponentially distributed, independent waiting times: if type 1 wins, we go one step to the right; in the opposite case, we go to the left. No service is sketched. Thus, as precise a description of the service as possible is needed. Now, the main doubt is caused by the lack of an interpretation of negative positions: isn't it the difference between the numbers of arrived riders and drivers?
Also, regarding these words:
GQ: "The matching process will pair one rider with one driver and takes place every Δt unit of time (i.e., Δt, 2Δt, 3Δt, ⋯). Whichever side outnumbers the other, its exceeding portion will remain in the queue for the next round of matching."
it is not explained what "outnumbers" means. My English is too weak to understand the context. Can this be explained in simpler words, like this:
The state is characterized by the current values of two numbers: the riders and the drivers in the waiting room. At the instant of matching, k times Δt, both numbers decrease by the minimum of the two (hence one becomes zero)...
Note that this is a kind of guess at what you meant!
Joachim
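A discrete-event sketch of the matching mechanism as read above (riders and drivers arrive as Poisson streams; every Δt the two queues are reduced by the minimum of their lengths). All rates and the Δt value are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

lam_r, lam_d = 1.0, 0.8   # invented arrival rates (riders, drivers) per unit time
dt, horizon = 2.0, 10000.0

riders = drivers = matched = 0
for _ in range(int(horizon / dt)):
    # Poisson arrivals accumulated over one matching interval of length dt.
    riders += rng.poisson(lam_r * dt)
    drivers += rng.poisson(lam_d * dt)
    # Batch matching: pair off min(riders, drivers); the surplus side waits.
    m = min(riders, drivers)
    matched += m
    riders -= m
    drivers -= m
```

With lam_r > lam_d the driver side is the bottleneck, so the matched count approaches lam_d * horizon while the rider queue grows at roughly rate lam_r - lam_d.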
• asked a question related to Markov Chains
Question
Hello
I have some doubts about Markov chains used in predictions of land use change. If there is any specialist who could guide me, I would be very grateful.
• asked a question related to Markov Chains
Question
Hi dear all,
I have a problem with Markov Chain Modeling.
My problem is described as below:
I want to analyze the response time for three types of customers in a bank. We have three priorities, such that customers from group one always have higher priority than the other types. Therefore, this problem can be modeled as a 3D Markov chain with an infinite state space.
Does anybody know how I can analyze a 3D Markov chain with an infinite state space? I would really appreciate it if you could introduce any methods (exact or approximate) to solve this kind of problem.
Thank you,
• asked a question related to Markov Chains
Question
Hello all ,
I have four stochastic matrices, and I want to estimate (or measure) the correlation between them.
Can the chi-squared test for independence be used? Tests based on distance, or what kind of tests?
Thanks
Thank you very much Joachim Domsta and Paul Marcoux.
Yes, in fact I have data sets from three financial markets, on which I ran an ARDL (cointegration approach) and found a long-term relationship among them. I then tried to factorize the data set into four states for each variable, which gave a new sequential data set; I fitted a Markov chain model to each new set, and finally three stochastic matrices were estimated. So, following your suggestion, dear Joachim Domsta and Paul Marcoux: is a chi-squared test sufficient to estimate independence (correlation) between these matrices?
Thanks
• asked a question related to Markov Chains
Question
I would like to find the state transition matrix for a set of data. My goal is to represent the behaviour captured by the data using Markov chains. Is there any good tutorial/MATLAB code that can help me with that?
I also suggest that you look at PyEmma:
or MSMtools:
• asked a question related to Markov Chains
Question
I have 20 rasters with 5 classes and want to calculate a transition matrix for my agent-based model.
I prefer using R.
Hi Aydin,
I recommend this tutorial to quantify Landcover's changes in space and time and the use of markov chains: http://benbestphd.com/landscape-ecology-labs/lab2.html
Also, for this purpose you can use the lulcc package : https://spiral.imperial.ac.uk/bitstream/10044/1/41199/3/gmd-8-3215-2015.pdf
And this paper:
• asked a question related to Markov Chains
Question
Without data collection, how are the properties of a Markov process determined using a known probability distribution?
The general solution at infinity is either an ergodic process (mixing across all cells), mixing across sets of subcells, or absorption (the process halts at a finite time). Each of these is due to the characteristics of the Chapman-Kolmogorov transition matrix in an obvious way.
• asked a question related to Markov Chains
Question
If anyone can help me it would be extremely appreciated !
Dear Dr Berdouzi,
I thought the following manuscript might help you. A pdf-file can be found on the internet: https://arxiv.org/pdf/1703.04459.pdf
It is by Damm et al. (2017) and titled "Numerical solution of Lyapunov equations related to Markov jump linear systems".
It also contains a Matlab script (in chapter 3.4: "An implementation" on page 5).
Yours sincerely,
Rainer
• asked a question related to Markov Chains
Question
We are trying to solve some very specific algorithm development issues pertaining to Markov models for large state spaces namely:
The process of building a Markov chain with covariates (parameterised model) and creating a logistic link from multidimensional data.
If anyone has knowledge or experience of building these models, or has access to sample code (ideally in R), we have specific issues with which we would appreciate your assistance.
Dear Doctor,
You may look at the CRAN page. For example,
• asked a question related to Markov Chains
Question
I have been working with the R package "tRophicPosition" which uses bayesian probability to calculate trophic positions of consumers through stable isotope data.
I've used the full two-baselines model, and when comparing the sexes of the same species of consumer, according to the Mann-Whitney test there is no difference (p > 0.05) in d15N or d13C values, nor in TP calculated with the Post (2002) equation. With that being said, I supposed there shouldn't be differences in the TP calculated by the model. However, when I used the model to compare, I used this function in the script:
compareTwoDistributions(Female.TP, Male.TP, test = ">")
[1] 0.557
As I understand from the manual (attached), there is a probability of 56% that the females' TP is higher than the males'.
I was not very pleased with this result, since I believe there are no differences in the SIA values (i.e. the SIBER ellipses almost fully overlap and have the same size). I decided to run the test again, BUT now using the female dataset for both groups, males and females, expecting the model to tell me both sexes had the same TP, but I obtained this:
compareTwoDistributions(Female.TP, Male.TP, test = ">")
[1] 0.45
So again there are differences, even though it is the same dataset? Can Markov chain Monte Carlo simulations make that difference even using the 4-chain model (4000 iterations)?
I then used Mann-Whitney U-test to compare Male.TP to Female.TP obtained by the model and gave me p values < 0.05 for both scenarios (same and real dataset).
Am I doing something wrong? I'm new to Bayesian statistics, I'd like to know if anyone here has the same issue.
I also attached two graphs of when I used the same dataset for both groups
Thank you!
Thank you very much for your answer, Edgar. Do you think you could have a look at the manual I attached? In there they get a p value of 0.96 or so, and they explain that it is very likely that one sample > the other sample.
I did try using the same data in order to get no difference, or something like that. I used the same dataset and it gave me p = 0.443.
• asked a question related to Markov Chains
Question
I need to simulate the occurrence and amount of precipitation using a Markov chain and a Gamma distribution. Can anyone help me with a MATLAB script?
I've read a lot about Markov chains and the Gamma distribution for rainfall simulation, and I've written such a script in MATLAB, but I am not sure about it, so I want to compare my script against a correct one.
Could you do this for me and check my script?
• asked a question related to Markov Chains
Question
I have a system that is a Markov chain. It is in principle possible to calculate the transition matrix, given a set of parameters of the system. The calculation may take long, but it can be done for some reasonable parameter sets. Sometimes the transition matrix I get is too big to be managed by my computer (it requires too much memory, though I didn't try saving it to my hard drive). Are there any methods of algorithmically manipulating large Markov chains "by pieces", so to speak? Or perhaps there are other methods of calculating various properties of systems that correspond to very large Markov chains?
In my case, I get absorbing Markov chains and I would like to calculate properties such as the probability of absorption, the expected number of steps to absorption, etc. Of course, there are appropriate formulae, but they are useless if you cannot carry out the computation.
You may consider SPNP like software packages to automatically generate and solve large Markov chains. Alternatively use hierarchical/fixed-point iterative methods via software package such as SHARPE. Many real-life large problems are solved using these methods in my latest book: Reliability and Availability Modeling by Cambridge University Press, 2017.
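For the absorbing-chain quantities mentioned in the question, one practical "by pieces" approach is to keep the transition matrix sparse and solve linear systems with (I − Q) rather than ever forming the dense fundamental matrix N = (I − Q)⁻¹. A minimal sketch with SciPy's sparse solver (the function name `absorption_stats` is illustrative, not from any of the tools named above):

```python
import numpy as np
from scipy.sparse import csr_matrix, identity
from scipy.sparse.linalg import spsolve

def absorption_stats(Q, r):
    """Given the transient-to-transient block Q (sparse) and a column r
    of transient-to-absorbing probabilities, compute expected steps to
    absorption (t = N*1) and absorption probabilities (b = N*r) by
    solving sparse systems with (I - Q) instead of inverting it."""
    n = Q.shape[0]
    A = (identity(n, format='csr') - Q).tocsc()
    expected_steps = spsolve(A, np.ones(n))
    absorb_prob = spsolve(A, r)
    return expected_steps, absorb_prob

# Toy chain: transient states {0, 1}, one absorbing state.
Q = csr_matrix(np.array([[0.50, 0.25],
                         [0.25, 0.50]]))
r = np.array([0.25, 0.25])   # P(transient state -> absorbing state)
steps, prob = absorption_stats(Q, r)
```

For matrices too large even for a sparse direct solve, iterative methods (e.g. `scipy.sparse.linalg.gmres`) on the same systems avoid any fill-in.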
• asked a question related to Markov Chains
Question
Dear Jayanta,
A Markov chain can be used to generate synthetic series of dry and wet months (days). In wet months (days) an appropriate probability distribution can be used to generate rainfall amounts. To set up a Markov chain model, you firstly need to determine the probability of a wet month (day) given a previous wet month (day), the probability of a wet month (day) given a previous dry month (day), etc.. You then will have four probabilities: wet-wet, wet-dry, dry-wet and dry-dry. A random number generator (between 0 and 1) can be used to generate probabilities, e.g. when the wet-wet probability is 0.6 and the random number generated 0.5, the next month will be wet; when the random number is 0.7, the next month will be dry.
This concept can be extended to different classes of rainfall amounts, e.g. <= 0.1 mm/d (dry), 0.1-2 mm/d, 2-5 mm/d and > 5 mm/d resulting in 4 x 4 transition probabilities. Obviously, the class boundaries and number of classes needed will depend on the climatological conditions (and season) of the study area.
Good luck,
Martijn
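Martijn's two-state scheme can be sketched in a few lines (a minimal sketch; the function name and probability values are illustrative):

```python
import random

def simulate_wet_dry(p_ww, p_dw, n_periods, start_wet=True, seed=42):
    """Two-state (wet/dry) Markov chain generator.
    p_ww = P(wet | previous wet); p_dw = P(wet | previous dry).
    A uniform random number on (0,1) decides each transition,
    exactly as described above."""
    rng = random.Random(seed)
    wet = start_wet
    seq = []
    for _ in range(n_periods):
        p_wet = p_ww if wet else p_dw
        wet = rng.random() < p_wet
        seq.append('W' if wet else 'D')
    return seq

# e.g. wet-wet probability 0.6, dry-wet probability 0.3, 12 months:
months = simulate_wet_dry(0.6, 0.3, 12)
```

For each wet period, a rainfall amount would then be drawn from a fitted distribution (e.g. a Gamma), and the same construction extends to the 4 x 4 matrix over amount classes mentioned above.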
• asked a question related to Markov Chains
Question
I need a reference for using the two methods in one project.
The following article may be useful for you:
"Impacts of climate change on runoffs in East Azerbaijan, Iran" Zarghami et al. 2011.
• asked a question related to Markov Chains
Question
probability of transition ,modeling, Markov chains
• asked a question related to Markov Chains
Question
Modeling, Markov chains
• asked a question related to Markov Chains
Question
I am looking for recent subjects in the area of using Markov chains in queueing models or theory for the thesis of a master student in mathematics.
Mohamed I Riffi
these are real topics that my graduates have done in recent years
• asked a question related to Markov Chains
Question
For instance, you are investigating gaps in 2 independent variables and you expect them to close up soon. How can the Markov chain n-step transition matrix indicate when these gaps are likely to close up?
YES! More precisely, for your purposes you are replacing the original transition probability matrix say P by the one Q with changed exits from the chosen set A subset S . This MAKES the set A absorbing.
Let me provide a very-very simple example: S = {0,1} and
P= [ 1/2, 1/2; 1/2, 1/2] .
Then the first entrance time into 1 from 0 is geometric on 1, 2, 3, ... with ratio 1/2, i.e. Pr{T ≤ t} = 1 − Σ_{n=t+1}^∞ (1/2)^n = 1 − (1/2)^t, t = 0, 1, 2, .... For calculating this distribution according to my suggestion, you build a NEW matrix:
Q = [1/2, 1/2; 0, 1]. Then the t-th power of Q equals
Q^t = [(1/2)^t, 1 − (1/2)^t; 0, 1], t = 0, 1, 2, ....
Thus Pr{X_t = 1 | X_0 = 0} = 1 − (1/2)^t, which agrees with the easily expected result above for Pr{T ≤ t}.
Regards, Joachim
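Joachim's construction is easy to check numerically. A small sketch (the helper name is illustrative):

```python
import numpy as np

# The modified matrix with state 1 made absorbing, as suggested above:
Q = np.array([[0.5, 0.5],
              [0.0, 1.0]])

def first_passage_cdf(Q, start, target, t):
    """Pr{T <= t}: probability of having reached the absorbing state
    `target` within t steps, starting from `start`."""
    return np.linalg.matrix_power(Q, t)[start, target]

# Matches the closed form 1 - (1/2)^t for t = 0, 1, 2, ...:
cdf = [first_passage_cdf(Q, 0, 1, t) for t in range(6)]
```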
• asked a question related to Markov Chains
Question
Most papers/research on cyclicity using Markov chains are from the 1970s to the 1990s; in the recent literature only a few papers deal with this method. To what extent can the use of Markovian processes control or explain cyclothems or coarsening- and fining-upward cycles? Analysis of cyclicity in the recent literature is restricted to astronomical forcing cycles (Milankovitch bands) of known periodicities, using spectral or Fourier analysis. Can we then assume that these are the accepted methods today for assessing cyclicity? What do you think about it?
I still think improved versions of the Markov chain, i.e. the partial-independence or quasi-independence methods, are very useful in analysing cyclicity in a sequence. The only thing that matters is the collection of data; I personally used these methods on borehole data provided by the agencies working on coal reserves. It works very well and one can get results quickly. If it is correlated with Harper's binomial method, then nothing beats its usefulness, so I do not think it is outdated. For your information, there is another method that correlates borehole sequences in space: the cross-association method. One can look at my paper with Prof. Tewari that appeared in Jour. Asian Geology in 2009.
• asked a question related to Markov Chains
Question
I would like to know the probability that a person who inspects the food then eats it, and vice versa. The data are not normally distributed, so I cannot use traditional regression analysis (at least, I don't know how). A Markov chain has been suggested, however I am not sure how to do this. Does anyone have a good source on this? Or an alternative?
many thanks,
thank you all for the replies! and A. Panis, any special tests you could suggest in R program?
• asked a question related to Markov Chains
Question
In "Monte Carlo methods", chapter 7, Hammersley, the convergence condition for monte carlo to solve linear equations Ax=b is that the maximum absolute row sum norm of matrix H=A-I must be less than 1. Other material indicates that the condition is that spectral radius of matrix H should be less than 1. Other material says that matrix A should be diagonally dominant, These conditions are not equivalent as far as I can see; the spectral radius condition appears to be less strict than the other two. Could some one tell me which one is the true convergence condition?
This paper may help:
Ji, Hao, Michael Mascagni, and Yaohang Li. "Convergence Analysis of Markov Chain Monte Carlo Linear Solvers Using Ulam--von Neumann Algorithm." SIAM Journal on Numerical Analysis 51, no. 4 (2013): 2107-2122.
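The gap between the two conditions is easy to exhibit numerically. Below is a sketch (the matrix is hypothetical, chosen purely for illustration) of an H whose row-sum norm exceeds 1 while its spectral radius is below 1, so the deterministic Neumann series x = Σ_k H^k b still converges; note that for the stochastic (Monte Carlo) estimator, the paper cited above shows a different, stricter condition is required.

```python
import numpy as np

# Hypothetical iteration matrix: row-sum norm >= 1, spectral radius < 1.
H = np.array([[0.0, 1.2],
              [0.1, 0.0]])
b = np.array([1.0, 1.0])

rho = max(abs(np.linalg.eigvals(H)))       # spectral radius = sqrt(0.12) < 1
inf_norm = np.max(np.abs(H).sum(axis=1))   # row-sum norm = 1.2 >= 1

# The Neumann series x = sum_k H^k b still converges to (I - H)^-1 b:
x = np.zeros(2)
term = b.copy()
for _ in range(200):
    x += term
    term = H @ term

exact = np.linalg.solve(np.eye(2) - H, b)
```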
• asked a question related to Markov Chains
Question
I would like to use a simple model for prediction/estimation. A Markov chain is a very good candidate; however, I wonder whether there are other models for this.
Regards.
Handling Bayes' rule based on Reproducing Kernel Hilbert Spaces (RKHS), Kalman Filter (KF) and Recursive Least Squares (RLS) techniques leads to Kernel Kalman Rule (KKR) as an alternative to Hidden Markov Models (HMMs) for data prediction/estimation/filtering tasks.
Check out the following ordered references for more details:
• asked a question related to Markov Chains
Question
Can anyone help me how to use a Markov Chain for forecasting a time series?
You must discretize the time series, i.e. translate each observation into the occurence of one of a set of categories - e.g. very low, low, medium, high, very high.
Let t be today's date; y(t)' is today's observation - e.g. if today's value is "medium" then y(t)'=(0 0 1 0 0)
If M is the transition matrix then a 1-step-ahead forecast is f(t+1) = M*y(t), 2-steps ahead is f(t+2) = M^2*y(t), h-steps-ahead is f(t+h) = M^h*y(t)
NOTE that the forecasts are probability distributions so you might use the mode or median as a point forecast, or the mean, if you have given numerical values to the categories.
The above is my best guess. Google "Markov chain modeling for very-short-term wind power forecasting" to see how Carpinone et al. (2015) actually did it.
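The forecasting recipe above can be sketched in a few lines (the transition matrix here is hypothetical):

```python
import numpy as np

def markov_forecast(M, y_now, h):
    """h-steps-ahead forecast distribution f(t+h) = M^h * y(t).
    M must be column-stochastic (each column sums to 1); y_now is the
    one-hot indicator of today's category."""
    return np.linalg.matrix_power(M, h) @ y_now

# Hypothetical 3-category chain (low / medium / high):
M = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.6, 0.3],
              [0.1, 0.2, 0.6]])
y = np.array([0.0, 1.0, 0.0])   # today's observation is "medium"
f2 = markov_forecast(M, y, 2)   # probability distribution 2 steps ahead
```

As noted above, f2 is a probability distribution over the categories; a point forecast would be its mode, median, or (with numeric category values) mean.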
• asked a question related to Markov Chains
Question
Hello Everyone,
Need quick help! I have 5 different recurrence rates from 5 studies with different follow-up periods. How do I combine them into one probability to be used in a cost-effectiveness model?
Study  Rate   Time period
A      52.3%  5 years
B      23.2%  2 years
C      17.3%  2 years
D      9.0%   2 years
E      20.9%  5 years
Now, I need to find the average rate and convert it into a probability for use in a Markov chain model with a 3-month cycle.
Any direction will be really helpful.
Regards,
Syeda.
See slides 14-25
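One common approach (a sketch, not validated methodological advice): assume a constant hazard within each study, convert each cumulative probability to an annual rate with r = −ln(1 − P)/t, pool the rates, and convert back to a 3-month probability with p = 1 − exp(−r × 0.25). The unweighted average below is an assumption; weighting by study sample size would usually be preferable.

```python
import math

studies = [          # (cumulative recurrence probability, follow-up in years)
    (0.523, 5),      # A
    (0.232, 2),      # B
    (0.173, 2),      # C
    (0.090, 2),      # D
    (0.209, 5),      # E
]

# Probability -> constant annual rate, assuming exponential event times:
rates = [-math.log(1.0 - p) / t for p, t in studies]

# Pool (unweighted average -- weight by sample size if it is known):
mean_rate = sum(rates) / len(rates)

# Annual rate -> probability over one 3-month (0.25-year) Markov cycle:
p_cycle = 1.0 - math.exp(-mean_rate * 0.25)
```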
• asked a question related to Markov Chains
Question
This Fall, I am teaching my master students an introductory course on Markov chains. I am looking for an easy/clear method to explain the theoretical basis of simulating a Markov chain; i.e.; theoretically how to mathematically explain simulating a Markov chain.
I know one can easily simulate a Markov chain using Mathematica or the R package "markovchain", but I need to do it manually by drawing random numbers from unif[0,1]. Thanks so much for your answer.
Mohamed Riffi
For discrete-time Markov chains, it is quite simple. Suppose M is the transition matrix of the process, such that p(t+1) = M p(t), and hence each column of M represents the conditional distribution of X(t+1) given X(t).
At each time, observe the current state of the chain, and take the cumulative sum of the X(t)-th column of M. Now draw a uniform random variable on the unit interval, say U. If U falls between 0 and the first element of the cumulative sum, assign X(t+1) = 1; if U falls between the first and second elements, assign X(t+1) = 2. In general, if U falls between the (k−1)-th element of the cumulative sum and the k-th, assign X(t+1) = k.
You can see that by construction, the distribution of the induced process {X(t)} is exactly the distribution of the desired chain.
Something similar (but conceptually somewhat more involved) works for continuous-time Markov chains. If you need more information on that, look up the Gillespie algorithm. It's basically a stochastic variant of the Euler method, adapted for stochastic processes.
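The discrete-time recipe above can be written out directly (a sketch; column j of M is taken as the distribution of X(t+1) given X(t) = j, with 0-based state labels):

```python
import random
import numpy as np

def simulate_chain(M, x0, n_steps, seed=0):
    """Simulate a discrete-time Markov chain by inversion: at each step,
    take the cumulative sum of the current state's column of M and find
    where a Uniform(0,1) draw falls within it."""
    rng = random.Random(seed)
    path = [x0]
    for _ in range(n_steps):
        cum = np.cumsum(M[:, path[-1]])
        u = rng.random()
        # first index whose cumulative sum exceeds u:
        path.append(int(np.searchsorted(cum, u, side='right')))
    return path

M = np.array([[0.9, 0.2],      # column-stochastic transition matrix
              [0.1, 0.8]])
path = simulate_chain(M, x0=0, n_steps=1000)
```

By construction, the induced process has exactly the distribution of the desired chain, as stated above.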
• asked a question related to Markov Chains
Question
I introduce a new model for information diffusion based on node behaviors in social networks. I simulated the model and found interesting results. I want to evaluate it with a formal method and found Interactive Markov Chains. Can I use them to evaluate my model?
I find this paper :
A Survey on Information Diffusion in Online Social
Networks: Models and Methods
• asked a question related to Markov Chains
Question
This is an example in Durrett's book "Probability theory: theory and examples", it's about the coupling time in Markov chains, but I can't see the reason behind it.
The trick is played by two persons, A and B. A writes 100 random digits from 0-9; B chooses one of the first 10 positions and does not tell A. If B has chosen 7, say, he counts 7 places along the list, notes the digit at that location, and continues the process; if the digit is 0, he counts 10. A possible sequence is underlined in the list:
3 4 7 8 2 3 7 5 6 1 6 4 6 5 7 8 3 1 5 3 0 7 9  2  3 .........
The trick is that, without knowing B's first digit, A can point to B's final stopping location. He just starts the process from any one of the first 10 places and concludes that his stopping location is the same as B's. The probability of making an error is less than 3%.
I'm puzzled by the reasoning behind the example, can anyone explain it to me ?
The key point is that if A and B ever use the same position in their respective sequences, then from that point on their sequences are identical.  The reason that the trick works is that there is a fairly high probability that this will happen.
Coupling in Markov chains works in a similar way.
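The coupling argument above is easy to check by simulation. A sketch (function names are illustrative) that estimates how often A's endpoint, starting from position 0, matches B's endpoint from a random secret start:

```python
import random

def kruskal_endpoint(digits, start):
    """Follow the counting rule: repeatedly step forward by the current
    digit's value (counting 0 as 10); return the last position reached."""
    pos = start
    while True:
        step = digits[pos] if digits[pos] != 0 else 10
        if pos + step >= len(digits):
            return pos
        pos += step

def trial(rng):
    digits = [rng.randrange(10) for _ in range(100)]
    a_end = kruskal_endpoint(digits, 0)                  # A's arbitrary start
    b_end = kruskal_endpoint(digits, rng.randrange(10))  # B's secret start
    return a_end == b_end

rng = random.Random(1)
success_rate = sum(trial(rng) for _ in range(2000)) / 2000
```

The estimated success rate should be consistent with the claimed error probability of under 3%: once the two trajectories ever land on the same position, they coincide forever after.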
• asked a question related to Markov Chains
Question
I have longitudinal data (repeated measures on each subject over 't' time periods) with a binary response variable & various continuous/categorical covariates. I wish to build a forecasting model that tells the outcome for the time ahead: t+1, t+2... etc, while simultaneously regressing on the predictors, until time t.
I want my model to use the information from the covariates at present time t, to forecast the response for the time ahead.
I believe that my model will predict the outcome with a probability associated with it, something like a Markov model + regression, that gives the state transition probability, also taking into consideration the covariates that affect the state.
Any help on how to structure the problem and/or implement it in R/SAS will be helpful.
You can use the cquad R package to analyze data with a binary response variable.
see the details in
or just install cquad package in R and seek help and analyze your data,
Forecasting can be done, if you have values available for the covariates, using the "predict" function.
• asked a question related to Markov Chains
Question
We are working on queueing models with bulk arrivals and batch services using fuzzy parameters, and we know how the arrival times and service times are distributed.
Thank you all for your suggestions. I think it will be useful for my research
• asked a question related to Markov Chains
Question
I want to optimize a portfolio based on a regime-switching model. Before optimization I need to detect the hidden regimes in the data. Suppose I want to start my analysis with GDP: what I want to know is the steps involved in identifying the regimes in a variable. There are many packages which do it.
The first step is to download the GDP data. How should I then proceed to detect the regimes? I would appreciate it if someone could just list the steps. For now I am not interested in the model or the calculation; I just need to know the general steps in detecting the regimes in a variable.
Suppose that the GDP is observed at fixed intervals of time of length ∆. Let the total number of observations be N, and let S(i) be the observation at the end of the i-th interval. Now calculate the return
• r(i) = (S(i) − S(i−1))/S(i−1).
Choose n much smaller than N. A standard choice is n = 20, whereas N is of the order 10^3 or more. For every i > n, calculate the moving mean r̄(i) and moving standard deviation sd(i) of {r(i−1), r(i−2), ..., r(i−n)}
using
• r̄(i) = average of {r(i−1), r(i−2), ..., r(i−n)}, and
• sd(i) = √( (1/(n−1)) Σ_{j=1}^{n} (r(i−j) − r̄(i))² ).
The empirical drift μ(i) and volatility σ(i) are given by
• μ(i) = r̄(i)/∆
• σ(i) = sd(i)/√∆.
Thus one obtains two time series, {μ(i)} and {σ(i)}. The ranges of these are divided into finitely many disjoint parts, called regimes. After you decide your partitioning, for every i, look up where μ(i) and σ(i) belong. Furthermore, for every sub-interval, you may assign a representative value.
• asked a question related to Markov Chains
Question
Looking for methods to model the state transitions of a multi-state process. Thanks in advance!
It depends on how many elements/states of the system are put into the modeling, what preliminary information about the mutual influence between the elements and their states is to be considered, what outer conditions influence the evolution of the states of the single elements, etc. In general, any book on random (stochastic) processes might be useful; however, the question suggests that some theory of semi-Markov processes would be appropriate if the evolution is non-Markovian and the number of states is not too high. A fairly wide range of such processes with applications is given in the monograph
Semi-Markov Processes: Applications in System Reliability and Maintenance, by F. Grabski;
for introduction, one can read  free available
Also other works can be found via MrGoogle under key words:
semi markov maintenance processes   /  semi markov social sciences  /  semi markov economy markets
e.g. Semi-Markov Risk Models for Finance, Insurance and Reliability
Authors: Jacques Janssen,Raimondo Manca
for introduction to some problems one can reach free available from RG work by the authors et al.:
Regards
• asked a question related to Markov Chains
Question
Most of the literature on the recursive maximum likelihood estimates of parameters of a partially observed model seems to be in discrete time, i.e. on Hidden Markov Models (HMMs).
There is quite a strong result for HMMs in
Tadic, V. B. (2010). Analyticity, Convergence, and Convergence Rate of Recursive Maximum-Likelihood Estimation in Hidden Markov Models. IEEE Transactions on Information Theory, 56(12), 6406–6432. http://doi.org/10.1109/TIT.2010.2081110
I'm wondering whether there is similar work in continuous-time models. If they exist, I can't seem to find them. Maybe the problem is still open.
Thank you for any hints!
"Efficient descriptor-vector multiplications in stochastic automata networks"
"Efficient vector-descriptor product exploiting time-memory trade-offs"
"SANGE – Stochastic Automata Networks Generator
A tool to efficiently predict events through structured Markovian models"
• asked a question related to Markov Chains
Question
I was able to get the transition and emission probabilities of an HMM using MATLAB's built-in commands ("hmmtrain"); now I want to find the initial state probabilities. Can anybody help me obtain those?
• asked a question related to Markov Chains
Question
I am trying to use Mixture Density Network model (Bishop, 1994) to learn a probabilistic mapping, i.e. a conditional distribution in the form P(t|x), where t and x are real-valued variables. The training data consists of (x,t) pairs.
Since this is an experimental problem, I know the underlying distribution of the data. I know P(t|x) is (approximately) a mixture of gaussians with 2 components for every x, and the prior probability of these components are almost equal. So, I set the number of mixture components in the MDN to 2.
The problem is that the training algorithm converges to an unsatisfactory solution which is too far from the actual underlying distribution. The learned P(t|x), for every x, is a mixture with two components:
In one of the components, the mean is relatively close to the means of the components of the actual distribution, and it has a high prior probability (more than 0.9)
In the other one, the mean is too far from both of the means of the actual distribution, and it has a low prior probability (less than 0.1).
In other words, I can say that MDN here acts like two MLP networks, one with (relatively) low error and high probability, and the other with high error and low probability. With this analogy, what I expected was two networks with almost equal probabilities.
Conjugate gradient algorithm has been used for optimization. I have repeated this with different initial random parameters, but the same problem exists.
Is something probably wrong in my implementation, or is this an expected difficulty with MDNs, perhaps because of the optimization method?
An ad hoc experience report: for me, training MDNs (also with SOMs) seems to work better when using e.g. 5 times as many units as the assumed number of modes in the distribution. Example: using ten units in a bimodal problem might end up with 2-4 components with large mixture coefficients (effective components), the rest not contributing to the output; nonetheless, these contribute during training.
The reasoning: the problem is decomposed using all the mixture components, but there are many possible such decompositions, and most of them are unlikely to perfectly match the "ideal" decomposition we assume from analytical knowledge.
• asked a question related to Markov Chains
Question
I am trying to use the Gilbert-Elliott correlated error model with a Markov model in LTE-A to calculate the probability of packet loss. Like other researchers, I use a simplified equation to calculate this probability.
Is there any technical issue that prevents me from using this traditional method in LTE-A?
• asked a question related to Markov Chains
Question
Both systems are capable of independently catering to the full load. The online system drives the output, and on its failure the switchover logic switches to the hot-standby system, which is fed identical input and should produce identical output. The system encounters downtime when one of the following happens:
a) The online system fails AND the switchover logic fails, so it cannot switch to the healthy system.
b) Both systems fail simultaneously due to common-cause failures.
I have attached a preliminary Markov chain diagram.
If you are trying to use the processors to validate the response, your use of two systems has an inherent logic flaw because the systems cannot determine which system provided the incorrect answer (failure) in case of  different responses. A minimum of 3 systems is required to determine 'odd man out'. This hardware logic is used by NASA for space missions and a similar concept in Stratus computers for business critical applications.
If you are simply trying to maintain five 9's availability to maintain the environment, both systems do not have to be running at the same time. But they do have to 'checkpoint' critical application data (context) between systems and maintain a 'heart beat' signal in order to determine if the other is still alive. In case of one system failure, the other system would be able to start the applications and use the check point data to pick up at the failure point. This type of fault tolerance and high availability was pioneered by Tandem computers, but there was a similar HA environment for IBM RS6000, Sun, HP and DEC systems.
By going to geographically diverse locations or simply locating the systems on different power grids and network connectivity, you dramatically reduce single points of failure. This can be a simple as different ends of the data center.
This basic concept can be scaled to any n+1 architecture, including MPP, SMP or single processor systems. Telecommunications, Banking, high volume transaction processing have used this type of systems architecture for years.
While this is a gross over-simplification of the 'odd man out' computing and N+1 systems architecture, these are both well established systems architectures with proven software designs to drive reliability to less than 10 seconds of down time in a year.
To speed up your analysis and development of a solution, you might look at some of the Linux 'hot standby' projects that exist. Unless you are doing process control work that has the potential of harming humans or other life, you might look at some of the off the shelf solutions or aligning yourself with one of the open source HA projects.
• asked a question related to Markov Chains
Question
Pavement condition deteriorates over time. Pavement maintenance decisions depend on the condition of the pavements and the availability of funds. Predicting the future condition of pavements is necessary to manage pavement rehabilitation.
Thanks, Professor Pavlov. I think both suggestions are valuable; however, Professor Reza's seems to relate more to the situation. I appreciate the input from both of you. I do recommend that others take a look at the multi-attribute objective value function; I have used it in decision analysis of alternative solutions.
• asked a question related to Markov Chains
Question
I'm looking at the effect of size and gender on the sequential behaviour of white sharks. I have data listing the frequency and duration of behaviours for an ethogram I designed.
I want to create a transition matrix to obtain a kinematic diagram and run a Markov chain analysis, but I am unsure how to obtain the transition matrix from my raw data.