Science topic
Markov Chains - Science topic
Explore the latest questions and answers in Markov Chains, and find Markov Chains experts.
Questions related to Markov Chains
Hello Researchers and all,
Are you interested in Digital Twins, Cyber Attacks and Dynamic Bayesian Networks? Do you want to know what can happen to a digital-twin-based industrial organization if a Malware or DDoS attack occurs? What could its impact be in a dynamic situation? If your answer is yes, I am sharing the link to one of my papers, "Analyzing the impact of Cyber Attack on the performance of Digital Twin Based Industrial Organizations", which was published in the Journal of Industrial Information and Integration (Elsevier, Q1, IF = 15.7). You will get a good idea about:
- Digital Twin,
- Cyber attack propagation,
- Markov chain and
- Dynamic Bayesian Network
Not only this, we also discussed different prevention and resilience mechanisms to keep your digital twin mostly functional under Malware and DDoS attacks. If you want to work further on this topic with different cyber attacks and prevention mechanisms, this paper will give you an idea of how to do so.
Here is the link of the paper:
From this link, you will get 50 days' free access to the article. Anyone clicking on this link before August 03, 2024 will be taken directly to the final version of the article on ScienceDirect, which you are welcome to read or download. No sign-up, registration or fees are required.
You are welcome to read, download and cite this article, and to develop your research skills in cyber attacks, Digital Twins and Dynamic Bayesian Networks.
How can a hidden Markov chain solve an image segmentation problem?
Hello everyone,
I would like to know, in a Markov chain model, what represents the progression of time in a segmentation solution.
Regards.
Hello Everyone,
For example, I have a 3-state Markov model, i.e., Healthy, Sick and Death. I collected data over the last year. The data take the form of yes/no indicators for each state. A simple example: Healthy = {yes, no, yes, no, yes, no, no, yes}, Sick = {no, yes, no, yes, ...} and Death = {no, yes, no, yes, yes, ...}. After one year, I again collected data in the form of these three states. My doubt is how to find the state transition probabilities when the data are in the above format.
Thank-you
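One way to estimate the transition probabilities from records like these: first convert the yes/no indicators at each time point into a single state label (the state marked "yes"), then count observed transitions and normalize each row. A minimal Python sketch with a short, hypothetical state sequence:

```python
from collections import Counter

def transition_matrix(sequence, states):
    """Estimate P(next = t | current = s) by counting observed transitions."""
    counts = Counter(zip(sequence, sequence[1:]))
    matrix = {}
    for s in states:
        row_total = sum(counts[(s, t)] for t in states)
        matrix[s] = {t: counts[(s, t)] / row_total if row_total else 0.0
                     for t in states}
    return matrix

# Hypothetical sequence of observed states at successive time points
obs = ["Healthy", "Healthy", "Sick", "Healthy", "Sick", "Death"]
P = transition_matrix(obs, ["Healthy", "Sick", "Death"])
print(P["Healthy"])  # row of transition probabilities out of "Healthy"
```

Here the observed sequence is built from the yes/no columns by taking, at each time point, the state whose indicator is "yes".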
What other similar graphical approaches/tools do you know of for depicting the degradation state or reliability performance of a system, aside from Markov chains and Petri nets?
(Any relevant references are welcome.)
Thank you in advance.
Hey members, I'm running quantile regression with panel data using Stata, and I find that there are two options:
1- Robust quantile regression for panel data: standard, using (qregpd)
2- Robust quantile regression for panel data with MCMC: using (adaptive Markov chain Monte Carlo)
Can anyone please explain the use of MCMC to me? How can I analyse the output of robust quantile regression for panel data with MCMC? Thanks
This is a continuous-time, continuous-space Markov chain. If possible, I would like to do this in the R programming language. I have attached a sample dataset named NDP.
Thank you .
The Shannon entropy rate can be calculated for a hidden Markov model (HMM), but is it possible to do the same for a Markov chain?
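Yes: for an irreducible Markov chain the entropy rate has the closed form H = −Σᵢ πᵢ Σⱼ Pᵢⱼ log₂ Pᵢⱼ, where π is the stationary distribution (the HMM case is harder precisely because no such closed form exists). A Python sketch with a hypothetical two-state chain:

```python
import numpy as np

def entropy_rate(P):
    """H = -sum_i pi_i sum_j P_ij log2 P_ij for a stationary Markov chain."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])  # left eigenvector for eigenvalue 1
    pi = pi / pi.sum()                                  # normalize to a distribution
    safe = np.where(P > 0, P, 1.0)                      # log2(1) = 0, so zero entries drop out
    return -np.sum(pi[:, None] * P * np.log2(safe))

P = np.array([[0.9, 0.1],
              [0.1, 0.9]])
print(entropy_rate(P))  # equals the binary entropy of 0.1, about 0.469 bits/step
```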
Dear Researchers,
Is there any tool available for BDMP? If so, kindly give details on how to get it.
I would like to find an expert to discuss the possibilities of using Markov chains in fuel price modeling.
I am familiar with the concept of stochastic ordering for two random variables and with how to say whether a Markov matrix is stochastically monotone. What I'm interested in is whether there is a concept for ranking two separate Markov matrices.
To illustrate, suppose we have two stochastically monotone Markov matrices A and B which preserve the ordering x≿y. Under what circumstances (if any) can we say that matrix A is preferred to matrix B in stochastic order?
Note: The definitions I am using are from this slide deck: http://polaris.imag.fr/jean-marc.vincent/index.html/Slides/asmta09.pdf
Dear all,
I am looking for any helpful resources on Markov chain Monte Carlo simulation. A PDF, a book, a Stata do-file or an R script would be a great help, or any starting point from which I can learn MCMC quickly.
I would be really happy if someone could share Stata do-files or R scripts for Markov chain Monte Carlo.
Thank you very much
I have 30 years of export data and I have to do a Markov chain analysis.
Hello everybody
According to what is mentioned in the attached snapshot from the appendix of this article:
My question is, if we already have the transition probability matrix P, how can we numerically calculate v̂ = α0(I − P)^(−1) as the limit of the recurrence v(t+1) = v(t)P + α0?
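One note on the recurrence: its fixed point satisfies v = vP + α0, i.e. v(I − P) = α0, so the iteration converges to α0(I − P)^(−1) whenever the spectral radius of P is below 1 (as for P restricted to the transient states of an absorbing chain). A Python sketch with a hypothetical substochastic P, checking the iterate against a direct linear solve:

```python
import numpy as np

# Hypothetical substochastic matrix (row sums < 1, spectral radius < 1)
P = np.array([[0.5, 0.2],
              [0.1, 0.6]])
alpha0 = np.array([1.0, 0.0])

# Direct computation: solve v (I - P) = alpha0 via the transposed system
v_direct = np.linalg.solve((np.eye(2) - P).T, alpha0)

# Iterative computation: v(t+1) = v(t) P + alpha0, starting from v = 0
v = np.zeros(2)
for _ in range(500):
    v = v @ P + alpha0

print(np.max(np.abs(v - v_direct)))  # tiny: the recurrence has converged
```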
i) What kind of objective may be achieved by applying Markov chain analysis?
ii) How should the data be arranged?
I would like to have a deeper insight into Markov chains: their origin and their applications in information theory, machine learning and automata theory.
So, I want to take network data from various time periods of the same network and map it in a way that allows a clear representation of each time period. For example, taking network data for a 2013-2015 time period, a 2015-2017 time period and then a 2020-2021 time period. Importantly, the amount of data will differ and the time periods won't strictly be even (i.e., there might be a 1-year gap between time periods a and b and then 5 years between b and c). Also, I think it is important to highlight that these would be 'time-slices' of the same overall network, rather than distinct, unrelated networks.
I am hoping to present these different time-slices in one or two ways. First, I want to be able to place a series of network maps next to each other based on temporal data to show how the network is changing over time. Second, if possible, I would like to produce a video/animation that shows the network changing over time.
I have been doing lots of reading on possible ways to achieve this type of analysis. I have been looking into stochastics, specifically Markov models or Stochastic Actor-Oriented Modelling (SAOM), both of which I have seen used for similar projects. The only problem is that my maths is good but needs some work before I could comfortably use these approaches, so if stochastics is the way forward, any suggestions for good tutorials?
I have also been looking into Social Sequence Analysis, specifically the use of Network Methods, as outlined by Cornwell (https://www.cambridge.org/core/books/social-sequence-analysis/network-methods-for-sequence-analysis/FFF842AF37364167E23AD03E50650336) which seems promising.
I feel as though I am reading a lot and getting a bit lost in all of the literature. Any advice would be greatly appreciated!
Is it possible to analyse 30 years of data in a Markov chain analysis using any software? It is easy to analyse 10 years of data in the LINGO software, so for 30 years of data, what would the method be, or is there other software?
Hi everyone. I took a basic course on Markov chains, and know a little about Monte Carlo simulations and methods. But I never got to the part about spreadsheets.
If anyone can direct me to a few non-technical, not-too-hard-to-read books on Monte Carlo simulations, I would be grateful.
I have data for a seven-year monthly time series and want to apply Markov chains.
I am looking to apply supervised machine analytics on a dataset that shows the location-based movement of patients in a house. The database needs to show the indoor movements of patients. The dataset may have both numerical or categorical values.
An example of this is the demonstration of the time the person woke up, any chores/activities the patient did, the time the patient went to sleep, etc. I would prefer the dataset to be labeled defining the abnormal and normal occasions.
Ideally, I will be applying supervised machine learning algorithms to the patient's movements. For future work unsupervised machine learning analytics can be applied.
Should you know of any of the specific types of datasets, please let me know.
Thanks
Could anyone provide me with an example or some references?
When a Markov matrix is multiplied by an input to get an output, how are the inputs defined, and how do they differ from the matrix coefficients? I want to predict the presence of something based on previous data.
Dear researchers,
Pardon me for being a novice here. In the image attached, eq. 3.1 represents the transition matrix (that part is pretty clear). I am not able to comprehend eq. 3.2, alpha*P = alpha, or the further equations.
I have the P matrix as an outcome of one of my projects. How should I calculate alpha and the elements a1, a2, a3, etc.?
I would appreciate some of your valuable guidance and help.
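For orientation (hedged, since I can only guess at the article's notation): alpha*P = alpha, together with the elements summing to 1, is the standard condition for the stationary distribution, i.e., the long-run fraction of time the chain spends in each state. Given a concrete P, alpha can be computed by solving this linear system, e.g., in Python:

```python
import numpy as np

def stationary_distribution(P):
    """Solve alpha P = alpha together with sum(alpha) = 1."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n),   # (alpha P - alpha)^T = 0
                   np.ones(n)])       # normalization row: sum(alpha) = 1
    b = np.zeros(n + 1)
    b[-1] = 1.0
    alpha, *_ = np.linalg.lstsq(A, b, rcond=None)
    return alpha

# Hypothetical 3-state transition matrix
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])
alpha = stationary_distribution(P)
print(alpha)  # the elements a1, a2, a3
```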
I am trying to estimate the most likely number of K using the MCMC (Markov Chain Monte Carlo Inference of Clusters from Genotype Data) function in the Geneland R package by Gilles Guillot et al. I am a little bit confused when it comes to the varnpop and freq.model arguments.
In the package reference manual https://cran.r-project.org/web/packages/Geneland/Geneland.pdf one may read:
varnpop = TRUE *should not* be used in conjunction with freq.model = "Correlated"
On the other hand, another manual http://www2.imm.dtu.dk/~gigu/Geneland/Geneland-Doc.pdf recommends an example of MCMC usage which looks like this:
MCMC(coordinates=coord, geno.dip.codom=geno, varnpop=TRUE, npopmax=10, spatial=TRUE, freq.model="Correlated", nit=100000, thinning=100, path.mcmc="./")
I am not sure how to reconcile these two contradictory pieces of information, any suggestions?
Hello,
Currently, I am dealing with synthetic generation of wind speed data using a Markov chain. After forming the transition and cumulative transition matrices, uniform random numbers are generated between 0 and 1. These random values are compared with the elements of the cumulative probability transition matrix, and then synthetic data are generated using an equation. But I don't understand the actual calculation steps. Can someone show me the detailed steps with an example? It would be really helpful if anyone could give me some ideas about the calculation steps.
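The comparison step is inverse-CDF sampling: given the current state i, draw u ~ Unif(0,1) and take as next state the first column j whose cumulative probability C[i, j] reaches u, so that state j is chosen with probability P[i, j]; the synthetic value is then the representative speed of that state's bin. A Python sketch with a hypothetical 3-bin transition matrix:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical transition matrix over 3 wind-speed bins (states)
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])
C = np.cumsum(P, axis=1)         # cumulative transition matrix, each row ends at 1
bin_centers = [2.0, 6.0, 10.0]   # representative speed (m/s) for each bin

state = 0
series = []
for _ in range(1000):
    u = rng.random()  # uniform random number in [0, 1)
    # first bin whose cumulative probability reaches u (guard against rounding)
    state = min(int(np.searchsorted(C[state], u)), len(bin_centers) - 1)
    series.append(bin_centers[state])
```

In practice one usually adds within-bin variation (e.g., a uniform or fitted draw inside the bin) instead of a fixed bin center.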
I would like to predict land use demand using the Markov Chain (MC) model.
Could somebody provide a simple approach, like software that can run an MC-model prediction, or any other relatively simple way to manage an MC model?
Thanks a million in advance!
Batu
Hello,
In TreeAge, each individual Markov node can be selected one at a time and Markov cohort reports can be generated. From these reports at individual nodes, a survival curve can be generated.
In our model, a decision tree with two arms eventually results in multiple Markov models for each arm (ie, each arm ends in about 6 Markov nodes each, for a total of 12 Markov nodes in the whole model). Notably, the Markov chains within each arm are all clones of a single Markov chain (ie, they all have the same health states). Is there a way to generate a survival curve for an arm as a whole, instead of individual Markov nodes within the arms, thereby allowing us to compare the arms?
Thanks,
David
Biological behaviors require intricate coordination between different components, including sensorimotor pathways, muscle groups, and the skeleton. Some of the behaviors of a specific organism can be identified. If their locomotion patterns are state-dependent, it might be suitable to use a Markov model to simulate them. I am wondering what is worth noting during Markov modeling of locomotion.
Are there any other ideas for the modeling of biological movements?
Thanks for your potential advice in advance.
Now I have a task to classify imbalanced time series datasets using ML classifiers such as Logistic Regression, Decision Tree, SVM, and KNN. I am not allowed to use Deep Learning tools such as CNNs and RNNs. The time series data are measurements of the force-displacement curve from a production line. The dataset is extremely imbalanced (minority class : majority class = 1:100). So I want to use Data Augmentation techniques to enlarge the minority class, in order to optimize the performance of the classifiers and avoid overfitting.
I have tried many tools in feature space, such as oversampling, undersampling, SMOTE, ADASYN and so on, but their performance is not ideal. I wish to generate synthetic time series data using Data Augmentation techniques based on the initial data. By analogy with image recognition, I have also tried the existing methods that have been applied to images, such as scaling, rotation, and jittering, but they are also not very useful.
So I want to ask if there are any other DA techniques to generate synthetic time series data. I have only some initial ideas, such as using DTW, the Fourier Transform, Markov chains and so on, but no papers or code about applying them.
Can anyone help me? I really appreciate your help. Thank you!
For stochastic adaptive systems, it is important to consider local ergodic systems, Markov chains, local Monte Carlo simulation and the maximum entropy method to be more effective.
I have more than 400 different events that occur over two years; some of them can occur 4000 times and others no more than 50 times. These events are not equally spaced in time, e.g., one at 15:00:02, another at 15:10:45 and another at 15:45:56. I cannot ensure that these events are independent; they may be related. I want to analyze these events and try to find a pattern.
Type of data:
{timestamp1, event A (string value)}, {timestamp2, event B (string value)} -> Between timestamp1 and timestamp2 there is no event, and the gaps are not equally distributed. Event A could influence event B or not.
I would like to know what type of methodology I can apply. I have been reading about SAX time series, Markov models, hidden Markov models, DTW (Dynamic Time Warping), time warping for discrete events, renewal process models and continuous-time Markov processes. However, I think these algorithms don't fit my problem.
I hope that someone can help me. Thanks in advance.
In my project, I deal with stochastic data via a Markov model over a short study period, so I will run a simulation to reach a sufficient sample size. My question is how to do that, and what is the best way to estimate the transition probability matrix? Some of the papers use a norm, but I do not understand how they did that.
I want to use the concept of a Markov chain in vehicular ad-hoc networks. Which scenarios of a vehicular ad-hoc network can be modeled with the help of a Markov chain process?
When performing Markov chain experiments, the transition probabilities must be obtained to model the problem. What is the best way of calculating these probabilities?
Markov chain analysis is possible in which statistical software?
Thanks
This map was produced by the CA-Markov module to predict the future land use map of a study area. The input land use maps were from 2004 and 2008, with 9 land use types. Three of them (housing, commercial and industrial) had 10 classes of probability of growth (created using the weight-of-evidence method), with class 1 the least probable and class 10 the most probable. The other 6 classes were extracted from the 2008 land use map as they were. So the Markov chain transitional area matrix, the basis land use map (the 2008 land use map) and the raster group file of 9 classes (as explained) were the other input files, and the module was asked to project 4 years ahead (to 2012).
Why does the result look like this?
Thank you
Does anyone have suggestions for books on Markov chains, possibly covering topics including matrix theory, classification of states, and the main properties of absorbing, regular and ergodic finite Markov chains?
I am going to develop a queueing model in which riders and drivers arrive with inter-arrival time exponentially distributed.
All the riders and drivers arriving in the system will wait for some amount of time until being matched.
The matching process will pair one rider with one driver and takes place every Δt unit of time (i.e., Δt, 2Δt, 3Δt, ⋯). Whichever side outnumbers the other, its exceeding portion will remain in the queue for the next round of matching.
The service follows the first come first served principle, and how they are matched in particular is not in the scope of this problem and will not affect the queue modelling.
I tried to formulate it as a double-ended queue, where the state indicates the exceeding number in the system.
![image](https://i.stack.imgur.com/teSyW.png)
However, this formulation doesn't incorporate the factor Δt, so it is not in a batch-service fashion. I have no clue how I can formulate this Δt (somewhat like a buffer) into this model.
Hello
I have some doubts about Markov chains used in predictions of land use change. If there is any specialist who could guide me, I would be very grateful.
Hi dear all,
I have a problem with Markov Chain Modeling.
My problem is described as below:
I want to analyze the response time for three types of customers in a bank. We have three priorities, such that customers from group one always have higher priority than the other types. Therefore, this problem can be modeled as a 3D Markov chain with an infinite state space.
Does anybody know how I can analyze a 3D Markov chain with an infinite state space? I would really appreciate it if you could introduce me to any methods (exact or approximate) to solve this kind of problem.
Thank you,
Mehrdad
Hello all ,
I have four stochastic matrices, and I want to estimate (or measure) the correlation between them.
Can the chi-squared test for independence be used? Tests based on distances, or what kind of tests?
Thanks
I would like to find the state transition matrix for a set of data. My goal is to represent the behavior captured by the data using Markov chains. Is there any good tutorial/MATLAB code that can help me with that?
I have 20 rasters with 5 classes and want to calculate the transition matrix for my agent-based model.
I prefer using R
Without data collection, how are the properties of a Markov process determined using a known probability distribution?
If anyone can help me it would be extremely appreciated !
We are trying to solve some very specific algorithm development issues pertaining to Markov models for large state spaces namely:
The process of building a Markov chain with covariates (parameterised model) and creating a logistic link from multidimensional data.
If anyone has knowledge or experience of building these models, or has access to sample code (ideally in R), we have specific issues with which we would appreciate your assistance.
I have been working with the R package "tRophicPosition", which uses Bayesian inference to calculate the trophic positions of consumers from stable isotope data.
I've used the full two-baseline model, and when comparing sexes of the same species of consumer, according to the Mann-Whitney test there is no difference (p > 0.05) in d15N or d13C values, or in the TP calculated with the Post (2002) equation. That being said, I supposed there shouldn't be differences in the TP calculated by the model. However, to compare with the model, I used this function in the script:
compareTwoDistributions(Female.TP, Male.TP, test = ">")
[1] 0.557
As I understand from the manual (attached), this means there is a probability of 56% that the female TP is higher than the male TP.
I was not very pleased with this result, since I believe there are no differences in the SIA values (i.e., the SIBER ellipses almost fully overlap and are the same size). I decided to run the test again, BUT now using the female dataset for both groups (males and females), expecting the model to tell me both sexes had the same TP, but I obtained this:
compareTwoDistributions(Female.TP, Male.TP, test = ">")
[1] 0.45
So again there are differences, even though it is the same dataset? Can Markov chain Monte Carlo simulations make that difference, even using the 4-chain model (4000 iterations)?
I then used the Mann-Whitney U-test to compare the Male.TP to the Female.TP obtained by the model, and it gave me p-values < 0.05 for both scenarios (the same and the real dataset).
Am I doing something wrong? I'm new to Bayesian statistics, and I'd like to know if anyone here has had the same issue.
I also attached two graphs of when I used the same dataset for both groups
Thank you!
I need to simulate the occurrence and amount of precipitation using a Markov chain and a Gamma distribution. Can anyone help me with a MATLAB script?
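The usual formulation (as in WGEN-style weather generators) is a two-state (wet/dry) first-order Markov chain for occurrence, with a Gamma draw for the amount on wet days. A Python sketch with hypothetical parameters; the same logic translates line-for-line to MATLAB:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical chain: P(wet | previous dry) and P(wet | previous wet)
p_wet_given_dry, p_wet_given_wet = 0.25, 0.65
shape, scale = 0.8, 7.0  # hypothetical Gamma parameters for wet-day amounts (mm)

wet = False
rainfall = []
for _ in range(365):
    p_wet = p_wet_given_wet if wet else p_wet_given_dry
    wet = rng.random() < p_wet                                # occurrence: Markov step
    rainfall.append(rng.gamma(shape, scale) if wet else 0.0)  # amount: Gamma draw
```

In practice, the transition probabilities and Gamma parameters are fitted per month or season from observed records.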
I have a system that is a Markov chain. It is in principle possible to calculate the transition matrix, given a set of parameters of the system. The calculation may take long, but it can be done for some reasonable sets of parameters. Sometimes what I get is a transition matrix that is too big to be managed by my computer (it requires too much memory, though I didn't try saving it to my hard drive). Are there any methods of algorithmically manipulating large Markov chains "by pieces", so to speak? Or perhaps there are other methods of calculating various properties of systems that correspond to very large Markov chains?
In my case, I get absorbing Markov chains, and I would like to calculate various properties, like the probability of absorption or the expected number of steps until absorption. Of course, there are appropriate formulae, but they are useless if you cannot make the computation.
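For reference, the absorbing-chain quantities have closed forms via the fundamental matrix: write the chain in canonical form with transient block Q and transient-to-absorbing block R; then N = (I − Q)^(−1), the expected steps to absorption are t = N·1, and the absorption probabilities are B = N·R. A small dense Python sketch with a hypothetical chain; for very large chains, the practical trick is to avoid forming N and instead solve sparse linear systems such as (I − Q)t = 1 (e.g., with scipy.sparse.linalg.spsolve), which never materializes the inverse:

```python
import numpy as np

# Hypothetical canonical form: 2 transient and 2 absorbing states
Q = np.array([[0.4, 0.3],
              [0.2, 0.5]])   # transient -> transient
R = np.array([[0.2, 0.1],
              [0.1, 0.2]])   # transient -> absorbing

# Solve rather than invert: (I - Q) t = 1 and (I - Q) B = R
I = np.eye(2)
t = np.linalg.solve(I - Q, np.ones(2))  # expected steps to absorption
B = np.linalg.solve(I - Q, R)           # absorption probabilities

print(t)              # expected number of steps from each transient state
print(B.sum(axis=1))  # rows sum to 1: absorption is certain
```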
Please help me to calculate monthly rainfall probability.
I need a reference for using the two methods in one project.
Probability of transition, modeling, Markov chains.
I am looking for recent subjects in the area of using Markov chains in queueing models or theory for the thesis of a master student in mathematics.
Thanks a lot in advance.
Mohamed I Riffi
For instance, you are investigating gaps in 2 independent variables and you expect them to close up soon. How can a Markov chain n-step transition matrix be used to determine when these gaps are likely to close up?
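The n-step transition probabilities are just the entries of the matrix power Pⁿ, so "when is the gap likely to close?" becomes "for which n does the relevant entry of Pⁿ exceed your threshold?". A Python sketch with a hypothetical two-state chain (gap open / gap closed, with closure absorbing):

```python
import numpy as np

P = np.array([[0.8, 0.2],    # open -> open, open -> closed
              [0.0, 1.0]])   # closed is absorbing

for n in (1, 5, 10, 20):
    Pn = np.linalg.matrix_power(P, n)
    print(n, Pn[0, 1])  # probability the gap has closed within n steps
```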
Most papers regarding cyclicity using Markov chains are from the 1970s to the 1990s; in the recent literature only a few papers deal with this method. To what extent can the use of Markovian processes control or explain cyclothems or coarsening- and fining-upward cycles? Analysis of cyclicity in the recent literature is restricted to astronomical forcing cycles (Milankovitch bands) of known periodicities, using spectral or Fourier analysis. Can we then assume that these are the accepted methods today for assessing cyclicity? What do you think about it?
I would like to know the probability that a person who inspects the food then eats it, and vice versa. The data are not normally distributed, so I cannot use traditional regression analysis (at least, I don't know how). A Markov chain has been suggested; however, I am not sure how to do this. Does anyone have a good source on this, or an alternative?
many thanks,
In "Monte Carlo Methods", chapter 7, Hammersley, the convergence condition for the Monte Carlo solution of the linear equations Ax=b is that the maximum absolute row sum norm of the matrix H = A − I must be less than 1. Other material indicates that the condition is that the spectral radius of H should be less than 1. Other material says that matrix A should be diagonally dominant. These conditions are not equivalent as far as I can see; the spectral radius condition appears to be less strict than the other two. Could someone tell me which one is the true convergence condition?
I would like to use a simple model for some prediction/estimation. A Markov chain is a very good candidate; however, I wonder if there are other models for this.
Regards.
Can anyone help me with how to use a Markov chain for forecasting a time series?
Hello Everyone,
Need quick help! I have 5 different recurrence rates from 5 studies with different follow-up periods. How do I combine them to find one probability to be used in a cost-effectiveness model?
Study    Rate     Time period
A        52.3%    5 years
B        23.2%    2 years
C        17.3%    2 years
D        9.0%     2 years
E        20.9%    5 years
Now, I need to find the average rate and convert it into a probability to use in a Markov chain model with a 3-month cycle.
Any direction will be really helpful.
Regards,
Syeda.
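A common approach (assuming a constant hazard within each study): convert each cumulative probability over follow-up T to a rate r = −ln(1 − P)/T, pool the rates, and convert back to a cycle-length probability p = 1 − e^(−r·t) with t = 0.25 years. A Python sketch using the figures above; the unweighted average is an assumption, and weighting by study size would be preferable if the sample sizes are available:

```python
import math

# (cumulative probability, follow-up in years) from the five studies
studies = [(0.523, 5), (0.232, 2), (0.173, 2), (0.090, 2), (0.209, 5)]

# Rate from cumulative probability, assuming a constant hazard
rates = [-math.log(1 - p) / t for p, t in studies]
mean_rate = sum(rates) / len(rates)   # simple average (unweighted assumption)

# Back-convert to a 3-month (0.25-year) cycle probability
p_cycle = 1 - math.exp(-mean_rate * 0.25)
print(round(p_cycle, 4))
```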
This fall, I am teaching my master's students an introductory course on Markov chains. I am looking for an easy, clear way to explain the theoretical basis of simulating a Markov chain, i.e., how to mathematically justify the simulation.
I know one can easily simulate a Markov chain using Mathematica or the R package "markovchain", but I need to do it manually by drawing random numbers from Unif[0,1]. Thanks so much for your answer.
Mohamed Riffi
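The theoretical basis is the inverse-CDF method: if U ~ Unif[0,1] and the current state is i, take the next state to be the smallest j with P(i,1) + … + P(i,j) > U; then state j is selected with probability exactly P(i,j), and iterating yields a realization of the chain. A manual Python sketch with a hypothetical two-state chain:

```python
import random

def next_state(row, u):
    """Inverse-CDF step: first state whose cumulative probability exceeds u."""
    cumulative = 0.0
    for j, p in enumerate(row):
        cumulative += p
        if u < cumulative:
            return j
    return len(row) - 1  # guard against floating-point rounding

# Hypothetical transition matrix
P = [[0.3, 0.7],
     [0.6, 0.4]]

random.seed(1)
state, path = 0, [0]
for _ in range(10):
    state = next_state(P[state], random.random())  # one Unif[0,1) draw per step
    path.append(state)
print(path)
```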
I introduce a new model for information diffusion based on node behaviors in social networks. I simulated the model and found interesting results. I want to evaluate it with a formal method and found Interactive Markov Chains. Can I use them to evaluate my model?
This is an example from Durrett's book "Probability: Theory and Examples"; it's about the coupling time in Markov chains, but I can't see the reason behind it.
The trick is played by two persons, A and B. A writes 100 digits from 0-9 randomly; B chooses one of the first 10 numbers and does not tell A. If B has chosen 7, say, he counts 7 places along the list, notes the digit at that location, and continues the process. If the digit is 0, he counts 10. A possible sequence is underlined in the list:
3 4 7 8 2 3 7 5 6 1 6 4 6 5 7 8 3 1 5 3 0 7 9 2 3 .........
The trick is that, without knowing B's first digit, A can point to B's final stopping location. He just starts the process from any one of the first 10 places, and concludes that his stopping location is the same as B's. The probability of making an error is less than 3%.
I'm puzzled by the reasoning behind the example, can anyone explain it to me ?
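This is the Kruskal count. Each possible start defines a Markov chain on positions in the list; as soon as two chains ever land on the same position, they follow identical steps forever after (they have coupled), so they share the same stopping location. Over 100 digits, chains started at any two of the first ten places couple with high probability, which is where the <3% bound comes from. A quick Python simulation of the procedure as described:

```python
import random

def stopping_location(digits, start):
    """Count forward by each digit's value (0 counts as 10) until the list ends."""
    i = start
    while True:
        step = digits[i] if digits[i] != 0 else 10
        if i + step >= len(digits):
            return i          # last position reachable before running off the list
        i += step

random.seed(0)
trials, errors = 2000, 0
for _ in range(trials):
    digits = [random.randrange(10) for _ in range(100)]
    b_stop = stopping_location(digits, random.randrange(10))  # B's secret start
    a_stop = stopping_location(digits, random.randrange(10))  # A's arbitrary start
    if a_stop != b_stop:
        errors += 1
print(errors / trials)  # empirically a few percent
```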
I have longitudinal data (repeated measures on each subject over 't' time periods) with a binary response variable & various continuous/categorical covariates. I wish to build a forecasting model that tells the outcome for the time ahead: t+1, t+2... etc, while simultaneously regressing on the predictors, until time t.
I want my model to use the information from the covariates at present time t, to forecast the response for the time ahead.
I believe that my model will predict the outcome with a probability associated with it, something like a Markov model + regression, that gives the state transition probability, also taking into consideration the covariates that affect the state.
Any help on how to structure the problem and/or implement it in R/SAS will be helpful.
We are working on queuing models with bulk arrivals and batch services using fuzzy parameters, and we know how the arrival times and service times are distributed.
I want to optimize a portfolio based on a regime-switching model. Before optimization, I need to detect the hidden regimes in the data. Suppose I want to start my analysis with GDP. What I want to know is the steps involved in identifying the regimes in a variable. There are many packages that do it.
The first step is to download the GDP data. How should I then proceed to detect the regimes? I would appreciate it if someone could just list the steps. For now I am not interested in the model or calculations; I just need to know the general steps in detecting the regimes in a variable.
Looking for methods to model the state transitions of a multi-state process. Thanks in advance!
Most of the literature on the recursive maximum likelihood estimates of parameters of a partially observed model seems to be in discrete time, i.e. on Hidden Markov Models (HMMs).
There is quite a strong result for HMMs in
Tadic, V. B. (2010). Analyticity, Convergence, and Convergence Rate of Recursive Maximum-Likelihood Estimation in Hidden Markov Models. IEEE Transactions on Information Theory, 56(12), 6406–6432. http://doi.org/10.1109/TIT.2010.2081110
I'm wondering whether there is similar work in continuous-time models. If they exist, I can't seem to find them. Maybe the problem is still open.
Thank you for any hints!
I was able to get the transition and emission probabilities of an HMM using the built-in MATLAB command ("hmmtrain"); now I want to find the initial state probabilities. Can anybody help me with getting those?
Thanks in advance.
I am trying to use Mixture Density Network model (Bishop, 1994) to learn a probabilistic mapping, i.e. a conditional distribution in the form P(t|x), where t and x are real-valued variables. The training data consists of (x,t) pairs.
Since this is an experimental problem, I know the underlying distribution of the data. I know P(t|x) is (approximately) a mixture of gaussians with 2 components for every x, and the prior probability of these components are almost equal. So, I set the number of mixture components in the MDN to 2.
The problem is that the training algorithm converges to an unsatisfactory solution which is too far from the actual underlying distribution. The learned P(t|x), for every x, is a mixture with two components:
In one of the components, the mean is relatively close to the means of the components of the actual distribution, and it has a high prior probability (more than 0.9)
In the other one, the mean is too far from both of the means of the actual distribution, and it has a low prior probability (less than 0.1).
In other words, I can say that MDN here acts like two MLP networks, one with (relatively) low error and high probability, and the other with high error and low probability. With this analogy, what I expected was two networks with almost equal probabilities.
Conjugate gradient algorithm has been used for optimization. I have repeated this with different initial random parameters, but the same problem exists.
Is something probably wrong in my implementation, or is this an expected difficulty with MDNs, maybe because of the optimization method?