Science topic

# Markov Processes - Science topic

Explore the latest questions and answers in Markov Processes, and find Markov Processes experts.

Questions related to Markov Processes

In discrete systems, such as Markov chains, Shannon entropy can be used to quantify the uncertainty and complexity of the system. In continuous systems, such as pure-jump Markov processes, does the corresponding differential entropy have a clear physical meaning? If so, how can differential entropy be used to interpret continuous systems?
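For the discrete side of this question, the Shannon entropy rate of a Markov chain has a concrete form: a weighted average of the per-state transition entropies under the stationary distribution. A minimal Python sketch (the two-state matrix is a made-up example, not from any particular system):

```python
import numpy as np

# Transition matrix of a two-state Markov chain (rows sum to 1).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()

# Entropy rate H = -sum_i pi_i sum_j P_ij log2 P_ij (bits per step).
H = -np.sum(pi[:, None] * P * np.log2(P))
```

For a continuous-state or continuous-time analogue, the log of a transition density replaces the log-probabilities, which is where the interpretation questions about differential entropy arise.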

House-selling is one of the typical Optimal Stopping problems. Offers come in daily for an asset, such as a house, that you wish to sell. Let Xi denote the amount of the offer received on day i. X1, X2, ... are independent random variables, uniformly distributed on the interval (0, 1). Each offer costs an amount C > 0 to observe. When you receive an offer Xi, you must decide whether to accept it or to wait for a better offer. The reward sequence depends on whether or not recall of past observations is allowed. If you may not recall past offers, then Di(X1,...,Xi) = Xi – i*C. If you are allowed to recall past offers, then Di(X1,...,Xi) = max(X1,...,Xi) – i*C. These tasks may be extended to an infinite horizon (i is unbounded). So, there are 4 different task statements:

- without recall, infinite horizon
- without recall, finite horizon
- with recall, infinite horizon
- with recall, finite horizon

The first three tasks are quite simple, but I was unable to prove the solution of the last task in strict form (although I did find a solution). If anyone knows its solution, please write it or send a link to an article where it is given. Thank you in advance.
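For the with-recall, finite-horizon case above, candidate threshold policies can at least be compared by Monte Carlo simulation as a sanity check on a conjectured solution. A rough sketch; the horizon, cost, and thresholds are illustrative assumptions, not the strict solution being asked for:

```python
import random

def simulate_with_recall(threshold, horizon, cost, trials=20000, seed=1):
    """Average reward of the policy: accept as soon as the best offer
    so far reaches `threshold`; forced to accept at the horizon."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        best = 0.0
        for i in range(1, horizon + 1):
            best = max(best, rng.random())   # offers are U(0, 1)
            if best >= threshold or i == horizon:
                total += best - i * cost     # reward max(X1..Xi) - i*C
                break
    return total / trials

# Compare a few thresholds for a 10-day horizon with cost C = 0.01.
rewards = {t: simulate_with_recall(t, horizon=10, cost=0.01)
           for t in (0.5, 0.7, 0.9)}
```

This does not prove optimality, but it quickly exposes a threshold that is clearly too low or too high.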

An original Markov process is described by a square n×n matrix M whose entries Mi,j satisfy the following two conditions:

i- All entries Mi,j are real and belong to the closed interval [0,1];

ii- The sum of the entries in each row (or each column) is equal to 1.

However, since Markov's time, many attempts to improve M, and hence Markov chains, have been suggested by adding one or two more conditions.

We suggest that the best improvement is to add one or two of the following physical conditions:

*iii- A constant residue after each time jump dt, i.e.,

Mi,i = RO, where RO is an element of [0,1], for all i = 1, 2, ..., n, which means that the main diagonal consists of constant entries.

For example, if RO is equal to 0, the matrix M has a null principal diagonal, which corresponds to the assumption of a null residue after each step or time jump dt for all the free nodes.

**iv- Symmetry condition:

Mi,j = Mj,i for all i, j.

Condition iv turns the stochastic transition matrix M into a doubly stochastic transition matrix, which is a stronger requirement than the original Markov condition, under which M is only singly stochastic.
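Conditions i-iv are easy to verify numerically for any candidate matrix; a quick sketch (the 3×3 matrix is a toy example with RO = 0):

```python
import numpy as np

# Toy matrix: symmetric, doubly stochastic, null principal diagonal.
M = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])

# Conditions i and ii: entries in [0,1], row sums equal to 1.
row_stochastic = (np.allclose(M.sum(axis=1), 1)
                  and np.all((M >= 0) & (M <= 1)))

# Condition iii with RO = 0: constant main diagonal.
constant_diagonal = np.allclose(np.diag(M), M[0, 0])

# Condition iv: symmetry, which forces column sums to 1 as well.
symmetric = np.allclose(M, M.T)
doubly_stochastic = row_stochastic and np.allclose(M.sum(axis=0), 1)
```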

Dear Researchers,

Is there any tool available for BDMP (Boolean logic Driven Markov Processes)? If so, kindly give details on how to obtain it.

When a Markov matrix is multiplied by an input to get an output, how are the inputs defined, and how are they different from the matrix coefficients? I want to predict the presence of something based on previous data.
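In the usual setup, the "input" is a probability vector describing the current state (estimated from your previous data), while the matrix coefficients are the fixed transition probabilities between states; multiplying propagates the distribution one step forward. A minimal sketch with made-up numbers:

```python
import numpy as np

# Transition matrix: P[i, j] = probability of moving from state i to j.
# States here (illustrative): 0 = "absent", 1 = "present".
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# Input: today's state distribution, e.g. present with probability 0.8.
x0 = np.array([0.2, 0.8])

# Output: predicted distribution after one step, and after two.
x1 = x0 @ P
x2 = x1 @ P
```

So the input changes with the data, while the matrix encodes the dynamics; the prediction for "presence" is just the corresponding entry of the propagated vector.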

Dear researchers,

Pardon me for being a novice here. In the attached image, eq. 3.1 represents the transition matrix (that part is clear). I am not able to comprehend eq. 3.2, alpha*P = alpha, or the further equations.

I have the P matrix with me as an outcome of one of my projects. How should I calculate alpha and its elements a1, a2, a3, ... etc.?

I would request some of your valuable guidance and help.
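The equation αP = α (together with the entries of α summing to 1) defines the stationary distribution of the chain: α is a left eigenvector of P for eigenvalue 1, and its elements a1, a2, a3, ... are the long-run probabilities of the states. Given a concrete P it can be computed by a linear solve; a sketch with an illustrative 3×3 matrix in place of your own P:

```python
import numpy as np

# Illustrative transition matrix (replace with your own P).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])

# Solve alpha @ P = alpha with sum(alpha) = 1: transpose to the
# standard form (P^T - I) alpha = 0, then replace one redundant
# equation with the normalization constraint.
n = P.shape[0]
A = P.T - np.eye(n)
A[-1, :] = 1.0            # last equation becomes sum(alpha) = 1
b = np.zeros(n)
b[-1] = 1.0
alpha = np.linalg.solve(A, b)
```

For an irreducible chain this system has a unique solution with all entries positive.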

I am trying to estimate the most likely number of clusters K using the MCMC (Markov chain Monte Carlo inference of clusters from genotype data) function in the Geneland R package by Gilles Guillot et al. I am a little bit confused when it comes to the varnpop and freq.model arguments.

In the package reference manual https://cran.r-project.org/web/packages/Geneland/Geneland.pdf one may read:

varnpop = TRUE **should not** be used in conjunction with freq.model = "Correlated".

On the other hand, another manual http://www2.imm.dtu.dk/~gigu/Geneland/Geneland-Doc.pdf recommends an example of MCMC usage which looks like this:

```r
MCMC(coordinates=coord, geno.dip.codom=geno,
     varnpop=TRUE, npopmax=10, spatial=TRUE,
     freq.model="Correlated", nit=100000,
     thinning=100, path.mcmc="./")
```

I am not sure how to reconcile these two contradictory pieces of information; any suggestions?

I'm trying to find an open-source sentiment analysis program based on Hidden Markov Models. Even a non-open-source one with good documentation would be great.

Thank you.

Hello everybody.

The reward is necessary to tell the machine (**agent**) which state-action pairs are good and which are bad. Please help me to understand the behavior of the **discount factor** on the **reward** in terms of **reinforcement learning**. What I don't understand is why the **discounted reward** is necessary. Why should it matter whether a good state is reached sooner rather than later?

I have more than 400 different events that occur over two years; some of them can occur 4,000 times and others no more than 50 times. These events are not equally spaced in time, e.g., one at 15:00:02, another at 15:10:45, and another at 15:45:56. I cannot ensure that these events are independent; they may be related. I want to analyze these events and try to find a pattern of events.

Type of data:

{timestamp1, event A (string value)}, {timestamp2, event B (string value)} -> Between timestamp1 and timestamp2 there is no event, and the gaps are not equally distributed. Event A could influence event B, or not.

I would like to know what type of methodology I can apply. I have been reading about SAX time series, Markov models, Hidden Markov models, DTW (Dynamic Time Warping), time warping for discrete events, renewal process models, and continuous-time Markov processes. However, I think these algorithms don't fit my problem.

I hope that someone can help me. Thanks in advance.
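If a first-order Markov chain over event types turns out to be a reasonable starting point despite the irregular timing, its transition probabilities can be estimated directly from the observed order of events by counting. A minimal sketch with a made-up event sequence (timestamps dropped, order preserved):

```python
from collections import Counter, defaultdict

# Illustrative event sequence; in practice, 400+ event types.
events = ["A", "B", "A", "A", "B", "C", "A", "B", "A", "C", "A", "B"]

# Count observed transitions between consecutive events.
counts = defaultdict(Counter)
for prev, nxt in zip(events, events[1:]):
    counts[prev][nxt] += 1

# Normalize counts into estimated transition probabilities.
probs = {s: {t: c / sum(row.values()) for t, c in row.items()}
         for s, row in counts.items()}
```

Unusually large entries in `probs` flag candidate "A influences B" pairs; a continuous-time or semi-Markov model would additionally use the gap lengths between events.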

I am going to develop a queueing model in which riders and drivers arrive with exponentially distributed inter-arrival times.

All the riders and drivers arriving in the system will wait for some amount of time until being matched.

The matching process will pair one rider with one driver and takes place every Δt unit of time (i.e., Δt, 2Δt, 3Δt, ⋯). Whichever side outnumbers the other, its exceeding portion will remain in the queue for the next round of matching.

The service follows the first come first served principle, and how they are matched in particular is not in the scope of this problem and will not affect the queue modelling.
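As a cross-check for any analytical formulation, the arrival-and-batch-matching dynamics described above can be simulated directly; a rough sketch, with placeholder rates and Δt (not the Markov model itself):

```python
import math
import random

def poisson_sample(rng, mean):
    """Knuth's method for a Poisson draw (fine for small means)."""
    l, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

def simulate_matching(lam_rider, lam_driver, dt, horizon, seed=0):
    """Poisson arrivals on both sides; every dt the shorter side is
    matched away and the excess carries over. Returns the average
    signed excess (riders minus drivers) at matching epochs."""
    rng = random.Random(seed)
    riders = drivers = 0
    excess_samples = []
    t = 0.0
    while t < horizon:
        # Arrivals in an interval of length dt are Poisson(lam * dt).
        riders += poisson_sample(rng, lam_rider * dt)
        drivers += poisson_sample(rng, lam_driver * dt)
        matched = min(riders, drivers)       # batch matching at k*dt
        riders -= matched
        drivers -= matched
        excess_samples.append(riders - drivers)
        t += dt
    return sum(excess_samples) / len(excess_samples)

avg_excess = simulate_matching(lam_rider=1.0, lam_driver=1.0,
                               dt=0.5, horizon=2000.0)
```

Sampled only at the matching epochs, the signed excess is exactly the state of a discrete-time chain, which is one way to bring Δt into the model.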

I tried to formulate it as a double-ended queue, where the state indicates the excess number in the system.

![image](https://i.stack.imgur.com/teSyW.png)

However, this formulation doesn't incorporate the factor Δt, so it is not in a batch-service fashion. I have no clue how I can build this Δt (somewhat like a **buffer**) into the model.

Hello all,

I have four stochastic matrices, and I want to estimate (or measure) the correlation between them.

Can the chi-squared test for independence be used? Tests based on distances, or what kind of tests?

Thanks

Without data collection, how can the properties of a Markov process be determined using a known probability distribution?

I have a system that is a Markov chain. It is in principle possible to calculate the transition matrix, given a set of parameters of the system. The calculation may take long, but it can be done for some reasonable sets of parameters. Sometimes the transition matrix I get is too big to be managed by my computer (it requires too much memory, though I didn't try saving it to my hard drive). Are there any methods of algorithmically manipulating large Markov chains "by pieces", so to speak? Or perhaps there are other methods of calculating various properties of systems that correspond to very large Markov chains?

In my case, I get absorbing Markov chains, and I would like to calculate various properties, like the probability of absorption or the expected number of steps until absorption, etc. Of course, there are appropriate formulae, but they are useless if you cannot carry out the computation.
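One memory-saving observation: the standard absorption formulas never need the fundamental matrix N = (I − Q)⁻¹ explicitly; a linear solve against the transient block Q suffices, and for very large chains the same systems can be solved iteratively in sparse storage. A dense sketch on a toy chain in canonical form (the numbers are illustrative):

```python
import numpy as np

# Canonical form P = [[Q, R], [0, I]]:
# states 0, 1 transient; one absorbing state.
Q = np.array([[0.5, 0.3],
              [0.2, 0.4]])
R = np.array([[0.2],
              [0.4]])

n = Q.shape[0]
I = np.eye(n)

# Expected steps to absorption t = N @ 1, computed as the solution
# of (I - Q) t = 1 rather than by inverting (I - Q).
t = np.linalg.solve(I - Q, np.ones(n))

# Absorption probabilities B = N @ R, solved the same way.
B = np.linalg.solve(I - Q, R)
```

With a single absorbing state every row of B is 1, which is a useful correctness check; with several absorbing states B gives the splitting probabilities.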

I am trying to locate the specific source code for the SAS program mentioned in the paper below on page 102:

Unfortunately the site is no longer working and the authors are not contactable. If anyone can help, I would be most grateful.

Knowing the structure will assist me in modeling situations where risk is a major factor.

Deep learning has proven to be beneficial for complex tasks such as classifying images. However, this approach has mostly been applied to static datasets.

Let's say we have a dataset of action sequences. There are 10 different actions, but for simplicity let's say we have only actions a1 and a2. The data are not stationary: for some time we have one distribution of action-sequence probabilities, and then another. For my task, an RBM can model a distribution of stationary subsequences very well. However, I want to switch to a new RBM as soon as the distribution properties change. Also, after training, I want to be able to find the cluster (RBM?) to which a given sequence of actions applies best. Is this realistic?

I have the raw idea that when the current RBM's performance begins to degrade significantly, we can switch to a new model. And we can choose the right RBM by the best performance; if none is good enough (it isn't clear how to measure this correctly), then create a new one.

Do you have any ideas or hints, maybe some working alternatives?

I am trying to use the Gilbert-Elliott correlated error model, a Markov model, in LTE-A to calculate the probability of packet loss. Like other researchers, I use a simplified equation to calculate this probability.

Is there any technical issue that prevents me from using this traditional method in LTE-A?
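For reference, the simplified steady-state loss probability of the two-state Gilbert-Elliott model follows directly from the chain's stationary distribution; a sketch with illustrative parameter values (not calibrated to LTE-A):

```python
# Gilbert-Elliott: a Good <-> Bad two-state Markov chain.
p = 0.05        # P(Good -> Bad), illustrative value
r = 0.40        # P(Bad -> Good), illustrative value
e_good = 0.001  # packet-loss probability in the Good state
e_bad = 0.50    # packet-loss probability in the Bad state

# Stationary probabilities of the two states.
pi_bad = p / (p + r)
pi_good = r / (p + r)

# Average packet-loss probability.
p_loss = pi_good * e_good + pi_bad * e_bad
```

The open question above is whether this stationary average remains valid once LTE-A retransmission and scheduling mechanisms are layered on top.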

One of the reviewers of our paper is of the view that:

**"Counting the number of non-deterministic transitions doesn't give you any quantitative information about time or throughput."**

However, PRISM has a whole MDP benchmark suite that deals mostly with (network) performance: http://www.prismmodelchecker.org/benchmarks/props-mdp.php

Do you agree or disagree with this comment? In particular, a good research paper or book for/against this comment would be highly appreciated.

Pavement condition deteriorates over time. Pavement maintenance decisions depend on the condition of the pavements and the availability of funds. Predicting the future condition of pavements is necessary to manage pavement rehabilitation.

There is some confusion in defining the repair rate. Richard Brown's book, Power Distribution Reliability, says that the repair rate is the inverse of the repair duration, defined as the mean time needed to repair the system from the moment it fails (page 164). But in the Markov process example (page 204), he uses the repair rate as the rate of transition from the switched states to the normal state (1b to 0 and 2b to 0), not from the failed states to the normal state (1a to 0 and 2a to 0). I attached a screenshot from his book.

I am trying to analyse the dynamic changes in the pattern of trade specialization using transition matrices, but I have no idea how to estimate the initial probabilities and the subsequent one-step or n-step transition matrices for four states. My data consist of a revealed comparative advantage index. I am particularly trying to understand and apply the method used in the paper attached herewith.
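A common approach (whether it matches the attached paper is not certain) is to discretize the RCA index into the four states, count observed year-to-year transitions, and take row-normalized counts as the maximum-likelihood one-step matrix; n-step matrices are then matrix powers. A sketch with made-up state sequences:

```python
import numpy as np

# Made-up state sequences (one per product), states coded 0..3.
sequences = [[0, 0, 1, 2, 2, 3],
             [1, 1, 0, 0, 1, 2],
             [3, 2, 2, 1, 1, 0]]

K = 4
counts = np.zeros((K, K))
for seq in sequences:
    for a, b in zip(seq, seq[1:]):
        counts[a, b] += 1

# Maximum-likelihood one-step matrix: row-normalized counts.
P1 = counts / counts.sum(axis=1, keepdims=True)

# n-step transition matrix is the matrix power of the one-step one.
P3 = np.linalg.matrix_power(P1, 3)

# Initial probabilities: empirical distribution of first observations.
init = np.bincount([s[0] for s in sequences], minlength=K) / len(sequences)
```

With real data, any state that is never observed as a starting point needs special handling (its count row is zero and cannot be normalized).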

The measurement data are steering angle and steering-angle velocity, and from these data the driving maneuver should be recognized (the maneuvers are lane change to the left and right, turn left and right, and lane keeping).

Please suggest some good methods.

I want to develop a Markov decision process model in which risk is a factor in the model formulation.

If the Markov and martingale properties both hold for Brownian motion, what is a Wiener process then? And how is the Markov property different from the martingale property?

X(t) = A(t)N, where:

A(t) is a binary Markov process;

N is a white Gaussian variable.

Is the product of A(t) and N a spherically invariant process? Thanks.

For the M/M/1 queueing-inventory system with NO more than one outstanding replenishment, Professor Daduna et al. gave deep insight by deriving the necessary and sufficient conditions for a product-form solution. In practice, queueing-inventory systems with more than one outstanding replenishment are often seen. How can I model them?

Hi, I am doing research on ad hoc networks. I want to analyze the behaviour of nodes, so I have planned to use a Markov chain to monitor the states of the nodes. If anyone is doing research related to this, please share your comments.

Do you think it is a good or bad idea to use a partially observable Markov decision process (POMDP) planner instead of a plan library in the belief-desire-intention (BDI) architecture? The advantage of a POMDP planner could be that plans are more applicable. But the disadvantage is the complexity of POMDP planning. It would be nice to allow the BDI architecture to deal with partially observable domains.

Otherwise, here is my specific problem, I would also be interested in collaborating:

Is there any possibility to represent a probability distribution on a set of exclusive hypotheses (let us say a set of classes or system states w_1, w_2, ..., w_K such that

p(w_1) + p(w_2) + ... + p(w_K) = 1) by a single parameter (that preserves continuity)? This is possible via some parametric functions in the case K = 2. Is there any possibility for higher values of K?

Finding a continuous bijection between R and [0,1]^p would solve the problem.

I'm looking for code to estimate time-varying transition probabilities in Markov-switching GARCH models for an empirical study.

I want to know whether a Markov process far from equilibrium corresponds to a non-equilibrium thermodynamic process, or whether the two merely have something in common.