Article

Markov Chains

... The chemical master equation (CME) is a stochastic model that describes how the probability distribution of molecular counts of chemical species in a reacting system varies as a function of time [7,11]. The CME describes a continuous-time Markov jump process: each state represents the molecular counts of all component species, and transitions between states correspond to changes in molecular counts via chemical reactions [12]. Analytic solutions to the CME are important in biological engineering for a number of reasons. ...
... The gluing technique has a number of potential advantages over existing methods. The simplest approach that one might take to compute an analytic stationary distribution of a continuous-time Markov jump process is to solve for a left null vector of the transition rate matrix [12]. However, the dimension of the matrix is almost always infinite, often making the calculation exceedingly difficult. ...
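As an illustration of the left null vector computation mentioned in this excerpt, here is a minimal sketch for a finite chain (the 3-state rate matrix is invented for the example; as the excerpt notes, real state spaces are usually infinite, which is what makes the calculation hard):

```python
import numpy as np

# Hypothetical 3-state continuous-time Markov jump process; each row of the
# transition rate matrix Q sums to zero, off-diagonal entries are jump rates.
Q = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -3.0,  2.0],
              [ 0.5,  0.5, -1.0]])

# A stationary distribution pi satisfies pi Q = 0 with sum(pi) = 1, i.e. pi is
# a left null vector of Q normalised to a probability vector.
eigvals, eigvecs = np.linalg.eig(Q.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals))])
pi = pi / pi.sum()

print(pi)        # stationary distribution
print(pi @ Q)    # approximately zero, confirming stationarity
```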
... This set of state spaces provides a natural starting point for studying Mélykúti et al.'s gluing technique [13,20] for two main reasons. First, analytic solutions are already known for the stationary distributions on path-like and circular state spaces [12,27,28]. Second, one-vertex gluing involves only simple arithmetic and is computationally efficient. ...
Preprint
Full-text available
Noise is often indispensable to key cellular activities, such as gene expression, necessitating the use of stochastic models to capture its dynamics. The chemical master equation (CME) is a commonly used stochastic model that describes how the probability distribution of a chemically reacting system varies with time. Knowing analytic solutions to the CME can have benefits, such as expediting simulations of multiscale biochemical reaction networks and aiding the design of distributional responses. However, analytic solutions are rarely known. A recent method of computing analytic stationary solutions relies on gluing simple state spaces together recursively at one or two states. We explore the capabilities of this method and introduce algorithms to derive analytic stationary solutions to the CME. We first formally characterise state spaces that can be constructed by performing single-state gluing of paths, cycles or both sequentially. We then study stochastic biochemical reaction networks that consist of reversible, elementary reactions with two-dimensional state spaces. We also discuss extending the method to infinite state spaces and designing stationary distributions that satisfy user-specified constraints. Finally, we illustrate the aforementioned ideas using examples that include two interconnected transcriptional components and chemical reactions with two-dimensional state spaces. Subject Areas Systems biology, synthetic biology, biomathematics, bioengineering
... Markov chains are mathematical models that probabilistically describe the uncertain behaviour of a dynamical system [19]. We here consider Markov chains that can only be in a finite number of states, and that can only change state at discrete steps in time. ...
... Informally, their aim is to answer the questions "How long will it take until the system enters a state in A?" and "What is the probability of ever visiting a state in A?", respectively. Under some regularity conditions, closed-form solutions to these questions are available in the literature [19,9]. ...
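The closed-form solutions alluded to here are, for finite chains, just linear systems. A minimal sketch (the 4-state transition matrix and target set are invented; for finite chains in which the target set is reachable, the restricted system below is nonsingular):

```python
import numpy as np

# Hypothetical 4-state discrete-time Markov chain; state 3 is the target set A.
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.2, 0.3, 0.5, 0.0],
              [0.0, 0.3, 0.3, 0.4],
              [0.0, 0.0, 0.0, 1.0]])
A = [3]
B = [i for i in range(len(P)) if i not in A]   # states outside the target set

# Hitting probabilities: h = 1 on A and h = P h elsewhere, so restricting to
# the complement B gives (I - P_BB) h_B = P_BA 1.
P_BB = P[np.ix_(B, B)]
P_BA = P[np.ix_(B, A)]
h_B = np.linalg.solve(np.eye(len(B)) - P_BB, P_BA.sum(axis=1))

# Expected hitting times: k = 0 on A and k = 1 + P k elsewhere, i.e.
# (I - P_BB) k_B = 1.
k_B = np.linalg.solve(np.eye(len(B)) - P_BB, np.ones(len(B)))

print(h_B)   # probability of ever reaching A from states 0, 1, 2
print(k_B)   # expected number of steps to reach A from states 0, 1, 2
```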
... Since h*_A satisfies (19), it clearly is a solution to (13). Hence, by Lemma 8 and Proposition 10, it holds that ...
Preprint
Full-text available
We consider the problem of characterising expected hitting times and hitting probabilities for imprecise Markov chains. To this end, we consider three distinct ways in which imprecise Markov chains have been defined in the literature: as sets of homogeneous Markov chains, as sets of more general stochastic processes, and as game-theoretic probability models. Our first contribution is that all these different types of imprecise Markov chains have the same lower and upper expected hitting times, and similarly the hitting probabilities are the same for these three types. Moreover, we provide a characterisation of these quantities that directly generalises a similar characterisation for precise, homogeneous Markov chains.
... In Section 3, we develop techniques to calculate mean absorption and conditional mean absorption times for a CTMC via the fundamental matrix of the embedded DTMC. The general theory of both continuous and discrete time Markov chains can be found in Allen (2003), Karlin and Taylor (1963) and Norris (1997). Consider a general time homogeneous CTMC on a countable state space with initial distribution π. ...
... As a result of the properties of the equivalence relation of communication and the memoryless property of Markov chains, if X_n ∈ C_i for some i and n, then the long-term behavior of X_n is determined by the theory of finite irreducible Markov chains. This theory can be found in many books including Allen (2003), Karlin and Taylor (1963), Kemeny and Snell (1976) and Norris (1997) and is not the focus of this article. However, in Section 4, we will extend the results of Section 2 to finite DTMCs with at least one equivalence class comprised of transient states and more than one closed equivalence class. ...
... Together, these facts mean that the chain eventually behaves like an irreducible chain. Irreducible (or regular) chains are covered in many sources including Allen (2003), Karlin and Taylor (1963), Kemeny and Snell (1976) and Norris (1997), but are not the focus here. Instead we will focus on the mean time to absorption in the set of closed equivalence classes and the conditional mean time to absorption in one or more closed classes. ...
Article
A proof of a general theorem for the calculation of conditional mean duration of a finite absorbing discrete time Markov chain is presented. In the simplest case, this result is equivalent to one suggested in the book of Kemeny and Snell (1976). In addition, we prove that the mean duration and mean conditional duration of a finite absorbing continuous time Markov chain can be calculated via the fundamental matrix of the embedded discrete time chain. These results are also extended to certain non-absorbing Markov chains. Applications are presented to illustrate the utility of these results.
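For a finite absorbing chain, the fundamental-matrix computations discussed in this abstract take only a few lines. A sketch with invented numbers (the conditional-duration formula used at the end is the standard textbook construction, not necessarily the exact statement proved in the article):

```python
import numpy as np

# Hypothetical absorbing DTMC in canonical form: transient states {0, 1},
# absorbing states {2, 3}; P = [[Q, R], [0, I]].
Q = np.array([[0.4, 0.3],
              [0.2, 0.5]])
R = np.array([[0.2, 0.1],
              [0.1, 0.2]])

N = np.linalg.inv(np.eye(2) - Q)   # fundamental matrix: expected visits to transient states
t = N @ np.ones(2)                 # mean time to absorption from each transient state
B = N @ R                          # probability of absorption in each absorbing state

# Conditional mean duration given absorption in a particular absorbing state j:
# E[time | absorbed in j, start in i] = (N @ B)[i, j] / B[i, j].
t_cond = (N @ B) / B

print(t)       # unconditional mean absorption times
print(B)       # absorption probabilities
print(t_cond)  # conditional mean absorption times
```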
... Considering the Markov chain as column stochastic, irreducible and aperiodic, the steady-state probability vector is obtained as [204]: (5.29) where π = [π_1, ..., π_{(L+1)^K}]^T, b = [1, 1, ..., 1]^T, B_ij = 1 for all i, j, and I ∈ R^{(L+1)^K × (L+1)^K} is the identity matrix. According to the construction of the Markov chain, an outage occurs when there is no change in the buffer status. ...
... The steady state probability vector π for this matrix is [204], ...
... The Markov matrix A, of an aperiodic and irreducible Markov chain, is aperiodic, irreducible and column stochastic [67]. The steady-state probability vector defined as π is computed using [204], ...
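Equation (5.29) itself is not reproduced in these excerpts, but a closed form consistent with the quantities they define (column-stochastic A, all-ones matrix B, all-ones vector b) is π = (A − I + B)⁻¹ b; whether this is exactly the cited formula is an assumption. A sketch with an invented 3-state chain:

```python
import numpy as np

# Hypothetical column-stochastic transition matrix (columns sum to one),
# assumed irreducible and aperiodic as in the excerpt.
A = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.6, 0.3],
              [0.2, 0.2, 0.4]])
n = A.shape[0]
B = np.ones((n, n))    # B_ij = 1 for all i, j
b = np.ones(n)         # b = [1, ..., 1]^T

# The steady-state vector satisfies A pi = pi and 1^T pi = 1; combining the
# two conditions gives the single nonsingular system (A - I + B) pi = b.
pi = np.linalg.solve(A - np.eye(n) + B, b)

print(pi)           # steady-state probability vector
print(A @ pi - pi)  # approximately zero
```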
Thesis
In this thesis, a suite of schemes is presented to enhance the performance of cooperative communication networks. In particular, techniques to improve the outage probability, end-to-end delay and throughput performances are presented. Firstly, buffer-aided cooperative communication is studied and analyzed for packet selection and relay selection. A three-node network is considered in the beginning and the phenomenon of packet diversity is taken into consideration to overcome bad channel conditions of the source to relay (SR) and relay to destination (RD) links. The closed-form expressions for the computation of the outage probability along with the delay, throughput and diversity gain are derived. Then, packet selection is studied along with relay selection for buffer-aided amplify and forward (AF) cooperative relaying networks. The proposed protocol is analyzed for both symmetric and asymmetric channel conditions and buffer size using multiple antennas at relays and compared against the existing buffer-aided schemes. A Markov chain (MC) is used to derive the closed-form expressions for outage probability, diversity gain, delay and throughput. Next, the performance of the SNR-based hybrid decode-amplify-forward relaying protocol is observed. When the SR link is the strongest, data is transmitted to the selected relay and checked against the predefined threshold at the relay. If it is greater than the threshold, data is decoded and stored in the corresponding buffer. Otherwise, it is amplified and stored in the respective buffer. When the RD link is the strongest, data is transmitted to the destination. An MC-based theoretical framework is used to derive an expression for the outage probability, the average end-to-end delay and throughput. Then, relay selection schemes considering the instantaneous link quality along with buffer status in the relay selection are proposed. A scheme is proposed that simultaneously considers buffer status and link quality. Then, we discuss multiple links with equal weights using a general relay selection factor. It includes the weight of the link as the first metric and the link quality, or priority, as the second metric for different cases of the same weight. The proposed scheme is evaluated for symmetric and asymmetric channel conditions. Moreover, we propose a specific parameter, termed the buffer-limit, which controls the selection of SR or RD links and also has an impact on the average delay and throughput. In this scheme, the outage probability is traded with the average end-to-end queuing delay or the average throughput by adjusting the values of the buffer-limit. The MC-based framework is employed to derive the closed-form expressions for the outage probability, average end-to-end queuing delay and the average throughput. The suggested schemes are compared to the existing buffer-aided relay selection schemes. Lastly, we consider the energy-constrained cooperative network and propose a generalized approach to study the performance of energy harvesting relaying schemes. The unified modeling of the generalized energy harvesting relaying (GEHR) scheme covers the non-energy harvesting schemes and the well-known energy harvesting schemes, i.e., time switching based relaying (TSR) and power splitting based relaying (PSR). Moreover, the scheme also covers the hybrid of both TSR and PSR schemes.
The closed-form expressions for the outage probability, ergodic capacity and average throughput are formulated for non-mixed Rayleigh fading and mixed Rayleigh-Rician fading channels. Each case is analyzed for AF and decode and forward relaying models. Comprehensive Monte-Carlo simulations confirm all theoretical results.
... Markov chains (MC) are a powerful modelling formalism for describing behavioural properties of systems with simple primitives [22]. The formalism was proposed in the early 20th century; however, it was only applied in the context of time-shared systems by the mid-1960s, at MIT, for scalability purposes [28,31]. ...
... A CTMC is a stochastic process having the Markov property [22]. This property is also known as the memoryless property and it is usually defined as: ...
Preprint
Full-text available
Cyber-Physical Systems (CPS) are present in many settings addressing a myriad of purposes. Examples are Internet-of-Things (IoT) or sensing software embedded in appliances or even specialised meters that measure and respond to electricity demands in smart grids. Due to their pervasive nature, they are usually chosen as recipients for larger scope cyber-security attacks. Those promote system-wide disruptions and are directed towards one key aspect such as confidentiality, integrity, availability or a combination of those characteristics. Our paper focuses on a particular and distressing attack where coordinated malware-infected IoT units are maliciously employed to synchronously turn on or off high-wattage appliances, affecting the grid's primary control management. Our model could be extended to larger (smart) grids, Active Buildings as well as similar infrastructures. Our approach models Coordinated Load-Changing Attacks (CLCA), also referred to as GridLock or BlackIoT, against a theoretical power grid containing various types of power plants. It employs Continuous-Time Markov Chains where elements such as Power Plants and Botnets are modelled under normal or attack situations to evaluate the effect of CLCA on power-reliant infrastructures. We showcase our modelling approach in the scenario of a power supplier (e.g. power plant) being targeted by a botnet. We demonstrate how our modelling approach can quantify the impact of a botnet attack and be abstracted for any CPS involving power load management in a smart grid. Our results show that by prioritising the type of power plants, the impact of the attack may change: in particular, we find the most impactful attack times and show how different strategies affect their success. We also find the best power generator to use depending on the current demand and strength of attack.
... The theory regarding stationary distributions and the long-term behaviour of continuous-time chains is classical. Yet standard texts (e.g., [111,8,16,6]) on this subject assume irreducibility of the chain, an assumption which guarantees a unique stationary distribution. This condition is often difficult to verify in practice or not met in applications [113,84,57]. ...
... Proof. Part (i) can be found in any textbook on Markov chains (e.g., [6,8,111]). For a proof of (ii), see [80,Theorem 2.44 (ii)]. ...
Preprint
Full-text available
Computing the stationary distributions of a continuous-time Markov chain involves solving a set of linear equations. In most cases of interest, the number of equations is infinite or too large, and cannot be solved analytically or numerically. Several approximation schemes overcome this issue by truncating the state space to a manageable size. In this review, we first give a comprehensive theoretical account of the stationary distributions and their relation to the long-term behaviour of the Markov chain, which is readily accessible to non-experts and free of irreducibility assumptions made in standard texts. We then review truncation-based approximation schemes paying particular attention to their convergence and to the errors they introduce, and we illustrate their performance with an example of a stochastic reaction network of relevance in biology and chemistry. We conclude by elaborating on computational trade-offs associated with error control and some open questions.
... The proof of this classical upper bound is left to the reader; it is essentially based on the optimal stopping theorem and on the monotone convergence theorem (see, for instance, [7], p. 139). ...
... A sample of the algorithm for the O.U. exit time with parameters θ = 0.1 and σ = 1. We observe the diffusion process starting at x = 2 in the interval [2, 7] with ε = 10^{-3} and γ = 10^{-6}.
Preprint
In order to approximate the exit time of a one-dimensional diffusion process, we propose an algorithm based on a random walk. Such an algorithm, the so-called Walk on Moving Spheres, was already introduced in the Brownian context. The aim is therefore to generalize this numerical approach to the Ornstein-Uhlenbeck process and to describe the efficiency of the method.
... It is well known that one can obtain many graph properties from the Laplacian or its pseudo-inverse, e.g., the average length h(i, j) of a random walk starting from node i before reaching node j and the average round-trip commute time c(i, j) [18], [23], [12], [15], [19], [20], [4] (even for strongly connected directed graphs): ...
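For the undirected case, the commute-time quantity mentioned here reduces to a few lines with the Laplacian pseudo-inverse (the graph below is invented; the cited results also cover strongly connected directed graphs, which need a more careful treatment):

```python
import numpy as np

# Hypothetical undirected graph on 4 nodes, given by its adjacency matrix.
Adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 1],
                [1, 1, 0, 1],
                [0, 1, 1, 0]], dtype=float)
deg = Adj.sum(axis=1)
L = np.diag(deg) - Adj          # graph Laplacian
L_pinv = np.linalg.pinv(L)      # Moore-Penrose pseudo-inverse
vol = deg.sum()                 # twice the number of edges

def commute_time(i, j):
    # Average round-trip commute time c(i, j) of a random walk between i and j.
    return vol * (L_pinv[i, i] + L_pinv[j, j] - 2 * L_pinv[i, j])

print(commute_time(0, 3))
```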
Preprint
The Laplacian matrix and its pseudo-inverse for a strongly connected directed graph is fundamental in computing many properties of a directed graph. Examples include random-walk centrality and betweenness measures, average hitting and commute times, and other connectivity measures. These measures arise in the analysis of many social and computer networks. In this short paper, we show how a linear system involving the Laplacian may be solved in time linear in the number of edges, times a factor depending on the separability of the graph. This leads directly to the column-by-column computation of the entire Laplacian pseudo-inverse in time quadratic in the number of nodes, i.e., constant time per matrix entry. The approach is based on "off-the-shelf" iterative methods for which global linear convergence is guaranteed, without recourse to any matrix elimination algorithm.
... Proposition 3.2.2 can be seen as a generalized version of the classical ergodic theorem, which is also proved using a decomposition into excursions (see, for example, [51]). ...
Thesis
The present work is intended as a contribution to extending the range of applications of rough path theory through the study of the convergence of discrete processes, which offers a new perspective on several problems arising in classical stochastic calculus. We study convergence in the rough topology, first of Markov chains on periodic graphs, then of hidden Markov walks, and this change of framework provides additional information about the limit through the area anomaly, which is invisible in the uniform topology. We aim to show that the usefulness of this object extends beyond the setting of differential equations. We also show how the rough path framework encodes the way a discrete model is embedded into the space of continuous functions, and that the limits of different embeddings can be distinguished precisely thanks to the area anomaly. We then define iterated occupation times for a Markov chain and show, using iterated sums, that they give a combinatorial structure to hidden Markov walks. We propose a construction of rough paths via iterated sums and compare it to the classical construction, based on iterated integrals, obtaining in the limit two different types of rough paths, non-geometric and geometric respectively. Finally, we illustrate the computation and construction of the area anomaly and give some additional results on the convergence of iterated sums and iterated occupation times.
... A Markov chain is a probabilistic model encoding a sequence of possible events: the probability of each one of them depends only on the state attained in the previous event [30]. ...
Conference Paper
Due to its hereditary nature, genomic data is not only linked to its owner but to that of close relatives as well. As a result, its sensitivity does not really degrade over time; in fact, the relevance of a genomic sequence is likely to be longer than the security provided by encryption. This prompts the need for specialized techniques providing long-term security for genomic data, yet the only available tool for this purpose is GenoGuard (Huang et al., 2015). By relying on Honey Encryption, GenoGuard is secure against an adversary that can brute force all possible keys; i.e., whenever an attacker tries to decrypt using an incorrect password, she will obtain an incorrect but plausible looking decoy sequence. In this paper, we set out to analyze the real-world security guarantees provided by GenoGuard; specifically, we assess how much more information access to a ciphertext encrypted using GenoGuard yields, compared to one that was not. Overall, we find that, if the adversary has access to side information in the form of partial information from the target sequence, the use of GenoGuard does appreciably increase her power in determining the rest of the sequence. We show that, in the case of a sequence encrypted using an easily guessable (low-entropy) password, the adversary is able to rule out most decoy sequences, and obtain the target sequence with just 2.5% of it available as side information. In the case of a harder-to-guess (high-entropy) password, we show that the adversary still obtains, on average, better accuracy in guessing the rest of the target sequences than using state-of-the-art genomic sequence inference methods, obtaining up to 15% improvement in accuracy.
... The modelled economic evaluation will simulate the impact of increased physical activity and movement skill competency on overall well-being over the lifetime of the cohort compared with usual practice. A Markov model [72] consisting of health states associated with different levels of physical activities/ movement skill competency will be used to accrue costs and benefits over the time horizon. The long-term improved outcome may translate into cost savings which offset the increased cost associated with the implementation of iPLAY project. ...
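A cohort-style Markov model of the kind described in this excerpt can be sketched in a few lines; all states, transition probabilities, costs and utilities below are invented for illustration and are not taken from the iPLAY evaluation:

```python
import numpy as np

# Illustrative 3-state cohort Markov model: "active", "inactive", "dead".
P = np.array([[0.85, 0.13, 0.02],   # yearly transition probabilities
              [0.10, 0.85, 0.05],
              [0.00, 0.00, 1.00]])
cost    = np.array([100.0, 400.0, 0.0])   # yearly cost per state
utility = np.array([0.95, 0.80, 0.0])     # yearly quality-of-life weight

dist = np.array([1.0, 0.0, 0.0])          # cohort starts in the "active" state
discount, total_cost, total_qaly = 0.03, 0.0, 0.0
for year in range(60):                    # time horizon of the model
    total_cost += (dist @ cost) / (1 + discount) ** year
    total_qaly += (dist @ utility) / (1 + discount) ** year
    dist = dist @ P                       # advance the cohort by one Markov cycle

print(total_cost, total_qaly)             # discounted costs and QALYs accrued
```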
... Condition (H1) ensures the existence of a time τ such that, with positive probability and uniformly with respect to any initial positions x and y in X , the trajectories issued from x intersect at time τ the trajectories issued from y at some random times s ∈ [0, τ ]. Even though they are not comparable in general, this path crossing condition has connections with the notion of irreducibility of Markov processes, which means basically that for any x and y there is a deterministic time at which the trajectories issued from x reach y with positive probability [12]. ...
Preprint
We propose a simple criterion, inspired by irreducible aperiodic Markov chains, to derive the exponential convergence of general positive semi-groups. When it is not checkable on the whole state space, it can be combined with the use of Lyapunov functions. It differs from the usual generalization of irreducibility and is based on the accessibility of the trajectories of the underlying dynamics. It allows us to obtain new existence results for principal eigenelements, and their exponential attractiveness, for a nonlocal selection-mutation population dynamics model defined in a space-time varying environment.
... A Markov chain is a probabilistic model encoding a sequence of possible events: the probability of each one of them depends only on the state attained in the previous event [30]. ...
Preprint
Full-text available
Due to its hereditary nature, genomic data is not only linked to its owner but to that of close relatives as well. As a result, its sensitivity does not really degrade over time; in fact, the relevance of a genomic sequence is likely to be longer than the security provided by encryption. This prompts the need for specialized techniques providing long-term security for genomic data, yet the only available tool for this purpose is GenoGuard (Huang et al., 2015). By relying on Honey Encryption, GenoGuard is secure against an adversary that can brute force all possible keys; i.e., whenever an attacker tries to decrypt using an incorrect password, she will obtain an incorrect but plausible looking decoy sequence. In this paper, we set out to analyze the real-world security guarantees provided by GenoGuard; specifically, we assess how much more information access to a ciphertext encrypted using GenoGuard yields, compared to one that was not. Overall, we find that, if the adversary has access to side information in the form of partial information from the target sequence, the use of GenoGuard does appreciably increase her power in determining the rest of the sequence. We show that, in the case of a sequence encrypted using an easily guessable (low-entropy) password, the adversary is able to rule out most decoy sequences, and obtain the target sequence with just 2.5% of it available as side information. In the case of a harder-to-guess (high-entropy) password, we show that the adversary still obtains, on average, better accuracy in guessing the rest of the target sequences than using state-of-the-art genomic sequence inference methods, obtaining up to 15% improvement in accuracy.
... The following overview of Markov chain theory is based on [41,42]. Let I be a countable set, I = {i, j, k, ...}, where each i ∈ I is a state and the set I is called the state space. In addition, a probability space (Ω, F, P) is defined, where Ω is a set of outcomes, F is a set of subsets of Ω and, for A ∈ F, P(A) is the probability of A. Further, a row vector λ = (λ_i, i ∈ I) is called a measure if λ_i ≥ 0 for all i ∈ I. ...
Thesis
Full-text available
The current work is devoted to the mathematical modelling of the development of fish respiratory organs, called gills or branchiae. The model organism chosen for the task is the Japanese rice fish (Oryzias latipes), more colloquially known as medaka. Their gills are analysed in the attempt to answer three main developmental questions via mathematical modelling, with possible applications beyond the scope of this thesis. Firstly, how many stem cells are needed to build the organ? What kind of heterogeneities exist among these stem cells? And, finally, what properties and relations with each-other do these stem cells have, that give the organ its shape? Relying on experimental data from our collaborators in the group of Prof. Lazaro Centanin, Centre for Organismal Studies, Heidelberg University, we use a variety of methods to study the aforementioned aspects. These methods were selected, adapted and developed based on the goal of each project and on the available data. Thus, a combination of stochastic and deterministic techniques are employed throughout the thesis, including Gillespie-type simulations, Markov chains theory and compartmental models. The study of stem cell numbers and heterogeneities is approached via stochastic simulations extended from the algorithm of Gillespie, and further improved by Markov chains methods. Results suggest that not only very few stem cells are sufficient to build and maintain the organ but, more importantly, these stem cells are heterogeneous in their division behaviour. In particular, they rely on alternating activation and quiescence phases, such that once a stem cell has divided, it becomes activated and divides multiple times before allowing another one to take the lead. For the study of growth and shape of gills, multiple deterministic models based on different assumptions and investigating various hypotheses have been developed. All these models have a compartmental structure, with increasing number of compartments governed by indicator functions which, in turn, depend on explicit or implicit algebraic equations. For each model, the existence, uniqueness and non-negativity of solutions are proved, the analytical solutions are found and their regularity is discussed. The models are compared based on their ability to reproduce part of the data, and the best one is selected. The chosen model is then applied to further data and speculations on hypotheses supporting the model are made. Results suggest that the main stem cell types, responsible for growing the organ, slow down their proliferation in time, either due to ageing or to the lack of sufficient nutrients. The main results and strengths of this thesis consist of the high variety of models developed and methods employed, their capability to answer important biological questions and, even more, to uncover new insights on mechanisms previously unknown.
... By Markov's inequality and the proof of [15,Theorem 1.8.3], we have ...
Preprint
Using a recent breakthrough of Smith, we improve the results of Fouvry and Klüners on the solubility of the negative Pell equation. Let $\mathcal{D}$ denote the set of fundamental discriminants having no prime factors congruent to $3$ modulo $4$. Stevenhagen conjectured that the density of $D$ in $\mathcal{D}$ such that the negative Pell equation $x^2-Dy^2=-1$ is solvable with $x,y\in\mathbb{Z}$ is $58.1\%$, to the nearest tenth of a percent. By studying the distribution of the $8$-rank of narrow class groups $\mathrm{CL}^+(D)$ of $\mathbb{Q}(\sqrt{D})$, we prove that the infimum of this density is at least $53.8\%$.
... Since the state transitions of the FSM depend only on the last input (ACK or NACK) from the Destination, and since these inputs are associated with probabilities, this FSM has the Markov chain property [Norris 1997]. That is, knowing that the decoding result at the receiver depends on the channel conditions, and assuming that the channels S-D, S-R and R-D are independent, we can evaluate the average probabilities that the decoder issues a NACK. ...
Thesis
Full-text available
Nowadays, mobile communications are characterized by a fast-increasing demand for internet-based services (voice, video, data). Video services constitute a large fraction of the internet traffic today. According to a report by Cisco, 75% of the world's mobile data traffic will be video-based by 2020. This ever-increasing demand for delivering internet-based services has been the main driver for the development of the 4G digital cellular network, where packet-switched services are the primary design target. In particular, the overall system needs to ensure high peak data rates to the user and low delay in the delivery of the content, in order to support real-time applications such as video streaming and gaming. This has motivated, in the last decade, a renewed and rising interest and research in wireless radio access technology. The wireless channel suffers from various physical phenomena like path loss, shadowing, fading, interference, etc. In the most recent technologies, these effects are countered using the Automatic Repeat reQuest (ARQ) protocol, which consists of the retransmission of the same signal from the same node. The ARQ protocol is usually combined with channel codes at the physical layer, which is known as the Hybrid Automatic Repeat reQuest (HARQ) protocol. Another improvement for communications over wireless channels is achieved when Relays are used as intermediate nodes for helping the communication between a Source and a Destination, which is known as cooperative communication. Both techniques, cooperation and HARQ, if individually applied, significantly improve the performance of the communication system. One open question is whether their combination would bring the sum of the singular improvements, or be only marginally beneficial. In the literature we can find many studies of the combination of these two techniques, but in our thesis we focus mainly on this interaction at the level of the physical layer (PHY) and the medium access control layer (MAC). We use example protocols on a network of three nodes (Source, Destination and Relay). For the theoretical analysis of these systems we focus on Finite State Markov Chains (FSMC). We discuss the case where the Relay works in Decode-and-Forward (DCF) mode, which is very common in the literature, but our analysis focuses more strongly on the case where the Relay works in Demodulate-and-Forward (DMF) mode, because of its simplicity of implementation and its efficiency. This case is much more rarely addressed in the available literature, because of the higher complexity required by its analysis. Usually, the interaction between the two techniques has been studied using deterministic protocols, but in our analysis we will focus on both deterministic and probabilistic protocols. So far, probabilistic protocols, where the retransmitting node is chosen with a given probability, have been mainly proposed for higher layers of communication systems; in contrast, this thesis studies probabilistic protocols at the physical layer and MAC layer, which give more insight into the analysis and performance optimization. The probabilistic protocols contain very few parameters (only 2) that can be optimized for best performance. Note that these parameters can be computed to mimic the behavior of a given deterministic protocol, and the result of the probabilistic protocol after optimization can only improve over this one.
Moreover, the performance of our optimized probabilistic protocol is checked against results from the literature, and the comparison shows that our protocol performs better. Finally, the issue of relay selection is also discussed. In a scenario with several candidate Relays, we propose a criterion for choosing the best Relay. The performance obtained with this criterion is compared to that obtained with the reference criteria in the literature.
... Then, event values with the continuous parameter space can be estimated by solving the Markov chain with a machine learning model. Here, a Markov chain [25] refers to a chain of states in which the probability of each state depends only on the previous state. This property is called the Markov property. ...
Preprint
Full-text available
In team-based invasion sports such as soccer and basketball, analytics is important for teams to understand their performance and for audiences to understand matches better. The present work focuses on performing visual analytics to evaluate the value of any kind of event occurring in a sports match with a continuous parameter space. Here, the continuous parameter space involves the time, location, score, and other parameters. Because the spatiotemporal data used in such analytics is a low-level representation and has a very large size, however, traditional analytics may need to discretize the continuous parameter space (e.g., subdivide the playing area) or use a local feature to limit the analysis to specific events (e.g., only shots). These approaches make evaluation impossible for any kind of event with a continuous parameter space. To solve this problem, we consider a whole match as a Markov chain of significant events, so that event values can be estimated with a continuous parameter space by solving the Markov chain with a machine learning model. The significant events are first extracted by considering the time-varying distribution of players to represent the whole match. Then, the extracted events are redefined as different states with the continuous parameter space and built as a Markov chain so that a Markov reward process can be applied. Finally, the Markov reward process is solved by a customized fitted-value iteration algorithm so that the event values with the continuous parameter space can be predicted by a regression model. As a result, the event values can be visually inspected over the whole playing field under arbitrary given conditions. Experimental results with real soccer data show the effectiveness of the proposed system.
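For a finite state space, the Markov reward process underlying this approach can be solved exactly rather than by fitted value iteration; the sketch below (with invented states, rewards and transitions) shows the exact solve, which the paper replaces with a regression model because its parameter space is continuous:

```python
import numpy as np

# Tiny finite Markov reward process: state 2 is terminal (e.g. end of match).
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.0, 0.0, 1.0]])
r = np.array([0.0, 0.1, 1.0])   # expected immediate reward per state
gamma = 0.9                     # discount factor

# State values satisfy the Bellman equation v = r + gamma * P v,
# i.e. (I - gamma P) v = r, which is a plain linear solve.
v = np.linalg.solve(np.eye(3) - gamma * P, r)
print(v)                        # value of each state
```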
... We now assume b(i) ≥ 1. By Lemma A.2, X_κ(·, i) is non-explosive, hence the forward Kolmogorov equation holds true [35]. It follows that, provided that r_i ≥ 2 (which is equivalent to b(i) ≥ a(i) + 1), for any integer v(i) ∈ [a(i) + 1, b(i)] ...
Article
Full-text available
We show that discrete distributions on the d-dimensional non-negative integer lattice can be approximated arbitrarily well via the marginals of stationary distributions for various classes of stochastic chemical reaction networks. We begin by providing a class of detailed balanced networks and prove that they can approximate any discrete distribution to any desired accuracy. However, these detailed balanced constructions rely on the ability to initialize a system precisely, and are therefore susceptible to perturbations in the initial conditions. We therefore provide another construction based on the ability to approximate point mass distributions and prove that this construction is capable of approximating arbitrary discrete distributions for any choice of initial condition. In particular, the developed models are ergodic, so their limit distributions are robust to a finite number of perturbations over time in the counts of molecules.
... By Remark 7.3, K_{φ_0} is an irreducible aperiodic Markov kernel with invariant measure π_{φ_0} proportional to φ*_0 φ_0. By the basic convergence theorem for finite Markov chains (e.g., [47, Thm. 1.8.5]), lim_{L→∞} K^L_{φ_0}(x, y) = π_{φ_0}(y). ...
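The convergence statement quoted here is easy to observe numerically: raising an irreducible aperiodic kernel to a high power makes all rows collapse onto the invariant measure. A sketch with an invented 3-state kernel:

```python
import numpy as np

# Illustrative irreducible, aperiodic Markov kernel on 3 states.
K = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.3, 0.3, 0.4]])

# Basic convergence theorem: K^L(x, y) -> pi(y) as L -> infinity, for every x.
KL = np.linalg.matrix_power(K, 50)
print(KL)                   # all rows are (numerically) identical: the invariant measure
print(KL[0] @ K - KL[0])    # approximately zero, i.e. pi K = pi
```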
Preprint
For a relatively large class of well-behaved absorbing (or killed) finite Markov chains, we give detailed quantitative estimates regarding the behavior of the chain before it is absorbed (or killed). Typical examples are random walks on box-like finite subsets of the square lattice $\mathbb Z^d$ absorbed (or killed) at the boundary. The analysis is based on Poincar\'e, Nash, and Harnack inequalities, moderate growth, and on the notions of John and inner-uniform domains.
... In this section we highlight the application of a stochastic model of daily rainfall as a tool to provide further insight into the S2S predictability of ISMR at the station scale and as a downscaling tool. The hidden Markov model (HMM) is a class of dynamic Bayesian networks, made popular in speech recognition but later applied to model daily rainfall due to its probabilistic nature and Markovian property (Hughes and Guttorp, 1994;Norris, 1997;Robertson et al., 2004a). ...
Article
Full-text available
This paper reviews research done by the authors and their collaborators at IRI and beyond over the past decade on predictability and prediction of Indian summer monsoon rainfall (ISMR) on seasonal and sub-seasonal timescales. Empirical analyses of the daily ISMR characteristics at local scales pertinent to agriculture, based on IMD gridded data, reveal that the number of rainy days in the season is more predictable than the seasonal rainfall total; furthermore, this "weather-within-climate" predictability undergoes an important seasonal modulation and is highest in the early and late phases of the monsoon and lowest in the July-August core monsoon period. New research in calibrated multi-model seasonal forecasting of ISMR is presented based on the North American Multi-Model Ensemble and gridded IMD data, using the 2018 forecasts as a case study; these forecasts were issued in real-time in tercile-category probability format and were updated for the remainder of the 2018 monsoon season at the beginning of each calendar month from June to September. Sub-seasonal multimodel probabilistic predictions of ISMR in the weeks 2-3 range (8-21 day lead times) are constructed and analyzed, using the onset of the 2018 monsoon as an example; the hindcast skill of these week 2-3 gridded ISMR forecasts is shown to be substantial in the early and late stages of the monsoon season, consistent with the empirical findings from IMD data. Lastly, a hidden Markov model (HMM) of daily rainfall variability at a network of stations over monsoonal India is used to interpret the organized variation of rainfall across the multiple temporal scales that characterize ISMR.
... The proof of this classical upper bound is left to the reader; it is essentially based on the optimal stopping theorem and on the monotone convergence theorem (see, for instance, [8], p. 139). ...
Preprint
In order to approximate the exit time of a one-dimensional diffusion process, we propose an algorithm based on a random walk. Such an algorithm was already introduced in both the Brownian and the Ornstein-Uhlenbeck contexts. Here the aim is therefore to generalize this efficient numerical approach in order to obtain an approximation of both the exit time and position for either a general linear diffusion or a growth diffusion. The efficiency of the method is described with particular care through theoretical results and numerical examples.
... Considering the Markov chain as aperiodic, irreducible and column stochastic, the steady-state probability vector is given by [24]: ...
Conference Paper
Full-text available
Buffer-aided cooperative relaying is often investigated using either decode and forward (DF) or amplify and forward (AF) relaying rules. However, it is seldom investigated using the hybrid decode-amplify-forward (HDAF) relaying rule. In this work, the signal-to-noise ratio (SNR) based HDAF relaying rule is followed for buffer-aided cooperative relaying. The relay with the best corresponding channel is determined for reception or transmission. When the source-to-relay hop is the strongest, data is forwarded to the chosen relay and its SNR is compared against the predefined SNR threshold at the relay. If it is greater than the threshold, the decoded data is saved in the corresponding buffer. Otherwise, the amplified data is saved in the respective buffer. When the relay-to-destination link is the strongest, data is forwarded to the destination. The well-known Markov chain analytical model is used to describe the evolution of the buffer state and to obtain the outage probability expression. Mathematical and simulation results support our findings and show that the outage probability performance of the proposed technique beats the existing SNR-based buffer-aided relaying protocols based on DF and AF relaying rules by 2.43 dB and 8.6 dB, respectively.
... We followed the Markov chain based system model [15] of the non-homogeneous buffers used in [16] for the outage probability investigation. The total number of states in a Markov chain is ...
Conference Paper
Full-text available
Despite significant performance gains, buffer-aided cooperative communication incurs an increased latency. This is generally handled by prioritizing relay-to-destination link selection. The contribution of this paper is twofold: firstly, we study the buffer threshold based buffer-aided relay selection scheme in Nakagami-m fading channels with non-homogeneous buffers at relays. Secondly, we present the outage analysis of buffer occupancy based amplify and forward (AF) relaying by introducing a modified threshold in terms of signal to noise ratio at relay and destination, which enables the Markov chain (MC) based analysis of DF relaying to work for AF relaying with slight modifications. Using this approach, we evaluate the system using MC-based analysis for outage probability, average latency and throughput. The results show that buffer threshold based relaying can significantly decrease the latency and increase the average throughput by trading off the outage probability in homogeneous buffers. Furthermore, homogeneous buffers at relays yield better results as compared to non-homogeneous buffer sizes. Index Terms: Buffer-aided relay determination, buffer threshold, outage probability, amplify and forward, decode and forward, Markov chain.
... Then we can state (see e.g. [16], Chp. 2) Lemma 1. The following identities hold: P(Υ_{1:n} = Υ_j, Υ_{1:n} > t) = P(Υ_{1:n} = Υ_j) P(Υ_{1:n} > t), for any t > 0, ...
Preprint
Full-text available
The notion of stochastic precedence between two random variables emerges as a relevant concept in several fields of applied probability. When one considers a vector of random variables $X_1,...,X_n$, this notion has a preeminent role in the analysis of minima of the type $\min_{j \in A} X_j$ for $A \subset \{1, \ldots n\}$. In such an analysis, however, several apparently controversial aspects can arise (among which phenomena of "non-transitivity"). Here we concentrate attention on vectors of non-negative random variables with absolutely continuous joint distributions, in which case the set of the multivariate conditional hazard rate functions can be employed as a convenient method to describe different aspects of stochastic dependence. In terms of the m.c.h.r. functions, we first obtain convenient formulas for the probability distributions of the variables $\min_{j \in A} X_j$ and for the probability of events $\{X_i=\min_{j \in A} X_j\}$. Then we detail several aspects of the notion of stochastic precedence. On these bases, we explain some controversial behavior of such variables and give sufficient conditions under which paradoxical aspects can be excluded. With the purpose of stimulating the active interest of readers, we present several comments and pertinent examples.
... This problem, which corresponds to the case where the state space I of the unobserved process X is a finite set, has been dealt with in [19]. In that case, the rate transition measure reduces to a matrix (sometimes called a Q-matrix, see e.g. [45]) and a more precise characterization of the value function can be obtained, thanks to the peculiar structure of the problem. Such a setting may be more familiar to the reader, and we invite the reader to keep this situation in mind also in the present setting. ...
Article
Full-text available
We consider an infinite horizon optimal control problem for a pure jump Markov process X, taking values in a complete and separable metric space I, with noise-free partial observation. The observation process is defined as Y_t = h(X_t), t ≥ 0, where h is a given map defined on I. The observation is noise-free in the sense that the only source of randomness is the process X itself. The aim is to minimize a discounted cost functional. In the first part of the paper we write down an explicit filtering equation and characterize the filtering process as a Piecewise Deterministic Process. In the second part, after transforming the original control problem with partial observation into one with complete observation (the separated problem) using filtering equations, we prove the equivalence of the original and separated problems through an explicit formula linking their respective value functions. The value function of the separated problem is also characterized as the unique fixed point of a suitably defined contraction mapping.
... In classical computer science, a Markov chain is a memoryless stochastic machine, which progresses from one state to another on a discrete time scale. Since their introduction in 1906 by Andrey Markov, the properties of Markov chains have been studied in great detail by mathematicians, computer scientists and physicists alike [43]. In the meantime, more complex versions of stochastic machines, like Hidden Markov Models (HMMs) [44], have been introduced. ...
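A minimal simulation of such a memoryless stochastic machine (the two-state transition matrix is invented for the sketch):

```python
import numpy as np

# The next state depends only on the current state, never on the earlier history.
rng = np.random.default_rng(0)
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

state, trajectory = 0, [0]
for _ in range(20):
    state = rng.choice(2, p=P[state])   # sample the next state from row `state`
    trajectory.append(int(state))

print(trajectory)
```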
Thesis
Full-text available
It is widely believed that quantum physics is a fundamental theory describing the Universe. As such, one would expect to be able to see how classical physics that is observed in the macroscopic world emerges from quantum theory. This has so far largely eluded physicists, due to the inherent linear nature of quantum physics and the non-linear behaviour of classical physics. One of the principal differences between classical and quantum physics is the statistical, probabilistic nature of quantum theory. It is from this property that non-classical states can arise, such as entangled states. These states possess maximal correlations. However, they are not the only way in which correlations are created in quantum systems. This thesis aims to show how open quantum systems naturally contain correlations from their quantum nature. Moreover, even seemingly simple open quantum systems can behave far more complexly than expected upon the introduction of quantum feedback. Using this effect, the dynamics may become non-linear and as such behave non-trivially. Furthermore, it is shown how these effects may be exploited for a variety of tasks, including a computational application in hidden quantum Markov models and a quantum metrology scheme that does not require the use of exotic quantum states. This results in the design of systems that benefit from the use of quantum mechanics, but are not constrained by experimental difficulties such as entanglement.
Article
An essential factor toward ensuring the security of individuals and critical infrastructures is the timely detection of potentially threatening situations. To this end, especially in the law enforcement context, the availability of effective and efficient threat assessment mechanisms for identifying and eventually preventing crime‐ and terrorism‐related threatening situations is of utmost importance. Toward this direction, this work proposes a hidden Markov model‐based threat assessment framework for effectively and efficiently assessing threats in specific situations, such as public events. Specifically, a probabilistic approach is adopted to estimate the threat level of a situation at each point in time. The proposed approach also permits the reflection of the dynamic evolution of a threat over time by considering that the estimation of the threat level at a given time is affected by past observations. This estimation of the dynamic evolution of the threat is very useful, since it can support the decisions by security personnel regarding the taking of precautionary measures in case the threat level seems to adopt an upward trajectory, even before it reaches the highest level. In addition, its probabilistic basis allows for taking into account noisy data. The applicability of the proposed framework is showcased in a use case that focuses on the identification of potential threats in public events on the basis of evidence obtained from the automatic visual analysis of the footage of surveillance cameras.
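The per-time-step update behind such an HMM-based estimate is the standard forward (filtering) recursion. A sketch with invented threat levels, observation model and observation stream (not taken from the article):

```python
import numpy as np

# Hidden threat levels {low, elevated, high}; observations are discretised cues
# from the visual analysis (all probabilities below are invented).
T = np.array([[0.90, 0.08, 0.02],     # threat-level transition probabilities
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])
E = np.array([[0.80, 0.15, 0.05],     # P(observation | threat level)
              [0.30, 0.50, 0.20],
              [0.10, 0.30, 0.60]])
belief = np.array([0.98, 0.015, 0.005])   # prior over threat levels

for obs in [0, 1, 1, 2, 2]:               # stream of observed cues
    belief = E[:, obs] * (T.T @ belief)   # predict one step, weight by likelihood
    belief /= belief.sum()                # renormalise to a probability vector
    print(belief)                         # current estimate of the threat level
```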
Article
Full-text available
We study expansion and flooding in evolving graphs, when nodes and edges are continuously created and removed. We consider a model with Poisson node inter‐arrival and exponential node survival times. Upon joining the network, a node connects to d = 𝒪(1) random nodes, while an edge disappears whenever one of its endpoints leaves the network. For this model, we show that, although the graph has Ω_d(n) isolated nodes with large, constant probability, flooding still informs a fraction 1 − exp(−Ω(d)) of the nodes in time 𝒪(log n). Moreover, at any given time, the graph exhibits a "large‐set expansion" property. We further consider a model in which each edge leaving the network is replaced by a fresh, random one. In this second case, we prove that flooding informs all nodes in time 𝒪(log n), with high probability. Moreover, the graph is a vertex expander with high probability.
Article
Full-text available
Modern methods of simulating molecular systems are based on the mathematical theory of Markov operators with a focus on autonomous equilibrated systems. However, non-autonomous physical systems or non-autonomous simulation processes are becoming more and more important. A representation of non-autonomous Markov jump processes as autonomous Markov chains on space-time is presented. Augmenting the spatial information of the embedded Markov chain by the temporal information of the associated jump times, the so-called augmented jump chain is derived. The augmented jump chain inherits the sparseness of the infinitesimal generator of the original process and therefore provides a useful tool for studying time-dependent dynamics even in high dimensions. Furthermore, possible generalizations and applications to the computation of committor functions and coherent sets in the non-autonomous setting are discussed. After deriving the theoretical foundations, the concepts are illustrated with a proof-of-concept Galerkin discretization of the transfer operator of the augmented jump chain applied to simple examples.
Conference Paper
In this paper, a new method is presented in which the generation of drilling schedules for planning well cycle requirements is fully automated using advanced machine learning algorithms. The generated schedules incorporate key logics and rules to mimic key operational constraints. The new approach brings about a paradigm shift in planning and resource allocation where time and efforts are reduced by orders of magnitude, allowing planners to explore an unlimited number of scenarios and assess plan uncertainty. Advanced machine learning algorithms are used to automate drilling schedules. A key element is the learning part, in which all rig movements over the past years are carefully tracked and analyzed. Rig capabilities are then inferred and used in prediction, which is achieved by building a Markov Chain (MC) model that tracks the movement of every rig in history and analyses the type of wells drilled in the process. From this, the algorithm computes transition probabilities between different well classes – called MC states – controlling the assignment of rigs to future wells. The MC states can be defined by the user and contain typical well information, such as field and reservoir, fluid type, location, drilling operation type, and well completion type. The algorithm is tested using a synthetic well requirements dataset, including drilling cost and time estimations over the planning cycle. A detailed drilling schedule was produced from which yearly budget, rig year and well count figures were determined. Since the time to generate a schedule was extremely short, multiple scenarios were performed to address the impact of the total number of rigs to be added every year. This feature permits resource planning, sensitivity analysis and scenario planning. Quantification of uncertainty in drilling cost and time was addressed by fitting a binormal distribution to historical cost and time values. In each drilling schedule case, drilling cost and time were drawn from the distributions and used to determine the overall budget. Tens of cases were performed and histograms of yearly budget, rig year and well count were generated, indicating the range of possible figures due to uncertainties in well drilling cost and time. We present a breakthrough innovation that has a far-reaching impact on planning through automated generation of optimized drilling schedules in minutes. The new approach serves as an important resource for planners by providing the capabilities to explore hundreds of scenarios encompassing the entire range of uncertainties.
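The learning step described here, estimating transition probabilities between MC states from historical rig movements, amounts to counting class-to-class moves and normalising rows. A sketch with invented well classes and rig histories:

```python
import numpy as np

# Invented well classes (MC states) and historical class sequences per rig.
classes = ["oil_vertical", "oil_horizontal", "gas", "workover"]
idx = {c: i for i, c in enumerate(classes)}
rig_histories = [
    ["oil_vertical", "oil_vertical", "oil_horizontal", "gas", "oil_horizontal"],
    ["workover", "oil_vertical", "workover", "gas", "gas"],
]

# Count observed class-to-class transitions across all rig histories.
counts = np.zeros((len(classes), len(classes)))
for history in rig_histories:
    for a, b in zip(history, history[1:]):
        counts[idx[a], idx[b]] += 1

# Normalise each row to obtain estimated transition probabilities.
row_sums = counts.sum(axis=1, keepdims=True)
P = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
print(P)   # transition probabilities used to assign rigs to future wells
```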
Article
Approximately counting and sampling knowledge states from a knowledge space is a problem that is of interest for both applied and theoretical reasons. However, many knowledge spaces used in practice are far too large for standard statistical counting and estimation techniques to be useful. Thus, in this work we use an alternative technique for counting and sampling knowledge states from a knowledge space. This technique is based on a procedure variously known as subset simulation, the Holmes–Diaconis–Ross method, or multilevel splitting. We make extensive use of Markov chain Monte Carlo methods and, in particular, Gibbs sampling, and we analyse and test the accuracy of our results in numerical experiments.
Article
Full-text available
This paper introduces a study of a new system that consists of one unit with mixed standby units. The mathematical model for the system is constructed using a semi-Markov model with the regenerative point technique in two cases: the first case when there is preventive maintenance provided to the main unit and the second case when there is no preventive maintenance in the system. Life and repair times of the units in the system are assumed to be generally distributed with fuzzy parameters defined by the bell-shaped membership function. A numerical application is introduced to compare the performance of the system in the two cases.
Article
The traditional process monitoring techniques used to study high‐quality processes have several demerits, such as a high false-alarm rate, poor detection, etc. A recent and promising idea for monitoring such processes is the use of time‐between‐events (TBE) control charts. However, the available TBE control charts have been developed in a nonadaptive fashion assuming the Poisson process. There are many situations where we need adaptive monitoring, for example, health, flood, food, system, or terrorist surveillance. Therefore, the existing control charts are not useful, especially in sequential monitoring. This article introduces new adaptive TBE control charts for high‐quality processes based on the nonhomogeneous Poisson process by assuming the power law intensity. In particular, probability control limits are used to develop the control charts. The proposed methodology allows us to obtain control limits that are dynamic and suitable for online process monitoring, with the additional advantage of monitoring a process where we believe the underlying failure rate may be changing over time. The average run length and coefficient of variation of the run length distribution are used to assess the performance of the proposed control charts. Besides simulation studies, we also discuss three examples to highlight the application of the proposed charts.
Chapter
PageRank is a widely used hyperlink‐based algorithm for estimating the relative importance of nodes in networks. In this chapter, the authors formulate the PageRank problem as a first‐ and second‐order Markov chain perturbation problem. Using numerical experiments, they compare convergence rates for different values of the perturbation parameter on different graph structures and investigate the difference in ranks for the two problems. Generally, the PageRank problem for second‐order perturbed Markov chains can be seen as a way to control over‐scoring of some vertices when strategic promotion is made. The authors conclude that the second‐order perturbed Markov chain cannot be ignored, since it is practically advantageous when strategic promotion is required.
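The first-order problem is ordinary PageRank: a Markov chain perturbed towards the uniform distribution by a teleportation (perturbation) parameter and solved by power iteration. A sketch with an invented 3-node graph (the second-order perturbed chain studied in the chapter is not shown):

```python
import numpy as np

# Column-stochastic link matrix A of a tiny hypothetical graph, perturbed as
# G = (1 - c) A + c * (1/n) * ones, with c the perturbation parameter.
A = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
n, c = A.shape[0], 0.15
G = (1 - c) * A + c * np.ones((n, n)) / n

pi = np.ones(n) / n
for _ in range(100):        # power iteration; converges geometrically, rate ~ (1 - c)
    pi = G @ pi

print(pi / pi.sum())        # PageRank scores
```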
Article
Full-text available
The fast pace at which Android malware evolves demands highly efficient detection strategies: across a range of malware types, a detection scheme needs to be resilient and, with minimal computation, perform efficiently and precisely. In this paper, we propose the Mutual Information and Feature Importance Gradient Boosting (MIFIBoost) tool, which uses byte n-gram frequencies. MIFIBoost consists of four steps in the model construction phase and two steps in the prediction phase. For training, first, n-grams of both the classes.dex and AndroidManifest.xml binary files are obtained. Then, MIFIBoost uses Mutual Information (MI) to determine the most informative items from the entire n-gram vocabulary. In the third step, MIFIBoost utilizes the Gradient Boosting algorithm to re-rank these top n-grams. For testing, MIFIBoost feeds the learned vocabulary of byte n-gram term frequencies (tf) into the classifier for prediction. Thus, MIFIBoost does not require reverse engineering. A key insight from this work is that filtering using XGBoost helps to address the hard problem of detecting obfuscated malware while having a negligible impact on nonobfuscated malware. We have conducted a wide range of experiments on four different datasets, one of which is obfuscated, and MIFIBoost outperforms state-of-the-art tools. MIFIBoost's f1-score for the Drebin, DexShare, and AMD datasets is 99.1%, 98.87%, and 99.62%, respectively, with a False Positive Rate of 0.41% on the AMD dataset. On average, the False Negative Rate of MIFIBoost is 2.1% for the PRAGuard dataset, in which seven different obfuscation techniques are implemented. In addition to fast run-time performance and resiliency against obfuscated malware, the experiments show that MIFIBoost performs efficiently for five zero-day families with 99.78% AUC.
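The MI filtering stage can be illustrated with a toy example. The sketch below ranks byte n-grams by mutual information with the class label using presence/absence features; the corpus, labels and function names are hypothetical, and the gradient-boosting re-ranking and classification stages of MIFIBoost are omitted:

import math

def byte_ngrams(blob, n=2):
    """Set of byte n-grams present in a binary blob (presence/absence features)."""
    return {blob[i:i + n] for i in range(len(blob) - n + 1)}

def rank_by_mutual_information(samples, labels, n=2):
    """Rank n-grams by mutual information with the malware/benign label."""
    N = len(samples)
    feats = [byte_ngrams(s, n) for s in samples]
    p_mal = sum(labels) / N
    scores = {}
    for g in set().union(*feats):
        p_g = sum(g in f for f in feats) / N
        mi = 0.0
        for y in (0, 1):
            for present in (0, 1):
                joint = sum(1 for f, lab in zip(feats, labels)
                            if lab == y and (g in f) == present) / N
                px = p_g if present else 1 - p_g
                py = p_mal if y else 1 - p_mal
                if joint > 0:
                    mi += joint * math.log(joint / (px * py))
        scores[g] = mi
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical tiny corpus of raw bytes from classes.dex-like files; 1 = malicious.
samples = [b"\x01\x02\x03\x04", b"\x01\x02\xff\xfe", b"\xaa\xbb\x03\x04", b"\xaa\xbb\xff\xfe"]
labels = [1, 1, 0, 0]
print(rank_by_mutual_information(samples, labels)[:3])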
Article
We study the stochastic dynamics of a system of interacting species in a stochastic environment by means of a continuous-time Markov chain with transition rates depending on the state of the environment. Models of gene regulation in systems biology take this form. We characterise the finite-time distribution of the Markov chain, provide conditions for ergodicity, and characterise the stationary distribution (when it exists) as a mixture of Poisson distributions. The mixture measure is uniquely identified as the law of a fixed point of a stochastic recurrence equation. This recursion is crucial for statistical computation of moments and other distributional features.
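A minimal simulation conveys the Poisson-mixture intuition: a birth-death species count whose production rate is modulated by a two-state random environment. All rates below are illustrative and not taken from the paper; with the environment frozen in state e and unit degradation rate, the count would be Poisson with mean equal to the production rate in e:

import random

def simulate(T=5000.0, seed=0):
    """Gillespie-style simulation of a species count in a randomly switching environment."""
    rng = random.Random(seed)
    birth = {0: 2.0, 1: 10.0}      # production rate in environment 0 / 1 (illustrative)
    switch = {0: 0.05, 1: 0.05}    # environment switching rates (illustrative)
    death = 1.0                    # per-molecule degradation rate
    t, x, env = 0.0, 0, 0
    occupancy = {}                 # time spent at each copy number
    while t < T:
        rates = [birth[env], death * x, switch[env]]
        total = sum(rates)
        dt = rng.expovariate(total)
        occupancy[x] = occupancy.get(x, 0.0) + dt
        t += dt
        u = rng.random() * total
        if u < rates[0]:
            x += 1                 # production event
        elif u < rates[0] + rates[1]:
            x -= 1                 # degradation event
        else:
            env = 1 - env          # environment switches state
    z = sum(occupancy.values())
    return {k: v / z for k, v in sorted(occupancy.items())}

dist = simulate()
print({k: round(v, 3) for k, v in list(dist.items())[:6]})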
Article
This work focuses on a class of stochastic damping Hamiltonian systems with state-dependent switching, where the switching process has a countably infinite state space. After establishing the existence and uniqueness of a global weak solution via the martingale approach under very mild conditions, the paper next proves the strong Feller property for regime-switching stochastic damping Hamiltonian systems by the killing technique together with some resolvent and transition probability identities. The commonly used continuity assumption for the switching rates q_{kl}(·) in the literature is relaxed to measurability in this paper. Finally, the paper provides sufficient conditions for exponential ergodicity and a large deviations principle for regime-switching stochastic damping Hamiltonian systems. Several examples of regime-switching van der Pol and (overdamped) Langevin systems are studied in detail for illustration.
Chapter
For flexible access to the spectrum, Mitola and Maguire introduced cognitive radio (CR), which relies on software-defined radio. A software-defined radio realizes in software the typical functions of the radio interface that are generally realized in hardware, such as setting the carrier frequency, signal bandwidth and modulation. Indeed, Mitola and Maguire combined their software-defined radio experience with their interest in machine learning and artificial intelligence (AI) to establish CR technology. This chapter focuses on the AI techniques most commonly used in CR during the last three years. It presents the cognition cycle, the main CR tasks and their corresponding challenges. The chapter offers a state-of-the-art review of the application of AI methods to CR. It proposes a categorization of the presented techniques according to the type of learning (supervised or unsupervised) and presents their applications according to the CR tasks.
Article
We propose a multistate joint model to analyze interval-censored event-history data subject to within-unit clustering and nonignorable missing data. The model is motivated by a study of neurocysticercosis (NC) cyst evolution at the cyst level, taking into account the multiple cyst phases with intermittent missing data and loss to follow-up, as well as the intra-brain clustering of observations made on a predefined data collection schedule. Of particular interest in this study is the description of the process leading to cyst resolution, and whether this process varies by antiparasitic treatment. The model uses shared random effects to account for within-brain correlation and to explain the hidden heterogeneity governing the missing data mechanism. We developed a likelihood-based method using a Monte Carlo EM algorithm for the inference. The practical utility of the methods is illustrated using data from a randomized controlled trial on the effect of antiparasitic treatment with albendazole on NC cysts among patients from six hospitals in Ecuador. Simulation results demonstrate that the proposed methods perform well in finite samples and that misspecified models ignoring the data complexities can lead to substantial biases.
Article
Full-text available
Two simple Markov processes are examined, one in discrete and one in continuous time, arising from idealized versions of a transmission protocol for mobile networks. We consider two independent walkers moving with constant speed on the discrete or continuous circle, and changing directions at independent geometric (respectively, exponential) times. One of the walkers carries a message that is intended to travel as far and as fast as possible in the clockwise direction. The message stays with its current carrier unless the two walkers meet, the carrier is moving counter-clockwise, and the other walker is moving clockwise. Then the message jumps to the other walker. Explicit expressions are derived for the long-term average clockwise speed of the message and the number of jumps it makes, via the solution of associated boundary value problems. The tradeoff between speed and cost (measured as the rate of jumps) is also examined.
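The discrete-time model is straightforward to simulate, which gives a quick numerical check against the analytic expressions. The parameters below (circle size, direction-flip probability, run length) are illustrative choices:

import random

def message_speed(N=100, p=0.1, steps=500_000, seed=3):
    """Monte Carlo sketch of the discrete model: two walkers on an N-site circle, each
    reversing direction with probability p per step (geometric holding times).  The
    message jumps when both walkers occupy the same site, the carrier moves
    counter-clockwise and the other walker clockwise.  Returns the long-run clockwise
    speed of the message and the jump rate."""
    rng = random.Random(seed)
    pos = [0, N // 2]
    direc = [1, 1]                       # +1 clockwise, -1 counter-clockwise
    carrier, travelled, jumps = 0, 0, 0
    for _ in range(steps):
        for i in (0, 1):
            if rng.random() < p:
                direc[i] = -direc[i]
            pos[i] = (pos[i] + direc[i]) % N
        travelled += direc[carrier]      # displacement of the message this step
        other = 1 - carrier
        if pos[0] == pos[1] and direc[carrier] == -1 and direc[other] == 1:
            carrier = other
            jumps += 1
    return travelled / steps, jumps / steps

speed, jump_rate = message_speed()
print(f"clockwise speed ~ {speed:.3f}, jumps per step ~ {jump_rate:.4f}")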
Article
Temporal graphs abstractly model real-life inherently dynamic networks. Given a graph G, a temporal graph with G as the underlying graph is a sequence of subgraphs (snapshots) G_t of G, where t ≥ 1. In this paper we study stochastic temporal graphs, i.e. stochastic processes whose random variables are the snapshots of a temporal graph on G. A natural feature observed in various real-life scenarios is a memory effect in the appearance probabilities of particular edges; i.e. the probability that an edge e ∈ E appears at time step t depends on its appearance (or absence) at the previous k steps. We study the hierarchy of models of memory-k, k ≥ 0, in an edge-centric network evolution setting: every edge of G has its own independent probability distribution for its appearance over time. We thoroughly investigate the complexity of two naturally related, but fundamentally different, temporal path problems, called Minimum Arrival and Best Policy.
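A memory-1 instance of this edge-centric model can be sketched as follows: snapshots are generated in which each edge keeps its previous state with some probability, and the code reports the first step at which a time-respecting path (one hop per step) reaches a target vertex. The probabilities and the small example graph are illustrative, and the exact algorithms studied in the paper are not reproduced:

import random

def first_arrival(edges, s, t, horizon=200, p_on=0.3, stay_on=0.8, stay_off=0.9, seed=7):
    """First time step at which t is reachable from s along a time-respecting path
    in a memory-1 edge-centric stochastic temporal graph."""
    rng = random.Random(seed)
    state = {e: rng.random() < p_on for e in edges}      # snapshot G_1
    reached = {s}
    for step in range(1, horizon + 1):
        if step > 1:                                     # memory-1 edge evolution
            for e in edges:
                keep = stay_on if state[e] else stay_off
                if rng.random() >= keep:
                    state[e] = not state[e]
        new = set(reached)                               # one hop per time step
        for (u, v) in edges:
            if state[(u, v)]:
                if u in reached:
                    new.add(v)
                if v in reached:
                    new.add(u)
        reached = new
        if t in reached:
            return step
    return None

edges = [(0, 1), (1, 2), (2, 3), (0, 3), (1, 3)]
print(first_arrival(edges, s=0, t=2))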
Article
Full-text available
We propose two complementary ways to deal with a nesting structure in the node set of a network — such a structure may be called a multilevel network, with a node set consisting of several groups. First, within‐group ties are distinguished from between‐group ties by considering them as two distinct but interrelated networks. Second, effects of nodal variables are differentiated according to the levels of the nesting structure, to prevent ecological fallacies. This is elaborated in a study of two repeated observations of a sociability network in seven villages in Senegal, analyzed using the Stochastic Actor‐oriented Model.
Article
Full-text available
Brownian motion whose infinitesimal variance changes according to a three-state continuous-time Markov Chain is studied. This Markov Chain can be viewed as a telegraph process with one on state and two off states. We first derive the distribution of occupation time of the on state. Then the result is used to develop a likelihood estimation procedure when the stochastic process at hand is observed at discrete, possibly irregularly spaced time points. The likelihood function is evaluated with the forward algorithm in the general framework of hidden Markov models. The analytic results are confirmed with simulation studies. The estimation procedure is applied to analyze the position data from a mountain lion.
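The forward-algorithm evaluation can be sketched as below under a simplifying assumption the paper does not make: the hidden state is treated as frozen within each observation interval, so an increment over dt is Gaussian with variance sigma2[state]*dt, whereas the paper works with the exact occupation-time distribution. The generator, variances and data are illustrative only:

import numpy as np
from scipy.linalg import expm

def forward_loglik(x, times, Q, sigma2):
    """Scaled forward algorithm for a diffusion whose variance is driven by a hidden
    continuous-time Markov chain, with the state assumed constant between observations."""
    k = len(sigma2)
    alpha = np.full(k, 1.0 / k)           # initial state distribution (assumed uniform)
    loglik = 0.0
    for i in range(1, len(x)):
        dt = times[i] - times[i - 1]
        dx = x[i] - x[i - 1]
        P = expm(Q * dt)                  # transition probabilities over the interval
        emit = np.exp(-dx**2 / (2 * sigma2 * dt)) / np.sqrt(2 * np.pi * sigma2 * dt)
        alpha = (alpha @ P) * emit
        scale = alpha.sum()
        loglik += np.log(scale)
        alpha /= scale                    # rescale to avoid numerical underflow
    return loglik

# Telegraph-like generator: one "on" state (index 0) and two "off" states.
Q = np.array([[-2.0, 1.0, 1.0],
              [0.5, -0.5, 0.0],
              [0.5, 0.0, -0.5]])
sigma2 = np.array([4.0, 0.25, 0.25])
times = np.cumsum(np.random.default_rng(0).exponential(0.5, size=200))
x = np.cumsum(np.random.default_rng(1).normal(0, 1, size=200))   # dummy observations
print(forward_loglik(x, times, Q, sigma2))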
Preprint
Full-text available
Let $W^{(n)}$ be the $n$-letter word obtained by repeating a fixed word $W$, and let $R_n$ be a random $n$-letter word over the same alphabet. We show several results about the length of the longest common subsequence (LCS) between $W^{(n)}$ and $R_n$; in particular, we show that its expectation is $\gamma_W n-O(\sqrt{n})$ for an efficiently-computable constant $\gamma_W$. This is done by relating the problem to a new interacting particle system, which we dub "frog dynamics". In this system, the particles ('frogs') hop over one another in the order given by their labels. Stripped of the labeling, the frog dynamics reduces to a variant of the PushASEP. In the special case when all symbols of $W$ are distinct, we obtain an explicit formula for the constant $\gamma_W$ and a closed-form expression for the stationary distribution of the associated frog dynamics. In addition, we propose new conjectures about the asymptotics of the LCS of a pair of random words. These conjectures are informed by computer experiments using a new heuristic algorithm to compute the LCS. Through our computations, we found periodic words that are more random-like than a random word, as measured by the LCS.
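The constant $\gamma_W$ can also be approached by brute force, which is useful as a sanity check. The sketch below runs the standard LCS dynamic program between the periodic word $W^{(n)}$ and independent uniformly random words and averages the normalized lengths; the word, the length n and the number of trials are arbitrary choices, and this is not the paper's particle-system formula:

import random

def lcs_length(a, b):
    """Standard O(len(a)*len(b)) dynamic program for the longest common subsequence."""
    prev = [0] * (len(b) + 1)
    for ca in a:
        cur = [0]
        for j, cb in enumerate(b, 1):
            cur.append(prev[j - 1] + 1 if ca == cb else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1]

def estimate_gamma(W, n=1000, trials=5, seed=5):
    """Monte Carlo estimate of gamma_W ~ E[LCS(W^(n), R_n)] / n for large n."""
    rng = random.Random(seed)
    alphabet = sorted(set(W))
    Wn = (W * (n // len(W) + 1))[:n]
    total = 0
    for _ in range(trials):
        Rn = "".join(rng.choice(alphabet) for _ in range(n))
        total += lcs_length(Wn, Rn)
    return total / (trials * n)

print(estimate_gamma("ab"))    # rough estimate of gamma_W for W = "ab"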
Preprint
In this work we study the recurrence problem for quantum Markov chains, which are quantum versions of classical Markov chains introduced by S. Gudder and described in terms of completely positive maps. A notion of monitored recurrence for quantum Markov chains is examined in association with Schur functions, which codify information on the first return to some given state or subspace. Such objects possess important factorization and decomposition properties which allow us to obtain probabilistic results based solely on those parts of the graph where the dynamics takes place, the so-called splitting rules. These rules also yield an alternative to the folding trick to transform a doubly infinite system into a semi-infinite one which doubles the number of internal degrees of freedom. The generalization of Schur functions, the so-called FR-functions, to the general context of closed operators in Banach spaces is the key to the present applications to open quantum systems. An important class of examples included in this setting are the open quantum random walks, as described by S. Attal et al., but we state our results in terms of general completely positive trace-preserving maps. We also take the opportunity to discuss basic results on recurrence of finite-dimensional iterated quantum channels and quantum versions of Kac's Lemma, in close association with recent results on the subject.
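For finite-dimensional iterated quantum channels, the basic objects are easy to experiment with numerically. The sketch below applies a completely positive trace-preserving map, given by Kraus operators, repeatedly to the maximally mixed state and reports the limiting density matrix; the amplitude-damping-style channel is chosen purely for illustration and does not reproduce the paper's Schur-function machinery:

import numpy as np

def channel(rho, kraus):
    """Apply a completely positive trace-preserving map given by Kraus operators."""
    return sum(K @ rho @ K.conj().T for K in kraus)

def stationary_state(kraus, dim, iters=500):
    """Iterate the channel from the maximally mixed state; for many channels the
    iterates converge to an invariant density matrix."""
    rho = np.eye(dim) / dim
    for _ in range(iters):
        rho = channel(rho, kraus)
    return rho

# Amplitude-damping-style channel on a qubit (Kraus operators chosen for illustration).
p = 0.3
K0 = np.array([[1, 0], [0, np.sqrt(1 - p)]], dtype=complex)
K1 = np.array([[0, np.sqrt(p)], [0, 0]], dtype=complex)
rho_inf = stationary_state([K0, K1], dim=2)
print(np.round(rho_inf, 4), "trace =", np.trace(rho_inf).real)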
Thesis
This thesis is devoted to the study of stochastic models associated with open quantum systems. More specifically, we study open quantum walks, which are the quantum analogues of classical random walks. The first part is a general presentation of open quantum walks. We present the mathematical tools needed to study open quantum systems, and then describe the discrete-time and continuous-time models of open quantum walks. These walks are governed by quantum channels and Lindblad operators, respectively, and the associated quantum trajectories are given by Markov chains and stochastic differential equations with jumps. The first part ends with the presentation of some research directions: the Dirichlet problem for open quantum walks and asymptotic theorems for non-destructive quantum measurements. The second part collects the articles written during this thesis. These articles deal with topics related to irreducibility, the recurrence-transience duality, the central limit theorem and the large deviations principle for continuous-time open quantum walks.
Preprint
Full-text available
The present paper addresses the transformation of modeling primitives in structured Markovian formalisms such as Queueing Networks, Stochastic Petri Nets, Performance Evaluation Process Algebra, and Stochastic Automata Networks. Since all of these formalisms share the same underlying Markov Chain, there is a formal correspondence that yields the same results for a class of models, described here using examples from the Queueing Networks formalism. Our aim is to provide insight for future research on this subject as we discuss how to shift the modeling considerations toward the model itself, instead of worrying about specific modeling primitives or other formalism intricacies when solving complex problems. We also review other approaches and translations available in the literature and discuss some key considerations when addressing model transformations among stochastic structured formalisms with Markovian assumptions.