
This article is a tutorial on Markov chain Monte Carlo simulations and their statistical analysis. The theoretical concepts are illustrated through many numerical assignments from the author's book on the subject. Computer code (in Fortran) is available for all subjects covered and can be downloaded from the web.


... For this, a multicanonical and a PT algorithm were tested, and the latter turned out to be more convenient. In the following, we briefly outline the PT scheme used within AMEA and refer to [106–110] for a thorough introduction to MCMC, simulated annealing, multicanonical sampling, and PT. As just stated, in an MCMC scheme, one typically samples from the Boltzmann distribution c_b = ...
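The parallel-tempering (PT) swap move referred to above can be sketched generically. This is the standard replica-exchange acceptance rule between neighbouring temperatures, not the AMEA-specific implementation; the state labels and values below are illustrative only.

```python
import math
import random

def attempt_swap(states, energies, betas, i, rng):
    """Try to exchange replicas i and i+1; the swap is accepted with
    probability min(1, exp((beta_i - beta_j) * (E_i - E_j)))."""
    j = i + 1
    delta = (betas[i] - betas[j]) * (energies[i] - energies[j])
    if rng.random() < math.exp(min(0.0, delta)):
        states[i], states[j] = states[j], states[i]
        energies[i], energies[j] = energies[j], energies[i]
        return True
    return False

# Demo: a cold replica (beta = 1.0) holding a high-energy state always
# swaps with a hotter neighbour (beta = 0.5) holding a lower-energy one,
# since delta = (1.0 - 0.5) * (3.0 - 1.0) > 0.
states, energies, betas = ["x_cold", "x_hot"], [3.0, 1.0], [1.0, 0.5]
accepted = attempt_swap(states, energies, betas, 0, random.Random(0))
```

Because the swap only exchanges configurations between fixed temperature slots, detailed balance is preserved at every temperature while low-temperature replicas inherit well-decorrelated states from the hot end of the ladder.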

... This is done through an iteratively created chain of states {x_l}, whereby one avoids the explicit calculation of the partition function Z. An effective and well-known scheme for this is the Metropolis–Hastings algorithm [106,107]. One starts out with some state x_l and proposes a new configuration x_k, whereby it has to be ensured that every state of the system can be reached in order to achieve ergodicity. ...

... One starts out with some state x_l and proposes a new configuration x_k, whereby it has to be ensured that every state of the system can be reached in order to achieve ergodicity. The proposed state x_k is accepted with probability [106,107] ...
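The Metropolis acceptance step described in these snippets can be sketched in a few lines. The harmonic toy energy below is an assumption chosen purely for illustration; the target is the Boltzmann distribution p(x) ∝ exp(−βE(x)).

```python
import math
import random

def metropolis_step(x, energy, propose, beta, rng):
    """One Metropolis step: propose x_k from x_l and accept it with
    probability min(1, exp(-beta * (E(x_k) - E(x_l))))."""
    x_new = propose(x)
    d_e = energy(x_new) - energy(x)
    if d_e <= 0.0 or rng.random() < math.exp(-beta * d_e):
        return x_new
    return x  # rejected: the chain stays in (and re-counts) state x_l

rng = random.Random(0)
energy = lambda y: 0.5 * y * y            # E(x) = x^2/2 -> Gaussian target
propose = lambda y: y + rng.uniform(-1.0, 1.0)  # symmetric random-walk move
x, samples = 0.0, []
for _ in range(20000):
    x = metropolis_step(x, energy, propose, beta=1.0, rng=rng)
    samples.append(x)
mean = sum(samples) / len(samples)                  # near 0
var = sum(s * s for s in samples) / len(samples)    # near 1/beta = 1
```

Note that a rejected proposal still contributes the current state to the chain; dropping rejections would bias the sampled distribution.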

We present a general scheme to address correlated nonequilibrium quantum impurity problems based on a mapping onto an auxiliary open quantum system of small size. The infinite fermionic reservoirs of the original system are thereby replaced by a small number N_B of noninteracting auxiliary bath sites whose dynamics are described by a Lindblad equation, which can then be exactly solved by numerical methods such as Lanczos or matrix-product states. The mapping becomes exponentially exact with increasing N_B, and is already quite accurate for small N_B. Due to the presence of the intermediate bath sites, the overall dynamics acting on the impurity site is non-Markovian. While in previous work we put the focus on the many-body solution of the associated Lindblad problem, here we discuss the mapping scheme itself, which is an essential part of the overall approach. On the one hand, we provide technical details together with an in-depth discussion of the employed algorithms, and on the other hand, we present a detailed convergence study. The latter clearly demonstrates the above-mentioned exponential convergence of the procedure with increasing N_B. Furthermore, the influence of temperature and an external bias voltage on the reservoirs is investigated. The knowledge of the particular convergence behavior is of great value to assess the applicability of the scheme to certain physical situations. Moreover, we study different geometries for the auxiliary system. On the one hand, this is of importance for advanced many-body solution techniques such as matrix product states which work well for short-ranged couplings, and on the other hand, it allows us to gain more insights into the underlying mechanisms when mapping non-Markovian reservoirs onto Lindblad-type impurity problems. Finally, we present results for the spectral function of the Anderson impurity model in and out of equilibrium and discuss the accuracy obtained with the different geometries of the auxiliary system.
In particular, we show that allowing for complex Lindblad couplings produces a drastic improvement in the description of the Kondo resonance.

... Grid-based methods [42] provide an alternative approach for approximating non-linear probability density functions, although they rapidly become computationally intractable in high dimensions. Monte Carlo simulation-based techniques such as Sequential Monte Carlo (SMC) [43] and Markov Chain Monte Carlo (MCMC) [44] are among the most powerful and popular methods of approximating probabilities. They are also very flexible, as they do not make any assumptions regarding the probability densities to be approximated. ...

... Here, a Markov chain is a sequence of random samples generated according to a transition probability (kernel) function with the Markov property, i.e. the transition probabilities between different sample values in the state space depend only on the samples' current state. It has been shown that one can always use a well-designed Markov chain that converges to a unique stationary density of interest (in terms of drawn samples) [44]. The convergence occurs after a sufficiently large number of iterations, called the burn-in period. ...
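The burn-in period is easy to see on a toy chain. The AR(1) process below is an assumed example, unrelated to any particular sampler in the cited works: its stationary mean is 0, yet a chain deliberately started far away needs many iterations before its samples are representative, and those early samples must be discarded.

```python
import random

def run_chain(start, steps, rng):
    """Markov chain x' = 0.9 x + N(0, 1); stationary mean 0 for any start."""
    x, out = start, []
    for _ in range(steps):
        x = 0.9 * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out

rng = random.Random(1)
xs = run_chain(start=50.0, steps=5000, rng=rng)  # deliberately bad start
burn_in = 200           # 0.9**200 ~ 1e-9, so the start value is forgotten
kept = xs[burn_in:]
mean_kept = sum(kept) / len(kept)   # close to the stationary mean 0
```

The first samples sit near 50 and would badly bias any average taken over the whole chain; after the burn-in cut, the estimate of the stationary mean is unbiased.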

... The good appearance of the linear fits is deceptive, as they have a rather large χ² and a small goodness of fit Q (see p. 111 of [53]), which can be explained by the small error bars. Another potentially deceptive result is that the imaginary part of the lowest zero decreases like L^{-3.08}. ...

... It is questionable whether two MUCA runs could lead to a reliable estimate of the errors. An error bar from just two independent measurements fluctuates strongly and reaches a 95% confidence range only at about 14 (instead of 2) error bars (see p. 78 of [53]). We decided therefore to smooth the error bars by assuming that the real relative error is the same for all four of our large lattices. ...

We present high-accuracy calculations of the density of states using multicanonical methods for lattice gauge theory with a compact gauge group U(1) on 4⁴, 6⁴, and 8⁴ lattices. We show that the results are consistent with weak and strong coupling expansions. We present methods based on Chebyshev interpolations and Cauchy's theorem to find the (Fisher's) zeros of the partition function in the complex β = 1/g² plane. The results are consistent with reweighting methods whenever the latter are accurate. We discuss the volume dependence of the imaginary part of the Fisher's zeros, and the width and depth of the plaquette distribution at the value of β where the two peaks have equal height. We discuss strategies to discriminate between first- and second-order transitions and explore them with data at larger volume but lower statistics. Higher statistics and even larger lattices are necessary to draw strong conclusions regarding the order of the transition.

... One of the most well-known generalized-ensemble algorithms is perhaps the multicanonical algorithm (MUCA) [42,43] (for reviews see, e.g., Refs. [44,45]). The method is also referred to as entropic sampling [46,47] and adaptive umbrella sampling [48] of the potential energy [49]. ...

... (The details of this process are described, for instance, in Refs. [44,45].) However, the iterative process can be non-trivial and very tedious for complex systems. ...
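The iterative weight determination mentioned in these snippets can be sketched on a toy model. This uses the naive recursion w_new(E) = w_old(E)/H(E), which is the simplest textbook form; production MUCA codes use more stable accumulated-statistics updates. The 10-spin system with E equal to the number of up spins (degeneracy g(E) = C(10, E)) is an assumed example.

```python
import random

def muca_iteration(weights, n_spins, n_sweeps, rng):
    """One MUCA iteration: sample with the current weights, record the
    energy histogram H(E), then update w_new(E) = w_old(E) / H(E)."""
    spins = [0] * n_spins
    e = 0                                # E = number of up spins
    hist = [0] * (n_spins + 1)
    for _ in range(n_sweeps):
        i = rng.randrange(n_spins)
        e_new = e + (1 if spins[i] == 0 else -1)
        # Metropolis acceptance with the current multicanonical weights
        if rng.random() < weights[e_new] / weights[e]:
            spins[i] ^= 1
            e = e_new
        hist[e] += 1
    # divide out the measured histogram (regularized against empty bins)
    new_w = [w / max(h, 1) for w, h in zip(weights, hist)]
    m = max(new_w)
    return [w / m for w in new_w], hist

rng = random.Random(2)
weights = [1.0] * 11                     # flat start, energies 0..10
for _ in range(8):
    weights, hist = muca_iteration(weights, 10, 20000, rng)
# the final histogram should be far flatter than the binomial g(E)/2^10,
# whose min/max ratio is 1/252
flatness = min(hist) / max(hist)
```

This illustrates the "tedious" aspect: each iteration needs enough statistics in every energy bin, and for complex systems the number of iterations and sweeps required can grow substantially.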

In this Special Festschrift Issue for the celebration of Professor Nobuhiro Gō’s 80th birthday, we review enhanced conformational sampling methods for protein structure prediction. We present several generalized-ensemble algorithms, such as the multicanonical algorithm and the replica-exchange method, as well as a parallel Monte Carlo or molecular dynamics method with genetic crossover. Examples of the results of these methods applied to the prediction of protein tertiary structures are also presented.


In this paper, we present the fusion of two complementary approaches for modeling and monitoring the spatio-temporal behavior of a fluid flow system. We also propose a mobile sensor deployment strategy to produce the most accurate estimate of the true ...

... Kim, Shephard and Chib (1998), Meyer and Yu (2000) and Tse, Zhang and Yu (2004) noted that the mixing performance of the sample paths can be measured by the simulation inefficiency factor (SIF), which is also known as the integrated autocorrelation time by Berg (2005). Relative to a hypothetical sampler that draws iid observations from the posterior distribution, SIF is given by the variance ratio σ̂²/σ². ...
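A simple estimator of the SIF (equivalently, twice the integrated autocorrelation time) can be sketched as follows. The self-consistent window truncation is one standard (Sokal-style) choice among several, and the AR(1) test chain is an assumed example with a known exact answer.

```python
import random

def simulation_inefficiency_factor(xs, c=5.0):
    """Windowed estimate of SIF = 1 + 2 * sum_t rho(t): grow the window
    while t < c * tau_int, a simple self-consistent truncation rule."""
    n = len(xs)
    mean = sum(xs) / n
    dev = [x - mean for x in xs]
    var = sum(d * d for d in dev) / n
    tau = 0.5                       # running tau_int = 1/2 + sum of rho(t)
    t = 1
    while t < c * tau and t < n // 2:
        rho = sum(dev[i] * dev[i + t] for i in range(n - t)) / ((n - t) * var)
        tau += rho
        t += 1
    return 2.0 * tau                # SIF = 2 * tau_int

# An AR(1) chain x' = a x + noise has SIF = (1 + a)/(1 - a);
# for a = 0.8 the exact value is 9.
rng = random.Random(3)
a, x, xs = 0.8, 0.0, []
for _ in range(100000):
    x = a * x + rng.gauss(0.0, 1.0)
    xs.append(x)
sif = simulation_inefficiency_factor(xs)
```

An iid sampler would give SIF ≈ 1; values well above 1 quantify how many correlated draws are needed per effectively independent observation.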

Error density estimation in a nonparametric functional regression model with functional predictor and scalar response is considered. The unknown error density is approximated by a mixture of Gaussian densities with means being the individual residuals, and variance as a constant parameter. This proposed mixture error density has a form of a kernel density estimator of residuals, where the regression function is estimated by the functional Nadaraya–Watson estimator. A Bayesian bandwidth estimation procedure that can simultaneously estimate the bandwidths in the kernel-form error density and the functional Nadaraya–Watson estimator is proposed. A kernel likelihood and posterior for the bandwidth parameters are derived under the kernel-form error density. A series of simulation studies show that the proposed Bayesian estimation method performs on par with the functional cross validation for estimating the regression function, but it performs better than the likelihood cross validation for estimating the regression error density. The proposed Bayesian procedure is also applied to a nonparametric functional regression model, where the functional predictors are spectroscopy wavelengths and the scalar responses are fat/protein/moisture content, respectively.

... Ref. [7]). In fact, the recently discovered [6] simple modification of FMC to effectively suppress critical slowing down [8–10] promises to promote FMC into the current top league of simulation algorithms for critical behavior. Yet, even though there is definitely interest in using the machinery of FMC, despite past efforts [11,12] many potential users found the task of setting up the formulas and working out a concrete implementation to be too painful to seriously consider using it in practice. ...

The Fourier Monte Carlo algorithm represents a powerful tool to study criticality in lattice spin systems. In particular, the algorithm constitutes an interesting alternative to other simulation approaches for models with microscopic or effective long-ranged interactions. However, due to the somewhat involved implementation of the basic algorithmic machinery, many researchers still refrain from using Fourier Monte Carlo. It is the aim of the present article to lower this barrier. Thus, the basic Fourier Monte Carlo algorithm is presented in great detail, with emphasis on providing ready-to-use formulas for the reader's own implementation.

... A number of other MCMC techniques are discussed later in this chapter, but the discussion is by no means exhaustive. For a detailed discussion see again Gilks et al. (1996) or Berg (2004) and Hobson et al. (2010). ...

The cosmic microwave background (CMB), a relic of the hot Big Bang, carries the imprints both of structure formation in recent epochs and of the first energetic phases of the Universe. The Planck satellite, in operation from 2009 to 2013, provided high-quality measurements of the CMB anisotropies. These are used in this thesis to determine the parameters of the standard cosmological model and of extensions around the neutrino sector. This work describes the construction of a high-ℓ likelihood function for Planck. This involves a masking strategy for the thermal emission of the Galaxy as well as for point sources. Residual foregrounds are treated directly at the power-spectrum level using physical templates based on Planck studies. The statistical methods needed to extract the cosmological parameters from the comparison between models and data are described, both the Bayesian Markov chain Monte Carlo method and the frequentist profile-likelihood technique. Results on the cosmological parameters are presented using Planck data alone and in combination with small-scale data from ground-based CMB experiments (ACT and SPT), with constraints from measurements of baryon acoustic oscillations (BAO) and from supernovae. Constraints on the absolute neutrino mass scale and on the effective number of neutrinos are also discussed.

... Ref. [26] suggests that the number of iterations, NUM, for the MCMC sampler to settle into equilibrium should satisfy NUM ≫ τ. ...

In this paper, we present a new population-based Monte Carlo method, called MOMCMC (Multi-Objective Markov Chain Monte Carlo), for sampling in the presence of multiple objective functions in real parameter space. The MOMCMC method is designed to address the “multi-objective sampling” problem, which is of interest not only in exploring diversified solutions at the Pareto optimal front in the function space of multiple objective functions, but also those near the front. MOMCMC integrates Differential Evolution (DE) style crossover into Markov Chain Monte Carlo (MCMC) to adaptively propose new solutions from the current population. The significance of dominance is taken into consideration in MOMCMC’s fitness assignment scheme while balancing the solution’s optimality and diversity. Moreover, the acceptance rate in MOMCMC is used to control the sampling bandwidth of the solutions near the Pareto optimal front. The computational results of MOMCMC with the high-dimensional ZDT benchmark functions demonstrate its efficiency in obtaining solution samples at or near the Pareto optimal front. Compared to MOSCEM (Multiobjective Shuffled Complex Evolution Metropolis), an existing Monte Carlo sampling method for multi-objective optimization, MOMCMC exhibits significantly faster convergence to the Pareto optimal front. Furthermore, with small population size, MOMCMC also shows effectiveness in sampling complicated multi-objective function space.

... This section proposes a developed algorithm based on the path-reputation-based scheme solved by the classical Monte Carlo method (PRRMC) [15]. The misbehaving sensor nodes are both intentionally and unintentionally trouble-making nodes, for many reasons, e.g., unexpected node failures, overflow traffic injections in a vulnerable node, node malfunctions, etc. Theoretically, the Monte Carlo method uses feedback experiences to evaluate a proper action to be taken at a particular state. ...

This paper addresses a modified Monte Carlo algorithm to determine the best rule for distinguishing between misbehaving nodes and cooperative nodes for the best proper path in a wireless sensor network, one that satisfies minimum energy, maximum node lifetime and a guaranteed high path reputation value. A topological profile has been designed with static nodes, which are separated into two types, sensor nodes and actor nodes. The actors are stationed and positioned at reachable spots on the land alongside the river, whereas the sensor nodes are floating in the river. The topology is motivated by a segment of the real physical terrain of an arterial river in Thailand, namely the Chaophraya. Accordingly, the system has been formulated as a semi-Markov decision process. Moreover, to evaluate system performance, three objective functions have been proposed: the minimisation of energy consumption, the maximisation of node lifetime, and the path reputation. Through preliminary simulations, the reported results show that the Monte Carlo algorithm with the proposed path reputation scheme outperforms both uniform and greedy selections, improving by up to 10.22% on average compared with the other two methods. The path reputation scheme thus confirms its capability of promoting reliable paths by incorporating good node behaviors.

... Various so-called approximate estimation methods have been developed, but unfortunately none has good properties for all possible models and data sets (e.g., for ungrouped binary data). For this reason, numerical methods involving the Markov Chain Monte Carlo method (Berg 4) have increasingly been used, as increasing computing power and advances in methods have made them more practical. However, drawbacks exist here too, as the underlying distribution needs to be specified in advance. ...

To fulfill existing guidelines, applicants that aim to place their genetically modified (GM) insect-resistant crop plants on the market are required to provide data from field experiments that address the potential impacts of the GM plants on nontarget organisms (NTOs). Such data may be based on varied experimental designs. The recent EFSA guidance document for environmental risk assessment (2010) does not provide clear and structured suggestions that address the statistics of field trials on effects on NTOs. This review examines existing practices in GM plant field testing such as the way of randomization, replication, and pseudoreplication. Emphasis is placed on the importance of design features used for the field trials in which effects on NTOs are assessed. The importance of statistical power and the positive and negative aspects of various statistical models are discussed. Equivalence and difference testing are compared, and the importance of checking the distribution of experimental data is stressed to decide on the selection of the proper statistical model. While for continuous data (e.g., pH and temperature) classical statistical approaches – for example, analysis of variance (ANOVA) – are appropriate, for discontinuous data (counts) only generalized linear models (GLM) are shown to be efficient. There is no golden rule as to which statistical test is the most appropriate for any experimental situation. In particular, in experiments in which block designs are used and covariates play a role, GLMs should be used. Generic advice is offered that will help in both the setting up of field testing and the interpretation and data analysis of the data obtained in this testing. The combination of decision trees and a checklist for field trials, which are provided, will help in the interpretation of the statistical analyses of field trials and in assessing whether such analyses were correctly applied.

... Otherwise, we stay in the present state, i.e. the next state is x. See [8] for a general introduction to MCMC methods. A simple choice for the proposal kernel is to sample from the prior distribution, i.e. k(x, y) = µ 0 (y). ...
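The prior-as-proposal choice mentioned in this snippet can be sketched concretely: with k(x, y) = µ0(y), the proposal densities cancel in the Metropolis–Hastings ratio and the acceptance probability reduces to the likelihood ratio L(y)/L(x). The Gaussian prior/likelihood pair below is an assumed toy example with a known conjugate answer, not the model of the cited paper.

```python
import math
import random

def log_likelihood(x, d=1.5):
    """One observation d with unit observation noise (assumed toy model)."""
    return -0.5 * (d - x) ** 2

rng = random.Random(4)
x = rng.gauss(0.0, 1.0)          # initialise from the prior mu_0 = N(0, 1)
samples = []
for _ in range(50000):
    y = rng.gauss(0.0, 1.0)      # independence proposal drawn from the prior
    log_ratio = log_likelihood(y) - log_likelihood(x)
    if rng.random() < math.exp(min(0.0, log_ratio)):
        x = y                    # accept; otherwise the next state stays x
    samples.append(x)
# conjugate check: the exact posterior is N(d/2, 1/2), so the mean is 0.75
post_mean = sum(samples) / len(samples)
```

This "independence sampler" works well when the prior overlaps the posterior; when the posterior is much narrower than the prior, most proposals are rejected and mixing degrades.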

We present a Bayesian methodology for infinite as well as finite dimensional parameter identification for partial differential equation models. The Bayesian framework provides a rigorous mathematical framework for incorporating prior knowledge on uncertainty in the observations and the parameters themselves, resulting in an approximation of the full probability distribution for the parameters, given the data. Although the numerical approximation of the full probability distribution is computationally expensive, parallelised algorithms can make many practically relevant problems computationally feasible. The probability distribution not only provides estimates for the values of the parameters, but also provides information about the inferability of parameters and the sensitivity of the model. This information is crucial when a mathematical model is used to study the outcome of real-world experiments. Keeping in mind the applicability of our approach to tackle real-world practical problems with data from experiments, in this initial proof of concept work, we apply this theoretical and computational framework to parameter identification for a well studied semilinear reaction-diffusion system with activator-depleted reaction kinetics, posed on evolving and stationary domains.

Introducing power flow control capabilities into the electric power system increases the operational flexibility of the overall system. Quantifying such flexibility is an important aspect in the determination of the value which such devices add to the system. In this paper, Monte Carlo Markov Chain simulations are employed to study the benefits achieved by HVDC lines in combination with Static Var Compensators. In a first approach, the objective is to maximize the achievable power transfer whereas in a second approach, an economic dispatch is determined for fixed loading scenarios. Simulations are carried out for the IEEE Reliability Test System showing that power flow control significantly enlarges the space of feasible generation settings and thereby reduces generation dispatch costs.

HIV-1 integrase has NLSs (nuclear localization signals), which play an important role in intranuclear transport of the viral PIC (preintegration complex). The exact mechanisms of PIC formation and its intranuclear transport are not known. It was shown that NLSs bind to the cell transport machinery, e.g. proteins of the nuclear pore complex such as transportins. We investigated the interaction of this viral protein with proteins of the nuclear pore complex (transportin-SR2). We showed reasons why transportin-SR2, and not transportin-SR1, is the nuclear import protein for HIV-1 integrase: (i) 3D alignments identify differences between transportin-SR1 and transportin-SR2. (ii) Rigid protein–protein docking showed key domain interactions and hydrogen bonds available to transportin-SR2. (iii) Flexible receptor–ligand docking was performed to reveal crucial amino acid residues involved in this hydrogen bond formation. These results are discussed to better understand this specific and efficient retroviral transport route, comparing the interactions of related retroviruses (SIV, HIV-2, PFV, etc.) with their cognate transport proteins, NLS sequences and kinase binding motifs.

In this work we investigate if a small fraction of quarks and gluons, which escaped hadronization and survived as a uniformly spread perfect fluid, can play the role of both dark matter and dark energy. This fluid, as developed in \citep{Brilenkov}, is characterized by two main parameters: $\beta$, related to the amount of quarks and gluons which act as dark matter; and $\gamma$, acting as the cosmological constant. We explore the feasibility of this model at cosmological scales using data from type Ia Supernovae (SNeIa), Long Gamma-Ray Bursts (LGRB) and direct observational Hubble data. We find that: (i) in general, $\beta$ cannot be constrained by SNeIa data nor by LGRB or H(z) data; (ii) $\gamma$ can be constrained quite well by all three data sets, contributing with $\approx78\%$ to the energy-matter content; (iii) when a strong prior on (only) baryonic matter is assumed, the two parameters of the model are constrained successfully.

Inspired by modern out-of-equilibrium statistical physics models, a matrix-product-based framework is defined and studied that permits the formal definition of random vectors and time series whose desired joint distributions are a priori prescribed. Its key feature consists of preserving the writing of the joint distribution as the simple product structure it has under independence, while inputting controlled dependencies amongst components: this is obtained by replacing the product of probability densities by a product of matrices of probability densities. It is first shown that this matrix product model can be remapped onto the framework of Hidden Markov Models. Second, combining this dual perspective enables us both to study the statistical properties of this model in terms of marginal distributions and dependencies (a stationarity condition is notably devised) and to devise an efficient and accurate numerical synthesis procedure. A design procedure is also described that permits the tuning of model parameters to attain targeted statistical properties. Well-chosen pedagogical examples of time series and multivariate vectors illustrate the power and versatility of the proposed approach and show how targeted statistical properties can actually be prescribed.

Global-state networks provide a powerful mechanism to model the increasing heterogeneity in data generated by current systems. Such a network comprises a series of network snapshots with dynamic local states at nodes, and a global network state indicating the occurrence of an event. Mining discriminative subgraphs from global-state networks allows us to identify the influential sub-networks that have maximum impact on the global state and unearth the complex relationships between the local entities of a network and their collective behavior. In this paper, we explore this problem and design a technique called MINDS to mine minimally discriminative subgraphs from large global-state networks. To combat the exponential subgraph search space, we derive the concept of an edit map and perform Metropolis Hastings sampling on it to compute the answer set. Furthermore, we formulate the idea of network-constrained decision trees to learn prediction models that adhere to the underlying network structure. Extensive experiments on real datasets demonstrate excellent accuracy in terms of prediction quality. Additionally, MINDS achieves a speed-up of at least four orders of magnitude over baseline techniques.

Time-series microarray data can capture dynamic genomic behavior not available in steady-state expression data, which has made time-series analysis especially useful in the study of dynamic cellular processes such as the circadian rhythm, disease progression, drug response, and the cell cycle. Using the information available in the time-series data, we address three related computational problems: the prediction of gene expression levels from previous time steps, the simulation of an entire time-series microarray dataset, and the reconstruction of gene regulatory networks. We model the gene expression levels using a linear model, due to its simplicity and the ability to interpret the coefficients as interactions in the underlying regulatory network. A stepwise multiple linear regression method is applied to fit the parameters of the linear model to a given training dataset. The learned model is utilized in predicting and replicating the time course of the expression levels and in identifying the regulatory interactions. Each predicted interaction is also associated with a statistical significance to provide a confidence measure that can guide prioritization in further costly manual or experimental verification. We demonstrate the performance of our approach on several yeast cell-cycle datasets and show that it provides comparable or greater accuracy than existing methods and provides additional quantitative information about the interactions not available from the other methods.

In this paper we aim to introduce the reader to some basic concepts and instruments used in a wide range of economic networks models. In particular, we adopt the theory of random networks as the main tool to describe the relationship between the organization of interaction among individuals within different components of the economy and overall aggregate behavior. The focus is on the ways in which economic agents interact and the possible consequences of their interaction on the system. We show that network models are able to introduce complex phenomena in economic systems by allowing for the endogenous evolution of networks.

Bed erosion and sediment transport are ubiquitous and linked processes in rivers. Erosion can either be modeled as a "detachment limited" function of the shear stress exerted by the flow on the bed, or as a "transport limited" function of the sediment flux capacity of the flow. These two models predict similar channel profiles when erosion rates are constant in space and time, but starkly contrasting behavior in transient settings. Traditionally, detachment limited models have been used for bedrock rivers, whereas transport limited models have been used in alluvial settings. In this study we demonstrate that rivers incising into a substrate of loose, but very poorly sorted, relict glacial sediment behave in a detachment limited manner. We then develop a methodology by which to both test the appropriate incision model and constrain its form. Specifically, we are able to tightly constrain how incision rates vary as a function of the ratio between sediment flux and sediment transport capacity in three rivers responding to deglaciation in the Ladakh Himalaya, northwest India. This represents the first field test of the so-called "tools and cover" effect along individual rivers.

We propose a new generalized-ensemble molecular dynamics simulation algorithm, which we refer to as the multibaric–multithermal molecular dynamics. This is the molecular dynamics version of the recently proposed multibaric–multithermal Monte Carlo method. The multibaric–multithermal simulations perform random walks widely both in the potential-energy space and in the volume space. From only one simulation run, therefore, one can calculate isobaric–isothermal-ensemble averages in wide ranges of temperature and pressure. We test the effectiveness of this algorithm by applying it to a Lennard-Jones 12-6 potential system.

A novel tracking algorithm is proposed for targets with drastically changing geometric appearances over time. To track such objects, we develop a local patch-based appearance model and provide an efficient online updating scheme that adaptively changes the topology between patches. In the online update process, the robustness of each patch is determined by analyzing the likelihood landscape of the patch. Based on this robustness measure, the proposed method selects the best feature for each patch and modifies the patch by moving, deleting, or newly adding it over time. Moreover, a rough object segmentation result is integrated into the proposed appearance model to further enhance it. The proposed framework easily obtains segmentation results because the local patches in the model serve as good seeds for the semi-supervised segmentation task. To solve the complexity problem attributable to the large number of patches, the Basin Hopping (BH) sampling method is introduced into the tracking framework. The BH sampling method significantly reduces computational complexity with the help of a deterministic local optimizer. Thus, the proposed appearance model could utilize a sufficient number of patches. The experimental results show that the present approach could track objects with drastically changing geometric appearance accurately and robustly.

A Bayesian approach for generating inference from multiple overlapping higher-level system data sets on component reliability parameters within systems with continuous life metrics (as distinct from on-demand systems) is presented in this paper. Overlapping data sets are those that are drawn simultaneously from the same process or system. The methodology proposed in this paper is exclusively based on time, but is easily transferable to any other variable (such as distance, flow, etc.). The approach is able to incorporate overlapping evidence from systems with continuous life metrics using a detailed understanding of the system logic represented using fault trees, reliability block diagrams or an equivalent representation. The reliability parameters of each component define the continuous reliability function associated with each sensor location. This paper offers a fully Bayesian method of analyzing multiple overlapping higher-level data sets for complex systems with multiple instances of identical components. The scope of the paper is limited to binary-state systems and components that exist in either ‘failed’ or ‘successful’ states.

The temporal evolution of the coupled variability between the South Atlantic Convergence Zone (SACZ) and the underlying sea surface temperature (SST) during austral summer is investigated using monthly data from the NCEP/NCAR reanalysis. A maximum covariance analysis shows that the SACZ is intensified [weakened] by warm [cold] SST anomalies in the beginning of summer, drifting northward. This migration is accompanied by the cooling [warming] of the underlying oceanic anomalies. The results confirm earlier analyses using numerical models, and suggest the existence of a negative feedback between the SACZ and the underlying South Atlantic SST field. A linear regression of daily anomalies of SST and omega at 500 hPa to the equations of a stochastic oscillator reveals a negative ocean--atmosphere feedback in the western South Atlantic, stronger during January and February and directly underneath the oceanic band of the SACZ.

In this article, we review generalised-ensemble algorithms. Three well-known methods, namely the multicanonical algorithm, simulated tempering, and the replica-exchange method, are described first. Both Monte Carlo and molecular dynamics versions of the algorithms are presented. We then present further extensions of the above three methods. Finally, we discuss the relations among the multicanonical algorithm, the Wang–Landau method, and metadynamics.
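As a concrete illustration of one of the reviewed methods, the following is a minimal Monte Carlo simulated-tempering sketch. The 1D double-well energy, the temperature ladder, and the untuned weights g_m are illustrative assumptions only (in practice the g_m must be adapted to the free energies for efficient temperature flow); it is not the article's own implementation.

```python
import math
import random

def energy(x):
    return (x * x - 1.0) ** 2  # double well with minima at x = +-1

betas = [0.2, 0.5, 1.0, 2.0]   # inverse-temperature ladder
g = [0.0, 0.0, 0.0, 0.0]       # tempering weights (left untuned here)

def simulated_tempering(n_steps, seed=1):
    rng = random.Random(seed)
    x, m = 0.0, 0              # configuration and temperature index
    trace = []
    for _ in range(n_steps):
        # Metropolis move in x at the current inverse temperature
        xp = x + rng.uniform(-0.5, 0.5)
        if rng.random() < math.exp(min(0.0, -betas[m] * (energy(xp) - energy(x)))):
            x = xp
        # propose a move to a neighboring temperature index;
        # acceptance uses exp(-(beta_n - beta_m) E + g_n - g_m)
        n = m + rng.choice([-1, 1])
        if 0 <= n < len(betas):
            log_a = -(betas[n] - betas[m]) * energy(x) + g[n] - g[m]
            if rng.random() < math.exp(min(0.0, log_a)):
                m = n
        trace.append((x, m))
    return trace

trace = simulated_tempering(5000)
```

The temperature index is itself a dynamical variable here, which is what distinguishes simulated tempering from replica exchange, where whole configurations at fixed temperatures are swapped instead.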

Atmospheric chemistry transport models are important tools to investigate the local, regional and global controls on atmospheric composition and air quality. To ensure that these models represent the atmosphere adequately, it is important to compare their outputs with measurements. However, ground based measurements of atmospheric composition are typically sparsely distributed and representative of much smaller spatial scales than those resolved in models; thus, direct comparison incurs uncertainty. In this study, we investigate the feasibility of using observations of one or more atmospheric constituents to estimate parameters in chemistry transport models and to explore how these estimates and their uncertainties depend upon representation errors and the level of spatial coverage of the measurements. We apply Gaussian process emulation to explore the model parameter space and use monthly averaged ground-level concentrations of ozone (O3) and carbon monoxide (CO) from across Europe and the US. Using synthetic observations, we find that the estimates of parameters with greatest influence on O3 and CO are unbiased, and the associated parameter uncertainties are low even at low spatial coverage or with high representation error. Using reanalysis data, we find that estimates of the most influential parameter – corresponding to the dry deposition process – are closer to its expected value using both O3 and CO data than using O3 alone. This is remarkable because it shows that while CO is largely unaffected by dry deposition, the additional constraints it provides are valuable for achieving unbiased estimates of the dry deposition parameter. In summary, these findings identify the level of spatial representation error and coverage needed to achieve good parameter estimates and highlight the benefits of using multiple constraints to calibrate atmospheric chemistry transport models.

Calculating free energies is an important and notoriously difficult task for
molecular simulations. The rapid increase in computational power has made it
possible to probe increasingly complex systems, yet extracting accurate free
energies from these simulations remains a major challenge. Fully exploring the
free energy landscape of, say, a biological macromolecule typically requires
sampling large conformational changes and slow transitions. Often, the only
feasible way to study such a system is to simulate it using an enhanced
sampling method. The accelerated weight histogram (AWH) method is a new,
efficient extended ensemble sampling technique which adaptively biases the
simulation to promote exploration of the free energy landscape. The AWH method
uses a probability weight histogram which allows for efficient free energy
updates and results in an easy discretization procedure. A major advantage of
the method is its general formulation, making it a powerful platform for
developing further extensions and analyzing its relation to already existing
methods. Here, we demonstrate its efficiency and general applicability by
calculating the potential of mean force along a reaction coordinate for both a
single dimension and multiple dimensions. We make use of a non-uniform, free
energy dependent target distribution in reaction coordinate space so that
computational efforts are not wasted on physically irrelevant regions. We
present numerical results for molecular dynamics simulations of lithium acetate
in solution and chignolin, a 10-residue long peptide that folds into a
$\beta$-hairpin. We further present practical guidelines for setting up and
running an AWH simulation.

Within the recently introduced auxiliary master equation approach it is
possible to address steady state properties of strongly correlated impurity
models, small molecules or clusters efficiently and with high accuracy. It is
particularly suited for dynamical mean field theory in the nonequilibrium as
well as in the equilibrium case. The method is based on the solution of an
auxiliary open quantum system, which can quickly be made equivalent to the
original impurity problem. In its first implementation a Krylov space method
was employed. Here, we aim at extending the capabilities of the approach by
adopting matrix product states for the solution of the corresponding auxiliary
quantum master equation. This allows for a drastic increase in accuracy and
permits us to access the Kondo regime for large values of the interaction. In
particular, we investigate the nonequilibrium steady state of a single impurity
Anderson model and focus on the spectral properties for temperatures T below
the Kondo temperature T_K and for small bias voltages phi. For the two cases
considered, with T=T_K/4 and T=T_K/10 we find a clear splitting of the Kondo
resonance into a two-peak structure for phi slightly above T_K. In the equilibrium
case (phi=0) and for T=T_K/4, the obtained spectral function essentially
coincides with the one from numerical renormalization group.

Online assessment of the remaining useful life (RUL) of a system or device has been widely studied for performance reliability, production safety, system condition-based maintenance, and decision making in remanufacturing engineering. However, there is no consistent framework for recursive RUL estimation in complex degrading systems and devices. In this paper, a state space model (SSM) with Bayesian online estimation, developed from the Markov chain Monte Carlo (MCMC) to the Sequential Monte Carlo (SMC) algorithm, is presented in order to derive the optimal Bayesian estimate. In the context of nonlinear and non-Gaussian dynamic systems, SMC (also named the particle filter, PF) is quite capable of performing filtering and RUL assessment recursively. The underlying deterioration of a system or device is seen as a stochastic process with continuous, irreversible degradation. The state of the deterioration tendency is filtered and predicted from updating observations through the SMC procedure, and the corresponding remaining useful life is estimated from the state degradation and a predefined failure threshold with a two-sided criterion. The paper presents an application to cutter-tool RUL assessment on a milling machine using the proposed methodology. The example shows promising results and the effectiveness of SSM and SMC for online assessment of RUL.
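The SMC recursion and threshold-based RUL estimate can be sketched as a bootstrap particle filter. The linear drift model, Gaussian noises, and failure threshold below are illustrative assumptions, not the paper's milling-machine model:

```python
import math
import random

random.seed(0)
N = 500                       # number of particles
drift, q, r = 0.1, 0.05, 0.2  # drift, process noise, observation noise
threshold = 5.0               # failure threshold on the latent state

# simulate a "true" degradation path and noisy observations of it
truth, obs = [], []
x = 0.0
for t in range(40):
    x += drift + random.gauss(0.0, q)
    truth.append(x)
    obs.append(x + random.gauss(0.0, r))

particles = [0.0] * N
for y in obs:
    # propagate every particle through the degradation model
    particles = [p + drift + random.gauss(0.0, q) for p in particles]
    # weight by the Gaussian likelihood of the observation
    w = [math.exp(-0.5 * ((y - p) / r) ** 2) for p in particles]
    s = sum(w)
    w = [wi / s for wi in w]
    # multinomial resampling keeps the cloud near the posterior
    particles = random.choices(particles, weights=w, k=N)

# RUL estimate: expected number of drift steps until threshold crossing
est_state = sum(particles) / N
rul = max(0.0, (threshold - est_state) / drift)
```

In the paper's setting the deterministic crossing time would be replaced by propagating the particle cloud forward and applying the two-sided failure criterion.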

The precision and accuracy of simulations in nanoscience, fluid dynamics, and biotechnology, when their boundary-condition problems are compared against real experimental results, are in general related to the characteristics of the numerical approach used and, subsequently, to the morphological structure of the lattices used in the calculations. The more closely the lattice approximates the original boundary or initial condition problem, the more precise the simulation will be. This work presents a simple algorithm that can be used to build huge lattices containing the main geometrical structures statistically similar to experimental 2D image data of ceramic grains, using freely available software.

Several decision-analytic modeling techniques are in use for pharmacoeconomic analyses. Discretely integrated condition event (DICE) simulation is proposed as a unifying approach that has been deliberately designed to meet the modeling requirements in a straightforward transparent way, without forcing assumptions (e.g., only one transition per cycle) or unnecessary complexity. At the core of DICE are conditions that represent aspects that persist over time. They have levels that can change and many may coexist. Events reflect instantaneous occurrences that may modify some conditions or the timing of other events. The conditions are discretely integrated with events by updating their levels at those times. Profiles of determinant values allow for differences among patients in the predictors of the disease course. Any number of valuations (e.g., utility, cost, willingness-to-pay) of conditions and events can be applied concurrently in a single run. A DICE model is conveniently specified in a series of tables that follow a consistent format and the simulation can be implemented fully in MS Excel, facilitating review and validation. DICE incorporates both state-transition (Markov) models and non-resource-constrained discrete event simulation in a single formulation; it can be executed as a cohort or a microsimulation; and deterministically or stochastically.

Critical engineering systems generally demand high reliability under uncertainty, which makes failure event identification and reliability analysis based on computer simulation codes computationally very expensive. A rare event can be defined as a failure event with a small probability of failure, normally less than 10^-5. This paper presents a new approach, referred to as Accelerated Failure Identification Sampling (AFIS), for probability analysis of rare events, enabling vast savings in computational effort. To efficiently identify rare failure sample points, the proposed AFIS technique first constructs a Gaussian process (GP) model for the system performance of interest and uses it to predict the unknown responses of Monte Carlo sample points. Second, a new quantitative measure, the "failure potential", is developed to iteratively search for the sample points most likely to be failure points. Third, the identified sample points with the highest failure potentials are evaluated for their true performance and used to update the GP model. The failure identification process is iterated, and Monte Carlo simulation is then employed to estimate the probability of the rare event once the maximum failure potential of the existing Monte Carlo samples falls below a given target value. Two case studies demonstrate the effectiveness of the developed AFIS approach for rare event identification and probability analysis.

In molecular simulations of complex systems with many degrees of freedom, conventional Monte Carlo and molecular dynamics simulations in canonical ensemble or isobaric-isothermal ensemble suffer from a great difficulty, in which simulations tend to get trapped in states of energy local minima. A simulation in generalized ensemble performs a random walk in specified variables and overcomes this difficulty. In this chapter, we review the generalized-ensemble algorithms. Replica-exchange method, multicanonical algorithm, and their extensions are described. Some simulation results based on these generalized-ensemble algorithms are also presented.

Sensor-based localization is one of the most fundamental problems in mobile and wireless robotics. One can easily track a mobile robot using a Kalman filter, which uses a phase-locked loop for tracking via averaging of values. Tracking has thus become easy, but one wants to proceed to navigation, because tracking alone does not tell the robot where it is going. One would therefore like a more precise approach to navigation, such as Monte Carlo Localization. It is more efficient and precise than a feedback loop, because feedback loops are more sensitive to noise and require modifying the external loop filter according to variations in the processing. In this approach, the robot maintains its belief in the form of a probability density function (pdf) over a window of about one square meter (the area depends on the sensors). A door in a wall appears as a peak in the probability function, i.e., in the robot's belief. One starts with a uniform probability density function, which the sensor readings then update. The authors use Monte Carlo Localization for updating the belief, which is efficient because, unlike a feedback loop, it can be applied to continuous data input, and it requires less space: the robot does not need to store the map and can therefore discard its previous belief without hesitation.
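The belief update described above can be sketched as a particle filter on a 1D circular corridor. The noise-free door sensor, unit motion, and 0.9/0.1 robust weighting below are illustrative assumptions, much simpler than the chapter's setting:

```python
import random
from collections import Counter

random.seed(3)
L = 100                       # corridor length in cells (wraps around)
doors = {10, 40, 75}          # cells where a door is visible

N = 1000
particles = [random.randrange(L) for _ in range(N)]  # uniform prior belief
true_pos = 5
for _ in range(40):
    true_pos = (true_pos + 1) % L                  # robot moves one cell
    z = true_pos in doors                          # binary door measurement
    particles = [(p + 1) % L for p in particles]   # move every particle
    # robust weighting: agreement with the measurement is 9x more likely
    w = [0.9 if (p in doors) == z else 0.1 for p in particles]
    particles = random.choices(particles, weights=w, k=N)

# the belief should now concentrate on the true position
mode, _ = Counter(particles).most_common(1)[0]
```

After the robot has passed two doors, the inter-door spacing disambiguates the position and the particle cloud collapses onto the true cell.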

In the presence of multiscale dynamics in a reaction network, direct
simulation methods become inefficient as they can only advance the system on
the smallest scale. This work presents stochastic averaging techniques to
accelerate computations for obtaining estimates of expected values and
sensitivities with respect to the steady state distribution. A two-time-scale
formulation is used to establish bounds on the bias induced by the averaging
method. Further, this formulation provides a framework to create an accelerated
`averaged' version of most single-scale sensitivity estimation methods. In
particular, we propose a new lower-variance ergodic likelihood ratio type
estimator for steady-state estimation and show how one can adapt it to
accelerated simulations of multiscale systems. Lastly, we develop an adaptive
"batch-means" stopping rule for determining when to terminate the
micro-equilibration process.

The use of computer simulations as "virtual microscopes" is limited by sampling difficulties that arise from the large dimensionality and the complex energy landscapes of biological systems, leading to poor convergence already in folding simulations of single proteins. In this chapter, we discuss a few strategies to enhance sampling in biomolecular simulations and present some recent applications.

The term ‘time series’ refers to data that can be represented as a sequence. This includes for example financial data in which the sequence index indicates time, and genetic data (e.g. ACATGC …) in which the sequence index has no temporal meaning. In this tutorial we give an overview of discrete-time probabilistic models, which are the subject of most chapters in this book, with continuous-time models being discussed separately in Chapters 4, 6, 11 and 17. Throughout our focus is on the basic algorithmic issues underlying time series, rather than on surveying the wide field of applications.

We propose and analyze a potential-induced random walk and its modification, called random teleportation, on finite graphs. The transition probability is determined by the gaps between potential values of adjacent and teleportation nodes. We show that the steady state of this process has a number of desirable properties. We present a continuous-time analogue of the random walk and teleportation, and derive a lower bound on the order of its exponential convergence rate to the stationary distribution. The efficiency of the proposed random teleportation in searching for the global potential minimum on graphs and in node ranking is demonstrated by numerical tests. Moreover, we discuss the conditions on graphs and potential distributions under which the proposed approach may work inefficiently, and introduce the intermittent diffusion strategy to overcome the problem and improve practical performance.
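A sketch of the scheme follows. Since the paper's exact gap-based transition probabilities are not reproduced here, a Metropolis-style weight exp(-beta * gap) for uphill moves and a small uniform teleportation rate are assumed purely for illustration:

```python
import math
import random

random.seed(7)
V = [3.0, 2.0, 4.0, 0.5, 2.5, 1.0]   # node potentials; global minimum at node 3
adj = {0: [1, 2], 1: [0, 3], 2: [0, 4], 3: [1, 5], 4: [2, 5], 5: [3, 4]}
beta, tele = 2.0, 0.05               # inverse temperature, teleportation rate

def step(i):
    if random.random() < tele:       # teleport uniformly at random
        return random.randrange(len(V))
    j = random.choice(adj[i])        # propose a neighbor
    gap = V[j] - V[i]
    # downhill moves always accepted; uphill damped by the potential gap
    return j if random.random() < math.exp(-beta * max(gap, 0.0)) else i

counts = [0] * len(V)
i = 0
for _ in range(20000):
    i = step(i)
    counts[i] += 1

best = counts.index(max(counts))     # most-visited node ~ global minimum
```

Visit counts concentrate on low-potential nodes, so ranking nodes by occupancy recovers the global minimum, which is the search use-case the paper tests numerically.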

Network meta-analysis (NMA) is an extension of pairwise meta-analysis that facilitates comparisons of multiple interventions in a single analysis. It is a method in which multiple interventions (that is, three or more) are compared using both direct comparisons of interventions within randomized controlled trials and indirect comparisons across trials based on a common comparator. NMA is methodologically complex compared to simple pairwise meta-analysis, as it accounts for a broader evidence base. Results from NMA are more useful to policy makers, service commissioners, and providers when making choices between multiple alternatives than those from multiple, separate pairwise meta-analyses. It is also well suited to comparing complex, multifaceted interventions. Apart from its numerous benefits, NMA is prone to methodological complications that need to be understood, implemented, and finally reported correctly. This article is meant to provide a primer on the various methodological issues pertaining to NMA. An NMA can be as valid as a standard pairwise meta-analysis if these methodological issues are taken care of.

The knowledge about the deteriorating characteristics and future operation conditions in civil infrastructure systems is never complete. Any relevant decision-making framework must include a rational approach for quantifying these uncertainties and their bearing on the decision making process, through the entire life-cycle of operation. Health monitoring data, when available, may be used to update the probabilistic quantification related to these uncertainties. This work presents a Bayesian framework for updating the assessment of bridge infrastructure systems through use of monitoring data with focus on the deteriorating characteristics of bridges. Stochastic simulation techniques are proposed for the Bayesian updating and various model classes are examined for the bridge system, describing different possible deteriorating models. The updating of the relative likelihood for each of the models through monitoring data is also investigated.

This paper presents decision-centered lifetime and reliability prognostics using a generic Bayesian framework. The framework models and updates sensory degradation data, remaining life, and reliability using a non-conjugate Bayesian updating mechanism, and thus continuously updates the lifetime distributions of a degraded system in real time. Furthermore, the generic Bayesian framework eliminates the dependency of the evolutionary updating process on the choice of distribution types for the parameters of a sensory degradation model. The Markov chain Monte Carlo (MCMC) technique is employed as the numerical method for non-conjugate Bayesian updating. By accounting for variability in loading conditions, material properties, and manufacturing tolerances over a population of system samples, different reliabilities are identified for different samples, so a reliability distribution for an engineering system can be obtained and updated in a Bayesian format. The proposed methodology is generally applicable to different degradation models and prior distribution types, and is successfully demonstrated with 26 resistors for lifetime and reliability prognostics. Copyright © 2008 by the American Institute of Aeronautics and Astronautics, Inc.

Multicanonical MCMC (Multicanonical Markov Chain Monte Carlo; Multicanonical Monte Carlo) is discussed as a method of rare event sampling. Starting from a review of the generic framework of importance sampling, multicanonical MCMC is introduced, followed by applications in random matrices, random graphs, and chaotic dynamical systems. Replica exchange MCMC (also known as parallel tempering or Metropolis-coupled MCMC) is also explained as an alternative to multicanonical MCMC. In the last section, multicanonical MCMC is applied to data surrogation; a successful implementation in surrogating time series is shown. In the appendix, calculation of averages and normalizing constant in an exponential family, phase coexistence, simulated tempering, parallelization, and multivariate extensions are discussed.
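The replica-exchange alternative mentioned above can be sketched in a few lines: replicas at different inverse temperatures are updated independently, and their configurations are swapped with the standard Metropolis criterion. The 1D double-well energy and two-replica setup are illustrative assumptions; production runs use many temperatures with tuned spacings:

```python
import math
import random

random.seed(11)

def energy(x):
    return (x * x - 1.0) ** 2     # 1D double well

betas = [0.3, 3.0]                # hot and cold inverse temperatures
xs = [0.0, 0.0]                   # one configuration per replica
swaps_accepted = 0

for sweep in range(5000):
    for k in range(2):            # Metropolis update within each replica
        xp = xs[k] + random.uniform(-0.5, 0.5)
        dE = energy(xp) - energy(xs[k])
        if random.random() < math.exp(min(0.0, -betas[k] * dE)):
            xs[k] = xp
    # swap attempt: accept with min(1, exp((b0 - b1)(E0 - E1)))
    log_a = (betas[0] - betas[1]) * (energy(xs[0]) - energy(xs[1]))
    if random.random() < math.exp(min(0.0, log_a)):
        xs[0], xs[1] = xs[1], xs[0]
        swaps_accepted += 1
```

The hot replica crosses the barrier freely and hands well-decorrelated configurations down to the cold replica through the swaps, which is the mechanism that makes parallel tempering an alternative to multicanonical sampling for rare events.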

We present and analyze in detail a test bench for random number sequences based on the use of physical models. The first two tests, namely the cluster test and the autocorrelation test, are based on exactly known properties of the two-dimensional Ising model. The other two, the random walk test and the n-block test, are based on random walks on lattices. We have applied these tests to a number of commonly used pseudorandom number generators. The cluster test is shown to be particularly efficient in detecting periodic correlations on bit level, while the autocorrelation, the random walk, and the n-block tests are very well suited for studies of weak correlations in random number sequences. Based on the test results, we demonstrate the reasons behind errors in recent high precision Monte Carlo simulations, and discuss how these could be avoided.
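The random-walk test idea can be illustrated with a simple quadrant-occupancy check: after n steps of a symmetric 2D walk, each quadrant should contain about a quarter of the walkers, and a chi-square statistic flags biased generators. The conventions below (walkers ending on an axis are discarded, Python's Mersenne Twister as the generator under test) are illustrative assumptions, not the paper's exact test specification:

```python
import random

random.seed(2024)
n_walkers, n_steps = 4000, 50

counts = [0, 0, 0, 0]
total = 0
for _ in range(n_walkers):
    x = y = 0
    for _ in range(n_steps):
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x += dx
        y += dy
    if x == 0 or y == 0:
        continue                     # axis hits carry no quadrant information
    q = (x > 0) + 2 * (y > 0)        # quadrant index 0..3
    counts[q] += 1
    total += 1

# chi-square statistic against the uniform expectation total/4;
# for 3 degrees of freedom, values far above ~16 signal a biased generator
chi2 = sum((c - total / 4) ** 2 / (total / 4) for c in counts)
```

A generator with directional correlations would pile walkers into particular quadrants and inflate chi2 far beyond its expected value of about 3.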

A new approach to Monte Carlo simulations is presented, giving a highly efficient method of simulation for large systems near criticality. The algorithm violates dynamic universality at second-order phase transitions, producing unusually small values of the dynamical critical exponent.

We present a new Monte Carlo algorithm that produces results of high accuracy with reduced simulational effort. Independent random walks are performed (concurrently or serially) in different, restricted ranges of energy, and the resultant density of states is modified continuously to produce locally flat histograms. This method permits us to directly access the free energy and entropy, is independent of temperature, and is efficient for the study of both first-order and second-order phase transitions. It should also be useful for the study of complex systems with a rough energy landscape.
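The flat-histogram idea can be sketched for a toy system whose density of states is known exactly: N independent spins with E defined as the number of up spins, so that g(E) = C(N, E). The single energy window, the 0.8 flatness criterion, and the halving schedule below are simplifying assumptions relative to the full method:

```python
import math
import random

random.seed(5)
N = 10
spins = [0] * N
E = 0                          # current energy = number of up spins
ln_g = [0.0] * (N + 1)         # running estimate of ln g(E)
hist = [0] * (N + 1)           # visit histogram for the flatness check
ln_f = 1.0                     # modification factor, halved when flat

for _ in range(60):            # hard cap on refinement rounds
    if ln_f <= 1e-4:
        break
    for _ in range(20000):
        i = random.randrange(N)
        Ep = E + (1 - 2 * spins[i])              # energy after flipping spin i
        # accept with probability min(1, g(E) / g(E'))
        if random.random() < math.exp(min(0.0, ln_g[E] - ln_g[Ep])):
            spins[i] ^= 1
            E = Ep
        ln_g[E] += ln_f                          # modify g continuously
        hist[E] += 1
    if min(hist) > 0.8 * sum(hist) / len(hist):  # histogram flat enough?
        ln_f /= 2.0
        hist = [0] * (N + 1)

ln_g = [v - ln_g[0] for v in ln_g]  # normalize: exactly ln g(0) = ln C(N,0) = 0
```

Once ln_g converges, thermodynamic quantities at any temperature follow from g(E) alone, which is why the method gives direct access to free energy and entropy.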

Multicanonical ensemble simulations of first-order phase transitions suffer from exponential slowing down: Monte Carlo autocorrelation times diverge exponentially with the free energy barriers ΔF, which in L^d boxes grow as L^(d-1). We exemplify the situation in a study of the 2D Ising model at temperature T/T_c = 0.63 for two different lattice manifolds, toroidal lattices and surfaces of cubes. For both geometries the effect is caused by discontinuous droplet shape transitions between various classical crystal shapes obeying geometrical constraints. We use classical droplet theory and numerical simulations to calculate transition points and barrier heights. On toroidal lattices we determine finite size corrections to the droplet free energy, which are given by a linear combination of Gibbs–Thomson corrections, capillary wave fluctuation corrections, constant terms, and logarithmic terms in the droplet volume. Tolman corrections are absent. In addition, we study the finite size effects on the condensation phase transition, which occurs in infinite systems at the Onsager value of the magnetization. We find that this transition is also of discontinuous order. A combination of classical droplet theory and Gibbs–Thomson corrections yields a fair description of the transition point and of the droplet size discontinuity for large droplets. We also estimate the nucleation barrier that has to be surmounted in the formation of the stable droplet at coexistence.

Sampling, Statistics and Computer Code; Error Analysis for Independent Random Variables; Markov Chain Monte Carlo; Error Analysis for Markov Chain Data; Advanced Monte Carlo; Parallel Computing; Conclusions, History and Outlook.

This is a tutorial review on the Potts model aimed at bringing out in an
organized fashion the essential and important properties of the standard
Potts model. Emphasis is placed on exact and rigorous results, but other
aspects of the problem are also described to achieve a unified
perspective. Topics reviewed include the mean-field theory, duality
relations, series expansions, critical properties, experimental
realizations, and the relationship of the Potts model with other
lattice-statistical problems.

Preface; 1. Overview; 2. Structure and scattering; 3. Thermodynamics and
statistical mechanics; 4. Mean-field theory; 5. Field theories, critical
phenomena, and the renormalization group; 6. Generalized elasticity; 7.
Dynamics: correlation and response; 8. Hydrodynamics; 9. Topological
defects; 10. Walls, kinks and solitons; Glossary; Index.

The critical-point anomaly of a plane square m×n Ising lattice with periodic boundary conditions (a torus) is analyzed asymptotically in the limit n→∞ with ξ = m/n fixed. Among other results, it is shown that for fixed τ = n(T - T_c)/T_c, the specific heat per spin of a large lattice is given by C_mn(T)/(k_B mn) = A_0 ln n + B(τ, ξ) + B_1(τ)(ln n)/n + B_2(τ, ξ)/n + O[(ln n)^3/n^2], where explicit expressions can be given for A_0 and for the functions B, B_1, and B_2. It follows that the specific-heat peak of the finite lattice is rounded on a scale δ = ΔT/T_c ∼ 1/n, while the maximum in C_mn(T) is displaced from T_c by ε = (T_c - T_max)/T_c ∼ 1/n. For ξ_0 > ξ > ξ_0^(-1), where ξ_0 = 3.13927⋯, the maximum lies above T_c; but for ξ > ξ_0 or ξ < ξ_0^(-1), the maximum is depressed below T_c; when ξ = ∞, ξ_0, or ξ_0^(-1), the relative shift of the maximum from T_c is only of order (ln n)/n^2. Detailed graphs and numerical data are presented, and the results are compared with some for lattices with free edges. Some heuristic arguments are developed which indicate the possible nature of finite-size critical-point effects in more general systems.

This new and updated edition deals with all aspects of Monte Carlo simulation of complex physical systems encountered in condensed-matter physics, statistical mechanics, and related fields. After briefly recalling essential background in statistical mechanics and probability theory, it gives a succinct overview of simple sampling methods. The concepts behind the simulation algorithms are explained comprehensively, as are the techniques for efficient evaluation of system configurations generated by simulation. It contains many applications, examples, and exercises to help the reader and provides many new references to more specialized literature. This edition includes a brief overview of other methods of computer simulation and an outlook for the use of Monte Carlo simulations in disciplines beyond physics. It is an excellent guide for graduate students and researchers who use computer simulations in their research, and can be used as a textbook for graduate courses on computer simulations in physics and related disciplines. The book offers a broad and self-contained overview of Monte Carlo simulations, with extensive cross-referencing between simulation and relevant theory and between applications of similar algorithms in different contexts, and provides many applications, examples, 'recipes', and specific case studies.

It is suggested that the interface free energy between bulk phases with a macroscopically flat interface can be estimated from the variation of certain probability distribution functions of finite blocks with block size. For a liquid-gas system the probability distribution of the density would have to be used. The method is particularly suitable for the critical region where other methods are hard to apply. As a test case, the two-dimensional lattice-gas model is treated and it is shown that already, from rather small blocks, one obtains results consistent with the exact solution of Onsager for the surface tension, by performing appropriate extrapolations. The surface tension of the three-dimensional lattice-gas model is also estimated and found to be reasonably consistent with the expected critical behavior. The universal amplitude of the surface tension of fluids near their critical point is estimated and shown to be in significantly better agreement with experimental data than the results of Fisk and Widom and the first-order 4-d renormalization-group expansion. Also the universal amplitude ratio used in nucleation theory near the critical point is estimated.

It is shown that the two-dimensional q-component Potts model is equivalent to a staggered ice-type model. It is deduced that the model has a first-order phase transition for q > 4, and a higher-order transition for q ≤ 4. The free energy and latent heat at the transition are calculated.

A general method, suitable for fast computing machines, for investigating such properties as equations of state for substances consisting of interacting individual molecules is described. The method consists of a modified Monte Carlo integration over configuration space. Results for the two-dimensional rigid-sphere system have been obtained on the Los Alamos MANIAC and are presented here. These results are compared to the free volume equation of state and to a four-term virial coefficient expansion.
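For hard particles, the modified Monte Carlo integration described above reduces to accepting any trial displacement that creates no overlap. The following is a minimal Python sketch for 2D hard disks in a periodic box; the box size, disk number, and step size are illustrative choices, not those of the original MANIAC study:

```python
import random

random.seed(42)
Lbox, sigma, n_disks = 10.0, 1.0, 16   # box side, disk diameter, disk count

# start from a non-overlapping square lattice (spacing 2.5 > sigma)
disks = [((i % 4) * 2.5 + 0.5, (i // 4) * 2.5 + 0.5) for i in range(n_disks)]

def dist2(a, b):
    dx = abs(a[0] - b[0]); dx = min(dx, Lbox - dx)   # minimum-image distance
    dy = abs(a[1] - b[1]); dy = min(dy, Lbox - dy)
    return dx * dx + dy * dy

def overlaps(k, pos):
    return any(dist2(pos, disks[j]) < sigma ** 2
               for j in range(n_disks) if j != k)

accepted = 0
for _ in range(5000):
    k = random.randrange(n_disks)
    x, y = disks[k]
    trial = ((x + random.uniform(-0.3, 0.3)) % Lbox,
             (y + random.uniform(-0.3, 0.3)) % Lbox)
    if not overlaps(k, trial):       # hard core: accept iff no overlap
        disks[k] = trial
        accepted += 1
```

For soft potentials the overlap check is replaced by the acceptance probability min(1, exp(-ΔE/kT)), which is the general form of the method.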

Preface; 1. Introduction; 2. Some necessary background; 3. Simple
sampling Monte Carlo methods; 4. Importance sampling Monte Carlo
methods; 5. More on importance sampling Monte Carlo methods of lattice
systems; 6. Off-lattice models; 7. Reweighting methods; 8. Quantum Monte
Carlo methods; 9. Monte Carlo renormalization group methods; 10.
Non-equilibrium and irreversible processes; 11. Lattice gauge models: a
brief introduction; 12. A brief review of other methods of computer
simulation; 13. Monte Carlo simulations at the periphery of physics and
beyond; 14. Monte Carlo studies of biological molecules; 15. Outlook;
Appendix; Index.

We consider the exact correlation length calculations for the two-dimensional Potts model at the transition point $\beta_{\rm t}$ by Klümper, Schadschneider and Zittartz, and by Buffenoir and Wallon. We argue that the correlation length calculated by the latter authors is the correlation length in the disordered phase and then combine their result with duality and the assumption of complete wetting to give an explicit formula for the order-disorder interface tension $\sigma_{\rm od}$ of this model. The result is used to clarify a controversy stemming from different numerical simulations of $\sigma_{\rm od}$.

This article describes an approach towards a random number generator that passes all of the stringent tests for randomness we have put to it, and that is able to produce exactly the same sequence of uniform random variables in a wide variety of computers, including TRS80, Apple, Macintosh, Commodore, Kaypro, IBM PC, AT, PC and AT clones, Sun, Vax, IBM 3090, Amdahl, CDC Cyber and even 205 and ETA supercomputers.
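
A hedged sketch of the construction idea (in Python; the lags and constants below are illustrative, not the published parameters): combine a subtractive lagged-Fibonacci sequence on 24-bit fractions with a slowly stepping arithmetic sequence. Because every operation is exact on multiples of 2^-24, the stream is bit-for-bit reproducible across machines, which is the property the paper emphasizes.

```python
class ToyCombinedRNG:
    """Toy generator in the spirit of a 'universal' portable RNG:
    a subtractive lagged-Fibonacci sequence on 24-bit fractions,
    combined with a slowly stepping arithmetic ('Weyl') sequence.
    All constants and lags here are illustrative assumptions."""

    def __init__(self, seed=12345):
        # fill the lag table with 24-bit fractions from a simple LCG
        s = seed & 0x7FFFFFFF
        self.u = []
        for _ in range(17):
            s = (69069 * s + 1) & 0xFFFFFFFF
            self.u.append((s >> 8) / 2 ** 24)
        self.i, self.j = 16, 4
        self.c = 362436.0 / 2 ** 24          # Weyl state
        self.cd = 7654321.0 / 2 ** 24        # Weyl decrement
        self.cm = 16777213.0 / 2 ** 24       # Weyl modulus

    def next(self):
        # subtractive lagged-Fibonacci step, exact on 24-bit fractions
        x = self.u[self.i] - self.u[self.j]
        if x < 0.0:
            x += 1.0
        self.u[self.i] = x
        self.i = (self.i - 1) % 17
        self.j = (self.j - 1) % 17
        # combine with the arithmetic sequence
        self.c -= self.cd
        if self.c < 0.0:
            self.c += self.cm
        x -= self.c
        if x < 0.0:
            x += 1.0
        return x                              # uniform in [0, 1)
```

Two instances seeded identically produce identical streams on any IEEE-754 machine, since no intermediate result ever needs more than 24 bits of fraction.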

We present a new method for using the data from Monte Carlo simulations that can increase the efficiency by 2 or more orders of magnitude. A single Monte Carlo simulation is sufficient to obtain complete thermodynamic information over the entire scaling region near a phase transition. The accuracy of the method is demonstrated by comparison with exact results for the d=2 Ising model. New results for d=2 eight-state Potts model are also presented. The method is generally applicable to statistical models and lattice-gauge theories.
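
The reweighting identity behind this method can be checked on a toy model with an exactly known density of states. The sketch below (Python for illustration; the free-boundary 1D Ising chain is my choice of test system, not one treated in the paper) takes the energy histogram at one temperature and reweights each entry by exp(-(β-β₀)E) to recover averages at a nearby temperature; with an exact histogram the result is exact.

```python
from math import comb, exp

def exact_energy(beta, N=10, J=1.0):
    """Direct canonical average <E> for the free-boundary 1D Ising chain,
    using the exact density of states n(k) = 2*C(N-1, k) for k broken
    bonds, with E = -(N-1)*J + 2*k*J."""
    energies = [-(N - 1) * J + 2 * k * J for k in range(N)]
    w = [2 * comb(N - 1, k) * exp(-beta * e) for k, e in enumerate(energies)]
    return sum(e * wi for e, wi in zip(energies, w)) / sum(w)

def reweight_energy(beta0, beta, N=10, J=1.0):
    """Single-histogram reweighting: take the (here exact) energy
    histogram at beta0 and reweight each entry by exp(-(beta-beta0)*E)
    to obtain <E> at another temperature beta."""
    energies = [-(N - 1) * J + 2 * k * J for k in range(N)]
    hist = [2 * comb(N - 1, k) * exp(-beta0 * e) for k, e in enumerate(energies)]
    w = [h * exp(-(beta - beta0) * e) for h, e in zip(hist, energies)]
    return sum(e * wi for e, wi in zip(energies, w)) / sum(w)
```

In a real simulation the histogram carries statistical noise, which is why reweighting is reliable only over the temperature window in which the sampled energy distributions overlap.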

A Monte Carlo algorithm is presented that updates large clusters of spins simultaneously in systems at and near criticality. We demonstrate its efficiency in the two-dimensional O(n) σ models for n=1 (Ising) and n=2 (x-y) at their critical temperatures, and for n=3 (Heisenberg) with correlation lengths around 10 and 20. On lattices up to 128² no sign of critical slowing down is visible with autocorrelation times of 1-2 steps per spin for estimators of long-range quantities.
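
For the Ising (n=1) case the single-cluster update reduces to the following sketch (Python for illustration, units J = k_B = 1; this is the standard form of the algorithm, not the authors' code): grow a cluster of aligned spins, adding each aligned neighbour with probability p = 1 − e^{−2β}, then flip the whole cluster.

```python
import math, random

def wolff_update(spins, L, beta, rng):
    """One single-cluster update for the 2D Ising model on a periodic
    L x L lattice (spins: flat list of +-1 values, site k = x + y*L).
    Returns the size of the flipped cluster."""
    p_add = 1.0 - math.exp(-2.0 * beta)      # bond-activation probability
    seed_site = rng.randrange(L * L)
    s0 = spins[seed_site]
    cluster = {seed_site}
    stack = [seed_site]
    while stack:                             # depth-first cluster growth
        k = stack.pop()
        x, y = k % L, k // L
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            m = (nx % L) + (ny % L) * L      # periodic boundaries
            if m not in cluster and spins[m] == s0 and rng.random() < p_add:
                cluster.add(m)
                stack.append(m)
    for k in cluster:                        # flip the whole cluster
        spins[k] = -spins[k]
    return len(cluster)
```

Near the critical coupling clusters of all sizes occur, which is how the update defeats critical slowing down; detailed balance holds because the same p_add governs cluster growth before and after the flip.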

Relying on the recently proposed multicanonical algorithm, we present a numerical simulation of the first order phase transition in the 2d 10-state Potts model on lattices up to sizes $100\times100$. It is demonstrated that the new algorithm lacks an exponentially fast increase of the tunneling time between metastable states as a function of the linear size $L$ of the system. Instead, the tunneling time diverges approximately proportional to $L^{2.65}$. Thus the computational effort as counted per degree of freedom for generating an independent configuration in the unstable region of the model rises proportional to $V^{2.3}$, where $V$ is the volume of the system. On our largest lattice we gain more than two orders of magnitude as compared to a standard heat bath algorithm. As a first physical application we report a high precision computation of the interfacial tension.
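
The multicanonical idea is to replace the Boltzmann factor by weights W(E) ∝ 1/n(E), so that the chain performs a random walk covering all energies instead of getting trapped in one phase. A minimal sketch (Python for illustration; the 1D Ising chain with its exactly known density of states is a toy stand-in for the Potts model of the paper, where n(E) must instead be estimated recursively):

```python
import math, random

def muca_chain(N=8, sweeps=4000, seed=2):
    """Multicanonical sketch on a free-boundary 1D Ising chain of N
    spins: sample with weights W(E) = 1/n(E), where n(E) is here known
    exactly, so the chain visits all energies with roughly equal
    frequency -- the flat histogram that lets multicanonical runs
    tunnel between coexisting phases."""
    rng = random.Random(seed)
    # density of states by number k of broken bonds: n(k) = 2*C(N-1, k)
    lnW = [-math.log(2 * math.comb(N - 1, k)) for k in range(N)]

    spins = [1] * N

    def broken(s):
        return sum(1 for i in range(N - 1) if s[i] != s[i + 1])

    k = broken(spins)
    hist = [0] * N
    for _ in range(sweeps * N):
        i = rng.randrange(N)
        spins[i] = -spins[i]                 # propose a single spin flip
        k_new = broken(spins)
        dl = lnW[k_new] - lnW[k]
        if dl >= 0.0 or rng.random() < math.exp(dl):
            k = k_new                        # accept
        else:
            spins[i] = -spins[i]             # reject: undo the flip
        hist[k] += 1
    return hist
```

Since the stationary distribution is P(k) ∝ n(k)·W(k) = const, the resulting energy histogram is approximately flat; canonical averages at any temperature are then recovered by reweighting with exp(-βE)/W(E).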

The low-temperature series expansion for the partition function of the two-dimensional Ising model on a square lattice can be determined exactly for finite lattices using Kaufman's generalization of Onsager's solution. The exact distribution function for the energy can then be determined from the coefficients of the partition function. This provides an exact solution with which one can compare energy histograms determined in Monte Carlo simulations. This solution should prove useful for detailed studies of statistical and systematic errors in histogram reweighting.
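
For very small lattices the same exact energy distribution can be obtained by brute-force enumeration instead of the analytic finite-lattice solution. The sketch below (Python for illustration; enumeration is my stand-in, not Kaufman's formula) sums over all 2^(L·L) states of the periodic Ising model:

```python
from math import exp
from collections import Counter

def exact_energy_distribution(L=4, beta=0.4):
    """Exact canonical energy distribution of the periodic L x L Ising
    model, obtained by brute-force enumeration of all 2^(L*L) states
    (feasible for L=4).  The paper instead derives the distribution
    analytically from Kaufman's finite-lattice partition function."""
    n = L * L
    dos = Counter()                          # density of states g(E)
    for c in range(1 << n):
        s = [1 if (c >> i) & 1 else -1 for i in range(n)]
        e = 0
        for i in range(n):                   # each bond counted once
            x, y = i % L, i // L
            e -= s[i] * (s[(x + 1) % L + y * L] + s[x + ((y + 1) % L) * L])
        dos[e] += 1
    z = sum(g * exp(-beta * e) for e, g in dos.items())
    return {e: g * exp(-beta * e) / z for e, g in sorted(dos.items())}
```

Comparing such an exact distribution with Monte Carlo energy histograms is precisely the kind of statistical-error study the abstract has in mind, only restricted here to lattices small enough to enumerate.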

The problem of calculating multicanonical parameters recursively is discussed. I describe in detail a computational implementation which has worked reasonably well in practice.

For the Edwards-Anderson Ising spin-glass model in three and four dimensions (3d and 4d) we have performed high statistics Monte Carlo calculations of those free-energy barriers $F^q_B$ which are visible in the probability density $P_J(q)$ of the Parisi overlap parameter $q$. The calculations rely on the recently introduced multi-overlap algorithm. In both dimensions, within the limits of lattice sizes investigated, these barriers are found to be non-self-averaging and the same is true for the autocorrelation times of our algorithm. Further, we present evidence that barriers hidden in $q$ dominate the canonical autocorrelation times.

The purpose of this article is to provide a starter kit for multicanonical simulations in statistical physics. Fortran code for the $q$-state Potts model in $d=2, 3,...$ dimensions can be downloaded from the Web and this paper describes simulation results, which are in all details reproducible by running prepared programs. To allow for comparison with exact results, the internal energy, the specific heat, the free energy and the entropy are calculated for the $d=2$ Ising ($q=2$) and the $q=10$ Potts model in a temperature range from $T=\infty$ down to sufficiently low temperatures, such that the ground states are included in the sampling. Analysis programs, relying on an all-log jackknife technique, which is suitable for handling sums of very large numbers, are introduced to calculate our final estimators.
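
The point of an all-log analysis is to manipulate sums of very large numbers entirely through their logarithms. A minimal sketch of the idea (Python for illustration; the interface and helper names are my assumptions, not the article's Fortran routines): jackknife the log of a sample average given only log(x_i), using log-sum tricks so the x_i are never formed.

```python
import math

def log_sum(la, lb):
    """log(exp(la) + exp(lb)) computed without overflow."""
    if la < lb:
        la, lb = lb, la
    return la + math.log1p(math.exp(lb - la))

def jackknife_log_mean(log_terms):
    """All-log jackknife sketch: given log(x_i) for very large x_i,
    return the jackknife mean and error of log of the sample average,
    never forming the x_i themselves.  Assumes n >= 2 and that no
    single term carries essentially all of the sum."""
    n = len(log_terms)
    log_total = log_terms[0]
    for lt in log_terms[1:]:
        log_total = log_sum(log_total, lt)
    jk = []
    for lt in log_terms:
        # log(sum - x_i) = log_total + log(1 - exp(lt - log_total))
        log_rest = log_total + math.log1p(-math.exp(lt - log_total))
        jk.append(log_rest - math.log(n - 1))   # leave-one-out average
    mean = sum(jk) / n
    err = math.sqrt((n - 1) / n * sum((v - mean) ** 2 for v in jk))
    return mean, err
```

This handles terms like exp(1000) that would overflow any floating-point format, which is exactly the situation for partition-function sums in multicanonical analyses.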

B.A. Berg, Markov Chain Monte Carlo Simulations and Their Statistical Analysis, World Scientific, Singapore, 2004. Information on the web at http://www.hep.fsu.edu/~berg.

T. Neuhaus and J.S. Hager, 2d Crystal Shapes, Droplet Condensation and Supercritical Slowing Down in Simulations of First Order Phase Transitions, J. Stat. Phys. 113 (2003), 47–83.

B.A. Berg, Multicanonical Recursions, J. Stat. Phys. 82 (1996), 323–342.

B.A. Berg, Multicanonical Simulations Step by Step, Comp. Phys. Commun. 153 (2003), 397–406.