Article

Markov Chain Monte Carlo Simulations and Their Statistical Analysis: With Web-Based Fortran Code

Authors: Bernd A. Berg

Abstract

Sampling, Statistics and Computer Code; Error Analysis for Independent Random Variables; Markov Chain Monte Carlo; Error Analysis for Markov Chain Data; Advanced Monte Carlo; Parallel Computing; Conclusions, History and Outlook.


... As for computational simulations, two simulation models (a 2-state model and a 6-state model), which satisfied the ice rules in the ground state, were proposed, and the residual entropy was estimated [6][7][8][9] by the Multicanonical (MUCA) Monte Carlo (MC) method [10,11] (for reviews, see, e.g., [12,13]). After these simulation models were suggested, many research groups estimated the residual entropy by various computational approaches over the last decade (see, e.g., [14][15][16][17][18]). ...
... A brief explanation of MUCA [10][11][12][13] is given here. The multicanonical probability distribution of the potential energy, P_MUCA(E), is defined by ...
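The defining equation is truncated in the snippet above. As an orientation aid only, and not necessarily the exact expression used by the citing authors, the standard multicanonical weight can be written as

```latex
P_{\mathrm{MUCA}}(E) \;\propto\; n(E)\, w_{\mathrm{MUCA}}(E) = \text{const},
\qquad
w_{\mathrm{MUCA}}(E) = \frac{1}{n(E)} = e^{-S(E)},
```

where n(E) is the density of states and S(E) the microcanonical entropy, so that all energies are visited with approximately equal probability.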
... The Wang-Landau (WL) algorithm [28,29] also uses 1/g(E) as the weight factor, and the Metropolis criterion is the same as in Eq. (13). However, g(E) is updated dynamically as g(E) → f × g(E) each time the simulation visits the energy value E, where f is a modification factor. ...
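Since the snippet describes the Wang-Landau update only in words, a minimal sketch may help. The functions `energy`, `propose`, and `is_flat` below are hypothetical placeholders, and this is not the citing authors' code; it only illustrates the usual scheme of working with ln g(E) and shrinking the modification factor.

```python
import math
import random

def wang_landau(state, energy, propose, energy_levels,
                ln_f=1.0, ln_f_min=1e-8, flat_tol=0.2):
    """Sketch of a Wang-Landau run; ln g(E) is raised by ln_f at every visit."""
    ln_g = {E: 0.0 for E in energy_levels}   # g(E) -> f*g(E) becomes addition in log space
    hist = {E: 0 for E in energy_levels}
    E = energy(state)
    while ln_f > ln_f_min:
        trial = propose(state)
        E_trial = energy(trial)
        # Metropolis criterion with weight 1/g(E): accept with min(1, g(E)/g(E_trial))
        if math.log(random.random()) < ln_g[E] - ln_g[E_trial]:
            state, E = trial, E_trial
        ln_g[E] += ln_f                       # dynamic update g(E) -> f * g(E)
        hist[E] += 1
        if is_flat(hist, flat_tol):           # flat-histogram check (placeholder criterion)
            hist = {k: 0 for k in hist}
            ln_f *= 0.5                       # reduce the modification factor
    return ln_g

def is_flat(hist, tol):
    counts = list(hist.values())
    mean = sum(counts) / len(counts)
    return mean > 0 and min(counts) > (1 - tol) * mean
```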
Preprint
We estimated the residual entropy of ice Ih by a recently developed simulation protocol, namely, the combination of the Replica-Exchange Wang-Landau algorithm and the Multicanonical Replica-Exchange Method. We employed a model with nearest-neighbor interactions on the three-dimensional hexagonal lattice, which satisfied the ice rules in the ground state. Our estimate of the residual entropy lies within 0.038% of the series-expansion estimate by Nagle and within 0.000077% of the PEPS-algorithm estimate by Vanderstraeten. In this article, we not only give our latest estimate of the residual entropy of ice Ih but also discuss the importance of the uniformity of a random number generator in MC simulations.
... We performed Monte Carlo simulations for the two-dimensional extended q-state clock model without an external magnetic field on a 4 × 4 lattice. The updating was performed with a version of the heatbath algorithm from Ref. [3], modified appropriately to handle noninteger values of q. In this initial round we mainly focused on scanning the parameter space and did not pursue larger lattices with Monte Carlo, since the local updating algorithms suffer significant slowing down in the ordered phase at large values of the inverse temperature β. ...
... The measurements were performed once every 2^8 sweeps, giving us a time series of length 2^22 for averaging and error analysis. For error propagation we used the jackknife method with 2^6 jackknife bins, as described in Ref. [3]. ...
... For an observable O, an estimator of the integrated autocorrelation time is given by [3] ...
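The estimator itself is truncated in the snippet. As a reminder, and in one common convention (the reference cited in the snippet may normalize it differently, e.g., as 1 + 2∑ρ(t)), the integrated autocorrelation time of an observable O is

```latex
\tau_{\mathrm{int}} \;=\; \frac{1}{2} \;+\; \sum_{t=1}^{\infty} \rho(t),
\qquad
\rho(t) \;=\; \frac{\langle O_i\, O_{i+t}\rangle - \langle O\rangle^2}{\langle O^2\rangle - \langle O\rangle^2},
```

where the sum is truncated in practice at a window beyond which ρ(t) is compatible with zero.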
Preprint
Full-text available
The $q$-state clock model is a classical spin model that corresponds to the Ising model when $q=2$ and to the $XY$ model when $q\to\infty$. The integer-$q$ clock model has been studied extensively and has been shown to have a single phase transition when $q=2$, $3$, $4$ and two phase transitions when $q>4$. We define an extended $q$-state clock model that reduces to the ordinary $q$-state clock model when $q$ is an integer and otherwise is a continuous interpolation of the clock model to noninteger $q$. We investigate this class of clock models in 2D using Monte Carlo (MC) and tensor renormalization group (TRG) methods, and we find that the model with noninteger $q$ has a crossover and a second-order phase transition. We also define an extended-$O(2)$ model (with a parameter $\gamma$) that reduces to the $XY$ model when $\gamma=0$ and to the extended $q$-state clock model when $\gamma\to\infty$, and we begin to outline the phase diagram of this model. These models with noninteger $q$ serve as a testbed to study symmetry breaking in situations corresponding to quantum simulators where experimental parameters can be tuned continuously.
... In recent years, research has widely focused on the computational aspects of the Bayesian inverse method [26]. The brute force approach is represented by the Markov Chain Monte Carlo (MCMC) method [27][28][29][30]. This approach has the advantage of being model-independent, but it requires a huge number of model simulations. ...
... To overcome this limitation, various approximations have been developed. Among others, the Ensemble Kalman Filter (EnKF) keeps the form of the filter given in (14), but computes the covariances from MC samples of Q, U, and the error E [30,55]. To avoid the sampling procedure required by the EnKF, one may resort again to a functional approximation of the random variables; in this light, the linear Bayesian procedure is reduced to a simple algebraic method [51]. ...
Article
Full-text available
In civil and mechanical engineering, Bayesian inverse methods may serve to calibrate the uncertain input parameters of a structural model given the measurements of the outputs. Through such a Bayesian framework, a probabilistic description of parameters to be calibrated can be obtained; this approach is more informative than a deterministic local minimum point derived from a classical optimization problem. In addition, building a response surface surrogate model could allow one to overcome computational difficulties. Here, the general polynomial chaos expansion (gPCE) theory is adopted with this objective in mind. Owing to the fact that the ability of these methods to identify uncertain inputs depends on several factors linked to the model under investigation, as well as the experiment carried out, the understanding of results is not univocal, often leading to doubtful conclusions. In this paper, the performances and the limitations of three gPCE-based stochastic inverse methods are compared: the Markov Chain Monte Carlo (MCMC), the polynomial chaos expansion-based Kalman Filter (PCE-KF) and a method based on the minimum mean square error (MMSE). Each method is tested on a benchmark comprised of seven models: four analytical abstract models, a one-dimensional static model, a one-dimensional dynamic model and a finite element (FE) model. The benchmark allows the exploration of relevant aspects of problems usually encountered in civil, bridge and infrastructure engineering, highlighting how the degree of non-linearity of the model, the magnitude of the prior uncertainties, the number of random variables characterizing the model, the information content of measurements and the measurement error affect the performance of Bayesian updating. The intention of this paper is to highlight the capabilities and limitations of each method, as well as to promote their critical application to complex case studies in the wider field of smarter and more informed infrastructure systems.
... For example, we can recommend the jackknife method as an easy and reliable approach [128]. To estimate uncertainties, the measured data set is divided into N_jk blocks of (approximately) equal size. Often 100 blocks are chosen; that is, if we perform one million hops, each of the blocks contains 10^4 hops. ...
... Prior to applying the jackknife one has to check whether the condition of uncorrelated blocks holds, for example, by measuring autocorrelation times (cf. [128]) or, at least, by inspecting the time series of the corresponding quantity. ...
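As a concrete illustration of the blocked jackknife described in the two snippets above, here is a minimal sketch for a scalar observable. It is my own illustrative code, not the cited reference's; the block count and the synthetic data are arbitrary.

```python
import numpy as np

def jackknife_error(data, n_blocks=100):
    """Blocked jackknife estimate of the mean and its statistical error.

    Assumes the series is long enough that blocks are effectively
    uncorrelated (check autocorrelation times first, as the text advises).
    """
    data = np.asarray(data, dtype=float)
    n = (len(data) // n_blocks) * n_blocks        # drop the remainder so blocks are equal
    blocks = data[:n].reshape(n_blocks, -1).mean(axis=1)
    full_mean = blocks.mean()
    jk_means = (blocks.sum() - blocks) / (n_blocks - 1)   # leave-one-block-out means
    error = np.sqrt((n_blocks - 1) / n_blocks * np.sum((jk_means - full_mean) ** 2))
    return full_mean, error

# Example: one million "hops" split into 100 blocks of 10^4 entries each
rng = np.random.default_rng(0)
print(jackknife_error(rng.normal(size=1_000_000), n_blocks=100))
```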
Article
Full-text available
Device modeling is an established tool to design and optimize organic electronic devices, be they organic light-emitting diodes, organic photovoltaic devices, or organic transistors and thin-film transistors. Further, reliable device simulations form the basis for elaborate experimental characterizations of crucial mechanisms encountered in such devices. The present contribution collects and compares contemporary model approaches to describe charge transport in devices. These approaches comprise kinetic Monte Carlo, the master equation, drift-diffusion, and equivalent circuit analysis. This overview particularly aims at highlighting the following three aspects for each method: i) the foundation of a method including inherent assumptions and capabilities, ii) how the nature of organic semiconductors enters the model, and iii) how major tuning handles required to control the device operation are accounted for, namely temperature, external field, and provision of mobile carriers. As these approaches form a hierarchy of models suitable for multiscale modeling, this contribution also points out less established or even missing links between the approaches. An overview of state-of-the-art approaches for the modeling of organic electronic devices is presented. By inspecting the foundations and inner workings of these methods, it is explained how the nature of transport in organic semiconductors is considered and how important device-relevant factors such as biases, temperature, and charge carrier densities enter these simulations.
... Here, nine algorithms are given. Some of them have Metropolis- and heat-bath-method-type [33][34][35] transition probability terms in their total transition probability. One of them produces trajectories that are statistically equivalent to those obtained using the Gillespie algorithm combined with Szabo's master equation. ...
... The first candidate corresponds to the conventional Metropolis criterion, which is always smaller than unity. The third one corresponds to the heat bath method, [33][34][35] though the factor of 2 is added in our expression to satisfy the diffusion equation. The last two candidates often have values above unity. ...
Article
A series of new Monte Carlo (MC) transition probabilities was investigated that could produce molecular trajectories statistically satisfying the diffusion equation with a position-dependent diffusion coefficient and potential energy. The MC trajectories were compared with the numerical solution of the diffusion equation by calculating the time evolution of the probability distribution and the mean first passage time, which exhibited excellent agreement. The method is powerful when investigating, for example, the long-distance and long-time global transportation of a molecule in heterogeneous systems by coarse-graining them into one-particle diffusive molecular motion with a position-dependent diffusion coefficient and free energy. The method can also be applied to many-particle dynamics.
... With the support of Markov chains, one can now construct algorithms for the approximation of the desired posterior distribution [6,24]. This procedure takes advantage of the fact that it is often much easier to simulate a Markov chain whose stationary distribution is the target distribution than to sample from that distribution directly. ...
... Fig. 6 shows the schematic dependencies of the individual components in such a way that a better overview of the measurement chain for the current signal is achieved. ...
Thesis
Many industrial applications include model parameters for which precise values are hardly available. To better characterize these parameters, deterministic values are replaced by stochastic variables. These can be regarded as parameter uncertainties and potentially have a significant influence on the simulation results. The quantification of such uncertainties plays a crucial role, e.g., for unknown component tolerances or measurement errors. One of the challenges is to gain knowledge about the parameter distribution from experimental data. In this context, Bayesian inference offers an approach to combine numerical simulations with experimental data to obtain a better knowledge of the uncertainties. Many standard methods require a large number of evaluations to achieve high numerical accuracy. This is a significant drawback, especially when the cost of a single forward simulation is very high. Meta models, such as Polynomial Chaos (PC) expansions, can significantly reduce the number of required evaluations. To validate the described methods and algorithms in practice, a test bench was developed in the present work with which a motor characteristic of an electric machine with uncertain physical parameters can be measured. With this test bench, it is possible to define physical reference parameters and to record a corresponding set of measurements. The focus is on the validation of the methods based on real measurements from an industrial application. The numerical results show that the PC approach can significantly reduce the required computing time compared to the original simulation model and thus make the method applicable in practice.
... To quantify the sampling efficiency of the Monte Carlo update, we calculated the integrated autocorrelation time (4) of the order parameter (15) at the transition temperature. We estimated the autocorrelation time from the asymptotic relation τ_int = σ²/(2σ̃²), where σ² and σ̃² are the mean squared errors (the squares of the statistical errors) calculated from the binning analysis and without binning, respectively [34]. For each Markov chain, we ran 2^27 Monte Carlo steps, each consisting of N local spin updates, and discarded the first half for the thermalization (burn-in) process. ...
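A minimal sketch of this binning estimate for a scalar time series (illustrative only, not the citing authors' code):

```python
import numpy as np

def tau_int_from_binning(series, bin_size):
    """Estimate tau_int = sigma^2 / (2 * sigma_tilde^2), where sigma^2 and
    sigma_tilde^2 are the squared errors of the mean with and without binning."""
    x = np.asarray(series, dtype=float)
    n = (len(x) // bin_size) * bin_size
    bins = x[:n].reshape(-1, bin_size).mean(axis=1)
    naive_var_of_mean = x[:n].var(ddof=1) / n            # ignores autocorrelations
    binned_var_of_mean = bins.var(ddof=1) / len(bins)    # correlations absorbed into bins
    return binned_var_of_mean / (2.0 * naive_var_of_mean)

# The bin size must be much larger than the autocorrelation time for the
# ratio to plateau; in practice one checks a range of bin sizes.
```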
Preprint
The choice of the transition kernel essentially determines the performance of the Markov chain Monte Carlo method. Despite the importance of kernel choice, guiding principles for optimal kernels have not been established. We here propose a rejection-controlling one-parameter transition kernel that can be applied to various Monte Carlo samplings and demonstrate that the rejection process plays a major role in determining the sampling efficiency. Varying the rejection probability, we study the autocorrelation time of the order parameter in the two- and three-dimensional ferromagnetic Potts models at the transition temperature. As the rejection rate is reduced, the autocorrelation time decreases exponentially in the sequential spin update and algebraically in the random spin update. The autocorrelation times of conventional algorithms almost fall on a single curve as a function of the rejection rate. The present transition kernel with an optimal parameter provides one of the most efficient samplers for general cases of discrete variables.
... The total χ^2 we use in the following is of course defined as the sum of the contributions from each probe. To minimize the χ^2 we use our own implementation of a Monte Carlo Markov Chain (MCMC) [103][104][105] and we test its convergence using the method of [106]. We also try to establish a statistical hierarchy or preference among our models by using the Bayesian Evidence, E, calculated using the nested sampling algorithm described in [107]. ...
Article
Full-text available
We investigate entropic force cosmological models with the possibility of matter creation and energy exchange between the bulk and the horizon of a homogeneous and isotropic flat Universe. We consider three different kinds of entropy, Bekenstein’s, the non-extensive Tsallis–Cirto’s, and the quartic entropy, plus some phenomenological functional forms for the matter creation rate to model different entropic force models and put the observational constraints on them. We show that while most of them are indistinguishable from a standard $\Lambda$CDM scenario, the Bekenstein entropic force model with a matter creation rate proportional to the Hubble parameter is statistically highly favored over $\Lambda$CDM. As a general result, we also find that both the Hawking temperature parameter $\gamma$, which relates the energy exchange between the bulk and the boundary of the Universe, and the matter creation rate $\Gamma(t)$ must be very small to reproduce observational data.
... In this work, we use the most recent estimate of the critical temperature, T_c/J = 2.19879(2), which was obtained in Ref. [9] through a sophisticated finite-size-scaling analysis of the Binder cumulant associated with the magnetization. In our Monte Carlo simulations, Markov chains of vector-field configurations are generated by a combination of local heat-bath [10][11][12] and overrelaxation [13] updates. For a subset of our runs, we also use non-local, single-cluster updates [14]. ...
Article
Full-text available
Abstract We present a high-precision Monte Carlo study of the classical Heisenberg model in four dimensions. We investigate the properties of monopole-like topological excitations that are enforced in the broken-symmetry phase by imposing suitable boundary conditions. We show that the corresponding magnetization and energy-density profiles are accurately predicted by previous analytical calculations derived in quantum field theory, while the scaling of the low-energy parameters of this description questions an interpretation in terms of particle excitations. We discuss the relevance of these findings and their possible experimental applications in condensed-matter physics.
... We performed 5 × 10^5 MC steps for each spin configuration and we used 10^5 MC steps per site for equilibration of the system. Error bars are calculated with the Jackknife method [43]. ...
Article
The effects of the RKKY interaction and the influence of the four-spin interaction J4 on the multi-layer transition and magnetic properties of a spin-1/2 Ashkin-Teller model of a ferromagnetic system shaped by two magnetic multi-layer materials, of different thicknesses, separated by a non-magnetic spacer of thickness M, are examined using Monte Carlo simulations. The system presents a new partially ordered phase <σS> for various values of J4/J2 and exhibits behavior typical of a second-order phase transition. It is found that the transition temperatures of the two blocks are strongly dependent on the thickness of the magnetic layers N and the four-spin coupling J4/J2, while they do not depend on the non-magnetic spacer and the Fermi level k_f. Finally, we have obtained a critical thickness ML of the non-magnetic spacer above which the multi-layer blocks undergo a transition separately at different temperatures, for J4/J2=3 and k_f=0.2.
... This effect is clearly visible in Fig. 2, especially in the critical region. Also, sequential updating, while (or because) it violates detailed balance and only fulfills the necessary condition of balance, in general leads to faster decorrelation [43,44]. Surprisingly at first, however, the sequential Metropolis update does not work well at high temperatures. ...
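To make the distinction between the two update orders concrete, here is a sketch for a one-dimensional Ising chain with single-spin Metropolis moves (illustrative only; the helper names and parameters are mine, not taken from the cited work):

```python
import math
import random

def metropolis_flip(spins, i, beta, J=1.0):
    """Attempt a single-spin Metropolis flip at site i (periodic boundaries)."""
    n = len(spins)
    dE = 2.0 * J * spins[i] * (spins[(i - 1) % n] + spins[(i + 1) % n])
    if dE <= 0 or random.random() < math.exp(-beta * dE):
        spins[i] = -spins[i]

def sweep_sequential(spins, beta):
    # Fixed site order: violates detailed balance but satisfies global balance.
    for i in range(len(spins)):
        metropolis_flip(spins, i, beta)

def sweep_random(spins, beta):
    # Random site choice: each elementary update satisfies detailed balance.
    n = len(spins)
    for _ in range(n):
        metropolis_flip(spins, random.randrange(n), beta)
```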
Preprint
Full-text available
Population annealing is a recent addition to the arsenal of the practitioner in computer simulations in statistical physics and beyond that is found to deal well with systems with complex free-energy landscapes. Above all else, it promises to deliver unrivaled parallel scaling qualities, being suitable for parallel machines of the biggest calibre. Here we study population annealing using as the main example the two-dimensional Ising model which allows for particularly clean comparisons due to the available exact results and the wealth of published simulational studies employing other approaches. We analyze in depth the accuracy and precision of the method, highlighting its relation to older techniques such as simulated annealing and thermodynamic integration. We introduce intrinsic approaches for the analysis of statistical and systematic errors, and provide a detailed picture of the dependence of such errors on the simulation parameters. The results are benchmarked against canonical and parallel tempering simulations.
... In order to quantitatively characterize the gain achieved with parallel tempering and optimize its efficiency, it is useful to study the autocorrelation time of the topological susceptibility. We use as definition of the autocorrelation time for the generic observable O the expression [71] ...
Article
Full-text available
Abstract We simulate 4d SU(N) pure-gauge theories at large N using a parallel tempering scheme that combines simulations with open and periodic boundary conditions, implementing the algorithm originally proposed by Martin Hasenbusch for 2d CP^(N-1) models. This allows us to dramatically suppress the topological freezing suffered by standard local algorithms, reducing the autocorrelation time of Q^2 by up to two orders of magnitude. Using this algorithm in combination with simulations at non-zero imaginary θ, we are able to refine state-of-the-art results for the large-N behavior of the quartic coefficient of the θ-dependence of the vacuum energy, b_2, reaching an accuracy comparable with that of the large-N limit of the topological susceptibility.
... In particle systems with central potentials, sequential MCMC is found to be slightly more efficient than its collapsed reversible counterpart [76,44,53]. Likewise, in the Ising model and related systems, the analogous spin sweeps (updating spin i + 1 after spin i, etc.) again appear marginally faster than the random sampling of spin indices [5]. Sequential MCMC is but a slightly changed variant of a reversible Markov chain, yet its particle lifting, the deliberative choice of the active particle, is one of the key features of ECMC. ...
Preprint
Full-text available
This review treats the mathematical and algorithmic foundations of non-reversible Markov chains in the context of event-chain Monte Carlo (ECMC), a continuous-time lifted Markov chain that employs the factorized Metropolis algorithm. It analyzes a number of model applications, and then reviews the formulation as well as the performance of ECMC in key models in statistical physics. Finally, the review reports on an ongoing initiative to apply the method to the sampling problem in molecular simulation, that is, to real-world models of peptides, proteins, and polymers in aqueous solution.
... In order to test the predictions of the GUP-modified cosmological models given the observational data, we use our own implementation of a Monte Carlo Markov Chain (MCMC) [59][60][61] to minimise the total χ^2. We test its convergence using the method developed in [62]. ...
Article
Full-text available
The Generalized Uncertainty Principle (GUP) has emerged in numerous attempts at a theory of quantum gravity and predicts the existence of a minimum length in Nature. In this work, we consider two cosmological models arising from Friedmann equations modified by the GUP (in its linear and quadratic formulations) and compare them with observational data. Our aim is to derive constraints on the GUP parameter and discuss the viability and physical implications of such models. We find for the parameter in the quadratic formulation the constraint $\alpha^{2}_{Q}<10^{59}$ (tighter than most of those obtained in an astrophysical context), while the linear formulation does not appear compatible with present cosmological data. Our analysis highlights the powerful role of high-precision cosmological probes in the realm of quantum gravity phenomenology.
... The cover meter is used to measure the depth of the concrete cover. The chloride ion penetration method measures the chloride content in the concrete, which is the same quantity that the measurement equation (1) predicts [13,14]. ...
... MCMC [35,36] is applied in this work to obtain a sample of supply failure states used in the IS optimization. ...
Article
This paper presents a new methodology for Monte Carlo-based multi-area reliability assessment based on optimal Importance Sampling (IS), Markov Chain Monte Carlo (MCMC) and optimal stratification. The proposed methodology allows the representation of relevant features of variable renewable energy (VRE) resources such as intermittency, daily and seasonal variation and spatial correlation with other sources (e.g. correlation between wind and hydro). The effectiveness of the proposed methodology is illustrated with case studies based on the Saudi Arabia and Chilean systems, with speedups of 300 and 800 times compared with the standard Monte-Carlo simulation.
... In order to quantitatively characterize the gain achieved with parallel tempering and optimize its efficiency, it is useful to study the autocorrelation time of the topological susceptibility. We use as definition of the autocorrelation time for the generic observable O the expression [71] ... The last column refers to the total statistics accumulated for all imaginary-θ simulations. The defect length was, in all cases, L_d/a = 2. ...
Preprint
We simulate $4d$ $SU(N)$ pure-gauge theories at large $N$ using a parallel tempering scheme that combines simulations with open and periodic boundary conditions, implementing the algorithm originally proposed by Martin Hasenbusch for $2d$ $CP^{N-1}$ models. This allows us to dramatically suppress the topological freezing suffered by standard local algorithms, reducing the autocorrelation time of $Q^2$ by up to two orders of magnitude. Using this algorithm in combination with simulations at non-zero imaginary $\theta$ we are able to refine state-of-the-art results for the large-$N$ behavior of the quartic coefficient of the $\theta$-dependence of the vacuum energy $b_2$, reaching an accuracy comparable with that of the large-$N$ limit of the topological susceptibility.
... The integrals required to compute posterior distributions in the Bayesian approach are often too difficult or impossible to solve analytically. In these cases, approaches are available that construct a Markov chain whose convergence properties yield the posterior distribution (Berg 2005). Thus, using Markov chain derivation and simulation techniques, approaches called chain data replication were developed. ...
... The total χ^2 we use in the following is of course defined as the sum of the contributions from each probe. In order to minimize the χ^2 we use our own implementation of a Monte Carlo Markov Chain (MCMC) [100][101][102] and we test its convergence using the method of [103]. We also try to establish a statistical hierarchy or preference among our models by using the Bayesian Evidence, E, calculated using the nested sampling algorithm described in [104]. ...
Preprint
Full-text available
We investigate entropic force cosmological models with the possibility of matter creation and energy exchange between the bulk and the horizon of a homogeneous and isotropic flat Universe. We consider three different kinds of entropy, Bekenstein's, the non-extensive Tsallis-Cirto's and the quartic entropy, plus some phenomenological functional forms for the matter creation rate to model different entropic force models and put the observational constraints on them. We show that while most of them are basically indistinguishable from a standard $\Lambda$CDM scenario, the Bekenstein entropic force model with a matter creation rate proportional to the Hubble parameter is statistically highly favored over $\Lambda$CDM. As a general result, we also find that both the Hawking temperature parameter $\gamma$, which relates the energy exchange between the bulk and the boundary of the Universe, and the matter creation rate $\Gamma(t)$, must be very small in order to reproduce observational data.
... A normal distribution is selected as the proposal distribution. This is necessary when a full conditional distribution is not of a recognizable form, even though it will slow down the MCMC procedure (Berg 2004). ...
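A minimal sketch of a Metropolis step with a normal (random-walk) proposal, as used when a full conditional is not of recognizable form. The target `log_post` and the step size are placeholders, and the code is illustrative, not the paper's model:

```python
import math
import random

def metropolis_step(theta, log_post, step=0.5):
    """One random-walk Metropolis update of a scalar parameter theta.

    log_post: function returning the log posterior density up to a constant."""
    proposal = theta + random.gauss(0.0, step)      # normal proposal distribution
    log_alpha = log_post(proposal) - log_post(theta)
    if math.log(random.random()) < log_alpha:
        return proposal                             # accept
    return theta                                    # reject: keep the current value

# Example with a standard-normal target posterior
chain, theta = [], 0.0
for _ in range(10_000):
    theta = metropolis_step(theta, lambda x: -0.5 * x * x)
    chain.append(theta)
```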
Article
Full-text available
Air pollution has been an environmental problem exerting serious impact on human health. An accurate prediction of air pollutant concentrations in space and time is essential to the mitigation and minimization of exposure to air pollution, particularly in cities. To take advantage of data collected at monitoring stations and the effect of secondary information such as meteorological factors, this paper proposes a flexible hierarchical Bayesian model (HBM) which can predict air pollutant concentrations in space and time. Within the framework, spatial temporal Kriging (STK) is employed for interpolation and the obtained results are used as priors. Because of the undesirable performance of the STK, the likelihood in the form of a nonlinear regression (NLR) with skewed normal residuals is employed to take into account the effect of meteorological factors to derive the posterior distribution of air pollutant concentrations. Due to high dimensionality and complexity, the formulation of the posterior distribution is non-analytic. We thus need to draw samples from the estimated parameters of the posterior distribution with Markov chain Monte Carlo (MCMC) method, and approximate the population characteristics with the sample characteristics. We evaluate the HBM with the concentrations of air pollutants and the meteorological variables from Northern China. For all pollutants, in the cross-validation (CV) experiments, the HBM reduces the root mean square error (RMSE) of NLR and STK by at least 0.08 and 0.4, and \(R^{2}\) by at least 0.02 and 0.4. The empirical results show that the proposed HBM generally outperforms the NLR and STK in terms of efficiency, accuracy and robustness. The proposed framework is general enough for the analysis of spatial-temporal data of all kinds.
... This effect is clearly visible in Fig. 2, especially in the critical region. Also, sequential updating, while (or because) it violates detailed balance and only fulfills the necessary condition of balance, in general leads to faster decorrelation [43,44]. Surprisingly at first, however, the sequential Metropolis update does not work well at high temperatures. ...
Article
Full-text available
Population annealing is a recent addition to the arsenal of the practitioner in computer simulations in statistical physics and it proves to deal well with systems with complex free-energy landscapes. Above all else, it promises to deliver unrivaled parallel scaling qualities, being suitable for parallel machines of the biggest caliber. Here we study population annealing using as the main example the two-dimensional Ising model, which allows for particularly clean comparisons due to the available exact results and the wealth of published simulational studies employing other approaches. We analyze in depth the accuracy and precision of the method, highlighting its relation to older techniques such as simulated annealing and thermodynamic integration. We introduce intrinsic approaches for the analysis of statistical and systematic errors and provide a detailed picture of the dependence of such errors on the simulation parameters. The results are benchmarked against canonical and parallel tempering simulations.
... Monte Carlo simulations are used to model financial systems, to simulate telecommunication networks, to compute results for high-dimensional integrals, and to approximate quantities such as constants or numerical integrals [7,8]. The Monte Carlo technique is also used in modeling a wide range of physical systems at the forefront of scientific research today, based on a run of random numbers [9]. ...
Article
Full-text available
The number Pi can be defined as the ratio of the circumference of any circle to the diameter of that circle. Regardless of the circle's size, this ratio is the same for all circles. Sometimes it is approximated as 22/7. In decimal form, the value of Pi is approximately 3.14. These approximations introduce an error in precise calculations because Pi is actually an irrational number: its decimal representation is non-repeating and non-terminating. In this paper, an effort has been made to estimate the value of Pi using Monte Carlo simulations.
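For reference, the standard hit-or-miss estimate of Pi described in the abstract can be written in a few lines. This is a generic sketch, not the paper's code:

```python
import random

def estimate_pi(n_samples=1_000_000, seed=0):
    """Estimate Pi from the fraction of uniform points in the unit square
    that fall inside the quarter circle of radius 1."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return 4.0 * hits / n_samples

print(estimate_pi())   # ~3.14, with a statistical error of order 1/sqrt(n_samples)
```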
... We use the most recent estimate of the critical temperature, as determined in [11]. In our Monte Carlo simulations, Markov chains of vector-field configurations are generated by a combination of local heat-bath [12][13][14] and overrelaxation updates [15]. In the high-temperature phase, periodic boundary conditions are assumed. ...
Preprint
Full-text available
We present a high-precision Monte Carlo study of the O(3) spin theory on the lattice in four dimensions. This model exhibits interesting dynamical features, in particular in the broken-symmetry phase, where suitable boundary conditions can be used to enforce monopole-like topological excitations. We investigate the Euclidean time propagation and the features of these excitations close to the critical point, where our numerical results show an excellent quantitative agreement with analytic predictions derived from purely quantum-field-theoretical tools by G. Delfino. We conclude by commenting on the implications of our findings for a conjectured violation of Derrick's theorem at the quantum level and on the consequences in various areas of physics, ranging from condensed matter to astro-particle physics.
... In particle systems with central potentials, particle-sweep liftings lead to slight speedups [27][28][29]. Likewise, in the Ising model and related systems, the analogous spin sweeps (updating spin i + 1 after spin i, etc.) again appear marginally faster than the random sampling of spin indices [30]. Particle sweeps appear as a minimal variant of reversible Markov chains, yet their deliberative choice of the active particle, common to all particle liftings, is a harbinger of ECMC. ...
Article
Full-text available
This review treats the mathematical and algorithmic foundations of non-reversible Markov chains in the context of event-chain Monte Carlo (ECMC), a continuous-time lifted Markov chain that employs the factorized Metropolis algorithm. It analyzes a number of model applications and then reviews the formulation as well as the performance of ECMC in key models in statistical physics. Finally, the review reports on an ongoing initiative to apply ECMC to the sampling problem in molecular simulation, i.e., to real-world models of peptides, proteins, and polymers in aqueous solution.
... Thus the FGLS estimator of the regression coefficients is given as: ... The Bayesian approach: several authors have worked on model estimation using the Bayesian approach [13][14][15][16][17]. Bayesian linear regression is an approach to linear regression in which the statistical analysis is undertaken within the context of Bayesian inference. It is commonly recommended as a means of dealing with multicollinearity. ...
Article
Full-text available
This paper addressed the popular issue of collinearity among explanatory variables in the context of a multiple linear regression analysis, and the parameter estimations of both the classical and the Bayesian methods. Five sample sizes (10, 25, 50, 100 and 500), each replicated 10,000 times, were simulated using the Monte Carlo method. Four levels of correlation, ρ=0.0, 0.1, 0.5 and 0.9, representing no correlation, weak correlation, moderate correlation and strong correlation, were considered. The estimation techniques considered were Ordinary Least Squares (OLS), Feasible Generalized Least Squares (FGLS) and Bayesian methods. The performances of the estimators were evaluated using the Absolute Bias (ABIAS) and Mean Square Error (MSE) of the estimates. In all cases considered, the Bayesian estimators had the best performance. They were consistently more efficient than the other estimators, namely OLS and FGLS.
... The Monte Carlo method is a general, versatile tool to solve an extremely wide range of problems in science, including condensed-matter and high-energy physics [1][2][3][4], biology [5], and machine learning [6]. The progress of modern computers allows us to perform Monte Carlo calculations of statistical problems with millions of degrees of freedom within a reasonable amount of time and estimate various quantities with great precision. ...
Preprint
Full-text available
In Markov-chain Monte Carlo simulations, estimating statistical errors or confidence intervals of numerically obtained values is an essential task. In this paper, we review several methods for error estimation, such as simple empirical estimation with multiple independent runs, the blocking method, and the stationary bootstrap method. We then study their performance when applied to an actual Monte Carlo time series. We find that the stationary bootstrap method gives a reasonable and stable estimation for any quantity using only a single time series. In contrast, the simple estimation with few independent runs can be demonstrably erroneous. We further discuss the potential use of the stationary bootstrap method in numerical simulations.
... Because of the high dimensionality of the pMSSM and the large range of parameter values considered in the scan (Table 1), the parameter space covered by this scan is extremely large. A Markov chain Monte Carlo (McMC) algorithm [24][25][26][27][28] is used to explore the space in an efficient way, guided by a likelihood constructed from existing experimental results as described in Sec. 2.1. ...
Preprint
We present a flexible framework for interpretation of SUSY sensitivity studies for future colliders in terms of the phenomenological minimal supersymmetric standard model (pMSSM). We perform a grand scan of the pMSSM 19-dimensional parameter space that covers the accessible ranges of many collider scenarios, including electron, muon, and hadron colliders at a variety of center of mass energies. This enables comparisons of sensitivity and complementarity across different future experiments, including both colliders and precision measurements in the Cosmological and Rare Frontiers. The details of the scan framework are discussed, and the impact of future precision measurements on Higgs couplings, the anomalous muon magnetic moment, and dark matter quantities is presented. The next steps for this ongoing effort include performing studies with simulated events in order to quantitatively assess the sensitivity of selected future colliders in the context of the pMSSM.
... In past decades, research and applications have focused almost exclusively on reversible Markov chains (and some close relatives, such as sequential schemes [3,4]). Reversible Markov chains are straightforward to set up for arbitrary probability distributions π, and are easy to conceptualize, in particular because of the real-valued eigenvalue ...
Preprint
Full-text available
We benchmark event-chain Monte Carlo (ECMC) algorithms for tethered hard-disk dipoles in two dimensions in view of application of ECMC to water models in molecular simulation. We characterize the rotation dynamics of dipoles through the integrated autocorrelation times of the polarization. The non-reversible straight, reflective, forward, and Newtonian ECMC algorithms are all event-driven, and they differ only in their update rules at event times. They realize considerable speedups with respect to the local reversible Metropolis algorithm. We also find significant speed differences among the ECMC variants. Newtonian ECMC appears particularly well-suited for overcoming the dynamical arrest that has plagued straight ECMC for three-dimensional dipolar models with Coulomb interactions.
... In our research, we have carried out over MC steps for each spin configuration and we applied MC steps per site for equilibration of the system. The error bars are determined using the Jackknife method [75]. ...
Article
Using Monte Carlo simulations based on the Metropolis algorithm, we have investigated the magnetic properties and phase diagrams of the spin-1 Ashkin-Teller model for ferromagnetic thin films. The effects of the crystal field D/J2b and the four-spin couplings (J4s/J2b, J4b/J2b) have been studied in detail. The phase diagrams in the (kBTc/J2b, J2s/J2b) plane exhibit a special point (Rs=J2s/J2b)sp, for different values of D/J2b and (J4s/J2b, J4b/J2b), wherein all film thicknesses have the same critical temperature. In addition, the magnetization profiles present first-order phase transition behavior. We found rich phase diagrams with first- and second-order phase transitions that meet at tricritical points which depend on the film thickness N. Finally, this model presents a new partially ordered phase 〈σS〉, between the paramagnetic phase and the Baxter phase, which are separated by a second-order phase transition, for positive values of D/J2b and for each film thickness N.
... The data to train the NN contain multiple coordinates with the labels as the expected values of the potential function. Metropolis sampling [24] allows efficient evaluations of the energies of the given wavefunctions, in the same way as in quantum Monte Carlo approaches [25][26][27]. A loss function that explicitly involves the energy is proposed to characterize the violation of the Schrödinger equation. ...
Preprint
The hybridization of machine learning and quantum physics has had essential impacts on the methodology in both fields. Inspired by the quantum potential neural network, we here propose to solve for the potential in the Schrödinger equation, provided the eigenstate, by combining Metropolis sampling with a deep neural network, which we dub the Metropolis potential neural network (MPNN). A loss function is proposed to explicitly involve the energy in the optimization for its accurate evaluation. Benchmarking on the harmonic oscillator and the hydrogen atom, MPNN shows excellent accuracy and stability in predicting not just the potential that satisfies the Schrödinger equation, but also the eigen-energy. Our proposal could potentially be applied to ab initio simulations, and to inversely solving other partial differential equations in physics and beyond.
Article
Performing dynamic off-lattice multicanonical Monte Carlo (MuMC) simulations, we study the statics, dynamics and scission-recombination kinetics of a self-assembled, in situ polymerized, polydisperse living polymer brush (LPB), designed by surface-initiated living polymerization. The living brush is initially grown from a two-dimensional substrate by end-monomer polymerization-depolymerization reactions through seeding of initiator arrays on the grafting plane, which come in contact with a solution of non-bonded monomers under good solvent conditions. The polydispersity is shown to significantly deviate from the Flory-Schulz type for low temperatures due to pronounced diffusion-limitation effects on the rate of the equilibration reaction. The self-avoiding chains take up fairly compact structures of typical size R_g(N) ∼ N^ν in the rigorously two-dimensional (d = 2) melt, with ν being the inverse fractal dimension (ν = 1/d). The Kratky description of the intramolecular structure factor F(q), in keeping with the concept of generalized Porod scattering from compact particles with fractal contour, discloses a robust nonmonotonic behavior with q^d F(q) ∼ (qR_g)^{-3/4} in the intermediate-q regime. It is found that the kinetics of LPB growth, given by the variation of the mean chain length, follows a power law ⟨N(t)⟩ ∝ t^{1/3} with elapsed time after the onset of polymerization, whereby the instantaneous molecular weight distribution (MWD) of the chains c(N) retains its functional form. The variation of ⟨N(t)⟩ during quenches of the LPB to different temperatures T can be described by a single master curve in units of dimensionless time t/τ∞, where τ∞ is the typical (final temperature T∞-dependent) relaxation time, which is found to scale as τ∞ ∝ ⟨N(t = ∞)⟩^5 with the ultimate average length of the chains. The equilibrium monomer density profile φ(z) of the LPB varies as φ(z) ∝ φ^{-α} with the concentration of segments φ in the system, and the probability distribution c(N) of chain lengths N in the brush layer scales as c(N) ∝ N^{-τ}. The computed exponents α ≈ 0.64 and τ ≈ 1.70 are in good agreement with those predicted within the context of the Diffusion-Limited Aggregation theory, α = 2/3 and τ = 7/4.
Preprint
The celebrated Kitaev honeycomb model provides an analytically tractable example with an exact quantum spin liquid ground state. While in real materials other types of interactions besides the Kitaev coupling ($K$) are present, such as the Heisenberg ($J$) and symmetric off-diagonal ($\Gamma$) terms, these interactions can also be generalized to a triangular lattice. Here, we carry out a comprehensive study of the $J$-$K$-$\Gamma$ model on the triangular lattice covering the full parameter region, using a combination of exact diagonalization, classical Monte Carlo and analytic methods. In the HK limit ($\Gamma=0$), we find five quantum phases which are quite similar to their classical counterparts. Among them, the stripe-A and dual N\'{e}el phases are robust against the $\Gamma$ term; in particular the stripe-A phase extends to the region connecting $K=-1$ and $K=1$ for $\Gamma<0$. Though the 120$^\circ$ N\'{e}el phase also extends to a finite $\Gamma$, its region has been largely reduced compared to the previous classical result. Interestingly, the ferromagnetic (dubbed FM-A) phase and the stripe-B phase are unstable in response to an infinitesimal $\Gamma$ interaction. Moreover, we find five new phases for $\Gamma\ne 0$ which are elaborated by both the quantum and classical numerical methods. Part of the space previously identified as the 120$^\circ$ N\'{e}el phase in the classical study is found to give way to the modulated stripe phase. Depending on the sign of the $\Gamma$ term, the FM-A phase transits into the FM-B ($\Gamma>0$) and FM-C ($\Gamma<0$) phases with different spin orientations, and the stripe-B phase transits into the stripe-C ($\Gamma>0$) and stripe-A ($\Gamma<0$) phases. Around the positive $\Gamma$ point, due to the interplay of the Heisenberg, Kitaev and $\Gamma$ interactions, we find a possible quantum spin liquid with a continuum in spin excitations.
Article
Full-text available
Every field management is unique and therefore fit-for-purpose data requirements vary from field to field and even from reservoir to reservoir within a field. The knowledge and experience of reservoir team members, combined with the budgets and technology available to them at different stages of field life, all contribute to the commercial return on investment. When it is not in our power to follow what is true, we ought to follow what is most probable. In addition to the most likely value of the hydrocarbon in place, the reservoir engineer needs to know the uncertainty about this value. Another source of uncertainty and risk is the use of radioactive elements in logging data acquisition. The loss of these radioactive elements may cause damage to the environment and even to the quality of produced oil; this is in addition to the economic risk. This paper presents two methods for quantification of uncertainties propagated from log data: the Monte Carlo method, which requires a few lines of programming but a long computation time, and an analytical solution, which requires a sophisticated program but a short computation time. These analyses provide an estimation of the dispersion of the results caused by random and sampling errors.
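As a schematic of the Monte Carlo option mentioned in the abstract ("a few lines of programming but a long computation time"), uncertainty propagation from a log reading through a petrophysical relation can be sketched as follows. The relation `porosity_from_logs`, its parameter values, and the assumed measurement distribution are hypothetical placeholders, not the paper's actual model:

```python
import numpy as np

def porosity_from_logs(rho_b, rho_ma=2.65, rho_fl=1.0):
    """Hypothetical example relation: density porosity from bulk density."""
    return (rho_ma - rho_b) / (rho_ma - rho_fl)

rng = np.random.default_rng(1)
n = 100_000
# Sample the log reading from its assumed measurement-error distribution
rho_b_samples = rng.normal(loc=2.30, scale=0.02, size=n)
phi_samples = porosity_from_logs(rho_b_samples)
print(phi_samples.mean(), phi_samples.std())   # dispersion propagated from the log uncertainty
```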
Thesis
This thesis studies the behavior of event-chain Monte Carlo (ECMC) in long-range particle systems. We propose the formulation for ECMC to sample a Coulomb system under periodic boundary conditions. By grouping all Coulomb pairs between two molecules, the dipole factor induces smaller event rates. Together with the cell-veto method, we obtain an O(N log N)-per-sweep algorithm for dipole systems. We also develop a scientific application called JeLLyFysh for molecular simulations through ECMC. JeLLyFysh is designed in a highly extensible way and is open-source online. Using JeLLyFysh, we profile the performance of ECMC for large water systems. The resulting dynamics imply that a more sophisticated scheme is needed to equilibrate the polarization. Thus, we test a sampling strategy with sequential direction changes. The dipole evolution exhibits distinct dynamics, and the set of direction choices and the order of selection both prove crucial for mixing the dipole's orientation.
Article
The authors propose a novel method to evaluate the position-dependent diffusion constant by analyzing unperturbed segments of a trajectory determined by the additional flat-bottom potential. The accuracy of this novel method is first established by studying homogeneous systems, where the reference value can be obtained by the Einstein relation. The applicability of this new method to heterogeneous systems is then demonstrated by studying a hydrophobic solute near a hydrophobic wall. The proposed method is also comprehensively compared with popular conventional methods, whereby the significance of the present method is illustrated. The novel method is powerful and useful for studying kinetics in heterogeneous systems based on molecular dynamics calculations.
Article
STUDY QUESTION To what extent do characteristics of germline genome editing (GGE) determine whether the general public supports permitting the clinical use of GGE? SUMMARY ANSWER The risk that GGE would cause congenital abnormalities had the largest effect on support for allowing GGE, followed by effectiveness of GGE, while costs, the type of application (disease or enhancement) and the effect on child well-being had moderate effects. WHAT IS KNOWN ALREADY Scientific progress on GGE has increased the urgency of resolving whether and when clinical application of GGE may be ethically acceptable. Various expert bodies have suggested that the treatment characteristics will be key in determining whether GGE is acceptable. For example, GGE with substantial risks (e.g. 15% chance of a major congenital abnormality) may be acceptable to prevent a severe disease but not to enhance non-medical characteristics or traits of an otherwise healthy embryo (e.g. eye colour or perhaps in the future more complex traits, such as intelligence). While experts have called for public engagement, it is unclear whether and how much the public acceptability of GGE is affected by the treatment characteristics proposed by experts. STUDY DESIGN, SIZE, DURATION The vignette-based survey was disseminated in 2018 among 1857 members of the Dutch general public. An online research panel was used to recruit a sample representing the adult Dutch general public. PARTICIPANTS/MATERIALS, SETTING, METHODS A literature review identified the key treatment characteristics of GGE: the effect on the well-being of the future child, use for disease or enhancement, risks for the future child, effectiveness (here defined as the chance of a live birth, assuming that if the GGE was not successful, the embryo would not be transferred), cost and availability of alternative treatments/procedures to prevent the genetic disease or provide enhancement (i.e. preimplantation genetic testing (PGT)), respectively. For each treatment characteristic, 2–3 levels were defined to realistically represent GGE and its current alternatives, donor gametes and ICSI with PGT. Twelve vignettes were created by fractional factorial design. A multinominal logit model assessed how much each treatment characteristic affected participants’ choices. MAIN RESULTS AND THE ROLE OF CHANCE The 1136 respondents (response rate 61%) were representative of the Dutch adult population in several demographics. Respondents were between 18 and 89 years of age. When no alternative treatment/procedure is available, the risk that GGE would cause (other) congenital abnormalities had the largest effect on whether the Dutch public supported allowing GGE (coefficient = −3.07), followed by effectiveness (coefficient = 2.03). Costs (covered by national insurance, coefficient = −1.14), the type of application (disease or enhancement; coefficient = −1.07), and the effect on child well-being (coefficient = 0.97) had similar effects on whether GGE should be allowed. If an alternative treatment/procedure (e.g. PGT) was available, participants were not categorically opposed to GGE, however, they were strongly opposed to using GGE for enhancement (coefficient = −3.37). The general acceptability of GGE was higher than participants’ willingness to personally use it (P < 0.001). 
When participants considered whether they would personally use GGE, the type of application (disease or enhancement) was more important, whereas effectiveness and costs (covered by national insurance) were less important than when they considered whether GGE should be allowed. Participants who were male, younger and had lower incomes were more likely to allow GGE when no alternative treatment/procedure is available. LIMITATIONS, REASONS FOR CAUTION Some (e.g. ethnic, religious) minorities were not well represented. To limit complexity, not all characteristics of GGE could be included (e.g. out-of-pocket costs), therefore, the views gathered from the vignettes reflect only the choices presented to the respondents. The non-included characteristics could be connected to and alter the importance of the studied characteristics. This would affect how closely the reported coefficients reflect ‘real-life’ importance. WIDER IMPLICATIONS OF THE FINDINGS This study is the first to quantify the substantial impact of GGE’s effectiveness, costs (covered by national insurance), and effect on child well-being on whether the public considered GGE acceptable. In general, the participants were strikingly risk-averse, in that they weighed the risks of GGE more heavily than its benefits. Furthermore, although only a single study in one country, the results suggests that—if sufficiently safe and effective—the public may approve of using GGE (presumably combined with PGT) instead of solely PGT to prevent passing on a disease. The reported public views can serve as input for future consideration of the ethics and governance of GGE. STUDY FUNDING/COMPETING INTEREST(S) Young Academy of the Royal Dutch Academy of Sciences (UPS/RB/745), Alliance Grant of the Amsterdam Reproduction and Development Research Institute (2017–170116) and National Institutes of Health Intramural Research Programme. No competing interests. TRIAL REGISTRATION NUMBER N/A.
Article
We estimated the residual entropy of Ice Ih by the recently developed simulation protocol, namely, the combination of the replica-exchange Wang–Landau algorithm and multicanonical replica-exchange method. We employed a model with the nearest neighbor interactions on the three-dimensional hexagonal lattice, which satisfied the ice rules in the ground state. The results showed that our estimate of the residual entropy is in accordance with various previous results. In this article, we not only give our latest estimate of the residual entropy of Ice Ih but also discuss the importance of the uniformity of a random number generator in Monte Carlo simulations.
Article
The worm algorithm is a versatile technique in the Markov chain Monte Carlo method for both classical and quantum systems. The algorithm substantially alleviates critical slowing down and reduces the dynamic critical exponents of various classical systems. It is crucial to improve the algorithm and push the boundary of the Monte Carlo method for physical systems. We here propose a directed worm algorithm that significantly improves computational efficiency. We use the geometric allocation approach to optimize the worm scattering process: worm backscattering is averted, and forward scattering is favored. Our approach successfully enhances the diffusivity of the worm head (kink), which is evident in the probability distribution of the relative position of the two kinks. Performance improvement is demonstrated for the Ising model at the critical temperature by measurement of exponential autocorrelation times and asymptotic variances. The present worm update is approximately 25 times as efficient as the conventional worm update for the simple cubic lattice model. Surprisingly, our algorithm is even more efficient than the Wolff cluster algorithm, which is one of the best update algorithms. We estimate the dynamic critical exponent of the simple cubic lattice Ising model to be z≈0.27 in the worm update. The worm and the Wolff algorithms produce different exponents of the integrated autocorrelation time of the magnetic susceptibility estimator but the same exponent of the asymptotic variance. We also discuss how to quantify the computational efficiency of the Markov chain Monte Carlo method. Our approach can be applied to a wide range of physical systems, such as the |ϕ|4 model, the Potts model, the O(n) loop model, and lattice QCD.
Article
Full-text available
We focus on the concentration dependence of fibril-forming peptides, which can aggregate by themselves. In this study, we performed replica-exchange molecular dynamics simulations of Lys-Phe-Phe-Glu (KFFE) fragments, which are known to form fibrils in experiments, under different concentration environments. The analysis of static structure factors suggested that the density fluctuation of the KFFE fragments grows as the concentration increases. The number of β-structures and oligomers was also found to increase at high concentration. Hence, a high-concentration environment of fibril-forming peptides is likely to promote protein aggregation.
Article
We benchmark event-chain Monte Carlo (ECMC) algorithms for tethered hard-disk dipoles in two dimensions, in view of the application of ECMC to water models in molecular simulation. We characterize the rotation dynamics of the dipoles through the integrated autocorrelation times of the polarization. The non-reversible straight, reflective, forward, and Newtonian ECMC algorithms are all event-driven and move only a single hard disk at any time; they differ only in their update rules at event times. We show that they achieve considerable speedups with respect to the local reversible Metropolis algorithm with single-disk moves. We also find significant speed differences among the ECMC variants. Newtonian ECMC appears particularly well suited for overcoming the dynamical arrest that has plagued straight ECMC for three-dimensional dipolar models with Coulomb interactions.
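The tethered-dipole systems benchmarked here are too involved for a short snippet, but the event-driven, lifted character of straight ECMC can be illustrated for hard rods on a one-dimensional ring. The sketch below is our own simplified illustration under that assumption, not the authors' implementation, and all parameters are arbitrary.

import numpy as np

def straight_ecmc_chain(x, sigma, box, chain_length, rng):
    # One straight-ECMC event chain for hard rods of length sigma on a ring.
    # All displacements are in the +x direction; at each collision event the
    # 'lift' (the index of the moving rod) is transferred to the hit neighbour.
    n = len(x)
    i = rng.integers(n)                       # rod carrying the initial lift
    remaining = chain_length
    while remaining > 0.0:
        j = (i + 1) % n                       # right neighbour in cyclic order
        gap = (x[j] - x[i] - sigma) % box     # free distance before contact
        step = min(gap, remaining)
        x[i] = (x[i] + step) % box
        remaining -= step
        if step == gap:                       # collision: transfer the lift
            i = j
    return x

rng = np.random.default_rng(0)
n, sigma, box = 10, 0.5, 20.0
# A valid (non-overlapping) initial configuration in cyclic order.
x = np.sort(rng.random(n)) * (box - n * sigma) + sigma * np.arange(n)
for _ in range(1000):
    x = straight_ecmc_chain(x, sigma, box, chain_length=5.0, rng=rng)

The reflective, forward, and Newtonian variants mentioned in the abstract differ only in how the lift is updated at such events, which is precisely what the benchmark compares.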
Article
The celebrated Kitaev honeycomb model provides an analytically tractable example with an exact quantum spin liquid ground state. In real materials, other types of interactions besides the Kitaev coupling (K) are present, such as Heisenberg (J) and symmetric off-diagonal (Γ) terms, and these interactions can also be generalized to a triangular lattice. Here, we carry out a comprehensive study of the J−K−Γ model on a triangular lattice covering the whole parameter region, using a combination of exact diagonalization, classical Monte Carlo, and analytic methods, with an emphasis on the effects of the Γ term. In the Heisenberg–Kitaev limit (Γ=0), we find five quantum phases which are quite similar to their classical counterparts. Among them, the stripe-A and dual Néel phases are robust against the introduction of the Γ term; in particular, stripe-A extends to the region connecting K=−1 and K=1 for Γ<0. Though the 120∘ Néel phase also extends to finite Γ, its region is largely reduced compared to the previous classical result. Interestingly, the ferromagnetic (dubbed FM-A) and stripe-B phases are unstable against an infinitesimal Γ interaction. Moreover, we find five additional phases for Γ≠0, which are characterized by both quantum and classical numerical methods. Part of the parameter space previously identified as the 120∘ Néel phase in the classical study is found to give way to the modulated stripe phase. Depending on the sign of the Γ term, the FM-A phase transitions into the FM-B (Γ>0) and FM-C (Γ<0) phases with different spin orientations. Similarly, the stripe-B phase transitions into the stripe-C (Γ>0) and stripe-A (Γ<0) phases. Around the positive Γ point, due to the interplay of the Heisenberg, Kitaev, and Γ interactions, we find a possible quantum spin liquid, with a continuum in the spin excitations, in a sizeable region of the parameter space.
Article
Advances in simulational methods sometimes have their origin in unusual places; such is the case with an entire class of methods which attempt to beat critical slowing down in spin models on lattices by flipping correlated clusters of spins in an intelligent way instead of simply attempting single spin-flips. The first steps were taken by Fortuin and Kasteleyn (Kasteleyn and Fortuin, 1969; Fortuin and Kasteleyn, 1972), who showed that it was possible to map a ferromagnetic Potts model onto a corresponding percolation model. The reason that this observation is so important is that in the percolation problem states are produced by throwing down particles, or bonds, in an uncorrelated fashion; hence there is no critical slowing down. In contrast, as we have already mentioned, the q-state Potts model when treated using standard Monte Carlo methods suffers from slowing down. (Even for large q where the transition is first order, the time scales can become quite long.) The Fortuin–Kasteleyn transformation thus allows us to map a problem with slow critical relaxation into one where such effects are largely absent. (As we shall see, not all slowing down is eliminated, but the problem is reduced quite dramatically.)
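As a concrete illustration of cluster flipping built on the Fortuin–Kasteleyn mapping, the sketch below implements a single-cluster (Wolff) update for the two-dimensional Ising model, in which aligned neighbours are joined with the bond probability p = 1 − exp(−2βJ). It is a minimal generic example, not code from the text; the lattice size and temperature are arbitrary choices.

import numpy as np

def wolff_update(spins, beta, J, rng):
    # One Wolff single-cluster flip for the 2D Ising model with periodic boundaries.
    # Aligned neighbours join the cluster with the Fortuin-Kasteleyn probability
    # p_add = 1 - exp(-2*beta*J); the whole cluster is flipped.
    L = spins.shape[0]
    p_add = 1.0 - np.exp(-2.0 * beta * J)
    seed = (rng.integers(L), rng.integers(L))
    cluster_spin = spins[seed]
    spins[seed] = -cluster_spin           # flip on the fly so sites are not revisited
    stack = [seed]
    while stack:
        i, j = stack.pop()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = (i + di) % L, (j + dj) % L
            if spins[ni, nj] == cluster_spin and rng.random() < p_add:
                spins[ni, nj] = -cluster_spin
                stack.append((ni, nj))
    return spins

L, J = 16, 1.0
beta_c = 0.5 * np.log(1.0 + np.sqrt(2.0))   # exact critical coupling of the square lattice
rng = np.random.default_rng(7)
spins = rng.choice([-1, 1], size=(L, L))
for _ in range(10_000):
    wolff_update(spins, beta_c, J, rng)

Because entire correlated clusters are flipped in one step, autocorrelation times near beta_c grow far more slowly with system size than for single-spin-flip updates.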
Article
Aggregates and fibrils of intrinsically disordered α-synuclein are associated with Parkinson's disease. Within the non-amyloid β component (NAC), which spans from the 61st to the 95th residue of α-synuclein, an 11-residue segment called NACore (68GAVVTGVTAVA78) is an essential region for both fibril formation and cytotoxicity. Although NACore peptides alone are known to form aggregates and amyloid fibrils, the mechanisms of aggregation and fibrillation remain unknown. This study investigated the dimerization process of NACore peptides as the initial stage of the aggregation and fibrillation processes. We performed an isothermal-isobaric replica-permutation molecular dynamics simulation, an efficient sampling method, for two NACore peptides in explicit water over 96 μs. The simulation succeeded in sampling a variety of dimer structures. An analysis of secondary structure revealed that most of the NACore dimers form intermolecular β-bridges. In particular, more antiparallel β-bridges were observed than parallel β-bridges. We also found that intramolecular secondary structures such as the α-helix and the antiparallel β-bridge are stabilized in the pre-dimer state. However, we identified that the intermolecular β-bridges tend to form directly between residues with no specific structure rather than via the intramolecular β-bridges. This is because the NACore peptides still have a low propensity to form intramolecular secondary structures even though these structures are stabilized in the pre-dimer state.
Article
The hybridization of machine learning and quantum physics has had a substantial impact on the methodology of both fields. Inspired by the quantum potential neural network, we propose here to solve for the potential in the Schrödinger equation given the eigenstate, by combining Metropolis sampling with a deep neural network, which we dub the Metropolis potential neural network (MPNN). A loss function is proposed that explicitly involves the energy in the optimization, allowing its accurate evaluation. Benchmarking on the harmonic oscillator and the hydrogen atom, the MPNN shows excellent accuracy and stability in predicting not just the potential that satisfies the Schrödinger equation but also the eigenenergy. Our proposal could potentially be applied to ab initio simulations and to inversely solving other partial differential equations in physics and beyond.
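Independently of the network details, the inverse problem addressed here can be summarized by rearranging the stationary Schrödinger equation; the one-dimensional form below (in units with ħ = m = 1) is a standard identity, not a result specific to the paper.

\[
  -\tfrac{1}{2}\,\psi''(x) + V(x)\,\psi(x) = E\,\psi(x)
  \quad\Longrightarrow\quad
  V(x) = E + \frac{\psi''(x)}{2\,\psi(x)}, \qquad \psi(x) \neq 0 .
\]

For the harmonic-oscillator ground state, ψ(x) ∝ exp(−x²/2), one has ψ''/ψ = x² − 1, so with E = 1/2 the relation recovers V(x) = x²/2; benchmarks of this kind are what the abstract refers to.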
Article
The management of every field is unique, and therefore fit-for-purpose data requirements vary from field to field and even from reservoir to reservoir within a field. The knowledge and experience of reservoir team members, combined with the budgets and technology available to them at different stages of field life, all contribute to the commercial return on investment. When it is not in our power to follow what is true, we ought to follow what is most probable. In addition to the most likely value of the hydrocarbon in place, the reservoir engineer needs to know the uncertainty about this value. Another source of uncertainty and risk is the use of radioactive elements in logging data acquisition. The loss of these radioactive elements may damage the environment and even the quality of the produced oil, in addition to posing an economic risk. This paper presents two methods for quantifying uncertainties propagated from log data: the Monte Carlo method, which requires a few lines of programming but a long computation time, and an analytical solution, which requires a sophisticated program but a short computation time. These analyses provide an estimate of the dispersion of the results caused by random and sampling errors.
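A minimal sketch of the "few lines of programming" Monte Carlo route is given below. The volumetric oil-in-place relation and all input distributions are illustrative assumptions of ours, standing in for the paper's log-derived quantities, and the percentile summary is one common way to report the resulting dispersion.

import numpy as np

# Monte Carlo propagation of input uncertainties into oil in place.
# N = A * h * phi * (1 - Sw) / Bo; all distributions below are made up for illustration.
rng = np.random.default_rng(42)
n = 1_000_000

area  = rng.normal(2.0e6, 1.0e5, n)    # reservoir area [m^2]
thick = rng.normal(15.0, 2.0, n)       # net pay thickness [m]
phi   = rng.normal(0.22, 0.02, n)      # porosity from logs [-]
sw    = rng.normal(0.30, 0.05, n)      # water saturation from logs [-]
bo    = rng.normal(1.20, 0.05, n)      # oil formation volume factor [rm^3/sm^3]

stoiip = area * thick * phi * (1.0 - sw) / bo   # stock-tank oil initially in place [sm^3]

p10, p50, p90 = np.percentile(stoiip, [10, 50, 90])
print(f"mean = {stoiip.mean():.3e} sm^3, std = {stoiip.std():.3e} sm^3")
print(f"P10 / P50 / P90 = {p10:.3e} / {p50:.3e} / {p90:.3e} sm^3")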
Article
Amyloid-β (Aβ) aggregates are believed to be one of the main causes of Alzheimer's disease. Aβ peptides form fibrils having cross-β-sheet structures mainly through primary nucleation, secondary nucleation, and elongation. In particular, self-catalyzed secondary nucleation is of great interest. Here, we investigate the adsorption of Aβ42 peptides onto the Aβ42 fibril to reveal the role of adsorption as a part of secondary nucleation. We performed extensive molecular dynamics simulations based on replica exchange with solute tempering 2 (REST2) for two systems: a monomeric Aβ42 in solution and a complex of an Aβ42 peptide and an Aβ42 fibril. Results of our simulations show that the Aβ42 monomer is extended on the fibril. Furthermore, we find that the hairpin-structure content of the Aβ42 monomer decreases while the helix content increases upon adsorption to the fibril surface. These structural changes are preferable for forming fibril-like aggregates, suggesting that the fibril surface serves as a catalyst in the secondary nucleation process. In addition, the stabilization of the helix structure of the Aβ42 monomer on the fibril indicates that the strategy of secondary-nucleation-inhibitor design for Aβ40 can also be used for Aβ42.