Science topic

# Monte Carlo - Science topic

Explore the latest questions and answers in Monte Carlo, and find Monte Carlo experts.
Questions related to Monte Carlo
Question
Do Monte Carlo methods and Generalized Linear Models do the same thing?
What are their differences?
Which is best for count data, and why?
These are two separate approaches. A GLM (generalized linear model) is used to infer a regression model and to compare results across specifications, with the optimal model used for the final analysis. Monte Carlo methods use simulations of the original data to find the optimal solution. Simulations may be useful for data that do not follow a known distribution.
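To make the distinction concrete, here is a minimal sketch (hypothetical data, numpy only): the model-fitting step is a one-parameter Poisson fit to count data, and the Monte Carlo step simulates replicate datasets from that fitted model to quantify uncertainty (a parametric bootstrap).

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed count data (simulated here purely for illustration)
counts = rng.poisson(lam=4.0, size=200)

# "GLM" step: for an intercept-only Poisson model the MLE of the rate
# is simply the sample mean.
lam_hat = counts.mean()

# Monte Carlo step: simulate many replicate datasets from the fitted
# model and use the spread of the re-estimated rates as an uncertainty
# interval (a parametric bootstrap).
B = 2000
sim_rates = rng.poisson(lam=lam_hat, size=(B, counts.size)).mean(axis=1)
lo, hi = np.percentile(sim_rates, [2.5, 97.5])
print(f"lambda_hat = {lam_hat:.3f}, 95% MC interval = ({lo:.3f}, {hi:.3f})")
```

The same simulation idea extends to Poisson regressions with covariates: fit the GLM, then resimulate counts from the fitted means.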
Question
Dear scientific researchers,
I am working with the SOLTRACE software to simulate the optical performance of a solar parabolic trough collector, and I would like to know whether it is possible to include a DNI file in the simulation of the studied collector instead of a fixed DNI value. Moreover, I calculate the ratio between the rays hitting the tube receiver and those reflected by the mirrors. I would like to know whether this ratio corresponds to the optical efficiency or to the intercept factor.
Best regards.
Question
I want to do Computational screening of chiral metal–organic frameworks for enantioselective adsorption
Your area of application is different from mine, but I can possibly try to help.
Question
Please share a paper/report/thesis on how Monte Carlo simulation can be applied to multi-criteria decision making.
Look up these papers, they might be helpful to clarify things:
"Application of Monte Carlo Simulation in Multiple Criteria Decision Making" by Syed Muhammad Ali, Hira Sadaf, and Fahad Rasool
"Monte Carlo Simulation in Multiple Criteria Decision Making: A Review of Literature and Suggestions for Future Research" by I. A. El-Haddad
"Monte Carlo Simulation in Multiple Criteria Decision Making: Past Developments and Future Directions" by M. A. Kachi and S. E. El-Khatib
"An Introduction to Monte Carlo Simulation in Decision Making" by S. S. Abdul Razak, B. O. Yusof, Z. N. Zabidi, and M. S. Rosli
Question
Based on the Mediation model, I have 2 parallel mediators for which I would like to calculate the sample size based on a power of 0.80 and a medium effect size. Is what I had input correct?
Determining the sample size for a Monte Carlo power analysis for a mediation model with two parallel mediators can be done using a sample size calculator or by performing the calculations manually.
When performing the calculations manually, you will need to specify the following:
1. Power: The desired power level, typically set at 0.80.
2. Effect size: The medium effect size you expect to find. Cohen's d or the correlation coefficient (r) are commonly used measures of effect size.
3. Alpha level: The level of significance you will use for your test, typically set at 0.05.
4. Number of groups: The number of groups in your study, typically 2 for a mediation model.
5. Number of mediators: The number of parallel mediators in your study, typically 2 for this case.
6. Number of covariates: The number of covariates in your model, if any.
You can use a sample size calculator to determine the sample size based on the above input. However, it's important to note that power analysis is an estimation, and the actual power of the study might be different.
It's also important to note that the sample size is just one of many factors that can affect the power of a study. Other factors, such as measurement error and the strength of the relationship between the variables, should also be considered when planning a study.
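A sketch of such a Monte Carlo power analysis (numpy only; the population path values of 0.39, the sample size of 100, and the Sobel test are illustrative assumptions, not the asker's exact model):

```python
import numpy as np

rng = np.random.default_rng(1)
a1 = a2 = b = 0.39        # hypothetical "medium" path coefficients
n, reps, sig = 100, 1000, 0

for _ in range(reps):
    # Generate one dataset from the assumed population model
    x  = rng.standard_normal(n)
    m1 = a1 * x + rng.standard_normal(n)
    m2 = a2 * x + rng.standard_normal(n)
    y  = b * m1 + b * m2 + rng.standard_normal(n)

    # a-path (M1 regressed on X) with its standard error
    xc = x - x.mean()
    sxx = (xc ** 2).sum()
    a_hat = (xc * m1).sum() / sxx
    rss_a = ((m1 - m1.mean() - a_hat * xc) ** 2).sum()
    se_a = np.sqrt(rss_a / (n - 2) / sxx)

    # b-path (Y regressed on M1, M2, X) with its standard error
    Z = np.column_stack([np.ones(n), m1, m2, x])
    beta, _, _, _ = np.linalg.lstsq(Z, y, rcond=None)
    rss_b = ((y - Z @ beta) ** 2).sum()
    cov = rss_b / (n - 4) * np.linalg.inv(Z.T @ Z)
    b_hat, se_b = beta[1], np.sqrt(cov[1, 1])

    # Sobel z-test for the indirect effect through M1
    z = a_hat * b_hat / np.sqrt(a_hat**2 * se_b**2 + b_hat**2 * se_a**2)
    sig += abs(z) > 1.96

power = sig / reps
print(f"estimated power for the indirect effect via M1: {power:.2f}")
```

Raising or lowering `n` and rerunning gives the power curve; the smallest `n` with power at or above 0.80 is the required sample size.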
Question
Like MPFP, importance sampling, etc.
You should perform Monte Carlo simulations with at least 10,000 iterations at the worst process corner and temperature.
For HSNM and RSNM: FS corner and 125 °C
For WSNM: SF corner and -40 °C
Question
Hi,
Imagine that you are going to sample a protein conformation (consists of N atoms) using Monte Carlo and Molecular Dynamics simulations, independently.
For the MD simulation, the phase space has a dimension of 6N (3N positions and 3N momenta). I am assuming that the phase space for MC has a dimension of 3N (just the positions of the atoms). Am I missing something here?
The premise is not quite right.
The phase space is the same 6N-dimensional space in both simulations. In an MC simulation, however, only the positions are sampled; whenever a velocity or momentum is needed, it must be drawn from the thermal (Maxwell-Boltzmann) distribution associated with the system.
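A minimal sketch of that point (assuming a single particle in a 1D harmonic potential, reduced units): the Metropolis Monte Carlo chain below touches only the position, and a velocity, when needed, is drawn separately from the Maxwell-Boltzmann distribution.

```python
import numpy as np

rng = np.random.default_rng(2)
beta, k = 1.0, 1.0                 # inverse temperature, spring constant
U = lambda q: 0.5 * k * q**2       # potential energy depends only on position

# Metropolis Monte Carlo: sample positions only; momenta never appear.
q, samples = 0.0, []
for step in range(200_000):
    q_new = q + rng.uniform(-1.0, 1.0)               # trial displacement
    if rng.random() < np.exp(-beta * (U(q_new) - U(q))):
        q = q_new                                    # Boltzmann acceptance
    samples.append(q)

samples = np.array(samples[20_000:])                 # discard burn-in
print("Var(q) =", samples.var())                     # theory: kT/k = 1

# If a velocity is ever required (e.g. for a kinetic observable), draw it
# from the Maxwell-Boltzmann distribution, here with mass m = 1:
v = rng.normal(0.0, np.sqrt(1.0 / beta), size=samples.size)
```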
Question
I'm doing a LINAC simulation using the Monte Carlo code TOPAS. I've read somewhere that the NRC LINAC specifications are not proprietary and are in the public domain, but I failed to find them.
Can someone point me in the right direction? I tried Google Patents, ScienceDirect, and Web of Science but came up empty-handed.
I've found this paper (https://doi.org/10.1118/1.1290714) with the geometrical specifications, but two different distances from the central axis are given for the same component; can someone help me understand this?
Any help would be greatly appreciated.
The article mentions only one distance, the 100 cm SSD.
What do you mean by the other distance? Where is it mentioned?
Please let me know so that I can help you.
Regards,
Question
I am trying to determine health risk by Monte Carlo in the @Risk software. When I run my data, the mean, maximum, and minimum are all the same, and the bell-shaped graph does not form.
@Dipita Ghosh
What are your proposed probability distribution curves for the output and the inputs? Have you specified those outputs and inputs properly? Do you have values for the central tendencies of those distributions? You may first need to fill in those values for your proposed output and inputs.
Question
Dear all,
I am eager to conduct a power analysis using the Monte Carlo simulation facility in Mplus. I have run a Multivariate Latent Growth Curve Model (MLGCM) with the three-wave longitudinal data in my study, and I have followed the guideline presented by Muthen & Muthen (2002; ). However, I suspect I am making a systematic mistake (see the attached Monte Carlo simulation output: % sig. coefficient for the mean score for SNAT = .134), but, unfortunately, I cannot find out what the problem/mistake is. Is there anyone who might help me find the mistake(s)?
The output of the Monte Carlo simulation is attached. Besides, you can see the output of my MLGCM below. Thanks in advance for any kind of help!
MODEL RESULTS
Two-Tailed
Estimate S.E. Est./S.E. P-Value
INAT |
ID_T1 1.000 0.000 999.000 999.000
ID_T2 1.000 0.000 999.000 999.000
ID_T3 1.000 0.000 999.000 999.000
SNAT |
ID_T1 0.000 0.000 999.000 999.000
ID_T2 1.000 0.000 999.000 999.000
ID_T3 2.000 0.000 999.000 999.000
IET |
ID_G1 1.000 0.000 999.000 999.000
ID_G2 1.000 0.000 999.000 999.000
ID_G3 1.000 0.000 999.000 999.000
SET |
ID_G1 0.000 0.000 999.000 999.000
ID_G2 1.000 0.000 999.000 999.000
ID_G3 2.000 0.000 999.000 999.000
SNAT WITH
INAT -0.083 0.041 -2.037 0.042
IET WITH
INAT -0.208 0.080 -2.582 0.010
SNAT 0.019 0.034 0.556 0.578
SET WITH
INAT 0.063 0.037 1.732 0.083
SNAT 0.017 0.014 1.178 0.239
IET -0.205 0.087 -2.361 0.018
Means
INAT 3.497 0.061 57.275 0.000
SNAT -0.027 0.026 -1.018 0.309
IET 3.479 0.078 44.763 0.000
SET -0.129 0.035 -3.713 0.000
Intercepts
ID_T1 0.000 0.000 999.000 999.000
ID_T2 0.000 0.000 999.000 999.000
ID_T3 0.000 0.000 999.000 999.000
ID_G1 0.000 0.000 999.000 999.000
ID_G2 0.000 0.000 999.000 999.000
ID_G3 0.000 0.000 999.000 999.000
Variances
INAT 0.706 0.092 7.662 0.000
SNAT 0.079 0.031 2.568 0.010
IET 1.141 0.156 7.316 0.000
SET 0.148 0.069 2.135 0.033
Residual Variances
ID_T1 0.131 0.068 1.929 0.054
ID_T2 0.244 0.040 6.055 0.000
ID_T3 0.057 0.063 0.901 0.368
ID_G1 0.115 0.116 0.998 0.318
ID_G2 0.338 0.059 5.696 0.000
ID_G3 0.139 0.105 1.324 0.186
Dear Prof. Geiser,
Indeed, -0.020 is the slope factor mean in my original model. In fact, theoretically speaking such a stable pattern makes more sense rather than a relatively increased or decreased pattern. As you indicated, %sig coeff might indicate Type-1 error. I will closely monitor the warning messages as well.
Thanks again!
Best,
Savaş
Question
Determining intervals for the common language effect size (CLES), probability of superiority (PS), area under the curve (AUC), or exceedance probability (EP) is possible via multiple methods (Ruscio and Mullen, 2012). However, is this also possible via Fisher's z transformation? For simplicity I will call the “effect size” EP.
If we make the following assumptions: we have a (real) value that can range between -1 and 1, and we assume the error distribution is (approximately) normally distributed (also invoking the CLT), then we should be able to obtain intervals via Fisher's z transformation (I think???).
The rationale is: EP does not range from -1 to 1, but from 0 to 1, where 0.5 represents the null. However, it is possible to transform the EP to a value between -1 and 1 by assuming a “directionality”: < 0.5 is negative and >= 0.5 is positive. Then,
EPd = (EP - 0.5) * 2
EPz = ln[ (1 + EPd) / (1 - EPd) ] * 0.5 = atanh(EPd)
SE = √[ 1/(n1 - 3) + 1/(n2 - 3) ]
Lower = EPz - 1.96 * SE
Upper = EPz + 1.96 * SE
Transformation back to the original scale (EP) is possible for both positive and negative values, and in fact both cases reduce to the same expression:
EP = [ (exp(2 * EPz) - 1) / (exp(2 * EPz) + 1) ] / 2 + 0.5 = tanh(EPz)/2 + 0.5 = [ 1 + tanh(EPz) ] / 2
However, when comparing the analytical intervals to Monte Carlo (MC) simulations, the analytical intervals are much broader at smaller sample sizes (although the extreme bound, the upper one when EP < 0.5 or the lower one when EP > 0.5, agrees better). Below is an example from Ruscio and Mullen (2012) where n1 and n2 are both 15, and another example with the same mu and sd where nx and ny are both 150. The intervals of Ruscio and Mullen (2012) are also much smaller. The question is: why are these intervals broader? Is the rationale completely wrong, did I make a mistake, or is what I am trying simply impossible? I know there are other ways of obtaining the intervals, but using Fisher's z transformation would make it rather “elegant”.
Hello Wim,
I think your scenario doesn't match well to the situation for which Fisher initially developed the z-transformation (r-to-z). The issue with the distribution of the Pearson correlation (r) is that it does not have a constant variance and the r values don't behave linearly very well (unlike r-squared). So, the z-transform is principally a variance-stabilizing measure, which does tend to yield a z-variate that conforms better to the normal distribution than does r.
But, you're presuming that the variate to be transformed has an underlying normal distribution; that's unlike the r-to-z situation. As well, I'm guessing that your intention is to convert some proportion of area (of non-overlap) to a z-score or some other standard score (not the z-transformation, above), which is also presuming a normal distribution for whatever attribute is under investigation, which may not be the case (and frequently is not in practice). Why not stick with proportions as your explanatory ES, as in:
1. CLES of McGraw & Wong: Proportion of times randomly selected case from batch A has score that exceeds that of randomly selected case from batch B?
2. Cliff's dominance statistic: CLES - proportion of times that B cases exceed A cases?
3. Vargha & Delaney's A statistic (similar to Cliff's d, though it also splits tied cases and assigns half to one batch and half to the other)?
Each of these is exact. There is a pretty good closed form expression for the variance of d, if this is important.
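Those three statistics can be computed exactly from the pairwise comparisons. A small sketch (these are the empirical "probability of superiority" versions, not the normal-theory CLES formula):

```python
import numpy as np

def dominance_stats(a, b):
    """Compare every score in a with every score in b."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a[:, None] - b[None, :]      # all pairwise differences
    wins   = (diff > 0).mean()          # P(A > B)
    losses = (diff < 0).mean()          # P(A < B)
    ties   = (diff == 0).mean()
    A = wins + 0.5 * ties               # Vargha-Delaney A (ties split)
    d = wins - losses                   # Cliff's dominance statistic
    return A, d

A, d = dominance_stats([1, 2, 3], [0, 1, 2])
print(A, d)                             # A = 7/9, d = 5/9
```

Note the exact relation A = (d + 1) / 2, so an interval for one converts directly into an interval for the other.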
Question
Dear all,
I am looking for any helpful resources on Markov chain Monte Carlo simulation. A PDF, a book, a Stata do-file, or an R script would be a great help; any starting point where I can learn MCMC quickly.
I would be really happy if someone could share Stata do-files or R scripts for Markov chain Monte Carlo.
Thank you very much
Not about MCMC per se, but worth looking: https://czekster.github.io/markov/
Question
Many Monte Carlo methods for solving a given partial differential equation (PDE) are built by sampling the PDE's Green's function. E.g., heat diffusion, diffusion-convection-reaction equations, and so on have algorithms that can be derived directly from the PDE (i.e., through Ito calculus or a stochastic integral). The Radiative Transfer Equation (RTE), too, has an integral representation. However, the argument for explaining Monte Carlo Radiative Transfer (MCRT) ALWAYS revolves around the physical interpretation.
I even found a review article  that states on page 16: "Unlike traditional approaches to RT problems, MCRT calculations do not attempt to solve the RTE directly."
Is there really NO relation (discovered yet) between MCRT and the RTE, or is it just that no one has ever proven one? I understand the physical interpretation; it is just that having mathematical foundations would also help when teaching it in class. Can anyone direct me to a reference that derives this?
 Noebauer, U. M., & Sim, S. A. (2019). Monte Carlo radiative transfer. Living Reviews in Computational Astrophysics, 5(1), 1-103.
Stam Nicolis I checked these references thoroughly. Actually, the document written by Duncan Forgan was the one I used to implement (probably) my first radiative Monte Carlo code about 10 years ago. Still, I couldn't find what I was looking for. Maybe I am not making myself clear, but what I would like to know is whether there is a document that clearly and explicitly explains the relation between the MCRT algorithm (i.e., the steps involved and the iterative structure) and the Radiative Transfer Equation (e.g., the algorithm's relation to each of the operators present within the equation); that is, a relation between MCRT and the RTE without resorting to "imagine what happens to a photon in the physical scenario".
Question
I want to know how I can begin to write a UDF (User Defined Function) for Monte Carlo ray tracing.
Sure
Question
Hi everyone,
I am currently developing a Gibbs Ensemble Monte Carlo algorithm. I am trying to implement a Widom Insertion Method to calculate the chemical potential of the liquid-phase and gas-phase boxes; however, I haven't been able to successfully do it. My inter-particle potential is that of a hard sphere (i.e. equal to infinity when particles overlap). I suspect the issue with my implementation has to do with how I've been treating the instances where the inserted particles overlap with any of the particles already present in the box I'm trying to determine the chemical potential of. I've been guiding myself by the work of Frenkel & Smit; more specifically, the article attached. Can anyone with experience in this topic help me figure this out?
Thank you beforehand for any assistance anyone may provide!
Widom's insertion method works for hard spheres, too. The Boltzmann factor exp(-beta Delta U) is 1 for a successful insertion and 0 for a failure. The problem is, however, that a rather large number of insertion attempts is required to obtain a meaningful ensemble average for the liquid phase.
For hard spheres it might be better to determine the chemical potential by thermodynamic integration or, if an insertion method must be used, by a multi-step insertion (i.e., insertion of a point particle followed by a gradual increase of its size).
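A sketch of the hard-sphere Widom estimate described above (hypothetical box size and density, and a dilute configuration built by random sequential insertion rather than a full Gibbs Ensemble run): the Boltzmann factor of a test insertion is 1 or 0, so the excess chemical potential follows from the fraction of non-overlapping insertions.

```python
import numpy as np

rng = np.random.default_rng(3)
L, N, sigma = 10.0, 60, 1.0          # box side, particle count, sphere diameter

def overlaps(pos, trial):
    """Minimum-image overlap check of a trial position against all spheres."""
    d = pos - trial
    d -= L * np.round(d / L)         # periodic boundary conditions
    return (np.sum(d**2, axis=1) < sigma**2).any()

# Build a dilute non-overlapping configuration by random sequential insertion
pos = np.empty((0, 3))
while len(pos) < N:
    trial = rng.uniform(0.0, L, 3)
    if not overlaps(pos, trial):
        pos = np.vstack([pos, trial])

# Widom test insertions: exp(-beta*dU) is 1 (no overlap) or 0 (overlap)
M = 20_000
hits = sum(not overlaps(pos, rng.uniform(0.0, L, 3)) for _ in range(M))
p_ins = hits / M                     # insertion probability = <exp(-beta*dU)>
mu_ex = -np.log(p_ins)               # excess chemical potential in units of kT
print(f"insertion probability {p_ins:.3f}, beta*mu_ex = {mu_ex:.3f}")
```

At liquid-like densities `p_ins` becomes tiny, which is exactly why the answer above recommends many attempts or a multi-step scheme.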
Question
I am looking for the following article for a student at the University of Burgundy : - A Monte Carlo Study of Confidence Interval Methods for Generalizability Coefficient / Zhehan Jiang, Mark Raymond, Christine DiStefano, Dexin Shi, Ren Liu, Junhua Sun
Published August 7, 2021, in Educational and Psychological Measurement. Our library can pay for this interlibrary loan with IFLA vouchers. Thank you for your help.
Question
The forecast error, determined by the probability density function (PDF) of the system forecast error, can be used to model uncertainty. Monte Carlo sampling (MCS) and simulation generate a number of solar irradiance and load demand scenarios. The greater the number of generated scenarios, the more complicated the computation and the longer the convergence time. As a result, this study used a K-means clustering-driven scenario reduction scheme to reduce the generated scenarios while preserving a precise estimation of the uncertainty.
Is there a manuscript or book where these concepts can be learned, or any MatLab/python code where they can be understood?
Monte Carlo is a simple technique. Let me explain it in steps:
1) First, you need a probability density function for the uncertain parameters (solar, load).
2) Let us say the load is modeled as a normal distribution; this means you know its mean and standard deviation.
3) You can then generate random numbers based on the above information: random('Normal',mean,std,1000,1) in MATLAB, where 1000 is the required number of samples (say). So now you have 1000 random samples of the load.
4) You can do the same for the solar irradiance; the only difference is that irradiance is commonly modeled as a beta distribution, so use random('Beta',a,b,1000,1), where a and b are the shape parameters.
5) You can use the Gaussian copula method to generate correlated random numbers if you want (sometimes solar and load are correlated).
The above steps are not yet Monte Carlo, but let's say you want to analyze the impact of the uncertain parameters on the voltage of your test system. Then you can use the generated random samples to calculate voltage values (1000 in total) using load flow.
So, once you have 1000 samples of the voltage, you can find its expected value and variance to get statistical information about it, or even build a probability function if you want. The overall method is then called Monte Carlo simulation for voltage assessment.
I think in your case you just need to follow steps 1-4 and then use the clustering method to provide the most frequently occurring scenarios (which will of course depend on the number of clusters you decide on).
Samundra
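The steps above, plus the K-means reduction from the question, can be sketched as follows (hypothetical load/solar parameters, and a plain numpy Lloyd iteration standing in for a library K-means):

```python
import numpy as np

rng = np.random.default_rng(4)

# Steps 2-4: draw Monte Carlo samples of the uncertain inputs.
n = 1000
load  = rng.normal(100.0, 10.0, n)        # load, hypothetical mean/std (MW)
solar = rng.beta(2.0, 3.0, n) * 800.0     # irradiance, hypothetical scale (W/m^2)
scen  = np.column_stack([load, solar])    # 1000 raw scenarios
# (For real use, standardize the columns before clustering.)

# Scenario reduction: Lloyd's K-means with cluster counts as probabilities.
k = 10
centers = scen[rng.choice(n, k, replace=False)]
for _ in range(50):
    labels = np.argmin(((scen[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([scen[labels == j].mean(axis=0)
                        if (labels == j).any() else centers[j]
                        for j in range(k)])

weights = np.bincount(labels, minlength=k) / n   # probability of each scenario
print(centers.round(1))
print(weights)                                   # weights sum to 1
```

The 10 cluster centers with their weights replace the 1000 raw scenarios in the downstream (e.g., load flow) computation.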
Question
Dear colleagues
I have tried to conduct a Monte Carlo simulation in Mplus for a latent three-factor model with correlations between the factors. I want to test whether n = 500 would be enough to construct the factors properly and to detect correlations as low as r = .3. (I know it probably is; I just wanted to finally explore Monte Carlo simulations.)
However, I keep receiving the following error message, no matter what I do:
*** ERROR in MONTECARLO command
Probabilities of observations for patterns must sum up to 1.
Unfortunately, the Mplus website is no help. I have enclosed my syntax. It is a lot to ask, but those of you who are familiar with Monte Carlo simulations for categorical indicators, would you consider pointing me in the right direction? What do I need to change?
Best
Marcel
P.S. I have attached a more reader friendly PDF of the syntax as well.
-------------------------------------------------------------------------------------------
TITLE:
Monte Carlo study
three factors: subjective content knowledge (geography, history, civics)
categorical indicators (4-point Likert scale)
MONTECARLO:
NAMES =
f18_a f18_b f18_c f18_d f18_e f18_f f18_g
f19_a f19_b f19_c f19_d f19_e
f20_a f20_b f20_c f20_d f20_e f20_f f20_h f20_i;
CATEGORICAL =
f18_a f18_b f18_c f18_d f18_e f18_f f18_g
f19_a f19_b f19_c f19_d f19_e
f20_a f20_b f20_c f20_d f20_e f20_f f20_h f20_i;
NOBSERVATIONS = 500;
NREPS = 1000;
!Items measured on a 4-point Likert scale
GENERATE = f18_a-f20_i (3 p);
!5 % missings on each item assumed
PATMISS = f18_a-f20_i (.05);
!50 % of cases assumed to have missings
PATPROBS = .50;
!The next part is probably where I get it wrong
MODEL POPULATION:
SCK_Geo BY f18_a-f18_g*.1
SCK_Geo@1;
SCK_His BY f19_a-f19_e*.1
SCK_His@1;
SCK_Civ BY f20_a-f20_i*.1
SCK_Civ@1;
SCK_Civ WITH SCK_Geo*.3;
SCK_Civ WITH SCK_His* .3;
SCK_His WITH SCK_Geo* .3;
ANALYSIS:
TYPE = general;
ESTIMATOR = WLSMV;
PARAMETERIZATION=THETA;
MODEL:
SCK_Geo BY f18_a-f18_g*.1
SCK_Geo@1;
SCK_His BY f19_a-f19_e*.1
SCK_His@1;
SCK_Civ BY f20_a-f20_i*.1
SCK_Civ@1;
SCK_Civ WITH SCK_Geo*.3;
SCK_Civ WITH SCK_His* .3;
SCK_His WITH SCK_Geo * .3;
OUTPUT: Tech9;
That makes a lot of sense, because defining all those values seemed fairly arbitrary to me. For example, how do I know what value I should assign to the second threshold of the fourth variable? Having some data to fall back on seems advantageous. Still, I would like to be able to set up a population model without any real data, but maybe this is just beyond my abilities, for the moment.
Anyway, I would not even have made it this far without your help. Thanks a million!
Question
Actually, I am confused with the following problems:
1. Monte Carlo generates data whereas I have primary data.
2. Do I still have to use Monte Carlo? Should I generate data through Monte Carlo first and then run the model?
3. Please share with me some sample syntax if anyone has.
4. Following previous studies, my purpose of using Monte Carlo is to find the indirect effect of the independent variable on the dependent variable through double mediation in the multilevel mediation model (2-1-1-2 model).
(From a guideline on practical issues related to the analysis of multilevel data: many different pieces of software are limited; Mplus covers both the analysis and Monte Carlo simulation of indirect effects and moderated mediation in multilevel models, e.g., a Monte Carlo study using two-level data with 200 clusters of varying size.)
Question
Actually, I am confused with the following problems:
1. Monte Carlo generates data whereas I have primary data.
2. Do I still have to use Monte Carlo? Should I generate data through Monte Carlo first and then run the model?
3. Please share with me some sample syntax if anyone has. thanks
I have not used Monte Carlo on that variation
Question
I initially performed an analysis with AMOS of my mediation (4 mediators), using McKinnon.
One reviewer suggested that I use Monte Carlo, and another suggested bootstrapping with Mplus. I have the notion that Monte Carlo is better than the bootstrap, based on Preacher & Selig (2012), so I was thinking of performing a Monte Carlo analysis of the indirect effects in Mplus, but I don't know how to do it.
Anyone can help me telling me what syntax to introduce in Mplus in order to conduct the mediation and the Monte Carlo indirect effects analysis?
That is an interesting approach.
Question
Michael Uebel Thank you sir!
Question
Hi everyone. I took a basic course on Markov chains and know a little about Monte Carlo simulations and methods. But I never got to the part about spreadsheets.
If anyone can direct me to a few non-technical, not-too-hard-to-read books on Monte Carlo simulations, I would be grateful.
The following are some interesting books:
"Introducing Monte Carlo Methods with R" by Christian Robert and George Casella, Springer-Verlag New York, 2010
"The Monte Carlo Simulation Method for System Reliability and Risk Analysis" by Enrico Zio, Springer-Verlag London, 2013
"Handbook of Monte Carlo Methods" (Wiley Series in Probability and Statistics) by Dirk P. Kroese, Thomas Taimre, and Zdravko I. Botev, Wiley, 2011
"Essentials of Monte Carlo Simulation: Statistical Methods for Building Simulation Models" by Nick T. Thomopoulos, Springer-Verlag New York, 2013
"Simulation and the Monte Carlo Method" by Reuven Y. Rubinstein and Dirk P. Kroese, Wiley, 2017
Best of luck
Question
Suppose a six-degree-of-freedom simulation of an aircraft in which some aerodynamic parameters (e.g., stability derivatives), the mass configuration (e.g., center of mass), etc., are randomly chosen within known bounds. From the Monte Carlo sample, the simulations are split into two groups: with instability (at any time during the simulation) and without instability during the flight.
My question is: how could I find the most important combinations of random parameters that caused the instability in flight?
I have already done a sensitivity analysis, so I have an idea of how each parameter influences the result individually. What I really want to find is how combinations of parameters cause the instability.
For more details, we have:
Monte Carlo Simulation of Real Dynamic Systems :
Best regards
Question
I am assessing pros and cons of alternative statistical methods. I would appreciate your advice.
Thanks,
Luis Orlando Duarte
Could it be that you mean "permutation" instead of "randomization"?
If so, then note that the bootstrap can be seen as a stochastic approximation to the permutation test.
Question
Hi there, I need expert opinion in evaluating the use of Monte Carlo method and its accuracy in predicting the permeability of rock layers in oil exploration.
(Sep 13, 2020): The Monte Carlo method is a computational approach to calculate statistical averages for typical classical models or multi-dimensional integrals (that are averages) for quantum models. You can calculate anything that you want for a given model using the Monte Carlo method. The errors can be reduced tending to zero (for classical models). If your model is reliable, the Monte Carlo method will calculate very accurately, let's say the permeability of rock layers, for that model. Quite likely, this will correspond to the reality on the ground. But, if your starting model is inaccurate, the Monte Carlo results will still be numerically very accurate (but reflecting the model). Likely, they will not correspond to the reality on the ground.
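A tiny illustration of that error behavior (a one-dimensional integral with a known answer; the statistical error shrinks like 1/√N, so it can be driven toward zero at will, independently of whether the underlying model is realistic):

```python
import numpy as np

rng = np.random.default_rng(5)

# Monte Carlo estimate of the integral of x^2 on [0, 1]; exact value 1/3.
def mc_integral(n):
    return (rng.uniform(0.0, 1.0, n) ** 2).mean()

estimates = {n: mc_integral(n) for n in (100, 10_000, 1_000_000)}
for n, est in estimates.items():
    print(f"N = {n:>9,}  estimate = {est:.5f}  |error| = {abs(est - 1/3):.5f}")
```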
Question
Does anyone know of geotechnical engineering software that supports subset simulation? I need to do some probabilistic analysis of a geotechnical project. However, due to the small failure probability, I need to use subset simulation instead of crude Monte Carlo analysis.
Question
When calculating a budget or a risk reserve, a simulation or estimation is performed.
Sometimes the Monte Carlo simulation is used.
It seems that each administration and each company uses different confidence percentiles when summarising the simulations in order to take a final decision.
Commonly, 70%, 75% or 80% percentiles are used. The American administration uses 60% for civil works projects...
My doubt is, is there any recommendation or usual approach to choose a percentile?
Is there any standard or normalized confidence percentile to use?
I expected to find such an answer in the AACE International or International Cost Estimating and Analysis Association, but I did not.
Thank you for sharing your knowledge.
I believe setting confidence percentiles is more of an art than an exact science. It really depends on how important it is that the outcome remains within the interval, but there are no real quantitative standards.
If going over budget almost certainly means default, you may want to base risk reserves on a high percentile. If default is not imminent, as with governmental institutions, the percentile may be set lower.
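For illustration: whichever percentile is chosen, once the simulation has produced a cost distribution, the reserve can be read off directly. The sketch below uses three hypothetical triangular cost items.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical project: three cost items with triangular uncertainty (kEUR).
n = 100_000
cost = (rng.triangular(90, 100, 130, n)    # civil works (min, mode, max)
        + rng.triangular(40, 50, 80, n)    # equipment
        + rng.triangular(10, 15, 40, n))   # risk-prone item

# Common decision percentiles of the simulated total cost
p50, p60, p70, p80 = np.percentile(cost, [50, 60, 70, 80])
print(f"P50={p50:.0f}  P60={p60:.0f}  P70={p70:.0f}  P80={p80:.0f} kEUR")
```

The gap between, say, P50 and P80 is then the risk reserve implied by choosing the higher percentile.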
Question
I am a beginner in probabilistic forecasting. From my research I have a vague idea that Monte Carlo simulation can be used to inject uncertainty into the process. Do I need to generate multiple point forecasts using Monte Carlo and then post-process them to obtain a probability distribution? Can anyone help with the procedure: what steps should I follow to do probabilistic forecasting? It would be helpful if someone could share an example.
The approach presented by Leutbecher and Palmer (2008) aims to assess the sensitivity of the model to initial conditions. The proposed approach can certainly estimate the spread of the trajectories of the model in a phase space and make some rough estimate of the forecast uncertainty, but it should not be confused with probabilistic modelling. The latter can only be performed when the equations used for the forecast are written explicitly for the stochastic variables. The best known model illustrating this principle is that developed for the study of Brownian motions.
The correct mathematical foundation for probabilistic modelling is the Ito calculus. Please kindly consult the following sites:
Ito calculus and Brownian motions:
Itô’s stochastic calculus: Its surprising power for applications:
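To illustrate the distinction drawn above, here is a sketch of a forecast equation written directly for the stochastic variable: an Ornstein-Uhlenbeck SDE (a Brownian-motion-driven model with hypothetical parameters) integrated with the Euler-Maruyama scheme, whose ensemble at the final time is the probabilistic forecast.

```python
import numpy as np

rng = np.random.default_rng(7)

# Ornstein-Uhlenbeck SDE:  dX = -theta*X dt + sigma dW
# (a textbook stochastic forecast equation driven by Brownian motion)
theta, sigma = 1.0, 1.0
dt, n_steps, n_paths = 0.01, 500, 2000

x = np.zeros(n_paths)                       # all paths start at X(0) = 0
for _ in range(n_steps):                    # Euler-Maruyama integration
    x += -theta * x * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

# The ensemble at t = 5 approximates the forecast distribution
print("mean =", x.mean(), "var =", x.var())
# theory (stationary limit): mean 0, variance sigma^2 / (2*theta) = 0.5
```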
Question
How is Hamiltonian Monte Carlo better than the Markov chain Monte Carlo method in Bayesian computations?
Hamiltonian Monte Carlo is a Markov chain Monte Carlo method. I believe that your question is related to how HMC is better than the Metropolis-Hastings algorithm. Again the Hamiltonian Monte Carlo corresponds to an instance of the Metropolis-Hastings algorithm but using a different dynamic (Hamiltonian dynamics).
From a practical point of view, which I believe is your main concern: in my opinion, HMC usually carries additional computational effort, since extra equations (the gradient dynamics) must be computed. On the other hand, it generates samples more efficiently, with less autocorrelation, so fewer samples, less burn-in, and little or no thinning are needed, and it converges to the target distribution more easily. If I can use it for a specific model, I do.
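A minimal HMC sketch for a standard normal target may make the "Hamiltonian dynamics inside Metropolis-Hastings" point concrete (the step size and trajectory length below are arbitrary illustrative choices, not prescriptions):

```python
import numpy as np

rng = np.random.default_rng(8)

# Target: standard normal, so U(q) = q^2/2 and grad U(q) = q.
U = lambda q: 0.5 * q**2
grad_U = lambda q: q

def hmc_step(q, eps=0.2, L=10):
    p = rng.standard_normal()                 # fresh momentum each step
    q_new, p_new = q, p
    p_new -= 0.5 * eps * grad_U(q_new)        # leapfrog: half momentum kick
    for _ in range(L - 1):
        q_new += eps * p_new                  # full position update
        p_new -= eps * grad_U(q_new)          # full momentum update
    q_new += eps * p_new
    p_new -= 0.5 * eps * grad_U(q_new)        # final half kick
    # Metropolis correction on the total energy (Hamiltonian)
    dH = U(q_new) + 0.5 * p_new**2 - (U(q) + 0.5 * p**2)
    return q_new if rng.random() < np.exp(-dH) else q

q, draws = 0.0, []
for _ in range(5000):
    q = hmc_step(q)
    draws.append(q)
draws = np.array(draws)
print("mean =", draws.mean(), "var =", draws.var())   # near 0 and 1
```

The gradient calls are the extra cost mentioned above; in exchange, successive draws are nearly uncorrelated, unlike a random-walk Metropolis chain.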
Question
I am doing research on neutron spectrum measurement and am interested in response function calculation. I want to know the theory behind NRESP and NEFF. Please share your literature with me, thank you!
My email is cyhu@usc.edu.cn.
Best wishes!
Question
I would like to conduct a Monte Carlo uncertainty analysis for actual and predicted data in R.
Kindly share some effective resources.
I have found this <<https://rdrr.io/rforge/metRology/man/uncertMC.html>>, but understanding its hyperparameters is another challenge.
Best regards!
Mohammed Deeb , Muhammad Ali , and Cristian Ramos-Vera thank you so much for your responses, which certainly strengthen the idea; however, the requirement, as mentioned, is slightly different. Section 2.3 of " " matches the requirement exactly, and the graph after the analysis needs to be drawn like the attached one. I hope you will find something matching the requirements precisely.
I pray for your safe stay in this period.
best regards
Suraj
Question
I have started a new project and I want to simulate CO2 adsorption by a molecular simulation method.
regards
Dear Molaee
Philippe Ungerer, Bernard Tavitian, Anne Boutin
Applications of Molecular Simulation in the Oil and Gas Industry: Monte Carlo Methods
Question
I'm doing a Monte Carlo analysis using Abaqus and Matlab.
I have written a Matlab code that can run the Abaqus job, taking the Abaqus .inp file as input.
I need to solve n realizations of my Abaqus model, each with a different predefined field.
I can assign the predefined field from a .txt file using the Abaqus interface command for creating a mapped field.
My question is whether I can do this assignment of the predefined field from a .txt file using Matlab.
Question
I have done the nonlinear curve fitting of the Birch-Murnaghan EOS to my E vs. V data. I then calculated the chi-squared value and minimised it using Solver, but could not get the minimum values of the constants I need (B0, V0, E0, B0'). I wanted to know where I went wrong; the workbook with the data is attached. Any help is appreciated.
I think the best solution is to create in Origin "User Defined" fitting, where you can apply EOS.
To do that follow the steps:
2. Analysis -> Fitting -> Nonlinear Curve Fit -> Open Dialog
3. From Category, you need to choose : User Defined -> New Category -> New Function (on the right side of the window)
4. Fill in: Independent Variables: x, Dependent Variables: y, Parameters Names: eo,bo,bop,vo
5. Function: Here you need to insert the BMEOS with the parameters eo, bo, bop, vo (of course you can use whatever names you want; this is just my example) and x, y, then press "OK"
6. To properly fit the function to your graph, you need to supply some starting parameters for eo, bo, bop, vo. They don't have to be exact; just look at the graph and put V0 and E0 close to the minimum. For the bulk modulus and bop it doesn't really matter what numbers you put.
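A complementary trick, independent of the Origin workflow above: the third-order Birch-Murnaghan energy is an exact cubic polynomial in V^(-2/3), so a linear least-squares fit needs no starting guesses at all. A sketch with synthetic data (hypothetical parameter values, arbitrary units):

```python
import numpy as np

# Third-order Birch-Murnaghan energy; E0, V0, B0, B0p are the fit constants
def bm_energy(V, E0, V0, B0, B0p):
    eta = (V0 / V) ** (2.0 / 3.0)
    return E0 + 9.0 * V0 * B0 / 16.0 * ((eta - 1.0) ** 3 * B0p
                                        + (eta - 1.0) ** 2 * (6.0 - 4.0 * eta))

# Synthetic E-V data from known (hypothetical) parameters
E0, V0, B0, B0p = -10.0, 40.0, 0.5, 4.5
V = np.linspace(32.0, 50.0, 12)
E = bm_energy(V, E0, V0, B0, B0p)

# E is an exact cubic in (1/V)^(2/3); center the variable for conditioning,
# then ordinary LINEAR least squares fits it without starting guesses.
Vref = V.mean()
s = (Vref / V) ** (2.0 / 3.0) - 1.0
coeffs = np.polyfit(s, E, 3)
resid = np.max(np.abs(np.polyval(coeffs, s) - E))

# V0 sits at the energy minimum, i.e. at the root of dE/ds in the data range
roots = np.roots(np.polyder(coeffs))
roots = roots.real[np.abs(roots.imag) < 1e-9]
s0 = roots[(roots > s.min()) & (roots < s.max())][0]
V0_fit = Vref / (1.0 + s0) ** 1.5
print("max residual:", resid, " recovered V0:", V0_fit)
```

Evaluating the fitted polynomial at `s0` gives E0, and the recovered parameters can then seed a nonlinear refinement (in Solver or Origin) if desired.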
Question
Several simulation studies show that the MCNP code fails when dealing with detailed physics at the nanometric level (e.g., when modeling B4C nanoparticles in neutron shielding materials).
Does MCNP6 work at nano scales? Is the kerma approximation valid? How about electron cross sections in MCNP at nanometric scales? Can a user use MCNP tallies for nano-dimension particles? If you have access to related references, please share them here.
Question
I would like to study the transport properties of some ions, and I found out that DCV-GCMD is a good way of simulating my system. However, in previous papers I failed to find the simulation package people were using (it might have been an in-house code, but it is not specified).
Does anyone have suggestions on that ?
Hi Pauline,
Hope you're doing well. Regarding the DCV-GCMD method, you can use the LAMMPS code by combining fix gcmc (for insertions) and fix nvt (for the dynamics). Attention must be paid to the insertion region, which has to be far enough from the region where physical properties are calculated so that the fluid dynamics is not altered. I suggest you take a look at my Ph.D. thesis manuscript, where the implementation of the method is explained in detail. You can also find answers and a DCV-GCMD LAMMPS template on this lammps-users forum:
Hope this can help.
Regards,
Question
Does anyone know how to simulate a source with a rectangular cross section without using beam collimation in the GATE Monte Carlo code?
Thanks & Regards
Dear Asra, what is your exact source cross-section definition? Can you explain more? Also, why do you prefer to avoid collimation? Is there any special reason?
Question
I want to simulate MCTS algorithm in MATLAB for travelling salesman problem (TSP), and was wondering if such a source code exists?
Dear all,
Thank you very much for your help.
Kind regards,
Markos
Question
How can one get started with solving the radiative transfer equation using the Monte Carlo technique?
It depends very much on what you want to achieve. If, say, you were studying the radiative transfer (RT) through a layer (of clouds, aerosols) in an atmosphere where the profiles of temperature, humidity, and other radiatively active gases and the cloud and/or aerosol optical properties are not perfectly known/measured, over an area where the surface radiative properties are very variable (say, an ice and snow surface with liquid water ponds on top), one could either 1/ run N times the same RT code with a small change in one of the defining properties (with that small change consistent with the overall probability distribution of the variations for that property (PDF)) then look at how the results actually pan out (to figure, for example, what parameters are really dominant to define the result), or 2/ (more efficient) modify the original RT code to include (for the parameters one is interested in) some random number generators to be used to draw values from the PDFs of the relevant parameters.
Method 2 has been used for RT codes in GCMs in a series of publications by Howard Barker, Robert Pincus and collaborators (and some others) in the last 15 years. If your application is GCM-related, you might want to search for « McICA » (Monte-Carlo Independent Column Approximation).
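For getting started from scratch, the most basic Monte Carlo RT calculation is a photon random walk through a homogeneous layer. The toy sketch below (isotropic scattering only; the optical depth and single-scattering albedo are arbitrary illustration values, not from any real atmosphere) tallies how many photons are transmitted, reflected, or absorbed:

```python
import numpy as np

rng = np.random.default_rng(42)

def slab_monte_carlo(tau_total=1.0, omega=0.9, n_photons=50_000):
    """Photon random walk through a plane-parallel slab with isotropic
    scattering; returns (transmittance, reflectance, absorptance)."""
    counts = {"T": 0, "R": 0, "A": 0}
    for _ in range(n_photons):
        tau, mu = 0.0, 1.0          # optical depth from top; direction cosine
        while True:
            tau += mu * -np.log(rng.random())  # free path to next interaction
            if tau >= tau_total:
                counts["T"] += 1               # escaped out of the bottom
                break
            if tau <= 0.0:
                counts["R"] += 1               # escaped back out of the top
                break
            if rng.random() > omega:           # absorbed with prob. 1 - omega
                counts["A"] += 1
                break
            mu = 2.0 * rng.random() - 1.0      # isotropic scattering
            if mu == 0.0:
                mu = 1e-12                     # avoid a photon stuck sideways
    n = n_photons
    return counts["T"] / n, counts["R"] / n, counts["A"] / n

T, R, A = slab_monte_carlo()
print(f"T={T:.3f}  R={R:.3f}  A={A:.3f}")
```

From here, method 1 above corresponds to calling this with perturbed tau_total/omega values drawn from their PDFs, and method 2 to pushing the random draws inside the transport loop itself.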
Question
I run MC simulations with MCNP6, of the imaging process in CyberKnife radiosurgery system. I want to score the scattered photon fluence in a region simulating my detector that comes from a tube that produces 120kVp.
My question is how to score in a different bin the scattered fluence that comes from different cells of interest.
You can use surface of cell flagging functionality (MCNP 6 manual 3.3.5.12 - 3.3.5.13).
Question
I wrote an MC algorithm that simulates bead-spring models and am in need of verifying the results of my code. Is there any resource in the literature that has energy expectation values (or any other metric) for MC simulation of bead-spring models that I can use for result verification?
I appreciate your answer! I wasn't aware that that could be done for anything but the simplest models. I will look into this! Zhaoxi Sun
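For a purely harmonic (Gaussian) bead-spring chain there is in fact an exact reference value to verify against: by equipartition, each of the 3(N-1) quadratic bond coordinates carries kT/2, so <E> = 1.5(N-1)kT. A minimal Metropolis sketch that checks this (single-bead moves; the full energy is recomputed every move for clarity, whereas a production code would update only the two affected bonds):

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_bead_spring(n_beads=10, k_spring=1.0, kT=1.0, n_sweeps=10_000, step=0.5):
    """Metropolis MC of a 3D harmonic bead-spring chain; returns <E>."""
    pos = np.cumsum(rng.normal(size=(n_beads, 3)), axis=0)

    def energy(p):
        bonds = p[1:] - p[:-1]
        return 0.5 * k_spring * np.sum(bonds ** 2)

    E, samples = energy(pos), []
    for sweep in range(n_sweeps):
        for i in range(n_beads):
            trial = pos.copy()
            trial[i] += rng.uniform(-step, step, 3)
            E_new = energy(trial)           # full recompute, for clarity only
            dE = E_new - E
            if dE <= 0.0 or rng.random() < np.exp(-dE / kT):
                pos, E = trial, E_new
        if sweep > n_sweeps // 4:           # discard equilibration
            samples.append(E)
    return np.mean(samples)

E_mean = mc_bead_spring()
E_exact = 1.5 * (10 - 1) * 1.0              # 3(N-1) modes, kT/2 each
print(E_mean, "vs exact", E_exact)
```

With nonlinear springs (e.g. FENE) the exact value is lost, but the harmonic limit remains a cheap sanity check for the acceptance logic and the bookkeeping.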
Question
I have 7 random variables with their distribution functions. I linked MATLAB and the OpenSees program to generate random variables in MATLAB and get the structural response in OpenSees. To obtain an explicit function between the variables and the structural response using cftool in MATLAB, only two variables can be entered, while I have 7 random variables. Now, who can help me obtain a polynomial function by using the response surface method to perform the Monte Carlo analysis?
Dear, you can use the MATLAB mathematics libraries to solve your polynomial problems.
The Symbolic Math Toolbox is available with any recent MATLAB version; you can easily find it in your MATLAB installation.
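The response-surface idea itself is language-agnostic and is not limited to two variables. A sketch in Python (the structure_response function below is a hypothetical stand-in for the MATLAB/OpenSees call): fit a full quadratic surrogate over the 7 variables by least squares, then run the cheap Monte Carlo on the surrogate instead of the finite-element model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical stand-in for one MATLAB/OpenSees run; replace with real calls.
def structure_response(x):
    return 2.0 + x @ np.arange(1.0, 8.0) + 0.5 * x[0] * x[1] + 0.1 * x[2] ** 2

def quad_features(X):
    """Design matrix for a full quadratic surface: 1, x_i, x_i*x_j (i<=j)."""
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(X.shape[1])]
    cols += [X[:, i] * X[:, j] for i in range(X.shape[1])
             for j in range(i, X.shape[1])]
    return np.column_stack(cols)

# 1) Design of experiments: a few hundred expensive model evaluations
n_train, n_var = 200, 7
X = rng.normal(0.0, 1.0, (n_train, n_var))
y = np.array([structure_response(x) for x in X])

# 2) Fit the response surface by least squares (36 coefficients for 7 vars)
beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)

# 3) Monte Carlo on the cheap surrogate instead of the FE model
X_mc = rng.normal(0.0, 1.0, (100_000, n_var))
y_mc = quad_features(X_mc) @ beta
print("surrogate MC mean:", y_mc.mean())
```

The same least-squares fit can be written in MATLAB with backslash (beta = Phi \ y); the key point is that the design matrix, not cftool, carries the 7 variables.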
Question
Would be grateful so much for a link.
I found some individual publications, but the topic is quite new to me, so I want to find a book or a comprehensive review paper.
The focus should be on how to model this correctly, so I am eager to look at publications both from the end of the previous century (some classical ones) and from the current one.
I am also interested in simulation modelling using Monte Carlo algorithms.
Thank you for attention and possible support.
Check the ff:
1. Reineke (1933)
2. Pretzsch (2009) - Forest Growth and Yield.
Question
There is t-depl as a 2D depletion sequence, and t6-depl, which uses the Monte Carlo KENO code. I am looking to calculate the depletion process the way t-depl does, but t-depl has no geometry description like KENO's.
How is it possible to link them?
Best regards
Dmitry S Oleinik, thanks so much, I have solved the problem. It was a mix-up between the number of tallies and the burn-up cycles.
I was using t6-depl; it is better for the description of my problem.
Best regards
Question
Dear fellow researchers
I would like to run an a-posteriori Monte Carlo study to determine the necessary sample size for my SEM in Mplus with categorical/ordinal data, but I cannot wrap my head around the syntax I need.
I have 13 latent factors with 5 indicators (on average) and a total sample size of n = 450.
Any help is greatly appreciated.
Please note that I cannot use R.
Many thanks
Marcel
I haven't done this myself (with ordinal data). I would assume that you do it like a Monte Carlo study for continuous data, but with the requisite parameters for ordinal data.
Chapter 12 of the Mplus manual in combination with Examples 5.2 and 5.10 might be helpful, as well as this paper:
Muthén, L. K., & Muthén, B. O. (2002). How to use a Monte Carlo study to decide on sample size and determine power. Structural Equation Modeling, 9(4), 599-620.
While, of course, I don't know the purpose of your study, I want to mention that post-hoc power analyses are usually of limited value.
Question
Using ray tracing software to model a luminescent solar concentrator.
"SolTrace" from the National Renewable Energy Laboratory, USA, is well suited for solar ray tracing.
For reference:
Question
We want to study neutron contamination in radiotherapy. Is the GATE Monte Carlo code capable of tracking neutrons separately and measuring the dose deposited by this particular particle type? If yes, could you provide some brief information about this?
Dear professor Tekin
It was very useful. I really appreciate your help.
Question
I would like to simulate energy deposition resulting from the stopping of high-energy particles in a dense plasma, for example the energy deposited when 10 MeV protons are stopped in a hydrogen plasma with a density of 300 g/cc. I think combining a particle-in-cell code with a Monte Carlo code would give the appropriate physics. However, I have not found a code available for this kind of simulation. Does anyone know of appropriate simulation tools for this problem?
Maybe the MCNP code.
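Before reaching for a coupled PIC/Monte Carlo code, a continuous-slowing-down estimate already reproduces the qualitative Bragg-peak behaviour. The sketch below uses a deliberately toy stopping power dE/dx = C·n_e/E (nonrelativistic Bethe-like scaling with the Coulomb logarithm folded into C); the values of C and n_e are illustrative placeholders, not calibrated plasma stopping-power data:

```python
import numpy as np

# Toy stopping power dE/dx = C*n_e/E (nonrelativistic Bethe-like scaling;
# C and n_e are illustrative numbers, NOT calibrated plasma data).
C = 1.0e-25      # MeV^2 cm^2, hypothetical constant absorbing the Coulomb log
n_e = 1.0e26     # electron density, cm^-3

def deposit_profile(E0=10.0, dx=1e-3, E_cut=0.1):
    """March a proton through the plasma; record energy deposited per step."""
    E, x, xs, dEs = E0, 0.0, [], []
    while E > E_cut:
        dE = min(C * n_e / E * dx, E)   # energy lost over this step
        E -= dE
        x += dx
        xs.append(x)
        dEs.append(dE)
    return np.array(xs), np.array(dEs)

xs, dEs = deposit_profile()
# dE/dx ~ 1/E gives the Bragg-peak shape: deposition peaks at end of range
print("range ~", xs[-1], "cm; peak deposition at", xs[np.argmax(dEs)], "cm")
```

A real calculation would replace the toy dE/dx with a plasma stopping-power model (free-electron plus bound contributions, temperature-dependent Coulomb logarithm) and add Monte Carlo straggling around the mean energy loss.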
Question
Number of environmental variables = 5.
Species abundance at each site is used as the dependent variable.
Thank you
Also I'm looking for it. Following up.
Question
1. In principle, a system perturbation can be taken into account by the estimated difference between the values computed in two independent calculations, one with the nominal system and the second with the perturbed system.
2. In practice, for small perturbations, the computation time necessary to obtain a sufficiently low statistical uncertainty on the difference of these values is prohibitive. Also, the number of perturbations required is often huge, so it is difficult to resort to independent calculations.
3. Which method is sufficient to calculate the perturbation dependently (i.e., with correlated calculations of the two systems mentioned before)?
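Point 3 is usually answered with correlated sampling: run the nominal and perturbed systems on the same random number stream, so the statistical noise largely cancels in the difference. A toy demonstration (transmission through a unit slab with a 1% cross-section perturbation; the tally is deliberately simplistic):

```python
import numpy as np

rng = np.random.default_rng(3)

def tally(sigma, u):
    """Toy transport tally: fraction of particles whose sampled free path
    -ln(u)/sigma exceeds a unit slab thickness (i.e. transmission)."""
    return np.mean(-np.log(u) / sigma > 1.0)

n, reps = 20_000, 200
d_indep = np.empty(reps)
d_corr = np.empty(reps)
for k in range(reps):
    u1, u2 = rng.random(n), rng.random(n)
    # independent calculations: fresh random numbers for each system
    d_indep[k] = tally(1.0, u1) - tally(1.01, u2)
    # correlated calculations: the SAME random numbers for both systems
    d_corr[k] = tally(1.0, u1) - tally(1.01, u1)

exact = np.exp(-1.0) - np.exp(-1.01)
print("exact difference:", exact)
print("std (independent):", d_indep.std(), " std (correlated):", d_corr.std())
```

The correlated estimator has a dramatically smaller variance for the same number of histories, which is exactly why production codes implement perturbation tallies (e.g. differential-operator or correlated-sampling options) rather than two independent runs.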
Question
I want to carry out Monte Carlo simulations by varying certain parameters in an equation. I have understood from my literature survey that I have to use Monte Carlo simulation. Can anyone please let me know which software packages have this provision?
But I want software like TracePro...
However, I cannot obtain it with a license....
Best regards
Question
I have tested the performance of 9 strategies of asset allocation through a Monte Carlo simulation and I have calculated for each of them 10 risk/return metrics: average annualized return, standard deviation, skewness, kurtosis, downside deviation, Sharpe ratio, Sortino ratio, Value at Risk, Information ratio (with a Buy-and-hold strategy serving as a benchmark) and Total return (5-year investment horizon). The study in its base case is a replication of the work of Cesari and Cremonini (2003), please see link below. So how can I test if the differences in the performance of the 9 strategies (according to the above-mentioned 10 risk metrics) are statistically significant?
Any help would be highly appreciated.
Best regards,
Kaloyan Kazakov
In one simulation, you get 90 different values (9 strategies and 10 performance metrics). You have to repeat the simulation n times, preferably with different scenarios according to their probabilities. Then apply ANOVA or Tukey Test for a particular performance metric across the 9 possible strategies (and repeat for the other metrics as well). Significance means the metrics are significantly different.
However, testing significance with simulated data has some issues. By simulating, you force the data to follow your instructions. If this set of instructions is not true (e.g. wrong assumptions or model uncertainty or the world economy is changing its structure), then the result of the test is invalid as well.
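The repeated-simulation-then-ANOVA procedure above can be sketched as follows. Three hypothetical strategies with made-up return distributions stand in for the nine, and the pairwise follow-up uses Bonferroni-corrected t-tests in place of Tukey's HSD:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Hypothetical Monte Carlo output: annualized returns of each strategy over
# n_runs simulation repetitions (three strategies stand in for the nine).
n_runs = 500
returns = {
    "buy_and_hold": rng.normal(0.060, 0.02, n_runs),
    "cppi":         rng.normal(0.055, 0.02, n_runs),
    "constant_mix": rng.normal(0.070, 0.02, n_runs),
}

# One-way ANOVA for one metric across strategies
F, p = stats.f_oneway(*returns.values())
print(f"F = {F:.1f}, p = {p:.3g}")

# Pairwise follow-up, Bonferroni-corrected (in place of Tukey's HSD)
names = list(returns)
n_pairs = len(names) * (len(names) - 1) // 2
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        t, p_ij = stats.ttest_ind(returns[names[i]], returns[names[j]])
        print(names[i], "vs", names[j], "adj. p =", min(1.0, p_ij * n_pairs))
```

The same loop would be repeated for each of the ten risk/return metrics; for non-normal metrics such as VaR, a Kruskal-Wallis test (scipy.stats.kruskal) is the usual rank-based substitute for the ANOVA step.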
Question
I guess we need the optical part of GATE. But how do I modify / set the physics to simulate this effect (because geant4 seems to have it already)?
Question
I am hoping to do a Monte Carlo in my Life Cycle Assessment. One parameter I need more data on is the regeneration efficiency for ion exchange (how much of a regenerant chemical I need to put in to get out what the ion exchange is capturing), and was hoping to find more data. I am looking for data specifically on a macroporous weak acid cation exchange resin. Does anyone have information on data sources to look at, or even better, if there is another source that has already determined the distribution?
I didn't get the whole picture of your topic. However, if you want to release the analytes captured by a weak acid cation exchange resin, you can add 0.1-0.2 N HCl to reduce the pH to about 1.0 and release them easily.
Question
Knowing the kVp and mAs for an x-ray machine, I need to model it as a source in my Monte Carlo input model. I know how to use these inputs to calculate the average energy of the spectrum, but hope someone can help me determine a) how I can calculate the exact spectrum (which parameters would I even use for this?) and b) how to model this in MCNP5?
I have a attached a screenshot of the guide I am using. This function seems ideal but the explanation is too brief for me to understand what inputs it takes.
There are various ways to define an X-ray spectrum. There is a small program named SpekCalc; I mostly obtain the spectrum data from this code. Afterwards, I define it in the SDEF command, considering the spectral distribution of the source.
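For intuition, what an SDEF spectral distribution (SI/SP histogram cards) does internally is inverse-CDF sampling of the tabulated spectrum. A sketch with a made-up 120 kVp-like fluence table (replace the weights with the actual exported spectrum):

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical tabulated spectrum for a 120 kVp tube: bin centres (keV) and
# relative fluence (in practice, exported from a spectrum generator).
energies = np.arange(20.0, 121.0, 5.0)
weights = np.exp(-((energies - 55.0) / 25.0) ** 2)  # toy spectral shape

pdf = weights / weights.sum()
cdf = np.cumsum(pdf)

def sample_energies(n):
    """Inverse-CDF sampling of the discrete bins, as an SI/SP histogram does."""
    return energies[np.searchsorted(cdf, rng.random(n))]

E = sample_energies(100_000)
print("sampled mean energy:", E.mean(), "keV")
```

Checking that the sampled mean matches the table's weighted mean is a quick way to validate a spectrum file before pasting the bins into the MCNP input.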
Question
Regarding MCTS, I have searched and found that MCTS by itself does not give very successful results, but when we combine it with deep reinforcement learning we get excellent results, as Google did in AlphaGo Zero. My question is: how do we combine MCTS with deep reinforcement learning as in AlphaGo Zero?
This is a great curriculum to understand & master all the concepts for AlphaGo Zero.
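As background, plain MCTS is short enough to sketch end-to-end. AlphaGo Zero's combination then replaces the random rollout below with a learned value network and biases the selection step with policy-network priors (the PUCT formula). A self-contained toy on the take-1-2-or-3 Nim game:

```python
import math, random

random.seed(0)

# Toy game: Nim with takes of 1-3; whoever takes the last stone wins.
def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

class Node:
    def __init__(self, stones, parent=None, move=None):
        self.stones, self.parent, self.move = stones, parent, move
        self.children = []
        self.untried = legal_moves(stones)
        self.visits, self.wins = 0, 0.0  # wins: from the view of the player who moved into this node

def ucb(node, c=1.4):
    return (node.wins / node.visits
            + c * math.sqrt(math.log(node.parent.visits) / node.visits))

def mcts(root_stones, n_iter=5000):
    root = Node(root_stones)
    for _ in range(n_iter):
        node = root
        # 1) selection: descend through fully expanded nodes by UCB
        while not node.untried and node.children:
            node = max(node.children, key=ucb)
        # 2) expansion
        if node.untried:
            m = node.untried.pop()
            child = Node(node.stones - m, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3) random rollout (this is what AlphaGo Zero replaces with a value net)
        stones, mover_to_act = node.stones, False
        result = 1.0  # stones == 0 here: node's mover took the last stone
        while stones > 0:
            stones -= random.choice(legal_moves(stones))
            result = 1.0 if mover_to_act else 0.0
            mover_to_act = not mover_to_act
        # 4) backpropagation, flipping perspective at each level
        while node is not None:
            node.visits += 1
            node.wins += result
            result = 1.0 - result
            node = node.parent
    return max(root.children, key=lambda n: n.visits).move

# Perfect play leaves the opponent a multiple of 4; from 9 stones, take 1.
print(mcts(9))
```

In AlphaGo Zero the visit counts at the root also become the training target for the policy network, which is the "self-play" feedback loop that couples the two components.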
Question
Hello, I am trying to determine how to run a power analysis for a MANCOVA. I typically use G*Power, but I did not see a MANCOVA option in that program. Any direction/guidance on running a power analysis in SAS, R, or SPSS would be helpful. I do not have a background in bootstrapping or Monte Carlo, but I heard that may be an option; I am just not sure about the code.
Khaled Almaz, is there any way of doing a prospective Power Analysis (to calculate required sample size) instead of a retrospective Power Analysis (to work out how well powered an already collected data-set is)?
I am currently using G* Power's MANOVA Repeated Measures, Within-Between model as a guide, but I think that would still not give an appropriate suggested sample size.
If you know of another way to do so, that would be incredibly helpful
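A prospective power analysis can always be done by brute-force simulation when no canned option exists: pick a candidate sample size, generate many datasets under the effect you expect, and count how often the test rejects. A sketch for a two-group, one-covariate ANCOVA-style design (the effect size, covariate weight, and noise level are planning placeholders; a MANCOVA version would simulate several correlated outcomes and apply a multivariate test instead):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def ancova_power(n_per_group=30, effect=0.5, n_sims=2000, alpha=0.05):
    """Monte Carlo power for the group effect in a 2-group, 1-covariate
    ANCOVA, via an F-test comparing full and reduced linear models."""
    g = np.repeat([0.0, 1.0], n_per_group)
    n = 2 * n_per_group
    hits = 0
    for _ in range(n_sims):
        cov = rng.normal(0.0, 1.0, n)                       # covariate
        y = effect * g + 0.5 * cov + rng.normal(0.0, 1.0, n)
        X_full = np.column_stack([np.ones(n), g, cov])
        X_red = np.column_stack([np.ones(n), cov])
        rss_f = np.sum((y - X_full @ np.linalg.lstsq(X_full, y, rcond=None)[0]) ** 2)
        rss_r = np.sum((y - X_red @ np.linalg.lstsq(X_red, y, rcond=None)[0]) ** 2)
        F = (rss_r - rss_f) / (rss_f / (n - 3))             # 1 df for group
        hits += stats.f.sf(F, 1, n - 3) < alpha
    return hits / n_sims

power = ancova_power()
print("estimated power:", power)  # raise n_per_group until this reaches ~0.80
```

Because the required sample size is just "the smallest n where simulated power crosses 0.80", this approach answers the prospective question directly, unlike retrospective power computed from already collected data.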
Question
- How effectively are Monte-Carlo methodology and Grid-Computing technology used in Finance for Corporate Performance Measurement?
- How extensively are these used in different business sectors/industries for Corporate Performance purposes?
- In terms of costs and benefits, how can the trade-off between short-term and long-term of operating Monte-Carlo and Grid-Computing be evaluated?
- Grid Computing and Cloud Computing are not the same, although they are mostly used synonymously in everyday life. Grid Computing has the ambitious vision to "connect and share" heterogeneous hardware for specific high-performance computing (e.g. running complex and resource-intensive calculations with distributed CPUs, RAM, etc.), while Cloud Computing has until now mostly focused on services used as an interconnection of storage space and telecommunication: http://en.wikipedia.org/wiki/Grid_computing.
- There are several scientific Grid Computing platforms, like e.g. BOINC; but for the Finance world this seems to be quite in an initial phase ...
A grid computing system is a widely distributed set of resources working toward a common goal. It is a sibling of cloud computing and of the supercomputer. We can think of the grid as a distributed system connected through a single network. This type of computing works with large volumes of files. Basically, it is a cluster-type system, so people also call it cluster computing.
Grid computers tend to be more geographically dispersed and heterogeneous by nature. Grid networks also come in various types: a single grid is like a dedicated connection, whereas a common grid performs multiple tasks.
The size of a grid is large, so grid computing resembles supercomputing. It consists of many networks, computers, and middleware. A grid computer is dedicated to some specific function on a large volume of data. In the grid process, each task is divided into various processes, which all start executing simultaneously on different computers. As a result, execution takes only a few seconds, and you enjoy the flavor of supercomputing.
Question
What are other alternatives to Monte Carlo sampling?
Purely random sampling of random numbers from a distribution function can lead to bunching around a modal value. Especially large or small values could then be underrepresented in a small number of iterations. With Latin Hypercube Sampling, the range between 0 and 1 (in which the random numbers are generated) is subdivided into equal areas (corresponding to the number of iterations).
As a result, a better representation of the desired distribution can be achieved with a smaller number of iterations.
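A basic Latin Hypercube sampler is only a few lines; the sketch below draws one point per stratum and shuffles the strata independently per dimension (scipy.stats.qmc.LatinHypercube offers a production-grade version):

```python
import numpy as np

rng = np.random.default_rng(9)

def latin_hypercube(n_samples, n_dims):
    """Basic LHS on [0,1): one point per stratum in each dimension,
    with strata shuffled independently per dimension."""
    u = (np.arange(n_samples)[:, None]
         + rng.random((n_samples, n_dims))) / n_samples
    for d in range(n_dims):
        rng.shuffle(u[:, d])
    return u

lhs = latin_hypercube(10, 2)
# Every one of the 10 strata in each dimension holds exactly one point:
print(np.sort((lhs[:, 0] * 10).astype(int)))
```

To target an arbitrary marginal distribution, push these stratified uniforms through its inverse CDF (e.g. scipy.stats.norm.ppf for a normal variable).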
Question
Hi, I've carried out Canonical Correspondence Analysis to test the correlation of microhabitat variables with herpetofauna species abundance. May I know how do I test the significance of the result? I wanted to try Monte Carlo analysis but I can't find a guide on how to do it. Please help! Thanks in advance.
The objective of the study is not specific and clear.
Canonical correlation analysis may be suitable.
Question
I want to perform a Monte Carlo simulation in Crystal Ball application, by running the inputs in a matlab script and getting outputs from it to be pasted in forecast analysis cells in crystal ball.
Please learn the details of the Monte Carlo applications in Crystal Ball, and set up your equation, with its constants and variables, in MATLAB.
The MATLAB results can then be used as a data source for the Monte Carlo analysis in Crystal Ball, from which you get the final result.
Question
I have data for one year and would like to create a model based on the mean wind speed at a given region and its standard deviation. However, I am confused about where to start. Does anyone have clear information on how to go about it in MATLAB?
Dear James Maina,
regards
Hilal
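A common starting point is to fit a Weibull distribution (the usual model for wind-speed data) to the reported mean and standard deviation by moment matching, and then sample synthetic wind speeds from it. A sketch in Python — the same logic ports directly to MATLAB (wblrnd, fzero); the 6 m/s mean and 2.5 m/s standard deviation are placeholders for your own annual statistics:

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

rng = np.random.default_rng(4)

def weibull_from_mean_std(mean, std):
    """Recover Weibull shape k and scale c from mean and std via
    mean = c*Gamma(1+1/k),  var = c^2*(Gamma(1+2/k) - Gamma(1+1/k)^2)."""
    cv2 = (std / mean) ** 2
    f = lambda k: gamma(1 + 2 / k) / gamma(1 + 1 / k) ** 2 - 1 - cv2
    k = brentq(f, 0.3, 20.0)          # shape: root of the moment equation
    c = mean / gamma(1 + 1 / k)       # scale
    return k, c

# Placeholder site statistics: mean 6 m/s, std 2.5 m/s
k, c = weibull_from_mean_std(6.0, 2.5)
speeds = c * rng.weibull(k, 100_000)  # Monte Carlo wind-speed series
print(f"k={k:.2f}, c={c:.2f}, simulated mean={speeds.mean():.2f} m/s")
```

If hourly or diurnal structure matters for your application, this i.i.d. sampling should be replaced or augmented with a time-series model (e.g. an autoregressive process on transformed speeds), since moment matching alone ignores autocorrelation.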
Question
Hi all,
I'm trying to simulate the photon decay spectrum of Eu155 in Geant4. I'm not using the decay file provided by Geant4 because the energies and intensities listed are incorrect. I've changed/added values to match trusted literature, ensuring that the syntax is the same and the intensity column sums to 100.
The resulting spectrum, however, is not right: there are too few photons per decay for most energies. I'm confident I'm collecting all the photons. I've attached my decay file and the resulting plot. The picture also has a second plot of what the spectrum should approximately look like, for comparison.
I want to know what the intensity column is actually doing and how to adjust it such that the photons per decay match the values I've placed in the intensity column (noting that it's as a percentage there).
Any help is greatly appreciated
Thanks,
- Giuseppe
Hi Giuseppe,
According to my knowledge, the intensities of the gamma emission lines from Eu155 do not have to sum to 100; the intensity could mean relative intensity or absolute intensity.
On the one hand, relative intensity means that 100 is assigned to the most intense gamma from a given initial level, and the relative intensities of the other gammas are referred to that. This is not usually used.
On the other hand, absolute intensity is "like" the probability of emission. Look at the Co60 spectrum: the two most intense gamma lines both have ~99.9%. It means that every time Co60 decays, a photon has a probability of 99.9826% of having 1332.492 keV and 99.85% of having 1173 keV, and the probability of a photon having 1332 or 1173 keV is almost equal, so you will see two peaks of the same height.
Using the utility GPS (general particle source) in Geant4 offers you great control over the particles you are interested in. You can use something like:
/gps/particle ion
/gps/ion <Z><A> #atomic number and atomic mass number
Best regards,
Jose A.
Question
Is there any program or code that employs Monte Carlo simulation to generate initial adsorption configurations? I know Materials Studio's Adsorption Locator can do this, but I do not have Materials Studio; also, it is not suitable for my case because I have hundreds of molecules. Can anyone suggest a program or Python code to do this? A tutorial on how to do it in Python would also be welcome.
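For orientation, the core step such tools perform — random insertion with an overlap check under periodic boundary conditions — is easy to prototype. The sketch below reduces each molecule to a single site and uses arbitrary box and distance values (a real setup would insert whole rigid molecules with random orientations and Boltzmann-weight the acceptance):

```python
import numpy as np

rng = np.random.default_rng(6)

def random_insertions(box, n_molecules, min_dist=3.0, max_tries=10_000):
    """Generate initial adsorbate positions by random insertion with a
    hard-sphere overlap check; molecules are reduced to single sites here."""
    placed, tries = [], 0
    while len(placed) < n_molecules and tries < max_tries:
        tries += 1
        trial = rng.random(3) * box
        ok = True
        for p in placed:
            d = trial - p
            d -= box * np.round(d / box)   # minimum-image convention (PBC)
            if np.linalg.norm(d) < min_dist:
                ok = False
                break
        if ok:
            placed.append(trial)
    return np.array(placed)

box = np.array([40.0, 40.0, 40.0])   # hypothetical cubic cell, Angstrom
coords = random_insertions(box, 100)
print(len(coords), "molecules placed")
```

For production work on frameworks, open-source GCMC codes that perform exactly this kind of insertion against a host structure are worth checking before writing your own.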
Question
This one has had us stumped for the entire day. We're getting "fatal error. detector no. 1 of tally 5 is not in any cell." What could be causing this? In the manual and all the examples we've seen, nobody has parameters specifying the cell location for the tally. It is a very simple problem: 3 concentric layered cylinders on the Z axis, with a point Cf-252 source in the center and a ring detector 1 cm outside the outer surface. Below is our simple input file, if needed.
c MCNP TEST
c CELL CARDS
1 1 -0.96 -100 -400 500 IMP:N=1.0 $ For radius 0-10 (under plane 400, above plane 500)
2 2 -11.34 100 -200 -400 500 IMP:N=1.0 $ For radius 10-20
3 1 -0.96 200 -300 -400 500 IMP:N=1.0 $ For radius 20-95
4 0 600 IMP:N=0 $ Terminate outside of kill-sphere

c SURFACE CARDS
100 CZ 10
200 CZ 20
300 CZ 95
400 PZ 500
500 PZ -500 $ Two planes to cut infinitely tall cylinders
600 SO 700 $ Kill sphere around entire problem

c DATA CARDS
MODE N
TOTNU
NPS 10000
SDEF POS=0 0 0 CEL=1 PAR=1 ERG=D1
SP1 -3 1.18000 1.03419
c MATERIAL SPECIFICATION
M1 NLIB=60c $ POLYETHYLENE CH2
1001 2.0 $ Hydrogen
6000 1.0 $ Carbon
M2 NLIB=60c $ Lead
82207 1.0
c TALLY SPECIFICATIONS
F5Z:N 0 96 0
E5 1 18
DF5 IU=1 IC=10 FAC=6.44e7
And a snip of the output file where the error appears:

c TALLY SPECIFICATIONS
F5Z:N 0 96 0
E5 1 18
DF5 IU=1 IC=10 FAC=6.44e7
fatal error. detector no. 1 of tally 5 is not in any cell.
ring detector specifications:
detector   a0            r             axis   r0
1          0.00000E+00   9.60000E+01   z      0.00000E+00
This type of output representation cannot help in solving the problem. Please supply your whole input file in a uniform shape; here your output is a bit complicated. As it stands, it is very hard to diagnose your problem.
Question
We are investigating preferential associations and, more precisely, whether some pairs of individuals interact more with each other in different behavioural contexts. The interaction dataset does not contain binary data, which prevents us from using the Monte Carlo test. We are now trying to find an alternative solution and are thinking of adapting/modifying the MC test in order to use the non-binary data.
The fact that the data aren't binary isn't an obstacle for using the Monte Carlo method to reconstruct their distribution.
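Indeed, a Monte Carlo permutation test works directly on counts or rates: shuffle the group labels many times and compare the observed statistic to the resulting null distribution. A sketch with hypothetical interaction counts (one focal pair versus all other pairs; replace the arrays and the test statistic with whatever fits your association question):

```python
import numpy as np

rng = np.random.default_rng(8)

def permutation_test(x, y, n_perm=10_000):
    """Monte Carlo permutation test on non-binary data (e.g. interaction
    counts): shuffle group labels, compare shuffled mean differences to
    the observed one (two-sided)."""
    observed = x.mean() - y.mean()
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = pooled[:len(x)].mean() - pooled[len(x):].mean()
        count += abs(diff) >= abs(observed)
    return (count + 1) / (n_perm + 1)   # add-one correction for a valid p

# Hypothetical interaction counts: focal pair vs. all other pairs
pair_A = np.array([12, 15, 9, 14, 11], dtype=float)
others = np.array([4, 6, 3, 7, 5, 4, 6, 2], dtype=float)
p_val = permutation_test(pair_A, others)
print("p =", p_val)
```

For dyadic association data specifically, the shuffling step is usually constrained (e.g. permuting individuals within groups or sampling periods) so that the null model preserves gregariousness and observation effort; the counting logic stays the same.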
Question
To all those statisticians out there! I'm trying to compare the differential expression of 41 proteins found within organisms (same species) from 5 sites. Each site contains 5 samples. The software I'm working with is PRIMER v6. Since the transformed protein values I have are analogous to the abundances of various taxa, I chose to go with the Bray-Curtis similarity matrix. Next I looked at the MDS with the sites as the factor; there were several noticeable differences between sites. PERMANOVA (pairwise between sites) was my next step, in which I got significant differences between various pairs of sites. Since the number of samples is rather small (5) and there are only 126 unique permutations, I also tested with Monte Carlo, which pointed out one marginal P-value that was significant in PERMANOVA but not in the Monte Carlo test. Next I used pairwise PERMDISP to check for a dispersion effect. Another analysis I did was ANOSIM, in order to see the strength of the differences (dissimilarities) between pairs; the R statistic ranges between 0.3 and 1. SIMPER was used to see which proteins contributed most to the observed differences between sites. Here comes one thing (of many) that I'm not sure about: P(perm) of the PERMDISP is 0.011 (global?), but I'm looking at the pairwise tests, where some are significant and some are not. In this case, does it matter that the global p-value is <0.05?
Any comments on this procedure would be highly appreciated.
Cheers, Zafrir.
#Thiago Goncalves-Sousa - Since Zafrir has already done PERMANOVA, he doesn't need the R version you link to. He says he uses ANOSIM to see the strength of the differences. You can do this with ANOSIM, but not with PERMANOVA/adonis, so this is a sensible thing to do. He uses the semi-parametric framework for testing, which is what you are saying, so your comment is unhelpful. Do try to think before expressing an opinion. There is no reason for 'avoiding ANOSIM' because it doesn't discriminate between location and dispersion: that is not the hypothesis it is testing, so why should it?
Question
I have read in the literature that secondary electrons (SEs) only escape from the top few nanometres (<10nm) of a sample under SEM. If this is the case, why am I able to see the microstructure of my sample even when I have a 20 nm carbon layer on top? How can the SEs escape from the sample surface and then through the carbon layer?
We have two types of secondary electrons (SE):
1. SEs created by the interaction of the incoming electrons of the focused beam. They are all created at (or very near) the surface, reflect the topography of the surface with good resolution, and bear no information about the inner structure of the specimen.
2. SEs created by outgoing backscattered electrons as they escape the specimen. These SEs are also created near the surface, but their number is proportional to the backscattered electrons, which carry information about deeper regions. This signal has much worse resolution (about the same as the BSE signal).
The image we see is created by the superposition of these two signals, and the contribution of the first one is much stronger, especially at lower accelerating voltage.
You can see some pictures here:
Question