Probabilistic Risk Analysis - Science topic
Questions related to Probabilistic Risk Analysis
Can the DSHA spectrum be lower than the PSHA spectra?
I came across a report with the attached graph showing that the DSHA curves from each source considered in the PSHA calculations lie below the spectral values of the PSHA curves for the 1,000- and 2,500-year return periods.
My understanding was that since DSHA is deterministic, it represents the worst-case scenario, and the spectral acceleration obtained with this method should therefore be greater than that from PSHA.
Are disincentives the arche? How? Why?
My answer: disincentives are very probably the arche. How? Any entity is guided more by disincentives than by incentives. Why? Disincentives may be the arche because they assume the least while following the most evidence. Two other candidate explanations for the arche may be risks and/or vibrations.
Source for vibrations:
Thesis The Arche May be Vibrations
Source for risks:
I want to use this attenuation relationship (suitable for central Italy) in a probabilistic analysis; therefore, it must treat the soil type as a random variable with a certain probability distribution.
Moreover, if there is any formula or methodology in which the soil type from the epicenter to the target point is taken into account when deriving an attenuation relationship, I would appreciate it if you could introduce it.
Many thanks in advance.
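For illustration, a minimal sketch of one way to treat soil type as a categorical random variable inside a Monte Carlo evaluation; the functional form, class probabilities and site coefficients below are invented placeholders, not the actual central Italy relationship:
# Minimal sketch: soil class as a categorical random variable in a
# Monte Carlo evaluation of a generic attenuation relationship.
# All coefficients below are illustrative only, NOT the real GMPE.
set.seed(1)
n_sim <- 10000
M <- 6.0                                  # magnitude (fixed here for simplicity)
R <- 20                                   # epicentral distance in km
soil_classes <- c("rock", "stiff", "soft")
soil_probs   <- c(0.5, 0.3, 0.2)          # assumed class probabilities
site_term    <- c(rock = 0.0, stiff = 0.15, soft = 0.30)  # assumed site coefficients
# sample a soil class for each realisation
soil  <- sample(soil_classes, n_sim, replace = TRUE, prob = soil_probs)
# generic form: ln(PGA) = a + b*M - c*ln(R) + site term + aleatory variability
eps   <- rnorm(n_sim, mean = 0, sd = 0.5)
lnPGA <- -3.5 + 0.8 * M - 1.0 * log(R) + site_term[soil] + eps
quantile(exp(lnPGA), c(0.5, 0.84, 0.95))  # PGA percentiles over soil + aleatory variability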
Could any expert try to examine our novel approach for multi-objective optimization?
The brand-new approach is entitled "Probability-based multi-objective optimization for material selection" and was published by Springer, available at https://link.springer.com/book/9789811933509,
DOI: 10.1007/978-981-19-3351-6.
I have 6 variables with mean, standard deviation, CoV, minimum and maximum. Please find the attached Excel file.
I have a study that found an association between exposure to tricyclic antidepressants and the risk of preeclampsia. The number of women who were exposed and had the outcome (i.e. preeclampsia) was small: 210 exposed women, 10 of whom developed late-onset preeclampsia. Generalized linear models with a binomial distribution and log link function were used to calculate the relative risk (SPSS software). The reviewer asked us to "report the model goodness of fit criteria (to ensure correct specification of the model)".
How should I reply to the reviewer? Our study is an exploratory study that suggested an association; we are not building a model or predicting anything. Besides, the number of exposed cases was too small to predict anything. Thank you so much.
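For reference, a minimal sketch in R (the model itself was fitted in SPSS; the data frame 'dat' and variable names 'pe' and 'tca' below are hypothetical) of the goodness-of-fit quantities such a reviewer typically expects for a log-binomial GLM, namely the residual deviance against its degrees of freedom and the AIC:
# Minimal sketch: log-binomial GLM and basic goodness-of-fit criteria.
# 'dat' is a hypothetical data frame with binary outcome 'pe' and exposure 'tca'.
fit <- glm(pe ~ tca, family = binomial(link = "log"), data = dat)
summary(fit)                       # coefficients; exp(coef) gives relative risks
exp(coef(fit))
# goodness-of-fit criteria commonly requested by reviewers:
deviance(fit)                      # residual deviance
df.residual(fit)                   # residual degrees of freedom
deviance(fit) / df.residual(fit)   # ratio near 1 suggests no gross overdispersion
AIC(fit)                           # Akaike information criterion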
Dear researchers
The provisions of ASCE 7-10 state that the new Next Generation Attenuation Relationships (NNGAR) have been used in the Probabilistic Seismic Hazard Analysis (PSHA) process to prepare the seismic hazard maps provided by the United States Geological Survey (USGS).
Now, I want to know: what are these new next generation attenuation relationships, and how do they differ from other typical attenuation relationships such as Campbell, Douglas, Godrati, BJF, etc.?
When calculating a budget or a risk reserve, a simulation or estimation is performed.
Sometimes the Monte Carlo simulation is used.
It seems that each administration and each company uses different confidence percentiles when summarising the simulations in order to take a final decision.
Commonly, the 70th, 75th or 80th percentile is used. The American administration uses 60% for civil works projects...
My question is: is there any recommendation or usual approach for choosing a percentile?
Is there any standard or normalized confidence percentile to use?
I expected to find such an answer from AACE International or the International Cost Estimating and Analysis Association, but I did not.
Thank you for sharing your knowledge.
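For concreteness, a minimal sketch (with three invented lognormal cost items) of how the chosen confidence percentile translates into the contingency reserve added on top of the base estimate:
# Minimal sketch: Monte Carlo simulation of a total project cost and the
# reserve implied by different confidence percentiles. The three cost
# items and their lognormal parameters are invented for illustration.
set.seed(42)
n_sim <- 100000
item1 <- rlnorm(n_sim, meanlog = log(100), sdlog = 0.15)   # e.g. earthworks
item2 <- rlnorm(n_sim, meanlog = log(250), sdlog = 0.25)   # e.g. structures
item3 <- rlnorm(n_sim, meanlog = log(80),  sdlog = 0.40)   # e.g. risk events
total <- item1 + item2 + item3
base_estimate <- mean(total)
percentiles   <- quantile(total, c(0.50, 0.60, 0.70, 0.75, 0.80))
reserve       <- percentiles - base_estimate
round(rbind(budget = percentiles, reserve = reserve), 1)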
I am interested in doing a Quantitative Risk Assessment for a building, but historical data on past fire occurrences for that building is not available. What value should I take? I am unable to find it in the literature either. Please suggest.
Hello,
I am not a math major, and I need to prove mathematically, along with some results, how the probability of detecting errors improves with the scanning frequency. Can anyone please help or share some literature about this?
For better understanding:
I want to do something similar to how scanning a bar code works (the probability of reading a bar code on the first attempt, and how it improves with the frequency of scanning). A bar code is often missed on the first scan but will eventually be read when you increase the number of scanning attempts (moving the same bar code through the scanner again and again). So, in the context of this example only, I want to show the probability of a true positive and a true negative on the first attempt, and how that probability improves with more scanning attempts on the same product.
Thank you!!!
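If the scans can be treated as independent attempts with the same per-scan success probability p, the probability of at least one successful read in n attempts is 1 - (1 - p)^n, which increases towards 1 with n. A minimal sketch (p is an assumed value, not a measured one):
# Minimal sketch: probability of detecting an error (reading the bar code)
# at least once in n independent scan attempts; p is an assumed per-scan
# success probability.
p <- 0.6
n <- 1:10
p_detect <- 1 - (1 - p)^n
round(data.frame(attempts = n, p_detect = p_detect), 4)
plot(n, p_detect, type = "b", xlab = "number of scan attempts",
     ylab = "P(at least one successful read)", ylim = c(0, 1))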
I wish to know whether Failure Modes and Effects Analysis (FMEA) is considered a probabilistic or a deterministic method of risk assessment.
Hello everyone,
this is the second draft of my question; I'll keep refining it until it becomes readable, coherent and gets to the point. Thanks for the entries and the suggestions already offered. This is part of my Ph.D. studies, dealing with remote sensing techniques and numerical modelling of deforming slopes. The question came up once I completed a run of simulations using a combination of 2D and 3D trajectory analysis software (Rocfall and Rockyfor3D, and I'm planning to add RocPro3D to the recipe as well).
In a Ritchie video (from Ritchie 1963, see the attached image for reference; I do actually love it), on the CD that comes with the book ROCKFALL: Characterization and Control (TRB), he explains how angular momentum, and increased rotational velocity, is one of the most important factors controlling the run-out of falling blocks: if a rock stays close to the slope and starts to roll faster and faster, it is very likely to end up farther from the bottom of the slope, even compared with other geometrical/physical properties. He also mentions how falling rocks tend to rotate perpendicular to their major axis, which is a minor issue for equidimensional blocks (spheres, cubes) but can be fundamental for elongated blocks (e.g. fragments of columnar basalt).
The real-case scenario I'm testing the models with is a relatively small rockfall. Its vertical drop is about 15 m in blocky/columnar weathered granite; the transition zone rests at approximately 45 degrees and is covered in medium-sized blocks (10 cm to 1 m across); the deposition zone is about 25 m away from the vertical wall, confined by a 3 m high crushed-rock embankment. The energy line angle for this event is extremely high (around 80 degrees), because it is constrained by the rock trap. I'll add some maps, maybe some screenshots, hiding some sensitive information.
In the simulations I have run (in ecorisQ's Rockyfor3D) it looks like the column-like boulders (having a very evident major axis: the base is 0.4 m x 0.8 m, while the height is 1.8 m) travel farther than any other class of rocks (I have 3 classes: small spheres 50 cm in diameter, large cubes 1 m per side, and column-like), even those larger in dimension and volume/mass but with all 3 axes of comparable length. You can observe the results in the maps attached to the question. Img02 was computed with cubic blocks, Img03 with elongated blocks.
Upon investigation in GIS, the pixels farthest from the bottom of the slope, the ones that overtopped the rock trap, show a value of roughly 0.05 (%). Following some considerations in the ecorisQ manual, they should be treated as outliers and considered practically tolerable.
My question is: how should I interpret this effect? Is it due to the rigid-body approach? If everything else stays the same, mass should be the primary factor controlling the horizontal travel distance, right? Why do I find smaller blocks travelling farther? It might be a negligible difference given the extremely low likelihood of those blocks getting there, but does it tell me something I'm not getting about how the numerical model works?
Is there a way to visualise angular momentum/rotational velocity in that software? And, most importantly, is the way the problem has been formulated valid?
I really appreciate any help and any ideas you can share, and I'm very grateful for the time you will spend on my problem. I'll probably keep adding details as they are needed. Thanks again.
Kind Regards,
Carlo Robiati, PhD student at Camborne School of Mines, UK



Excuse my questions presented as statements. I actually mean that I have an idea but want others' thoughts.
I have a strong argument that verifiability does not carry an "axiomatic" value in science, but that it is there to reduce uncertainty (equivalently: increase certainty). When we extrapolate too far, we cannot be that certain of our theory. How do we reduce the uncertainty? Observation.
Bedford and Cooke: "In practical scientific and engineering contexts, certainty is achieved through observation, and uncertainty is that which is removed by observation. Hence in these contexts uncertainty is concerned with the results of possible observations."
Agree? Comments?
Ref: Bedford, Tim; Cooke, Roger. Probabilistic Risk Analysis: Foundations and Methods (Page 19). Cambridge University Press. Kindle Edition.
Power and gas retailers are exposed to a variety of risks when selling to domestic customers. Many of these risks arise from the fact that customers are offered a fixed price, while the retailer must purchase the gas and power to supply their customers from the wholesale markets. The risk associated with offering fixed-price contracts is exacerbated by correlations between demand and market prices. For example, during a cold spell gas demand increases and wholesale prices tend to rise, whilst during milder weather demand falls and wholesale prices fall.
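For illustration, a minimal sketch (with invented demand and price parameters) of how demand-price correlation fattens the upper tail of a fixed-price retailer's procurement cost:
# Minimal sketch: effect of demand-price correlation on a fixed-price
# retailer's procurement cost. All numbers are illustrative assumptions.
library(MASS)   # for mvrnorm
set.seed(7)
n_sim <- 50000
mu    <- c(demand = 100, price = 50)   # mean volume and mean wholesale price (assumed)
sd_d  <- 15; sd_p <- 10
sim_cost <- function(rho) {
  Sigma <- matrix(c(sd_d^2, rho * sd_d * sd_p,
                    rho * sd_d * sd_p, sd_p^2), 2, 2)
  x <- mvrnorm(n_sim, mu, Sigma)
  x[, 1] * x[, 2]                      # procurement cost = demand * price
}
cost_indep <- sim_cost(rho = 0)
cost_corr  <- sim_cost(rho = 0.7)
# correlation fattens the upper tail of the procurement cost
quantile(cost_indep, c(0.5, 0.95, 0.99))
quantile(cost_corr,  c(0.5, 0.95, 0.99))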
Various methods exist for quantitative risk analysis, such as Monte Carlo simulation, decision trees, sensitivity analysis, etc. Is there any reliable classification of such methods?
Two systems give uncorrelated or weakly correlated outputs, while their inputs show some correlated behaviour. How can one transform the data and find some (possibly complex) relationship between inputs and outputs that gives good correlation and copula dependence for the outputs?
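One common first step, sketched below with simulated data, is to work on rank-transformed (pseudo-) observations: rank correlations such as Spearman's rho are invariant under monotone transformations of the margins, so they can reveal dependence that the linear correlation of the raw outputs misses, and the pseudo-observations are also the usual input for fitting a copula.
# Minimal sketch: rank-based dependence as a first step before copula fitting.
# x drives y through a strongly non-linear (but monotone) map, so the Pearson
# correlation of the raw values understates the dependence.
set.seed(3)
x <- rnorm(1000)
y <- exp(3 * x) + rnorm(1000, sd = 0.1)
cor(x, y, method = "pearson")              # weak-looking linear correlation
cor(x, y, method = "spearman")             # rank correlation close to 1
# pseudo-observations (empirical probability-integral transform),
# the usual input when fitting a copula to the dependence structure
u <- rank(x) / (length(x) + 1)
v <- rank(y) / (length(y) + 1)
plot(u, v, pch = 16, cex = 0.4, xlab = "u", ylab = "v")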
I want to model a proportional variable bounded by [0,1] (the % of land fertilized). A high percentage of the data contains 0s (60%), a smaller percentage contains 1s (10%), and all the rest falls in between.
I want to compare different models with each other to see their performance, however the model I am currently looking at is a zero-one inflated beta model. I am using the R package gamlss for this.
However, I am having some trouble with the quite technical documentation of the gamlss package, and I don't seem to find an answer to my questions below:
1) model
The model below should fit 3 submodels: one part that models the probability of y = 0 versus y > 0 (nu.formula), one part that models the probability of y = 1 versus y < 1 (tau.formula), and a final part that models all the values in between.
gam <- gamlss(proportion ~ x1 + x2, nu.formula = ~ x1 + x2, tau.formula = ~ x1 + x2, family = BEINF, data = Alldata)
This is okay I think.
2) prediction
I would now like to know the predicted probability that an observation has y = 0 or y = 1. I predicted the probability of y = 0 with the code below; however, I get values far outside the [0,1] interval, so they cannot be probabilities, since probabilities have to lie in [0,1].
Alldata$fit_proportion_0 <- predict(gam, what = "nu", type = "response")
summary(Alldata$fit_proportion_0)
Could somebody explain to me how to obtain the correct probabilities, because the code above does not seem to work? I think the answer to my problem can be found in Section 10.8.2, page 215, of the following link (http://www.gamlss.org/wp-content/uploads/2013/01/book-2010-Athens1.pdf). I think it says that the predict function I use returns a different quantity, which then has to be plugged into a certain formula to obtain the real probabilities, but I am not sure how to make this work.
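My reading of Section 10.8.2 (please verify against the book) is that for family = BEINF the fitted nu and tau are not the point-mass probabilities themselves but the ratios p0/p2 and p1/p2, where p2 = 1 - p0 - p1, so the probabilities have to be recovered as in the sketch below:
# Sketch of my understanding of the BEINF parameterisation:
# nu = p0 / (1 - p0 - p1) and tau = p1 / (1 - p0 - p1), hence
# p0 = nu / (1 + nu + tau) and p1 = tau / (1 + nu + tau).
nu_hat  <- predict(gam, what = "nu",  type = "response")
tau_hat <- predict(gam, what = "tau", type = "response")
Alldata$p_y0 <- nu_hat  / (1 + nu_hat + tau_hat)   # P(y = 0)
Alldata$p_y1 <- tau_hat / (1 + nu_hat + tau_hat)   # P(y = 1)
summary(Alldata$p_y0)   # should now lie inside [0, 1]
summary(Alldata$p_y1)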
What if the relative risk value is zero? Does that indicate there is no association between the risk factor and the disease?
Hello all,
I will perform containment analyses with GOTHIC (3D).
I couldn't find enough resources.
Any kind of publication, tutorial, notes, or tips would be greatly appreciated.
Best regards,
E.B.
In the reliability analysis of repairable and redundant safety systems, one needs to consider the effect of the maintenance program. We are developing a Markov model for the ECCS of a typical PWR, and for the calculation of transition rates we need typical values of the test interval and test duration for the ECCS of a PWR nuclear reactor.
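As an illustration of how the test interval and test duration enter such a model, a minimal sketch of the standard time-averaged unavailability approximation for a periodically tested standby component; the numbers below are invented placeholders rather than typical ECCS data, and the formula ignores repair times and common-cause contributions.
# Minimal sketch: time-averaged unavailability of a periodically tested
# standby component, q ~ lambda*T/2 + tau/T (lambda = standby failure rate,
# T = test interval, tau = test duration). All values are assumed.
lambda <- 1e-5          # failures per hour (assumed)
T_test <- 730           # test interval in hours (roughly monthly, assumed)
tau    <- 4             # test duration in hours (assumed)
q_failure <- lambda * T_test / 2   # contribution of undetected standby failures
q_testing <- tau / T_test          # contribution of downtime during the test itself
q_total   <- q_failure + q_testing
c(q_failure = q_failure, q_testing = q_testing, q_total = q_total)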
I have considered the hydraulic conductivity (Ks) of a clayey soil as a random variable with a lognormal distribution. I obtained a negative mean (lambda) after determining the measures of variation. Logically, I should not have a negative mean for the physical parameter Ks. Please find the attached Excel document. Kindly provide a solution as soon as possible.
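A negative lambda is actually expected here: lambda is the mean of ln(Ks), not of Ks, and since Ks for a clay is far smaller than 1 in typical units (e.g. m/s), its logarithm is strongly negative. A minimal sketch of the standard moment relations, with an assumed mean and coefficient of variation, illustrates this:
# Minimal sketch: lognormal parameters of Ks from its mean and COV.
# lambda = E[ln Ks], zeta = SD[ln Ks]; lambda < 0 simply means the mean of Ks
# is below 1 in the chosen units, which is normal for clay (values assumed).
mean_Ks <- 1e-9          # m/s, assumed mean hydraulic conductivity of the clay
cov_Ks  <- 0.8           # assumed coefficient of variation
zeta2  <- log(1 + cov_Ks^2)
lambda <- log(mean_Ks) - 0.5 * zeta2
zeta   <- sqrt(zeta2)
c(lambda = lambda, zeta = zeta)   # lambda is strongly negative, as expected
exp(lambda + 0.5 * zeta2)         # sanity check: recovers the input mean of Ks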
I cannot understand notions like €, what this probability is, or how to compute it.

Kindly, how can the maximum likelihood method, or any other method, be used to parametrize a mathematical model for risk analysis?
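A minimal sketch of the general maximum-likelihood recipe, using simulated losses and a lognormal model chosen purely for illustration: write the negative log-likelihood of the assumed model and minimise it numerically.
# Minimal sketch: maximum-likelihood fit of a lognormal loss model with optim.
# The data here are simulated; replace 'losses' with observed data.
set.seed(11)
losses <- rlnorm(200, meanlog = 1.0, sdlog = 0.6)
negloglik <- function(par, x) {
  mu <- par[1]; sigma <- par[2]
  if (sigma <= 0) return(Inf)                  # keep sigma in its valid range
  -sum(dlnorm(x, meanlog = mu, sdlog = sigma, log = TRUE))
}
fit <- optim(par = c(0, 1), fn = negloglik, x = losses, hessian = TRUE)
fit$par                                        # ML estimates of (mu, sigma)
sqrt(diag(solve(fit$hessian)))                 # approximate standard errors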
How the "scale of fluctuation" interprets the homogeneity in clay sample prepared by slurry consolidation method ?
If the profiling of the clay bed is checked by CPT then How the scale of fluctuation estimated from CTP data will help in understanding the homogeneity along the depth?
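For what it is worth, a minimal sketch (with a synthetic qc profile standing in for the real CPT data) of a Vanmarcke-type estimate of the scale of fluctuation: detrend the profile, compute the autocorrelation along depth, and take twice the area under the autocorrelation function up to its first zero crossing. A scale of fluctuation that is small relative to the bed height means the properties vary rapidly with depth, while a large one indicates a more uniform, homogeneous-looking bed.
# Minimal sketch: scale of fluctuation of a (synthetic) CPT qc profile.
# The profile below is simulated; replace 'qc' and 'z' with the measured data.
set.seed(5)
dz <- 0.02                                   # depth increment in m
z  <- seq(0, 1, by = dz)                     # depth within the clay bed
qc <- 200 + 50 * z + 5 * as.numeric(arima.sim(list(ar = 0.9), length(z)))
# 1) remove the depth trend so only the random fluctuation remains
resid_qc <- residuals(lm(qc ~ z))
# 2) sample autocorrelation along depth
acf_qc <- acf(resid_qc, lag.max = 25, plot = FALSE)$acf
# 3) crude Vanmarcke-type estimate: theta ~ 2 * dz * (area under ACF up to
#    the first zero crossing)
first_neg <- which(acf_qc < 0)[1]
idx   <- if (is.na(first_neg)) seq_along(acf_qc) else seq_len(first_neg - 1)
theta <- 2 * dz * sum(acf_qc[idx])
theta   # scale of fluctuation in m; small relative to bed height -> rapidly varying profile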
Hi dear researchers,
I need to do a probabilistic analysis. How can I do this analysis and plot its results?
Please send me useful sources.
Regards
I am planning to conduct an FMEA study on an existing piece of equipment/device. This device has several models, most of which have almost the same design features. I wish to find out the weak parts/sub-components of this device type by conducting the study.
Since a number of different models of this device are available on the market, it is not possible to conduct the study on all the models.
Would it be appropriate to conduct the FMEA study on a specific model and extend the conclusions to the whole family of devices?
Demand in internet sportsbooks is actually growing: more websites mean more gamblers. At the same time, more websites selling "picks" are winning buyers; the term "pick" refers to professional sports advice you can use to bet at a sportsbook.
Using applied statistics in the European sports leagues that are corruption-free, could you predict the score with high probability, more accurately than the picks?
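As a toy illustration of what applied statistics can offer here, a minimal sketch of the classic independent-Poisson score model; the expected-goals rates below are invented, and estimating them from league data (and accounting for their uncertainty) is where the real difficulty lies.
# Minimal sketch: independent-Poisson model for a football score line.
# lambda_home and lambda_away (expected goals) are assumed, not estimated.
lambda_home <- 1.6
lambda_away <- 1.1
max_goals   <- 8
# joint probability of every score (home, away), assuming independence
score_mat <- outer(dpois(0:max_goals, lambda_home),
                   dpois(0:max_goals, lambda_away))
dimnames(score_mat) <- list(home = 0:max_goals, away = 0:max_goals)
p_home_win <- sum(score_mat[lower.tri(score_mat)])
p_draw     <- sum(diag(score_mat))
p_away_win <- sum(score_mat[upper.tri(score_mat)])
round(c(home = p_home_win, draw = p_draw, away = p_away_win), 3)
which(score_mat == max(score_mat), arr.ind = TRUE) - 1   # most probable score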
What are the uncertainties that we can assume in a probabilistic fatigue assessment of an existing steel bridge?
What strategy should be used for the quantification of human actions in a Probabilistic Safety Assessment when no data are available? Is expert opinion elicited with a questionnaire (with or without the Delphi method) a correct approach? Are there any other suggestions? I am looking for data for the headings of an event tree for a process.
Could you please help me find references on combining probabilistic risk assessment studies with air pollution modelling?
For instance, a probabilistic safety assessment conducted for an ammonia pipeline, combined with the results of dispersion modelling, can be used to assess the annual cost of losses caused by accidents. I have come across such work regarding groundwater pollution (http://www.sciencedirect.com/science/article/pii/S0304389413008005), but not so much regarding air pollution, or maybe I am wrong?
Thank you in advance.
I used risk reduction analysis to detect differences between the answers of two groups to a questionnaire question.
I have a basic doubt regarding the reporting of odds ratios. An odds ratio can be expressed either in terms of risk (>1) or protection (<1), depending on the reference group used; when the reference group is interchanged, the odds ratio changes from one (>1) to the other (<1). My question is: is there any specific criterion or rule regarding reporting an odds ratio in terms of either risk or protection?
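A minimal numerical illustration (with invented 2x2 counts) of the point that swapping the reference group simply inverts the odds ratio, so neither direction is more correct; the usual convention is to report the odds ratio for the exposure or category of primary interest relative to the unexposed/baseline group:
# Minimal sketch: the odds ratio and its reciprocal from an invented 2x2 table.
#                outcome+  outcome-
# exposed            30        70
# unexposed          10        90
exp_yes <- 30; exp_no <- 70; unexp_yes <- 10; unexp_no <- 90
or_exposed   <- (exp_yes / exp_no) / (unexp_yes / unexp_no)  # reference group: unexposed
or_unexposed <- (unexp_yes / unexp_no) / (exp_yes / exp_no)  # reference group: exposed
c(or_exposed = or_exposed, or_unexposed = or_unexposed,
  reciprocal = 1 / or_exposed)                               # or_unexposed equals 1/or_exposed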
The Monte Carlo method of data validation requires large sets of random numbers as a starting point.
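As an illustration of why large sets of random numbers are needed, a minimal sketch showing how a Monte Carlo estimate of a simple tail probability stabilises only once the sample size is large (its standard error shrinks roughly as 1/sqrt(N)):
# Minimal sketch: convergence of a Monte Carlo estimate with sample size.
# Target quantity: P(X > 2) for a standard normal, known exactly via pnorm.
set.seed(123)
true_p <- pnorm(2, lower.tail = FALSE)
sizes <- c(1e2, 1e3, 1e4, 1e5, 1e6)
est   <- sapply(sizes, function(n) mean(rnorm(n) > 2))
se    <- sqrt(est * (1 - est) / sizes)   # binomial standard error of each estimate
data.frame(N = sizes, estimate = est, std_error = se, true_value = true_p)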