
Reliability Theory - Science topic

Explore the latest questions and answers in Reliability Theory, and find Reliability Theory experts.
Questions related to Reliability Theory
  • asked a question related to Reliability Theory
Question
1 answer
I wish to know the difference between the BN (Bayesian network) and the Markov model. For what types of problems is one better than the other?
In the case of reliability analysis of a power plant, where equipment failures are considered, which model should be used and why?
Thank You!
Relevant answer
Answer
Dear Sanchit Saran Agarwal, here is the answer:
BAYESIAN
A Bayesian network, Bayes network, belief network, Bayes(ian) model or probabilistic directed acyclic graphical model is a probabilistic graphical model (a type of statistical model) that represents a set of random variables and their conditional dependencies via a directed acyclic graph (DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases.
MARKOV
In the domain of physics and probability, a Markov random field (often abbreviated as MRF), Markov network or undirected graphical model is a set of random variables having a Markov property described by an undirected graph. In other words, a random field is said to be a Markov random field if it satisfies the Markov properties.
As an example, consider a Markov random field in which each edge represents a dependency: A depends on B and D; B depends on A and D; D depends on A, B, and E; E depends on D and C; C depends on E.
A Markov network or MRF is similar to a Bayesian network in its representation of dependencies; the differences are that Bayesian networks are directed and acyclic, whereas Markov networks are undirected and may be cyclic. Thus, a Markov network can represent certain dependencies that a Bayesian network cannot (such as cyclic dependencies); on the other hand, it cannot represent certain dependencies that a Bayesian network can (such as induced dependencies). The underlying graph of a Markov random field may be finite or infinite.
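As a small illustration of the disease-symptom example above, here is a minimal sketch using the third-party pgmpy library (assumed installed; class names follow recent pgmpy versions, and all probabilities are made up):
```python
# Minimal Bayesian network sketch (assumes pgmpy is installed).
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# One disease node with one symptom node: Disease -> Symptom (a DAG).
model = BayesianNetwork([("Disease", "Symptom")])

# P(Disease): 1% prior probability of having the disease (made-up numbers).
cpd_disease = TabularCPD("Disease", 2, [[0.99], [0.01]])
# P(Symptom | Disease): columns are Disease=0 and Disease=1.
cpd_symptom = TabularCPD(
    "Symptom", 2,
    [[0.95, 0.10],   # P(Symptom=0 | Disease)
     [0.05, 0.90]],  # P(Symptom=1 | Disease)
    evidence=["Disease"], evidence_card=[2],
)
model.add_cpds(cpd_disease, cpd_symptom)
assert model.check_model()

# Given an observed symptom, compute the probability of the disease.
posterior = VariableElimination(model).query(["Disease"], evidence={"Symptom": 1})
print(posterior)
```
The same pairwise dependencies could be encoded without directions in a Markov network (pgmpy ships an undirected MarkovNetwork class for that case).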
  • asked a question related to Reliability Theory
Question
4 answers
I want to do research on the visual landscape in urban greening and people's preferences in landscape, so I need some reliable theories to evaluate aesthetics. What's more, I am studying it across seasonal change, so I need to know more about landscape aesthetic quality as the seasons change. Thanks a lot if you can help me.
Relevant answer
Answer
You might read my book, 'Landscape Appreciation: Theories since the cultural turn', out last December.
  • asked a question related to Reliability Theory
Question
4 answers
Dear all,
Does anyone know how I can estimate the NHPP reliability function through a non-parametric method?
I know kernel density estimation is widely used in this area, but it seems to have a very complicated theory.
I was wondering if you could suggest an example or statistical software directly.
Also, I am attaching the needed formulas of the kernel model.
Thanks for your attention.
Best/Hamzeh
Relevant answer
Answer
Dear Marcello Fera
I know it may be difficult :) but generally it's possible.
Thanks for your answer anyway.
Best.
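A minimal sketch of the kernel approach discussed above, using made-up failure times: the NHPP intensity can be estimated as lambda_hat(t) = n * f_hat(t), where f_hat is a kernel density estimate of the n observed event times, and reliability then follows as R(t) = exp(-Lambda_hat(t)).
```python
# Sketch: non-parametric NHPP intensity via kernel density estimation
# (hypothetical failure times; scipy's Gaussian KDE as the kernel).
import numpy as np
from scipy.stats import gaussian_kde

event_times = np.array([12.0, 35.0, 60.0, 80.0, 95.0, 120.0, 130.0, 150.0])
n, T = len(event_times), 160.0

kde = gaussian_kde(event_times)      # f_hat: density of event times
t = np.linspace(0.0, T, 401)
intensity = n * kde(t)               # lambda_hat(t) = n * f_hat(t)

# Cumulative intensity by trapezoidal integration, then R(t) = exp(-Lambda).
cum = np.concatenate(([0.0], np.cumsum(np.diff(t) * (intensity[1:] + intensity[:-1]) / 2)))
reliability = np.exp(-cum)
print(reliability[::100])            # R at t = 0, 40, 80, 120, 160
```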
  • asked a question related to Reliability Theory
Question
5 answers
I am very confused about the formula for availability, because it is given by uptime/(uptime+downtime).
Relevant answer
Answer
Dear Parag,
The formula Availability = Uptime/(Uptime+Downtime) is the most general, and therefore will ALWAYS be correct.
The expression MTBF/(MTBF+MTTR) holds only if ALL MTBF & MTTR assumptions are in effect, and these assumptions are another, extensive discussion which is beyond our scope.
From the practical point of view, Uptime is the interval during which a system is available WHEN WE NEED IT, and Downtime is the complementary period of time.
Downtime can result from a faulty system, maintenance (both preventive and corrective), inspection/calibration, and sometimes also power-up and set-up, etc.
A major contributor to Downtime is logistic time, during which a system is unavailable due to a missing part, an unavailable technician, etc.
Best,
Alon Sneor
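As a quick numeric illustration of the special-case formula (hypothetical numbers, and only valid under the MTBF/MTTR assumptions mentioned above):
```python
# Availability from MTBF/MTTR, valid only under the usual assumptions.
MTBF = 1000.0   # mean time between failures, hours (hypothetical)
MTTR = 10.0     # mean time to repair, hours (hypothetical)

availability = MTBF / (MTBF + MTTR)
print(f"A = {availability:.4f}")   # 0.9901 -> about 87 hours of downtime/year
```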
  • asked a question related to Reliability Theory
Question
7 answers
If f(t) represents the probability density of the time to failure, then how is it possible that f(t) follows an exponential distribution while the failure rate is constant?
Relevant answer
Answer
Dear Parag Sen,
your problem concerns the relationship between the Poisson distribution and the exponential distribution - namely:
If the random variable X represents the number of errors (system failures) in a given time period and has the Poisson distribution, then the intervals between every two consecutive errors have the exponential distribution.
For example, see Wikipedia:
If for every t > 0 the number of arrivals in the time interval [0, t] follows the Poisson distribution with mean value λt, then the inter-arrival times are independent and identically distributed exponential random variables with mean 1/λ.
In general, the exponential distribution describes the distribution of time intervals between every two subsequent Poisson events.
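To tie this back to the original question: for an exponential time-to-failure distribution, the failure (hazard) rate is constant by a one-line check, since the hazard rate h(t) is defined as f(t)/R(t):
f(t) = λe^(−λt), R(t) = e^(−λt), hence h(t) = f(t)/R(t) = λ for all t.
So a constant failure rate λ and an exponential failure-time density are two descriptions of the same model.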
The answer to your question can be found at the following addresses:
Best regards
Anatol Badach
  • asked a question related to Reliability Theory
Question
4 answers
Dear all,
I would really appreciate it if someone could reply to my question.
The parameters based on NHPP are Shape= 0.46, Scale= 20.54.
and operation time
0, 50, 100, 200, 300, 400,..., 1000
for more detail, please find the attached file.
I do not understand why the reliability values are unreasonable!!
Thanks in advance,
Hamzeh
Relevant answer
Answer
Dear Prof. Naderpour.
Thank you very much. I appreciate your reply.
Best wishes,
Hamzeh
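For readers checking similar numbers, a brief sketch, assuming the NHPP has the common power-law (Crow-AMSAA) cumulative intensity Λ(t) = (t/θ)^β with the shape/scale values quoted in the question; the reliability over an interval (t, t+Δ] is exp(−(Λ(t+Δ) − Λ(t))):
```python
# Sketch: interval reliability of a power-law NHPP (assumed model form).
import numpy as np

beta, theta = 0.46, 20.54                 # shape and scale from the question

def Lambda(t):
    """Cumulative intensity of the power-law NHPP."""
    return (t / theta) ** beta

def interval_reliability(t, dt):
    """P(no failure in (t, t+dt]) = exp(-(Lambda(t+dt) - Lambda(t)))."""
    return np.exp(-(Lambda(t + dt) - Lambda(t)))

for t in [0.0, 50.0, 100.0, 200.0, 500.0]:
    print(f"R({t:.0f}, {t + 100:.0f}) = {interval_reliability(t, 100.0):.4f}")
```
With β < 1 the intensity is decreasing, so interval reliabilities grow with t; reliability values that look "unreasonable" often come from computing exp(−Λ(t)) over the whole of [0, t] instead of over the interval of interest.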
  • asked a question related to Reliability Theory
Question
6 answers
Hi all,
It is easy to show that the reliability of the MEAN score is the same as that of the SUM score, but I have not found any article/source for that. If you know such an article/source, please send a note.
When forming the score of, for example, an attitude scale, we can form it by using either the SUM or the MEAN operation. The alpha coefficient of reliability of the scale/score (KR20 or "Cronbach's alpha") uses the variance of the SUMMED scale/score in the formula/calculations and, hence, we cannot use the basic formula when estimating the reliability of a MEAN type of "sum". This is because the variance of the SUM is greater than the variance of the MEAN and, if using the variance from the MEAN type of "sum", we get a faulty (out of range) result from the classical formula. It's easy to show the reliability estimate is the same in both cases, but I have not found a reference for that. Too obvious?
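One way to see the invariance without a reference: with k items, Cronbach's alpha is alpha = (k/(k−1)) * (1 − Σσᵢ²/σ_X²). Forming the MEAN instead of the SUM rescales every item by 1/k, so each item variance σᵢ² becomes σᵢ²/k² and the score variance σ_X² becomes σ_X²/k²; the factor 1/k² cancels in the ratio, so alpha is unchanged. Out-of-range results arise only when the variance of the MEAN score is mixed with unscaled item variances in the same formula.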
Relevant answer
Answer
I would add another risk of error (or inaccuracy) when comparing the results of different reliability tests.
It depends on how the test plan collects the reliability information: if data are collected until all items have failed, you get one result.
If data are collected only until a certain number of items have failed, the result will be different.
If data are collected for a fixed test duration, the result will be different again. Etc.
This makes it difficult to compare reliability data and makes the results debatable.
  • asked a question related to Reliability Theory
Question
6 answers
Let's say a system has a backup/standby unit. The failure rate and repair time of each unit are known. How do I calculate the equivalent failure rate and repair time? I need these numbers to calculate SAIDI and SAIFI. Correct me if I'm wrong, but I think those units are not in parallel redundancy, since it takes time to switch to the standby unit in case of failure of the main unit.
Relevant answer
Answer
The modeling of standby systems is typically done with state-based thinking. You need to distinguish three system states, assuming that the backup unit is a warm spare:
  • Both units are ok.
  • Only the backup unit is working.
  • Both units are broken.
You can now create something like a Markov chain and model the different state transition rates based on your given input. You will end up with probabilities for being in each of the states, expressible as a reliability function, but no longer with an overall failure rate or repair rate for the whole system. As the other authors pointed out, it doesn't make sense in most cases. Talking in 'rates' assumes an exponential distribution of failure and repair times - but your standby system does not fail in a 'memoryless' fashion. The failure probability of the overall system at any given point in time depends on whether, and for how long, the primary unit has already failed. There is some built-in dependency in your system behavior, so you cannot seriously describe it with failure and repair rates.
Shooman discussed a simplified example of your case in his book (ISBN 978-0-471-29342-2) and showed how to derive a reliability function for the resulting system. Maybe this is good enough for your purpose. You can also read papers by K. Trivedi, who investigated a lot of similar cases in the past. Dynamic fault trees have a spare gate for exactly such cases, so any reasonable reliability analysis tool will give you a reliability function for the resulting system.
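A minimal numeric sketch of the state-based view described above, with hypothetical rates; for brevity the spare is assumed not to fail while idle, and a single repair crew restores one unit at a time:
```python
# Sketch: 3-state continuous-time Markov chain for a unit with a standby.
# States: 0 = both units ok, 1 = running on the backup, 2 = both failed.
import numpy as np

lam_p, lam_b, mu = 1e-3, 2e-3, 0.1   # hypothetical failure/repair rates [1/h]

# Generator matrix: Q[i, j] is the transition rate from state i to state j.
Q = np.array([
    [-lam_p,        lam_p,         0.0  ],
    [ mu,          -(mu + lam_b),  lam_b],
    [ 0.0,          mu,           -mu   ],
])

# Steady state: solve pi @ Q = 0 together with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print("state probabilities:", np.round(pi, 6))
print("steady-state availability:", pi[0] + pi[1])   # up in states 0 and 1
```
From such a model you get state probabilities and an availability, but, as noted above, no single "equivalent" failure rate exists unless you deliberately approximate one.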
  • asked a question related to Reliability Theory
Question
1 answer
What are the cut-off values for the following reliability coefficients?
Guttman's (1945) Lambda 4 λ4
Bentler and Woodward's (1980) glb
McDonald's (1978) Omega-h ωh
McDonald's (1978) Omega-t ωt
I failed to find any threshold for these coefficients.
Relevant answer
Answer
Hi Xin,
There is no single cut-off point for all these coefficients. If you tell me what topic you are working on, then I can advise. However, have a look at the following article.
Hope this helps
  • asked a question related to Reliability Theory
Question
2 answers
Hello, I need a very short version of a reliable "Social Desirability Scale" (fewer than 10 items would be OK). I already found the "Brief Social Desirability Scale (BSDS)" (5 items), but found several studies arguing that it lacks reliability. Could anyone help me with it?
Thank you
Relevant answer
I know of the "Marlowe–Crowne Social Desirability Scale" as a common tool used in many studies since 1982.
If it is for an indigenous population, see the ones in the attachment.
Thanks,
Alejandro Martínez S.
  • asked a question related to Reliability Theory
Question
3 answers
I am trying to perform a simulation using the available reliability techniques, in particular Monte Carlo simulation. I need to predict the reliability of the railway infrastructure system. This system is composed of Track, Signalling and Electricals. The available data for all the subsystems is recorded as follows:
1) Incidences that cause delays
2) MTBF
3) MTTR
How can I approximate failure distributions for failure data given as MTBF/MTTR for a reliability model of a railway network?
A Monte Carlo simulation requires failure distributions, which can be approximated by statistical means (Weibull/Lognormal).
How do I choose the correct distribution with the given data (MTBF/MTTR/train delay etc.) for
1) Failure rate
2) Repair rate?
Relevant answer
Answer
If you have the record of incidents as a function of time, you could use the Exponentiated Weibull distribution.
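When only MTBF/MTTR are recorded (no timestamped failure data to fit a Weibull or lognormal to), the usual fallback assumption is exponential up and down times with means MTBF and MTTR; a minimal sketch for one subsystem, with hypothetical numbers:
```python
# Sketch: Monte Carlo availability of one subsystem from MTBF/MTTR alone,
# assuming exponential time-to-failure and time-to-repair.
import numpy as np

rng = np.random.default_rng(1)
MTBF, MTTR = 500.0, 8.0            # hypothetical hours for one subsystem
horizon, n_runs = 10_000.0, 2_000

uptimes = []
for _ in range(n_runs):
    t, up = 0.0, 0.0
    while t < horizon:
        ttf = rng.exponential(MTBF)         # time to failure
        up += min(ttf, horizon - t)
        t += ttf + rng.exponential(MTTR)    # failure + repair cycle
    uptimes.append(up / horizon)

print("simulated availability:", np.mean(uptimes))
print("analytic MTBF/(MTBF+MTTR):", MTBF / (MTBF + MTTR))
```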
  • asked a question related to Reliability Theory
Question
4 answers
I have a scale of 10 items which all test respondents' knowledge of a specific legal issue. The responses could be true, false or unsure. Correct answers are given a score of 1. Incorrect answers and "unsure" answers are given a score of zero. Then the respondent is given a total knowledge score out of a possible 10. I ran a reliability analysis on this scale, which resulted in a Cronbach's alpha of .65. Can this scale be considered reliable?
Relevant answer
Answer
A reliability above 0.6 is generally considered acceptable in social science,
and reliability can be improved further because you have 10 items.
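For context, the standard Spearman–Brown formula predicts how reliability changes with scale length: ρ_k = kρ / (1 + (k−1)ρ), where k is the factor by which the scale is lengthened. With ρ = .65, doubling the 10-item scale (k = 2) would be expected to give 2(.65)/(1 + .65) ≈ .79.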
  • asked a question related to Reliability Theory
Question
6 answers
Tests of reliability and validity are the spirit of research, without which research is worthless. But the case of literary studies (i.e., the analysis of fictional writings like drama, novels, poetry, stories) is different from other studies. So how can one make literary research reliable and valid?
Let's sketch a scenario... when one is dealing with the analysis of stories (written in different years, e.g., 1999, 2003, 2007, 2013, 2016) that represent the culture of one specific place. Whatever the culture of the place was in 1999 is not the culture of 2007; further, 2007 is different from 2016... How can we make our research authentic and original? Researchers often generalize their research studies by amalgamating different literary pieces from different times and different studies. Why?
Is there any criterion to make a literary study reliable and authentic?
Relevant answer
Answer
In literary studies it is possible to work with more than one analyst and to look for interrater agreement. It would be arbitrary to make generalizations depending on one researcher's analyses and conclusions. Interrater agreement, through statistical analyses, gives a score of how much homogeneity, or consensus, there is in the ratings given by judges. But be careful about refining the tools given to human judges.
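As a small illustration of scoring interrater agreement, a sketch using scikit-learn's Cohen's kappa on made-up codings by two analysts:
```python
# Sketch: Cohen's kappa for two raters coding the same ten passages
# into three hypothetical categories (0, 1, 2).
from sklearn.metrics import cohen_kappa_score

rater_a = [1, 0, 2, 1, 1, 0, 2, 2, 1, 0]
rater_b = [1, 0, 2, 1, 0, 0, 2, 1, 1, 0]

# Kappa corrects raw agreement for agreement expected by chance;
# 1.0 = perfect agreement, 0.0 = chance-level agreement.
print(cohen_kappa_score(rater_a, rater_b))
```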
  • asked a question related to Reliability Theory
Question
5 answers
Dear all
I am wondering whether it is right to have a validity coefficient greater than the square root of the reliability coefficient, while there is a rule that
max validity <= sqrt(reliability)
Is that rule valid for every method of estimating reliability and validity?
thank you
  • asked a question related to Reliability Theory
Question
6 answers
According to Bohmian quantum mechanics, dependence of quantum potential on all factors and boundary conditions, locally reflects the whole even in the one-particle case. But physicists usually use the term non-locality only in many-particle processes. Unfortunately, physicists usually do not consider the importance of the quantum potential and its dependence on form rather than amplitude in the one-particle case. Bohr was very clear about this non-locality. He did not use that word, he preferred to talk about the wholeness involved in quantum phenomena. What do you think about non-locality?
Relevant answer
Answer
The concept of single particle nonlocality is discussed in the following papers:
S. M. Tan, D. F. Walls, and M. J. Collett, Phys. Rev. Lett. 66, 252 (1991).
L. Hardy, Phys. Rev. Lett. 73, 2279 (1994).
J. Dunningham and V. Vedral, Phys. Rev. Lett. 99, 180404 (2007).
  • asked a question related to Reliability Theory
Question
10 answers
We want to find the linear relationship between reliability and risk in maintenance, and to determine an index that shows the weights of the two criteria: risk and reliability.
Relevant answer
Answer
Thank you very much, it was great
  • asked a question related to Reliability Theory
Question
8 answers
I have 144 EMG traces, and the reaction time has been calculated by two different methods. I want to calculate a reliability statistic which will tell me how closely the two methods are related or consistent. Is this inter- or intra-rater reliability? Initially I was thinking Pearson's r, but now I think I need an absolute measure of reliability such as the SEM, in which case I would calculate the intraclass correlation, but I'm not sure, as it is within-subject data. Any help would be much appreciated.
Relevant answer
Answer
Hi,
No, I would still propose the same approach: pairwise comparisons of each method with the gold standard.
Just some minor modifications(*):
- Instead of using the averages as x-coordinates, you can then use the values of the gold standard.
- the variance of the differences is a measure of the precision of the new method. Still, I would provide quantile or prediction ranges.
- if the differences do not scatter around zero, the average difference is the bias of the new method.
- bias (constant or dependent on the reaction time) can be modeled by regression and the estimated regression curve can be used to correct the bias of the new method.
- the residuals of this regression are a measure of the precision of the new method (this is the same as the "bias-corrected differences"; after bias correction the differences will scatter around zero; again, I would provide quantile or prediction ranges of the corrected differences/residuals)
- it might be that the precision depends on the values. This is difficult to tackle. If you have a lot of values per method, you may estimate the variance depending on the value with a running-window approach and then perhaps use a regression to model the functional relationship of variance versus value.
You can finally compare all the new methods with regard to bias (size, constancy or trend) and precision. Since the bias can be corrected (knowing a gold standard), the main criterion will likely be the precision.
(*) When I write "variance" I mean variability, measured by any means (not strictly the variance as the average squared difference). It may be, for instance, the median absolute deviation, or some quantile range or prediction range - or some other measure of variability. You should select the measure that is most suited to your specific problem.
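A small sketch of the pairwise-comparison idea above (simulated reaction times in ms; the "new" method is given a deliberate 5 ms bias so the estimates have something to find):
```python
# Sketch: bias and precision of a new method against a gold standard,
# using simulated reaction-time data (144 traces, values in ms).
import numpy as np

rng = np.random.default_rng(0)
gold = rng.normal(250.0, 30.0, 144)             # gold-standard RTs
new = gold + 5.0 + rng.normal(0.0, 8.0, 144)    # new method: bias + noise

diff = new - gold
bias = diff.mean()                              # average difference = bias
spread = diff.std(ddof=1)                       # variability = precision
lo, hi = np.quantile(diff, [0.025, 0.975])      # 95% range of differences

print(f"bias = {bias:.1f} ms, sd = {spread:.1f} ms, "
      f"95% range = ({lo:.1f}, {hi:.1f}) ms")
```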