Reliability Analysis - Science topic
Questions related to Reliability Analysis
Hi,
I have developed a semi-structured interview assessment tool for a clinical population which gives scores of 1, 2, or 3 (each score is qualitatively described on an ordinal scale), using a mixed methodology. The tool has 31 questions.
Content validation was done in Phases 1 and 2, and the tool was administered to a small sample (N = 50) in Phase 3 to establish its psychometric properties. The interview was re-administered to 30 individuals for retest reliability. The conceptual framework on which the tool is built is multidimensional.
When I ran Cronbach's alpha for the entire interview it was .75, but for the subscales it comes out between 0.4 and 0.5. The inter-item correlations are quite low, some are negative, and many are non-significant. The item-total correlations for many items are below 0.2, and some are negative; based on the established criteria we would have to remove many items.
I am hitting a roadblock in the analysis and was wondering if there is any other way in which we can establish the reliability of a qualitative tool with a small sample, or other ways to interpret the data where a mixed methodology has been used (QUAL-quan).
Since the sample is small I will be unable to do a factor analysis, but I will be establishing convergent/divergent validity with other self-report scales.
Thanks in advance
I am trying to calculate the reliability of a product (i.e., across multiple units of the same item). I would like to know what techniques can be used and the different ways of defining a failure condition. I am already aware of indices such as MTBF, MTTF, and the basic reliability index for a single unit of a product. I appreciate your suggestions and help. Thank you.
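For the basic indices mentioned in the question, a minimal plain-Python sketch (with made-up failure data) may help make the definitions concrete:

```python
# Minimal sketch: estimating MTTF and MTBF from observed data.
# The failure times and failure counts below are invented illustrative numbers.

def mttf(failure_times):
    """Mean time to failure: average operating time until failure
    across multiple units of a non-repairable product."""
    return sum(failure_times) / len(failure_times)

def mtbf(total_operating_time, n_failures):
    """Mean time between failures for a repairable product:
    cumulative operating time divided by the number of failures."""
    return total_operating_time / n_failures

# Ten units run to failure (hours) -- hypothetical data
times = [1200, 950, 1430, 1100, 1010, 1340, 990, 1250, 1180, 1050]
print(mttf(times))            # average life of a unit -> 1150.0
print(mtbf(sum(times), 25))   # e.g. 25 failures over the fleet's total hours
```

The same counts feed a Weibull or exponential fit when more than a point estimate is needed.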
Hello! I have this scale which had 10 items initially. I had to remove items 8 and 10 because they correlated negatively with the scale, and then I removed item 9 because Cronbach's alpha and McDonald's omega were both below .7; after removing it they are now above .7, as shown in the picture.
My question is, should I also remove item 7 (IntEm_7) because it would raise the reliability coefficients even more and its item-rest correlation is low (0.16), or should I leave it in? Is it necessary to remove it? And also, would it be a problem if I'm now left with only 6 items out of 10?
My goal is to see the correlation between three scales and this is one of them. I am using Jasp.
Any input is appreciated, thank you!
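JASP reports these quantities directly; as a sanity check on what they mean, here is a plain-Python sketch (with fabricated Likert data, not the asker's) of how Cronbach's alpha and the item-rest correlation are computed:

```python
# Hedged sketch (not JASP's internals): Cronbach's alpha and item-rest
# correlations computed by hand, to see how a weak item shows up.
# Data below are fabricated responses (each inner list = one item's column).

from statistics import pvariance, mean

def cronbach_alpha(items):
    """items: list of item columns, each a list of scores."""
    k = len(items)
    totals = [sum(p) for p in zip(*items)]          # each person's total score
    item_var = sum(pvariance(col) for col in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

def item_rest_corr(items, i):
    """Pearson r between item i and the sum of the remaining items."""
    rest = [sum(p) - p[i] for p in zip(*items)]
    x, y = items[i], rest
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

data = [  # 3 items x 6 respondents, hypothetical
    [3, 4, 2, 5, 4, 3],
    [2, 4, 2, 4, 5, 3],
    [3, 5, 1, 4, 4, 2],
]
print(round(cronbach_alpha(data), 3))
for i in range(3):
    print(i, round(item_rest_corr(data, i), 3))
```

An item with a very low item-rest correlation (like the 0.16 in the question) contributes little shared variance, which is why dropping it raises alpha.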

Which method is more accurate and popular for testing the validity and reliability of my scales?
Which should be done first: factor analysis or reliability analysis of a Likert-type scale?
How reliable are h-index and citation evaluations of academicians?
Hi,
I have conducted an EFA on three items, and all items load on one factor. I then ran a reliability analysis with the three variables to check internal reliability using Cronbach's alpha.
My question: Should I run a reliability analysis before or after the EFA?
Does the order really matter in this case?
Thank you in advance!
I'm working on my thesis about reliability analysis of composite structures. Can I use the maximum deflection of the plate structure as the failure criterion to define the limit state function for the reliability analysis, or should I use conventional failure criteria (e.g., the Tsai-Wu criterion)?
Hello everyone, I did a survey with a 4-item, 3-point Likert scale (for example: not true, sometimes, true) and a 5-item, 4-point Likert scale (for example: strongly disagree, disagree, agree, strongly agree). I computed Cronbach's alpha, but it was too low due to the low number of items.
Would there be an alternative to report the reliability analysis that would result in a higher score? Ideally, I would like to create scales from these items.
I also thought of reporting only the correlation coefficient of these scales.
Could you help me?
Hello all,
Could anyone recommend good references on using PIMS data for predictive analysis of equipment and machine failures in industry?
I have some questions regarding running internal consistency reliability test for a scale.
1. Should I run an internal consistency reliability test for an existing scale when I adopt one, even if the purpose of my study is not creating a new scale?
2. When I have some subdomains in a scale, should I run the reliability analysis on the subdomains only, or both the subdomains and the whole scale?
3. If the reliability test indicates that internal consistency for the whole scale is good (alpha > .70) but internal consistency for the subdomains is poor (alpha < .70), should I delete some items, or even a whole domain?
Thank you for helping!
I would like to ask for advice on data weighting, i.e., whether or not I should use post-stratification weights in my analyses. I have data collected by quota sampling, which are weighted with post-stratification weights to accurately represent the target population. Due to missing values, I am working with a smaller sample. Should I do the missing-value analysis, and then the further analyses on the smaller sample, using the weighted data? Among the specific analyses, I am applying descriptive statistics, reliability testing, CFA and MGCFA.
I have tried both (weighted vs. unweighted), and there is not much difference for the descriptive statistics, but in the CFA and MGCFA the results come out quite different.
Thanks in advance for any advice and tips on how I should best proceed.
These items are taken from the WVS7 questionnaire to measure religious tolerance:
- Do you trust people of other religion?
- Whenever science and religion conflict, religion is always right
- The only acceptable religion is my religion
- Mention if you would not have people of a different religion as your neighbor
All of them are on a 4-point scale. Higher values would indicate higher tolerance. The alpha value is below 0.2.
What should be done? Should I carry on, ignoring the alpha? Is alpha even appropriate in this case?
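One thing worth ruling out first: some of the items above are keyed toward intolerance, and alpha collapses when oppositely keyed items are not reverse-scored. A plain-Python illustration with fabricated 4-point data:

```python
# A common cause of near-zero (or negative) alpha is forgetting to
# reverse-score negatively keyed items. Illustrative sketch with fabricated
# data: item B is keyed in the opposite direction of items A and C.

from statistics import pvariance

def cronbach_alpha(items):
    k = len(items)
    totals = [sum(p) for p in zip(*items)]
    return k / (k - 1) * (1 - sum(pvariance(c) for c in items) / pvariance(totals))

A = [4, 3, 1, 2, 4, 3, 1, 2]
B = [1, 2, 4, 3, 1, 1, 4, 4]   # opposite keying: high score = low tolerance
C = [4, 4, 1, 2, 3, 3, 2, 1]

print(cronbach_alpha([A, B, C]))        # badly negative: B pulls alpha down
B_rev = [5 - x for x in B]              # reverse-score on a 1-4 scale
print(cronbach_alpha([A, B_rev, C]))    # substantially higher
```

If alpha stays very low even after correct keying, the four items may simply not form one internally consistent dimension, and alpha is then the wrong summary.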
Hi guys,
I might be publishing my Master's dissertation paper. My supervisor adjusted my data, and now two of my constructs have a Cronbach's alpha of 0.6.
I have been looking for a good journal article that reports Cronbach's alpha values of 0.6, to see how they present it and to use it as a reference, but I cannot find one. I have a lot of articles stating that 0.6 to 0.7 is the lower bound of acceptability; however, I have not found any articles using values below 0.7.
Any suggestions?
What other similar graphical approaches/tools do you know when we attempt to depict the degradation state or reliability performance of a system, aside from Markov chain and Petri net?
(Any relevant references are welcome.)
Thank you in advance.
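For concreteness, the Markov-chain baseline the question mentions boils down to iterating a transition matrix over degradation states; a minimal discrete-time sketch with invented probabilities (graphical alternatives such as fault trees and reliability block diagrams answer the same question with different structure):

```python
# Sketch of a discrete-time Markov degradation model with three states
# (healthy -> degraded -> failed). All transition probabilities are invented
# for illustration. The state probabilities over time trace the reliability curve.

def step(p, P):
    """One transition: new state distribution p' = p @ P (pure Python)."""
    return [sum(p[i] * P[i][j] for i in range(len(p))) for j in range(len(P[0]))]

P = [                       # per-step transition matrix (rows sum to 1)
    [0.90, 0.08, 0.02],     # healthy  -> healthy / degraded / failed
    [0.00, 0.85, 0.15],     # degraded -> degraded / failed
    [0.00, 0.00, 1.00],     # failed is absorbing
]

p = [1.0, 0.0, 0.0]         # start in the healthy state
for t in range(1, 11):
    p = step(p, P)
    reliability = p[0] + p[1]   # probability of not being in the failed state
    print(t, round(reliability, 4))
```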
I have translated and culturally adapted a survey from English to another language. Then, I conducted a pilot study to assess the face validity of the adapted version. So, would it be necessary to conduct a reliability analysis for the adapted version even if the original version didn't go through the process of reliability analysis?
I have a question regarding moderation effect.
I am testing a model with one IV (A) and one DV (B), and I want to test the moderating effect of M on this path.
Is it necessary to investigate the reliability and validity of the interaction construct (B*M)?
Or do I only have to investigate reliability and validity for the A construct and the B construct?
Help from one of the PLS-Experts in this forum would be highly appreciated!
To protect safety-critical systems against soft errors (induced by radiations), we usually use redundancy-based fault tolerance techniques.
Recently, to cut down the unacceptable overheads imposed by redundancy, one approach is to protect only the most critical parts of the system, i.e., selective fault tolerance. To identify such parts, we can use fault injection.
Two fault-injection-based methodologies for improving a system's fault tolerance are widely presented in the literature: reliability assessment and vulnerability assessment. I wonder, what is the main difference between these two concepts?
I am trying to do a reliability analysis for short RC columns. I am referring to the paper "Reliability analysis of eccentrically loaded columns" by Maria M. Szerszen, Aleksander Szwed and Andrzej S. Nowak.
At the end, based on a plot of strain vs. strength reduction factor, they propose new values of the strength reduction factor as a function of strain.
Two models are proposed: the dotted line is for all the points except the black ones, while the solid line is for the black points. The black points depict reinforcement ratios < 2; the green, blue and red colors show reinforcement ratios equal to or in excess of 2.
My question is: how did they fit these two lines, or how did they determine the transition zone from the given scatter plot?

Hello everyone,
I've got a question regarding within-subject experiments in which two or more variants of a prototype (e.g., a chatbot) are evaluated with respect to different constructs, i.e., classic A/B-testing experiments with different design options. For both versions, the same items are used for comparability.
Before the final data analysis, I plan to perform tests for validity and reliability as well as factor analysis. Does anyone know whether I need to calculate the corresponding criteria (e.g., Cronbach's alpha, factor loadings, KMO values) for both versions separately, or only once, aggregated over the respective constructs? And how would I proceed with the exclusion of items? Especially when there are a lot of control conditions, it might be difficult to decide whether to exclude an item that falls below a certain criterion.
In reviewing the literature of papers with a similar experiment design, I couldn't identify a consistent approach so far.
Thank you very much for your help! If anyone has recommendations for tools or tutorials, I would appreciate those as well.
Researchers in the social sciences have to report some measure of reliability. Standard statistics packages provide functions to calculate (Cronbach's) alpha or procedures to estimate (McDonald's) omega in a straightforward way. However, things become a bit more complicated when your data have a nested structure. For instance, in experience sampling research (ESM), researchers usually have self-reports or observations nested in persons. In this case, Geldhof et al. (2014) suggest that reliability be estimated for each level of analysis separately. Although this is easy to do with commercial packages like Mplus, R users face some challenges. To the best of my knowledge, most multilevel packages in R do not provide a function to estimate reliability at the within- vs. the between-person level of analysis (e.g., misty or multilevel).
So far, I have been using a tool created by Francis Huang (2016) which works fine for alpha. However, more and more researchers prefer (McDonald's) omega instead (e.g., Hayes & Coutts, 2020).
After working with workarounds for years, I accidentally found that the R package semTools provides a function to estimate multilevel alpha, different variants of omega, and average variance extracted for multilevel data. I would like to use this post to share this with anyone struggling with the estimation of multilevel reliability in R.
If you find this post helpful, feel free to let me know.
Oliver
Bliese, P. (n.d.). multilevel: Multilevel Functions. Comprehensive R Archive Network (CRAN) [Computer software]. https://CRAN.R-project.org/package=multilevel
Geldhof, G. J., Preacher, K. J., & Zyphur, M. J. (2014). Reliability estimation in a multilevel confirmatory factor analysis framework. Psychological Methods, 19(1), 72–91. https://doi.org/10.1037/a0032138
Huang, F. L. (2016). Conducting multilevel confirmatory factor analysis using R. http://faculty.missouri.edu/huangf/data/mcfa/MCFAinRHUANG.pdf
Hayes, A. F., & Coutts, J. J. (2020). Use Omega Rather than Cronbach’s Alpha for Estimating Reliability. But…. Communication Methods and Measures, 14(1), 1–24. https://doi.org/10.1080/19312458.2020.1718629
Yanagida, T. (2020). misty: Miscellaneous Functions "T. Yanagida" (0.3.2) [Computer software]. https://CRAN.R-project.org/package=misty
Any assistance or a guide to carrying out a reliability analysis using MATLAB software would be greatly appreciated. Thanks.
Every tutorial and guide I can find for scale analyses in SPSS is specifically about Likert scales. My study does not use a Likert scale; instead it uses a 0-100 scale.
What reliability analysis is best suited for such a scale?
Greetings,
I am a DBA student conducting a study about "Factors Impacting Employee Turnover in the Medical Device Industry in the UAE."
My research model consists of 7 variables, out of which:
- 5 variables measured using multi-item scales adapted from the literature, e.g., Perceived External Prestige (6 items), Location (4 items), Flextime (4 items), etc.
- 2 are nominal variables
I want to conduct a reliability analysis using SPSS, and I think I need to do the following:
- Conduct reliability test using SPSS Cronbach's alpha for each construct (except for nominal variables)
- Deal with low alpha coefficients (how to do so?)
- Conduct Exploratory Factor Analysis to test for discriminant validity
Am I thinking right? Attached are my results so far.
Thank you
Hello, I have a questionnaire that consists of five sections. The first section (related to drivers' knowledge) has 10 items with no Likert scale; the participants have to choose from two, three or more specific options. The second section (related to drivers' habits) has 9 items, with the first five items having a six-point Likert scale, while in the remaining items the respondents have to choose one answer from four specific options. The third section (related to drivers' behavioral intentions) has 10 items, each following a six-point Likert scale. The fourth section (related to drivers' psychological conditions) has 9 items with no Likert scale; the participants have to choose from three, four or more specific options. Finally, the last section consists of questions regarding drivers' profiles (age, gender, education, driving experience, profession, frequency of driving through tunnels, etc.).
Now my question is: what kind of statistical tests or analyses can I perform here to investigate the relationship between the variables in the drivers' profile and the other sections/items? For instance, how can I analyze which group of drivers (in terms of age, gender, experience, etc.) is more knowledgeable (Section 1) or adopts appropriate habits (Section 2)?
I am also open to all kinds of suggestions and collaborations on this research.
P.S: I am attaching my questionnaire as a file. Hope it will help to understand my question and questionnaire better.
I am doing a reliability analysis of a motivation questionnaire on a sample of athletes in different sports. I do the reliability analysis in order to check the reliability of the translation of the questionnaire into another language.
Thank you.
It's an online assessment for which there are 14 learning objectives. I had 3 groups (Novice, intermediate, and experts) take the assessment that had 4 items for each of the 14 objectives. Ultimately I want the assessment to randomly select only 1 item for each learning objective (a 14-item assessment) from 3 possible items. What test(s) will help me choose the best 3 items for each learning objective? I already have data from 169 test-takers (77 novice / 55 intermediate / 37 experts).
I want to learn the reliability coefficient of a scale I used in my study (an assignment for my experimental psychology class). I read about how to find Cronbach's alpha: I can run a reliability analysis in SPSS to obtain it. However, I also read that in order to run a reliability analysis each item has to have a normal distribution, and my data are not normally distributed. Can I run a reliability analysis with non-normally distributed data? Is there an alternative to reliability analysis for non-normal distributions?
I'm doing a split-half estimation on the following data:
trial one: mean = 5.12 (SD = 5.76)
trial two: mean = 7.62 (SD = 8.5)
trial three: mean = 8.57 (SD = 12.66)
trial four: mean = 8.11 (SD = 10.7)
(SD = standard deviation)
where I'm creating two subset scores (from trials one & two, and from trials three & four - I realise this is not the usual odd/even split):
Subset 1 (t1 & t2): mean = 12.73 (SD = 11.47)
Subset 2 (t3 & t4): mean = 16.68 (SD = 17.92)
I'm then computing a correlation between these two subsets, after which I'm computing the reliability of this correlation using the Spearman-Brown formulation.
However, in the literature I've found, it all suggests that the data must meet a number of assumptions, specifically that the mean and variance of the subsets (and possibly the items of these subsets) must all be equivalent.
As one source states:
“the adequacy of the split-half approach once again rests on the assumption that the two halves are parallel tests. That is, the halves must have equal true scores and equal error variance. As we have discussed, if the assumptions of classical test theory and parallel tests are all true, then the two halves should have equal means and equal variances.”
Excerpt From: R. Michael Furr. “Psychometrics”. Apple Books.
My question is: must variances and means be equal for a split-half estimate of reliability? If so, how can their equality be tested? And is there a guide to how similar the means may be (surely means and variances across subsets cannot be expected to be exactly equal)?
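For reference, the mechanics described above (correlate the two halves, then apply the Spearman-Brown correction r_full = 2r / (1 + r)) take only a few lines of Python; the per-participant subset scores below are invented:

```python
# Sketch of the split-half computation described in the question:
# correlate the two subset scores, then step up to full-length reliability
# with the Spearman-Brown formula. Subset scores are made-up numbers.

from statistics import mean

def pearson(x, y):
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman_brown(r):
    """Predicted reliability of the full-length test from the half-test r."""
    return 2 * r / (1 + r)

# hypothetical per-participant subset scores (t1+t2 vs t3+t4)
half1 = [10, 14, 7, 20, 12, 9, 16, 11]
half2 = [13, 19, 9, 27, 15, 12, 22, 13]

r = pearson(half1, half2)
print(round(r, 3), round(spearman_brown(r), 3))
```

Note the Spearman-Brown step assumes (essentially) parallel halves; with clearly unequal half variances like those in the question, coefficients that relax this assumption (e.g., Flanagan-Rulon) are often recommended instead.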
Hello, I have a questionnaire that consist of four sections with each section focusing on different variables.
First, each section has 9-10 items, with each item following a different scale. For instance, the first section has 10 items with no Likert scale; the participants have to choose from two, three or more specific options. The second section has 9 items, with the first five items having a six-point Likert scale, while in the remaining items the respondents have to choose from four specific options. The third section has 10 items, each following a six-point Likert scale. The fourth section has 9 items with no Likert scale; the participants have to choose from three, four or more specific options.
Second, in some of the items the respondents were also allowed to select multiple answers for the same item.
Now my question is: how do I calculate Cronbach's alpha for this questionnaire? If Cronbach's alpha cannot be calculated, what are the alternatives for establishing the reliability and internal consistency of the questionnaire?
I would like to know which is the best way to analyse test-retest in non-normal data. If ICC is not recommended in those cases, which test should I choose?
What if the Cronbach's alpha of a 4-item scale measuring a control variable is between .40 and .50 in your research, while the same scale received a Cronbach's alpha of .73 in previous research?
Do you have to make some adjustments to the scale, or can you use it as-is because previous research showed it is reliable?
What do you think?
I have a set of independent data whose final output (result) is in binary form (0 or 1). Which form of reliability analysis can be used for such datasets? I have looked at FOSM and AFOSM methods, but all of them are applicable to continuous data.
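One route that does not require a smooth limit state function is crude Monte Carlo: estimate the failure probability as the observed failure fraction and attach a binomial standard error. A minimal sketch (the indicator function below is a toy assumption, not the asker's model):

```python
# When the performance function is binary (fail / safe) rather than a smooth
# limit state, FOSM-type expansions don't apply, but crude Monte Carlo still
# does: the failure probability is the failure fraction, with a binomial
# standard error. The toy indicator below is invented for illustration.

import random

def monte_carlo_pf(indicator, n, seed=0):
    rng = random.Random(seed)
    fails = sum(indicator(rng) for _ in range(n))
    pf = fails / n
    se = (pf * (1 - pf) / n) ** 0.5     # binomial standard error of the estimate
    return pf, se

def toy_indicator(rng):
    """1 = failure. Fails when the sum of two uniform loads exceeds capacity 1.5."""
    return 1 if rng.random() + rng.random() > 1.5 else 0

pf, se = monte_carlo_pf(toy_indicator, 100_000)
print(pf, se)   # the analytic answer for this toy case is 0.125
```

For very small failure probabilities, variance-reduction methods (importance sampling, subset simulation) replace the crude estimator, but the binary-output logic is the same.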
How can optimization techniques like the Genetic Algorithm and Particle Swarm Optimization be used in reliability analysis? Please give me an idea about it.
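By way of illustration (a sketch, not a definitive implementation), one common use is reliability allocation: search for component reliabilities that maximise system reliability under a cost budget. A minimal particle swarm example, with an invented cost model and constants:

```python
# Minimal particle swarm optimisation (PSO) sketch applied to a toy
# reliability-allocation problem: pick three component reliabilities that
# maximise series-system reliability under a cost budget. The cost model,
# budget, and all PSO constants are invented for illustration.

import random

def fitness(r):
    """Series reliability minus a penalty when the (toy) cost exceeds budget."""
    rel = r[0] * r[1] * r[2]
    cost = sum(1.0 / (1.001 - ri) for ri in r)   # cost explodes near ri = 1
    return rel - 10.0 * max(0.0, cost - 60.0)

def pso(f, dim=3, n=30, iters=200, lo=0.5, hi=0.999, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                  # each particle's best position
    gbest = max(pbest, key=f)                    # swarm's best position
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]     # inertia
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            if f(pos[i]) > f(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = max(pbest, key=f)
    return gbest

best = pso(fitness)
print([round(r, 3) for r in best], round(fitness(best), 4))
```

A genetic algorithm tackles the same objective with crossover/mutation instead of velocity updates; both are also used to drive the sampling step in simulation-based reliability assessment.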
Recently, I read that we do not validate the questionnaire itself, but the scores obtained through the questionnaire. So are papers titled "Validation of the XXXXXX questionnaire" wrong?
I urgently need to publish my research paper on reliability analysis of an industrial system in an SCI journal of Q1/Q2 category. Can anyone suggest a journal?
I did a reliability analysis on my current project using SPSS version 20. Most of the coefficients I am getting are between 0.5 and 0.66, even after items are deleted. Can I say my items are reliable with these findings? If not, please advise.
I am working on a problem for design optimisation. I would like to ask if for an uncertain problem, should design optimisation under uncertainty techniques be used for the design optimisation?
I used Neper to generate the tessellation file and the meshing file, and assigned the crystal orientations. How do I import these files into ABAQUS for crystal plasticity finite element analysis (CPFEA/CPFEM)?
Hi all, I am conducting a study on a flexible work arrangement. Section 1 consists of 7 questions: for Q1-Q3 the answer options are YES/NO, Q4 asks you to rank the given answers, and Q5-Q7 are on a Likert scale. This is where the problem arises: how do I carry out a reliability analysis on Section 1?
Hello all,
I am trying to do an agreement analysis to verify how similar the time-series measurements taken by two devices are. Basically, I have two curves representing values measured over time with each device, and I want to say how similar these measurements are.
I have other metrics in my analysis, but I was looking into the CMC (Kadaba, 1989) as a global metric. I know it is often used in the gait analysis literature for reliability analysis, where curves taken by the same measurement device, but on different days, are compared. This coefficient represents similarity between two curves, so I was considering using it as a metric of agreement between the two time-series measurements I have, one from each device. I was wondering if there is any statistical assumption behind the CMC that prevents me from doing that; I couldn't find much about it.
Thank you!
Hello,
I am coding some metrics from different articles to run a meta-analysis, and I have a simple question.
Let's say one of my variables of interest is brand loyalty. In some articles, brand loyalty is decomposed into two different variables (attitudinal loyalty and behavioral loyalty) with two sets of metrics: two different AVEs, CRs, alpha coefficients, means and standard deviations.
I would like to summarize these two variables as a single one. So how do I get the values of the AVE, CR, alpha, mean and SD for the variable brand loyalty (the variable gathering attitudinal and behavioral loyalty)? Should I average the values given in the article?
Thanks in advance for your reply,
Best regards,
Kathleen
Hi everybody,
I need to perform a reliability analysis on my ERP data. Specifically, I would like to estimate internal consistency reliability through Spearman-Brown corrected split-half reliability. Could anybody help me with this? Do I need to use all the trials for each participant?
I'm not sure how to start the analysis: using single trials or averages?
I hope to get some answer here.
Thanks in advance.
Hi everyone, I am performing Sobol's sensitivity analysis and wondering if there is a way to set a threshold on the sensitivity index, so that parameters with a sensitivity index greater than the threshold are considered sensitive.
Many thanks!
Hello
Dear all,
I am looking for a reference that considers the rebar diameter as a random variable (e.g., normally distributed with a given standard deviation) in reliability analysis. However, I am not able to find any reference in which the rebar diameter is a random variable, similar to the yield stress fy, etc.
Does anybody know any more information?
Regards,
Greeting!
I have performed the reliability analysis using Cronbach's alpha for my questionnaire and obtained a value of 0.507. There are 3 items to be deleted, as shown in SPSS.
May I know the maximum number of items I can delete from my questionnaire? I came across one forum stating that only 20% of the questions can be deleted from a questionnaire in order to preserve its content; however, there is no reference for this suggestion.
Please advise; thanks in advance!
I wish to know the difference between the BN and Markov models. In what types of problems is one better than the other?
In the case of reliability analysis of a power plant, where equipment failures are considered, which model should be used and why?
Thank You!
For a dynamic Bayesian network (DBN) with a warm spare gate having one primary and one back-up component:
If the primary component P is active in the first time slice, then its failure rate is lambda(P) and the failure rate of the backup component S1 is alpha*lambda(S1).
If the primary component P fails in the first time slice, then its failure rate is lambda(P) and the failure rate of the backup component S1 is lambda(S1).
My question is: the above are the conditional probabilities of the primary and backup components. In a DBN, a prior failure probability is also required. What will the prior failure probability of the backup component be? Will it be calculated using lambda(S1) or alpha*lambda(S1)?
Thank you
regards
Sanchit
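Under the setup described in the question (the backup is dormant until the primary fails), the backup's first-slice prior would normally come from the dormant rate alpha*lambda(S1); only after the primary's failure does the full active rate apply. A numeric sketch assuming exponential lifetimes, with invented rates:

```python
# Hedged numeric sketch (exponential lifetimes assumed, all rates invented):
# in the first time slice the backup has not yet been promoted, so its
# prior (first-slice) failure probability uses the dormant rate
# alpha * lambda_S, not the full active rate lambda_S.

import math

lam_p, lam_s, alpha, dt = 0.01, 0.02, 0.1, 1.0   # per-hour rates, 1 h slice

def p_fail(rate, t):
    """P(failure within t) for an exponentially distributed lifetime."""
    return 1.0 - math.exp(-rate * t)

print(p_fail(alpha * lam_s, dt))  # first-slice (dormant) failure probability
print(p_fail(lam_s, dt))          # per-slice probability once the primary has failed
```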
Dear all,
We conducted research on college students using the Maslach Burnout Inventory-Student Form of Schaufeli et al. (2002). As you all know, this scale consists of three factors, namely exhaustion, cynicism and professional efficacy.
My question is about the internal consistency coefficient of the factor professional efficacy. The Cronbach's Alpha for this factor is .59 and split-half reliability coefficient is .61.
In our research we also measure general self-efficacy of the students.
Therefore, what should we do?
For my idea, the best option is omitting the factor from the analysis since we also measure general self-efficacy.
What do you think?
Thanks in advance.
Meryem
I need ETAP software. Can anyone please share the link? As I am new user of ETAP, I also need a user guide please.
I did an item reliability analysis and the Cronbach's alpha value is 0.956. The professor is saying, "This is too high; go and read up on what to do." The professor doesn't give any hints. What should I do?
Hi everyone, grad student in need of help!
I have distributed two surveys, they are very similar but one was for teachers and one for students, as part of a needs assessment for e-learning. I wrote the survey to have variables assessing readiness, enthusiasm, and accessibility.
1) How do I properly assess the reliability of my surveys? The participation rates were low for one and okay for the other, which makes me wonder whether EFA is going to be effective. Alternatively, in SPSS you can run the reliability analysis and get Cronbach's alpha. How do EFA and reliability analysis differ?
I have 5 Likert-scale questions in my questionnaire that seek to measure Construct A. They have an acceptable Cronbach's alpha value of above 0.7.
Does this mean that I am able to create (compute) a new variable, where I average the scores of the 5 questions for each response to derive a score for Construct A for each respondent? I want to use this new variable (as a representative of Construct A) to conduct statistical tests with other variables.
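Yes: with acceptable internal consistency, averaging the items into a composite scale score is standard practice (similar to what SPSS's COMPUTE with MEAN() produces). A minimal sketch with fabricated responses:

```python
# Composite scoring sketch: average each respondent's answers to the five
# items to get one Construct A score per respondent. Responses are invented.

rows = [            # each row = one respondent's answers to the 5 items
    [4, 5, 4, 3, 4],
    [2, 2, 3, 2, 1],
    [5, 5, 5, 4, 5],
]

construct_a = [sum(r) / len(r) for r in rows]
print(construct_a)   # one composite score per respondent -> [4.0, 2.0, 4.8]
```

The resulting variable can then enter correlations, t-tests, or regressions like any other scale score.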
Does anyone know whether there is any special formula for inspection or preventive maintenance intervals or scheduling when applying artificial neural network models instead of conventional methods such as Weibull analysis?
Thanks in advance
Hi!
I am working on reliability analysis of grid-connected PV systems. I have no background in reliability analysis, so I want to start from scratch. Can someone please recommend books on reliability analysis of PV systems and wind energy systems? Research papers skip many things, so I want to start with a detailed study from books.
Thanks in advance...
I am conducting a study where the Team Climate Inventory variable represents a second-order construct with four sub-dimensions (i.e. vision, task orientation, participative safety and support for innovation).
How should I estimate a composite reliability score in this case with the use of Lisrel?
Thanks for all the hints!
Best,
Lukasz
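Not Lisrel syntax, but for reference, the usual composite reliability formula is CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances), with error variance theta_i = 1 - lambda_i^2 for standardized loadings. For a second-order construct it can be applied at the second-order level, treating the four sub-dimensions as indicators. A sketch with invented loadings:

```python
# Composite reliability from standardized loadings. The four loadings below
# are invented placeholders for the TCI sub-dimensions (vision, task
# orientation, participative safety, support for innovation), not real estimates.

def composite_reliability(loadings):
    s = sum(loadings)
    theta = sum(1 - l * l for l in loadings)   # error variances, 1 - lambda^2
    return s * s / (s * s + theta)

# hypothetical second-order loadings of the four sub-dimensions
loadings = [0.82, 0.77, 0.85, 0.70]
print(round(composite_reliability(loadings), 3))
```

In practice, the standardized loadings would be read off the Lisrel output for the second-order factor and plugged into this formula.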
Hi
A school in Jordan is doing an impact study on its alumni. The variables are a list of traits and values (innovation, leadership, empathy, etc…). I’m responsible for preparing the questionnaire.
My methodology is:
1- For each value/trait, find an inventory or scale that measures it.
2- Choose three items from the inventory/scale.
3- Combine the three items from all the inventories/scales to create the new questionnaire (about 60 items).
I need an expert who can review the final questionnaire and give an approval and recommendations to improve the questionnaire.
Any volunteers?
SOLVED!!! Don't see how I can delete this question?
I am testing a survey about personality types and self-disclosure on Instagram. I gathered 105 respondents and used the mini-IPIP scale by Donnellan to measure the Big Five personality traits. I have reverse-coded the items that were negatively keyed and double-checked with a PhD researcher, who confirmed I did it correctly. When running the reliability analysis for the mean of each variable, I get the results in the attached photo. I was told that this could be because some respondents were unreliable and clicked random answers, and that it could help to remove the outliers. So I computed the Mahalanobis distance in SPSS to identify the outliers (see attachment). I am not sure if I did it correctly, but from what I can gather there are no outliers, since none are below .001? I am not sure now how to save my data and how to make it more reliable. I can go back and gather more respondents, but it's been hard to do so and I am running out of time. Please advise. Thank you in advance.

It has been seen that the instrument rating mentioned in the operating manual differs from what is mentioned in the technical manual (not for all instruments, but for a few). If we disregard typing errors, what are the actual reasons accounting for this difference?
NFF (No Fault Found) is a major contributor to reduced operational availability, wasted resources and increased maintenance cost for any aircraft in aviation. The likely causes are human factors, maintenance training, fault reporting, fault analysis, corrective maintenance and procedures. However, mitigating these issues is a tedious process in which management skill alone can't achieve the desired results. So, what other parameters/technical factors need to be considered?
Hi everyone,
I've conducted an EFA and ended up with 5 factors. A few of the items cross-load on 2-3 factors. I have already removed 10 items that either do not correlate or that cross-load significantly.
I am fairly happy with the factors, however, the cross-loading items are confusing me and I have a few questions.
1. When calculating the total scores, means and Cronbach's alphas for each factor, do I include the items which cross-load onto other factors?
2. When I present the final scale/solution, how do I present the cross-loading items?
3. There is one factor which is negatively predicted ('Lack of Support' [all items have a negative value]), however, I have changed the scoring so it positively predicts the factor (Support). There is one item in this subscale which cross-loads with another. How does this impact the scoring? Should I try to remove this item?
4. I started with a 37-item scale and I now have 27 items. How many items are too many to delete? At what point should I just accept it as an overall scale with a good Cronbach's alpha (.921) and say further research into factors and subscales is needed?
I am reluctant to delete the few cross-loading items I have remaining, as when they are removed from the analysis, the reliability score decreases for the individual factors and the overall scale.
This is my first time doing an EFA and so I would be very grateful for any advice or recommendations you may have.
Thank you.
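Whichever way the cross-loading items end up being assigned, the per-factor alphas are easy to double-check outside SPSS. A minimal Python sketch of Cronbach's alpha for one factor's item set; the response data here are simulated and purely hypothetical:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
# hypothetical Likert responses: 100 respondents, 5 items loading on one factor
latent = rng.normal(size=(100, 1))
scores = np.clip(np.round(3 + latent + 0.8 * rng.normal(size=(100, 5))), 1, 5)

alpha = cronbach_alpha(scores)
print(f"alpha = {alpha:.3f}")
```

Running the function once per candidate item assignment (with and without the cross-loading items) makes the reliability trade-off explicit.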
I have 92 respondents and 180 questions. I am using SPSS, and the software says "scale or part of scale has zero variance and will be bypassed". Can anyone help me?
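That SPSS warning usually means at least one item received the identical response from every participant, so its variance is zero. A hedged numpy sketch (with made-up data) that locates such items:

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical data: 92 respondents, 10 items on a 1-5 scale
responses = rng.integers(1, 6, size=(92, 10)).astype(float)
responses[:, 3] = 5.0  # every respondent gave the same answer to item 4

variances = responses.var(axis=0, ddof=1)
zero_var_items = np.where(variances == 0)[0]
print("zero-variance item indices:", zero_var_items)
```

Removing (or at least inspecting) the flagged columns before rerunning the reliability analysis resolves the warning.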
Dear colleagues,
has the calculation of McDonald's omega been implemented in SPSS 25 yet? I found some older threads on RG concerning this question, but nothing recent.
If you know how to do this in SPSS or Mplus, I would be very grateful. I would kindly ask you not to suggest R, because I am not familiar with that programme.
Thank you in advance and kind regards
Marcel Grieger
I came across a situation where a cross-sectional, survey-based study was conducted with a questionnaire the researchers designed themselves,
and they finished data collection, covering the whole target sample.
The research team didn't do a pilot study.
When they wanted to start the analysis, they computed Cronbach's alpha to measure reliability.
- Cronbach's alpha turned out to be 0.94 (indicating excellent reliability).
- But they had only established face validity for the questionnaire, and didn't do anything else, such as Principal Component Analysis (PCA).
Q1: Can they simply go with the flow and write in the methodology and results sections that they computed Cronbach's alpha and that it showed a great result, etc.?
Q2: And can they say that the questionnaire is a valid questionnaire?
Hi there,
My questionnaire consists of 18 MCQs, each with one correct answer and three incorrect answers. I'm measuring participant scores on the questionnaire before and after watching a video, in two separate groups.
From what I've read, Cronbach's alpha is used to test scaled data (i.e. Likert scales) for reliability, so I'm unsure whether it's appropriate for my questionnaire.
Can I also use it on my questionnaire, or is there an alternative more appropriate for my data?
If the answer is yes I can use it, do I analyse it exactly the same in SPSS as I would scaled data? i.e.: Analyze> Scale> Reliability Analysis, all questions into the 'items' box, tick descriptive statistics options 'item' 'scale' 'scale if item deleted' and 'correlations' in inter-item options?
Thank you in advance!
David
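For items scored dichotomously (1 = correct, 0 = incorrect), Cronbach's alpha reduces to KR-20, so the same reliability procedure applies. A minimal sketch of the KR-20 computation on simulated, hypothetical MCQ scores:

```python
import numpy as np

def kr20(scores: np.ndarray) -> float:
    """Kuder-Richardson 20: Cronbach's alpha for 0/1-scored items.
    Uses population (ddof=0) variances, as in the classic formula."""
    k = scores.shape[1]
    p = scores.mean(axis=0)            # proportion correct per item
    q = 1 - p
    total_var = scores.sum(axis=1).var(ddof=0)
    return (k / (k - 1)) * (1 - (p * q).sum() / total_var)

rng = np.random.default_rng(2)
# hypothetical 18-item MCQ, 60 test-takers, scored 0/1, varying item difficulty
ability = rng.normal(size=(60, 1))
difficulty = rng.normal(scale=0.5, size=18)
scores = (ability + rng.normal(size=(60, 18)) > difficulty).astype(float)

kr = kr20(scores)
print(f"KR-20 = {kr:.3f}")
```

In SPSS the workflow you describe (Analyze > Scale > Reliability Analysis) produces the same number when the items are coded 0/1.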
Do you have any experience with probabilistic software for structural reliability assessment? Any links?
I am using answers to a questionnaire in which existing scales from scientific papers are used. One particular (set of) concepts is measured with 24 questions, which are divided across three different subscales by the original author who developed the questions. However, reliability analysis of these subscales (using the collected answers) shows that for two of the three subscales, Cronbach's alpha is lower than 0.7. Furthermore, all subscales contain one or two questions which, if removed, would increase the Cronbach's alpha, although for two subscales, the resulting Cronbach's alpha would still be lower than 0.7.
Is it acceptable to remove certain questions from the subscales, or should I continue to use the original subscales in this situation?
Thank you in advance.
Dear colleagues,
I performed a cross-cultural study using two questionnaires (66 and 12 items). The original versions of these questionnaires were used in the first country, and the tools were also translated into the second country's language for administration there. This was the first time the translated versions had been used for research purposes. There were 216 participants in the first country and 265 in the second. Is it required to perform confirmatory factor analysis (for the two linguistic versions separately), or is it enough to report internal consistency coefficients in this particular publication? What am I actually supposed to do to meet the required standards for reporting the psychometric properties of translated questionnaires?
Thank you for your suggestions in advance.
I'm currently investigating how to apply RCM with preventive maintenance to a fuel-transport truck fleet. My company does not have a list of functional failures or failure modes, but according to authors such as Dhillon, Pistalleri and Dixon, generic failure databases exist. I searched Google and Google Scholar for this kind of data but didn't find anything. So I am asking for your help: do you know of any public generic failure database that could give my team a base of information to improve our investigation?
PS: Sorry for any spelling or semantic errors. I am not a native English speaker, and almost all material on this topic is in English.
Thanks,
Hi everyone,
I'm trying to determine which test to run to assess the accuracy of a model that classifies vegetation. I have ground-truth values and the values the model produced. I've considered Pearson correlation and intra-class correlation, but there are many tests, so I'm stumped on which to choose. I've seen past literature use Pearson correlation, though my data aren't normal even after a log transformation.
Thanks much!
COVID-19 is affecting all kinds of human activities, and research is not exempt. Many ongoing research studies are now paused because of COVID-19: patient recruitment cannot continue, follow-up visits do not stick to schedule, intervention procedures may be delayed, and blood-test monitoring is postponed.
I would expect a higher loss-to-follow-up rate during this period, which would affect the reliability of research. And even after COVID-19, will subjects recruited now differ from those recruited before?
What do you think?
Hi,
My main analysis is on an intention to treat dataset, although I am looking at the per-protocol dataset to confirm if there were any differences. When running Cronbach's Alpha on my many scales, should I run it on the ITT dataset, the PP data, or both?
Thanks,
Max
Update
I have a scale (12 items)
I go to Analyze -> Scale -> Reliability Analysis and get my Cronbach's alpha (0.5).
BUT 2 of my items are reverse-keyed. If I recode these two items so that they are no longer reversed, I get alpha = 0.8.
Am I right that I should recode these items before computing Cronbach's alpha?
written earlier
I conducted a study (correlation plan).
Among other instruments, I used 2 psychological tests, which had been adapted by another author according to all the rules.
And I run into problems:
Situation1 (solved)
My first test (14 items) has 2 subscales. In the Ukrainian adaptation, Cronbach's alpha for the scales is 0.73 and 0.68. But when I computed Cronbach's alpha in my own study, I got 0.65 and 0.65.
Question 1: Should I compute correlations with this test, or should I exclude it from the analysis?
Situation 2 (see update)
My second test is Zimbardo's Time Perspective Inventory (56 items). In the Ukrainian adaptation, four of the five scales have a Cronbach's alpha above 0.7; one scale is at 0.65.
But in my research, only 3 scales are fine, with alphas above 0.7.
The other two scales have very low Cronbach's alphas: 0.55 and 0.49.
Question 2: Should I exclude these two low scales and compute correlations only with the 3 scales whose Cronbach's alpha is above 0.7?
PS: N=336 in my study
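To illustrate the reverse-coding point from the update with simulated (hypothetical) data: recoding a reverse-keyed item on a 1-5 scale as (1 + 5) − score restores its positive correlation with the rest of the scale, and alpha rises accordingly:

```python
import numpy as np

def cronbach_alpha(items):
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum()
                            / items.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(3)
latent = rng.normal(size=(200, 1))
direct = np.clip(np.round(3 + latent + rng.normal(size=(200, 10))), 1, 5)
# two reverse-keyed items on the same 1-5 scale
reverse = np.clip(np.round(3 - latent + rng.normal(size=(200, 2))), 1, 5)

raw = np.hstack([direct, reverse])
recoded = raw.copy()
recoded[:, 10:] = 1 + 5 - recoded[:, 10:]   # recode: (min + max) - score

a_raw = cronbach_alpha(raw)
a_rec = cronbach_alpha(recoded)
print(f"alpha raw = {a_raw:.2f}, alpha recoded = {a_rec:.2f}")
```

The direction of the change (low alpha before recoding, higher after) matches the 0.5 vs. 0.8 pattern described above.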
I am studying system reliability using RBDs (Reliability Block Diagrams). I need to find the failure rate, but I am not sure about the required data-set size. Is there any rule about sample size?
In general, an approximate probability density function of the performance function can be obtained by the moment method. Are there any other methods?
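For context, the simplest (second-moment) version of the idea can be sketched against a crude Monte Carlo check; the performance function and distributions below are hypothetical:

```python
import numpy as np
from math import erf, sqrt

def g(x1, x2):
    return 3 * x1 - x2 + 6            # hypothetical linear performance function

rng = np.random.default_rng(9)
x1 = rng.normal(2.0, 0.5, 200_000)    # assumed input distributions
x2 = rng.normal(8.0, 1.0, 200_000)
gv = g(x1, x2)

# second-moment approximation: beta = mu_g / sigma_g, Pf = Phi(-beta)
beta = gv.mean() / gv.std()
pf_moment = 0.5 * (1 - erf(beta / sqrt(2)))
pf_mc = (gv < 0).mean()               # direct Monte Carlo estimate
print(f"beta = {beta:.2f}, Pf(moment) = {pf_moment:.4f}, Pf(MC) = {pf_mc:.4f}")
```

Here the two estimates agree because g is linear in normal inputs; higher-moment or kernel-based reconstructions become relevant when g is nonlinear or the inputs are non-normal.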
Hi!
Currently I am analyzing my data and I've got some results I don't know what to do with. I tested whipped cream on whipping time at three different moments: day 0, day 1 and day 2 (three different groups). I therefore used a one-way ANOVA to test whether the group means differ. This test is significant; however, when I use a post-hoc test to identify which groups differ, the results are all non-significant. The variances are equal, so I used the Tukey test (but every other post-hoc test available in my program gives the same non-significant results).
I think this is because the ANOVA may give a Type I error (incorrectly rejecting the H0 that there is no difference between the groups), while the post-hoc test makes a more conservative comparison between the groups. But I don't know exactly how this works.
Does anybody know how to draw a clear conclusion from these results? I would be very grateful for any help you can provide!
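This pattern is not unusual: the omnibus F test pools evidence across all three groups, while each pairwise comparison carries a multiplicity adjustment, so the omnibus test can reach significance when no single pair does. A small scipy sketch of the same workflow (Bonferroni-corrected pairwise t-tests stand in for Tukey's HSD here, and the whipping-time data are made up):

```python
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(4)
# hypothetical whipping times (seconds) on day 0, 1 and 2
groups = {
    "day0": rng.normal(120, 10, 8),
    "day1": rng.normal(128, 10, 8),
    "day2": rng.normal(133, 10, 8),
}

f, p_omnibus = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f:.2f}, p = {p_omnibus:.4f}")

pairs = list(combinations(groups, 2))
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    p_adj = min(p * len(pairs), 1.0)   # Bonferroni correction
    print(f"{a} vs {b}: adjusted p = {p_adj:.4f}")
```

With small groups, the adjusted pairwise p-values can easily all sit above .05 even when the omnibus p is below it; that is a power issue, not necessarily a Type I error.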

A correct interpretation of reliability analysis is decisive for researchers and industrial developers in refining their designs and in preparing proper maintenance schedules and safety analyses. However, I still see many designers preferring classical safety factors over reliability analysis techniques. What is your sense of this?
For example, imagine you are going to buy a bearing and I tell you that this bearing's reliability is 94% for an expected life of 5 years. This means that if you test 100 bearings under normal operating conditions, about 6 of them should have failed by 5 years. Does this kind of analysis make sense for your research and development?
And if the answer is yes, how do you use the outcome of reliability analysis in your research area? The answer is important to me because I am about to start developing commercial software for reliability analysis, and it is important to learn what experts expect from reliability analysis methods.
Thanks,
Sajad
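As a worked version of the bearing example: under a constant-failure-rate (exponential) assumption, which is an assumption made here for illustration, the quoted numbers imply a failure rate, an MTTF, and the expected number of failures directly:

```python
import math

R, t = 0.94, 5.0                    # reliability 94% at 5 years
lam = -math.log(R) / t              # implied constant failure rate (per year)
mtbf = 1 / lam                      # MTTF under the exponential model

expected_failures = 100 * (1 - R)   # out of 100 bearings on test
print(f"failure rate = {lam:.4f} / year")
print(f"MTTF         = {mtbf:.1f} years")
print(f"expected failures by year 5: {expected_failures:.0f} of 100")
```

The "about 6 of 100" figure in the question falls out of the last line; a Weibull model with a shape parameter other than 1 would change the failure-rate and MTTF figures but not that count.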
I am evaluating the reliability and availability of a hydropower plant using dynamic fault tree gates. A Dynamic Bayesian Network (DBN) is used to evaluate the top-event probability. I cannot figure out how many time slices I should use for my network, or whether my time slice should be 1 month / 6 months / 1 year, or 1 year / 2 years / 5 years.
Also, should all power-plant components, with both static and dynamic gates, be represented across the different time slices in the DBN, or only the components with dynamic gates?
I am a Master's student writing a thesis on coordination in construction. My idea is to rank 59 coordination factors by importance and by time consumed, based on questionnaire survey data. For measuring the most important factors I will use the Relative Importance Index method; for measuring how time-consuming the factors are, which method
would be suitable? Another question: I will use reliability analysis and descriptive statistics as general analyses; what other types of analysis would also be suitable?
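For reference, the Relative Importance Index itself is simple to compute, RII = ΣW / (A × N), and the same formula can in principle be applied to a "time consumed" rating scale as well. A sketch with hypothetical 1-5 ratings:

```python
import numpy as np

def rii(ratings: np.ndarray, a_max: int = 5) -> float:
    """Relative Importance Index for one factor: sum(W) / (A * N)."""
    return ratings.sum() / (a_max * len(ratings))

rng = np.random.default_rng(5)
# hypothetical 1-5 importance ratings from 40 respondents for 3 factors
factors = {name: rng.integers(1, 6, 40) for name in ["F1", "F2", "F3"]}
ranking = sorted(factors, key=lambda f: rii(factors[f]), reverse=True)
for name in ranking:
    print(f"{name}: RII = {rii(factors[name]):.3f}")
```

RII always falls in (0, 1], so factors rated on the two criteria (importance, time) can be ranked on comparable scales.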
Hello researchers,
I have a question regarding the switching method for selecting standby units for operation in a complex system, for reliability analysis. Is there an appropriate method for selecting a standby unit?
Hi
Historical data are used to forecast the number of functional failures in passenger trains. However, there is always a difference between the forecast and the actually observed data. I am wondering which technique or approach is suitable for minimising the forecast error in the case of railway data.
I would be grateful if you could share a useful article or case study.
Hi
I have run a PCA on 3 years of data, which gives me factor scores for each item/subject for every year. Now I need a single value from the three to use in my model.
Should I take the average of the factor scores, or should I use the most recent value? What is an appropriate way to use factor scores for transport-delay data analysis?
I am not sure whether there is a novel method that relates state estimation to the prediction of a survival function or reliability distribution function, which is the more conventional approach in prognostics. Can anyone answer this question?
For example,
A program that evaluates LOLE, SAIDI and CAIDI on a test network.
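For context, the standard distribution reliability indices (as defined in IEEE 1366) are simple ratios; a toy sketch with hypothetical interruption data:

```python
# hypothetical interruption log: (customers affected, outage duration in minutes)
events = [(500, 90), (1200, 30), (300, 240)]
total_customers = 10_000

saifi = sum(n for n, _ in events) / total_customers        # interruptions/customer
saidi = sum(n * d for n, d in events) / total_customers    # minutes/customer
caidi = saidi / saifi                                      # minutes/interruption

print(f"SAIFI = {saifi:.3f} interruptions/customer")
print(f"SAIDI = {saidi:.1f} min/customer")
print(f"CAIDI = {caidi:.1f} min/interruption")
```

LOLE, by contrast, requires a generation-adequacy model (load duration curve plus unit outage probabilities) rather than an interruption log.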
We all know there are scales that measure purchase intention (or willingness) for a product or service. I am interested to know if there is some (dependable) scale that measures the intention (propensity) to stop purchasing a product after an unpleasant experience (dissatisfaction).
Thank you!
Here is my situation: I have used the standardized Health Literacy Questionnaire (HLQ) (a tool comprising 9 scales) to look at musicians' health literacy for the first time. However, the HLQ has never been validated on musicians. After collecting 479 responses from musicians, I cleaned the data and ran a CFA (using AMOS). The model fit was poor, so I ran an EFA (in SPSS), which suggested I may have about 4 factors (instead of 9), with one of them having a Cronbach's alpha below .7. I then ran a CFA again, but it does not fit the EFA solution at all. What shall I do to test construct validity?
For the EFA, I used the eigenvalue > 1 rule and parallel analysis, conducted an orthogonal rotation (varimax), and suppressed small coefficients below .4.
Many, MANY thanks!
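Since parallel analysis was part of the retention decision, here is a minimal, hedged sketch of Horn's procedure: compare the observed correlation-matrix eigenvalues with those of random data of the same shape and retain factors whose eigenvalue exceeds (say) the 95th percentile of the random ones. The data below are simulated, not the HLQ responses:

```python
import numpy as np

def parallel_analysis(data, n_sims=200, percentile=95, seed=0):
    """Horn's parallel analysis: number of factors whose observed eigenvalue
    exceeds the chosen percentile of eigenvalues from random normal data."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    sim = np.empty((n_sims, p))
    for i in range(n_sims):
        r = rng.normal(size=(n, p))
        sim[i] = np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
    threshold = np.percentile(sim, percentile, axis=0)
    return int((obs > threshold).sum())

rng = np.random.default_rng(6)
# hypothetical data: 479 respondents, 12 items, 2 genuine underlying factors
f = rng.normal(size=(479, 2))
loadings = rng.uniform(0.5, 0.9, size=(2, 12))
data = f @ loadings + rng.normal(size=(479, 12))

n_factors = parallel_analysis(data)
print("factors to retain:", n_factors)
```

Parallel analysis typically retains fewer factors than the eigenvalue > 1 rule, which may help reconcile the 9-scale model with the 4-factor EFA suggestion.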
Dear all,
Does anyone know how I can estimate the NHPP reliability function with a non-parametric method?
I know that kernel density estimation is widely used in this area, but its theory seems very complicated.
I was wondering if you could suggest an example or statistical software directly.
I am also attaching the formulas needed for the kernel model.
Thanks for your attention.
Best/Hamzeh
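As a minimal starting point (not the full kernel theory in the attached formulas), a Gaussian-kernel estimate of the NHPP intensity from event times can be written in a few lines, with reliability over an interval obtained as exp(−∫λ). The event times and bandwidth below are hypothetical:

```python
import numpy as np

def kernel_intensity(event_times, t_grid, bandwidth):
    """Gaussian-kernel estimate of the NHPP intensity lambda(t)."""
    d = (t_grid[:, None] - np.asarray(event_times)[None, :]) / bandwidth
    k = np.exp(-0.5 * d**2) / (bandwidth * np.sqrt(2 * np.pi))
    return k.sum(axis=1)

# hypothetical failure times of a repairable unit (hours)
events = [120, 340, 610, 700, 815, 870, 930, 965]
t = np.linspace(0, 1000, 501)
lam = kernel_intensity(events, t, bandwidth=80.0)

# reliability over the next 100 h from t = 900: R = exp(-integral of lambda)
mask = (t >= 900) & (t <= 1000)
lam_m, t_m = lam[mask], t[mask]
cum = np.sum(0.5 * (lam_m[1:] + lam_m[:-1]) * np.diff(t_m))  # trapezoid rule
r_est = np.exp(-cum)
print(f"estimated P(no failure in (900, 1000]) = {r_est:.3f}")
```

Bandwidth choice drives the result; boundary correction near t = 0 and the end of observation is the complicated part the kernel literature deals with.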

Hello! I am lost as to which tests I should use. I am doing a master's thesis in marketing/IS.
I have two groups: treatment (used AI) and control (did not use AI). I used an experimental design with scenarios; both groups went through the same scenario and purchased a luxury brand. I would like to verify whether using AI during a consumer's purchasing process changes their perception of the brand, based on 4 constructs (uniqueness of the brand, quality of the brand, modernity, and conspicuousness). I am using reflective scale items (I think?).
I have been looking at papers and online, and I believe I should run a PCA and a MANOVA, but isn't there another step to take before running a MANOVA? And why?
Thank you very much.
I am studying complex-system reliability analysis with BNs and am looking for a project case involving a multi-state system with no fewer than 3 states in one node.
If you have cases that meet these conditions, please help me. Thanks very much!
Hi.
I am currently completing the final steps of my school project, an evaluation of the operational reliability of the Zlín aircraft at our school. The aim of my work is to show that our fleet is safe. I have already written about every single failure that appeared in the last 6 years and discussed our aircraft maintenance. The last part is the reliability calculation. I have already calculated the mean time between failures (the easy one), but in my opinion that is not enough; I would like to add other figures to strengthen the case. Do you know any other formulas for quantities connected to aircraft reliability? The data I have are: types of failures, number of failures, total time flown each year for every aircraft, and the calculated mean time between failures. I found some formulas on the internet (picture attached), but the description and calculation method are too much for me (I am not a mathematics student). I am therefore looking for simpler formulas, or a program that would help me. I am really grateful for every answer.
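As a simple next step beyond MTBF: under a constant-failure-rate (exponential) assumption, the MTBF you already have directly gives a failure rate and a mission reliability. A sketch with made-up fleet numbers (substitute your own totals):

```python
import math

total_flight_hours = 4_800     # hypothetical: fleet total over the period
n_failures = 32                # hypothetical failure count

mtbf = total_flight_hours / n_failures
lam = 1 / mtbf                          # failure rate per flight hour
r_mission = math.exp(-lam * 2.0)        # P(no failure on a 2-hour flight)

print(f"MTBF = {mtbf:.0f} h, failure rate = {lam:.5f}/h")
print(f"reliability over a 2 h flight = {r_mission:.4f}")
```

Computed per year and per aircraft, the failure rate also shows whether reliability is trending up or down over the 6 years, which is often more persuasive than a single fleet-wide number.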

I am looking to test item discrimination of my newly constructed psychological well-being scale and would appreciate any references for suggested ranges of poor, good and excellent discriminatory values.
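One widely used discrimination index is the corrected item-total correlation (each item against the sum of the remaining items); cut-offs around .2-.3 are often cited as the lower bound for acceptable items, though sources differ on the exact ranges. A sketch on simulated, hypothetical data:

```python
import numpy as np

def corrected_item_total(scores: np.ndarray) -> np.ndarray:
    """Correlation of each item with the sum of the remaining items."""
    total = scores.sum(axis=1)
    return np.array([
        np.corrcoef(scores[:, j], total - scores[:, j])[0, 1]
        for j in range(scores.shape[1])
    ])

rng = np.random.default_rng(7)
latent = rng.normal(size=(150, 1))
good = np.round(3 + latent + rng.normal(size=(150, 6)))   # discriminating items
weak = np.round(3 + rng.normal(size=(150, 2)))            # noise-only items

r = corrected_item_total(np.hstack([good, weak]))
print(np.round(r, 2))
```

The noise-only items cluster near zero while the discriminating items sit well above any common threshold, which is the pattern the published cut-off ranges are meant to detect.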
What should be the scientific approach to designing a test framework that justifies the validity and reliability of the data used in research?
Research tends to be based on data inputs from primary and secondary sources.
To validate qualitative and quantitative data, it is recommended to develop an effective test framework that can help justify the validity and reliability of the data.
I am having problems with the Polynomial Chaos Expansion regression method. I have some questions related to this topic and its MATLAB code.
If anyone can help me, I will be very appreciative.
My e-mail address: muratbarissarigul@gmail.com
If the problem can be solved, I can offer a gift or payment, since this problem is the main obstacle in my thesis.
The MATLAB code is given in the attached .m files.
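For orientation, a minimal 1-D PCE regression (probabilists' Hermite basis for a standard-normal input, fitted by least squares) can be sketched in a few lines. This is Python rather than MATLAB, and the response function is hypothetical:

```python
import numpy as np
from numpy.polynomial import hermite_e as He

def model(x):
    return np.exp(0.3 * x) + 0.5 * x**2   # hypothetical response

rng = np.random.default_rng(8)
xi = rng.normal(size=400)                 # standard-normal input samples
y = model(xi)

degree = 5
# design matrix of probabilists' Hermite polynomials He_0 .. He_5
Psi = np.column_stack([He.hermeval(xi, np.eye(degree + 1)[k])
                       for k in range(degree + 1)])
coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)

# evaluate the PCE surrogate at a test point
x_test = 0.7
approx = He.hermeval(x_test, coef)
print("PCE coefficients:", np.round(coef, 3))
print(f"model({x_test}) = {model(x_test):.4f}, PCE = {approx:.4f}")
```

Because E[He_k(ξ)] = 0 for k ≥ 1, coef[0] is also the surrogate's estimate of the response mean; multi-dimensional inputs use tensorised bases of the same polynomials.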
CR is often advocated as an alternative because Cronbach's alpha's tau-equivalence assumption is usually violated. My alpha returned a value of 0.64 (low, but I guess I can proceed, since I have seen this practice before and authors such as Hair and Kline accept thresholds between 0.6 and 0.7). Anyway, since my factor is homogeneous but has different loadings across the 4 items involved, I think CR would be the better alternative. Surprisingly, my CR returned a value of 0.787 using a calculator based on the formula provided by Raykov (1997).
Is such a large difference between the two coefficients possible and logical? One paper (Peterson & Kim, 2012) said that although CR is a better estimate, there usually isn't much of a difference between the values.
Assuming that CR is indeed correct, can I proceed further and run a multiple regression analysis based on the reliability provided by CR rather than by Cronbach's alpha? Thank you.
EDIT: I am using this calculator/formula.
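As a cross-check on the calculator, the Raykov-style composite reliability from standardized loadings is CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)). A sketch with hypothetical loadings for a 4-item factor (unequal loadings, as described above):

```python
import numpy as np

def composite_reliability(loadings):
    """Composite reliability from standardized factor loadings."""
    lam = np.asarray(loadings)
    num = lam.sum() ** 2
    return num / (num + (1 - lam**2).sum())

# hypothetical standardized loadings for a 4-item congeneric factor
loadings = [0.85, 0.75, 0.60, 0.45]
cr = composite_reliability(loadings)
print(f"CR = {cr:.3f}")
```

With loadings this unequal, CR near 0.77 alongside an alpha in the low 0.6s is plausible, since alpha is a lower bound under violated tau-equivalence.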
Is factor analysis a MUST when adopting or adapting research instruments in different cultures?
Hello,
I am looking for standards for discharges of treated wastewater into the receiving environment in the Ontario region of Canada.
I tried several websites, but without success.
Discharge standards for TSS, COD, BOD, TN, and TP.
Thank you for your help.
Fuzzy set theory (FST), as a theory of uncertainty, plays a significant role in real-world problems such as operations research, medical decision making, risk assessment, social science, decision making, and reliability analysis.
Papers:
L.A. Zadeh, Fuzzy sets, Information and Control, 8 (1965) 338-356.
S.H. Wei, S.M. Chen, A new approach for fuzzy risk analysis based on similarity measures of generalized fuzzy numbers, Expert Systems with Applications, 36(1) (2009) 589-598.
J. Ye, The Dice similarity measure between generalized trapezoidal fuzzy numbers based on the expected interval and its multicriteria group decision-making method, Journal of the Chinese Institute of Industrial Engineers, 29(6) (2012) 375-382. DOI: 10.1080/10170669.2012.710879.
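As a concrete entry point, a simple similarity measure for trapezoidal fuzzy numbers, which the Wei-Chen and Ye papers above refine for generalized fuzzy numbers, can be sketched as follows; the example numbers are hypothetical:

```python
def similarity(a, b):
    """Simple similarity of two trapezoidal fuzzy numbers (a1..a4 in [0, 1]):
    S(A, B) = 1 - mean absolute difference of the four defining points."""
    return 1 - sum(abs(x - y) for x, y in zip(a, b)) / 4

A = (0.1, 0.2, 0.3, 0.4)
B = (0.2, 0.3, 0.4, 0.5)
s = similarity(A, B)
print(f"S(A, B) = {s:.2f}")
```

The refined measures in the cited papers additionally weight by height (for generalized fuzzy numbers), centre-of-gravity distance, or expected intervals, but they reduce to comparisons of this basic form.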
To reduce the size of a large BDD, different approaches are used: optimal variable ordering (OBDD), reduction operations (ROBDD), zero-suppressed BDDs (ZBDD), etc. But sometimes, for very large fault trees, it is impossible to build the full (exact) BDD even after applying these approaches, so some cut-off must be used. Please recommend some approaches, or give me some references, for BDD cut-offs. Thanks a lot in advance. Regards, Sergey.