
Reliability Analysis - Science topic

Explore the latest questions and answers in Reliability Analysis, and find Reliability Analysis experts.
Questions related to Reliability Analysis
  • asked a question related to Reliability Analysis
Question
5 answers
Hi,
I have developed a semi-structured interview assessment tool for a clinical population which gives scores of 1, 2, or 3 (each score is qualitatively described on an ordinal scale), using a mixed methodology. The tool has 31 questions.
The content validation was done in Phases 1 and 2, and the tool was administered to a small sample (N = 50) in Phase 3 to establish its psychometric properties. The interview was re-administered to 30 individuals for retest reliability. The conceptual framework on which the tool is developed is multidimensional.
When I ran Cronbach's alpha for the entire interview it was .75, but for the subscales it falls between 0.4 and 0.5. The inter-item correlations are quite low, some are negative, and many are non-significant. The item-total correlation for many items is below 0.2, and some are negative; based on the established criteria we would have to remove many items.
I am hitting a roadblock in the analysis and was wondering whether there is any other way in which we can establish the reliability of a qualitative tool with a small sample, or other ways to interpret the data where a mixed methodology has been used (QUAL-quan).
Since the sample is small I will be unable to do factor analysis, but I will be establishing convergent/divergent validity with other self-report scales.
Thanks in advance
Relevant answer
Answer
To get a high level of internal consistency, all of your measures need to be positively correlated. So each of your scores would need to indicate frequently/mostly/always for strongly related behaviors. If any of them run in the "wrong" direction (i.e., are negatively scored), they will need to be reversed.
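For anyone wanting to automate that check, here is a minimal R sketch (using the psych package) of reverse-coding before computing alpha; the data and the choice of which items are reversed are hypothetical placeholders.
```r
library(psych)

# Hypothetical data: five 1-5 Likert items, with item4 and item5 negatively worded.
set.seed(1)
items <- as.data.frame(matrix(sample(1:5, 100 * 5, replace = TRUE), ncol = 5))
names(items) <- paste0("item", 1:5)

# Reverse-code the negatively worded items: for a 1-5 scale, reversed = (5 + 1) - raw.
items$item4 <- 6 - items$item4
items$item5 <- 6 - items$item5

# Cronbach's alpha on the recoded items; the r.drop column (item-total correlation
# with the item removed) flags items that still point in the "wrong" direction.
alpha(items)
```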
  • asked a question related to Reliability Analysis
Question
4 answers
I am trying to calculate the reliability of a product (i.e., multiple devices of the same item). I would like to know what techniques can be used and the different methods of determining an ideal failure situation. I am already aware of the different indices, like MTBF, MTTF, and the basic reliability index for a single item of a product. I appreciate your suggestions and help. Thank you.
Relevant answer
Answer
There are so many techniques (from qualitative to quantitative approaches, from previsional to operational approaches) that there is no "best" technique: it strongly depends on what you want to do and what field feedback is available. The range runs from simple FMECA/HAZOP to stochastic Petri nets and includes Markovian and Boolean approaches (reliability block diagrams, fault trees, event trees).
You could have a look at my book "Reliability Assessment of Safety and Production Systems". It provides a comprehensive description of the relevant techniques for dependability (reliability, availability, maintainability, safety, ...) analyses and calculations.
Perhaps you may have free access through a subscription of your company/university.
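As a flavor of the Boolean end of that range, here is a minimal reliability-block-diagram sketch in plain R; the components and their reliabilities are my own hypothetical example, not taken from the book.
```r
# Component reliabilities over the mission time (hypothetical values).
r <- c(pump = 0.95, valve = 0.99, sensor_a = 0.90, sensor_b = 0.90)

# Series structure: the system works only if every block works.
series <- function(p) prod(p)

# Parallel (redundant) structure: the system fails only if every block fails.
parallel <- function(p) 1 - prod(1 - p)

# Example: pump and valve in series, feeding two redundant sensors.
system_reliability <- series(c(r["pump"], r["valve"],
                               parallel(c(r["sensor_a"], r["sensor_b"]))))
system_reliability  # ~0.931
```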
  • asked a question related to Reliability Analysis
Question
7 answers
Hello! I have this scale which initially had 10 items. I had to remove items 8 and 10 because they correlated negatively with the scale, and then I removed item 9 because Cronbach's alpha and McDonald's omega were both below .7; after removing it they are now above .7, as shown in the picture.
My question is, should I also remove item 7 (IntEm_7) because it would raise the reliability coefficients even more and its item-rest correlation is low (0.16), or should I leave it in? Is it necessary to remove it? And also, would it be a problem if I'm now left with only 6 items out of 10?
My goal is to see the correlation between three scales and this is one of them. I am using Jasp.
Any input is appreciated, thank you!
Relevant answer
Answer
Pandia Vadivu Pandian When you cut and paste an answer from an AI, you should state your source, just as you would in any other academic setting.
  • asked a question related to Reliability Analysis
Question
11 answers
Which method is more accurate and popular for testing the validity and reliability of my scales?
Relevant answer
Answer
Cronbach's alpha is used for reliability; AVE (average variance extracted) is used for construct validity.
  • asked a question related to Reliability Analysis
Question
5 answers
Which would be preferred first: factor analysis or reliability analysis of a Likert-type scale?
Relevant answer
Answer
First reliability testing and then factor analysis. If the data are not reliable, there is no point in making factors out of them.
  • asked a question related to Reliability Analysis
Question
1 answer
How reliable are the h-index and citation evaluations of academicians?
Relevant answer
Answer
The opinion that the first author always did the most for the results and for writing an article is not correct. Sometimes the leader of a group of authors is in first place; sometimes all authors are listed in alphabetical order.
  • asked a question related to Reliability Analysis
Question
3 answers
Hi,
I have conducted an EFA on three items, and all items load on one factor. I then ran a reliability analysis with the three variables to ensure internal reliability using Cronbach's alpha.
My question: Should I run a reliability analysis before or after the EFA?
Does the order really matter in this case?
Thank you in advance!
Relevant answer
It would be best if you first ran a principal component analysis. This will help you group your observed measures into groups or constructs. A reliability analysis follows the PCA to check whether each construct (factor or group) measures what it is supposed to measure, also known as internal consistency.
  • asked a question related to Reliability Analysis
Question
2 answers
I'm working on my thesis about reliability analysis of composite structures. Can I use the maximum deflection of the plate structure as the failure criterion to define the limit state function for reliability analysis, or should I use conventional failure criteria (e.g., the Tsai-Wu criterion)?
Relevant answer
Answer
Good day!
Here there is a difference between the material failure criterion and the failure criterion of the specific structure. In general, the nucleation of material damage is not necessarily a condition for the failure of the structure as a whole (subcritical damage); likewise, the structure can lose its functionality without accompanying material damage (loss of stability, change of shape, etc.). If, for a specific structure under specific conditions, it is more convenient for you to use a simplified criterion, for example a strain-based one, it should still be justified quantitatively, including against generally accepted criteria for material fracture.
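If a deflection-based limit state is adopted, the reliability calculation itself is straightforward. Here is a minimal Monte Carlo sketch in R, in which the load and stiffness distributions, the toy deflection model, and the allowable deflection are all assumptions for illustration only.
```r
set.seed(42)
n <- 1e6

# Random inputs (assumed lognormal for illustration).
load      <- rlnorm(n, meanlog = log(10),  sdlog = 0.10)   # load, kN
stiffness <- rlnorm(n, meanlog = log(500), sdlog = 0.05)   # plate stiffness, kN/m

w_allow <- 0.025              # allowable deflection, m (assumed serviceability limit)
w       <- load / stiffness   # toy deflection model

# Limit state g = w_allow - w; g <= 0 means failure.
g    <- w_allow - w
pf   <- mean(g <= 0)          # estimated probability of failure
beta <- -qnorm(pf)            # corresponding reliability index
c(pf = pf, beta = beta)
```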
  • asked a question related to Reliability Analysis
Question
5 answers
Hello everyone, I did a survey with a 4-item, 3-point Likert scale (e.g., not true, sometimes, true) and a 5-item, 4-point Likert scale (e.g., strongly disagree, disagree, agree, strongly agree). I computed Cronbach's alpha, but it was too low due to the low number of items.
Would there be an alternative way of reporting the reliability analysis that would result in a higher score? Ideally, I would like to create scales from these items.
I have also thought of reporting only the correlation coefficients of these scales.
Could you help me?
Relevant answer
Answer
Cronbach's Alpha in SPSS Statistics - procedure, output and interpretation of the output using a relevant example | Laerd Statistics.
  • asked a question related to Reliability Analysis
Question
1 answer
Hello all,
Could anyone recommend good references on using PIMS data for predictive analysis of equipment and machine failures in industry?
Relevant answer
Answer
Sure, here are some references on using PIMS data for reliability analysis and fault prediction:
  1. "Predictive maintenance using PIMS data: A case study on a centrifugal compressor" by E. S. Onyango, M. M. Islam, and S. M. Islam. This paper presents a case study of using PIMS data to predict the remaining useful life of a centrifugal compressor.
  2. "Predictive maintenance of induction motors using PIMS data" by M. Shahjalal, M. M. Islam, and S. M. Islam. This paper discusses the use of PIMS data for predictive maintenance of induction motors.
  3. "Fault detection and diagnosis using PIMS data: A review" by W. L. Cai, Y. Q. Yang, and Y. B. Cui. This paper provides a comprehensive review of the use of PIMS data for fault detection and diagnosis.
  4. "Predictive maintenance based on PIMS data for the hydraulic system of an excavator" by L. Wei, Y. Zhou, and S. Guo. This paper presents a case study of using PIMS data for predictive maintenance of the hydraulic system of an excavator.
  5. "Data-driven prognostics for predictive maintenance of industrial systems: A review of methods and applications" by X. Li and J. Lee. This paper provides a comprehensive review of data-driven prognostics methods for predictive maintenance of industrial systems, including the use of PIMS data.
I hope you find these references helpful!
  • asked a question related to Reliability Analysis
Question
1 answer
I have some questions regarding running internal consistency reliability test for a scale.
1. Should I run an internal consistency reliability test for an existing scale when I adopt one, even if the purpose of my study is not to create a new scale?
2. When a scale has subdomains, should I run the reliability analysis on the subdomains only, or on both the subdomains and the whole scale?
3. Suppose the reliability test indicates that the internal consistency for the whole scale is good (alpha > .70) but the internal consistency for the subdomains is poor (alpha < .70). Should I then delete some items, or even a domain?
Thank you for helping!
Relevant answer
Answer
1. It usually does make sense to check the scale reliability in your sample, as the reliability coefficient is by definition a variance-dependent statistic (reliability = true score variance / observed variance). Therefore, the reliability coefficients reported for a given scale in different populations may vary, and estimates from the previous literature may not apply to your population/sample.
2. If you deal with a multidimensional scale with multiple subscales/subdomains, it does not make sense to apply, for example, Cronbach's alpha to the overall scale score (combining multiple subscales). The reason is that Cronbach's alpha is based on the model of (essential) tau-equivalence of classical test theory (CTT). Tau-equivalence implies a single-factor (unidimensional) model with equal factor loadings. Applying alpha to a multidimensional sum score is meaningless and can also be highly misleading.
3. It is not surprising that coefficient alpha would be higher for the whole scale since alpha is partly a function of the number of items (one of the fundamental laws of CTT is that all other things being equal, longer scales are more reliable because errors of measurement have more opportunity to average out). When you look at subscales, these obviously consist of fewer items and are therefore almost guaranteed to produce smaller alpha values. However, as I stated in (2), alpha would be more or less incorrect/misleading/meaningless if applied to an overall (multidimensional) scale score. Low reliability of the subscales may be due to having too few items (or unreliable items). I would only delete items if they violate the implicit unidimensionality assumption. This can be best assessed by applying confirmatory factor analysis (or IRT models) to each subscale, that is, by testing a single-factor (unidimensional) model for each subscale.
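A minimal lavaan sketch of the single-factor check described in (3); lavaan's built-in HolzingerSwineford1939 data set stands in for your subscale items here (with only three items the model would be just-identified, so four are used).
```r
library(lavaan)

# One-factor (congeneric) model for a four-item "subscale".
one_factor <- ' f =~ x1 + x2 + x3 + x4 '

fit <- cfa(one_factor, data = HolzingerSwineford1939)
summary(fit, fit.measures = TRUE, standardized = TRUE)

# Acceptable fit (CFI, RMSEA, SRMR) supports the subscale's unidimensionality;
# items with very low standardized loadings are candidates for revision or removal.
```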
  • asked a question related to Reliability Analysis
Question
3 answers
I would like to ask for advice on data weighting, i.e., whether or not I should use post-stratification weights in my analyses. I have data collected by quota sampling, which are weighted with post-stratification weights to accurately represent the target population. Due to missing values, I am working with a smaller sample. Should I do the missing-value analysis, and then the further analyses on the smaller sample, on the weighted data? Among the specific analyses, I am applying descriptive statistics, reliability testing, CFA, and MGCFA.
I have tried both (weighted vs. unweighted); there is not much difference for the descriptive statistics, but in the CFA and MGCFA the results come out quite different.
Thanks in advance for any advice and tips on how I should best proceed.
Relevant answer
Answer
That is to be expected, given that factor analysis also involves the variables defined as latent and the related estimates.
The absence or presence of weights can therefore modify the results, even substantially.
It all depends on the purpose of your research.
  • asked a question related to Reliability Analysis
Question
8 answers
These items are taken from WVS7 questionnaire to find religious tolerance :
  1. Do you trust people of other religion?
  2. Whenever science and religion conflict, religion is always right
  3. The only acceptable religion is my religion
  4. Mention if you would not have people of a different religion as your neighbor
All of them are on a 4-point scale. Higher values would indicate higher tolerance. The alpha value is below 0.2.
What should be done? Should I carry on ignoring the alpha? Is alpha even appropriate in this case?
Relevant answer
Answer
It is not clear to me that all your items are in the same direction. To be certain, you should examine a correlation matrix to check whether you have only positive correlations.
  • asked a question related to Reliability Analysis
Question
4 answers
Hi guys,
I might be publishing my Master's dissertation paper; my supervisor adjusted my data, and now two of my constructs' Cronbach's alpha values are 0.6.
I have been looking for a good journal article in which Cronbach's alpha values of 0.6 were reported, to see how they present it and also to use it as a reference, but I cannot find one. I have a lot of articles stating that 0.6 to 0.7 is the lower level of acceptability; however, I have not found any articles using values below 0.7.
Any suggestions?
Relevant answer
Answer
Hi, I'm in the same boat as you.
I found this article, which used a Cronbach's alpha of 0.6. Furthermore, I completed the explanation with the Dillon-Goldstein's rho calculation, following Sanchez, G. (2018). PLS Path Modeling with R. www.gastonsanchez.com
  • asked a question related to Reliability Analysis
Question
1 answer
What other similar graphical approaches/tools do you know when we attempt to depict the degradation state or reliability performance of a system, aside from Markov chain and Petri net?
(Any relevant references are welcome.)
Thank you in advance.
Relevant answer
Answer
What I did on the job (portraying the maintenance process from Plan to Approve to Schedule to Work to Closeout) was make a "bubble chart" with arrows from stage to stage (including skips and reversals, e.g., a plan that was not approved and was kicked back to planning), annotating each arrow with the average time to traverse that path and the number of packages taking it in a given time frame (such as a month).
With modern graphics, one could actually animate it with ants going from mound to mound, I suspect.
  • asked a question related to Reliability Analysis
Question
6 answers
I have translated and culturally adapted a survey from English to another language. Then, I conducted a pilot study to assess the face validity of the adapted version. So, would it be necessary to conduct a reliability analysis for the adapted version even if the original version didn't go through the process of reliability analysis?
Relevant answer
Answer
Steven Cooke, Thank you so much for your input!
  • asked a question related to Reliability Analysis
Question
1 answer
I have a question regarding a moderation effect.
I am testing a model with one IV (A) and one DV (B), and I want to test the moderating effect of M on this path.
Is it necessary to investigate the reliability and validity of the cross construct (B*M)?
Or do I only have to investigate reliability and validity for the A construct and the B construct?
Help from one of the PLS-Experts in this forum would be highly appreciated!
Relevant answer
Answer
  • No, there is no need to include the moderation term in both analyses; just report the other variables [refer to this paper: 10.1002/mde.3422]
  • asked a question related to Reliability Analysis
Question
4 answers
To protect safety-critical systems against soft errors (induced by radiation), we usually use redundancy-based fault tolerance techniques.
Recently, to cut down the unacceptable overheads imposed by redundancy, we can protect only the most critical parts of the system, i.e., selective fault tolerance. To identify such parts, we can use fault injection.
There are two fault-injection-based methodologies widely presented in the literature for improving a system's fault tolerance: reliability assessment and vulnerability assessment. Both use fault injection. I wonder: what is the main difference between these two concepts, i.e., reliability assessment and vulnerability assessment?
Relevant answer
Answer
Both answers of Steven Cooke and O.S. Abejide are correct. In addition, reliability assessment is the systematic calculation and prediction of the probability of limit-state violation, whereas vulnerability assessment identifies the weakest/critical point in a system where failure is likely to start first, before spreading to other members of the system.
  • asked a question related to Reliability Analysis
Question
4 answers
I am trying to do a reliability analysis for short RC columns. I am referring to the paper "Reliability analysis of eccentrically loaded columns" by Maria M. Szerszen, Aleksander Szwed and Andrzej S. Nowak.
At the end, based on a plot of strain vs. strength reduction factor, they propose new values of the strength reduction factor as a function of strain.
Two models have been proposed: the dotted line is for all the points except the black ones,
while the solid line is for the black points.
The black points depict reinforcement ratios < 2;
the green, blue, and red colors show reinforcement ratios equal to or in excess of 2.
My question is: how did they fit these two lines, or how did they measure the transition zone from the given scatter plot?
Relevant answer
Answer
Steven Prevette, that makes two of us. Many thanks again for your comment. I have done a similar analysis and will post my results here once done.
  • asked a question related to Reliability Analysis
Question
2 answers
Hello everyone,
I've got a question regarding within-subject experiments, in which two or more variants of a prototype (e.g., a chatbot) are evaluated with respect to different constructs, i.e., classic A/B-testing experiments of different design options. For both versions, the same items are used for comparability.
Before the final data analysis, I plan to perform tests for validity and reliability and a factor analysis. Does anyone know whether I need to calculate the corresponding criteria (e.g., Cronbach's alpha, factor loadings, KMO values) for both versions separately, or only once, aggregated for the respective constructs? And how would I proceed with the exclusion of items? Especially when there are a lot of control conditions, it might be difficult to decide whether to exclude an item if it is below a certain criterion.
In reviewing the literature on papers with a similar experimental design, I couldn't identify a consistent approach so far.
Thank you very much for your help! If anyone has any recommendations for tools or tutorials, I would also appreciate it as well.
Relevant answer
Answer
Dear Pia, thank you very much for your helpful recommendation!
  • asked a question related to Reliability Analysis
Question
17 answers
Researchers in the social sciences have to report some measure of reliability. Standard statistics packages provide functions to calculate (Cronbach's) alpha or procedures to estimate (McDonald's) omega in a straightforward way. However, things become a bit more complicated when your data have a nested structure. For instance, in experience sampling (ESM) research, researchers usually have self-reports or observations nested in persons. In this case, Geldhof et al. (2014) suggest that reliability be estimated for each level of analysis separately. Albeit this is easy to do with commercial packages like Mplus, R users face some challenges. To the best of my knowledge, most multilevel packages in R do not provide a function to estimate reliability at the within- vs. the between-person level of analysis (e.g., misty or multilevel).
So far, I have been using a tool created by Francis Huang (2016) which works fine for alpha. However, more and more researchers prefer (McDonald's) omega instead (e.g., Hayes & Coutts, 2020).
After working with workarounds for years, I accidentally found that the R package semTools provides a function to estimate multilevel alpha, different variants of omega, and average variance extracted for multilevel data. I would like to use this post to share this with anyone struggling with the estimation of multilevel reliability in R.
If you find this post helpful, feel free to let me know.
Oliver
Bliese, P. (n.d.). multilevel: Multilevel Functions [Computer software]. Comprehensive R Archive Network (CRAN). https://CRAN.R-project.org/package=multilevel
Geldhof, G. J., Preacher, K. J., & Zyphur, M. J. (2014). Reliability estimation in a multilevel confirmatory factor analysis framework. Psychological Methods, 19(1), 72–91. https://doi.org/10.1037/a0032138
Huang, F. L. (2016). Conducting multilevel confirmatory factor analysis using R. http://faculty.missouri.edu/huangf/data/mcfa/MCFAinRHUANG.pdf
Hayes, A. F., & Coutts, J. J. (2020). Use Omega Rather than Cronbach’s Alpha for Estimating Reliability. But…. Communication Methods and Measures, 14(1), 1–24. https://doi.org/10.1080/19312458.2020.1718629
Yanagida, T. (2020). misty: Miscellaneous Functions "T. Yanagida" (Version 0.3.2) [Computer software]. https://CRAN.R-project.org/package=misty
Relevant answer
Answer
The R package semTools now has a new compRelSEM() function, that estimates composite reliability from estimated lavaan models. For multilevel measurement models, reliability indices defined by Lai (2021) are implemented, as well as Geldhof et al.'s (2014) less useful "hypothetical reliability" of level-specific latent components. Until version 0.5-6 is available on CRAN, the development version can be installed with syntax provided in my description here: https://github.com/simsem/semTools/issues/106
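A minimal sketch of what that looks like, assuming semTools >= 0.5-6 and using lavaan's built-in Demo.twolevel data in place of real ESM data:
```r
library(lavaan)
library(semTools)

# Two-level CFA: the same three indicators measure a within- and a between-factor.
model <- '
  level: 1
    fw =~ y1 + y2 + y3
  level: 2
    fb =~ y1 + y2 + y3
'

fit <- sem(model, data = Demo.twolevel, cluster = "cluster")

# Composite reliability; see ?compRelSEM for the config/shared arguments that
# map onto Lai's (2021) within- and between-level reliability definitions.
compRelSEM(fit)
```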
  • asked a question related to Reliability Analysis
Question
3 answers
Any guide to carrying out a reliability analysis using MATLAB would be greatly appreciated. Thanks
Relevant answer
Answer
I suggest you analyze the "Tieset & Reliability analysis of a System" source code given in the link below.
  • asked a question related to Reliability Analysis
Question
6 answers
Every tutorial and guide I can find for scale analyses in SPSS is specifically about Likert scales. My study is not using a Likert scale; instead it uses a 0-100 scale.
What reliability analysis is best used for such a scale?
Relevant answer
Answer
Items that are measured on a 0 - 100 scale might make it even easier to assess reliability because they can (potentially) be treated as (quasi-)continuous (metrical, interval scale) variables, whereas Likert items are, strictly speaking, only ordinal in nature, often requiring special treatment in psychometric analyses.
If you have multiple items that are supposed to measure one or more factors/latent variables, the best course of action would be to run a confirmatory factor analysis (CFA) with the items as indicators of one or more latent factors to test the hypothesized factor structure first. If you find that the hypothesized factor model fits your data well/is appropriate, you can directly use the reliability estimates that are provided as part of a CFA (R-squared values for the items). In addition, composite reliability indices (reliability of the aggregate [sum or mean] of the items for a given factor) can be inferred as well from CFA. Depending on the assumptions made in the specific factor model, this may be, for example, Spearman-Brown, Cronbach's alpha, or McDonald's Omega.
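A minimal sketch of that CFA-based workflow in R (lavaan plus semTools; compRelSEM needs semTools >= 0.5-6), with lavaan's built-in HolzingerSwineford1939 data standing in for your 0-100 items:
```r
library(lavaan)
library(semTools)

model <- ' visual  =~ x1 + x2 + x3
           textual =~ x4 + x5 + x6 '

fit <- cfa(model, data = HolzingerSwineford1939)
summary(fit, fit.measures = TRUE, standardized = TRUE)  # test the structure first

inspect(fit, "rsquare")   # item reliabilities: R-squared per indicator
compRelSEM(fit)           # composite (omega-type) reliability per factor
```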
  • asked a question related to Reliability Analysis
Question
12 answers
Greetings,
I am a DBA student conducting a study about "Factors Impacting Employee Turnover in the Medical Device Industry in the UAE."
My research model consists of 7 variables, out of which:
  • 5 variables measured using multi-item scales adapted from the literature, e.g., Perceived External Prestige (6 items), Location (4 items), Flextime (4 items), etc.
  • 2 are nominal variables
I want to conduct a reliability analysis using SPSS, and I thought I need to do the following:
  1. Conduct reliability test using SPSS Cronbach's alpha for each construct (except for nominal variables)
  2. Deal with low alpha coefficients (how to do so?)
  3. Conduct Exploratory Factor Analysis to test for discriminant validity
Am I thinking right? Attached are my results so far.
Thank you
Relevant answer
Answer
The issue is not my specialty. With my best wishes.
  • asked a question related to Reliability Analysis
Question
9 answers
Hello, I have a questionnaire that consists of five sections. The first section (related to drivers' knowledge) has 10 items with no Likert scale, and the participants have to choose from two, three, or more specific options. The second section (related to drivers' habits) has 9 items, with the first five items having a six-point Likert scale, while for the remaining items the respondents have to choose one answer from four specific options. The third section (related to drivers' behavioral intentions) has 10 items, each following a six-point Likert scale. The fourth section (related to drivers' psychological conditions) has 9 items with no Likert scale, and the participants have to choose from three, four, or more specific options. Finally, the last section consists of questions regarding drivers' profiles (age, gender, education, driving experience, profession, frequency of driving through tunnels, etc.).
Now my question is: what kind of statistical tests or analyses can I perform here to investigate the relationship between the variables in the drivers' profile and the other sections/items? For instance, how can I analyze which group of drivers (in terms of age, gender, experience, etc.) is more knowledgeable (section 1) or adopts appropriate habits (section 2)?
I am also open to all kinds of suggestions and collaborations on this research.
P.S.: I am attaching my questionnaire as a file. I hope it will help you understand my question and questionnaire better.
Relevant answer
Answer
A couple of things to remember here: in a prospectus, assemble the following:
a. The research questions, and summarize responses by data type
b. The potential tests of the hypotheses from (a)
c. Choose tests based on (a) and (b)
d. Choose sample sizes based on the chosen tests and the Type I and Type II error requirements
e. Conduct the methods of (d)
f. Answer the research questions based on (e)
g. Check the assumptions required in (e)
h. Prepare a summary report of the results
Suggestions:
0. Proper Prior Planning Prevents Poor Performance
1. Get a good book on research design
2. Study it
3. Follow the suggestions for each step
4. Assemble the overall plan
5. Collect data as in plan (4)
6. See (f), (g), and (h) above
7. Do not ever do what you did in this question again
Good luck, David Booth
  • asked a question related to Reliability Analysis
Question
4 answers
I am doing a reliability analysis of a motivation questionnaire on a sample of athletes in different sports. I do the reliability analysis in order to check the reliability of the translation of the questionnaire into another language.
Thank you.
Relevant answer
Answer
The most common way to assess reliability with a single point in time is with coefficient alpha.
  • asked a question related to Reliability Analysis
Question
6 answers
It's an online assessment for which there are 14 learning objectives. I had 3 groups (Novice, intermediate, and experts) take the assessment that had 4 items for each of the 14 objectives. Ultimately I want the assessment to randomly select only 1 item for each learning objective (a 14-item assessment) from 3 possible items. What test(s) will help me choose the best 3 items for each learning objective? I already have data from 169 test-takers (77 novice / 55 intermediate / 37 experts).
Relevant answer
Answer
Hi Mark. You should study the chapter on reliability in Julie Pallant's book on how to use SPSS. It gives you practical hints on running reliability tests and eliminating the items that do not fit. Any questions? Please feel free to ask.
  • asked a question related to Reliability Analysis
Question
10 answers
I want to learn the reliability coefficient of a scale I used in my study (an assignment for my experimental psychology class). I read about how to find Cronbach's alpha: I can run a reliability analysis in SPSS to find it. However, I also read that in order to run a reliability analysis, each item has to have a normal distribution, and my data are not normally distributed. Can I run a reliability analysis with non-normally distributed data? Is there an alternative to reliability analysis for non-normal distributions?
Relevant answer
Answer
One of the reasons why data are not normal is the presence of outliers: data points with extreme scores, either extremely high or extremely low. It is better to remove these outliers so that a normal distribution is obtained. Once the outliers have been removed, retest the normality of the data with the Kolmogorov-Smirnov test.
  • asked a question related to Reliability Analysis
Question
8 answers
I'm doing a split-half estimation on the following data:
trial one: mean = 5.12 (SD = 5.76)
trial two: mean = 7.62 (SD = 8.5)
trial three: mean = 8.57 (SD = 12.66)
trial four: mean = 8.11 (SD = 10.7)
(SD = standard deviation)
Where I'm creating two subset scores (from trials one & two, and from trials three & four; I realise this is not the usual odd/even split):
Subset 1 (t1 & t2): mean = 12.73 (SD = 11.47)
Subset 2 (t3 & t4): mean = 16.68 (SD = 17.92)
I'm then computing a correlation between these two subsets, after which I'm computing the reliability of this correlation using the Spearman-Brown formulation.
However, all the literature I've found suggests that the data must meet a number of assumptions, specifically that the means and variances of the subsets (and possibly of the items within these subsets) must all be equivalent.
As one source states:
“the adequacy of the split-half approach once again rests on the assumption that the two halves are parallel tests. That is, the halves must have equal true scores and equal error variance. As we have discussed, if the assumptions of classical test theory and parallel tests are all true, then the two halves should have equal means and equal variances.”
Excerpt From: R. Michael Furr. “Psychometrics”. Apple Books.
My question is: must variances and means be equal for a split-half estimate of reliability? If so, how can equality be tested? And is there a guide to the range within which means can be considered similar (surely means and variances across subsets cannot be expected to be exactly equal)?
Relevant answer
Answer
Yes, unfortunately it's common practice to just compute Cronbach's alpha without first testing whether the variables are essentially or strictly tau-equivalent. This may in part be because SPSS calls the procedure MODEL = ALPHA (which does not make sense in my opinion) but does not provide a test of fit for essential or strict tau-equivalence as part of the procedure (for whatever reason). When the variables are not at least essentially tau-equivalent (when they are "only" congeneric, i.e., have different loadings), Cronbach's alpha leads to an underestimate of reliability (McDonald's omega is appropriate for congeneric measures). Even worse is the (probably frequent!) case where the indicators are multidimensional (i.e., they measure more than one factor/true score). In that case, Cronbach's alpha is completely meaningless, yet you wouldn't know from SPSS output.
Essential tau-equivalence can be tested in lavaan and other SEM/CFA programs by specifying a 1-factor model with all factor loadings fixed to one and intercepts and error variances freely estimated (not set equal across variables). Strict tau-equivalence requires equal intercepts (means) across variables (otherwise same specification as essential tau equivalence).
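A minimal lavaan sketch of that comparison, using lavaan's built-in HolzingerSwineford1939 data in place of your own variables:
```r
library(lavaan)

congeneric <- ' f =~ x1 + x2 + x3 + x4 '            # loadings freely estimated
tau_equiv  <- ' f =~ 1*x1 + 1*x2 + 1*x3 + 1*x4 '    # all loadings fixed to 1

fit_cong <- cfa(congeneric, data = HolzingerSwineford1939)
fit_tau  <- cfa(tau_equiv,  data = HolzingerSwineford1939)

# Likelihood-ratio test: a non-significant difference supports essential
# tau-equivalence, the condition under which Cronbach's alpha is appropriate.
anova(fit_tau, fit_cong)
```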
  • asked a question related to Reliability Analysis
Question
17 answers
Hello, I have a questionnaire that consist of four sections with each section focusing on different variables.
First, each section has 9-10 items, with each item following a different scale. For instance, the first section has 10 items with no Likert scale, and the participants have to choose from two, three, or more specific options. The second section has 9 items, with the first five items having a six-point Likert scale, while in the remaining items the respondents have to choose from four specific options. The third section has 10 items, each following a six-point Likert scale. The fourth section has 9 items with no Likert scale, and the participants have to choose from three, four, or more specific options.
Second, in some of the items the respondents were also allowed to select multiple answers for the same item.
Now my question is: how do I calculate Cronbach's alpha for this questionnaire? If we cannot calculate Cronbach's alpha, what are the alternatives for establishing the reliability and internal consistency of the questionnaire?
Relevant answer
Answer
Amjad Pervez Strictly speaking, Cronbach's alpha only makes sense when your variables are measured on an interval scale (i.e., when you have continuous/metrical/scale-level variables) and when the variables are in line with the classical test theory (CTT) model of (essential) tau-equivalence (or stricter models). Essential tau-equivalence implies that the variables/items measure a single factor/common true score variable (i.e., that they are unidimensional) with equal loadings. For variables that are only congeneric (measure a single factor/dimension but have different factor loadings), Cronbach's alpha underestimates reliability. For multidimensional scales, Cronbach's alpha tends to be completely meaningless. For categorical (binary and ordinal) variables, the psychometric models and scaling procedures of item response theory are usually more appropriate than procedures derived from CTT, which assumes continuous (scale-level) variables.
Maybe you could describe the content of your variables (and the answer options) in a bit more detail. That would make it easier for folks on Researchgate to see which procedure may be appropriate for you.
  • asked a question related to Reliability Analysis
Question
1 answer
I would like to know the best way to analyse test-retest reliability with non-normal data. If the ICC is not recommended in those cases, which test should I choose?
Relevant answer
Answer
Hello,
In the non-normal situation, the Spearman correlation is a suitable method.
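A one-line sketch in R, with hypothetical time-1 and time-2 scores:
```r
time1 <- c(12, 15, 9, 20, 14, 11, 18, 16)   # hypothetical first administration
time2 <- c(13, 14, 10, 19, 15, 10, 17, 18)  # hypothetical retest

cor.test(time1, time2, method = "spearman")
```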
  • asked a question related to Reliability Analysis
Question
6 answers
What if the Cronbach's alpha of a scale (4 items) measuring a control variable is between .40 and .50 in your research, but the scale is the same as one used in previous research, in which it received a Cronbach's alpha of .73?
Do you have to make some adjustments to the scale or can you use this scale because previous research showed it is reliable?
What do you think?
Relevant answer
Answer
Hello Lisa,
as I suspected. These are clearly not indicators of a common underlying factor. Hence, alpha and every other internal-consistency approach to reliability is inappropriate. For its control function, however, the scale will do its job, as it can be regarded as a composite of specific facets. And yes, none of the facets will be a perfectly error-free indicator of its underlying attribute, but that should not hurt much.
All the best,
Holger
  • asked a question related to Reliability Analysis
Question
4 answers
I have a set of independent data whose final output (result) is in binary form (0 or 1). Which form of reliability analysis can be used for such datasets? I have seen the FOSM and AFOSM methods; all of them are applicable to continuous data.
Relevant answer
Answer
If this is a measurement tool such as a questionnaire or an interview form, calculating a composite reliability coefficient may not be right for you, because such a measurement tool is not standardized and cannot measure a latent trait. If it is not an objective measurement tool, it may be more appropriate to deal with concepts such as inter-rater reliability.
Good luck
  • asked a question related to Reliability Analysis
Question
6 answers
How can optimization techniques like genetic algorithms and particle swarm optimization be used in reliability analysis? Please give me an idea about it.
Relevant answer
Answer
Your research approach is problematic. Before you ask a research question or ponder the answer to a problem, you are starting with a method and trying to fit the method to a field, not even to a specific problem. You should first ask a research question, formulate the problem, build the model and then find a suitable optimization method to solve it.
  • asked a question related to Reliability Analysis
Question
7 answers
Recently, I read that we do not validate the questionnaire itself, but rather the scores obtained through it. So are papers with titles like "Validation of the XXXXXX questionnaire" wrong?
Relevant answer
Answer
Early in my career I encountered Anne Anastasi's book "Psychological Testing". I was struck, and have remained impressed all my life, by her thought that a "questionnaire is a sample of behaviour". That is, a questionnaire is a *sample* from which the analyst makes predictions about the *ensemble* of behaviours the respondent may be said to exhibit. It makes complete sense therefore to investigate the ability of a questionnaire to allow the prediction to be made with validity (does the sample really measure the ensemble of interest?) and reliability (does the sample fluctuate at random?) Using unvalidated questionnaires is, as I used to drone to my dear students till they knew this lesson, pseudo-science and snake-oil. As in all inferential statistics, the actual *data* (in the original question, "the scores") is an accident that has just taken place. What the investigator wants to know is: what does the accident tell us ("the questionnaire")? Welcome to the wonderful world of Psychometrics!
  • asked a question related to Reliability Analysis
Question
1 answer
I need to publish my research paper on the reliability analysis of an industrial system in an SCI journal of Q1/Q2 category urgently. Can anyone suggest a journal?
Relevant answer
Answer
You should search in Scimago.
  • asked a question related to Reliability Analysis
Question
8 answers
I did a reliability analysis on my current project using SPSS version 20. Most of the coefficients I am getting are between 0.5 and 0.66, even after items are deleted. Can I say my items are reliable with the above findings? If not, please advise.
Relevant answer
Answer
Taber, K. S. (2018). The use of Cronbach’s alpha when developing and reporting research instruments in science education. Research in Science Education, 48(6), 1273-1296.
  • asked a question related to Reliability Analysis
Question
8 answers
I am working on a design optimisation problem. I would like to ask: for an uncertain problem, should design-optimisation-under-uncertainty techniques be used?
  • asked a question related to Reliability Analysis
Question
18 answers
I used Neper to generate the tessellation file and the meshing file, and assigned the crystal orientations. How do I import these files into ABAQUS for crystal plasticity finite element analysis (CPFEA, CPFEM)?
Relevant answer
Answer
Neper will give you an .inp file with grains as element sets. You can write a Python script to use these sets in a separate input, for example, which you can import into Abaqus CAE. You should then be able to see individual grains as separate entities within CAE.
  • asked a question related to Reliability Analysis
Question
2 answers
Hi all, I am conducting a study on flexible work arrangements. Section 1 consists of 7 questions: for Q1-Q3 the answer is YES/NO, Q4 asks you to rank the given answers, and Q5-Q7 are on a Likert scale. Here is the problem: how do I carry out a reliability analysis on Section 1?
Relevant answer
Answer
Dear Rafel,
If you mean regression correlation, Minitab will work just fine, but it is usually done on each question response bank independently.
If you mean the "reliability" of the questionnaire, that is a different type of study requiring test groups and sampling. It SHOULD have been done already if it is a survey in use. Minitab or other statistical programs or calculations could also be used to evaluate that data. Here are two references to get you started:
Best regards,
Steven
  • asked a question related to Reliability Analysis
Question
5 answers
Hello all,
I am trying to do an agreement analysis to verify how similar the time-series measurements taken by two devices are. Basically, I have 2 curves representing values measured over time with each device, and I want to say how similar these measurements are.
I have other metrics in my analysis, but I was looking into the CMC (Kadaba, 1989) as a global metric. I know it is often used in the gait analysis literature for reliability analysis, where curves taken with the same measurement device, but on different days, are compared. This coefficient represents the similarity between two curves, so I was considering using it as a metric of agreement between the two time-series measurements I have, one from each device. I was wondering whether there is any statistical assumption behind the CMC that prevents me from doing that; I couldn't find much about it.
Thank you!
Relevant answer
Answer
Completely in agreement with the magnificent answer of the also magnificent researcher Dr. Pervaiz Iqbal; curiously, I was going to post something similar.
  • asked a question related to Reliability Analysis
Question
4 answers
Hello,
I am coding some metrics from different articles to run a meta-analysis and I had a simple question.
Let's say one of my variables of interest is brand loyalty. In some articles, brand loyalty is decomposed into two different variables (attitudinal loyalty and behavioral loyalty) with two sets of metrics: two different AVE, CR, and alpha coefficients, means, and standard deviations.
I would like to summarize these two variables in a single one. How do I get the values of the AVE, CR, alpha, mean, and SD for the variable brand loyalty (the variable gathering attitudinal and behavioral loyalty)? Should I average the values given in the article?
Thanks in advance for your reply,
Best regards,
Kathleen
Relevant answer
Answer
Souza, A. C. D., Alexandre, N. M. C., & Guirardello, E. D. B. (2017). Psychometric properties in instruments evaluation of reliability and validity. Epidemiologia e Serviços de Saúde, 26, 649-659.
  • asked a question related to Reliability Analysis
Question
2 answers
Hi everybody,
I need to perform a reliability analysis on my ERP data. Specifically, I would like to estimate internal-consistency reliability through Spearman-Brown corrected split-half reliability. Could anybody help me with this? Do I need to use all the trials for each participant?
I'm not sure how to start the analysis: using trials, or averages...
I hope to get some answer here.
Thanks in advance.
Relevant answer
Answer
Not sure, but this paper could help:
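In the meantime, here is a minimal R sketch of the computation itself (an odd/even split of trial-level amplitudes; the data are simulated placeholders):
```r
set.seed(7)
amps <- matrix(rnorm(40 * 60), nrow = 40)   # 40 participants x 60 trials (hypothetical)

odd  <- rowMeans(amps[, seq(1, ncol(amps), by = 2)])  # mean of odd-numbered trials
even <- rowMeans(amps[, seq(2, ncol(amps), by = 2)])  # mean of even-numbered trials

r    <- cor(odd, even)        # split-half correlation
r_sb <- 2 * r / (1 + r)       # Spearman-Brown correction to full test length
r_sb
```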
  • asked a question related to Reliability Analysis
Question
1 answer
Hi everyone, I am performing Sobol's sensitivity analysis and wondering whether there is a way to set a threshold on the sensitivity index, so that parameters with a sensitivity index greater than the threshold are deemed sensitive.
Many thanks!
Relevant answer
Usually + or - 25%
  • asked a question related to Reliability Analysis
Question
1 answer
Hello
Dear all,
I am looking for a reference that considers the rebar diameter as a random variable (e.g., having a normal distribution with a standard deviation) in reliability analysis; however, I am not able to find any reference in which the rebar diameter is a random variable, similar to the yield stress fy, etc.
Does anybody know any more information?
Regards,
Relevant answer
Answer
Look up "probability density function". The rebar diameter is the diameter of a steel cylinder called rebar. Then look the whole thing up in a civil engineering handbook to see how to use it. Best, D. Booth
  • asked a question related to Reliability Analysis
Question
1 answer
Greetings!
I have performed a reliability analysis using Cronbach's alpha for my questionnaire and obtained a value of 0.507. There are 3 items to be deleted, as shown in SPSS.
May I know the maximum number of items I can delete from my questionnaire? I came across one forum stating that only 20% of the questions can be deleted from a questionnaire in order to preserve its content; however, no reference was given for this suggestion.
Please advise; thanks in advance!
Relevant answer
I believe there is no fixed amount for this.
What is recommended is that you not exclude so many items that you fail to assess fundamental elements of your latent trait. Delete one item at a time and observe how much the average inter-item correlation (Cronbach's alpha) changes.
I also recommend using other reliability metrics, such as McDonald's omega.
  • asked a question related to Reliability Analysis
Question
1 answer
I wish to know the difference between the BN and Markov models. For what types of problems is one better than the other?
In the case of the reliability analysis of a power plant, where equipment failures are considered, which model should be used and why?
Thank You!
Relevant answer
Answer
Dear Sanchit Saran Agarwal , Here is the answer
BAYESIAN
A Bayesian network, Bayes network, belief network, Bayes(ian) model or probabilistic directed acyclic graphical model is a probabilistic graphical model (a type of statistical model) that represents a set of random variables and their conditional dependencies via a directed acyclic graph (DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases.
MARKOV
[Figure: an example of a Markov random field, where each edge represents a dependency. In this example, A depends on B and D; B depends on A and D; D depends on A, B, and E; E depends on D and C; C depends on E.]
In the domain of physics and probability, a Markov random field (often abbreviated as MRF), Markov network or undirected graphical model is a set of random variables having a Markov property described by an undirected graph. In other words, a random field is said to be Markov random field if it satisfies Markov properties.
A Markov network or MRF is similar to a Bayesian network in its representation of dependencies; the differences being that Bayesian networks are directed and acyclic, whereas Markov networks are undirected and may be cyclic. Thus, a Markov network can represent certain dependencies that a Bayesian network cannot (such as cyclic dependencies); on the other hand, it can't represent certain dependencies that a Bayesian network can (such as induced dependencies). The underlying graph of a Markov random field may be finite or infinite.
  • asked a question related to Reliability Analysis
Question
1 answer
For a dynamic Bayesian network (DBN) with a warm spare gate having one primary and one back-up component:
If the primary component P is active in the first time slice, then its failure rate is lambda(P) and the failure rate of the backup component S1 is alpha*lambda(S1).
If the primary component P fails in the first time slice, then its failure rate is lambda(P) and the failure rate of the backup component S1 is lambda(S1).
My question: the above are the conditional probabilities of the primary and backup components. In a DBN, a prior failure probability is also required. What will the prior failure probability of the backup component be? Will it be calculated using lambda(S1) or alpha*lambda(S1)?
Thank you
regards
Sanchit
Relevant answer
Answer
  • asked a question related to Reliability Analysis
Question
1 answer
Dear all,
We conducted research on college students using the Maslach Burnout Inventory-Student Form of Schaufeli et al. (2002). As you all know, this scale consists of three factors, namely exhaustion, cynicism, and professional efficacy.
My question is about the internal consistency coefficient of the professional efficacy factor. The Cronbach's alpha for this factor is .59 and the split-half reliability coefficient is .61.
In our research we also measure the general self-efficacy of the students.
Therefore, what should we do?
In my opinion, the best option is to omit this factor from the analysis, since we also measure general self-efficacy.
What do you think?
Thanks in advance.
Meryem
Relevant answer
  • asked a question related to Reliability Analysis
Question
6 answers
I need the ETAP software. Can anyone please share the link? As I am a new user of ETAP, I also need a user guide, please.
  • asked a question related to Reliability Analysis
Question
1 answer
I did an item reliability analysis and the Cronbach's alpha value is 0.956. The professor is saying, "This is too high; go and read about what to do." The professor doesn't give any hints. What should I do?
Relevant answer
Generally, an alpha greater than 0.95 does indicate problems. I would recommend checking a few aspects:
1. Is the questionnaire too long? Very long questionnaires have very high alphas. See whether all items are really needed to measure the latent variable or trait.
2. Is there redundancy in my items? Are there items that have different semantics but measure a very similar characteristic of the latent trait? Again, deleting items can help.
3. Alpha reflects the average correlation between items, so it is important to check the correlation values between the items (see the sketch below). Items with very high correlations may be measuring the same characteristic of the latent trait. Checking the variance inflation factor can also help identify multicollinearity.
I hope these tips help you.
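The sketch promised in tip 3, in plain R; `items` stands in for your item-level data frame:
```r
set.seed(1)
items <- as.data.frame(matrix(rnorm(200 * 6), ncol = 6,
                              dimnames = list(NULL, paste0("q", 1:6))))

R <- cor(items, use = "pairwise.complete.obs")   # inter-item correlation matrix

# Flag item pairs correlating above, say, .80 as candidates for redundancy.
high <- which(abs(R) > .80 & upper.tri(R), arr.ind = TRUE)
data.frame(item1 = rownames(R)[high[, 1]],
           item2 = rownames(R)[high[, 2]],
           r     = R[high])
```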
  • asked a question related to Reliability Analysis
Question
3 answers
Hi everyone, grad student in need of help!
I have distributed two surveys; they are very similar, but one was for teachers and one for students, as part of a needs assessment for e-learning. I wrote the survey to have variables assessing readiness, enthusiasm, and accessibility.
1) How do I properly assess the reliability of my surveys? The participation rate was low for one and okay for the other, which makes me wonder whether EFA is going to be effective. Alternatively, in SPSS you can run the reliability analysis and get Cronbach's alpha. How do EFA and the reliability analysis differ?
Relevant answer
Metrics like Cronbach's alpha and McDonald's omega measure the internal consistency of questionnaires, but a more robust reliability measure is naturally obtained via factor analysis and SEM.
When the sample is small, traditional statistical methods are not recommended; consider other methods such as discourse analysis or content analysis.
  • asked a question related to Reliability Analysis
Question
16 answers
I have 5 Likert-scale questions in my questionnaire that seek to measure Construct A. It has an acceptable Cronbach's alpha value of above 0.7.
Does this mean that I am able to create (compute) a new variable, where I average the scores of the 5 questions for each response to derive a score for Construct A for each respondent? I want to use this new variable (as a representative of Construct A) to conduct statistical tests with other variables.
Relevant answer
Answer
Yes, you may proceed further and create your variable.
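In R, for example, the composite is one line (a1-a5 are hypothetical names for the five Construct A items):
```r
survey <- data.frame(a1 = c(4, 5, 3), a2 = c(3, 4, 4), a3 = c(5, 5, 2),
                     a4 = c(4, 3, 3), a5 = c(2, 4, 4))   # hypothetical responses

# Mean of the five items per respondent = Construct A score.
survey$construct_a <- rowMeans(survey[, paste0("a", 1:5)], na.rm = TRUE)
survey$construct_a
```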
  • asked a question related to Reliability Analysis
Question
12 answers
Does anyone know whether there is any special formula for inspection or preventive maintenance intervals or scheduling when applying artificial neural network models instead of conventional methods such as the Weibull distribution?
Thanks in advance
Relevant answer
Answer
I agree with Mohammad Asjad and Günter Becker. ANN is a prediction tool. If you have failure data for one or more pieces of equipment, you can use the Weibull distribution to analyze the failures.
From the Weibull distribution, you will be able to comment on the behavior of the equipment, and even of the system, and create schedules accordingly.
The cost factor is the most important criterion in such scheduling. To determine maintenance periods with respect to costs and failures, I suggest you take advantage of the relevant study below.
  • asked a question related to Reliability Analysis
Question
5 answers
Hi!
I am working on the reliability analysis of grid-connected PV systems. I have no background in reliability analysis, so I just want to start from scratch. Can someone please recommend books on the reliability analysis of PV systems and wind energy systems? Research papers skip many things, so I want to start with a detailed study of books...
Thanks in advance...
  • asked a question related to Reliability Analysis
Question
4 answers
I am conducting a study where the Team Climate Inventory variable represents a second-order construct with four sub-dimensions (i.e., vision, task orientation, participative safety, and support for innovation).
How should I estimate a composite reliability score in this case with the use of LISREL?
Thanks for all the hints!
Best,
Lukasz   
Relevant answer
Answer
  • asked a question related to Reliability Analysis
Question
3 answers
Hi
A school in Jordan is doing an impact study on its alumni. The variables are a list of traits and values (innovation, leadership, empathy, etc…). I’m responsible for preparing the questionnaire.
My methodology is:
1- For each value/trait, find an inventory or scale that measures it.
2- Choose three items from the inventory/scale.
3- Combine the three items from all the inventories/scales to create the new questionnaire (about 60 items).
I need an expert who can review the final questionnaire and give an approval and recommendations to improve the questionnaire.
Any volunteers?
Relevant answer
Answer
Do you mean someone has already reviewed it, or that it is done and needs to be reviewed? If it needs a review, you can send it to me. You're welcome.
  • asked a question related to Reliability Analysis
Question
3 answers
SOLVED!!! Don't see how I can delete this question?
I am testing a survey about personality types and self-disclosure on Instagram. I gathered 105 respondents and used the Mini-IPIP scale by Donnellan to measure the Big Five personality types. I have reverse-coded the items that were negative and double-checked with a PhD researcher, who confirmed I did it correctly. When running the reliability analysis for the mean of each variable, I get the results in the attached photo. I was told that this could be because some respondents were unreliable, clicking random answers, and that it could help to remove the outliers. So I computed the Mahalanobis distance in SPSS to identify the outliers (see attachment). I am not sure if I did it correctly, but from what I can gather there are no outliers, since none are below .001. I am not sure now how to save my data and how to make it more reliable. I can go back and gather more respondents, but that has been hard to do and I am running out of time. Please advise. Thank you in advance.
Relevant answer
Answer
Remove the poor-loading item and rerun the reliability test.
  • asked a question related to Reliability Analysis
Question
4 answers
It has been seen that the instrument rating mentioned in the operating manual differs from what is mentioned in the technical manual (not for all instruments, but a few). If we disregard typing errors, what are the actual reasons accounting for this difference?
Relevant answer
Answer
Dear Steven Cooke.
This question is related to aviation, wherein the assessed airworthiness of some instruments was found to differ between the Operations Manual and the Technical Manual. The Operations Manual is used by the pilot/operator, whereas the Technical Manual is used by the maintenance engineering team. The rating of an instrument means the avionics component's operating scale with its tolerance limits. For more details you can refer to the avionics instrument specifications issued by various aviation agencies.
Thank you.
  • asked a question related to Reliability Analysis
Question
12 answers
NFF (No Fault Found) events are a major contributor to reduced operational availability, wasted resources and increased maintenance cost for any aircraft in aviation. The likely causes are human factors, maintenance training, fault reporting, fault analysis, corrective maintenance and procedures. However, mitigating these issues is a tedious process in which management skill alone cannot achieve the desired results. So, what other parameters/technical factors need to be considered?
Relevant answer
Answer
Absolutely right, sir. @Mr. Russel King.
Thank you.
  • asked a question related to Reliability Analysis
Question
8 answers
Hi everyone,
I've conducted an EFA and ended up with 5 factors. A few of the items are cross-loading over 2-3 factors. I have already removed 10 items that either do not correlate or cross-load significantly.
I am fairly happy with the factors, however, the cross-loading items are confusing me and I have a few questions.
1. When calculating the total scores, means and Cronbach's alphas for each factor, do I include the items which cross load with other items?
2. When I present the final scale/solution, how do I present the cross-loading items?
3. There is one factor which is negatively predicted ('Lack of Support' [all items have a negative value]), however, I have changed the scoring so it positively predicts the factor (Support). There is one item in this subscale which cross-loads with another. How does this impact the scoring? Should I try to remove this item?
4. I started with a 37-item scale and I now have 27 items. How many items are too many to delete? At what point should I just accept it as an overall scale with a good Cronbach's alpha (.921) and say further research into factors and subscales is needed?
I am reluctant to delete the few cross-loading items I have remaining, as when they are removed from the analysis, the reliability score decreases for the individual factors and the overall scale.
This is my first time doing an EFA and so I would be very grateful for any advice or recommendations you may have.
Thank you.
Relevant answer
Answer
Hello Jessica,
Factor solutions can include variables that show salient affiliation with more than one factor (some personality measures are notorious for this type of structure). However, the concerns associated with cross-loading are usually: (a) the structure is more complex than Thurstone's idealized concept of "simple structure"; (b) it may make the task of characterizing what a factor represents more challenging; and (c) perhaps the variable isn't as well measured or defined as it could have been.
The answer to your question depends on your specific research aims as concerns this set of variables (and how these do or don't reflect the concerns listed above). If your goal is to derive the "cleanest" possible structure, then throwing out variables/items may be the way to go. Do recognize that the possible concern here is that you end up defining very restricted factors that may not fully represent the target constructs. As well, depending on your sample, it's possible that the resultant structure to choose may be overfitted to the sample and not generalize as well to other data sets.
In any event, if you elect to retain cross-loading items, then:
1. Yes, they are included in any representation of a factor (e.g., an estimated factor score, or just a summated score), or for score reliability estimates;
2. You present a factor structure matrix/table that shows all variables that you deem salient with each factor (factor pattern and factor inter-correlation matrices as well, if you used an oblique rotation);
3. If all loadings on a factor are negative, then you may reverse the sign and characterize the resultant variate as a reversed polarity construct estimate. (You may do the same with mixed sign loadings, as long as you reverse each variable's sign accordingly.)
4. The facetious answer is, if you get down to two items or fewer, you've likely gone too far! The more serious answer is, there is no way to predict in advance how many "keepers" there are from a preliminary set of variables that were constructed, identified, or adapted to tap one or more constructs. That's why people engage in EFA or CFA in the first place; to help identify what structures are supported by data and what structures are not.
The final note I would make is that Cronbach's alpha for an "overall" scale score might not be the best indicator, especially when you have identified multiple factors for that batch of variables. For individual factor scales, sure.
Good luck with your work.
  • asked a question related to Reliability Analysis
Question
1 answer
I have 92 respondents and 180 questions. I am using SPSS, and the software says "scale or part of scale has zero variance and will be bypassed". Can anyone help me?
Relevant answer
Answer
One likely way to get that error message is if everyone gave the same answer on one of your variables. So, you need to examine the distribution of each of the variables you are using in your scale.
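A quick way to locate the offending items, shown as a minimal sketch in Python, assuming the responses sit in a hypothetical CSV file of 92 rows by 180 columns:

```python
import pandas as pd

df = pd.read_csv("responses.csv")      # hypothetical file of survey answers
variances = df.var(numeric_only=True)
zero_var_items = variances[variances == 0].index.tolist()
print("Items with zero variance:", zero_var_items)
```

Any item listed there received the same answer from every respondent and should be inspected (or dropped) before rerunning the reliability analysis.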
  • asked a question related to Reliability Analysis
Question
14 answers
Dear colleagues,
has the calculation of McDonald's omega been implemented in SPSS 25 so far? I found some older threads concerning this question on RG but nothing from the recent past.
In case you know how to do this in SPSS or Mplus, I would be very grateful. I would kindly ask you not to suggest using R because I am not familiar with the programme.
Thank you in advance and kind regards
Marcel Grieger
Relevant answer
Answer
Dear colleagues,
I don't know if this is still of help for you, but SPSS does offer a way to calculate omega reliability.
There is a wonderful explanation of how to install the extension, written by Prof. Andrew F. Hayes; the SPSS extension for the omega calculation can be downloaded on his page.
Hope this is useful.
Best regards
  • asked a question related to Reliability Analysis
Question
5 answers
So I came across a situation where cross-sectional survey-based research was done with a questionnaire the team designed themselves,
and they finished data collection for the whole target sample.
The research team didn't do a pilot study.
When they wanted to start the analysis, they ran Cronbach's alpha to measure reliability.
- Cronbach's alpha happened to be .94 (suggesting excellent internal consistency).
- But they had only done face validity for the questionnaire, and didn't do anything else, such as Principal Component Analysis (PCA).
Q1: Can they simply write in the methodology and results sections that they computed Cronbach's alpha and that it showed a great result, etc.?
Q2: And can they claim that the questionnaire is valid?
Relevant answer
Answer
That is correct: you should only present evidence of validity in line with your objectives, based on the inferences drawn from the scores obtained in your study. For evidence of the internal structure, you should consider confirmatory factor analysis.
  • asked a question related to Reliability Analysis
Question
15 answers
Hi there,
My questionnaire consists of 18 MCQs, each with one correct answer and 3 incorrect answers. I'm measuring participant scores on the questionnaire before and after watching a video, in two separate groups.
From what I've read, Cronbach's alpha is used to test scaled data (e.g., Likert scales) for reliability, so I'm unsure whether it's appropriate for my questionnaire.
Can I also use it on my questionnaire, or is there a more appropriate alternative for my data?
If the answer is yes, do I analyse it exactly the same way in SPSS as I would scaled data? I.e.: Analyze > Scale > Reliability Analysis, all questions into the 'items' box, tick the descriptive statistics options 'item', 'scale' and 'scale if item deleted', and 'correlations' in the inter-item options?
Thank you in advance!
David
Relevant answer
Answer
It is apparent from your question that you are interested in observing any changes in the variable of interest after administering the intervention (participant watching the video). This is an experimental design where you are in need of a scale to measure your variable of interest.
The question of the internal consistency of the questionnaire calls for a psychometric discussion, which is a different domain. If you are not evaluating the psychometric properties of your questionnaire by following all the steps of scale validation, there is little point in checking Cronbach's alpha alone for the scale. Simply obtain the achieved scores (sum of correct responses) before and after the intervention and employ paired tests.
If the variable of interest demands measurement via validated tools, it is always advisable to use previously validated scales measuring the construct, if there are any. If there aren't any, then the question is: do you want to develop one? If yes, follow the rigorous validation process.
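If an internal-consistency estimate is still wanted: with items scored 1 = correct / 0 = incorrect, Cronbach's alpha reduces to the Kuder-Richardson formula 20 (KR-20). A minimal sketch in Python with hypothetical data (random here, so the value itself is meaningless; the point is the formula):

```python
import numpy as np

rng = np.random.default_rng(1)
scores = (rng.random((50, 18)) > 0.4).astype(int)  # hypothetical 0/1 item matrix

k = scores.shape[1]
p = scores.mean(axis=0)                  # proportion correct per item
q = 1 - p
total_var = scores.sum(axis=1).var(ddof=1)
kr20 = (k / (k - 1)) * (1 - (p * q).sum() / total_var)
print(f"KR-20: {kr20:.3f}")
```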
  • asked a question related to Reliability Analysis
Question
17 answers
Do you have any experience with probabilistic software for structural reliability assessment? Any links?
Relevant answer
Answer
  • asked a question related to Reliability Analysis
Question
3 answers
I am using answers to a questionnaire in which existing scales from scientific papers are used. One particular (set of) concepts is measured with 24 questions, which are divided across three different subscales by the original author who developed the questions. However, reliability analysis of these subscales (using the collected answers) shows that for two of the three subscales, Cronbach's alpha is lower than 0.7. Furthermore, all subscales contain one or two questions which, if removed, would increase the Cronbach's alpha, although for two subscales, the resulting Cronbach's alpha would still be lower than 0.7.
Is it acceptable to remove certain questions from the subscales, or should I continue to use the original subscales in this situation?
Thank you in advance.
Relevant answer
Answer
Hi Wk,
Existing questionnaires are not perfect and are tailored to the situations they were designed for. You may certainly improve them by modifying questions, especially those that are problematic in terms of how respondents collectively answer them. I would suggest modifying problematic questions, or adding more appropriate ones in place of those you remove.
Best
Leszek
  • asked a question related to Reliability Analysis
Question
11 answers
Dear collegues,
I performed a cross-cultural study using two questionnaires (66 and 12 items). The original version of these questionnaires was used in the first country, and the tools were also translated into the second country's language to be administered there. It was the first time the translated version was used for research purposes. The number of participants was 216 in the first country and 265 in the second. Is it required to perform confirmatory factor analysis (for the two linguistic versions separately), or is it enough to report internal consistency coefficients in this particular publication? What am I actually supposed to do to fulfill the required standards for reporting the psychometric properties of translated questionnaires?
Thank you for your suggestions in advance.
Relevant answer
Answer
It would be advisable to run one CFA for each country and then a combined multi-group invariance analysis to evaluate the differences.
  • asked a question related to Reliability Analysis
Question
4 answers
I'm currently investigating how to apply RCM with preventive maintenance to a fleet of fuel-transport trucks. My company has no list of functional failures or failure modes, but some authors, such as Dhillon, Pistalleri and Dixon, state that generic failure databases exist. I searched for this kind of data on Google and Google Scholar but didn't find anything. So I am asking for your help: do you know of any public generic failure database that could give my team a baseline of information to improve our investigation?
PS: Sorry for any spelling or semantic errors. I'm not a native English speaker, and almost all the relevant material is in English.
Thanks,
Relevant answer
Answer
In general, no, there does not seem to be an easily accessible database of failure probabilities. Some references might be found to get you started, such as the following:
My own experience in a similar situation was to recognize that time moves on... START collecting that data for your operations NOW! It will be useful shortly, and sooner than you may expect. Spending too much time looking for previous solutions just prolongs the time before your own database is populated more accurately.
Oftentimes the vehicle manufacturer will have that data as well.
  • asked a question related to Reliability Analysis
Question
3 answers
Hi everyone,
I'm trying to determine which test to run to assess the accuracy of a model that's classifying vegetation. I have ground-truth values and the values that the model has produced. I've considered Pearson Correlation and Intra-Class Correlation, though there are many tests, so I'm stumped on which to decide on. I've seen past literature using Pearson Correlation though my data aren't normal even with a log transformation.
Thanks much!
Relevant answer
Answer
Hello Charles,
Can you please explain your objective a bit more clearly? It is not clear whether you want to assess the accuracy of the classification method or to run some goodness-of-fit tests to understand whether the model lacks something. If it is the former (as it seems from the way you framed the question), a confusion matrix might be the most suitable way to understand the results of your algorithm. There are several other measures that give an idea of the performance of a clustering/classification algorithm, Cohen's kappa, the purity index and the Rand index being some examples. Each one has its own advantages and disadvantages. I would strongly recommend reading about them and making a decision based on the type of dataset and problem you're working on. Hope that helps!
Thanks.
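A minimal sketch in Python of the confusion-matrix route, assuming the ground truth and the model output are class labels per site (the labels below are hypothetical):

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

# Hypothetical vegetation class labels
truth     = ["forest", "grass", "shrub", "forest", "grass", "forest"]
predicted = ["forest", "grass", "forest", "forest", "shrub", "forest"]

print(confusion_matrix(truth, predicted, labels=["forest", "grass", "shrub"]))
print("Accuracy:", accuracy_score(truth, predicted))
print("Cohen's kappa:", cohen_kappa_score(truth, predicted))
```

Note that Pearson or intraclass correlation presuppose continuous outcomes; for categorical class labels, agreement measures such as overall accuracy and kappa are the usual choice.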
  • asked a question related to Reliability Analysis
Question
14 answers
COVID-19 is affecting all kinds of human activities, and research is not exempt. Many ongoing research studies have been paused because of COVID-19: patient recruitment cannot continue, follow-up visits do not stick to schedule, intervention procedures may be delayed, and blood test monitoring is postponed.
I would expect a higher loss-to-follow-up rate during this period, which would affect the reliability of research. And even after COVID-19, will newly recruited subjects differ from those recruited before?
What do you think?
Relevant answer
Answer
I am continuing my activities during this mandatory vacation, but from a home office.
Wish you all healthy life.
  • asked a question related to Reliability Analysis
Question
3 answers
Hi,
My main analysis is on an intention to treat dataset, although I am looking at the per-protocol dataset to confirm if there were any differences. When running Cronbach's Alpha on my many scales, should I run it on the ITT dataset, the PP data, or both?
Thanks,
Max
Relevant answer
Answer
  • asked a question related to Reliability Analysis
Question
8 answers
Update
I have a scale (12 items)
I go to Analyze -> Scale -> Reliability Analysis and get my Cronbach's alpha (0.5).
BUT 2 of my items are «inverse» (reverse-keyed). If I recode these two items so they are no longer inverse, I get alpha = 0.8.
Am I right that I should recode these items before computing Cronbach's alpha?
written earlier
I conducted a study (correlational design).
Among other instruments, I used 2 psychological tests, which were adapted by another author according to all the rules.
And I ran into problems:
Situation 1 (solved)
My first test (14 items) has 2 subscales. In the Ukrainian adaptation, the Cronbach's alphas for the subscales are 0.73 and 0.68. But when I computed Cronbach's alpha in my own study, I got 0.65 and 0.65.
Question 1: Should I compute correlations with this test, or should I exclude it from the analysis?
Situation 2 (see update)
My second test is Zimbardo's Time Perspective Inventory (56 items). In the Ukrainian adaptation, four of the five scales have a Cronbach's alpha above 0.7; one scale is at 0.65.
But in my study everything is OK with only 3 scales; they are above 0.7.
The other two scales have very low Cronbach's alphas: 0.55 and 0.49.
Question 2: Should I exclude these two low-reliability scales and compute correlations with only the 3 scales whose Cronbach's alpha is above 0.7?
PS: N=336 in my study
Relevant answer
Answer
Oleksandra Shatilova, whether you create or adopt a measurement tool, it must be valid, and Cronbach's alpha says nothing directly about validity. When you properly handle reverse-keyed questions, Cronbach's alpha should rise, but that alone is not sufficient.
So I agree with Robert Trevethan's advice.
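On the mechanics of the update question: for a Likert item scored from low to high, the reverse-keyed score is (low + high) - x, and alpha is then computed on the recoded matrix. A minimal sketch in Python, assuming a 5-point scale (the data here are random, so the alpha value itself is meaningless; the point is the recoding step):

```python
import numpy as np

def reverse_code(x, low=1, high=5):
    """Reverse-key a Likert item scored from `low` to `high`."""
    return (low + high) - x

def cronbach_alpha(items):
    """items: respondents x items array of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 12-item data; columns 3 and 7 are reverse-keyed
rng = np.random.default_rng(2)
data = rng.integers(1, 6, size=(336, 12)).astype(float)
data[:, [3, 7]] = reverse_code(data[:, [3, 7]])
print(f"Alpha after recoding: {cronbach_alpha(data):.3f}")
```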
  • asked a question related to Reliability Analysis
Question
11 answers
I am studying system reliability using an RBD (Reliability Block Diagram). I need to estimate the failure rate, but I am not sure how large the data set should be. Is there any rule about sample size?
Relevant answer
Answer
Hi there,
There are some references that address reliability analysis structures and state the data required.
Please find the links below:
I hope you find them helpful, and please consider citing them in your future work!
Please do not hesitate to ask if you require any further assistance.
With the best Regards
Hamzeh
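As a rough guide, what matters is less the raw sample size than the number of observed failures: the point estimate is lambda = r / T (failures over cumulative operating time), and the standard chi-square confidence interval for a time-truncated test shows how the uncertainty shrinks as r grows. A minimal sketch in Python with hypothetical numbers:

```python
from scipy.stats import chi2

r = 12          # hypothetical number of observed failures
T = 50_000.0    # hypothetical cumulative operating hours
alpha = 0.10    # for a 90% confidence interval

lam = r / T
lower = chi2.ppf(alpha / 2, 2 * r) / (2 * T)
upper = chi2.ppf(1 - alpha / 2, 2 * r + 2) / (2 * T)  # time-truncated test
print(f"lambda = {lam:.2e}/h, 90% CI = [{lower:.2e}, {upper:.2e}]")
```

Running this with different r (at fixed lambda) makes the rule of thumb visible: below a handful of failures the interval spans an order of magnitude, so the "required size" is best stated in failures observed, not records collected.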
  • asked a question related to Reliability Analysis
Question
1 answer
In general, an approximate probability density function of the performance function can be obtained by the moment method. Are there any other methods?
Relevant answer
Answer
I suggest that you take a look at the following site:
This may be helpful.
Regards.
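Besides moment methods, common alternatives include FORM/SORM approximations, kernel density fits to samples of the performance function, and direct Monte Carlo simulation: sample the inputs, evaluate g, and estimate the failure probability from the fraction of samples with g < 0. A minimal sketch in Python for a hypothetical performance function:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

# Hypothetical performance function g(R, S) = R - S (failure when g < 0)
R = rng.normal(loc=10.0, scale=1.0, size=n)   # resistance
S = rng.normal(loc=7.0, scale=1.5, size=n)    # load
g = R - S

pf = np.mean(g < 0)
se = np.sqrt(pf * (1 - pf) / n)               # standard error of the estimate
print(f"P_f approx. {pf:.4e} (s.e. {se:.1e})")
```

The same g samples can also be fed to a kernel density estimator if an approximate PDF of the performance function, rather than just the failure probability, is wanted.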
  • asked a question related to Reliability Analysis
Question
4 answers
Hi!
Currently I am analyzing my data and I've got some results that I don't know what to do with. I tested whipped cream on whipping time at three different moments: day 0, day 1 and day 2 (three different groups). I therefore used a one-way ANOVA to test whether there is a difference between the group means. This test is significant; however, when I use a post hoc test to analyze which groups differ, those results are all non-significant. The variances are equal, so I used the Tukey test (but every other test available in my program gives the same non-significant results).
I think this may be because the ANOVA gives a type I error (incorrect rejection of the H0 hypothesis that there is no difference between the groups) and the post hoc test performs a more conservative, multiplicity-corrected comparison between the groups, but I don't know exactly how it works.
Does somebody know how to draw a clear conclusion from these results? I would be very grateful for any help you can provide!
Relevant answer
Answer
In order to support your explanation, you must first make sure that the ANOVA meets its assumptions; that guarantees the validity of the inferences in your analysis. If the assumptions hold, then to support your conclusions you should calculate the power of your ANOVA. This will give you an objective basis for drawing conclusions from your analysis.
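To check the post hoc side concretely, here is a minimal sketch of Tukey's HSD in Python with statsmodels, assuming whipping times grouped by day (the numbers are hypothetical):

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical whipping times (seconds) for day 0, day 1, day 2
times  = np.array([110, 115, 108, 120, 118, 125, 122, 119, 130, 128, 127, 132])
groups = np.array(["day0"] * 4 + ["day1"] * 4 + ["day2"] * 4)

result = pairwise_tukeyhsd(times, groups, alpha=0.05)
print(result.summary())
```

A significant omnibus F with no significant pairwise contrasts typically means the effect is spread thinly across comparisons and the per-pair power is low; more replicates per day would sharpen the picture.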
  • asked a question related to Reliability Analysis
Question
3 answers
A correct interpretation of reliability analysis is decisive for researchers and industrial developers in refining their designs and in preparing proper maintenance schedules and safety analyses. However, I still see that many designers prefer to use classical safety factors instead of reliability analysis techniques. What is your sense of this?
For example, imagine that you are going to buy a bearing and I tell you that this bearing's reliability is 94% for an expected life of 5 years. It means that if you test 100 bearings under normal operating conditions, roughly 6 of them should fail within 5 years. Does this kind of analysis make sense for your research and development?
And if the answer is yes, how do you use the outcome of reliability analysis in your research area? The answer is important to me because I am going to start developing commercial software for reliability analysis, and it is important to see what experts expect from reliability analysis methods.
Thanks,
Sajad
Relevant answer
Answer
Thank you Sajad for sharing this interesting discussion.
I think the results of design reliability obtained with the aid of laboratory tests, such as accelerated life testing, as well as operational reliability estimated from failure rates over time, can help designers and maintenance engineers make good decisions about a component or system.
I wonder, what do you mean by the classical safety factors?
  • asked a question related to Reliability Analysis
Question
4 answers
I am evaluating the reliability and availability of a hydropower plant using dynamic fault tree gates. To evaluate the top-event probability, a Dynamic Bayesian Network (DBN) is used. I am unable to figure out how many time slices I should consider for my network. Also, should my time slices be 1 month / 6 months / 1 year, or 1 year / 2 years / 5 years?
Also, should all the power plant components, with both static and dynamic gates, be represented with different time slices in the DBN, or only the components with dynamic gates?
Relevant answer
Answer
Thank you Sir.
  • asked a question related to Reliability Analysis
Question
6 answers
I am a Master's student working on a thesis related to coordination in construction. My idea is to rank 59 coordination factors by importance and by how time-consuming they are, based on questionnaire survey data. To measure the most important factors, I will use the Relative Importance Index method; which method would be suitable for measuring how time-consuming the factors are?
Another question: I will use reliability analysis and descriptive statistics as the general analysis; what other types of analysis would also be suitable?
Relevant answer
Answer
Hello Kaung,
I think that ranking the times, based on mean or median values, would suffice for your purposes.
Good luck with your work.
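For reference, the Relative Importance Index is usually computed as RII = ΣW / (A × N), where W are the ratings given by respondents, A is the highest rating on the scale and N is the number of respondents; the same formula can be reused with a "time consumed" rating scale. A minimal sketch in Python with hypothetical 5-point ratings:

```python
import numpy as np

def rii(ratings, highest=5):
    """Relative Importance Index: sum of ratings / (highest rating * N)."""
    ratings = np.asarray(ratings)
    return ratings.sum() / (highest * len(ratings))

# Hypothetical importance ratings for one coordination factor
factor_ratings = [5, 4, 4, 3, 5, 4, 2, 5, 4, 4]
print(f"RII = {rii(factor_ratings):.3f}")  # rank the 59 factors by this value
```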
  • asked a question related to Reliability Analysis
Question
3 answers
Hello researchers,
I have a question regarding the switching method for selecting standby units for operation in a complex system, in the context of reliability analysis. Is there an appropriate method for selecting a (standby) unit?
Relevant answer
  • asked a question related to Reliability Analysis
Question
3 answers
Hi
Historical data is used to forecast the number of functional failures in passenger trains. However, there is always a difference between the forecast and the actually observed data. I am wondering which technique or approach is suitable for minimizing the forecast error in the case of railway data.
I would be grateful if you could share a useful article or case study.
Relevant answer
Answer
Mohammed El Genidy, many thanks. I will definitely read your paper; I hope it will help me.
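More generally, whichever forecasting technique is tried, the error should be tracked with standard measures such as MAE, RMSE and MAPE on held-out data, so that competing models can be compared on the same footing. A minimal sketch in Python with hypothetical monthly failure counts:

```python
import numpy as np

observed = np.array([14, 18, 11, 20, 16, 15])   # hypothetical failures per month
forecast = np.array([12, 17, 13, 18, 17, 14])

err = observed - forecast
mae  = np.mean(np.abs(err))
rmse = np.sqrt(np.mean(err**2))
mape = np.mean(np.abs(err / observed)) * 100
print(f"MAE={mae:.2f}, RMSE={rmse:.2f}, MAPE={mape:.1f}%")
```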
  • asked a question related to Reliability Analysis
Question
5 answers
Hi
I have run PCA on 3 years of data, so I obtained factor scores for each item/subject for every year. Now I need a single value, derived from the three yearly values, to use in my model.
Should I take the average of the factor scores, or should I use the most recent value? What is an appropriate way to use factor scores for transport delay data analysis?
Relevant answer
Answer
Great, you are progressing quite rapidly. Keep going, and good luck. By the way, Monash University has a great statistics department. The greatest statistician of all time (R. A. Fisher) is buried there.
  • asked a question related to Reliability Analysis
Question
3 answers
I am not sure whether a novel method exists in prognostics to relate state estimation to the prediction of a survival function or reliability distribution function, which is the more conventional approach. Can anyone answer this question, please?
Relevant answer
Answer
Thank you for your time, Dr. Jensen.
  • asked a question related to Reliability Analysis
Question
3 answers
For example,
A program which evaluates LOLE, SAIDI and CAIDI on a test network.
Relevant answer
Answer
Dear Seyed, no, only for Simplorer or Portunus. Regards.
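For reference, the IEEE 1366 customer indices are straightforward to compute once interruption records are available: SAIFI = total customer interruptions / customers served, SAIDI = total customer-minutes interrupted / customers served, and CAIDI = SAIDI / SAIFI. A minimal sketch in Python with a hypothetical test network:

```python
# Hypothetical interruption records: (customers affected, duration in minutes)
events = [(500, 90), (1200, 45), (300, 240)]
customers_served = 10_000

saifi = sum(n for n, _ in events) / customers_served
saidi = sum(n * d for n, d in events) / customers_served
caidi = saidi / saifi
print(f"SAIFI={saifi:.3f} int./cust., SAIDI={saidi:.1f} min/cust., "
      f"CAIDI={caidi:.1f} min/int.")
```

LOLE, by contrast, requires a generation adequacy model (capacity outage tables or Monte Carlo state sampling), which is beyond a few lines but follows the same record-driven logic.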
  • asked a question related to Reliability Analysis
Question
5 answers
We all know there are scales that measure purchase intention (or willingness) for a product or service. I am interested to know if there is some (dependable) scale that measures the intention (propensity) to stop purchasing a product after an unpleasant experience (dissatisfaction).
Thank you!
Relevant answer
Answer
You could read the two papers I have written on my page; they contain purchase intention scales.
  • asked a question related to Reliability Analysis
Question
31 answers
Here is my situation: I have used the standardized Health Literacy Questionnaire (HLQ) (a tool comprising 9 scales) to look at musicians' health literacy for the first time. However, the HLQ has never been validated on musicians. After having collected 479 responses from musicians, I cleaned the data and ran a CFA (using AMOS). The model fit was poor, so I ran an EFA (in SPSS), which suggested I may have about 4 factors (instead of 9), one of them having a Cronbach's alpha of less than .7. I then ran a CFA again, but the CFA doesn't fit with the EFA at all. What shall I do to test construct validity?
For the EFA, I used eigenvalue > 1 and parallel analysis, conducted an orthogonal rotation (varimax), and suppressed small coefficients below .4.
Many, MANY thanks!
Relevant answer
Answer
Thank you again to all of you who responded to this! I have now contacted one of the authors of the HLQ who is a research methodologist who specialises in the application of survey methods to public health. I also obtained his permission to share his answer with you, as I thought this may be of interest. So, here goes:
"I take from your query that you were using the default confirmatory factor analysis (CFA) approach in Amos to fit your model. This uses maximum likelihood (ML) estimation which is *not appropriate* for the HLQ items. The HLQ items have 4 and 5 point ordinal response options. CFA (and exploratory factor analysis; EFA) of ordinal data with 4 and 5 point response options should be done with weighted least squares estimation and polychoric correlations* (the appropriate method for CFA is labelled WLSMV in Mplus and DWLS (diagonally-weighted least squares) in LISREL. Either one of these specialist structural equation modeling programs should be used to analyze the kind of ordinal response data generated by the HLQ.
That said, we have recently been using Bayesian structural equation modeling (BSEM) in Mplus to analyze HLQ data. I understand that from Version 7 onwards Amos has a Bayesian CFA option and this may be appropriate for use with the HLQ. The second edition of Barbara Byrne's book "Structural Equation Modeling with AMOS: Basic Concepts, Applications and Programming" (2nd Ed., 2010) has, I believe, information on the use of Bayesian SEM in Amos. Our use of BSEM involves setting informative (small variance around zero) priors for residual correlations and cross-loadings to provide some so-called 'wriggle room' around zero for these estimates. I doubt this option would be available in AMOS, however using Bayesian analysis in AMOS should provide you with a much more appropriate approach to CFA than the Amos default ML approach.
Using SPSS for exploratory factor analysis with HLQ items is, similarly, not at all appropriate. While I would strongly recommend that you don't immediately fall back on EFA without a thorough exploration of the reasons for any poor model fit in a Bayesian CFA analysis of HLQ responses, if you do need to do an appropriate EFA there is (or was) a free-ware program available on Prof. Michael Browne's home page at the Ohio State University. The software is called 'Comprehensive Exploratory Factor Analysis' (CEFA) and is very user friendly. 
Finally, there are many appropriate factor analysis tools in R for analyzing ordinal HLQ-type data.
*Polychoric correlations are actually based on the assumption that there is a normally distributed latent variable underlying the ordinal response continuum. As I understand it, polychoric correlations handle non-normally distributed ordinal data more accurately than do Pearson correlations. In our situation, with non-normally distributed ordinal responses to self-report items, factor analyses using polychorics and an appropriate estimator (e.g. diagonally-weighted least squares) are the best option we have available aside from a full Bayesian analysis with small-variance priors. I'm not certain whether polychoric correlations are necessary with Bayesian analysis. Mplus has the option of declaring the data CATEGORICAL for a Bayesian analysis; I've experimented a little with this, but have found the results very similar whether or not this option is used and have not used it routinely. I don't know whether Amos provides a polychoric option for Bayesian analysis."
So, based on his response, we have conducted a one-model CFA with weighted least squares estimation and polychoric correlations, and the goodness-of-fit indices look much better now. We still need to run a CFA for each of the nine factors/dimensions separately, as suggested by the same research methodologist...
  • asked a question related to Reliability Analysis
Question
4 answers
Dear all,
Does anyone know how I can estimate the NHPP reliability function with a non-parametric method?
I know kernel density estimation is widely used in this area, but it seems to have a rather complicated theory.
I was wondering if you could suggest an example or point directly to statistical software.
Also, I am attaching the needed formulas of the kernel model.
Thanks for your attention.
Best/Hamzeh
Relevant answer
Answer
Dear Marcello Fera
I know it may be difficult :) but generally it's possible.
Thanks for your answer anyway.
Best.
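For what it's worth, the basic kernel estimator of an NHPP intensity is just a sum of kernels centred at the observed event times, lambda_hat(t) = sum_i K_h(t - t_i), and the reliability over an interval then follows from exp(-integral of the intensity). A minimal sketch in Python with a Gaussian kernel and hypothetical failure times (this is not a reproduction of the attached formulas, and the bandwidth would need proper tuning, plus boundary correction near the ends of the observation window):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

failure_times = np.array([120.0, 340.0, 560.0, 610.0, 900.0])  # hypothetical
h = 100.0  # kernel bandwidth (would need tuning in practice)

def intensity(t):
    """Kernel estimate of the NHPP intensity at time t."""
    return norm.pdf((t - failure_times) / h).sum() / h

def reliability(t, s):
    """P(no failure in (t, t+s)) = exp(-integral of the intensity)."""
    integral, _ = quad(intensity, t, t + s)
    return np.exp(-integral)

print(f"R(900, +100) approx. {reliability(900.0, 100.0):.3f}")
```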
  • asked a question related to Reliability Analysis
Question
3 answers
Hello! I am at a loss as to which tests I should use. I am doing a master's thesis in marketing/IS.
I have two groups: treatment (used AI) and control (did not use AI). I used an experimental design with scenarios. Both groups went through the same scenario and purchased a luxury brand. I would like to verify whether using AI during a consumer's purchasing process changes (or not) their perception of the brand, based on 4 constructs (uniqueness of the brand, quality of the brand, modernity and conspicuousness). I am using reflective scale items (I think?).
I have been looking at papers and online, and I believe I should do a PCA and a MANOVA, but isn't there another step to take before running a MANOVA? And why?
Thank you very much.
Relevant answer
Answer
Hello - can you provide more details about the construct, since you refer to both formative and reflective constructs in your question? In particular, do you conceptually think of brand perception as a formative construct or a reflective construct? The way you perceive or hypothesize it will impact the structure of your SEM model and likewise the parameterization. Either way, your solution lies within the SEM approach. Despite this confusion in the question, I will try to answer it.
In a formative construct, the indicators (4 in your case) cause the construct. Thus, in this case your indicators will 'cause' (so to speak) your latent construct of brand perception, which in turn will affect other endogenous variable(s) (I am not sure if you have other endogenous variables, or is it just this 'formative' construct?).
On the other hand, reflective constructs follow the more conventional latent-variable approach, where the indicators are caused BY the latent variable (which could be perception of the brand, if you believe it is a reflective construct). In this case, assuming there is no other observed endogenous variable, a measurement model would suffice. Of course, you want to include an indicator for the two groups (treatment/control). As a next step, given the significant amount of heterogeneity in consumers' perceptions and purchase decisions, I would strongly recommend going for a heterogeneity-based variant of the conventional SEM, or a mixed SEM at minimum.
Best, wali
  • asked a question related to Reliability Analysis
Question
2 answers
I am currently studying complex system reliability analysis with Bayesian networks (BN) and am looking for a project case involving a multi-state system with no fewer than 3 states in one node.
If you have cases that meet these conditions, please help me. Thanks very much!
Relevant answer
Answer
You can define the period/condition of each state change; there can be as many states as you want. Some papers you may read:
Bayesian Networks for Reliability Analysis of Complex Systems
A Multigroup SEIR Epidemic Model with Vaccination on Heterogeneous Network
SEIQR-SIS epidemic network model and its stability
  • asked a question related to Reliability Analysis
Question
5 answers
Hi.
I am currently completing the final part of my school project, which is an evaluation of the operational reliability of the Zlín aircraft at our school. The aim of my work is to show that our fleet is safe. I have already written up every single failure that appeared in the last 6 years and discussed our aircraft maintenance. The last part is the reliability calculation. I have already calculated the mean time between failures (the easy one), but in my opinion that is not enough; I would like to add other figures to make the case stronger. Do you know any other formulas for quantities connected to aircraft reliability? The data I have are the types of failures, the number of failures, the total time flown each year by every aircraft, and the calculated mean time between failures. I have found some formulas on the internet (picture attached), but the description and the way to calculate them are too much for me (as I am not a mathematics student). I am therefore looking for simpler formulas, or a program that would help me. I am really grateful for every answer.
Relevant answer
Answer
Is that really the reliability? If I multiply it by 100 to get a percentage, the reliability comes out too low, if I understand it correctly.
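If failures can be assumed to occur at a roughly constant rate (a common first approximation), reliability over a mission time t follows directly from the MTBF via R(t) = exp(-t / MTBF). The percentage depends strongly on the mission time chosen, so a low-looking value may simply reflect too long a t. A minimal sketch in Python with hypothetical numbers:

```python
import math

mtbf = 250.0           # hypothetical mean time between failures, flight hours
for t in (1, 10, 50):  # mission times in flight hours
    r = math.exp(-t / mtbf)
    print(f"R({t} h) = {r:.3f}  ({r * 100:.1f} %)")
```

Reporting R(t) for a sensible mission length (e.g., one sortie) usually gives a far more interpretable figure than a single percentage over years of operation.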
  • asked a question related to Reliability Analysis
Question
22 answers
I am looking to test item discrimination of my newly constructed psychological well-being scale and would appreciate any references for suggested ranges of poor, good and excellent discriminatory values.
Relevant answer
Answer
I agree with the great comments above, and for the sake of completeness, I want to mention the "correlation with marker items (items that you highly trust) method", which I find very practical.
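A classical index alongside the corrected item-total correlation is the upper-lower discrimination index, D = p_upper - p_lower, computed from the top and bottom scoring groups (27% is the conventional split); Ebel's widely cited guidelines treat roughly .40 and above as very good, .30-.39 as good, .20-.29 as marginal, and below .20 as poor. A minimal sketch in Python for dichotomously scored items (hypothetical data; for Likert items the same split can be applied to item means):

```python
import numpy as np

rng = np.random.default_rng(4)
scores = (rng.random((200, 20)) > 0.5).astype(int)  # hypothetical 0/1 item matrix

total = scores.sum(axis=1)
cut = int(0.27 * len(total))                        # conventional 27% split
order = np.argsort(total)
low, high = order[:cut], order[-cut:]

D = scores[high].mean(axis=0) - scores[low].mean(axis=0)
print("Discrimination index per item:", np.round(D, 2))
```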
  • asked a question related to Reliability Analysis
Question
2 answers
What should be the scientific approach to drawing up a test framework that justifies the validity and reliability of the data used in research work?
Research tends to be based on data inputs from primary and secondary data sources.
To validate qualitative and quantitative data, it is recommended to develop an effective test framework that can help justify the validity and reliability of the data.
Relevant answer
Answer
  • asked a question related to Reliability Analysis
Question
3 answers
I have problems with the polynomial chaos expansion (PCE) regression method. I have some questions related to this topic and its Matlab code.
If anyone can help me, I would be very appreciative.
If the problem can be solved, I can offer a gift or payment, since this problem is the main obstacle in my thesis.
The Matlab code is given in the attached .m files.
Relevant answer
Answer
Dear Murat Sari
You can use one of the many easy and free statistics software packages, such as Minitab and SPSS, to do that.
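For context, general statistics packages will not fit a PCE directly; the regression form of PCE is ordinary least squares on a basis of orthogonal polynomials of the standardized inputs. A minimal sketch in Python (not the thesis's Matlab code) for a 1-D expansion in probabilists' Hermite polynomials of a standard normal input:

```python
import math

import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(5)
n, degree = 500, 4

xi = rng.standard_normal(n)                           # standardized input samples
y = np.exp(0.3 * xi) + 0.05 * rng.standard_normal(n)  # hypothetical model output

Psi = hermevander(xi, degree)          # basis matrix of He_0 ... He_degree
coeffs, *_ = np.linalg.lstsq(Psi, y, rcond=None)

# By orthogonality of the Hermite basis w.r.t. the standard normal measure,
# the mean is the 0th coefficient and the variance is sum_k c_k^2 * k!
mean = coeffs[0]
var = sum(c**2 * math.factorial(k) for k, c in enumerate(coeffs) if k > 0)
print(f"PCE mean approx. {mean:.4f}, variance approx. {var:.4f}")
```

Multi-dimensional inputs work the same way, with tensor products of the 1-D polynomials as the basis.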
  • asked a question related to Reliability Analysis
Question
30 answers
CR is often advocated as an alternative because Cronbach's alpha's tau-equivalence assumption is usually violated. My alpha returned a value of 0.64 (low, but I guess I can proceed, since I've seen such a practice before and since authors such as Hair and Kline accept a threshold between 0.6 and 0.7). Anyway, since my factor is homogeneous but has different loadings for the 4 items involved, I think CR would be a better alternative. Surprisingly, my CR returned a value of 0.787 using a calculator based on the formula provided by Raykov (1997).
Is such a large difference between the two coefficients possible and logical? One paper (Peterson & Kim, 2012) said that although CR is a better estimate, there isn't much of a difference between the values.
Assuming that CR is indeed correct, can I proceed further and run a multiple regression analysis based on the reliability indicated by CR rather than by Cronbach's alpha? Thank you.
EDIT: I am using this calculator/formula.
Relevant answer
Answer
Internal consistency is a general term used for estimating the reliability of a measure by evaluating the within-scale consistency of the responses to the items of the measure. It is only applicable to multiple-item measurement instruments. Cronbach's (coefficient) alpha is the most widely used method for estimating internal consistency. Coefficient alpha assumes: i) unidimensionality, and ii) that items are equally related to the construct and are therefore interchangeable. In practice, this means that alpha assumes the factor loadings to be the same for all items. Composite reliability does not assume this but takes into consideration the varying factor loadings of the items. If your items i) measure the same single construct, ii) have exactly the same factor loadings, and iii) there are no error covariances, your composite reliability coefficient and alpha coefficient will be the same or very close. The more the factor loadings fluctuate among items, the higher the discrepancy between the values of composite reliability and Cronbach's alpha.
Forwarded from my answer to a similar question.
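To see how large the loading-driven gap can actually get, one can build the covariance matrix implied by a single factor with unequal loadings and compute both coefficients from it. A minimal sketch in Python with hypothetical, deliberately uneven standardized loadings:

```python
import numpy as np

loadings = np.array([0.9, 0.8, 0.4, 0.2])  # hypothetical, deliberately uneven
theta = 1 - loadings**2                     # unique (error) variances

# Model-implied covariance matrix of the 4 standardized items
Sigma = np.outer(loadings, loadings)
np.fill_diagonal(Sigma, 1.0)

k = len(loadings)
alpha = (k / (k - 1)) * (1 - np.trace(Sigma) / Sigma.sum())
cr = loadings.sum()**2 / (loadings.sum()**2 + theta.sum())
print(f"alpha = {alpha:.3f}, CR = {cr:.3f}")
```

Even with this spread of loadings the gap stays modest (roughly 0.63 vs 0.69), which echoes Peterson & Kim's point: a gap as large as 0.64 vs 0.787 is worth double-checking against the loadings and error variances actually fed into the calculator.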
  • asked a question related to Reliability Analysis
Question
9 answers
Is factor analysis a MUST when adopting or adapting research instruments in different cultures?
Relevant answer
Answer
Hi Ryan,
that's a difficult question. Most important would be to consider/evaluate whether the original set of questions truly implies a common factor structure. Many questionnaires are developed using principal component analysis, which simply computes a variance-retaining summative composite. A factor model, in contrast, implies that the factor represents an existing entity which is the cause of the item responses. The essential testable implication of the factor model is conditional independence of the items given the factor (local independence). Many scales violate these assumptions - hence you do not know whether the reason is only some slight and unimportant violations or a fundamental problem with the structure (which is a problem for its validity).
IF the model holds, it has some advantages when undertaking cross-cultural research, as you can test for "measurement invariance". I post some papers for your interest.
The other possibility is that the set of items simply forms a "collective set" - meaning that each item (or some items) measures something different, but the "construct" is simply the set of these things (like an index or umbrella term). Actually, I do not know how to evaluate the cross-cultural equivalence of such a composite. I could imagine placing the set of items in a network together with validation criteria. The problem is how to evaluate the overall match. A topic for future research :)
Best,
Holger
References about invariance testing:
-----------------------------------------------------
Schaffer, B. S., & Riordan, C. M. (2003). A Review of cross-cultural methodologies for organizational research: A best-practices approach. Organizational Research Methods, 6, 169-215. doi:10.1177/1094428103251542
Taras, V., Rowney, J., & Steel, P. (2009). Half a century of measuring culture: Review of approaches, challenges, and limitations based on the analysis of 121 instruments for quantifying culture. Journal of International Management, 15(4), 357-373. doi:10.1016/j.intman.2008.08.005
Steenkamp, J.-B. E. M., & Baumgartner, H. (1998). Assessing measurement invariance in cross-national consumer research. Journal of Consumer Research, 25, 78-90.
Vandenberg, R. J. (2002). Toward a further understanding of and improvement in measurement invariance methods and procedures. Organizational Research Methods, 5(2), 139-158.
References about forms of constructs
-----------------------------------------------------
Edwards, J. R. (2001). Multidimensional constructs in organizational behavior research: Towards an integrative and analytical framework. Organizational Research Methods, 4(2), 144-192.
Edwards, J. R. (2011). The fallacy of formative measurement. Organizational Research Methods, 14(2), 370-388.
Edwards, J. R., & Bagozzi, R. P. (2000). On the nature and direction of relationships between constructs and measures. Psychological Methods, 5(2), 155-174.
And the difference between factor models and composites
-----------------------------------------------------
Podsakoff, P. M., MacKenzie, S. B., Podsakoff, N. P., & Lee, J.-Y. (2003). The mismeasure of man(agement) and its implications for leadership research. The Leadership Quarterly, 14, 615-656.
Bandalos, D. L., & Boehm-Kaufman, M. R. (2009). Four common misconceptions in exploratory factor analysis. In C. E. Lance & R. J. Vandenberg (Eds.), (pp. 61-87). New York: Routledge.
Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods, 4(3), 272-299. doi:10.1037/1082-989X.4.3.272
  • asked a question related to Reliability Analysis
Question
4 answers
Hello,
I am looking for the standards governing discharges of treated wastewater into the receiving environment in the Ontario region of Canada.
I tried several websites, but without success.
Standard discharge limits for TSS, COD, BOD, TN and TP.
Thank you for your help.
Relevant answer
Answer
This is the Ontario web link:
F-5-1: Determination of Treatment Requirements for Municipal and Private Sewage Treatment Works. It covers requirements for the treatment of municipal and private sewage discharged into surface waters.
  • asked a question related to Reliability Analysis
Question
2 answers
Fuzzy set theory (FST), which handles uncertainty, plays a significant role in real-world problems such as operations research, medical decision making, risk assessment, social science, decision making and reliability analysis.
Papers:
L.A. Zadeh, Fuzzy sets, Inform. Control, 8 (1965) 338-356.
S.H. Wei, S.M. Chen, A new approach for fuzzy risk analysis based on similarity measures of generalized fuzzy numbers, Expert Syst. Appl. 36(1) (2009) 589-598.
J. Ye , The Dice similarity measure between generalized trapezoidal fuzzy numbers based on the expected interval and its multicriteria group decision-making method, Journal of the Chinese Institute of Industrial Engineers, 29:6 (2012) 375-382, DOI: 10.1080/10170669.2012.710879.
Relevant answer
Answer
Dear colleague,
You can use the intuitionistic fuzzy TOPSIS method to deal with vagueness and uncertainty in MCDM.
Thank you.
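As a simplified illustration of the similarity-measure approach (not Ye's exact expected-interval formulation from the cited paper), a Dice similarity can be computed on the parameter vectors of two trapezoidal fuzzy numbers: D(A, B) = 2Σa_i b_i / (Σa_i² + Σb_i²). A minimal sketch in Python with hypothetical fuzzy numbers:

```python
import numpy as np

def dice_similarity(a, b):
    """Dice similarity of two parameter vectors (1.0 means identical)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return 2 * np.dot(a, b) / (np.dot(a, a) + np.dot(b, b))

# Hypothetical trapezoidal fuzzy numbers (a1, a2, a3, a4)
A = [0.1, 0.2, 0.3, 0.4]
B = [0.2, 0.3, 0.4, 0.5]
print(f"Dice similarity: {dice_similarity(A, B):.3f}")
```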
  • asked a question related to Reliability Analysis
Question
3 answers
To reduce the size of a large BDD, different approaches are used: optimal variable ordering (OBDD), reduction operations (ROBDD), zero-suppressed BDDs (ZBDD), etc. But sometimes, for very large fault trees, it is impossible to build the full (exact) BDD even after using these approaches, so we must apply some cut-off. Please recommend some approaches, or give me some references, for BDD cut-off. Thanks a lot in advance. Regards, Sergey.
Relevant answer
Answer
You are welcome.