Reliability Analysis - Science topic

Explore the latest questions and answers in Reliability Analysis, and find Reliability Analysis experts.
Questions related to Reliability Analysis
  • asked a question related to Reliability Analysis
Question
4 answers
When entering items for reliability analysis (Cronbach's alpha), I'm unsure whether the original items should be replaced with the recoded (reverse-scored) items or not.
Guidance from experts will be highly appreciated
Relevant answer
Answer
Yes, the items need to be recoded before computing alpha. The items should all be positively correlated after recoding. It doesn't make sense to compute alpha prior to recoding because you would very likely end up with some negative inter-item correlations. The resulting alpha estimate would be incorrect/meaningless.
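A minimal sketch of this recode-then-alpha workflow in R with the psych package; the data frame dat, the item names q1-q5, and the choice of q2 and q5 as the negatively worded 7-point items are all hypothetical:

    # Reverse-code negatively worded items before computing alpha
    # (hypothetical data frame 'dat' with 7-point Likert items q1-q5).
    library(psych)

    dat$q2r <- 8 - dat$q2   # reverse-code: 1 <-> 7, 2 <-> 6, ...
    dat$q5r <- 8 - dat$q5

    scale_items <- dat[, c("q1", "q2r", "q3", "q4", "q5r")]
    alpha(scale_items)      # inter-item correlations should now all be positive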
  • asked a question related to Reliability Analysis
Question
1 answer
I have a question regarding a moderation effect.
I am testing a model with one IV (A) and one DV (B), and I want to test the moderating effect of M on this path.
Is it necessary to investigate the reliability and validity of the interaction construct (B*M)?
Or do I only have to investigate reliability and validity for the A construct and the B construct?
Help from one of the PLS experts in this forum would be highly appreciated!
Relevant answer
Answer
  • No, there is no need to include the moderation term in both analyses; just report the other variables [refer to this paper: 10.1002/mde.3422]
  • asked a question related to Reliability Analysis
Question
3 answers
To protect safety-critical systems against soft errors (induced by radiation), we usually use redundancy-based fault-tolerance techniques.
Recently, to cut down the unacceptable overheads imposed by redundancy, only the most critical parts of the system are protected, i.e., selective fault tolerance. To identify such parts, we can use fault injection.
Two fault-injection-based methodologies are widely presented in the literature for improving a system's fault tolerance: reliability assessment and vulnerability assessment. Both use fault injection. I wonder, what is the main difference between these two concepts, i.e., reliability assessment and vulnerability assessment?
Relevant answer
Answer
Both answers of Steven Cooke and O.S. Abejide are correct. In addition, reliability assessment is the systematic calculation and prediction of the probability of limit-state violation, whereas vulnerability assessment identifies the weakest/critical point in a system where failure is likely to start first before spreading to other members of the system.
  • asked a question related to Reliability Analysis
Question
4 answers
I am trying to do a reliability analysis for short RC columns, referring to the paper "Reliability analysis of eccentrically loaded columns" by Maria M. Szerszen, Aleksander Szwed, and Andrzej S. Nowak.
At the end, based on a plot of strain vs. strength reduction factor, they propose new values of the strength reduction factor as a function of strain.
Two models are proposed: the dotted line is for all points except the black ones, while the solid line is for the black points.
The black points depict reinforcement ratios < 2, while the green, blue, and red points show reinforcement ratios equal to or in excess of 2.
My question is: how did they fit these two lines, or how did they measure the transition zone from the given scatter plot?
Relevant answer
Answer
Steven Prevette, that makes two of us. Many thanks again for your comment. I have done a similar analysis and will post my results here once done.
  • asked a question related to Reliability Analysis
Question
1 answer
Hi,
I have conducted an EFA on three items, and all items load on one factor. I then ran a reliability analysis on the three variables to check internal reliability using Cronbach's alpha.
My question: should I run the reliability analysis before or after the EFA?
Does the order really matter in this case?
Thank you in advance!
Relevant answer
Answer
I believe you are supposed to run the reliability analysis after the EFA: if you did it before, you might be assessing the internal consistency of items that load on the wrong factors or cross-load on multiple factors and that therefore need to be deleted. For further information, see: https://files.eric.ed.gov/fulltext/EJ1085767.pdf
  • asked a question related to Reliability Analysis
Question
1 answer
Dear all,
Trust you are doing great. I need a favor from all of you. My colleague and I are working on the stability of an iron and steel manufacturing plant, and we are looking for information on the detailed process flowsheet and some historical data of the plant, such as maintenance and failure data for at least the past 5 years. Could anyone assist with this data or point me to a platform where I can get such data? During my university days, my professor once told us there is a website which publishes industrial data on process control challenges. I am wondering if anyone can guide me to a similar website that publishes historical data of various process plants.
Looking forward to hearing from you all.
Relevant answer
Answer
I am reasonably familiar with iron and steel as a Civil Engineer, and my father was in the grey iron casting business, so I am familiar with that.
I would suggest before comparing your data to others, understand your own data. What are your process flows? Have you flowcharted out the process? If you have cycle times (such as how long to plan, approve, schedule, do the work, cleanup from the work) and loop-backs you should be able to understand your maintenance system and reliability.
What is of concern? What question are you trying to answer? If you are in castings, the American Casting Society may be able to help.
I have worked as an Operations Analyst and teach stats and data analysis courses, so may be able to assist or guide your analysis. I would recommend the use of Statistical Process Control to understand your data.
  • asked a question related to Reliability Analysis
Question
2 answers
Hello everyone,
I've got a question regarding within-subject experiments in which two or more variants of a prototype (e.g., a chatbot) are evaluated with respect to different constructs, i.e., classic A/B-testing experiments of different design options. For both versions, the same items are used for comparability.
Before the final data analysis, I plan to perform tests for validity, reliability and factor analysis. Does anyone know if I need to calculate the corresponding criteria (e.g., Cronbach's alpha, factor loadings, KMO values) for both versions separately, or only once aggregated for the respective constructs? And how would I proceed with the exclusion of items? Especially when there are a lot of control conditions, it might be difficult to decide whether to exclude an item if it is below a certain criterion.
In reviewing the literature of papers with a similar experiment design, I couldn't identify a consistent approach so far.
Thank you very much for your help! If anyone has any recommendations for tools or tutorials, I would also appreciate it as well.
Relevant answer
Answer
Dear Pia, thank you very much for your helpful recommendation!
  • asked a question related to Reliability Analysis
Question
17 answers
Researchers in the social sciences have to report some measure of reliability. Standard statistics packages provide functions to calculate (Cronbach's) alpha or procedures to estimate (McDonald's) omega in a straightforward way. However, things become a bit more complicated when your data have a nested structure. For instance, in experience sampling research (ESM), researchers usually have self-reports or observations nested in persons. In this case, Geldhof et al. (2014) suggest that reliability be estimated for each level of analysis separately. Although this is easy to do with commercial packages like Mplus, R users face some challenges. To the best of my knowledge, most multilevel packages in R do not provide a function to estimate reliability at the within- vs. the between-person level of analysis (e.g., misty or multilevel).
So far, I have been using a tool created by Francis Huang (2016), which works fine for alpha. However, more and more researchers prefer (McDonald's) omega instead (e.g., Hayes & Coutts, 2020).
After working with workarounds for years, I accidentally found that the R package semTools provides a function to estimate multilevel alpha, different variants of omega, and average variance extracted for multilevel data. I would like to use this post to share this with anyone struggling with the estimation of multilevel reliability in R.
If you find this post helpful, feel free to let me know.
Oliver
Bliese, P. (n.d.). multilevel: Multilevel Functions [Computer software]. Comprehensive R Archive Network (CRAN). https://CRAN.R-project.org/package=multilevel
Geldhof, G. J., Preacher, K. J., & Zyphur, M. J. (2014). Reliability estimation in a multilevel confirmatory factor analysis framework. Psychological Methods, 19(1), 72–91. https://doi.org/10.1037/a0032138
Huang, F. L. (2016). Conducting multilevel confirmatory factor analysis using R. http://faculty.missouri.edu/huangf/data/mcfa/MCFAinRHUANG.pdf
Hayes, A. F., & Coutts, J. J. (2020). Use Omega Rather than Cronbach’s Alpha for Estimating Reliability. But…. Communication Methods and Measures, 14(1), 1–24. https://doi.org/10.1080/19312458.2020.1718629
Yanagida, T. (2020). misty: Miscellaneous Functions 'T. Yanagida' (Version 0.3.2) [Computer software]. https://CRAN.R-project.org/package=misty
Relevant answer
Answer
The R package semTools now has a new compRelSEM() function, that estimates composite reliability from estimated lavaan models. For multilevel measurement models, reliability indices defined by Lai (2021) are implemented, as well as Geldhof et al.'s (2014) less useful "hypothetical reliability" of level-specific latent components. Until version 0.5-6 is available on CRAN, the development version can be installed with syntax provided in my description here: https://github.com/simsem/semTools/issues/106
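A runnable sketch of this workflow, using lavaan's built-in Demo.twolevel data; the config/shared arguments follow the semTools documentation for Lai's (2021) indices, so check ?compRelSEM before relying on the exact call:

    # Two-level CFA in lavaan, then multilevel composite reliability.
    library(lavaan)
    library(semTools)

    model <- '
      level: within
        f =~ y1 + y2 + y3
      level: between
        f =~ y1 + y2 + y3
    '
    fit <- sem(model, data = Demo.twolevel, cluster = "cluster")

    # Reliability of the configural (within-cluster) and shared
    # (between-cluster) components of construct f:
    compRelSEM(fit, config = "f", shared = "f")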
  • asked a question related to Reliability Analysis
Question
3 answers
Any guidance on carrying out a reliability analysis using MATLAB would be greatly appreciated. Thanks.
Relevant answer
Answer
I suggest you analyze the "Tieset & Reliability Analysis of a System" source code given in the link below.
  • asked a question related to Reliability Analysis
Question
6 answers
Every tutorial and guide I can find for scale analyses in SPSS is specifically about Likert scales. My study does not use a Likert scale; instead it uses a 0-100 scale.
What reliability analysis is best suited to such a scale?
Relevant answer
Answer
Items that are measured on a 0 - 100 scale might make it even easier to assess reliability because they can (potentially) be treated as (quasi-)continuous (metrical, interval scale) variables, whereas Likert items are, strictly speaking, only ordinal in nature, often requiring special treatment in psychometric analyses.
If you have multiple items that are supposed to measure one or more factors/latent variables, the best course of action would be to run a confirmatory factor analysis (CFA) with the items as indicators of one or more latent factors to test the hypothesized factor structure first. If you find that the hypothesized factor model fits your data well/is appropriate, you can directly use the reliability estimates that are provided as part of a CFA (R-squared values for the items). In addition, composite reliability indices (reliability of the aggregate [sum or mean] of the items for a given factor) can be inferred as well from CFA. Depending on the assumptions made in the specific factor model, this may be, for example, Spearman-Brown, Cronbach's alpha, or McDonald's Omega.
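A hedged sketch of this CFA-based approach with lavaan and semTools; the data frame dat and item names v1-v5 are hypothetical:

    # One-factor CFA for 0-100 items, then item and composite reliability.
    library(lavaan)
    library(semTools)

    fit <- cfa('f =~ v1 + v2 + v3 + v4 + v5', data = dat)
    summary(fit, fit.measures = TRUE, rsquare = TRUE)  # R-squared = per-item reliability
    compRelSEM(fit)  # omega-type composite reliability of the sum score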
  • asked a question related to Reliability Analysis
Question
12 answers
Greetings,
I am a DBA student conducting a study about "Factors Impacting Employee Turnover in the Medical Device Industry in the UAE."
My research model consists of 7 variables, out of which:
  • 5 variables measured using multi-item scales adapted from the literature, e.g., Perceived External Prestige (6 items), Location (4 items), Flextime (4 items), etc.
  • 2 are nominal variables
I want to conduct a reliability analysis using SPSS, and I thought I need to do the following:
  1. Conduct reliability test using SPSS Cronbach's alpha for each construct (except for nominal variables)
  2. Deal with low alpha coefficients (how to do so?)
  3. Conduct Exploratory Factor Analysis to test for discriminant validity
Am I thinking right? Attached are my results so far.
Thank you
Relevant answer
Answer
The issue is not my specialty. With my best wishes.
  • asked a question related to Reliability Analysis
Question
9 answers
Hello, I have a questionnaire that consists of five sections. The first section (related to drivers' knowledge) has 10 items with no Likert scale and the participants have to choose from either two or three or more specific options. The second section (related to drivers' habits) has 9 items with the first five items having a six-point Likert scale while in the remaining items the respondents have to choose one question from four specific options. The third section (related to drivers' behavioral intentions) has 10 items with each following a six-point Likert scale. The fourth section (related to drivers' psychological conditions) has 9 items with no Likert scale and the participants have to choose from three, or four or more specific options. Finally, the last section consists of questions regarding drivers' profiles (age, gender, education, driving experience, profession, frequency of driving through tunnels, etc.)
Now my question is, what kind of statistical tests or analyses can I perform here to investigate the relationship between the variables in the drivers' profile and the other sections/items? For instance, how can I analyze which group of drivers (in terms of age, gender, experience, etc.) is more knowledgeable (section 1) or adopts appropriate habits (section 2)?
I am also open to all kinds of suggestions and collaborations on this research.
P.S: I am attaching my questionnaire as a file. Hope it will help to understand my question and questionnaire better.
Relevant answer
Answer
A couple of things to remember here: in a prospectus, assemble the following:
a. The research questions, and summarize responses by data type
b. The potential tests of the hypotheses from (a)
c. Choose tests based on (a) and (b)
d. Choose sample sizes based on the chosen tests and the Type I and Type II error requirements
e. Conduct the methods of (d)
f. Answer the research questions based on (e)
g. Check the assumptions required in (e)
h. Prepare a summary report of the results
Suggestions:
0. Proper Prior Planning Prevents Poor Performance
1. Get a good book on research design
2. Study it
3. Follow the suggestions for each step
4. Assemble the overall plan
5. Collect data as in plan (4)
6. See (f), (g), and (h) above
7. Do not ever do what you did in this question again
Good luck, David Booth
  • asked a question related to Reliability Analysis
Question
4 answers
I am doing a reliability analysis of a motivation questionnaire on a sample of athletes in different sports. I do the reliability analysis in order to check the reliability of the translation of the questionnaire into another language.
Thank you.
Relevant answer
Answer
The most common way to assess reliability with a single point in time is with coefficient alpha.
  • asked a question related to Reliability Analysis
Question
6 answers
It's an online assessment for which there are 14 learning objectives. I had 3 groups (Novice, intermediate, and experts) take the assessment that had 4 items for each of the 14 objectives. Ultimately I want the assessment to randomly select only 1 item for each learning objective (a 14-item assessment) from 3 possible items. What test(s) will help me choose the best 3 items for each learning objective? I already have data from 169 test-takers (77 novice / 55 intermediate / 37 experts).
Relevant answer
Answer
Hi Mark. You should study the chapter on reliability from Julie Pallant's book on how to use SPSS. It gives you practical hints on running reliability tests and eliminating the items that do not fit. Any questions? Please feel free to ask.
  • asked a question related to Reliability Analysis
Question
10 answers
Which method is more accurate and popular for testing the validity and reliability of my scales?
Relevant answer
Answer
Cronbach's alpha is a measure used to assess the reliability, or internal consistency, of a set of scale or test items.
  • asked a question related to Reliability Analysis
Question
4 answers
I did a three-variable data analysis with 19 items. For two of the variables, I got reliability around 0.70, while the remaining variable was at 0.347. When I removed two items from that low-reliability scale, I got a reliability of around 0.60. Is there a problem with removing items from a scale to increase the reliability results?
Relevant answer
Answer
Yes, by removing data outliers
  • asked a question related to Reliability Analysis
Question
10 answers
I want to learn the reliability coefficient of a scale I used in my study (an assignment for my experimental psychology class). I read about how to find Cronbach's alpha, and I can run a reliability analysis in SPSS to find it. But I also read that in order to run a reliability analysis, each item has to have a normal distribution, and my data are not normally distributed. Can I run a reliability analysis with non-normally distributed data? Is there an alternative to reliability analysis for non-normal distributions?
Relevant answer
Answer
One of the reasons why data are not normal is the presence of outliers. Outliers are data points with extreme scores, either extremely high or extremely low. It is better to remove these outliers so that a normal distribution is obtained. Once the outliers have been removed, retest the normality of the data with the Kolmogorov-Smirnov test.
  • asked a question related to Reliability Analysis
Question
8 answers
I'm doing a split-half estimation on the following data:
trial one: mean = 5.12 (SD = 5.76)
trial two: mean = 7.62 (SD = 8.5)
trial three: mean = 8.57 (SD = 12.66)
trial four: mean = 8.11 (SD = 10.7)
(SD = standard deviation)
where I'm creating two subset scores (from trials one & two, and from trials three & four; I realise this is not the usual odd/even split):
Subset 1 (t1 & t2): mean = 12.73 (SD = 11.47)
Subset 2 (t3 & 4): mean = 16.68 (SD= 17.92)
I'm then computing a correlation between these two subsets, after which I'm computing the reliability of this correlation using the Spearman-Brown formulation.
However, in the literature I've found, it all suggests that the data must meet a number of assumptions, specifically that the mean and variance of the subsets (and possibly the items of these subsets) must all be equivalent.
As one source states:
“the adequacy of the split-half approach once again rests on the assumption that the two halves are parallel tests. That is, the halves must have equal true scores and equal error variance. As we have discussed, if the assumptions of classical test theory and parallel tests are all true, then the two halves should have equal means and equal variances.”
Excerpt From: R. Michael Furr. “Psychometrics”. Apple Books.
My question is: must variances and means be equal for a split-half estimate of reliability? If so, how can equality be tested? And is there a guide to the range within which means can be considered similar (surely the means and variances across subsets cannot be expected to be exactly equal?!)?
Relevant answer
Answer
Yes, unfortunately it's common practice to just compute Cronbach's alpha without first testing whether the variables are essentially or strictly tau-equivalent. This may in part be because SPSS calls the procedure MODEL = ALPHA (which does not make sense in my opinion) but does not provide a test of fit for essential or strict tau-equivalence as part of the procedure (for whatever reason). When the variables are not at least essentially tau-equivalent (when they are "only" congeneric, i.e., have different loadings), Cronbach's alpha leads to an underestimate of reliability (McDonald's omega is appropriate for congeneric measures). Even worse is the (probably frequent!) case where the indicators are multidimensional (i.e., they measure more than one factor/true score). In that case, Cronbach's alpha is completely meaningless, yet you wouldn't know from SPSS output.
Essential tau-equivalence can be tested in lavaan and other SEM/CFA programs by specifying a 1-factor model with all factor loadings fixed to one and intercepts and error variances freely estimated (not set equal across variables). Strict tau-equivalence requires equal intercepts (means) across variables (otherwise same specification as essential tau equivalence).
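A minimal lavaan sketch of these model tests; the data frame dat and items x1-x4 are hypothetical:

    # Essential tau-equivalence: one factor, all loadings fixed to 1,
    # intercepts and error variances freely estimated.
    library(lavaan)

    fit_tau <- cfa('f =~ 1*x1 + 1*x2 + 1*x3 + 1*x4', data = dat)
    fitMeasures(fit_tau, c("chisq", "df", "pvalue", "cfi", "rmsea"))

    # Strict tau-equivalence: additionally constrain the intercepts
    # to be equal via a shared label.
    strict_model <- '
      f =~ 1*x1 + 1*x2 + 1*x3 + 1*x4
      x1 ~ i*1
      x2 ~ i*1
      x3 ~ i*1
      x4 ~ i*1
    '
    fit_strict <- cfa(strict_model, data = dat)
    fitMeasures(fit_strict, c("chisq", "df", "pvalue", "cfi", "rmsea"))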
  • asked a question related to Reliability Analysis
Question
16 answers
Hello, I have a questionnaire that consists of four sections, with each section focusing on different variables.
First, each section has 9-10 items, with each item following a different scale. For instance, the first section has 10 items with no Likert scale, and the participants have to choose from two, three, or more specific options. The second section has 9 items, with the first five items having a six-point Likert scale, while for the remaining items the respondents have to choose from four specific options. The third section has 10 items, each following a six-point Likert scale. The fourth section has 9 items with no Likert scale, and the participants have to choose from three, four, or more specific options.
Second, for some of the items the respondents were also allowed to select multiple answers for the same item.
Now my question is, how do I calculate Cronbach's alpha for this questionnaire? If we cannot calculate Cronbach's alpha, what are the alternatives for establishing the reliability and internal consistency of the questionnaire?
Relevant answer
Answer
Amjad Pervez Strictly speaking, Cronbach's alpha only makes sense when your variables are measured on an interval scale (i.e., when you have continuous/metrical/scale-level variables) and when the variables are in line with the classical test theory (CTT) model of (essential) tau-equivalence (or stricter models). Essential tau-equivalence implies that the variables/items measure a single factor/common true-score variable (i.e., that they are unidimensional) with equal loadings. For variables that are only congeneric (measure a single factor/dimension but have different factor loadings), Cronbach's alpha underestimates reliability. For multidimensional scales, Cronbach's alpha tends to be completely meaningless. For categorical (binary and ordinal) variables, psychometric models and scaling procedures of item response theory are usually more appropriate than procedures derived from CTT, which assumes continuous (scale-level) variables.
Maybe you could describe the content of your variables (and the answer options) in a bit more detail. That would make it easier for folks on Researchgate to see which procedure may be appropriate for you.
  • asked a question related to Reliability Analysis
Question
1 answer
I would like to know which is the best way to analyse test-retest in non-normal data. If ICC is not recommended in those cases, which test should I choose?
Relevant answer
Answer
Hello
In the non-normal situation, the Spearman correlation is a suitable method.
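A one-line sketch in R, assuming paired score vectors time1 and time2 (hypothetical names):

    # Spearman rank correlation for test-retest data.
    cor.test(time1, time2, method = "spearman")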
  • asked a question related to Reliability Analysis
Question
6 answers
What if the Cronbach's alpha of a scale (4 items) measuring a control variable is between .40 and .50 in your research? However, the scale is the same scale used in previous research, in which it received a Cronbach's alpha of .73.
Do you have to make some adjustments to the scale or can you use this scale because previous research showed it is reliable?
What do you think?
Relevant answer
Answer
Hello Lisa,
As I suspected, these are clearly not indicators of a common underlying factor. Hence, alpha and every other internal-consistency approach to reliability is inappropriate. For its control function, however, the scale will do its job, as it can be regarded as a composite of specific facets. And, yes, each of the facets won't be a perfectly error-free indicator of its underlying attribute, but that should not hurt much.
All the best,
Holger
  • asked a question related to Reliability Analysis
Question
4 answers
I have a set of independent data whose final output (result) is in binary form (0 or 1). Which form of reliability analysis can be used for such datasets? I have looked at the FOSM and AFOSM methods, but all of them are applicable to continuous data.
Relevant answer
Answer
If this is a measurement tool such as a questionnaire or interview form, calculating a composite reliability coefficient may not be right for you, because if such a measurement tool is not standardized, it cannot measure a latent trait. If it is not an objective measurement tool, it may be more appropriate to consider concepts such as inter-rater reliability.
Good luck
  • asked a question related to Reliability Analysis
Question
5 answers
How can optimization techniques like genetic algorithms and particle swarm optimization be used in reliability analysis? Please give me an idea about it.
Relevant answer
Answer
Your research approach is problematic. Before you ask a research question or ponder the answer to a problem, you are starting with a method and trying to fit the method to a field, not even to a specific problem. You should first ask a research question, formulate the problem, build the model and then find a suitable optimization method to solve it.
  • asked a question related to Reliability Analysis
Question
7 answers
Recently, I read that we do not validate the questionnaire itself, but the scores obtained through the questionnaire. So are papers with titles like "Validation of the XXXXXX questionnaire" wrong?
Relevant answer
Answer
Early in my career I encountered Anne Anastasi's book "Psychological Testing". I was struck, and have remained impressed all my life, by her thought that a "questionnaire is a sample of behaviour". That is, a questionnaire is a *sample* from which the analyst makes predictions about the *ensemble* of behaviours the respondent may be said to exhibit. It makes complete sense therefore to investigate the ability of a questionnaire to allow the prediction to be made with validity (does the sample really measure the ensemble of interest?) and reliability (does the sample fluctuate at random?) Using unvalidated questionnaires is, as I used to drone to my dear students till they knew this lesson, pseudo-science and snake-oil. As in all inferential statistics, the actual *data* (in the original question, "the scores") is an accident that has just taken place. What the investigator wants to know is: what does the accident tell us ("the questionnaire")? Welcome to the wonderful world of Psychometrics!
  • asked a question related to Reliability Analysis
Question
1 answer
I need to publish my research paper on reliability analysis of an industrial system in an SCI journal of Q1/Q2 category urgently. Can anyone suggest a journal?
Relevant answer
Answer
You should search in Scimago.
  • asked a question related to Reliability Analysis
Question
8 answers
I did a reliability analysis on my current project using SPSS version 20. Most of the coefficients I am getting are between 0.50 and 0.66, even after item deletion. Can I say my items are reliable with the above findings? If not, please advise.
Relevant answer
Answer
Taber, K. S. (2018). The use of Cronbach’s alpha when developing and reporting research instruments in science education. Research in Science Education, 48(6), 1273-1296.
  • asked a question related to Reliability Analysis
Question
8 answers
I am working on a design optimisation problem. I would like to ask: for a problem involving uncertainty, should design optimisation under uncertainty techniques be used for the design optimisation?
  • asked a question related to Reliability Analysis
Question
16 answers
I used Neper to generate the tessellation file and the meshing file, and assigned the crystal orientations. How do I import these files into ABAQUS for crystal plasticity finite element analysis (CPFEA/CPFEM)?
Relevant answer
Answer
Dear Sofia Yassir,
No, as far as I know, but generalizing my code for non-columnar grains should be possible, but not easy.
Neper will provide surfaces with corresponding vertices to define grain boundaries and you need to do the corresponding partition in Abaqus.
You can try with the partition using "n-sided patch". That may work.
Good luck.
Best Regards,
Nicolò Grilli
National University of Singapore
  • asked a question related to Reliability Analysis
Question
2 answers
Hi all, I am conducting a study on flexible work arrangements. Section 1 consists of 7 questions: for Q1-Q3 the answer is YES/NO, Q4 asks you to rank the given answers, and Q5-Q7 are on a Likert scale. This is where the problem comes in. How do I carry out a reliability analysis on Section 1?
Relevant answer
Answer
Dear Rafel,
If you mean regression/correlation, Minitab will work just fine, but it is usually done on each question response bank independently.
If you mean the "reliability" of the questionnaire, that is a different type of study, requiring test groups and sampling. It SHOULD have been done already if this is a survey in use. Minitab or other statistical programs or calculations could also be used to evaluate that data. Here are two references to get you started:
Best regards,
Steven
  • asked a question related to Reliability Analysis
Question
5 answers
Hello all,
I am trying to do an agreement analysis to verify how similar are the time-series measurements taken by two devices. Basically I have 2 curves representing values measured over time with each device, and I want to say how similar these measurements are.
I have other metrics in my analysis, but I was looking into the CMC (Kadaba, 1989) as a global metric. I know it is often used in the gait analysis literature for reliability analysis, where curves taken by the same measurement device, but on different days, are compared. This coefficient represents similarity between two curves, so I was considering using it as a metric of agreement between the two time-series measurements I have, one from each device. I was wondering if there is any statistical assumption behind the CMC that prevents me from doing that; I couldn't find much about it.
Thank you!
Relevant answer
Answer
Completely in agreement with the magnificent answer of the also magnificent researcher Dr. Pervaiz Iqbal; curiously, I was going to post something similar.
  • asked a question related to Reliability Analysis
Question
4 answers
Hello,
I am coding some metrics from different articles to run a meta-analysis and I had a simple question.
Let's say one of my variables of interest is brand loyalty. In some articles, brand loyalty is decomposed into two different variables (attitudinal loyalty and behavioral loyalty) with two sets of metrics: two different AVEs, CRs, alpha coefficients, means, and standard deviations.
I would like to summarize these two variables as a single one. How do I get the values of the AVE, CR, alpha, mean, and SD for the variable brand loyalty (the variable gathering attitudinal and behavioral loyalty)? Should I average the values given in the article?
Thanks in advance for your reply,
Best regards,
Kathleen
Relevant answer
Answer
Souza, A. C. D., Alexandre, N. M. C., & Guirardello, E. D. B. (2017). Psychometric properties in instruments evaluation of reliability and validity. Epidemiologia e Serviços de Saúde, 26, 649-659.
  • asked a question related to Reliability Analysis
Question
2 answers
Hi everybody,
I need to perform a reliability analysis on my ERP data. Specifically, I would like to estimate internal consistency through Spearman-Brown corrected split-half reliability. Could anybody help me with this? Do I need to use all the trials for each participant?
I'm not sure how to start the analysis: using single trials or averages?
I hope to get some answer here.
Thanks in advance.
Relevant answer
Answer
Not sure, but this paper could help:
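For what it's worth, a hedged sketch of the common odd/even variant in R; trial_list is a hypothetical list holding one numeric vector of single-trial ERP amplitudes per participant:

    # Per participant: mean amplitude over odd vs. even trials.
    halves <- t(sapply(trial_list, function(x) {
      c(odd  = mean(x[seq(1, length(x), by = 2)]),
        even = mean(x[seq(2, length(x), by = 2)]))
    }))

    r    <- cor(halves[, "odd"], halves[, "even"])  # correlate halves across participants
    r_sb <- 2 * r / (1 + r)                         # Spearman-Brown corrected reliability
    r_sb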
  • asked a question related to Reliability Analysis
Question
1 answer
Hi everyone, I am performing Sobol's sensitivity analysis and wondering if there is a way to set a threshold on the sensitivity index, so that parameters with a sensitivity index greater than the threshold are deemed sensitive.
Many thanks!
Relevant answer
Answer
Usually + or - 25%
  • asked a question related to Reliability Analysis
Question
1 answer
Hello
Dear all,
I am looking for a reference that considers the rebar diameter as a random variable (e.g., having a normal distribution with a standard deviation) in reliability analysis; however, I am not able to find any reference that treats rebar diameter as a random variable, similar to yield stress fy, etc.
Does anybody know any more information?
Regards,
Relevant answer
Answer
Look up probability density function. The rebar diameter is the diameter of a steel cylinder called rebar. Then look the whole thing up in a civil engineering handbook to see how to use it. Best, D. Booth
  • asked a question related to Reliability Analysis
Question
1 answer
Greeting!
I have performed a reliability analysis using Cronbach's alpha for my questionnaire and obtained a value of 0.507. There are 3 items to be deleted, as shown in SPSS.
May I know the maximum number of items I can delete from my questionnaire? I came across one forum stating that only 20% of the questions can be deleted from a questionnaire in order to preserve its content. However, I found no reference for this suggestion.
Please advise; thanks in advance!
Relevant answer
I believe there is no fixed amount for this.
What is recommended is that you not exclude so many items that you fail to assess fundamental elements of your latent trait. Delete one item at a time and observe how much the average correlation between the items (Cronbach's alpha) increases.
I also recommend using other reliability metrics, such as McDonald's omega.
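A minimal sketch of this item-by-item check with the psych package; the data frame items is a hypothetical set of questionnaire items:

    # 'Alpha if item dropped' plus McDonald's omega.
    library(psych)

    a <- alpha(items)
    a$alpha.drop               # Cronbach's alpha if each item were deleted
    omega(items, nfactors = 1) # single-factor McDonald's omega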
  • asked a question related to Reliability Analysis
Question
1 answer
I wish to know the difference between the BN and Markov model. In what type of problems one is better than other?
In case of reliability analysis of a power plant, where equipment failures are considered, which model should be used and why?
Thank You!
Relevant answer
Answer
Dear Sanchit Saran Agarwal , Here is the answer
BAYESIAN
A Bayesian network, Bayes network, belief network, Bayes(ian) model or probabilistic directed acyclic graphical model is a probabilistic graphical model (a type of statistical model) that represents a set of random variables and their conditional dependencies via a directed acyclic graph (DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases.
MARKOV
(Example of a Markov random field: each edge represents a dependency. In this example, A depends on B and D; B depends on A and D; D depends on A, B, and E; E depends on D and C; C depends on E.)
In the domain of physics and probability, a Markov random field (often abbreviated as MRF), Markov network or undirected graphical model is a set of random variables having a Markov property described by an undirected graph. In other words, a random field is said to be Markov random field if it satisfies Markov properties.
A Markov network or MRF is similar to a Bayesian network in its representation of dependencies; the differences being that Bayesian networks are directed and acyclic, whereas Markov networks are undirected and may be cyclic. Thus, a Markov network can represent certain dependencies that a Bayesian network cannot (such as cyclic dependencies); on the other hand, it can't represent certain dependencies that a Bayesian network can (such as induced dependencies). The underlying graph of a Markov random field may be finite or infinite.
  • asked a question related to Reliability Analysis
Question
1 answer
For a dynamic Bayesian network (DBN) with a warm spare gate having one primary and one back-up component:
If the primary component P is active in the first time slice, then its failure rate is lambda(P) and the failure rate of the backup component S1 is alpha*lambda(S1).
If the primary component P fails in the first time slice, then its failure rate is lambda(P) and the failure rate of the backup component S1 is lambda(S1).
My question is: the above are the conditional probabilities of the primary and backup components. In a DBN, a prior failure probability is also required. What will the prior failure probability of the backup component be? Will it be calculated using lambda(S1) or alpha*lambda(S1)?
Thank you
regards
Sanchit
  • asked a question related to Reliability Analysis
Question
1 answer
Dear all,
We conducted research on college students using the Maslach Burnout Inventory-Student Form of Schaufeli et al. (2002). As you all know, this scale consists of three factors, namely exhaustion, cynicism, and professional efficacy.
My question is about the internal consistency coefficient of the factor professional efficacy. The Cronbach's Alpha for this factor is .59 and split-half reliability coefficient is .61.
In our research we also measure general self-efficacy of the students.
Therefore, what should we do?
In my view, the best option is omitting the factor from the analysis, since we also measure general self-efficacy.
What do you think?
Thanks in advance.
Meryem
  • asked a question related to Reliability Analysis
Question
6 answers
I need ETAP software. Can anyone please share the link? As I am a new user of ETAP, I also need a user guide, please.
  • asked a question related to Reliability Analysis
Question
1 answer
I did an item reliability analysis and the Cronbach's alpha value is 0.956. The professor is saying, "this is too high, go and read what to do," without giving any hints. What should I do?
Relevant answer
Generally, an alpha greater than 0.95 does indicate problems. I would recommend checking a few aspects:
1. Is the questionnaire too long? Very long questionnaires have very high alphas. See whether all items are really needed to measure the latent variable or trait.
2. Is there redundancy in your items? Are there items that have different wording but measure a very similar characteristic of the latent trait? Again, deleting items can help.
3. Alpha reflects the average correlation between items, so it is important to check the correlation values between the items. Items with very high inter-correlations may be measuring the same characteristic of the latent trait. Checking the variance inflation factor can also help in identifying multicollinearity.
I hope these tips help you.
  • asked a question related to Reliability Analysis
Question
3 answers
Hi everyone, grad student in need of help!
I have distributed two surveys; they are very similar, but one was for teachers and one for students, as part of a needs assessment for e-learning. I wrote the survey to have variables assessing readiness, enthusiasm, and accessibility.
1) How do I properly assess the reliability of my surveys? The participation rate was low for one and okay for the other, which makes me wonder whether EFA is going to be effective. Alternatively, in SPSS you can run the reliability analysis and get Cronbach's alpha. How do EFA and reliability analysis differ?
Relevant answer
Metrics like Cronbach's alpha and McDonald's omega measure the internal consistency of questionnaires, but a more robust reliability measure is naturally obtained via factor analysis and SEM.
When the sample is small, traditional statistical methods are not recommended; consider other methods, like discourse analysis or content analysis, instead.
  • asked a question related to Reliability Analysis
Question
16 answers
I have 5 Likert scale questions in my questionnaire that seeks to measure Construct A. It has an acceptable Cronbach's Alpha value of above 0.7.
Does this mean that I am able to create (compute) a new variable; where I average the score of the 5 questions for each response to derive a score for Construct A for each respondent? I want to use this new variable (as a representative of Construct A) to conduct statistical tests with other variables.
Relevant answer
Answer
Yes, you may proceed further and create your variable.
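A one-line sketch in R; the data frame dat and item names a1-a5 are hypothetical:

    # Mean score across the five Construct A items per respondent.
    dat$constructA <- rowMeans(dat[, c("a1", "a2", "a3", "a4", "a5")], na.rm = TRUE)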
  • asked a question related to Reliability Analysis
Question
12 answers
Does anyone know whether there is any special formula for inspection or preventive maintenance intervals or scheduling when we apply artificial neural network models instead of conventional methods such as the Weibull distribution?
Thanks in advance
Relevant answer
Answer
I agree with Mohammad Asjad and Günter Becker. ANN is a prediction tool. If you have failure data for one or more pieces of equipment, you can use the Weibull distribution to analyze the failures.
With the Weibull distribution, you will be able to comment on, and create schedules for, the behavior of the equipment and even the system.
The cost factor is the most important criterion in such scheduling. To determine maintenance periods with respect to costs and failures, I suggest you take advantage of the relevant study below.
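Separately, a minimal sketch of the Weibull-fitting step in R, assuming a numeric vector ttf of positive times to failure (hypothetical name):

    # Maximum-likelihood Weibull fit: returns shape (beta) and scale (eta).
    library(MASS)
    fitdistr(ttf, densfun = "weibull")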
  • asked a question related to Reliability Analysis
Question
5 answers
Hi!
I am working on reliability analysis of grid-connected PV systems. I have no background in reliability analysis, so I want to start from scratch. Can someone please recommend books on reliability analysis of PV systems and wind energy systems? Research papers skip many things, so I want to start with a detailed study from books...
Thanks in advance...
  • asked a question related to Reliability Analysis
Question
4 answers
I am conducting a study where the Team Climate Inventory variable represents a second-order construct with four sub-dimensions (i.e. vision, task orientation, participative safety and support for innovation). 
How should I estimate a composite reliability score in this case with the use of Lisrel?
Thanks for all the hints!
Best,
Lukasz   
  • asked a question related to Reliability Analysis
Question
3 answers
Hi guys,
I might be publishing my Master's dissertation paper; my supervisor adjusted my data, and now the Cronbach's alphas of 2 of my constructs are 0.6.
I have been looking for a good journal article where the Cronbach's alpha values were 0.6, to see how they present it and to use it as a reference, but I cannot find one. I have a lot of articles stating that 0.6 to 0.7 is the lower level of acceptability; however, I have not found any articles using values below 0.7.
Any suggestions?
Relevant answer
Answer
The problem with a low alpha is that it will "attenuate" your ability to correlate these scales with other variables. More specifically, an alpha of .6 means that only 60% of the scale variance is reliable and the remaining 40% is random error, which is why most journals will not accept data with a low alpha.
  • asked a question related to Reliability Analysis
Question
3 answers
Hi
A school in Jordan is doing an impact study on its alumni. The variables are a list of traits and values (innovation, leadership, empathy, etc.). I'm responsible for preparing the questionnaire.
My methodology is:
1- For each value/trait, find an inventory or scale that measures it.
2- Choose three items from the inventory/scale.
3- Combine the three items from all the inventories/scales to create the new questionnaire (about 60 items).
I need an expert who can review the final questionnaire and give an approval and recommendations to improve the questionnaire.
Any volunteers?
Relevant answer
Answer
Do you mean someone has already reviewed it, or that it is done and needs to be reviewed? If it needs a review, you can send it to me. You're welcome.
  • asked a question related to Reliability Analysis
Question
3 answers
SOLVED!!! Don't see how I can delete this question?
I am testing a survey about personality types and self-disclosure on Instagram. I gathered 105 respondents and used the Mini-IPIP scale by Donnellan to measure the Big Five personality types. I have reverse-coded the items that were negative and double-checked with a PhD researcher, who confirmed I did it correctly. When running the reliability analysis for the mean of each variable, I get the results in the attached photo. I was told that this could be because some respondents were unreliable and clicked random answers, and that it could help to remove the outliers. So I computed the Mahalanobis distance in SPSS to identify the outliers (see attachment). I am not sure if I did it correctly, but from what I can gather, there are no outliers, since none are below .001? I am not sure now how to save my data and how to make it more reliable. I can go back and gather more respondents, but it's been hard to do so and I am running out of time. Please advise. Thank you in advance.
Relevant answer
Answer
Remove the poorly loading items and rerun the reliability test.
  • asked a question related to Reliability Analysis
Question
4 answers
It has been seen that the rating of an instrument mentioned in the operating manual differs from what is mentioned in the technical manual (not for all instruments, but a few). If we disregard typing errors, what are the actual reasons accounting for this difference?
Relevant answer
Dear Steven Cooke,
This question is related to aviation, where the airworthiness rating of some instruments was found to differ between the operating manual and the technical manual. The operating manual is used by the pilot/operator, whereas the technical manual is used by the maintenance engineering team. The rating of an instrument means the avionics component's operating scale with its tolerance limit. For more details, you can refer to the avionics instrument specifications issued by various aviation agencies.
Thank you.
  • asked a question related to Reliability Analysis
Question
12 answers
NFF (No Fault Found) is a major contributor to reduced operational availability, wasted resources, and increased maintenance costs for any aircraft in aviation. The likely causes are human factors, maintenance training, fault reporting, fault analysis, corrective maintenance, and procedures. However, mitigating these issues is a completely tedious process in which management skill alone can't achieve the desired results. So, what other parameters/technical factors need to be considered?
Relevant answer
Absolutely right, sir (@Russel King).
Thank you.
  • asked a question related to Reliability Analysis
Question
8 answers
Hi everyone,
I've conducted an EFA and ended up with 5 factors. A few of the items cross-load on 2-3 factors. I have already removed 10 items that either do not correlate or cross-load significantly.
I am fairly happy with the factors, however, the cross-loading items are confusing me and I have a few questions.
1. When calculating the total scores, means and Cronbach's alphas for each factor, do I include the items which cross load with other items?
2. When I present the final scale/solution, how do I present the cross-loading items?
3. There is one factor whose items all load negatively ('Lack of Support'); however, I have reversed the scoring so that it positively reflects the factor (Support). One item in this subscale cross-loads on another factor. How does this impact the scoring? Should I try to remove this item?
4. I started with a 37-item scale and I now have 27 items. How many items are too many to delete? At what point should I just accept it as an overall scale with a good Cronbach's alpha (.921) and say further research into factors and subscales is needed?
I am reluctant to delete the few cross-loading items I have remaining, as when they are removed from the analysis, the reliability score decreases for the individual factors and the overall scale.
This is my first time doing an EFA and so I would be very grateful for any advice or recommendations you may have.
Thank you.
Relevant answer
Answer
Hello Jessica,
Factor solutions can include variables that show salient affiliation with more than one factor (some personality measures are notorious for this type of structure). However, the concerns associated with cross-loading are usually: (a) the structure is more complex than Thurstone's idealized concept of "simple structure"; (b) it may make the task of characterizing what a factor represents more challenging; and (c) perhaps the variable isn't as well measured or defined as it could have been.
The answer to your question depends on your specific research aims as concerns this set of variables (and how these do or don't reflect the concerns listed above). If your goal is to derive the "cleanest" possible structure, then throwing out variables/items may be the way to go. Do recognize that the possible concern here is that you end up defining very restricted factors that may not fully represent the target constructs. As well, depending on your sample, it's possible that the resultant structure to choose may be overfitted to the sample and not generalize as well to other data sets.
In any event, if you elect to retain cross-loading items, then:
1. Yes, they are included in any representation of a factor (e.g., an estimated factor score, or just a summated score), or for score reliability estimates;
2. You present a factor structure matrix/table that shows all variables that you deem salient with each factor (factor pattern and factor inter-correlation matrices as well, if you used an oblique rotation);
3. If all loadings on a factor are negative, then you may reverse the sign and characterize the resultant variate as a reversed polarity construct estimate. (You may do the same with mixed sign loadings, as long as you reverse each variable's sign accordingly.)
4. The facetious answer is, if you get down to two items or fewer, you've likely gone too far! The more serious answer is, there is no way to predict in advance how many "keepers" there are from a preliminary set of variables that were constructed, identified, or adapted to tap one or more constructs. That's why people engage in EFA or CFA in the first place; to help identify what structures are supported by data and what structures are not.
The final note I would make is that Cronbach's alpha for an "overall" scale score might not be the best indicator, especially when you have identified multiple factors for that batch of variables. For individual factor scales, sure.
Good luck with your work.
  • asked a question related to Reliability Analysis
Question
1 answer
I have 92 respondents and 180 questions. I am using SPSS, and the software says "scale or part of scale has zero variance and will be bypassed". Can anyone help me?
Relevant answer
Answer
One likely way to get that error message is if everyone gave the same answer on one of your variables. So, you need to examine the distribution of each of the variables you are using in your scale.
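A quick way to find such items, sketched in R, assuming a data frame items holding the scale variables (hypothetical name):

    # Variance per item; zero-variance items trigger the SPSS message.
    item_vars <- sapply(items, var, na.rm = TRUE)
    names(items)[item_vars == 0]   # items where everyone gave the same answer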
  • asked a question related to Reliability Analysis
Question
14 answers
Dear colleagues,
has the calculation of McDonald's omega been implemented in SPSS 25 so far? I found some older threads concerning this question on RG but nothing in the recent past.
In case you know how to do this in SPSS or Mplus, I would be very grateful. I would kindly ask you not to suggest using R, because I am not familiar with the programme.
Thank you in advance and kind regards
Marcel Grieger
Relevant answer
Answer
Dear colleagues,
I don't know if this is still of help to you, but SPSS does have the possibility to calculate omega reliability.
There is a wonderful explanation of the installation of the extension, written by Prof. Andrew F. Hayes; on that page, the SPSS extension for omega calculation can be downloaded.
Hope this is useful.
Best regards
  • asked a question related to Reliability Analysis
Question
5 answers
I came across a situation where cross-sectional, survey-based research was done with a questionnaire that the team designed themselves,
and they finished the data collection, having covered the whole target sample.
The research team didn't do a pilot study.
When they wanted to start the analysis, they computed Cronbach's alpha to measure reliability.
- Cronbach's alpha happened to be 94% (showing excellent reliability).
- But they had only done face validity for the questionnaire, and didn't do anything else, like principal component analysis (PCA).
Q1: Can they just go with the flow and write in the methodology and results sections that they computed Cronbach's alpha and it showed a great result, etc.?
Q2: And can they say that the questionnaire is a valid questionnaire?
Relevant answer
Answer
It is correct that you should only present evidence of validity according to your objectives, which are based on the inferences from the scores obtained in your study. For evidence of internal structure, you should consider confirmatory factor analysis.
  • asked a question related to Reliability Analysis
Question
16 answers
Hi there,
My questionnaire consists of 18 MCQs, each with one correct answer and 3 incorrect answers. I'm measuring participant scores on the questionnaire before and after watching a video, in two separate groups.
From what I've read, Cronbach's alpha is used to test scaled data (i.e., Likert scales) for reliability, so I'm unsure whether it is appropriate for my questionnaire.
Can I also use it on my questionnaire, or is there an alternative more appropriate for my data?
If the answer is yes, do I analyse it in SPSS exactly as I would scaled data? I.e.: Analyze > Scale > Reliability Analysis, all questions into the 'Items' box, ticking the descriptive-statistics options 'Item', 'Scale', and 'Scale if item deleted', and 'Correlations' under inter-item options?
Thank you in advance!
David
Relevant answer
Answer
It is apparent from your question that you are interested in observing any changes in the variable of interest after administering the intervention (participant watching the video). This is an experimental design where you are in need of a scale to measure your variable of interest.
The question of the internal consistency of the questionnaire calls for a psychometric discussion, which is a different domain. If you are not evaluating the psychometric properties of your questionnaire following all the steps in scale validation, there is little point in checking Cronbach's alpha alone for the scale. Simply obtain the achieved scores (sum of correct responses) before and after the intervention and employ paired tests.
If the variable of interest demands measurement via validated tools, it is always advisable to use previously validated scales measuring the construct, if there are any. If there aren't any, then the question is: do you want to develop one? If yes, follow the rigorous validation process.
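A minimal sketch of the pre/post comparison in R, assuming paired vectors of summed scores pre and post (hypothetical names):

    # Paired t-test on pre- vs. post-intervention scores; use the
    # Wilcoxon signed-rank test if normality of the differences is doubtful.
    t.test(post, pre, paired = TRUE)
    wilcox.test(post, pre, paired = TRUE)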
  • asked a question related to Reliability Analysis
Question
17 answers
Do you have any experience with probabilistic software for structural reliability assessment? Any links?
  • asked a question related to Reliability Analysis
Question
3 answers
I am using answers to a questionnaire built from existing scales taken from scientific papers. One particular (set of) concepts is measured with 24 questions, which are divided across three different subscales by the original author who developed the questions. However, reliability analysis of these subscales (using the collected answers) shows that for two of the three subscales, Cronbach's alpha is lower than 0.7. Furthermore, all subscales contain one or two questions which, if removed, would increase Cronbach's alpha, although for two subscales the resulting Cronbach's alpha would still be lower than 0.7.
Is it acceptable to remove certain questions from the subscales, or should I continue to use the original subscales in this situation?
Thank you in advance.
Relevant answer
Answer
Hi Wk,
Existing questionnaires are not perfect and are appropriate to designed situations. Surely you may improve them by modifying questions. Especially these which are problematic in terms of collective answering. I may suggest to modify questions or add more appropriate instead of removed ones.
Best
Leszek
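If you do inspect the subscales before deciding, the "alpha if item deleted" diagnostic that SPSS reports is easy to reproduce; a minimal sketch with placeholder data:

```python
# Cronbach's alpha and alpha-if-item-deleted for one subscale
# (random placeholder data; substitute your n_respondents x n_items matrix).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of the sum score
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
items = rng.integers(1, 6, size=(100, 8)).astype(float)  # 8 Likert items, 100 answers

print(f"alpha = {cronbach_alpha(items):.3f}")
for j in range(items.shape[1]):
    reduced = np.delete(items, j, axis=1)
    print(f"alpha if item {j + 1} deleted = {cronbach_alpha(reduced):.3f}")
```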
  • asked a question related to Reliability Analysis
Question
11 answers
Dear collegues,
I performed a cross-cultural study using two questionnaires (66 and 12 items). The original version of these questionnaires was used in the first country, and the tools were also translated into the second country's language to be administered there. It was the first time the translated version was used for research purposes. The number of participants was 216 in the first country and 265 in the second. Is it required to perform confirmatory factor analysis (for the two linguistic versions separately), or is it enough to report internal consistency coefficients in this particular publication? What am I actually supposed to do to fulfil the required standards for reporting psychometric properties of translated questionnaires?
Thank you for your suggestions in advance.
Relevant answer
Answer
It would be advisable to run one CFA for each country and then a combined multi-group analysis testing measurement invariance to evaluate the differences.
  • asked a question related to Reliability Analysis
Question
8 answers
Hi guys,
Can anyone describe PCE in simple words, please? How can we find a PC basis and what is an appropriate sparse PC basis?
Thanks in advance
Relevant answer
Answer
PCE is a truncated sum of terms used to estimate the response of a dynamic system when uncertainties are involved, specifically in its design parameters. Imagine you have a dynamic system such as a mass and spring in which the mass follows a normal distribution. If you want to find the mean and standard deviation of your system's response, say the mass's acceleration, under such uncertainty, you can build a polynomial chaos expansion for your system and read the mean and standard deviation off it.
To build your PCE you need a set of basis functions whose type depends on the type of your random variables; here your mass is normally distributed, so based on the literature you should choose Hermite polynomials as your basis functions.
There are plenty of papers out there on this topic that explain this AWESOME tool in detail.
Good luck.
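To make this concrete, here is a minimal one-dimensional sketch; the response function and parameter values are illustrative assumptions, not from the thread. The response Y = f(m) of a system with normally distributed mass m is expanded in probabilists' Hermite polynomials, the coefficients are computed by Gauss-Hermite quadrature, and the mean and standard deviation are read directly off the coefficients (mean = c_0, variance = sum over k >= 1 of c_k^2 * k!).

```python
import numpy as np
from math import factorial, sqrt, pi
from numpy.polynomial.hermite_e import hermegauss, hermeval

def response(m):
    return 1.0 / m                     # e.g., acceleration a = F/m with unit force

mu, sigma, order, n_quad = 2.0, 0.2, 4, 20
z, w = hermegauss(n_quad)              # nodes/weights for weight exp(-z^2/2)

# Coefficients c_k = E[f(mu + sigma*Z) * He_k(Z)] / k!
c = [np.sum(w * response(mu + sigma * z) * hermeval(z, [0] * k + [1]))
     / (sqrt(2 * pi) * factorial(k)) for k in range(order + 1)]

mean = c[0]                                              # E[Y] = c_0
std = sqrt(sum(ck**2 * factorial(k) for k, ck in enumerate(c) if k > 0))
print(f"PCE:         mean = {mean:.5f}, std = {std:.5f}")

mc = response(np.random.default_rng(0).normal(mu, sigma, 200_000))
print(f"Monte Carlo: mean = {mc.mean():.5f}, std = {mc.std():.5f}")
```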
  • asked a question related to Reliability Analysis
Question
3 answers
Hi everyone,
I'm trying to determine which test to run to assess the accuracy of a model that's classifying vegetation. I have ground-truth values and the values that the model has produced. I've considered Pearson Correlation and Intra-Class Correlation, though there are many tests, so I'm stumped on which to decide on. I've seen past literature using Pearson Correlation though my data aren't normal even with a log transformation.
Thanks much!
Relevant answer
Answer
Hello Charles,
Can you please explain your objective a bit more clearly? It is not clear whether you want to assess the accuracy of the classification method or to run goodness-of-fit tests to see whether the model lacks something. If it is the former (as it seems from the way you framed the question), a confusion matrix might be most suitable for understanding the results of your algorithm. There are several other measures of the performance of a clustering/classification algorithm, Cohen's kappa, the purity index, and the Rand index being some examples. Each one has its own advantages and disadvantages. I strongly recommend reading about them and deciding based on the type of dataset and problem you're working on. Hope that helps!
Thanks.
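For instance, a minimal scikit-learn sketch (the class labels are placeholders) computing the confusion matrix together with overall accuracy and Cohen's kappa:

```python
from sklearn.metrics import confusion_matrix, accuracy_score, cohen_kappa_score

y_true = ["grass", "shrub", "tree", "tree", "grass", "shrub", "tree", "grass"]
y_pred = ["grass", "shrub", "tree", "shrub", "grass", "tree", "tree", "grass"]

# Rows = ground truth, columns = model predictions
print(confusion_matrix(y_true, y_pred, labels=["grass", "shrub", "tree"]))
print(f"accuracy = {accuracy_score(y_true, y_pred):.3f}")
print(f"kappa    = {cohen_kappa_score(y_true, y_pred):.3f}")   # chance-corrected
```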
  • asked a question related to Reliability Analysis
Question
4 answers
I'm currently investigating how to apply RCM with preventive maintenance to a truck fleet transporting fuel. My company does not have a list of functional failures or failure modes, but according to some authors, such as Dhillon, Pistalleri, or Dixon, generic failure databases exist. I searched for this kind of data on Google and Google Scholar but didn't find anything. So I'm asking for your help: do you know of any public generic failure database that can give my team a base of information to improve our investigation?
PS: Sorry for any spelling or semantic errors. I'm not a native English speaker, and almost all material on this topic is in English.
Thanks,
Relevant answer
Answer
Dear Dr. Prince Capa,
have you considered Google's new service "Dataset Search" (https://toolbox.google.com/datasetsearch) as it proves to be a useful starting point for spotting appropriate datasets for your individual research demands?
You can find some data set sources at the following link:
and I suggest you have a look at the following documents, in PDF format:
- Optimising pro-active maintenance decisions for a multi-component system, by J.A.J. Hoeks (2012)
- Reliability modelling for maintenance scheduling of mobile mining equipment, by Raymond Summit and David Halomoan (2015)
- On Data-Driven Predictive Maintenance of Heavy Vehicles, by Andreas Falkoven (2017)
- Maintainability analysis of mining trucks with data analytics, by Abdulgani Kahraman (2018)
Enjoy reading and best regards, Pierluigi Traverso
  • asked a question related to Reliability Analysis
Question
14 answers
COVID-19 is affecting all kinds of human activities, and research is not exempt. Many ongoing research studies, even when not formally paused because of COVID-19, are disrupted: patient recruitment cannot continue, follow-up visits do not stick to schedule, intervention procedures may be delayed, and blood test monitoring is postponed.
I would expect a higher loss-to-follow-up rate during this period, which would affect the reliability of the research. Even after COVID-19, will subjects recruited now differ from those recruited before?
What do you think?
Relevant answer
Answer
I am continuing my activities during this mandatory vacation, but from a home office.
Wishing you all a healthy life.
  • asked a question related to Reliability Analysis
Question
3 answers
Hi,
My main analysis is on an intention to treat dataset, although I am looking at the per-protocol dataset to confirm if there were any differences. When running Cronbach's Alpha on my many scales, should I run it on the ITT dataset, the PP data, or both?
Thanks,
Max
  • asked a question related to Reliability Analysis
Question
8 answers
Update
I have a scale (12 items)
I go to Analyze -> Scale -> Reliability Analysis and get my Cronbach's alpha (0.5).
BUT 2 of my items are reverse-scored («inverse»). If I recode these two items so that they are no longer reversed, I get alpha = 0.8.
Am I right that I should recode these items before computing Cronbach's alpha?
written earlier
I conducted a study (correlational design).
Among other instruments, I used 2 psychological tests that were adapted by another author according to all the rules.
And I run into problems:
Situation 1 (solved)
My first test (14 items) has 2 subscales. In the Ukrainian adaptation, Cronbach's alpha for the scales is 0.73 and 0.68. But when I computed Cronbach's alpha on my own data, I got 0.65 and 0.65.
Question 1: Should I compute correlations with this test or, perhaps, exclude it from the analysis?
Situation 2 (see update)
My second test is Zimbardo's Time Perspective Inventory (56 items). In the Ukrainian adaptation, four of the five scales have a Cronbach's alpha above 0.7; one scale is at 0.65.
But in my research only 3 scales are fine, with alphas above 0.7.
The other two scales have very low Cronbach's alphas: 0.55 and 0.49.
Question 2: Should I exclude these two low scales and compute correlations using only the 3 scales whose Cronbach's alpha is above 0.7?
PS: N=336 in my study
Relevant answer
Answer
Oleksandra Shatilova, whether you create or adopt a measurement tool, it must be valid, and Cronbach's alpha says nothing directly about validity. When you handle reverse-scored questions properly, Cronbach's alpha should rise, but that alone is not sufficient.
So I agree with Robert Trevethan's advice.
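For completeness, a minimal sketch of the recoding itself, assuming a 1-5 Likert scale (the data and the positions of the reverse-scored items are placeholders): a reversed item is recoded as new = max + min - old, after which alpha is recomputed.

```python
import numpy as np

def cronbach_alpha(items):
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(2)
data = rng.integers(1, 6, size=(336, 12)).astype(float)  # 336 respondents, 12 items
reverse_items = [4, 9]                                    # 0-based positions, hypothetical

print(f"alpha before recoding = {cronbach_alpha(data):.3f}")
data[:, reverse_items] = 6 - data[:, reverse_items]       # 1<->5, 2<->4, 3 stays 3
print(f"alpha after recoding  = {cronbach_alpha(data):.3f}")
```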
  • asked a question related to Reliability Analysis
Question
1 answer
In general, an approximation to the probability density function of the performance function can be obtained by the moment method. Is there any other method?
Relevant answer
Answer
I suggest that you take a look at the following site:
This may be helpful.
Regards.
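As a concrete baseline, here is a minimal sketch of the simplest two-moment version of the moment method (the limit state and its input distributions are assumed for illustration): estimate the mean and variance of the performance function by Monte Carlo and match a normal density to them. Higher-order versions additionally match skewness and kurtosis (e.g., via the Pearson or Johnson families); kernel density estimation, maximum entropy, and saddlepoint approximations are other routes to the PDF.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
R = rng.normal(250.0, 25.0, 1_000_000)      # resistance (assumed)
S = rng.normal(150.0, 30.0, 1_000_000)      # load effect (assumed)
g = R - S                                   # performance function: failure when g < 0

mu, sd = g.mean(), g.std(ddof=1)
approx = stats.norm(mu, sd)                 # two-moment (normal) approximation

print(f"beta = mu/sigma = {mu / sd:.3f}")
print(f"Pf (moment approx.) = {approx.cdf(0.0):.2e}")
print(f"Pf (Monte Carlo)    = {(g < 0).mean():.2e}")
```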
  • asked a question related to Reliability Analysis
Question
4 answers
Hi!
Currently I am analyzing my data and I've got some results that I don't know what to do with. I tested whipped cream on whipping time at three different moments: day 0, day 1 and day 2 (three different groups). I therefore used a one-way ANOVA to test whether there is a difference between the group means. This test is significant; however, when I use a post hoc test to see which groups differ, the results are all non-significant. The variances are equal, so I used the Tukey test (but every other test available in my program gives the same non-significant results).
I think this is because the ANOVA may give a type I error (an incorrect rejection of the H0 hypothesis that there's no difference between the groups) and the post hoc test can do a more reliable analysis between the groups, but I don't know exactly how it works.
Does somebody know how to draw a clear conclusion from these results? I would be very grateful for any help you can provide!
Relevant answer
Answer
In order to support your explanation, I think you must first be sure that the ANOVA meets its assumptions; that guarantees the validity of the inferences in your analysis. If it does, then to support your conclusions you should calculate the power of your ANOVA. This will give you an objective basis for drawing conclusions from your analysis.
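For reference, a minimal sketch of the whole sequence with placeholder data (scipy for the one-way ANOVA, statsmodels for Tukey's HSD), mirroring the day 0 / day 1 / day 2 design:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(4)
day0 = rng.normal(120, 10, 12)     # whipping times in seconds, hypothetical values
day1 = rng.normal(126, 10, 12)
day2 = rng.normal(131, 10, 12)

F, p = stats.f_oneway(day0, day1, day2)
print(f"ANOVA: F = {F:.3f}, p = {p:.4f}")

times = np.concatenate([day0, day1, day2])
groups = ["day0"] * 12 + ["day1"] * 12 + ["day2"] * 12
print(pairwise_tukeyhsd(times, groups, alpha=0.05))   # pairwise comparisons
```

Note that a barely significant omnibus F combined with all-non-significant Tukey pairs is possible, because Tukey's HSD controls the family-wise error rate and is therefore more conservative for any single pair.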
  • asked a question related to Reliability Analysis
Question
3 answers
A correct interpretation of reliability analysis is decisive for researchers and industrial developers in refining their designs and in preparing proper maintenance schedules and safety analyses. However, I still see that many designers prefer to use classical safety factors instead of reliability analysis techniques. What is your sense of this?
For example, imagine that you are going to buy a bearing and I tell you this bearing's reliability is 94% for an expected life of 5 years. It means that if you test 100 bearings under normal operation, about 6 of them should fail within 5 years. Does this analysis make sense to you in your research and development?
And if the answer is yes, how do you use the outcome of reliability analysis in your research area? The answer is important for me because I am going to start developing commercial software for reliability analysis, and it is important to see what experts expect from reliability analysis methods.
Thanks,
Sajad
Relevant answer
Answer
Hamzeh Soltanali thanks, Hamzeh. By classical safety factors I mean over-design, instead of taking reliability analysis into account.
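As a worked illustration of the bearing example, under an assumed Weibull life model (the shape parameter is chosen arbitrarily; the scale is solved so that R(5 years) = 0.94):

```python
from math import exp, log

beta = 1.5                                  # assumed Weibull shape parameter
t = 5.0                                     # years
eta = t / (-log(0.94)) ** (1 / beta)        # scale solved from R(t) = exp(-(t/eta)^beta)

R = exp(-(t / eta) ** beta)
print(f"eta = {eta:.1f} years, R(5) = {R:.3f}")
print(f"expected failures among 100 bearings by year 5: {100 * (1 - R):.1f}")
```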
  • asked a question related to Reliability Analysis
Question
4 answers
I am evaluating the reliability and availability of a hydropower plant using dynamic fault tree gates. To evaluate the top event probability, a Dynamic Bayesian Network (DBN) is used. I am unable to figure out how many time slices I should consider for my network. Also, should my time slices be 1 month / 6 months / 1 year, or 1 year / 2 years / 5 years?
Also, should all the power plant components with static and dynamic gates be represented with different time slices in the DBN, or should only the components under dynamic gates be represented with time slices?
Relevant answer
Answer
Thank you Sir.
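For readers weighing the same choice, a quick way to see the trade-off (assuming exponentially distributed failure times and a hypothetical failure rate): the slice length dt fixes the per-slice transition probability 1 - exp(-lambda*dt) and, for a given planning horizon, the number of slices the network must carry.

```python
from math import exp

lam = 0.05          # failures per year, hypothetical component
horizon = 20.0      # years of plant life to cover

for dt in (1 / 12, 0.5, 1.0, 5.0):             # candidate slice lengths in years
    p_slice = 1 - exp(-lam * dt)               # P(fail within one slice | working)
    n_slices = round(horizon / dt)
    p_horizon = 1 - (1 - p_slice) ** n_slices  # equals 1 - exp(-lam * horizon)
    print(f"dt = {dt:5.3f} y: {n_slices:4d} slices, "
          f"p/slice = {p_slice:.4f}, P(fail by year {horizon:.0f}) = {p_horizon:.4f}")
```

In this exponential setting the cumulative failure probability is insensitive to dt, so the slice length is driven mainly by how finely the sequence-dependent behaviour behind the dynamic gates must be resolved and by how large a network the inference engine can handle.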
  • asked a question related to Reliability Analysis
Question
6 answers
I conducted a diary study. 3 independent judges analysed the data (with thematic analysis), and after joint deliberation we now have 6 categories.
Do I need to do confirmatory factor analysis?
Thanks
Relevant answer
Answer
Yağmur Rumeli, confirmatory factor analyses will not help you unless you have quantitative data to analyze. You also need to have a hypothesized underlying factor structure, which is essentially a measurement model. You don't have a measurement model in these types of studies. Attached in my response is a PDF document that explains the assumptions and process of confirmatory factor analyses.
If you want to increase the validity of a qualitative research project such as this, then two of the most common methods are member checking and inter-rater reliability.
Inter-rater reliability will tell you the extent to which the 3 independent judges agreed on the categories. It helps to determine whether or not your themes can be replicated within the study. I always recommend doing this when you have multiple judges, as it's a fairly easy way to see how the coding scheme worked.
Member checking involves going back to the participants and ensuring that the themes that you pulled and the interpretations that you made accurately reflect what the participants were trying to convey. It's essentially an exit interview from the project.
I recommend that you do inter-rater reliability, as you should have the necessary data at your disposal. If you have the opportunity to do member checking, then that will add even more validity to the study, but that requires that you have the capacity to re-contact the participants.
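For the computation itself, a minimal statsmodels sketch (the category codes are placeholders): Fleiss' kappa generalizes Cohen's kappa to three or more judges.

```python
import numpy as np
from statsmodels.stats import inter_rater as irr

rng = np.random.default_rng(5)
ratings = rng.integers(0, 6, size=(50, 3))   # 50 diary entries x 3 judges, 6 categories

table, _ = irr.aggregate_raters(ratings)     # entries x categories count table
print(f"Fleiss' kappa = {irr.fleiss_kappa(table, method='fleiss'):.3f}")
```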
Here is a link explaining how inter-rater reliability works:
Here is a link to some information about member checking:
  • asked a question related to Reliability Analysis
Question
6 answers
I am a Master's student doing a Master's thesis related to coordination in construction. My idea is to rank 59 coordination factors by importance and by how time-consuming they are, based on questionnaire survey data. To measure the most important factors I will use the Relative Importance Index method; which method would be suitable for measuring how time-consuming the factors are?
Another question: I will use reliability analysis and descriptive statistics as the general analysis; what other types of analysis would also be suitable?
Relevant answer
Answer
Hello Kaung,
I think that ranking the times, based on mean or median values, would suffice for your purposes.
Good luck with your work.
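For the importance ranking mentioned in the question, the Relative Importance Index is conventionally computed as RII = sum(W) / (A * N), where W is the rating given to a factor, A is the highest point of the rating scale, and N is the number of respondents. A minimal sketch with placeholder responses (a mean- or median-based ranking of time consumption would look the same):

```python
import numpy as np

rng = np.random.default_rng(6)
A, N, n_factors = 5, 80, 59                             # 1-5 scale, 80 respondents
ratings = rng.integers(1, A + 1, size=(N, n_factors))   # importance ratings

rii = ratings.sum(axis=0) / (A * N)                     # one index per factor, in (0, 1]
order = np.argsort(rii)[::-1]                           # most important first
for rank, j in enumerate(order[:5], start=1):
    print(f"rank {rank}: factor {j + 1}, RII = {rii[j]:.3f}")
```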
  • asked a question related to Reliability Analysis
Question
3 answers
Hello researchers,
I have a question regarding the switching method for selecting standby units for operation in a complex system, for reliability analysis. Is there an appropriate method for selecting a standby unit?
  • asked a question related to Reliability Analysis
Question
3 answers
Hi
Historical data are used to forecast the number of functional failures in passenger trains. However, there is always a difference between the forecast and the actually observed data. I am wondering which technique or approach is suitable for minimizing the forecast error in the case of railway data.
I shall be grateful if you can share a useful article or case study.
Relevant answer
Answer
Mohammed El Genidy many thanks, I will definitely read your paper. I hope it will help me.
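One practical way to choose among forecasting approaches is to hold out the most recent data and compare error metrics across candidate models. A minimal sketch on synthetic monthly failure counts (a seasonal-naive baseline vs. Holt-Winters exponential smoothing from statsmodels):

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(7)
months = 60
season = 10 + 5 * np.sin(2 * np.pi * np.arange(months) / 12)
failures = rng.poisson(season + 0.05 * np.arange(months))   # monthly failure counts

train, test = failures[:-12], failures[-12:]                # hold out the last year
hw = ExponentialSmoothing(train, trend="add", seasonal="add",
                          seasonal_periods=12).fit()
candidates = {"Holt-Winters": hw.forecast(12),
              "seasonal naive": train[-12:]}                # repeat last year as-is

for name, fc in candidates.items():
    mae = np.mean(np.abs(test - fc))
    rmse = np.sqrt(np.mean((test - fc) ** 2))
    print(f"{name:14s}: MAE = {mae:.2f}, RMSE = {rmse:.2f}")
```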
  • asked a question related to Reliability Analysis
Question
2 answers
Hi! In my bachelor thesis I am using a new scale (12 items) for sexual orientation, and the reliability analysis resulted in a Cronbach's alpha of .75. After consulting the inter-item correlations, I decided, due to high correlations, high alphas, and the content, to aggregate items, which left me with only 6 items. But now I am left with an alpha of only .52. I could exclude one of the aggregated items, which would lead to an alpha of .63, but that would exclude "attraction towards men" altogether, which doesn't appear to be the most reasonable course.
How do I proceed? Is it valid not to do the aggregation and state why in my thesis?
Relevant answer
Answer