Questions related to Reliability Analysis
I'm concerned about whether, when entering items for reliability analysis (Cronbach's alpha), the original items are replaced with the revised items or not.
Guidance from experts will be highly appreciated.
I have a question regarding moderation effect.
I am testing a model with one IV (A) and one DV (B), and I want to test the moderating effect of M on this path.
Is it necessary to investigate the reliability and validity of the cross construct (B*M)?
Or do I only have to investigate reliability and validity for the A construct and the B construct?
Help from one of the PLS-Experts in this forum would be highly appreciated!
To protect safety-critical systems against soft errors (induced by radiation), we usually use redundancy-based fault tolerance techniques.
Recently, to cut down the unacceptable overheads imposed by full redundancy, we protect only the most critical parts of the system, i.e., selective fault tolerance. To identify such parts, we can use fault injection.
Two fault-injection-based methodologies are widely presented in the literature for improving a system's fault tolerance: reliability assessment and vulnerability assessment. I wonder, what is the main difference between these two concepts?
I am trying to do reliability analysis for short RC columns. I am referring to the paper "Reliability Analysis of Eccentrically Loaded Columns" by Maria M. Szerszen, Aleksander Szwed and Andrzej S. Nowak.
At the end, based on a plot of 'strain' vs. 'strength reduction factor', they propose new values of the strength reduction factor as a function of strain.
Two models have been proposed: the dotted line is for all the points except the black ones,
while the solid line is for the black points.
The black points depict reinforcement ratios < 2;
the green, blue and red points show reinforcement ratios greater than or equal to 2.
My question is how they fitted these two lines, or how they measured the transition zone from the given scatter plot.
I have conducted an EFA on three items, and all items load on one factor. I then ran a reliability analysis with the three variables to ensure internal reliability using Cronbach's alpha.
My question: Should I run a reliability analysis before or after the EFA?
Does the order really matter in this case?
Thank you in advance!
Trust you are doing great. I need a favor from all of you. My colleague and I are working on the stability of an iron and steel manufacturing plant, and we are looking for a detailed process flowsheet and some historical data of the plant, such as maintenance and failure data for at least the past 5 years. Could anyone assist with this data or point me to a platform where I can get it? During my university days, my professor once told us there is a website which publishes industrial data on process control challenges. I am wondering if anyone can guide me to a similar website that publishes historical data of various process plants.
Looking forward to hearing from you all.
I've got a question regarding within-subject experiments, in which two or more variants of a prototype (e.g., a chatbot) are evaluated with respect to different constructs, i.e., classic A/B-testing experiments of different design options. For both versions, the same items are used for comparability.
Before the final data analysis, I plan to perform tests for validity, reliability and factor analysis. Does anyone know if I need to calculate the corresponding criteria (e.g., Cronbach's alpha, factor loadings, KMO values) for both versions separately, or only once aggregated for the respective constructs? And how would I proceed with the exclusion of items? Especially when there are a lot of control conditions, it might be difficult to decide whether to exclude an item if it is below a certain criterion.
In reviewing the literature of papers with a similar experiment design, I couldn't identify a consistent approach so far.
Thank you very much for your help! If anyone has any recommendations for tools or tutorials, I would also appreciate it as well.
Researchers in the social sciences have to report some measure of reliability. Standard statistics packages provide functions to calculate (Cronbach's) alpha or procedures to estimate (McDonald's) omega in a straightforward way. However, things become a bit more complicated when your data have a nested structure. For instance, in experience sampling research (ESM), researchers usually have self-reports or observations nested in persons. In this case, Geldhof et al. (2014) suggest that reliability be estimated for each level of analysis separately. Although this is easy to do with commercial packages like Mplus, R users face some challenges. To the best of my knowledge, most multilevel packages in R do not provide a function to estimate reliability at the within- vs. the between-person level of analysis (e.g., misty or multilevel).
So far, I have been using a tool created by Francis Huang (2016), which works fine for alpha. However, more and more researchers prefer (McDonald's) omega instead (e.g., Hayes & Coutts, 2020).
After working with workarounds for years, I accidentally found that the R package semTools provides a function to estimate multilevel alpha, different variants of omega, and average variance extracted for multilevel data. I would like to use this post to share this with anyone struggling with the estimation of multilevel reliability in R.
If you find this post helpful, feel free to let me know.
Bliese, P. (n.d.). multilevel: Multilevel Functions [Computer software]. Comprehensive R Archive Network (CRAN). https://CRAN.R-project.org/package=multilevel
Geldhof, G. J., Preacher, K. J., & Zyphur, M. J. (2014). Reliability estimation in a multilevel confirmatory factor analysis framework. Psychological Methods, 19(1), 72–91. https://doi.org/10.1037/a0032138
Hayes, A. F., & Coutts, J. J. (2020). Use Omega Rather than Cronbach's Alpha for Estimating Reliability. But…. Communication Methods and Measures, 14(1), 1–24. https://doi.org/10.1080/19312458.2020.1718629
Huang, F. L. (2016). Conducting multilevel confirmatory factor analysis using R. http://faculty.missouri.edu/huangf/data/mcfa/MCFAinRHUANG.pdf
Yanagida, T. (2020). misty: Miscellaneous Functions "T. Yanagida" (Version 0.3.2) [Computer software]. https://CRAN.R-project.org/package=misty
Every tutorial and guide I can find for scale analyses in SPSS is specifically about Likert scales. My study does not use a Likert scale; it uses a 0-100 scale instead.
What reliability analysis is best suited to such a scale?
I am a DBA student conducting a study about "Factors Impacting Employee Turnover in the Medical Device Industry in the UAE."
My research model consists of 7 variables, out of which:
- 5 variables are measured using multi-item scales adapted from the literature, e.g., Perceived External Prestige (6 items), Location (4 items), Flextime (4 items), etc.
- 2 are nominal variables
I want to conduct a reliability analysis using SPSS, and I thought I need to do the following:
- Conduct a reliability test using SPSS (Cronbach's alpha) for each construct (except the nominal variables)
- Deal with low alpha coefficients (how to do so?)
- Conduct an exploratory factor analysis to test for discriminant validity
Am I thinking right? Attached are my results so far.
Hello, I have a questionnaire that consists of five sections. The first section (related to drivers' knowledge) has 10 items with no Likert scale, and the participants have to choose from two, three or more specific options. The second section (related to drivers' habits) has 9 items, with the first five items having a six-point Likert scale, while in the remaining items the respondents have to choose one answer from four specific options. The third section (related to drivers' behavioral intentions) has 10 items, each following a six-point Likert scale. The fourth section (related to drivers' psychological conditions) has 9 items with no Likert scale, and the participants have to choose from three, four or more specific options. Finally, the last section consists of questions regarding drivers' profiles (age, gender, education, driving experience, profession, frequency of driving through tunnels, etc.).
Now my question is: what kind of statistical tests or analyses can I perform here to investigate the relationship between the variables in the drivers' profile and the other sections/items? For instance, how can I analyze which group of drivers (in terms of age, gender, experience, etc.) is more knowledgeable (section 1) or adopts appropriate habits (section 2)?
I am also open to all kinds of suggestions and collaborations on this research.
P.S: I am attaching my questionnaire as a file. Hope it will help to understand my question and questionnaire better.
I am doing a reliability analysis of a motivation questionnaire on a sample of athletes in different sports, in order to check the reliability of the questionnaire's translation into another language.
It's an online assessment for which there are 14 learning objectives. I had 3 groups (Novice, intermediate, and experts) take the assessment that had 4 items for each of the 14 objectives. Ultimately I want the assessment to randomly select only 1 item for each learning objective (a 14-item assessment) from 3 possible items. What test(s) will help me choose the best 3 items for each learning objective? I already have data from 169 test-takers (77 novice / 55 intermediate / 37 experts).
Which method is more accurate and popular for testing the validity and reliability of my scales?
I did a three-variable data analysis with 19 items. For two of the variables I got a reliability of around 0.70, and for the remaining variable 0.347. When I removed two items from the scale with the low reliability, I got a value of around 0.60. So, is there a problem with removing items from a scale to increase the reliability results?
I want to learn the reliability coefficient of a scale I used in my study (an assignment for my experimental psychology class). I read about how to find Cronbach's alpha, and I can run a reliability analysis in SPSS to find it. But I also read that in order to run a reliability analysis, each item has to have a normal distribution, and my data are not normally distributed. Can I run a reliability analysis with non-normally distributed data? Is there an alternative to reliability analysis for non-normal distributions?
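For what it's worth, the alpha formula itself makes no distributional demand on the items: it is built only from item variances and the variance of the total score. A minimal sketch in Python, with invented data (not from any study mentioned here):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha from a list of item-score columns.

    items: list of k lists, each holding one item's scores
           for the same n respondents.
    """
    k = len(items)
    item_vars = sum(pvariance(col) for col in items)      # sum of item variances
    totals = [sum(scores) for scores in zip(*items)]      # each respondent's total
    total_var = pvariance(totals)                         # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Invented data: 3 items, 5 respondents
data = [[4, 3, 5, 2, 4],
        [5, 3, 4, 2, 5],
        [4, 2, 5, 3, 4]]
print(round(cronbach_alpha(data), 3))  # → 0.886
```

Note that nothing in the computation assumes normality; normality matters for some inferential procedures around alpha, not for the coefficient itself.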
I'm doing a split-half estimation on the following data:
trial one: mean = 5.12 (SD = 5.76)
trial two: mean = 7.62 (SD = 8.5)
trial three: mean = 8.57 (SD = 12.66)
trial four: mean = 8.11 (SD = 10.7)
(SD = standard deviation)
Where I'm creating two subset scores (from trials one & two, and from trials three & four; I realise this is not the usual odd/even split):
Subset 1 (t1 & t2): mean = 12.73 (SD = 11.47)
Subset 2 (t3 & t4): mean = 16.68 (SD = 17.92)
I'm then computing a correlation between these two subsets, after which I'm computing the reliability of this correlation using the Spearman-Brown formulation.
However, in the literature I've found, it all suggests that the data must meet a number of assumptions, specifically that the mean and variance of the subsets (and possibly the items of these subsets) must all be equivalent.
As one source states:
“the adequacy of the split-half approach once again rests on the assumption that the two halves are parallel tests. That is, the halves must have equal true scores and equal error variance. As we have discussed, if the assumptions of classical test theory and parallel tests are all true, then the two halves should have equal means and equal variances.”
Excerpt From: R. Michael Furr. “Psychometrics”. Apple Books.
My question is: must variances and means be equal for a split-half estimate of reliability? If so, how can equality be tested? And is there a guide to how similar the means may be (surely means and variances across subsets cannot be expected to be exactly 1:1 equal?!)?
Hello, I have a questionnaire that consists of four sections, with each section focusing on different variables.
First, each section has 9-10 items, with each item following a different scale. For instance, the first section has 10 items with no Likert scale, and the participants have to choose from two, three or more specific options. The second section has 9 items, with the first five items having a six-point Likert scale, while in the remaining items the respondents have to choose from four specific options. The third section has 10 items, each following a six-point Likert scale. The fourth section has 9 items with no Likert scale, and the participants have to choose from three, four or more specific options.
Second, in some of the items the respondents were also allowed to select multiple answers for the same item.
Now my question is: how do I calculate Cronbach's alpha for this questionnaire? If we cannot calculate Cronbach's alpha, what are the alternatives for finding the reliability and internal consistency of the questionnaire?
I would like to know which is the best way to analyse test-retest in non-normal data. If ICC is not recommended in those cases, which test should I choose?
What if the Cronbach's alpha of a scale (4 items) measuring a control variable is between .40 and .50 in your research, while the same scale received a Cronbach's alpha of .73 in the previous research it was taken from?
Do you have to make some adjustments to the scale, or can you use it as-is because previous research showed it is reliable?
What do you think?
I have a set of independent data whose final output (result) is in binary form (0 or 1). Which form of reliability analysis can be used for such datasets? I have seen the FOSM and AFOSM methods, but all of them are applicable to continuous data.
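One common fallback when the limit state is only observed as a 0/1 outcome is crude Monte Carlo estimation of the failure probability, with a binomial standard error on the estimate. A sketch with an invented binary model (the load/capacity distributions are illustrative assumptions, not from the question):

```python
import random
from math import sqrt

def mc_failure_probability(simulate, n=100_000, seed=1):
    """Estimate P(failure) when each run returns only 0 (safe) or 1 (fail)."""
    rng = random.Random(seed)
    failures = sum(simulate(rng) for _ in range(n))
    p = failures / n
    se = sqrt(p * (1 - p) / n)  # binomial standard error of the estimate
    return p, se

# Invented binary model: fails when a random load exceeds a random capacity
def simulate(rng):
    load = rng.gauss(5.0, 1.0)
    capacity = rng.gauss(8.0, 1.5)
    return 1 if load > capacity else 0

p, se = mc_failure_probability(simulate)
print(f"pf ≈ {p:.4f} ± {1.96 * se:.4f}")
```

Unlike FOSM/AFOSM, this needs no gradient of a continuous limit-state function, only repeated binary evaluations; the price is the large sample size required for small failure probabilities.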
How can optimization techniques like Genetic Algorithms and Particle Swarm Optimization be used in reliability analysis? Please give me an idea about it.
Recently, I read that we do not validate the questionnaire itself, but rather the scores obtained through the questionnaire. So are papers titled "Validation of the XXXXXX questionnaire" wrong?
I need to publish my research paper on the reliability analysis of an industrial system in an SCI journal of Q1/Q2 category urgently. Can anyone suggest a journal?
I did a reliability analysis on my current project using SPSS version 20. Most of the coefficients I am getting are between 0.5 and 0.66, even after item deletion. Can I say my items are reliable with these findings? If not, please advise.
I am working on a design optimisation problem. I would like to ask: for a problem with uncertainty, should design-optimisation-under-uncertainty techniques be used?
I used Neper to generate the tessellation file and the meshing file, and assigned the crystal orientations. How do I import these files into ABAQUS for crystal plasticity finite element analysis (CPFEA/CPFEM)?
Hi all, I am conducting a study on a flexible work arrangement. Section 1 consists of 7 questions: for Q1-Q3 the answer options are YES/NO, Q4 asks you to rank the given answers, and Q5-Q7 are on a Likert scale. Here is the problem: how do I carry out a reliability analysis on Section 1?
I am trying to do an agreement analysis to verify how similar are the time-series measurements taken by two devices. Basically I have 2 curves representing values measured over time with each device, and I want to say how similar these measurements are.
I have other metrics in my analysis, but I was looking into the CMC (Kadaba, 1989) as a global metric. I know it is often used in the gait analysis literature for reliability analysis, where curves taken by the same measurement device, but on different days, are compared. This coefficient represents the similarity between two curves, so I was considering using it as a metric of agreement between the two time-series measurements I have, one from each device. I was wondering if there is any statistical assumption behind the CMC that prevents me from doing that; I couldn't find much about it.
I am coding some metrics from different articles to run a meta-analysis and I had a simple question.
Let's say one of my variables of interest is brand loyalty. In some articles, brand loyalty is decomposed into two different variables (attitudinal loyalty and behavioral loyalty) with two different sets of metrics: two different AVEs, CRs, alpha coefficients, means and standard deviations.
I would like to summarize these two variables in a single one. Thus, how do I get the value of the AVE, CR, Alpha, Mean and SD for the variable Brand loyalty (which is the variable gathering attitudinal and behavioral loyalty)? Should I do the average of the values given in the article?
Thanks in advance for your reply,
I need to perform reliability analysis on my ERP data. Specifically, I would like to estimate internal consistency reliability through Spearman-Brown corrected split-half reliability. Could anybody help me with this? Do I need to use all the trials for each participant?
I'm not sure how to start the analysis: using single trials or averages?
I hope to get some answer here.
Thanks in advance.
Hi everyone, I am performing Sobol's sensitivity analysis and wondering if there is a way to set a threshold on the sensitivity index, so that parameters with a sensitivity index greater than the threshold are considered sensitive.
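Mechanically, applying such a threshold is just a post-processing step on the indices you have already computed. A sketch with invented index values and an arbitrary 0.05 cutoff (the cutoff choice itself is a judgment call, not a standard value):

```python
def screen_parameters(sobol_indices, threshold=0.05):
    """Split parameters into sensitive / insensitive by first-order Sobol index.

    sobol_indices: dict mapping parameter name -> first-order index S_i.
    The 0.05 cutoff is an arbitrary illustration, not a standard value.
    """
    sensitive = {k: v for k, v in sobol_indices.items() if v >= threshold}
    insensitive = {k: v for k, v in sobol_indices.items() if v < threshold}
    return sensitive, insensitive

# Invented indices from a prior Sobol analysis
S = {"k1": 0.42, "k2": 0.018, "k3": 0.11, "k4": 0.003}
sens, insens = screen_parameters(S)
print(sorted(sens))  # → ['k1', 'k3']
```

In practice, one would also check the confidence intervals on the estimated indices before declaring a parameter insensitive, since an index near the cutoff may be indistinguishable from it.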
I am looking for a reference that considers the rebar diameter as a random variable (e.g., having a normal distribution with a standard deviation) in reliability analysis; however, I am not able to find any reference treating the rebar diameter as a random variable in the way the yield stress fy, etc., are treated.
Does anybody know any more information?
I have performed a reliability analysis using Cronbach's alpha for my questionnaire and obtained a value of 0.507. There are 3 items that could be deleted, as shown in SPSS.
May I know the maximum number of items I can delete from my questionnaire? I came across one forum stating that only 20% of the questions can be deleted from a questionnaire in order to preserve its content; however, there is no reference for this suggestion.
Please advise; thanks in advance!
I wish to know the difference between Bayesian networks (BN) and Markov models. For what types of problems is one better than the other?
In the case of reliability analysis of a power plant, where equipment failures are considered, which model should be used and why?
For a dynamic Bayesian network (DBN) with a warm spare gate having one primary and one back-up component:
If the primary component P is active at the first time slice, then its failure rate is lambda(P) and the failure rate of the backup component S1 is alpha*lambda(S1).
If the primary component P fails at the first time slice, then its failure rate is lambda(P) and the failure rate of the backup component S1 is lambda(S1).
My question is: the above are the conditional probabilities of the primary and backup components. In a DBN, a prior failure probability is also required. What will the prior failure probability of the backup component be? Will it be calculated using lambda(S1) or alpha*lambda(S1)?
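On the usual reading of the warm-spare convention, the spare starts in standby, so its first-slice (prior) failure probability would be built from the dereated rate alpha*lambda(S1); this is one interpretation of the convention, not a definitive answer. A sketch assuming exponential lifetimes, with invented rates and an invented slice length dt:

```python
from math import exp

def slice_failure_prob(rate, dt):
    """P(component fails within one time slice), exponential lifetime."""
    return 1 - exp(-rate * dt)

lam_p, lam_s, alpha, dt = 1e-3, 1e-3, 0.2, 10.0  # invented values

prior_primary = slice_failure_prob(lam_p, dt)        # primary is active from the start
prior_spare = slice_failure_prob(alpha * lam_s, dt)  # spare starts dormant (dereated rate)
active_spare = slice_failure_prob(lam_s, dt)         # spare rate once the primary has failed

print(prior_spare < active_spare)  # dormant spare is less likely to fail per slice
```

The conditional probability table for later slices would then switch the spare from the dereated rate to the full rate, conditioned on the primary's state in the previous slice.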
We conducted research on college students using the Maslach Burnout Inventory-Student Form of Schaufeli et al. (2002). As you all know, this scale consists of three factors, namely exhaustion, cynicism and professional efficacy.
My question is about the internal consistency coefficient of the factor professional efficacy. The Cronbach's Alpha for this factor is .59 and split-half reliability coefficient is .61.
In our research we also measure general self-efficacy of the students.
Therefore, what should we do?
In my view, the best option is omitting this factor from the analysis, since we also measure general self-efficacy.
What do you think?
Thanks in advance.
Hi everyone, grad student in need of help!
I have distributed two surveys. They are very similar, but one was for teachers and one for students, as part of a needs assessment for e-learning. I wrote the surveys to have variables assessing readiness, enthusiasm, and accessibility.
1) How do I properly assess the reliability of my surveys? The participation rate was low for one and okay for the other, which makes me wonder whether or not EFA is going to be effective. Alternatively, in SPSS you can run the reliability analysis and get your Cronbach's alpha. How do EFA and reliability analysis differ?
I have 5 Likert-scale questions in my questionnaire that seek to measure Construct A. They have an acceptable Cronbach's alpha value of above 0.7.
Does this mean that I am able to create (compute) a new variable, where I average the scores of the 5 questions for each response to derive a score for Construct A for each respondent? I want to use this new variable (as a representative of Construct A) to conduct statistical tests with other variables.
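With an acceptable alpha, mean-averaging the items into a composite score is standard practice. A trivial sketch with invented responses (the 5-item, 1-5 Likert layout is assumed from the question):

```python
from statistics import mean

def composite_scores(responses):
    """Average each respondent's item scores into one construct score.

    responses: list of per-respondent lists, each with the 5 item answers.
    """
    return [mean(items) for items in responses]

# Invented answers from 3 respondents on the 5 Likert items
answers = [[4, 5, 4, 3, 4],
           [2, 2, 3, 2, 1],
           [5, 5, 4, 5, 5]]
print(composite_scores(answers))
```

The resulting per-respondent scores can then be used like any other continuous variable in subsequent tests, keeping in mind that the composite inherits the ordinal character of its items.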
I am working on reliability analysis of PV Grid connected systems. I have no Background of Reliability analysis. So I just want to start from scratch on reliability analysis. Can someone please recommend me books on Reliability analysis of PV systems and Wind energy systems. Research papers just skip many things so I want to start from detailed study of books...
Thanks in advance...
I am conducting a study where the Team Climate Inventory variable represents a second-order construct with four sub-dimensions (i.e. vision, task orientation, participative safety and support for innovation).
How should I estimate a composite reliability score in this case with the use of Lisrel?
Thanks for all the hints!
I might be publishing my Master's dissertation paper, and my supervisor adjusted my data; now the Cronbach's alpha of 2 of my constructs is 0.6.
I have been looking for a good journal article whose Cronbach's alpha values were 0.6, to see how they present it and also to use it as a reference, but I cannot find one. I have a lot of articles stating that 0.6 to 0.7 is the lower level of acceptability; however, I have not found any articles using values of less than 0.7.
A school in Jordan is doing an impact study on its alumni. The variables are a list of traits and values (innovation, leadership, empathy, etc…). I’m responsible for preparing the questionnaire.
My methodology is:
1- For each value/trait, find an inventory or scale that measures it.
2- Choose three items from the inventory/scale.
3- Combine the three items from all the inventories/scales to create the new questionnaire (about 60 items).
I need an expert who can review the final questionnaire and give an approval and recommendations to improve the questionnaire.
SOLVED!!! Don't see how I can delete this question?
I am testing a survey about personality types and self-disclosure on Instagram. I gathered 105 respondents and used the mini-IPIP scale by Donnellan to measure the Big Five personality types. I have reverse-coded the negatively worded items and double-checked with a PhD researcher, who confirmed I did it correctly. When running the reliability analysis for the mean of each variable, I get the results in the attached photo. I was told that this could be because some respondents were unreliable and clicked random answers, and that it could help to remove the outliers. So I computed the Mahalanobis distance in SPSS to identify the outliers (see attachment). I am not sure if I did it correctly, but from what I can gather there are no outliers, since none are below .001? I am not sure now how to save my data and make it more reliable. I can go back and gather more respondents, but it's been hard to do so and I am running out of time. Please advise. Thank you in advance.
It has been seen that the instrument rating mentioned in the operating manual differs from what is mentioned in the technical manual (not for all instruments, but for a few). If we disregard typing errors, what are the actual reasons accounting for this difference?
NFF (No Fault Found) is a major contributor to reduced operational availability, wasted resources and increased maintenance cost for any aircraft in aviation. The likely causes are human factors, maintenance training, fault reporting, fault analysis, corrective maintenance and procedures. However, mitigating these issues is a tedious process in which management skill alone can't achieve the desired results. So, what other parameters/technical factors need to be considered?
I've conducted an EFA and ended up with 5 factors. A few of the items are cross-loading over 2-3 factors. I have already removed 10 items that either do not correlate or cross-load significantly.
I am fairly happy with the factors, however, the cross-loading items are confusing me and I have a few questions.
1. When calculating the total scores, means and Cronbach's alphas for each factor, do I include the items which cross-load onto other factors?
2. When I present the final scale/solution, how do I present the cross-loading items?
3. There is one factor which is negatively predicted ('Lack of Support' [all items have a negative value]), however, I have changed the scoring so it positively predicts the factor (Support). There is one item in this subscale which cross-loads with another. How does this impact the scoring? Should I try to remove this item?
4. I started with a 37-item scale and I now have 27 items. How many items are too many to delete? At what point should I just accept it as an overall scale with a good Cronbach's alpha (.921) and say further research into factors and subscales is needed?
I am reluctant to delete the few cross-loading items I have remaining, as when they are removed from the analysis, the reliability score decreases for the individual factors and the overall scale.
This is my first time doing an EFA and so I would be very grateful for any advice or recommendations you may have.
I have 92 respondents and 180 questions. I am using SPSS, and the software says "scale or part of scale has zero variance and will be bypassed". Can anyone help me?
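That SPSS message usually means at least one item received the identical answer from every respondent, so its variance is zero and it cannot contribute to the analysis. Locating such items is straightforward; a sketch with invented data:

```python
def zero_variance_items(data):
    """Return names of items whose responses are identical across all cases.

    data: dict mapping item name -> list of responses.
    """
    return [name for name, values in data.items()
            if len(set(values)) <= 1]

# Invented responses from 4 cases on 3 items
data = {"q1": [3, 4, 2, 5],
        "q2": [1, 1, 1, 1],   # every respondent gave the same answer
        "q3": [2, 3, 3, 2]}
print(zero_variance_items(data))  # → ['q2']
```

Once identified, such items can be excluded from the reliability run (they carry no information for internal consistency anyway).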
Has the calculation of McDonald's omega been implemented in SPSS 25 so far? I found some older threads concerning this question on RG, but nothing from the recent past.
In case you know how to do this in SPSS or Mplus, I would be very grateful. I would kindly ask you not to suggest using R because I am not familiar with the programme.
Thank you in advance and kind regards
So I came across a situation where cross-sectional survey-based research was done with a questionnaire that the researchers designed themselves,
and they finished the data collection and covered the whole target sample.
The research team didn't do a pilot study.
When they wanted to start the analysis, they computed Cronbach's alpha to measure reliability.
- Cronbach's alpha happened to be 94% (showing excellent reliability).
- But they have only done face validity for the questionnaire, and didn't do anything else, like principal component analysis (PCA).
Q1: Can they just go with the flow and write in the methodology and results sections that they computed Cronbach's alpha and it showed a great result, etc.?
Q2: And can they say that the questionnaire is a valid questionnaire?
My questionnaire consists of 18 MCQs, each with one correct answer and 3 incorrect answers. I'm measuring participant scores on the questionnaire before and after watching a video in two separate groups.
From what I've read, Cronbach's alpha is used to test scaled data (i.e., Likert scales) for reliability, so I'm unsure as to whether it's appropriate for my questionnaire.
Can I also use it on my questionnaire, or is there an alternative more appropriate for my data?
If the answer is yes, do I analyse it exactly the same way in SPSS as I would scaled data? I.e., Analyze > Scale > Reliability Analysis, all questions into the 'Items' box, tick the descriptive-statistics options 'Item', 'Scale' and 'Scale if item deleted', and 'Correlations' in the inter-item options?
Thank you in advance!
I am using answers to a questionnaire in which existing scales from scientific papers are used. One particular (set of) concepts is measured with 24 questions, which are divided across three different subscales by the original author who developed the questions. However, reliability analysis of these subscales (using the collected answers) shows that for two of the three subscales, Cronbach's alpha is lower than 0.7. Furthermore, all subscales contain one or two questions which, if removed, would increase the Cronbach's alpha, although for two subscales, the resulting Cronbach's alpha would still be lower than 0.7.
Is it acceptable to remove certain questions from the subscales, or should I continue to use the original subscales in this situation?
Thank you in advance.
I performed a cross-cultural study using two questionnaires (66 and 12 items). The original version of these questionnaires was used in the first country, and the tools were also translated into the second country's language to be administered there. It was the first time the translated version was used for research purposes. The number of participants was 216 in the first country and 265 in the second. Is it required to perform confirmatory factor analysis (for the two linguistic versions separately), or is it enough to report internal consistency coefficients in this particular publication? What am I actually supposed to do to fulfill the required standards for reporting the psychometric properties of translated questionnaires?
Thank you for your suggestions in advance.
Can anyone describe polynomial chaos expansion (PCE) in simple words, please? How can we find a PC basis, and what is an appropriate sparse PC basis?
Thanks in advance
I'm trying to determine which test to run to assess the accuracy of a model that's classifying vegetation. I have ground-truth values and the values that the model has produced. I've considered Pearson Correlation and Intra-Class Correlation, though there are many tests, so I'm stumped on which to decide on. I've seen past literature using Pearson Correlation though my data aren't normal even with a log transformation.
I'm currently investigating how to apply RCM with preventive maintenance to a truck fleet for fuel transport. In my company there is no list of functional failures or failure modes, but according to some authors, like Dhillon, Pistalleri or Dixon, there are generic failure databases. I was searching for this kind of data on Google and Google Scholar but didn't find anything. So this is why I ask for your help: do you know of any public generic failure database that could give my team a base of information to improve our investigation?
P.S.: Sorry for any spelling or semantic errors. I'm not a native English speaker, and almost all of this kind of material is in English.
COVID-19 is affecting all kinds of human activities, and research is not exempt. Many ongoing research studies are now paused because of COVID-19: patient recruitment cannot continue, follow-up visits do not stick to schedule, intervention procedures may be delayed, and blood-test monitoring is postponed.
I would expect a higher loss-to-follow-up rate during this period, which would affect the reliability of research. Even after COVID-19, will the subjects recruited then differ from those recruited before?
What do you think?
My main analysis is on the intention-to-treat (ITT) dataset, although I am looking at the per-protocol (PP) dataset to confirm whether there were any differences. When running Cronbach's alpha on my many scales, should I run it on the ITT dataset, the PP dataset, or both?
I have a scale (12 items).
I go to Analyze -> Scale -> Reliability Analysis and get my Cronbach's alpha (0.5).
BUT 2 of my items are reverse-worded. If I recode these two items as if they were not reversed, I get alpha = 0.8.
Am I right that I should recode these items before computing Cronbach's alpha?
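Yes, reverse-worded items are routinely recoded before alpha is computed; on a 1-5 scale the recode is simply 6 minus the raw score. A tiny sketch, assuming 1-5 endpoints (which may differ for your scale):

```python
def reverse_code(scores, low=1, high=5):
    """Recode a reverse-worded Likert item: 1<->5, 2<->4, 3 stays 3."""
    return [low + high - x for x in scores]

item = [5, 4, 1, 2, 5]
print(reverse_code(item))  # → [1, 2, 5, 4, 1]
```

A jump from alpha = 0.5 to 0.8 after recoding is exactly the pattern one expects when reversed items were left uncorrected, since uncorrected items correlate negatively with the rest of the scale.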
I conducted a correlational study.
I used two psychological tests, both adapted by another author according to all the rules.
And I ran into problems:
My first test (14 items) has two subscales. In the Ukrainian adaptation, Cronbach's alpha for the scales is 0.73 and 0.68. But when I computed Cronbach's alpha on my own data, I got 0.65 and 0.65.
Question 1: Should I compute correlations with this test, or should I exclude it from the analysis?
Situation 2 (see update)
My second test is Zimbardo's Time Perspective Inventory (56 items). In the Ukrainian adaptation, four of the five scales have a Cronbach's alpha above 0.7; one scale is 0.65.
But in my data only three scales are fine, with alphas above 0.7.
Two scales have very low Cronbach's alphas: 0.55 and 0.49.
Question 2: Should I exclude these two low scales and compute correlations with only the three scales whose Cronbach's alpha exceeds 0.7?
PS: N = 336 in my study.
In general, an approximate probability density function of the performance function can be obtained by the moment method. Are there other methods?
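Common alternatives to moment methods include Monte Carlo simulation (with an empirical histogram or kernel density), maximum entropy, saddlepoint approximations, and polynomial chaos expansions. A minimal sketch contrasting a two-moment normal approximation with a direct Monte Carlo estimate, for a made-up linear performance function:

```python
import math
import random

random.seed(0)

def g(x1, x2):
    """Hypothetical linear performance function, g = 3*X1 - X2."""
    return 3.0 * x1 - x2

# Monte Carlo sample of the performance function
samples = [g(random.gauss(1.0, 0.1), random.gauss(2.0, 0.3))
           for _ in range(100_000)]
n = len(samples)
mu = sum(samples) / n
var = sum((s - mu) ** 2 for s in samples) / (n - 1)

# Two-moment (normal) approximation: reliability index and P(g < 0)
beta = mu / math.sqrt(var)
pf_normal = 0.5 * math.erfc(beta / math.sqrt(2.0))

# Distribution-free Monte Carlo estimate of the same probability
pf_mc = sum(s < 0.0 for s in samples) / n
```

For this linear Gaussian case the two estimates agree closely; for strongly non-linear or heavy-tailed performance functions the empirical (Monte Carlo) density is the safer reference.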
I am currently analyzing my data and have results I don't know what to make of. I tested whipped cream for whipping time at three different moments: day 0, day 1, and day 2 (three different groups). I used a one-way ANOVA to test whether the group means differ. The ANOVA is significant; however, when I run a post hoc test to see which groups differ, all of those results are non-significant. The variances are equal, so I used the Tukey test (but every other post hoc test in my program gives the same non-significant results).
I think this may be because the ANOVA can give a Type I error (incorrectly rejecting the null hypothesis that there is no difference between the groups) while the post hoc test makes a more conservative comparison between the groups, but I don't know exactly how this works.
Does anybody know how to draw a clear conclusion from these results? I would be very grateful for any help!
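This pattern is common: the omnibus F test pools evidence across all groups, while each Tukey pairwise comparison is corrected for multiplicity and so has less power, meaning a significant ANOVA with no significant pairs is not necessarily a Type I error. For reference, a minimal sketch of how the one-way F statistic is assembled (toy numbers, not your whipping-time data):

```python
from statistics import mean

def one_way_f(groups):
    """F statistic for a one-way ANOVA on a list of sample groups."""
    k = len(groups)                                   # number of groups
    n = sum(len(g) for g in groups)                   # total observations
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Three small toy groups, e.g. whipping times on day 0, 1 and 2
print(one_way_f([[1, 2, 3], [2, 3, 4], [3, 4, 5]]))   # -> 3.0
```

The F statistic tests all means at once against the pooled within-group variance; Tukey then splits that evidence across three pairwise tests, each held to a stricter threshold, which is why the pieces can individually fall short.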
A correct interpretation of reliability analysis helps researchers and industrial developers refine their designs and prepare proper maintenance schedules and safety analyses. However, I still see many designers preferring classical safety factors over reliability-analysis techniques. What is your sense of this?
For example, imagine you are buying a bearing and I tell you its reliability is 94% over an expected life of 5 years. That means that if you ran 100 bearings under normal operating conditions, about 6 of them would fail within 5 years. Does this kind of analysis make sense for your research and development?
If the answer is yes, how do you use the outcome of reliability analysis in your research area? The answer matters to me because I am about to start developing commercial software for reliability analysis, and I need to understand what experts expect from reliability-analysis methods.
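One common way such a statement is produced is by fitting a lifetime distribution, often Weibull for bearings, and reading reliability off its survival function. A minimal sketch under an assumed shape parameter (the value beta = 1.5 is hypothetical, chosen only for illustration):

```python
import math

# Hypothetical Weibull lifetime model for a bearing population
beta = 1.5                     # assumed shape parameter (illustrative)
t0, R0 = 5.0, 0.94             # stated: 94% reliability at 5 years

# Solve R(t) = exp(-(t / eta) ** beta) for the scale parameter eta
eta = t0 / (-math.log(R0)) ** (1.0 / beta)

def reliability(t):
    """Probability a bearing survives beyond time t (years)."""
    return math.exp(-(t / eta) ** beta)

# Of 100 bearings run for 5 years, about 6 are expected to fail
expected_failures = 100 * (1 - reliability(5.0))   # -> approx. 6
```

The same curve then answers follow-up engineering questions, e.g. the reliability at a warranty horizon other than 5 years, which a single safety factor cannot provide.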
I am evaluating the reliability and availability of a hydropower plant using dynamic fault tree gates, and a Dynamic Bayesian Network (DBN) is used to evaluate the top-event probability. I cannot figure out how many time slices I should use for my network, or whether my time slices should be 1 month / 6 months / 1 year, or 1 year / 2 years / 5 years.
Also, should all plant components, under both static and dynamic gates, be represented with different time slices in the DBN, or only the components under dynamic gates?
I conducted a diary study. Three independent judges analysed the data (with thematic analysis), and after joint deliberation we now have six categories.
Do I need to do a confirmatory factor analysis?
I am a Master's student writing a thesis on coordination in construction. My idea is to rank 59 coordination factors by importance and by how time-consuming they are, based on questionnaire survey data. To measure the most important factors I will use the Relative Importance Index (RII) method; which method would be suitable for measuring how time-consuming the factors are? Another question: I will use reliability analysis and descriptive statistics as general analyses; what other types of analysis might also be suitable?
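For reference, RII is usually computed as RII = ΣW / (A·N), where W are the respondents' ratings, A is the highest rating on the scale, and N is the number of respondents; the same index applied to "time spent" ratings could rank time consumption as well. A minimal sketch (factor names and ratings are invented):

```python
def rii(ratings, highest=5):
    """Relative Importance Index: sum(W) / (A * N)."""
    return sum(ratings) / (highest * len(ratings))

# Hypothetical ratings from five respondents on a 1-5 scale
factor_ratings = {
    "timely information exchange": [5, 4, 5, 3, 4],
    "clear responsibility assignment": [3, 3, 4, 2, 3],
}
ranked = sorted(factor_ratings, key=lambda f: rii(factor_ratings[f]),
                reverse=True)
# "timely information exchange" ranks first (RII 0.84 vs 0.60)
```

RII always falls between 1/A and 1, so the 59 factors can be compared directly on one scale regardless of how many respondents rated each.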
I have a question regarding the switching method used to select standby units for operation in a complex system, in the context of reliability analysis. Is there an appropriate method for selecting a standby unit?
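For context, the textbook two-unit cold-standby model with exponential units and an imperfect switch gives R(t) = e^(-λt)·(1 + p·λt), where p is the probability the switch succeeds; the switch quality directly bounds how much the standby unit helps. A minimal sketch (all parameter values are illustrative only):

```python
import math

def cold_standby_reliability(lam, t, p_switch=1.0):
    """Two-unit cold standby with exponential units (failure rate lam)
    and a switch that succeeds with probability p_switch."""
    return math.exp(-lam * t) * (1.0 + p_switch * lam * t)

single = math.exp(-0.1 * 10)                        # one unit alone
perfect = cold_standby_reliability(0.1, 10)         # ideal switching
degraded = cold_standby_reliability(0.1, 10, 0.9)   # 90%-reliable switch
```

Comparing `single`, `degraded`, and `perfect` shows the gain from the standby unit and how much of it an unreliable switch gives back.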
Historical data are used to forecast the number of functional failures in passenger trains. However, there is always a difference between the forecast and the actually observed data. I am wondering which technique or approach is suitable for minimizing the forecast error in the case of railway data.
I would be grateful if you could share a useful article or case study.
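One simple baseline for reducing forecast error on such count series is exponential smoothing, with the smoothing constant tuned on the historical data to minimize an error measure such as MAE. A minimal sketch (the failure counts are invented):

```python
def ses(series, alpha):
    """One-step-ahead forecasts from simple exponential smoothing."""
    f = [series[0]]                  # initialise with the first observation
    for y in series[:-1]:
        f.append(alpha * y + (1 - alpha) * f[-1])
    return f

def mae(actual, forecast):
    """Mean absolute error between observed and forecast values."""
    return sum(abs(a - p) for a, p in zip(actual, forecast)) / len(actual)

# Hypothetical monthly functional-failure counts
failures = [10, 12, 11, 13, 12, 14]

# Tune alpha on the historical series to minimise MAE
best_err, best_alpha = min(
    (mae(failures, ses(failures, a / 10)), a / 10) for a in range(1, 10)
)
```

The same tune-on-history idea carries over to richer models (Holt-Winters for seasonality, ARIMA), which tend to suit railway failure data when maintenance cycles induce seasonal patterns.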
Hi! In my bachelor's thesis I am using a new 12-item scale for sexual orientation, and the reliability analysis gave a Cronbach's alpha of .75. After consulting the inter-item correlations, I decided, because of the high correlations, high alphas, and the item content, to aggregate items, which left me with only 6 items. But now I have an alpha of only .52. I could exclude one of the aggregated items, which would raise alpha to .63, but that would exclude "Attraction towards men" altogether, which does not seem like the most reasonable course.
How do I proceed? Is it valid to skip the aggregation and explain why in my thesis?