Questions related to Survey Analysis
If I ask a respondent a question that they must answer from their own perception, there is always a chance they will answer in a positive manner so as not to sell themselves short. For example, if I ask a question: Do you think you have the following capacity?
Question: I can manage more than one complex task at a time.
The scale for this question is: agree, no opinion, disagree.
Then everyone will try to highlight their positive side, and the answer will be "yes, I agree."
In that case, is there any way to avoid this error, or any methodology to find out the real answer?
Please give me your valuable suggestions.
Thank you in advance for your cooperation.
I have conducted a survey on paper. The questionnaire consists of 82 items divided into 15 subscales; that is, 5-6 items belong to each scale/variable I want to explore. I use a 6-point Likert scale for each of the items. Even though the questionnaire clearly states that respondents should choose only one response per item, some respondents have chosen two options (e.g. 1 and 2, or 3 and 4). How do I handle this? I would really appreciate suggestions, thanks!
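For what it's worth, here is how I am currently thinking of coding the double-marked items. This is a sketch of two conventions I have seen suggested, not an established rule: (a) treat the item as missing, or (b) take the midpoint of two adjacent marks.

```python
# Sketch (my own assumption, not an established rule): resolve a double-marked
# Likert item either by treating it as missing or by taking the midpoint of
# two adjacent marks (e.g. 3 and 4 -> 3.5).
def resolve_double_mark(marks, policy="midpoint"):
    """marks: the options circled for one item, e.g. [3, 4]."""
    if len(marks) == 1:
        return marks[0]
    if policy == "missing":
        return None
    lo, hi = min(marks), max(marks)
    if hi - lo == 1:                  # adjacent marks: split the difference
        return (lo + hi) / 2
    return None                       # non-adjacent marks stay unresolvable

print(resolve_double_mark([3, 4]))    # 3.5
print(resolve_double_mark([2]))       # 2
print(resolve_double_mark([1, 5]))    # None
```

Whichever policy is chosen, it should be applied uniformly and reported in the methods section.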
Could you recommend courses, papers, books or websites about mixed survey analysis methodologies?
Thank you for your attention and valuable support.
Hi everyone! I am currently conducting research on how the U.S. federal government’s response to COVID-19 shaped its perceived legitimacy among Americans, and whether this is mediated by affective polarization. My variables are:
IV: Party identification (“Generally speaking, do you think of yourself as a Democrat, a Republican, or Independent?” A follow-up question was used: “Which of the following parties do you identify with the most? Strong Democrat, Not very strong Democrat, Strong Republican, Not very strong Republican, Independent, Independent - Democrat, Independent - Republican.”)
DV: Legitimacy (Six scale items were measured across a 5-point Likert scale ranging from Strongly Agree to Strongly Disagree)
Mediator: Affective Polarization (8-point trait item scale to measure positive and negative attitudes towards a respondent’s preferred political party compared to their opposed one - Democrat and Republican: "delighted,” “angry,” “happy,” “annoyed,” “joy,” “hateful,” “relaxed” and “disgusted.” A Feeling Thermometer was also used).
This is my first survey analysis and my first time using R, and needless to say there's been a steep learning curve on teaching myself how to use the software.
Question: does anyone have a good idea of where to start tackling this? Do I start with CFA, re-measure Cronbach's alpha for the scales? How do I aggregate the multi-item scales? Is it necessary for me to use one measure and discard the other? Your help is very much appreciated :-)
I would like to run a Kruskal-Wallis test on a large sample with a Likert scale. I am having difficulties figuring out how to proceed with the ''rank_avg'' and rank sum with such a large sample. Here's how my data is presented for a typical question:
- Marketing department, 1200 respondents: 88% are favorable, 4% neutral and 8% not favorable to the question
- Eng. department, 500 respondents: 77% are favorable, 3% neutral, 20% not favorable... and so on
I would like to assess whether the results from each department are statistically significantly different from one another. Maybe this is not the right test to use; I would like your help, since this is my first survey analysis.
Thank you and best regards
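To make the setup concrete, here is a minimal sketch (plain Python, mid-ranks computed by hand) that rebuilds the raw responses from the percentages above, codes not favorable/neutral/favorable as 1/2/3, and computes the tie-corrected Kruskal-Wallis H. The reconstruction of exact counts from rounded percentages is my own assumption.

```python
def kruskal_h(groups):
    """Tie-corrected Kruskal-Wallis H for a list of samples."""
    pooled = sorted(x for g in groups for x in g)
    n = len(pooled)
    ranks, ties, i = {}, 0, 0
    while i < n:                       # assign mid-ranks to tied values
        j = i
        while j < n and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2
        ties += (j - i) ** 3 - (j - i)
        i = j
    h = 12 / (n * (n + 1)) * sum(
        sum(ranks[x] for x in g) ** 2 / len(g) for g in groups
    ) - 3 * (n + 1)
    return h / (1 - ties / (n ** 3 - n))   # correction for ties

# Rebuild responses from the reported percentages
# (1 = not favorable, 2 = neutral, 3 = favorable)
marketing = [3] * 1056 + [2] * 48 + [1] * 96      # 1200 respondents
engineering = [3] * 385 + [2] * 15 + [1] * 100    # 500 respondents
h = kruskal_h([marketing, engineering])
print(round(h, 1))  # compare against the chi-square cutoff 3.84 for df = 1
```

With only two groups this is equivalent to a Mann-Whitney test; a chi-square test on the 2×3 table of counts would be another natural option.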
The initial use of meta-analysis was to compare results from clinical trials.
However, could I use it for data obtained from questionnaires?
For instance, the question for dentists would be: ''Do you sterilize your devices regularly?'' I will try to combine responses from different studies.
Is there any manual or framework on that methodology?
I have been using Thomas Lumley's "survey" package for complex survey analysis in R. I understand that a multinomial regression model is not yet available in the "survey" package. Is there any other solution to this problem?
In the articles and videos I've seen on the quantification of elements by XPS survey of Si-based materials, I noticed that only Si 2p is quantified. Why is Si 2s ignored?
Thanks for any reply.
So, here is the story. I was given this Likert-scale data for analysis, and I just can't figure out how I should deal with it. It is a 1-7 scale with answers ranging from 1 being "extremely worse" to 7 being "extremely better". But here is the problem: 4 is "same as before", and the questions introduce the changes as an effect of a different variable, which is work from home (for example, "Compared to work from the office, how much has your ability to plan your work so that it was done on time changed when working at home?").
Questions are grouped to form variables, and the mean should probably show each person's opinion on the change, right? But it just seems too strange to me to work with just one parameter and not go through a full comparison of now vs. before as two different constructs.
If you have any works or insight on the topic, can you please help me?
All the best and take care!
The question is actually very specific. I am doing a survey analysis.
I have 3 variables: year of college education (4, 5, 6), whether the person has passed an exam (yes/no), and a score they got on a test. How can I compare whether there is a difference in test scores, among 4th-year students only, between people who passed and didn't pass the exam?
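A minimal sketch of what I have in mind (the records are made-up numbers for illustration): keep only the 4th-year students, then compare the pass/fail subgroups with Welch's t statistic.

```python
from statistics import mean, variance

# Hypothetical records: (year, passed_exam, test_score) — made-up numbers
records = [
    (4, True, 80), (4, True, 85), (4, True, 90), (4, True, 88),
    (4, False, 70), (4, False, 72), (4, False, 68), (4, False, 75),
    (5, True, 95), (6, False, 60),          # filtered out below
]
fourth = [r for r in records if r[0] == 4]  # keep 4th-year students only
passed = [s for (_, p, s) in fourth if p]
failed = [s for (_, p, s) in fourth if not p]

# Welch's t statistic (unequal variances allowed)
t = (mean(passed) - mean(failed)) / (
    variance(passed) / len(passed) + variance(failed) / len(failed)
) ** 0.5
print(len(fourth), round(t, 2))
```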
I hope to conduct a series of interviews/questionnaire surveys to collect information regarding urban flood management and the use of software tools for the same.
Fundamentally, decision-makers, flood modellers, the general public and software modellers/developers are among my expected audience.
Could you please suggest what personal information should be considered when weighing them?
My assumptions are as follows:
1. Decision-makers: age, level of education, years of service, level in the organization, number of participations/decisions in actual flood management activities
2. Flood modellers: educational status (MSc/PhD etc.), years of experience, number of participations/decisions in actual flood management activities
3. Software developers: years of experience, number of contributions to actual flood management software development and the role he/she played
4. General public: age, the level to which the person was affected by floods, educational level, experience with floods
The Prospect Theory (Kahneman, D., & Tversky, A. (1979). Prospect Theory: An Analysis of Decision under Risk. Econometrica, 47(2), 263-291) defines a "value function". I want to know if there is a way to estimate that curve based on answers of the questionnaires (prospects).
In cumulative prospect theory they estimate some parameters, but I didn't understand how (Tversky, A., & Kahneman, D. (1992). Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and Uncertainty, 5(4), 297-323.).
Can anyone give me a simpler explanation?
I have a question in my questionnaire regarding purchase intention, and the options to choose the answer are:
- Definitely Not
- Probably Not
From this question, I need to figure out the relation between purchase intention and three other factors (asked in 15 different Likert-scale questions).
I am doing a pretest, and so far 8 people have filled in the questionnaire, and 7 of them have chosen 'Possibly' as their answer to the purchase intention question.
So, my question is: if, for example, 90 percent of respondents in the final questionnaire choose the same answer for that question, can I still get a meaningful analysis from my data?
This is fascinating work. As a Workforce specialist in this field, I'm curious if you've had the chance to conduct similar analyses on the NCI Staff Stability Survey. Perhaps on correlations between staff stability and quality outcomes? Thanks so much!
I am working on validating a questionnaire and I need to ensure that there are few (or no) outliers that might affect the factor analysis process. Is the outlier labeling technique (Hoaglin, Iglewicz) applicable to non-normal data?
Slovin's Formula is quite popularly used in my country for determining the sample size for survey research, especially in undergraduate theses in education and the social sciences, maybe because it is easy to use and the computation is based almost solely on the population size. Slovin's Formula is given as follows: n = N/(1 + Ne²), where n is the sample size, N is the population size and e is the margin of error decided by the researcher. However, its misuse is now also a popular subject of research here in my country, and students are usually discouraged from using the formula even though the reasons behind this are not clear enough to them. Perhaps it would be helpful if we could know who Slovin really was and what the bases of his formula were.
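For reference, the computation itself is trivial; for example, for N = 10,000 and e = 0.05 the formula gives the familiar n ≈ 385:

```python
import math

def slovin(population_size, margin_of_error):
    """Slovin's formula: n = N / (1 + N * e**2), rounded up."""
    return math.ceil(population_size / (1 + population_size * margin_of_error ** 2))

print(slovin(10_000, 0.05))  # 385
```

Note that the formula only coincides with the standard proportion-based sample-size formula in the special case p = 0.5 and z ≈ 1.96, which is part of why its blanket use is criticized.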
As part of my MTech research project, I am conducting a questionnaire survey to analyse the concurrent delay scenario in the Indian construction industry. Expert opinion is required for the same. Please take some of your valuable time and help me by filling out the following survey:
I have conducted an online survey to assess public opinions on the Soft Drink Industry Levy (SDIL/sugar tax) in the UK. The survey also investigates current soft drink consumption habits and the expected effectiveness of the SDIL, and includes demographic questions such as age, gender, income and health.
I am now starting analysis. I have been advised to conduct one-way ANOVA and logistic/ordinal regressions in SPSS; however, I am not sure which variables to use for which test, or how many participants/data points are needed for each test to make it statistically valid. Any help is much appreciated!
I am still collecting survey tools that researchers have used in studies of transgender men and women. The tools will be used to inform the development of a new survey tool that may be added to the CDC's National HIV Behavioral Surveillance (NHBS) survey set. If you are willing to share your survey tool (and have not already done so), please do so.
All the best,
I have been using R for quite a while now but came across data that requires the use of survey weights for proper analyses. Therefore I tried to use the "survey" package, which seemed to work fine at first (after some struggle, but nonetheless it was okay) - but when double-checking my work with Stata, I realized my numbers were WAY off.
Despite reading through the (sparse) documentation and examples there are, and even buying the package author's book, I still don't feel very confident with this type of work. Nonetheless I would love to learn to master this package, as survey analysis comes up quite often in my research, and I would love to continue working in R.
Any help and advice will be welcomed! Thank you!
If I take data from a survey for only 10% of cases and randomly generate the remaining 90% from an application (based on the 10%), will this work? I am in the IS discipline.
I think many people do simulate things in other domains too.
A group of residents from two hospitals answered a survey anonymously, scoring their confidence from 0 to 10; then a lecture was provided. A follow-up survey was answered by the same group using the same score. As I read, I can't use the paired t-test, as this requires pairing the same person, which I can't do due to the anonymity. Would running independent t-tests make me lose power unnecessarily?
I would appreciate the help.
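To make the comparison concrete, here is a sketch with made-up 0-10 confidence scores, treating the pre- and post-lecture groups as independent samples (Welch's t, which does not assume equal variances):

```python
from statistics import mean, variance

# Hypothetical confidence scores (made up for illustration)
before = [4, 5, 3, 6, 5, 4, 6, 5]
after = [6, 7, 5, 8, 7, 6, 8, 7]

def welch_t(a, b):
    """Welch's t for two independent samples (no equal-variance assumption)."""
    se = (variance(a) / len(a) + variance(b) / len(b)) ** 0.5
    return (mean(a) - mean(b)) / se

t = welch_t(before, after)
print(round(t, 2))  # negative here: confidence rose after the lecture
```

The loss of power relative to a paired test comes from not being able to subtract out each person's baseline; the independent test is still valid, just more conservative.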
I've carried out a questionnaire where participants had to rate, on a Likert scale of 1-5, a list of response strategies with respect to their ability to address a certain issue. In total there are five such major issues, and participants were asked to rank the strategies under each separately. What is the best technique to analyse and summarize the above data?
My questions were based on knowledge, attitudes, and behavior regarding disaster reduction. Respondents answer only Agree, Disagree, or No idea. Results will be shown as a comparison between two groups. I want to know which statistics I should use for these data.
1. What are African American males' perceptions of mental illness?
2. How much do stigmas associated with mental illness impact African American men's understanding of mental illness?
There is a need for additional studies to identify the perceptions of mental illness within the black population, specifically among African American men. The researcher will use qualitative research techniques to explore the African American male participants' perceptions of mental illness and mental health services. The focus of this study will be 25 African American men in Tallahassee, Florida, aged 18 and above. The selection of the specific geographical area is a convenience sample.
The purpose of this study is to explore African American adult men's perceptions of mental illness. The researcher will explore stigmas and misconceptions of mental illness among the participants. Additionally, the researcher will attempt to identify whether such stigmas and misconceptions influence the participants' willingness to seek mental health services. The use of qualitative research is to identify any patterns and gain general knowledge of the phenomenon, based on participants' opinions.
I am an MA student in Dance Practices, and I have to produce a research proposal poster for my final dissertation. I am going to five dance companies that teach disabled children to find out the methods they use and why (physical guidance, tactile, etc.). I have looked at social constructivist theory and at using qualitative research methods, but would I use a mixed-methods approach to analyse my information, or stick with qualitative? Sorry, it's all very new to me!
I am currently conducting a dissertation for my final year at university. My research question relates to attachment and giving behaviour. I have already tried the ECR-R, the 32-item questionnaire; however, I am not achieving a high response rate due to its length.
I have heard about the ECR-S, which has only 12 items, and there is evidence suggesting that this questionnaire is just as effective as the long 32-item ECR-R. Does anyone have any experience with analysing the data associated with the ECR-S attachment questionnaire?
I used to collect data by questionnaire from the same unified unit of analysis (for example, employees only). Now I have a question: if I have two variables in my research model that can't be collected from the same unit of analysis (for example, one questionnaire for employees and another for the organization's customers), does the analysis differ, or can the same analysis techniques be used?
I am studying the effect of knowledge management on service quality, but I need to measure service quality with employees as the respondents of my survey. How can I do that? I also need recent studies on service quality for my literature review.
I administered a survey to parents of autistic children to see if a specific carbohydrate diet helped to improve outcomes such as sleep. I have around 6 such outcomes, and the results are in the form of yes/no. Also, there are demographic variables of income, ethnicity and sex. I am not sure what would be the best test to use. Also, any suggestions on how I can categorize data with demographic variables and 6-7 outcomes to test the intervention's success or failure? Thanks
I'm currently working on a systematic review of the impact of early palliative care interventions for patients diagnosed with advanced cancer. Selected publications that assessed the impact of those interventions on patients' quality of life have used different quality-of-life questionnaires. I have already found some MID values for QoL questionnaires but didn't find anything about the FACIT-Pal and FACIT-Sp questionnaires.
So I would like to know if someone knows these values or some good references about them.
I administered a questionnaire to social workers, but the number of completed questionnaires is very low (9). On one item I obtained 4 responses for option a, 3 for option b and 3 for option c. Can the bootstrap be used to estimate the distribution?
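Here is a minimal sketch of what I mean by bootstrapping, using the counts reported above: resample the responses with replacement and take a percentile interval for the proportion choosing option a.

```python
import random

random.seed(42)  # reproducible resampling
data = ["a"] * 4 + ["b"] * 3 + ["c"] * 3   # observed responses
B = 5000
props = sorted(
    random.choices(data, k=len(data)).count("a") / len(data) for _ in range(B)
)
ci = (props[int(0.025 * B)], props[int(0.975 * B)])
print(ci)  # a very wide interval, reflecting the tiny sample
```

The bootstrap cannot create information the sample does not contain, so with so few respondents the interval will be wide; it mainly makes that uncertainty explicit.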
Hi everyone. I am working on the quantitative chapter of my thesis, and I would like to ask you about handling closed-ended questions using a 5-point Likert-scale questionnaire. My questionnaire looks at students' perspectives on a course called Intensive English as a Foreign Language.
I have been looking at the literature, and I find it more confusing when it comes to cell ranges. I came across two methods of mean distribution of the findings.
To determine the minimum and maximum length of the 5-point Likert-type scale, the range is calculated as (5 − 1 = 4), then divided by five, the greatest value of the scale (4 ÷ 5 = 0.80). Afterwards, number one, the least value on the scale, is added in order to identify the upper limit of each cell. The lengths of the cells are determined below:
- From 1 to 1.80 represents (strongly disagree).
- From 1.81 until 2.60 represents (do not agree).
- From 2.61 until 3.40 represents (true to some extent).
- From 3.41 until 4.20 represents (agree).
- From 4.21 until 5.00 represents (strongly agree).
The second method is the traditional way:
- mean score from 0.01 to 1.00 is (strongly disagree);
- from 1.01 until 2.00 is (disagree);
- from 2.01 until 3.00 is (neutral);
- 3.01 until 4:00 is (agree);
- mean score from 4.01 until 5.00 is (strongly agree)
My questions are:
1. Which method should I use to present the findings?
2. When and why is the first method used?
My intention is to apply a descriptive analysis by presenting frequencies, the mean and the standard deviation of the questions, then the total mean of each theme.
I really appreciate your help in this matter.
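The first method's arithmetic can be written out directly; the cut points are just equal-width cells of (5 − 1)/5 = 0.80 starting at 1:

```python
width = (5 - 1) / 5                         # 0.80
bounds = [1 + i * width for i in range(6)]  # [1.0, 1.8, 2.6, 3.4, 4.2, 5.0]
labels = ["strongly disagree", "disagree", "true to some extent",
          "agree", "strongly agree"]

def label_mean(m):
    """Map a mean score (1-5) to its cell label under method 1."""
    for hi, lab in zip(bounds[1:], labels):
        if m <= hi:
            return lab
    return labels[-1]

print(label_mean(3.5))   # agree
print(label_mean(1.5))   # strongly disagree
```

The second method's cells have width 1.00 instead of 0.80, which is why the two methods can assign the same mean to different labels (e.g. a mean of 3.5 is "agree" under method 1 but "agree" only from 3.01 under method 2).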
I'm looking at the National Diet and Nutrition Survey data. I'm interested in studying the association between some variables. My question is: although the survey is a two-stage cluster survey, can I use the unweighted data, as I'm not interested in the population parameters as much as in the relations between variables? Must I use complex survey analysis, or can I use a normal analysis procedure? Again, I'm just interested in the relation between some variables. The data set has several weighting variables to make results representative of the whole UK population.
I am trying to evaluate the impact of an intervention that was implemented in very poor areas (more poor people, underserved communities). In addition, the location of these areas was such that health services were limited because of various administrative reasons. Thus, the intervention areas had two problems: (1) individuals residing in these areas were mostly poor, illiterate and belonged to underserved communities; (2) the geographical location of the area was also contributing to their vulnerability (people with a similar profile but living elsewhere, in non-intervention areas, had better access to services). I have cross-sectional data about health service utilization from both types of areas at endline. There is no baseline data available for intervention and control. I intend to do two analyses. (1) Intent-to-treat analysis: here, I wish to compare service utilization across "areas" (irrespective of whether a household in an intervention area was exposed to the intervention). The aim is to see whether the intervention could bring some change at the "area" (village) level. My question is: can I use propensity score analysis for this, by matching intervention "areas" with control "areas" on aggregated values of covariates obtained from the survey and the Census? For example, matching intervention areas with non-intervention areas in terms of the percentage of poor households, percentage of illiterate population, etc. (2) The second analysis is to examine the treatment effect: here I am using propensity score analysis at the individual level (comparing those who were exposed in intervention areas with matched unexposed people from non-intervention areas). Is this the right way of analysing the data for my objective?
I am conducting a project in three different centers. I sent a questionnaire to 1,000 respondents in each center and got responses as follows:
1) fewer than 200 in the first two centers, and
2) more than 750 in the third center.
The response from the last center is extremely large compared to the first two.
How will I compare the data from these three centers? Kindly guide me; I want to apply a t-test and ANOVA.
Thank you so much.
I would like to study European citizens' opinions on European integration, using a European survey with direct questions on the matter and others such as income, preferred political party, and so forth. Then I would like to understand if there is a correlation between the results of this regression and the level of inequality within each country.
I am afraid of using the Gini index (or any other index based on income) to proxy inequality in this second regression, since there could be a problem of correlated variables. What could I possibly do? Is there an alternative index that I could use and that would still make sense?
I am doing research on FDI in retail.
To examine "consumers' perception towards FDI in retail", I collected data from two cities, Meerut and Agra. What type of test should I apply to analyse the data? The questionnaire is attached.
Dear all, in order to separate the predictor from the criterion, I plan to collect data from two different groups of raters from the same company. Group A, 600 employees, will fill in questionnaire 1, and Group B, also 600 employees, will fill in questionnaire 2.
1. How do I select these groups with minimum differences that might impact the cause-effect relationship?
2. How do I measure or control for these differences?
Thank you very much in advance
A Likert scale is used to assess the attitudes or opinions of participants. The reliability of such a scale is assessed with Cronbach's alpha.
Now I want to create a knowledge score on vaccine-preventable diseases. Each participant will be asked to identify the vaccine-preventable diseases from 8 selected diseases. The response would be yes or no (two-way, closed-ended): one point for a correct answer and zero for a wrong answer. Therefore there will be a scale measuring knowledge in the range 0 to 8; the higher the number, the higher the knowledge.
Is there any way to test the reliability of such a knowledge scale, as with a Likert scale?
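The usual choice for dichotomous items is the Kuder-Richardson formula 20 (KR-20), a special case of Cronbach's alpha. A sketch of the computation with made-up data (note that textbooks differ on sample vs. population variance for the totals; sample variance is used here):

```python
def kr20(responses):
    """KR-20 for 0/1 items; responses[i][j] = respondent i's score on item j."""
    n, k = len(responses), len(responses[0])
    p = [sum(r[j] for r in responses) / n for j in range(k)]   # item difficulties
    pq = sum(pi * (1 - pi) for pi in p)
    totals = [sum(r) for r in responses]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / (n - 1)   # sample variance
    return (k / (k - 1)) * (1 - pq / var_t)

# Made-up example: 4 respondents x 3 items
data = [[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]
print(round(kr20(data), 4))  # 0.9375
```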
I have generally seen the scale used as a continuous scale. One recommendation for cut points (high hope, etc.) would be to use one or two standard deviations above or below mean, although this would be dependent on the characteristics of those who take the scale.
Substantively, I am attempting to show change over time in the hope of individuals with serious mental illness from the time of onset of involvement in mental health services. I can use simple change scores, but in presentation it would be clearer to be able to say that the individuals moved from low hope to ...
I have conducted two identical surveys, 4 weeks apart, containing Likert scales. Some questions use the scale of "strongly agree" through to "strongly disagree" whilst the others use "very useful" to "not at all useful". The responses have been assigned numerical values in Excel and SPSS (e.g. strongly agree = 5). Both surveys were given to the same cohort of participants (as mentioned, 4 weeks apart).
The results were anonymous and no ID number was assigned to respondents. Hence, when I come to compare the results from the surveys (i.e. how the responses to the questions changed), I will be unable to match each individual's results. I was originally going to do the Wilcoxon signed-rank test for my statistical analysis. However, I understand that this requires matched results (i.e. the responses from each individual directly aligned/in the same row as one another), which is not possible in this situation.
Has anyone got any advice? The only option at the moment appears to be the Mann-Whitney U test.
Thank you in advance
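For what it's worth, the Mann-Whitney U itself is simple to compute from mid-ranks, which also shows how the Likert codes (with their many ties) enter the statistic. The two waves below are made-up responses:

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U (smaller of the two U values), with mid-ranks for ties."""
    pooled = sorted(a + b)
    ranks, i = {}, 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2   # average rank of the tied block
        i = j
    u_a = sum(ranks[x] for x in a) - len(a) * (len(a) + 1) / 2
    return min(u_a, len(a) * len(b) - u_a)

# Made-up Likert responses (1-5) from the two survey waves
wave1 = [2, 3, 3, 4, 2, 3]
wave2 = [4, 4, 5, 3, 5, 4]
print(mann_whitney_u(wave1, wave2))  # 4.0
```

The unpaired test is valid for anonymous pre/post designs; it simply forgoes the extra power that within-person pairing would have provided.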
I am a psychology student exploring the impact of a mindfulness program on stress, mindfulness practice, resilience and self-compassion. I have used the FFMQ, alongside other measurements. I am looking for a paper that interprets the scores and gives me suggested cut-offs.
I am aware that the FFMQ cannot be used as a whole as Observe does not correlate with the other domains.
Thank you in advance.
I'd like to do a rating-scale survey to find out whether or not people in a (voluntary) organization accept authority from above. I am looking for items with which I can measure this.
We have created a questionnaire to measure prejudice; it is answered on a 5-point scale (1 = agree to 5 = disagree), and I am trying to compare it to another, established questionnaire that measures negative attitudes. How do I run an analysis to find concurrent validity in SPSS?
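In SPSS this is typically done via Analyze > Correlate > Bivariate on the two scale totals. As a sketch of the underlying computation (all totals below are made up; the prejudice scale is reverse-scored first, since 1 = agree):

```python
def pearson_r(x, y):
    """Pearson correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Made-up totals for 5 respondents on the two questionnaires
prejudice_raw = [2, 4, 1, 5, 3]             # 1 = agree ... 5 = disagree
prejudice = [6 - s for s in prejudice_raw]  # reverse-score so high = more prejudice
negative_attitude = [4, 2, 5, 2, 3]
r = pearson_r(prejudice, negative_attitude)
print(round(r, 2))
```

A high correlation with the established measure is then taken as evidence of concurrent validity.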
I am doing my graduate research on the Mincer model. How can I find y0, the earnings with no education and no experience? The data I will use will be from my own distributed questionnaire, as microdata are not available for my country. Can you please suggest what kind of data I should ask people for?
I have found studies which use a qualitative approach, specifically using semi-structured interviews. I am looking to find the problems which are faced by refugee entrepreneurs in Turkey based on which I would then propose some solutions. Is it possible to do this through surveys? I have worked on a survey based on a previous study which performed qualitative research. They created a model using different aspects which I then used to create a survey. I am also in the process of doing semi-structured interviews. I want to know how reliable would the survey results be in generalising the obstacles and proposing a solution? Should they be used separately or together in one paper?
I would like to use the Palmore's Facts on Aging Quiz for MSN students. Does anyone know where to get permission to use this tool?
My survey is on ''Lack of Critique and Toughness of Sri Lankan English Literature: a case at the Advanced Technological Institute, Dehiwala, Sri Lanka''. The target population is the students of the Higher National Diploma in English, which is a two-and-a-half-year diploma program. Those students are A/L-qualified. If anyone has some ideas related to your local English literature, kindly share them with me.
It seems Cohen's kappa requires that our units (the sentences) be independent; however, the coded sentences are part of an interview, so sentences follow each other along similar lines, and the odds that a code will be repeated across several sentences are higher than by chance.
Also, our codes are not mutually exclusive, another condition of the kappa.
Which statistic should be used here instead?
I am looking for an article or a book which supports that about 15 subjects are appropriate for testing the reliability of a questionnaire (a 4-point Likert scale with 26 items).
Thank you very much in advance for your kind advice.
I am doing an analysis of a survey and people are part of either Group 1 or Group 2. I would like to compare how these two groups respond to the main measure in my survey. I was wondering what options I have for a quantitative analysis of this difference.
Folks, I have used Qualtrics panels - not sure if you are familiar with this. Qualtrics has panels and charges a certain amount per survey depending on how scarce the panel respondents are - for example, CEOs may be $50 a survey, whereas a middle manager might be $8-10 depending on the criteria. It is a little difficult to get response-rate data in this case.
Another method some of my colleagues are using is Amazon Mechanical Turk. I have not tried this and am not familiar with the quality. It is definitely a less expensive option - I have heard it could be $1 per survey response.
What are your views and experience on some good ways to get quality survey data?
Hi, can anyone please point me to a good survey for measuring the experience of visual work productivity? In particular, I am interested in “visual work productivity”, such as reading and writing, in a visual ergonomics context. Kindly let me know of any related papers on this topic.
I have identified a lot of different independent variables, too many to put all of them in one survey. I would like to split the survey into two sections: part 1 is always the same, while part 2 contains X random variables from a pool of all the other independent variables (those not included in part 1). Can I still analyse these data in one model (e.g. a regression analysis)? My goal is to find out which of the independent variables explain most of the variance in my dependent variable. I would really appreciate it if someone could give me some ideas for methods or approaches that I can check out. Thank you!
We are developing and validating a questionnaire regarding chronic disease and disaster preparedness. The questions are all categorical in nature, e.g.:
1) Which of the following items do you have as part of your household’s disaster preparedness?
Check all that apply
Water, two liters of water per person per day for at least three days
Food, at minimum a three-day supply of non-perishable food
2) Do you require supplemental oxygen?
Mark only one box
If no go to question 5
3) Do you have 72 hours of supplemental oxygen cylinders (compressed gas, not liquid)?
Mark only one box
I am struggling to find the best way to examine test-retest reliability. I think I should be using kappa statistics, but I'm a bit confused about it, and there is little literature on the subject matter.
I'm looking for something like the O'Brien article in Academic Medicine 2014 (Standards for Reporting Qualitative Research: A Synthesis of Recommendations) or CONSORT or something similar?
How many days between assessments is best for measuring test-retest reliability?
This is for a comprehensive structured interview for children and teens, to test the test-retest reliability (similar to the K-SADS or SCID for adults).
I am starting my dissertation related to spiritual well-being and compassion fatigue, and I plan to use three different measurement surveys, such as the Professional Quality of Life (ProQOL) measure. What is the best way or ways to make comparisons statistically among the three scales?
Dear researchers and Professors
I am going to run an exploratory factor analysis for a questionnaire with 16 items. Prior to running the EFA, I calculated the internal consistency with Cronbach's alpha. Alpha was estimated at .41!
I have two questions:
1- Can I run that EFA?
2- If your answer is “No” (because of low alpha and inadequate internal consistency), what is your comment?
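As a point of reference, here is a sketch of the alpha computation itself (made-up data), which also makes visible why weakly or inconsistently correlated items drive alpha down:

```python
def cronbach_alpha(responses):
    """Cronbach's alpha; responses[i][j] = respondent i's score on item j."""
    n, k = len(responses), len(responses[0])

    def var(xs):                       # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = sum(var([r[j] for r in responses]) for j in range(k))
    total_var = var([sum(r) for r in responses])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Perfectly consistent items give alpha = 1.0
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # 1.0
# Inconsistent items can even push alpha below 0
print(cronbach_alpha([[1, 3], [2, 1], [3, 2]]))  # -2.0
```

An alpha of .41 may also simply mean the 16 items measure several distinct factors rather than one; in that case the EFA itself can clarify the structure, and alpha is better computed per extracted factor afterwards.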
I have 4 items in a questionnaire in which the answers were on the same type of scale: from 0 to 5, 0 for always false and 5 for always true. I want to study whether an item's score can determine certain scores on other items.
The variables in my dataset are item responses to multiple scales.
I want to conduct quantitative analysis of MMPs and TNF-alpha. Can anyone suggest a better and reasonably cheap method or assay?
MMP-8, 9, 13 and TNF-alpha.
Please advise on models or procedures (literature) I can consult for using a matching procedure to assign participants to a treatment and control group in a quasi-experiment. I'm aware that, due to non-randomization, selection bias is a major threat to the internal validity of my research. Therefore, I wish to control for any biases from covariates in the design and data analysis. I've come across pair matching, stratification, and covariate adjustment, but have found no practical procedure for how I can propose to implement these models in my study. I know that when matching participants to control and treatment groups, one must ensure that participants have a great deal of 'similarity' to reduce non-equivalence.
So, simply put: do you know of any practical models that I can use in my research to do matching of participants in a quasi-experiment?
I want to create a survey in which I will put up a person's picture and ask the respondent to guess his/her age. Then I will present an offer ("the average of guesses about his/her age is .....") and let the respondent change their answer after this offer. So, how do I create such a survey, in which the respondent can see their previous answer and change it? Thanks in advance.
I am currently analyzing the data from my questionnaire, where I am doing the first step of a scale-development process. I have around 45 items, which I submit to exploratory factor analysis. Respondents answered all items for four countries. My question: when I run the EFA, should I run it separately for the answers for each of the four countries?
When I consider all answers to all items for the EFA, I would have a within-subject design, as each person answered each item for all four countries.
So should I run four separate EFAs, one for each country's answers, and then compare the results? What should I do if I obtain different factor structures?
Specifically, ½ the sample was a stratified random sample of peri-urban households and ½ was a hand-selected sample of surrounding villages where all households were surveyed. I would like to do one regression analysis with this data so that I have enough power for significant results. However, I am concerned that the difference in sampling methods would invalidate the analysis or undermine its interpretation. Can anyone advise? Is it valid for me to put these two groups with different sampling methods into one analysis? How might that affect the interpretation of the analysis?
I am working on my thesis, and I have a survey that has true/false and multiple-choice questions. From what I understand, KR-20 is used for dichotomous variables to estimate a reliability coefficient. Also, if multiple-choice questions are not equally weighted, I can use KR-20. Am I correct in using KR-20 for both true/false and multiple-choice items? Any suggestions and input are greatly appreciated. If you can suggest references, that would be great. Thank you!
Hi, I have run a questionnaire with 9 dimensions. Because they are measured using a 7-point Likert scale, they are ordinal variables (please correct me if I am wrong). My hypothesis claims that there is a relationship between the dimensions, so I need to group the items into dimensions to be able to correlate them. I seem to remember that this can be done with the mean for scale variables, but not for ordinal ones.
How can I group them, then, to check if there is a relationship between the variables?
Thanks in advance!
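One common approach is to average the items of each dimension into a score and then use Spearman's rank correlation, which only needs ordinal information, so averaging is used purely as a summary. A sketch with made-up item scores:

```python
def ranks(xs):
    """Mid-ranks (ties averaged) for a list of values."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r, i = [0.0] * len(xs), 0
    while i < len(xs):
        j = i
        while j < len(xs) and xs[order[j]] == xs[order[i]]:
            j += 1
        for t in range(i, j):
            r[order[t]] = (i + 1 + j) / 2
        i = j
    return r

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Made-up item scores for two dimensions, averaged into dimension scores
dim1_items = [[4, 5, 4], [2, 3, 2], [6, 6, 7], [3, 4, 3]]
dim2_items = [[5, 5], [3, 2], [4, 4], [7, 6]]
s1 = [sum(r) / len(r) for r in dim1_items]
s2 = [sum(r) / len(r) for r in dim2_items]
rho = spearman_rho(s1, s2)
print(round(rho, 2))  # 0.2
```

In SPSS, the equivalent is to Compute the dimension means and request Spearman's rho under bivariate correlations.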