Survey Research - Science method

Questions related to Survey Research
  • asked a question related to Survey Research
Question
3 answers
Hi everyone,
Hi everyone,
I am currently reviewing feedback responses from my pilot survey (mixed methods, with a mixture of closed and open-ended questions).
One piece of feedback I received from many respondents was that they would like to choose more options in a question I asked, e.g. "Describe the type of service delivery that you provide (please pick up to three)."
For this type of question, I don't think forced choice or check-all-that-apply (CATA) would be appropriate, and I am trying to prevent people from satisficing on a CATA item. Most clinicians (the target population) work with a variety of clients, and my goal is to get them to pick the top three they spend the most time in. Given my research question, I am not asking them to rank; what I am looking for are certain relationships, e.g. respondents working in the education department seemed to be more confident in x. So in my case all the options provided are of equal standing, and I simply want respondents to pick three.
As mentioned, many respondents want to pick more than three, and I am not sure what number is appropriate. I have looked for guidance in research articles on survey design, and nothing relevant seemed to come up.
So my question is: does anyone know of any research evidence behind these multiple-selection questions (why only three can be chosen), and is there some formula for deciding how many options respondents should choose based on the number of options provided (e.g. if there are 6 options on the response stem choose only 2, if 12 choose 4, etc.)? If there is an expert who has done specific research on this, please let me know their name and I will look them up.
Thanks in advance.
Gen
Relevant answer
Answer
There is no clear-cut formula or rule when it comes to determining the number of options respondents should choose in multiple selection questions. It ultimately depends on the research question, the targeted population, and the nature of the options being provided. Some studies may require respondents to choose only one option, while others may allow them to select multiple options.
However, there are some best practices that researchers can follow when designing multiple selection questions. For example, researchers should ensure that the options provided are mutually exclusive and collectively exhaustive. This means that each option should represent a unique and distinct category, and that all possible categories are included in the options.
In terms of limiting the number of options respondents can choose, researchers can consider providing a clear rationale for why only a certain number of options can be selected. This can be explained in the survey instructions or in a pre-survey communication with respondents.
Overall, the key is to design questions that are clear, concise, and relevant to the research question at hand, while also taking into consideration the needs and preferences of the targeted population.
  • asked a question related to Survey Research
Question
3 answers
A number of people have asked on ResearchGate about acceptable response rates, and others have asked about using nonprobability sampling, perhaps without knowing that these issues are highly related. Some ask how many more observations should be requested over the sample size they think they need, implicitly assuming that every observation is selected at random, with no selection bias, and that one case easily substitutes for another.
This is also related to two different ways of 'approaching' inference: (1) the probability-of-selection-based/design-based approach, and (2) the model-based/prediction-based approach, where "prediction" means estimation for a random variable, not forecasting. 
Many may not have heard much about the model-based approach.  For that, I suggest the following reference:
Royall(1992), "The model based (prediction) approach to finite population sampling theory." (A reference list is found below, at the end.) 
Most people may have heard of random sampling, and especially simple random sampling where selection probabilities are all the same, but many may not be familiar with the fact that all estimation and accuracy assessments would then be based on the probabilities of selection being known and consistently applied.  You can't take just any sample and treat it as if it were a probability sample.  Nonresponse is therefore more than a problem of replacing missing data with some other data without attention to "representativeness."  Missing data may be replaced by imputation, or by weighting or reweighting the sample data to completely account for the population, but results may be degraded too much if this is not applied with caution.  Imputation may be accomplished various ways, such as trying to match characteristics of importance between the nonrespondent and a new respondent (a method which I believe has been used by the US Bureau of the Census), or, my favorite, by regression, a method that easily lends itself to variance estimation, though variance in probability sampling is technically different.  Weighting can be adjusted by grouping or regrouping members of the population, or just recalculation with a changed number, but grouping needs to be done carefully. 
Recently work has been done which uses covariates for either modeling or for forming pseudo-weights for quasi-random sampling, to deal with nonprobability sampling.  For reference, see Elliott and Valliant(2017), "Inference for Nonprobability Samples," and Valliant(2019), "Comparing Alternatives for Estimation from Nonprobability Samples."  
Thus, methods used for handling nonresponse, and methods used to deal with nonprobability samples are basically the same.  Missing data are either imputed, possibly using regression, which is basically also the model-based approach to sampling, working to use an appropriate model for each situation, with TSE (total survey error) in mind, or weighting is done, which attempts to cover the population with appropriate representation, which is mostly a design-based approach. 
If I am using it properly, the proverb "Everything old is new again" seems to fit here: in Brewer(2014), "Three controversies in the history of survey sampling," Ken Brewer showed that we have been down all these routes before, which led him to believe in a combined approach. If Ken were alive and active today, I suspect he might see things going a little differently than he had hoped, in that the probability-of-selection-based approach is not maintaining as much traction as I think he would have liked, even though he first introduced 'modern' survey statistics to the model-based approach in a 1963 paper. Today there appear to be many cases where probability sampling is not practical or feasible. On the bright side, I do not find it a particularly strong argument that your sample would give you the 'right' answer if you did it infinitely many times, when you are doing it once, assuming no measurement error and no bias of any kind. So relative standard error estimates are of great interest there, just as they are under a prediction-based approach, where the estimated variance is the estimated variance of the prediction error associated with a predicted total, with model misspecification as a concern. In a probability sample, if you miss an important stratum of the population when doing, say, a simple random sample because you do not know the population well, you could greatly over- or underestimate a mean or total. If you have predictor data on the population, you will know the population better. (Thus, some combine the two approaches: see Brewer(2002) and Särndal, Swensson, and Wretman(1992).)
So, does anyone have other thoughts on this and/or examples to share for this discussion: Comparison of Nonresponse in Probability Sampling with Nonprobability Sampling?    
Thank you.
References:
Brewer, K.R.W. (2002), Combined Survey Sampling Inference: Weighing Basu's Elephants, Arnold: London, and Oxford University Press.
Brewer, K.R.W. (2014), "Three controversies in the history of survey sampling," Survey Methodology, 39(2), December 2013 (Waksberg Award paper).
Elliott, M.R., and Valliant, R. (2017), "Inference for Nonprobability Samples," Statistical Science, 32(2), 249-264.
Royall, R.M. (1992), "The model based (prediction) approach to finite population sampling theory," Institute of Mathematical Statistics Lecture Notes - Monograph Series, Vol. 17, pp. 225-240 (open access via Project Euclid).
Särndal, C.-E., Swensson, B., and Wretman, J. (1992), Model Assisted Survey Sampling, Springer-Verlag.
Valliant, R. (2019), "Comparing Alternatives for Estimation from Nonprobability Samples," Journal of Survey Statistics and Methodology, 8(2), April 2020, 231-263.
Relevant answer
Answer
This is a very interesting perspective, James R Knaub , and one that you could well share on Frank Harrell's Datamethods discussion forum : https://discourse.datamethods.org
Other than that, I'm going to have a look at those references over a largeish pot of coffee before I say anything stupid (stupid plus references allows you to cover your retreat better!)
r
  • asked a question related to Survey Research
Question
3 answers
At the US Energy Information Administration (EIA), for various establishment surveys, Official Statistics have been generated using model-based ratio estimation, particularly the model-based classical ratio estimator.  Other uses of ratios have been considered at the EIA and elsewhere as well.  Please see
At the bottom of page 19 there it says "... on page 104 of Brewer(2002) [Ken Brewer's book on combining design-based and model-based inferences, published under Arnold], he states that 'The classical ratio estimator … is a very simple case of a cosmetically calibrated estimator.'" 
Here I would like to hear of any and all uses made of design-based or model-based ratio or regression estimation, including calibration, for any sample surveys, but especially establishment surveys used for official statistics. 
Examples of the use of design-based methods, model-based methods, and model-assisted design-based methods are all invited. (How much actual use is the GREG getting, for example?)  This is just to see what applications are being made.  It may be a good repository of such information for future reference.
Thank you.  -  Cheers. 
Relevant answer
Answer
In Canada they have a Monthly Miller’s Survey, and an Annual Miller’s Survey.  This would be a potential application, if used as I describe in a paper linked below. As in the case of a survey at the US Energy Information Administration for electric generation, fuel consumption and stocks for electric power plants, they collect data from the largest establishments monthly, and from the smallest ones just annually.  After the end of the year, for a given data item, say volume milled for a type of wheat, they could add the twelve monthly values for each given establishment, and with the annual data collected, there is then an annual census.  To predict totals each month, the previous annual census could be used for predictor data, and the new monthly data would be used for quasi-cutoff sample data, for each data item, and with a ratio model, one may predict totals each month for each data item, along with estimated relative standard errors.  Various techniques might apply, such as borrowing strength for small area predictions, adjustment of the coefficient of heteroscedasticity, and multiple regression when production shifts from say, one type of grain to another, as noted in the paper. 
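As a toy illustration of the ratio-model prediction described above: the previous annual census supplies predictor data for every establishment, monthly data come only from the quasi-cutoff sample of the largest establishments, and the classical ratio estimator predicts the monthly total. All numbers are invented; Python is used only for illustration.

```python
# Toy sketch of the classical ratio estimator as a model-based
# predictor of a monthly total. All numbers are invented.
# y: current-month values, observed only for the quasi-cutoff sample.
# x: predictor data (e.g. prior annual census), known for everyone.
y_sample = [10.0, 20.0, 30.0]     # observed monthly values (largest mills)
x_sample = [5.0, 10.0, 15.0]      # their census predictor values
x_nonsample = [8.0, 12.0]         # census values for the unsampled mills

b = sum(y_sample) / sum(x_sample)                 # ratio b = sum(y)/sum(x) = 2.0
total_hat = sum(y_sample) + b * sum(x_nonsample)  # observed part + predicted part
# total_hat = 60 + 2.0 * 20 = 100.0
```

A full implementation would also estimate the variance of the prediction error for the unsampled part, with a coefficient of heteroscedasticity in the error structure, as the paper discusses; that is omitted here for brevity.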
Here are the Canadian mill surveys:
Monthly Miller’s Survey:
Annual Miller’s Survey:
This survey information is found on page 25 of the paper below, as of this date.  There will likely be some revisions to this paper.  This was presented as a poster paper at the 2022 Joint Statistical Meetings (JSM), on August 10, 2022, in Washington DC, USA.  Below are the poster and paper URLs. 
Poster:
The paper is found at
If you can think of any other applications, or potential applications, please respond. 
Thank you. 
  • asked a question related to Survey Research
Question
38 answers
Research methodologists have identified serious problems with the use of "control variables" (aka nuisance variables, covariates), especially in survey research. Among the problems are uninterpretable parameter estimates, erroneous inferences, irreplicable results, and other barriers to scientific progress. For the sake of discussion, I propose that we stop using control variables altogether. Instead, any variable in a study should be treated like the others: if it is important enough to include in the research, it is important enough to be included in your theory or model. Thoughts for or against this proposition?
Relevant answer
Answer
There have been huge developments in this area, in terms of concepts and techniques, over the last ten years (building on some very old ideas) that go well beyond the simplistic notion of 'control' variables; they are succinctly and very well summarised in Holger Steinmetz's reply to this question:
"there are five types of variables in any causal system (besides your target independent variable X and outcome Y):......"
I find http://dagitty.net/ helpful in thinking through these issues.
  • asked a question related to Survey Research
Question
6 answers
Kindly provide your suggestions; this is an important part of my research work.
Relevant answer
Answer
1. A t-test contains an implicit statistical hypothesis: that the two sets of data are drawn from normally distributed populations with the same mean.
2. No; a single hypothesis can be used to test more than one research question.
  • asked a question related to Survey Research
Question
5 answers
I have been trying to understand research paradigms (neo-positivism, interpretivism/social constructionism, and critical realism) for a few days now, and I have been reading a number of resources, primarily Blaikie and Priest's Social Research: Paradigms in Action (2017) and Tracy's Qualitative Research Methods. Blaikie and Priest say that paradigms are used at the level of explanation, but when I read Tracy's work, I get the impression that paradigms come into play at the level of description as well. These various descriptions create more confusion for me. At what level of research do these paradigms come into play?
In addition, I have been reading many articles that do not seem to follow the descriptions of the paradigms strictly. Are there studies that do not usually follow them?
In light of these two points, do you think that survey research follows these paradigms?
Looking forward to reading your views and thoughts.
Relevant answer
Answer
Nice question and answers. All the best
  • asked a question related to Survey Research
Question
3 answers
In survey research, it is advised that the response rate should be high to avoid self-selection bias. What methods can be used to assess whether the data are affected by biases resulting from a low response rate?
Relevant answer
Answer
Hello Saima,
If you're lucky enough to have information about characteristics of the target population, and to have collected some of that information about your sample, you could:
1. Run comparisons to see whether your sample deviated notably from the population characteristics. For example, if the target population was 60% female, but your sample was 80% female, then you have evidence that your sample deviates from the population in one potentially important aspect.
2. You could apply weights for these variables to your sample data set, to represent how the results might have looked, had your sample more closely matched with the characteristics of the population.
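The compare-then-weight procedure in the two steps above can be sketched as follows. The counts are invented for illustration, using the 60% female population / 80% female sample scenario from step 1:

```python
# Invented counts: population is 60% female / 40% male, but the
# realised sample of n=100 came back 80% female / 20% male.
pop_share = {"female": 0.60, "male": 0.40}
sample_counts = {"female": 80, "male": 20}
n = sum(sample_counts.values())

# Post-stratification weight = population share / sample share.
weights = {g: pop_share[g] / (sample_counts[g] / n) for g in pop_share}
# weights ~= {"female": 0.75, "male": 2.0}

# The weighted sample size stays n, but the weighted group shares
# now match the population shares.
weighted_n = sum(weights[g] * sample_counts[g] for g in sample_counts)
# weighted_n ~= 100.0
```

Note that weighting only corrects for the characteristics you can observe and compare; bias on unobserved characteristics of nonrespondents remains.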
Good luck with your work.
  • asked a question related to Survey Research
Question
24 answers
I was supposed to collect data from 384 respondents, but I only got 230 complete responses in return. In this case, my response rate is only 60%. Is it acceptable?
Relevant answer
Thank you, professors, for the good question and answers. Good information for me.
Kind Regards,
  • asked a question related to Survey Research
Question
3 answers
What are the components that should best describe the organization of a study related to a medical survey proposal?
Relevant answer
Answer
Dear researcher, first you have to identify the research gap in your topic by reviewing the literature. Then set the objectives; describe the methodology, including the model for data analysis; analyse your survey data; discuss the results; and present the key findings, recommendations, and conclusion. Best wishes.
  • asked a question related to Survey Research
Question
5 answers
Can I use a combination of a vignette (for one variable, i.e. the dependent variable) with a self-report survey questionnaire (for all other variables: IVs, mediators, and moderators)? If I can, what types of analysis, and what software, may I use? If I can't, what should I do? (Scale development is not a good solution: no scale for that dependent variable has been used or is available in previous survey research.) In other words, can I use a vignette for one variable together with self-report scales for all other variables, in combination (a sort of mix of experimental and self-report methodology)?
Relevant answer
Answer
Hi Ijaz,
I am assuming you would be using the vignette to set the stage for asking questions or seeking responses about it. If that is the case, the answer is yes, you can use a vignette as part of a survey questionnaire. Regarding what analysis or analyses you could use, that depends on the nature of your sampling and responses, the research questions you are asking, and the viability of your assumptions. Analyses could range from simple frequencies to much more complex statistics and even qualitative methods.
Good luck,
J. McLean
  • asked a question related to Survey Research
Question
5 answers
Is the sample size for CFA the same as that for EFA in educational survey research?
Relevant answer
Answer
1. n = 50 + 8k, where k = number of independent variables;
or,
2. n = 10 × the number of statements/items.
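Both rules of thumb are simple arithmetic; a quick sketch (the value of k and the item count are invented examples, and these are minimum-sample heuristics, not exact requirements):

```python
# Rule of thumb 1 (regression-style): n >= 50 + 8k,
# where k is the number of independent variables.
k = 5
n_rule1 = 50 + 8 * k              # 90

# Rule of thumb 2 (factor analysis): n >= 10 cases per statement/item.
items = 20
n_rule2 = 10 * items              # 200

# When both apply, the more demanding rule governs.
n_needed = max(n_rule1, n_rule2)  # 200
```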
  • asked a question related to Survey Research
Question
3 answers
Hello Researchers,
Can anyone recommend a good and professional survey data-collecting platform in China?
Relevant answer
Answer
You can use PsyToolkit; it can also be run offline, and it supports Chinese (Traditional) and Chinese (Simplified).
You can run surveys but also reaction time experiments. The community is very welcoming and helpful.
  • asked a question related to Survey Research
Question
9 answers
Hi. I am a bachelor's degree student currently conducting survey research on knowledge, attitude, and purchasing behaviour towards chicken eggs, and willingness to pay for chicken eggs. May I know what the independent and dependent variables are for this research? I'm still new to this type of research method and confused about which variable is which. I really appreciate your help; thank you.
Relevant answer
  • asked a question related to Survey Research
Question
17 answers
Dear RG members!
I would like to know whether a survey research paper should be published in a reputable journal or not.
Warmly welcoming your opinions
Relevant answer
Answer
Thank you very much for your contribution to the research.
God bless you and your families
  • asked a question related to Survey Research
Question
9 answers
Dear colleagues,
I am preparing an instrument to better understand the factors influencing the research agenda setting of researchers working in academic and non-academic settings. I would like to ask for your collaboration to make it better, by completing it and leaving comments at the end of the questionnaire on how to improve it. Many thanks in advance.
p.s.: the survey is voluntary and anonymous, and I would appreciate it if you could circulate it among colleagues that would be willing to contribute and help. Many thanks.
Relevant answer
Answer
Hugo Horta thank you so much dear. well appreciated.
  • asked a question related to Survey Research
Question
29 answers
As the internet is nowadays a useful tool for data collection and research, issues of ethical approval and informed consent go hand in hand with its use. Although participants, by default, give their consent when they complete the survey, the question of ethical approval remains unanswered. While there is no need to obtain ethical approval for secondary data and in some similar circumstances, what about primary research using internet-based surveys?
Relevant answer
Answer
Chiranjivi Adhikari Re: Does internet-based survey research need ethical approval? Yes, of course: you need to cover all the bases, but it may not be as difficult as you first think. Proper design and planning can ensure that the primary concerns are addressed. Informed participant consent can be obtained along with the primary data you seek. Maintaining participant privacy, confidentiality, and anonymity has to be built into the design and the process of the instrument.
A properly designed data collection instrument will attend to the administration of the tool as much as to its design and validation.
The design and administration of a survey require up-to-date knowledge of current trends in internet research ethics (IRE) and ethical considerations in internet-based research (IBR).
I suggest a deep dive into the topics during the design phase. Elizabeth A. Buchanan, endowed chair in ethics and director of the Center for Applied Ethics at the University of Wisconsin (UW)-Stout, is informed and respected; her work is a good place to begin. For an overview, look at:
Standard entries:
  • Buchanan, E.A. and Zimmer, M., 2012. Internet research ethics. https://plato.stanford.edu/entries/ethics-internet-research/
  • Buchanan, E.A. and Ess, C.M., 2009. Internet research ethics and the institutional review board: Current practices and issues. Acm Sigcas Computers and Society, 39(3), pp.43-49.
Dated, but still valid- esp. re: ethics committees, which still have a lot of catching up to do!
  • Buchanan, E.A. and Hvizdak, E.E., 2009. Online survey tools: Ethical and methodological concerns of human research ethics committees. Journal of empirical research on human research ethics, 4(2), pp.37-48.
And there are numerous others (Google Scholar is useful).
Cheers,
Leo
  • asked a question related to Survey Research
Question
7 answers
This anonymous survey is open to all UK and Middle East academics, researchers, postgraduate students, and professionals. It takes 10 minutes to complete. At the end of the survey you will be offered the opportunity to fill in your details on a separate online form, in case you wish to be considered for the prize draw. To participate, please click on the link below. You are welcome to share the link with your professional and/or social network too.
This is a survey for a Master’s thesis and your support is greatly appreciated. The title of the study is ‘‘The role of leadership self-efficacy (LSE) in developing academic and professional leaders’’. You can find more information in the Participant Information Sheet, which is available with the survey.
Relevant answer
Answer
Thank you! More than 300 responses received so far. The survey will be open until midnight (London, UK Time) today. Thank you for helping.
  • asked a question related to Survey Research
Question
10 answers
I am going to conduct a survey among experts working in power plant construction projects in my country. As far as I know, construction of 30 mega projects is ongoing. The targeted experts fall into 7 categories: contractors, sub-contractors, vendors, project directors (PD), project managers (PM), site engineers, and consulting engineers (consultants). The other variables are project size in terms of power generation capacity, budget, project location, the experts' experience (years), and academic qualification. Please advise me precisely, to save time; I am at a critical moment, with a presentation in just a couple of weeks. Thank you for your patience and time.
Relevant answer
Answer
Dear, Muhammad Saiful Islam, this link will help you in your research:
Determining the sample size in a quantitative research study is challenging. There are certain factors to consider, and there is no easy answer. Each experiment is different, with varying degrees of certainty and expectation. Typically, there are three factors, or variables, one must know about a given study, each with a certain numerical value. They are significance level, power and effect size. When these values are known, they are used with a table found in a statistician's manual or textbook or an online calculator to determine sample size.
Choose an appropriate significance level (alpha value). An alpha value of .05 is commonly used. This means that the probability of concluding there is a difference between the control group and the experimental group when none actually exists (a Type I error) is .05, or 5%.
Select the power level. Typically a power level of .8, or 80%, is chosen. This means that 80% of the time the experiment will detect a difference between the control and experimental groups if a difference actually exists.
Estimate the effect size. Generally, a moderate to large effect size of 0.5 or greater is acceptable for clinical research. This means that the difference resulting from the manipulation, or treatment, would account for about one half of a standard deviation in the outcome.
Organize your existing data. With the values for the three factors available, refer to the table in your statistician's manual or textbook; or enter the three values into an online calculator made for determining sample size.
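As a sketch of steps 1-3 above, the usual normal-approximation formula for the per-group sample size of a two-sided, two-group comparison can be computed directly with the Python standard library. The alpha, power, and effect size values are the ones given in the steps above; the normal approximation slightly understates the answer an exact t-based calculation (e.g. from a power table or calculator) would give.

```python
from math import ceil
from statistics import NormalDist

# Normal-approximation sample size per group for a two-sided,
# two-sample comparison of means:
#   n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2
alpha, power, d = 0.05, 0.80, 0.5   # values from the steps above

z = NormalDist().inv_cdf            # standard normal quantile function
n_per_group = ceil(2 * ((z(1 - alpha / 2) + z(power)) / d) ** 2)
# n_per_group == 63; an exact t-based calculation gives about 64 per group
```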
  • asked a question related to Survey Research
Question
2 answers
I'm conducting a comparative research of omnichannel experience. Kindly ask to take part in this survey https://impresaluiss.qualtrics.com/jfe/form/SV_7WZOTN1rENKA4EC
Highly appreciate your help!
Relevant answer
Answer
Dear Marián Čvirik , Thank you for your participation and suggestion! I find them really relevant!
  • asked a question related to Survey Research
Question
7 answers
Dear academy colleagues,
I'm looking for a truly comprehensive resource for teaching graduate students the elements of conducting robust survey research, including proper survey development, validation, distribution, confidentiality, data security, collation, statistical analysis, interpretation, and sense-making. I've seen elements of these in resources here and there, but not complete, and usually not written in a way that is accessible to a graduate student.
Do you have recommendations about a single resource or a progression of resources that really help a student get from zero to fairly strong (obviously with practice and some mentoring)?
Relevant answer
Answer
Progression. The choice of resources to which learners are exposed, and their order, must be made carefully; otherwise learners get confused.
  • asked a question related to Survey Research
Question
5 answers
I'm a bit confused about how to measure students' learning. I have used the word 'learning' in my research topic, and now I'm stuck on what kind of questionnaire or other tool could be used to measure students' learning, rather than their performance or achievement. I need clarification about the term 'learning'.
Relevant answer
Answer
The word "learning" is broad and misleading as you have indicated in the clarification of your question. If you mean text-learning strategies, you could find the questionnaire used by Roegiers et al. (2020) helpful. To know more about various self-report approaches in student learning, you may go through the commentary by Pekrun (2020).
References
Pekrun, R. (2020). Self-report is indispensable to assess students’ learning. Frontline Learning Research, 8(3), 185–193. https://doi.org/10.14786/flr.v8i3.637
Roegiers, A., Merchie, E., & Van Keer, H. (2020). Opening the black box of students’ text-learning processes: A process mining perspective. Frontline Learning Research, 8(3), 40–62. https://doi.org/10.14786/flr.v8i3.527
Good luck,
  • asked a question related to Survey Research
Question
4 answers
I am trying to perform the cell-weighting procedure on SPSS, but I am not familiar with how this is done. I understand cell-weighting in theory but I need to apply it through SPSS. Assume that I have the actual population distributions.
Relevant answer
Answer
I might be misunderstanding your question, or his answer, but in my reading of what you are trying to do, I think the approach suggested by David Morse is missing a final step.
I'll assume, as David did, that the population, with N=3200, consists of 500 cases (or 15.625%) in subgroup A, 700 (21.875%) in B, and 2000 (62.5%) in C. I'll assume, further, that you have a sample, with n=80, that includes 10 cases (or 12.5%) in A, 20 (25%) in B, and 50 (62.5%) in C. If so, then the cell weights for your sample should be (15.625/12.5 = 1.25) for cell A, (21.875/25 = 0.875) for cell B, and (62.5/62.5 = 1.0) for cell C. That will keep your weighted sample size at 80, the same as your unweighted sample size, but will make the proportions of cases in A, B, and C in your weighted sample equal to the population proportions.
Forming the weights as ratios of the % in the population divided by the % in the sample will inflate the under-represented cells and deflate the over-represented cells in your sample by exactly the right amount.
If, instead, you also want to make the total number of cases in your sample equal to the total population size, then each of the three initial weights (1.25, 0.875, and 1.0) should be multiplied by (3200/80 = 40), yielding three new weights (50, 35, and 40).
Multiplying by the ratio of population size divided by sample size inflates all of your initially weighted sample counts by exactly the right amount to equal the population count.
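The arithmetic above can be sketched directly, using the same illustrative counts as in the worked example (SPSS would then apply these as a weighting variable via WEIGHT BY; the computation itself is shown here in Python):

```python
# Worked example from the answer above: population N=3200 split
# 500/700/2000 across cells A/B/C; sample n=80 split 10/20/50.
pop = {"A": 500, "B": 700, "C": 2000}
samp = {"A": 10, "B": 20, "C": 50}
N, n = sum(pop.values()), sum(samp.values())

# Cell weight = population % divided by sample %.
w = {c: (pop[c] / N) / (samp[c] / n) for c in pop}
# w == {"A": 1.25, "B": 0.875, "C": 1.0}

# These keep the weighted sample size at n = 80. To project up to the
# population size N instead, multiply every weight by N / n = 40.
w_pop = {c: w[c] * (N / n) for c in w}
# w_pop == {"A": 50.0, "B": 35.0, "C": 40.0}
```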
  • asked a question related to Survey Research
Question
3 answers
We are conducting survey research on COVID-19 misinformation among students. How can we validate our survey for it to produce valid and reliable data? Also, what statistical tools can we use to measure the extent of misinformation in a community? Thank you.
Relevant answer
  • asked a question related to Survey Research
Question
4 answers
I conducted an experimental study to examine the effect of different communication methods (two experimental groups, method A and method B, plus a control group). All participants were randomly assigned to one of the experimental conditions (i.e. communication method A, B, or control). After exposure to each method, respondents were asked to indicate their agreement or disagreement with five statements about the influence of the communication method, using a 4-point Likert scale (strongly disagree, disagree, agree, strongly agree) with a not-applicable option.
I coded “not applicable” responses in two different ways based on the experimental condition. First, if a respondent exposed to method A or method B chose “not applicable” as a response for the five statement items, I coded the response as zero. Treating “not applicable” as zero was appropriate in this situation because the respondent placed themselves at the lower end by indicating no effect of the treatments (Huggins-Manley et al., 2018; Welch, 2013). Second, if a respondent in the control group chose “not applicable” for any of the five statements, it was coded as four, because a “not applicable” response in the control condition describes their true situation (i.e. the communication method did not influence them). Using a predetermined value for not-applicable responses reduces variance. So I would like to seek any advice on how to code “not applicable” responses appropriately in such a situation.
Huggins-Manley, A. C., Algina, J., & Zhou, S. (2018). Models for semiordered data to address Not applicable responses in scale measurement. Structural Equation Modeling: A Multidisciplinary Journal, 25(2), 230-243. https://doi.org/10.1080/10705511.2017.1376586
Welch, W. W. (2013). Developing and analyzing a scale to measure the impact of the advanced technological education program. https://www.evalu-ate.org/resources/doc-2013-dev-analyzing/
Relevant answer
Answer
I'm confused as to why participants in the control group were given questions about the manipulation at all? I don't recommend assigning a value of 4 to participants in the control group who selected the NA option. The problem this will create is that a "4" response will represent different things between the experimental and control groups. Whichever manner you decide to code NA responses, it should be the same for all groups.
  • asked a question related to Survey Research
Question
4 answers
Can quantitative research, i.e. survey research, be exploratory in nature?
Relevant answer
Answer
EFA is quantitative and exploratory, but exploratory research is generally qualitative and can be done through thematic or content analysis.
  • asked a question related to Survey Research
Question
6 answers
How can I determine the sample size for my survey if I have two sample frames?
Do I calculate it separately for each?
Relevant answer
Answer
I recommend an online sample size calculator based on an a priori power analysis for the statistical test you plan to use.
  • asked a question related to Survey Research
Question
6 answers
The two designs seem to have some common characteristics as both of them correlate variables. Therefore, how could one differentiate between them?
Relevant answer
Answer
It is really a stupid distinction (a false antithesis) that is made in some textbooks (e.g. https://www.sagepub.com/sites/default/files/upm-binaries/57732_Chapter_8.pdf).
A survey is one way of collecting information, which may be in quantitative form.
A correlation is one form of analysis for quantitative data; it's a very simple technique, and you generally get more out of a model-based approach even for 'simple' problems.
On quantitative work and especially modelling you might want to have a look at
  • asked a question related to Survey Research
Question
6 answers
For example: Would using a 7 point scale of 0 - 3 in steps of 0.5 give you different results to using a 7 point scale of 0 - 6 in steps of 1? I'm aware that verbal labels are likely to be better but I'm interested in the possible differences between purely numerical scales that use 0.5 or 1 increments.
Thanks.
Relevant answer
Answer
Hello David,
As the 0-6 scale you describe is a simple linear transformation of the 0-3 scale (doubling the scores), there is no statistical or measurement advantage to either. Whether respondents react differently to the two response scales is an empirical question.
Good luck with your work.
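The invariance point above can be checked directly: a linear rescaling of the response scale leaves the Pearson correlation with any other variable unchanged. A small demonstration with invented response data:

```python
import numpy as np

# Hypothetical responses on a 0-3 scale in 0.5 steps, plus some other measure.
x = np.array([0.0, 0.5, 1.5, 2.0, 3.0, 2.5, 1.0])
y = np.array([1.0, 0.5, 2.0, 2.5, 3.0, 2.0, 1.5])

x_doubled = 2 * x  # the equivalent 0-6 scale in steps of 1

r_original = np.corrcoef(x, y)[0, 1]
r_doubled = np.corrcoef(x_doubled, y)[0, 1]
print(np.isclose(r_original, r_doubled))  # True: correlation is invariant
```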
  • asked a question related to Survey Research
Question
4 answers
I developed my own survey based on previous themes from qualitative data. Pairs of questions were extracted from subscales from previous questionnaires in the field, and then adapted to fit the survey context. I have now completed data collection and I ran a CFA on the 'a-priori' factors and questions that were developed, and the model fit wasn't great!
I went back to the data and conducted an EFA to see what factors did work together, and the reliability, plus overall model fit when doing CFA was much, much better. The factors extracted during EFA weren't that far from the original themes, except for a couple of questions being moved around.
Therefore, my question is - is this a done thing? As this was a data-driven survey, would it be acceptable to run EFA and go by this factor structure to continue with the rest of my stats? Or should I just stick with the original 'a-priori' factor structure and deal with the consequences?
Thanks!
Relevant answer
Answer
Rachel--it sounds like you are really in the instrument development process here. In this case, I think it makes sense to review the EFA findings. Even if the items you selected came from several well-designed existing measures, they have likely not been presented together in a single administration, as you have laid them out. And responses to items may vary, depending on the other items to which respondents have already been exposed and the order in which the items appeared (order effects). After you revise your instrument (based on feedback from this EFA), I would suggest that you re-administer your survey in a new sample, fit your new hypothesized factor structure in a CFA framework, and examine the fit of the CFA model at that point. You may still be able to improve your scale, but also be wary of trying too hard to obtain excellent model/data fit, as doing so can exploit idiosyncrasies in a specific data set that may not translate to a subsequent sample.
  • asked a question related to Survey Research
Question
3 answers
I plan to conduct an online survey via Survey Monkey. It is a quantitative study that will measure perceived stress and I plan to use purposive sampling. No correlations, just a simple survey research design on a specific sample. I endeavor to use the Statistical Package for Social Sciences (SPSS) I am not too sure what will be the best form of statistical analysis if my target population will be 100? Thank you in advance for your advice.
Relevant answer
Answer
What is the purpose of your study, research question, any hypotheses? Are you testing any model? Your statistical analysis should be the one that will enable you to test your hypothesis. Also, is your target population 100 (and you will be sampling from 100) or your target sample is 100? If your population size is 100 and the sample is lower, depending on a sample size you might face challenges in meeting some of the assumptions for the statistical test.
  • asked a question related to Survey Research
Question
6 answers
I will use a single-group pre-post test to evaluate training. In that case, which strategy should I follow: experimental or survey research?
Relevant answer
Answer
Respected Brother,
Salam from a Pakistani brother. In fact, there are only two ways to collect data: interview or observation. An experiment is a data creation method. Once you conduct an experiment, there is a change which is either observed or asked about, so an experiment will always use observation or interview for data collection.
So your options are:
either Experiment along with interview
or Experiment along with observation
  • asked a question related to Survey Research
Question
18 answers
When using a questionnaire as the data collection instrument in a survey, does a researcher need to formulate a hypothesis?
Relevant answer
Answer
I agree with the previous speakers.
In case of exploratory research, descriptive statistics are enough. Instead of hypotheses you use research questions
  • asked a question related to Survey Research
Question
4 answers
Hello,
I'm not at home in survey research among children (between 6 and 12 years old). I'm looking for scales for children that measure stress, coping, relationships with peers, etc. Any suggestions?
Kind regards,
Filip
  • asked a question related to Survey Research
Question
4 answers
What I mean to say is that measurements like strongly agree, agree, neutral, disagree and strongly disagree need to be analysed using ordinal logistic regression, because there is no quantification of how many times stronger "strongly agree" is than "agree", or "strongly disagree" than "disagree". Can anyone explain?
Relevant answer
Answer
Data sets differ in their attributes, and variables differ from one another, so different data will be analysed with different statistical methods by the researcher.
  • asked a question related to Survey Research
Question
5 answers
Could anybody please explain and give a model research report?
Relevant answer
Answer
Surveys are context specific.
Primary data is data collected for original research.
Secondary data is data collected for another purpose.
  • asked a question related to Survey Research
Question
13 answers
I hope to conduct a series of interviews/questionnaire surveys to collect information regarding urban flood management and the use of software tools for the same.
Fundamentally, decision-makers, flood modellers, general public and software modellers/developers are in my expected audience.
Could you please suggest what personal information should be considered when weighting them?
My assumptions are as follow;
1. Decision Makers: The age, level of education, years of service, the level in the organization, no of participations/decision makings in actual flood management activities
2. Flood modellers: educational status (MSc/PhD etc), years of experience, no of participations/decision makings in actual flood management activities
3. Software developers: years of experience, no of contributions in actual flood management software development and the role he/she played
4. General Public: The Age, the level of flood-affected to the person, educational level, experience with floods
Relevant answer
Answer
I appreciate the request to comment, Rmm, but I don't think I know enough about your particular problem domain.
That's the thing about applying weights to survey respondents - making responses from one person, or a group of people, more important than those of another. You would do this if you have a legitimate reason to think that one group is severely under-represented in your sampling frame, or in your final sample. Or if you have a theoretical reason for giving greater value to the responses of some, and lesser value to others.
You need to have a theory, and/or good evidence, to support the use of weights in the first place and some ideas about how much those weights should apply.
Thinking about it some more, the purpose of your research is likely to be important too. If you're interested only in the value of real estate affected by floods then your weights may apply to the value associated with the people/organisations you survey. If you are interested in the effects on people's homes then you may minimise commercial real estate and apply weights based on the sizes of families.
  • asked a question related to Survey Research
Question
5 answers
We conducted a survey about knowledge of a specific topic (measured as a score) and got some complete responses and some partial responses (demographics only). We think that non-response may be an indicator of lack of knowledge. How can I analyze the data to confirm that?
We thought to compare the demographics between full responders and partial responders and see whether the significantly different variables are the same variables that predict knowledge in the full response group. But we are looking for a better analysis approach that can combine the two outcomes in the same analysis (response to the survey and knowledge about the topic). Any advice?
Relevant answer
Answer
You can use the demographic data to test whether there are differences between those who did or did not respond to the knowledge questions. In particular, you can create a 1/0 variable for response versus non-response and then use it as the basis for a t-test on any of the demographics.
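A minimal sketch of that comparison, using invented demographic data (the groups, means and sample sizes below are assumptions for illustration, and Welch's unequal-variance t-test is used as a reasonable default):

```python
import numpy as np
from scipy import stats

# Hypothetical data: age of full responders vs partial responders.
rng = np.random.default_rng(0)
age_full = rng.normal(40, 10, 120)     # completed the knowledge items
age_partial = rng.normal(45, 10, 60)   # demographics only

# Welch's t-test (does not assume equal variances between groups).
t, p = stats.ttest_ind(age_full, age_partial, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
# A small p-value would suggest responders and non-responders differ on age.
```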
  • asked a question related to Survey Research
Question
8 answers
I am carrying out a statistical analysis of my collected data through interviews. I would like to interpret the RSD in terms of the level of precision (i.e., Excellent, acceptable, unacceptable). what happens if I got 30% RSD or 60% RSD?! What does this mean in terms of precision? I would appreciate if you support your answers with references.
Relevant answer
Answer
The relative standard deviation (RSD) expresses the standard deviation as a percentage instead of in standard units: RSD = (SD × 100) / mean.
As for precision: the standard deviation is not a tool for measuring precision. The SD in any form, i.e. the RSD, is a means of providing a range around the expected value, or mean. An observed 30% or 60% RSD means the data are that percentage off about the mean. When reading an SD we speak in terms of a big or small SD, not in the language of "precision". If couched in that language, then a 30% or 60% spread about the mean tells us the result is not "precise", being too far off, if we use, say, 5% as the acceptable level of error.
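The RSD formula above is a one-liner; here is a small sketch with hypothetical interview scores (the sample standard deviation is used, which is the usual choice for survey data):

```python
import statistics

def rsd_percent(values):
    """Relative standard deviation: sample SD as a percentage of the mean."""
    return statistics.stdev(values) / statistics.mean(values) * 100

scores = [4.0, 5.0, 6.0, 5.0]  # hypothetical interview scores
print(round(rsd_percent(scores), 1))  # -> 16.3
```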
  • asked a question related to Survey Research
Question
4 answers
Hi all,
I've got some questions for my thesis. I do a cross-sectional quantitative research (OLS regressions) based on an archival survey conducted between 2015 and 2017. The survey was not administered by me. I hope you can help.
1) What is preferable: dropping control variables to have a relatively larger sample size, or retaining controls and having a smaller sample size? (In my case the difference is big: n = 86 with the controls Firm Age and Firm Size, and n = 120 without.) OLS assumptions are not violated in either situation.
2) Do I have to report the inter-item reliability (e.g. Cronbach's alpha) of the survey items? Or can I assume the items are reliable, since the administrators did not mention it?
Regards,
Luuk
Relevant answer
Answer
Luuk -
For #1, do you mean that there are 120 firms in the sample, with firm-level data, and of those 120 sample members, 86 are firms of (approximately) the same age and size? Do you know how the sample was drawn, because that sounds like either most firms are alike, or a sample was drawn in an odd way. Establishment surveys, for one thing, generally produce very highly skewed data distributions, because their size is generally highly skewed. That is a good reason for using a model-based approach, or an unequal probability sample, and that sounds unlikely here.
It would be interesting to see scatterplots to explore size impact on results, and a graphical residual analysis would be good.
If the sample was drawn primarily to look at one firm size and age, leaving you short on data for other cases, you may miss other information in the multiple regression, but maybe the original study was designed to just look at one firm size and age closely, and separately take a less rigorous, low sample look at the remainder of the population. If you knew the age and size distributions of the population, you might know more about your sample.
Cheers - Jim
  • asked a question related to Survey Research
Question
10 answers
Dear all
I am currently doing a literature survey of research on diabetes. I want some literature evidence to inform a decision. Please share if you have any relevant literature.
Thanking you
Umaramani.M
  • asked a question related to Survey Research
Question
5 answers
I plan to carry out a survey research that will model a certain variable with other pre-specified variables. As this is not an intervention study, I am not obliged to register the study to such a database as clinicaltrials.gov. However, I plan to do that in order to increase transparency of my research. Particularly, I would like to make all the tested variables prospectively revealed in order to make the statistical modelling more valid.
Do you consider my plan correct? Would you give me any further tips on how to increase transparency of my study?
Relevant answer
Answer
Consider registering under the Open Science Framework (www.osf.io). Maybe in addition to the clinical trials site.
  • asked a question related to Survey Research
Question
6 answers
I have studied telephone interviews that are conducted for marketing or survey research (Barriball et al. 1996; Carr & Worth 2001), but in the case of semi-structured interviews, is it appropriate to conduct them through any medium (Skype, Viber, etc.) via video or audio? Or is onsite the only recommended and mandatory option?
Relevant answer
Answer
I have been conducting semi-structured interviews via phone, recording the conversation for later transcription. As long as you have an interview guide and verbatim transcript, I do not see any issue using other communication channels than a face-to-face interview.
  • asked a question related to Survey Research
Question
17 answers
I have studied telephone interviews that are conducted for marketing or survey research (Barriball et al. 1996; Carr & Worth 2001), but in the case of semi-structured interviews, is it appropriate to conduct them through any medium (Skype, Viber, etc.) via video or audio? Or is onsite the only recommended and mandatory option?
Relevant answer
Answer
Yes, you can. I know people who have done telephone and Skype interviews. However, I do not recommend email interviews, which cannot provide as rich information for analysis.
  • asked a question related to Survey Research
Question
5 answers
I am using a multi-method research design for my research, i.e. survey, experimental and correlational; it is not only survey research. My respondents are student teachers of Pune University. The population is almost 250,000. Please guide me on how large a sample I should take for the survey.
Relevant answer
Answer
If your concern is largely with the margin of error, then you should search the internet for any of a number of sample size calculators.
A more sophisticated approach, however, is to examine the "power" of a given sample size to detect an effect, in which case I would recommend you look at the G*Power calculator.
  • asked a question related to Survey Research
Question
4 answers
Please, I need responses.
Thank you.
Relevant answer
Answer
Frank -
There is no universally applicable number. To even guess what might be acceptable there is much more information that would need to be known: type of data, methodology (which may lead to many more questions), preliminary standard deviation for quantitative data, and goals. If your sample size is too small for your needs, or your plan is inadequate, then even 100% response will not do what you need to do.
Note, for example, that if you have a strictly design-based (i.e., probability of selection) quantitative sampling plan, and there is any nonresponse, then you are violating your plan. How much is too much? You could do a sensitivity study to simulate some cases, but the answer is going to be vague. However if you have a model-based approach to imputation, you may be fine, even with a good deal of imputation, as long as the sample size is not too small, as measured by variance, possibly using the estimated variance of the prediction error for the predicted total.
There are many other possible scenarios. The circumstances for your question need to be explained, preliminary information obtained, and then some rough guesses from ResearchGate respondents might be possible.
For qualitative data, I imagine the answer is going to be very vague/uncertain.
It isn't a straightforward question and answer situation.
Best wishes - Jim
  • asked a question related to Survey Research
Question
3 answers
I am conducting empirical research and wondering how to gather around 2000-3000 respondents' email IDs, considering a response rate of 10-15%.
Would empirical researchers who regularly conduct this type of research like to share their experience? It would be highly appreciated.
Relevant answer
Answer
Since your tags include supply chain management, I wonder if that is the area in which you plan to do your research?
In any case, usually if your research is specific to certain topic area, you can purchase a list, or you may be able to obtain one via a membership organization or other existing organizing / coordinating organization (because they may have privacy agreements with those who provided their email address, they may require you to send through them, and often want something in return - first access to results for their members, etc).
If you just need people, you might also try Amazon's Mechanical Turk: https://www.mturk.com/
  • asked a question related to Survey Research
Question
4 answers
Common method bias is apparent in cross-sectional survey research. Are there significant ways to minimise the common method bias in cross-sectional survey research? Please provide me with the sources.
Relevant answer
Answer
For more detailed instructions to run Harman's single factor score, you can have a look at the attached publications. Page 42 of  'Introduction to SPSS' guides you on steps to check for CMB using Harman's single factor score.
Should you encounter potential CMB (total variance for one factor is in excess of 50%), you may want to remedy the issue based on suggestions in these articles:
Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879-903. doi: 10.1037/0021-9010.88.5.879
Podsakoff, P. M., MacKenzie, S. B., & Podsakoff, N. P. (2012). Sources of method bias in social science research and recommendations on how to control it. Annual Review of Psychology, 63(1), 539-569. doi: doi:10.1146/annurev-psych-120710-100452
Hope this helps!
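A minimal sketch of the single-factor check described above, approximated here with an unrotated principal-components extraction of the item correlation matrix (a common way to operationalise Harman's test; the item data below is simulated for illustration, and in practice you would use your real standardised survey items):

```python
import numpy as np

rng = np.random.default_rng(42)
n, k = 200, 8                      # hypothetical: 200 respondents, 8 items
common = rng.normal(size=(n, 1))   # shared (method-like) component
items = 0.5 * common + rng.normal(size=(n, k))

corr = np.corrcoef(items, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)[::-1]       # sorted descending
first_factor_share = eigenvalues[0] / eigenvalues.sum()

# The concern threshold mentioned above: one factor explaining > 50%
# of the total variance would flag potential common method bias.
print(f"First factor explains {first_factor_share:.1%} of total variance")
```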
  • asked a question related to Survey Research
Question
9 answers
What is 'survey' in research? Type of research, type of research design, method of research or method of data collection?
Relevant answer
Answer
A survey is a method of data collection; it is not in itself a research design, this being how one actually goes about one's research, taking into account practical issues like resource constraints, access issues etc. Thus, research design is not methodology which is about the application of methods; some use both terms interchangeably, but I think that this can lead to muddled thinking about research.
  • asked a question related to Survey Research
Question
5 answers
Questionnaires and checklists seem to be the most widely used instruments for data collection in survey studies among undergraduate students.
Relevant answer
Answer
In-depth face-to-face interviews, and even one-on-one telephone interviews, are among the most important tools or methods for conducting qualitative research. They are labor intensive and seem to have been displaced somewhat by online and electronic approaches. But nothing really compares to the chance to interview a human subject in depth for maximizing the validity of your research. Questions such as "What do you mean?" or "Can you give me a specific example?" have real power.
  • asked a question related to Survey Research
Question
4 answers
Social desirability bias is a significant challenge in Survey-based studies. Are there any significant ways to minimise the social desirability bias in Survey research, particularly in the field of consumer behaviour and insights?
Relevant answer
Answer
Social desirability is one of the most common sources of bias affecting the validity of experimental and survey research findings. There are many methods to reduce social desirability bias, including the randomized response technique, the bogus pipeline, self‐administration of the questionnaire, and the selection of interviewers.
  • asked a question related to Survey Research
Question
10 answers
I am looking at evaluations which ask participants of a program to rate different aspects of the program from 1-5. The evaluation was sent to all participants and around 20% responded (~650 people).  I'm worried about non-response bias, but not sure how to test whether my estimates are significantly biased since my data is ordinal. Any thoughts? 
Many thanks!
Relevant answer
Answer
Non-respondent bias can be assessed by comparing responses from early and late waves, such as the first and last quarters of responses (Armstrong & Overton, 1977). You may use Multiple Group Analysis (MGA) in PLS (or SPSS) to check whether the path coefficients in all the relationships differ significantly (p-value).
  • asked a question related to Survey Research
Question
33 answers
Slovin's formula is quite popularly used in my country for determining the sample size for survey research, especially in undergraduate theses in education and the social sciences, maybe because it is easy to use and the computation is based almost solely on the population size. Slovin's formula is given as follows: n = N/(1 + Ne²), where n is the sample size, N is the population size and e is the margin of error decided by the researcher. However, its misuse is now also a popular subject of research here in my country, and students are usually discouraged from using the formula even though the reasons are not clear to them. Perhaps it would be helpful if we could know who Slovin really was and what the bases of his formula were.
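For reference, the formula as stated in the question is a one-liner (the N and e values below are hypothetical; the answers that follow discuss why the formula is often misused):

```python
# Slovin's formula as given in the question: n = N / (1 + N * e**2).
def slovin(N, e):
    return N / (1 + N * e ** 2)

# Hypothetical example: population of 10,000 with a 5% margin of error.
print(round(slovin(10000, 0.05)))  # -> 385
```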
Relevant answer
Answer
If you use statistical models (like the t-test, ANOVA, Pearson r, regression analysis, path analysis, SEM, among others) to test the hypotheses of your study, then I suggest you conduct a statistical power analysis to compute your minimum sample size. Sample size is a function of the following components: effect size, errors in decision (Type I and Type II), complexity of the statistical model, among others. Statistical power analysis, or simply power analysis, finds the optimal combination of these components. You can use the G*Power software, which is downloadable for free; just search for it on Google.
Slovin's formula has been taught by "irresponsible professors" in Philippine colleges and universities. Sorry for the strong words, but that is true. Because they don't have formal training in statistics, they taught the wrong things to their students. It is the blind leading the blind.
Anyway, you may read a published article titled "On the misuse of Slovin's formula".
I am travelling now and am just using my phone to reply to your message. If interested, just email me at johnny.amora@gmail.com and I will send you some materials, including the said article.
  • asked a question related to Survey Research
Question
14 answers
Hi, I am reading a book on research methodology and wondering whether Survey Research method is inductive or deductive. Can survey research be in the form of qualitative data?
Thank you
Relevant answer
Answer
There is nothing intrinsically deductive, or inductive, about most social research methods. They are deductive when they are used to test hypotheses derived from an existing theory and inductive when data is collected in order to develop a theory. Surveys can and are used to do both; and sometimes neither. Having fixed questions and response formats does limit the ability to use surveys inductively. Interviews and focus groups can also be used inductively or deductively , though they are more commonly used inductively. Some research approaches are inherently inductive, e.g. ethnography, grounded research; though I have seen ethnography used in an attempt to test particular, high-level theories (Structural-Functionalism, marxism, structuralism, etc.).
  • asked a question related to Survey Research
Question
8 answers
I am doing customer survey research right now and want to identify the minimum percentage at which customers can be said to be satisfied with the service and quality of a company. My guess is that it is between 70% and 90%, but I need the exact percentage. Can anyone assist me with this question? Thanks a bunch.
Relevant answer
Answer
One small note to my previous post:
when measuring customer satisfaction, it is sometimes appropriate to consider the "zone of tolerance" – the difference between the level of quality expected and the level of quality desired.
See for example:
Van Riel, A., Semeijn, J., & Janssen, W. (2003). E-service quality expectations: A case study. Total Quality Management & Business Excellence, 14(4), 437–450. doi:10.1080/1478336032000047255
  • asked a question related to Survey Research
Question
7 answers
Is it possible for the data to be valid and reliable if you use a self-made questionnaire?
Requesting references.
Relevant answer
Answer
If you create your own questions, then you need to demonstrate the reliability and validity of those items. This is definitely the case when you are creating a new multi-item scale. If so, you would begin by assessing the reliability of the scale, typically through Cronbach's alpha and often exploratory factor analysis. You start there because a measure with low reliability will, by definition, have low correlations with anything. If you do have sufficient reliability, then you can check validity by examining the pattern of correlations between your new measure and other established measures, in a process known as "construct validity."
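The Cronbach's alpha step mentioned above can be sketched in a few lines (the respondents-by-items matrix below is invented for illustration; the formula is the standard k/(k-1) × (1 − Σ item variances / total-score variance)):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents x items matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical: 5 respondents answering 3 Likert items.
data = [[4, 5, 4], [2, 3, 2], [5, 5, 4], [3, 4, 3], [1, 2, 2]]
print(round(cronbach_alpha(data), 2))  # -> 0.97
```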
  • asked a question related to Survey Research
Question
12 answers
Hello everyone, I am currently working on my thesis, where I am analyzing whether some factors have an impact on purchasing behavior. Fifteen of the questions in my survey were Likert scale (1-5) and I would like to use them as independent variables. Total money spent would be the dependent variable, and I would also like to use income, gender and some other variables as independent variables. What statistical analysis would you recommend to analyze my data? I converted my Likert scale data into factor data with 5 levels and now I am lost.
Can I use OLS or would you recommend something else?
Any help would be really appreciated
Relevant answer
Answer
Hi Daniel:
You might begin with an exploratory factor analysis. If you get a two (or more) factor solution, you need to do some serious thinking about the items that form factors regarding whether or not you want to include them as variables. This is a theoretical problem. You should not combine items that belong to different factors in the same scale.
Assuming you are now analyzing your first, or only, factor, you can use either an index or a latent variable. If you use an index, a Likert-type scale, you should look at the distribution of scores and transform your index, especially if it is a dependent variable (meaning it has one or more arrows coming into it), so that it is roughly normal. If you dichotomize all items, you can construct a Guttman-scale or cumulative-scale measure.
If you prefer to use a latent variable, use EQS, SAS, or some other structural equations modeling program to assess the fit of your items to a single latent variable. Statistics such as the root mean square residual and the goodness of fit index will give you insight into whether your items measure a single dimension. If you do this, use maximum likelihood estimation rather than least squares so you get t-tests for the significance of the item coefficients. Delete non-significant items, one by one, until all left are significant.
Good luck with your project, Best regards, Warren
  • asked a question related to Survey Research
Question
9 answers
As you are doubtless aware, the paper-based survey has been one of the most common methods for gathering data on people's behavior (either revealed preferences or stated preferences). I want to make sure how much we can rely on newer methods like Internet (web)-based surveys instead of traditional paper-based surveys. In particular, my research scope is travel behavior analysis, and my sample should cover all socioeconomic groups and almost all geographical areas in a city.
I would be happy if somebody shared with me his/her opinion or the valid references.
Thanks in advance
Relevant answer
Answer
Another problem you have to consider is respondents' willingness to participate. You need a reliable database of lead contacts, and be aware that response rates are very low, commonly around 5% to 10%, so if you need a sample of 400 subjects, for example, you have to contact at least 8000 people.
Of course, never forget the main characteristics of your sample otherwise your results will be biased.
  • asked a question related to Survey Research
Question
26 answers
Design-based classical ratio estimation uses a ratio, R, which corresponds to a regression coefficient (slope) whose estimate implicitly assumes a regression weight of 1/x.  Thus, as can be seen in Särndal, CE, Swensson, B. and Wretman, J. (1992), Model Assisted Survey Sampling, Springer-Verlag, page 254, the most efficient probability-of-selection design would be unequal probability sampling, where we would use probability proportional to the square root of x for sample selection.
So why use simple random sampling for design-based classical ratio estimation?  Is this only explained by momentum from historical use?  For certain applications, might it, under some circumstances, be more robust in some way?  This does not appear to conform to a reasonable data or variance structure.
Relevant answer
Answer
I suggest going through the book Sampling Techniques by W. G. Cochran.
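The unequal-probability design raised in the question, with selection probabilities proportional to sqrt(x), can be sketched as follows. For simplicity this uses Poisson sampling (each unit is included independently, so the realized sample size is random); real fixed-size πps designs often use schemes such as Sampford's. The size measure x is simulated:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.lognormal(mean=2.0, sigma=1.0, size=500)  # hypothetical skewed size measure
n = 50  # expected sample size

# Inclusion probabilities proportional to sqrt(x), scaled to expected size n, capped at 1
p = n * np.sqrt(x) / np.sqrt(x).sum()
p = np.minimum(p, 1.0)

# Poisson sampling: an independent Bernoulli(p_i) trial for each unit
sample = np.flatnonzero(rng.random(x.size) < p)
print("realized sample size:", sample.size)
```

Units whose capped probability reaches 1 are "certainty" units, which matches the practice of always including the largest establishments.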
  • asked a question related to Survey Research
Question
3 answers
I am conducting survey research on MTurk that involves participants reading workplace sexual harassment scenarios that include potentially distressing depictions of sexual assault/violence. I want to design the survey in Qualtrics such that participants: (1) must spend a certain amount of time reading each scenario before advancing to dependent measures and next scenario (i.e., placing timer on block) and (2) may, at any point, advance directly to the end of the study to receive MTurk completion code if they wish to terminate participation for whatever reason. Is there a way achieve both of these requirements, or will I have to abandon (1) and allow participants to advance freely to the end of the survey by skipping questions? This is more of Qualtrics survey flow logic question.
More generally: how do researchers handle mid-study termination of participation and payment in Qualtrics survey research on MTurk?
Relevant answer
Answer
I think the easiest way of providing a 'quit the survey' option is to use Skip Logic. This allows you to send respondents to the end of the survey if they choose a particular answer to a question. For example, you could have a question that asks "are you happy to continue?" in each scenario. If the respondent chooses "no" it would trigger the skip condition and they'd be taken to the end of the survey.
You could set up the survey with different exit points (if you wanted to give different codes to respondents who read, say, one, two or three scenarios) using either branch logic or display logic. Branch logic would let you send respondents to different blocks depending on which scenario they quit at. Each exit block could contain a different exit code and then lead to the end of the survey.
  • asked a question related to Survey Research
Question
8 answers
Is a survey of the existing literature mandatory in research?
Relevant answer
Answer
The need for extensive literature review in research cannot be overemphasized. What we are studying is likely to be an extension of an earlier work. No one is operating in complete isolation of others. Therefore the more review of relevant literature we do, the better we can do our research to supplement what has been done before.
  • asked a question related to Survey Research
Question
3 answers
I am currently creating a survey about communication within the feasibility assessment process of construction projects. If possible, I would like my respondents all to come from the industry so that my findings are reliable. How can I make sure I channel the survey to the correct people in the correct industry?
Relevant answer
Answer
Sharing the questionnaire with the right people, and posting it in the right groups and forums, will lead to better responses.
  • asked a question related to Survey Research
Question
3 answers
Dear colleagues,
I will be really thankful if you share your experience, research, or pre-prints on the impact of the GDPR on planning and executing surveys. How do you obtain consent for data protection? What is the respondents' reaction? What is the response rate after the GDPR? We expect some decrease, for now more visible in online surveys than in face-to-face ones.
Relevant answer
Answer
Dear Ekaterina, I enclose a copy of the Baker McKenzie report titled, GDPR National Legislation Survey issued in January 2018 and it covers a number of your queries.
  • asked a question related to Survey Research
Question
5 answers
I'm exploring innovative ways of conducting research with teens and younger children. We are looking to measure positive youth development, and normal surveys are not very engaging for this demographic, so I'm curious if anyone has explored any creative ways - e.g., gamification, turning survey into a fun quiz, using apps, etc. Looking for quantitative suggestions at this stage.
Thank you!
Relevant answer
Answer
I agree with photovoice techniques; however, they are more efficient in qualitative than in quantitative research. You can use them as an early technique for involving teens in your study, and then, once you are connected with them, you can distribute a survey for them to fill in.
  • asked a question related to Survey Research
Question
12 answers
Can I use purposive sampling in a quantitative survey research?
Relevant answer
Answer
Simply put, a probability sampling technique can be used only if the researcher has the full list of the target population (e.g., the full list of employees in the fast-food industry of London, or the full list of SMEs operating in a country), which in most cases is impossible. Otherwise, a non-probability sampling technique such as purposive sampling is appropriate to get a representative sample.
Please refer to the reference given below:
Rowley, J. (2014). Designing and using research questionnaires. Management Research Review, 37(3), 308-330.
Kindly read the reference below on the issue of generalization.
Seddon, P. B., & Scheepers, R. (2012). Towards the improved treatment of generalization of knowledge claims in IS research: drawing general conclusions from samples. European Journal of Information Systems, 21(1), 6-21.
  • asked a question related to Survey Research
Question
2 answers
In Applied Survey Sampling, by Blair and Blair, Sage Publications, 2014, on page 175, they note the common use of "credibility intervals," by researchers using nonprobability samples and Bayesian modeling. They note that the American Association for Public Opinion Research (AAPOR) issued a caution to people using these credibility intervals, as not being something the public should rely on "in the same way" as a margin of sampling error. Attached is the AAPOR statement from 2012 in which they caution heavily regarding the use of such nonprobability opinion polls, as the Bayesian models have assumptions which will be of varying quality. However, they also state that "...even the best design [probability sample] cannot compensate for serious nonparticipation by the public." Thus, for much of the data in a probability sample, we have to use models or some method to estimate for missing data, and when the nonresponse rate is high, do we really actually have a valid probability sample anymore?
Thus the emphasis would be on total survey error. There is sampling error, and then there is nonsampling error. We have nonresponse and that can make reliance on a model better overall.
If that is the case, then why do many survey statisticians insist on probability samples for highly skewed establishment surveys with continuous data, when excellent regressor data are available? Often sampling the largest few establishments will provide very high 'coverage' for any given important variable of interest. That is, most of the estimated total would already be observed. The remainder might be considered as if it were missing data from a census. But these missing data, if generally close to the origin in a scatterplot of y vs regressor x, should have little prediction error considering the heteroscedastic nature of such data. With the relatively high measurement error often experienced with small establishments, long experience with energy data has shown one will often predict values for y for small x more accurately than one could observe such small y's. Further, this is done using the econometric concept of a "variance of a prediction error," and Bayesian model assumptions are not introduced.
It is important not to lump nonprobability sampling having good regressor data with other nonprobability sampling. For official statistics, an agency will often collect census surveys, and have more frequently collected samples of the same variables of interest (the same attributes), or a subset of them. Often the best regressor data for the samples are from such a census.
Finally, many years of use in publishing official statistics on energy data have shown this methodology - far less radical than the use of "credibility intervals" for polling - to have performed very well for establishment surveys. It does not appear reasonable to argue with such massive and long-term success. The second link attached is a short paper on the use of this methodology. The third is a study of the variance and bias, and the fourth shows a simple example, using real data, as to how effective a quasi-cutoff sample with a model-based classical ratio estimator can be.
Since the 1940s, for good reason in many cases, probability sampling has been the "gold standard" for most sample surveys. However, models are used heavily in other areas of statistics, and even for survey statistics "model-assisted design-based" methods have unquestionably often greatly improved probability sample results. But strictly model-based sampling and estimation does have a niche in establishment surveys, though it has met with resistance. One should not dismiss this without trying it. It especially seems rather odd if anyone were to consider Bayesian "credibility intervals" for election polls, but not quasi-cutoff sampling with model-based estimation, as defined by going through the second link below.
Your comments?
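The quasi-cutoff design with a model-based classical ratio estimator described above can be illustrated on simulated establishment data. The population, slope, and heteroscedastic error structure below are invented for illustration; the estimator itself is the standard ratio form R̂ = Σy/Σx over the observed (largest) units, with the total estimated as R̂ times the known regressor total:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical skewed establishment population:
# x = regressor (e.g., prior-census value), y = current variable of interest
x = rng.lognormal(3.0, 1.2, size=300)
y = 1.1 * x + rng.normal(0, 0.3 * np.sqrt(x))  # errors with variance proportional to x

# Quasi-cutoff sample: y is observed only for the largest establishments
cutoff = np.quantile(x, 0.80)
s = x >= cutoff

# Classical ratio estimator from the observed units, applied to the full regressor total
R = y[s].sum() / x[s].sum()
total_hat = R * x.sum()

coverage = x[s].sum() / x.sum()  # share of the regressor total that is directly observed
print(f"coverage = {coverage:.1%}, estimated total = {total_hat:.0f}, true total = {y.sum():.0f}")
```

Because the large establishments dominate the total and the small, unobserved units lie near the origin with little prediction error, the estimate tracks the true total closely, which is the behavior the question describes for energy establishment surveys.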
  • asked a question related to Survey Research
Question
6 answers
My topic is the relationship between transformational leadership style and affective organizational commitment: a study based on functional-level employees.
Also, for the questionnaire items that measure transformational leadership behaviour, can I write "My supervisor communicates a convincing vision of the future," or should I replace "my supervisor" with "my leader"?
Relevant answer
Answer
Hello,
You can certainly use purposive sampling in quantitative research, but doing so makes your research quasi-experimental, because it violates one of the basic requirements of experimental research: the use of random sampling.
Best regards,
R. Biria
  • asked a question related to Survey Research
Question
9 answers
Can I use purposive sampling in a quantitative survey research?
Relevant answer
Answer
For example, suppose your study concerns IT implementation in an organisation and you want to know whether employees accept the implementation. You would then give the survey to IT staff only, not to all staff working in finance, administration, etc.
Selecting only the IT staff in this way is purposive sampling.
Good luck
  • asked a question related to Survey Research
Question
7 answers
I am currently working on my master's thesis on sustainable marketing in the construction industry at Istanbul Technical University. My research aims to introduce marketing as a tool for sustainable construction business development and for the creation of a sustainable built environment. The survey should take only 10 minutes to complete. All responses will remain anonymous.
Please find below the link to the survey:
Thanks in advance for your support
Relevant answer
Answer
Answered and Sent!
  • asked a question related to Survey Research
Question
5 answers
Hi, statistics experts,
My sample size is 16 respondents in a survey study. To measure correlation, what type of analysis is most suitable?
Relevant answer
Answer
Hi Nima. It really depends on the kind of data you have. For a sample size of 16, I would use chi-square if my data were nominal; chi-square/cross-tabulation or gamma if my data were ordinal; and Spearman correlation if my data were interval. All these tests are readily available in statistics books. I hope this helps :)
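For illustration, here is a minimal sketch of two of the suggested tests using scipy, on made-up data for 16 respondents (the ratings and the contingency table below are invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical paired ratings from 16 respondents
a = rng.integers(1, 6, 16)
b = a + rng.integers(-1, 2, 16)  # b depends on a, so we expect a positive correlation

# Spearman rank correlation: a robust choice for small samples and ordinal/interval data
rho, pval = stats.spearmanr(a, b)
print(f"Spearman rho = {rho:.2f}, p = {pval:.3f}")

# Chi-square test of independence for nominal data
# (with expected cell counts this small, Fisher's exact test is usually preferred)
table = np.array([[6, 2], [3, 5]])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}, dof = {dof}")
```

With n = 16, any p-value should be interpreted cautiously; the power of these tests is low at this sample size.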
  • asked a question related to Survey Research
Question
5 answers
I have seen several references to "impure heteroscedasticity" online as heteroscedasticity caused by omitted variable bias.  However, I once saw an Internet reference, as I recall, which reminds me of a phenomenon where data that should be modeled separately are modeled together, causing an appearance of increased heteroscedasticity.  I think there was a youtube video.  That seems like another example of "impure" heteroscedasticity to me. Think of a simple linear regression, say with zero intercept, where the slope, b, for one group/subpopulation/population is slightly larger than another, but those two populations are erroneously modeled together, with a compromise b. The increase in variance of y for larger cases of x would be at least partially due to this modeling problem.  (I'm not sure that "model specification error" covers this case where one model is used instead of the two - or more - models needed.)
I have not found that reference online again.  Has anyone seen it? 
I am interested in any reference to heteroscedasticity mimicry.  I'd like to include such a reference in the background/introduction to a paper on analysis of heteroscedasticity which, in contrast, is only from the error structure for an appropriate model, with attention to unequal 'size' members of a population.  This would then delineate what my paper is about, in contrast to 'heteroscedasticity' caused by other factors. 
Thank you. 
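The mimicry described in this question is easy to simulate: two subpopulations that are each homoscedastic, but with slightly different slopes, pooled under a single zero-intercept fit, produce residuals whose spread grows with x. All of the values below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(1, 10, 400)
group = rng.integers(0, 2, 400)
slopes = np.where(group == 0, 1.0, 1.3)   # two subpopulations, slightly different slopes
y = slopes * x + rng.normal(0, 0.5, 400)  # constant error variance within each group

# Pooled zero-intercept OLS fit: b = sum(xy) / sum(x^2), a compromise slope
b = (x * y).sum() / (x * x).sum()
resid = y - b * x

# The residuals contain a (slope_i - b) * x component, so their spread grows with x
lo, hi = x < 5.5, x >= 5.5
print("residual SD, small x:", resid[lo].std(), " large x:", resid[hi].std())
```

Fitting the two groups separately removes this apparent heteroscedasticity, which is why it is "impure": it comes from the modeling error, not from the error structure of a correctly specified model.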
Relevant answer
Answer
Multilevel random coefficient models are particularly suitable when between-group variances are heteroscedastic like in the example you mention. There are plenty of good references on these models in the statistical literature.
  • asked a question related to Survey Research
Question
1 answer
I'd like to know what insights were generated and what the implications were for research or commercial outfits.
Relevant answer
Answer
Chapter 11
The Acceptance, Challenges, and Future: Applications of Wearable Technology and Virtual Reality to Support People with Autism Spectrum Disorders
Nigel Newbutt, Connie Sung, Hung Jen Kuo and Michael J. Leahy
In: Recent Advances in Technologies for Inclusive Well-Being (2017)
Springer International Publishing AG
  • asked a question related to Survey Research
Question
5 answers
I have conducted pre-test surveys of a few hundred people through non-probability sampling. Can I now create a sampling frame from the details of those pre-test respondents and then select a random sample for my final survey from it?
If so, what would that sampling technique be called? Would it be probability sampling? Also, can we generalise the results to the health-food population in my city (the target population), or only to the sampling frame? Please guide me, as this would be very useful. Thanks.
Relevant answer
Answer
Thanks Gier
  • asked a question related to Survey Research
Question
6 answers
I am looking for ways to address validity and reliability of the data collected with this technique.
Relevant answer
Answer
The most effective approaches to collecting unobtrusive data use systematic protocols that allow you to count things. Note that this is quite different from things like ethnography or participant observation, where your presence in the setting is an "obtrusive" aspect of the research.
  • asked a question related to Survey Research
Question
1 answer
Stack Overflow advertises several official (https://stackoverflow.com/help/how-to-ask) and de facto standard guidelines (http://tinyurl.com/stack-hints) for writing good questions -- that is, questions that have greater chances to be resolved.
We built a statistical model to empirically validate this question-writing netiquette. The last step we're missing in our research is: What does the SO community think of these recommendations?
If you ever used Stack Overflow before, please help us find out by taking this very short survey: https://goo.gl/EzS3eN
Thanks for your contribution! 
EFFORT
The time to completion for the survey is about 5 minutes only (before June 21, 2017).
INTENT
This is a purely academic research project with no commercial interests. We will openly publish the results so everyone can benefit from them, but will anonymize everything before doing so; your responses will be handled confidentially. Please, note that you are not obligated to participate in the survey. If at some point during the survey you want to stop, you are free to do so without any negative consequences. Incomplete survey data will not be used.
A NOTE ON PRIVACY
This survey is anonymous. The record of your survey responses does not contain any identifying information about you, unless a specific survey question explicitly asked for it.
Relevant answer
Answer
The survey is no longer accepting responses.
  • asked a question related to Survey Research
Question
19 answers
If I collect survey data for only 10% of my sample and randomly generate the remaining 90% with an application (based on the 10%), will this work? I am in the IS discipline.
I think many people simulate things in other domains too.
Relevant answer