Questions related to Survey Design
I am writing a research paper using a quantitative descriptive survey design, which means I report standard deviations in my results section. However, I find the relationship between the standard deviation and the mean score confusing. I analyse my mean scores, but what about my standard deviations? Should I simply report them alongside the means, or interpret them as well?
Thank you for your time.
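As a general reporting convention, the standard deviation is given alongside the mean (e.g. "M = 3.80, SD = 0.92"), since it tells the reader how spread out responses were around that mean. A minimal sketch of computing both for one item, using made-up responses:

```python
import statistics

# Hypothetical responses to one 5-point Likert item
responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 4]

mean = statistics.mean(responses)   # central tendency
sd = statistics.stdev(responses)    # sample standard deviation (n - 1 denominator)

# APA-style reporting string: mean and SD are reported together
report = f"M = {mean:.2f}, SD = {sd:.2f}"
```

A small SD indicates that respondents clustered around the mean; a large SD indicates more disagreement, which is itself worth a sentence of interpretation in the results section.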
If the original scale has one reverse-coded item and I want to adopt the scale in my study, is it possible to reword that item positively (un-reverse it) and use all the other items in the same order?
Is there any way to obtain a correlation coefficient for two categorical variables, whether ordinal or nominal (for example, level of education and ethnicity)? I have checked their independence with a chi-square test, and now the strength of the association is important to me.
I have heard about Cramér's V, but my problem is that Stata doesn't seem to provide it under the survey design settings (the svy prefix). Any suggestions?
Thank you very much for your help.
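For readers facing the same issue: Cramér's V is derived directly from the chi-square statistic, so it can always be computed by hand from a contingency table. The sketch below shows the unweighted computation on a hypothetical 2x2 table; note that under a complex survey design the chi-square itself should be the design-adjusted one, which this sketch does not attempt.

```python
import math

def cramers_v(table):
    """Cramér's V for an r x c contingency table (list of rows of counts):
    V = sqrt(chi2 / (n * (min(r, c) - 1)))."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / n   # expected count under independence
            chi2 += (obs - exp) ** 2 / exp
    k = min(len(table), len(table[0]))          # smaller table dimension
    return math.sqrt(chi2 / (n * (k - 1)))

# Hypothetical 2x2 education-by-ethnicity table
v = cramers_v([[30, 10], [10, 30]])
```

In unweighted Stata, `tabulate var1 var2, V` reports the same quantity; feeding a design-based chi-square into the formula is one possible workaround for the svy case.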
I'm trying to analyze the teleworkers' labor market before and during the pandemic crisis, but there aren't any specific surveys designed to measure this kind of worker. Does anyone know of work that has managed to identify teleworkers as individual cases in a survey (rather than as a proportion of specific sectors or activities, as Dingel and Neiman or Monroy-Gómez-Franco have already done)?
Thanks in advance.
I am going to conduct a survey among experts working on power plant construction projects in my country. As far as I know, 30 mega-projects are currently under construction. The targeted experts fall into seven categories: contractors, sub-contractors, vendors, project directors (PD), project managers (PM), site engineers, and consulting engineers (consultants). The other variables are project size in terms of power generation capacity, budget, project location, experts' experience (years), and academic qualification. Please suggest a precise way to design this survey, to save me time; I am at a critical moment, with a presentation due in a couple of weeks. Thank you for your patience and time.
Dear academy colleagues,
I'm looking for a truly comprehensive resource for teaching graduate students the elements of conducting robust survey research, including proper survey development, validation, distribution, confidentiality, data security, collation, statistical analysis, interpretation, and sense-making. I've seen elements of these covered in resources here and there, but never all in one place, and usually not written in a way that is accessible to a graduate student.
Do you have recommendations about a single resource or a progression of resources that really help a student get from zero to fairly strong (obviously with practice and some mentoring)?
I was studying the following paper: Park, D.H., Lee, J. and Han, I., 2007. The effect of on-line consumer reviews on consumer purchasing intention: The moderating role of involvement. International journal of electronic commerce, 11(4), pp.125-148.
My query is that the first 3 items in the scale used to measure attitude in this paper are clearly about positive attitude, while the next 3 items seem to be about negative attitude. Is my understanding correct? If so, are these items (#4 to #6) reverse-coded?
I'm asking because Google Forms doesn't allow multiple items in a semantic differential format.
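If items #4 to #6 are indeed negatively worded, they would normally be reverse-coded before scoring, so that high values mean the same thing across all items. A minimal sketch of the standard recoding formula (new = max + min - old), with hypothetical responses on a 1-7 scale:

```python
def reverse_code(score, scale_max, scale_min=1):
    """Reverse-code a Likert response: on a 1-7 scale, 7 -> 1, 6 -> 2, and so on."""
    return scale_max + scale_min - score

# Hypothetical responses to one negatively worded item on a 1-7 scale
raw = [7, 6, 2, 5]
recoded = [reverse_code(x, 7) for x in raw]
```

After recoding, the item should correlate positively with the positively worded items; if it does not, the item may not be a simple reversal at all.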
I'm a bit new to these aspects of survey design and analysis. What should I read and what are some approaches to the following situation and question?
- We have a population of interest based on an affiliation, certain actions, or a set of ideas (e.g., 'vegetarians' or 'tea-party conservatives')... call it the "Movement"
- There has never been a national representative survey nor a complete enumeration of this group. There is no 'gold standard'
- For several years we've advertised a survey (with a donation reward) in several outlets (web pages, forums, listserves which we call 'referrers') associated with the 'movement'
- We can track responses from each referrer. We suspect some referrers are more broadly representative of the movement as a whole than others, but of course there is no gold standard.
This is essentially a 'convenience sample', perhaps more specifically a 'river sample' (in the terminology of Baker et al., 2013) or 'opt-in web-based sample'. It is probably non-representative because of:
- Exclusion/coverage bias: Some members of the movement will not be aware of the survey (they don't visit any of the outlets or they don't notice it)
- Participation/non-response bias: Among those aware (through visiting the 'referrers') only a smallish share complete the survey (and these likely tend to be the more motivated and time rich individuals). Some outlets/referrers may also promote the survey more prominently than others.
We wish to measure:
- The (changing) demographics (and size) of the movement
- Measures of the demographics, beliefs, behavior, and attitudes of people in the movement (and how these have changed from year to year)
Our methodological questions
Analysis: Are there any approaches better than reporting the unweighted raw results (e.g., weighting, or cross-validating against something) for using this convenience/river sample to either:
i. obtain results (either levels or changes) likely to be more representative of the movement as a whole than our unweighted raw measures of the responses in each year, or
ii. obtain measures of the extent to which our reports are likely to be biased, perhaps bounds on this bias.
Survey design: In designing future years' surveys, is there a better approach?
Brainstorming some responses...
- E.g., as we can separately measure demographics (as well as stated beliefs/attitudes) for respondents from each referrer, we could consider testing the sensitivity of the results to how we weight responses from each referrer.
- Or we might consider using the demographics derived from some weighted estimate of surveys in all previous years to re-weight the survey data in the present year to be "more representative."
- As noted, we subjectively think that some referrers are more representative than others, so maybe we can do something with this using Bayesian tools
- We may have some measures of the demographics of participants on some of the referrers, which might be used to consider weighting to deal with differential non-response
- Would 'probability sampling' within each outlet (randomly choosing a small share within each to actively recruit/incentivize, perhaps stratifying within each outlet if the outlet itself provides us demographics) somehow be likely to lead to a more representative sample?
It's not immediately obvious to me why this would improve things. The non-response within probability samples would seem to be an approximately equivalent problem to the limited participation rate in the convenience sample. The possible advantages I see would be:
i. We could offer somewhat-stronger incentives for the probability sample, and perhaps reduce this non-response/non-participation rate and consequent biases.
ii. If we can connect to an independent measure of participant demographics from the outlets themselves, this might allow us to get a better measure of the differential rates of non-participation by different demographics, and adjust for it.
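One way to operationalize the re-weighting ideas above is raking (iterative proportional fitting): adjust cell weights until the weighted sample matches assumed population margins on chosen variables. The sketch below uses hypothetical counts and hypothetical target margins; in our setting the targets themselves would have to come from some defensible external estimate, which is exactly the missing gold standard.

```python
def rake(sample_counts, target_row, target_col, iters=100):
    """Iterative proportional fitting: scale cell weights so the weighted
    table matches target row and column margins (raking)."""
    r, c = len(target_row), len(target_col)
    w = [[1.0] * c for _ in range(r)]   # start with unit weight per cell
    for _ in range(iters):
        for i in range(r):              # match row margins
            tot = sum(w[i][j] * sample_counts[i][j] for j in range(c))
            for j in range(c):
                w[i][j] *= target_row[i] / tot
        for j in range(c):              # match column margins
            tot = sum(w[i][j] * sample_counts[i][j] for i in range(r))
            for i in range(r):
                w[i][j] *= target_col[j] / tot
    return w

# Hypothetical: 2 age groups x 2 referrers, sample skewed toward one cell;
# targets assume a 50/50 split on each margin
counts = [[40, 10], [30, 20]]
weights = rake(counts, target_row=[0.5, 0.5], target_col=[0.5, 0.5])
weighted_row0 = sum(weights[0][j] * counts[0][j] for j in range(2))
```

Raking only corrects for the variables raked on; unmeasured selection (the motivated, time-rich respondents) remains untouched, which is why the bounds question in ii. still matters.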
Some references (what else should I read?)
Baker, R., Brick, J.M., Bates, N.A., Battaglia, M., Couper, M.P., Dever, J.A., Gile, K.J., Tourangeau, R., 2013. Summary report of the AAPOR task force on non-probability sampling. Journal of survey statistics and methodology 1, 90–143.
Salganik, M.J., Heckathorn, D.D., 2004. Sampling and estimation in hidden populations using respondent-driven sampling. Sociological methodology 34, 193–240.
Schwarcz, S., Spindler, H., Scheer, S., Valleroy, L., Lansky, A., 2007. Assessing Representativeness of Sampling Methods for Reaching Men Who Have Sex with Men: A Direct Comparison of Results Obtained from Convenience and Probability Samples. AIDS Behav 11, 596. https://doi.org/10.1007/s10461-007-9232-9
For example: Would using a 7 point scale of 0 - 3 in steps of 0.5 give you different results to using a 7 point scale of 0 - 6 in steps of 1? I'm aware that verbal labels are likely to be better but I'm interested in the possible differences between purely numerical scales that use 0.5 or 1 increments.
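One purely numerical observation: the 0-3 (step 0.5) scale is an exact linear rescaling of the 0-6 (step 1) scale, so any statistic that is invariant to linear transforms (correlations, standardized effect sizes) is identical by construction; only means and SDs rescale. Any empirical difference between the two formats would therefore be psychological (anchoring, perceived precision), not arithmetic. A quick demonstration:

```python
import statistics

# The half-point scale is the whole-point scale divided by two
whole = [0, 1, 2, 3, 4, 5, 6]
half = [x / 2 for x in whole]

# Linear transforms rescale the mean and SD by the same factor
ratio_sd = statistics.stdev(half) / statistics.stdev(whole)
ratio_mean = statistics.mean(half) / statistics.mean(whole)
```

So any study of the two formats is really a study of how respondents perceive and use the labels, not of the numbers themselves.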
Does anyone know a good software that you can use to develop psychological experiments that are run on a smartphone? Thank you!
Anderson et al. (2017) describe a method for minimising the limitations of PERMANOVA when data are heteroscedastic and designs are unbalanced. I'm trying to compare community composition between populations in R using the 'adonis' function, but I don't know what to do given that the structure of my data and survey design doesn't meet the assumptions. I don't know whether this method is available in PRIMER 7, but ideally I would like to apply it in R.
Anderson et al., 2017. Some solutions to the multivariate Behrens–Fisher problem for dissimilarity-based analyses. Aust. N. Z. J. Stat. 59(1), 57–79.
Hi. I am trying to help provide some rough advice for selecting amongst alternative sampling designs to measure bird density or abundance. The surveys are short-term in duration, such as for impact assessments and habitat association studies. In this instance, I am looking at rough guidelines for how to select where to sample (survey design), not how to sample (survey method). Attached is a rough first draft of a decision tree (see attachment) for selecting between various forms of study design, ranging from conducting a census to spatially balanced sampling and stratified random designs. Do you have thoughts and suggestions? Is neglecting to include simple random sampling a fatal flaw, for example? If so, where would you place it? Of course, the main advice will be to consult a biostatistician, but hopefully this (with some accompanying text and references) can provide some rough guidance and be a starting point for that conversation. Literature, debate, and suggestions are welcome and appreciated.
Dear esteemed colleagues,
I would love to hear your thoughts or opinions on the following statements posted. Thank you very much in advance
1) Launching a research study to study why doing something has now become a cultural phenomenon.
2) Launching a research study to study whether doing something has now become a cultural phenomenon.
My train of thought is that statement 1 presupposes, based on extant literature, that the cultural phenomenon exists and asks why this is the case, whereas statement 2 asks whether it exists at all (yes or no).
My team and I have developed a prototype of an augmented reality mobile application for teaching primary school students human anatomy. We are going to do usability testing and evaluation with the primary school students using the Fun Toolkit, and we are also going to conduct an expert review using heuristic evaluation and a cognitive walkthrough. Furthermore, we also want the teachers to test the app and to evaluate its usability in the context of their students' usage. However, the teachers are neither usability experts nor end users, so what is the most appropriate method for them regarding usability testing, survey design, etc.? Do you have any recommendations for usability testing methods, survey designs/templates, etc.?
We invite you to take a short survey about key concepts used in the field of speciation research. Through a series of questions, this survey is designed to capture people’s thoughts about concepts that are central to speciation research. We plan to use the answers to understand why researchers think about speciation differently, and to help people understand one another’s perspectives.
The survey is anonymous, and open to anyone that would like to take it, though we are primarily targeting researchers that study speciation. The summarized results may be circulated and published and answers will ultimately be available to anyone that wants to use them. The country of origin will only be used to measure the spread of the survey. Please circulate the link to anyone that you think may be interested in taking part!
The survey was written by Dr. Sean Stankowski, a research scientist from the department of Animal and Plant Sciences at the University of Sheffield (https://www.sheffield.ac.uk/aps), and Dr. Mark Ravinet, a research scientist from the Centre for Ecological and Evolutionary Synthesis (https://www.mn.uio.no/cees/english/). The survey has received ethical approval from the Department of Animal and Plant Sciences, University of Sheffield. In the event of any concern or complaint about this survey, please contact the Head of Department, of Animal and Plant Sciences, University of Sheffield. You can take the survey here:
Please forward it to any colleagues or students that you think would like to take part!
If you have any general questions about the survey, please email: email@example.com and/or firstname.lastname@example.org
Sean Stankowski and Mark Ravinet
Hey guys, the research question of my master's thesis is "Effects of age on attitude towards information disclosure on e-commerce platforms". To test my hypotheses, I gathered data through a survey. It's a self-constructed survey with certain limitations, and I now realize I should have tested more of the items mentioned in the literature review for certain variables, which I didn't.
I plan to mention this in the limitations section, but I'm not sure what valid reason to give for not testing certain items in contrast to the ones I did test in the survey. Please advise. Thanks in advance.
While developing a questionnaire to measure several personality traits in a somewhat unconventional way, I am facing a dilemma due to the size of my item pool. The questionnaire contains 240 items, theoretically deduced from 24 scales. Although 240 items isn't a "large item pool" per se, the processing time averages about 25 seconds per item. This yields an overall completion time of over 1.5 hours, way too much even for the bravest participants!
In short, this results in a presumably common dilemma: which aspects of the data from my item-analysis sample do I have to sacrifice?
- Splitting the questionnaire into parallel tests will reduce processing time, but hinder factor analyses.
- Splitting the questionnaire into within-subject parallel tests over time will require unfeasible sample sizes due to a) drop-out rates and b) eventual noise generated by possibly low stability over time.
- An average processing time of over 30 minutes will tire participants and jeopardize data quality in general.
- Randomizing the item order and tolerating the >1.5 hours of processing time will again require an unfeasible sample size, due to lower item-intercorrelations.
I'm aware that this probably has to be tackled by conducting multiple studies, but that doesn't solve most of the described problems.
This must be a very common practical obstacle and I am curious to know how other social scientists tackle it. Maybe there is even some best-practice advice?
I have two different advertisements and two dependent variables, which are the same - 'how positive or negative is your attitude toward the advertisement?'. One ad is a 'femvertisement' (feminist advertisement) and I plan on doing an ordinal regression with 3 models.
Model 1 independent variables: interest, knowledge, etc. to determine how individuals process the ads
Model 2: processing variables and feminist attitudes
Model 3: processing variables, feminist attitudes, and socio-demographics
I'm wondering whether, from the two regressions, I will be able to infer how different processing levels (as inferred from interest, knowledge, etc.) will affect attitudes toward the advertisements. Is there an analysis I can do to compare the effects, or can I explain this with the two regressions?
Thanks in advance.
I am conducting a survey of faculty members at the universities in my country. My question is: since I don't know the exact number of teachers in my country, how can I calculate the sample size?
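A common fallback when the population size is unknown (or effectively very large) is Cochran's formula for an infinite population, with p = 0.5 as the most conservative assumption about the population proportion. A sketch, assuming a 95% confidence level and a ±5% margin of error:

```python
import math

def cochran_n(z=1.96, p=0.5, e=0.05):
    """Cochran's sample size for an unknown (effectively infinite) population:
    n = z^2 * p * (1 - p) / e^2, with p = 0.5 as the most conservative guess."""
    return math.ceil(z * z * p * (1 - p) / (e * e))

n = cochran_n()   # 95% confidence, +/- 5% margin of error
```

If a reasonable upper bound N on the number of teachers later becomes available, the finite population correction n / (1 + n / N) can shrink this figure.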
I was wondering what is your opinion on converting data (themes and categories) into a survey in a mixed methods research design (exploratory sequential MMR). Any instructions or articles about this? Do you follow survey design instructions or is there something more in MMR?
I am designing a study looking at mental health outcomes of a population that had a certain exposure a year ago. While modeling the growth curve of the mental health outcome as related to the degree of initial exposure, I would also like to model the influences of some external factors that evolve/ take place in the next few years on the health outcome. So my study involves a panel survey to be administered a few times in the next few years, with an additional questionnaire for the 1st round that requires participants to recall their initial exposure. Should I call this a combined retrospective and prospective study? Can anyone recommend some texts that define/ describe/ comment on similar research designs (survey design, methodology, or empirical examples)? Thank you very much!
I know of some research into priming effects, but usually these are presented to demonstrate that context changes/ distorts attitudes. I am wondering whether there is any research suggesting that asking students to recall an example first (i.e., before asking their belief/opinion) is good survey design practice because it ensures that the construct being probed is salient to them?
Thanks to everyone in advance for any assistance with this conundrum.
I recently completed the defense of my dissertation proposal and was asked by my committee to shift from an email-based survey to a web-based format. At first, this seemed straightforward and a good way to collect and manage the data from my responses.
However, upon closer inspection, I am running into a challenge I had not encountered in my original design.
I plan on administering a 5-question survey to all business (and related) faculty in Ohio community colleges to measure the degree of curriculum internationalization in the courses they develop and/or instruct. The first two questions are simply requests for job titles and contact information (textbox entries). Question 3 asks each respondent to list every business-related course that they have taught at the institution in the past 5 years (again, a textbox seems logical here).
Question 4 then asks each respondent a series of four Likert-scale questions based on each course they identified in Question 3. While this seems simple enough (and probably would be easy for someone to complete in a simple Word document--as I had originally planned), I'm struggling to figure out how to incorporate this type of question on any of the major web-based survey websites.
Question 5 is similar to question 4 as it then asks the respondent to identify for how long each course has been utilized/deployed with the degree of internationalized curriculum identified in question 4.
My main question is how do I create a tool on any web-based survey platform that would allow respondents to volunteer course numbers/titles that would later have follow-up questions about each specific course? I've dabbled with Google Forms, SurveyMonkey, and Sogosurvey without much success as I keep running into this issue.
I had considered using a comprehensive dropdown list of EVERY business-related course offered at EVERY institution to allow self-selection and logical page progression, but that doesn't seem to resolve the issue and just makes the survey look even more convoluted. The same would be true if I created a dropdown list on page 1 for the respondent to choose the specific institution, the list of courses on page 2, and so on.
I am probably overthinking this entire process, but I can't seem to think of how to structure this survey using the web-based tools that are out there and wonder if simply asking respondents to just type out a word document with answers to the questions would be simpler and result in just as many responses.
I have a very large survey dataset and am writing two distinctively different papers from the same dataset. One more focused on policy implications and the other a more traditional academic journal article. My question is how should I handle the methods section? For the most part, the methods section is the exact same, for obvious reasons. The statistical analyses differ. What do I do about the methods that are the same--survey design, implementation, etc? I have already written a descriptive methods section and would just like to use verbatim what I wrote for the second paper but then would I have copyright issues? Is it okay to just refer my audience for the second paper to the first paper for most of the methods? Thanks for your feedback!
I am looking for a validated survey tool to track changes in participants' notions of power and inequality in society. Rather than reinventing the wheel, I am wondering if anyone has, or has used one that I may be able to draw from in an upcoming study.
I am currently trying to design a survey based small scale experiment for my dissertation. I have chosen behavioral economics and want to base my paper on prospect theory and heuristics but am struggling with the questions for my survey.
I wanted to first test for the obvious things such as people reacted more drastically to negatively 'framed' situations.
But then also wanted to test whether the object under uncertainty has an effect on someone's riskiness such as comparing the same situation but using money and then human lives and seeing if their response is the same or different.
Can anyone offer any help?
As you are doubtless aware, the paper-based survey has long been one of the most common methods for gathering data on people's behavior (either revealed or stated preferences). I want to know how much we can rely on newer methods such as Internet (web-based) surveys instead of traditional paper-based surveys. In particular, my research concerns travel behavior analysis, and my sample should cover all socioeconomic groups and almost all geographical areas of a city.
I would be happy if somebody shared their opinion or valid references with me.
Thanks in advance
We are conducting a randomized study on the effect of a pterygopalatine fossa block (PPFB) during endoscopic sinus surgery on intraoperative bleeding. The number of endoscopic sinus surgeries performed the previous year was 86. What would be a valid sample size for this study?
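The valid sample size comes from a power analysis, not from last year's caseload: you need an assumed effect size (the expected difference in blood loss between the block and no-block arms, in SD units), an alpha level, and a target power. A rough normal-approximation sketch for two equal arms, using a hypothetical medium effect size of d = 0.5:

```python
import math

def n_per_group(effect_size, z_alpha=1.96, z_beta=0.84):
    """Approximate per-group n for comparing two means:
    n = 2 * (z_alpha + z_beta)^2 / d^2, where d is the standardized effect.
    Default z values correspond to two-sided alpha = 0.05 and 80% power."""
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

n = n_per_group(0.5)   # hypothetical medium effect on intraoperative blood loss
```

The 86 surgeries per year then only tell you how long recruitment would take, not whether the study is adequately powered.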
I am planning to include smallholder farmers' risk behaviour and time preferences as factors that affect their decision to adopt long-term improved soil and water conservation practices in Ethiopia. For this I need to construct simple questions that farmers can easily understand. If you have any ideas, please forward them.
A few days ago, I received the following comment from one of the reviewers of a manuscript I submitted:
The authors' data source for their dependent variable, mentally unhealthy days, is the Behavioral Risk Factor Surveillance System (BRFSS) based on a complex sample survey design. Yet, the models the authors use (Ordinary Least Square- OLS) and their analyses do not take this survey design into account; specifically, their analyses do not account for the positive correlation among respondents in their replies to interviewers within State strata and primary sampling units. The variability of the authors' estimates (for example, regression coefficients of associations) are probably underestimated.
Does anyone know how to take the survey design into account? Honestly, I don't have a clue how to address the reviewer's comment. I used SPSS to run the model on the BRFSS data. Please advise; any useful comments and tips (videos or references) will be greatly appreciated. Thanks in advance!
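The standard fixes are design-based estimation: Stata's svyset/svy prefix, the R survey package's svydesign(), or SPSS's Complex Samples module (base SPSS regression ignores strata, PSUs, and weights, which is exactly the reviewer's point). To see why ignoring clustering understates variance, Kish's design effect is a useful back-of-envelope check; the sketch below uses hypothetical values for the average cluster size and the intraclass correlation:

```python
import math

def design_effect(cluster_size, icc):
    """Kish's design effect for cluster sampling: deff = 1 + (m - 1) * ICC.
    Naive (SRS-style) standard errors should be inflated by sqrt(deff)."""
    return 1 + (cluster_size - 1) * icc

# Hypothetical values: 20 respondents per PSU, within-PSU correlation 0.05
deff = design_effect(cluster_size=20, icc=0.05)
se_inflation = math.sqrt(deff)   # multiply naive SEs by roughly this factor
```

This is only a diagnostic; the actual remedy is to re-run the models with the design variables (state strata, PSUs, final weights) declared to survey-aware software.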
We are designing a survey for youths, but we will include children who are 12-14 years old. The classic measures of time preferences and risk will be too complex for children. Likewise, any recommendations on how such questions are generally applied in survey experiments would be perfect.
I am carrying out research to better understand candidates' experiences of doing the IELTS test. I am seeking to measure the construct of test-takers' positivity towards their experiences of taking IELTS using Likert Scales and some open-ended comments. I have designed a draft questionnaire and am seeking to enlist the help of anyone with expertise in survey design, language testing, or the IELTS test to provide feedback on the content and construct validity of the survey.
If you can help, feel free to get in touch with me at my email address wsp202 'at' exeter.ac.uk
While working with a version of the IMS (Investment Model Scale) translated into Lithuanian, I'm getting a sub-par Cronbach's alpha of 0.629. The highest increase with any single item deleted only brings it up to 0.666.
However, when scoring each of the 4 subscales separately, I get good reliability:
I'm sure there's something fundamental I don't understand here, but at this point I'm clueless as to what to do to increase the reliability of the questionnaire. Any help would be appreciated.
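For context on the pattern described: Cronbach's alpha assumes the items form a single dimension, so a multidimensional instrument like the IMS can show good subscale alphas but a mediocre total-scale alpha; reporting alpha per subscale is usually the appropriate response rather than trying to force the total up. A minimal sketch of the computation itself, on hypothetical data (one list of scores per item, respondents in the same order):

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / total variance),
    where items is a list of columns (one list of scores per item)."""
    k = len(items)
    item_vars = sum(statistics.variance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]   # per-respondent sum score
    total_var = statistics.variance(totals)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 3-item scale, 4 respondents
alpha = cronbach_alpha([[4, 5, 2, 3], [4, 4, 2, 3], [5, 5, 1, 3]])
```

Running this per subscale, as you already did, is the standard practice when the subscales measure distinct constructs.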
My mixed methods study is on the impact of feedback in the writing process, and I have three questions (listed below). To address the first question, a repeated measures design was used to determine which form of feedback has the most impact. A survey design was used for the second question, and a discourse analysis will be used to analyze the language for similarities and differences. After referring to Creswell and Plano Clark's book, I know that I cannot use what I find in question three to explain questions one and two. Based on my research questions and what I read in Creswell's book, I am thinking that this is not a true mixed methods study and will not fall under any of the major mixed methods designs. Am I on the right track? Please advise.
1. What is the impact of feedback via teacher one-on-one writing conferences versus computer-based writing feedback on the writing skills of at-risk Pre-college students?
2. How do students who participated in the study perceive the effectiveness of teacher feedback versus computer-based feedback?
3. How is the discourse in teacher-student interactive feedback versus computer-based feedback alike and/or different?
I have two instruments for my research. I gained approval to use the 6-question instrument, which was provided to me in a Word document. The other must be administered to participants through the Center for Leadership Studies website for a $3 fee per survey. I will also need to create a demographic questionnaire and provide participants with an appropriate consent form. I am wondering about the best way to ensure my participants receive and complete all of these assessments, since they are in different formats. Does anyone have any best practices? This is for my dissertation, so it is my first time doing a survey design.
Thank you in advance for any advice you can offer!
I know how to set up distribution panels/survey invitations in Qualtrics, but how do I send a message to the contact list without sending the survey with the message? Specifically, how do I send a pre-communication so I can give advance notice to the contacts that they will be receiving a request to participate in a survey?
M. Justin Miller, M.S.
I am still collecting survey tools that researchers have used in studies of transgender men and women. The tools will be used to inform the development of a new survey tool that may be added to CDC's National HIV Behavioral Surveillance (NHBS) survey set. If you are willing to share your survey tool (and have not already done so), please do.
All the best,
When being asked about the number of partners they have had, sex workers and sexual assault survivors may not want to include their clients or their rapist. Does anyone have questions they have used or suggestions for how to word survey questions to take this into account? How do we give survey respondents permission to leave these sexual partners out of the total and do so in a sensitive way?
I want to develop a research project that compares the behavior of the same respondents before and after an experiment. Kindly help me by referring me to appropriate literature.
As there is always some attrition in longitudinal research, how can a researcher match responses from the same respondent at T1, T2, T3, and so on in a longitudinal survey design? This would allow the researcher to later carry out a t-test to check whether those who participated in all waves and those who dropped out earlier are significantly different.
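Matching across waves requires a persistent identifier collected at every wave: a panel ID, or an anonymous self-generated code if responses must stay unlinked to names. Once IDs exist, separating completers from dropouts is a set operation, and the T1 scores of the two groups then feed the attrition t-test described above. A sketch with hypothetical IDs and scores:

```python
# Hypothetical wave data keyed by a persistent respondent ID (or an
# anonymous self-generated code collected at every wave)
t1 = {"r01": 3.2, "r02": 4.1, "r03": 2.5, "r04": 3.8}
t2 = {"r01": 3.5, "r03": 2.9, "r04": 4.0}

completers = sorted(set(t1) & set(t2))   # responded at both waves
dropouts = sorted(set(t1) - set(t2))     # lost to attrition after T1

# T1 scores of each group: compare these with a t-test to check for
# differential attrition
completer_t1 = [t1[r] for r in completers]
dropout_t1 = [t1[r] for r in dropouts]
```

The same intersection logic extends to three or more waves by chaining the set operations across all wave dictionaries.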
Hello research experts!
I'm currently performing research into different types of survey designs, response coding and analyses. I came across a citation I was hoping someone might be able to decode:
"When creating response scales, it's important to take into consideration its orientation. While there are many caveats and exceptions when creating response items, one effect is that respondents tend to favor the left side of a response scale. Take the following two response options:
Strongly-Disagree Disagree Undecided Agree Strongly-Agree
Strongly-Agree Agree Undecided Disagree Strongly-Disagree
If you code the values from 1 to 5 for the first scale and 5 to 1 on the second scale then you’ll have a higher average score on the second response option".
I'm just slightly confused as to how the average score would change on the second response scale: since it has been inversely coded relative to the first, my thinking is that the average shouldn't change, but there may be something crucial I'm missing.
Any help would be greatly appreciated. Thank you!
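On the puzzle: if every respondent answered identically with respect to the labels, reversing the coding would indeed just mirror the scores and the means would correspond. The quoted claim is about response-order (primacy) bias: some respondents drift toward the left end of the scale regardless of content. Under the first layout the left end is "Strongly Disagree" (coded 1), pulling the mean down; under the second it is "Strongly Agree" (coded 5), pushing it up. A toy demonstration with hypothetical numbers:

```python
# Five respondents whose true answer is the middle option (position 2,
# 0-indexed); a left-side bias nudges two of them one step toward the left end.
true_positions = [2, 2, 2, 2, 2]
biased = [max(0, p - 1) for p in true_positions[:2]] + true_positions[2:]

scale1_codes = [1, 2, 3, 4, 5]   # Strongly-Disagree ... Strongly-Agree, left to right
scale2_codes = [5, 4, 3, 2, 1]   # Strongly-Agree ... Strongly-Disagree, left to right

mean1 = sum(scale1_codes[p] for p in biased) / len(biased)
mean2 = sum(scale2_codes[p] for p in biased) / len(biased)
```

The same leftward drift produces a lower mean under the first coding and a higher mean under the second, which is what the quoted passage describes.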
This quotation comes from the famous poem “Morte d’Arthur” by Alfred, Lord Tennyson.
Compare, perhaps, scientific testing of the proposition:
“More patients suffering from disease D are cured by treatment T than doctors yet realise”
which seems analogous.
Hello. I want to measure citizens' "perceptions of energy insecurity". I therefore need to ask several questions about different dimensions of this concept and then construct an index. The questions I asked my respondents in the pre-test stage of my survey do not seem to correlate much, which is detrimental to the reliability of my index: I have a very low Cronbach's alpha coefficient. Can someone recommend (a) paper(s) that have used a similar approach in their survey design? Thank you.
I am working on the design of a survey about job loss and EI. I intend to use the already available scales for risk propensity by Meertens and Lion (2008) and the 5-item "fear of failure" scale developed by Conroy, Willow, and Metzler (2002).
In the social settings of Pakistan, India, etc., I feel that instead of a 7-point scale, it is better to use 5-point scales to make things simpler and easier for the respondents. What is your experience and opinion about this?
Secondly, I guess we need to use consistent scales for both: if one measure uses a 5-point scale, the other should also use a 5-point scale. I need your experience and opinion on that as well.
1. What are African American males' perceptions of mental illness?
2. How much do the stigmas associated with mental illness impact African American men's understanding of mental illness?
There is a need for additional studies to identify the perceptions of mental illness within the black population, specifically with African American men. The researcher will use qualitative research techniques to explore the African American male participants' perceptions of mental illness and mental health services. The focus of this study will examine 25 African American men, in Tallahassee, Florida, ranging between the ages of 18 and above. The selection of the specific geographical area is a convenience sample.
The purpose of this study is to explore African American adult men's perceptions of mental illness. The researcher will explore stigmas and misconceptions of mental illness among the participants. Additionally, the researcher will attempt to identify whether such stigmas and misconceptions influence the participants' willingness to seek mental health services. The use of qualitative research is to identify any patterns and gain general knowledge of the phenomenon, based on participants' opinions.
Though there has been a lot of significant discussion about this issue, today I am confused while entering my questionnaire data into the data sheet.
My variable is social capital, and in my research I have taken six dimensions of it (e.g. trust, social cohesion, etc.), each with several questions. Data were collected on a 5-point scale (strongly agree to strongly disagree), and the questions include positive and negative statements in equal numbers (e.g. "I have trust" and "I do not have trust").
Now, I am confused. For a positive statement, should I give 5 for "strongly agree", and for a negative statement, should I give 1 for "strongly agree"?
Thank you in advance for your anticipated cooperation.
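One way to keep data entry simple is to enter every response the same way (1 = strongly disagree .. 5 = strongly agree) and then flip the negatively worded items in software, so that a high score always means more of the construct. A minimal Python sketch (the item names and values are hypothetical):

```python
# On a 5-point scale entered as 1..5, a negatively worded item
# ("I do not have trust") is reverse-coded with (max + 1) - score.

def reverse_code(score: int, scale_max: int = 5) -> int:
    """Flip a Likert score so higher always means more of the construct."""
    return (scale_max + 1) - score

# Hypothetical respondent: answered "strongly agree" (5) to both items.
raw = {"trust_pos": 5, "trust_neg": 5}
coded = {"trust_pos": raw["trust_pos"],
         "trust_neg": reverse_code(raw["trust_neg"])}
print(coded)  # trust_neg becomes 1
```

This is equivalent to scoring negative items 1 for "strongly agree" at entry time, but it leaves an audit trail and is harder to get wrong item by item.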
I am an MA student in Dance Practices and I have to produce a research proposal poster for my final dissertation. I am going to five dance companies that teach disabled children to find out the methods they use and why (physical guidance, tactile cues, etc.). I have looked at social constructivist theory and plan to use qualitative research methods, but should I use a mixed-methods approach to analyse my information, or stick with qualitative? Sorry, it's all very new to me!
I am struggling to install the software suggested by the FAO to analyse the Food Insecurity Experience Scale and use
I need to carry out some tests in SPSS, but I am a bit confused about which items from my questionnaire are most appropriate to use as my dependent variable. I have 17 items for a particular variable, all Likert-scale items, but I am not sure whether I can select any single item for the test or should use all of the items. Or is it alright to use only Cronbach's alpha to test the reliability of the items and then use any of the items for the test?
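For what it's worth, the usual practice is that once reliability is acceptable, a multi-item Likert variable is analysed as a composite (summated or averaged) score per respondent, rather than any single item. A minimal Python sketch with invented data standing in for the 17 items:

```python
import numpy as np

# Hypothetical data: 5 respondents x 17 Likert items measuring one variable,
# each response coded 1..5.
rng = np.random.default_rng(42)
items = rng.integers(1, 6, size=(5, 17))

# After reliability is judged acceptable, form one composite score per
# respondent and use that -- not any single item -- as the variable in tests.
composite = items.mean(axis=1)
print(composite.shape)  # one score per respondent
```

Averaging keeps the composite on the original 1-5 metric; summing (`items.sum(axis=1)`) is equivalent up to a constant.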
I am conducting a baseline audit of a large organization that provides diverse services to a wide range of clients (housing, employment, etc.). We would like to capture participants' perceptions of and experiences with services prior to implementing a system-wide Trauma-Informed Practice intervention. Many of the clients may have low English literacy and significant trauma histories, and some have cognitive challenges. I am interested in quality resources to help me design survey instruments for these populations.
We have looked for it several times and still found no results related to it. We will need it for our research paper. Thank you.
I am interested in whether using AI biases the direction of an evaluation survey and, if so, whether this is a limitation of the methodology in terms of rigour.
I have generally seen the scale used as a continuous scale. One recommendation for cut points (high hope, etc.) would be to use one or two standard deviations above or below the mean, although this would depend on the characteristics of those who take the scale.
Substantively, I am attempting to show change in the hope of individuals with serious mental illness from the onset of their involvement in mental health services over time. I can use simple change scores, but in presentation it would be clearer to be able to say that the individuals moved from low hope to ...
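The one-SD cut-point convention described above is mechanically simple to apply; a minimal Python sketch with entirely hypothetical baseline hope totals:

```python
import numpy as np

# Hypothetical baseline totals on the hope scale.
scores = np.array([40, 44, 48, 52, 56, 60, 64])
mean, sd = scores.mean(), scores.std(ddof=1)

def hope_band(x: float) -> str:
    """Label a score relative to one SD above/below the sample mean."""
    if x <= mean - sd:
        return "low hope"
    if x >= mean + sd:
        return "high hope"
    return "average hope"

print([hope_band(x) for x in scores])
```

Note that sample-based cut points shift with the sample, so bands assigned at baseline and follow-up are only comparable if the same (e.g. baseline) mean and SD are reused at both time points.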
I am conducting a study on autistic traits in typically developed subjects and I am wondering what would be a better approach to examine neural differences in this population:
1. looking at the scores (the AQ questionnaire for autistic traits) as a continuum
2. looking at the two extreme quartiles
Does anyone have pro/con suggestions for each option?
Can you please recommend a relevant article that deals with the topic of working with the extreme quartiles as opposed to the continuum?
Thank you all for helping,
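For reference, the extreme-quartile split in option 2 is mechanically straightforward; a minimal Python sketch with invented AQ totals (the scores are hypothetical):

```python
import numpy as np

# Hypothetical AQ totals for 20 typically developed subjects.
aq = np.arange(10, 30)

q1, q3 = np.percentile(aq, [25, 75])
low_group = aq[aq <= q1]     # bottom quartile: low autistic traits
high_group = aq[aq >= q3]    # top quartile: high autistic traits
print(len(low_group), len(high_group))
```

The trade-off this makes concrete: the split discards the middle half of the sample (here 10 of 20 subjects), which sharpens group contrast but costs power and treats a continuous trait as categorical.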
I'm conducting an investigation of blood glucose levels to track pre-diabetes progression. I use a form that consists of demographic characteristics related to the disease. I'm really looking for suggestions on how to ensure that all the data gathered are valid.
Hi! My colleagues and I need to use a good scale of stressors, but we cannot find any precise information on whether the scales mentioned are publicly available. While some say the SRRS (1967) can be used for professional purposes, others claim they had to ask for permission. We also tried to contact the authors (specifically, the authors of the LEI) at the email addresses they provided in their article, but no one has responded for a few weeks. Does anyone have a reference from which it could be clearly concluded that (at least one of) these scales can be used for research purposes? Or at least an idea of who (else) to contact to get permission? We would rather not send a new round of emails and wait another few weeks for no response, since the timing of our project is a bit limited.
I am combining different scales in my questionnaire. These scales have already been tested for validity and reliability. First, do I still need to examine discriminant validity? Second, the questionnaire is long and I need to eliminate items; is it possible to do that based on my own experience and a review by other experts in the field? Thanks
I intend to use a survey scaled from 1 to 6 ("Not Probably" to "Definitely") for responses on which of the seven learning styles my respondents think a hand hygiene program approach should consider for it to become effective.
Though there are several 16-20 item learning style questionnaires in existence, such as the VARK and LSQ, I would like to keep the survey short and simple. Could the full instruments be exempted under any condition?
There are no existing resources for my dissertation topic. I am studying the ability of an existing fellowship certification process to indicate professional capability and efficacy for healthcare organizations and administrators.
I'm working on a study about patients' perspectives on being treated by a resident. In my literature search I found a lot of studies with a cross-sectional survey design. I really want to check the quality (risk of bias) of these studies; however, I cannot find a good appraisal tool. Does anybody have any recommendations for an appropriate tool?
Folks, I have used Qualtrics panels - not sure if you are familiar with this. Qualtrics has panels and charges a certain amount per survey depending on how scarce the panel respondents are - for example, CEOs may be $50 a survey, whereas a middle manager might be $8-10, depending on the criteria. It is a little difficult to get response-rate data in this case.
Another method some of my colleagues are using is Amazon Mechanical Turk. I have not tried this and am not familiar with the quality. It is definitely a less expensive option - I have heard it could be $1 per survey response.
What are your views and experience on some good ways to get quality survey data?
We are developing and validating a questionnaire regarding chronic disease and disaster preparedness. The questions are all categorical in nature, e.g.:
1) Which of the following items do you have as part of your household’s disaster preparedness?
Check all that apply
Water, two liters of water per person per day for at least three days
Food, at minimum a three-day supply of non-perishable food
2) Do you require supplemental oxygen?
Mark only one box
If no go to question 5
3) Do you have 72 hours of supplemental oxygen cylinders (compressed gas, not liquid)?
Mark only one box
I am struggling to find the best way to examine test-retest reliability. I think I should be using kappa statistics, but I'm a bit confused about them and there is little literature on the subject matter.
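For a single categorical item asked twice of the same respondents, Cohen's kappa is indeed the usual test-retest statistic: it is percentage agreement corrected for the agreement expected by chance. A minimal Python sketch with invented answers to the yes/no supplemental-oxygen question (for ordered categories, a weighted kappa would be preferable; multi-select "check all that apply" items are usually analysed as one kappa per checkbox):

```python
from collections import Counter

def cohen_kappa(test, retest):
    """Cohen's kappa for two categorical ratings of the same respondents."""
    n = len(test)
    observed = sum(a == b for a, b in zip(test, retest)) / n
    freq_t, freq_r = Counter(test), Counter(retest)
    expected = sum(freq_t[c] * freq_r[c] for c in set(freq_t) | set(freq_r)) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical: the oxygen question answered at test and two weeks later.
answers_t1 = ["yes", "yes", "no", "no", "yes", "no", "no", "yes"]
answers_t2 = ["yes", "yes", "no", "yes", "yes", "no", "no", "no"]
print(round(cohen_kappa(answers_t1, answers_t2), 3))
```

Here observed agreement is 6/8 = 0.75 but chance agreement is 0.50, so kappa lands well below the raw agreement rate.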
I'm looking for two things basically:
1) Syllabi for sampling or survey data analysis courses you've taken or taught. Good and bad examples both wanted.
2) If you've taken a course on this, we'd love your frank thoughts (again, positive or negative) even if you can't share the syllabus. Direct message is fine if you don't feel comfortable commenting "out loud."
Colleague Stas Kolenikov and I are up to something.
Just so you don't think I'm doing my own work, this isn't for a class I need to prepare. :)
Thanks in advance! :) We can continue the conversation here.
I'm looking for something like the O'Brien article in Academic Medicine 2014 (Standards for Reporting Qualitative Research: A Synthesis of Recommendations), or CONSORT, or something similar?
I am starting my dissertation, which relates to spiritual well-being and compassion fatigue, and I plan to use three different measurement surveys, such as the Professional Quality of Life (ProQOL) measure. What is the best way, or ways, to make statistical comparisons among the three scales?
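One common starting point for relating several scales administered to the same participants is a matrix of pairwise Pearson correlations of the total scores. A minimal Python sketch with entirely hypothetical scores (the instrument labels and all numbers are invented for illustration):

```python
import numpy as np

# Hypothetical totals for the same 6 participants on three instruments.
spiritual_wb = np.array([30, 25, 40, 35, 20, 38])   # spiritual well-being
comp_fatigue = np.array([22, 30, 15, 18, 33, 16])   # compassion fatigue
proqol_cs    = np.array([28, 24, 36, 33, 20, 35])   # ProQOL compassion satisfaction

# Each row of the input is one variable; result is a 3x3 correlation matrix.
r = np.corrcoef([spiritual_wb, comp_fatigue, proqol_cs])
print(r.shape)
```

Since the three scales use different metrics, correlations (or standardized scores) are generally safer than comparing raw means across instruments.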