• Ronán Michael Conroy added an answer:
    Can anyone help me with the questionnaire and methodology used in "Choosing your words carefully"?

    Dear All,

    I already have the article mentioned above, but I cannot understand how the authors interpreted the completed questionnaires, nor can I find the exact questionnaire they used. I have tried contacting the authors but have had no reply for a long time. I need to get working on this. Please help, fellow researchers!

    Thank you


    Ronán Michael Conroy · Royal College of Surgeons in Ireland

    I suggest that you try contacting the authors individually. I checked just two of them and they're both on RG:

    If you cannot get the original questionnaire, can you reconstruct the questions based on the paper?

    PS: A reference to the paper would have made this question easier to answer!

  • Anthony Kripps added an answer:
    Should a teacher focus on 'rigorous learning' or 'learning with entertainment'?
    It seems that many university teachers have become entertainers rather than focusing mainly on value addition and learning. A great deal of time is devoted to pleasing students: getting to know them personally, building good relations with them, telling jokes, and creating humour; the focus becomes good feedback rather than rigour. Keeping the audience motivated is good for effective teaching, but when so much time goes into entertainment, less remains for analysis and conceptualization. What is your preference, and why?
    Anthony Kripps · University College of Jubail

    A survey of American university professors found that the vast majority prefer lecture-style teaching, whereas the majority of students preferred learning through games and group work. Corporate trainers were found to use games, ice-breakers, and group activities most, and lecture least often.

  • Grace Elizabeth Varghese added an answer:
    Can anyone please explain the basic differences between systematic reviews and meta-analyses?
    Do these two have different scopes of work and methodology?
    Grace Elizabeth Varghese · Valparaiso University (USA)
    • Systematic review: a summary of evidence on a particular topic, typically produced by an expert or expert panel using a rigorous process for identifying, appraising, and synthesizing studies to answer a specific clinical question.
    • Meta-analysis: many systematic reviews incorporate quantitative methods to summarize the results of multiple studies. These reviews are called meta-analyses.

    Melnyk, B. M., & Fineout-Overholt, E. (2005). Evidence-Based Practice in Nursing & Healthcare. Philadelphia, PA: Lippincott Williams & Wilkins.

  • Eduardo Karol added an answer:
    What methodology is adopted for the study of out-migration within India?

    In India there is interstate migration, but a lack of proper data and methodology. Please guide me on this issue.

    Eduardo Karol · Rio de Janeiro State University

    A great work is Abdelmalek Sayad's book on immigrants; it has a chapter titled "What is an immigrant?"

  • Melita Vidakovic added an answer:
    Is it possible to measure number of mitochondria per cell through isolation of mitochondrial fraction?
    I wonder if there are some methods of relatively fast assessment of mitochondrial number per cell through something like this:
    -isolation of mitochondrial fraction from known amount of cells
    -evaluation of mitochondrial number in extract (through flow cytometry perhaps)
    Melita Vidakovic · University of Belgrade

    I heard about a test called MitoC that detects mitochondrial number in any cell type. I do not know if it is commercially available, but I can ask. As energy generators, mitochondria increase in number in cells under stress, so this test has also been used as a stress detector.

  • James Bjork added an answer:
    How can I deceive a participant in an fMRI scanner into believing they are interacting with a person in another room without using a confederate?

    All I need is for the participant to believe that the “person” they are interacting with in the other room is seeing the same stimuli as them, but I’d like something more convincing than just telling the participant that there is someone in another room. If someone knows a method that has been used before and can give me a link to a research article that’ll be great, but I’m happy to hear other suggestions.

    James Bjork · Virginia Commonwealth University

    As has been mentioned, the APA ethics guidelines allow for deception if the value of the knowledge (to mankind, health science etc) justifies it.  I don't see an alternative to deception in many kinds of social neuroscience paradigms with fMRI where the subject has to make decisions that ostensibly affect another person, but you have to keep constant the "social stimuli" presented to different individual subjects.  Brain activation DOES differ as a function of whether the subject thinks he/she is playing against a computer vs another person.  I used to do research with the point-subtraction aggression paradigm, where subjects were provoked by a fictitious opponent, in order to examine the effects of neurotransmitter manipulation on aggression.  Sometimes we would have a throw-away session, where we'd send a subject home with partial payment, because the partner was a "no-show."  Also, as an aside, our IRB back then actually stipulated that we NOT debrief subjects, with the thinking that it would be more emotionally damaging for subjects to learn they had been deceived.  We treated subjects with courtesy and professionalism.  I haven't lost any sleep over it.

  • Zol Bahri Razali added an answer:
    Does anybody have any experience using "spectrostar nano" plate-reader to detect fluo4?
    Cannot find where to define the emission wavelength on the device; there is only one option to choose a wavelength, and I assume that's for excitation.
    The user manual was no big help.
    Zol Bahri Razali · Universiti Malaysia Perlis

    Insha'Allah, I will explain in detail soon.

  • Abdulbaset Azizi added an answer:
    Is there a methodology to isolate mycoviruses and propagate them in a medium?

    I can see papers where mycovirus genetic material is extracted for sequencing, or where it is purified from the fungal cells for visualisation in TEM.

    Abdulbaset Azizi · Tarbiat Modares University

    For mycovirus extraction, you should extract double-stranded RNA from fungi. Viruses are obligate parasites, and you cannot propagate them on media.

  • Hakan Bayramlık added an answer:
    Are there specific attributes to formulating a good problem statement?
    How can one formulate a good research problem statement?
    Hakan Bayramlık · Kara Harp Okulu

    Dear Amer A. Al-Atwi,

    The link below may be a good starting point.

  • Carlos Montalvo added an answer:
    How to measure an innovation index?

    Dear all, I'm thinking about building an innovation ranking of Polish companies: something that would show how companies build their innovation value in different sectors, circumstances, etc. I have quite a large sample of different companies available, and IDI/CATI/CAWI methodology. I would be very grateful for any recommendations, suggestions, or links to available resources.

    At TNO we have been working on this question for some time. Starting from a behavioural approach, we aimed to advance to a model that captures innovation micro-motivations and resources through to macro effects. Take a look at these two papers; they might help.

    Montalvo C. (2006), What triggers change and innovation, Technovation

    Montalvo and Moghayer (2012) State of an innovation system: theoretical and empirical advance towards an innovation efficiency index

    You can download the second from my page here on ResearchGate, and the first from Elsevier's ScienceDirect.
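    As a rough illustration of the kind of composite measure such a ranking could rest on (all indicator names, weights, and figures below are hypothetical, not taken from the papers above), one can min-max normalise each firm-level indicator and take a weighted average:

```python
# Hypothetical sketch: a composite innovation index built by min-max
# normalising firm-level indicators and taking a weighted average.
# Indicator names and weights are illustrative only.

def minmax(values):
    """Scale a list of numbers to the [0, 1] range."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def innovation_index(firms, weights):
    """firms: {name: {indicator: raw value}}; weights: {indicator: weight}.
    Returns {name: index in [0, 1]}, higher = more innovative."""
    indicators = list(weights)
    total_w = sum(weights.values())
    names = list(firms)
    # Normalise each indicator across all firms so units are comparable.
    normed = {}
    for ind in indicators:
        col = minmax([firms[n][ind] for n in names])
        for n, v in zip(names, col):
            normed.setdefault(n, {})[ind] = v
    return {n: sum(weights[i] * normed[n][i] for i in indicators) / total_w
            for n in names}

firms = {
    "FirmA": {"rnd_spend_pct": 8.0, "new_product_share": 0.30, "patents": 12},
    "FirmB": {"rnd_spend_pct": 2.0, "new_product_share": 0.10, "patents": 1},
    "FirmC": {"rnd_spend_pct": 5.0, "new_product_share": 0.45, "patents": 4},
}
weights = {"rnd_spend_pct": 0.4, "new_product_share": 0.4, "patents": 0.2}

ranking = sorted(innovation_index(firms, weights).items(),
                 key=lambda kv: kv[1], reverse=True)
print(ranking)
```

    The choice of indicators, normalisation scheme, and weights is exactly where the behavioural modelling in the papers above comes in; the code only shows the mechanical aggregation step.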

  • Damian G Kelty-Stephen added an answer:
    Why should complex methods be used when simple methods result in the same science-based conclusions?

    Hypotheses are empirically tested with observations or perceptions of phenomena using more or less complex methodology. Recent studies of the same natural phenomena use more complex methods and tools than older studies, without necessarily changing the science-based conclusions. Why should complex methods be used when simple methods result in the same science-based conclusions? Any examples or thoughts?

    Damian G Kelty-Stephen · Grinnell College

    I prefer Howard Skipper's quotation just a little bit more than George Box's: "A model is a lie that helps you see the truth."

  • Ángel Machado Cabezas added an answer:
    Do you know sound studies on financial reporting practices for family owned firms?

    My interest is in the methodology part: the way data have been collected and analysed.

    Ángel Machado Cabezas · University of Oviedo

    I know of a study that might help, but the problem is that it is in Spanish. An automatic translator such as Google can help you get something useful from it, and besides, its references are in English.

    I attach you the paper.

  • Gana Gecheva added an answer:
    Any suggestion on Macrophyte-based assessment of ecological potential?

    The developed methodologies for macrophyte-based assessment have values for ecological status classes. The WFD also requires assessment of 'heavily modified water bodies' in terms of ecological potential. I am wondering: if the given methodology reflects general degradation (i.e. including physical alterations), is there a need for a separate scale with values for ecological potential? Thank you for your comments in advance!

    Gana Gecheva · Plovdiv University "Paisii Hilendarski"

    Dear Iakovos, Really specific communities in hypersaline lakes! Unfortunately, Bulgaria did not identify a common transitional water type. As far as I know, there is an intercalibrated assessment method for the BQE Fish within the North East Atlantic GIG coastal and transitional waters. The intercalibration exercise has not been finished for phytoplankton and benthic invertebrates, while the results for macrophytes have to be accepted by 2016. Can you point me to articles or reports focused on hypersaline lakes in Cyprus?

  • Alexandra Castle Ph.D. added an answer:
    Has anyone come across any research that interprets the meaning of Facebook Likes and Shares?

    I'm looking for research that specifically looks into the psychological motivations behind liking and sharing Facebook posts and statuses. Does "like" mean "agree", "awesome!", "good!", or simply "like", or does clicking the button simply mean that one is aware of the post?

    Is there any research that measures the number of likes and shares as part of its methodology?

    Thank you.

    Alexandra Castle Ph.D. · Bellingrath Family Law

    All true, but then there is the simple, self-enhancing idea that the more you "like" others, the more "likes" you will get; that increases the number of people who follow you and pay attention to what you say. Many Facebook pages are now a combination of business and social, which can mean that Google scores you better and you get more followers, and ultimately more clients. See how crass I can be? But trust me, what I just said is true. Not very deep psychologically, but financially it could make all the difference. There are classes in social-page marketing practices, you know. So, like everything else, you have to be really specific about what you want to know. You certainly couldn't use our Facebook page; it is being designed almost exclusively for marketing purposes, though it might contain all the same things, just with a difference in emphasis.

  • Stefan Svetsky added an answer:
    What are the impacts of action research on classroom learning process of middle standard students?

    What sampling and methodology can be used to find out the impacts of action research on learning process?

  • Adebayo Akanbi Oladapo added an answer:
    Is there any methodology to study the population structure and regeneration status of a monocot species?

    Dear All,

    I want to investigate the population structure and regeneration status of a monocot species, such as a Homalomena species of the family Araceae. Please suggest some standard methodology.

    Adebayo Akanbi Oladapo · University of Central Lancashire

    Sorry, I have no idea; this is not in my area of expertise

  • Eduardo oliva-lópez added an answer:
    How can I choose a methodology for calculating an overall performance evaluation?

    There are so many methodologies for assessing overall organizational performance using a multi-criteria framework.

    How can I choose?

    Eduardo oliva-lópez · National Polytechnic Institute

    An effective performance evaluation has to be linked to the objectives sought by the firm, since good quality management is assessed in terms of the objectives achieved. Since each company has unique objectives and resources, the analyst needs to develop his or her own tailor-made methodology on the basis of all the pertinent well-known methodologies. Sorry, there is no simple answer to your question.
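    To make the point concrete, here is a minimal sketch (the objectives, weights, and numbers are invented for illustration) of a weighted-sum multi-criteria score tied to a firm's own targets, in the spirit of the answer above:

```python
# Illustrative sketch: a simple weighted-sum multi-criteria score that
# links performance evaluation to the firm's own objectives.
# All objectives, weights, and figures below are hypothetical.

def performance_score(objectives):
    """objectives: list of (weight, target, actual) tuples.
    Each objective contributes weight * min(actual/target, 1.0),
    so overperformance on one goal cannot mask failure on another.
    Returns a score in [0, 1]."""
    total_w = sum(w for w, _, _ in objectives)
    score = sum(w * min(actual / target, 1.0)
                for w, target, actual in objectives)
    return score / total_w

# Example: three objectives with firm-specific weights.
objectives = [
    (0.5, 1_000_000, 900_000),   # revenue: 90% of target achieved
    (0.3, 0.95, 0.99),           # on-time delivery: target exceeded, capped
    (0.2, 50, 30),               # new customers: 60% of target achieved
]
print(round(performance_score(objectives), 3))
```

    The capping rule is one of many possible design choices; the substantive work, as the answer notes, lies in choosing objectives and weights that actually reflect the firm.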

  • Izdihar Ismail added an answer:
    Can we run crude extracts on GC-MS? How do we prepare aqueous and MeOH crude extracts for GC-MS, and what types of solvents can we use?

    Can anyone suggest the best methodology for this? Thank you.

    Izdihar Ismail · Universiti Tun Hussein Onn Malaysia

    Very informative. Thanks to you all.

  • Steve Macgillivray added an answer:
    How to use JBI SUMARI methodological quality assessment tool?
    Why do JBI SUMARI methodological quality assessment tools only have appraisal questions but no scores? And how can reviewers judge whether to include or exclude?
    Steve Macgillivray · University of Dundee

    Dear Mingji Zhang

    You may also be interested in the guidance document produced by Cochrane:

    Hannes K. Chapter 4: Critical appraisal of qualitative research. In: Noyes J, Booth A, Hannes K, Harden A, Harris J, Lewin S, Lockwood C (editors), Supplementary Guidance for Inclusion of Qualitative Research in Cochrane Systematic Reviews of Interventions. Version 1 (updated August 2011). Cochrane Collaboration Qualitative Methods Group, 2011. Available from URL

    Key points

    Critical appraisal of qualitative studies is an essential step within a Cochrane Intervention review that incorporates qualitative evidence.
    The overarching goal of critical appraisal in the context of including qualitative research in a Cochrane Intervention Review is to assess whether the studies actually address questions under meaning, process and context in relation to the intervention and outcomes under review.
    Review teams should use a critical appraisal instrument that is underpinned by a multi-dimensional concept of quality in research and hence includes items to assess quality according to several domains, including quality of reporting, methodological rigour, and conceptual depth and breadth.
    Critical appraisal involves (i) filtering against minimum criteria, involving adequacy of reporting detail on data sampling, collection and analysis, (ii) technical rigour of the study elements indicating methodological soundness, and (iii) paradigmatic sufficiency, referring to researchers' responsiveness to data and theoretical consistency.
    When choosing an appraisal instrument, a review team should consider the available expertise in qualitative research within the team and should ensure that the critical appraisal instrument they choose is appropriate given the review question and the type of studies to be included.
    Reviewers need to clarify how the outcome of their critical appraisal exercise is used with respect to the presentation of their findings. The inclusion of a sensitivity analysis is recommended to evaluate whether methodological flaws have a large or small impact on the findings and conclusions.


    Considerable debate exists on whether or not concepts such as validity and reliability apply to qualitative research and, if so, how they could be assessed. Some researchers have stated that qualitative research should establish validity, reliability and objectivity. Others plead for an adjustment of these concepts to better fit the qualitative research design. As a consequence, critical appraisal instruments might differ in the criteria they list to complete a critical appraisal exercise. Some researchers consider appraisal instruments a tool that can be utilized as part of the exploration and interpretation process in qualitative research (Popay et al, 1998; Spencer, 2003). Edwards et al (2002) describe the use of a "signal to noise" approach, where a balance is sought between the methodological flaws of a study and the relevance of the insights and findings it adds to the overall synthesis. Other researchers do not acknowledge the value of critical appraisal of qualitative research, stating that it stifles creativity (Dixon-Woods, 2004). While recognising that all these views have some basis for consideration, certain approaches succeed in positioning the qualitative research enterprise as one that can produce a valid, reliable and objective contribution to evidence synthesis. It is these that may therefore have more potential to be generally accepted within the context of producing Cochrane Intervention Reviews. The Cochrane Collaboration recommends a specific tool for assessing the risk of bias in each included study in an intervention review, a process that is facilitated through the use of appraisal instruments addressing the specific features of the study design and focusing on the extent to which the results of included studies should be believed. This suggests that in assessing the methodological quality of qualitative studies, the core criterion to be evaluated is researcher bias.

    Believability in this context refers to the ability and efforts of the researcher to make his or her influence and assumptions clear and to provide accurate information on the extent to which the findings of a research report hold true. However, it is the actual audit trail provided by researchers that allows for an in-depth evaluation of a study. Most existing appraisal instruments use broader criteria that account for reporting issues as well. We suggest that these issues should be part of the appraisal exercise. Currently, there are four possibilities to make use of qualitative research in the context of Cochrane Intervention reviews:

    1.      The use of qualitative research to define and refine the questions of a Cochrane Review (informing reviews).

    2.      The use of qualitative research identified whilst looking for evidence of effectiveness (enhancing reviews).

    3.      The use of findings derived from a specific search for qualitative evidence that addresses questions related to an effectiveness review (extending reviews).

    4.      Conducting a qualitative evidence synthesis to address questions other than effectiveness (supplementing reviews).

    The latter use (supplementing) is beyond the scope of current Cochrane Collaboration policy (Noyes et al, 2008). Stand-alone qualitative reviews that supplement Cochrane Intervention reviews need to be conducted and published outside of the Cochrane context.

    Critical appraisal applies to all of the above possibilities. 

    Reviewers should bear in mind that narratives used in reports of quantitative research cannot be considered qualitative findings if they do not use a qualitative method of data collection and analysis. Therefore, critical appraisal based on instruments developed to assess qualitative studies is not applicable to reports that do not meet the criteria of being a 'qualitative study'.

    This chapter breaks down into four sections. Section 1 addresses translated versions of core criteria such as validity, reliability, generalisability and objectivity of qualitative studies. Section 2 presents an overview of the different stages involved in quality assessment. Section 3 guides the researcher through some of the instruments and frameworks developed to facilitate critical appraisal, and section 4 formulates suggestions on how the outcome of an appraisal of qualitative studies can be used or reported in a systematic review.

    Section 1: Core criteria for quality assessment

    Critical appraisal is “the process of systematically examining research evidence to assess its validity, results and relevance before using it to inform a decision” (Hill & Spittlehouse, 2003). Instruments developed to support quality appraisal usually share some basic criteria for the assessment of qualitative research.  These include the need for research to have been conducted ethically, the consideration of relevance to inform practice or policy, the use of appropriate and rigorous methods and the clarity and coherence of reporting (Cohen & Crabtree, 2008). Other criteria are contested, such as the importance of addressing reliability, validity, and objectivity, strongly related to researcher bias.  Qualitative research as a scientific process needs to be “rigorous” and “trustworthy” to be considered as a valuable component of Cochrane systematic review.  Therefore an evaluation using such criteria is essential.  Nevertheless we should acknowledge that the meaning assigned to these words may differ in the context of qualitative and quantitative research designs (Spencer et al, 2003).

    Does translation of terminology compromise critical appraisal?

    The concepts used in table 1 are based on Lincoln and Guba’s (1985) translation of criteria to evaluate the trustworthiness of findings.  Acknowledging the difference in terminology does not obviate the rationale or process for critical appraisal.  There might be good congruence between the intent of meanings relevant to key aspects of establishing study criteria, as demonstrated in table 1.

    Table 1: Criteria to critically appraise findings from qualitative research

    Qualitative Term                Quantitative Term
    Credibility (truth value)       Internal validity
    Transferability                 External validity or generalisability
    Dependability                   Reliability
    Confirmability                  Objectivity

    This scheme outlines some of the core elements to be considered in an assessment of the quality of qualitative research. However, the concept of confirmability might not be applicable to approaches inspired by phenomenology or critical paradigms, in which the researcher's experience becomes part of the data (Morse, 2002). The choice of critical appraisal instruments should preferably be inspired by those offering a multi-dimensional concept of quality in research. Apart from methodological rigour, that would also include quality of reporting and conceptual depth and breadth.

    What indications are we looking for in an original research paper?

    There are a variety of evaluation techniques that authors might have included in their original reports, which facilitate assessment by a reviewer and are applicable to a broad range of approaches in qualitative research. However, it should be noted that some of the techniques listed apply only to a specific set of qualitative research designs.

    Assessing Credibility: Credibility evaluates whether or not the representation of data fits the views of the participants studied, whether the findings hold true.
    Evaluation techniques include: having outside auditors or participants validate findings (member checks), peer debriefing, attention to negative cases, independent analysis of data by more than one researcher, verbatim quotes, persistent observation etc.

    Assessing Transferability: Transferability evaluates whether research findings are transferable to other specific settings.
    Evaluation techniques include: providing details of the study participants to enable readers to evaluate for which target groups the study provides valuable information, providing contextual background information, demographics, the provision of thick description about both the sending and the receiving context etc.

    Assessing Dependability: Dependability evaluates whether the process of research is logical, traceable and clearly documented, particularly on the methods chosen and the decisions made by the researchers.
    Evaluation techniques include: peer review, debriefing, audit trails, triangulation in the context of the use of different methodological approaches to look at the topic of research, reflexivity to keep a self-critical account of the research process, calculation of inter-rater agreements etc.

    Assessing Confirmability: Confirmability evaluates the extent to which findings are qualitatively confirmable through the analysis being grounded in the data and through examination of the audit trail.
    Evaluation techniques include: assessing the effects of the researcher during all steps of the research process, reflexivity, providing background information on the researcher’s background, education, perspective, school of thought etc.
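    The "calculation of inter-rater agreements" mentioned under dependability can, for two reviewers, be quantified with Cohen's kappa. A minimal sketch (the labels below are illustrative):

```python
# Minimal sketch of Cohen's kappa for two reviewers coding the same
# items into categories: chance-corrected inter-rater agreement.
# The example labels are illustrative only.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two equal-length label lists."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    # Expected agreement if both raters labelled at random using
    # their own marginal frequencies.
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["include", "include", "exclude", "include", "exclude", "include"]
b = ["include", "exclude", "exclude", "include", "exclude", "include"]
print(round(cohens_kappa(a, b), 3))
```

    Values near 1 indicate strong agreement beyond chance, values near 0 agreement no better than chance; conventions for interpreting intermediate values vary between authors.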

    The criteria listed might generate an understanding of the basic methodological standard a qualitative study should be able to reach. However, a study may be judged to have followed the appropriate procedures for a particular approach, yet may suffer from poor interpretation and offer little insight into the phenomenon at hand. Conversely, another study may be flawed in terms of transparency of methodological procedures and yet offer a compelling, vivid and insightful narrative, grounded in the data (Dixon-Woods et al, 2004). Defining fatal flaws and balancing assessment against the weight of a message remains a difficult exercise in the assessment of qualitative studies. As in quantitative research, fatal flaws may depend on the specific design or method chosen (Booth, 2001). This issue needs further research.

    Section 2: Stages in the appraisal of qualitative research

    Debates in the field of quality assessment of qualitative research designs are centred around a more theoretical approach to evaluating the quality of studies versus an evaluation of the technical adequacy of a research design.  How far criteria-based, technical approaches offer significant advantages over expert intuitive judgement in assessing the quality of qualitative research is being challenged by recent evidence indicating that checklist-style approaches may be no better at promoting agreement between reviewers (Dixon-Woods, 2007).  However, these appraisal instruments might succeed better in giving a clear explanation as to why certain papers have been excluded.  Given the fact that few studies are completely free from methodological flaws, both approaches can probably complement each other.

    Is the use of a critical appraisal instrument sufficient for assessing the quality of qualitative studies enhancing Cochrane intervention reviews?

    Three different stages can be identified in a quality assessment exercise: filtering, technical appraisal and theoretical appraisal. The first stage links to the inclusion criteria for the types of study that should be considered to enhance or extend Cochrane Reviews, and requires no specific expertise. The required expertise for the next two stages ranges from a basic understanding of qualitative criteria, sufficient to critically appraise studies, to a more advanced level of theoretical knowledge of the approaches used.

    Stage 1: Filtering:
    Within the specific context of enhancing or extending Cochrane Reviews, and viewing critical appraisal as a technical and paradigmatic exercise, it is worth considering limiting the type of qualitative studies to be included in a systematic review.  We suggest restricting included qualitative research reports to empirical studies with a description of the sampling strategy, data collection procedures and the type of data-analysis considered.  This should include the methodology chosen and the methods or research techniques opted for, which facilitates the systematic use of critical appraisal as well as a more paradigmatic appraisal process. Descriptive papers, editorials or opinion papers would generally be excluded.

    Stage 2: Technical appraisal:
    Critical appraisal instruments should be considered a technical tool to assist in the appraisal of qualitative studies, looking for indications in the methods or discussion section that add to the level of methodological soundness of the study. This judgement determines the extent to which the reviewers may have confidence in the researcher’s competence in being able to conduct research that follows established norms (Morse, 2002) and is a minimum requirement for critical assessment of qualitative studies.  Criteria include but are not limited to the appropriateness of the research design to meet the aims of the research, rigour of data-collection and analysis, well-conducted and accurate sampling strategy, clear statements of findings, accurate representation of participants’ voices, outline of the researchers’ potential influences, background, assumptions, justifications of the conclusion or whether or not it flows from the data, value and transferability of the research project etc.  For this type of appraisal one needs to have a general understanding of qualitative criteria. Involving a researcher with a qualitative background is generally recommended.

    Stage 3: Theoretical appraisal:

    In addition to assessing the fulfilment of technical criteria, we suggest a subsequent, paradigmatic approach to judgement, with a focus on the research paradigm used in relation to the findings presented. Although some critical appraisal instruments integrate criteria related to theoretical frameworks or paradigms, most of them are pragmatic. These do little to identify the quality of the decisions made, the rationale behind them, or the responsiveness and sensitivity of the researcher to the data. Therefore, other criteria should also be considered. These would include, for example, an evaluation of methodological coherence or congruity between the paradigms that guide the research project and the methodology and methods chosen; an active analytic stance and theoretical position; investigator responsiveness and openness; and verification, which refers to systematically checking and confirming the fit between the data gathered and the conceptual work of analysis and interpretation (Morse et al, 2002). For this type of overall judgement, a more in-depth understanding of approaches to qualitative research is necessary. It is therefore recommended that a researcher with experience of qualitative research, who can guide others through the critical appraisal process, is invited. Experienced methodologists may have valuable insights into potential biases that are not at first apparent. It should be mentioned, though, that the need for paradigmatic input might depend on the type of synthesis chosen.

    The Cochrane Qualitative Research Methods Group recommends stage 3 whenever the instrument chosen for stage 2 does not cover a paradigmatic approach to judgement.

    Other considerations include involving people with content expertise for the evaluation exercise.  They are believed to give more consistent assessments, which is in line with what the Cochrane Collaboration suggests for the assessment of risk of bias in trials (Oxman et al, 1993).  

    Section 3: A selection of instruments for quality assessment

    A range of appraisal instruments and frameworks is available for use in the assessment of the quality of qualitative research. Some are generic, applicable to almost all qualitative research designs; others have been developed specifically for use with certain methods or techniques. The instruments also vary with regard to the criteria that they use to guide the critical appraisal process. Some address paradigmatic aspects related to qualitative research; others tend to focus on the quality of reporting more than on theoretical underpinnings. Nearly all of them address credibility to some extent. The list of examples presented below is not exhaustive, with many instruments still in development or yet to be validated and others not yet commonly used in practice. It draws on the findings of a review of published qualitative evidence syntheses (Dixon-Woods et al, 2007) and its ongoing update. Reviewers need to decide for themselves which instrument appears most appropriate in the context of their review and use this judgement to determine their choice. Researchers with a quantitative background also need to consider input from a researcher familiar with qualitative research, even when an appraisal instrument suitable for novices in the field is opted for.

    Which instruments or frameworks are out there?

    Checklists embedded in a software program to guide qualitative evidence synthesis:
    Some evidence synthesis organisations have developed a checklist and incorporated it in the software they make available to assist reviewers with the synthesis of qualitative findings. Typically, potential reviewers need to register to be able to use it. However, the instruments are also available outside the software programs, on the websites of both organisations[1].


    QARI software developed by the Joanna Briggs Institute, Australia

    Used by: Pearson A, Porritt KA, Doran D, Vincent L, Craig D, Tucker D, Long L, Henstridge V. A comprehensive systematic review of evidence on the structure, process, characteristics and composition of a nursing team that fosters a healthy environment. International Journal of Evidence-Based Healthcare 2006; 4(2): 118-59.

    Rhodes LG et al. Patient subjective experience and satisfaction during the perioperative period in the day surgery setting: a systematic review. Int J Nurs Pract 2006; 12(4): 178-92.

    EPPI-reviewer developed by the EPPI Centre, United Kingdom


    Used by: Bradley P, Nordheim L, De La Harpa D, Innvaer S & Thompson C. A systematic review of qualitative literature on educational interventions for evidence-based practice. Learning in Health & Social Care 2005; 4(2): 89-109.

    Harden A, Brunton G, Fletcher A, Oakley A. Teenage pregnancy and social disadvantage: a systematic review integrating trials and qualitative studies. British Medical Journal Oct 2009.

    Other online available appraisal instruments:
    Most of the instruments in this selection are easily accessible and clearly define what is meant by each individual criterion listed.  As such, they may be particularly useful if reviewers with little experience of qualitative research are required to complete an assessment.


    Critical Appraisal Skills Programme (CASP):

    Used by: Kane GA et al. Parenting programmes: a systematic review and synthesis of qualitative research. Child Care Health and Development 2007; 33(6): 784-793.

    Modified versions of CASP, used by:

    Campbell R, Pound P, Pope C, Britten N, Pill R, Morgan M, Donovan J. Evaluating meta-ethnography: a synthesis of qualitative research on lay experiences of diabetes and diabetes care. Social Science and Medicine 2003; 56: 671-84.
    Malpass A, Shaw A, Sharp D, Walter F, Feder G, Ridd M, Kessler D. 'Medication career' or 'Moral career'? The two sides of managing antidepressants: a meta-ethnography of patients' experience of antidepressants. Soc Sci Med 2009; 68(1): 154-68.

    Quality Framework UK Cabinet Office

    Used by: MacEachen E et al. Systematic review of the qualitative literature on return to work after injury. Scandinavian Journal of Work Environment & Health 2006; 32(4): 257-269.

    Evaluation Tool for Qualitative Studies

    Used by: McInnes RJ & Chambers JA. Supporting breastfeeding mothers: qualitative synthesis. Journal of Advanced Nursing 2008; 62(4): 407-427.

    Checklists developed by academics and commonly used in published qualitative evidence syntheses: Such checklists have been selected and utilised by other researchers in the specific context of an evidence synthesis.


    The Blaxter (1996) criteria for the evaluation of qualitative research papers, used by:

    Gately C et al. Integration of devices into long-term condition management: a synthesis of qualitative studies. Chronic Illn 2008; 4(2): 135-48.

    Khan N et al. Guided self-help in primary care mental health - Meta-synthesis of qualitative studies of patient experience. British Journal of Psychiatry 2007; 191: 206-211.

    The Burns (1989) standards for qualitative research, used by:

    Barroso J, Powell-Cope GM. Meta-synthesis of qualitative research on living with HIV infection. Qualitative Health Research 2000; 10: 340-53.

    Thorne S, Paterson B. Shifting images of chronic illness. Image: Journal of Nursing Scholarship 1998; 30: 173-8.

    Hildingh C et al. Women's experiences of recovery after myocardial infarction: a meta-synthesis. Heart Lung 2007; 36(6): 410-7.

    Howard AF, Balneaves LG, Bottorff JL. Ethnocultural women’s experiences of breast cancer: a qualitative meta-study. Cancer nursing 2007 30(4): E27-35.

    The Popay et al (1998) criteria, used by:

    Attree P. Low-income mothers, nutrition and health: a systematic review of qualitative evidence. Maternal and Child Nutrition 2005 1(4): 227-240.

    Sim J & Madden S. Illness experience in fibromyalgia syndrome: a metasynthesis of qualitative studies. Social Science & Medicine 2008; 67(1): 57-67.

    Yu D et al. Living with chronic heart failure: a review of qualitative studies of older people. J Adv Nurs 2008; 61(5): 474-83.

    The Mays & Pope (2000) criteria, used by:

    Humphreys A et al. A systematic review and meta-synthesis: evaluating the effectiveness of nurse, midwife/allied health professional consultants. Journal of Clinical Nursing 2007; 16(10): 1792-1808.

    Metcalfe A et al. Family communication between children and their parents about inherited genetic conditions: a meta-synthesis of the research. Eur J Hum Genet 2008; 16(10): 1193-200.

    Robinson L. & Spilsbury K. Systematic review of the perceptions and experiences of accessing health services by adult victims of domestic violence. Health Soc Care Community 2008; 16(1): 16-30.

    [1] A detailed guide on how to conduct a QARI-supported systematic review, including a detailed explanation of the 10 critical appraisal criteria, can be found on the JBI website.

    An instrument for process evaluation is also available online. It helps to assess quality along three dimensions: quality of reporting, sufficiency of strategies for increasing methodological rigour, and the extent to which study methods and findings are appropriate to answer the review question (for an example, see the Harden et al, 2009 study).

  • Vinayak K. Nahar added an answer:
    Can anyone recommend guidelines to conduct systematic review of cross-sectional studies?
    I am looking for guidelines to conduct systematic review of cross-sectional studies. Any recommendations are appreciated.
    Vinayak K. Nahar · University of Mississippi

    Thanks, Dr. Groot, for sharing this great work.

  • Michael W. Marek added an answer:
    If there are a number of theories that go with your topic what's the best way of selection?
    Should we add theories according to our variables or the main purpose of the study?
    Michael W. Marek · Wayne State College

    Part of it depends on the ROLE of the theory in your study. If the theory framework will be the source of your data collection plan, then you need a theory that aligns most closely with your study research questions, participants, etc. On the other hand, if you are looking for a theoretical framework for analysis in the Discussion section of your paper, then you could use more than one. In fact, multiple ways of analysis in your discussion could be a good thing, because the analysis would therefore be richer.

  • Jane Murray added an answer:
    What are colleagues' best and worst experiences of action research?
    I work with early childhood students who work full-time and study part-time. They are often drawn to action research as the methodology for their dissertation but many go on to encounter difficulties in its implementation. I want to build a repository that might help the students when they meet problems.
    Jane Murray · The University of Northampton

    Thank you to contributing colleagues for some valuable suggestions.


  • Yibadatihan Simayi added an answer:
    What is the meaning of quadratic effect of factors on the response surface methodology (RSM)?

    Usually we use RSM in optimization studies to investigate the effect of different factors on the response and to find the optimum factor levels for the proposed responses. Here we have linear, interaction and quadratic effects. What is the meaning of the quadratic effect?

    Yibadatihan Simayi · Putra University, Malaysia

    In that case, you will have the individual effect of each factor that showed significance, meaning the factor itself has a significant (positive or negative) effect on the variation of your response but does not have any interaction effects with the other factors you studied.
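To add to the answers above: a quadratic effect describes curvature, i.e. the response changes at a rate that itself depends on the factor level, which is what allows RSM to locate an interior optimum rather than just a rising or falling trend. A minimal sketch (with made-up, noise-free data, not from any study mentioned here): fitting the second-order model y = b0 + b1·x + b11·x² recovers a negative quadratic coefficient, so the fitted surface peaks inside the region studied.

```python
import numpy as np

# Hypothetical single-factor example in coded units.
x = np.linspace(-1.0, 1.0, 21)       # coded factor level
y = 5.0 + 1.0 * x - 3.0 * x**2       # true response: a curved surface

# Fit the second-order model y = b0 + b1*x + b11*x^2 by least squares.
X = np.column_stack([np.ones_like(x), x, x**2])
b0, b1, b11 = np.linalg.lstsq(X, y, rcond=None)[0]

# b11 < 0 means the surface bends downward: the response has a maximum
# at the stationary point x* = -b1 / (2*b11), inside the region studied.
x_opt = -b1 / (2 * b11)
print(round(b11, 3), round(x_opt, 3))
```

With two factors the same model gains b2·x2, b22·x2² and the interaction term b12·x1·x2; a significant quadratic coefficient for a factor signals curvature along that factor, while a significant b12 signals that the factors act jointly.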

  • Abdul Razaque asked a question:
    Is this example of research considered misconduct according to US Government?

    You are doing an experiment sponsored by the National Institutes of Health, a U.S. federal agency. In your experiment, you are testing the impact of a new method of exposure to chlorofluorocarbons on lung tissue using low-dose spiral computerized tomography. The protocol you are using was already approved and requires you to screen 200 subjects. You have completed 190 subjects and need to do just ten more. However, it is time for spring break and you really want to go with your friends. You decide to use the data for the 190 subjects and extrapolate the results for the remaining 10.

    Look up what "research misconduct" is according to the US government. Is this research misconduct? Why or why not? Would it be misconduct if, because of sloppy record keeping, you actually thought you had completed 200 subjects and only later realized your error of having completed just 190?

  • Paulinus Woka Ihuah added an answer:
    What are the advantages and disadvantages of mixed methods research?

    I was considering using a mixed methods approach for a future research topic. I would appreciate the views of others in relation to their experiences and views about mixed methodology.

    Paulinus Woka Ihuah · Rivers State University of Science and Technology and University of Salford, UK

    For the mixed methods approach several definitions exist: it is a research inquiry that employs both qualitative and quantitative approaches in a mixed methods research work for the purposes of breadth and depth of understanding and corroboration (Johnson et al., 2007). Creswell and Plano Clark (2011) added that the indispensable premise of mixed methods design is that the use of qualitative and quantitative approaches, in combination, will provide a better understanding of the research problems than the use of either one method alone in a study. This is argued to be one of, if not the, most central premises of pragmatic philosophical reasoning in research today (Tashakkori and Teddlie, 2003; Ihuah and Eaton, 2013). Please read these resources for more details.

  • James R Knaub added an answer:
    On the linearity of an analytical procedure, what does the intercept depend on?

    When we validate an analytical determination method with instruments like HPLC, LC-MS, etc., we conduct linearity tests to create calibration curves, from which we get the linear equation Y = mx ± c. For the intercept we sometimes get very low or very high values; what does this observation result from? Does it relate to the concentration of the analyte? If so, how?

    James R Knaub · Energy Information Administration

     If you do want to see how your points fit into a classical ratio estimator model, you could plot confidence bounds, assuming normality of residuals (which could be problematic near the origin), by using the attached spreadsheet.  
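To illustrate one common and easily checked cause: a constant background (blank) signal that affects every standard equally does not change the slope at all, it passes straight into the intercept c of Y = mx + c. A minimal sketch with made-up, noise-free calibration data (the concentrations, slope, and background below are arbitrary illustration values):

```python
import numpy as np

# Hypothetical calibration standards: response = sensitivity * concentration
# plus a constant blank/background contribution at every level.
conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])  # analyte concentration
sensitivity = 3.2                                  # true slope m
background = 0.5                                   # constant blank signal
signal = sensitivity * conc + background           # instrument response

# Least-squares fit of Y = m*x + c: the background reappears as the
# intercept c, while the slope m is unaffected by it.
m, c = np.polyfit(conc, signal, 1)
print(round(m, 3), round(c, 3))
```

In real data the intercept can also absorb measurement noise and any non-linearity at low concentrations, so a large |c| relative to the response of the lowest standard is worth investigating (for example by running blanks) rather than attributing it to the analyte concentration itself.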

  • DJ Sullivan added an answer:
    How can I quantify the success of conflict resolution / peacebuilding initiatives?

    I’m currently looking for a way to rank the success of different conflict resolution and/ or peacebuilding initiatives throughout the world. Do you know of any data project with such indicators or methodological literature, which would help me discerning the most relevant criteria for determining various levels of success?

  • AR Atarodi added an answer:
    The social impact assessment is defined as an activity designed to identify and predict the impact. Is it sufficient?
    We know that social impact assessment (SIA) is defined as an activity designed to identify and predict the impact on the biogeophysical environment and on human health and well-being of legislative proposals, policies, programs, projects, and operational procedures, and to interpret and communicate information about the impacts and their effects. Is it sufficient to protect humans with social impact assessment alone?
    AR Atarodi · Gonabad University of Medical Sciences

    Dear Prof., thanks so much. Would you say more? It was interesting.

  • Jeri A. Milstead added an answer:
    What are the principles of conducting a comparative study?

    I have used this type of methodology to compare policies and analyze them, and I'm looking for other ways to enrich my knowledge.

    Jeri A. Milstead · University of Toledo

    Lijphart's work is seminal and I appreciate Goggin's and Williams's comments. I think there is another dimension that has not been mentioned: comparison of scale. That is, comparing a national issue to the same issue in a smaller state or province. For example, is it possible/useful to compare sex education in schools related to out-of-wedlock birthrates between a country and a state? Many variables (too many?) may emerge and cultural context must be considered.
