- Amir Zamri added an answer: Has anyone come across any research that interprets the meaning of Facebook Likes and Shares?
I'm looking for research that specifically looks into the psychological motives behind liking and sharing Facebook posts/statuses. Does "like" mean "agree", "awesome!", "good!", or simply just "like", or does clicking the button simply mean that one is aware of the post?
Is there any research that measures the number of likes and shares as part of its methodology?
Thank you, I really appreciate your reply. =)
- John F. Wilhite added an answer: Should a teacher focus on 'rigorous learning' or 'learning with entertainment'? Many university teachers have become entertainers rather than focusing mainly on value addition and learning. A lot of time is devoted to pleasing the students: knowing them personally, building good relations with them, telling jokes and creating humour; the focus becomes good feedback rather than rigour. Keeping the audience motivated is good for effective teaching, but since a lot of time goes into entertainment, less time remains for analysis and conceptualization. What is your preference, and why?
Miloud, did the article mention why the parents complained about the song? Because they shouldn't be singing at all? Because it's a song associated with Christmas (although it is not a religious song)? Parents who complain about what their children are being taught should keep them at home and educate them themselves.
- Stefan Svetsky added an answer: What are the impacts of action research on the classroom learning process of middle standard students?
What sampling and methodology can be used to find out the impacts of action research on the learning process?
- Adebayo Akanbi Oladapo added an answer: Is there any methodology to study the population structure and regeneration status of a monocot species?
I want to investigate the population structure and regeneration status of a monocot species such as Homalomena (family Araceae). Please suggest some standard methodology.
Sorry, I have no idea; this is not in my area of expertise.
- Eduardo Oliva-López added an answer: How can I choose a methodology for calculating an overall performance evaluation?
There are so many methodologies to assess overall organizational performance using a multi-criteria framework.
How can I choose?
An effective performance evaluation has to be linked to the objectives sought by the firm, since good quality management is assessed in terms of the objectives achieved. Since each company has unique objectives and resources, the analyst needs to develop his/her own tailor-made methodology on the basis of all pertinent well-known methodologies. Sorry, there is no simple answer to your question.
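To make the multi-criteria idea concrete, here is a minimal sketch of the simplest such model, a weighted-sum score, where the firm's own objectives set the weights. All criteria, weights, and scores below are invented for illustration; a real evaluation would derive them from the firm's objectives, as the answer above stresses.

```python
import numpy as np

# Hypothetical scores for three business units (rows), rated 0-10 on four
# criteria (columns): profitability, customer satisfaction, process quality,
# innovation. All numbers are illustrative only.
scores = np.array([
    [8.0, 6.0, 7.0, 4.0],   # unit A
    [5.0, 9.0, 6.0, 7.0],   # unit B
    [7.0, 7.0, 8.0, 5.0],   # unit C
])

# Weights reflect this (hypothetical) firm's priorities and sum to 1;
# a firm with different objectives would choose different weights.
weights = np.array([0.4, 0.3, 0.2, 0.1])

overall = scores @ weights           # simple additive (weighted-sum) model
ranking = np.argsort(overall)[::-1]  # indices of units, best first
print(overall)   # [6.8 6.6 7. ]
print(ranking)   # [2 0 1] -> unit C first, then A, then B
```

The point of the sketch is the sensitivity of the outcome to the weights: shifting weight from profitability to customer satisfaction would reorder the ranking, which is why the choice of weights must be tied to the firm's objectives rather than taken from a generic template.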
- Izdihar Ismail added an answer: Can we run crude extracts by GC-MS? How should aqueous and MeOH crude extracts be prepared for GC-MS, and what types of solvents can we use?
Can anyone suggest the best methodology for this? Thank you.
Very informative. Thanks to you all.
- Steve Macgillivray added an answer: How do I use the JBI SUMARI methodological quality assessment tool? Why do the JBI SUMARI methodological quality assessment tools have only appraisal questions but no scores? And how can reviewers judge whether to include or exclude a study?
Dear Mingji Zhang
You may also be interested in the guidance document produced by Cochrane:
Hannes K. Chapter 4: Critical appraisal of qualitative research. In: Noyes J, Booth A, Hannes K, Harden A, Harris J, Lewin S, Lockwood C (editors), Supplementary Guidance for Inclusion of Qualitative Research in Cochrane Systematic Reviews of Interventions. Version 1 (updated August 2011). Cochrane Collaboration Qualitative Methods Group, 2011. Available from URL: http://cqrmg.cochrane.org/supplemental-handbook-guidance
Critical appraisal of qualitative studies is an essential step within a Cochrane Intervention review that incorporates qualitative evidence.
The overarching goal of critical appraisal in the context of including qualitative research in a Cochrane Intervention Review is to assess whether the studies actually address questions under meaning, process and context in relation to the intervention and outcomes under review.
Review teams should use a critical appraisal instrument that is underpinned by a multi-dimensional concept of quality in research and hence includes items to assess quality according to several domains, including quality of reporting, methodological rigour, and conceptual depth and breadth.
Critical appraisal involves (i) filtering against minimum criteria, involving adequacy of reporting detail on data sampling, collection and analysis; (ii) technical rigour of the study elements, indicating methodological soundness; and (iii) paradigmatic sufficiency, referring to researchers' responsiveness to data and theoretical consistency.
When choosing an appraisal instrument, review teams should consider the available expertise in qualitative research within the team and should ensure that the critical appraisal instrument they choose is appropriate given the review question and the type of studies to be included.
Reviewers need to clarify how the outcome of their critical appraisal exercise is used with respect to the presentation of their findings. The inclusion of a sensitivity analysis is recommended to evaluate whether methodological flaws have a small or a large impact on the findings and conclusions.
Considerable debate exists on whether or not concepts such as validity and reliability apply to qualitative research and, if so, how they could be assessed. Some researchers have stated that qualitative research should establish validity, reliability and objectivity. Others plead for an adjustment of these concepts to better fit the qualitative research design. As a consequence, critical appraisal instruments might differ in the criteria they list to complete a critical appraisal exercise. Some researchers consider appraisal instruments a tool that can be utilized as part of the exploration and interpretation process in qualitative research (Popay et al, 1998; Spencer, 2003). Edwards et al (2002) describe the use of a "signal to noise" approach, where a balance is sought between the methodological flaws of a study and the relevance of insights and findings it adds to the overall synthesis. Other researchers do not acknowledge the value of critical appraisal of qualitative research, stating that it stifles creativity (Dixon-Woods, 2004). While recognising that all these views have some basis for consideration, certain approaches succeed in positioning the qualitative research enterprise as one that can produce a valid, reliable and objective contribution to evidence synthesis. It is these that may therefore have more potential to be generally accepted within the context of producing Cochrane Intervention Reviews. The Cochrane Collaboration recommends a specific tool for assessing the risk of bias in each included study in an intervention review, a process that is facilitated through the use of appraisal instruments addressing the specific features of the study design and focusing on the extent to which results of included studies should be believed. This suggests that in assessing the methodological quality of qualitative studies, the core criterion to be evaluated is researcher bias.
Believability in this context refers to the ability and efforts of the researcher to make his or her influence and assumptions clear and to provide accurate information on the extent to which the findings of a research report hold true. However, it is the actual audit trail provided by researchers that allows for an in-depth evaluation of a study. Most existing appraisal instruments use broader criteria that account for reporting issues as well. We suggest that these issues should be part of the appraisal exercise. Currently, there are four possibilities to make use of qualitative research in the context of Cochrane Intervention reviews:
1. The use of qualitative research to define and refine the review questions of a Cochrane Review (informing reviews).
2. The use of qualitative research identified whilst looking for evidence of effectiveness (enhancing reviews).
3. The use of findings derived from a specific search for qualitative evidence that addresses questions related to an effectiveness review (extending reviews).
4. Conducting a qualitative evidence synthesis to address questions other than effectiveness (supplementing reviews).
The latter use (supplementing) is beyond the scope of current Cochrane Collaboration policy (Noyes et al, 2008). Stand-alone qualitative reviews that supplement Cochrane Intervention reviews need to be conducted and published outside of the Cochrane context.
Critical appraisal applies to all of the above possibilities.
Reviewers should bear in mind that narratives used in reports of quantitative research cannot be considered qualitative findings if they do not use a qualitative method of data collection and analysis. Therefore, critical appraisal based on instruments developed to assess qualitative studies is not applicable to reports that do not meet the criteria of being a 'qualitative study'.
This chapter breaks down into four sections. Section 1 addresses translated versions of core criteria such as validity, reliability, generalisability and objectivity of qualitative studies. Section 2 presents an overview of the different stages involved in quality assessment. Section 3 guides the researcher through some of the instruments and frameworks developed to facilitate critical appraisal, and Section 4 formulates suggestions on how the outcome of an appraisal of qualitative studies can be used or reported in a systematic review.
Section 1: Core criteria for quality assessment
Critical appraisal is “the process of systematically examining research evidence to assess its validity, results and relevance before using it to inform a decision” (Hill & Spittlehouse, 2003). Instruments developed to support quality appraisal usually share some basic criteria for the assessment of qualitative research. These include the need for research to have been conducted ethically, the consideration of relevance to inform practice or policy, the use of appropriate and rigorous methods and the clarity and coherence of reporting (Cohen & Crabtree, 2008). Other criteria are contested, such as the importance of addressing reliability, validity, and objectivity, strongly related to researcher bias. Qualitative research as a scientific process needs to be “rigorous” and “trustworthy” to be considered as a valuable component of Cochrane systematic review. Therefore an evaluation using such criteria is essential. Nevertheless we should acknowledge that the meaning assigned to these words may differ in the context of qualitative and quantitative research designs (Spencer et al, 2003).
Does translation of terminology compromise critical appraisal?
The concepts used in table 1 are based on Lincoln and Guba’s (1985) translation of criteria to evaluate the trustworthiness of findings. Acknowledging the difference in terminology does not obviate the rationale or process for critical appraisal. There might be good congruence between the intent of meanings relevant to key aspects of establishing study criteria, as demonstrated in table 1.
Table 1: Criteria to critically appraise findings from qualitative research (quantitative criterion → qualitative counterpart, after Lincoln & Guba, 1985)
Internal validity → Credibility
External validity or generalisability → Transferability
Reliability → Dependability
Objectivity → Confirmability
This scheme outlines some of the core elements to be considered in an assessment of the quality of qualitative research. However, the concept of confirmability might not be applicable to approaches inspired by phenomenology or critical paradigms, in which the researcher's experience becomes part of the data (Morse, 2002). The choice of critical appraisal instrument should preferably be inspired by those offering a multi-dimensional concept of quality in research. Apart from methodological rigour, that would also include quality of reporting and conceptual depth and breadth.
What indications are we looking for in an original research paper?
There are a variety of evaluation techniques that authors might have included in their original reports, that facilitate assessment by a reviewer, and that are applicable to a broad range of different approaches in qualitative research. However, some of the techniques listed apply only to a specific set of qualitative research designs.
Assessing Credibility: Credibility evaluates whether or not the representation of data fits the views of the participants studied, whether the findings hold true.
Evaluation techniques include: having outside auditors or participants validate findings (member checks), peer debriefing, attention to negative cases, independent analysis of data by more than one researcher, verbatim quotes, persistent observation etc.
Assessing Transferability: Transferability evaluates whether research findings are transferable to other specific settings.
Evaluation techniques include: providing details of the study participants to enable readers to evaluate for which target groups the study provides valuable information, providing contextual background information, demographics, the provision of thick description about both the sending and the receiving context etc.
Assessing Dependability: Dependability evaluates whether the process of research is logical, traceable and clearly documented, particularly on the methods chosen and the decisions made by the researchers.
Evaluation techniques include: peer review, debriefing, audit trails, triangulation in the context of the use of different methodological approaches to look at the topic of research, reflexivity to keep a self-critical account of the research process, calculation of inter-rater agreements etc.
Assessing Confirmability: Confirmability evaluates the extent to which findings are qualitatively confirmable through the analysis being grounded in the data and through examination of the audit trail.
Evaluation techniques include: assessing the effects of the researcher during all steps of the research process, reflexivity, providing background information on the researcher’s background, education, perspective, school of thought etc.
The criteria listed might generate an understanding of the basic methodological standard a qualitative study should be able to reach. However, a study may be judged to have followed the appropriate procedures for a particular approach, yet may suffer from poor interpretation and offer little insight into the phenomenon at hand. Conversely, another study may be flawed in terms of transparency of methodological procedures and yet offer a compelling, vivid and insightful narrative, grounded in the data (Dixon-Woods et al, 2004). Defining fatal flaws and balancing assessment against the weight of a message remains a difficult exercise in the assessment of qualitative studies. As in quantitative research, fatal flaws may depend on the specific design or method chosen (Booth, 2001). This issue needs further research.
Section 2: Stages in the appraisal of qualitative research
Debates in the field of quality assessment of qualitative research designs are centred around a more theoretical approach to evaluating the quality of studies versus an evaluation of the technical adequacy of a research design. How far criteria-based, technical approaches offer significant advantages over expert intuitive judgement in assessing the quality of qualitative research is being challenged by recent evidence indicating that checklist-style approaches may be no better at promoting agreement between reviewers (Dixon-Woods, 2007). However, these appraisal instruments might succeed better in giving a clear explanation as to why certain papers have been excluded. Given the fact that few studies are completely free from methodological flaws, both approaches can probably complement each other.
Is the use of a critical appraisal instrument sufficient for assessing the quality of qualitative studies enhancing Cochrane intervention reviews?
Three different stages can be identified in a quality assessment exercise: filtering, technical appraisal and theoretical appraisal. The first stage links to the inclusion criteria for study types that should be considered to enhance or extend Cochrane Reviews and requires no specific expertise. The required expertise for the next two stages ranges from a basic understanding of qualitative criteria, needed to critically appraise studies, to a more advanced level of theoretical knowledge of certain approaches used.
Stage 1: Filtering:
Within the specific context of enhancing or extending Cochrane Reviews, and viewing critical appraisal as a technical and paradigmatic exercise, it is worth considering limiting the type of qualitative studies to be included in a systematic review. We suggest restricting included qualitative research reports to empirical studies with a description of the sampling strategy, data collection procedures and the type of data-analysis considered. This should include the methodology chosen and the methods or research techniques opted for, which facilitates the systematic use of critical appraisal as well as a more paradigmatic appraisal process. Descriptive papers, editorials or opinion papers would generally be excluded.
Stage 2: Technical appraisal:
Critical appraisal instruments should be considered a technical tool to assist in the appraisal of qualitative studies, looking for indications in the methods or discussion section that add to the level of methodological soundness of the study. This judgement determines the extent to which the reviewers may have confidence in the researcher’s competence in being able to conduct research that follows established norms (Morse, 2002) and is a minimum requirement for critical assessment of qualitative studies. Criteria include but are not limited to the appropriateness of the research design to meet the aims of the research, rigour of data-collection and analysis, well-conducted and accurate sampling strategy, clear statements of findings, accurate representation of participants’ voices, outline of the researchers’ potential influences, background, assumptions, justifications of the conclusion or whether or not it flows from the data, value and transferability of the research project etc. For this type of appraisal one needs to have a general understanding of qualitative criteria. Involving a researcher with a qualitative background is generally recommended.
Stage 3: Theoretical appraisal:
In addition to assessing the fulfilment of technical criteria, we suggest a subsequent, paradigmatic approach to judgement, with a focus on the research paradigm used in relation to the findings presented. Although some critical appraisal instruments integrate criteria related to theoretical frameworks or paradigms, most of them are pragmatic. These do little to identify the quality of the decisions made, the rationale behind them, or the responsiveness or sensibility of the researcher to the data. Therefore, other criteria should be considered. This would, for example, include an evaluation of methodological coherence or congruity between the paradigms that guide the research project and the methodology and methods chosen, an active analytic stance and theoretical position, investigator responsiveness and openness, and verification, which refers to systematically checking and confirming the fit between the data gathered and the conceptual work of analysis and interpretation (Morse et al, 2002). For this type of overall judgement a more in-depth understanding of approaches to qualitative research is necessary. It is therefore recommended that a researcher with experience of qualitative research, who can guide others through the critical appraisal process, is invited. Experienced methodologists may have valuable insights into potential biases that are not at first apparent. It should be mentioned, though, that the need for a paradigmatic input might depend on the type of synthesis chosen.
The Cochrane Qualitative Research Methods group recommends stage 3 whenever the instrument chosen for stage 2 does not cover a paradigmatic approach to judgement.
Other considerations include involving people with content expertise for the evaluation exercise. They are believed to give more consistent assessments, which is in line with what the Cochrane Collaboration suggests for the assessment of risk of bias in trials (Oxman et al, 1993).
Section 3: A selection of instruments for quality assessment
A range of appraisal instruments and frameworks is available for use in the assessment of the quality of qualitative research. Some are generic, being applicable to almost all qualitative research designs; others have been developed specifically for use with certain methods or techniques. The instruments also vary with regard to the criteria that they use to guide the critical appraisal process. Some address paradigmatic aspects related to qualitative research; others tend to focus on the quality of reporting more than theoretical underpinnings. Nearly all of them address credibility to some extent. The list of examples presented below is not exhaustive, with many instruments still in development or yet to be validated and others not yet commonly used in practice. It draws on the findings of a review of published qualitative evidence syntheses (Dixon-Woods et al, 2007) and its ongoing update. Reviewers need to decide for themselves which instrument appears to be most appropriate in the context of their review and use this judgement to determine their choice. Researchers with a quantitative background also need to consider input from a researcher familiar with qualitative research, even when an appraisal instrument suitable for novices in the field is chosen.
Which instruments or frameworks are out there?
Checklists embedded in a software program to guide qualitative evidence synthesis:
Some evidence synthesis organisations have developed and incorporated a checklist in the software they make available to assist reviewers with the synthesis of qualitative findings. Typically, potential reviewers need to register to be able to use it. However, the instruments are also available outside the software program on the websites of both organisations.
QARI software developed by the Joanna Briggs Institute, Australia
Used by: Pearson A, Porritt KA, Doran D, Vincent L, Craig D, Tucker D, Long L, Henstridge V. A comprehensive systematic review of evidence on the structure, process, characteristics and composition of a nursing team that fosters a healthy environment. International Journal of Evidence-Based Healthcare 2006; 4(2): 118-59.
Rhodes LG et al.Patient subjective experience and satisfaction during the perioperative period in the day surgery setting: a systematic review. Int J Nurs Pract 2006; 12(4): 178-92.
EPPI-reviewer developed by the EPPI Centre, United Kingdom
Used by: Bradley P, Nordheim L, De La Harpa D, Innvaer S & Thompson C. A systematic review of qualitative literature on educational interventions for evidence-based practice. Learning in Health & Social Care 2005: 4(2):89-109.
Harden A, Brunton G, Fletcher A, Oakley A. Teenage pregnancy and social disadvantage: a systematic review integrating trials and qualitative studies. British Medical Journal Oct 2009.
Other online available appraisal instruments:
Most of the instruments in this selection are easily accessible and clearly define what is meant by each individual criterion listed. As such, they may be particularly useful if reviewers with little experience of qualitative research are required to complete an assessment.
Critical Appraisal Skills Programme (CASP): http://www.phru.nhs.uk/Doc_Links/Qualitative%20Appraisal%20Tool.pdf
Used by: Kane GA et al. Parenting programmes: a systematic review and synthesis of qualitative research. Child Care Health and Development 2007; 33(6): 784-793.
Modified versions of CASP, used by:
Campbell R, Pound P, Pope C, Britten N, Pill R, Morgan M, Donovan J. Evaluating meta-ethnography: a synthesis of qualitative research on lay experiences of diabetes and diabetes care. Social Science and Medicine 2003; 56: 671-84.
Malpass A, Shaw A, Sharp D, Walter F, Feder G, Ridd M, Kessler D. "Medication career" or "moral career"? The two sides of managing antidepressants: A meta-ethnography of patients' experience of antidepressants. Soc Sci Med. 2009; 68(1): 154-68.
Quality Framework UK Cabinet Office
Used by: MacEachen E et al. Systematic review of the qualitative literature on return to work after injury. Scandinavian Journal of Work Environment & Health 2006; 32(4): 257-269.
Evaluation Tool for Qualitative Studies
Used by: McInnes RJ & Chambers JA. Supporting breastfeeding mothers: qualitative synthesis. Journal of Advanced Nursing 2008; 62(4): 407-427.
Checklists developed by academics and commonly used in published qualitative evidence syntheses: Such checklists have been selected and utilised by other researchers in the specific context of an evidence synthesis.
The Blaxter (1996) criteria for the evaluation of qualitative research papers, used by:
Gately C et al. Integration of devices into long-term condition management: a synthesis of qualitative studies. Chronic Illn 2008; 4(2): 135-48.
Khan N et al. Guided self-help in primary care mental health - Meta-synthesis of qualitative studies of patient experience. British Journal of Psychiatry 2007; 191: 206-211.
The Burns’ (1989) standard for qualitative research, used by:
Barroso J, Powell-Cope GM. Meta-synthesis of qualitative research on living with HIV infection. Qualitative Health Research 2000; 10: 340-53.
Thorne S, Paterson B. Shifting images of chronic illness. Image: Journal of Nursing Scholarship 1998: 30; 173-8.
Hildingh C et al. Women's experiences of recovery after myocardial infarction: a meta-synthesis. Heart Lung 2007; 36(6): 410-7.
Howard AF, Balneaves LG, Bottorff JL. Ethnocultural women’s experiences of breast cancer: a qualitative meta-study. Cancer nursing 2007 30(4): E27-35.
The Popay et al (1998) criteria, used by:
Attree P. Low-income mothers, nutrition and health: a systematic review of qualitative evidence. Maternal and Child Nutrition 2005 1(4): 227-240.
Sim J & Madden S. Illness experience in fibromyalgia syndrome: A metasynthesis of qualitative studies." Social Science & Medicine 2008; 67(1): 57-67.
Yu D et al. Living with chronic heart failure: a review of qualitative studies of older people. J Adv Nurs 2008; 61(5): 474-83.
The Mays & Pope (2000) criteria, used by:
Humphreys A et al. A systematic review and meta-synthesis: evaluating the effectiveness of nurse, midwife/allied health professional consultants. Journal of Clinical Nursing 2007; 16(10): 1792-1808.
Metcalfe A et al. Family communication between children and their parents about inherited genetic conditions: a meta-synthesis of the research. Eur J Hum Genet 2008; 16(10): 1193-200.
Robinson L. & Spilsbury K. Systematic review of the perceptions and experiences of accessing health services by adult victims of domestic violence. Health Soc Care Community 2008; 16(1): 16-30.
URL: http://www.joannabriggs.edu.au/cqrmg/tools_3.html. A detailed guide on how to conduct a QARI-supported systematic review, including a detailed explanation of the 10 critical appraisal criteria, can be found on the JBI website: http://www.joannabriggs.edu.au/pdf/sumari_user_guide.pdf
URL for instrument on process evaluation: http://eppi.ioe.ac.uk/cms/default.aspx?tabid=2370&language=en-US. Tools exist which help to assess quality along three dimensions: quality of reporting, sufficiency of strategies for increasing methodological rigour, and the extent to which study methods and findings are appropriate to answer the review question (for an example, see the Harden et al 2009 study).
- Vinayak K. Nahar added an answer: Can anyone recommend guidelines for conducting a systematic review of cross-sectional studies? I am looking for guidelines to conduct a systematic review of cross-sectional studies. Any recommendations are appreciated.
Thanks, Dr. Groot, for sharing great work.
- Michael W. Marek added an answer: If there are a number of theories that fit your topic, what's the best way to select one? Should we add theories according to our variables or the main purpose of the study?
Part of it depends on the ROLE of the theory in your study. If the theory framework will be the source of your data collection plan, then you need a theory that aligns most closely with your study research questions, participants, etc. On the other hand, if you are looking for a theoretical framework for analysis in the Discussion section of your paper, then you could use more than one. In fact, multiple ways of analysis in your discussion could be a good thing, because the analysis would therefore be richer.
- Jane Murray added an answer: What are colleagues' best and worst experiences of action research? I work with early childhood students who work full-time and study part-time. They are often drawn to action research as the methodology for their dissertation, but many go on to encounter difficulties in its implementation. I want to build a repository that might help the students when they meet problems.
Thank you to contributing colleagues for some valuable suggestions.
- Yibadatihan Simayi added an answer: What is the meaning of the quadratic effect of factors in response surface methodology (RSM)?
We usually use RSM in optimization studies to investigate the effect of different factors on a response and to find the optimum factor levels for the proposed responses. Here we have linear, interaction and quadratic effects. What is the meaning of the quadratic effect?
In that case, you will have the individual effect of each factor which showed significance, meaning the factor itself has a significant (positive or negative) effect on the variation of your response but does not have any interaction effects with the other factors you studied.
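In a second-order RSM model, y = b0 + Σ bi·xi + Σ bij·xi·xj + Σ bii·xi², the quadratic coefficients bii capture curvature: a significant negative bii means the response bends downward along that factor (suggesting a maximum inside the region studied), while a positive bii suggests a minimum. A minimal sketch in Python, using synthetic two-factor data whose true surface and coefficients are invented for illustration:

```python
import numpy as np

# Synthetic two-factor experiment with a known (assumed) true surface:
# y = 10 + 2*x1 + 3*x2 + 0.5*x1*x2 - 1.5*x1^2 - 2*x2^2  (+ noise)
# The negative quadratic terms mean the surface has an interior maximum.
rng = np.random.default_rng(0)
x1 = rng.uniform(-2, 2, 50)
x2 = rng.uniform(-2, 2, 50)
y = (10 + 2*x1 + 3*x2 + 0.5*x1*x2
     - 1.5*x1**2 - 2*x2**2 + rng.normal(0, 0.1, 50))

# Full second-order design matrix: intercept, linear, interaction, quadratic
X = np.column_stack([np.ones_like(x1), x1, x2, x1*x2, x1**2, x2**2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# b should come out close to [10, 2, 3, 0.5, -1.5, -2]; the negative
# quadratic coefficients (b[4], b[5]) signal downward curvature, i.e.
# an optimum (maximum) within the factor region studied.
print(b.round(2))
```

If the quadratic coefficients were not significant, the fitted surface would be a plane (plus interaction twist) and the "optimum" would always lie on the boundary of the region, which is why the quadratic effect is what makes interior optimization possible in RSM.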
- Abdul Razaque asked a question: Is this example of research considered misconduct according to the US government?
You are doing an experiment sponsored by the National Institutes of Health, a U.S. federal agency. In your experiment, you are testing the impact of a new method of exposure to chlorofluorocarbons on lung tissue using low-dose spiral computerized tomography. The protocol you are using was already approved and requires you to screen 200 subjects. You have completed 190 subjects and need to do just ten more. However, it is time for spring break and you really want to go with your friends. You decide to use the data for the 190 subjects and extrapolate the results for the remaining 10.
Look up what "research misconduct" is according to the US government. Is this research misconduct? Why or why not? Would it be misconduct if, because of sloppy record keeping, you actually thought you had completed 200 subjects and only later realized your error of having completed just 190?
- Paulinus Woka Ihuah added an answer: What are the advantages and disadvantages of mixed methods research?
I was considering using a mixed methods approach for a future research topic. I would appreciate the views of others in relation to their experiences and views about mixed methodology.
Several definitions exist for the mixed methods approach: it is a research inquiry that employs both qualitative and quantitative approaches in a single piece of research for the purposes of breadth and depth of understanding and corroboration (Johnson et al., 2007). Creswell and Plano Clark (2011) added that the indispensable premise of mixed methods design is that the use of qualitative and quantitative approaches, in combination, will provide a better understanding of the research problems than the use of either method alone. This is argued to be one of, if not the, most central premises of pragmatic philosophical reasoning in research today (Tashakkori and Teddlie, 2003; Ihuah and Eaton, 2013). Please read these sources for more details.
- James R Knaub added an answer:On the linearity of an analytical procedure, what does the intercept depend on?
When we validate an analytical determination method with instruments like HPLC, LC-MS, etc., we conduct linearity tests to create calibration curves, from which we get the linear equation Y = mx ± c. For the intercept, we sometimes get very low or very high values. What does this observation result from? Does it relate to the concentration of the analyte? If so, how?
If you do want to see how your points fit into a classical ratio estimator model, you could plot confidence bounds, assuming normality of residuals (which could be problematic near the origin), by using the attached spreadsheet.
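To make the intercept question concrete: a rough way to judge whether an intercept is meaningfully different from zero is to compare it against its standard error from the least-squares fit. The sketch below is a minimal illustration with hypothetical calibration data (the numbers are made up, not from the thread), using only numpy; it does not replace a proper validation protocol.

```python
import numpy as np

def calibration_fit(conc, signal):
    """Fit signal = m*conc + c by ordinary least squares and return
    slope, intercept, and the intercept's standard error."""
    conc = np.asarray(conc, float)
    signal = np.asarray(signal, float)
    n = len(conc)
    # Design matrix [conc, 1] for y = m*x + c
    X = np.column_stack([conc, np.ones(n)])
    coef, *_ = np.linalg.lstsq(X, signal, rcond=None)
    m, c = coef
    # Residual variance and covariance matrix of the estimates
    resid = signal - X @ coef
    s2 = resid @ resid / (n - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    se_c = np.sqrt(cov[1, 1])
    return m, c, se_c

# Hypothetical 5-point calibration with a small constant offset
conc = [0.0, 1.0, 2.0, 4.0, 8.0]
signal = [0.05, 1.02, 2.10, 4.03, 8.08]
m, c, se_c = calibration_fit(conc, signal)
# An |intercept| much larger than ~2*se_c points to a systematic
# offset (blank contamination, carryover, baseline drift), not noise.
print(f"slope={m:.4f} intercept={c:.4f} se(intercept)={se_c:.4f}")
```

An intercept comparable to its standard error is consistent with zero; a large one relative to the analyte signal at the lowest standard suggests the offset depends on the blank or the baseline, not on the analyte concentration itself.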
- DJ Sullivan added an answer:How can I quantify the success of conflict resolution / peacebuilding initiatives?
I’m currently looking for a way to rank the success of different conflict resolution and/or peacebuilding initiatives throughout the world. Do you know of any data project with such indicators, or methodological literature, which would help me discern the most relevant criteria for determining various levels of success?
- AR Atarodi added an answer:Social impact assessment is defined as an activity designed to identify and predict impacts. Is it sufficient?We know that social impact assessment (SIA) is defined as an activity designed to identify and predict the impact on the biogeophysical environment and on human health and well-being of legislative proposals, policies, programs, projects, and operational procedures, and to interpret and communicate information about the impacts and their effects. Is social impact assessment alone sufficient to protect humans?
Dear Prof., thanks so much. Would you say more? It was interesting.
- Jeri A. Milstead added an answer:What are the principles of conducting a comparative study?
I have used this type of methodology to compare policies and analyze them, and I'm looking for other ways to enrich my knowledge.
Lijphart's work is seminal and I appreciate Goggin's and Williams's comments. I think there is another dimension that has not been mentioned: comparison of scale. That is, comparing a national issue to the same issue in a smaller state or province. For example, is it possible/useful to compare sex education in schools related to out-of-wedlock birthrates between a country and a state? Many variables (too many?) may emerge and cultural context must be considered.
- Samuel Freije added an answer:Can anyone explain the difference between inequality of outcome and opportunities and also the measures to estimate inequality of opportunity?
I am looking for the methodology which can be used to estimate the inequality of opportunity using household level data.
An interesting review of both the concept of inequality of opportunities and different methodological approaches to measure it can be found at:
Pignataro, G. (2012), "Equality of opportunity: Policy and Measurement paradigms", Journal of Economic Surveys, Vol. 26, No. 5, pp. 800–834.
- James R Knaub added an answer:Can anyone explain random coefficient model to me?
I recently came across the term "random coefficient model" in an article under the methodology section. If anyone has a concise definition and explanation for its use, please reply to this thread.
Attached is a link to a short paper with regard to this. I like the way it goes smoothly from equation 1 there to equation 2 to equation 3. (I'm not wild about ignoring heteroscedasticity in the error term in those equations, though. However, it does indicate further down that "z" can be used to 'model variance heterogeneity' here.)
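The idea behind a random coefficient model is that the regression slope itself varies across groups (patients, schools, firms) around a common fixed value. A minimal way to see this, without a mixed-model library, is a two-stage sketch on simulated data (all numbers below are invented for illustration): fit a slope per group, then treat the mean as the fixed effect and the spread as the random-coefficient variation. Dedicated software would estimate both jointly by maximum likelihood; this is only a conceptual approximation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 20 groups; each group has its own slope drawn around a
# common value: y_ij = (beta + b_i) * x_ij + e_ij.
beta_true, slope_sd, noise_sd = 2.0, 0.5, 0.2
slopes_hat = []
for _ in range(20):
    b_i = rng.normal(0.0, slope_sd)           # random slope deviation
    x = rng.uniform(0, 10, size=30)
    y = (beta_true + b_i) * x + rng.normal(0, noise_sd, size=30)
    # Stage 1: per-group no-intercept OLS slope, matching the model
    slopes_hat.append((x @ y) / (x @ x))

# Stage 2: the mean slope estimates the fixed effect; the spread of
# per-group slopes reflects the random-coefficient variance.
slopes_hat = np.array(slopes_hat)
print("fixed slope ~", slopes_hat.mean())
print("slope sd   ~", slopes_hat.std(ddof=1))
```

The recovered mean should sit near 2.0 and the standard deviation near 0.5, which is exactly the distinction the model draws: one average relationship, plus group-level variation in that relationship.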
- Candido S. Pires-Neto added an answer:Changes in body composition. What can be considered as longitudinal?I would like to analyze longitudinal changes in body composition (fat mass, fat-free mass). My dataset includes participants with different times from baseline measurement (ranging from 1 to 70 months). I plan to categorize all subject given the time from baseline (>1 year, 1–2 years from baseline etc.), however I'm not sure if the range of first category should begin from 1 month. I think this is insufficient for changes in body composition. Could anyone suggest what is sufficient beginning of interval for first category?
A longitudinal study requires, at least, three measurements (some research and methodology books and articles suggest that two are adequate; with all due respect, I strongly disagree). Considering the age span you mentioned, children's anthropometric and body composition variables develop quite fast. Also, I have no idea how many children you have already measured or will measure, but a 6-month interval between data collections is quite good. You could also consider a 4-month interval, but you must consider how many people will help you out. I hope some of this is of help. By the way, sorry for my poor English.
- Dr. Nizar Matar added an answer:How can we process plastic waste (like the plastic polythene used in daily life) into wax?
Do you have any idea regarding the technology or methodology used?
Hase Petroleum Wax Company specializes in producing polyethylene waxes (please see the link below).
As Dr. Abdelkader BOUAZIZ indicated, the production is very possible, but it needs some work and effort. The feasibility of the process follows from a simple piece of information: the main component of commercial paraffin wax is the alkane eicosane, C20H42. This alkane can be considered a low-molecular-weight polyethylene polymer (or oligomer). Therefore, a controlled pyrolysis under vacuum will most probably give eicosane or similar waxy products.
- Narong Chamkasem added an answer:Is SPE for leaf samples really useful?
I want some information about the utility of SPE for PAH determination in leaf samples. Can I try without it? Has someone tried it? Any methodological advice is welcome.
Because a fluorometer is more selective, it will not see any interference that does not fluoresce at the specific wavelengths of PAHs, and it will be more sensitive because it has a better signal-to-noise ratio than the non-selective UV detector. Your extract is already quite dirty even with sample cleanup. Try not to make your life too complicated.
- Peter Wennberg added an answer:Does anyone have suggestions for criteria for evaluating different typologies?If you face a situation when you are about to compare different typologies against each other, how do you do this?
I've come up with some criteria, such as: the types (or clusters) should occur as natural clusters empirically, most cases should be classifiable, and the typology should be sound from a clinical point of view.
Thanks a lot, this is exactly the type of reference I've been searching for!
- Richard C. Henriksen added an answer:How can grounded theory be used as a methodology for studying urban development in a region?
It is for my PhD research. I request your suggestions and comments. My case study is a rapidly urbanizing wetland region in Kerala, India
The focus of grounded theory is on the creation of a new theory that leads to better understanding of a culture that is not clearly defined. Charmaz (2006) stated that "a journey begins before the travelers depart. So, too, our grounded theory adventure begins as we seek information about what a grounded theory journey entails and what to expect along the way." Qualitative research begins with the topic and defining what is to be discovered. From there, an appropriate methodology is chosen that matches the purpose of the study and the research question. It is difficult if not impossible to apply a methodology after the data are collected, because you have already answered the question for which an answer is being sought. I would suggest looking at a phenomenological method, as that might help you as you seek to uncover the themes that define the emerging culture. Just my thoughts. I wish you great success with your project.
- Farheen Khan added an answer:Can someone suggest a methodology to decipher the prophylactic potentialities of silver nanoparticle against imperative pathogens?
Silver is known to possess prophylactic potential. Can anyone suggest a methodology to decipher the same? I am finding it difficult to carry out, as there is no literature available on the methodology to be employed.
Thanks for your kind suggestions; unfortunately, I was asking for a method to evaluate the preventive (prophylactic) potential of silver nanoparticles against Listeria or Staphylococcus infection.
- Tahir Mehmood Khan added an answer:How effective are quasi non-randomized studies?Single-arm post-exposure quasi non-randomized studies are one of the research methods that are not used very often. Many fear that the maturation effect over time is hard to predict with such studies. However, if proper longitudinal follow-up is maintained, such study designs can help estimate the baseline effect of the intervention, especially in cases where RCTs are not possible.
Dear all, thanks for your expert opinions. I just want to share my experience with using a single-arm post-exposure quasi design.
We did a study where other therapies had failed to provide an effective response among uremic pruritus patients. Pregabalin was recommended for this population. A baseline assessment was done before starting pregabalin, and assessments to estimate improvement were done at weeks 2, 4, 6, 8, and 10. We used generalized estimating equations (GEE) to assess effectiveness over time. Though there was no control group, GEE produced precise estimates and also helped us predict the reduction in itching score over time.
In my experience, in cases where a placebo can't be used and the disease is resistant to existing treatment, quasi post-exposure single-arm studies can give an effective estimate of the intervention's effect.
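For readers unfamiliar with the GEE idea mentioned above: under an independence working correlation, the GEE point estimate for a linear model coincides with pooled OLS, and the clustering by patient is handled through a robust (sandwich) covariance. The sketch below simulates repeated itching scores (all numbers invented, not the study's data) and computes that cluster-robust standard error with plain numpy; real analyses would use dedicated GEE software such as statsmodels.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated repeated measures: itching score for 40 patients at
# baseline and weeks 2..10, declining over time, with a patient-level
# random shift creating within-patient correlation.
weeks = np.array([0, 2, 4, 6, 8, 10])
rows = []
for pid in range(40):
    patient_effect = rng.normal(0, 1.0)
    for w in weeks:
        score = 8.0 - 0.3 * w + patient_effect + rng.normal(0, 0.5)
        rows.append((pid, w, score))
ids = np.array([r[0] for r in rows])
X = np.column_stack([np.ones(len(rows)), [r[1] for r in rows]])
y = np.array([r[2] for r in rows])

# Pooled OLS slope = GEE estimate under an independence working
# correlation; the sandwich covariance sums per-patient score
# contributions so correlated observations are not treated as independent.
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
resid = y - X @ beta
meat = np.zeros((2, 2))
for pid in np.unique(ids):
    m = ids == pid
    g = X[m].T @ resid[m]              # per-patient score contribution
    meat += np.outer(g, g)
cov = XtX_inv @ meat @ XtX_inv
print("weekly change in itching score:", beta[1], "+/-", np.sqrt(cov[1, 1]))
```

The slope recovered is close to the simulated -0.3 per week; the appeal of GEE in the single-arm setting is exactly this: a valid standard error for the within-patient trend despite repeated, correlated measurements.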
- Abdulvahed Khaledi Darvishan added an answer:What are the main characteristics of a good hypothesis?The main characteristics of a good hypothesis.
Thank you for your answers and your nice question!
I think a conjecture is just a "conjecture", while a hypothesis is based on scientific information and experience and has the ability to direct the research.
Any hypothesis can be a conjecture, but not every conjecture is a hypothesis.
What do you think?
- Kees-Jan Kan added an answer:Should I exclude a control variable from the final SEM model when it is insignificant?
I tested a structural model where gender (A) and another variable (B) predict Y. However, I found that male students were significantly (0.7 years) older than female participants in the sample. This means that A is correlated with age (C). Thus, I controlled for C by including it as a third predictor. Yet C does not predict Y. I wonder whether it is correct to exclude C from the final model (on condition that it does not affect the model fit)?
You inserted the variable C for a reason (theoretical or otherwise), so you should leave it in.
What you can do is decide whether or not the variable has any effect. You can accomplish this by fixing the regression coefficient to 0 and testing whether this leads to a significant decrease in model fit (compared to the original model fit). If so, the model in which the coefficient is estimated freely is the preferred model. If not, the model in which the coefficient is 0 is the preferred model, and you can assume the variable has no effect.
The message is: don't look at the significance of individual parameters, judge complete models.
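The comparison of the free and constrained models is a chi-square difference test: the difference of the two model chi-squares is itself chi-square distributed, with df equal to the number of fixed parameters. For the one-parameter case (fixing C's coefficient to 0), the p-value has a closed form via `math.erfc`, so a small stdlib-only helper suffices. The fit statistics below are invented purely to illustrate the call.

```python
import math

def chi2_diff_test_1df(chisq_free, chisq_fixed):
    """Chi-square difference test for two nested SEM models that differ
    by ONE fixed parameter (df difference = 1).

    chisq_free:  model chi-square with the coefficient freely estimated
    chisq_fixed: model chi-square with the coefficient fixed to 0
    Returns (delta_chisq, p_value). The survival function of a
    chi-square variable with 1 df is erfc(sqrt(x/2))."""
    delta = chisq_fixed - chisq_free   # fixing a parameter can only worsen fit
    p = math.erfc(math.sqrt(delta / 2.0))
    return delta, p

# Hypothetical fit statistics: fixing C's path to 0 raises the model
# chi-square from 12.3 (df=8) to 12.9 (df=9): delta = 0.6 on 1 df.
delta, p = chi2_diff_test_1df(12.3, 12.9)
print(f"delta chi-square = {delta:.2f}, p = {p:.3f}")
# p > .05 here, so the constrained model (no effect of C) is retained.
```

This is exactly the "judge complete models" advice in practice: a non-significant difference means the simpler, constrained model is preferred, regardless of how the individual coefficient's own p-value looked.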
- Keith Wolodko added an answer:What is the theory building process?Please explain the theory building process.
Are you talking about grounded theory?