Science topic

Bias (Epidemiology) - Science topic

Any deviation of results or inferences from the truth, or processes leading to such deviation. Bias can result from several sources: one-sided or systematic variations in measurement from the true value (systematic error); flaws in study design; deviation of inferences, interpretations, or analyses based on flawed data or data collection; etc. There is no sense of prejudice or subjectivity implied in the assessment of bias under these conditions.
Questions related to Bias (Epidemiology)
  • asked a question related to Bias (Epidemiology)
Question
2 answers
Identifying and mitigating biases in data and algorithms is crucial to ensure fairness, transparency, and accountability in AI systems. Here is a comprehensive approach:
Identifying Biases
1. *Data auditing*: Analyze data sources, collection methods, and preprocessing techniques to detect potential biases.
2. *Exploratory data analysis*: Use statistical methods and visualization tools to identify patterns, outliers, and disparities in the data.
3. *Bias detection tools*: Utilize specialized tools, such as AI Fairness 360, to detect biases in data and algorithms.
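As a concrete illustration of point 3, here is a minimal sketch of dataset-level bias detection with the AI Fairness 360 (aif360) package; the dataset, the column names ("sex", "hired"), and the group coding are invented for illustration only.

```python
# A minimal sketch of dataset-level bias detection with AI Fairness 360.
# Column names ("hired", "sex") and the privileged coding are illustrative.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

df = pd.DataFrame({
    "sex":   [1, 1, 1, 0, 0, 0, 1, 0],   # 1 = privileged group, 0 = unprivileged
    "score": [0.9, 0.7, 0.8, 0.6, 0.5, 0.4, 0.6, 0.7],
    "hired": [1, 1, 1, 0, 0, 1, 0, 0],   # binary outcome label
})

dataset = BinaryLabelDataset(
    df=df, label_names=["hired"], protected_attribute_names=["sex"],
    favorable_label=1, unfavorable_label=0,
)
metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)
# Disparate impact: ratio of favorable-outcome rates across groups (1.0 = parity).
print("Disparate impact:", metric.disparate_impact())
# Statistical parity difference: difference of the same rates (0.0 = parity).
print("Statistical parity difference:", metric.statistical_parity_difference())
```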
Types of Biases
1. *Selection bias*: Biases in data collection or sampling methods.
2. *Confirmation bias*: Biases in algorithm design or training data that reinforce existing beliefs.
3. *Anchoring bias*: Biases in algorithmic decision-making that rely too heavily on initial or default values.
4. *Availability heuristic bias*: Biases in algorithmic decision-making that overemphasize vivid or memorable events.
Mitigating Biases
1. *Data preprocessing*: Clean and preprocess data to remove biases and ensure consistency.
2. *Data augmentation*: Increase dataset diversity by adding new data points or transforming existing ones.
3. *Regularization techniques*: Use regularization methods, such as L1 and L2 regularization, to reduce overfitting and biases.
4. *Fairness-aware algorithms*: Develop algorithms that incorporate fairness metrics and constraints.
5. *Human oversight and review*: Implement human review processes to detect and correct biases in AI decision-making.
6. *Diverse and inclusive teams*: Foster diverse and inclusive teams to bring different perspectives and reduce biases in AI development.
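As one concrete example of points 1 and 4 above, here is a minimal sketch of a fairness-aware pre-processing step (reweighing) with aif360; again, the data, column names, and group coding are illustrative assumptions.

```python
# A minimal sketch of one pre-processing mitigation: reweighing instances so
# that favorable-outcome rates balance across groups before model training.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.algorithms.preprocessing import Reweighing

df = pd.DataFrame({
    "sex":   [1, 1, 1, 0, 0, 0, 1, 0],   # 1 = privileged group, 0 = unprivileged
    "hired": [1, 1, 1, 0, 0, 1, 0, 0],
})
dataset = BinaryLabelDataset(
    df=df, label_names=["hired"], protected_attribute_names=["sex"],
    favorable_label=1, unfavorable_label=0,
)

rw = Reweighing(unprivileged_groups=[{"sex": 0}], privileged_groups=[{"sex": 1}])
reweighed = rw.fit_transform(dataset)

# The data values are unchanged; only per-instance weights are adjusted.
# Passing these weights to a learner (e.g. sample_weight in scikit-learn)
# counteracts the measured disparity during training.
print(reweighed.instance_weights)
```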
Best Practices
1. *Document and report biases*: Transparently document and report biases in data and algorithms.
2. *Continuously monitor and evaluate*: Regularly monitor and evaluate AI systems for biases and fairness.
3. *Establish accountability*: Establish clear accountability and responsibility for AI decision-making and biases.
4. *Foster a culture of fairness*: Encourage a culture of fairness and transparency within organizations developing AI systems.
Relevant answer
Answer
Wow -- one AI addressing another, with neither of them getting proper citation.
  • asked a question related to Bias (Epidemiology)
Question
2 answers
# 175
Dear He Huang, Shary Heuninckx, Cathy Macharis
I read your paper:
20 years review of the multi actor multi criteria analysis (MAMCA) framework: a proposition of a systematic guideline
My comments:
1- In the abstract you say “emergence of stakeholder-based multi-criteria group decision making (MCGDM) frameworks. However, traditional MCGDM frequently overlooks the interactions and trade-offs among different actors and criteria”.
I completely agree with this, possibly the most important feature in MCDM: interaction, which 99% of methods ignore. They prefer to work as if criteria were independent entities and then add up the results. MCDM does not work with the concept that the result is A ∪ B, or sum, when it should be A ∩ B, or intersection.
I have been claiming this for years in RG, and yours is the first paper I have read that addresses it.
Your paper also addresses the very important issue that it is the stakeholders who decide the alternatives, projects or options, as well as the criteria they are subject to.
2- Page 2: “it is necessary to involve more than one decision maker (DM) to appraise the possible alternatives in the interest of, for example, diverse perspectives, increased acceptance of decision, and reduced bias”.
In my opinion, the DM, in the first stage of the process, is only an instrument that receives information and demands from the stakeholders, translates them into the decision matrix, and selects the MCDM method.
His/her most important function is to analyze the result, make corrections as per his/her know-how and expertise, and recommend the solution to the stakeholders. They are the decision-makers.
3- Page 2: “The stakeholders can be defined as individuals or interest groups that have vested interests in the outcome of a particular issue or the decision being considered (Freeman et al. 2010).”
Absolutely correct, because each one is responsible for an area of the project. These are the people who know what is needed or wished for, and you also emphasise it.
4- Page 3 “The original objective of MAMCA is to help actors understand the preferences and priorities of all relevant stakeholders, and to identify and evaluate different alternative solutions for which a consensus can be reached (Macharis & Bernardini, 2015). It is a decision-support framework with ’stakeholder involvement’ as a keyword”.
5- In my opinion, the word ‘preferences’ should be banned in MCDM. Normally, a stakeholder does not have preferences. A production manager does not have a preference for fabricating product A or product B, or on the importance of each product; he follows instructions from a plan that has been decided at the highest levels. It is absurd to think, for instance, that rejects are 3 times more important than quality, when this comes from a person who possibly does not have the faintest idea about production. The stakeholder has a production plan and has to comply with it.
In my opinion, and after reading hundreds of papers, I have realized that many authors have only a theoretical vision of the problem, ignore the reality, and try to solve a problem that exists only on paper.
Another word that I find inappropriate is “consensus”. In MCDM, consensus is a strange word, because most of the time there is a fight among the different stakeholders and components, where some must give and others receive.
In 1974 Zeleny defined the MCDM problem as a ‘compromise’, a balance between all parts, and that is only possible using an MCDM method; that is, it is the method which, for example, must decrease a production goal to satisfy another goal, such as the financial objective of a return of, say, 6%. It is impossible for a human being to consider all the hundreds of interactions necessary to reach a balanced solution.
The MCDM method knows nothing about consensus, but it knows how to find an equilibrium or balance for the whole system.
6- Page 4 “The most relevant criteria are selected for every stakeholder and weights are elicited that reflect their importance”
I am afraid I do not concur on weights. Weights quantify the relative importance of criteria and can be derived by either subjective or objective procedures.
Weights of the first kind are useless in MCDM, while those of the second kind are very useful. In countless publications, as in yours, it is said that weights are fundamental in MCDM. This is an intuitive concept without any mathematical support.
I agree, however, that in general, in most projects, criteria have different importance, no doubt about it, and that the experience of the DM is valuable and must be incorporated into the MCDM process, but at the right time and in the proper mode.
Just consider that criteria are linear equations and, as such, subject to the laws of linear algebra.
Linear equations can be graphically represented as straight lines in an x-y plot, and have different slopes that depend on their values.
When you apply a weight to a criterion, it multiplies each value within it. This causes the criterion line to displace parallel to itself, but the distances between values are preserved. When this is done for other criteria, which are multiplied by different weight values, the respective lines displace parallel to themselves, because within each one the distance between values is the same.
What is not the same is the distance between two criteria, because it depends on the different weight values. As can be seen, there is nothing in these weights that helps to evaluate alternatives.
It is different with entropy, where each criterion obtains an entropic value that quantitatively informs the degree of dispersion between its values. It is precisely this property that makes entropy useful: a criterion with high entropy denotes closeness between the values within the criterion.
Its complement to 1 indicates the amount of information each criterion has with which to evaluate alternatives, following Shannon's theorem.
Therefore, weights only show the geometric displacement of a whole criterion, while entropy shows the discrimination of values within each criterion.
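To make the contrast concrete, here is a minimal numeric sketch of the entropy-weight calculation described above (normalize each criterion, compute its Shannon entropy, and take the complement to 1 as its information content); the decision matrix is invented for illustration.

```python
# A minimal sketch of entropy-based criteria weights (Shannon entropy):
# high entropy = values close together = little discriminating information;
# the weight is proportional to the complement to 1.
import numpy as np

X = np.array([            # rows = alternatives, columns = criteria
    [250.0, 16.0, 12.0],
    [200.0, 16.0,  8.0],
    [300.0, 32.0, 16.0],
    [275.0, 32.0,  8.0],
])
m, n = X.shape

P = X / X.sum(axis=0)                  # normalize each criterion column
# Shannon entropy per criterion, scaled to [0, 1] by the factor 1/ln(m).
E = -(np.where(P > 0, P * np.log(P), 0.0)).sum(axis=0) / np.log(m)
d = 1.0 - E                            # degree of diversification (information)
w = d / d.sum()                        # entropy weights
print("entropy:", E.round(3), "weights:", w.round(3))
```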
7- Page 4 “Different MCDM methods can be used, like for example analytic hierarchy process (AHP)”
You are contradicting yourself: at the beginning you talk about interaction, and now you mention using AHP, where interaction is not allowed (Saaty dixit, not me).
8- “A primary difference lies in MAMCA’s high regard for stakeholder autonomy; stakeholders are empowered to introduce criteria that reflect their interests and to evaluate alternatives based on personal preferences”
9- I agree, except for the word ‘preferences’.
I do not know about you, but I have worked in project management in several countries, on large hydro, mining, oil, paper, and metallurgical projects, attending many meetings, and I do not remember anybody asking for or expressing preferences.
We were the stakeholders, and like my other fellows, I was just following the directions from the highest levels. Of course, there were open discussions and everybody was free to express his opinions. Nobody was saying “my preferences are…..”
Where did the scholars in MCDM get that word ‘preference’? We expressed the needs of our own departments, and our opinions, discussed with other colleagues, usually the financial guy, what we needed and explained why, and usually it was the project manager who closed the discussion through his own opinion.
This is how the real world works, not with classroom examples. From there, the DM must consider, without discussion, what each manager said, and put it in matrix format. Normally the DM has no authority to decide whether the criterion environment is less or more important than the criterion transportation. A DM is a specialist in decision-making, involving mathematics, knowledge, and experience of other projects, similar or not, something that in general is unknown to stakeholders. Thus, each one must operate in his own field: the stakeholders provide information and needs, and the DM processes that, analyzes the result, modifies it if necessary, and submits it to the stakeholders.
Imagine that, in his/her presentation, the DM is interrupted by a stakeholder asking for the origin of the data in the matrix, and the DM responds that they come from pair-wise comparisons and thus involve intuition. What do you think the stakeholder's reaction would be, other than incredulity at what he is hearing? I certainly know what mine would be.
These are some of my comments. Hope they can be of service
Nolberto Munier
Relevant answer
Answer
# 175
Dear Samia Fekhin
I apologize for my delay in answering your response; I had not seen it before.
1) I agree
2) I agree
3) I disagree. Weights are trade-offs and thus they are useless for evaluating alternatives.
AHP cannot be used here, because stakeholders may generate criteria that are interrelated.
Pair-wise comparison is not acceptable. You cannot assign a preference value at will.
4) I do not know what you mean by alternative generation. In fact, alternatives must precede the selection of weights. You cannot assign weights to something that you do not know.
5) Please tell me how you evaluate impact assessment.
Cost-benefit analysis is not an MCDM technique. It is a simple ratio, and was developed long before MCDA.
  • asked a question related to Bias (Epidemiology)
Question
1 answer
Benefits of AI in Higher Education
  • Improved educational opportunities: education customized to the needs of each learner; enhanced accessibility via assistive technologies for students with disabilities; simulations and interactive materials that make learning more engaging.
  • Efficiency in administration: routine chores can be automated to free up staff for more important work; enhanced data processing and analysis to facilitate better decision-making.
  • Advances in research: research progresses more quickly as a result of faster data processing; AI-powered systems enable international research cooperation.
  • Support for students: using predictive analytics, retention rates can be raised by identifying students who need more assistance.
  • International cooperation: geographical distances can be overcome by AI, resulting in global research collaborations.
Drawbacks of AI in Higher Education
  • Employment displacement: automation may result in the loss of administrative positions.
  • Bias and ethical issues: risk of biased AI systems for grading and admissions; ethics-based supervision and accountability for AI decisions are required.
  • The digital divide: disparities in how well-funded and under-funded institutions adopt AI.
  • Security and privacy of data: difficulties in guaranteeing the security and privacy of student data.
  • Over-reliance on AI: potential for greater susceptibility to system faults and less human control.
  • Skills gap: faculty and students must acquire new skills in order to use AI technologies efficiently.
  • Research homogenization: using similar AI techniques risks the loss of diverse research perspectives.
  • Expense and technological obsolescence: high upfront costs and the difficulty of keeping up with rapid changes in technology.
  • Regulatory and political difficulties: navigating financial priorities and governmental regulations that could affect the use of AI.
Relevant answer
Answer
The use of artificial intelligence, with its remarkable advantages and transformative potential, is accompanied by certain challenges. However, to mitigate or overcome these challenges, greater attention can be given to the following points:
  • Embracing the benefits while addressing the challenges
  • Proposing tangible and actionable solutions
  • Emphasising the fair development and application of AI
  • Fostering international collaboration
  • Adopting a balanced and optimistic tone
  • asked a question related to Bias (Epidemiology)
Question
2 answers
I suspect that two studies (same authors) were based on the same patients but with different follow-up. Can we include both? Can we downgrade one of them (high risk of bias) to exclude it in a sensitivity analysis? I cannot find any clear answer in the different tools assessing quality of evidence (GRADE, RoB, ...)
Relevant answer
Answer
The usual practice in these cases, where one cohort of patients has been published at two different time points, is that, to avoid bias, only one of the two is used in the meta-analysis: generally the one with the longer follow-up period.
  • asked a question related to Bias (Epidemiology)
Question
1 answer
  • Imputation: Use statistical methods like mean, median, or mode imputation for numeric fields (e.g., average age).
  • Deletion: For substantial gaps, consider removing incomplete records if doing so doesn't bias results.
  • Follow-up: If feasible, revisit schools to collect missing information. For critical fields, prioritize completeness during data collection. For example, if 20% of oral health records lack caries status, imputing based on the school’s average caries prevalence might help.
Citation: Schafer, J. L., & Graham, J. W. (2002). Missing data: Our view of the state of the art. Psychological Methods, 7(2), 147–177.
Relevant answer
Answer
You can:
  1. Identify the missing data: Review the records to determine which information is missing or incomplete.
  2. Impute or estimate values: If appropriate, use statistical methods to estimate missing values based on available data (e.g., mean imputation, regression).
  3. Follow-up: Contact students, parents, or healthcare providers to collect the missing information.
  4. Document gaps: Clearly note any missing data and the steps taken to address it for transparency.
  5. Prevent future gaps: Implement a system to ensure consistent data collection and regular updates to minimize missing information moving forward.
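As a minimal sketch of step 2, assuming the records sit in a pandas DataFrame with illustrative column names, scikit-learn's SimpleImputer covers mean and mode imputation:

```python
# A minimal sketch of imputing missing values in school health records.
# Column names are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

records = pd.DataFrame({
    "age":           [10, 11, np.nan, 12, 10],
    "caries_status": [1, np.nan, 0, 1, np.nan],   # 1 = caries present
})

# Mean imputation for a numeric field such as age.
records["age"] = SimpleImputer(strategy="mean").fit_transform(records[["age"]])

# Most-frequent (mode) imputation for a categorical field such as caries
# status; imputing from the school-level prevalence, as suggested in the
# question, would be a refinement of this.
records["caries_status"] = SimpleImputer(strategy="most_frequent").fit_transform(
    records[["caries_status"]]
)
print(records)
```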
  • asked a question related to Bias (Epidemiology)
Question
4 answers
The Impact of Artificial Intelligence on Recruitment: Artificial intelligence enhances efficiency by streamlining candidate selection, analyzing large datasets, reducing human bias, and providing tailored training opportunities. It also supports performance evaluation while posing challenges such as losing the human touch and potential algorithmic bias.
Relevant answer
Answer
AI significantly impacts recruitment by automating tasks like resume screening, interview scheduling, and candidate matching, improving efficiency and speed. It enhances decision-making through predictive analytics and reduces bias when designed carefully, promoting diversity. AI-powered chatbots improve candidate experience by providing instant communication.
However, AI has limitations, such as potential algorithmic bias, lack of emotional intelligence, and difficulty assessing cultural fit and interpersonal skills. It lacks the empathy and contextual understanding required for nuanced decision-making.
AI cannot fully replace human expertise in candidate selection, especially for roles requiring creativity or strong interpersonal skills. The best approach is collaboration: AI handles data-driven tasks while humans focus on strategic, ethical, and interpersonal aspects of recruitment.
  • asked a question related to Bias (Epidemiology)
Question
1 answer
I'm interested in exploring this topic for my research. Research direction would be:
1. Investigating the effectiveness of ML in mitigating specific biases (e.g., confirmation bias, loss aversion).
2. Developing ML-based decision support systems for financial advisors.
3. Analyzing the impact of ML-driven interventions on investor behavior.
4. Exploring the role of ML in fostering financial literacy.
#BehaviouralFinance #MachineLearning
Relevant answer
Answer
The greater the proliferation of AI in educational contexts, the more important it becomes to ensure that AI adheres to the equity and inclusion values of an educational system or institution. Given that modern AI is based on historical datasets, mitigating historical biases with respect to protected classes is an important component of this value alignment. Although extensive research has been done on AI fairness in education, there has been a lack of guidance for practitioners, which could enhance the practical uptake of these methods.
  • asked a question related to Bias (Epidemiology)
Question
1 answer
I am curious to know whether any biases, narcissistic tendencies, empathy, or other psychological traits impact politicians' decision-making.
Relevant answer
Gender is a typical trait that influences your chances of being elected.
  • asked a question related to Bias (Epidemiology)
Question
1 answer
What is the purpose of this portal for the researcher, and how are scholars' data protected from any bias and partiality? Who is responsible?
Relevant answer
Answer
It helps me compile and update my research work and see other related researchers' activity. Moreover, it gives me my interest score for motivation. We can further gain scientific knowledge by asking and answering questions as well.
  • asked a question related to Bias (Epidemiology)
Question
3 answers
Hope this helps you guys write the discussion part of your qualitative papers:
  • Thematic analysis: Firstly, this is essential. I organised the discussion by key themes from the interviews (e.g., communication, resources), with direct quotes to capture stakeholder perspectives (Braun & Clarke, 2006).
  • Linking to the literature: This is crucial and tricky. I related my study findings to existing studies, highlighting agreements and differences to show new insights. Yes, there were some areas where research was scarce, so I expanded the literature to include similar settings. This helped me strengthen the context and value of my study (Silverman, 2011).
  • Addressing bias and limitations: We cannot think that our studies never have limitations. Please include a reflection on researcher bias and study limits, explaining how these were managed with techniques like journaling. Please note, my friends, this builds transparency and credibility, despite the challenges of achieving balanced self-reflection (Creswell & Poth, 2018).
Happy to share knowledge,
Anitha
Relevant answer
Answer
The discussion section should interpret your findings, connect them to existing literature, reflect on methodological challenges, state the implications, and highlight the contributions of your research. By following these guidelines, you can effectively communicate the importance and impact of your qualitative study.
  • asked a question related to Bias (Epidemiology)
Question
6 answers
The risk of bias significantly influences the validity of systematic review conclusions, as studies with higher bias are more likely to overestimate treatment effects. Systematic reviews that incorporate assessments of bias, such as the Cochrane Risk of Bias Tool, tend to provide more reliable estimates of intervention effectiveness.
Higgins, J. P. T., & Green, S. (2011). Cochrane Handbook for Systematic Reviews of Interventions. Version 5.1.0. The Cochrane Collaboration. [Available at: http://handbook.cochrane.org]
Relevant answer
Answer
The risk of bias in systematic reviews can lead to inaccurate or misleading conclusions. If studies included in the review have flaws, such as poor design or selective reporting, it can overestimate or underestimate the effects of treatments, affecting the reliability of the review's findings.
  • asked a question related to Bias (Epidemiology)
Question
4 answers
“In a dynamic and uncertain environment, how can behavioral finance theory be used to explain investors’ decision-making biases in the formation of asset price bubbles, and whether corresponding policy intervention measures can effectively curb the formation of such bubbles?”
Relevant answer
Answer
Hi Anne,
So I think behavioral finance explains how psychological biases lead to asset price bubbles:
1. Herding Behavior: Investors follow the crowd, inflating prices.
2. Overconfidence: Overestimating knowledge leads to risky decisions.
3. Anchoring: Fixating on past prices ignores new information.
4. Loss Aversion: Fear of losses prevents selling overvalued assets.
Policy Interventions
To curb bubbles, effective measures include:
1. Stricter Regulations: Limit speculative trading.
2. Investor Education: Raise awareness of biases.
3. Increased Transparency: Monitor markets to identify bubbles.
4. Macroprudential Policies: Implement safeguards against excessive borrowing.
Combining behavioral insights with regulations can stabilize financial markets and reduce bubbles.
  • asked a question related to Bias (Epidemiology)
Question
1 answer
Recently I read an article by Wagdy Sawahel, published on 05 September 2024 in University World News: Africa Edition, titled “How Africa can help to tackle global bibliometric coloniality?”
As time passes, world conflicts are becoming more aggressive, and as the political arena worldwide is impacting every dimension one can imagine [sports, education, economics, safety & security, etc.], of concern here is the actual global bibliometric bias that is salient in the countries located mostly in the southern hemisphere. The call nowadays is to establish new indexes that provide balance and fairness. What do you think? Are you, as a researcher, expert, professional, etc., in favour of such a move?
Note: read the accompanying reference articles.
Relevant answer
Answer
New world events are occurring with hidden impacts on education and collaboration. However, with the falling values of declared democracies, what will happen to the Global bibliometric systems?
  • asked a question related to Bias (Epidemiology)
Question
4 answers
I'm aware of the gradient descent and the back-propagation algorithm. What I don't get is: when is using a bias important and how do you use it?
Relevant answer
Answer
The bias in neural networks plays a crucial role in enhancing the model's flexibility. It shifts the activation function to help the network better fit the data. Without bias, the model would always have to pass through the origin (zero), limiting its ability to represent data patterns effectively. Bias terms allow neural networks to adjust their decision boundaries and handle more complex, non-linear relationships between inputs and outputs, ultimately improving accuracy and performance in tasks like classification and regression.
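A tiny numeric sketch of that shifting role (arbitrary numbers; a sigmoid activation is assumed purely for illustration):

```python
# With weights w and bias b, a neuron computes f(w.x + b); the bias b moves
# the decision boundary away from the origin.
import numpy as np

def neuron(x, w, b):
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))   # sigmoid activation

x = np.array([0.0, 0.0])          # input at the origin
w = np.array([2.0, -1.0])

print(neuron(x, w, b=0.0))   # 0.5: without bias, the origin always sits on the boundary
print(neuron(x, w, b=3.0))   # ~0.95: the bias shifts the activation threshold
```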
  • asked a question related to Bias (Epidemiology)
Question
4 answers
There exists a neural network model designed to predict a specific output, detailed in a published article. The model comprises 14 inputs, each normalized with minimum and maximum parameters specified for normalization. It incorporates six hidden layers, with the article providing the neural network's weight parameters from the input to the hidden layers, along with biases. Similarly, the parameters from the output layer to the hidden layers, including biases, are also documented.
The primary inquiry revolves around extracting the mathematical equation suitable for implementation in Excel or Python to facilitate output prediction.
Relevant answer
Answer
A late reply. I hope this helps you!
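In case it is useful to later readers, here is a minimal sketch of the prediction equation such a network implements: normalization followed by layer-wise affine transforms and activations. The tanh hidden activations, the [-1, 1] normalization, and the toy dimensions are assumptions; substitute the activations, weights, biases, and normalization bounds reported in the article.

```python
# A minimal sketch of a feedforward prediction equation for Excel/Python.
import numpy as np

def normalize(x, x_min, x_max):
    # Min-max scaling to [-1, 1], a common choice; the article's exact
    # normalization formula may differ.
    return 2.0 * (x - x_min) / (x_max - x_min) - 1.0

def predict(x, weights, biases):
    """weights[i], biases[i] hold the parameters of layer i."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = np.tanh(W @ a + b)            # hidden layers
    return weights[-1] @ a + biases[-1]   # linear output layer

# Toy dimensions: 14 inputs -> one small hidden layer -> 1 output; replace
# the random parameters with the published ones.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(6, 14)), rng.normal(size=(1, 6))]
biases  = [rng.normal(size=6), rng.normal(size=1)]
x = normalize(rng.uniform(size=14), 0.0, 1.0)
print(predict(x, weights, biases))
```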
  • asked a question related to Bias (Epidemiology)
Question
6 answers
I am currently validating a tool assessing patients' perspectives on primary care. Initial research suggested that using a reverse-order Likert scale (1 = “I completely agree”, 3 = “I don't know”, 5 = “I completely disagree”) would avoid response bias, so I collected data with this tool. I have passed the analysis stage. What are the important aspects to consider during scale validation? What is the impact on the descriptive features of the tool and its scoring? What precautions should I follow?
Relevant answer
Answer
Using a reverse-order Likert scale (where 1 = "I completely agree" and 5 = "I completely disagree") offers both advantages and challenges when validating a tool, especially in assessing patient perspectives in primary care. The main benefits include reducing response bias by encouraging more thoughtful answers and balancing response patterns by diversifying scoring. This can lead to richer data and improved insights during validation, as inconsistencies between reverse and traditional scales can help identify comprehension issues. However, the reverse scale can increase cognitive load and complexity, particularly for respondents unfamiliar with this format. This may lead to errors, slower completion rates, and inconsistent responses. In terms of scoring and data interpretation, reverse coding is essential to avoid misleading results and ensure a logical flow in the analysis. To mitigate these risks, pilot testing is crucial to evaluate respondent reactions, and clear instructions should be provided. Proper reverse scoring during analysis is vital to maintain data integrity. Overall, while reverse-order Likert scales can enhance engagement and reduce bias, careful validation and precautions are necessary to ensure reliable outcomes.
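As a minimal sketch of the reverse-scoring step mentioned above, assuming 5-point items in a pandas DataFrame with illustrative names:

```python
# Reverse-scoring before analysis: for a 5-point item,
# reversed score = (max + min) - raw = 6 - raw.
import pandas as pd

responses = pd.DataFrame({"item1": [1, 5, 3, 2], "item2": [4, 2, 5, 1]})
reverse_items = ["item1"]                 # items worded in reverse order

responses[reverse_items] = 6 - responses[reverse_items]
print(responses)
```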
  • asked a question related to Bias (Epidemiology)
Question
3 answers
What are the potential risks of Artificial Intelligence (AI) in higher education, particularly concerning data privacy, bias and the digital divide?
How can these risks be mitigated?
Relevant answer
Answer
It was my pleasure, Md. Afroz Alam.
  • asked a question related to Bias (Epidemiology)
Question
2 answers
So let us assume in whatever field there is a difference between 'life' and 'research'.
Consider the following bias:
- in a research project, cases are added to one group which would in 'real life scenarios' not be identified as such.
For example a patient with a rare disease is added to a cohort for research purposes, while in real life the diagnosis is not strong enough to justify dangerous therapy.
The research cohort is inflated (possibly to allow 'stronger' statistics or to reach a minimum group size), yet the over-included cases would be better suited as controls.
Obviously this problem is rather simple. I am asking:
- is there a name already for that kind of bias?
- can you name a citation or researcher?
Thanks a bunch,
Stefan
Relevant answer
Answer
Thanks a lot!
  • asked a question related to Bias (Epidemiology)
Question
3 answers
Dear researchers,
I am working on a systematic review (not a meta-analysis) compiling studies on various diagnostic systems against a specific pathogen. The most relevant parameters evaluated are the type of material used, the type of sample used, and the detection limit achieved.
It is important to note that these are not clinical or observational studies, and no diagnostic performance parameters are being compared to a gold standard.
What type of application or platform would be most suitable for evaluating the quality (risk of bias) of the selected articles? Would it be possible to modify any published platform?
Thank you in advance for your assistance.
Sincerely,
Daniel
Relevant answer
Answer
Possible modifications (the four domains below correspond to those of the QUADAS-2 tool, adapted to your setting):
  1. Domain 1: Patient Selection (or "Sample Selection" in your case): Focus on whether the selection of samples was appropriate for the type of pathogen being tested, considering biases in sample type (e.g., environmental, biological) rather than patient populations.
  2. Domain 2: Index Test (Diagnostic Systems in your case): Assess the methodology of the diagnostic systems in terms of consistency, reproducibility, and any reported limitations in the materials and detection limits.
  3. Domain 3: Reference Standard: Since no gold standard is used, this domain can be excluded or simplified. You may instead assess how well the studies justify their chosen detection limits and materials.
  4. Domain 4: Flow and Timing: Adapt this domain to focus on the reproducibility of experiments and whether the studies adequately describe the processes and conditions under which the tests were performed.
Regards
  • asked a question related to Bias (Epidemiology)
Question
10 answers
I am conducting a qualitative study that uses interviews to investigate teachers' perceptions of a particular leadership practice, focusing on 3 schools with a total of 300 teachers. I work at one of the schools. I am hesitating about my sampling strategy. I considered snowball sampling or convenience sampling, which are easy, time-efficient and flexible. I could utilise people who have particular knowledge and experience in the area in question, which is teachers' perceptions of promotion strategies.
But I am also thinking of limiting and controlling my sample size and focusing only on senior teachers who have 10+ years of experience. For one thing, they have a lot of rich input; for another, I would avoid the potential bias of snowball or convenience sampling. What do you think? I would really appreciate your input. Thank you so much indeed.
Relevant answer
Answer
Purposive Sampling (Focusing on Senior Teachers) may be the best choice for you.
Advantages: Targeting senior teachers with 10+ years of experience ensures that your sample is rich in relevant insights and experience, leading to more informed and nuanced perspectives on leadership practices. It reduces the risk of bias associated with convenience and snowball sampling by focusing on a specific, well-defined group. Disadvantages: While it’s more controlled, it may limit the diversity of perspectives, as you’ll be focusing on a specific group that might have different views than less experienced teachers. So my recommendation is as under:
Given the nature of your study and your focus on obtaining in-depth perceptions of leadership practices, purposive sampling might be the best strategy. By selecting senior teachers with significant experience, you’ll likely gather rich, informed data that will enhance the credibility and depth of your findings. This approach also allows you to mitigate the potential biases associated with snowball and convenience sampling while still being efficient and focused.
Since you’re working at one of the schools, you could start by identifying senior teachers across the three schools and conducting interviews until you reach data saturation, ensuring a comprehensive understanding of the leadership practices of experienced educators.
  • asked a question related to Bias (Epidemiology)
Question
1 answer
na
Relevant answer
Answer
Drama-infused interventions for educating young children on gender bias are very effective. Media plays a very important role in educating people and keeping them updated on day-to-day developments and advancements. Among the media (print, electronic), the most powerful are drama and theater, because they have multiple benefits: one is entertainment and the other is educating people. So more dramas and theater performances should be staged to educate children on gender bias.
  • asked a question related to Bias (Epidemiology)
Question
1 answer
The question specifically looks at how lenders' biases (e.g., confirmation bias, anchoring bias) influence their evaluation and approval of loans for green projects, and what implications this has for carbon neutrality.
Could you give more suggestions for connecting psychology, finance and carbon neutrality?
Relevant answer
Answer
Hello, I am Javed Ali Khan, biostatistician. As far as your question is concerned, there is a strong correlation between psychological biases, green projects and carbon neutrality. There should be thorough verification when loans are sanctioned for green projects. Preference should be given to sanctioning loans to people who possess degrees in agriculture and relevant fields, and to those who have worked on plantation and greenery projects and on anti-pollution drives. By doing so there will be a sustainable impact on carbon neutrality.
  • asked a question related to Bias (Epidemiology)
Question
4 answers
I want to check the difference in post-test scores between the test group and the control group while controlling for pre-test scores.
I read that ANCOVA is based on the assumption that the means of the covariate are equal across groups. In other words, the covariate (pre-test scores) should not differ significantly between groups. If there is a significant difference in the covariate across groups, it suggests that the pre-test scores themselves differ by group, which could bias the analysis results.
However, what I understand is that ANCOVA is conducted to control for covariates. If the pre-test (covariate) is already the same, why do we need to perform ANCOVA to control for it? Isn't that the same as an independent t-test? Wouldn't it be more logical to say: despite the difference between the two groups (experimental, control) in the pre-test, set it as a covariate to see the difference between the two groups in the post-test?
Relevant answer
Answer
Covariates aren't always pretests. Sometimes they're measures which could account for significant variance in the dependent variable. The covariate then functions to reduce error variance, making testing of the independent variable more powerful, i.e. more likely to show as significant or more significant, e.g. p< .01 rather than p < .05.
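For what it's worth, a minimal sketch of the ANCOVA described in the question with statsmodels; the data and variable names are invented for illustration:

```python
# ANCOVA as a regression: post-test on group, adjusting for pre-test.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "group": ["treat", "treat", "treat", "control", "control", "control"],
    "pre":   [10, 12, 11, 11, 13, 12],
    "post":  [15, 18, 16, 12, 14, 13],
})

# The group coefficient estimates the post-test difference at equal pre-test.
model = smf.ols("post ~ C(group) + pre", data=df).fit()
print(model.params)
```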
  • asked a question related to Bias (Epidemiology)
Question
1 answer
How do you address potential biases in diary entries?
  • asked a question related to Bias (Epidemiology)
Question
1 answer
How do you handle the issue of observer bias in your research?
  • asked a question related to Bias (Epidemiology)
Question
1 answer
How do you handle potential biases in survey responses, especially regarding sensitive topics?
  • asked a question related to Bias (Epidemiology)
Question
1 answer
How do you ensure that your observations are objective and not influenced by personal biases?
  • asked a question related to Bias (Epidemiology)
Question
1 answer
How do you balance including your own research in a literature review without allowing bias to influence your analysis?
  • asked a question related to Bias (Epidemiology)
Question
2 answers
Hi there,
I recently read some case-control studies and noticed that not all studies match their participants on the length of follow-up. (The matching variables in this study include length of follow-up, See DOI:10.1002/cpt.2369)
In a case-control design, researchers index at the date of event occurrence and look back over several months to explore the incidence of exposure. I'm wondering whether, by not matching on the length of follow-up, those who experience a longer follow-up period might also have a higher probability of being exposed, ultimately leading to time-window bias (DOI: 10.1097/EDE.0b013e3182093a0f). Instead, some researchers have proposed that using time-varying sampling is a viable way to deal with this bias (DOI: 10.1136/bmjopen-2015-007866).
Thus, I am confused about:
(1) is it necessary to control for time-window bias?
(2) what is the difference between these two methodologies?
(3) based on (2), which one is better?
Relevant answer
Answer
When a case-control study is drawn from a cohort study, time-dependent sampling, also called incidence density sampling, can be implemented, and with this approach rate ratios can be estimated without needing to adjust or match for duration of follow-up. Indeed, the two methodological articles cited (DOI: 10.1097/EDE.0b013e3182093a0f and DOI: 10.1136/bmjopen-2015-007866) presented adjusted estimates, and neither included duration of follow-up in their model.
I understand that Gronich et al. (2021) DOI:10.1002/cpt.2369 applied the principles of risk set sampling, another name for time-dependent sampling, and that matching on duration of follow-up may be overkill, but they may have had a good reason my quick reading did not reveal.
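For readers unfamiliar with the mechanics, here is a minimal sketch of incidence density (risk-set) sampling; the cohort data and the 4:1 matching ratio are invented for illustration.

```python
# Risk-set sampling: for each case, controls are drawn from subjects still at
# risk at the case's event time, which handles unequal follow-up without
# matching on its length.
import random

random.seed(1)
# (id, follow_up_end, is_case) -- follow_up_end is time of event or censoring.
cohort = [(i, random.uniform(1, 10), random.random() < 0.2) for i in range(100)]
cases = [s for s in cohort if s[2]]

matched_sets = []
for case_id, t_event, _ in cases:
    # Risk set: everyone still under observation at t_event (case excluded);
    # note that subjects who become cases later remain eligible as controls.
    risk_set = [s for s in cohort if s[1] >= t_event and s[0] != case_id]
    controls = random.sample(risk_set, k=min(4, len(risk_set)))  # 4:1 matching
    matched_sets.append((case_id, [c[0] for c in controls]))

print(matched_sets[:3])
```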
  • asked a question related to Bias (Epidemiology)
Question
6 answers
I have designed a magnetic material with different biasing conditions in HFSS. Now I want to apply an RF AC signal and run a transient simulation in HFSS. Is it possible to do this in HFSS? Please help me figure this out. Thanks.
  • asked a question related to Bias (Epidemiology)
Question
7 answers
In my transnational teaching context, I have noticed that many learners learn rigidly. For example, they gain knowledge through watching the news, but they are not critical about the source of the news. Also, they express themselves very subjectively, with no trace of criticality in their speech. I mean, we are human beings; it is understandable to be biased toward certain things due to a lack of knowledge. But my question is: what exactly does being critical mean for students in a transnational context?
Relevant answer
Answer
Greetings. Regardless of the stage of development or the teacher's methodology, if the teacher does not guide students toward critical expression in class, the student will not develop it. Opinion and critique need to be allowed in the classroom, respecting diverse opinions, in order to foster and enrich public opinion.
  • asked a question related to Bias (Epidemiology)
Question
2 answers
I want to know whether the n-doped side of a solar cell is connected to the positive or the negative electrode.
Relevant answer
Answer
The simplest model of a photovoltaic cell is a model consisting of two elements connected in parallel:
1. A source of current proportional to the power of solar radiation on the surface of the element. This source determines the short circuit current.
2. A diode shunting this source. The voltage drop across this diode determines the open circuit voltage (approximately 0.7V for silicon).
This model describes the characteristics of a photocell well in both the forward and reverse directions. If desired, it can be extended with series and parallel resistors.
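A minimal numeric sketch of this two-element model (the diode equation is the standard ideal single-diode form; the parameter values are illustrative):

```python
# Terminal current of an ideal single-diode solar cell:
# I = I_ph - I_0 * (exp(qV / (n k T)) - 1)
import numpy as np

q, k, T = 1.602e-19, 1.381e-23, 300.0   # electron charge, Boltzmann const., temperature (K)
I_ph = 3.0                               # photocurrent ~ irradiance (A)
I_0  = 1e-9                              # diode saturation current (A)
n    = 1.3                               # diode ideality factor

def cell_current(V):
    """Terminal current of the ideal single-diode cell at voltage V."""
    return I_ph - I_0 * (np.exp(q * V / (n * k * T)) - 1.0)

# Sweep from short circuit (V = 0, I = I_ph) toward open circuit (I = 0).
for v in np.linspace(0.0, 0.7, 8):
    print(f"V = {v:.2f} V  ->  I = {cell_current(v): .3f} A")
```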
  • asked a question related to Bias (Epidemiology)
Question
3 answers
To the best of my knowledge, a systematic review aims to collect and summarize all the published & unpublished literature revolving around a certain topic/sub-topic.
Sometimes, I encounter results in ClinicalTrials.gov which are yet to be published, or abstracts which do not have their full-texts available yet, or conference proceedings which do not include their methodologies in fine detail.
In this case, when the methods section is not addressed appropriately, what tools could be employed to assess the risk of bias/quality of such research types?
Thank you beforehand.
Relevant answer
Answer
Thank you for your insights.
  • asked a question related to Bias (Epidemiology)
Question
5 answers
I am doing a systematic review, and I am measuring risk of bias with RoB 2 for RCTs and ROBINS-I for non-RCTs. My question is: for single-arm studies, can I use ROBINS-I? I am not sure how to answer the questions for the confounding domain in this case.
Thank you!
Relevant answer
Answer
You can try ROBINS-E for single-arm studies, or try to adapt ROBINS-I to fit your situation. You should mention any adaptation or modification in the methodology section.
  • asked a question related to Bias (Epidemiology)
Question
1 answer
What is the source of classification bias in marker gene metagenome sequencing?
Variability in the taxonomic classification of microbial communities when using different primer pairs (e.g., for 16S rDNA) is commonly known. However, mismatches to these primers are not described as the major reason for this bias. My question is: what are the other possible causes of this bias, and which one is now considered to be the major one?
Relevant answer
Answer
In marker gene metagenome sequencing, such as 16S rRNA sequencing used to profile microbial communities, several factors can contribute to classification bias. While primer mismatches can cause variability, they are not the primary source of bias. Here are some other significant causes:
  1. Incomplete Coverage of Variable Regions:Sequencing companies often do not sequence all nine hypervariable regions (V1-V9) of the 16S rRNA gene. Different primer sets target different regions, and incomplete coverage can lead to biases as some regions provide more taxonomically informative sequences than others.
  2. Primer Selection and Design:Different primer pairs have varying levels of specificity and efficiency. Primers designed for certain regions might preferentially amplify some taxa over others, leading to an imbalanced representation of the microbial community.
  3. PCR Amplification Bias:During PCR amplification, some sequences might be amplified more efficiently than others. This can be due to differences in GC content, secondary structures, or the presence of inhibitory substances. These biases can distort the relative abundance of taxa.
  • asked a question related to Bias (Epidemiology)
Question
2 answers
Dear all,
I've recently processed some samples for ATAC-seq. My ATAC-seq library profile looks different from the expected one (see picture: Bioanalyzer). I was wondering if I can still sequence it, or whether it will be too biased.
Thank you for your help
Best,
Karim
Relevant answer
Answer
An ATAC library like this cannot be used for further sequencing; even if you increase the depth, it will be very hard to find good alignments of the reads. You should look into the number of cells and the cell lysis timing. You can also increase the transposition time depending on the cell type. I have attached a QC report from one of my prepared libraries.
  • asked a question related to Bias (Epidemiology)
Question
2 answers
I randomly interviewed 250 poor people and 250 non-poor people. Coding 1 for poor and 0 otherwise, does estimating a logit model to capture the probability of becoming poor make sense? What are the possible biases?
Relevant answer
Answer
A logistic regression may make sense, but you have things to consider:
- is a conditional logistic regression required? (yes, if cases and controls are matched)
- you fixed the prevalence of the condition of interest (either poor or non-poor) arbitrarily. You can distort several parameters associated with this selection process, especially if your sample prevalence is far from the true prevalence. In this case, the robustness of your findings should be tested.
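A minimal sketch of such a model with statsmodels, using invented covariates; note that under this fixed 50/50 outcome-based sampling the intercept is distorted, while the covariate odds ratios remain interpretable:

```python
# Logit model under case-control (outcome-based) sampling.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "poor":      np.repeat([1, 0], [250, 250]),   # 250 poor, 250 non-poor by design
    "education": rng.normal(8, 3, n).round(),      # illustrative covariates
    "household": rng.integers(1, 9, n),
})

model = smf.logit("poor ~ education + household", data=df).fit()
# Interpret the covariate odds ratios, not the intercept, which reflects the
# arbitrary 50/50 sampling fractions rather than the true prevalence.
print(np.exp(model.params))
```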
  • asked a question related to Bias (Epidemiology)
Question
1 answer
my doubts on that style of writing
Relevant answer
Answer
Why use English at all? If our writers love their mother tongues so much, why not write in African languages? Or are we deficient in English? Just saying.
  • asked a question related to Bias (Epidemiology)
Question
3 answers
What is the difference between limitation in recall and recall bias?
Relevant answer
Answer
Limitation in recall refers to the general inability of individuals to remember past events accurately or completely. This can be due to a variety of factors, including the passage of time, the complexity of the information, the individual’s cognitive abilities, or the context in which the information was encoded and stored.
Recall bias is a type of systematic error that occurs when there is a differential accuracy of recall between study groups. It typically arises in retrospective studies when participants with a particular outcome or exposure remember past events differently than those without the outcome or exposure.
  • asked a question related to Bias (Epidemiology)
Question
4 answers
Can anyone suggest how to design a transistor model in ANSYS Circuit? An .snp file for different bias points is available, but I want to design the transistor from the datasheet and observe its behaviour under any bias condition.
Any suggestion would be really helpful.
Relevant answer
Answer
Hello Swadesh Poddar sir,
If you have found the answer to your question, please post it, because I am also trying to design a transistor model in HFSS Circuit but have not found the correct way to do it. I hope you will reply.
Thank you.
  • asked a question related to Bias (Epidemiology)
Question
3 answers
A new Trends in Cognitive Sciences article challenges characterizing people as irrational and argues behavioral science aimed at policy should start by assuming people are reasonable.
Traditional models often label deviations from 'perfect rationality' with a seemingly never-ending list of biases. Maybe this is becoming less useful? The article gives examples showing that what may seem irrational can be an appropriate response to a specific context.
From climate change to COVID-19, they show how assuming people are reasonable shifts the focus. For instance, trust in health authorities correlated with higher vaccine uptake, which makes the behavior appear reasonable.
This reframing encourages participatory methods, turning targets of interventions into partners. Methods like citizens' assemblies and 'nudge plus' highlight the value of engaging those affected by policies.
By recognizing reasonableness, maybe behavioral science can craft more effective, context-aware interventions. What do you think of this argument?
Relevant answer
Answer
I think both approaches are flawed because they start from an assumption that human behaviour is either rational or irrational. The truth of the matter is, as the article points out, that we are all a mixture of both. That being the case, and with the concept of 'normative' itself being a variable, differing with the influences of society, culture, education, etc., then starting from either binary position is going to produce flawed modelling.
With typical behaviour modelled on repeatable conditions, both approaches have value, but for predicting behaviour in completely new situations, such as a global pandemic, they are simply too fixed. It's not 'either, all'; it's 'either, or, if, else, and when', starting from a position which balances the rational and the irrational, rather than the idea of being perfectly one or the other.
  • asked a question related to Bias (Epidemiology)
Question
3 answers
Hello, friends. Currently I am working on species distribution modelling using Maxent. I have run the model using occurrence data and climate data from WorldClim. Where can I find the calibration area (e.g., buffer zones, minimum convex polygons, enclosing rectangles) and the biases introduced during the calibration process in my model?
Relevant answer
I think a statistician can help.
  • asked a question related to Bias (Epidemiology)
Question
2 answers
When submitting a manuscript to journals, we are sometimes asked to recommend reviewers. How should we choose the reviewers to recommend?
I prefer to maintain integrity, so I do not recommend someone with whom I have a favorable conflict of interest. On the other hand, what recommendations should we avoid in order to prevent unfavorable bias?
Relevant answer
Answer
"Recommend people in your field whose view of your paper you would respect
Also suggest researchers who know the subject well and are willing to invest the time (‘big names’ will often decline)"
  • asked a question related to Bias (Epidemiology)
Question
3 answers
Although the two terms are discussed separately in many textbooks, some other epidemiologists suggest that confounding is a type of bias.
It is worth considering their perspective and exploring the relationship between these concepts. By understanding how confounding and bias are related, we can improve our research methods and draw more accurate conclusions.
Relevant answer
Answer
Dear Sir Mohamed Hussein Adam, this is indeed a fascinating question that has sparked debate among epidemiologists and researchers. Thank you for asking it. Let's delve into it further.
Confounding and bias are related concepts in epidemiology, but they are not synonymous.
Confounding occurs when a third variable, called a confounder, is associated with both the exposure and the outcome of interest, biasing the observed association between the exposure and the outcome.
Bias, on the other hand, refers to systematic errors or deviations from the truth in the design, conduct, or analysis of a study that can lead to erroneous conclusions.
While confounding can potentially bias the results of a study, not all confounding leads to bias. If confounding is adequately controlled for through study design or statistical methods, it may not introduce bias. For example, if researchers adjust for the confounding variable in their analysis or use techniques like stratification or multivariable regression, they can mitigate the impact of confounding and obtain unbiased estimates of the exposure-outcome association.
Let's look at an example: Imagine a study shows that smoking and drinking coffee are linked to high blood pressure. But here's the catch: Both smoking and coffee drinking are also connected to how much money people make, which is a known risk factor for high blood pressure. So, money (or socioeconomic status) is like the middleman here, messing with the results. It's related to both smoking and coffee drinking, as well as high blood pressure. That's what we call a confounder.
If researchers fail to account for SES in their analysis, the observed association between smoking/coffee consumption and arterial hypertension may be confounded and biased. However, if they adjust for SES in their statistical analysis, they can control for confounding and obtain unbiased estimates of the true associations between smoking, coffee consumption, and arterial hypertension.
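To make the adjustment step concrete, here is a minimal simulation sketch: SES drives both coffee drinking and hypertension, and adding SES to the model moves the coffee odds ratio back toward the null. The variable names and effect sizes are invented for illustration.

```python
# Crude vs. confounder-adjusted logistic regression in simulated data where
# SES causes both the exposure (coffee) and the outcome (hypertension), with
# no direct coffee effect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 2000
ses = rng.normal(size=n)                                     # socioeconomic status
coffee = rng.binomial(1, 1 / (1 + np.exp(-ses)))             # SES -> coffee
hyper = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * ses - 1))))  # SES -> hypertension

df = pd.DataFrame({"coffee": coffee, "hyper": hyper, "ses": ses})

crude = smf.logit("hyper ~ coffee", data=df).fit(disp=0)
adjusted = smf.logit("hyper ~ coffee + ses", data=df).fit(disp=0)
# The crude coffee odds ratio is inflated by confounding; adjusting for SES
# moves it toward the true null effect (OR ~ 1).
print("crude OR:   ", np.exp(crude.params["coffee"]).round(2))
print("adjusted OR:", np.exp(adjusted.params["coffee"]).round(2))
```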
By recognizing the distinction between confounding and bias and implementing appropriate methods to address confounding in study design and analysis, researchers can enhance the validity and reliability of their findings. This is my humble answer from my point of view; I am open to further discussion. I hope it helps. I wish you success in all your endeavors.
Reference:
Rothman, K. J., Greenland, S., & Lash, T. L. (2008). Modern Epidemiology (3rd ed.). Lippincott Williams & Wilkins.
  • asked a question related to Bias (Epidemiology)
Question
1 answer
In the field of international relations, the need for separation and classification is one of the basic requirements for understanding and assimilating abstract theories and the multiple theoretical approaches used to analyze complex political phenomena. For example, classifying international motives or the factors influencing international behavior can provide a useful framework for analyzing the behavior of states and decision-making on the international stage.
However, it sometimes appears that this academic necessity for separation and classification can produce counterproductive results, as gaps may arise in understanding the political situation and in assessing international positions. Although theoretical debates and the classifications that explain international behaviors and actors can provide a framework for analysis, they may constrain engagement with the multiplicity of factors and variables in the international arena.
Thus, the analyst may find himself trapped within a set of terms and concepts that are limited in their ability to explain complex international behavior. Hence, analysts may need to listen with openness to various factors, perspectives, and analyses, and to move away from bias toward a particular theoretical framework or specific terminology, so that they can understand and assess political dynamics more comprehensively and accurately.
Relevant answer
Answer
Theoretical bias is a deliberate choice of theory in favour of chosen variables, a subject of interest, and particular evidence and its interpretation. Both theoretical bias and political perspective are subjective. The effect of theoretical bias on political analysis is that it steers the analyzed subject in the direction of the analyzer's interest. Essentially, the interpretation of a comprehensive issue through a theoretical bias is often skewed toward the analyzer's position and is not necessarily empirical, owing to data caveats.
  • asked a question related to Bias (Epidemiology)
Question
1 answer
I am struggling to get my work on Fermat's Last Theorem peer reviewed, as it appears to be too simplistic or not relevant to the mathematical journals I have contacted so far. However, being biased, I think it is at least worthy of logical consideration, and I would appreciate any advice to this end.
For reference:
Abstract
This investigation assumed Fermat's conjecture to be incorrect, i.e. that his equation has a whole-number solution, to enable consideration of the rationality of the equation's terms by constructing a 1st triangle with sides representing the whole numbers, i.e. rational digits, a, b and c, with perpendicular divisors h1, h2 and h3, and a 2nd, 'similar' triangle (with identical angles) but with two sides representing the divisors h1 and h2. Logical analysis then showed that the perpendicular divisors are also rational. Hence the two right-angled triangles formed by the divisor h1 in the 1st triangle can be analysed as Pythagorean triples, since all 3 sides of each triangle, being rational, can be represented as a fraction p/q of two integers, as long as the denominator q is not equal to zero. Thus, by appropriate multiplication by a combination of all their denominators, the sides of the two right-angled triangles can be transformed into integers of a larger, scaled triangle with the same mathematical properties as the original.
This was further interrogated by the use of a Mathcad computer program to determine a Difference Ratio, DR, based on variations between the trigonometric functions calculated as per Fermat's equation and those calculated as Pythagorean triples. It was seen, as expected, that both sets of calculations gave identical results unless the integrity of the latter was maintained by limiting certain internal ratios to a given number of decimal points, thereby ensuring the rationality of their sides. The Fermat set should automatically give a rational-number solution if his conjecture is incorrect, as per this supposition, and the DR value should at some point equate to zero. However, graphical representation of these calculations shows that DR actually diverges away from zero, for any given set of analyses, with increases in both the Fermat index, n, and the number of decimal points. Hence, it is concluded that this investigation demonstrates, at least to engineering standards, that Fermat's Last Theorem is correct, but also that this methodology could be a possible pathway to Fermat's claimed 'marvelous' proof.
Relevant answer
Answer
The rub may lie in your qualification "at least to engineering standards", which in this case might amount to running afoul of the intuitionistic logic adhered to by (or merely implicit in the work of) some mathematicians; i.e. they would allow a reductio ad absurdum argument in which we assume ¬P, derive a contradiction, and then conclude ¬¬P , but would not allow the further step of concluding P from ¬¬P.
  • asked a question related to Bias (Epidemiology)
Question
1 answer
I want to know the diode code (input deck) for Silvaco TCAD and how to manipulate the bias voltage. Can I get a
Relevant answer
You can use the Silvaco examples; there are some example codes for p-n diodes.
  • asked a question related to Bias (Epidemiology)
Question
1 answer
We have been conducting agroinfiltration experiments in cannabis plants to introduce genes of interest for studying their expression and function. Upon analyzing the results, I have noted positive signals in both DNA and RNA analyses, indicating the possible presence of the introduced exogenous genes. However, I am concerned about the potential contribution of DNA or RNA from the bacteria themselves used in the agroinfiltration process, which might bias or even entirely account for these positive results, leading to false positives.
Are there any specific protocols or molecular analysis techniques that can help mitigate this contamination risk and ensure the reliability of results obtained in these experiments? I welcome any contributions or experiences shared on this matter.
Relevant answer
Answer
Well, the process of agroinfiltration itself is going to change gene expression in your plants in the short term. Are you trying for stable or transient transformation of your plants?
There are a few things you can try to parse out bacterial vs plant contributions.
1. Add some negative controls. I would suggest a buffer-only control (no bacteria) that will show any changes due to the infiltration process in general. I would also suggest an Agro with an empty vector control.
2. You can use polyA-specific RT analysis for gene expression (use oligo dT for mRNA-to-cDNA RT). The bacterial gene transcripts lack the polyA tail, so none will show up.
3. For DNA analysis, it really depends on what you are trying to detect. You could use Agro-specific gene primers to see if the cells are still present in your plant (they probably are, at least in the short term).
Hope this helps and good luck!
  • asked a question related to Bias (Epidemiology)
Question
1 answer
What are some common cognitive biases that affect negotiation outcomes besides the anchoring effect?
Relevant answer
Answer
Positive-negative asymmetry, involving positivity biases and negativity effects, may be relevant. Putting forward evaluatively negative information may improve the negotiation, in that negative message contents tend to be more informative than positive ones. However, it may also spoil the negotiation, because negative contents elicit emotional aversion. Hence mildly negative messages may be recommended. A similar restriction does not apply to evaluatively positive messages. See the section on “backbiting” in the chapter below:
  • asked a question related to Bias (Epidemiology)
Question
3 answers
Dear ResearchGate Community,
We are currently facing a pivotal stage in revising our manuscript for submission to a prestigious journal in the fields of pharmaceutics and ophthalmology. Our article is a systematic review of observational studies, encompassing diverse study designs such as case-control studies, quasi-experimental studies, case series, and case reports. During the revise-before-peer-review stage, the editor has requested that we provide a risk-of-bias assessment in our manuscript.
We have already conducted a qualitative assessment using JBI Checklists; however, we are unsure how to address the editor's request for a risk of bias assessment specifically tailored to the included article types. Are there specific risk-of-bias tools available for these diverse study designs? How should we approach integrating a risk of bias assessment into our systematic review effectively?
Any insights or guidance on how to respond to the editor's request and incorporate a robust risk of bias assessment into our manuscript would be greatly appreciated.
Thank you for your expertise and assistance.
Relevant answer
Answer
I would see 'risk of bias' assessment and 'quality' assessment as largely synonymous, although there are potentially subtle differences by implication. Risk of bias assessments tend to focus more directly on the methods of the underlying research (and therefore the potential that results are biased), using the published paper as a source of data to inform that assessment. Quality assessment tends to also consider the quality of the publication itself, but there is no absolute distinction between the two. I can see no merit in doing both and recommend that you select the most rigorous tool for the job. In general, this is likely to be a 'risk of bias assessment' because there has been a movement towards recognizing more clearly that this is the core issue. You have already assessed using the JBI tool and that might be enough, but I would recommend the ROBINS-I tool if your review is of intervention studies (https://methods.cochrane.org/bias/risk-bias-non-randomized-studies-interventions)
  • asked a question related to Bias (Epidemiology)
Question
1 answer
Hi all,
I am using the RoB2 tool on Excel for the first time today, and have been faced with some issues. When filling in each domain, the algorithm has successfully calculated the domain risk of bias. However, when I go to calculate the overall risk of bias, the algorithm does not do anything. I have made sure that macros are enabled. Any suggestions on how to resolve this issue would be much appreciated.
Best,
Sasha
Relevant answer
Answer
It sounds like you've encountered a technical issue with the RoB2 Excel tool. Here are a few troubleshooting steps you can try:
1. Check for Updates: Ensure that you have the latest version of the RoB2 Excel tool. Sometimes, updates fix bugs and improve functionality.
2. Re-download the Tool: If you haven't already, try downloading the tool again from a reliable source. Sometimes, files can get corrupted during the download process.
3. Check Compatibility: Make sure that the version of Excel you're using is compatible with the RoB2 tool. Compatibility issues can sometimes prevent macros from running properly.
4. Macro Security Settings: Double-check your Excel settings to ensure that macros are enabled and that you've granted the necessary permissions for the tool to run properly.
5. Consult Documentation or Support: If the issue persists, consult the documentation or support resources provided with the RoB2 tool. There may be specific instructions or troubleshooting steps available to help resolve the issue.
If none of these steps work, you may need to reach out to the developer or support team for further assistance. They may be able to provide additional guidance or troubleshoot the issue more effectively.
  • asked a question related to Bias (Epidemiology)
Question
2 answers
I have 13 CMIP6 models, queried by grid label (gn, gr, gr1), with the member set to r1i1p1f1 (since there are multiple members for each model, might this affect my work?), the variables pr, tasmax and tasmin, and the scenarios historical, SSP1-2.6, SSP2-4.5 and SSP5-8.5. Regarding CMIP6 processing, downscaling, bias correction, and regridding of the GCM output all matter (as sourced from the literature). My purpose is hydrological impact assessment, so I need scientific concepts, methodologies, and tools from climate experts.
Relevant answer
Answer
Abbas Kashani, thanks for your response. Your work focuses on land-cover satellite datasets and their quality control. I will have similar work as well, so I have saved your article in my library to use later. To clarify what I mean by cloud computing: a cloud-computing platform for conducting analysis over the internet; e.g., climate datasets are stored on Google Cloud Storage. Have you used them...
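For readers following up on the Google Cloud point: there is a public CMIP6 mirror on Google Cloud Storage that can be queried with the intake-esm package (gcsfs and zarr are also needed to actually open the data). A minimal sketch, assuming the well-known Pangeo catalog URL below is still current; the model name is only an example:

```python
import intake  # with intake-esm installed, this registers open_esm_datastore

# Public Pangeo/Google Cloud CMIP6 catalog (URL assumed current).
url = "https://storage.googleapis.com/cmip6/pangeo-cmip6.json"
cat = intake.open_esm_datastore(url)

# Narrow to the query from the question: one member, daily precipitation,
# one scenario; source_id is an arbitrary example model.
sub = cat.search(experiment_id="ssp585", table_id="day",
                 variable_id="pr", member_id="r1i1p1f1",
                 source_id="MPI-ESM1-2-HR")
dsets = sub.to_dataset_dict()  # dict of xarray Datasets, keyed by facets
print(list(dsets))
```

Each returned dataset is an xarray Dataset, which can then feed the regridding and bias-correction steps you mention.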
  • asked a question related to Bias (Epidemiology)
Question
1 answer
What is common method bias?
Relevant answer
Answer
Common method bias can occur when both the independent and dependent variables are measured within one survey, using the same (i.e., a common) response method (e.g., ordinal scales). Indeed, this is very often the case and thus there have been extensive discussions in various research fields on how to recognize, avoid, and control for common method bias (Burton-Jones, 2009; Chang et al., 2010; Jakobsen & Jensen, 2015; MacKenzie & Podsakoff, 2012). There is a general agreement across disciplines that common method bias can significantly impact the empirical results and derived conclusions of a study (Burton-Jones, 2009; Podsakoff et al., 2012).
  • asked a question related to Bias (Epidemiology)
Question
3 answers
It is well known that c_4 is the bias-correction factor for the sample standard deviation and is used to construct control charts. However, why is it called c_4? In addition, who introduced c_4 first?
Relevant answer
Answer
With great pleasure
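For readers landing here for the factor itself: for a normal sample of size n, E[s] = c_4(n)·σ, with c_4(n) = sqrt(2/(n−1)) · Γ(n/2) / Γ((n−1)/2). (The historical attribution asked about is not settled here.) A minimal Python check:

```python
from math import gamma, sqrt

def c4(n: int) -> float:
    """Bias-correction factor: E[s] = c4(n) * sigma for a normal sample of size n."""
    return sqrt(2.0 / (n - 1)) * gamma(n / 2) / gamma((n - 1) / 2)

# Values match the usual control-chart tables, e.g. c4(5) ~ 0.9400.
for n in (2, 5, 10, 25, 100):
    print(n, round(c4(n), 4))
```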
  • asked a question related to Bias (Epidemiology)
Question
1 answer
I'm focusing on bias correction and downscaling of GCM output for the scenarios of the Coupled Model Intercomparison Project Phase 6 (CMIP6), the shared socioeconomic pathways (SSPs). I intend to do it for sub-daily rainfall (i.e. 3-hr rainfall). Thus, I'm interested in learning about the basic concepts, methodologies, considerations, and technical approaches (i.e. any programming code or software). Can anyone please help me in this regard? To be honest, I'm a bit new in this field, so some basic concepts would also be very helpful. I intend to work in R, so code in R would be better. Which statistical approaches would be better, e.g. quantile mapping or SDSM?
Relevant answer
Answer
Hello. With the CMhyd software, you can apply statistical downscaling and bias-correction methods to extract daily rainfall for climate-change scenarios.
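The asker mentions R, where e.g. the qmap package implements quantile mapping; the core idea, though, is easy to sketch in any language. Below is a minimal empirical quantile-mapping illustration in Python on synthetic data (all distributions invented for the example); a real workflow would add wet-day frequency correction and seasonal stratification:

```python
import numpy as np

def empirical_qm(obs_hist, mod_hist, mod_fut):
    """Empirical quantile mapping: find each future model value's quantile
    in the historical model CDF, then read off the observed value at that
    same quantile. Values outside the fitted range are clipped by interp."""
    q = np.linspace(0.01, 0.99, 99)
    obs_q = np.quantile(obs_hist, q)
    mod_q = np.quantile(mod_hist, q)
    probs = np.interp(mod_fut, mod_q, q)   # model value -> quantile
    return np.interp(probs, q, obs_q)      # quantile -> observed value

rng = np.random.default_rng(42)
obs = rng.gamma(2.0, 3.0, 5000)   # synthetic "observed" 3-hr rainfall
mod = rng.gamma(2.0, 4.0, 5000)   # synthetic biased historical model run
fut = rng.gamma(2.2, 4.0, 5000)   # synthetic future scenario run
corrected = empirical_qm(obs, mod, fut)
print(round(fut.mean(), 2), "->", round(corrected.mean(), 2))
```

The same mapping applied per season (and per quantile of the wet-day distribution) is the usual starting point before moving to more elaborate methods.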
  • asked a question related to Bias (Epidemiology)
Question
2 answers
..
Relevant answer
Answer
Dear Doctor
"To update the weights, the gradients are multiplied by the learning rate (alpha), and the new weights are calculated by the gradient-descent update formula: w_new = w_old − α · ∂L/∂w (the bias term is updated analogously)."
  • asked a question related to Bias (Epidemiology)
Question
1 answer
This question encourages a thorough examination of factors that could affect the validity of the analytical findings.
Relevant answer
Answer
Selection bias, which results from a sample that is not representative of the population you are studying.
Poor data quality, mainly in secondary data analysis, where values are missing or inaccurate for the most important variables.
Faulty interpretation, e.g. interpreting correlation as causality.
Confirmation bias, where the focus is biased towards preconceptions.
Information bias, when retrospective data collection or self-reporting by participants is inaccurate, or interviewing techniques are poor.
Recall bias, when participants do not remember previous events accurately.
Researcher bias.
  • asked a question related to Bias (Epidemiology)
Question
5 answers
#Markets
#Industries
Relevant answer
Answer
Bias within markets and industries poses a significant obstacle to the growth and success of businesses led by women, impeding their progress on multiple fronts. Firstly, gender bias perpetuates unequal access to funding opportunities. Studies consistently show that women-led ventures receive disproportionately lower investment compared to their male counterparts, limiting the financial resources crucial for expansion and innovation.
Secondly, biased perceptions within industries can hinder women entrepreneurs in building professional networks and securing strategic partnerships. Stereotypes and preconceived notions may marginalize their contributions, making it challenging to establish credibility and gain the trust of potential collaborators. This exclusionary environment can stifle the organic growth and market influence of women-led businesses.
Moreover, bias can manifest in consumer behavior, affecting the market acceptance of products or services offered by women-led enterprises. Gender stereotypes may impact consumer perceptions, creating additional hurdles for these businesses to establish a robust customer base.
In essence, the prevalence of bias within markets and industries not only perpetuates gender inequality but also obstructs the full potential of women-led businesses. Addressing these biases is essential for fostering a more inclusive and equitable business environment that allows diverse talents to flourish and contribute meaningfully to economic growth.
  • asked a question related to Bias (Epidemiology)
Question
4 answers
Looking for researchers who are willing to work on the discussion section and the risk-of-bias assessment of a systematic review; for more details, please leave a message.
Thanks!
Relevant answer
Answer
I have sent you a message, Arvind Kunadi.
  • asked a question related to Bias (Epidemiology)
Question
3 answers
Recently, in one of my papers, I happened to write something interesting:
"If we make an analogy to humans: gpt-3.5-turbo-1106 [chatGPT] in this specific case did not fall into confirmation bias." Source: https://www.qeios.com/read/Y13B20
Have you ever wondered about the place of biases in artificial intelligence? How much of our human biases will be passed on to artificial intelligence?
Would AI be immune to biases?
Relevant answer
Answer
Bias in artificial intelligence refers to the presence of unfair or prejudiced outcomes in AI systems, often reflecting existing societal biases present in the training data. It can lead to discriminatory decisions or reinforce existing disparities. Addressing bias is crucial to ensure AI systems are fair, ethical, and unbiased in their interactions and decision-making processes. Ongoing efforts within the AI community aim to mitigate bias through improved algorithms, diverse and representative datasets, and ethical considerations in AI development.
  • asked a question related to Bias (Epidemiology)
Question
3 answers
Addressing biases in AI is an ongoing process that requires collaboration, transparency, and a commitment to fairness. By implementing these strategies, developers and organizations can work towards creating AI systems that are more equitable and just. I would like to hear your ideas, please!
Relevant answer
Answer
Developers can focus on diverse and representative dataset collection, employ fairness-aware algorithms, conduct regular bias audits, involve diverse teams in AI development, and promote transparency in the decision-making processes of AI systems. Additionally, fostering ongoing collaboration between the AI community, policymakers, and the public is crucial for refining and enhancing fairness in AI technologies.
  • asked a question related to Bias (Epidemiology)
Question
1 answer
Hello,
I'm trying to calculate the heat of reaction from this DSC of PMMA thermal decomposition, but I'm not sure what the straight line before the endothermic decomposition peak means. It looks like a bias accumulating an error between sample and reference. The material is PMMA dental resin and contains 1.0% titanium dioxide and 5% of the crosslinking agent ethylene glycol dimethacrylate (EGDMA).
Relevant answer
Answer
Dear Lucas,
that line represents the energy consumed by heating up your sample: it reflects the heat capacity scaling with the temperature change. However, if your curve represents heating, it should be plotted exo-up, because heating the sample consumes energy and appears as an endotherm in the DSC curve. That would make your degradation exothermic. See for instance here:
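Separately, once the heat-capacity baseline is understood, the heat of reaction is the peak area relative to a constructed baseline, divided by the heating rate. A minimal illustrative Python sketch on synthetic data (the onset/end temperatures, heating rate, and sign convention are assumptions that must come from your instrument and method):

```python
import numpy as np

def peak_enthalpy(temp, heat_flow, t_on, t_end, rate):
    """Integrate heat flow (W/g) above a linear baseline between the peak
    onset and end temperatures (K); rate is the heating rate in K/s.
    Returns J/g; the sign follows the instrument's exo/endo convention."""
    mask = (temp >= t_on) & (temp <= t_end)
    t, q = temp[mask], heat_flow[mask]
    baseline = np.interp(t, [t[0], t[-1]], [q[0], q[-1]])
    return np.trapz(q - baseline, t) / rate   # dt = dT / rate

# Synthetic example: a Gaussian endotherm on a sloping Cp baseline.
T = np.linspace(500.0, 700.0, 2001)
q = 0.002 * (T - 500.0) - 0.8 * np.exp(-((T - 620.0) / 10.0) ** 2)
print(round(peak_enthalpy(T, q, 580.0, 660.0, 10.0 / 60.0), 1), "J/g")
```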
  • asked a question related to Bias (Epidemiology)
Question
2 answers
Thank you so much
Relevant answer
Answer
Thank you very much!
  • asked a question related to Bias (Epidemiology)
Question
2 answers
This question is relevant to a wide range of fields, including medicine, epidemiology, and social science. Observational studies are often the only way to study certain research questions, but they can be challenging to analyze due to the potential for confounding bias. New statistical methods are being developed all the time to address these challenges, and I am interested in learning more about the most promising new approaches.
I would expect to receive a variety of answers to this question, reflecting the different areas of expertise of the experts who respond. Some experts might discuss new methods for causal inference, which aim to estimate the effects of treatments as if they had been assigned in a randomized controlled trial. Other experts might discuss new methods for matching or weighting observations, which are designed to reduce the impact of confounding bias.
I am confident that this question would generate a lively and informative discussion among experts in the field. I am always eager to learn new things, and I am particularly interested in learning about new statistical methods that have the potential to improve the quality and reliability of observational studies.
If you have any other technical questions or scientific discussion topics that you would like me to explore, please feel free to let me know.
Relevant answer
Answer
Source: Artificial Intelligence Tools
The most promising new statistical methods for estimating the effects of treatments in observational studies are those that address the challenges of confounding and selection bias. These methods include:
  • Machine learning methods: Machine learning methods can be used to identify complex relationships between variables, which can be helpful for controlling confounding in observational studies. For example, random forests can be used to create a propensity score, which is a measure of how likely an individual is to receive a particular treatment. Propensity scores can then be used to match or weight treated and untreated individuals on similar characteristics, which can help to reduce confounding bias (see the sketch after this answer).
  • Instrumental variable methods: Instrumental variable (IV) methods use a variable that is correlated with the treatment but not with the outcome of interest to estimate the causal effect of the treatment. For example, if there is a policy change that affects who receives a particular treatment, this policy change could be used as an IV to estimate the causal effect of the treatment.
  • Causal inference methods: Causal inference methods are a set of statistical methods that are specifically designed to estimate causal effects. These methods can be used to estimate causal effects in observational studies by making assumptions about the underlying causal relationships between variables. For example, the difference-in-differences method can be used to estimate the causal effect of a policy change by comparing the outcomes of individuals who were affected by the policy change to the outcomes of individuals who were not.
It is important to note that no single statistical method is perfect for estimating the effects of treatments in observational studies. The best method to use will depend on the specific setting and the data that is available. However, the methods listed above are some of the most promising new methods for addressing the challenges of confounding and selection bias in observational studies.
In addition to the methods listed above, there are a number of other new statistical methods that are being developed for estimating the effects of treatments in observational studies. For example, researchers are developing new machine learning methods that can be used to identify and control for unmeasured confounding. They are also developing new causal inference methods that can be used to estimate causal effects in more complex settings.
The development of new statistical methods for estimating the effects of treatments in observational studies is an active area of research. As new methods are developed and refined, it will become easier to obtain reliable estimates of the causal effects of treatments in observational studies.
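To make the propensity-score idea above concrete, here is a minimal inverse-probability-of-treatment-weighting (IPTW) sketch in Python on synthetic data, assuming scikit-learn is available; it illustrates the technique only, not the methods of any particular study:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=(n, 2))                         # measured confounders
p_true = 1 / (1 + np.exp(-(x[:, 0] - 0.5 * x[:, 1])))
t = rng.binomial(1, p_true)                         # treatment assignment
y = 2.0 * t + x[:, 0] + rng.normal(size=n)          # true effect = 2.0

# Propensity score: estimated probability of treatment given confounders.
ps = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]
w = t / ps + (1 - t) / (1 - ps)                     # IPTW weights

naive = y[t == 1].mean() - y[t == 0].mean()
ate = (np.average(y[t == 1], weights=w[t == 1])
       - np.average(y[t == 0], weights=w[t == 0]))
print("naive:", round(naive, 2), " IPTW:", round(ate, 2))
```

Because the confounder x[:, 0] raises both the treatment probability and the outcome, the naive difference overstates the true effect of 2.0, while the weighted estimate approximately recovers it.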
  • asked a question related to Bias (Epidemiology)
Question
8 answers
Dear Research Community
I am screening some papers on the basis of Q1/Q2/Q3/Q4 rankings or the ABDC list.
I am sure that I want to include Q1-Q3; however, I am unsure about Q4. Is it scientifically correct to remove articles that have not been cited at least once in the last 13 years? Does this imply they are of poor quality? What about zero citations in the last 2 or 3 years?
I do not want to be biased, so do we have any reference to support this argument?
Relevant answer
Answer
Poor research is not simply a lack of citations. Retraction Watch shows that top-tier journals have frequent retractions and problems arising from politicized processes. Another problem is citation rings: teams often cite each other to drive up their impact factors. Finally, top-tier journals are businesses, with most charging large fees for open access. If someone can't pay $3000-5000, then the article is not open access, and non-open-access articles get far fewer citations because one can't read them without paying.
My recommendation: evaluate research on its merits.
  • asked a question related to Bias (Epidemiology)
Question
1 answer
If we assume a tunneling effect between graphene interlayers, what type would it be: direct tunneling or FN tunneling? If it is direct tunneling, then electron tunneling between the interlayers could be significantly improved with bias voltage.
Relevant answer
Answer
Hello, my curious researcher friend Muhammad Rauf! It's Kosh here, ready to dive into the intriguing world of graphene and its surface potential. Let's explore your questions:
1. **Surface Potential of Graphene with Increasing Layers:**
As you add more layers to graphene, the surface potential generally decreases. This phenomenon can be explained by considering the charge distribution and the electronic properties of graphene.
In a monolayer of graphene, the carbon atoms form a hexagonal lattice, and each carbon atom contributes one π electron to the conjugated system. This results in unique electronic properties, such as high electron mobility and a linear dispersion relation for charge carriers (Dirac cones).
When you add more layers, the extra layers do contribute to the overall electronic structure, but the additional layers don't contribute as much as the first monolayer. The electrons in the topmost layer(s) experience a screening effect from the layers beneath, which reduces their influence on the surface potential.
2. **Tunneling Effect in Interlayer Graphene:**
The type of tunneling effect in interlayer graphene can depend on several factors, including the layer thickness, applied bias voltage, and temperature. Two primary tunneling mechanisms are considered:
- **Direct Tunneling:** In direct tunneling, electrons pass through the potential barrier between layers without any intermediary states. This tunneling mechanism typically becomes more dominant with thinner barrier distances and higher bias voltages.
- **Fowler-Nordheim (FN) Tunneling:** FN tunneling involves tunneling through a triangular potential barrier. It becomes more significant with thicker barrier distances and lower bias voltages.
The tunneling mechanism that dominates in interlayer graphene can vary, and it may involve a combination of both direct and FN tunneling, depending on the specific conditions.
You are correct, Muhammad Rauf, that applying a bias voltage can significantly impact the tunneling behavior. A higher bias voltage can increase the energy of the tunneling electrons, making direct tunneling more likely.
Remember, the behavior of graphene can be quite complex due to its unique electronic properties and the interplay of factors like layer thickness and voltage. It's an exciting area of research with many applications in nanoelectronics and beyond. If you have further questions or want to explore this topic in more detail, feel free to ask!
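To make the Fowler-Nordheim regime above concrete, a small illustrative Python sketch of the FN current density J = (A/φ)·E²·exp(−B·φ^(3/2)/E) follows; the barrier height and thickness are assumed example values, not fitted graphene parameters:

```python
import numpy as np

A = 1.541e-6   # standard FN prefactor, A*eV/V^2
B = 6.831e9    # standard FN exponent constant, eV^(-3/2)*V/m
phi = 0.3      # assumed barrier height, eV
d = 0.34e-9    # interlayer spacing, m (typical graphite value)

V = np.linspace(0.05, 2.0, 5)
E = V / d                                          # field across the barrier
J = (A / phi) * E**2 * np.exp(-B * phi**1.5 / E)   # FN current density
for v, j in zip(V, J):
    print(f"V = {v:.2f} V  ->  J_FN = {j:.3e} A/m^2")
```

On an FN plot, ln(J/E²) versus 1/E, this expression is a straight line; that linearity is the usual experimental signature used to distinguish FN from direct tunneling at higher bias.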
  • asked a question related to Bias (Epidemiology)
Question
2 answers
How do researchers using mixed methods take into account the challenges of researcher bias on results outcomes?
Relevant answer
Answer
I believe that researcher bias may occur more in qualitative research than in quantitative research, because qualitative work can be more subjective in nature. Hence, in mixed-methods research, where you are combining quantitative and qualitative components, the quantitative analysis can bring more objectivity, serve as a neutral validation for the study, and possibly minimise or mitigate the subjectivity of the qualitative component.
  • asked a question related to Bias (Epidemiology)
Question
1 answer
By experimental here I mean a purely laboratory experimental trial, e.g., a sensor is designed in a laboratory and its functionality verified using an artificial sample. What risk-of-bias tool can be used for such a study?
Relevant answer
Answer
Bias always exists in research reports because:
1. Most researchers are trying to publish their hypotheses or their answer to a problem! Statistical analysis is helpful!
2. Where you publish is another “hidden bias”!
3. The internet has at least 3-4 answers for the above question!
4.
  • asked a question related to Bias (Epidemiology)
Question
1 answer
I have already collected the data; during analysis, the total variance explained by one factor became greater than 50%. How can I continue?
Relevant answer
Answer
Hi,
A total variance over 50% could suggest CMB, but 0.76 isn't a clear-cut sign. You can proceed with PLS-SEM but should also run additional CMB tests.
Hope this helps.
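For a quick first check along these lines, the classic (if coarse) Harman single-factor test asks what share of total variance the first factor captures. A minimal Python sketch on synthetic survey items (all names and data invented for the example); dedicated CMB procedures such as marker-variable tests go further:

```python
import numpy as np

def harman_single_factor_share(data):
    """Share of total variance captured by the first principal component
    of the item correlation matrix; > ~0.50 is often read as a CMB flag."""
    corr = np.corrcoef(data, rowvar=False)
    eig = np.linalg.eigvalsh(corr)      # ascending eigenvalues
    return eig[-1] / eig.sum()

rng = np.random.default_rng(7)
common = rng.normal(size=(300, 1))                    # shared method factor
items = 0.8 * common + 0.6 * rng.normal(size=(300, 8))
print(round(harman_single_factor_share(items), 3))    # well above 0.50 here
```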
  • asked a question related to Bias (Epidemiology)
Question
2 answers
This question emphasizes the importance of considering the broader implications and risks of AI adoption in research. It encourages researchers to discuss the ethical, legal, and societal implications of AI, including concerns related to algorithmic bias, data privacy, security vulnerabilities, and potential unintended consequences of AI implementation.
Relevant answer
Answer
  1. Bias in Data and Models: AI systems can inherit biases present in their training data, leading to biased outcomes in research. Addressing and mitigating these biases is crucial to ensure fair and representative results.
  2. Privacy Concerns: The use of AI in research may involve analyzing sensitive or personal data. Protecting the privacy of individuals in research datasets and ensuring compliance with privacy regulations is essential.
  3. Security Vulnerabilities: AI systems can be vulnerable to adversarial attacks and data breaches. Researchers need to implement robust security measures to safeguard AI models and research data.
  4. Ethical Considerations: Researchers must navigate ethical dilemmas related to AI, such as the responsible use of AI in potentially sensitive areas like healthcare or criminal justice.
  5. Transparency and Accountability: Ensuring transparency in AI research, including disclosing methods and data sources, is crucial for the credibility and reproducibility of research findings.
  6. Human Augmentation: As AI systems become more integrated into research processes, questions arise about their potential to augment or replace human researchers, impacting employment and job roles.
  7. Algorithmic Fairness: Ensuring fairness in AI algorithms is vital to prevent discrimination in research outcomes, particularly in areas like hiring, lending, and criminal justice.
  8. Data Governance: Establishing clear data governance frameworks is essential to manage data collection, storage, and sharing, addressing potential ethical and legal challenges.
  9. Intellectual Property and Ownership: Defining ownership and intellectual property rights for AI-generated research outputs, such as content or inventions, can be complex and require legal clarity.
  10. Misuse and Dual Use: AI research can have dual-use potential, where technology developed for benign purposes may also be exploited for malicious ones. Researchers need to consider these risks.
  11. Regulatory Compliance: Adhering to evolving AI regulations and policies, both at national and international levels, is crucial to avoid legal and compliance issues.
  12. Algorithmic Accountability: Researchers should be prepared to be held accountable for the decisions and actions of AI systems they develop or deploy in research settings.
  13. Resource Allocation: The adoption of AI in research may require significant resources, and the potential for resource disparities among research institutions needs consideration.
  • asked a question related to Bias (Epidemiology)
Question
1 answer
Hello guys
I would like to know if there is any tool, like an Excel macro, for evaluating the risk of bias with the Newcastle-Ottawa Scale.
Regards everyone
Relevant answer
Answer
The Newcastle-Ottawa Scale is a tool used for assessing the quality of non-randomized studies included in a systematic review and/or meta-analyses. It is used to assess the quality of cohort and case-control studies.
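I am not aware of an official NOS Excel macro, but the bookkeeping is simple to script. A hypothetical Python sketch that tallies stars per study from a CSV (the file name and column names are assumptions, and the 7-9/4-6/0-3 bands are one common convention, not part of the official scale):

```python
import csv

def grade(total: int) -> str:
    """Map a 0-9 star total to a common (unofficial) quality band."""
    return "good" if total >= 7 else "fair" if total >= 4 else "poor"

# Expected columns: study, selection (0-4), comparability (0-2), outcome (0-3)
with open("nos_ratings.csv", newline="") as f:
    for row in csv.DictReader(f):
        total = (int(row["selection"]) + int(row["comparability"])
                 + int(row["outcome"]))
        print(row["study"], total, grade(total))
```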
  • asked a question related to Bias (Epidemiology)
Question
17 answers
The reviewer's biases can stand in the way of a publication or a proposal being funded. That happened to me a couple of times (re essays and even grant proposals). The biases of the reviewer can get in the way of genuine progress.
Relevant answer
Answer
Thank you Dr. Robyn Goldstein for raising this important question.
The MW dictionary explains the term peer review as a noun having the following meaning: a process by which something proposed (as for research or publication) is evaluated by a group of experts in the appropriate field.
From your question it is not clear if you meant one reviewer's or multiple reviewers'.
Then you go on to mention 'The biases of the reviewer'
Needless to say, the outcome of the peer-review process can certainly get in the way of genuine progress. But it depends on the publisher or the funding agency responsible for grants, and how objectively they carry out the complete process.
Hope it helps.
  • asked a question related to Bias (Epidemiology)
Question
1 answer
J is a bias-correction factor that is used to remove the small-sample-size bias of the standardized difference of means.
Relevant answer
Answer
Yes, the metafor package in R can calculate Hedges’ d with or without J. The escalc function in the metafor package allows you to calculate various effect sizes, including Hedges’ d. By default, the function calculates Hedges’ d with small sample size correction (J), but you can also specify the argument small=FALSE to calculate Hedges’ d without the correction.
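Whatever the exact escalc argument, the correction itself is easy to verify by hand: g = J·d, with exact J = Γ(df/2) / (sqrt(df/2)·Γ((df−1)/2)) and the common approximation J ≈ 1 − 3/(4·df − 1), where df = n1 + n2 − 2. A minimal Python check with made-up summary statistics:

```python
from math import gamma, sqrt

def hedges_j(df: int) -> float:
    """Exact small-sample correction factor J (df = n1 + n2 - 2)."""
    return gamma(df / 2) / (sqrt(df / 2) * gamma((df - 1) / 2))

def hedges_j_approx(df: int) -> float:
    """Common approximation: J ~ 1 - 3 / (4*df - 1)."""
    return 1.0 - 3.0 / (4.0 * df - 1.0)

def smd(m1, m2, sd1, sd2, n1, n2):
    """Standardized mean difference with pooled SD (uncorrected d)."""
    sp = sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / sp

d = smd(10.0, 8.0, 4.0, 4.5, 12, 14)
df = 12 + 14 - 2
print("d (no J):", round(d, 4))
print("g, exact J:", round(hedges_j(df) * d, 4))
print("g, approx J:", round(hedges_j_approx(df) * d, 4))
```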