Bias (Epidemiology) - Science topic
Any deviation of results or inferences from the truth, or processes leading to such deviation. Bias can result from several sources: one-sided or systematic variations in measurement from the true value (systematic error); flaws in study design; deviation of inferences, interpretations, or analyses based on flawed data or data collection; etc. There is no sense of prejudice or subjectivity implied in the assessment of bias under these conditions.
Questions related to Bias (Epidemiology)
Identifying and mitigating biases in data and algorithms is crucial to ensure fairness, transparency, and accountability in AI systems. A comprehensive approach:
Identifying Biases
1. *Data auditing*: Analyze data sources, collection methods, and preprocessing techniques to detect potential biases.
2. *Exploratory data analysis*: Use statistical methods and visualization tools to identify patterns, outliers, and disparities in the data.
3. *Bias detection tools*: Utilize specialized tools, such as AI Fairness 360, to detect biases in data and algorithms.
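As an illustration of points 2 and 3, a first disparity check can be computed directly from the data before reaching for a dedicated toolkit. The sketch below, using made-up toy data and hypothetical column names, computes the disparate impact ratio (per-group selection rates compared against the common "four-fifths" rule of thumb), one of the metrics that tools like AI Fairness 360 also report:

```python
import pandas as pd

# Hypothetical toy data: a binary "hired" outcome and a protected attribute.
df = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "hired": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Selection rate per group.
rates = df.groupby("group")["hired"].mean()

# Disparate impact ratio: unprivileged rate / privileged rate.
# A common rule of thumb flags values below 0.8 (the "four-fifths rule").
di_ratio = rates["B"] / rates["A"]
print(round(di_ratio, 3))  # 0.25 / 0.75 -> 0.333
```

The same per-group rate comparison generalizes to any binary outcome (loan approval, admission, grading), which is why it is a useful first audit step.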
Types of Biases
1. *Selection bias*: Biases in data collection or sampling methods.
2. *Confirmation bias*: Biases in algorithm design or training data that reinforce existing beliefs.
3. *Anchoring bias*: Biases in algorithmic decision-making that rely too heavily on initial or default values.
4. *Availability heuristic bias*: Biases in algorithmic decision-making that overemphasize vivid or memorable events.
Mitigating Biases
1. *Data preprocessing*: Clean and preprocess data to remove biases and ensure consistency.
2. *Data augmentation*: Increase dataset diversity by adding new data points or transforming existing ones.
3. *Regularization techniques*: Use regularization methods, such as L1 and L2 regularization, to reduce overfitting and biases.
4. *Fairness-aware algorithms*: Develop algorithms that incorporate fairness metrics and constraints.
5. *Human oversight and review*: Implement human review processes to detect and correct biases in AI decision-making.
6. *Diverse and inclusive teams*: Foster diverse and inclusive teams to bring different perspectives and reduce biases in AI development.
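One concrete instance of fairness-aware preprocessing is reweighing (Kamiran & Calders), which assigns each training instance the weight P(group) x P(label) / P(group, label), so that group and label become statistically independent in the weighted data. A minimal sketch with hypothetical toy data:

```python
import pandas as pd

# Hypothetical toy training data: a protected attribute and a binary label.
df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "label": [1,   1,   0,   1,   0,   0],
})

n = len(df)
# Marginal and joint frequencies of (group, label).
p_group = df["group"].value_counts(normalize=True)
p_label = df["label"].value_counts(normalize=True)
joint = df.groupby(["group", "label"]).size() / n

# Instance weight = P(group) * P(label) / P(group, label):
# over-represented (group, label) pairs are down-weighted, and vice versa.
def weight(row):
    return p_group[row["group"]] * p_label[row["label"]] / joint[(row["group"], row["label"])]

df["w"] = df.apply(weight, axis=1)
print(df["w"].round(3).tolist())  # [0.75, 0.75, 1.5, 1.5, 0.75, 0.75]
```

The resulting column can be passed as `sample_weight` to most classifiers, which is how this technique plugs into a standard training pipeline.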
Best Practices
1. *Document and report biases*: Transparently document and report biases in data and algorithms.
2. *Continuously monitor and evaluate*: Regularly monitor and evaluate AI systems for biases and fairness.
3. *Establish accountability*: Establish clear accountability and responsibility for AI decision-making and biases.
4. *Foster a culture of fairness*: Encourage a culture of fairness and transparency within organizations developing AI systems.
Dear He Huang, Shary Heuninckx, Cathy Macharis
I read your paper:
20 years review of the multi actor multi criteria analysis (MAMCA) framework: a proposition of a systematic guideline
My comments:
1- In the abstract you say “emergence of stakeholder-based multi-criteria group decision making (MCGDM) frameworks. However, traditional MCGDM frequently overlooks the interactions and trade-offs among different actors and criteria”
I completely agree with this point about interaction, possibly the most important feature in MCDM, which 99% of methods ignore. They prefer to work as if criteria were independent entities and then add up the results. MCDM does not work with the concept that the result is A ∪ B, a sum, when it should be A ∩ B, an intersection.
I have been claiming this for years on RG, and yours is the first paper I have read that addresses it.
Your paper also addresses the very important issue that it is the stakeholders who decide the alternatives, projects or options, as well as the criteria to which they are subject.
2- Page 2 “it is necessary to involve more than one decision maker (DM) to appraise the possible alternatives in the interest of, for example, diverse perspectives, increased acceptance of decision, and reduced bias”
In my opinion, the DM, in the first stage of the process, is only an instrument that receives information and demands from the stakeholders, translates them into the decision matrix, and selects the MCDM method.
His most important function is to analyze the result, make corrections as per his/her know-how and expertise, and recommend the solution to the stakeholders. They are the decision-makers.
3- Page 2 “The stakeholders can be defined as individuals or interest groups that have vested interests in the outcome of a particular issue or the decision being considered (Freeman et al. 2010)”.
Absolutely correct, because each one is responsible for an area of the project. These are the people who know what is needed or wished for, and you also emphasise it.
4- Page 3 “The original objective of MAMCA is to help actors understand the preferences and priorities of all relevant stakeholders, and to identify and evaluate different alternative solutions for which a consensus can be reached (Macharis & Bernardini, 2015). It is a decision-support framework with ’stakeholder involvement’ as a keyword”.
5- In my opinion, the word ‘preferences’ should be banned in MCDM. Normally, a stakeholder does not have preferences. A production manager does not have a preference for fabricating product A or product B, or about the importance of each product; he follows instructions from a plan decided at the highest levels. It is absurd to think, for instance, that rejects are three times more important than quality, when this comes from a person who possibly does not have the faintest idea about production. The stakeholder has a production plan and has to comply with it.
In my opinion, after reading hundreds of papers, I have realized that many authors have only a theoretical vision of the problem, ignore the reality, and try to solve a problem that exists only on paper.
Another word I find inappropriate is “consensus”. In MCDM, consensus is a strange word, because most of the time there is a fight among the different stakeholders and components, where some must give and others receive.
In 1974 Zeleny defined the MCDM problem as a ‘compromise’, a balance between all parts, and that is only possible using an MCDM method; that is, it is the method which, for example, must decrease a production goal to satisfy another goal, such as the financial objective of a return of, say, 6%. It is impossible for a human being to consider all the hundreds of interactions necessary to reach a balanced solution.
The MCDM method knows nothing about consensus, but it knows how to find an equilibrium or balance for the whole system.
6- Page 4 “The most relevant criteria are selected for every stakeholder and weights are elicited that reflect their importance”
I am afraid I don’t concur on weights. Weights are used to quantify the relative importance of criteria, following either subjective or objective procedures.
The first kind is useless in MCDM, while the second kind is very useful. In countless publications, as in yours, it is said that weights are fundamental in MCDM. This is an intuitive notion without any mathematical support.
However, I agree that in general, in most projects, criteria have different importance, no doubt about it, and that the experience of the DM is valuable and must be incorporated in the MCDM process, but at the right time and in the proper mode.
Just think that criteria are linear equations and, as such, subject to the laws of linear algebra.
Linear equations can be represented graphically as straight lines in an x-y plot, with different slopes that depend on their coefficients.
When you apply a weight to a criterion, it multiplies each value within it. This causes the criterion's line to displace parallel to itself, while the distances between its values are preserved. When this is done for other criteria, each multiplied by a different weight value, their respective lines also displace parallel to themselves, because within each one the distance between values stays the same.
What is not the same is the distance between two criteria, because it depends on the different weight values. As can be seen, there is nothing in these weights that helps to evaluate alternatives.
It is different with entropy, where each criterion obtains an entropic value that quantitatively measures the dispersion of its values. It is precisely this property that makes entropy useful: a criterion with high entropy denotes values that are close together within the criterion.
The complement to 1 indicates the amount of information each criterion carries to evaluate alternatives, following Shannon's theorem.
Therefore, weights only show the geometric displacement of a whole criterion, while entropy shows the discrimination of values within each criterion.
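For readers who want to try the entropy approach described above, here is a minimal sketch of the classic entropy weight method (the decision matrix values are made up for illustration). Each criterion's weight is derived from its divergence 1 - E, so a criterion whose values are tightly clustered (high entropy, little discrimination) receives a small weight:

```python
import numpy as np

# Hypothetical decision matrix: 4 alternatives (rows) x 3 criteria (columns).
X = np.array([
    [7.0, 430.0, 0.20],
    [6.0, 410.0, 0.35],
    [9.0, 420.0, 0.30],
    [8.0, 440.0, 0.15],
])

# Normalize each criterion so its column sums to 1.
P = X / X.sum(axis=0)

# Shannon entropy per criterion; k = 1/ln(m) scales it into [0, 1].
m = X.shape[0]
k = 1.0 / np.log(m)
E = -k * (P * np.log(P)).sum(axis=0)

# Degree of divergence (1 - E): more dispersion means more information.
d = 1.0 - E
weights = d / d.sum()
print(weights.round(3))
```

Here the second criterion (values 410-440, nearly uniform shares) ends up with almost no weight, while the third, with the widest relative spread, dominates, which is exactly the discrimination property discussed above.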
7- Page 4 “Different MCDM methods can be used, like for example analytic hierarchy process (AHP)”
You are contradicting yourself: at the beginning you talk about interaction, and now you mention using AHP, where interaction is not allowed (Saaty dixit, not me).
8- “A primary difference lies in MAMCA’s high regard for stakeholder autonomy; stakeholders are empowered to introduce criteria that reflect their interests and to evaluate alternatives based on personal preferences”
9- I agree, except for the word ‘preferences’.
I do not know about you, but I have worked in project management in several countries, on large hydro, mining, oil, paper and metallurgical projects, attending many meetings, and I do not remember anybody asking for or expressing preferences.
We were the stakeholders and, like my other colleagues, I was just following direction from the highest levels. Of course, there were open discussions and everybody was free to express his opinions. Nobody was saying “my preferences are…”.
Where did MCDM scholars get that word ‘preference’? We expressed the needs of our own departments and our opinions, discussed with other colleagues, usually the financial people, what we needed and explained why, and usually it was the project manager who closed the discussion with his own opinion.
This is how the real world works, not with classroom examples. From there, the DM must consider, without discussion, what each manager said and put it into matrix format. Normally the DM has no authority to decide whether the environment criterion is more or less important than the transportation criterion. A DM is a specialist in decision-making, involving mathematics, knowledge and experience of other projects, similar or not, something that in general is unknown to stakeholders. Thus, each one must operate in his own field: the stakeholders provide information and needs, and the DM processes them, analyzes the result, modifies it if necessary, and submits it to the stakeholders.
Imagine that, during his/her presentation, the DM is interrupted by a stakeholder asking for the origin of the data in the matrix, and the DM responds that it comes from pair-wise comparisons and thus involves intuition. What do you think the stakeholder's reaction would be, other than incredulity at what he is hearing? I certainly know what mine would be.
These are some of my comments. I hope they are of service.
Nolberto Munier
Benefits of AI in Higher Education
Improved Educational Opportunities:
Education customized to meet the needs of each individual learner.
Enhanced accessibility via assistive technologies for students with disabilities.
Simulations and interactive materials that make learning more engaging.
Efficiency in Administration:
Routine tasks can be automated to free up staff for more important work.
Enhanced data processing and analysis to support better decision-making.
Advances in Research:
Research progresses more quickly as a result of faster data processing.
AI-powered systems that enable international research cooperation.
Assistance for Students:
Using predictive analytics, retention rates can be raised by identifying students who need more assistance.
International Cooperation:
Geographical distances can be bridged by AI, enabling global research collaborations.
Drawbacks of AI in Higher Education
Employment Displacement:
Automation may result in the loss of administrative positions.
Bias and Ethical Issues:
Risk of biased AI systems in grading and admissions.
Ethical oversight and accountability for AI decisions are required.
The Digital Divide:
Disparities in how well-funded and under-funded institutions adopt AI.
Security and Privacy of Data:
Difficulties in guaranteeing the security and privacy of student data.
Over-reliance on Artificial Intelligence:
Potential for greater susceptibility to system faults and reduced human control.
Skills Gap:
Faculty and students must acquire new skills in order to use AI technologies efficiently.
Research Homogenization:
Risk that reliance on similar AI techniques will erode diverse research perspectives.
Expense and Obsolescence of Technology:
High upfront costs and the difficulty of keeping up with rapid technological change.
Regulatory and Political Difficulties:
Navigating financial priorities and governmental regulations that could affect the use of AI.
I would like to know what to do if I suspect that two studies (same authors) were based on the same patients but with different follow-up. Can we include both? Can we downgrade one of them (high risk of bias) to exclude it in a sensitivity analysis? I cannot find any clear answer in the different tools assessing quality of evidence (GRADE, RoBs, ...).
- Imputation: Use statistical methods like mean, median, or mode imputation for numeric fields (e.g., average age).
- Deletion: For substantial gaps, consider removing incomplete records if doing so doesn't bias results.
- Follow-up: If feasible, revisit schools to collect missing information. For critical fields, prioritize completeness during data collection. For example, if 20% of oral health records lack caries status, imputing based on the school’s average caries prevalence might help.
Citation: Schafer, J. L., & Graham, J. W. (2002). Missing data: Our view of the state of the art. Psychological Methods, 7(2), 147–177.
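Following the imputation options listed above, here is a minimal pandas sketch of mean imputation for a numeric field and mode imputation for a categorical one; the column names and values are hypothetical stand-ins for the school health records described:

```python
import pandas as pd

# Hypothetical school health records with missing values.
df = pd.DataFrame({
    "age":    [6.0, 7.0, None, 8.0, None],
    "caries": ["yes", None, "no", "yes", None],
})

# Numeric field: impute with the mean of the observed values.
df["age"] = df["age"].fillna(df["age"].mean())

# Categorical field: impute with the mode
# (e.g. the school's most common caries status).
df["caries"] = df["caries"].fillna(df["caries"].mode()[0])

print(df["age"].tolist())     # missing ages filled with 7.0 (the mean of 6, 7, 8)
print(df["caries"].tolist())  # missing statuses filled with "yes" (the mode)
```

Note that this simple single imputation understates uncertainty; for inferential analyses, Schafer & Graham (2002) recommend multiple imputation or maximum-likelihood approaches instead.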
The Impact of Artificial Intelligence on Recruitment: Artificial intelligence enhances efficiency by streamlining candidate selection, analyzing large datasets, reducing human bias, and providing tailored training opportunities. It also supports performance evaluation while posing challenges such as losing the human touch and potential algorithmic bias.
I'm interested in exploring this topic for my research. Research direction would be:
1. Investigating the effectiveness of ML in mitigating specific biases (e.g., confirmation bias, loss aversion).
2. Developing ML-based decision support systems for financial advisors.
3. Analyzing the impact of ML-driven interventions on investor behavior.
4. Exploring the role of ML in fostering financial literacy.
#BehaviouralFinance #MachineLearning
I am curious to know whether biases, narcissistic tendencies, empathy, or any other psychological traits impact politicians' decision-making.
What is the purpose of this portal for the researcher, and how are the scholars' data protected from any bias or favoritism? Who is responsible?
Hope this helps you guys to write the discussion part of your qualitative papers:
- Thematic Analysis: Firstly, this is essential. I organised the discussion by key themes from the interviews (e.g., communication, resources), with direct quotes to capture stakeholder perspectives (Braun & Clarke, 2006).
- Linking to Literature: This is crucial and tricky. I related my study findings to existing studies, highlighting agreements and differences to show new insights. Yes, some areas presented scarce research, so I expanded the literature to include similar settings. This helped me strengthen the context and value of my study (Silverman, 2011).
- Addressing Bias and Limitations: We cannot think that our studies never have limitations. Please include a reflection on researcher bias and study limits, explaining how these were managed with techniques like journaling. Please note, my friends, this builds transparency and credibility, despite challenges in achieving balanced self-reflection (Creswell & Poth, 2018).
Happy to share knowledge,
Anitha
The risk of bias significantly influences the validity of systematic review conclusions, as studies with higher bias are more likely to overestimate treatment effects. Systematic reviews that incorporate assessments of bias, such as the Cochrane Risk of Bias Tool, tend to provide more reliable estimates of intervention effectiveness.
Higgins, J. P. T., & Green, S. (2011). Cochrane Handbook for Systematic Reviews of Interventions. Version 5.1.0. The Cochrane Collaboration. [Available at: http://handbook.cochrane.org]
“In a dynamic and uncertain environment, how can behavioral finance theory be used to explain investors’ decision-making biases in the formation of asset price bubbles, and whether corresponding policy intervention measures can effectively curb the formation of such bubbles?”
Recently I read an article by Wagdy Sawahel, published on 05 September 2024 in University World News: Africa Edition, titled “How Africa can help to tackle global bibliometric coloniality”.
As time passes, world conflicts are becoming more aggressive and the political arena worldwide is impacting every dimension one can imagine (sports, education, economics, safety and security, etc.). Of concern here is the global bibliometric bias that is salient in countries located mostly in the southern hemisphere. The call nowadays is to establish new indexes that provide balance and fairness. What do you think? As a researcher, expert, professional, etc., do you support such a move?
Note: read the referenced articles alongside.
I'm aware of the gradient descent and the back-propagation algorithm. What I don't get is: when is using a bias important and how do you use it?
There exists a neural network model designed to predict a specific output, detailed in a published article. The model comprises 14 inputs, each normalized with minimum and maximum parameters specified for normalization. It incorporates six hidden layers, with the article providing the neural network's weight parameters from the input to the hidden layers, along with biases. Similarly, the parameters from the output layer to the hidden layers, including biases, are also documented.
The primary inquiry revolves around extracting the mathematical equation suitable for implementation in Excel or Python to facilitate output prediction.
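In general, published weights and biases define the prediction as a chain of matrix products: min-max normalize the inputs, then apply h = f(W·x + b) layer by layer. Below is a hedged Python sketch of that forward pass with hypothetical layer sizes and random placeholder parameters; the real weights, biases, normalization ranges, number of hidden layers, and activation function must all be taken from the article:

```python
import numpy as np

# Placeholder sizes/parameters: substitute the article's published values.
rng = np.random.default_rng(0)
n_in, n_hidden = 14, 10

W1 = rng.normal(size=(n_hidden, n_in))   # input -> hidden weights
b1 = rng.normal(size=n_hidden)           # hidden biases
W2 = rng.normal(size=(1, n_hidden))      # hidden -> output weights
b2 = rng.normal(size=1)                  # output bias

x_min = np.zeros(n_in)                   # per-input minima from the article
x_max = np.ones(n_in)                    # per-input maxima from the article

def predict(x):
    # Min-max normalization of the raw inputs.
    z = (x - x_min) / (x_max - x_min)
    # Hidden layer: tanh is a common choice, but use the article's activation.
    h = np.tanh(W1 @ z + b1)
    # Output layer (often linear for regression); de-normalize if required.
    return (W2 @ h + b2).item()

y = predict(rng.uniform(size=n_in))
```

In Excel the same equation becomes one cell per hidden neuron, e.g. `=TANH(SUMPRODUCT(weights_row, normalized_inputs) + bias)`, with the output cell summing the hidden cells times the output weights plus the output bias.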
I am currently validating a tool assessing patients' perspectives on primary care. Initial research showed me that using a reverse-order Likert scale (1 = "I completely agree", 5 = "I completely disagree", 3 = "I don't know") would avoid response bias, so I collected data with this tool. I have passed the analysis stage. What are the important aspects to consider during scale validation? What is the impact on the descriptive features of the tool and its scoring? What precautions should I follow?
What are the potential risks of Artificial Intelligence (AI) in higher education, particularly concerning data privacy, bias and the digital divide?
How can these risks be mitigated?
So let us assume in whatever field there is a difference between 'life' and 'research'.
Consider the following bias:
- in a research project, cases are added to one group which would in 'real life scenarios' not be identified as such.
For example a patient with a rare disease is added to a cohort for research purposes, while in real life the diagnosis is not strong enough to justify dangerous therapy.
The research cohort is inflated (possibly to allow 'stronger' statistics or reach a minimum group size), yet the over-included cases would better be suited as control.
Obviously this problem is rather simple. I am asking:
- is there a name already for that kind of bias?
- can you name a citation or researcher?
Thanks a bunch,
Stefan
Dear researchers,
I am preparing a systematic review (not a meta-analysis) compiling studies on various diagnostic systems against a specific pathogen. The most relevant parameters evaluated are the type of material used, the type of sample used, and the detection limit achieved.
It is important to note that these are not clinical or observational studies, and no diagnostic performance parameters are being compared to a gold standard.
What type of application or platform would be most suitable for evaluating the quality (risk of bias) of the selected articles? Would it be possible to modify any published platform?
Thank you in advance for your assistance.
Sincerely,
Daniel
I am conducting a qualitative study that uses interviews to investigate teachers' perceptions of a particular leadership practice, focusing on 3 schools with a total of 300 teachers. I work at one of the schools. I am hesitating about my sampling strategy. I considered snowball sampling or convenience sampling, which are easy, time-efficient and flexible. I could utilise people who have particular knowledge and experience in the area in question, which is teachers' perceptions of promotion strategies.
But I am also thinking of limiting and controlling my sample size and focusing only on senior teachers with 10-plus years of experience. For one thing, they have a lot of rich input, and for another, I would avoid the potential bias of snowball or convenience sampling. What do you think? I would really appreciate your input. Thank you so much indeed.
The question specifically looks at how lenders' biases (e.g. confirmation bias, anchoring bias) influence their evaluations and approval of loans for green projects, and what implications this has for carbon neutrality.
Please give further suggestions on connecting psychology, finance, and carbon neutrality.
I want to check the difference in post-test scores between the test group and the control group while controlling for pre-test scores.
I read that ANCOVA is based on the assumption that the means of the covariate are equal across groups. In other words, the covariate (pre-test scores) should not differ significantly between groups. If there is a significant difference in the covariate across groups, it suggests that the pre-test scores themselves differ by group, which could bias the analysis results.
However, what I understand is that ANCOVA is conducted to control for covariates. If the pre-test (covariate) is already the same across groups, why do we need ANCOVA to control for it? Isn't that the same as an independent t-test? Wouldn't it be more logical to say: despite the difference between the two groups (experimental, control) in the pre-test, set it as a covariate to see the difference between the two groups in the post-test?
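One way to see what ANCOVA actually estimates is to write it as a linear model with the covariate as a regressor: the group coefficient is then the post-test difference adjusted to equal pre-test scores. A minimal sketch with simulated data (all numbers are hypothetical, and the true group effect is set to 5):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40

# Simulated pre/post scores for a control (0) and treatment (1) group,
# with a true adjusted treatment effect of 5 points.
group = np.repeat([0.0, 1.0], n // 2)
pre = rng.normal(50, 10, n)
post = 0.8 * pre + 5.0 * group + rng.normal(0, 5, n)

# ANCOVA as ordinary least squares: post ~ intercept + group + pre.
X = np.column_stack([np.ones(n), group, pre])
beta, *_ = np.linalg.lstsq(X, post, rcond=None)

# beta[1] is the group effect adjusted to equal pre-test scores;
# beta[2] estimates the pre->post slope (about 0.8 here).
print(round(beta[1], 2), round(beta[2], 2))
```

Because the pre-test term absorbs outcome variance, the adjusted comparison is more precise than a plain t-test on post-test scores even when pre-test means happen to be equal, which is one standard answer to the question above.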
How do you address potential biases in diary entries?
How do you handle the issue of observer bias in your research?
How do you handle potential biases in survey responses, especially regarding sensitive topics?
How do you ensure that your observations are objective and not influenced by personal biases?
How do you balance including your own research in a literature review without allowing bias to influence your analysis?
Hi there,
I recently read some case-control studies and noticed that not all studies match their participants on length of follow-up. (The matching variables in that study include length of follow-up; see DOI:10.1002/cpt.2369.)
In a case-control design, researchers index at the date of event occurrence and look back over several months to explore the incidence of exposure. I'm wondering if, by not matching on the length of follow-up, those who experience a longer follow-up period might also have a higher possibility of being exposed, finally leading to time-window bias (DOI: 10.1097/EDE.0b013e3182093a0f). Instead, some researchers proposed that using time-varying sampling is a viable way to deal with this bias. (DOI: 10.1136/bmjopen-2015-007866)
Thus, I'm confused about:
(1) is it necessary to control for time-window bias?
(2) what is the difference between these two methodologies?
(3) based on question (2), which one is better?
I have designed magnetic material with different biasing conditions in HFSS. Now I want to give an RF AC signal and do a transient simulation in HFSS. Is it possible to do in HFSS? Please help me to figure this out. Thanks.
In my transnational teaching context, I noticed that many learners learn rigidly. For example, they gain knowledge through watching news, but they are not critical about the source of the news. Also, they express themselves very subjectively with no trace of criticality in their speech. I mean, we are human beings. It is understandable to be biased toward certain things due to the lack of knowledge. But my question is, what exactly is being critical for students in a transnational context?
I want to know whether the n-doped side of a solar cell is connected to the positive or the negative electrode.
To the best of my knowledge, a systematic review aims to collect and summarize all the published & unpublished literature revolving around a certain topic/sub-topic.
Sometimes, I encounter results in ClinicalTrials.gov which are yet to be published, or abstracts which do not have their full-texts available yet, or conference proceedings which do not include their methodologies in fine detail.
In this case, when the methods section is not addressed appropriately, what tools could be employed to assess the risk of bias/quality of such research types?
Thank you beforehand.
I am doing a systematic review, and I am measuring risk of bias with RoB 2 for RCTs and ROBINS-I for non-RCTs. My question is: for single-arm studies, can I use ROBINS-I? I am not sure how to answer the questions for the confounding domain in this case.
Thank you!
The source of classification bias in marker gene metagenome sequencing?
A variability in the taxonomic classification of microbial communities when using different primer pairs (e.g., for 16S rDNA) is commonly known. However, mismatches to these primers are not described as the major reason for this bias. My question is: what are the other possible causes of this bias, and which is now supposed to be the major one?
Dear all,
I've recently processed some samples for ATAC-seq. My corresponding ATAC-seq library looks different (see picture: Bio-Analyzer) than the expected profile. I was wondering if I can still sequence it or it will be too biased.
Thank you for your help
Best,
Karim
I randomly interviewed 250 poor people and 250 non-poor people. Coding 1 for poor and 0 otherwise, does estimating a logit model aiming to capture the probability of being poor make sense? What are the possible biases?
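One bias worth noting in this design: with a 50/50 choice-based (case-control style) sample, the logit slope estimates remain consistent, but the intercept absorbs the sampling design. The prior-correction adjustment (King & Zeng, 2001) fixes the intercept with one subtraction, sketched below with hypothetical numbers (tau is an assumed true population poverty rate):

```python
import math

# Hypothetical figures: the sample is 50% poor by design (y_bar = 0.5),
# but suppose the true poverty rate in the population is tau = 0.20.
y_bar = 0.5
tau = 0.20

# Prior correction for the logit intercept under choice-based sampling:
# slopes are unchanged; only the intercept shifts by this amount.
correction = math.log(((1 - tau) / tau) * (y_bar / (1 - y_bar)))
b0_sample = -0.3                  # hypothetical intercept estimated on the sample
b0_adjusted = b0_sample - correction
print(round(correction, 3))       # ln(4) -> 1.386
```

Without this adjustment, predicted poverty probabilities from the model would be systematically too high whenever the population poverty rate is below 50%.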
What is the difference between limitation in recall and recall bias?
Can anyone suggest how to design a transistor model in Ansys Circuit? An .snp file for different bias points is available, but I want to design the transistor from the datasheet and observe its behaviour under any bias condition.
Any suggestion would be really helpful.
A new 𝑇𝑟𝑒𝑛𝑑𝑠 𝑖𝑛 𝐶𝑜𝑔𝑛𝑖𝑡𝑖𝑣𝑒 𝑆𝑐𝑖𝑒𝑛𝑐𝑒𝑠 article challenges characterizing people as irrational and argues behavioral science aimed at policy should start by assuming people are reasonable.
Traditional models often label deviations from 'perfect rationality' as a seemingly never-ending list of biases. Maybe this is less useful lately? The article gives examples of how what may seem irrational can be an appropriate response to a specific context.
From climate change to COVID-19, they show how assuming people are reasonable shifts the focus. For instance, trust in health authorities correlated with higher vaccine uptake, which makes the behavior appear reasonable.
This reframing encourages participatory methods, turning targets of interventions into partners. Methods like citizens' assemblies and 'nudge plus' highlight the value of engaging those affected by policies.
By recognizing reasonableness, maybe behavioral science can craft more effective, context-aware interventions. What do you think of this argument?
Hello, friends. Currently, I am working on species distribution modelling using MaxEnt. I have run the model using occurrence and climate data from WorldClim. Where can I find the biases introduced during the calibration process by the choice of calibration area (e.g., buffer zones, minimum convex polygons, enclosing rectangles) in my model?
When submitting a manuscript to journals, sometimes we are asked to recommend reviewers. How to choose the reviewers to recommend?
I prefer to maintain integrity, so I do not recommend anyone with whom I have a favorable conflict of interest. On the other hand, what recommendations should we avoid in order to prevent unfavorable bias?
Although the two terms are discussed separately in many textbooks, some other epidemiologists suggest that confounding is a type of bias.
It is worth considering their perspective and exploring the relationship between these concepts. By understanding how confounding and bias are related, we can improve our research methods and draw more accurate conclusions.
In the field of international relations, the need for separation and classification is considered a basic requirement for understanding and assimilating abstract theories and the multiple theoretical approaches used to analyze complex political phenomena. For example, classifying international motives, or the factors influencing international behavior, can provide a useful framework for analyzing states' behavior and decision-making in the international arena.
However, it sometimes appears that this academic necessity for separation and classification can produce counterproductive results, creating gaps in understanding the political situation and in assessing international positions. Although theoretical debates and the classifications that explain international behaviors and actors can provide a framework for analysis, they may constrain engagement with the multiplicity of factors and variables in the international arena.
Thus, the analyst may find himself trapped within a set of terms and concepts that are limited in their ability to explain complex international behavior. Hence, analysts may need to listen openly to various factors, perspectives, and analyses, and to move away from bias toward a particular theoretical framework or specific terminology, in order to understand and assess political dynamics more comprehensively and accurately.
I am struggling to get my work on Fermat's last theorem peer reviewed as it appears to be too simplistic/ not relevant to the mathematical journals I have so far contacted. However, being biased I think it's at least worthy of logical consideration and would appreciate any advice to this end.
For reference:
Abstract
This investigation assumed Fermat’s conjecture to be incorrect, i.e. that his equation has a whole-number solution, to enable consideration of the rationality of the equation’s terms. A first triangle was constructed with sides representing the whole numbers, i.e. rational digits, a, b and c, with perpendicular divisors h1, h2 and h3, and a second, ‘similar’ triangle (with identical angles) with two sides representing the divisors h1 and h2. Logical analysis then showed that the perpendicular divisors are also rational. Hence the two right-angled triangles formed by the divisor h1 in the first triangle can be analysed as Pythagorean triples, since all three sides of each triangle, being rational, can be represented as a fraction p/q of two integers, as long as the denominator q is not zero. Thus, by appropriate multiplication by a combination of all their denominators, the sides of the two right-angled triangles can be transformed into integers of a larger, scaled triangle with the same mathematical properties as the original.
This was further interrogated by the use of a Mathcad computer program to determine a difference ratio, DR, based on variations between the trigonometric functions calculated as per Fermat’s equation and those calculated as Pythagorean triples. As expected, both sets of calculations gave identical results unless the integrity of the latter was maintained by limiting certain internal ratios to a given number of decimal places, thereby ensuring the rationality of their sides. The Fermat set should automatically give a rational-number solution if his conjecture is incorrect as per this supposition, and the DR value should at some point equal zero. However, graphical representation of these calculations shows that DR actually diverges away from zero, for any given set of analyses, with increases in both the Fermat index n and the number of decimal places. Hence, it is concluded that this investigation demonstrates, at least to engineering standards, that Fermat’s last theorem is correct, but also that this methodology could be a possible pathway to Fermat’s claimed ‘marvelous’ proof.
I want to know the diode simulation code for Silvaco TCAD and how to manipulate the bias voltage. Can I get an example?
We have been conducting agroinfiltration experiments in cannabis plants to introduce genes of interest and study their expression and function. Upon analyzing the results, I have noted positive signals in both DNA and RNA analyses, indicating the possible presence of the introduced exogenous genes. However, I am concerned about the potential contribution of the bacterial DNA or RNA itself, carried over from the agroinfiltration process, which might bias or even entirely account for these positive results, leading to false positives.
Are there any specific protocols or molecular analysis techniques that can help mitigate this contamination risk and ensure the reliability of results obtained in these experiments? I welcome any contributions or experiences shared on this matter.
What are some common cognitive biases that affect negotiation outcomes besides the anchoring effect?
Dear ResearchGate Community,
We are currently facing a pivotal stage in revising our manuscript for submission to a prestigious journal in the fields of pharmaceutics and ophthalmology. Our article is a systematic review of observational studies, encompassing diverse study designs such as case-control studies, quasi-experimental studies, case series, and case reports. During the revision stage before peer review, the editor has requested that we provide a risk-of-bias assessment in our manuscript.
We have already conducted a qualitative assessment using JBI Checklists; however, we are unsure how to address the editor's request for a risk of bias assessment specifically tailored to the included article types. Are there specific risk-of-bias tools available for these diverse study designs? How should we approach integrating a risk of bias assessment into our systematic review effectively?
Any insights or guidance on how to respond to the editor's request and incorporate a robust risk of bias assessment into our manuscript would be greatly appreciated.
Thank you for your expertise and assistance.
Hi all,
I am using the RoB2 tool on Excel for the first time today, and have been faced with some issues. When filling in each domain, the algorithm has successfully calculated the domain risk of bias. However, when I go to calculate the overall risk of bias, the algorithm does not do anything. I have made sure that macros are enabled. Any suggestions on how to resolve this issue would be much appreciated.
Best,
Sasha
I have 13 CMIP6 models. In my query I defined the grid labels (gn, gr, gr1), set the member to r1i1p1f1 (since each model has several members, this choice may affect my work), and selected the variables pr, tasmax and tasmin for the historical scenario plus SSP126, SSP245 and SSP585. Regarding CMIP6 processing: downscaling, bias correction, and regridding of the GCM output all matter (based on the literature). My purpose is hydrological impact assessment, so I need scientific concepts, methodologies, and tools from climate experts.
It is well known that c_4 is the bias correction factor for the sample standard deviation and is used to construct control charts. However, why is it called c_4? In addition, who introduced c_4 first?
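I cannot say who coined the name, but the factor itself has a closed form: for a normal sample of size n, E[s] = c4(n)·σ, with c4(n) = sqrt(2/(n-1))·Γ(n/2)/Γ((n-1)/2), so s/c4(n) is an unbiased estimator of σ. A minimal Python sketch (the function name is my own):

```python
import math

def c4(n: int) -> float:
    """Unbiasing constant c4 for the sample standard deviation.

    For a normal sample of size n, E[s] = c4(n) * sigma, so dividing
    s by c4(n) removes the small-sample bias; c4(n) -> 1 as n grows.
    """
    if n < 2:
        raise ValueError("n must be at least 2")
    return math.sqrt(2.0 / (n - 1)) * math.gamma(n / 2.0) / math.gamma((n - 1) / 2.0)
```

For example, c4(5) ≈ 0.9400 and c4(10) ≈ 0.9727, matching the standard control-chart constant tables.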
I'm focusing on bias correction and downscaling of GCM output for the scenarios of the Coupled Model Intercomparison Project Phase 6 (CMIP6) shared socioeconomic pathways (SSPs). I intend to do this for sub-daily rainfall (i.e. 3-hr rainfall). Thus, I'm interested in learning about the basic concepts, methodologies, considerations, and technical approaches (i.e. any programming code or software). Can anyone please help me in this regard? To be honest, I'm a bit new to this field, so some basic concepts would also be very helpful. I intend to work in R, so R code would be better. Which statistical approaches would be better, quantile mapping or SDSM?
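For the quantile-mapping option mentioned above, the core idea is: find the empirical quantile of a model value within the calibration-period model distribution, then read off the observed value at that same quantile. A minimal pure-Python sketch of empirical quantile mapping (function and variable names are my own; real work would use a dedicated package such as the R package qmap, and rainfall needs extra care for wet-day frequency, which this sketch ignores):

```python
import bisect

def empirical_qm(model_hist, obs, model_fut):
    """Empirical quantile mapping: map each future model value to the
    observed value at the same empirical quantile in the calibration period."""
    mh = sorted(model_hist)
    ob = sorted(obs)
    n = len(mh)
    corrected = []
    for x in model_fut:
        # empirical quantile of x within the historical model distribution
        q = bisect.bisect_left(mh, x) / max(n - 1, 1)
        q = min(max(q, 0.0), 1.0)
        # look up the same quantile in the observations (linear interpolation)
        pos = q * (len(ob) - 1)
        lo = int(pos)
        hi = min(lo + 1, len(ob) - 1)
        frac = pos - lo
        corrected.append(ob[lo] * (1 - frac) + ob[hi] * frac)
    return corrected
```

For instance, if the observations are simply the historical model values shifted by +2, a future model value of 5 maps to roughly 7, i.e. the systematic offset is removed.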
This question encourages a thorough examination of factors that could affect the validity of the analytical findings.
Looking for researchers who are willing to work on the discussion section and the risk-of-bias assessment of a systematic review. For more details, please leave a message.
Thanks!
Recently, in my paper, I accidentally wrote something interesting:
"If we make an analogy to human: gpt-3.5-turbo-1106 [chatGPT] on this case specifically did not fall into confirmation bias." Source: https://www.qeios.com/read/Y13B20
Have you ever wondered about the place of biases in artificial intelligence? How many human biases will be passed on to artificial intelligence?
Would AI be immune to biases?
Addressing biases in AI is an ongoing process that requires collaboration, transparency, and a commitment to fairness. By implementing these strategies, developers and organizations can work towards creating AI systems that are more equitable and just. I would like to hear your ideas, please!
Hello,
I'm trying to calculate the heat of reaction from this DSC of PMMA thermal decomposition, but I'm not sure what the straight line before the endothermic decomposition peak means. It looks like a bias accumulating an error between the sample and the reference. The material is a PMMA dental resin containing 1.0% titanium dioxide and 5% of the crosslinking agent ethylene glycol dimethacrylate (EGDMA).
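A sloping pre-peak segment like this is often heat-capacity mismatch between sample and reference (baseline drift) rather than a reaction. A common workaround is to draw a straight baseline between the peak onset and endset and integrate only the signal above it. A minimal stdlib-Python sketch of that procedure (all names and the unit choices are my own assumptions: heat flow in mW, constant scan rate in K/s, mass in mg, linear baseline):

```python
def reaction_enthalpy(temps, heatflow, t_onset, t_endset, heating_rate, mass):
    """Integrate a DSC peak after subtracting a straight baseline drawn
    between the onset and endset temperatures.

    temps        -- temperatures (K), monotonically increasing
    heatflow     -- heat-flow signal (mW), same length as temps
    heating_rate -- scan rate (K/s), converts the temperature axis to time
    mass         -- sample mass (mg); result is in J/g (= mJ/mg)
    """
    pts = [(t, h) for t, h in zip(temps, heatflow) if t_onset <= t <= t_endset]
    (t0, h0), (t1, h1) = pts[0], pts[-1]

    def baseline(t):
        # straight line joining the signal at onset and endset
        return h0 + (h1 - h0) * (t - t0) / (t1 - t0)

    area = 0.0
    for (ta, ha), (tb, hb) in zip(pts, pts[1:]):
        # trapezoidal integration of the baseline-corrected signal
        area += 0.5 * ((ha - baseline(ta)) + (hb - baseline(tb))) * (tb - ta)
    # mW * K / (K/s) = mJ; dividing by mg gives J/g
    return area / heating_rate / mass
```

Instrument software normally offers this (and fancier sigmoidal baselines) built in; the sketch just makes the arithmetic explicit.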
This question is relevant to a wide range of fields, including medicine, epidemiology, and social science. Observational studies are often the only way to study certain research questions, but they can be challenging to analyze due to the potential for confounding bias. New statistical methods are being developed all the time to address these challenges, and I am interested in learning more about the most promising new approaches.
I would expect to receive a variety of answers to this question, reflecting the different areas of expertise of the experts who respond. Some experts might discuss new methods for causal inference, which aim to estimate the effects of treatments as if they had been assigned in a randomized controlled trial. Other experts might discuss new methods for matching or weighting observations, which are designed to reduce the impact of confounding bias.
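As a concrete instance of the weighting methods mentioned above, inverse probability weighting (IPW) reweights each unit by the inverse of its estimated probability of receiving the treatment it actually received, so that the weighted treated and control groups resemble the full sample. A minimal sketch of the Hajek-style IPW estimator of the average treatment effect (the function name is mine; the propensity scores are assumed to come from a separate model, e.g. logistic regression on the confounders):

```python
def ipw_ate(outcomes, treated, propensity):
    """Hajek-style inverse-probability-weighted estimate of the average
    treatment effect (ATE) from observational data.

    outcomes   -- observed outcome for each unit
    treated    -- 1 if the unit received treatment, else 0
    propensity -- estimated P(treated | covariates) for each unit,
                  assumed to be strictly between 0 and 1
    """
    # weight treated units by 1/p and control units by 1/(1-p)
    wt_t = [t / p for t, p in zip(treated, propensity)]
    wt_c = [(1 - t) / (1 - p) for t, p in zip(treated, propensity)]
    # weighted mean outcome under treatment and under control
    mu_t = sum(w * y for w, y in zip(wt_t, outcomes)) / sum(wt_t)
    mu_c = sum(w * y for w, y in zip(wt_c, outcomes)) / sum(wt_c)
    return mu_t - mu_c
```

In practice one would also check covariate balance after weighting and consider doubly robust variants that combine IPW with an outcome model.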
I am confident that this question would generate a lively and informative discussion among experts in the field. I am always eager to learn new things, and I am particularly interested in learning about new statistical methods that have the potential to improve the quality and reliability of observational studies.
If you have any other technical questions or scientific discussion topics that you would like me to explore, please feel free to let me know.
Dear Research Community
I am screening some papers on the basis of Q1/Q2/Q3/Q4 rankings or the ABDC list.
I am sure that I want to include Q1-Q3; however, I am unsure about Q4. Is it scientifically correct to remove articles that have not been cited at least once in the last 13 years? Does this imply they are of poor quality? What about zero citations in the last 2 or 3 years?
I do not want to be biased, so do we have any reference to support this argument?
If we assume a tunneling effect between graphene interlayers, what type would it be: direct tunneling or FN (Fowler-Nordheim) tunneling? If it is direct tunneling, then electron tunneling between the interlayers can be significantly improved with bias voltage.
How do researchers using mixed methods take into account the challenges of researcher bias on results outcomes?
By experimental here I mean a purely laboratory experimental trial, e.g. a sensor is designed in a laboratory and its functionality verified using artificial samples. What risk-of-bias tool can be used for such a study?
I have already collected the data; during analysis, the total variance explained by one factor became greater than 50%. How can I continue?
This question emphasizes the importance of considering the broader implications and risks of AI adoption in research. It encourages researchers to discuss the ethical, legal, and societal implications of AI, including concerns related to algorithmic bias, data privacy, security vulnerabilities, and potential unintended consequences of AI implementation.
Hello guys
I would like to know if there is any tool, like an Excel macro, for evaluating the risk of bias with the Newcastle-Ottawa Scale.
Regards everyone
A reviewer's bias can stand in the way of a publication or of a proposal being funded. That has happened to me a couple of times (with essays and even with grant proposals). The biases of the reviewer can get in the way of genuine progress.
J is a bias correction factor that is used to remove the small-sample-size bias of the standardized difference of means.
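Concretely, for Hedges' g the exact factor is J(df) = Γ(df/2) / (sqrt(df/2)·Γ((df-1)/2)) with df = n1 + n2 - 2 for two independent groups, commonly approximated by J ≈ 1 - 3/(4·df - 1); multiplying Cohen's d by J gives the bias-corrected g. A minimal Python sketch (function names are my own):

```python
import math

def hedges_j(df: float) -> float:
    """Exact small-sample bias-correction factor J (Hedges, 1981).

    Multiplying Cohen's d by J(df) gives the bias-corrected Hedges' g;
    df = n1 + n2 - 2 for two independent groups.
    """
    return math.gamma(df / 2.0) / (math.sqrt(df / 2.0) * math.gamma((df - 1) / 2.0))

def hedges_j_approx(df: float) -> float:
    """Widely used approximation: J ~ 1 - 3 / (4*df - 1)."""
    return 1.0 - 3.0 / (4.0 * df - 1.0)
```

Even at df = 10 the exact value (about 0.923) and the approximation agree to three decimal places, which is why the approximate form appears in most meta-analysis software.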