Science topic

Bias (Epidemiology) - Science topic

Any deviation of results or inferences from the truth, or processes leading to such deviation. Bias can result from several sources: one-sided or systematic variations in measurement from the true value (systematic error); flaws in study design; deviation of inferences, interpretations, or analyses based on flawed data or data collection; etc. There is no sense of prejudice or subjectivity implied in the assessment of bias under these conditions.
Questions related to Bias (Epidemiology)
  • asked a question related to Bias (Epidemiology)
Question
4 answers
As a result of tools such as ChatGPT, Bing, and others, what would be the main risks for democratic systems when using AI like these? Personalized fake news? Perpetuation of biases? Or what other elements?
Relevant answer
Answer
Hello Dr Jonathan Piedra
You raise some very large issues. Noah M. Kenney mentioned another one in what he called "personalised fake news". On one hand, I find that a funny comment; on the other, I can see it is very serious. In fact, we are already starting to see it with personalised advertisements on some websites. That is just a very rudimentary form of AI, but we are on the way.
And, as he said, it encourages people to think less. That is also what fake news does.
  • asked a question related to Bias (Epidemiology)
Question
2 answers
How can one derive a conclusion from a systematic review without bias?
Relevant answer
Answer
Hi Ellen,
I'm sorry, but I can't tell which of these is your question: the title and the body ask different things. I will try to answer the body's question: you can derive a conclusion from a systematic review qualitatively.
  • asked a question related to Bias (Epidemiology)
Question
1 answer
  • Working conditions were simulated by forward biasing the 1 mm² solar cells at the same current level (250 mA, i.e. 25 A/cm²) that they would handle at the operating concentration (i.e. 1000 suns).
Relevant answer
Answer
It is demonstrated that the relationship between current flow and irradiance in a photovoltaic circuit with zero bias voltage can be simulated by the equation,
I = Im·tanh(αH/Im) + β·Im·H + γ·Im,
where Im is the maximum photocurrent, H is the irradiance, and α, β, and γ are constants. This study investigated the relationship between solar radiation (flux) and the current, voltage, and efficiency of a solar panel at a location in Oman. Solar radiation (flux) measurements as well as formal meteorological data were utilized, recorded from the digital instruments used. Analyses were made between solar radiation (flux) and current, voltage, and efficiency.
The results obtained show that there is a direct proportionality between solar flux and output current, as well as between solar flux and the efficiency of the solar panel. This implies that an increase in solar flux leads to an increase in output current, which enhances the efficiency (performance) of the solar panel.
Result
The solar panel was of the crystalline silicon type, with a surface area of 0.19 m² and ratings of 9.0 V and 2.5 A.
Input power was taken as the area of the solar panel × 1000 W/m².
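For readers who want to try the quoted zero-bias relation numerically, here is a minimal Python sketch; the parameter values below are purely hypothetical, and α, β, γ and Im would have to be fitted to measured data:

import numpy as np

def photocurrent(H, I_m, alpha, beta, gamma):
    # Zero-bias photocurrent as a function of irradiance H,
    # following I = Im*tanh(alpha*H/Im) + beta*Im*H + gamma*Im.
    return I_m * np.tanh(alpha * H / I_m) + beta * I_m * H + gamma * I_m

# Hypothetical constants for illustration only; fit them to your own measurements.
H = np.linspace(0, 1000, 5)                     # irradiance in W/m^2
I = photocurrent(H, I_m=2.5, alpha=0.004, beta=1e-4, gamma=0.0)
print(np.round(I, 3))                           # current rises monotonically with irradiance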
  • asked a question related to Bias (Epidemiology)
Question
3 answers
For evaluating the quality of evidence in a systematic review and meta-analysis of animal studies, which methods are best or better indicated: GRADEpro, ARRIVE 2.0, or STAIR 2021? To assess the risk of bias, I proceeded with SYRCLE.
Relevant answer
Risk of bias (quality) assessment of the included articles in a systematic review is different from assessing the overall strength of the body of evidence of the results.
Depending on the study design that is included, there are multiple risk of bias assessment tools, such as the Cochrane RoB tool.
For assessing the overall strength of the body of evidence of the results, there are tools such as GRADE.
  • asked a question related to Bias (Epidemiology)
Question
1 answer
To the question above: I know there is a RoB issue AFTER, but I am not sure if there is one before. Another question: can per-protocol analyses be enough (for a masters-level dissertation) to cover RoB regardless of dropout or not? I really can't get my head around the difference in RoB analyses (per-protocol and intention-to-treat) for my papers; it seems to be the same answer for everything. My tutor told me that per-protocol analysis is fine for all my studies, but I wanted to check this. I'm currently writing a Cochrane-style systematic review and meta-analysis.
Thanks for your support :)
Relevant answer
Answer
I think there are no risk of bias concerns if participants drop out before randomization, as long as more participants are recruited to cover for the dropout.
Best wishes, Victor.
  • asked a question related to Bias (Epidemiology)
Question
1 answer
Suppose I have collected data on the emotional intelligence of secondary school teachers using the Schutte Self-Report Emotional Intelligence Test (SSEIT). I am afraid there may be bias in the respondents' scores. How can I find out, and what methods are there to check for these biases?
Relevant answer
Answer
When the outcome is mainly one-sided and there are not many facts but just opinions, then there is bias.
  • asked a question related to Bias (Epidemiology)
Question
2 answers
Does the Criminal Justice System show bias towards the upper classes?
Relevant answer
Answer
The criminal justice system favours those that are educated and wealthy. Those that are uneducated are unable to grasp the navigation required in any justice system and those that are not wealthy cannot afford representation to navigate the system for them.
  • asked a question related to Bias (Epidemiology)
Question
1 answer
I am interested in predicting the occurrence of different species s {e.g. weeds, fungi, insects} at time t {e.g. month, season, year}. Presence in the t-1 period may increase the probability of presence in period t. The data is crowd-sourced presence-only observations over multiple years, and the user-base increased over time. Due to the presence-only nature, I thought of using maximum entropy. However, I'd like to take into account that the sampling bias changes over time (i.e. the increase in user base).
What ways are there to explicitly consider time-variant sampling bias?
Relevant answer
Answer
A preprint is a version of a scholarly or scientific paper that precedes formal peer review and publication in a peer-reviewed journal and should not be allowed. What's the point of even reading them if they haven't been reviewed?
  • asked a question related to Bias (Epidemiology)
Question
5 answers
Risk of bias assessment (sometimes called "quality assessment" or "critical appraisal") helps to establish the transparency of evidence synthesis results and findings, and it is mandatory to have it in your systematic review!
If you know any tools or have used some, can you please share them with me?
Or if you have extra information regarding risk of bias assessments, can you share it with me?
Relevant answer
Answer
Systematic reviews and meta-analyses are proliferating, as they are an important building block to inform evidence-based guidelines and decision-making. Enforcement of best practice in clinical trials is firmly on the research agenda of good clinical practice, but there is less clarity as to how evidence syntheses that combine these studies can be influenced by bad practice. Our aim was to conduct a living systematic review of articles that highlight flaws in published systematic reviews to formally document and understand these problems...
Many hundreds of articles highlight that there are many flaws in the conduct, methods and reporting of published systematic reviews, despite the existence and frequent application of guidelines. Considering the pivotal role that systematic reviews have in medical decision-making due to having apparently transparent, objective and replicable processes, a failure to appreciate and regulate problems with these highly cited research designs is a threat to credible science...
  • asked a question related to Bias (Epidemiology)
Question
3 answers
After reading some articles and watching videos, I realized I need to draw a funnel plot for the reporting bias assessment and use the GRADE approach for certainty of evidence. But I am not clear on how to plot the funnel plot and apply the GRADE approach.
Relevant answer
Answer
A risk of bias assessment is done to improve the transparency of the evidence synthesis findings. Each study included in the evidence synthesis (systematic review/meta-analysis) is assessed for the possibility of bias either in the design, conduct or analysis of the study. It is done with the help of critical appraisal tools. Numerous critical appraisal tools are available for various study designs.
Some of the available tools:
RCTs: Cochrane’s Risk of bias 2 (RoB 2) tool, Jadad scale
Non-randomized trials: Risk of bias in Non-randomized Studies of Interventions (ROBINS-I tool)
Observational (case-control/cohort studies): Newcastle Ottawa Scale, JBI Checklist for Case-control/Cohort studies
Cross-sectional studies: Newcastle Ottawa scale adapted for cross-sectional study, AXIS tool, JBI Checklist for Prevalence Study
We should be cautious in choosing the right tool for the right design and make sure that it is the latest version of the tool.
For example, Cochrane’s Risk of bias 2 (RoB 2) tool for RCTs has 25 questions divided into five domains.
Based on the response to 25 questions, each study will be classified into one of the three categories: low risk of bias, some concerns and high risk of bias.
This will tell us the quality of each study included in the evidence synthesis.
  • asked a question related to Bias (Epidemiology)
Question
3 answers
1. How does a study at high risk of bias influence the result of a meta-analysis?
2. Should the author exclude studies at high risk of bias, or is there another method to compensate for this problem?
Relevant answer
Answer
In this case, there are too few studies and they are too homogeneous in terms of risk of bias for a meta-regression or subgroup meta-analysis to make sense. If the review concerns a subjective outcome measure, then the lack of blinding might have a large impact on the effect estimates. If it is a more objective outcome, such as a blood sample, then the lack of blinding might not be important.
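As a practical complement to the answer above, one common option is a sensitivity analysis: pool the effect with and without the high-risk study and compare. A minimal Python sketch using inverse-variance (fixed-effect) pooling on made-up effect sizes and standard errors:

import numpy as np

def pooled_effect(effects, ses):
    # Inverse-variance (fixed-effect) pooled estimate and its standard error.
    w = 1.0 / np.asarray(ses) ** 2
    est = np.sum(w * np.asarray(effects)) / np.sum(w)
    return est, np.sqrt(1.0 / np.sum(w))

# Hypothetical studies: the last one is judged to be at high risk of bias.
effects = [0.30, 0.25, 0.40, 0.80]
ses     = [0.10, 0.12, 0.15, 0.20]

print(pooled_effect(effects, ses))                 # all studies
print(pooled_effect(effects[:-1], ses[:-1]))       # excluding the high-risk study

If the pooled estimate changes materially, the high-risk study is driving the result and that should be reported.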
  • asked a question related to Bias (Epidemiology)
Question
2 answers
The data has endogeneity due to omitted variable bias as revealed by Ramsey OV test in Stata
Relevant answer
Answer
Thank you very much dear Dinesh Kumar for your time and such a detailed reply. Your answer is really worthy and very helpful. Regards
  • asked a question related to Bias (Epidemiology)
Question
1 answer
Can anyone help us by sharing bias correction code for wind speed correction?
We have an "observed file and model file" for wind speed data.
Kindly assist me.
Relevant answer
Answer
There is no agreed upon bias correction code for wind speed corrections. Some measures use a LINDZ correction factor, while others use a KOSHER correction factor.
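Independent of which correction factor you adopt, a very simple starting point is linear (mean) scaling against the observations. A minimal Python sketch, assuming the observed and model wind-speed series have already been read from your files into arrays covering the same calibration period (the numbers below are placeholders):

import numpy as np

def linear_scaling(obs_hist, mod_hist, mod_future):
    # Multiplicative linear scaling, often used for wind speed and precipitation:
    # scale the model by the ratio of observed to modelled means in the calibration period.
    factor = np.mean(obs_hist) / np.mean(mod_hist)
    return mod_future * factor

# Hypothetical arrays for illustration; replace with data read from your files.
obs_hist   = np.array([3.2, 4.1, 5.0, 2.8, 3.9])
mod_hist   = np.array([2.5, 3.6, 4.2, 2.1, 3.0])
mod_future = np.array([2.9, 3.3, 4.8])

print(linear_scaling(obs_hist, mod_hist, mod_future))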
  • asked a question related to Bias (Epidemiology)
Question
1 answer
Do reviewers preferentially accept or reject articles based on a number of demographic factors, most especially the authors' country and affiliation?
Relevant answer
Answer
Of course not. If the writing is good, a research gap is filled, methodology is sound, and discussion of findings is appropriate, then it should be approved. My biggest complaint is journal editors having some obscure requirement for publication.
I've seen peers reject research because they felt no gap was filled, despite the literature describing that there was a gap.
I have not yet met any clear bias against researchers' country of origin, but I would not deny the existence of such bias. Researchers are human after all, despite any assumed 'enlightenment.'
  • asked a question related to Bias (Epidemiology)
Question
2 answers
I tried to apply a series of pulse voltages to the gate of a MOSFET, and I used the "solve" statement
with the "sqpulse" and "trans.analy" parameters, like:
"solve vdrain=5
log outf=mos_t.log master
solve trans.analy name=gate sqpulse tdelay=0.05 trise=0.01 PULSE.WIDTH=0.5 tfall=0.01 frequence=0.5"
But no transient solution was obtained. The warning said:
"Warning: No solution present. Performing zero carrier, zero bias
calculation for initial guess."
How can I solve it?
Relevant answer
Answer
Thanks for the question.
Silvaco TCAD (Technology Computer-Aided Design) is a software suite that is used for simulating the behavior of semiconductor devices and integrated circuits. The specific warning message that you are encountering can be caused by a variety of factors, and the exact cause may be difficult to determine without more information about the specific simulation that you are running and the specific warning message that you are seeing.
Some possible causes of warnings in Silvaco TCAD include:
  • Incorrect or inconsistent inputs to the simulation. For example, if the boundary conditions or material properties are not specified correctly, it could lead to a warning.
  • Numerical errors or instability in the solution process. For example, if the simulation is too large, or if the solver is not able to converge on a solution, it could lead to a warning.
  • Missing or outdated libraries or software components.
  • Attempt to use a feature or option that is not compatible with your simulation setup or that has been deprecated.
It's also important to note that Silvaco TCAD could produce a warning message if the solution of the problem is not unique or if the solution is not physical.
It is often helpful to carefully check the input files, settings and input parameters used in the simulation to ensure that everything is specified correctly and that all necessary inputs are present. Also, checking the documentation and the release notes of the specific version of Silvaco TCAD that you are using can help to identify any known issues or updates that may be related to the warning message you are encountering. Consulting with other experienced users or the Silvaco support team might also be helpful, as they could help you to troubleshoot the issue.
It's also recommended to check if you have set the appropriate simulation and solver settings, taking into account your specific problem and the hardware available. This can include adjusting the time step, solver tolerances, or other numerical settings to improve the stability and accuracy of the simulation.
It's important to keep in mind that some warnings may not necessarily indicate a problem with the simulation, but it's always best to investigate any warnings that you encounter to ensure that the simulation is running as expected and the results are accurate.
  • asked a question related to Bias (Epidemiology)
Question
2 answers
Greetings, all researchers! Right now, I'm looking for technical assistance with my research. I've set a goal to assess agroclimatic indicators' current and future impacts on maize crop yields using the impact modeling approach. How can I correct agroclimatic indicator biases in raster data?
Relevant answer
Answer
There are several methods that can be used to bias-correct raster climate data, and the most effective method will depend on the specific dataset and the research question being addressed.
Some of the commonly used methods for bias-correction of raster climate data are:
  • Delta method: This method involves calculating the ratio of the model-simulated change in a variable to the observed change in that variable for each grid point. This ratio is then applied to the model-simulated data to correct for bias.
  • Bias correction factor (BCF) method: This method involves estimating the ratio of the bias-corrected simulation to the original simulation for each grid point. The bias-corrected data is obtained by multiplying the original data by the estimated BCF.
  • Quantile mapping (QM): This method maps the model output to the observational data distribution by matching the cumulative distribution functions of both datasets.
  • Bayesian hierarchical model (BHM): This method allows one to incorporate information from multiple sources, such as observational data, reanalysis data, and model output, to improve the estimation of the bias correction factor.
  • Machine learning (ML) method: This method includes a range of approaches including Neural Networks, Random Forest, etc. These methods have been proposed for bias correction and are being explored as a potential approach, but it is still ongoing research.
It's important to note that, regardless of the method used, bias correction should be done with caution: the performance of the bias-corrected dataset should be evaluated in a robust way through a validation process, and the uncertainty associated with the bias correction should be reported.
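To make the quantile mapping method listed above concrete, here is a minimal empirical sketch in Python; the data are synthetic placeholders, and for real work a tested package and proper cross-validation are advisable:

import numpy as np

def quantile_map(obs_hist, mod_hist, mod_raw):
    # Empirical quantile mapping: find each raw model value's quantile in the
    # historical model distribution and replace it with the observed value at
    # the same quantile.
    quantiles = np.linspace(0.01, 0.99, 99)
    mod_q = np.quantile(mod_hist, quantiles)
    obs_q = np.quantile(obs_hist, quantiles)
    # Interpolate: model value -> quantile position -> observed value
    return np.interp(mod_raw, mod_q, obs_q)

# Hypothetical example
rng = np.random.default_rng(0)
obs_hist = rng.gamma(2.0, 2.0, 1000)     # "observations" over a calibration period
mod_hist = rng.gamma(2.0, 2.5, 1000)     # biased "model" over the same period
mod_fut  = rng.gamma(2.0, 2.5, 200)      # model projection to be corrected
corrected = quantile_map(obs_hist, mod_hist, mod_fut)
print(mod_fut.mean(), corrected.mean(), obs_hist.mean())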
  • asked a question related to Bias (Epidemiology)
Question
1 answer
I have observation data from ERA5, historical data from CMIP6 and want to forecast temperature and Precipitation which are derived from GeoMIP G6Sulfur.
I have to perform bias correction on each grid point for the period 2020 to 2100. I have searched and found some tutorials on bias-correcting time series data, but I need to perform the bias correction on each grid point as well. Can I do it directly on the netCDF data, or is there another way? I need to perform bias correction on each grid point and over the whole time period. Looking forward to your help.
Relevant answer
Answer
Bias correction on a grid point for a long period of time can be a complex task and there are several methods that you can use depending on the specific details of your analysis.
One common approach is to use a bias correction algorithm, such as the delta method, the bias correction factor (BCF) method or the quantile mapping method. These methods are commonly used to correct for systematic errors in climate models, and they can also be applied to other types of datasets.
The delta method (also known as the "delta change factor" or "delta factor" method) involves calculating the ratio of the model-simulated change in a variable to the observed change in that variable for a specific grid point and time period. The BCF method, on the other hand, involves estimating the ratio of the bias-corrected simulation to the original simulation for each grid point and time period.
The quantile mapping method is based on the empirical cumulative distribution function of the model output and the corresponding observational data. It maps the model output to the observational data distribution.
Another approach could be to use a Bayesian hierarchical model, which allows one to incorporate information from multiple sources, such as observational data, reanalysis data, and model output. This approach uses information from all the available data to improve the estimation of the bias correction factor.
It is important to note that, regardless of the method used, bias correction should be done with caution: it is important to evaluate the performance of the bias-corrected dataset in a robust way through a validation process, and to report the uncertainty associated with the bias correction.
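To answer the "each grid point" part concretely: yes, you can work directly on the netCDF files, for example with xarray, looping a correction over the lat/lon dimensions. A minimal sketch; the file names, the variable name "tas", and the dimension order (time, lat, lon) on a common grid are all assumptions to adapt to your data:

import numpy as np
import xarray as xr

# Placeholder file and variable names; adjust to your datasets.
obs  = xr.open_dataset("era5_tas_1995-2014.nc")["tas"]
hist = xr.open_dataset("cmip6_hist_tas_1995-2014.nc")["tas"]
fut  = xr.open_dataset("g6sulfur_tas_2020-2100.nc")["tas"]

corrected = fut.copy(deep=True)

def quantile_map(obs_1d, mod_hist_1d, mod_fut_1d):
    # Empirical quantile mapping applied to one grid cell.
    q = np.linspace(0.01, 0.99, 99)
    return np.interp(mod_fut_1d, np.quantile(mod_hist_1d, q), np.quantile(obs_1d, q))

# Loop over every grid point (slow but explicit; vectorise later if needed).
for i in range(fut.sizes["lat"]):
    for j in range(fut.sizes["lon"]):
        corrected[:, i, j] = quantile_map(
            obs[:, i, j].values, hist[:, i, j].values, fut[:, i, j].values
        )

corrected.to_netcdf("g6sulfur_tas_2020-2100_bias_corrected.nc")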
  • asked a question related to Bias (Epidemiology)
Question
2 answers
Hi,
I am working on a meta-analysis on the volume measurement of pulmonary nodules by automatic software tools. I used I² to calculate heterogeneity between studies, but I was advised to do funnel plots. The studies are either phantom-based (artificial nodules) or coffee-break in vivo studies where the actual volume of the nodule does not change. This is because there is no gold standard for the in vivo volume, since after surgery the nodule is known to shrink.
To my understanding, funnel plots apply to sensitivity/specificity studies, and I am having a hard time understanding what that means in this particular case, since none of the nodules change (i.e., there are no false negatives or true positives for growth).
How else could I check for publication bias?
Relevant answer
Answer
One approach is to create a funnel plot of the diagnostic test accuracy estimates for each study, which can reveal whether small studies with lower diagnostic test accuracy estimates are disproportionately missing from the literature. A funnel plot is a scatter plot of test accuracy against a measure of precision, such as the standard error or 1/square root of sample size. A funnel-shaped asymmetry in the plot may suggest publication bias. Egger's test or the trim-and-fill method can also be used to formally test for funnel plot asymmetry.
Another approach to testing for bias is to use statistical tests, such as the Peters test or Duval and Tweedie's trim-and-fill method. These tests are used to estimate the number of missing studies and to correct for publication bias. They can be applied to both phantom and coffee-break studies.
It's also important to note that even after testing for bias, the risk of bias should be considered when interpreting the results of the meta-analysis. This may be particularly important when the studies are generally small or have a high risk of bias. Additionally, it is important to consider the external validity of the study by assessing the generalizability of the results, how well the sample represents the target population, and how well the index test being evaluated performs in the real world.
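As an illustration of the funnel plot and Egger's regression test mentioned above, a minimal Python sketch with made-up study estimates; for a real meta-analysis a dedicated package (e.g. metafor in R) is usually preferable:

import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt

# Hypothetical per-study effect estimates (e.g. log volume ratio) and standard errors.
effects = np.array([0.10, 0.05, 0.20, 0.15, -0.02, 0.30, 0.08])
ses     = np.array([0.05, 0.04, 0.12, 0.09, 0.03, 0.20, 0.06])

# Funnel plot: effect vs precision (1/SE); asymmetry may suggest publication bias.
plt.scatter(effects, 1.0 / ses)
plt.xlabel("Effect estimate")
plt.ylabel("Precision (1/SE)")
plt.savefig("funnel.png")

# Egger's regression test: regress the standardized effect on precision;
# an intercept significantly different from zero indicates asymmetry.
X = sm.add_constant(1.0 / ses)
fit = sm.OLS(effects / ses, X).fit()
print("Egger intercept:", fit.params[0], "p-value:", fit.pvalues[0])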
  • asked a question related to Bias (Epidemiology)
Question
1 answer
Greetings!
Me and my team are conducting a systematic review with meta-analysis focused on correlation between two variables. The purpose of that work is not focused on any intervention or outcome. Therefore, we do not need to assess risk of bias of e.g. allocation sequence or randomization simply because it does not influence the relationship between the two variables we are trying to analyse. However, there are certain things that could bias this correlation and we would like to incorporate those things into assessing the risk of bias of individual studies. Is it, therefore, methodologically correct to not use RoB2 tool to assess quality of RCTs, but instead use our own developed risk of bias score?
Thank you in advance,
Zbigniew
Relevant answer
Answer
Yes, you can develop your own quality tool for meta-analysis. Meta-analysis is a statistical method for combining the results of multiple studies on a specific topic in order to increase the overall power of the analysis and to identify patterns or trends in the literature. Quality tools are used to assess the quality of the studies included in a meta-analysis, in order to ensure that the studies being included are of high quality and that the results of the meta-analysis are reliable.
When developing your own quality tool, it is important to consider the specific research question and the types of studies that will be included in the meta-analysis. The tool should be designed to assess factors that are relevant to the specific topic and question being investigated. Additionally, the quality tool should be developed in a transparent, consistent and reproducible manner, taking into account the previous studies that have been used to assess the quality of studies in similar meta-analyses.
It is also important to note that established quality assessment tools such as the Cochrane Risk of Bias tool, QUADAS-2, and the Newcastle-Ottawa Scale for observational studies are widely used and have been validated in previous studies, so it is advisable to use them as guidance and to compare their results with those of your own tool.
  • asked a question related to Bias (Epidemiology)
Question
2 answers
Hi all,
UKAS have asked me to include bias into my uncertainty calculation to have an expanded uncertainty.
'Consider bias in the calculation for each accredited test and provide the data, calculation
and calculated uncertainties as evidence'
I ran 20 control samples for a moisture that I used to set up my QC charts. I obtained the following:
Average 9.94%
Stdev: 0.31
RSD%: 3.08%
We were reporting uncertainty with k = 2 so that RSD% x 2 = 6.16 uncertainty.
UKAS were happy with this. However they wanted us to add bias from our PT data and create an expanded uncertainty. Reading different articles has left me very confused on how to do this.
Past PT data:
I calculated bias for each round of moisture sample as follows:
Bias = (obtained result - expected result)/ expected result
-4.7936
-3.8741
-2.3148
-4.2735
1.4689
0.0000
-6.474
-5.1151
I read that you then have to get the RMS bias for the PT data, which gave me 4.06.
How do you now get the expanded uncertainty using my 20 moisture runs and including the bias from my PT?
Thank you for your help.
Relevant answer
Answer
Hi, thank you for the reply. I followed the article:
Our audit finding was cleared.
Thanks for your time.
Kind regards,
Robert
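For others who land on this thread: one widely used recipe (for example, the Nordtest TR 537 style approach) combines the within-lab spread and the bias contribution in quadrature before applying the coverage factor. A minimal Python sketch with the numbers quoted in the question, treating all figures as relative values in % and ignoring the uncertainty of the PT assigned values, which a full treatment should include:

import numpy as np

rsd_repeatability = 3.08          # % RSD from the 20 control samples
pt_bias = np.array([-4.7936, -3.8741, -2.3148, -4.2735,
                    1.4689, 0.0000, -6.474, -5.1151])   # % bias per PT round

u_bias = np.sqrt(np.mean(pt_bias ** 2))        # RMS bias, about 4.06 %
u_combined = np.sqrt(rsd_repeatability ** 2 + u_bias ** 2)
U_expanded = 2 * u_combined                    # coverage factor k = 2

print(f"u(bias) = {u_bias:.2f} %, u_c = {u_combined:.2f} %, U (k=2) = {U_expanded:.2f} %")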
  • asked a question related to Bias (Epidemiology)
Question
2 answers
Fama and French calculated value-weighted portfolio returns when framing the SMB, HML and WML factors. But why not equal-weighted portfolio returns? I think an equal-weighted portfolio has more diversification compared to a value-weighted portfolio, because a value-weighted portfolio is biased by giving more weight to large-cap companies, so the mean returns are mostly driven by large caps. I also found one study which found that equal-weighted portfolios outperform price-weighted or value-weighted portfolios. So can someone enlighten me as to which one to use while framing these factors?
Relevant answer
Answer
It is generally believed that market capitalization-weighted portfolios, also known as cap-weighted portfolios, tend to outperform equally-weighted portfolios over the long term. This is because cap-weighted portfolios reflect the underlying market, whereas equally-weighted portfolios do not.
Market capitalization-weighted portfolios are constructed by allocating a larger portion of the portfolio to companies with a higher market capitalization (market cap). Market cap is a measure of the size of a company and is calculated by multiplying the company's stock price by the number of outstanding shares. Companies with a higher market cap are generally more established and financially stable than those with a lower market cap.
On the other hand, equally-weighted portfolios are constructed by allocating the same amount of capital to each company in the portfolio, regardless of the company's market cap. This means that smaller companies and larger companies are given the same weight in the portfolio.
One of the main arguments in favor of cap-weighted portfolios is that they tend to outperform equally-weighted portfolios over the long term. This is because they are more closely aligned with the underlying market and tend to benefit from the long-term growth of larger, more established companies. In contrast, equally-weighted portfolios may underperform due to the higher volatility and lower liquidity of smaller companies.
However, it is important to note that different investment strategies may be appropriate for different investors, depending on their individual risk tolerance and investment objectives. Before making any investment decisions, it is advisable to consult with a financial advisor or professional to determine the best strategy for your specific needs.
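To make the two weighting schemes concrete, a small Python sketch with hypothetical market caps and one-period returns:

import numpy as np

# Hypothetical market caps (in billions) and one-period returns for five stocks.
caps    = np.array([500.0, 120.0, 30.0, 8.0, 2.0])
returns = np.array([0.010, 0.020, -0.050, 0.080, 0.150])

vw_return = np.sum(caps / caps.sum() * returns)   # value (cap) weighted
ew_return = returns.mean()                        # equally weighted

print(f"Value-weighted: {vw_return:.4f}, Equal-weighted: {ew_return:.4f}")
# The value-weighted return is dominated by the large caps, while the
# equal-weighted return gives small caps the same influence.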
  • asked a question related to Bias (Epidemiology)
Question
4 answers
In research, we usually worry about the endogeneity problem arising from reverse causality, as it leads to biases in the estimates. So, researchers usually use lagged values of the endogenous covariate or use instruments to deal with the problem. However, these approaches are also not without problems. So, why is reverse causality as such a concern in research?
Relevant answer
Answer
I will answer my question as follows:
If we are sure that there is a first-round effect, then reverse causality might not be a research concern if our goal is to estimate the total effect. However, there is no mechanism to check the presence or otherwise of a first-round effect unless we control for the reverse causality. Hence, regardless of whether we want to estimate the first-round or the total effect, we need to control for the reverse causality in our model.
  • asked a question related to Bias (Epidemiology)
Question
3 answers
I am currently carrying out a meta-analysis on metabolite levels in cases vs controls, and I am trying to understand which risk of bias assessment to use. Cochrane suggests ROBINS; however, this is for interventions (which I am not looking at). Others suggest the NOS; however, I have seen that it has been criticised. I wondered if anyone could give me some advice on which one to use?
Relevant answer
Answer
  • asked a question related to Bias (Epidemiology)
Question
3 answers
Being a PhD student, I need suggestions from experts: please guide me on what innovative or challenging work I can do on bias in data-driven AI. It may be a research gap.
Relevant answer
Answer
Thanks for the kind response. My research is more towards data bias in AI.
  • asked a question related to Bias (Epidemiology)
Question
1 answer
I do experiments on lipid droplets for a project, and it's not my area of expertise. For my experiments, I use plastic Falcon tubes, plastic pipette tips (cones), etc. My colleague says that lipid droplets adhere to plastic and that using plastic instruments is a mistake; I should only use glass. I couldn't find any articles discussing this topic. Could you confirm that the use of plastic is a bias and that lipid droplets can adhere to plastic? Thank you for your answers.
Relevant answer
Answer
It makes sense that lipid droplets will adhere to plastic much more than to glass, as long as the plastic is of a non-polar type like polyethylene, polypropylene, polystyrene... (A polar plastic like polyimide or maybe even nylon-6 may be more similar to glass.)
You can measure contact angles to verify your colleague's opinion. Lipids should spread more (have a smaller contact angle) on non-polar plastics than on glass. A qualitative experiment can be done even without contact angle measurement: just measure the diameter of lipid droplets of the same volume on the different surfaces. The droplets will spread more on the plastic surface. One indirect way to experience this phenomenon is - funny enough - while washing dishes: It takes considerably longer to wash off cooking oil from a polypropylene plate than from a glass or ceramic one...
  • asked a question related to Bias (Epidemiology)
Question
2 answers
Research studies including randomized controlled trials often have a time-to-event outcome as the primary outcome of interest, although competing events can precede the event of interest and thus may prevent the primary outcome from occurring - for example mortality may prevent observing cancer recurrence or may preclude need for reoperation in patients who undergo surgical repair of heart valves. Researchers often use Kaplan-Meier survival curves or the Cox proportional hazards regression model to estimate survival in the presence of censoring. These models can provide biased estimates (usually upward) of the incidence of the primary outcome over time and therefore other models which address competing risks, such as the Fine-Gray subdistribution hazards model, may be more suitable for estimating the absolute incidence of the primary outcome as well as the relative effect of treatment on the cumulative incidence function (CIF). My question is whether the Nelson-Aalen estimator is a reasonable option for estimating the hazard function and the cumulative incidence of the outcome of interest in the scenario of competing risks and if so, why is this a preferred approach over the Kaplan-Meier estimator?
Relevant answer
Answer
Google search for this title and I hope it helps you. It's in the attachment. Best wishes David Booth
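In case the attachment is not accessible, here is a minimal hand-rolled Python sketch of the two estimators being compared, on toy data with event codes 0 = censored, 1 = event of interest, 2 = competing event; real analyses should use a dedicated package (e.g. lifelines in Python or cmprsk in R):

import numpy as np

time  = np.array([2, 3, 3, 5, 6, 7, 8, 9, 11, 12])
event = np.array([1, 0, 2, 1, 1, 0, 2, 1, 0, 1])

order = np.argsort(time)
time, event = time[order], event[order]

n_at_risk = len(time)
surv = 1.0              # overall (all-cause) Kaplan-Meier survival just before t
H_na = 0.0              # Nelson-Aalen cumulative hazard for cause 1
cif1 = 0.0              # Aalen-Johansen cumulative incidence for cause 1

for t in np.unique(time):
    at_t = time == t
    d1 = np.sum(at_t & (event == 1))       # cause-1 events at t
    d_all = np.sum(at_t & (event > 0))     # events from any cause at t
    H_na += d1 / n_at_risk
    cif1 += surv * d1 / n_at_risk          # weights by all-cause survival, unlike 1 - KM
    surv *= 1.0 - d_all / n_at_risk
    n_at_risk -= np.sum(at_t)              # remove events and censorings at t

print(f"Nelson-Aalen H_1 = {H_na:.3f}, Aalen-Johansen CIF_1 = {cif1:.3f}")
# Treating competing events as censoring and using 1 - KM would overestimate CIF_1.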
  • asked a question related to Bias (Epidemiology)
Question
4 answers
During the lecture, the lecturer mentioned the frequentist properties, as follows:
Unbiasedness is only one of the frequentist properties — arguably, the most compelling from a frequentist perspective and possibly one of the easiest to verify empirically (and, often, analytically).
There are however many others, including:
1. Bias-variance trade-off: we would consider as optimal an estimator with little (or no) bias, but we would also value ones with small variance (i.e. more precision in the estimate). So when choosing between two estimators, we may prefer one with very little bias and small variance to one that is unbiased but with large variance;
2. Consistency: we would like an estimator to become more and more precise and less and less biased as we collect more data (technically, when n → ∞).
3. Efficiency: as the sample size increases indefinitely (n → ∞), we expect an estimator to become increasingly precise (i.e. its variance to reduce to 0, in the limit).
Why does the frequentist approach have these kinds of properties, and can we prove them? I think these properties can be applied to many other statistical approaches.
Relevant answer
Answer
Sorry, Jianhing, but I think you have misunderstood something in the lecture. Frequentist statistics is based on an interpretation of probability as something assigned on the basis of many random experiments.
In this setting, one designs functions of the data (also called statistics) which estimate certain quantities from the data. For example, the probability p of a coin landing heads is estimated from n independent trials with the same coin by just counting the fraction of heads. This is then an estimator for the parameter p.
Each estimator should have desirable properties, such as unbiasedness, consistency, efficiency, low variance and so on. Not every estimator has these properties, but in principle one can prove whether a given estimator has them.
So, it is not a characteristic of frequentist statistics as such, but a property of an individual estimator based on frequentist statistics.
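A small simulation illustrates the point for a concrete pair of estimators: the maximum-likelihood variance estimator (dividing by n) is biased but has smaller variance than the unbiased estimator (dividing by n - 1), and both improve as n grows, which is the consistency idea. A sketch in Python:

import numpy as np

rng = np.random.default_rng(42)
true_var = 4.0

for n in (5, 50, 500):
    samples = rng.normal(0.0, np.sqrt(true_var), size=(20000, n))
    var_mle = samples.var(axis=1, ddof=0)        # divide by n (biased)
    var_unb = samples.var(axis=1, ddof=1)        # divide by n-1 (unbiased)
    print(f"n={n:4d}  MLE: mean={var_mle.mean():.3f}, var={var_mle.var():.3f}  "
          f"Unbiased: mean={var_unb.mean():.3f}, var={var_unb.var():.3f}")
# The MLE's mean sits below 4.0 (bias) but its spread is smaller; both converge as n grows.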
  • asked a question related to Bias (Epidemiology)
Question
5 answers
I am looking for publications with specific error metrics (BIAS, MAE, RMSE etc.) for allometric tree volume equations for European tree species. Preferably in comparison to destructive measurements. If anyone knows of such publications, please leave a link. Many thanks in advance.
Relevant answer
Answer
Hi, our research group recently published these works on biomass estimation and allometric equations:
Hope they are useful,
Best regards
  • asked a question related to Bias (Epidemiology)
Question
3 answers
Hi mates, I'm looking for a risk of bias tool that can be used in interventional studies (either RCT or non-RCT).
I'm doing a meta-analysis about exercise and inflammatory markers in the elderly. I've got some RCT studies and some non-RCT studies; what tool do you advise?
Thanks for any help, I'm a newbie. Best regards, Luís Silva.
Relevant answer
Answer
Please check this article; it is very helpful
  • asked a question related to Bias (Epidemiology)
Question
3 answers
We are doing a systematic review and the majority of the studies meeting our inclusion criteria are case series. We are unable to find any risk of bias assessment tool specifically for case series. Can you suggest any risk of bias tool for case series?
Thank you
Relevant answer
Answer
Risk of bias assessment for case-series? (researchgate.net)
  • asked a question related to Bias (Epidemiology)
Question
2 answers
Dear colleagues,
My research group and I are conducting a systematic prevalence review, and we are having difficulty understanding the tools for assessing the risk of bias.
Which would be more suitable to fulfill this role - ROBINS-I or ROBINS-E?
Relevant answer
Answer
  • asked a question related to Bias (Epidemiology)
Question
4 answers
I would like to seek your knowledge about how to reply to a biased reviewer.
Also, sometimes the author feels that the reviewer did not catch the main idea of the research; how should one respond to them?
Relevant answer
Answer
How to respond to biased reviews (linkedin.com)
  • asked a question related to Bias (Epidemiology)
Question
2 answers
A patient is taken off a treatment because the outcome value of interest dropped below value B. For whatever reason the exact outcome value is missing. I need to impute it to avoid bias and to reduce my confidence intervals.
Is multiple imputation something I can use and if yes, how should I adjust it? This is obviously Missing Not At Random. If not multiple imputation, what else can I do? Is there a standard approach? Non–random attrition should be a very common thing in RCTs.
Relevant answer
Answer
I don't know much about imputation but it seems to me that MI only gives the same result if you impute values with the same mean as the mean of the observed values. That would probably not be the case if you impute values that are all lower than that value B. From how I understand the question, it may even be the case that all reported values are higher than that value B.
Question remains what should you impute?
Irina Titova: do you have ideas on the distribution of those missing values?
If you only know that it is lower than B, then maybe you should analyze two scenarios, one where imputed values are drawn from a distribution that centers toward B and another from a distribution that centers toward A.
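To make the two-scenario idea concrete, here is a minimal Python sketch of a scenario-based (delta-adjustment style) sensitivity analysis; all numbers and distributions are hypothetical assumptions, and this is not a substitute for a principled MNAR analysis:

import numpy as np

rng = np.random.default_rng(1)

B = 50.0                                      # treatment-stopping threshold
observed = rng.normal(70, 10, size=80)        # hypothetical observed outcomes
n_missing = 20                                # patients taken off treatment, value unrecorded

# Scenario 1: missing values assumed to lie just below the threshold B.
imputed_near_B = rng.uniform(B - 5, B, size=n_missing)
# Scenario 2: missing values assumed to lie anywhere between a floor A and B.
A = 20.0
imputed_low = rng.uniform(A, B, size=n_missing)

for label, imp in [("near B", imputed_near_B), ("uniform A..B", imputed_low)]:
    full = np.concatenate([observed, imp])
    print(f"Scenario {label:12s}: mean = {full.mean():.1f}")
# If conclusions differ materially between scenarios, the result is sensitive to the MNAR assumption.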
  • asked a question related to Bias (Epidemiology)
Question
20 answers
Dear Respected Researchers,
A few months ago, I saw an article published in a very good journal (IF ~ 14), and this article used a dataset that is available from a well-known data source. After reading the article, I found errors in the dataset; however, the authors drew some conclusions based on this biased data. Recently, I found another article based on the same biased dataset, and this new article cited the previous one. It seems that researchers are using this biased dataset without verification. I am afraid that the dataset may be used by many other researchers. So, my question is: should I write a letter to the editor or a comment on those published articles and submit it to the journals for consideration, or should I ignore those articles?
I am confused because the authors may think that I am targeting them if I write a letter to the editor or comment on those articles; however, my intention is to highlight errors in the dataset so that other researchers are careful before using it.
What do you suggest? Your suggestions will be helpful for me.
I look forward to hearing from you.
Relevant answer
Answer
Science thrives on criticism and corrections. I would suggest writing to the authors first and politely pointing out the erroneous data. If they do not respond, you can write to the editor. See also this discussion: https://www.researchgate.net/post/Have_you_encountred_a_research_containing_a_scientific_error_and_the_same_author_published_another_article_in_which_he_corrects_his_previous_mistake/1
  • asked a question related to Bias (Epidemiology)
Question
3 answers
Which risk of bias tool can we use for a qualitative systematic review that contains either cross-sectional or case-control questionnaires?
Relevant answer
Answer
Hello
I recommend the Joanna Briggs Institute critical appraisal tools. Please see the link below:
  • asked a question related to Bias (Epidemiology)
Question
8 answers
To whoever may read this question:
Data extraction (data abstracting) is a key step in writing umbrella reviews. But is there any method to confirm the validity of this step? How should we ensure that this step is done correctly?
  • asked a question related to Bias (Epidemiology)
Question
1 answer
I=II %input data
T=TT %target data
net=newff(minmax(I),[1,5,1],{'logsig','tansig','purelin'},'trainLM');%Logisig,tansig,purelin are the activation function, trainlm is for training neural network
net = init(net); % Used to initialize the network (weight and biases)
net.trainParam.show =1; % The result of error (mse) is shown at each iteration (epoch)
net.trainParam.epochs = 1000; % Maximum limit of the network training iteration process (epoch)
net.trainParam.goal =1e-12; % Stopping criterion based on error (mse) goal
net=train(net,I,T)
ERROR
Error using network/train (line 340)
Output data size does not match net.outputs{3}.size.
Relevant answer
Answer
Make the number of nodes in the output layer equal to the number of rows in the target matrix T; for example, change the layer-size vector in the newff call from [1,5,1] to [1,5,size(T,1)].
  • asked a question related to Bias (Epidemiology)
Question
1 answer
We are using the Elementar analyser for the carbon and nitrogen content of plant, soil and fertiliser samples. The carbon shows a low bias (the factor is ~0.89...) when it should be 0.9 to 1.1, and the nitrogen shows a high bias.
Is there any particular reason for this? Could it be a calibration issue or a maintenance problem?
Thanks
Regards
Adiel
Relevant answer
Answer
Low recoveries are typically due to too low a combustion temperature, but high recoveries must be a calibration issue.
  • asked a question related to Bias (Epidemiology)
Question
2 answers
It is a scientific/philosophical question: how reliable is a paper review? To what extent is the reviewer's scientific mind influenced by non-scientific issues, such as prejudices against rival researchers or research groups, biased affiliations and institutes, and particular nationalities? So let me know your opinion:
Q 1: How much do you think the reviewers of scientific papers written by non-famous people have made up their minds about rejecting them before reading the papers?
1- 100% 2- 75% 3- 50% 4- 25% 5- 0%
Relevant answer
Answer
2- 75%
  • asked a question related to Bias (Epidemiology)
Question
5 answers
Regression and matching are the most common econometric tools used by scholars. In the case of regression, regression always calculates correlations, but such correlations can also be interpreted as causation when certain requirements are satisfied. As Pearl says, "'Correlation does not imply causation' should give way to 'Some correlations do imply causation.'"
One of the most critical assumptions for making causal inferences in observational studies is that (conditional on a set of variables) the treatment and control groups are (conditional) exchangeable. Confounding and selection bias are two forms of lack of exchangeability between the treated and the untreated. Confounding is a bias resulting from the presence of common causes of treatment and outcome, often viewed as the typical shortcoming of observational studies; whereas selection bias occurs when conditioning on the common effect of treatment and outcome, and can occur in both observational studies and randomized trials.
In econometrics, the definitions of confounding and selection bias are not very clear. The so-called omitted variable bias (also known as selection bias, as distinct from the selection bias we mentioned above) in econometrics, in my opinion, refers to bias due to confounding. In a simple regression model Y = a + bX + ε, we say there is omitted variable bias when the residual term is correlated with the independent variable, that is, when the regression model omits variables related to the independent variable that may affect Y. In other words, the omitted variable is correlated with 1) the independent variable and 2) the outcome variable. By the above definition, the common effects of X and Y should also be controlled for, and such control is known to lead to another type of bias: selection bias. Angrist addresses this issue in his book, saying: "There's a second, more subtle, confounding force here: bad controls create selection bias ..., the moral of the bad control story is that timing matters. Variables measured before the treatment variable was determined are generally good controls, because they can't be changed by the treatment. By contrast, control variables that are measured later may have been determined in part by the treatment, in which case they aren't controls at all, they are outcomes." Now we know that variables measured before the treatment variable is determined are not necessarily good control variables (consider M-bias). The econometric definition is confusing, and it seems to me that omitted variable bias should be distinguished from selection bias; omitted variable bias should be defined as arising when a variable in the residual that causes Y also causes X.
Because of how it is usually presented, omitted variable bias is often mistaken for the omission of variables associated with Y. We often see articles with statements such as "To mitigate the omitted variable bias of the model, we also control for .....", followed by a long list of variables that (may) have an effect on Y. However, adding a series of control variables to the regression model may not help our assessment of causal effects, and may even amplify the bias. Including control variables without consideration can mean conditioning on a collider, which opens a back-door path that was blocked when the collider was not conditioned on. Therefore, when using regression for causal inference, what we have to do is pick a set of variables based on a reliable causal diagram.
I believe that simple regression methods are not worthless for causal inference; what we need to do is scrutinise our assumptions before using regression (using causal diagrams to choose the control variables that block back-door paths is a good way), increase the transparency of our research, and show the reader what assumptions our results are based on and to what extent these assumptions are reliable. Of course, no matter how much effort we put into proving that our conclusions are reliable, there is still the inevitable threat of unobservable confounding in studies based on observational data, and regression methods that cope with this by adding control variables can only address observable confounding. However, you cannot dismiss a method if you cannot clearly identify where these threats are coming from. As Robins says, such a critic is not making a scientific statement, but a logical one.
These are just some of my personal views from the study, all comments are welcomed!
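A quick simulation makes the "bad control" point above concrete: conditioning on a collider (a common effect of X and Y) distorts an otherwise clean regression estimate. A minimal Python sketch with synthetic data, where the true effect of X on Y is 1.0:

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

x = rng.normal(size=n)
y = 1.0 * x + rng.normal(size=n)          # true causal effect of x on y is 1.0
c = x + y + rng.normal(size=n)            # collider: caused by both x and y

# OLS of y on x alone recovers roughly 1.0.
b_simple = np.linalg.lstsq(np.column_stack([np.ones(n), x]), y, rcond=None)[0][1]
# OLS of y on x and the collider c gives a biased coefficient on x.
b_collider = np.linalg.lstsq(np.column_stack([np.ones(n), x, c]), y, rcond=None)[0][1]

print(f"Without collider control: {b_simple:.3f}; with collider control: {b_collider:.3f}")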
References.
Pearl, J., & Mackenzie, D. (2018). The book of why: the new science of cause and effect. basic books.
Hernán, M. A., & Robins, J. M. (2010). Causal inference.
Angrist, J. D., & Pischke, J. S. (2014). Mastering' metrics: The path from cause to effect. princeton university press.
Relevant answer
Answer
Dear Chao,
Thank you for the information.
  • asked a question related to Bias (Epidemiology)
Question
2 answers
Hello
Can anybody share Bias correction codes in R please?
Relevant answer
Answer
I have downloaded precipitation data for 6 models from CMIP6, and now I want to bias-correct these 6 models according to existing methods. Does anyone know how to guide me? Is there any special software, Excel file or code for doing bias correction of precipitation data? Please provide it to me.
  • asked a question related to Bias (Epidemiology)
Question
2 answers
For external validation of a diagnostic accuracy study, is a prospective study superior to a retrospective study? What is the reason for the superiority of a prospective design? Avoidance of recall bias, or avoidance of sampling bias?
Relevant answer
Answer
Prospective studies of diagnostic test accuracy have important advantages over retrospective designs.
Prospective studies of diagnostic test accuracy when disease prevalence is low - PubMed (nih.gov)
  • asked a question related to Bias (Epidemiology)
Question
5 answers
Mixed Methods Study
Relevant answer
Thank you for sharing the good question and answers.
  • asked a question related to Bias (Epidemiology)
Question
4 answers
More to behavioural biases
Relevant answer
Answer
How compensation/salary/reward, or "money", is a driving factor influencing risk-taking by individuals (astronaut, MMA boxer) in their jobs.
How money could be an influencer in breaking relationships: an exploratory study.
  • asked a question related to Bias (Epidemiology)
Question
5 answers
I am interested in doing research in the field of behavioural finance, but I am unable to select the cognitive biases that would form the basis of the research.
Kindly help.
Relevant answer
Answer
I concur with the above suggestion by Shay. Furthermore, you can check Edwards, W. (1968). Conservatism in human information processing; and De Bondt, W. F. M., And Thaler, R. (1985). Does the stock market overreact?. In these articles, the authors have discussed the details of conservatism; underreaction, and overreaction to market movement respectively, which are also relevant concepts under cognitive biases.
  • asked a question related to Bias (Epidemiology)
Question
3 answers
I am writing a systematic review where I will be including RCTs and also randomized trials without a control group. Which risk of bias tool would be best to use for both, please?
Relevant answer
Answer
Gracie Pretty, it is standard to use Version 2 of the Cochrane risk-of-bias tool for randomized trials (RoB 2), and it is also expected for PRISMA reporting.
  • asked a question related to Bias (Epidemiology)
Question
1 answer
I want to collect data from Spanish and UK hotel employees. I am concerned about possible survey response bias resulting from nationality of employees. Are there any questions I should insert in the questionnaire to control for this?
Relevant answer
Answer
I think that the simplest way is just to ask them what their nationality is. During the data analysis, you can perform a chi-squared test or a t-test (depending on what your dependent variable is) and see whether nationality is correlated with the survey answers. A more complex option would be to think about hypothesized reasons for differences between nationalities: what are the cultural differences between those societies? Then add a relevant questionnaire that measures those aspects in your participants.
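As a concrete illustration of the chi-squared check suggested above, a minimal Python sketch with a hypothetical contingency table of nationality against a categorical survey answer:

import numpy as np
from scipy.stats import chi2_contingency

# Rows: Spanish vs UK employees; columns: answer categories (e.g. low / medium / high).
table = np.array([
    [30, 45, 25],   # Spanish
    [20, 40, 40],   # UK
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
# A small p-value would suggest answers are associated with nationality,
# i.e. a possible nationality-related response bias to control for.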
  • asked a question related to Bias (Epidemiology)
Question
1 answer
One way from the literature is testing the VIF; a value lower than 3.3 is OK.
Is there another way to test for common method bias in SmartPLS?
Thanks.
Relevant answer
Answer
You can reduce CMB at both the design stage and the analysis stage. You may refer to this article on available strategies to reduce CMB.
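As an illustration of the full-collinearity VIF heuristic mentioned in the question (values below roughly 3.3 are commonly taken as acceptable), a minimal Python sketch using statsmodels on hypothetical construct scores; this mirrors the idea outside SmartPLS rather than reproducing its exact procedure:

import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

# Hypothetical construct scores; replace with your own latent variable scores.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "job_sat": rng.normal(size=200),
    "commitment": rng.normal(size=200),
    "performance": rng.normal(size=200),
})
X = add_constant(df)

for i, col in enumerate(X.columns):
    if col != "const":
        print(col, round(variance_inflation_factor(X.values, i), 2))
# VIFs well below 3.3 suggest common method bias is less of a concern, by this heuristic.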
  • asked a question related to Bias (Epidemiology)
Question
2 answers
I did a Bland-Altman test to assess the agreement between two methods of measuring height in children under 5 years (a manual height board and a 3D imaging phone). Results from the analysis show:
Bias= 0.5596
SD of Bias =3.535
95% lower limit of agreement = -6.369
95% upper limit of agreement = 7.488
How do I interpret these findings? Is there an agreement or not?
Relevant answer
Answer
If you're looking for agreement/reliability, maybe the ICC would be more helpful.
Passing-Bablok regression would also help in knowing whether constant and proportional bias are present.
The Bland-Altman interpretation has important assumptions that need to be met (e.g. constant bias); there are many articles on the interpretation of BA analysis.
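For completeness, a minimal Python sketch of how the quoted Bland-Altman quantities are obtained from paired measurements; the data below are hypothetical:

import numpy as np

# Hypothetical paired height measurements (cm): height board vs 3D imaging phone.
board = np.array([85.2, 92.1, 78.4, 101.3, 95.0, 88.7])
phone = np.array([86.0, 91.0, 80.1, 100.2, 96.5, 87.9])

diff = phone - board
bias = diff.mean()                     # mean difference between the two methods
sd = diff.std(ddof=1)                  # SD of the differences
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd

print(f"Bias = {bias:.3f}, SD = {sd:.3f}, 95% LoA = [{loa_low:.3f}, {loa_high:.3f}]")
# Agreement is then judged against a pre-specified clinically acceptable difference.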
  • asked a question related to Bias (Epidemiology)
Question
5 answers
When DC voltage bias is very high, plasma intensity is very low, and vice versa.
I'm not able to get deposition.
Relevant answer
Answer
Hi Nitesh Singh. I am not as expert as Sir Jürgen Weippert, but since I am recently going through these types of problems, maybe we can solve the problem together.
For the deposition, stable plasma is necessary. The plasma stability depends on the bias voltage, but along with it, it also depends on the proper flow of gas. Therefore check if the gas flow is stable or not.
As recommended by the Engineers of our system, you must check the connections not only outside but also in the system. If possible, check for grounding also. Is it proper or not?
As already stated by Jürgen Weippert Sir, the matching network only fixes the reflected power (in our case also). The DC supply adjusts itself while obtaining plasma. Therefore, you perform the matching for low reflected power only and try to get stable plasma.
  • asked a question related to Bias (Epidemiology)
Question
1 answer
Can we ask the questions related to the independent variables to an employee and the dependent-variable (employee performance) questions to the employer? An employee answering about his own performance will give a self-biased response. If there are any papers published with two different sets of respondents for the same conceptual model, please post the URL for my reference.
Relevant answer
Answer
You can get them in the link mentioned below
48+ SAMPLE Employee Questionnaires in PDF | MS Word | Excel
  • asked a question related to Bias (Epidemiology)
Question
2 answers
Hi, hope you are well.
I am doing a literature review on the use of VR simulation in diagnostic imaging, and 3 of my studies have the same lead author, as they created the software and use an updated version of it in each study. In 2 of their studies, they compare VR with role playing or a computer programme, but I was not sure whether this introduces bias or more credibility, as they have a better understanding of the software. In addition, 2 of the studies used the same participants. I was wondering if you can give me some guidance, thank you.
Relevant answer
Answer
Having multiple studies from the same group is not a problem per se. You can, for example, use a random effect that accounts for authors publishing multiple studies. However, if the same patients have been used in these studies, this does not work properly: including these studies in the same meta-analysis will overestimate their influence. In this case, you could simply randomly pick one of these studies for your main model. You could cover these studies in the systematic review without including all of them in the meta-analysis.
You could see the impact of this random choice in your sensitivity analysis (how your final pooled effect is impacted by this choice to see if these studies are influencing your results or not).
My 2 cents...
  • asked a question related to Bias (Epidemiology)
Question
4 answers
I've been looking around for a risk of bias assessment tool specifically made for observational studies. Most references suggest using Cochrane's ROBINS, but it doesn't seem sound enough to use for observational studies (especially since the tool itself states it is for non-randomized interventions), and I'm also not comfortable with the comparability implication of using it with observational studies.
Do you have any other tools to suggest, or would you suggest using ROBINS (or perhaps a modified version)?
Thank you
Relevant answer
Answer
These assessment tools may be useful:
3. Checklist for quasi-experimental studies (non-randomized)
4. Methodological quality (risk of bias) assessment tools for primary and secondary medical studies: what are they and which is better?
  • asked a question related to Bias (Epidemiology)
Question
5 answers
How does inadequate metaphysical knowledge lead to methodological bias?
Relevant answer
Answer
Yes, it can.
  • asked a question related to Bias (Epidemiology)
Question
3 answers
Can anyone help me in doing bias correction, such as quantile mapping, using climate data in Python?
Relevant answer
Answer
Agatambidi Balakrishna, have you performed the bias correction? How did you proceed with the code given above? Could you explain it to me in detail?
  • asked a question related to Bias (Epidemiology)
Question
3 answers
It is well known that there are three types of systematic bias in causal inference: confounding, selection, and measurement. Is reverse causation, then, a fourth source of bias independent of these three forms?
Relevant answer
Answer
I would explain it this way. When two variables, A and B, are correlated, there are three possible explanations related to causation. The first is that A causes B. The second is that B causes A. The third is that another variable, C, causes both A and B. I think that you are asking whether the second scenario I described could be an explanation, and yes, it can. Here is an example: suppose that you correlate ice cream sales (A) and the number of people (B) at a beach over a one-year period. Did A cause B, or did B cause A? The more likely explanation is that the weather (temperature, rain, snow) caused both the increase in people and the increase in ice cream sales.
  • asked a question related to Bias (Epidemiology)
Question
1 answer
Dear all,
I've recently processed some samples for ATAC-seq. My corresponding ATAC-seq library looks different from the expected profile (see picture: Bioanalyzer). I was wondering if I can still sequence it or whether it will be too biased.
Thank you for your help
Best,
Karim
Relevant answer
Answer
Hi Karim! Did you get the answer? Because I got a similar profile with my ATAC sample recently. It looked like the nucleosome peaks were not very pronounced and the only prominent peak was from an excess of index primer. I was advised not to proceed with this sample. It would be really helpful if you could share your views on this. Thanks!
  • asked a question related to Bias (Epidemiology)
Question
6 answers
According to some, using country-level data in a study might introduce the "aggregation fallacy" associated with macro data, resulting in possible bias in the estimated findings. If this is true, then why have researchers published several research articles in prestigious journals in which they analysed country-level data? What if we conducted the estimates using data grouped by income level? Simply put, if this really is a problem, what is the solution?
Relevant answer
Muhammad Ramzan, I believe this may express the meaning: the aggregation fallacy (aggregation bias) is, roughly, the conclusion that what is true for the group must also be true for the sub-group or individual. It is called aggregation bias because aggregated data are extrapolated inappropriately. Heterogeneity is one reason, though not the only one, why it emerges.
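A toy numerical illustration (entirely hypothetical data) of how aggregation can mislead when groups are heterogeneous: within each income group the slope of y on x is negative, yet the pooled, aggregated data show a positive slope.
```python
import numpy as np

# Two hypothetical country groups; within each group y falls as x rises,
# but the pooled data show the opposite sign.
x_low = np.array([1.0, 2.0, 3.0]);    y_low = np.array([9.0, 8.0, 7.0])
x_high = np.array([11.0, 12.0, 13.0]); y_high = np.array([19.0, 18.0, 17.0])

def slope(x, y):
    return np.polyfit(x, y, 1)[0]  # ordinary least-squares slope

print(slope(x_low, y_low))    # -1.0 within the low-income group
print(slope(x_high, y_high))  # -1.0 within the high-income group

x_all = np.concatenate([x_low, x_high])
y_all = np.concatenate([y_low, y_high])
print(round(slope(x_all, y_all), 2))  # about +0.95: aggregation reverses the sign
```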
  • asked a question related to Bias (Epidemiology)
Question
1 answer
I am designing a perceptual study with a repeated-measures design. Participants will hear audio stimuli and be asked to respond to a target stimulus with the question "was the stimulus early, on-time, or late?". There will be several experimental conditions, and all participants will undergo all conditions. I want a dependent variable measuring how accurate participants are on this task, with the ability to see differences between early, on-time, and late stimuli.
One option is to calculate the percentage of correct responses in each category, but I believe this approach suffers from response bias. For example, if someone responds "on-time" to every stimulus, it will look as if they were great at recognising the on-time stimuli, when in fact they were just biased towards that option.
In the past, I have used signal detection theory on 2-choice categorical data (i.e. yes/no data), to produce measures of sensitivity (d prime) and bias. This accounts for response bias in the data.
My question is, is there a way to extend this yes/no signal detection analysis to data with three response options?
Or is there another way to account for response bias?
Thanks!
Relevant answer
Answer
Are you familiar with the California Verbal Learning Test? It uses d' for 2-choice categorical data (true/false) and also for the number of correct responses (correctly recalled words). If you know which responses are hits and which are misses, you can dummy-code the three response options so that you obtain a sum of correct responses and then compute d' manually. There are also several R packages (https://cran.r-project.org/web/packages/psyphy/index.html); you can check their source code and adapt it to your needs.
Another option would be using item response theory (search for IRT adjustment for guessing), which can penalize partial guessing behavior.
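For reference, here is a minimal sketch of a standard yes/no d' calculation in Python (SciPy). One possible, simplified way to use it with three response options is a one-vs-rest analysis per category, e.g. treating "on-time" trials as the signal and the other stimuli as noise; the counts and the log-linear correction below are illustrative assumptions, not a prescription.
```python
from scipy.stats import norm

def dprime(hits, misses, false_alarms, correct_rejections):
    """Yes/no signal detection: sensitivity d' and criterion c.

    A log-linear correction (add 0.5 to each cell) avoids infinite
    z-scores when a hit or false-alarm rate is exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d = norm.ppf(hit_rate) - norm.ppf(fa_rate)
    c = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
    return d, c

# Hypothetical counts with "on-time" treated as the signal category
print(dprime(hits=40, misses=10, false_alarms=15, correct_rejections=35))
```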
  • asked a question related to Bias (Epidemiology)
Question
1 answer
Self-reporting bias is a challenge in GPS data collection in studies where participants have to manually START and STOP recording their trips, unlike studies where GPS data are collected passively (continuously in the background) without the need for user intervention.
I am specifically looking for studies that mention the existence of self-reporting bias in the context of GPS data collection.
Relevant answer
Answer
I'm still trying to understand your question. By passive collection, are you referring to a GPS device at a fixed location, so that deviations in reporting can be seen?
And what exactly are you looking for?
  • asked a question related to Bias (Epidemiology)
Question
2 answers
Berksonian bias occurs because of differential rates of hospital admission, or because cases may be similar to controls (blunting of the effect), which may decrease generalizability. What methods can reduce this bias?
Relevant answer
Answer
Do not use hospital controls at all, or try to ensure that no hospital controls have diseases (other than the disease under study) that could be caused by the exposure(s) being investigated.
  • asked a question related to Bias (Epidemiology)
Question
4 answers
Hi there
I have undertaken SRs in the past and I know the ideal is to have 2 reviewers and a third person to resolve any discrepancies. But is there a maximum number of reviewers for scoping reviews? Would having 4 or 5 full text reviewers actually be a good thing to minimise bias or would it be a case of too many cooks?
Relevant answer
Answer
It is difficult to determine the exact number of reviewers that would be needed for scoping reviews. It depends on the type of project and the number of stakeholders involved in it.
  • asked a question related to Bias (Epidemiology)
Question
3 answers
In survey research, it is advised that the response rate should be high to avoid self-selection bias. What methods can be used to assess whether the data are affected by biases resulting from a low response rate?
Relevant answer
Answer
Hello Saima,
If you're lucky enough to have information about characteristics of the target population, and to have collected some of that information about your sample, you could:
1. Run comparisons to see whether your sample deviated notably from the population characteristics. For example, if the target population was 60% female, but your sample was 80% female, then you have evidence that your sample deviates from the population in one potentially important aspect.
2. You could apply weights for these variables to your sample data set, to represent how the results might have looked, had your sample more closely matched with the characteristics of the population.
Good luck with your work.
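As a small illustration of points 1 and 2, here is a pandas sketch with invented numbers: the sample is 80% female although the population is assumed to be 60% female, and simple post-stratification weights (population share divided by sample share) are used to re-weight an outcome. A real analysis would usually weight on several variables at once (e.g. by raking), so treat this only as a sketch of the idea.
```python
import pandas as pd

# Hypothetical sample and assumed population composition for one variable (sex)
sample = pd.DataFrame({
    "sex": ["F"] * 80 + ["M"] * 20,
    "score": [3.2] * 80 + [2.8] * 20,   # placeholder outcome variable
})
population_share = {"F": 0.60, "M": 0.40}  # assumed census figures

# 1. Compare sample composition with the population
sample_share = sample["sex"].value_counts(normalize=True)
print(sample_share)  # 80% F in the sample vs 60% in the population

# 2. Post-stratification weight = population share / sample share
weights = sample["sex"].map(population_share) / sample["sex"].map(sample_share)
print(sample["score"].mean())                             # unweighted mean
print((sample["score"] * weights).sum() / weights.sum())  # weighted mean
```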
  • asked a question related to Bias (Epidemiology)
Question
4 answers
I have heard in videos that the variation in R² and path coefficients (before and after common method bias correction) should be <10% for the unmeasured marker variable method and <30% for the measured latent marker variable correction method.
Can anyone share articles or references? Where do these cut-off values come from?
Relevant answer
Answer
Please see the CMB related discussion in this latest article:
Syed Mahmudur Rahman, Jamie Carlson, Siegfried P. Gudergan, Martin Wetzels, Dhruv Grewal. (2022). Perceived Omnichannel Customer Experience (OCX): Concept, measurement, and impact. Journal of Retailing, https://doi.org/10.1016/j.jretai.2022.03.003
  • asked a question related to Bias (Epidemiology)
Question
5 answers
Dear colleagues,
I am currently working with a mentor on a systematic review and meta-analysis of the prevalence and risk of a specific condition over a specific period. The purpose is to evaluate cross-sectional studies. The problem is that there are plenty of different tools for assessing the risk of bias in cross-sectional studies, and I am confused about which is best for my study. Can you guide me?
I am looking forward to your help. Thank you in advance.
Relevant answer
Answer
Fatimah Albahrani, you can use the Joanna Briggs Institute (JBI) methodology and its critical appraisal tools for assessing the risk of bias of the studies. Please see the link below:
  • asked a question related to Bias (Epidemiology)
Question
1 answer
Hi all,
I'm currently in the full text review phase of an SLR on predictors / risk factors of treatment resistant depression. I'm including observational studies and RCTs. Virtually all of these studies are non-interventional. For data synthesis, I'm looking to conduct a narrative summary of results, separated in categories such as clinical, genetic, demographic etc. I'm finding it quite difficult to source a risk of bias tool for non-interventional studies with the end product being a narrative summary. Does anyone have any suggestions regarding suitable RoB tools?
Thanks in advance!
Relevant answer
Answer
Hi Shane,
For non-interventional studies, I would suggest the Newcastle-Ottawa Scale (NOS) instead of ROB tools because as far as I know ROB tools are mostly used for interventional studies (randomised and non-randomised). This is a link to the Newcastle-Ottawa Scale (NOS): http://www.ohri.ca/programs/clinical_epidemiology/oxford.asp.
For interventional studies, I would use ROB tools. I hope this is useful.
Regards,
Reza
  • asked a question related to Bias (Epidemiology)
Question
1 answer
According to Podsakoff et al. (2003), measuring some types of constructs (e.g. attitude, personality) is likely to be affected by systematic bias in the form of common method variance/bias (CMV/CMB). However, in my research most of the constructs are perceptual or self-referencing (e.g. work-family conflict/enrichment/balance). Is there any work arguing that such constructs can avoid the threat of CMV, or does anyone have experience dealing with this issue?
Relevant answer
Answer
Pretty much any measure will result in some amount of method-specific variance. As Campbell and Fiske (1959) pointed out in their seminal Psychological Bulletin article on the MTMM matrix, each measurement can be seen as a trait-method unit, reflecting both trait and method (e.g. rater) variance. Every measurement is specific in some way; nothing is completely objective, especially not self- and other-report measures.
  • asked a question related to Bias (Epidemiology)
Question
5 answers
I would like to calculate the average size of agricultural fields in the landscapes surrounding my study sites. Calculating field size in ArcGIS Pro is easy enough, but when I intersect my landscape buffers with the field layer and then calculate average field size from the intersected layer, some fields are cut off and "made smaller", which underestimates the average field size.
Any ideas on how to deal with this issue? Are there tools in ArcGIS Pro (or another program) that account for or correct this bias?
I very much appreciate your ideas and suggestions!
Relevant answer
Answer
Hadrien Dicostanzo That is what I have done so far: calculating the mean from the new intersected layer. But it cuts off some fields at the edge (where they intersect the buffer) and reduces the average field size.
Thanks for the suggested approach; that may be a nice way of deciding which fields to include in the calculation.
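Outside ArcGIS Pro, one possible workaround is to keep whole field polygons whose centroid falls inside the buffer and average their original, unclipped areas, so edge fields are not artificially shrunk. Below is a hedged GeoPandas sketch; the file names, the site_id column, and a projected CRS in metres are all assumptions for illustration.
```python
import geopandas as gpd

# Hypothetical inputs; both layers assumed to share a projected CRS in metres
fields = gpd.read_file("fields.gpkg")         # agricultural field polygons
buffers = gpd.read_file("site_buffers.gpkg")  # landscape buffers around study sites

fields["field_area_ha"] = fields.geometry.area / 10_000  # full, unclipped area

# Select whole fields whose centroid falls inside a buffer instead of clipping them
centroids = fields.copy()
centroids["geometry"] = fields.geometry.centroid
joined = gpd.sjoin(centroids, buffers[["site_id", "geometry"]],
                   how="inner", predicate="within")

# Mean field size per study site, based on original field areas
mean_field_size = joined.groupby("site_id")["field_area_ha"].mean()
print(mean_field_size)
```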
  • asked a question related to Bias (Epidemiology)
Question
2 answers
I am working on behavior change through an 8-week intervention program. Due to time constraints and, ethically, because leaving anyone unexposed to the intervention is not acceptable (the targeted behavior may endanger health), I have chosen a single, non-randomly selected group for the pre-test and post-test. The post-test will be repeated at weeks 10, 12, and 14.
How can I provide a sound rationale for using this design, and how can I address the well-known biases associated with it, e.g. maturation, regression to the mean (RTM), history, and pre-test effects? I will be thankful for your suggestions.
Relevant answer
Answer
We had a similar problem in a study addressing the effect of communicating probabilistic risk on preventive behaviour in a single occupational cohort.
See Leigh J, Harrison J. J Occ Health Safety ANZ 1991; 7:467-472.
The paper is on my ResearchGate page.
  • asked a question related to Bias (Epidemiology)
Question
11 answers
Hi Dear,
Has anybody been able to use the R Shiny web application (https://cinema.ispm.unibe.ch/rob-men/) for evaluating the risk of bias due to missing evidence?
A continuous error occurs when I run the data analysis.
Relevant answer
Answer
Khrystyna Zhurakivska, the second paragraph of my previous comment contained the reason for, and the solution to, my problem.
The reason I could run my data in BUGSnet but not in ROB-MEN is that one of the studies in my NMA had zero events in both the experimental and control groups. ROB-MEN runs a network meta-regression using the smallest observed variance as a covariate as part of its data analysis, and hence gets stuck when there are zero events in both comparator groups. Hence the "disconnected from server" error message in my case (I wish the error messages were more specific!). The trick in this case is to run ROB-MEN excluding studies with zero events in both treatment groups.
ROB-MEN works fine if only one of the groups has zero events.
The general instructions for formatting data for ROB-MEN are given on its home page; the format is similar to the file you shared with me.
Hope this helps!
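For anyone preparing their upload, here is a minimal pandas sketch of that workaround with invented column names: it drops every study whose arms all have zero events before exporting the file for ROB-MEN.
```python
import pandas as pd

# Hypothetical arm-level NMA data; column names are placeholders
data = pd.DataFrame({
    "study": ["A", "A", "B", "B", "C", "C"],
    "events": [5, 3, 0, 0, 2, 4],
    "n": [50, 50, 40, 40, 60, 60],
})

# Identify studies where every arm has zero events (these break ROB-MEN's
# meta-regression step) and drop all of their rows before exporting.
events_per_study = data.groupby("study")["events"].sum()
zero_event_studies = events_per_study[events_per_study == 0].index
filtered = data[~data["study"].isin(zero_event_studies)]
print(filtered)  # study B is removed; A and C are kept
```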
  • asked a question related to Bias (Epidemiology)
Question
2 answers
I define one part of my model as a ferrite material (saturation greater than zero). With a ferrite material, we have to define a magnetic bias for it, which I did, but the simulation does not converge and I get the error "matrix solver exception: file i/o failure". What should I do? How large should this magnetic bias be?
  • asked a question related to Bias (Epidemiology)
Question
2 answers
I am using the NIH quality assessment tool (https://www.nhlbi.nih.gov/health-topics/study-quality-assessment-tools) to critically appraise controlled intervention studies and before-after studies as part of my MSc dissertation. I am aware that it is not advised to use a numeric score to infer study quality. Can anybody advise whether there is a best way to determine poor/fair/good, or is it simply a matter of opinion? I am concerned about the bias introduced if I simply state that I decided poor/fair/good.
Thanks in advance
Relevant answer
Answer
You can score 1 point for every "yes" and 0 for "no" (although there are other options such as CD, cannot determine, and NR, not reported) and total the score out of 14 points in Microsoft Excel. The grading is then decided by the total score: 0-5 (poor), 6-10 (fair), and 11-14 (good).
Hope it helps
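If you would rather automate that rule than score by hand in Excel, here is a small Python sketch of the same 14-item scoring and grading; the item responses in the example are hypothetical.
```python
def grade_study(responses):
    """responses: list of 14 answers such as 'yes', 'no', 'CD', 'NR'.

    One point per 'yes'; totals of 0-5 are graded poor, 6-10 fair, 11-14 good.
    """
    score = sum(1 for r in responses if r.lower() == "yes")
    if score <= 5:
        grade = "poor"
    elif score <= 10:
        grade = "fair"
    else:
        grade = "good"
    return score, grade

# Hypothetical appraisal of one before-after study
example = ["yes"] * 9 + ["no"] * 3 + ["CD", "NR"]
print(grade_study(example))  # (9, 'fair')
```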
  • asked a question related to Bias (Epidemiology)
Question
2 answers
I have a model in HFSS that I want to simulate under two conditions. The first condition is without any magnetic medium: assign the excitation and calculate the fields. The second condition adds a magnetic medium (not exactly ferrite, but with permittivity = 1, permeability from 1 to 10, magnetic saturation greater than zero, from 1 to 1000, G = 2, delta-H = 0). The frequency is 100 MHz, with three terminals: one input and two outputs; incident voltage = 0.5 (Driven Terminal).
Under the first condition the simulation works. But in the second case, when I define the new material with the properties I mentioned, I get an error saying that I must define the magnetic bias for a ferrite material (Ms > 0). This magnetic bias requires an internal bias to align the dipoles of the ferrite, but the simulation does not converge. My question is: why must I define the internal bias? My goal is to magnetize the medium with the magnetic field produced in the model, not with an assigned magnetic bias.
Relevant answer
Answer
Indirajith Kanagaraj, I mentioned in my question that I am using Driven Terminal, because the voltage at the input terminal is important to me.
  • asked a question related to Bias (Epidemiology)
Question
4 answers
  • Survey-based studies are commonly used to study the inter- and intraobserver reliability of thoracolumbar fracture classifications. What are the pitfalls or inherent biases related to these survey-based studies?
Relevant answer
Answer
Interesting question. Some surveys use key images that show the outcome pattern the authors expect, which introduces a selection bias into the reliability assessment. I think it is better to give the assessor access to the entire case file so as to reproduce a more realistic scenario; a survey that includes a pool of images is better than one with just one or two images.
  • asked a question related to Bias (Epidemiology)
Question
3 answers
There are many tools for assessing the risk of bias; however, I'm confused about which is most preferred for evaluating the risk of bias in studies.
Relevant answer