
# Observational Studies - Science topic

Explore the latest questions and answers in Observational Studies, and find Observational Studies experts.
Questions related to Observational Studies
• asked a question related to Observational Studies
Question
On social media platforms (e.g. Twitter), after a tweet has been posted to refute false information, is it reliable to use the comments below it to assess the tweet's credibility? Since commenting is the user's choice, does this practice lead to selection bias? For example, under the pressure of certain social norms it takes courage to contradict information posted by an official account, i.e. users who support the official claim are more likely to comment.
A. Greede Thank you for your comments. However, it is difficult to confirm whether people in a certain country have adequate security (or freedom of speech).
• asked a question related to Observational Studies
Question
Regression and matching are the most common econometric tools used by scholars. Regression estimates correlations, but such a correlation can also be interpreted as causation when certain requirements are satisfied. As Pearl says, " 'Correlation does not imply causation' should give way to 'Some correlations do imply causation.' "
One of the most critical assumptions for making causal inferences in observational studies is that (conditional on a set of variables) the treatment and control groups are (conditionally) exchangeable. Confounding and selection bias are two forms of lack of exchangeability between the treated and the untreated. Confounding is a bias resulting from the presence of common causes of treatment and outcome, often viewed as the typical shortcoming of observational studies; selection bias, by contrast, occurs when conditioning on a common effect of treatment and outcome, and can arise in both observational studies and randomized trials.
In econometrics, the definitions of confounding and selection bias are not very clear. The so-called omitted variable bias (also known as selection bias, as distinct from the selection bias mentioned above) in econometrics, in my opinion, refers to bias due to confounding. In a simple regression model Y = a + bX + ε, we say there is omitted variable bias when the residual term is correlated with the independent variable, that is, when the regression omits variables that are related to the independent variable and may affect Y. In other words, the omitted variable is correlated with (1) the independent variable and (2) the outcome variable. By that definition, the common effects of X and Y should also be controlled for, yet such control is known to lead to another type of bias: selection bias. Angrist addresses this issue in his book: "There's a second, more subtle, confounding force here: bad controls create selection bias ... the moral of the bad control story is that timing matters. Variables measured before the treatment variable was determined are generally good controls, because they can't be changed by the treatment. By contrast, control variables that are measured later may have been determined in part by the treatment, in which case they aren't controls at all, they are outcomes." We now also know that variables measured before the treatment is determined are not necessarily good controls, as M-bias shows. The econometric definition is confusing; it seems to me that omitted variable bias should be distinguished from selection bias, and that omitted variable bias should be defined as bias from a variable in the residual that causes both Y and X.
Because of this presentational confusion, omitted variable bias is often mistaken for the mere omission of variables associated with Y. We often see articles with statements such as "To mitigate the omitted variable bias of the model, we also control for ...", followed by a long list of variables that (may) have an effect on Y. However, adding a series of control variables to the regression model may not help our assessment of causal effects, and may even amplify the bias: conditioning on a collider opens a back-door path that was blocked so long as the collider was left unconditioned. Therefore, when using regression for causal inference, what we have to do is pick the set of control variables from a reliable causal diagram.
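The collider problem is easy to demonstrate with a small simulation. The sketch below uses purely hypothetical data (not from any cited study): X and Y are generated independently, so their true association is zero, and C is their common effect. Restricting the sample by C ("controlling" for the collider) manufactures a correlation out of nothing:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# X and Y are generated independently: the true association is zero.
x = rng.normal(size=n)
y = rng.normal(size=n)

# C is a collider: a common effect of both X and Y.
c = x + y + rng.normal(scale=0.5, size=n)

# Unconditional correlation between X and Y is ~0, as it should be.
r_all = np.corrcoef(x, y)[0, 1]

# "Controlling" for the collider by restricting to high values of C
# opens the back-door path and induces a spurious negative correlation.
mask = c > 1.0
r_conditioned = np.corrcoef(x[mask], y[mask])[0, 1]

print(round(r_all, 2), round(r_conditioned, 2))
```

The unconditional correlation hovers near zero while the conditioned one is strongly negative, which is exactly the "bad control" pattern described above.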
I believe simple regression methods are not worthless for causal inference; what we need to do is scrutinise our assumptions before using regression (using causal diagrams to choose the control variables that block back-door paths is a good way), increase the transparency of our research, and show the reader what assumptions our results rest on and to what extent those assumptions are reliable. Of course, no matter how much effort we put into arguing that our conclusions are reliable, the threat of unobservable confounding is unavoidable in studies based on observational data, and regression methods that cope with confounding by adding control variables can only address observable confounders. However, one cannot dismiss a method without identifying where such threats come from; as Robins says, the critic is then not making a scientific statement, but a logical one.
These are just some personal views from my studies; all comments are welcome!
References:
Pearl, J., & Mackenzie, D. (2018). The Book of Why: The New Science of Cause and Effect. Basic Books.
Hernán, M. A., & Robins, J. M. (2010). Causal Inference.
Angrist, J. D., & Pischke, J. S. (2014). Mastering 'Metrics: The Path from Cause to Effect. Princeton University Press.
Dear Chao,
Thank you for the information.
• asked a question related to Observational Studies
Question
Hello,
As I know Graphpad Prism is being used for experimental studies, but can I analyze data for cross-sectional studies using Graphpad Prism as well? Or do I have to use another program?
I am looking forward to the answer. Thank you so much!
Regards,
It depends on the study, but for serious econometrics, which I assume this project is, I would not do that. The time wasted on a failed project is better spent on the world's best package, which doesn't cost you anything at all. The choice is yours. Good luck, David Booth. PS: compare the two attached screenshot searches and take your pick.
• asked a question related to Observational Studies
Question
I am doing a study on the causal effects of information source (X1) and text sentiment (X2) on information sharing behaviour (Y). How can I explain whether my predictors can be considered as quasi-experimental factors, given that my observational data are derived from social media data?
Note that the following statement is from "Hernán, M. A., & Robins, J. M. (2020). Causal Inference: What If." to illustrate some of the differences between observational and experimental studies and how reliable causal inferences can be drawn from observational studies.
"Ideal randomized experiments can be used to identify and quantify average causal effects because the randomized assignment of treatment leads to exchangeability.
Observational studies, on the other hand, may be much less convincing. A key reason for our hesitation to endow observational associations with a causal interpretation is the lack of randomized treatment assignment.
We analyze our data as if treatment had been randomly assigned conditional on measured covariates L–though we often know this is at best an approximation. Causal inference from observational data then revolves around the hope that the observational study can be viewed as a conditionally randomized experiment."
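As a minimal illustration of what "as if randomly assigned conditional on measured covariates L" buys you, the simulation below (hypothetical variable names and effect sizes, not from the book) builds in confounding by L, then recovers the true effect by standardizing over L:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# L is a measured covariate (e.g., disease severity).
L = rng.binomial(1, 0.5, size=n)

# Treatment A is NOT randomized: severe patients (L=1) are treated more often.
p_treat = np.where(L == 1, 0.8, 0.2)
A = rng.binomial(1, p_treat)

# Outcome depends on treatment (true effect +1) and on severity (-2).
Y = 1.0 * A - 2.0 * L + rng.normal(size=n)

# Naive comparison is confounded by L and even gets the sign wrong.
naive = Y[A == 1].mean() - Y[A == 0].mean()

# Standardization: estimate the effect within each level of L, then
# average over the marginal distribution of L ("as if randomized given L").
effects, weights = [], []
for l in (0, 1):
    stratum = L == l
    diff = Y[(A == 1) & stratum].mean() - Y[(A == 0) & stratum].mean()
    effects.append(diff)
    weights.append(stratum.mean())
standardized = np.average(effects, weights=weights)

print(round(naive, 2), round(standardized, 2))
```

The naive difference is around -0.2 while the standardized estimate recovers the true +1.0; the catch, as the quoted passage says, is that this only works if the measured L really suffices for conditional exchangeability.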
• asked a question related to Observational Studies
Question
Consider the following study: assess the causal impact of the authority (X) of the publisher of a debunking message (a message that refutes a rumour) on the user's acceptance of that message (Y). The independent variable (authority, X) is measured by a scale (higher scores indicate higher authority) and the dependent variable (acceptance, Y) is determined by the user's position (acceptance or non-acceptance) of comments made in response to the debunked message.
Is there selection bias in this research design? Since the only data for the dependent variable are those who make comments, and those who do are more likely to be the ones who accept the debunking message; conversely, those who do not accept the debunking message are likely to remain silent out of fear of authority. In other words, the process of selection of individuals into the analysis guarantees that debunking messages from high authority publishers have a higher acceptance rate, regardless of whether the increased authority actually increases acceptance.
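A small simulation can make the worry concrete. Everything below is hypothetical: acceptance is deliberately generated independently of authority (zero true effect), yet selection into commenting alone produces a higher observed acceptance rate for high-authority publishers:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500_000

# Authority of the debunking publisher: low (0) or high (1).
authority = rng.binomial(1, 0.5, size=n)

# True model: acceptance does NOT depend on authority (50% either way).
accepts = rng.binomial(1, 0.5, size=n)

# Selection into the data: non-accepters facing a high-authority source
# mostly stay silent out of deference; everyone else comments at 50%.
p_comment = np.where((authority == 1) & (accepts == 0), 0.1, 0.5)
comments = rng.binomial(1, p_comment).astype(bool)

# Observed acceptance rate among commenters, by authority level.
obs_low = accepts[(authority == 0) & comments].mean()
obs_high = accepts[(authority == 1) & comments].mean()

print(round(obs_low, 2), round(obs_high, 2))
```

The observed acceptance rate jumps from 0.5 to roughly 0.83 for high-authority sources even though authority has no causal effect at all, which is exactly the selection bias the question describes.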
Your analysis and conclusion make sense to me. This isn't a very strong design for convincingly demonstrating a causal effect. A better experimental design would use random assignment to different levels of authority (experimental manipulation) and then measure acceptance in everyone exposed to that experimental manipulation.
• asked a question related to Observational Studies
Question
ROBINS-I (Observational Study)
Selection bias may arise when the analysis does not include all of the participants, or all of their follow-up after initiation of intervention, that would have been included in the target randomized trial. The ROBINS-I tool addresses two types of selection bias: (1) bias that arises when either all of the follow-up or a period of follow-up following initiation of intervention is missing for some individuals (for example, bias due to the inclusion of prevalent users rather than new users of an intervention), and (2) bias that arises when later follow-up is missing for individuals who were initially included and followed (for example, bias due to differential loss to follow-up that is affected by prognostic factors). ROBINS-I considers the first type of selection bias under "Bias in selection of participants into the study", and aspects relating to loss to follow-up are covered under "Bias due to missing data".
For further information, read 'ROBINS-I detailed guidance 2016' (https://www.riskofbias.info/welcome/home/current-version-of-robins-i/robins-i-detailed-guidance-2016).
• asked a question related to Observational Studies
Question
Which risk of bias tool is appropriate for these articles types:
1- Non-RCT experimental studies (Quasi-experimental)
2- Pre-test post-test and interrupted time series (observational studies)
If the resulted articles in the systematic reviews had different types (RCT, QUASI, observational), should we use 3 distinct risk of bias tools, or is there a universal tool for all?
Thanks
Thank you
• asked a question related to Observational Studies
Question
There are two ways to classify cluster sampling techniques. The first is based on the number of stages followed to obtain the cluster sample (i.e., one-stage, two-stage, multi-stage). The second is based on the representation of the groups in the entire cluster (this is to ensure a fair and yet correct representation that could give the most accurate answer to a research question about a particular population and event).
I quote*: "Probability sampling requires that each member of the survey population have a chance of being included in the sample, but it does not require that this chance be the same for everyone." How true is this? Using the parsimony principle in allocation may not make the analysis easier, and may make the model/test less rather than more powerful!
If the quote* is true, probability proportional to size (PPS) sampling is a method that can be used to make sure that, in sampling the clusters, those with the auxiliary variable of interest, or a bigger population, are selected over smaller ones, and it would give the best answer to the research questions about the population. Am I right? I really want to learn, please.
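As a sketch of PPS selection (the cluster sizes here are hypothetical), a common implementation is weighted sampling without replacement, where each cluster's selection probability is proportional to its size, so that individual members end up with roughly equal inclusion chances. Note that draw-by-draw weighted sampling only approximates strict PPS inclusion probabilities:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical cluster sizes (e.g., number of households per village).
sizes = np.array([120, 80, 300, 50, 450, 200, 100, 700])

# PPS: each cluster's selection probability is proportional to its size.
probs = sizes / sizes.sum()
sampled = rng.choice(len(sizes), size=3, replace=False, p=probs)

print(sorted(int(i) for i in sampled))
```

Larger clusters (the 450- and 700-unit ones) are drawn far more often than the 50-unit cluster, which is exactly the unequal-but-known selection chance the quoted passage allows.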
Abiodun -
I am still not understanding everything, but it sounds like you have a very big project going on.  But even with a very large, complex project you can still test how well such a project is estimating ("predicting" in the case of regression), if you take out some of the observed data and see how well you would have estimated for it.  Well, that would be for the quantitative data.  I wonder what kind of 'quality assurance' one might be able to do for qualitative data?  That is not an area for me.
Best wishes - Jim
• asked a question related to Observational Studies
Question
Hi, I'm quite confused about the type of study design of this research paper (the study is available in the attachment).
It seems to be a secondary type of research (it does not collect primary data; the data source is previously collected data covering a period of 5 years (2015-2020)), and the study analyses the incidence of TB notifications pre- and post-pandemic from that source. Would this study be classified as a cross-sectional study or a retrospective cohort study? Any advice would be of much help, thank you very much.
Robert Boer, Mohanad Kamaleldin Mahmoud Ibrahim Babak Jamshidi Thank you everyone for answering. Robert Boer I'm currently conducting a systematic review on the availability of TB services pre- and post- pandemic. I've included that report in my review, and I need to know the study design for reporting on study characteristics and study quality assessment. May I ask how would I assess the study quality for a surveillance study? This is my first time conducting a systematic review and there are a few things I'm still unsure of, any advice would be highly appreciated.
• asked a question related to Observational Studies
Question
I want to investigate the clearance of some medicines on dialysis. These meds are regularly taken by the patients, and I am only going to verify that they are taken. Will it be a clinical trial (the meds are administered) or an observational study (the meds are not given by me)?
Hi,
Since there is no intervention or control and only observation, it is a cohort observational study.
Is this a phase IV, surveillance study?
• asked a question related to Observational Studies
Question
I have observational panel data where all the observations went through the same treatment at a certain point in time, say t. What is the best way to analyze the impact of that treatment? Thank you in advance!
Hasan Ahamed Hope that will help. Also, we are currently doing similar research to describe a pattern of changes in the dynamics of financial time series caused by certain types of events; the above method is something we are going to try too.
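When every unit is treated at the same date and there is no untreated comparison group, one common option is an interrupted time series (segmented regression) on the pooled outcome: estimate the pre-treatment level and trend, and test for a level shift and a slope change at the treatment date. A minimal sketch with simulated data (hypothetical numbers, ordinary least squares via NumPy):

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated outcome series: pre-treatment trend, then a level shift of +5
# after the treatment at period t0 (all units treated at the same time).
T, t0 = 60, 30
time = np.arange(T)
post = (time >= t0).astype(float)
y = 2.0 + 0.1 * time + 5.0 * post + rng.normal(scale=0.5, size=T)

# Segmented (interrupted time series) regression:
# y = b0 + b1*time + b2*post + b3*(time - t0)*post
X = np.column_stack([np.ones(T), time, post, (time - t0) * post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

level_shift = beta[2]  # estimated immediate jump at the treatment date
print(round(float(level_shift), 1))
```

The estimated level shift recovers the built-in +5. The key identifying assumption is that the pre-treatment trend would have continued unchanged absent the treatment, which is a strong assumption to state explicitly in the write-up.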
• asked a question related to Observational Studies
Question
Greetings and Merry Christmas to everyone. I have a question regarding observational studies: when factors associated with a condition/disease are evaluated, certain variables recur (sex, educational level, age, income). My question is whether there is a set of universal variables that should always be considered in these types of studies, or whether these variables vary according to the condition/disease. Thanks
I think there is no universal set of socio-demographic variables that should be applied to all studies. Instead, it is more about the research question, e.g., whether people's education level can help them avoid COVID-19 infection. This can be done by simply comparing a group of more-educated individuals with a group of less-educated individuals.
Therefore, there should be no fixed boundaries for the socio-demographic variables. Still, you should always ask yourself what value a variable adds to the research; if you can come up with a reasonable argument, then I think it should be in the study.
Best Regards,
Belal Edries
• asked a question related to Observational Studies
Question
Hi all,
I am trying to do an observational study on admissions during the COVID period 2020, and comparing this to the same period in 2017, 2018 and 2019.
I am having trouble using statistics to analyse the data.
I have converted the data to admissions per day and I am trying to use ANOVA to get a p value; however, this does not seem to give a valid value (the first time I got 6, the next time 0, while a p value must lie between 0 and 1). The data are zero-heavy, especially in the 2020 group, where there are many days with no admissions at all. I used Excel (I cannot get my head around SPSS).
There is clearly a significant difference and similar studies have shown a p value of <0.01.
Is this the right statistical test to be using as there are 4 groups? Is there a particular way I should format the data to be able to analyse it? Is the fact that the data is zero heavy affecting the p value and the reason why the ANOVA does not seem to be working?
Any help would be greatly appreciated, many thanks
From what you are describing I am not sure that your data meet all of the requirements for ANOVA. Did you check whether the data are normally distributed in each of the years? It does not sound as if they are. To be on the conservative side you could consider a Kruskal-Wallis one-way analysis of variance (a nonparametric test), which does not have the same constraints; you can easily do it by hand if it is not available in Excel. You could use the tools at https://www.statisticssolutions.com/kruskal-wallis-test/ as a quick way forward. The K-W test is not as powerful as a parametric ANOVA, but in this case I don't think that is going to be a problem.
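For reference, the Kruskal-Wallis test is a one-liner in Python's SciPy, which may be easier than doing it by hand. The data below are simulated stand-ins for four years of daily admissions (hypothetical counts, with a zero-heavy 2020 as described):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Hypothetical daily admission counts for four years; 2020 is zero-heavy.
adm_2017 = rng.poisson(6, size=90)
adm_2018 = rng.poisson(6, size=90)
adm_2019 = rng.poisson(7, size=90)
adm_2020 = rng.poisson(1, size=90)   # many zero-admission days

# Kruskal-Wallis: rank-based, so no normality assumption, and the p value
# is always between 0 and 1 (a "p value of 6" signals a formula error).
h, p = stats.kruskal(adm_2017, adm_2018, adm_2019, adm_2020)
print(round(float(h), 1), p < 0.01)
```

With a clearly lower 2020 group the test returns a very small p value, consistent with the <0.01 reported by similar studies.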
• asked a question related to Observational Studies
Question
Hi all,
I'm looking into observational studies to try and better characterize treatment-resistant depression. I'm having trouble deciphering the paragraph in bold below, having read it multiple times. What does this mean in plain English? I have provided some additional text for context.
---
The study sample began with identifying patients with Major Depressive Disorder and included 572,682 patients with at least one service claim for a depressive disorder including dysthymic disorder.
Patients were excluded if:
a) they had a pre-defined exclusion diagnosis for schizophrenia,
schizoaffective or bipolar disorders
b) the patient’s age on Index Diagnosis Date was not within the 18–64 year age bracket;
The Index Diagnosis Date for each patient was defined as the date of the first inclusion diagnosis that:
1) had no other inclusion diagnosis or prescription claim of antidepressants in the prior 120 days (guided by the definition of a "new episode" contained in PQRI 2007 Measure 9) [19], and
2) had eligibility for 4 months prior to and 24 months after the date (to ensure a minimum study period of 2 years for each patient).
Classification of MDD Episode
After patients were included in the study sample, the data were examined and episodes of MDD were established for each patient. An MDD episode was defined to begin on a first relevant date and end 120 days after the last relevant date, where the first relevant date was a date of inclusion diagnosis that was not preceded by any inclusion diagnosis or an antidepressant prescription claim in the preceding 120 days. The last relevant date was a date of inclusion diagnosis or antidepressant prescription claim that was followed by a clear period of 120 days without any inclusion diagnosis or antidepressant claim.
Yes, that is a somewhat peculiar description.
The heading of the bold part speaks of classification, but the text that follows describes a period rather than a classification.
It looks like this study is a claims analysis, i.e. a study of data that were originally generated for health insurance claims.
There are dates at which a patient contact with the health care system receives a diagnosis code for MDD, and there are dates of insurance claims for antidepressant prescriptions.
The start of an episode is defined as a date with an MDD diagnosis where the patient had no diagnosis or prescription claim during the 120 days before that start date.
From that start date on, they sort through the patient data until they find a diagnosis or prescription claim that is followed by a clear 120-day gap; the episode ends 120 days after that last claim.
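The episode rule can be expressed as a short algorithm: sort all diagnosis/prescription dates, start a new episode whenever the gap since the previous event exceeds 120 days, and end each episode 120 days after its last event. A sketch with made-up claim dates (the function name and example dates are hypothetical):

```python
from datetime import date, timedelta

GAP = timedelta(days=120)

def mdd_episodes(event_dates):
    """Group diagnosis/prescription dates into episodes: an episode starts at
    a date with no event in the prior 120 days, and ends 120 days after the
    last event that is followed by a clear 120-day gap."""
    dates = sorted(event_dates)
    episodes = []
    if not dates:
        return episodes
    start = last = dates[0]
    for d in dates[1:]:
        if d - last > GAP:          # clear 120-day window: episode closed
            episodes.append((start, last + GAP))
            start = d
        last = d
    episodes.append((start, last + GAP))
    return episodes

claims = [date(2019, 1, 10), date(2019, 2, 1), date(2019, 3, 15),
          date(2020, 1, 5)]
print(mdd_episodes(claims))
```

Here the first three claims fall within 120 days of each other, so they form one episode ending 120 days after 2019-03-15, while the 2020 claim starts a second episode.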
• asked a question related to Observational Studies
Question
We are going to start a study without intervention, just electrophysiological exams.
Do We have to register it at clinical Trials?
thanks
Hello Roberto,
Regarding your question, registration of observational studies is not mandatory, but I advise you to register it anyway. You can follow the instructions in the attached link: https://bit.ly/2t130EQ
Best wishes.
• asked a question related to Observational Studies
Question
It is nowadays very popular to present research in a more "objective" way (with a few statistical analyses) rather than documenting, presenting and interpreting crucial observations made during the research.
Fortunately, in field-based studies, researchers get an opportunity to observe certain things and make notes on them. But the observations remain uninterpreted, even unpublished.
I would like your help, academicians, with the following: (1) How do I write a proper observation-based research paper? (2) What are the systematic ways to do so? (3) Whom can I contact to discuss this?
Dear Akash,
I’m not sure I entirely understood your question but if you are about to look for some guidelines for how to write a quantitative paper for a peer-reviewed journal, you are welcome to have a look at the guidelines on our SMART ACADEMICS blog at www.tressacademic.com/blog and our guide available at http://bit.ly/avoid_paper_rejection . I hope this helps a bit, best wishes, Gunther
• asked a question related to Observational Studies
Question
In this article, we have 2 groups, patients with clefts and patients without clefts. Our outcome is teeth with developmental defects "enamel defects". The authors report this as a "cross-sectional study".
In this article, we have 2 groups, patients with clefts and patients without clefts. Our outcome is teeth with developmental defects or dental anomalies. The authors report this as a "retrospective study" (case-control?)
In this article, we have 2 groups, patients with clefts and patients without clefts. Our outcome is teeth with dental anomalies or developmental defects. The authors report this as a "cohort study"
My question is why is the study design different, if in all cases we recruit patients and examine clinically / with radiography to assess the outcome.
In the past (and sometimes still), clinical researchers used the term "case-control study" to mean comparing the outcome in an exposed group to that in a "control" (i.e. unexposed) group, cross-sectionally. That is why in modern epidemiology (i.e. since about 1985) the term case-referent study is now preferred.
• asked a question related to Observational Studies
Question
Dear RG community,
I found it a bit confusing on few terms and aspect of study design.
1. From my understanding, a longitudinal study is an observational study comprising cohort studies (prospective or retrospective) and panel studies.
How about a diary study: is it considered a panel study?
How about repeated cross-sectional studies, which are also often referred to as longitudinal studies?
2. An experimental study consists of before-after (also known as pre-post) or repeated-measures interventional studies, either randomized or non-randomized. Am I right?
3. For longitudinal studies, especially prospective cohorts, most references highlight two groups (exposed vs unexposed) with similar baselines and a categorical outcome measure.
How about a single-group cohort with different baselines and a continuous outcome measure?
4. For longitudinal studies, especially cohort studies, most references talk about months to years of follow-up.
But for some psychological measures, such as stress and fatigue, there should be no problem conducting a cohort study of shorter duration, e.g. within 8 hours, am I right? For example, we measure the baseline stress level pre-work and follow up the outcome stress measure post-work, i.e. after 8 hours of continuous work. Is that appropriate?
Thank you for kind assistance.
Regards,
• asked a question related to Observational Studies
Question
Dear colleagues,
I have been struggling a lot with the Ethics Committee. At the moment I am working on observational studies, and I would like to know whether investigating suicidal ideation over a specific timeframe (the last 12 months), rather than "at the moment", would require approval by the Ethics Committee.
Does anyone know anything about it?
Probably, but it depends on the setting you are working in. For example, if you are working in a clinical or educational setting you will certainly need approval. But first of all I would escalate the issue to your own supervisor or within your own institution.
• asked a question related to Observational Studies
Question
Hi,
For example: suppose I have 20 eligible studies (both case-control studies and cross-sectional studies without any control group) coming out of a systematic search.
Can I do a meta-analysis with 10 of the studies (those that present the data of interest and are case-control) and systematically review the other 10 (those without a control group or without normally distributed data)?
Regards,
Asiful
Yes, you can, and this is methodologically sound.
A systematic review usually consists of a narrative and a statistical presentation of the data. Depending on the available data, you may end up meta-analysing even two studies out of twenty included, owing to the absence of usable data in the rest.
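For the quantitative half, the standard fixed-effect pooling is an inverse-variance weighted average of the study effects. A minimal sketch with hypothetical effect sizes (log odds ratios) and standard errors, just to show the arithmetic:

```python
import math

# Hypothetical effect estimates (log odds ratios) and standard errors
# from the subset of studies that report usable data.
effects = [0.30, 0.10, 0.45]
ses = [0.15, 0.20, 0.25]

# Fixed-effect (inverse-variance) pooling: weight each study by 1/SE^2.
weights = [1 / se**2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

print(round(pooled, 3), round(pooled_se, 3), tuple(round(v, 3) for v in ci))
```

With only a handful of studies, a random-effects model (e.g., DerSimonian-Laird) is often preferred when heterogeneity is plausible; the fixed-effect version above is the simplest starting point.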
• asked a question related to Observational Studies
Question
Hi,
Currently I am screening articles for my systematic review. Most of the articles are human retrospective cohort studies, and a couple of the articles are animal (mice) studies. If the mice experimental studies meet our interventional criteria, can we include them in the systematic review?
Thanks.
If you decide to include both types of studies because they address your research question (as mentioned previously), it may be a good idea to stratify the results by study type. How you will judge the quality of research and interpret effect sizes etc is likely to vary between animal and human studies.
• asked a question related to Observational Studies
Question
Diet drinks are being more and more preferred over energy drinks with the mindset that they are lower in caloric intake. This new research challenges the conventional concept by linking them to a higher incidence of stroke in females.
In Medscape's words: "Drinking artificially sweetened beverages is linked to increased risk for ischemic stroke, coronary heart disease, and all-cause mortality in women, new research shows. Among almost 82,000 participants in the Women's Health Initiative Observational Study, risk for fatal and nonfatal stroke was 23% higher among women who self-reported drinking the most diet beverages — two or more per day — compared to the women who consumed the least. The latter group drank none or less than one of these beverages per week."
Regards
Thanks a lot for sharing this evidence. It covers various kinds of diet drinks, soda use and sweeteners. Very thorough indeed.
Regards
• asked a question related to Observational Studies
Question
There are available guidelines for systematic reviews of RCTs (Cochrane) and of observational studies (MOOSE guidelines).
The challenge is assessing risk of bias. It would be great if Cochrane's tool for assessing risk of bias were widely available.
• asked a question related to Observational Studies
Question
Despite trying for the last three hours, I cannot find a systematic review and/or meta-analysis of observational studies in which the synthesized quantitative evidence has been evaluated with the GRADE approach to assess the confidence in the evidence or its strength. Could someone help?
Papola D, Ostuzzi G, Thabane L, Guyatt G, Barbui C. Antipsychotic drug exposure and risk of fracture: a systematic review and meta-analysis of observational studies. Int Clin Psychopharmacol. 2018 Jul;33(4):181-196.
• asked a question related to Observational Studies
Question
Hello , Researchers
I'm Ahmed, an undergraduate medical student. I have created a new research group with my friends, and we are going to start our first research project.
We are still confused about the research process; as you know, we don't know where to start. Can PubMed and other websites help us generate new ideas, and how?
I have seen many clinical research projects, but most of them were aimed at postgraduates and at people who work on research all day.
I need help and support
The first step in doing research is to find out the health problems in your community and develop a research question. For example, suppose there is a new disease in your community, or an increase in the number of patients with a specific disease such as stroke or MI. Now you have to ask a question about it: what are the causes of these increasing cases? Or: does smoking in a specific group increase MI cases? Then you develop the objectives and, based on those, design the methodology.
• asked a question related to Observational Studies
Question
I am using the Meta-analysis Of Observational Studies in Epidemiology (MOOSE) guidelines for public health. One journal is asking this: "Indicate the guideline(s) used. Checklists for reporting guidelines may be uploaded as Reporting Checklist files." So my question is: do I only have to send them the points of the MOOSE guidelines that I have followed, or do I have to elaborate on the points? Thanking you in advance
If I understand the question correctly, I would suggest the following. In general, I think it is much preferable to state precisely what you have done regarding the meta-analytic protocol you followed. Just listing the points does not indicate how you followed them specifically, nor how you justified any exclusions or derogations (e.g., changes) and so on.
I think meta-analysis is sufficiently tricky to warrant a concise explanation of the methodology (e.g., Bayesian meta-analysis) as well as of the issue you raise.
Hope this helps.
Paolo
• asked a question related to Observational Studies
Question
How many people do I need to recruit if I conduct a randomized, between-subjects pilot study using 4 different conditions for 4 types of manipulations?
Pilot study
Saunders et al. (2007) state that, prior to using a questionnaire to collect data, it should be pilot tested. They point out that the purpose of the pilot test is to refine the questionnaire so that respondents will have no problems answering the questions and there will be no problems recording the data.
Fink (2003b), as cited in Saunders et al. (2007), states that the minimum number for a pilot study is 10.
Reference
Saunders, M. N. (2007). Research Methods for Business Students (5th ed.). Pearson Education India.
• asked a question related to Observational Studies
Question
STROBE checklists seem to be used frequently. However, it might not be the most appropriate tool as the original purpose was to serve as a guideline for reporting observational research and not specifically as a methodological quality-assessment tool (da Costa et al., 2011).
Thanks in advance for any recommendations.
da Costa, B. R., Cevallos, M., Altman, D. G., Rutjes, A. W., & Egger, M. (2011). Uses and misuses of the STROBE statement: bibliographic study. BMJ open, 1(1), e000048.
Dear Carina and others,
I sometimes see people confused about the difference between quality assessment tools used during (1) the planning and conduct of a systematic review and (2) the assessment of the quality of the individual articles selected for the review. Following your question, it would also be good to add the question "WHEN?". For example, well-known tools like PRISMA, STROBE and MOOSE provide standardized guidance for carrying out systematic reviews, including constructing a protocol, testing for bias and heterogeneity, and other aspects of the review process. However, the Newcastle-Ottawa Scale (NOS) checklist (cohort and case-control studies), the Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies, etc., are tools used to assess and rate the quality of the individual articles included in a systematic review or meta-analysis.
List of methodological assessment tools,  https://www.ncbi.nlm.nih.gov/pubmed/25594108
• asked a question related to Observational Studies
Question
As we know, variation exists between the dimensions and morphology of the cornea in different strains of mice. Even susceptibility and resistance to certain groups of pathogens has been observed (e.g. C57BL/6 is susceptible to Pseudomonas aeruginosa and resistant to Staphylococcus aureus). Considering the different factors (corneal diameter, thickness, cost effectiveness, handling, etc.), which mouse strain do you prefer for studying keratitis?
I think all of these factors are important; it is better to consider the different factors (corneal diameter, thickness, cost effectiveness, handling, etc.) in our study.
• asked a question related to Observational Studies
Question
I am planning a systematic review of observational studies. The aim is to compare the outcomes of an intervention across different stages of the same disease, so the included population and comparison groups have different stages, which will obviously affect the outcomes. Can I assess the outcomes of the intervention in the presence of such selection bias?
In my view, you can assess the outcomes if the intervention approach remains the same. Comparing the similarities and differences in the outcomes across the changing stages will help in explaining the generalization or specification of the outcomes of the intervention.
• asked a question related to Observational Studies
Question
research question: what is the role of gender in people's preference of reality-based show and fiction-based show?
I am thinking about a non-experimental design because we can't randomly assign gender to people. So I will have a group of males and a group of females, give both groups both types of show, and compare the preferences. The two groups will be matched on other demographic characteristics to reduce the influence of confounding variables.
Another option I was given is a 2×2 factorial design, which is an experimental design: basically, create two groups of males and two groups of females and show them the two types of show separately.
Here is my question: while we can randomly assign males to the two male groups and females to the two female groups, the male and female groups essentially come from two separate populations and are not equivalent. Would a 2×2 factorial design really work in this case? How should we analyze the results?
Thank you!
Thank you for the answers! My only concern was that the male and female groups are not comparable because we draw them from two separate populations. I guess we can use statistical methods to control for other confounding variables?
I also wonder to what extent this kind of experimental design can be applied to other observational research. For example, suppose we want to examine whether being employed or unemployed has an impact on people's health. If we only think about one population, since we can't randomly assign jobs to people, we have to use a quasi-experimental design and rely on existing groups. However, if we treat employed people as one population and unemployed people as another, we can get two non-equivalent groups and ask them to take the same fitness test.
The textbook says that random assignment is impossible in observational research, and this makes me think experimental designs won't work in this kind of setting. But it looks like if we draw treatment and comparison groups from two populations, we can still make things work? I am getting confused on this part. Can someone give me an example of observational research where we have to use a quasi-experimental design?
• asked a question related to Observational Studies
Question
Hi, I have a set of Catphan and ACR CT phantom images at different mAs values (dose levels). I will prepare a 4-AFC test for radiologists: they must detect a 6 mm sphere of the Catphan in a four-image test. The same test must be done by a model observer (CHO or similar). I will acquire phantom images from the different CT scanners in my hospital and evaluate the images with: 1) the traditional medical physicist approach (Catphan or ACR contrast, noise, MTF, ...); 2) dose indices (CTDI and DLP); 3) a model observer index, in order to obtain dose reduction on CT exams and a comparison between different CT scanners/centers. I need software (a macro, MATLAB, or similar) to do model observer studies.
Check out ImageIQ. MATLAB and other associated software is not objective or consistently reproducible for such a comparison, especially if you plan to do it manually. Algorithm-integrated software assistance of some sort would be helpful.
• asked a question related to Observational Studies
Question
Hello, I'd like to know what a valid tool is for appraising risk of bias in observational studies, mainly cohort and case-control studies, where the exposure is not an intervention (e.g. cigarette smoke, genetic aberrations). Cochrane recommends ACROBAT-NRSI; however, it is only suitable when the exposure is an intervention. Thank you.
Several tools for assessing the methodological quality of observational studies can be identified; Deeks et al. identified 182 in 2003. Of these, the Cochrane Handbook (chapter 13.5.2.3, Tools for assessing methodological quality or risk of bias in non-randomized studies) recommends the Downs and Black instrument and the Newcastle-Ottawa Scale (NOS). However, recent studies (Hartling et al. 2013; Stang 2010) have questioned the reliability and validity of the NOS.
The Centre for Reviews and Dissemination (CRD) has set out criteria for evaluating observational studies in CRD Report 4 – Study quality assessment (separated into case-control, cohort and case series), but their criteria for cohort studies are missing an evaluation of the validity and reliability of the outcome. One way of dealing with that is the method in Øiestad et al. 2015, where the CRD criteria are supplemented with one criterion from Downs and Black about the outcome measure. Another possibility is to use the Scottish Intercollegiate Guidelines Network (SIGN) checklist. The STROBE checklist is a very useful tool for evaluating the reporting of observational studies.
• asked a question related to Observational Studies
Question
I am currently doing a systematic review of observational studies. Narrative synthesis has been employed; meta-analysis was not feasible.
Yes, you could. GRADE provides a systematic process for evaluating the quality of any RCT or observational study on five domains. For a single study there is no need to pool effect sizes, but you still need to assess the quality of the evidence.
Best regards,
• asked a question related to Observational Studies
Question
For my observation interview I will use a sample of 4 Italian people and 4 English people. However, in order to carry out the observation interview I have to provide stimuli. The best option I have is to show a video or some images so that I will be able to provoke some reactions and then analyze the results. Conditions must be the same for both the Italian and the English group. Thank you.
• asked a question related to Observational Studies
Question
What is an appropriate method for observing selfie upload frequency on social media?
Interesting question. I would try empirical research using my own or other people's (i.e. friends') social media. Pick a time/date window and look for selfies posted within that window. If you start with your friends, then your friends' friends, and so on, you should get a large empirical study group. Tally each social media platform separately (Facebook, Snapchat, Instagram), then total them all. Good luck.
• asked a question related to Observational Studies
Question
For example, if we carry out an event and want to see the level of participation of each individual person, how can we measure it? Is making a video a good option? Are there any other tools to measure the level of participation a person is offering?
Regarding public participation in urban and spatial planning I propose my forthcoming book "Planning, Participation, and Knowledge: Public Participation as a Tool for Integrating Local Knowledge into Spatial Planning" by Springer (200 pages).
This book provides a state of the art new approach to participatory planning, and generates innovative thought in planning theory and knowledge study. It draws on the rich repertoire of public participation practices that have developed globally over the last 50 years, and investigates the following questions: Which participatory practices most effectively capture residents’ genuine spatial needs, perceptions and desires? And how can these be incorporated into actual plans? These questions are treated in the book through the introduction of a new conceptual framework for participatory planning, one which redefines concepts that have been taken for granted for too long: those of “public participation” and “local knowledge”. The book is based on an empirical comparative examination of the effectiveness of various participatory processes, and it proposes practical solutions for public participation through two new instruments: the Practices Evaluation Tool, and the Participatory Methods Ladder. These instruments calibrate participation methods according to certain criteria, in order to improve their ability to extract local knowledge and incorporate it into planning deliverables. These new instruments correspond to and elaborate on Arnstein’s ladder - the 1969 theoretical landmark for participatory planning. Both academics and practitioners in the area of urban and regional planning will find this book to be an invaluable resource, given the way it develops both theoretical and practical cutting-edge outcomes.
• asked a question related to Observational Studies
Question
I am currently writing a systematic review, and the majority, if not all, of my studies are descriptive. I looked for quality assessment tools and found that the
QAT is widely used: http://www.nccmt.ca/registry/view/eng/14.html — but it is geared toward intervention rather than descriptive studies.
I also came across CIRCUM, which seems appropriate, but I haven't seen any review that used it before: http://circum.com/index.cgi?en:appr
Do you think I should be using QAT? what other tools would you suggest?
Thank you
Below are tools that I have previously used, are easy to use, and some of them are recommended by Cochrane:
For cross-sectional/survey studies: the NIH Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies: http://www.nhlbi.nih.gov/health-pro/guidelines/in-develop/cardiovascular-risk-reduction/tools/cohort
For intervention studies: the EPHPP tool http://www.ephpp.ca/tools.html (first two files)
For risk of bias assessment for interventions: the EPOC criteria https://www.biomedcentral.com/content/supplementary/2046-4053-3-103-S2.pdf
• asked a question related to Observational Studies
Question
Most observational studies provide adjusted estimates in the form of ORs (with 95% CIs); however, some of them use HRs instead, which incorporate the notion of time to event. Is there a way to transform one into the other?
There are some effect sizes that are similar, if not exactly the same, and judgments are required as to whether it is acceptable to combine them. One example is odds ratios and risk ratios. When the outcome is rare, or in conducting a nested case control study then these are approximately equal and can readily be combined. As the event gets more common the two diverge and should not be combined. Other indices that are similar to risk ratios are hazard ratios and rate ratios.
Some researchers decide these are similar enough to combine; others do not. The judgment of the meta-analyst in the context of the aims of the meta-analysis will be required to make such decisions on a case by case basis.
I suggest converting the OR to a risk ratio (RR); with some care you can then assess the similarity of the HR and the RR. The HR deals with time, so I wouldn't cross-compare ORs and HRs directly. You can also consider the type of study when choosing effect measures.
RR = OR / (1 − risk0 + risk0 × OR)
where risk0 is the risk of having a positive outcome in the control or unexposed group. The equation can be used for both the unadjusted and the adjusted odds ratio.
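The conversion in the answer above (often attributed to Zhang and Yu, 1998) is simple to apply; a minimal Python sketch, with illustrative values:

```python
def or_to_rr(odds_ratio, risk0):
    """Approximate a risk ratio from an odds ratio.

    risk0 is the risk of the outcome in the control/unexposed group.
    """
    return odds_ratio / (1 - risk0 + risk0 * odds_ratio)

# With a rare outcome the OR closely approximates the RR:
print(or_to_rr(2.0, 0.01))  # ~1.98
# With a common outcome the two diverge:
print(or_to_rr(2.0, 0.30))  # ~1.54
```

As noted above, the same conversion can be applied to an adjusted OR, but the baseline risk must come from the study itself.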
If the rate ratio is the ratio of two rates calculated as  cases/person-time, then that is actually a hazard ratio estimate as well.
When in doubt, it's probably best to split up the studies and look at each sub-group.
• asked a question related to Observational Studies
Question
How can I overcome the Hawthorne effect during an observational study?
A simple strategy some qualitative investigators who use observational methods recommend is discarding the first time interval of observation (to allow the subjects to get used to being observed) and use subsequent observations for your actual data analyses.
• asked a question related to Observational Studies
Question
Are there studies comparing structured with unstructured observation methods?
Hi,
For full details on the characteristics, use, meaning, strengths and limitations of structured and unstructured methods, you can read everything in the following textbook:
Waltz C., Strickland O., Lenz E. (2010, 4th ed.). Measurement in Nursing and Health Research. Springer, New York.
Good luck
• asked a question related to Observational Studies
Question
I am unsure whether I should use Downs and Black to evaluate all studies, or alternatively use Jadad for RCTs and Downs and Black for non-RCTs and observational studies, or even the Cochrane RoB tool instead?
I advise you to use the Cochrane RoB tool for RCTs, ACROBAT-NRSI for non-RCTs, and the Newcastle-Ottawa Scale (NOS) for observational studies.
The Jadad score is no longer recommended.
• asked a question related to Observational Studies
Question
MOOSE: Meta-analysis of observational studies in epidemiology: a proposal for reporting. Meta-analysis Of Observational Studies in Epidemiology (MOOSE) group (2000)
PRISMA: Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement (2009)
Besides using those guidelines for reporting a meta-analysis, you also need a questionnaire to evaluate each article in your meta-analysis. There are multiple questionnaires that could be used, and you should choose an appropriate one for your study. I am sorry that I cannot give you more information, because I do not know what kinds of articles are in your study.
I used the 9-star Newcastle-Ottawa Scale in my study. You can see the article via the link:
Hope this helps!
• asked a question related to Observational Studies
Question
My idea is to do research on the subject and run an observational study on my own, with a few participants due to the limited resources. So I would need to set some questions and decide which characteristics would allow me to conduct the study.
Dear Elina, the tasks they perform are, among others, sit-to-stand transfers, bed-to-chair transfers, repositioning in bed, etc.
What I'm going to evaluate is the interaction process between carers and persons, focusing on every communicative utterance (verbal, paraverbal and non-verbal). Many thanks for your interest. I'll keep in touch. Sorry for the delayed answer.
• asked a question related to Observational Studies
Question
Dear All,
I am conducting a systematic review to assess determinants, levels of uptake of IPTp-SP and associated pregnancy outcomes in sub-Saharan Africa in order to evaluate implication of the current World Health Organization (WHO) policy on Intermittent Preventive Treatment of malaria during pregnancy with sulphadoxine-pyrimethamine (IPTp-SP).
All the included studies are observational, e.g. cross-sectional, prospective cohort and mixed-methods. I have gone through a couple of quality appraisal tools, but they almost all target RCTs.
Therefore, I would like to ask for your assistance in identifying suitable tools for these kind of studies.
Steven
Hi Steve, you can use Newcastle-Ottawa Scale for assessing the quality of your observational studies in systematic review and meta-analysis. Hope you find this information helpful.
Best regards,
John
• asked a question related to Observational Studies
Question
The observations were collected to assess the effect of a group art session for people living with dementia.
T1: participants were observed in a group, but before the 12-week programme started (control session).
T2: individuals were observed at least once, but sometimes over two sessions, within sessions 1-4; this depended on their attendance.
T3: collected in sessions 11 and/or 12; again this varies, some observed once, others over two sessions.
FYI, a similar project looked at the responses of people living with dementia attending the National Gallery in Sydney.
• asked a question related to Observational Studies
Question
It is well known that in studies of rare events, the reported risk ratios (RRs) are quite similar to the odds ratios (ORs), and the generally accepted threshold for the term "rare" is less than 5%. In this context, my primary question is: can I combine RRs and ORs to conduct a meta-analysis of observational studies that explored a rare event (prevalence < 2%)?
If so, can I transform the RRs and ORs to Cohen's d (according to Borenstein et al.) in order to perform the meta-analysis with effect sizes?
Dear Andres,
Check these papers out:
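For reference, the conversion the question mentions (Borenstein et al., Introduction to Meta-Analysis) maps the log odds ratio onto Cohen's d via a logistic-distribution argument; a minimal sketch, with an illustrative OR:

```python
import math

def log_or_to_d(log_or):
    """Convert a log odds ratio to Cohen's d (Borenstein et al.)."""
    return log_or * math.sqrt(3) / math.pi

def var_log_or_to_var_d(var_log_or):
    """Convert the variance of the log OR to the variance of d."""
    return var_log_or * 3 / math.pi ** 2

# e.g. a study reporting OR = 2.5:
d = log_or_to_d(math.log(2.5))
print(round(d, 3))  # ~0.505
```

RRs would first need converting to ORs (using a baseline risk) before applying this, which is one reason to pool on a ratio scale instead when possible.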
• asked a question related to Observational Studies
Question
My PhD is a design study of a visual analytics system that visualises text cohesion, designed to help editors make documents more coherent. I am in the process of analysing and writing up the findings of my first user evaluation study (a ‘lab’ one, rather than an ‘in-the-wild’ one, the latter of which is yet to come). My background is as a domain expert (professional editor), so I have minimal experience with HCI methods.
I have the data, in the form of transcripts of sessions where I sat with domain-expert users and had them play with the tool (using their own data as well as several other example sets of data) and discuss their impressions and thoughts. I already know what phenomena I find interesting, but I can't seem to just write the chapter--I keep reorganising and renaming and remixing my structure. I can't seem to get beyond that stage of structuring and restructuring the chapter. I think this is happening because I want to assure myself that my observations are legitimate and relevant, and that they are elicited and expressed in some useful and systematic way. I don't know what the norms are in the way this kind of research is written up, or how to make best use of the data. As I said, I already know what phenomena I personally find interesting in the data, but I haven’t used any particular theory or process to identify those things. I’ve pretty much just used my knowledge/intuition. Is this OK? And if so, how do I organise that? It's just a series of observations right now. For example, should I organise them:
1. by what component of the designed tool I think they relate to (cohesion theory, LSA rendering of cohesion, visualisation, work practices in the domain, individual differences in users?)?
2. By what body of theory I want to use to explain why they happened (Affordances for interface design problems, Gestalt for visual perception problems, lack of connection with linguistic theory in writing/composition instruction for users' difficulties in understanding the theory of cohesion, etc)?
3. Or just put the observed phenomena in there one by one, as is ('users had unexpected ideas about what the system was for', 'users took a long time to learn how to use the system', 'some users found the lack of objective standard of cohesion challenging', etc), and then address the possible reasons for why these phenomena might have happened within the body of each of those sections (because, after all, this part will only be speculation, given that I won't be isolating variables and testing any of these theories--I will just be suggesting them as possible leads for further studies)?
Each of these options has a limitation. I feel that number one, organising by component, is a bit difficult and presumptuous. I don't necessarily know that a user's behaviour is caused by a problem with the visualisation design or by the theory the visualisation is trying to communicate, or an unintuitive interface with which to interact with the visualisation, or a lack of familiarity on the part of the user with the sample text, or the user's individual problems with computers/technology in general, or a limitation in the way I explained how the system works, or an incompatibility with their practice as an editor, or... etc etc. It could be one of those things or several of those things or none of those things, and I won't have enough in the data to prove (or sometimes even guess) which. This same problem plagues the second option--to organise by theory. That presumes that I know what caused the behaviour.
In fact, now that I have typed this out, it seems most sensible to use the third option--to just list out what I noticed and not try to organise it in any way. This to me (and probably to others) looks informal and underprocessed, like undercooked research. It's also just a bit disorganised.
I think looking at other similar theses will help. I have had difficulty locating good examples of design studies with qualitative user evaluations to show me how to organise the information and get a feel for what counts as a research contribution. Even if I find something, it's hard to know how good an example it is (as we all know, some theses scrape in despite major flaws, and others are exemplary).
Can anyone offer some advice, or point me to some good examples? Much appreciated.
Caroline, there may be some value in comparing your results with those of another objective reviewer of the transcripts. From that you can compare overlap and get an inter-rater reliability measure that may provide some assurance to your committee that there are others who at least somewhat agree with your list. If there is no agreement, you may want to consider what that other objective reviewer of the transcripts is saying. They should, in advance of reviewing, have some level of expertise, or you may consider training them (or having someone else train them) until they are competent to evaluate the transcripts.
In terms of other researchers who have some related experience, I would encourage you to consider seeing research of Jesse Crosson (on ResearchGate; was at New Jersey School of Medicine and Dentistry, now at Princeton Health and has experience with Medical Informatics).
You may also see the research articles from Mihaela Vorvoreanu (also on ResearchGate).  Both have specialized for a number of years in more qualitative methods related to human-computer interaction.
Their research (and the research of their students or co-authors) will be considered credible in view of your committee members.  If additional questions arise, you may contact me directly for any follow up.
• asked a question related to Observational Studies
Question
My research proposes a quantitative approach using data from previous longitudinal observational studies to answer the following questions.
(sample size is approximately 300 participants)
1. What is the prevalence of a disease?
2. What clinical signs and symptoms present in patients are associated with this disease?
3. What risk factors were present in those who were diagnosed with this disease?
I don't get it. As I see it, there is no need to perform any test or calculate any power to answer the three questions.
Q3 is a bit strange. A good diagnosis should depend on risk factors. If this is the case, the presence of the risk factors is not independent of the diagnosis, and the answer to such a question will be quite meaningless.
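For the prevalence question specifically, a descriptive estimate with a confidence interval is all that is needed; a sketch using the Wilson score interval (the counts here are illustrative, not from the study):

```python
import math

def wilson_ci(events, n, z=1.96):
    """95% Wilson score confidence interval for a proportion."""
    p = events / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

# e.g. 45 cases among 300 participants:
lo, hi = wilson_ci(45, 300)
print(f"prevalence {45 / 300:.1%}, 95% CI {lo:.3f}-{hi:.3f}")
# → prevalence 15.0%, 95% CI 0.114-0.195
```

With n ≈ 300 the interval width, not statistical power, is the relevant planning quantity for a prevalence estimate.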
• asked a question related to Observational Studies
Question
Can someone guide me in the direction of an elaborate paper or literature on a detailed step by step guide of how to conduct a meta analysis, especially if it is for observational studies.
Mine is a question, please... I hope it is polite to ask it here. Can I use the same procedure given for meta-analysis of quantitative studies? I need to be sure before paying for the journal. Thanks.
• asked a question related to Observational Studies
Question
Structural equation modeling (SEM) is a statistical analysis tool of social research; could it be used with, or adapted to, data from observations?
Indeed, it can! One of the great things about SEM models is the ability to model the error structures/components. It is a very flexible tool and can be used in a variety of contexts. I particularly like the ability to diagram the models.
That said, there is a bit of a learning curve to it... Once you get past that, you'll have a field day!
Ariel
• asked a question related to Observational Studies
Question
Case Control Ratio 1:4
The question can be taken as how to test for statistical significance in an inferential design. In such a case, it all depends on whether cases and controls are matched or not; moreover, it depends on the design you are using to test your hypothesis. If cases and controls are not matched, the chi-square test should be suitable in cohort studies (if the main hypothesis is being tested); in case-control studies the Mantel-Haenszel chi-square is generally better, and the number of controls per case (rarely exceeding three, actually!) does not affect this testing procedure. If they are matched, things are quite different: a sort of McNemar's test is to be used, as in the Mantel-Haenszel procedure. The 1:4 ratio of cases to controls does not influence the results, provided you are sure that the number of controls (4) is fixed for every case. Otherwise, as is the case when the number of controls per case varies along the series, Woolf's method would be preferable. But I think this is becoming too much this time...
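For 1:1 matched pairs, the McNemar-type test mentioned above uses only the discordant pairs; a minimal sketch with illustrative counts (a 1:4 matched design needs the conditional extension instead):

```python
def mcnemar_chi2(b, c):
    """McNemar's chi-square from the two discordant pair counts:

    b = pairs with case exposed / control unexposed,
    c = pairs with case unexposed / control exposed.
    """
    return (b - c) ** 2 / (b + c)

# e.g. 25 vs 10 discordant pairs; compare against chi-square with
# 1 df (critical value 3.84 at alpha = 0.05):
chi2 = mcnemar_chi2(25, 10)
print(round(chi2, 2))  # ~6.43
```

The concordant pairs carry no information about the exposure-outcome association in a matched design, which is why they drop out of the statistic.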
• asked a question related to Observational Studies
Question
Do we need ethics committee approval and CTRI registration for observational studies (simplified PMS studies) of Ayurvedic medicines?
Yes, EC approval is required, because you are collecting patient-related information through an investigation. CTRI registration will not be needed, as the study is just observational in nature.
• asked a question related to Observational Studies
Question
I'm looking for an appropriate quality appraisal tool (QAT) to assess observational studies that will be included in a systematic review. The QAT will not be used to exclude studies with low quality scores, but it will be reported in the review. STROBE is inappropriate as it was not designed for the purpose of assessing methodological quality in a systematic review. Does anyone have any suggestions for a robust/widely accepted QAT in public health?
Hi thanks Pieter and Medhi for these suggestions, much appreciated. Apparently STROBE should only be used in reporting of observational studies, not for appraising quality in SRs (attached). I met the librarian at LSHTM yesterday, she said the jury is still out on tools and checklists for this purpose and recommended not doing scores or ranking of studies (at least in the actual review). She suggested making note of the study's limitations and including these in the body of the review or in the table summarising the studies. So I'll probably go with this.
Poon, thanks for the CASP link - I was actually using an adapted version of CASP by one of my advisors who does SRs full time (also, the librarian said this is most suitable if I do use one, as it poses good questions about a study's quality), so I will probably go through my adapted CASP checklist for my info only, and report the study limitations qualitatively in the summary table. Thank you again everyone for the suggestions!
• asked a question related to Observational Studies
Question
Experience in systematic review of observational studies.
Please see Quality Score for cohort studies: Allam MF, Campbell MJ, Hofman A, Del Castillo AS, Fernández-Crehuet Navajas R. Smoking and Parkinson's disease: systematic review of prospective studies. Mov Disord. 2004 Jun;19(6):614-21. Review. PubMed PMID: 15197698.
• asked a question related to Observational Studies
Question
A case study is useful for understanding "why" and "how" problems. The most common methods are the single case study and the comparative case study. Now I face a problem caused by data unavailability: I have collected relatively rich material on one case, which is not enough to do a deep, longitudinal case study, plus some related cases that are less detailed. My strategy: on the one hand, I will continue to collect data; on the other hand, I will try to find a more appropriate method.
I wonder whether there is a case study method based on a core case and peripheral (subsidiary) cases.
Thank you!
I broadly agree with the above three answers, but there can be another dimension to your question. If your research question is not answered by a single case, then you can use a comparative case study or a multiple case study (whichever fits your research design after revisions). Moreover, for generalization from a core case to other cases, see Tsang (2013, Academy of Management Review), Piekkari et al. (2008, the same journal), and Stake (1980), among others.
• asked a question related to Observational Studies
Question
We are using the STROBE and CARE checklists to assess data quality for observational studies and case studies respectively.  How do we then 'convert' these ticks on the checklists into high, medium or low quality?
STROBE is not a rating system, but a checklist to assist authors to publish their observational study by addressing the essential ingredients. It is not really suitable for assessing study quality, but only the completeness of reporting.
• asked a question related to Observational Studies
Question
Hey
I am about to plan a systematic review with meta-analysis of observational studies and want to find out whether age is an exposure variable for a clinical outcome. My outcome is dichotomous: the occurrence of a clinical event or not.
Meanwhile, some observational studies will report age as a continuous variable and some will dichotomise age at 50 years. Is it alright to include ORs for the outcome variable from studies using age both as a continuous and as a dichotomous exposure variable?
Best regards,
Daniel
Darren - nicely put.  If you can put everything on the same metric, do so, and since continuous variables usually retain more information than the same variable categorized, that would be the preferred approach if possible.  Separate analyses of the same variable at different levels of aggregation should be a last resort, and you are right that if you get different answers you are in a bit of an interpretive pickle.  (On the other hand, if you get broadly compatible answers, you've answered the original research question - age probably has an effect that is somewhere in the range of X1 - X2.)
• asked a question related to Observational Studies
Question
I have completed an observational study with six nominal variables. We have a fully crossed design with 2 coders rating all 162 subjects. I'd like advice on the best method for setting up an SPSS database to compute inter-rater reliability for this data please?
If you just want to run basic IRR statistics such as Cohen's Kappa, you can do this by running a Cross-Tabs analysis via the "Descriptives" option under the "Analyze" menu in SPSS. However, statistics such as Kappa can be biased in many circumstances, so you might want to try something more robust like Krippendorff's alpha. Andrew Hayes has developed a macro ("Kalpha") to compute Krippendorff's alpha in SPSS - you can find it at this link if you scroll down: http://www.afhayes.com/spss-sas-and-mplus-macros-and-code.html
Also, a helpful summary of several considerations going into IRR calculation (along with an overview of Krippendorff's alpha) can be found in this article: http://www.afhayes.com/public/cmm2007.pdf
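If SPSS turns out to be limiting, the basic two-rater Cohen's kappa is also easy to compute directly; a minimal Python sketch with illustrative nominal ratings (it does not replace the more robust Krippendorff's alpha mentioned above):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters assigning nominal categories."""
    n = len(rater1)
    # Observed proportion of agreement:
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance-expected agreement from each rater's marginal frequencies:
    freq1, freq2 = Counter(rater1), Counter(rater2)
    expected = sum(freq1[k] * freq2.get(k, 0) for k in freq1) / n ** 2
    return (observed - expected) / (1 - expected)

r1 = ["a", "a", "b", "b", "c", "c", "a", "b"]
r2 = ["a", "a", "b", "c", "c", "c", "a", "b"]
print(round(cohens_kappa(r1, r2), 3))  # → 0.814
```

For six variables, kappa would be computed once per variable across the 162 subjects.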
• asked a question related to Observational Studies
Question
Which tools can be used for assessing the quality of individual studies included in a systematic review (for example, a cross-sectional survey)?
Dear Ramesh
For systematic reviews and meta-analyses, the PRISMA guidelines are better. In relation to scoring interpretation, I fear that the answer is "no".
Kind regards,
• asked a question related to Observational Studies
Question
The trial participation rate can give additional information on patient representativeness. No consensus on this indicator has been published, since there seems to be no ideal rate. However, some say that, by convention, a participation rate lower than 80% may increase the risk of selection bias.
Vinay gives a good answer with regards to surveys. In regards to drawing causal inference from intervention studies, my response is "it depends". It depends on whether you are estimating effects on "as treated" versus "intent to treat". It depends on the treatment effect estimator you will use (average treatment effects, or average treatment effects on the treated). It depends on the outcome variable, and whether censoring is an option.
So there are many angles here and the answer is not straight-forward. If you can provide more specifics about your study, I am sure we can provide more specific responses.
• asked a question related to Observational Studies
Question
In Meta-analyses: Can standardized ratios (SMRs/ SIRs) be pooled with ORs or RRs (not standardised)? Has anyone done this? any supporting reference?
Best wishes, Jonas Ludvigsson
You can combine anything that's trying to estimate the same thing if you have the estimate and standard error, using the inverse-variance method. The question is whether it makes sense to pool them. But I guess that's what you're asking? It's not just the way the estimates are derived, it's the different study designs that lead to them. So you may have RR and OR, then assuming the OR is a good approximation to the RR in your case, you can theoretically combine them. But we would usually stratify the forest plot and provide different pooled estimates, because the case-control studies are more prone to bias, and this is design heterogeneity. But I wouldn't want to combine cohort studies (leading to RR) with case-control studies (leading to OR) if I could help it. It's presumably the same for the sort of studies leading to SMR and SIR. Particularly if one is measuring mortality, the other incidence. Does it make sense to combine these?
I can't remember if I've combined SMR with RR, in real life. Despite my reservations (above) I think I would still consider it on a case-by-case basis and be careful to justify it in the report if I did. Of course, there will be plenty of heterogeneity in that the SMR will probably be age-sex standardized, but the RR will probably be adjusted for other things too, so you'll have plenty to explore. Good luck.
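To make the inverse-variance method mentioned above concrete, here is a small Python sketch that pools ratio estimates (ORs, RRs, or SMRs, assuming they estimate the same quantity) on the log scale. The study numbers and CIs are invented for illustration only.

```python
import math

def pool_inverse_variance(estimates, std_errors):
    """Fixed-effect pooling of log-scale effect estimates (log OR/RR/SMR)
    by inverse-variance weighting."""
    weights = [1 / se**2 for se in std_errors]
    pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Three hypothetical studies reporting ratios with 95% CIs
ratios = [1.8, 2.1, 1.5]
ci_low = [1.2, 1.4, 1.0]
ci_high = [2.7, 3.15, 2.25]

log_est = [math.log(r) for r in ratios]
# SE recovered from a 95% CI: (log(upper) - log(lower)) / (2 * 1.96)
log_se = [(math.log(u) - math.log(l)) / (2 * 1.96)
          for l, u in zip(ci_low, ci_high)]

pooled, se = pool_inverse_variance(log_est, log_se)
print(f"pooled ratio = {math.exp(pooled):.2f}, "
      f"95% CI {math.exp(pooled - 1.96 * se):.2f} "
      f"to {math.exp(pooled + 1.96 * se):.2f}")
```

As the answer above stresses, the mechanics are the easy part; whether pooling across study designs makes sense (and whether to stratify the forest plot instead) is the real question.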
• asked a question related to Observational Studies
Question
Suppose I have four methods A, B, C, and D, and I want to run a longitudinal study to measure participants' performance with each method.
I chose a within-subject design because I have only a few volunteers. Since I also want to see how performance changes with practice, the study will have ten sessions. In every session participants will execute some task using each of the four methods.
I have read about counter-balancing using a latin square to avoid any interference effect between the methods. However, as far as I understood, this approach seems to be used in cases where you have only one session. For example, consider the 4x4 latin square:
A B C D
B C D A
C D A B
D A B C
In a single-session study, participant 1 would use the methods in the order defined in row 1, participant 2 in row 2, ..., participant 5 in row 1 and so on (the number of participants must be a multiple of 4).
In the example, interference is then assessed by analyzing the effect of "group" (i.e. each order of the methods), a non-significant effect means no interference.
What about the second, third, ... tenth session? Will the order be the same in every session for each participant? Keeping the same order means that participant 1 will always use method D after he or she used methods A, B, and C, therefore performance could be poor with D due to tiredness.
My question is: how to change the order of the methods from one session to the next to avoid interference between conditions?
Can I assume that you are randomly assigning subjects to sessions? If so, it seems that you can randomize the order ABCD for each subject. This randomization approach will ensure that if there is interference, it will be random as well.
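The randomization approach suggested above can be sketched in a few lines of Python; the method names and session count come from the question, and the seed is just for reproducibility. Each session gets an independently shuffled order, so no method is systematically tied to the tiring final position.

```python
import random

def session_orders(methods, n_sessions, seed=None):
    """Independently randomize the order of conditions for each session,
    so any interference/fatigue effect is itself randomized."""
    rng = random.Random(seed)
    orders = []
    for _ in range(n_sessions):
        order = methods[:]          # copy so the input list is untouched
        rng.shuffle(order)
        orders.append(order)
    return orders

for i, order in enumerate(session_orders(["A", "B", "C", "D"], 10, seed=1), 1):
    print(f"session {i:2d}: {' '.join(order)}")
```

Generating one such schedule per participant (with different seeds) gives the fully randomized design the answer describes; a Latin square per session would be the stricter, balanced alternative.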
• asked a question related to Observational Studies
Question
I'm interested in the phenomenon of oscillations appearing when the kernel and f(t) are non-periodic functions. I observed this phenomenon while studying the behaviour of solutions of difference equations of Volterra type, and I'd like to obtain a description of it through some theoretical continuous-time model.
Dear Igor; I think if you try enough you can transform the Bessel differential equation, which is known to have oscillatory solutions (i.e., Hankel functions), into the Volterra linear integral format. First try to find the Green's function such that it satisfies the boundary condition G(r-r', t-t') = 0 for r' on S(BC), and the initial condition that G should go to zero everywhere in the domain of interest, except at the point r, where it should go to infinity, represented by a generalized function; this is the point-source solution of the differential equation. In the case of a parabolic DE, one has:
G(r-r', t-t') = 8^{-1} [π κ (t-t')]^{-3/2} exp[ -(r-r')^2 / (4κ(t-t')) ]
Note: The Green's functions of many DEs that are popular in science and engineering are known in the literature. In finite element or boundary element methods for the numerical solution of DEs, we employ the integral-equation format to convert them into matrix form as a final step for machine computation.
Best Luck , New Year Greetings.
Tarık
Note: Elements of Partial Differential Equations by Ian N  SNEDDON.
• asked a question related to Observational Studies
Question
I am going to conduct a prospective observational study on the attribution of dispositional factors and rehab interventions to the vocational outcomes of people with mental illness. From literature review, there are about 3 to 4 contributing factors and 2 to 3 dispositional factors (confounders). I've read about the "10 cases for each predictor" rule on the internet. Is it valid? Any software or program can help in calculating the minimal sample size?
Dear Gary,
The "10 cases for each predictor" rule was originally meant for linear regression analysis; in my experience it works for logistic regression too. A method to estimate sample size in logistic regression analysis is given here:
Sample size considerations
Sample size calculation for logistic regression is a complex problem, but based on the work of Peduzzi et al. (1996) the following guideline for a minimum number of cases to include in your study can be suggested.
Let p be the smallest of the proportions of negative or positive cases in the population and k the number of covariates (the number of independent variables), then the minimum number of cases to include is:
N = 10 k / p
For example: you have 3 covariates to include in the model and the proportion of positive cases in the population is 0.20 (20%). The minimum number of cases required is
N = 10 x 3 / 0.20 = 150
If the resulting number is less than 100 you should increase it to 100 as suggested by Long (1997).
References
Long JS (1997) Regression Models for categorical and limited dependent variables. Thousand Oaks, CA: Sage Publications.
Pampel FC (2000) Logistic regression: A primer. Sage University Papers Series on Quantitative Applications in the Social Sciences, 07-132. Thousand Oaks, CA: Sage Publications.
Peduzzi P, Concato J, Kemper E, Holford TR, Feinstein AR (1996) A simulation study of the number of events per variable in logistic regression analysis. Journal of Clinical Epidemiology 49:1373-1379. [Abstract]
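The Peduzzi et al. guideline above (N = 10k/p, with a floor of 100 per Long 1997) reduces to a one-liner; the Python sketch below is just that calculation, with a function name of my own invention.

```python
import math

def logistic_min_n(k, p, floor=100):
    """Minimum sample size for logistic regression (Peduzzi et al. 1996):
    k = number of covariates, p = smaller of the outcome proportions.
    Raised to `floor` per Long (1997) if the result is below it."""
    return max(math.ceil(10 * k / p), floor)

print(logistic_min_n(3, 0.20))  # → 150, the worked example above
print(logistic_min_n(2, 0.50))  # 10*2/0.5 = 40, raised to the floor of 100
```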
• asked a question related to Observational Studies
Question
"Although we defined high quality studies as those that scored full marks on the Newcastle-Ottawa scale, many of these high and medium quality studies probably did not control for all possible confounders. Although we restricted positive cannabis results to drivers that showed the presence of tetrahydrocannabinol in the absence of other drugs or alcohol, other potentially important confounders were probably not controlled for. These hidden confounders, as well as the differing study designs used, might have affected the results of the individual studies and hence the estimates of the pooled odds ratios." M. Asbridge, J. Hayden, J. Cartwright (2012) Acute cannabis consumption and motor vehicle collision risk: systematic review of observational studies and meta-analysis. BMJ 2012;344:e536 doi: 10.1136/bmj.e536
Hi Dr. Aldallal,
Thank you for the links. I have seen these NCBI/NIH articles before, and while the MEEQ scale is most often referred to in the United States, Australian-based MJ expectancy measures use the CEQ scale along with an SDS-C scale and a GHQ-28 scale. While the MEEQ discriminates between users and non-users, the limited research on MJ in the US due to MJ's current drug classification laws has resulted in my searching for MJ research/information from international sources. Also, there are scales that don't seem to warrant as much attention as others, namely the ACEQ, as it is based on the AEQ, replacing the words 'drinking' and 'drinking alcohol' with 'cannabis' and 'smoking cannabis', which I personally feel is a misguided attempt at understanding MJ and the reasons for its use.
Thank you,
Terri
PS: I applaud University of Kufa's directed attention regarding the education of females with The Faculty of Education for Girls.
• asked a question related to Observational Studies
Question
I was hoping someone might be able to recommend some literature that discusses the benefits and limitations of carrying out a task-based observation study in a controlled/simulated environment as opposed to observing individuals in their natural environment. My interest is in decision making in a clinical setting, but any suggestions on background literature would be helpful.
Thanks
Mark
Hi both,
Thanks for your replies. I should have been a bit more specific in my post. I'm interested in the cognitive processes a group of staff utilise whilst using imaging software to make clinical decisions on a group of patients. I'm planning to use a think-aloud method in a simulated environment (for practical and ethical purposes). I need to justify the fact that the simulated environment doesn't invalidate the data. Having spoken to a couple of psychologists it seems fine, but finding an article or two that justifies it seems to be a bit more tricky.
Thanks again
Mark
• asked a question related to Observational Studies
Question
A confounding variable should meet the following conditions:
1. It is associated with the outcome
2. It is associated with the outcome, independent of the variable of interest
3. It is associated with the variable of interest
4. It is not in the causal pathway between variable of interest and outcome
How should 'association' be determined? The outcome of a statistical test or size of the effect? Do we need to provide a rationale for confounding variables in our research? If so, what guidelines should we follow? Can you suggest any good papers to guide our decisions on what should and should not be controlled for in observational studies?
I agree with you Stefan. I'd probably also highlight that this means you need to think about it, draw up a DAG etc., *before* you do the analysis. Specifically addressing one of the questions, it's generally a bad idea to base the decision on statistical tests, and a bad idea to base those on the dataset you're using. That approach leads to problems in almost every department.
• asked a question related to Observational Studies
Question
I am trying to calculate population attributable fractions from a cohort study that uses hazard ratios adjusted for other covariates as an approximation of relative risk. I have read that one way to do this is to use Levin's classic formula (PAF = p(RR-1) / [p(RR-1) + 1]), substituting the hazard ratio estimates for the relative risk estimates.
I think there are two issues with this:
1) It is not strictly accurate to use adjusted estimates of relative risk in Levin's PAF formula as it cannot account for the adjustment in the estimation of PAF
2) The use of Levin's formula (substituting RR with HR) has been demonstrated in cohort studies (Benichou 2001), though it does not consider how the prevalence of exposed individuals changes during the follow-up (Samuelsen & Eide 2008). While this may not be a problem over a short time frame, Crowson et al. (2009) argue that there is uncertainty around how accurate such measures are in the case of long-term cohort studies.
Can anyone suggest a formula that will accurately account for the adjusted hazard ratio estimates and also how the prevalence of exposed individuals changes during the follow-up period?
Many thanks
With regard to your comment on using adjusted RRs: if you look at publications on PAF (for example those of Parkin 2010), you will see that they use adjusted RRs (not HRs, but anyway...) rather than unadjusted ones. Of course the problem is always that one uses RRs as estimated in a study or a meta-analysis, while your own study population usually differs both in the prevalence of the factor of interest and, certainly, in that of the other factors. So the reference population used for the RRs is different from your own reference population! But unadjusted RRs would certainly give a wrong answer; one hopes that adjusting will remove a lot of the error. PAFs inevitably come with their problems, unless you have very precise estimates of RR from your own population (in the same periods and the same population subgroups), and prevalence estimates for all imaginable combinations of exposure (e.g. the prevalence of persons consuming alcohol and tobacco with a BMI of 20-25), preferably by age group and sex...
With regard to the HR... I guess the fact that the time factor is not involved may cause some problems, but I'm no expert there...
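For reference, Levin's formula itself is a one-liner. The Python sketch below simply evaluates PAF = p(RR-1) / [p(RR-1) + 1], whether one plugs in an RR or (with all the caveats discussed in this thread) an adjusted HR; the numbers are illustrative only.

```python
def levin_paf(p_exposed, rr):
    """Levin's population attributable fraction:
    PAF = p(RR - 1) / (p(RR - 1) + 1),
    where p is the prevalence of exposure in the population
    and rr is the relative risk (or an HR used as its stand-in)."""
    x = p_exposed * (rr - 1)
    return x / (x + 1)

# 30% of the population exposed, relative risk 2.0
print(round(levin_paf(0.3, 2.0), 3))  # → 0.231
```

Note that this addresses only the arithmetic; it does not resolve the deeper issues of adjusted estimates or time-varying exposure prevalence raised in the question.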
• asked a question related to Observational Studies
Question
The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies.
STROBE was developed to be used as more of a check-list than a score, just to have a standard way of reporting those 3 types of research i.e. cohort, case control, and cross-sectional. It would be inappropriate to use STROBE as a scoring tool to validate or assess methodological quality of studies. The aim of STROBE is to provide guidance to authors on how to improve the reporting of observational studies to facilitate critical appraisal and interpretation of results. It is a misuse to apply STROBE to explain how a research should be done or on how it should be designed.
By the way, STROBE is becoming increasingly applied by researchers and journal editors worldwide.
Further clarification on the paper attached. Hope this helps.
• asked a question related to Observational Studies
Question
We are looking for new observational coding software.
Actually, I am looking for a quantitative coding system that can measure frequency, duration, percent of behaviors that occur during an observation. I will be primarily observing children and parent-child interactions.
• asked a question related to Observational Studies
Question
Research protocol registration improves the transparency of research as well as allows completeness of information from the publications that ensue. Being essential for interventional trials, there has been a move towards registration of observational studies as well.
I am planning to start a research registry, focussing essentially on observational studies in health care research.
What would be the expectations of a researcher/scholar from such a registry?
What qualities and attributes should I aim for and what are the potential problems that I am likely to encounter?
Thank you Emma for such a detailed response. This is very useful and I'll certainly keep these points in mind.
regards, raza
• asked a question related to Observational Studies
Question
We are researching tactical performance in basketball with 4 observers, analysing both nominal/categorical and continuous variables. I have used Randolph's free-marginal multirater kappa (see the link attached; Randolph, 2005; Warrens, 2010) so far, but I would like to know which other kinds of inter-observer reliability measures you know of, and why you think they could be more suitable when you have more than two observers. Thank you in advance.
To mention other methods of inter-observer reliability measurements (could also be used for intra-observer reliability):
Intraclass correlation coefficient (ICC) is an assessment of inter-observer reliability which expresses the ratio of explained variance to total variance (in terms of reliability, the nearer to one, the better). Be aware that the exact description of the ICC formula is needed, since there are several ways to calculate ICC. For further information I could recommend “Atkinson G, Nevill AM. Statistical methods for assessing measurement error (reliability) in variables relevant to sports medicine. Sports Med 1998;26:217e38.” “McGraw KO, Wong SP. Forming inferences about some intraclass correlation coefficients. Psychol Methods 1996;1:30e46.” “Shrout PE, Fleiss JL. Intraclass correlations: uses in assessing rater reliability. Psychol Bull 1979;86:420e8.”
Mixed model analysis is another sophisticated method to determine the explained variance in a more detailed way (thereof, ICC could also be calculated). For example it might be interesting how much of the variance can be explained by the basketball observers (the less the better, since a high explained variance by the observers means a greater variability of the ratings between the observers). For STATA users I could recommend “Rabe-Hesketh S, Skrondal A. Multilevel and longitudinal modeling using Stata. 3rd ed. College Station, Tex.: Stata Press Publication; 2012.”
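As a concrete illustration of the simplest ICC form, here is a one-way random-effects ICC(1,1) (the first case in Shrout & Fleiss's taxonomy cited above) computed from a subjects × raters matrix in plain Python; the ratings are invented, and the other ICC forms use different mean-square combinations.

```python
def icc1(ratings):
    """One-way random-effects ICC(1,1) for a list of rows,
    one row per subject, one column per rater:
    ICC(1) = (MSB - MSW) / (MSB + (k-1) * MSW)."""
    n = len(ratings)        # number of subjects
    k = len(ratings[0])     # raters per subject
    grand = sum(sum(row) for row in ratings) / (n * k)
    subj_means = [sum(row) / k for row in ratings]
    # Between-subjects mean square
    msb = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    # Within-subjects mean square
    msw = sum((x - m) ** 2
              for row, m in zip(ratings, subj_means)
              for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# 5 hypothetical subjects rated by 2 observers
ratings = [[9, 8], [6, 7], [8, 8], [7, 6], [10, 9]]
print(round(icc1(ratings), 3))  # → 0.789
```

For 4 observers as in the basketball study, the same function applies with 4-column rows, though a two-way model (ICC(2,1) or ICC(3,1)) may be more appropriate when the same raters judge every subject.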
• asked a question related to Observational Studies
Question
The work of the "observers" involved in a research project is really complicated and extremely tedious; usually, they have to spend many hours with a laptop analysing around 15-20 matches. Moreover, researchers have to spend a lot of time on specific training sessions about the variables, the recording instruments, etc. to achieve a good reliability value. However, the average number of observers used is hardly ever more than two.
Therefore... why not increase the number of observers? First, the data would be recorded sooner (the more "workers", the more work done in less time). And second, in my opinion, a reliability value achieved by 5 or 6 people should be stronger than one achieved by only two.
What do you think? Thank you in advance.