Observational Studies - Science topic
Explore the latest questions and answers in Observational Studies, and find Observational Studies experts.
Questions related to Observational Studies
On social media platforms (e.g. Twitter), after a tweet has been posted to refute false information, is it reliable to use the comments below it to assess the tweet's credibility? Since it is the user's choice whether or not to comment, does this practice lead to selection bias? For example, under the pressure of certain social norms it takes courage to refute information posted by an official account, i.e. users who support the official claim are more likely to comment.
Regression and matching are the most common econometric tools used by scholars. Regression always computes correlations, but such correlations can be interpreted as causal when certain requirements are satisfied. As Pearl says, "'Correlation does not imply causation' should give way to 'Some correlations do imply causation.'"
One of the most critical assumptions for making causal inferences in observational studies is that (conditional on a set of variables) the treatment and control groups are (conditional) exchangeable. Confounding and selection bias are two forms of lack of exchangeability between the treated and the untreated. Confounding is a bias resulting from the presence of common causes of treatment and outcome, often viewed as the typical shortcoming of observational studies; whereas selection bias occurs when conditioning on the common effect of treatment and outcome, and can occur in both observational studies and randomized trials.
In econometrics, the definitions of confounding and selection bias are not very clear. The so-called omitted variable bias (also known as selection bias, as distinct from the selection bias mentioned above), in my opinion, refers to bias due to confounding. In a simple regression model Y = a + bX + ε, we say there is omitted variable bias when the residual term is correlated with the independent variable, that is, when the regression omits variables that are related to the independent variable and that may affect Y. In other words, the omitted variable is correlated with 1) the independent variable and 2) the outcome variable. By this definition, the common effects of X and Y should also be controlled for, yet such control is known to lead to another type of bias, namely selection bias. Angrist addresses this issue in his book: "There's a second, more subtle, confounding force here: bad controls create selection bias ... the moral of the bad control story is that timing matters. Variables measured before the treatment variable was determined are generally good controls, because they can't be changed by the treatment. By contrast, control variables that are measured later may have been determined in part by the treatment, in which case they aren't controls at all, they are outcomes." We now also know that variables measured before the treatment is determined are not necessarily good controls, as M-bias shows. The econometric definition is confusing; it seems to me that omitted variable bias should be distinguished from selection bias, and defined as bias arising when a variable left in the residual that causes Y also causes X.
Because of this presentational problem, omitted variable bias is often mistaken for the omission of any variable associated with Y. We often see articles with statements such as "To mitigate the omitted variable bias of the model, we also control for ...", followed by a long list of variables that (may) have an effect on Y. However, adding a series of control variables to the regression model may not help our assessment of causal effects and can even amplify the bias. Including control variables without consideration risks conditioning on a collider, which opens a back-door path that was blocked as long as the collider was left alone. Therefore, when using regression for causal inference, what we have to do is pick the set of control variables from a reliable causal diagram.
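To make the collider warning concrete, here is a minimal simulation sketch in Python (all variable names and coefficients are illustrative, not taken from the cited references): X has no causal effect on Y, yet "controlling" for their common effect C manufactures an association.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)            # treatment; no effect on y
y = rng.normal(size=n)            # outcome; independent of x by construction
c = x + y + rng.normal(size=n)    # collider: common effect of x and y

# Unadjusted regression: the coefficient on x is ~0, as it should be.
print(sm.OLS(y, sm.add_constant(x)).fit().params)

# "Controlling" for the collider: the coefficient on x is now far from 0.
print(sm.OLS(y, sm.add_constant(np.column_stack([x, c]))).fit().params)

The second regression reports a sizeable negative coefficient on x even though the true effect is zero; that is exactly the back-door path opened by conditioning on C.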
I believe simple regression methods are not worthless for causal inference; what we need to do is scrutinise our assumptions before using regression (using causal diagrams to choose control variables that block back-door paths is a good way), increase the transparency of our research, and show the reader what assumptions our results rest on and to what extent those assumptions are reliable. Of course, no matter how much effort we put into proving that our conclusions are reliable, there remains the inevitable threat of unobservable confounding in studies based on observational data, and regression methods that cope with confounding by adding control variables can only address observable confounders. Still, one cannot dismiss a method without clearly identifying where such threats come from. As Robins says, that critic is not making a scientific statement, but a logical one.
These are just some of my personal views from my studies; all comments are welcome!
References
Pearl, J., & Mackenzie, D. (2018). The Book of Why: The New Science of Cause and Effect. Basic Books.
Hernán, M. A., & Robins, J. M. (2010). Causal Inference.
Angrist, J. D., & Pischke, J. S. (2014). Mastering 'Metrics: The Path from Cause to Effect. Princeton University Press.
Hello,
As far as I know, GraphPad Prism is used for experimental studies, but can I also analyze data from cross-sectional studies using GraphPad Prism, or do I have to use another program?
I am looking forward to the answer. Thank you so much!
Regards,
I am doing a study on the causal effects of information source (X1) and text sentiment (X2) on information sharing behaviour (Y). How can I justify treating my predictors as quasi-experimental factors, given that my observational data are derived from social media?
Consider the following study: assess the causal impact of the authority (X) of the publisher of a debunking message (a message that refutes a rumour) on the user's acceptance of that message (Y). The independent variable (authority, X) is measured by a scale (higher scores indicate higher authority) and the dependent variable (acceptance, Y) is determined by the position (acceptance or non-acceptance) users take in comments made in response to the debunking message.
Is there selection bias in this research design? The only data for the dependent variable come from those who comment, and those who do are more likely to be the ones who accept the debunking message; conversely, those who do not accept it are likely to remain silent out of fear of authority. In other words, the process by which individuals select themselves into the analysis guarantees that debunking messages from high-authority publishers show a higher acceptance rate, regardless of whether increased authority actually increases acceptance.
Which risk of bias tool is appropriate for these article types:
1- Non-RCT experimental studies (Quasi-experimental)
2- Pre-test/post-test and interrupted time series designs (observational studies)
If the articles retrieved in the systematic review are of different types (RCT, quasi-experimental, observational), should we use three distinct risk of bias tools, or is there a universal tool for all?
Thanks
There are two ways to classify cluster sampling techniques. The first is by the number of stages followed to obtain the cluster sample (i.e., one-stage, two-stage, multi-stage). The second is by how the groups are represented in the entire cluster. (This is to ensure a fair and accurate representation that could give the most precise answer to a research question about a particular population and event.)
I quote*: "Probability sampling requires that each member of the survey population have a chance of being included in the sample, but it does not require that this chance be the same for everyone." How true is this? Using the parsimonious principle in allocation may not make the analysis easier, and may make the model/test less rather than more powerful!
If the quote* is true, probability proportional to size (PPS) sampling is a method that can be used to make sure that, when sampling the clusters, those with a larger value of the auxiliary variable of interest, or a bigger population, are more likely to be selected than smaller ones, and would give the best answer to the research questions about the population. Am I right? I really want to learn, please.
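For what it's worth, here is a minimal PPS sketch in Python (cluster sizes are made up for illustration): each cluster's selection probability is proportional to its size measure, so every cluster has a known, nonzero, but unequal chance of inclusion, exactly as the quote allows.

import numpy as np

rng = np.random.default_rng(42)
cluster_sizes = np.array([1200, 350, 80, 560, 2100, 430])  # e.g. village populations
probs = cluster_sizes / cluster_sizes.sum()

# Draw 3 clusters, selection probability proportional to size.
sampled = rng.choice(len(cluster_sizes), size=3, replace=False, p=probs)
print("Sampled cluster indices:", sampled)

(Strictly, drawing without replacement this way only approximates true PPS inclusion probabilities; survey packages implement exact schemes such as systematic PPS.)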
Hi, I'm quite confused about the type of study design of this research paper (the study is available in the attachment).
It seems to be secondary research (it does not collect primary data; the data source was previously collected data covering a period of 5 years (2015-2020)), and the study analyses the incidence of TB notifications pre- and post-pandemic from that data source. Would this study be classified as a cross-sectional study or a retrospective cohort study? Any advice would be of much help, thank you very much
I want to investigate the clearance of some medicines on dialysis. These meds are taken regularly by the patients, and I am only going to verify that they are taken. Will it be a clinical trial (meds are given) or an observational study (meds are not given by me)?
I have observational panel data in which all the observations went through the same treatment at a certain point in time, say t. What is the best way to analyze the impact of that treatment? Thank you in advance!
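One common option in this situation, where every unit is treated at the same time and there is no never-treated control group, is an interrupted time series / before-after regression with unit fixed effects. A minimal sketch (file and column names are hypothetical):

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("panel.csv")            # hypothetical long panel: unit, time, y
t_star = 10                              # the common treatment time
df["post"] = (df["time"] >= t_star).astype(int)

# 'time' absorbs the pre-existing trend, 'post' captures the level shift
# at treatment, and C(unit) gives unit fixed effects.
model = smf.ols("y ~ time + post + C(unit)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["unit"]})
print(model.summary())

The key caveat is that, with no untreated units, the causal claim rests entirely on the assumption that the pre-treatment trend would have continued unchanged.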
Greetings and merry Christmas, everyone. I have a question regarding observational studies: when factors associated with a condition/disease are evaluated, certain variables keep recurring (sex, educational level, age, income). My question is whether there is a set of universal variables that should always be considered in these types of studies, or whether these variables vary according to the condition/disease. Thanks
Hi all,
I am trying to do an observational study on admissions during the COVID period 2020, and comparing this to the same period in 2017, 2018 and 2019.
I am having trouble using statistics to analyse the data.
I have converted the data to admissions per day and am trying to use ANOVA to get a p value; however, this does not seem to give a valid value (the first time I got 6, the next time I got 0). The data are zero-heavy, especially in the 2020 group, where there are many days with no admissions at all. I used Excel (I cannot get my head around SPSS).
There is clearly a significant difference and similar studies have shown a p value of <0.01.
Is this the right statistical test to be using as there are 4 groups? Is there a particular way I should format the data to be able to analyse it? Is the fact that the data is zero heavy affecting the p value and the reason why the ANOVA does not seem to be working?
Any help would be greatly appreciated, many thanks
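One issue worth flagging first: a p value can never exceed 1, so a value of 6 suggests the Excel formula was returning the F statistic (or referencing the wrong cells) rather than a p value. Beyond that, ANOVA assumes roughly normal residuals, which zero-heavy daily counts violate; a nonparametric Kruskal-Wallis test, or Poisson/negative-binomial regression on the daily counts, is usually a better fit. A minimal sketch in Python with illustrative (simulated) data:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Illustrative daily admission counts for four years; 2020 is zero-heavy.
y2017 = rng.poisson(3.0, 365)
y2018 = rng.poisson(3.2, 365)
y2019 = rng.poisson(2.9, 365)
y2020 = rng.poisson(0.6, 366)

# Nonparametric alternative to one-way ANOVA across the 4 groups.
h, p = stats.kruskal(y2017, y2018, y2019, y2020)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.3g}")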
Hi all,
I'm looking into observational studies to try and better characterize treatment-resistant depression. I'm having trouble deciphering the paragraph in bold below, having read it multiple times. What does this mean in plain English? I have provided some additional text for context.
Thanks in advance for your help!
----------------------------------------------------------------------------------------------------------------------------------------------------------
The study sample began with identifying patients with Major Depressive Disorder and included 572,682 patients with at least one service claim for a depressive disorder including dysthymic disorder.
Patients were excluded if:
a) they had a pre-defined exclusion diagnosis for schizophrenia,
schizoaffective or bipolar disorders
b) the patient’s age on Index Diagnosis Date was not within the 18–64 year age bracket;
c) an Index Diagnosis Date was not found.
The Index Diagnosis Date for each patient was defined as the date of the first inclusion diagnosis that:
1) had no other inclusion diagnosis or prescription claim of antidepressants in the prior 120 days (guided by the definition of a ‘‘new episode’’ contained in PQRI 2007 Measure 9) [19], and
2) had continuous service eligibility for 4 months prior to and 24 months after the date (to ensure a minimum study period of 2 years for each patient).
Classification of MDD Episode
After patients were included in the study sample, the data were examined and episodes of MDD were established for each patient. An MDD episode was defined to begin on a first relevant date and end 120 days after the last relevant date, where the first relevant date was a date of inclusion diagnosis that was not preceded by any inclusion diagnosis or an antidepressant prescription claim in the preceding 120 days. The last relevant date was a date of inclusion diagnosis or antidepressant prescription claim that was followed by a clear period of 120 days without any inclusion diagnosis or antidepressant claim.
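In plain English: line up all of a patient's relevant dates (inclusion diagnoses and antidepressant claims) in order; wherever there is a clear gap of more than 120 days, one episode ends and the next begins; each episode starts at its first diagnosis date and is deemed to end 120 days after its last relevant date. A minimal sketch of that logic (dates are illustrative, and it simplifies by treating diagnosis and prescription dates alike):

from datetime import date, timedelta

GAP = timedelta(days=120)
claims = sorted([date(2016, 1, 5), date(2016, 2, 10),
                 date(2016, 9, 1), date(2016, 10, 3)])

episodes, start, last = [], claims[0], claims[0]
for d in claims[1:]:
    if d - last > GAP:                       # a clear 120-day period found
        episodes.append((start, last + GAP)) # episode ends 120 days after last date
        start = d                            # the next episode begins here
    last = d
episodes.append((start, last + GAP))
print(episodes)   # two episodes for these dates

(In the actual study an episode must begin with an inclusion diagnosis, not a prescription claim; the sketch omits that check.)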
We are going to start a study without intervention, just electrophysiological exams.
Do we have to register it at ClinicalTrials.gov?
thanks
Nowadays it is very popular to present research in a seemingly objective way (with a few statistical analyses) rather than documenting, presenting, and interpreting the crucial observations made during the research.
Fortunately, in field-based studies researchers get the opportunity to observe certain things and make notes on them. But the observations often remain uninterpreted, even unpublished.
I would like the help of you academicians on: (1) How do I write a proper observation-based research paper? (2) What are the systematic ways to do so? (3) Whom can I contact to discuss this?
Thanks in advance.
In this article, we have 2 groups: patients with clefts and patients without clefts. Our outcome is teeth with developmental defects ("enamel defects"). The authors report this as a "cross-sectional study".
In this article, we have 2 groups, patients with clefts and patients without clefts. Our outcome is teeth with developmental defects or dental anomalies. The authors report this as a "retrospective study" (case-control?)
In this article, we have 2 groups, patients with clefts and patients without clefts. Our outcome is teeth with dental anomalies or developmental defects. The authors report this as a "cohort study"
My question is: why is the study design different, if in all cases we recruit patients and examine them clinically / with radiography to assess the outcome?
Dear RG community,
I find a few terms and aspects of study design a bit confusing.
1. From my understanding, longitudinal studies are observational studies consisting of cohort studies (prospective or retrospective) and panel studies.
How about a diary study, is it considered a panel study?
How about repeated cross-sectional studies, which are also often referred to as longitudinal studies?
2. Experimental studies consist of before-after (also known as pre-post) or repeated-measures interventional studies, either randomized or non-randomized. Am I right?
3. For longitudinal studies, especially prospective cohorts, most references highlight two groups (exposed vs unexposed) with similar baselines and a categorical outcome measure.
How about a single-group cohort with different baselines and a continuous outcome measure?
4. For longitudinal studies, especially cohort studies, most references talk about follow-up durations of months to years.
But for some psychological measures such as stress and fatigue, there should be no problem conducting a cohort study of much shorter duration, e.g. within 8 hours, am I right? For example, we measure the baseline stress level pre-work and follow up the outcome stress measure post-work, i.e. after 8 hours of continuous work. Is that appropriate?
Thank you for kind assistance.
Your response is highly appreciated.
Regards,
Fadhli
Dear colleagues,
I have been struggling a lot with the Ethics Committee. At the moment I am working on observational studies, and I would like to know whether investigating suicidal ideation over a specific timeframe (the last 12 months), rather than "at the moment", would require Ethics Committee approval.
Does anyone know anything about it?
Thanks in advance!
Hi,
For example: if I have 20 eligible studies (both case-control studies and cross-sectional studies without any control group) coming out of a systematic search,
can I do a meta-analysis with 10 of the studies (those that present the data of interest and are case-control) and systematically review the remaining 10 (those without a control group or with non-normally distributed data)?
Regards,
Asiful
Hi,
Currently I am screening articles for my systematic review. Most of the articles are human retrospective cohort studies, and a couple of articles are animal (mice) studies. If the mice experimental studies meet our interventional criteria, can we include those studies in the systematic review?
Your comments will help me a lot.
Thanks.
Reference: https://www.medscape.com/viewarticle/909395
DIET DRINKS are more and more preferred over energy drinks in the belief that they are lower in caloric intake. This new research challenges that conventional wisdom by claiming they are linked to a higher incidence of stroke in females.
According to Medscape words "Drinking artificially sweetened beverages is linked to increased risk for ischemic stroke, coronary heart disease, and all-cause mortality in women, new research shows. Among almost 82,000 participants in the Women's Health Initiative Observational Study, risk for fatal and nonfatal stroke was 23% higher among women who self-reported drinking the most diet beverages — two or more per day — compared to the women who consumed the least. The latter group drank none or less than one of these beverages per week. "
Kindly have your say.
Regards
There are available guidelines for systematic reviews of RCTs (Cochrane) and of observational studies (MOOSE guidelines).
I am conducting a systematic review on predictors of X disease. Please help me with the appropriate guidelines.
I cannot find, despite trying for the last three hours, a systematic review and/or meta-analysis of observational studies in which the synthesized quantitative evidence has been evaluated with the GRADE approach to assess the confidence in (or strength of) the evidence. Could someone help?
Hello , Researchers
I'm Ahmed, an undergraduate medical student. I have created a new research group with my friends and we are going to start our first research project.
We are still confused about the research process; as you know, we don't know where to start. Can PubMed and other websites help us generate new ideas, and how?
I have seen many clinical research projects, but most of them were for postgraduates and for people who work on them all day.
I need help and support
Thank you in advance ..
I am using the Meta-analysis Of Observational Studies in Epidemiology (MOOSE) guidelines for public health. One journal is asking this: "Indicate the guideline(s) used. Checklists for reporting guidelines may be uploaded as Reporting Checklist files." So my question is: do I have to send them only the points of the MOOSE guidelines that I have followed, or do I have to elaborate on the points? Thanking you in advance
How many people do I need to recruit if I conduct a randomized, between-subjects pilot study using 4 different conditions for 4 types of manipulation?
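As a rough starting point, a one-way ANOVA power calculation gives the order of magnitude; a sketch with statsmodels, where the effect size f = 0.25 ("medium") is purely an assumption to be replaced by your own estimate:

from statsmodels.stats.power import FTestAnovaPower

n_total = FTestAnovaPower().solve_power(
    effect_size=0.25, k_groups=4, alpha=0.05, power=0.80)
print(f"Total N ~ {n_total:.0f} (~{n_total/4:.0f} per condition)")

This returns roughly 180 participants in total. Note, though, that pilot studies are often sized for feasibility (e.g. 10-15 per cell) rather than formal power, precisely because the effect size is not yet known.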
STROBE checklists seem to be used frequently. However, it might not be the most appropriate tool as the original purpose was to serve as a guideline for reporting observational research and not specifically as a methodological quality-assessment tool (da Costa et al., 2011).
Thanks in advance for any recommendations.
da Costa, B. R., Cevallos, M., Altman, D. G., Rutjes, A. W., & Egger, M. (2011). Uses and misuses of the STROBE statement: bibliographic study. BMJ open, 1(1), e000048.
As we know, variation exists in the dimensions and morphology of the cornea across different strains of mice. Differences in susceptibility and resistance to some groups of pathogens have also been observed (e.g. C57BL/6 is susceptible to Pseudomonas aeruginosa and resistant to Staphylococcus aureus). Considering the different factors (corneal diameter, thickness, cost-effectiveness, handling, etc.), which mouse strain do you prefer for studying keratitis?
I am planning a systematic review of observational studies. The aim is to compare the outcomes of an intervention at different stages of the same disease, so the included populations and comparison groups are at different stages, which will obviously affect the outcomes. Can I assess the outcomes of the intervention in the presence of such selection bias?
research question: what is the role of gender in people's preference of reality-based show and fiction-based show?
I am thinking about a non-experimental design because we can't randomly assign gender to people. So I will have a group of males and a group of females, give both groups both types of show, and compare the preferences. The two groups will be matched on other demographic characteristics to reduce the influence of confounding variables.
Another option I was given is a 2x2 factorial design, which is an experimental design: basically, create two groups of males and two groups of females and give them the two types of show separately.
Here is my question: while we can randomly assign males to the two male groups and females to the two female groups, the male and female groups essentially come from two separate populations and are not equivalent. Would a 2x2 factorial design really work in this case? How should we analyze the results?
Thank you!
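Analytically, the usual treatment of such a design is a two-way ANOVA with gender as a measured (non-randomised) factor and show type as the manipulated factor; the gender main effect and the interaction are then interpreted descriptively rather than causally. A minimal sketch with illustrative data (all names and numbers hypothetical):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(7)
df = pd.DataFrame({
    "gender": np.repeat(["male", "female"], 40),
    "show": np.tile(np.repeat(["reality", "fiction"], 20), 2),
})
df["preference"] = rng.normal(5, 1, len(df))   # placeholder outcome

model = smf.ols("preference ~ C(gender) * C(show)", data=df).fit()
print(anova_lm(model, typ=2))   # main effects and the gender x show interaction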
Hi, I have a set of Catphan and ACR CT phantom images at different mAs values (doses). I will prepare a 4-AFC test for radiologists: they must spot a 6 mm sphere of the Catphan in a four-image test. The same test must be done by a model observer (CHO or similar). I will acquire phantom images from the different CT scanners in my hospital and evaluate the images with: 1) the traditional medical physicist approach (Catphan or ACR contrast, noise, MTF, ...), 2) dose indexes (CTDI and DLP), 3) a model observer index, in order to obtain dose reductions on CT exams and a comparison between different CT scanners/centres. I need software (a macro, MATLAB, or something similar) to do model observer studies.
Hello, I'd like to know what is a valid tool to appraise risk of bias in observational studies, mainly cohort and case-control studies, where the exposure is not an intervention (e.g. cigarette smoke, genetic aberrations, etc.). Cochrane recommends ACROBAT-NRSI; however, it is suitable only when the exposure is an intervention. Thank you.
I am currently doing a systematic review of observational studies. Narrative synthesis has been employed, as meta-analysis was not feasible.
For my observation interview I will use a sample of 4 Italian and 4 English people. However, in order to carry out the observation interview I have to provide stimuli. The best option I have is to show a video or some images so that I can provoke some reactions and then analyse the results. Conditions must be the same for both the Italian and the English group. Thank you
What is an appropriate method for observing selfie-upload frequency on social media?
For example, if we carry out an event and want to see the level of participation of an individual person, how can we measure it? Is making a video a good option? Are there any other tools to measure the level of participation the person is offering?
I am currently writing a systematic review and the majority, if not all, of my studies are descriptive. I looked for quality assessment tools and found that the QAT is widely used: http://www.nccmt.ca/registry/view/eng/14.html, but it is somewhat more applicable to intervention studies than to descriptive ones.
I also came across CIRCUM, which seems appropriate, but I haven't seen any review that used it before: http://circum.com/index.cgi?en:appr
Do you think I should be using the QAT? What other tools would you suggest?
Thank you
Mohamad
Most observational studies provide adjusted estimates in the form of ORs (with 95% CIs); however, some of them use HRs instead, which incorporate the notion of time to event. Is there a way to transform one into the other?
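There is no exact algebraic conversion: an HR uses time-to-event information that an OR discards, so any mapping requires assumptions. When the outcome is rare, HR ≈ RR ≈ OR, so the estimates are often pooled as-is with a stated caveat. For commoner outcomes, one frequently cited bridge is the Zhang & Yu (JAMA 1998) formula for turning an OR into an approximate RR given the baseline risk p0; a sketch (values illustrative):

def or_to_rr(odds_ratio, p0):
    # Zhang-Yu approximation: RR from an OR and the unexposed risk p0.
    return odds_ratio / (1 - p0 + p0 * odds_ratio)

print(or_to_rr(2.0, 0.05))   # ~1.90: nearly the OR when the outcome is rare
print(or_to_rr(2.0, 0.40))   # ~1.43: diverges as the outcome becomes common

Note the formula is itself only an approximation (and debated for adjusted ORs), so any conversion is best reported as a sensitivity analysis rather than an exact transform.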
How can I overcome the Hawthorne effect during an observational study?
Are there studies comparing structured with unstructured observation methods?
I am unsure whether I should use Downs and Black to evaluate all studies, or alternatively use Jadad for RCTs and Downs and Black for non-RCTs and observational studies, or even the Cochrane RoB tool instead?
MOOSE: Meta-analysis of observational studies in epidemiology: a proposal for reporting. Meta-analysis Of Observational Studies in Epidemiology (MOOSE) group (2000)
PRISMA: Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement (2009)
My idea is to do research on the subject and carry out an observational study on my own, with a few participants due to limited resources. So I would need to set some questions and decide which characteristics would allow me to conduct the study.
Dear All,
I am conducting a systematic review to assess determinants, levels of uptake of IPTp-SP and associated pregnancy outcomes in sub-Saharan Africa in order to evaluate implication of the current World Health Organization (WHO) policy on Intermittent Preventive Treatment of malaria during pregnancy with sulphadoxine-pyrimethamine (IPTp-SP).
All the included studies are observational, e.g. cross-sectional, prospective cohort, and mixed-methods. I have gone through a couple of quality appraisal tools, but almost all of them are designed for reviewing RCTs.
Therefore, I would like to ask for your assistance in identifying suitable tools for this kind of study.
Thank you in advance,
Steven
The observations were collected to assess the effect of a group art session for people living with dementia
T1 = participants were observed in a group before the 12-week programme started (control session);
T2 = individuals were observed at least once, but sometimes over two sessions, in sessions 1-4, depending on their attendance;
T3 = collected in sessions 11-12; again this varies, with some observed once and others over two sessions.
It is well known that in studies of rare events, reported risk ratios (RRs) are quite similar to odds ratios (ORs), and the generally accepted range for the term "rare" is less than 5%. In this context, my primary question is: can I combine RRs and ORs to conduct a meta-analysis of observational studies that explored a rare event (prevalence < 2%)?
If so, can I transform the RRs and ORs to Cohen's d (according to Borenstein et al.) in order to perform the meta-analysis with effect sizes?
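For the OR side, Borenstein et al. give the standard conversion d = ln(OR) x sqrt(3)/pi, with variance V_d = V_lnOR x 3/pi^2; with prevalence below 2%, RR ≈ OR, so RRs are often entered through the same formula, that rare-event equivalence being the assumption leaned on. A sketch (inputs illustrative):

import math

def or_to_d(odds_ratio, var_log_or=None):
    # Borenstein et al.'s logit-method conversion of a (log) OR to Cohen's d.
    d = math.log(odds_ratio) * math.sqrt(3) / math.pi
    v = None if var_log_or is None else var_log_or * 3 / math.pi ** 2
    return d, v

print(or_to_d(1.8, 0.05))   # OR = 1.8, Var(lnOR) = 0.05 -> d ~ 0.32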
My PhD is a design study of a visual analytics system that visualises text cohesion, designed to help editors make documents more coherent. I am in the process of analysing and writing up the findings of my first user evaluation study (a ‘lab’ one, rather than an ‘in-the-wild’ one, the latter of which is yet to come). My background is as a domain expert (professional editor), so I have minimal experience with HCI methods.
I have the data, in the form of transcripts of sessions where I sat with domain-expert users and had them play with the tool (using their own data as well as several other example sets of data) and discuss their impressions and thoughts. I already know what phenomena I find interesting, but I can't seem to just write the chapter--I keep reorganising and renaming and remixing my structure. I can't seem to get beyond that stage of structuring and restructuring the chapter. I think this is happening because I want to assure myself that my observations are legitimate and relevant, and that they are elicited and expressed in some useful and systematic way. I don't know what the norms are in the way this kind of research is written up, or how to make best use of the data. As I said, I already know what phenomena I personally find interesting in the data, but I haven’t used any particular theory or process to identify those things. I’ve pretty much just used my knowledge/intuition. Is this OK? And if so, how do I organise that? It's just a series of observations right now. For example, should I organise them:
1. by what component of the designed tool I think they relate to (cohesion theory, LSA rendering of cohesion, visualisation, work practices in the domain, individual differences in users?)?
2. By what body of theory I want to use to explain why they happened (Affordances for interface design problems, Gestalt for visual perception problems, lack of connection with linguistic theory in writing/composition instruction for users' difficulties in understanding the theory of cohesion, etc)?
3. Or just put the observed phenomena in there one by one, as is ('users had unexpected ideas about what the system was for', 'users took a long time to learn how to use the system', 'some users found the lack of objective standard of cohesion challenging', etc), and then address the possible reasons for why these phenomena might have happened within the body of each of those sections (because, after all, this part will only be speculation, given that I won't be isolating variables and testing any of these theories--I will just be suggesting them as possible leads for further studies)?
Each of these options has a limitation. I feel that number one, organising by component, is a bit difficult and presumptuous. I don't necessarily know that a user's behaviour is caused by a problem with the visualisation design or by the theory the visualisation is trying to communicate, or an unintuitive interface with which to interact with the visualisation, or a lack of familiarity on the part of the user with the sample text, or the user's individual problems with computers/technology in general, or a limitation in the way I explained how the system works, or an incompatibility with their practice as an editor, or... etc etc. It could be one of those things or several of those things or none of those things, and I won't have enough in the data to prove (or sometimes even guess) which. This same problem plagues the second option--to organise by theory. That presumes that I know what caused the behaviour.
In fact, now that I have typed this out, it seems most sensible to use the third option--to just list out what I noticed and not try to organise it in any way. This to me (and probably to others) looks informal and underprocessed, like undercooked research. It's also just a bit disorganised.
I think looking at other similar theses will help. I have had difficulty locating good examples of design studies with qualitative user evaluations to show me how to organise the information and get a feel for what counts as a research contribution. Even if I find something, it's hard to know how good an example it is (as we all know, some theses scrape in despite major flaws, and others are exemplary).
Can anyone offer some advice, or point me to some good examples? Much appreciated.
My research proposes a quantitative approach using data from previous longitudinal observational studies to answer the following questions.
(sample size is approximately 300 participants)
1. What is the prevalence of a disease?
2. What clinical signs and symptoms present in patients are associated with this disease?
3. What risk factors were present in those who were diagnosed with this disease?
Can someone point me to an elaborate paper, or literature with a detailed step-by-step guide, on how to conduct a meta-analysis, especially for observational studies?
Structural equation modeling (SEM) is a statistical analysis tool of social research; could it be used or adapted for data from observations?
Do we need Ethics Committee approval and CTRI registration for observational studies (simplified PMS studies) of Ayurvedic medicines?
I'm looking for an appropriate quality appraisal tool (QAT) to assess observational studies that will be included in a systematic review. The QAT will not be used to exclude studies with low quality scores, but it will be reported in the review. STROBE is inappropriate as it was not designed for the purpose of assessing methodological quality in a systematic review. Does anyone have any suggestions for a robust/widely accepted QAT in public health?
Experience in systematic review of observational studies.
A case study is useful for understanding 'why' and 'how' problems. The most common methods are the single case study and the comparative case study. I now face a problem caused by data unavailability: I have collected relatively rich material on one case, which is not enough for a deep, longitudinal case study, plus some related cases that are less detailed. My strategy: on the one hand, I will continue to collect data; on the other hand, I am trying to find a more appropriate method.
I wonder whether there is a case study method based on a core case and peripheral (subsidiary) cases.
thank you !
We are using the STROBE and CARE checklists to assess data quality for observational studies and case studies respectively. How do we then 'convert' these ticks on the checklists into high, medium or low quality?
Hey
I am about to plan a systematic review with meta-analysis of observational studies and want to find out whether age is an exposure variable for a clinical outcome. My outcome is dichotomous: the occurrence of a clinical event or not.
Meanwhile, some observational studies report age as a continuous variable and some dichotomise age at 50 years. Is it all right to include ORs for the outcome variable from studies using age both as a continuous and as a dichotomous exposure variable?
Best regards,
Daniel
I have completed an observational study with six nominal variables. We have a fully crossed design with 2 coders rating all 162 subjects. I'd like advice on the best method for setting up an SPSS database to compute inter-rater reliability for these data, please.
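For two coders and nominal codes, the usual statistic is Cohen's kappa, computed separately for each of the six variables. In SPSS the layout is one row per subject and one column per coder for each variable, with kappa obtained via Analyze > Descriptive Statistics > Crosstabs (Statistics... > Kappa), run once per variable. The same calculation in Python, for comparison (ratings illustrative):

from sklearn.metrics import cohen_kappa_score

coder1 = ["a", "b", "b", "c", "a", "a"]   # one entry per subject
coder2 = ["a", "b", "c", "c", "a", "b"]
print(cohen_kappa_score(coder1, coder2))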
I am looking for a tool for assessing the quality of individual studies included in a systematic review, for example a cross-sectional survey.
The trial participation rate can give additional information on patient representativeness. No consensus on this indicator has been published, since there seems to be no ideal rate. However, some people say that, by convention, a participation rate lower than 80% may increase the risk of selection bias.
In meta-analyses: can standardized ratios (SMRs/SIRs) be pooled with ORs or RRs (not standardized)? Has anyone done this? Any supporting reference?
Best wishes, Jonas Ludvigsson
Suppose I have four methods A, B, C, and D, and I want to run a longitudinal study to measure participants' performance with each method.
I chose a within-subject design because I have only a small number of volunteers. Since I also want to see how performance changes with practice, the study will have ten sessions. In every session participants will execute some task using each of the four methods.
I have read about counter-balancing using a latin square to avoid any interference effect between the methods. However, as far as I understood, this approach seems to be used in cases where you have only one session. For example, consider the 4x4 latin square:
A B C D
B C D A
C D A B
D A B C
In a single-session study, participant 1 would use the methods in the order defined in row 1, participant 2 in row 2, ..., participant 5 in row 1 and so on (the number of participants must be a multiple of 4).
In the example, interference is then assessed by analyzing the effect of "group" (i.e. each order of the methods), a non-significant effect means no interference.
What about the second, third, ... tenth session? Will the order be the same in every session for each participant? Keeping the same order means that participant 1 will always use method D after he or she used methods A, B, and C, therefore performance could be poor with D due to tiredness.
My question is: how to change the order of the methods from one session to the next to avoid interference between conditions?
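One standard option is to rotate each participant's row of the square from session to session, so that over the ten sessions every participant cycles through all four orders and no method is pinned to the fatigued end-of-session slot for any one person. A sketch (the square is the cyclic one from the question; a Williams design would additionally balance immediate carryover):

square = [list("ABCD"), list("BCDA"), list("CDAB"), list("DABC")]

def order(participant, session):
    # Participant p uses row (p + s) mod 4 in session s.
    return square[(participant + session) % 4]

for s in range(4):
    print(f"session {s}: participant 0 uses", "".join(order(0, s)))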
I'm interested in the phenomenon of oscillations appearing when the kernel and f(t) are non-periodic functions. I observed this phenomenon while studying the behaviour of solutions of difference equations of Volterra type, and I'd like to obtain a confirmation of it through some theoretical continuous-time model.
I am going to conduct a prospective observational study on the contribution of dispositional factors and rehab interventions to the vocational outcomes of people with mental illness. From the literature review, there are about 3 to 4 contributing factors and 2 to 3 dispositional factors (confounders). I've read about the "10 cases for each predictor" rule on the internet. Is it valid? Can any software or program help in calculating the minimal sample size?
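As a quick illustration of the heuristic (numbers are made up): with a binary outcome, the limiting quantity is the number of events, not the total sample, so the rule is usually read as 10 events per candidate predictor.

n_predictors = 7       # e.g. 3-4 factors plus 2-3 confounders
epv = 10               # the conventional events-per-variable rule of thumb
event_rate = 0.35      # assumed proportion achieving the vocational outcome

min_events = epv * n_predictors
min_sample = round(min_events / event_rate)
print(min_events, min_sample)   # 70 events -> about 200 participants

The rule itself is contested (later work suggests it can be too lax or too strict depending on the setting); dedicated tools such as G*Power, or simulation, give more defensible numbers.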
"Although we defined high quality studies as those that scored full marks on the Newcastle-Ottawa scale, many of these high and medium quality studies probably did not control for all possible confounders. Although we restricted positive cannabis results to drivers that showed the presence of tetrahydrocannabinol in the absence of other drugs or alcohol, other potentially important confounders were probably not controlled for. These hidden confounders, as well as the differing study designs used, might have affected the results of the individual studies and hence the estimates of the pooled odds ratios." M. Asbridge, J. Hayden, J. Cartwright (2012) Acute cannabis consumption and motor vehicle collision risk: systematic review of observational studies and meta-analysis. BMJ 2012;344:e536 doi: 10.1136/bmj.e536
I was hoping someone might be able to recommend some literature that discusses the benefits and limitations of carrying out a task-based observation study in a controlled/simulated environment as opposed to observing individuals in their natural environment. My interest is in decision making in a clinical setting, but any suggestions on background literature would be helpful.
Thanks
Mark
A confounding variable should meet the following conditions:
1. It is associated with the outcome
2. It is associated with the outcome, independent of the variable of interest
3. It is associated with the variable of interest
4. It is not in the causal pathway between variable of interest and outcome
How should 'association' be determined? The outcome of a statistical test or size of the effect? Do we need to provide a rationale for confounding variables in our research? If so, what guidelines should we follow? Can you suggest any good papers to guide our decisions on what should and should not be controlled for in observational studies?
I am trying to calculate population attributable fractions (PAFs) from a cohort study that uses hazard ratios, adjusted for other covariates, as an approximation of the relative risk. I have read that one way to do this is to use Levin's classic formula, PAF = p(RR - 1) / (p(RR - 1) + 1), substituting hazard ratio estimates for the relative risk estimates.
I think there are two issues with this:
1) It is not strictly accurate to use adjusted estimates of relative risk in Levin's PAF formula as it cannot account for the adjustment in the estimation of PAF
2) The use of Levin's formula (substituting RR with HR) has been demonstrated in cohort studies (Benichou 2001), though it does not consider how the prevalence of exposed individuals changes during follow-up (Samuelsen & Eide 2008). While this may not be a problem over a short time frame, Crowson et al. (2009) argue that there is uncertainty about how accurate such measures are in the case of long-term cohort studies.
Can anyone suggest a formula that will accurately account for the adjusted hazard ratio estimates and also how the prevalence of exposed individuals changes during the follow-up period?
Many thanks
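One widely used alternative is Miettinen's case-based formula, PAF = pc x (RR - 1)/RR, where pc is the prevalence of exposure among cases; unlike Levin's formula, it remains (approximately) valid when an adjusted RR, or an adjusted HR as its stand-in, is plugged in. A sketch of both (inputs illustrative):

def paf_levin(p_exposed, rr):
    # Levin's formula; strictly valid only for an unadjusted RR.
    return p_exposed * (rr - 1) / (p_exposed * (rr - 1) + 1)

def paf_miettinen(p_cases_exposed, rr):
    # Miettinen's formula; usable with adjusted estimates.
    return p_cases_exposed * (rr - 1) / rr

print(paf_levin(0.30, 1.8))       # ~0.19
print(paf_miettinen(0.42, 1.8))   # ~0.19

This addresses the adjustment issue; for changing exposure prevalence over long follow-up, the pragmatic fix is usually to split follow-up into intervals and compute interval-specific PAFs, per the Samuelsen & Eide line of work.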
The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies.
We are looking for new observational coding software.
Research protocol registration improves the transparency of research and allows completeness of information in the publications that ensue. While essential for interventional trials, there has also been a move towards registration of observational studies.
I am planning to start a research registry, focussing essentially on observational studies in health care research.
What would be the expectations of a researcher/scholar from such a registry?
What qualities and attributes should I aim for and what are the potential problems that I am likely to encounter?
We are researching tactical performance in basketball with 4 observers, analysing both nominal/categorical and continuous variables. I have used Randolph's free-marginal multirater kappa (see the link attached; Randolph, 2005; Warrens, 2010) so far, but I would like to know what other kinds of inter-observer reliability measures you know of, and which you think could be more suitable when you have more than two observers. Thank you in advance.
The work of the "observers" involved in a research project is really complicated and extremely tedious; usually they have to spend many hours with a laptop analysing around 15-20 matches. Moreover, researchers have to spend a lot of time on specific training sessions about the variables, the recording instruments, etc. to achieve a good reliability value. However, the average number of observers used is hardly ever above two.
Therefore... why not increase the number of observers? First, the data would be recorded sooner (more "workers" means more work done in less time). And second, in my opinion, a reliability value achieved by 5 or 6 people should be stronger than one achieved by only two.
What do you think? Thank you in advance.
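On the statistics side, the common choice for more than two raters with nominal codes is Fleiss' kappa (or Krippendorff's alpha, which also handles missing ratings and other scale types). A sketch with statsmodels, where rows are rated events, columns are categories, and each cell counts how many of the 4 observers chose that category (data illustrative):

import numpy as np
from statsmodels.stats.inter_rater import fleiss_kappa

counts = np.array([
    [4, 0, 0],
    [3, 1, 0],
    [0, 4, 0],
    [1, 1, 2],
    [0, 0, 4],
])
print(fleiss_kappa(counts))   # chance-corrected agreement across all 4 observers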