# How can I compare two odds ratios extracted from a longitudinal study?

## Question

How can I compare two odds ratios extracted from a longitudinal study?

- In a longitudinal study we should use the relative risk, because we can compute incidence rates; we can then compare the two relative risks.
- Maybe you are talking about a case-control analysis nested inside a longitudinal cohort. In longitudinal cohorts the risk ratio is the main indicator, while in a case-control design the OR is appropriate and can estimate it (if the disease remains rare). If I remember correctly, ln(OR) follows a normal distribution; several statistics courses available online as PDFs describe this, and you can study them to calculate the comparison formally (if you want a precise p-value). Otherwise, any statistical software gives you the CI of each OR, and you can use those: if the CIs do not cross, p < 0.05.
- If you consider the odds to be independent, you can use the following: take the difference of the log odds ratios, δ. The standard error of δ is sqrt(SE(ln OR1)^2 + SE(ln OR2)^2). Then you can obtain a p-value for the ratio z = δ/SE(δ) from the standard normal distribution.
- I am unsure what you are asking. I disagree with Friba. The odds ratio is a valid statistic in itself. If the disease is rare, it approximates the rate ratio. The odds ratio also lends itself to further statistical manipulation, e.g. logistic regression. If the disease is not rare, I would use the rate ratio, but most diseases are rare.
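The independent-samples z-test described above can be sketched in a few lines of Python. The 2×2 counts here are made up purely for illustration; SE(ln OR) for a table is sqrt(1/a + 1/b + 1/c + 1/d) over the cell counts:

```python
import math

def log_or_and_se(a, b, c, d):
    """Log odds ratio and its SE from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    return log_or, se

def compare_independent_ors(table_t0, table_t1):
    """Two-sided z-test for equality of two independent odds ratios."""
    l0, se0 = log_or_and_se(*table_t0)
    l1, se1 = log_or_and_se(*table_t1)
    delta = l1 - l0
    se_delta = math.sqrt(se0**2 + se1**2)  # independence assumed
    z = delta / se_delta
    p = math.erfc(abs(z) / math.sqrt(2))   # two-sided standard-normal p-value
    return math.exp(delta), z, p

# Hypothetical counts at baseline and follow-up
ratio, z, p = compare_independent_ors((30, 70, 20, 80), (45, 55, 15, 85))
print(f"ratio of ORs = {ratio:.2f}, z = {z:.2f}, p = {p:.3f}")
```

Note that, as discussed further down the thread, this formula is only valid when the two samples are independent; with the same subjects measured twice it ignores the covariance between the two estimates.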

- First, thank you all for your help!

I have two odds ratios from a longitudinal study, one at baseline and the other at follow-up, and now I want to compare them. After calculating the two odds ratios at the two time points, can I only compare them by reporting whether each is significant or not, or is there a statistical test to compare the odds ratios directly?
- Testing the significance of the difference would be an appropriate method.
- My suggestion is to draw a forest plot and see if there is an overlap between the confidence intervals of the two odds ratios. Thus, by simple visual inspection you can quickly discover whether there is a difference between the two points. If there is overlap, there is no statistically significant difference between the baseline and the later ORs.

Otherwise, if the OR at baseline is not statistically significant, you can comfortably describe the later OR by itself, i.e. its magnitude and direction, with tests of statistical significance.
- Changes in the OR over time can be measured by using logistic regression and comparing two models with a likelihood ratio test. First fit the model using your baseline value and measure the OR from the coefficient corresponding to exposure at baseline. Then fit the same model with the exposure replaced by its value at follow-up. Compare both models using the likelihood ratio test. This will give you both ORs with 95% CIs and also tell you whether the observed ORs differ significantly between the two models.

In Stata, see the help entries for the following commands:

xtlogit

est store

lrtest
- You really need to have more than one model; then you compare them using a likelihood ratio test. This will get you odds ratios and confidence intervals. If the models agree, you have significant results; if the odds ratios are insignificant in all models, you will be in a better position to draw a line by describing the odds ratios as your research moves forward.
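The likelihood ratio test that `lrtest` performs can be sketched in plain Python. The log-likelihood values below are made up for illustration; with one restricted parameter the statistic is referred to a chi-square with 1 df, whose survival function reduces to erfc(sqrt(x/2)):

```python
import math

def likelihood_ratio_test(llf_restricted, llf_full, df=1):
    """LR statistic and p-value for two nested models.
    llf_* are maximized log-likelihoods; df is the number of
    restricted parameters (closed form given here only for df = 1,
    where P(chi2_1 > x) = erfc(sqrt(x / 2)))."""
    lr = 2.0 * (llf_full - llf_restricted)
    if df != 1:
        raise NotImplementedError("closed form given only for df = 1")
    p = math.erfc(math.sqrt(lr / 2.0))
    return lr, p

# Hypothetical log-likelihoods from two nested logistic models
lr, p = likelihood_ratio_test(-520.7, -517.3)
print(f"LR = {lr:.2f}, p = {p:.4f}")
```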
- Dear Hassan Amini,

if I could understand the main research question and/or hypothesis you are trying to address with this study, plus general information on what you are calling a "longitudinal study", maybe I could attempt to answer your question about "comparing two odds ratios (one at baseline, another measured at follow-up time)". Without this type of information it is impossible to "comprehensively" and "effectively" answer your question.
- Dear Hassan, I am surprised there is no reference to Cohen's d. This measure is used to compare outcomes in statistical meta-analysis. See Wolf, "Meta-Analysis: Quantitative Methods for Research Synthesis." Beverly Hills, Sage, 1986 (series: Quantitative Applications in the Social Sciences, no. 59).

Or just google "meta-analysis odds ratio": about 174,000 hits.
- Eduardo Simoes has given the correct answer: there is no correct answer without further information about the design.

Both Sathish and Elias have offered procedures that are OK if the odds ratios are assumed independent. Checking for overlap of CIs is slightly inferior, since overlap of two (1−α) CIs does not guarantee that the difference is non-significant at the α level (however, failure to overlap does imply significance at the α level).

But since you have mentioned a longitudinal study, the odds ratios are likely not independent -- if you have the same individuals followed over time, they will not be. Only if you have taken a new sample from the population at each time point would the independence assumption be valid.

It would also be helpful to know whether the exposure of interest is time-varying or constant.
- Ah, yesterday I was thinking of ORs from different studies. It's probably my ignorance, but what's the problem within the same study? Why not compare them like one compares anything else: larger or smaller by so much? When 95% CIs are given, one can see at a glance whether or not the difference is significant at that level.
- Hi Flip: The problem is the same as "why should I not use an independent samples t-test when I have paired (e.g., pre-post) data?". The information from observing N units 2 times is not the same as observing N units on one occasion and N different units on a different occasion, because usually a unit's response at occasion one carries information about its response at occasion two. The correlation is typically (but not always) positive: A subject tends to resemble itself more than it resembles a randomly selected member of the same population.

Sathish gives a formula for the SE of the difference of log-ORs assuming independence. This is a special case of the general formula, which subtracts a covariance term. When the pre-post correlation is positive, then, the SE for the difference of log-ORs will be smaller than under independent sampling. This is a consequence of a unit "serving as its own control": within-subject effects such as changes over time are estimated more precisely than under independence. Ignoring the dependence can be expected to produce tests that are too conservative and confidence intervals for the difference that are too wide.
- Thanks Jeffrey for taking the trouble to make clear that different samples need different computations than the same sample measured at different times and/or under different conditions.
- Here's how I would model it using logistic regression:

logit = b0 + b1·E + b2·T + b3·(E×T)

Let E be exposure (1: exposed; 0: unexposed); T be index variable for time (0: t0; 1: t1); b0-b3 be regression coefficients.

Thus, OR at t0 is Exp(b1), OR at t1 is Exp(b1+b3), and OR comparing t1 to t0 is Exp(b3). Exp(b3) and its p-value are what you are looking for.
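When this model is fitted to the eight cell counts alone (no other covariates), it is saturated, so exp(b3) and its test have a closed form: exp(b3) is the ratio of the two ORs, and SE(b3) is the square root of the sum of the reciprocals of all eight cells. A minimal Python sketch, using made-up counts since the actual c1–c8 are unspecified:

```python
import math

def or_2x2(cases_exp, noncases_exp, cases_unexp, noncases_unexp):
    """Odds ratio from a 2x2 table."""
    return (cases_exp * noncases_unexp) / (noncases_exp * cases_unexp)

# Hypothetical cell counts at each time point (purely illustrative)
t0 = dict(cases_exp=25, noncases_exp=75, cases_unexp=15, noncases_unexp=85)
t1 = dict(cases_exp=40, noncases_exp=60, cases_unexp=18, noncases_unexp=82)

or_t0 = or_2x2(**t0)   # exp(b1)
or_t1 = or_2x2(**t1)   # exp(b1 + b3)
ratio = or_t1 / or_t0  # exp(b3)

# SE(b3) in the saturated model: sqrt of the summed reciprocal cell counts
se_b3 = math.sqrt(sum(1.0 / n for n in list(t0.values()) + list(t1.values())))
z = math.log(ratio) / se_b3
p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value for b3 = 0
print(f"OR(t0)={or_t0:.2f}  OR(t1)={or_t1:.2f}  exp(b3)={ratio:.2f}  p={p:.3f}")
```

This reproduces, for two independent samples, the Wald test that the fitted logistic regression would report for the interaction coefficient b3.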

Here's an example of how you can set up the data:

E T D COUNT

1 0 0 c1

1 1 0 c2

1 0 1 c3

1 1 1 c4

0 0 0 c5

0 1 0 c6

0 0 1 c7

0 1 1 c8
- Hsin-Yi Weng's logistic regression is correct unless the subjects are measured repeatedly over time. If the same subjects are measured at t0 and t1, set up the data like so (let S = subject):

S T E D

1 1 1 0

1 0 1 1

2 1 1 1

2 0 1 1

3 1 0 0

3 0 0 0

4 1 0 0

4 0 0 1

5 1 1 0

5 0 0 0

...etc...

Then the model is the same as Hsin-Yi's, but incorporates a subject-specific random effect u_i to account for the correlation between measurements on the i-th subject:

logit = u_i + b0 + b1·E + b2·T + b3·(E×T)

The u_i are typically assumed to come from a normal distribution with mean zero and unknown variance.

In SAS, e.g., this model could be estimated by:

PROC GLIMMIX DATA=whatever;

CLASS S T E D;

MODEL D = E T E*T /DIST=BINARY;

RANDOM INT/SUBJECT=S;

RUN;

As Hsin-Yi explained, the interaction term E*T is the effect of interest. The exposure could be time-varying.

This random-effects approach is not the only solution; you could also look into generalized estimating equations (GEE). In this particular situation (binary outcome with only two observations per unit), I seem to recall that, depending on the estimation procedure, there may be some bias in the parameter estimates of the random-effects model.
- Thanks Jeff. I meant to further explain repeated-measures analysis. It's very easy to fit this random-effects logistic model in Stata too.


## Popular Answers

Eduardo J Simoes · University of Missouri: if I could understand the main research question and/or hypothesis you are trying to address with this study, plus general information on what you are calling a "longitudinal study", maybe I could attempt to answer your question about "comparing two odds ratios (one at baseline, another measured at follow-up time)". Without this type of information it is impossible to "comprehensively" and "effectively" answer your question.

Paul Vaucher · University of Geneva: In Stata, see the help entries for the following commands:

xtlogit

est store

lrtest