# P-value vs. trend: Is it plausible that a non-significant trend will become significant when the sample size is increased?

In many cases there is a positive trend between two variables, but the relationship is not statistically significant. When the p-value is only slightly above 0.05, the researcher may expect that increasing the sample size would make the relationship significant. On the other hand, the investigated relationship may instead become clearly non-significant. What do you think?
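As a hypothetical illustration (the effect size and sample sizes below are my own assumptions, not values from this discussion), one can simulate how often a fixed true effect reaches p < 0.05 at different sample sizes. A "trend" at small n may or may not become significant in any single larger study, but the long-run rate of significant results (the power) does grow with n:

```python
import math
import random
from statistics import NormalDist, mean, stdev

def two_sample_p(x, y):
    # Welch-style z approximation to the two-sample test (normal approximation;
    # adequate for moderate n, a real analysis would use a t distribution)
    se = math.sqrt(stdev(x) ** 2 / len(x) + stdev(y) ** 2 / len(y))
    z = (mean(x) - mean(y)) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

random.seed(1)
TRUE_EFFECT = 0.3  # hypothetical standardized effect size (assumption)

def empirical_power(n, reps=500):
    """Fraction of simulated experiments with p < 0.05 at n per group."""
    hits = 0
    for _ in range(reps):
        x = [random.gauss(TRUE_EFFECT, 1) for _ in range(n)]
        y = [random.gauss(0, 1) for _ in range(n)]
        if two_sample_p(x, y) < 0.05:
            hits += 1
    return hits / reps

for n in (20, 50, 200):
    print(f"n = {n:3d} per group: empirical power ≈ {empirical_power(n):.2f}")
```

With this (assumed) small true effect, most small studies show only a non-significant "trend", while large studies usually reach significance. Note that this says nothing about whether any *particular* replication will cross the 0.05 line.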

## Popular Answers

**Jochen Wilhelm** · Justus-Liebig-Universität Gießen

Interpreting the trend (or, more generally, the effect) is not decision-theoretic. It is an "inferential" approach. The effect is estimated from the available data, and one can try to judge whether the estimated effect is somehow relevant and whether the precision of the estimate is sufficient to be confident enough about the usefulness of the model. There is NO distinction between "significant" and "non-significant". The question is about the most likely size of the effect, given the available data. There is NO control of error rates here. The reasoning is based on logical/reasonable arguments and models.

If you aim to separate significant from non-significant findings, then any thinking about trends/effects is a waste of time. If you do so, you should define, before you even start the experiment, a minimum relevant effect you wish to detect as a "significant result" with a desired power. This, plus knowing the expected variance, enables you to calculate the required sample size. Then, when this planned experiment gives a "non-significant" result, you know that it was unlikely enough (depending on the power) to get this result given that there was a relevant effect. This also enables you to control a maximum false-negative rate. Again, this still does not tell you how likely your decision in this particular case is right or wrong. You can only control worst-case error rates.
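The planning step described above can be sketched with the standard normal-approximation sample-size formula for a two-sided two-sample comparison (the effect size, alpha, and power below are illustrative assumptions, not values from this thread):

```python
import math
from statistics import NormalDist

def required_n_per_group(effect_size, alpha=0.05, power=0.80):
    """Normal-approximation sample size for a two-sided two-sample test.

    effect_size is the minimum relevant standardized difference (Cohen's d)
    that must be chosen *before* the experiment, as the answer recommends.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # e.g. 0.84 for power = 0.80
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

print(required_n_per_group(0.5))  # → 63 per group (≈64 with the exact t-based calculation)
```

Only with such a pre-specified design does a "non-significant" result carry the error-rate guarantee the answer describes; computing the sample size after seeing a near-significant trend does not.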

**Jochen Wilhelm** · Justus-Liebig-Universität Gießen

Further, there are two stumbling blocks in your answer:

1) "According to the Central Limit [Theorem] you will eventually end up with a normal curve if you keep increasing your sample size." This "eventually" lies in the infinite future, given the finite amount of time required for each measurement. And it is restricted to variables with a finite variance. The mean position of a scattered photon on an X-ray screen, for instance, will never follow a normal distribution.

2) "You will fin[d] your answers already in the literature." Many subject-specific (non-statistical!) papers do present results like "although the [effect] was non-significant, there was a clear trend... (p=0.061)...". You find this quite frequently, and any poor scientist (non-statistician) can easily become confused. Even statistics reviews in biomedical journals are sometimes wrong with respect to some statements. For instance, read Critical Care, Vol. 6, No. 3 (http://ccforum.com/content/6/3/222), where the authors state (several times!) that "A P value is the probability that an observed effect is simply due to chance". *fail* Statistics textbooks (at least those for bio/medical readers) very often present an unfortunate and wrong conglomeration of Neyman/Pearson's and Fisher's approaches to testing. "Reading the literature", as you suggest, can easily (and likely will) lead to confusion and misconceptions.

I prefer that people ask when they are unsure instead of being left alone with books they might not understand well and drawing the wrong conclusions. This forum is a place for such questions. And questions posted here quite often prompt others to think about something; new ideas and insights emerge, and topics develop beyond the original focus. Answers may be found wrong or suboptimal by others and can thus be refined. That is to say: we all(!) can learn and profit.