# What is 'P' value in any research study? How to determine/calculate it?

Is it really necessary that every study should have its own 'P' value to prove its significance?

Kindly share your expert opinion.

I am eagerly expecting some interesting replies, especially from the respected statisticians on this website.


## Popular Answers

Mauricio Abreu Pinto Peixoto · Núcleo de Tecnologia Educacional para a Saúde

"...To understand both the original purpose of the p-value p and the reasons p is so often misinterpreted, it helps to know that p constitutes the main result of statistical significance testing (not to be confused with hypothesis testing), popularized by Ronald A. Fisher. Fisher promoted this testing as a method of statistical inference. To call this testing inferential is misleading, however, since inference makes statements about general hypotheses based on observed data, such as the post-experimental probability a hypothesis is true. As explained above, p is instead a statement about data assuming the null hypothesis; consequently, indiscriminately considering p as an inferential result can lead to confusion, including many of the misinterpretations noted in the next section.

On the other hand, Bayesian inference, the main alternative to significance testing, generates probabilistic statements about hypotheses based on data (and a priori estimates), and therefore truly constitutes inference. Bayesian methods can, for instance, calculate the probability that the null hypothesis H0 above is true assuming an a priori estimate of the probability that a coin is unfair. Since a priori we would be quite surprised that a coin could consistently give 75% heads, a Bayesian analysis would find the null hypothesis (that the coin is fair) quite probable even if a test gave 15 heads out of 20 tries (which as we saw above is considered a "significant" result at the 5% level according to its p-value).

Strictly speaking, then, p is a statement about data rather than about any hypothesis, and hence it is not inferential. This raises the question, though, of how science has been able to advance using significance testing. The reason is that, in many situations, p approximates some useful post-experimental probabilities about hypotheses, such as the post-experimental probability of the null hypothesis. When this approximation holds, it could help a researcher to judge the post-experimental plausibility of a hypothesis.[4][5][6][7] Even so, this approximation does not eliminate the need for caution in interpreting p inferentially, as shown in the Jeffreys–Lindley paradox mentioned below.

Misunderstandings

The data obtained by comparing the p-value to a significance level will yield one of two results: either the null hypothesis is rejected, or the null hypothesis cannot be rejected at that significance level (which however does not imply that the null hypothesis is true). A small p-value that indicates statistical significance does not indicate that an alternative hypothesis is ipso facto correct.

Despite the ubiquity of p-value tests, this particular test for statistical significance has come under heavy criticism due both to its inherent shortcomings and the potential for misinterpretation.

There are several common misunderstandings about p-values.[8][9]

The p-value is not the probability that the null hypothesis is true.

In fact, frequentist statistics does not, and cannot, attach probabilities to hypotheses. Comparison of Bayesian and classical approaches shows that a p-value can be very close to zero while the posterior probability of the null is very close to unity (if there is no alternative hypothesis with a large enough a priori probability and which would explain the results more easily). This is the Jeffreys–Lindley paradox.

The p-value is not the probability that a finding is "merely a fluke."

As the calculation of a p-value is based on the assumption that a finding is the product of chance alone, it patently cannot also be used to gauge the probability of that assumption being true. This is different from the real meaning which is that the p-value is the chance of obtaining such results if the null hypothesis is true.

The p-value is not the probability of falsely rejecting the null hypothesis. This error is a version of the so-called prosecutor's fallacy.

The p-value is not the probability that a replicating experiment would not yield the same conclusion.

1 − (p-value) is not the probability of the alternative hypothesis being true (see (1)).

The significance level of the test is not determined by the p-value.

The significance level of a test is a value that should be decided upon by the agent interpreting the data before the data are viewed, and is compared against the p-value or any other statistic calculated after the test has been performed. (However, reporting a p-value is more useful than simply saying that the results were or were not significant at a given level, and allows the reader to decide for himself whether to consider the results significant.)

The p-value does not indicate the size or importance of the observed effect (compare with effect size). The two do vary together however – the larger the effect, the smaller sample size will be required to get a significant p-value."
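The coin example in the quotation above (15 heads in 20 tosses being "significant" at the 5% level) can be checked numerically. The following is a minimal sketch in Python, using only the standard library, that computes the two-sided p-value directly from the binomial distribution by summing the probabilities of all outcomes at least as unlikely as the one observed:

```python
from math import comb

def two_sided_binom_p(k, n, p=0.5):
    """Two-sided binomial p-value for k successes in n trials under H0: P(success) = p."""
    # Probability of each possible outcome under the null hypothesis
    probs = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    observed = probs[k]
    # Sum the probabilities of all outcomes no more likely than the observed one
    # (small tolerance guards against floating-point ties in the symmetric case)
    return sum(q for q in probs if q <= observed + 1e-12)

p_value = two_sided_binom_p(15, 20)
print(round(p_value, 4))  # ≈ 0.0414, below the conventional 0.05 threshold
```

This agrees with the quotation: the result is "significant" at the 5% level, yet, as the Bayesian argument above shows, that alone does not make the null hypothesis improbable.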

## All Answers (21)

Owais Raza · Tehran University of Medical Sciences

It is worth mentioning that, before we start our research, we need to set a criterion for the level of significance at which we will call an observed difference 'statistically' significant. This criterion is known as the ALPHA LEVEL (conventionally set at 0.05). Once our data have generated a p value, we check whether this p value is less than the set criterion (read: level of significance). If so, we can conclude that the observed difference is statistically significant.

BUT the p value alone is not sufficient to judge whether the observed effect is meaningful; we also need to consider the Confidence Interval (CI).

Technically, the p value is a measure of how likely it is that an observed difference between groups could arise by chance alone. A low p value therefore means the observed difference is unlikely under the assumption that no real difference exists.

Mauricio Abreu Pinto Peixoto · Núcleo de Tecnologia Educacional para a Saúde

1) If "significance" means "statistical significance", the answer is a big YES.

2) Without jargon: suppose you are testing a difference in a substance "X", and you find that normal subjects have, let's say, 56 units less than compromised ones. You may ask: is this difference worth believing? Couldn't it be caused just by random events? Well, if you perform a significance test, you will find the p-value associated with this difference. If it is big, you may suspect randomness. If not, it is reasonable to believe that ill subjects really have lower values (please don't forget to calculate confidence intervals, but that is another subject).

3) But what is big? What is small? Well, if you don't want to stress your brain, choose 0.05 and go on. But a clinical decision demands more than this. What is the meaning of 0.05? Again with no jargon: it means that a difference of 56 units or larger would be produced by random factors alone in about 5% of the cases IF you could repeat your experiment a hundred times and there were really no difference.

4) And so, is 5% a small chance? I don't know, and probably neither do you. Whether a value is big or small depends on clinical factors and on the deleterious consequences of inadequate procedures. And so I return to the meaning of significance: clinical or statistical?

More: See

http://en.wikipedia.org/wiki/P-value

http://galton.uchicago.edu/~thisted/Distribute/pvalue.pdf

http://www.youtube.com/watch?v=ZFXy_UdlQJg
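The "substance X" scenario above can be illustrated with a permutation test, one simple and assumption-light way to ask how often randomly shuffled group labels would reproduce a difference as large as the one observed. The measurements below are hypothetical, chosen only for illustration:

```python
import random

random.seed(0)

# Hypothetical measurements of substance "X" (illustrative numbers only)
normal      = [210, 198, 225, 190, 205, 215, 200, 220]
compromised = [150, 160, 145, 170, 155, 165, 140, 158]

observed = sum(normal) / len(normal) - sum(compromised) / len(compromised)

# Permutation test: how often does a random shuffle of the group labels
# produce a mean difference at least as large as the observed one?
pooled = normal + compromised
n = len(normal)
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[:n]) / n - sum(pooled[n:]) / (len(pooled) - n)
    if abs(diff) >= abs(observed):
        count += 1

p_value = count / trials
print(observed, p_value)  # the p value is tiny: chance alone rarely produces such a gap
```

Here the two groups barely overlap, so almost no relabelling reproduces the observed gap and the estimated p value is close to zero; with heavily overlapping groups the p value would be large, matching point 2) above.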

Venkatesh Shanbhag

Let's say we have fixed an alpha risk (the agreed risk of declaring a relation between the variables under study when it exists only by chance) of 0.02 or 2%.

A suitable test is then used to determine the P value from the samples.

NOW,

1.) If the calculated p value is 0.01, it means that there is a 1% chance that the observed relation would arise by chance alone. Since 1% is less than our agreed risk of 2%, we conclude that a real relation exists between the 2 variables.

2.) If the calculated p value is 0.05, it means that there is a 5% chance that the observed relation would arise by chance alone. Since 5% is more than our agreed risk of 2%, we cannot conclude that a real relation exists between the 2 variables.

The P value tells us whether the data support a real relation between the variables or not. To find out what type of relation exists, or why the relation exists, we may have to use other methods, including experience, common sense, etc.

Hope this helps - Others, please correct me if I am wrong in my explanation.
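The decision rule described in the answers above reduces to a simple comparison of the p value against the alpha level chosen before the data were seen. A minimal sketch in Python, with a hypothetical p value for illustration:

```python
alpha = 0.02    # significance level, fixed BEFORE looking at the data
p_value = 0.01  # hypothetical p value returned by a suitable test

# Reject the null hypothesis only when p falls below alpha
if p_value < alpha:
    verdict = "statistically significant at the chosen alpha level"
else:
    verdict = "not statistically significant at the chosen alpha level"

print(verdict)  # prints the first message, since 0.01 < 0.02
```

Note that this yields only a binary decision; as the quoted Wikipedia passage stresses, reporting the p value itself is more informative than reporting only whether the threshold was crossed.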

Agustin Oramas · Universidad de Occidente

In most common quantitative studies, the "p" value is defined as the probability of obtaining a result as extreme as (or more extreme than) the one observed, under the null hypothesis of the test.

Sai Kishore · Chettinad Hospital & Research Institute

Thanks everyone for your comments and suggestions!
