Question

# What is the relationship between R-squared and p-value in a regression?

If you plot x vs. y and all your data lie on a straight line, your p-value is < 0.05 and your R2 = 1.0. On the other hand, if your data look like a cloud, your R2 drops to 0.0 and your p-value rises. What is the difference between R2 and the p-value in a linear regression?

14th Jul, 2020
Gang (John) Xie
Charles Sturt University
The dichotomous use of the p-value in statistical inference is worse than useless:
(1) the p-value is a conditional probability, i.e., Pr(Data | H0 is true), but what we want to know is Pr(H0 is true | Data);
(2) the calculation of the p-value depends on at least three things: the effect size (e.g., the difference of two means due to two treatments, an odds ratio, etc.), the uncertainty of the point estimate (e.g., its standard deviation), and the sample size;
(3) it is a standard result of statistical theory that, from one set of sample data, we cannot tell whether the observed effect size is due to treatment or to random variation (i.e., the sampling distribution requires that we repeat the same experiment many times before any confirmatory conclusion can be made);
(4) if H0 is false by nature (as it is for the majority of real-life data and research questions), the p-value tends to 0 as the sample size grows.
In summary, the p-value-based Null Hypothesis Significance Test (NHST) is logically indefensible, technically flawed, and practically damaging.
3 Recommendations

15th Jun, 2016
Faye Anderson
Harrisburg University of Science and Technology
There is no fixed association/relationship between the p-value and R-square; it all depends on the data (i.e., it is contextual).
The R-square value tells you how much variation is explained by your model: an R-square of 0.1 means that your model explains 10% of the variation in the data, and the greater the R-square, the better the model. The p-value, by contrast, comes from the F test of the null hypothesis that "the intercept-only model and your model fit equally well". So if the p-value is less than the significance level (usually 0.05), your model fits the data significantly better than the intercept-only model.
Thus you have four scenarios:
1) low R-square and low p-value (p-value <= 0.05)
2) low R-square and high p-value (p-value > 0.05)
3) high R-square and low p-value
4) high R-square and high p-value
Interpretation:
1) means that your model doesn't explain much of the variation in the data, but it is significant (better than not having a model)
2) means that your model doesn't explain much of the variation in the data and it is not significant (worst scenario)
3) means your model explains a lot of the variation in the data and is significant (best scenario)
4) means that your model explains a lot of the variation in the data but is not significant (model is worthless)
104 Recommendations
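The four scenarios above can be reproduced with a small simulation. Below is a pure-Python sketch (all function names, seeds, and simulated numbers are illustrative, not from this thread); `t_sf` is a rough numerical tail integral of the t-distribution, adequate only for a demo:

```python
import math
import random

def t_sf(t, df, steps=20000, upper=60.0):
    """P(T > t) for Student's t with df degrees of freedom (trapezoidal rule)."""
    if t >= upper:
        return 0.0
    c = math.exp(math.lgamma((df + 1) / 2) - math.lgamma(df / 2)) / math.sqrt(df * math.pi)
    pdf = lambda u: c * (1 + u * u / df) ** (-(df + 1) / 2)
    h = (upper - t) / steps
    area = 0.5 * (pdf(t) + pdf(upper))
    for i in range(1, steps):
        area += pdf(t + i * h)
    return area * h

def r2_and_p(x, y):
    """Simple regression y = a + b*x: return (R-squared, two-sided p for H0: b = 0)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1 - ss_res / ss_tot
    t = abs(b) / math.sqrt(ss_res / (n - 2) / sxx)
    return r2, 2 * t_sf(t, n - 2)

random.seed(1)
def sample(n, slope, noise):
    x = [random.uniform(0, 10) for _ in range(n)]
    return x, [slope * xi + random.gauss(0, noise) for xi in x]

for label, (x, y) in [
    ("weak signal, huge n  (typically scenario 1)", sample(3000, 0.15, 5)),
    ("weak signal, small n (typically scenario 2)", sample(30, 0.10, 5)),
    ("strong signal        (scenario 3)          ", sample(30, 2.00, 1)),
]:
    r2, p = r2_and_p(x, y)
    print(label, f"R2 = {r2:.3f}, p = {p:.3g}")
```

Note how the same weak signal gives a low p-value once n is large enough: significance and explained variation move independently.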

15th Jun, 2016
Fabrice Clerot
Orange Labs
These are two quantities of very different natures.
R2 is just a measure of the goodness of fit of your regression, a purely "geometrical" quantity.
The p-value is a measure of evidence against the hypothesis that the regression coefficient is zero (usually; nothing prevents you from testing another value for the coefficient, but usually that value is zero): the lower the p-value, the stronger the evidence against the hypothesis. Therefore a low p-value does not tell you anything about the goodness of fit (that is, about your regression model with the non-zero coefficient); it just tells you that a model with the coefficient set to zero is extremely unlikely.
The above is just hand-waving; if you want to dive into the (existing, even monotonic) relationship between R2 and the p-value, look at the second answer in the following link.
And if you want to play it safe, keep the first answer, or the last lines of the second answer, in mind: across models with different characteristics, there is no relationship between the two quantities!
2 Recommendations
15th Jun, 2016
Rita Lima
Italian National Institute of Statistics
Thank you!
Might it be better to gauge a probability, rather than just the presence of a condition?
16th Jun, 2016
Ramona L. Paetzold
Texas A&M University
R-squared is about explanatory power; the p-value is the "probability" attached to the likelihood of getting your data results (or results more extreme) under the model you have. It is attached to the F statistic that tests the overall explanatory power of the model. Neither indicates that the model fits.
So, if R-squared is 1 and you have only one predictor, this is the same as saying that the correlation between x and y is one and the data fall along a straight line with a positive slope. If R-squared is near 0, the data tend to be distributed so that the best-fitting line is close to horizontal; in this case the correlation is zero. It is possible to have many configurations of x-y data, all with the same R-squared and approximately the same p-value, but with very different plots. That is because "fit" and "explanatory power" are quite different.
Ideally, you would like a model with good fit AND reasonable explanatory power. That explanatory power should be significant (p less than .05), and it should be far enough from 0 that the "effect size" has practical meaning. With enough power, R-squared values very close to zero can be statistically significant, but that doesn't mean they have practical significance; it is a statistical artifact.
So: plot the data, see whether there is a linear trend, analyze the residuals (the points off the line) to see whether the underlying assumptions are met, and, if so, conclude that the fit is good. Then interpret R-squared as I've indicated above.
2 Recommendations
20th Jun, 2016
Salvatore S. Mangiafico
Rutgers, The State University of New Jersey
This link has a fun applet with a linear regression where you can vary the number of observations and variance to see how these affect the p-value and r-squared value.
The p-value is close to the bottom of the screen.
For the r-squared, it's on the right, but you have to click on the Variance Partition tab and watch the green bar.
23rd Jun, 2016
Rita Lima
Italian National Institute of Statistics
Right! This answer directly deals with my question. Look at the relationship between F and R2 in linear modelling in the following link:
10th Jan, 2017
Minan Al-Ezzi
Queen Mary, University of London
I ran one-to-one correlations between the dependent variable and the predictors, and there was no correlation. Then I ran a linear regression of the dependent variable on all the predictors together to double-check the analysis, and I got the "worst scenario" of a very low r-squared and a very high p-value. What do I need to do afterwards? Obviously there is no correlation between the dependent variable and the predictors; what is the next step, please? And how do I interpret these results?
10th Jan, 2017
Ramona L. Paetzold
Texas A&M University
You have no explanatory power in your equation, which could be a real phenomenon, or it could be lack of power (such as too small an n), or it could be violations of assumptions (are the errors normal? Are the x-y relationships truly linear? Etc.). If you don't have good underlying fit for your model, your predictors may not explain variation in y. I'd check that all regression assumptions are reasonably met. Also, do a power analysis. But since neither predictor seemed to have a relationship with y, it's possible there is no joint relationship either. Then you report that, and report your small effect size (r-squared).
2 Recommendations
10th Jan, 2017
Salvatore S. Mangiafico
Rutgers, The State University of New Jersey
Minan Al-Ezzi,
One thing to check is whether there are non-linear correlations. Plot your dependent variable against each of the predictors and see if there is something interesting.
It's possible your dependent variable is related to the predictors, but in a non-linear way.
2 Recommendations
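The point above is easy to demonstrate in pure Python: a strong but non-linear relationship can show essentially zero *linear* correlation (the quadratic data below are illustrative, not from this thread):

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

x = [i / 10 for i in range(-50, 51)]  # symmetric around 0
y = [xi ** 2 for xi in x]             # perfect quadratic dependence on x
print(pearson_r(x, y))                # essentially 0: no *linear* correlation
```

A straight-line fit to these data would give R2 near 0 and a large p-value even though y is a deterministic function of x; a scatterplot reveals the relationship immediately.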
10th Jan, 2017
Ramona L. Paetzold
Texas A&M University
Yes, that's what I said (in part). That would mean lack of fit and the need for a higher-order model. But don't forget power and other assumptions.
3 Recommendations
11th Jan, 2017
Minan Al-Ezzi
Queen Mary, University of London
I've just done a power analysis and it looks good (1, p = 0.00).
11th Jan, 2017
Minan Al-Ezzi
Queen Mary, University of London
Also here is one of the plots
11th Jan, 2017
Ramona L. Paetzold
Texas A&M University
You should not have a p-value for your power analysis; the question is how much power you have to detect a small (say, .2) effect size. Your plot shows a U-shaped curvilinear relationship, so you need a quadratic model (what does the other plot look like?). Make sure to include both the linear and quadratic forms of the predictor.
7 Recommendations
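The advice above can be sketched numerically: on U-shaped data, a linear fit explains almost nothing while a quadratic fit explains most of the variation. This pure-Python example is illustrative (function names, seed, and data are mine); the 3x3 normal equations are solved by Gaussian elimination, which is fine for a demo but numerically fragile in general:

```python
import random

def fit_poly_r2(x, y, degree):
    """Least-squares polynomial fit via normal equations; returns R-squared."""
    n, k = len(x), degree + 1
    X = [[xi ** j for j in range(k)] for xi in x]
    A = [[sum(X[i][r] * X[i][c] for i in range(n)) for c in range(k)] for r in range(k)]
    bvec = [sum(X[i][r] * y[i] for i in range(n)) for r in range(k)]
    # Gaussian elimination with partial pivoting on A c = bvec
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        bvec[col], bvec[piv] = bvec[piv], bvec[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            bvec[r] -= f * bvec[col]
    coef = [0.0] * k
    for r in range(k - 1, -1, -1):
        coef[r] = (bvec[r] - sum(A[r][c] * coef[c] for c in range(r + 1, k))) / A[r][r]
    my = sum(y) / n
    ss_res = sum((yi - sum(c * xi ** j for j, c in enumerate(coef))) ** 2
                 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

random.seed(7)
x = [i / 10 for i in range(100)]                       # 0 .. 9.9
y = [(xi - 5) ** 2 + random.gauss(0, 1) for xi in x]   # U-shape plus noise
print("linear    R2:", round(fit_poly_r2(x, y, 1), 3))  # near zero
print("quadratic R2:", round(fit_poly_r2(x, y, 2), 3))  # high
```

Note that "include both the linear and quadratic forms" corresponds to fitting degree 2, which contains the degree-1 term as well.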
3rd Apr, 2018
Hernani Manalo
Higher Colleges of Technology, Dubai, UAE
Can anyone help me explain this? I am not satisfied with my statistician's explanation:
"Linear regression was used to determine the dimension of OP that has the strongest impact on EE. The result shows that A has a significant impact on EE and can be modelled as EE = .529*A + 2.181. Furthermore, 41.5% of EE is attributed to A [r2 = 0.415, F(1, 51) = 36.144, p = 0.000]."
Is it correct to say that 41.5% of EE is attributed to A? This is what I do not understand.
For your information, there are 4 dimensions, with the following R-squared values: B .359, C .30, and D .33
30th Jun, 2018
Deepak Panday
University of Hertfordshire
Out of the four conditions:
1) low R-square and low p-value
2) low R-square and high p-value
3) high R-square and low p-value
4) high R-square and high p-value
case 3 corresponds to the strongest relationship and case 2 to the weakest.
1 Recommendation
30th Jun, 2018
Ramona L. Paetzold
Texas A&M University
Again, R-squared is only about explanatory power, not model fit. We should de-emphasize p-values, because significance is so easily manipulated (sample size, model changes, etc.); that's what p-hacking is about. It's better to report R-squared, understand it in the context of your model, and then run residual analyses to see whether the model is appropriate.
2 Recommendations
9th Aug, 2018
Matthew Brenneman
Embry-Riddle Aeronautical University
A low R-squared with a low p-value is generally a sign of a very large sample size. The p-value comes from the t-statistics, and their standard errors go to zero as n goes to infinity. The reason is that you are testing whether the coefficients are EXACTLY zero; as the sample size increases, you can resolve any difference from zero, no matter how small. In simple layman's terms, it is the difference between "practical significance" and "statistical significance".
A practical solution for this type of scenario is bootstrapping (taking decent-sized resamples) to get a distribution of test-statistic values from which you can then obtain confidence intervals, especially for estimators like the coefficient of determination, whose distribution is very complex.
1 Recommendation
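The bootstrap idea above can be sketched in pure Python: resample the (x, y) pairs with replacement, recompute R-squared for each resample, and read a percentile confidence interval off the bootstrap distribution (the data, seed, and resample counts below are illustrative choices):

```python
import random

def r_squared(pairs):
    """R-squared of a simple OLS fit y = a + b*x over (x, y) pairs."""
    n = len(pairs)
    mx = sum(p[0] for p in pairs) / n
    my = sum(p[1] for p in pairs) / n
    sxx = sum((p[0] - mx) ** 2 for p in pairs)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in pairs)
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yv - (a + b * xv)) ** 2 for xv, yv in pairs)
    ss_tot = sum((yv - my) ** 2 for _, yv in pairs)
    return 1 - ss_res / ss_tot

random.seed(42)
data = [(i / 10, 0.5 * i / 10 + random.gauss(0, 1)) for i in range(200)]

# 2000 bootstrap resamples of the same size as the data
boot = sorted(
    r_squared([random.choice(data) for _ in range(len(data))])
    for _ in range(2000)
)
lo, hi = boot[int(0.025 * 2000)], boot[int(0.975 * 2000)]
print(f"R2 = {r_squared(data):.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
```

Resampling whole (x, y) pairs (rather than residuals) is the simplest "case resampling" variant; as noted in the next reply, it needs a reasonably large original sample to be trustworthy.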
9th Aug, 2018
Ramona L. Paetzold
Texas A&M University
But the low p-value could also be for the overall F test of all the coefficients simultaneously being equal to zero (which in reality we know they never are). That F test is also a test of the theoretical parameter for R-squared, which is a statistic. If the sample size is too large, it is true that virtually any model will yield either an F test with a low p-value, or individual t tests with low p-values. The low R-squared is saying that a small proportion of variance in y is being explained by the regression model, and the low p-value for the F test is saying that the proportion is significantly different than 0. It may not be practically different than 0, however.
Bootstrapping in order to obtain confidence intervals is a good way to get estimates, as long as you use big enough sample sizes. As Brad Efron cautioned, bootstrapping is not appropriate if sample sizes are too small. In those situations, Monte Carlo simulation provides a better technique.
As I mentioned previously, R-squared is NOT a measure of "fit" of the model. You also need to examine the residuals, in detail, to determine whether the underlying assumptions for the model are met and whether any potential outliers are influential. Curvilinear, circular, and other data patterns can provide identical R-squared values to linear relationships, but obviously the models do not all "fit" well.
1 Recommendation
29th Oct, 2018
EUROVENT
The p-value for each term tests the null hypothesis that the coefficient is equal to zero (no effect). A low p-value (< 0.05) indicates that you can reject the null hypothesis. In other words, a predictor that has a low p-value is likely to be a meaningful addition to your model because changes in the predictor's value are related to changes in the response variable.
1 Recommendation
28th Nov, 2018
Mark Ebden
University of Toronto
The relationship between the p-value and R-squared (call it R2 below) is, for a dataset with n points:
p = 2*(1 - F(sqrt(R2/(1-R2)*(n-2))))
where F is the CDF of the t-distribution with n-2 degrees of freedom. In agreement with the other answers here, if n increases while R-squared remains constant, p will decrease. So to move among all four conditions described by Deepak, n must change.
The formula above is for a plot of y vs x, as asked in the question. For multiple regression with m predictors, use p = 1 - F(R2/(1-R2)*(n-m-1)/m), where F is now the CDF of the F-distribution with (m, n-m-1) degrees of freedom.
Update in April 2020: Just noticed that Deepesh Machiwal asked here last year for my source. I derived the equation myself (I run a data consultancy) for fun in response to the OP. Sorry to say "I leave the proof as an exercise." :) Anyway, you'll find empirically that the equation will always work.
1 Recommendation
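The simple-regression equation above can be checked numerically without any special libraries: the slope t-statistic of an OLS fit equals sqrt(R2/(1-R2)*(n-2)), so the p-value computed from R2 and n matches the usual slope t-test. A pure-stdlib sketch (simulated data and function names are illustrative):

```python
import math
import random

def slope_t_and_r2(x, y):
    """OLS fit of y = a + b*x; returns (t-statistic for b, R-squared, n)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1 - ss_res / ss_tot
    se_b = math.sqrt(ss_res / (n - 2) / sxx)
    return b / se_b, r2, n

random.seed(0)
x = [random.uniform(0, 10) for _ in range(40)]
y = [1.5 * xi + random.gauss(0, 3) for xi in x]

t, r2, n = slope_t_and_r2(x, y)
t_from_r2 = math.sqrt(r2 / (1 - r2) * (n - 2))
# The two agree up to sign and rounding error, so 2*(1 - F(|t|)) gives the
# same p-value whether t comes from the fit or from R2 and n alone.
print(abs(t), t_from_r2)
```

This is also why, as noted above, increasing n with R2 held fixed drives the p-value down: the t-statistic grows like sqrt(n-2).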
1st Apr, 2019
Sardar Vallabhbhai Patel Institute of Technology
The null hypothesis is that the coefficient is equal to zero (no effect). A low p-value (< 0.05) lets you reject the null hypothesis. The p-value is calculated from the t-statistic, whose standard error shrinks toward zero as the sample size grows.
1 Recommendation
29th Jun, 2019
Vinayak Kharche
Reliance Industries Limited
If the p-value is less than 0.05, we reject the null hypothesis and can say that there is a significant relationship between the target variable and the features. The p-value can be calculated from the estimates and their standard errors; the standard error tends toward zero as the sample grows, at a rate that depends on how noisy your data are.
16th Sep, 2019
Miray Azer
LeMaitre Vascular
So if I am evaluating a six-standard calibration curve (triplicate random injections) that has R2 > 0.999, but the p-value is < 0.05 (0.000984),
does this mean a low p-value in my model is a good outcome, and that the predictor is a meaningful addition to my model?
20th Oct, 2019
Phani Bhusan Ghosh
Institute of Engineering & Management
Effective and important discussions.
17th Nov, 2019
Soukaina Filali Boubrahimi
Utah State University
R-squared is the coefficient of determination: it gives the % of the variation in y that is explained by the variation in x. The p-value, by contrast, is mostly used to determine feature/covariate importance.
2 Recommendations
20th Nov, 2019
Priyank J. Sharma
Florida Atlantic University
Please see this link to calculate the p-value of the correlation coefficient (r).
2 Recommendations
13th Apr, 2020
Universiti Utara Malaysia
R2-value and p-value: p = 2*(1 - F(sqrt(R2/(1-R2)*(n-2))))
1 Recommendation
13th Apr, 2020
James R Knaub
N/A
The most important thing to know about R-squared and p-values is that they are the two most overused, misinterpreted, and overvalued statistics one is likely to encounter. R-squared may have some nearly stand-alone usefulness, but without graphical evidence it is easily misinterpreted, so in practice forget about stand-alone usefulness there too. The p-value, however, wins the prize in that category: it is the most useless as a stand-alone value.
Even together, R-squared and p-values are not very helpful, if anyone is wondering about that.
Best wishes to all.
3 Recommendations
25th Apr, 2020
Renesh Bedre
Texas A&M University
24th Jun, 2020
Muhammed Ashraful Alam
Ministry of Health and Family Welfare, Bangladesh
I agree with the simple words of Soukaina Filali Boubrahimi.
1 Recommendation