Depending on your dependent/outcome variable, a negative value for your constant/intercept should not be a cause for concern. This simply means that the expected value on your dependent variable will be less than 0 when all independent/predictor variables are set to 0. For some dependent variables, this would be expected. For example, if the mean value of your dependent variable is negative, it would be no surprise whatsoever that the constant is negative; in fact, if you got a positive value for the constant in this situation, it might be cause for concern (depending on your independent variables).

Even if your dependent variable is typically/always positive (i.e., has a positive mean value), it wouldn't necessarily be surprising to have a negative constant. For example, consider an independent variable that has a strongly positive relationship to a dependent variable. The values of the dependent variable are positive and have a range from 1-5, and the values of the independent variable are positive and have a range from 100-110. In this case, it would not be surprising if the regression line crossed the x-axis somewhere between x=0 and x=100 (i.e., from the first quadrant to the fourth quadrant), which would result in a negative value for the constant.
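
To make this concrete, here is a minimal sketch of that scenario with fabricated numbers (the 1-5 outcome range, the 100-110 predictor range, and the noise level are all invented for illustration): fitting an ordinary least-squares line to such data yields a strongly negative intercept even though every observed outcome is positive.

```python
import numpy as np

# Synthetic data mimicking the example above (all numbers are made up):
# outcome ranges roughly 1-5, predictor ranges 100-110.
rng = np.random.default_rng(0)
x = rng.uniform(100, 110, size=200)                     # predictor: 100-110
y = 1 + 0.4 * (x - 100) + rng.normal(0, 0.2, size=200)  # outcome: roughly 1-5

slope, intercept = np.polyfit(x, y, deg=1)
print(f"intercept = {intercept:.1f}, slope = {slope:.2f}")
# The fitted intercept is strongly negative (around -39), because x = 0
# lies far below the observed predictor range: the line must cross the
# x-axis somewhere between x = 0 and x = 100.
```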

The bottom line is that you need to have a good sense of your model and the variables within it, and a negative value on the constant should not generally be a cause for concern. Typically, it is the overall relationships between the variables that will be of the most importance in a linear regression model, not the value of the constant.

Not necessarily... but it does mean it is time to take another look at your data, how you are operationally defining variables, and so forth. If you keep getting negative coefficients after a thorough check, then yes, the model as you are using it may be wrong.


I do not see any problem with this. The constant depends on how your dependent and independent variables are measured. For example, if your dependent variable is body mass (kg) and the independent variable is height (cm), your constant will be negative in most cases.
Does this make sense?

Like Tatiana, I do not see any problem with this. If your independent variables are not meaningful in the low value range, or if it is not possible for them to reach low values at all, you will get a negative coefficient for the constant.

Aug 29, 2012

Besnik A. Krasniqi · University of Prishtina and Staffordshire University

I share the same opinion. I do not see a problem with that sign.

Constants in a simple regression equation do not always make practical sense. They may be positive or negative but impractical. For example, an illustration in my text, The Statistical Imagination, 2nd ed., pp. 525-526, shows a correlation between height and weight with an equation of Y' = a + bX = -159.31 + 4.62X. As expected, the slope, b, is positive. The Y-intercept, a, however, is negative and of no practical predictive value. It states that someone who has zero height weighs minus 159.31 pounds. So what good is the intercept? There can be an infinite number of regression equations with the same value of the slope, b. The value of the intercept, a, anchors the scatterplot's regression line relative to the values of X and Y (and their coordinates) for the problem at hand. Oftentimes the intercept is a mathematical abstraction, but it serves the function of adjusting predicted values of Y relative to the plot.

A negative estimate for the coefficient associated with a constant is not intrinsically a bad thing. You need to remember that this estimate is the expected mean response when all the explanatory predictors are at zero. I find it helpful to distinguish between extrapolation and interpolation in the context of a concrete example.
Imagine you are modelling house prices in terms of the size of rooms. If you use the raw data, the intercept is a rather meaningless extrapolation – the price of a zero-roomed house; you are extrapolating beyond the observed data! However, if you transform the predictor so that it is the number of rooms minus the average number of rooms (grand mean centering), the intercept is then an interpolation – the price of an average-sized house (the value of the intercept when the transformed variable is zero). Getting a negative value then would certainly cause me to think hard about both the data and the model, as the result is very implausible.
I recommend students routinely re-parameterise their regression model (by centering continuous predictors on sample averages, and by choosing an appropriate reference dummy for categorical variables) so that they get interpretable and meaningful results for the intercept. Such centering will not affect the estimates of the other predictor variables.
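
The effect of grand mean centering can be sketched in a few lines (the prices and room counts below are fabricated purely for illustration): the slope is untouched, while the centred intercept becomes the mean of the observed outcomes, i.e. the expected price of an average-sized house.

```python
import numpy as np

# Hypothetical house-price data (all numbers are invented).
rng = np.random.default_rng(1)
rooms = rng.integers(2, 8, size=100).astype(float)
price = 50 + 30 * rooms + rng.normal(0, 10, size=100)   # price in, say, 1000s

# Raw predictor: the intercept is the "price of a zero-roomed house",
# an extrapolation far outside the observed data.
slope_raw, icept_raw = np.polyfit(rooms, price, deg=1)

# Grand-mean-centred predictor: the intercept is now the expected price
# of an average-sized house, which equals the mean of the observed prices.
slope_c, icept_c = np.polyfit(rooms - rooms.mean(), price, deg=1)

print(icept_raw, icept_c, price.mean())
# The slope is unchanged by centering; only the intercept's meaning moves.
```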
This is particularly important in multilevel models where you may have a great number of differential intercepts in a random intercepts model. See
Kelvyn Jones, SV Subramanian (2012) Developing multilevel models for analysing contextuality, heterogeneity and change. Volume 1, 1-269. University of Bristol, which is downloadable from http://www.mendeley.com/profiles/kelvyn-jones/
You may also be interested in this free online course we have developed that includes training in ordinary regression (you can follow it in R, Stata, and MLwiN): http://www.bristol.ac.uk/cmm/learning/course.html

I don't need to say much more than the others who have pointed out that the constant just represents the expected value of the outcome when all predictors (independent variables) are equal to zero. As mentioned above, "centering" your predictors (subtracting their means) is one strategy that will shift the meaning of the constant to the expected value of the outcome when all variables are at their averages. You can combine approaches, for example centering continuous variables but leaving dummy variables coded 0/1, so you don't have the awkward interpretation of someone of average gender, let's say.

But you don't always need to center variables on means, either. For example, if you had a reading test where the youngest children taking it are seven years old, you could subtract seven from the age in years so that the constant would be interpreted as the expected score for a seven-year-old. The point is that it is worth thinking about your predictors and their scales, in particular the meaning of 0, before tossing them into the model. Consider even something as simple as gender coded 1 for female and 2 for male: in that case the constant would be interpreted for a person with a gender 1 unit less "masculine" than female (i.e., 0), whatever that might be, when all other variables in the model are set to 0. Or consider my reading test example: the constant would represent a reading score for a kid of age 0, quite possibly a negative number. Bob
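
The reading-test idea above can be sketched as follows (the ages, scores, and coefficients are all fabricated for illustration): subtracting seven from age leaves the slope identical but turns the intercept into something interpretable.

```python
import numpy as np

# Hypothetical data: children aged 7-12 take a reading test
# (all numbers are made up).
rng = np.random.default_rng(2)
age = rng.uniform(7, 12, size=150)
score = 5 * age - 20 + rng.normal(0, 2, size=150)

slope0, icept0 = np.polyfit(age, score, deg=1)       # intercept: score at age 0
slope7, icept7 = np.polyfit(age - 7, score, deg=1)   # intercept: score at age 7

print(icept0, icept7)
# icept0 is a meaningless (and here negative) extrapolation to age 0;
# icept7 is the interpretable expected score for the youngest test-takers.
# The slope is identical under both codings.
```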

What if I have financial wealth as a dependent variable and the constant term is estimated to be -53.5? Does this constant have a meaningful interpretation?

Can someone explain to me how to interpret the results of a GLMM? I have used the "glmer" function, family binomial (package lme4 in R), and I am confused because I have read that the interpretation depends on the order of the statements in the output. I have four categorical predictor variables and one quantitative predictor variable in my model, and my dependent variable is binary. Can anyone recommend a reading that can help me with this? Thanks!

In principle, a negative coefficient for a constant should be no cause for alarm. Instead, you should focus on figuring out the interpretation of the negative sign in the context of the model and the problem you are trying to solve. I hope this helps.

A negative value of the constant term does not mean the model is wrongly estimated. Interpreting the constant term is in fact not necessary in most cases, as it depends on the predictor variables entered into the model. It is most important to evaluate the model based on the sign and size of the coefficients of the predictor variables and their statistical properties.


Kelvyn Jones · University of Bristol

Can I suggest you post this as a separate question, as it is very different and it is not an answer!

Mara Inés Espinosa H. · University of La Serena

Sorry, it is the first time that I use this site... so I don't know yet how it works.

Thanks!
