
Econometrics - Science topic

Econometrics has been defined as "the application of mathematics and statistical methods to economic data" and described as the branch of economics "that aims to give empirical content to economic relations."
Questions related to Econometrics
  • asked a question related to Econometrics
Question
2 answers
  • Extension education plays a crucial role in disseminating knowledge and promoting sustainable development in various fields. To enhance the effectiveness and impact of extension programs, the integration of econometric approaches is gaining importance. In this article, we will explore how econometric analysis can be leveraged in extension education to inform evidence-based outreach strategies and facilitate rigorous program evaluation. By harnessing data and statistical techniques, extension professionals can gain valuable insights into the factors influencing outreach outcomes and make informed decisions to improve program effectiveness.
Relevant answer
Answer
What is this question referring to?
  • asked a question related to Econometrics
Question
3 answers
Hey there! I would like to conduct an event study with the buy-and-hold approach and that website was suggested to me. It talks about long run event studies but I could not find it on the ARC interface and I was wondering if it is there and I am just missing it. Thank you very much!
Relevant answer
Answer
The website Eventstudytools provides tools and resources to conduct event studies, including the BHAR (Buy-and-Hold Abnormal Return) approach. The BHAR approach is commonly used in event studies to calculate abnormal returns over a specific event period.
Eventstudytools is a comprehensive platform that offers a range of functionalities to analyze the impact of events on financial markets. It provides pre-built event study templates, customizable event windows, and statistical tests to measure abnormal returns. These tools can help researchers and analysts assess the significance of events and their impact on stock prices or other financial variables.
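For readers who want to compute a BHAR by hand rather than through the website, here is a minimal R sketch; the return vectors are hypothetical stand-ins for an event window of stock and benchmark returns:

# Buy-and-hold abnormal return: compound each series over the window, then difference
stock_ret <- c(0.010, -0.004, 0.012, 0.003, -0.002)  # stock returns in the event window (hypothetical)
bench_ret <- c(0.006, -0.001, 0.008, 0.002, 0.000)   # matched benchmark returns (hypothetical)
bhar <- prod(1 + stock_ret) - prod(1 + bench_ret)
round(bhar, 4)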
  • asked a question related to Econometrics
Question
6 answers
In 2007 I did an Internet search for others using cutoff sampling, and found a number of examples, noted at the first link below. However, it was not clear that many used regressor data to estimate model-based variance. Even if a cutoff sample has nearly complete 'coverage' for a given attribute, it is best to estimate the remainder and have some measure of accuracy. Coverage could change. (Some definitions are found at the second link.)
Please provide any examples of work in this area that may be of interest to researchers. 
Relevant answer
Answer
I would like to restart this question.
I have noted a few papers on cutoff or quasi-cutoff sampling other than the many I have written, but in general, I do not think those others have had much application. Further, it may be common to ignore the part of the finite population which is not covered, and to only consider the coverage, but I do not see that as satisfactory, so I would like to concentrate on those doing inference. I found one such paper by Guadarrama, Molina, and Tillé which I will mention later below.
Following is a tutorial I wrote on quasi-cutoff (multiple item survey) sampling with ratio modeling for inference, which can be highly useful for repeated official establishment surveys:
"Application of Efficient Sampling with Prediction for Skewed Data," JSM 2022: 
This is what I did for the US Energy Information Administration (EIA) where I led application of this methodology to various establishment surveys which still produce perhaps tens of thousands of aggregate inferences or more each year from monthly and/or weekly quasi-cutoff sample surveys. This also helped in data editing where data collected in the wrong units or provided to the EIA from the wrong files often showed early in the data processing. Various members of the energy data user community have eagerly consumed this information and analyzed it for many years. (You might find the addenda nonfiction short stories to be amusing.)
There is a section in the above paper on an article by Guadarrama, Molina, and Tillé(2020) in Survey Methodology, "Small area estimation methods under cut-off sampling," which might be of interest, where they found that regression modeling appears to perform better than calibration, looking at small domains, for cutoff sampling. Their article, which I recommend in general, is referenced and linked in my paper.
There are researchers looking into inference from nonprobability sampling cases which are not so well-behaved as what I did for the EIA, where multiple covariates may be needed for pseudo-weights, or for modeling, or both. (See Valliant, R.(2019)*.) But when many covariates are needed for modeling, I think the chances of a good result are greatly diminished. (For multiple regression, from an article I wrote, one might not see heteroscedasticity that should theoretically appear, which I attribute to the difficulty of forming a good predicted-y 'formula'. For pseudo-inclusion probabilities, if many covariates are needed, I suspect it may be hard to do this well either, though perhaps that case is more hopeful. However, Brewer, K.R.W.(2013)** noted an early case where a failure using what appears to be an early version of that approach helped convince people that probability sampling was a must.)
At any rate, there is research on inference from nonprobability sampling which would generally be far less accurate than what I led development for at the EIA.
So, the US Energy Information Administration makes a great deal of use of quasi-cutoff sampling with prediction, and I believe other agencies could make good use of this too, but in all my many years of experience and study/exploration, I have not seen much evidence of such applications elsewhere. If you do, please respond to this discussion.
Thank you - Jim Knaub
..........
*Valliant, R.(2019), "Comparing Alternatives for Estimation from Nonprobability Samples," Journal of Survey Statistics and Methodology, Volume 8, Issue 2, April 2020, Pages 231–263, preprint at 
**Brewer, K.R.W.(2013), "Three controversies in the history of survey sampling," Survey Methodology, Dec 2013 -  Ken Brewer - Waksberg Award article: 
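To make the ratio-modeling idea discussed above concrete, here is a minimal R sketch of prediction for the non-covered remainder under a classical ratio model (y = bx with variance proportional to x). The data are simulated; this is only an illustration, not the EIA production methodology:

set.seed(1)
x_all <- rlnorm(200, meanlog = 2)                    # auxiliary size measure for every unit
y_all <- 2.5 * x_all + rnorm(200, sd = sqrt(x_all))  # survey item, variance proportional to x
s <- x_all >= quantile(x_all, 0.6)                   # quasi-cutoff sample: largest units only
b_hat <- sum(y_all[s]) / sum(x_all[s])               # ratio slope (WLS with weights 1/x)
total_hat <- sum(y_all[s]) + b_hat * sum(x_all[!s])  # observed part + predicted remainder
c(estimate = total_hat, truth = sum(y_all))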
  • asked a question related to Econometrics
Question
5 answers
We know that correlation and cointegration are two different things. The question I want to put here and share with you is whether this remains true when we consider the wavelet concept.
Relevant answer
Answer
Wavelet cross-correlation is merely a scale-localized version of the usual cross-correlation between two signals. In cross-correlation, you determine the similarity between two sequences by shifting one relative to the other, multiplying the shifted sequences element by element, and summing the result.
A wavelet decomposition by itself cannot be used to test for cointegration.
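For the plain (non-wavelet) cross-correlation described above, base R's ccf() shows the shift-multiply-sum idea directly; wavelet cross-correlation localizes the same quantity by scale. A small simulated example:

set.seed(42)
x <- as.numeric(arima.sim(list(ar = 0.5), n = 200))
y <- c(rep(0, 3), head(x, -3)) + rnorm(200, sd = 0.2)  # y follows x with a 3-period lag
ccf(x, y, lag.max = 10)  # the peak at lag -3 (x leads y) reveals the lead-lag relation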
  • asked a question related to Econometrics
Question
9 answers
Most empirical papers use a single econometric method to demonstrate a relationship between two variables. For robustness, is it not safer to use a variety of methods to conclude (cointegration, IV models with thresholds, wavelets)?
Relevant answer
Answer
Robustness measures the degree of invariance of some feature F across some class of entities E. Sensitivity analyses are forms of robustness measurement (F = model outcome, E = model inputs/parameters/structure variations).
  • asked a question related to Econometrics
Question
14 answers
Relevant answer
Eleven (11) variables with a small sample of 27 annual observations (n = 27) are far too many and will lead to over-specification of the model.
Based on your research objective(s), I suggest you revisit the economic theories regarding the relationships your chosen dependent variable has with each of the regressors/explanatory variables, and keep those with the highest relevance in line with the underlying theories to be tested.
If the set of explanatory variables remains large, I suggest you perform data reduction using the Principal Component Analysis (PCA) approach and retain the independent variables with the highest loadings across components; you can use your expert judgement to determine the threshold cut-off for the loadings of the variables to retain.
Given the considerably small sample size (n = 27) of your dataset, I suggest you consider running Cointegration regression - three options you can choose from based on your sample properties being the following:
(a). Dynamic OLS (DOLS)
(b). Fully-Modified OLS (FMOLS)
(c). Canonical cointegrating regression (CCR).
These methods are designed to best handle small samples, and produce reliable estimates.
ALL the best!
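A minimal R sketch of the PCA step suggested above, using a hypothetical 27 x 11 regressor matrix X; variables with the highest loadings on the leading components would be retained:

set.seed(1)
X <- matrix(rnorm(27 * 11), nrow = 27, dimnames = list(NULL, paste0("x", 1:11)))
pc <- prcomp(X, scale. = TRUE)   # PCA on standardized regressors
summary(pc)                      # variance share of each component
round(pc$rotation[, 1:3], 2)     # loadings on the first three components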
  • asked a question related to Econometrics
Question
14 answers
Hey there. I am trying to walk through this concept in my head but unfortunately do not have the name for the type of model I am attempting to create. Let me describe the data:
1) One dependent variable on an hourly interval
2) Multiple exogenous "shock" variables that have positive effects on the dependent variable. 
3) The dependent variable has 0 effect on the exogenous shock variables.
The dependent variable can be modeled as a function of its own lags and the exogenous shock variables.
I would like to model the variable with an ARIMA model with exogenous variables that have an immediate impact at time T and lagged effects for a short period of time. (This is similar to a VAR's IRFs, except the exogenous variables are independent of the dependent variable.)
The assumption is that without the exogenous shock variables, there is an underlying behavior of the data series. It is this underlying behavior that I would like to capture with the ARIMA. The exogenous shock variables are essentially subtracted from the series in order to predict what the series would look like without exogenous interference. 
The problem:
I am worried that the ARIMA will use information from the "exogenous" shocks within the dependent series when estimating the AR and MA terms. That would mean a positive bias in those terms. For example: if an exogenous shock is estimated to increase the dependent variable by 100 units, then this 100-unit increase should NOT affect the estimation of the AR or MA terms, since it is considered unrelated to the underlying behavior of the dependent variable.
I've attempted to write this out mathematically as well. 
Relevant answer
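What the question describes is close to a regression with ARIMA errors (a simple transfer-function setup). One reassurance: stats::arima() in R estimates the xreg coefficients and the ARMA terms jointly, so the shock effects are removed before the AR/MA terms describe the remaining dynamics. A simulated sketch (all names hypothetical):

set.seed(1)
n <- 500
shock <- rbinom(n, 1, 0.05)           # exogenous shock indicator
shock_lag1 <- c(0, head(shock, -1))   # its one-period lagged effect
y <- as.numeric(arima.sim(list(ar = 0.7), n = n)) + 100 * shock + 40 * shock_lag1
fit <- arima(y, order = c(1, 0, 0), xreg = cbind(shock, shock_lag1))
fit$coef  # AR term reflects the underlying dynamics; xreg terms pick up the shocks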
  • asked a question related to Econometrics
Question
5 answers
I need a non-linear causality test or model to use with panel data, in R, Stata, or Python.
Relevant answer
Answer
In principle, it is impossible to find out causality only with statistical methods. Statistics can only tell you, if there is a more or less good relation between variables. It is on you to assume causality between variables, this is part of the identification of the model. Luckily, it is mostly quite clear in which direction causality runs (using logics, theories or merely common sense). If you formulate a linear equation for a (assumed) causal relation and you get bad statistical results, you can be sure that the relation is either non-linear or does not exist. That is why I think, that one should think about how the relation (function) may be already in the process of identification.
Also, since causality generally needs time (the "caused" variable reacts with some lag) and the data of a cross-sectional sample all refer to the same period, one cannot establish the "timing" that might be necessary for testing causality. (The same holds for, e.g., annual time series when the reaction lag is short.)
  • asked a question related to Econometrics
Question
1 answer
Dear Colleagues,
QQ regression is perhaps one of the latest methods in econometric estimation approaches. In case you have expertise could you please help me by providing useful information as to how to perform QQ regression using R or Stata?
Relevant answer
Answer
Hello,
In my view, you can use the R package "quantreg".
You can use this command: rq(formula, tau = c(0.05, 0.25, 0.5, 0.75, 0.95))
I hope the following website can help you!
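A runnable version of that suggestion with simulated data. Note that rq() gives standard quantile regression; the quantile-on-quantile approach (e.g., Sim and Zhou, 2015) additionally evaluates the regressor at its own quantiles, which requires extra looping:

library(quantreg)
set.seed(1)
d <- data.frame(x = rnorm(300))
d$y <- 1 + 2 * d$x + rnorm(300) * (1 + 0.5 * abs(d$x))  # heteroscedastic errors
fit <- rq(y ~ x, tau = c(0.05, 0.25, 0.5, 0.75, 0.95), data = d)
summary(fit)  # one coefficient vector per quantile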
  • asked a question related to Econometrics
Question
10 answers
Dear Scholars, I am measuring the effect of policy on firms' performance. I found a common (in both treatment and control groups) structural break 4 years before the policy intervention. I used the difference-in-difference model to find the impact of the policy. I am using 20 years of firm-level panel data. What are your suggestions?
Relevant answer
Answer
It depends on how the shock influenced the treated and control unit.
If you can argue that the shock influenced all units in the same way, then the inclusion of two way fixed effects (particularly time fixed effects in this case) can help you to rule out the effects of such 'common' shock.
However, if there are reasons to think that the structural shock differently affected treated and control units, then this could result in potential biases in the estimation and identification of the average treatment effect.
There is a novel literature dealing with the correct identification of such effects. I would suggest you read the following paper:
"Visualization, Identification, and Estimation in the Linear Panel Event-Study Design" by Simon Freyaldenhoven, Christian Hansen, Jorge Pérez Pérez and Jesse M. Shapiro. You can find the paper here: http://www.nber.org/papers/w29170
I particularly suggest reading Section 3.1, since the paper is accompanied by the Stata command "xtevent", which is particularly useful for such econometric analyses.
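For those working in R rather than Stata, a minimal two-way fixed-effects difference-in-differences sketch with the fixest package (simulated panel; variable names hypothetical):

library(fixest)
set.seed(1)
panel <- expand.grid(firm = 1:100, year = 2001:2020)
panel$treated <- as.integer(panel$firm <= 50)
panel$post <- as.integer(panel$year >= 2011)  # assumed policy year
panel$perf <- 0.5 * panel$treated * panel$post + rnorm(nrow(panel))
est <- feols(perf ~ treated:post | firm + year, data = panel, cluster = ~firm)
summary(est)  # the treated:post coefficient is the DiD estimate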
  • asked a question related to Econometrics
Question
3 answers
Hi data scientists, I am an aspiring data scientist in the field of econometrics. I need the book Using Python for Introductory Econometrics; if anyone would care to share it, I will be grateful. Thanks.
Relevant answer
Answer
Hi Comfort Motlhabane , its soft copy is not available on the internet because it is a recent publication. Better to search for an alternative or go with the paperback.
  • asked a question related to Econometrics
Question
9 answers
Hello, I am looking for an econometric study examining the relationship between a Financial Inclusion Index and post-office-related variables. Can you help me?
Relevant answer
This study (in Arabic) can be helpful in this regard: "The Role of Financial Inclusion in Achieving Stability and Economic Growth."
Good luck and success to all.
  • asked a question related to Econometrics
Question
5 answers
I ran the ARDL approach in E-Views 9, and it turns out that the independent variable has a small coefficient, but it appears as zero in the long-term equation shown in the table, despite having a very small (non-zero) value in the ordinary least squares output. How can I display this low value clearly?
Relevant answer
Answer
Look at scatterplots, which may help, but it seems that you don't have much of a relationship here. Best wishes, David Booth
  • asked a question related to Econometrics
Question
2 answers
Using E-Views 9, I ran the ARDL test, which produced an R-squared value in the initial ARDL model output and another R-squared value under the bounds test. So, what is the difference between these two R-squared values?
Relevant answer
Answer
The R-squared reported with the ARDL model measures the fit of the overall model: it tells you how much of the variability in Y, your dependent variable, is explained by the ARDL model given the X variables (your independent variables). The R-squared reported under the bounds test, on the other hand, refers specifically to the bounds test regression within the ARDL model, not to the overall model. Remember that you need an ARDL model first to have a bounds test, not a bounds test to have the ARDL model. I hope this helps.
  • asked a question related to Econometrics
Question
5 answers
Hi! I would like to have an opinion on something, rather than a straight-out answer, so to speak. In time-series econometrics, it is common to present both the long-term coefficients from the cointegrating equation and the short-term coefficients from the error correction model. Since I have a lot of specifications, and since I'm really only interested in the long term, I only present the long-term coefficients from the cointegrating equation in a paper I'm writing. Would you say that is feasible? I'm using the Phillips-Ouliaris single-equation approach to cointegration.
  • asked a question related to Econometrics
Question
5 answers
I'm implementing a pairs trading strategy.
Relevant answer
  • asked a question related to Econometrics
Question
5 answers
Consider the linear model:
ln(wage)= a + b D + u
where D is a binary variable (say, female); the textbook says that when D changes from 0 to 1, then b (equal to 0.067) has to be interpreted as follows:
b*100 = 6.7 percentage points of change in wage
However, I would say in this case:
6.7 percent change in wage (or a change in wage of 6.7%)
Am I right, or is the textbook right? Many thanks.
Relevant answer
Answer
If D is 0 for women and 1 for men, then exp(a) is the (geometric) mean wage of women, and men earn on average 100*(exp(b) - 1) percent more than women. Quoting "b*100 percent" is an underestimate (because exp(b) = 1 + b + b²/2 + b³/6 + ...), which is negligible for small b, since the confidence interval of such an estimate is likely considerably wider anyway. In this example, I would say (write) that men earn about 7% more.
By the way, it is better to treat percent simply as a format for presenting a number: 0.067 can be shown as 67/1000, 67,000 ppm, 6.7/100, 6.7%, etc. If you enter 0.067 into an Excel cell and choose the % format, it is displayed as 6.7%. One should not use percent = value*100, as we learned in school, because this makes things more complicated than necessary.
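A one-line check of the exact figure in R:

b <- 0.067
100 * (exp(b) - 1)  # 6.93: men earn about 6.9 (roughly 7) percent more, not exactly 6.7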
  • asked a question related to Econometrics
Question
9 answers
Hi Dear All,
I have a dataset of one country from 2010 to 2021 and the dataset doesn't have regions of the country.
  • asked a question related to Econometrics
Question
14 answers
How can we test for the endogeneity problem in cross-sectional data? Are there any tests? I don't know much about econometrics.
Relevant answer
Answer
1. Regress the suspected endogenous variable on the instrument(s) and all exogenous regressors, and save the residuals.
2. Add these residuals as an extra regressor in the original equation and re-estimate.
3. If the residual term is statistically significant, endogeneity is present; otherwise it is not.
Alternatively, use ivreg and ivendog in Stata with instrumental variables (the Durbin-Wu-Hausman test).
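The steps above correspond to the control-function (Durbin-Wu-Hausman) test. A self-contained R sketch with simulated data, where u creates the endogeneity and z is an instrument:

set.seed(1)
n <- 500
z <- rnorm(n); w <- rnorm(n); u <- rnorm(n)  # u: common shock causing endogeneity
x <- 0.8 * z + 0.3 * w + u + rnorm(n)        # endogenous regressor
y <- 1 + 0.5 * x + w - u + rnorm(n)
stage1 <- lm(x ~ z + w)                      # first stage on instrument + exogenous vars
dwh <- lm(y ~ x + w + resid(stage1))         # add first-stage residuals
summary(dwh)  # a significant residual term indicates endogeneity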
  • asked a question related to Econometrics
Question
14 answers
For a beginner who wants to try his/her hand at econometric calculations such as vector autoregression (VAR) functions, generalised additive models, spline interpolation, the ARDL test, etc., which of the following tools/software would you recommend?
1. Eviews
2. Gretl
3. Stata
4. R
5. Matlab
Relevant answer
Answer
I suggest EViews. I have used EViews for panel data on several occasions and I find it easy to use and better.
  • asked a question related to Econometrics
Question
15 answers
Dear colleagues:
I would be so grateful if you could provide me an answer or a solution to my problem.
I am using an ARDL model and all the pre- and post-estimation tests are fine and satisfy the model's assumptions. However, the error correction term is positive and significant, whereas according to econometric theory it has to be negative and between 0 and -1. So, what are your recommendations?
Thank you in advance
Relevant answer
Answer
If you use EViews, let the software perform the lag selection automatically rather than fixing the lags yourself. If the error correction term still does not come out negative and significant, reduce the number of variables; it is always advisable to have fewer explanatory variables rather than more. I hope this helps.
  • asked a question related to Econometrics
Question
8 answers
Sorry for the layman question, but I am currently taking an introductory course in econometrics at my school. In an exercise I'm currently working on, I have set up the following linear model in R: Averagehourlyincome = B0 + B1*age + B2*female + B3*bachelor + B4*age*female
where female and bachelor are dummy variables (0 if male, 1 if female; 1 if has a bachelor's degree, 0 if not). One of the questions asks that, after defining the model and interpreting the coefficients, I test the differences between males and females in this model. To be frank, I don't really know how to proceed; I would be very thankful if someone could tell me what I need to do.
Thanks in advance,
Relevant answer
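Since no answer was posted, here is one standard route, sketched in R with simulated data: estimate the full model, then a restricted model without any female terms, and compare with an F-test.

set.seed(1)
n <- 400
d <- data.frame(age = runif(n, 20, 60), female = rbinom(n, 1, 0.5), bachelor = rbinom(n, 1, 0.4))
d$wage <- 10 + 0.3 * d$age + 2 * d$bachelor - d$female - 0.05 * d$age * d$female + rnorm(n)
full <- lm(wage ~ age + female + bachelor + age:female, data = d)
restricted <- lm(wage ~ age + bachelor, data = d)  # all gender terms removed
anova(restricted, full)  # F-test of H0: no male/female difference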
  • asked a question related to Econometrics
Question
13 answers
Hi,
Could anyone suggest material (ppt, pdf) related to Applied Econometrics with R (intro level)?
Thanks in advance.
Relevant answer
Answer
I have often been approached with questions such as this. Ideally I would know the person and his background in economics, statistics/econometrics, linear algebra, and mathematics, and would ask why he wants to proceed in this way and what he hopes to achieve. The lists below are long, but I hope you find something useful in them.
I know of three modern introductions to econometrics that are accompanied by R companions.
  1. Michael R Jonas has already mentioned Wooldridge (2019), Introductory Econometrics: A Modern Approach, 7th ed, South-Western College Publishing, and the R companion available at http://www.urfie.net.
  2. Stock and Watson (2019), Introduction to Econometrics, Pearson, and https://www.econometrics-with-r.org/
  3. Brooks (2019), Introductory Econometrics for Finance, CUP, and the R Guide for Introductory Econometrics for Finance, available in Kindle format from Amazon or in pdf from the publisher (the guide is free from both sources).
  4. https://scpoecon.github.io/ScPoEconometrics/ is an introduction to econometrics with R taught to second-year undergraduates.
  5. Kleiber and Zeileis (2008), Applied Econometrics with R, Springer
The first three books cover an undergraduate introduction to econometrics. I found item 5 very useful but you probably need to have a good knowledge of econometric theory to take full advantage of it.
There are also various graduate textbooks on time series analysis, panel data analysis, and various other aspects of econometrics.
Econometrics is often taught as a series of recipes (e.g. "estimate the following expression using OLS") without an explanation of the importance of the underlying economic theory. Causality is often inferred from tests of significance without an understanding of how the statistical tests depend on the underlying economic theory. The ideas of tests of hypotheses and p-values are often not understood. The books below are attempts to explain what can be achieved by statistical/econometric analysis and how to avoid false conclusions.
  1. Huntington-Klein (2022), The Effect: An Introduction to Research Design and Causality, CRC Press. R code is used throughout the book. An online version is available at https://theeffectbook.net/.
  2. Edge (2019), Statistical Thinking from Scratch, Oxford, deals with the simple regression model and adds considerably to an understanding of the various tests and recipes that are used in econometrics without recourse to advanced mathematics.
  3. Cunningham (2021), Causal Inference: The Mixtape, Yale, has examples in R and Stata. This is a little more advanced than 1 and 2.
  4. Hirschauer, Gruner et al. (2022), Fundamentals of Statistical Inference, Springer. This book should be compulsory reading for all econometricians and applied statisticians (and anyone doing statistical analysis). In particular, the coverage of testing hypotheses and p-values is essential reading. It is written in a very accessible form. (It does not contain any R examples.)
  • asked a question related to Econometrics
Question
8 answers
I am using two-step system GMM to find impact of two independent business related variables on governance of an economy.
Dataset size: 8 Year data for 71 countries
I am not using any control variables in the main models but reporting 20 different results for different types of countries using the independent variables and their one-year lags. For example, for the full dataset, I checked the impact of x1, x2, L1.x1, and L1.x2. Then I ran a similar regression only for the developed countries, and so on.
If I find fairly consistent results, and discuss the impact of different control variables only in an additional analysis, would that suffice?
(Prior literature used 2-4 control variables but they used only OLS. Many of those papers were published in ABDC A/B ranked journals).
Thanks for your cooperation.
Relevant answer
Answer
As far as system GMM is concerned, the Hansen J test is superior to the Sargan test, and it gives an even better result if the robust option is specified.
  • asked a question related to Econometrics
Question
8 answers
Where can I find heterogeneity tests for a panel data model in EViews?
Relevant answer
Answer
I think the R software is better, and there are many packages (e.g. lme4 for linear mixed-effects models). You can try it. I think it is very useful.
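A minimal lme4 illustration with its built-in sleepstudy panel; the random effects capture subject-level (here: unit-level) heterogeneity:

library(lme4)
data(sleepstudy)
fit <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)
summary(fit)  # random intercepts and slopes measure heterogeneity across units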
  • asked a question related to Econometrics
Question
3 answers
I'm studying this issue and would like to model it econometrically, and moreover to model it with various energy models.
Relevant answer
Answer
You need to take enough data (time periods) before and after the tax cuts. A good econometric model based on time series or panel data will suffice.
You can have a look at the gravity model of trade.
  • asked a question related to Econometrics
Question
2 answers
I have converted a decommissioned PowerEdge R610 server for use in computational econometrics. Any advice for keeping it cool?
Relevant answer
Answer
You can use an air or water cooling system. A water cooling system would be cumbersome and risky, but an air cooling system would be safe, involving aluminium or copper heat sinks and cooling fans. Much depends on the amount of heat generated.
  • asked a question related to Econometrics
Question
5 answers
I am using an ARDL model; however, I am having some difficulties interpreting the results. I found that there is cointegration in the long run. I have provided pictures below.
Relevant answer
Answer
Mr A. D.,
The ECT(-1) is the lagged error correction term, i.e. the one-period-lagged residual from the long-run (cointegrating) equation.
Regards
  • asked a question related to Econometrics
Question
7 answers
I am using a panel dataset (N=73; T=9). Dataset Timeframe: 2010-2018
In the GMM estimate on the total dataset, the AR(1) and AR(2) values are fine.
But to investigate the impact of the European crisis, I had to split the data (5 years during and immediately after the crisis, and the subsequent 4 years). But when GMM is run on the second set of data (2015-2018), in one of the models the AR(1) and AR(2) values were not generated.
Is the result still usable? What are the potential problems of using this specific result?
  • asked a question related to Econometrics
Question
4 answers
I did PCA on some variables which were Likert-scaled questions (1-5). After predicting, I get negative values for some observations. Is there something wrong? Should I take absolute values of the predicted variable?
Note: the eigenvalues were positive.
The values in the new variable (after predicting) are given as attachments.
The Stata output is as follows:
pca v1 v2 v3 v4
Principal components/correlation Number of obs = 302
Number of comp. = 4
Trace = 4
Rotation: (unrotated = principal) Rho = 1.0000
-------------------------------------------------------------------------------------------------------------
Component | Eigenvalue Difference Proportion Cumulative
-------------+-----------------------------------------------------------------------------------------------
Comp1 | 1.50368 .51615 0.3759 0.3759
Comp2 | .987535 .16249 0.2469 0.6228
Comp3 | .825045 .141311 0.2063 0.8291
Comp4 | .683735 . 0.1709 1.0000
-------------------------------------------------------------------------------------------------------------
Principal components (eigenvectors)
---------------------------------------------------------------------------------------------------------
Variable | Comp1 Comp2 Comp3 Comp4 | Unexplained
-------------+----------------------------------------+------------------------------------------------
v1 | 0.6017 0.0522 -0.3627 -0.7097 | 0
v2 | 0.5426 -0.4256 -0.3741 0.6200 | 0
v3 | 0.5034 -0.1359 0.8531 -0.0192 | 0
v4 | 0.3001 0.8931 -0.0273 0.3340 | 0
--------------------------------------------------------------------------------------------------------
Relevant answer
Answer
Hello,
I assume you ran your PCA in order to construct a score from data collected on a Likert scale.
If so, first let me reassure you: the values obtained when predicting the score can be negative or positive (as explained above). However, if you want values between 0 and 1 so that you can compare them, you must normalize the score. One of the most widely used methods (there are several) is min-max normalization: subtract the minimum from each value, then divide by the range (max - min). All your scores will then lie between 0 and 1.
I do have one reservation about the choice of a PCA. Data obtained from a Likert scale, although coded as numbers, are still categorical, i.e. qualitative. From my point of view, an MCA (multiple correspondence analysis) would be more appropriate.
Thank you, and good luck with your work.
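A small R sketch of the min-max normalization described above, applied to hypothetical predicted component scores:

minmax <- function(s) (s - min(s)) / (max(s) - min(s))
score <- c(-1.8, -0.3, 0.0, 0.7, 2.2)  # hypothetical predicted PCA scores
round(minmax(score), 3)                # all values now lie between 0 and 1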
  • asked a question related to Econometrics
Question
16 answers
Economics has treated econometrics as a universal solvent, a technique that can be applied to any economic question, which is sufficient and, therefore, makes other applied techniques redundant.
Peter Swann in his book indicates the place of econometrics and argues against this notion and even takes this as a severe error. He advises fellow economists that they learn to respect and assimilate what he calls vernacular knowledge of the economy. His top message to economists is what the great French composer, Paul Dukas, advised his pupil: “Listen to the birds, they are great masters.” If any fellow economist asks: “don’t most economists do this already?” Then the answer by Swann is clear: “… some economists do use vernacular knowledge some of the time to underpin what they do … incidentally to make a piece of high technique more approachable … outside this limited context, economists do not tend to take the vernacular seriously."
Any argument for or against it?
Relevant answer
Answer
Dear Simon, you confirm my contribution. Statistics was applied in many sciences long before the term econometrics was invented. Like mathematics (one could, of course, say that statistics is part of mathematics), it is universal in the sense that it can be applied to rather different scientific studies. Nevertheless, it can only be applied if there is a theory of the respective science.
  • asked a question related to Econometrics
Question
6 answers
Dear all, I estimated my VAR and tested its stationarity through the inverse roots; it is stationary.
Also, I checked the autocorrelation of the residuals up to the 7th lag (I am estimating my VAR with 6 lags) and there is no autocorrelation at the 6th lag. (There is autocorrelation at the 2nd and 5th lags; is that also a problem? As I understand it, only the 6th lag matters.)
But my residuals are not normal. Can I apply the CLT and say that they are asymptotically normal?
Also, my residuals are heteroskedastic. Can I continue with this model? What are my limitations?
Thanks in advance!
Relevant answer
Answer
A VAR model is often mixed up with an economic model and used instead of carefully thinking about the specification. That seems to be the case here, too. One just takes time series with enough data (I think quarterly) for inflation and unemployment and "throws" them into a VAR estimation program. That has nothing to do with economics; it is rather the contrary of good econometrics (neither economics nor sensible statistics).
After putting more brain effort into a good specification (considering other influences, too; see John's advice), you should also try to find out the main lags (see Joel's answer) and whether there are unlagged (same-quarter) effects, too. You must also pay attention to the fact that unemployment is a stock (measured at which point in time?), while inflation is a flow.
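For the residual checks raised in the question, the vars package bundles the standard diagnostics. A sketch using its built-in Canada dataset (your own series and lag order would replace these):

library(vars)
data(Canada)
v <- VAR(Canada, p = 2, type = "const")
serial.test(v, lags.pt = 12)  # Portmanteau test for residual autocorrelation
normality.test(v)             # multivariate Jarque-Bera test
arch.test(v)                  # ARCH test for residual heteroskedasticity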
  • asked a question related to Econometrics
Question
8 answers
What is the most efficient method of measuring the impact of the volatility of one variable on another variable?
Relevant answer
Answer
Standard deviation is a measurement of volatility. It measures the variation of performance around the average.
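The standard tool for this question is a GARCH-type model (e.g., GARCH-in-mean). As a crude but transparent sketch, one can proxy volatility by a rolling standard deviation and regress the other variable on it (simulated data; zoo package):

library(zoo)
set.seed(1)
sigma <- 1 + 0.5 * sin((1:300) / 20)  # slowly moving true volatility
x <- rnorm(300, sd = sigma)
y <- 2 * sigma + rnorm(300)           # y responds to x's volatility
vol_x <- as.numeric(rollapply(x, width = 20, FUN = sd, fill = NA, align = "right"))
summary(lm(y ~ vol_x))                # slope: impact of x's volatility on y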
  • asked a question related to Econometrics
Question
5 answers
In the book 'Mastering 'Metrics', Joshua D. Angrist and Jörn-Steffen Pischke, in an example of the causal effect of having health insurance or not on health levels, they describe "... ...This in turn leads to a simple but important conclusion about the difference in average health by insurance status:
Difference in group means = Avgn[Yi|Di=1]-Avgn[Yi|Di=0]
=Avgn[Y1i|Di=1]-Avgn[Y0i|Di=0], (1.2)"
Next, they prove that Difference in group means differs from causal effects "The constant-effects assumption allows us to write:
Y1i=Y0i+k, (1.3)
or, equivalently, Y1i - Y0i = k. In other words, k is both the individual and average causal effect of insurance on health. Using the constant-effects model (equation (1.3)) to substitute for
Avgn[Y1i|Di = 1] in equation (1.2), we have:
Avgn[Y1i|Di = 1] - Avgn[Y0i|Di = 0]
={k+ Avgn[Y0i|Di = 1]}- Avgn[Y0i|Di = 0]
=k+ {Avgn[Y0i|Di = 1]-Avgn[Y0i|Di = 0]},”
From this they obtain that: "Difference in group means = Average causal effect + Selection bias."
I think there are some confusing aspects of this process. The difference between Difference in group means and ATE should be described in more detail as,
Difference in group means
= Avgn[Yi|Di=1] - Avgn[Yi|Di=0] = Avgn[Y1i|Di=1] - Avgn[Y0i|Di=0]
=Avgn[Y1i-Y0i]+Avgn[Y0i|Di=1]-Avgn[Y0i|Di=0]+{1-Pr[Di=1]}*{Avgn[Y1i-Y0i|Di=1]-Avgn[Y1i-Y0i|Di=0]}
where Avgn[Y1i - Y0i] is the ATE
or,
= Avgn[Y1i-Y0i|Di=1]+Avgn[Y0i|Di=1]-Avgn[Y0i|Di=0]
where Avgn[Y1i - Y0i|Di=1] is the ATT
or,
= Avgn[Y1i-Y0i|Di=0]+Avgn[Y1i|Di=1]-Avgn[Y1i|Di=0]
where Avgn[Y1i - Y0i|Di=0] is the ATC
Under the assumption of constant-effects (Y1i-Y0i=k), Avgn[Y1i-Y0i] = Avgn[Y1i-Y0i|Di=1] = Avgn[Y1i-Y0i|Di=0].
So,
Difference in group means
= Avgn[Y1i-Y0i]+Avgn[Y0i|Di=1]-Avgn[Y0i|Di=0]+0
= Avgn[Y1i-Y0i]+Avgn[Y0i|Di=1]-Avgn[Y0i|Di=0]
= k+Avgn[Y0i|Di=1]-Avgn[Y0i|Di=0].
or, = Avgn[Y1i-Y0i|Di=1]+Avgn[Y0i|Di=1]-Avgn[Y0i|Di=0]
=Avgn[Y1i-Y0i]+Avgn[Y0i|Di=1]-Avgn[Y0i|Di=0]
=k+Avgn[Y0i|Di=1]-Avgn[Y0i|Di=0].
or, = Avgn[Y1i-Y0i|Di=0]+Avgn[Y1i|Di=1]-Avgn[Y1i|Di=0]
= Avgn[Y1i-Y0i]+Avgn[Y1i|Di=1]-Avgn[Y1i|Di=0]
= k+Avgn[Y1i|Di=1]-Avgn[Y1i|Di=0].
I am not sure if my derivation process above is correct.
Relevant answer
Answer
I agree; the error term represents omitted variables or other variables that are not incorporated in the regression model.
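A quick simulation check of the decomposition in the question (difference in group means = ATE + selection bias under constant effects), in R:

set.seed(1)
n <- 1e5; k <- 2               # constant causal effect
y0 <- rnorm(n)
d <- rbinom(n, 1, plogis(y0))  # selection: higher y0 makes treatment likelier
y1 <- y0 + k
y <- ifelse(d == 1, y1, y0)
diff_means <- mean(y[d == 1]) - mean(y[d == 0])
sel_bias <- mean(y0[d == 1]) - mean(y0[d == 0])
c(diff_means, k + sel_bias)    # the two numbers agree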
  • asked a question related to Econometrics
Question
12 answers
At the moment, I have one explanation in mind. If labor productivity is high, firms may focus more on reinvestment prospects instead of repaying loan installments. But I cannot find any prior literature in support of this claim. I would appreciate it if you can provide any other explanations or refer to any relevant literature.
Relevant answer
Answer
Dear Atiqur Rahman, I would like to give you an example from the Polish market and my experience as a business analyst. At the turn of the 20th and 21st centuries in my country, you could see many small and medium-sized companies with very high rates of growth in revenues and market share. In many small family businesses, as a result of investment and the implementation of new technologies, labour productivity increased quickly. But as I found many times when analyzing these "star" companies, such rapid successes in revenue growth, market share and EBIT created new challenges and internal problems, as well as counter-actions from stronger competitors among the large international corporations.
The best employees, who had contributed to the increase in labour productivity and were directly involved in the departments using modern technologies, immediately demanded a salary increase or promotion, or left for the competition. Meanwhile, the management of such small family businesses, seeing that the company was developing well because its labour productivity was rising, immediately tried to use these positive signals to win new orders and to load their experienced, efficient employees with even more work.
In line with neoclassical microeconomics, small fast-growing companies in Poland thus tried to expand the use of the labour factor whose productivity was increasing, and overburdened employees with duties (one of the highest numbers in the world of hours worked per month and year per employee, according to the ILO). The macroeconomic environment mattered: high unemployment, low social benefits, and (before EU accession in 2004) no possibility of emigrating to a labour market with better social protection and higher hourly wages.
Entrepreneurs who pursued strategies of maximizing the use of the labour factor accepted too many new orders and took high-interest bank loans (high loan costs), and coped worse and worse with rebuilding the company and adapting to the new conditions of a higher level of production. News about "squeezing" as much as possible out of employees while not raising wages and bonuses meant that employment in such a company was avoided (an employment barrier), so the company began to lose its most ambitious and experienced people and to have problems handling large orders based on deferred payments (typically construction and assembly works and infrastructure projects). Problems with financial solvency also emerged quickly, as the focus on maximizing the use of the labour factor, combined with high investment indebtedness, led to problems with paying the loan installments on time.
Best regards DG
  • asked a question related to Econometrics
Question
8 answers
Econometric regression: the relationship between the daily income and daily expenditure of a market vendor in the Lae vegetable market.
Relevant answer
Answer
I will add to @Anton Rainer's answer some questions that you should consider.
1 Are there day of the week effects?
2 Are there holiday effects?
3 are the goods perishable or can unsold goods be sold on the following days?
4 Are there any other variables (correlated with an included explanatory variable) that might affect the relationship?
  • asked a question related to Econometrics
Question
6 answers
Hey folks,
New Year's Greetings!
I have 2019 undergraduate entrance exam data (i.e. 7,891 obs), and I am interested in using regression methods to study relationships among the limited data variables. I have data on the following:
math score, english score, average score (i.e. math+engl)/2, age, sex, nationality, college program, marital status, school type, and college major.
This is what I started doing: first, a multiple linear regression where I took English score as Y, math score and age as the main X variables, and all the other categorical variables as controls.
I got some interesting results from the MLR approach, but I am afraid that math and English scores may be jointly determined, which could make my analysis spurious.
Second, I am thinking about using quantile regression, but I am not so sure about this approach.
Finally, I am contemplating whether a SUR model would be applicable in a situation like this.
Please let me know which methods will be feasible to explore.
Thank you for taking the time to read this.
All the best,
JG
Relevant answer
Answer
Hi guys... I have N=9 and T=10; which method should I use?
  • asked a question related to Econometrics
Question
11 answers
Econometric analysis is a common way to answer economic questions in research, and various software packages like "Stata", "Eviews", "R", "Python", etc. are used for this purpose. Hence, which sources do you suggest to economics students for learning econometrics with software?
Your answer could help many students who want to start learning, so I would appreciate it if you share your experience or share this question with other researchers and teachers.
Relevant answer
Answer
Shahin Behdarvand, there are several books and online resources for undergraduate and graduate students. Please visit https://www.econometricsbooks.com/ for information on these.
  • asked a question related to Econometrics
Question
11 answers
There are many statistics packages that researchers usually use in their work. In your opinion, which one is better? Which one would you suggest starting with?
Your opinions and experience can help others, in particular younger researchers, in their selection.
Sincerely
Relevant answer
Answer
Shahin Behdarvand , in my case I simply love MatLab. I have been using R, Python, Eviews, Stata and MatLab for the last 15 years, and I would say that when one wants to estimate somebody else's models, R, Python, Eviews and Stata come in handy; but when I want to build and estimate my own models (that is, models that I create myself), MatLab is my tool of choice. MatLab is losing popularity in econometrics, and because it is not open like R or Python, most data science laboratories and statistical units that used to rely on MatLab are migrating to open tools. Stata remains popular despite the fact that you have to pay to use it; truth be told, Stata has improved a lot.
  • asked a question related to Econometrics
Question
7 answers
The literature regarding the impact of oil prices on oil and gas company stocks uses a multi-factor asset pricing model that includes the interest rate, a market index rate, and the oil price. (Sadorsky, 2001) (Mohanty and Nandha, 2011) (Cardoso, 2014) (Boyer and Fillion, 2007)
The literature surrounding oil price impact on an index mostly uses a VAR model. (Abhyankar and Wang, 2008) (Papapetrou, 2001) (Park and Ratti, 2008)
Rarely a VECM model is used. (Lanza et al, 2004) (Hammoudeh et al, 2004)
I have once seen a regression calculation but can't find the explanation for it (Gupta, 2016)
Due to renewable investment of individual companies only being available in news events or annual reports I am unsure which method to follow.
Is using a VAR model for monthly Oil returns, monthly O&G company index returns, and a dummy variable for renewable investment events in the news a possibility?
I would also be open to splitting my question into 3 sub-questions if this makes it easier to apply methodology.
How does oil price impact UK and US market index?
How does oil price impact the stock price of public oil and gas companies?
How does renewable investment impact the stock price of oil and gas majors?
I would also like to ask if a correlation coefficient can be used or simple regression analysis to establish a relationship?
Any help and advice would be highly appreciated.
Relevant answer
Answer
I would prefer some kind of ANOVA statistical analysis.
  • asked a question related to Econometrics
Question
7 answers
Dear All,
I’m conducting an event study for the yearly inclusion and exclusion of some stocks (from different industry sectors) in an index.
I need to calculate the abnormal return per each stock upon inclusion or exclusion from the index.
I have some questions:
1- How to decide upon the length of backward time to consider for the "Estimation Window", and how to justify it?
2- Stock return is calculated by:
(price today – price yesterday)/(price yesterday)
OR
LN(price today/price yesterday)?
I see both ways are used, although they give different results.
Can any of them be used to calculate CAR?
3- When calculating the abnormal return as the difference between the stock return and a benchmark (market) return, should the benchmark return be the index itself (on which stocks are included or excluded), or the sector index related to the stock?
Appreciate your advice with justification.
Many thanks in advance.
Relevant answer
Answer
Hi there, I am using Eventus software and I am wondering how the software computes the Market index in order to calculate abnormal returns?
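On question 2, a tiny R illustration of how the two return definitions differ on the same price path (CARs sum abnormal returns; BHARs compound simple returns):

p <- c(100, 102, 101, 105)           # hypothetical prices
simple_ret <- diff(p) / head(p, -1)  # (today - yesterday) / yesterday
log_ret <- diff(log(p))              # ln(today / yesterday)
cbind(simple_ret, log_ret)           # close but not identical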
  • asked a question related to Econometrics
Question
26 answers
Some people say it is not necessary to do a unit root test for panels covering under 15 years. When I ask for a reference, they point me to Baltagi's book, but I didn't find anything in this book. Can someone please tell me which book confirms this claim?
If you give the exact page, I'll be thankful; I need it for my thesis.
By the way, I have a panel of 83 countries and 7 time periods (2011-2017).
Relevant answer
Answer
Can somebody share some papers/literature that state that stationarity is not to be tested when T is less than a certain amount (say 30 or 15, as being discussed above)? It would be extremely helpful for my thesis.
  • asked a question related to Econometrics
Question
13 answers
Hi colleagues,
I use Stata13 and I want to run panel ARDL on the impact of institutional quality on inequality for 20 SSA countries. I have never used the technique so I am reading up available articles that used it. But I need help with a Stata do-file because I still don't know what codes to apply, how to arrange my variables in the model, and what diagnostics to conduct.
Any help or suggestion will do....thanks in anticipation!!! 
Relevant answer
Answer
*Panel ARDL with xtpmg (d. = first difference, l. = first lag; lr() holds the long-run relation, ec() names the error-correction term)
*PMG: pooled mean group - long-run coefficients pooled, short-run country-specific
xtpmg d.output d.fcapinf d.bankcr d.pfolio d.equity, lr(l.output fcapinf bankcr pfolio equity) ec(ECT) replace pmg
*add the full option to also report the country-specific short-run estimates
xtpmg d.output d.fcapinf d.bankcr d.pfolio d.equity, lr(l.output fcapinf bankcr pfolio equity) ec(ECT) replace full pmg
*MG: mean group - all coefficients estimated per country, then averaged
xtpmg d.output d.fcapinf d.bankcr d.pfolio d.equity, lr(l.output fcapinf bankcr pfolio equity) ec(ECT) replace mg
xtpmg d.output d.fcapinf d.bankcr d.pfolio d.equity, lr(l.output fcapinf bankcr pfolio equity) ec(ECT) replace full mg
*Hausman test to choose: a significant p-value favours MG, an insignificant one PMG
hausman mg pmg, sigmamore
  • asked a question related to Econometrics
Question
15 answers
I am writing an article where I determine the effect of import substitution. Should I include the diagnostic test results, or are only the short-run ECM and long-run bounds test results enough to interpret?
Relevant answer
Answer
Another way to look at ARDL is that the software can produce nonsense results even after you have done everything correctly. That is, all your variables may be statistically insignificant. It is these diagnostic tests plus, most importantly, your eyes that will show you that something else is needed. Always run ARDL as an OLS model before accepting what the software generates.
  • asked a question related to Econometrics
Question
18 answers
Hello everyone
I have doubts about the interpretation of the following cases, please help with that.
1-Dependent variable is infant mortality rate (per 1000 live births) and independent variable is health expenditure (%GDP) with -0.39 coefficient.
2- Dependent variable is natural logarithm of infant mortality rate (per 1000 live births) and independent variable is urbanization (% of total population) with -0.42 coefficient (independent variable is not in logarithm form).
Relevant answer
Answer
For case 1: for a one-unit increase in health expenditure (one percentage point of GDP), the infant mortality rate decreases by 0.39 deaths per 1,000 live births. For case 2 the model is log-linear, so a one-percentage-point increase in urbanization changes the infant mortality rate by approximately 100*(exp(-0.42) - 1) ≈ -34%; the quick reading of "-42%" overstates the effect for a coefficient this large.
  • asked a question related to Econometrics
Question
12 answers
Can I use the Granger causality test on monetary variables only, or do I need non-monetary variables?
Also, do I need to run any test before Granger, like a unit root test, or can I just use the raw data?
What free programs can I use to compute the data?
Relevant answer
Answer
Yes, you can. You can also use the EViews or Stata software.
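A free option is R: run unit root tests first (Granger causality presumes stationary series), then the test itself. A sketch with simulated data:

library(tseries)  # adf.test
library(lmtest)   # grangertest
set.seed(1)
x <- as.numeric(arima.sim(list(ar = 0.5), n = 200))
y <- 0.6 * c(0, head(x, -1)) + rnorm(200)  # y depends on lagged x
adf.test(x); adf.test(y)                   # unit root checks first
grangertest(y ~ x, order = 2)              # does x Granger-cause y?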
  • asked a question related to Econometrics
Question
11 answers
I'm having trouble deciding how to run the regression.
I want to see whether banks look at different characteristics when pricing loans before and after COVID. If, for example, loan-specific characteristics were the ones driving the interest rate charged before COVID, I would like to understand whether the same relationship still holds after COVID, or whether other variables become more relevant; in my case, whether firm-specific characteristics weigh more in the period after COVID.
I created a dummy variable for the COVID period (2020-2021) and my dataset goes from 2011 to 2021.
To sum up, I would like to understand whether and how the independent variables of my model changed their relevance after COVID-19.
How can I investigate this hypothesis?
Relevant answer
Answer
I would say:
  • Include time-fixed effects (month or year dummies).
  • Include the covariates of interest.
  • Include the covariates of interest interacted with a dummy variable capturing the period after COVID. You do not have to include the dummy variable alone, as you already include month fixed effects.
Regarding the interpretation. Let's assume that b0 is the coefficient of "Years to maturity" and b1 is the coefficient of "Years to maturity x COVID".
b0 is the effect of years to maturity in the pre-covid period.
b0 + b1 is the effect of years to maturity in the post-covid period.
b1 captures the differential effect of years to maturity in the second period compared to the first period.
I hope it helps,
José-Ignacio
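A sketch of that specification in R with fixest, on simulated loan data (all names hypothetical); note the COVID dummy enters only through interactions because the time fixed effects absorb its level:

library(fixest)
set.seed(1)
n <- 2000
loans <- data.frame(bank = sample(1:20, n, TRUE), year = sample(2011:2021, n, TRUE),
                    maturity = runif(n, 1, 10), firm_size = rnorm(n))
loans$covid <- as.integer(loans$year >= 2020)
loans$rate <- 3 + 0.10 * loans$maturity - 0.05 * loans$maturity * loans$covid -
              0.20 * loans$firm_size * loans$covid + rnorm(n, sd = 0.5)
est <- feols(rate ~ maturity + firm_size + maturity:covid + firm_size:covid | year,
             data = loans, cluster = ~bank)
summary(est)  # the :covid terms are the post-COVID changes in each pricing coefficient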
  • asked a question related to Econometrics
Question
10 answers
This is for personal academic benefit.
Relevant answer
Answer
My specific area is panel data.
The software I am using is EViews 9.
I do both: I give guidance to learners, and sometimes I ask for guidance.
  • asked a question related to Econometrics
Question
8 answers
I am looking to learn econometrics.
Relevant answer
Answer
I hope this book will help you.
  • asked a question related to Econometrics
Question
9 answers
Angrist's new paradigm for dealing with experimental data is very efficient; I would like to verify whether the same could apply to topics dealing with macroeconomic data.
Relevant answer
Answer
  • asked a question related to Econometrics
Question
4 answers
In time series modeling and volatility estimation it is necessary first to remove the autocorrelation of the time series and then to estimate the volatility model (like GARCH).
The autocorrelation is estimated via the ACF, but in some situations (such as a small sample or noise) this procedure may produce a bad estimate of the autocorrelation structure.
For example, suppose the true model is AR(3)-GARCH(1,1) but we use AR(1)-GARCH(1,1).
Are the GARCH parameters biased in this situation?
Thanks in advance.
Relevant answer
Answer
The estimated model AR(1)-GARCH(1,1) is different from the true model AR(3)-GARCH(1,1). The estimates must therefore be biased.
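A quick simulation makes the mechanism visible: the autocorrelation that the AR(1) filter fails to remove is passed on to whatever GARCH step follows.

set.seed(1)
y <- arima.sim(list(ar = c(0.3, 0, 0.4)), n = 1000)  # true mean equation: AR(3)
fit1 <- arima(y, order = c(1, 0, 0))                 # misspecified AR(1)
fit3 <- arima(y, order = c(3, 0, 0))                 # correct AR(3)
acf(resid(fit1))  # leftover autocorrelation contaminates the GARCH stage
acf(resid(fit3))  # residuals are close to white noise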
  • asked a question related to Econometrics
Question
4 answers
What if I wanted to match 2 individuals based on their Likert scores in a survey?
Example: Imagine a 3 question dating app where each respondent chooses one from the following choices in response to 3 different statements about themselves:
Strongly Disagree - Disagree - Neutral - Agree - Strongly Agree
1) I like long walks on the beach.
2) I always know where I want to eat.
3) I will be 100% faithful.
Assuming both subjects answer truthfully and that the 3 questions have equal weights, what is their % match for each question and overall? How would I calculate it for the following answers?
Example Answers:
Lucy's answers:
1) Strongly Agree
2) Strongly Disagree
3) Agree
Ricky's answers:
1) Agree
2) Strongly Agree
3) Strongly Disagree
What if I want to change the weight of each question?
Thanks!
Terry
Relevant answer
Answer
Daniel Wright and Remal Al-Gounmeein thanks for the links, I will take a look. We are matching respondents based on their 5-point Likert-scale responses to 16 partisan political positions.
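One reasonable way to formalize the match, sketched in R: score each item as 1 - |a - b| / 4 on a 5-point scale (1 = identical answers, 0 = opposite extremes), then combine with weights. The definition itself is a modeling choice, not the only possible one.

match_pct <- function(a, b, w = rep(1, length(a))) {
  item <- 1 - abs(a - b) / 4  # per-item agreement in [0, 1]
  list(per_item = item, overall = weighted.mean(item, w))
}
lucy <- c(5, 1, 4)   # SA, SD, A coded 1..5
ricky <- c(4, 5, 1)  # A, SA, SD
match_pct(lucy, ricky)                  # equal weights: 75%, 0%, 25%; overall 33%
match_pct(lucy, ricky, w = c(2, 1, 3))  # re-weighted questions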
  • asked a question related to Econometrics
Question
14 answers
What is the most acceptable method to measure the impact of regulation/policy so far?
I only know that Difference-in-Differences (DiD), Propensity Score Matching (PSM), and two-step system GMM (for dynamic models) are common methods. I am expecting your opinion for a 20-year panel of firm-level data.
Relevant answer
Answer
Recent developments:
(1) Wooldridge two-way Mundlak regression and fixed effects and dif-in-dif
(2) synthetic control
(3) Cerulli, G. 2015. Econometric Evaluation of Socio-Economic Programs: Theory and Applications.
(4) Pesaran (2015) Time Series and Panel Data Econometrics
  • asked a question related to Econometrics
Question
8 answers
Hello everyone.
I'm working on my Master's thesis in my first years of study, and I think these topics might be interesting and inspiring enough:
  • economic benefits from the introduction of renewable systems
  • dependence of the organization of the "Smart" city on factors, that is, what factors influence the implementation of the system - here I can conduct an econometric study
I am also attracted by these topics:
optimization of energy resources and smart energy system as part of the Industry 4.0 direction.
What do you think: which topics are the best?
For which topics is it easy enough to find an adequate amount of information?
P.S. As I understand it, this work shouldn't be super complex, but smart and elaborated enough.
Relevant answer
Answer
In fact, please note that your topics are mutually interrelated and interconnected. They are all based on the fundamental theme of sustainability. The concept of 'Smart' / 'Smartness' arises from the maximum use of renewable sources of energy, minimising the carbon footprint, ensuring minimum harm to the environment, strict adherence to Green Building norms, effective and efficient use of ICT, and so on. These ensure the fundamental concept of Sustainable Development (SD). In this regard, you may also go through the SDGs and MDGs, please.
With regards
  • asked a question related to Econometrics
Question
5 answers
I am a master's student of statistics. I have been in the field of econometrics and have taken projects on machine learning. However, I wish to change field. Can I have a supervisor who will be willing to mentor me through bioinformatics, taking my previous and current research areas into consideration? Or do I need another master's degree in bioinformatics or a related field before I can proceed to Phd?
Relevant answer
Answer
Try PhD in Bioinformatics
  • asked a question related to Econometrics
Question
8 answers
Dear everyone,
I am in great distress and desperately need your advice. I have the cumulated (disaggregated) data of a survey of an industry (total export, total labour costs, etc.) covering 380 firms. The original paper uses a two-stage least squares (TSLS) model to analyze several industries, because one independent variable has a relationship with the dependent variable; according to the author, this ruled out OLS. However, I want to conduct a single-industry analysis and exclude the variable with the relationship, BUT instead analyze the model over 3 years. What is the best econometric model to use? Can I use an OLS regression over a period of 3 years? If yes, what tests are applicable then?
Thank you so much for your help, you are helping me out so much !!!!!!!
Relevant answer
Answer
Dear Julius Hogan, conducting any standard model depends on an important factor, namely the number of observations included in the model. For example, if the number of observations is small, the Phillips-Perron test can be conducted to test stationarity, and if it is large, the (augmented) Dickey-Fuller test can be conducted; in light of the stationarity results, we can determine which model can be run.
  • asked a question related to Econometrics
Question
6 answers
Hello everyone,
I would like to analyze the effect of innovation in one industry over a time period of 10 years. The dependent variable is export and the independent variables are R&D and labour costs.
What is the best model to use? I am planning to use a log-linear model.
Thank you very much for your greatly needed help!
Relevant answer
Answer
Before deciding on the econometric model, you should first test for stationarity (e.g., with the ADF test). If the data are stationary, an OLS regression with a log-linear model would be fine; if not, you may go for a VAR or ARDL model. You should also check the robustness of the model with residual diagnostics such as the autocorrelation LM test.
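As a minimal sketch of that workflow (simulated data; the variable name is illustrative):
```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
ln_export = np.cumsum(rng.normal(size=120))   # hypothetical log-exports series

stat, pvalue, *rest = adfuller(ln_export, autolag="AIC")
print(f"ADF statistic: {stat:.3f}, p-value: {pvalue:.3f}")
# p-value < 0.05: the series looks stationary, so log-linear OLS is defensible.
# p-value >= 0.05: non-stationary in levels; consider differencing, ARDL, or VAR.
```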
  • asked a question related to Econometrics
Question
7 answers
Dear colleagues,
I am planning to investigate a panel data set covering three countries and 10 variables. The time frame is rather short, which concerns me (2011-2020 for each country). What should the sample size be in this case? Can I apply fixed effects, random effects, or pooled OLS?
Thank you for your responses beforehand.
Best
Ibrahim
Relevant answer
Answer
It seems a very small sample for applying microeconometric techniques. With 27 observations and 10 covariates you will have, at most, 27 - 1 - 10 = 16 degrees of freedom. This is pretty low. If I had to decide whether to pursue a project based on that, I would try to avoid it.
It is really closer to multiple time series than panel data. Have a look at this link:
  • asked a question related to Econometrics
Question
8 answers
Hi Everyone,
I am investigating the change of a dependent variable (Y) over time (Years). I have plotted the dependent variable across time as a line graph and it seems to be correlated with time (i.e. Y increases over time but not for all years).
I was wondering if there is a formal statistical test to determine if this relationship exists between the time variable and Y?
Any help would be greatly appreciated!
Relevant answer
Answer
Just perform a regression of the variable on time, or compute a simple correlation.
Nevertheless, what we usually do is carry out a test for mean differences at two different points in time.
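A minimal sketch of the first suggestion (simulated data): regress Y on a simple time index; the t-test on the slope is then a formal test for a linear trend.
```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
y = 0.5 * np.arange(30) + rng.normal(scale=3.0, size=30)  # hypothetical upward-trending series

t = np.arange(len(y))                      # time index: 0, 1, 2, ...
res = sm.OLS(y, sm.add_constant(t)).fit()
print(res.summary())                       # check the coefficient and p-value on t
```
If the trend need not be linear, a nonparametric alternative such as the Mann-Kendall test is a common choice.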
  • asked a question related to Econometrics
Question
2 answers
ML estimation is one of the methods for estimating the autoregressive parameter in univariate and multivariate time series.
Relevant answer
Answer
Did you manage to get it? If so, please help me.
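For anyone landing on this thread, a minimal sketch of ML estimation of an AR(1) parameter on simulated data: statsmodels' ARIMA class fits by exact maximum likelihood via a state-space representation.
```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(7)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.7 * y[t - 1] + rng.normal()   # true AR(1) coefficient: 0.7

res = ARIMA(y, order=(1, 0, 0)).fit()
print(res.summary())   # the ar.L1 row is the ML estimate of the AR(1) parameter
```
For the multivariate case, statsmodels' VARMAX class plays the analogous role.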
  • asked a question related to Econometrics
Question
6 answers
My research aims to find the determinants of FDI. I am using the bounds test to check for a long-run relationship, together with a cointegration test and other diagnostic tests.
Relevant answer
Answer
Thanks a lot everyone for your insights and advice. :)
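For later readers: a minimal sketch of a cointegration test on simulated data (not the poster's FDI series). For the Pesaran-style bounds test itself, recent statsmodels versions (0.13+) ship an ARDL/UECM implementation; the Engle-Granger test below is the simpler, long-standing alternative.
```python
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(3)
x = np.cumsum(rng.normal(size=200))      # simulated I(1) determinant
fdi = 0.8 * x + rng.normal(size=200)     # cointegrated with x by construction

t_stat, p_value, crit = coint(fdi, x, trend="c")
print(f"Engle-Granger t-stat: {t_stat:.3f}, p-value: {p_value:.3f}")
# A small p-value rejects "no cointegration", i.e. evidence of a long-run relationship.
```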
  • asked a question related to Econometrics
Question
3 answers
I am to write an econometric research report. I've tried to find some relevant and contributing references, but it seems I cannot find any. Attached are the criteria for what should be included. I need some references, please.
Relevant answer
Answer
  • asked a question related to Econometrics
Question
9 answers
Dear Colleagues,
I ran an error correction model, obtaining the results depicted below. The model comes from the literature, where Dutch disease effects were tested in the case of Russia. My dependent variable was the real effective exchange rate, while oil prices (OIL_Prices), terms of trade (TOT), the public deficit (GOV), and industrial productivity (PR) were the independent variables. My main concern is that only the error correction term, the dummy variable, and the intercept are statistically significant. Moreover, the residuals are not normally distributed and are heteroscedastic. There is no serial correlation issue according to the LM test. How can I improve my findings? Thank you beforehand.
Best
Ibrahim
Relevant answer
Answer
I notice the following about your specification. (1) Your inclusion of a constant (and its subsequent significance) means you allow for (and find) a trend in the real exchange rate independent of any trends in the other variables. Is that economically reasonable? (2) I assume the CRISIS variable is a zero-one dummy for time periods with a "crisis" of some sort. Apparently it is not in the cointegration vector. Why not? If it were, then I'd expect to find CRISIS differences in the error correction equation. Instead, you have it in levels. Thus you specify that a temporary crisis has a permanent effect on the level of the real exchange rate independent of the other variables. Is that what you intend? (3) You do not include the lagged difference of the real exchange rate in the error correction equation. Why not? Normally it would be there.
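For reference, a generic single-equation error correction specification along the lines of points (2) and (3) would be (notation illustrative, not the poster's exact model):

$$\Delta reer_t = \alpha + \gamma\,(reer_{t-1} - \beta' x_{t-1}) + \sum_{j=1}^{p} \phi_j\, \Delta reer_{t-j} + \sum_{j=0}^{q} \theta_j'\, \Delta x_{t-j} + \delta\, CRISIS_t + \varepsilon_t$$

where the lagged differences of the dependent variable appear explicitly, and where placing CRISIS in the short-run dynamics versus in the cointegration vector determines whether its effect on the level is transitory or permanent.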
  • asked a question related to Econometrics
Question
7 answers
Hello All,
Wooldridge's Introductory Econometrics (5th ed.) states that "Because maximum likelihood estimation is based on the distribution of y given x, the heteroskedasticity in Var(y|x) is automatically accounted for."
Does this hold also for bias-corrected or penalized maximum likelihood estimation under Firth logistic regression?
Please advise.
Thanks in advance!
Relevant answer
Answer
I may be misunderstanding your question, but there is no constant variance assumption with logistic regression, so you do not need to worry about heteroskedasticity. In fact, heteroskedasticity is almost guaranteed with logistic regression since the variance of a binomial random variable is a function of the probability of the event happening and the probability of the event not happening, which will usually differ between observations.
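To make that concrete in symbols: for a binary outcome under a logistic model,

$$\operatorname{Var}(y \mid x) = p(x)\,\bigl(1 - p(x)\bigr), \qquad p(x) = \frac{1}{1 + e^{-x'\beta}},$$

so the conditional variance varies with x by construction and is built into the binomial likelihood (penalized or not) rather than being a separate assumption to check.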
  • asked a question related to Econometrics
Question
10 answers
I have non-stationary time-series data for variables such as energy consumption, trade, oil prices, etc., and I want to study the impact of these variables on the growth of electricity generation from renewable sources (I have taken natural logarithms of all the variables).
I performed a linear regression, which gave me spurious results (r-squared > 0.9).
After testing these series for unit roots with the Augmented Dickey-Fuller test, all of them were found to be non-stationary, hence the spurious regression. However, the first differences of some of them, and the second differences of the others, were found to be stationary.
Now, when I estimate the new linear regressions with the proper order of integration for each variable (in order to have a stationary model), the statistical results are not good (high p-values for some variables and a low r-squared of 0.25).
My question is: how should I proceed now? Should I change my variables?
Relevant answer
Please note that transforming variable(s) does NOT make a series stationary; it rather makes the distribution(s) more symmetrical. Logarithmic transformations need to be applied with extreme caution, taking into account the properties of the series, the underlying theory, and the implied logical/correct interpretation of the relationships between the dependent variable and the selected regressors.
Returning to your question, the proposed solution would be to use the autoregressive distributed lag (ARDL) approach, which is suitable for datasets containing a mixture of variables with different orders of integration, i.e., I(0) and I(1). Note, however, that the ARDL bounds-testing framework is not valid if any variable is integrated of order two, so any series that only becomes stationary after second differencing should be re-examined first. Kindly read the attached manuscripts for your information.
All the best!
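A minimal sketch of the ARDL route on simulated data (variable names hypothetical), assuming statsmodels 0.13 or later:
```python
import numpy as np
import pandas as pd
from statsmodels.tsa.ardl import ardl_select_order

rng = np.random.default_rng(5)
n = 150
exog = pd.DataFrame({"ln_energy": np.cumsum(rng.normal(size=n)),
                     "ln_trade": np.cumsum(rng.normal(size=n))})
y = 0.3 * exog["ln_energy"] + 0.2 * exog["ln_trade"] + rng.normal(size=n)

# search over lag orders for the dependent variable and each regressor
sel = ardl_select_order(y, maxlag=4, exog=exog, maxorder=4, trend="c", ic="aic")
res = sel.model.fit()
print(res.summary())
```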
  • asked a question related to Econometrics
Question
7 answers
Hi! I have a model for panel data, and my teacher told me to estimate the model with different coefficients for one of the explanatory variables. She gave me an example:
lpop @expand(@crossid)    linv(-1) lvab lcost (for different coefficients for intercept)
or
lnpop c  lninv   @expand(@crossid)*lnvab  lncost (for different coefficients for this variable).
Can someone explain to me how to do that? I tried but it didn't work.
Relevant answer
Answer
It is shown as lnpop, so I think you need to transform the variables into logs. If that is the case, simply follow these steps:
Quick → Generate Series → lnpop = log(pop), and take logs of the rest of the variables in the same way.
After that, put lnpop in the equation to be estimated.
Best wishes.
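For anyone replicating this outside EViews, a minimal sketch of the same idea in Python's statsmodels on hypothetical data: the C(crossid):lnvab term estimates a separate lnvab slope for each cross-section, analogous to @expand(@crossid)*lnvab.
```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 90
df = pd.DataFrame({
    "crossid": np.repeat(["A", "B", "C"], n // 3),   # three cross-section units
    "lninv": rng.normal(size=n),
    "lnvab": rng.normal(size=n),
    "lncost": rng.normal(size=n),
})
df["lnpop"] = 1 + 0.4 * df["lninv"] - 0.2 * df["lncost"] + rng.normal(size=n)

# common intercept, common lninv/lncost slopes, one lnvab slope per cross-section
res = smf.ols("lnpop ~ lninv + C(crossid):lnvab + lncost", data=df).fit()
print(res.summary())
```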
  • asked a question related to Econometrics
Question
10 answers
Dear colleagues,
I applied the Granger Causality test in my paper and the reviewer wrote me the following: the statistical analysis was a bit short – usually the Granger-causality is followed by some vector autoregressive modeling...
How can I respond in this case?
P.S. I had a small sample size and serious data limitations.
Best
Ibrahim
Relevant answer
Answer
Ibrahim Niftiyev, the reviewer probably wants to see not only whether one variable affects the other (i.e., the results of the Granger causality tests), but also to what extent (the magnitude and timing of the dynamic relationship, something you can obtain from the IRFs of a VAR model). If you want to apply a VAR but have a small sample size or data limitations, you may want to consider a Bayesian VAR. Bayesian VARs are very popular, and Bayesian methods are valid in small samples.
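To illustrate the reviewer's point, a minimal sketch of a (frequentist) VAR with impulse responses on simulated data; a Bayesian VAR would need a different tool, but the kind of output the reviewer is after looks like this:
```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(4)
n = 200
x = rng.normal(size=n)
z = np.zeros(n)
for t in range(1, n):
    z[t] = 0.5 * z[t - 1] + 0.3 * x[t - 1] + rng.normal()  # x Granger-causes z
data = pd.DataFrame({"x": x, "z": z})

res = VAR(data).fit(maxlags=4, ic="aic")
irf = res.irf(10)       # impulse responses up to horizon 10
irf.plot(orth=True)     # orthogonalized IRFs with asymptotic confidence bands
```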
  • asked a question related to Econometrics
Question
5 answers
In my work, I use a multivariate GARCH model (DCC-GARCH). I am testing for autocorrelation in the variance model. Ljung-Box (Q) tests on the standardized residuals and on the squared standardized residuals give different results.
Should I rely on the Ljung-Box test on the residuals or on their squares?
N=1500
Relevant answer
Answer
The Ljung-Box test is aimed at testing the independence of the errors, using the residuals of an ARMA model estimated on the same data. But it relies on autocorrelations, so it has little power when the errors are uncorrelated but not independent. When applied to the squared residuals, it can reveal ARCH and GARCH effects. Note that the errors of an ARCH-GARCH model are uncorrelated but not independent. Have a look at the excellent book by Francq and Zakoian, "GARCH Models: Structure, Statistical Inference and Financial Applications" (Wiley, 2010).
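A minimal sketch of the two diagnostics on simulated standardized residuals (N = 1500, as in the question):
```python
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(6)
std_resid = rng.standard_t(df=8, size=1500)   # hypothetical standardized residuals

print(acorr_ljungbox(std_resid, lags=[10, 20], return_df=True))      # remaining serial correlation
print(acorr_ljungbox(std_resid**2, lags=[10, 20], return_df=True))   # remaining ARCH effects
```
In short: the test on levels checks for leftover dependence in the mean, the test on squares for leftover ARCH effects, so for a well-specified DCC-GARCH you would want both to be insignificant rather than choosing between them.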
  • asked a question related to Econometrics
Question
4 answers
Hello, dear network. I need some help.
I'm working on research using the event study approach. I have a couple of doubts about the significance of the lead and lag coefficients of the treatment variable.
I'm not sure I am satisfying the pre-treatment parallel trends assumption: none of the lags are statistically significant and they are all around the zero line. Is that enough to satisfy the identification assumption?
Also, I'm not sure about the significance of the lead coefficients and their interpretation. The table with the coefficients is attached.
Thank you so much for your help.
Relevant answer
Answer
You may find this attached paper helpful.
Best wishes, David Booth
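For readers of this thread: the usual event-study specification, and the parallel-trends check it implies, can be written as (notation generic, not the poster's exact model)

$$y_{it} = \alpha_i + \lambda_t + \sum_{k \neq -1} \beta_k\, \mathbf{1}[t - E_i = k] + \varepsilon_{it}$$

where E_i is unit i's treatment date and k = -1 is the omitted reference period. Pre-treatment coefficients hovering around zero and individually insignificant are the standard, though indirect, evidence for parallel trends; a joint F/Wald test that all pre-treatment coefficients are zero is stronger support than eyeballing them one by one.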
  • asked a question related to Econometrics
Question
3 answers
Hi everyone! I am writing a paper about economic growth during the pandemic. I have panel data with 24 countries and 4 periods. I received a recommendation to use the EGLS method. I used this method in EViews, but I don't know enough about it to justify choosing it or to interpret the results. I know it is used when we have heteroscedasticity and autocorrelation.
Thank you, all!
Relevant answer
Answer
Robert Traistaru, GLS was proposed as a solution to the presence of autocorrelation or heteroscedasticity, which violates the OLS assumption that the error terms are uncorrelated and have constant variance; in that case the Gauss-Markov theorem does not apply, so the OLS estimators are no longer BLUE, i.e., the estimates are inefficient. In short, the justification for using GLS is the presence of some form of autocorrelation or heteroscedasticity in the errors.
However, when you do not know the value of rho (needed for GLS) and have to estimate it (which tends to be the case), what you apply in practice is Estimated GLS (EGLS), also known as feasible GLS (FGLS). While GLS is more efficient than OLS under heteroscedasticity or autocorrelation, EGLS is only asymptotically more efficient; in small or medium samples it can be less efficient than OLS. So, if you have a moderate to small sample size, using EGLS can actually be worse than using OLS.
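As an illustration of the FGLS idea (simulated data; the AR(1) error structure and dimensions are hypothetical), statsmodels' GLSAR alternates between estimating rho from the residuals and re-running GLS, which is one textbook EGLS implementation:
```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 96   # e.g. 24 countries x 4 periods, stacked (purely illustrative)
X = sm.add_constant(rng.normal(size=(n, 2)))
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.6 * e[t - 1] + rng.normal()   # AR(1) errors violate the OLS assumptions
y = X @ np.array([1.0, 0.5, -0.3]) + e

res = sm.GLSAR(y, X, rho=1).iterative_fit(maxiter=10)  # alternate rho estimation and GLS
print(res.params)
```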
  • asked a question related to Econometrics
Question
2 answers
I am currently working on my master's thesis and have faced some problems in designing a survey. The goal is to analyze a transition from ordinary offline retailing towards physical showrooms that fulfil product orders through an online shop. The dependent variable is customer satisfaction (ranging from 1 to 10), and the independent variables are the following: F = fulfillment (1/0), where 1 = now and 0 = in 3 days; A = assortment (from 10 to 20 units per shop); P = price (from 25 down to 25 × 0.7 with the discount, i.e., 17.5). Is it possible to design a survey/experiment in a way that yields the data needed for this equation?
Relevant answer
Answer
I am also interested in the answers to this question.