Science topic

Time Series Econometrics - Science topic

Explore the latest questions and answers in Time Series Econometrics, and find Time Series Econometrics experts.
Questions related to Time Series Econometrics
  • asked a question related to Time Series Econometrics
Question
7 answers
I've read Box & Jenkins (5th edn); ARCH and GARCH are mentioned in the later chapters, but only in the context of model checking. In Hamilton there is no mention at all. If I use an ARIMA model for hypothesis tests (assuming conditional LS and MLE for estimating the coefficients), would heteroskedasticity be a problem, i.e., would it result in incorrect standard errors of the coefficients?
Relevant answer
Answer
Yes, it is.
  • asked a question related to Time Series Econometrics
Question
4 answers
1. For lag length selection: should I select the lag length using differenced values or levels? I run the VAR model and then apply the lag length selection criteria.
2. Cointegration rank: should I run the Johansen test on differenced values or on levels?
3. There is a debate about whether to estimate a VAR in differences or in levels even when the data are nonstationary.
Relevant answer
Answer
For selecting lag length: Use information criteria like AIC or BIC to determine the optimal lag length in your VAR/VECM model.
For testing co-integration: Employ tests such as the Johansen test or Engle-Granger test to assess co-integration between variables.
For VAR/VECM with non-stationary I(1) data: Implement the VAR or VECM model directly on the non-stationary data, as these models can handle I(1) variables without differencing.
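A minimal R sketch of this workflow, assuming the vars and urca packages and a hypothetical matrix Y holding the level (undifferenced) series:
library(vars)   # VARselect
library(urca)   # ca.jo, the Johansen test
# Y is assumed to be a matrix or data frame of the level series (hypothetical names)
Y <- na.omit(cbind(gdp, cpi, rate))
# 1. Lag length selection on the levels VAR via information criteria
sel <- VARselect(Y, lag.max = 8, type = "const")
sel$selection                                # AIC, HQ, SC and FPE choices
# 2. Johansen cointegration test on the levels, using the chosen lag (K must be >= 2)
k  <- max(2, sel$selection["SC(n)"])
jo <- ca.jo(Y, type = "trace", ecdet = "const", K = k)
summary(jo)                                  # compare trace statistics with critical values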
  • asked a question related to Time Series Econometrics
Question
5 answers
I am currently working on the factors that affect the capital structures of firms in Tanzania. One of the factors I am working on is diversification.
Relevant answer
Answer
To calculate the entropy measure of product/business diversification in Stata, you can compute an entropy index: the sum across segments of each segment's share multiplied by the natural log of the inverse of that share. An analogous trade entropy index can be computed over export destinations for the region under study, and geographical sales dispersion can be measured with an entropy index of company sales per country.
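The arithmetic behind the entropy measure is simple; here is a minimal sketch in R (the Stata computation is analogous), using hypothetical segment sales for a single firm:
# Hypothetical sales by business segment (or export destination) for one firm
sales  <- c(seg1 = 500, seg2 = 300, seg3 = 200)
shares <- sales / sum(sales)
# Entropy measure of diversification: sum over segments of share * ln(1/share)
entropy <- sum(shares * log(1 / shares))
entropy            # 0 = fully concentrated; ln(number of segments) = evenly spread
# For comparison, the Herfindahl index is the sum of squared shares
herfindahl <- sum(shares^2)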
  • asked a question related to Time Series Econometrics
Question
5 answers
I'm using stock price data to model and forecast volatility in EViews. In forecasting, when I use the static forecast the mean absolute percentage error is high, but it is low when I use the dynamic forecast. What is the difference between these two methods, and which is more accurate?
Relevant answer
Answer
In EViews, a static forecast is a one-step-ahead forecast that uses the actual value of the lagged dependent variable for each subsequent forecast. A dynamic forecast is a multi-step-ahead forecast that uses the previously forecast value of the dependent variable to compute the next one.
As for which gives more accurate forecasts, it depends on the data and the model. In general, static one-step-ahead forecasts tend to be more accurate than dynamic multi-step forecasts, because they condition on actual rather than previously forecast values and so do not accumulate forecast errors. In any case, it is important to evaluate the accuracy of both methods over a hold-out sample before drawing conclusions.
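For intuition, here is a rough R sketch of comparing one-step ("static") and multi-step ("dynamic") forecasts over a hold-out sample, using the forecast package and a simple ARIMA mean model rather than the volatility model in the question; the series y is a hypothetical stand-in:
library(forecast)
y      <- ts(rnorm(300))                    # hypothetical stand-in series
y_est  <- window(y, end = 250)              # estimation sample
y_hold <- window(y, start = 251)            # hold-out sample
fit <- Arima(y_est, order = c(1, 0, 1))
# Dynamic (multi-step) forecast: each step feeds on previously forecast values
dyn <- forecast(fit, h = length(y_hold))$mean
# Static (one-step-ahead) forecast: re-apply the fitted model to the hold-out data,
# so each forecast uses the actual previous observations
stat <- fitted(Arima(y_hold, model = fit))
accuracy(dyn,  y_hold)                      # multi-step errors
accuracy(stat, y_hold)                      # one-step errors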
  • asked a question related to Time Series Econometrics
Question
2 answers
In many articles I saw that the ARDL model can be used when there is only one cointegrating relationship. Therefore, to check the number of cointegrating relationships I used the Johansen cointegration test and found there is only one. But in theory, the ARDL model uses the bounds F-statistic test to check whether cointegrating relationships exist. How do we identify the number of cointegrating relationships using the bounds test? Is it unnecessary to use the bounds test if I have already used the Johansen cointegration test?
Relevant answer
Answer
The ARDL bounds test assumes at most one long run levels relationship. It does not test for multiple long run relations. The Johansen test can test for multiple cointegrating relationships. The difference is that the Johansen procedure assumes all levels variables are I(1) whereas the ARDL procedure assumes the dependent levels variable is I(1) and the independent variables can be I(1) or I(0). Using both tests may be useful if there is uncertainty over the orders of integration of the levels variables and the number of long run relations. If you are certain all levels variables are I(1) you only need to use Johansen. If you are certain there is one long run levels relationship and uncertain over the independent variables' orders of integration you only need to use the ARDL test.
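If it helps, the logic of the bounds F-test can be sketched in R with dynlm. This is only an illustration of the test's structure for an ARDL(1,1) with hypothetical ts objects y and x; the F statistic must be compared with the Pesaran, Shin and Smith (2001) bounds critical values rather than the usual F tables, and a dedicated ARDL package is preferable in practice:
library(dynlm)
# Unrestricted error-correction form of an ARDL(1,1): lagged levels included
unres <- dynlm(d(y) ~ L(y, 1) + L(x, 1) + L(d(y), 1) + d(x))
# Restricted model: lagged levels (the long-run part) dropped
restr <- dynlm(d(y) ~ L(d(y), 1) + d(x))
# F statistic for the joint significance of the lagged levels
# (check that both models were estimated on the same sample)
anova(restr, unres)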
  • asked a question related to Time Series Econometrics
Question
16 answers
I am analyzing Sri Lankan stock market performance during COVID-19. I considered daily stock prices from 2015 to 2021. The total number of days in the series is 2555, but 905 of them are missing records due to stock exchange holidays. Is it okay to impute those values based on the available data, or does imputing such a large number of observations affect the accuracy of the results? Please give me a solution; this is the only data available for the stock market.
Relevant answer
  • asked a question related to Time Series Econometrics
Question
3 answers
I am currently trying to analyze the impact of high levels of geopolitical uncertainty (GPR) on monetary connectivity using a quantile regression model with the quantile set at 0.95. My current concern is whether the coefficients are positive or negative and whether they are significant. However, almost all coefficients are not significant. Four variables in the current data are non-stationary (including independent and dependent variables), and all time series show serial autocorrelation. At present, I do not know how to deal with this: whether I take logarithms, differences, or log differences, the transformation changes the economic meaning of the coefficients.
Relevant answer
Answer
Dear Professor, I know how to run the quantile regression; you can read the details of my problem above. I am wondering whether I can improve the performance of the model or whether I should switch to a different model.
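For readers following this thread, a minimal quantile regression at the 0.95 quantile in R, assuming a hypothetical data frame df with the connectivity measure, the GPR index, and controls (this does not by itself address the nonstationarity or autocorrelation issues raised in the question):
library(quantreg)
fit95 <- rq(connectivity ~ gpr + controls, tau = 0.95, data = df)
summary(fit95, se = "boot")   # bootstrap standard errors are often preferred here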
  • asked a question related to Time Series Econometrics
Question
3 answers
I have a few time series which are cointegrated. First I differenced the non-stationary series and fitted a VAR after making them stationary. Later I fitted a vector error correction model separately and obtained the long-run and short-run relationships. As far as I know, taking differences removes the long-run relationship between variables. So does that mean I can use the VAR to explain the short-run relationship even though the series are cointegrated? If so, can I compare the results of the VAR model with the short-run results of the VECM?
Relevant answer
Answer
I think the comparison between the two models may provide insights into how the variables respond to short-term shocks and how they adjust back to their long-run equilibrium relationships.
  • asked a question related to Time Series Econometrics
Question
6 answers
Dear colleagues,
I am looking for a package or command that performs time series decomposition in Stata. So far I have not found anything. An example can be found here: https://towardsdatascience.com/an-end-to-end-project-on-time-series-analysis-and-forecasting-with-python-4835e6bf050b at figure 5.
I look forward to your valuable comments.
Relevant answer
Answer
Sir, I am doing research on labour market discrimination. Please help me to perform decomposition on unit-level cross-sectional data.
  • asked a question related to Time Series Econometrics
Question
3 answers
I want to know how I can calculate b_t (the dependent variable) in equation 1b.
If anyone has code for this methodology, could you kindly send it to me?
Do these three equations need to be estimated simultaneously, and if so, how?
Paper: Baur, D. G., & McDermott, T. K. (2010). Is gold a safe haven? International evidence. Journal of Banking & Finance, 34(8), 1886-1898.
Relevant answer
Answer
Hello Dear Imran,
Were you able to find the code or the steps for this methodology? Could you please share them if you did?
  • asked a question related to Time Series Econometrics
Question
2 answers
Running a time series model with a dependent variable and an endogenous regressor via 2SLS. Both variables are expressed as % growth rates. Cumby-Huizinga is rejected at 5%, so I am looking for possible causes of autocorrelation. The regressor clearly changes behavior after a specific period, and there is good evidence that this change is policy related. Adding a dummy that becomes 1 after the policy changes became effective seems to reduce the serial correlation: the Cumby-Huizinga p-value is now 0.12. The dummy coefficient is very small but negative, as expected. I know that omitted variables can potentially cause autocorrelation, but does this also hold if the omitted variable is a dummy, or is this likely to be an artifact?
Relevant answer
Answer
Dear Stelios Xanthopoulos,
Interesting question.
The dummy variable reflects a non-quantifiable effect that cannot otherwise be captured by the model. In your case, policy changes can be addressed through a dummy variable. Equating a dummy variable with an omitted variable may not be in order; the observation you obtained may not be generalizable.
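A minimal sketch of the setup described in the question, assuming the AER package and hypothetical variable names (y and x as growth rates, z an instrument, d the post-policy dummy); whether HAC standard errors are also needed depends on what the Cumby-Huizinga test shows after adding the dummy:
library(AER)       # ivreg
library(sandwich)  # NeweyWest
iv <- ivreg(y ~ x + d | z + d, data = df)   # dummy enters both stages as exogenous
summary(iv)
# HAC (Newey-West) inference if some serial correlation remains
# (assumes the sandwich methods apply to this ivreg fit)
lmtest::coeftest(iv, vcov = NeweyWest(iv))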
  • asked a question related to Time Series Econometrics
Question
6 answers
I have seen papers where PSM has been performed using cross-sectional and panel data. I want to know if PSM can be used for time series data too.
I also want to ask which quantitative method one should use for analysing the impact of a policy intervention when the dataset is a time series.
Relevant answer
Answer
One way to deal with this problem is to estimate the trends before and after the intervention by fitting linear, quadratic, or higher-order curves. However, it may be enough to confirm the existence of a change point using a test such as Mann-Kendall.
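A rough R sketch of this suggestion, assuming a hypothetical monthly outcome series and a known intervention date (the Kendall package provides the Mann-Kendall trend test; the regression is a basic interrupted-time-series specification):
library(Kendall)
y <- ts(outcome, start = c(2000, 1), frequency = 12)   # 'outcome' is hypothetical
MannKendall(y)                         # monotonic trend over the whole sample
t    <- seq_along(y)
post <- as.numeric(time(y) >= 2010)    # 1 from the intervention onwards (assumed date)
fit  <- lm(y ~ t + post + I(t * post))
summary(fit)                           # 'post' = level shift, interaction = slope change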
  • asked a question related to Time Series Econometrics
Question
8 answers
Hello, I am new to VARs and am currently building an SVAR with the following variables to analyse monetary policy shocks and their effects on house prices: house prices, GDP growth, inflation, the mortgage rate (first differenced), the central bank base rate (first differenced), and the M4 growth rate. The aim of the VAR analysis is to generate impulse responses of house prices to monetary policy shocks and to understand the forecast error variance decomposition.
I'm planning on using the base rate and the M4 growth rate as policy instruments, for a time period spanning 1995 to 2015. Whilst all other variables reject the null hypothesis of non-stationarity in an Augmented Dickey-Fuller test with 4 lags, the M4 growth rate fails to reject the null hypothesis.
Now, if I go ahead anyway and create a SVAR via recursive identification, the VAR is stable (eigenvalues within the unit circle), and the LM test states no autocorrelation at the 4 lag order.
Is my nonstationarity of M4 Growth Rate an issue here? As I am not interested in parameter estimates, but rather just impulse responses and the forecast error variance decomposition, there is no need for adjusting M4 Growth rate. Is that correct?
Alternatively I could first difference the M4 growth rate to get a measure of M4 acceleration, but I'm not sure what intuitive value that would add as a policy instrument.
Many thanks in advance for your help, please let me know if anything is unclear.
KB
Relevant answer
Answer
I will answer this question by suggesting that you consider your basic model.
1. You have a strange mixture of nominal variables, real variables, and growth rates.
2. Interest rates are the equivalent of log differences.
3. If the growth rate of M4 is non-stationary, this implies that the log of M4 is at least I(2). I find this hard to believe.
4. Are there seasonal effects in your data?
5. I would be very doubtful that ARDL is valid in your case.
6. If I understand correctly, you intend to identify your system by using a particular ordering of your variables. It will be difficult to justify any particular order.
7. I would be concerned that you might have trouble accounting for bubbles in house prices and subsequent falls. The 20 years that you are using were a very difficult time for house prices, and they were out of equilibrium for much of this period. Your project is very ambitious.
8. I would recommend that you talk to your supervisor and agree on an analysis that might be more feasible.
  • asked a question related to Time Series Econometrics
Question
5 answers
I am using an ARDL model; however, I am having some difficulties interpreting the results. I found that there is cointegration in the long run. I provide pictures below.
Relevant answer
Answer
Mr a. D.
The ECT(-1) is the lagged error-correction term, i.e., the lagged residual from the long-run (cointegrating) relationship; its coefficient measures the speed of adjustment and should be negative and statistically significant.
Regards
  • asked a question related to Time Series Econometrics
Question
9 answers
Hi 
I have two series based on annual observations (13 each). One series is normal and the other is not, based on the K-S and W-S tests, Q-Q plots, etc. My objective is to test for a significant difference between these two series. At this point I am not able to find the next step. Kindly help me understand, if possible.
Relevant answer
Answer
The Student's t-test is the most commonly used test to determine whether two sets of data are significantly different from each other.
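A minimal illustration in R, using simulated stand-ins for the two 13-observation series; since one of the series is not normal, a non-parametric comparison (Wilcoxon/Mann-Whitney) is a common companion to the t-test:
set.seed(1)
a <- rnorm(13, mean = 5)      # stand-in for the normal series
b <- rexp(13, rate = 0.2)     # stand-in for the non-normal series
t.test(a, b)                  # parametric comparison of means
wilcox.test(a, b)             # non-parametric alternative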
  • asked a question related to Time Series Econometrics
Question
1 answer
Dear all, I want to replicate an EViews plot (attached as Plot 1) in Stata after performing a time series regression. I made an effort to produce the Stata plot attached as Plot 2. However, I want Plot 2 to look exactly like Plot 1.
Please kindly help me out. Below is the Stata code I ran to produce Plot 2. What exactly do I need to include?
The codes:
twoway (tsline Residual, yaxis(1) ylabel(-0.3(0.1)0.3)) (tsline Actual, yaxis(2)) (tsline Fitted, yaxis(2)),legend(on)
  • asked a question related to Time Series Econometrics
Question
4 answers
Hello everyone,
I have two versions of the same financial series, with and without outliers. Suppose I estimate an AR(1)-GARCH(1,1) separately for both series and obtain different log-likelihood values for the two models. I want to apply the likelihood ratio test to compare the models and select the best one. I am confused about the degrees of freedom of the test. Are the degrees of freedom 1 or 4 in this case?
Thanks,
Irfan Awan
Relevant answer
Answer
Dear Irfan sahib, the models under the null and the alternative should be nested to apply the likelihood ratio test, and both should be based on the same number of observations. In your case the numbers of observations differ, so this issue cannot be settled by a hypothesis test. Rather, you can use other ways to address it: for example, check whether the parameter values change drastically when the outliers are included, or whether the persistence (alpha + beta) of the GARCH variance changes significantly. If no drastic change occurs, the outliers are not a concern.
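A rough rugarch sketch of this informal comparison, assuming two return vectors returns_raw (with outliers) and returns_clean (outliers treated); the persistence is read off as alpha1 + beta1:
library(rugarch)
spec <- ugarchspec(mean.model = list(armaOrder = c(1, 0)),
                   variance.model = list(model = "sGARCH", garchOrder = c(1, 1)))
fit_raw   <- ugarchfit(spec = spec, data = returns_raw)
fit_clean <- ugarchfit(spec = spec, data = returns_clean)
coef(fit_raw); coef(fit_clean)                     # compare parameter estimates
sum(coef(fit_raw)[c("alpha1", "beta1")])           # persistence with outliers
sum(coef(fit_clean)[c("alpha1", "beta1")])         # persistence without outliers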
  • asked a question related to Time Series Econometrics
Question
2 answers
Hello,
I'm using GARCH code, where 'data' is a file with 204 values and 'train' is a rolling sample of size 50 with a shift of +1 at each step (25 columns and 178 rows).
To make a prediction I'm using mod_fitting = ugarchfit(train[(90:96),], spec = mode_specify, out.sample = 20) and
forc = ugarchforecast(fitORspec = mod_fitting, n.ahead = 20), but I get only one column of output, whereas with train[(90:96),] I would like to get 7 columns as a result.
So I need to shift and change the number manually: train[(9),], train[(8),], train[(20),], ...
Could you please tell me whether it is possible to create a data frame or something similar to get a result with multiple columns?
Thank you very much for your help.
Code is below:
library(forecast)
library(fGarch)
library(timeDate)
library(timeSeries)
library(fBasics)
library(quantmod)
library(astsa)
library(tseries)
library(quadprog)
library(zoo)
library(rugarch)
library(dplyr)
library(tidyverse)
library(xts)
#y <- read.csv('lop.txt', header =TRUE)
data <- read.csv('k.csv')
a <- data[,1]
mi <- a
shift <- 50                              # window length
S <- c()
# build a matrix of rolling windows: each row is one window of 'shift' consecutive values
for (i in 1:(length(mi) - shift + 1))
{
s <- mi[i:(i + shift - 1)]
S <- rbind(S, s)
}
train <- S
# ARMA(0,0) mean, gjrGARCH(1,0) variance, skewed Student-t errors
mode_specify = ugarchspec(mean.model = list(armaOrder = c(0,0)), variance.model = list(model = "gjrGARCH", garchOrder = c(1,0)), distribution.model = 'sstd')
# NOTE: train[(90:96),] passes a 7-row matrix; ugarchfit expects a single series (vector)
mod_fitting = ugarchfit(train[(90:96),], spec = mode_specify, out.sample = 20)
mod_fitting
train[(2),]
forc=ugarchforecast(fitORspec = mod_fitting,n.ahead=20)
Relevant answer
Answer
Sorry Valeriia, can you better explain the data and the goal? Does "train" contain a single time series (i.e., a vector) or many (i.e., a matrix)? For example, I see that you select several rows and all the columns (hence I think that train is a matrix with many windows) for fitting the GARCH model. First, I am worried that so few rows are not enough to obtain an accurate estimation. Second, only one row or column (a single time series) should be selected if you fit a univariate GARCH model. Third (maybe it is my fault), I did not get what you mean by "I need to shift and change a number manually train[(9),train[(8),train[(20),...".
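If the goal is one forecast column per window, a rough sketch along these lines may help. It reuses mode_specify and train from the question; the row indices are just an example, and 50-observation windows may be too short for reliable GARCH estimates, as noted above:
rows  <- 90:96                       # the windows you want, one per column of output
h     <- 20                          # forecast horizon
sig_f <- matrix(NA, nrow = h, ncol = length(rows),
                dimnames = list(NULL, paste0("win", rows)))
for (j in seq_along(rows)) {
  win <- as.numeric(train[rows[j], ])              # one window as a single series
  fit <- ugarchfit(spec = mode_specify, data = win)
  fc  <- ugarchforecast(fitORspec = fit, n.ahead = h)
  sig_f[, j] <- as.numeric(sigma(fc))              # forecast volatilities for this window
}
sig_f                                              # one column per window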
  • asked a question related to Time Series Econometrics
Question
7 answers
Hi, I'm trying to do a multivariate regression analysis using TSCS data in R, and I'm a bit lost as to where I should start. My research question is how ALMPs (active labour market policies) have affected unemployment in four European countries over a 20-year period, using OECD expenditure data on subprogrammes. The resources I have from class don't show how to do a TSCS analysis, so I'm wondering if people have tips on where I should start and what to do.
Relevant answer
Answer
Unemployment is the dependent variable and the ALMPs are the independent variables. Try to estimate a simple regression model in R, then interpret how the independent variables affect the dependent variable.
Best!!
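As a more concrete starting point, a hedged sketch using the plm package, assuming a hypothetical long-format data frame df with columns country, year, unemployment, and ALMP expenditure variables:
library(plm)
pdat <- pdata.frame(df, index = c("country", "year"))
# Two-way fixed effects (country and year) is a common baseline for TSCS data
fe <- plm(unemployment ~ almp_training + almp_incentives,
          data = pdat, model = "within", effect = "twoways")
summary(fe)
# Robust / panel-corrected standard errors are usually advisable with TSCS data
lmtest::coeftest(fe, vcov = plm::vcovBK(fe))   # Beck-Katz PCSE (one option among several)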
  • asked a question related to Time Series Econometrics
Question
20 answers
1. Each time series variable is log-transformed and stationary at first difference, i.e., I(1), not I(0). I intend to employ the ARDL method.
2. I have run ADF and PP tests without specifying lags in the command (Stata).
That is, dfuller [varname]; pperron [varname]. Is the syntax OK?
3. How do I make sure that the variables are NOT I(2)? Do I have to run the ADF and PP tests on the second difference, or does I(1) already mean the series is not integrated of order 2?
  • asked a question related to Time Series Econometrics
Question
2 answers
Does anyone have the codes (written on RATS/MATLAB/any other platform) for Rolling Hinich Bicorrelation test and Rolling Hurst Exponent test? Would greatly appreciate if you could share them.
Relevant answer
  • asked a question related to Time Series Econometrics
Question
7 answers
Could anyone suggest a package that implements the Lumsdaine and Papell (1997) unit root test in R?
Relevant answer
Answer
@Fady M. A Hassouna and Kehinde Mary Bello, the augmented Dickey-Fuller test is a conventional unit root test that does not account for structural breaks, unlike Lumsdaine and Papell, Narayan and Popp, Lee and Strazicich, and other unit root tests that do account for structural breaks.
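A closely related, single-break alternative that is readily available in R is the Zivot-Andrews test in urca; this is not the two-break Lumsdaine-Papell test itself, only a commonly used substitute (y is a placeholder for your series):
library(urca)
za <- ur.za(y, model = "both", lag = 4)   # break in intercept and trend
summary(za)                               # compare the statistic with the reported critical values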
  • asked a question related to Time Series Econometrics
Question
8 answers
I estimated an autoregressive model in EViews. I got a parameter estimate for one additional variable which I had not included in the model; the variable is labelled 'SIGMASQ'.
What is that variable and how should I interpret it?
I am attaching the results of the autoregressive model.
Thanks in advance.
Relevant answer
Answer
SIGMASQ is the sigma squared (variance) of the distribution of the residuals, which serves as an estimate of the variance of the distribution of the dependent variable; this distribution is needed for the maximum likelihood method.
SIGMASQ is estimated in a second step, after the parameters attached to the regressors.
By contrast, the S.E. of regression is the standard error of the regression, which is based on the differences between the actual and fitted values of the dependent variable.
  • asked a question related to Time Series Econometrics
Question
10 answers
Dear colleagues,
I applied the Granger Causality test in my paper and the reviewer wrote me the following: the statistical analysis was a bit short – usually the Granger-causality is followed by some vector autoregressive modeling...
What can I respond in this case?
P.S. I had a small sample size and serious data limitation.
Best
Ibrahim
Relevant answer
Answer
Ibrahim Niftiyev , probably the reviewer wants to see not only whether a variable affects or not the other (i.e. the results of the Granger causality tests), but also to which extent (the magnitude and temporality of the dynamic relationship, something you can obtain from the IRFs of a VAR model). If you want to apply a VAR but you have a small sample size/data limitations, you want to consider a Bayesian VAR. Bayesian VARs are very popular and Bayesian methods are valid in small samples.
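For the frequentist case, a minimal sketch of the VAR-plus-IRF step the reviewer is alluding to, assuming the vars package and a hypothetical matrix Y of suitably transformed series with columns x1 and x2 (Bayesian VARs would require a dedicated package instead):
library(vars)
var_fit <- VAR(Y, p = 2, type = "const")
causality(var_fit, cause = "x1")          # Granger (and instantaneous) causality of x1
plot(irf(var_fit, impulse = "x1", response = "x2", n.ahead = 10, boot = TRUE))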
  • asked a question related to Time Series Econometrics
Question
4 answers
I am currently trying to estimate the long memory parameter of various time series with several versions of "whittle estimators" à la Robinson, Shimotsu, and others.
However, the estimated value depends crucially on the bandwidth parameter.
Is there any rule on choosing it, or is there any literature about choosing this parameter adequately?
I really appreciate any help you can provide.
Relevant answer
Answer
Check Wilfredo Palma's book on long-memory time series.
  • asked a question related to Time Series Econometrics
Question
11 answers
Dear colleagues,
From the screenshot, you can see my OLS estimations between institutional variables and oil-related predictor variables. My main hypothesis was that oil-related variables have a negative impact on institutional quality (in line with resource curse theory); however, my estimations produced mixed results, giving both positive and negative coefficients. In this case, what should I do? How do I accept or reject the alternative hypothesis that I have already mentioned? Thank you beforehand.
Best
Ibrahim
Relevant answer
Answer
As you can see from the screenshot, I have mixed results, which is exactly what confuses me: some of my independent variables have a positive impact and some a negative one.
Best
Ibrahim
  • asked a question related to Time Series Econometrics
Question
12 answers
I am working on my thesis methodology. My sample is 21 observations. When I ran the VECM, several issues came up due to the small sample, and I cannot find more than 21 observations. There is also an article using an annual time series of just 15 observations that dealt with these issues by using an ARDL model and the bounds test.
  • asked a question related to Time Series Econometrics
Question
25 answers
Dear colleagues,
I ran several models in OLS and found these results (see the attached screenshot, please). My main concern is that some coefficients are extremely small, yet statistically significant. Is it a problem? Can it be because my dependent variables are index values ranging between -2.5 and +2.5, while some explanatory variables are measured, for example, in thousand tons? Thank you beforehand.
Best
Ibrahim
Relevant answer
Answer
Dear Mr. Niftiyev, your dependent variable varies between -2.5 and +2.5. Hence, it is better to employ a Tobit, Probit, or Logit approach if possible. The choice between these three approaches depends mostly on the distribution patterns of the variables.
  • asked a question related to Time Series Econometrics
Question
3 answers
There is a general discussion that dependent variables used in the ARDL method should be stationary at I(1). However, in some studies the ARDL method was used although the dependent variable was stationary at I(0). Can it be said that these analyses are not accurate? In addition, since NARDL methods are ARDL-based estimators, do dependent variables have to be stationary at I(1)?
Relevant answer
Answer
ARDL, NARDL, QARDL, and other versions of ARDL usually have cointegration as a precondition, which can be achieved when the dependent variable is I(1) and at least one of the independent variables is also I(1). You can read the original papers that established these techniques; you will observe that the model used for testing the method is a cointegrated one.
  • asked a question related to Time Series Econometrics
Question
14 answers
Hello everyone!
I’m doing a time series analysis of the relationship between high valuable patents and economic growth for six countries. (Growth measured in GDP and GDP per capita). Sample size: 20 years.
To check for stationarity and cointegration, I want to do ADF test and Engle-Granger test.
For both tests, when do I have to choose:
- test without constant
- test with constant
- or test with constant and trend
My second question is: how do I identify the optimal lag length?
Thanks in advance.
Relevant answer
Answer
As a practitioner, I am sympathetic with the advice to "try all possible options to be on the safe side for unit root tests". However, if you obtain contradictory results, then you need to think theoretically about the time series being modelled. The ADF (etc.) with no constant (drift term) and no deterministic time trend is testing for a pure random walk (i.e. a model in which the variable is purely stochastic and can thus go anywhere, in which case the best predictor of the future value is the current value). Some variables, e.g. currency values, are like this. However, real economic variables are not purely stochastic but also respond to fundamentals. In this case, a random walk with drift is an appropriate univariate model: the drift term captures the deterministic impact of economic fundamentals (giving the variable a tendency to move up or down in every period); while the unit root captures the stochastic component (which may either reinforce or offset the deterministic drift term in any given period). Philosophically, this is appealing: we live in a universe that is neither completely deterministic nor completely random. Finally, it is possible that the stochastic component of the evolution of a variable may unfold around not only a drift term but also a deterministic time trend. So test for this as well. In conclusion, these possibilities should be considered in advance of running tests. And in all cases, look at the graph of the variable to be tested!
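In R, the three deterministic specifications discussed above can be run side by side with urca (y is a placeholder for the series being tested):
library(urca)
summary(ur.df(y, type = "none",  lags = 4, selectlags = "AIC"))   # pure random walk
summary(ur.df(y, type = "drift", lags = 4, selectlags = "AIC"))   # random walk with drift
summary(ur.df(y, type = "trend", lags = 4, selectlags = "AIC"))   # drift plus deterministic trend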
  • asked a question related to Time Series Econometrics
Question
4 answers
Dear colleagues, what do you think about applying Pearson's r or Spearman's rho correlation analysis to panel data? Is it possible to meaningfully interpret the results? Do you know any study or research that would fit my question? I highly appreciate your help.
Best
Ibrahim
Relevant answer
Answer
Zeynep Köylü Thank you!
Best
Ibrahim
  • asked a question related to Time Series Econometrics
Question
17 answers
Dear Colleagues,
I estimated the OLS models and ran several diagnostic tests on them; however, instability in CUSUMSQ persists, as shown in the photo. What should I do in this case?
Best
Ibrahim
Relevant answer
Answer
I presume that your data are quarterly or monthly, as otherwise you have too few observations to make any reasonable inferences.
If you are trying to make causal inferences (e.g. you have an economic model that implies that x causes y and you wish to measure that effect), the CUSUMSQ is one test that indicates that your model is not stable: either the coefficients or the variance of the residuals is not stable. You have indicated that there is no heteroskedasticity, so it is possible that the model coefficients are the problem. The test itself only indicates that there is instability; it does not say what the instability is or what causes it. There are many possible causes of instability (omitted variables, functional form, heteroskedasticity, autocorrelation, varying coefficients, etc.). Your best procedure is to return to your economics and work out how your theory might lead to stability problems. Are there possible breaks in your data caused by policy changes, strikes, technological innovations, and the like that might be captured with a dummy variable or a step dummy?
If you are doing forecasting (or projections), I would not be too concerned about specification tests. It is very unlikely that an unstable model will forecast well. You may achieve good forecasting results with a very simple model that need not be fully theory compliant.
  • asked a question related to Time Series Econometrics
Question
14 answers
Dear Colleagues,
I noticed that when I estimate an equation by least squares in EViews, under the Options tab there is a tick mark for degrees of freedom (d.f.) adjustment. What is its importance and role? When I estimate an equation without the d.f. adjustment, I get two statistically significant coefficients out of five explanatory variables; however, when I estimate with the d.f. adjustment, I do not get any significant results.
Thank you beforehand.
Relevant answer
Answer
Are you attempting prediction, or are you trying to estimate some form of "causal" relationship? If you are estimating a "causal" model, your conclusions are conditional on the model estimated. Strictly speaking, it would be better to use the adjusted degrees of freedom, particularly with your small sample. In this case, a non-significant coefficient does not necessarily imply that the coefficient is truly zero. It is more likely that your sample is too small to establish a significant result; its p-value must not be very far from your significance level. If the estimate is of the sign and magnitude expected by theory, I would accept the value and report the p-value. Estimating 5 coefficients is a big demand on a sample of 23 observations.
If you are simply doing prediction or forecasting and are not attributing explanatory power to your coefficients you might be better with a simpler model which might have better predictive ability.
  • asked a question related to Time Series Econometrics
Question
19 answers
Dear Colleagues,
If I have 10 variables in my dataset (time series), of which 9 are explanatory and 1 is dependent, and I establish that all the variables are non-stationary, should I take the first difference of the dependent variable as well?
Best
Ibrahim
Relevant answer
Econometric models estimated with non-stationary data are profoundly invalid and misleading (Greene, 2002). An example of a simple scenario: in a regression with one regressor, there are three variables that could be stationary or non-stationary, namely the dependent variable (Y), the regressor (X), and the disturbance term (u). A suitable econometric treatment of such a model depends critically on the pattern of stationarity and non-stationarity of these three variables (including the dependent variable). Since variables are quite often non-stationary in levels, it is important to understand the forces behind such non-stationarity, which largely include structural breaks, deterministic trends, and stochastic trends. Differencing (including the explained variable, as in your case) is a common and often appropriate treatment in nonstationary models (Granger & Newbold, 1974; Greene, 2002; Stock & Watson, 2011).
Granger, C. W. J., and Paul Newbold. 1974. Spurious Regressions in Econometrics. Journal of Econometrics, 2(2):111-120.
Greene, W. 2002. Time Series Models. (pp. 608-662) In Econometric Analysis, 5th edition. Prentice Hall, Upper Saddle River, NJ.
Stock, James H., and Mark Watson. 2011. Introduction to Econometrics. 3rd ed. Boston: Pearson Education/Addison Wesley.
  • asked a question related to Time Series Econometrics
Question
4 answers
Dear Experts,
I am trying to use the xtabond2 command in Stata 14. The data I will use in my thesis form a dynamic panel: the time dimension runs from 2005Q1 to 2016Q4 for 24 banks. I intend to use the two-step GMM estimator and used the commands below:
xtabond2 LiqC_TA L.LiqC_TA L.(Equity_TA earningvol MSHARE npl_ratio loan_depo) if new==0, gmmstyle (L.(LiqC_TA earningvol npl_ratio loan_depo), lag(1 1) collapse) gmmstyle (L.Equity_TA, lag(2 3) collapse) ivstyle (L.MSHARE) robust twostep nodiffsargan
xtabond2 Equity_TA L.Equity_TA L.(LiqC_TA earningvol MSHARE npl_ratio loan_depo) if new==0, gmmstyle (L.(Equity_TA earningvol npl_ratio loan_depo), lag(1 1) collapse) gmmstyle (L.LiqC_TA, lag(2 6) collapse) ivstyle (L.MSHARE) robust twostep nodiffsargan
I got the results that I want. If I use time dummies, my independent variables and control variables are omitted. Therefore, is it correct not to include time dummies in the above commands?
Thank you very much in advance for any response.
Best Regards,
Khine
Relevant answer
Answer
The instruments used for the level equation are the lagged first-differences of the time-varying variables. These instruments are valid as long as
a) the idiosyncratic shocks are not serially correlated
b) the correlation between these variables and the individual effects is constant, i.e. a stationarity assumption is met
  • asked a question related to Time Series Econometrics
Question
16 answers
Dear colleagues,
Is it OK to include 2 or 3 dummy variables in the regression equation, or should I rotate the dummy variables across different models? The thing is, I have never come across model examples with more than 2 dummy variables in economics so far. Do you know any serious shortcomings of using more than 1 dummy variable in the same equation? Thank you beforehand.
Best
Ibrahim
Relevant answer
Answer
Dear all,
You can use as many as you wish, but be careful about multicollinearity.
best
  • asked a question related to Time Series Econometrics
Question
12 answers
Context:
  • I have two series, price and sales.
  • Sales is mean-reverting stationary, but price is stationary only after controlling for an intercept break.
  • I want to set up a 2-equation VAR model and the research interest is to estimate the cumulative effect of price on sales through impulse response function.
My question: Is the IRF of a price shock on sales still biased even after I include the break dummy as a regressor in the two equations? Say the VAR model is:
Sales_t = β0 + β1*Sales_{t-1} + β2*Sales_{t-2} + β3*Price_{t-1} + β4*Price_{t-2} + β5*D_t + e_t
Price_t = β0 + β1*Sales_{t-1} + β2*Sales_{t-2} + β3*Price_{t-1} + β4*Price_{t-2} + β5*D_t + e_t
My answer is yes, the IRF is still biased because the regressors Price_{t-1} and Price_{t-2} are still nonstationary.
My solution: include in both equations the interaction terms β6*Price_{t-1}*D_t and β7*Price_{t-2}*D_t.
Would you please assess my question, my answer, and my solution above?
Thank you very much!
Relevant answer
Answer
If you provide a plot with the time series you have and illustrate how the break occurs, it will be easier to answer. The assumptions about et are not really plausible. I would apply log to both sales and price first.
But the major issue here is in how you understand the intercept break. So the first step is visual analysis.
Also, one approach to test your hypothesis is to conduct an MC experiment where you know the data generating process, and then check whether your estimates really correspond to the true parameters you used to generate the data (this approach has been successful in overturning the belief in a number of well-known methods). You can google for it, but it is known that replacing missing data with dummies leads to biases, and this was demonstrated using an MC experiment.
  • asked a question related to Time Series Econometrics
Question
9 answers
I am interested to know the difference between first-, second-, and third-generation panel data techniques.
Relevant answer
Answer
First-generation panel data methods often assume cross-sectional independence, i.e., a shock to a variable in one country will not have any effect on other countries' variables. However, as a result of globalization and other cross-national interlinkages, it is apparent that a problem in country A can affect country B. Most conventional panel estimators, such as fixed effects, random effects, and panel OLS, fall into this category. In order to correct the bias in the estimates of first-generation panel methods, second-generation panel methods were developed. These methods appropriately incorporate cross-sectional dependence in the modelling, and include estimators such as CCEMG, CS-ARDL, CUP-FM, and so on.
  • asked a question related to Time Series Econometrics
Question
8 answers
Hi everyone! I have a question relating to the specification form of the ARDL regression (which includes both long-run and short-run dynamics). In most of the research articles I reviewed, the specification of the regression (assuming y is dependent, and x and z are explanatory) takes the form shown in the attached file. That is: the first difference of y is regressed on y(-1); x(-1), z(-1), as well as on the first difference of the lagged variables (both explanatory and dependent) based on the optimal number of lags.
I have two related questions. Firstly, why in some articles is the symbol (p), which represents the upper limit of the summation operator (∑) in the regression, defined as the optimal lag, when in fact it should be the optimal lag minus 1? Because if the optimal lag for x is 3, for example, the first difference of this variable should be represented by the three terms:
Δx, Δ[x(-1)]; and Δ[x(-2)];
the last term in fact is [x(-2) - x(-3)]. So, the upper limit of the (∑) symbol should be 2, which is the optimal lag minus 1.
Secondly, and related to the first question, why in most articles is the lower limit of (∑) associated with the first difference of the lagged explanatory variables set at (i=1), while it should be (i=0)? The regression results show that the first difference of the level explanatory variable is included as well as the first differences of its lagged values. So, why do we not say that (i=0) for the first difference of the explanatory variables? The expression currently used in the literature excludes the first difference of the level explanatory variable, while it is included in the regression results!
I really appreciate your kind response!
Relevant answer
Answer
To justify employing the ARDL bounds test approach, the results from the ADF and PP unit root tests should indicate that the variables are a mix of I(0) and I(1) (not all integrated of the same order) and, in particular, that none of them is I(2).
  • asked a question related to Time Series Econometrics
Question
24 answers
Dear Colleagues,
I am running a time series regression (OLS) with stationary dependent variables and the log form of the explanatory variables. Very few of the logged explanatory variables are stationary. When I took the first difference, I could not get any significant results; also, the first difference did not make much sense to me when I analyzed it graphically. My question is: will my regression results be distrusted if I report such an analysis? I asked a similar question and got some replies that even if you log your variables you still have to test them for a unit root; however, I see several papers with logged variables where stationarity was not taken into account. Thank you beforehand.
Best
Ibrahim
Relevant answer
Answer
It opens you to the criticism of spurious regression if the series are time-trended. Causality is still a bigger issue than just establishing the stationarity of time-dated series: we need theory to guide us, and the use of statistics and experimental research design to handle causality is still not resolved.
  • asked a question related to Time Series Econometrics
Question
9 answers
Dear RG colleagues,
I applied OLS regression analysis and usually, I report CUSUM and CUSUMSQ stability tests. But this time, I have to report more stability tests and I also included heteroscedasticity tests. My question is, are these two enough, or should I incorporate additional stability tests of coefficients or residuals? What are the most popular stability tests of the models? Thank you beforehand.
Best
Ibrahim
Relevant answer
In standard applied econometric modelling practice, it is advisable to run as many diagnostic tests as possible to optimise model performance and estimates, while being careful to run only tests that are applicable and relevant.
  • asked a question related to Time Series Econometrics
Question
7 answers
Dear colleagues,
I am capable of estimating linear relationships between X and Y variables via OLS or 2SLS (in EViews, for example); however, I need to learn how to estimate/model non-linear relationships as well. If you know any source that explains this in simple language in a time series context, your recommendations are most welcome. Thank you beforehand.
Best
Ibrahim
Relevant answer
Answer
Dear Ibrahim,
I also recommend Greg N. Gregoriou and Razvan Pascalau, @Hamid Muili, and searching Google for more materials on non-linear relationships.
Regards
  • asked a question related to Time Series Econometrics
Question
36 answers
I am estimating some regression equations based on the OLS method in EViews. I have 12 variables overall, and 11 of them are non-stationary. I am planning to use log forms. If so, do I need to report the ADF or Phillips-Perron unit root tests in the paper? Shouldn't the log form of the variables become stationary? Your answers are highly appreciated; thank you beforehand.
Best
Ibrahim
Relevant answer
Answer
Logging variables does not remove a unit root from a series; it only smooths the series a bit and makes interpretation easier, since coefficients can then be interpreted in percentage terms.
Reporting either ADF or PP is at your own discretion, but for confirmatory purposes you can report both the ADF and PP unit root tests as well as the KPSS stationarity test, provided your variables do not have structural breaks.
Furthermore, using 11 explanatory variables might be too many unless you have a very large dataset, say 100 observations or more, because using many variables with a small sample shrinks the degrees of freedom, which might affect the inferences made from the estimation.
Best Regards
  • asked a question related to Time Series Econometrics
Question
6 answers
Hello Everyone
I am running a VECM, and the error correction term equals -0.90. I have four variables; one of my speed-of-adjustment parameters equals -0.55 and another equals -0.20; however, the other two variables are unresponsive and insignificant. I know that there is nothing wrong with the results theoretically, but I am afraid that they might be overestimated. I have run several residual and stability diagnostic tests for the model, and the results were fine. I have also checked the literature but could not find an answer, as I am using data for a different country.
Do you think that the results might be overestimated?
PS I am using annual data 1991-2018
Relevant answer
Hi Sarah,
Estimates from the VEC model are presented in two components. The first component contains the long-run (cointegrating) relationships, while the second contains the short-run dynamics, computed from the VAR in first (and possibly second) differences and including the error correction term (ECT) estimated from the first-step Johansen procedure. The error correction (cointegration) term measures the speed of adjustment at which a deviation of the specified endogenous variables from the long-run equilibrium path is gradually corrected through a sequence of partial short-run adjustments. In other words, the error correction term shows the long-run behaviour of the specified endogenous variables as they converge to their long-run cointegrating relationship. Whether steady adjustment to the long-run equilibrium occurs through this short-run partial adjustment mechanism can be assessed by examining the statistical significance of the respective variables' responses to deviations from the long-run equilibrium path.
In my view, if you conducted all the essential residual and stability diagnostic tests (including the roots-of-the-characteristic-polynomial VEC model stability test) and they seem fine, then I don't think your results are over-estimated, as long as they are also consistent with the relevant economic theory/theories underpinning your empirical research study.
All the best!
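For completeness, a hedged sketch of how the speed-of-adjustment coefficients can be inspected in R with urca, assuming a hypothetical matrix Y holding the four level variables (with only 28 annual observations the estimates will be imprecise, as discussed above):
library(urca)
jo  <- ca.jo(Y, type = "trace", ecdet = "const", K = 2)
summary(jo)                 # determine the cointegration rank r first
vec <- cajorls(jo, r = 1)   # VECM representation with rank 1
summary(vec$rlm)            # the ect1 coefficients are the speeds of adjustment
vec$beta                    # the long-run (cointegrating) vector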
  • asked a question related to Time Series Econometrics
Question
10 answers
I am currently analysing the relationship between monetary base and Unemployment and have constructed an ARDL model. When I use the BIC to determine the optimal lag length of my independent variable (monetary base), the model that is suggested only has one lag. I have a feeling this doesn't make very much economic sense. In the model with one lag, the independent variable isn't significant.
When I include 12 lags, the 5th lag of the independent variable is significant.
I have read that with monthly data, including 12 lags is reasonable.
Could I just include more lags than are suggested by the BIC in order to get a significant variable?
Relevant answer
Answer
The Schwarz-Bayesian criterion (SBC), the Hannan-Quinn criterion (HQC), and the likelihood ratio (LR) test can also be used.
  • asked a question related to Time Series Econometrics
Question
9 answers
I have looked at econometrics books such as Brooks' time series econometrics, but I'm looking for books and articles that specialize in GARCH models.
  • asked a question related to Time Series Econometrics
Question
3 answers
I have an ARDL error correction model; in the CUSUM chart, as far as I can diagnose, there is a negative trend (drifting means). Before that, I found that the explanatory variable and the dependent variable are not integrated of the same order. I know I cannot use an ECM when there is no cointegration, and I don't know whether there is any other model that would give more consistent results.
Could anyone offer a remedy?
Relevant answer
Answer
Well, your CUSUM results indicate that a linear model is inappropriate, as this points to some nonlinear behaviour. I would suggest the use of a nonlinear model. If you want to test for cointegration, use the nonlinear cointegration analysis of Seo (2006) or some of the more recently developed nonlinear cointegration tests.
Regards,
Dr Katleho Makatjane
Business and Risk Analyst
Basetsana Consultants
  • asked a question related to Time Series Econometrics
Question
9 answers
Dear all, I would like to start a discussion here on the use of generalised mixed effect (or additive) models to analyse count data over time. I report here the few analyses I know in R, for each of which I list the GOOD points and the LIMITS/DOUBTS I found. Please feel free to add or comment with further information and additional approaches to analyse such a dataset. That said, given that generalised mixed effect modelling still requires further understanding (at least from me) and that my knowledge is limited, I would like to start a fruitful discussion including both people who would like to know more about this topic and people who know more.
About my specific case: I have count data (i.e., taxa richness of fish) collected over 30 years at multiple sites (each site sampled multiple times). My idea is therefore to fit a model to predict trends in richness over the years using generalised (Poisson) mixed effect models with the fixed factor "Year" (plus a couple of environmental factors such as elevation and catchment area) and the random factor "Site". I also believe that, since I am dealing with data collected over time, I need to account for potential serial autocorrelation (let us leave spatial correlation aside for the moment!). So here are some GOOD points and LIMITS I found in using the different approaches:
glmer (lme4):
GOOD: good model residual validation plot (fitted values vs residuals) and good estimation of the richness over years, at least based on the model plot produced.
LIMITS: i) it is not possible to include a correlation structure (e.g., corARMA) for autocorrelation.
glmmPQL(MASS):
GOOD: possible to include corARMA in the model
LIMITS: i) bad final residual vs fitted validation plot and a completely different estimate of richness over the years compared to glmer; ii) how to compare different models, e.g., to find the best autocorrelation structure (as far as I know, no AIC or BIC is produced)? iii) I read that glmmPQL is not recommended for Poisson distributions (?).
gamm (mgcv):
GOOD: Possible to include corARMA, and smoothers for specific dependent variables (e.g., years) to add the non-linear component.
LIMITS (DOUBTS): i) How to obtain the residual validation plot (residuals vs fitted)? ii) There is a double output summary ($gam; $lme): which one should be reported? iii) In the $gam output, variables with smoothers are not given coefficient estimates (only degrees of freedom and significance are given); is this reported somewhere else?
If you have any comment, please feel free to answer to this question. Also, feel free to suggest different methodologies.
Just try to keep the discussion at a level which is understandable for most of the readers, including not experts.
Thank you and best regards
Relevant answer
Answer
You haven't mentioned glmmTMB in your pro/con description, though you mention it in the title, so I'll add some pros for this package. The advantage of glmmTMB is that you can easily model complex / nested / cross classified random effects structures and you have different correlation options (like AR1 etc.).
Furthermore, you have various diagnostic options for glmmTMB as well, to quote from another thread:
You could use the "performance" package to calculate indices like r2() or icc() [1]. You can also check your model for overdispersion or zero-inflation with the "performance" package (check_overdispersion() or check_zeroinflation()). Furthermore, it can create diagnostic plots (which requires the "see" package for plotting capabilities [2]). Another, imho important, package in this context is "DHARMa" for residual diagnostic plots [3].
Finally, a comment on your model design: when analysing longitudinal data / repeated measurements, I'd suggest adding the time variable both as fixed effect and random slope. And I would treat "year" as continuous, maybe using a quadratic / cubic or spline to model time variation, so you would have something like:
library(glmmTMB)
model <- glmmTMB(y ~ x1 + year + (1 + year | site), family = poisson, data = df)  # df = your data; Poisson for count data
or
library(splines)
model <- glmmTMB(y ~ x1 + bs(year) + (1 + year | site), family = poisson, data = df)
As estimates for splines / cubic / quadratic trends are somewhat difficult to interpret, I suggest effects plots ("estimated marginal means"), which you can easily produce with the "ggeffects" package [4].
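To complement this, a minimal DHARMa residual check for the model sketched above (assuming it was fitted as 'model'):
library(DHARMa)
sim <- simulateResiduals(fittedModel = model, n = 250)
plot(sim)                 # QQ plot and residual-vs-predicted plot
testDispersion(sim)       # over/underdispersion
testZeroInflation(sim)    # excess zeros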
  • asked a question related to Time Series Econometrics
Question
26 answers
I am modelling the volatility of stock returns. I took daily data; the data are stationary, but there is no ARCH effect. Can I proceed to GARCH, TGARCH?
Relevant answer
Answer
Farooq Shah, I do not think you should use GARCH if there are no ARCH effects in your data. However, first try to estimate the mean equation by OLS and check the residuals for ARCH effects (keeping in mind that OLS itself assumes constant variance, so this is only a preliminary check).
Try the Box test too, to check whether you have autocorrelation in the residuals.
Begin with a simple AR(1)-ARCH(X) model and then see if you can estimate a GARCH.
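A minimal R sketch of that preliminary check, assuming a returns vector named returns (the ARCH-LM test in FinTS is an alternative to the Ljung-Box test on squared residuals, if that package is available):
fit <- arima(returns, order = c(1, 0, 0))           # AR(1) mean equation
res <- residuals(fit)
Box.test(res,   lag = 12, type = "Ljung-Box")       # autocorrelation in residuals
Box.test(res^2, lag = 12, type = "Ljung-Box")       # ARCH effects: autocorrelation in squared residuals
# FinTS::ArchTest(res, lags = 12)                   # LM-type ARCH test (optional)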
  • asked a question related to Time Series Econometrics
Question
9 answers
While running a gravity model via OLS or the PPML technique for (aggregated) panel data of 60 countries over, say, 10 years, I get a single table showing estimates for all regressors.
I don't understand how to interpret it. To which country pair do the coefficients belong?
My dependent variable is exports from i to j, while the independent variables include distance, contiguity, GDP, etc.
Relevant answer
Answer
These are the parameter estimates for the independent variables, so each is the mean effect of that variable; they are not country-pair specific and apply to all countries. For country-specific intercepts, for example in a fixed effects model, you can retrieve them using the fixef command: fixef(your_model_name).
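As an illustration of how such pooled coefficients are produced, a hedged PPML sketch with the fixest package, using hypothetical variable names for a bilateral trade panel:
library(fixest)
ppml <- fepois(exports ~ log(dist) + contig + log(gdp_o) + log(gdp_d) | year,
               data = trade)
summary(ppml)   # one coefficient per regressor, common to all country pairs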
  • asked a question related to Time Series Econometrics
Question
3 answers
I am currently investigating the presence of bubbles in a particular financial market.
I have implemented a right-tailed ADF test to check for "mild-explosiveness" of my time series, along with a supremum ADF test to check for the presence of a bubble, as proposed by Phillips et al. in their 2011 paper.
The issue I am currently running into is an apparent contradiction in my results. While the initial ADF test fails to reject the null hypothesis of a unit root, the SADF test rejects this same null in favour of the alternative hypothesis: the existence of a price bubble.
Is this to be expected? It seems unusual that one would both reject and fail to reject the same null hypothesis when using two relatively similar tests.
Relevant answer
Answer
There exist very many unit root tests, and sometimes these tests can give conflicting results, for instance on whether a particular time series is an I(0) or I(1) process [or an I(1)/I(2) process]. In this case a researcher should either have an a priori preference for a particular unit root test (say, DF with a trend or without a trend) or use expert judgement on whether a particular time series is stationary or not.
But, of course, a conclusion on whether there is a bubble in a particular financial market is also subjective on its own, and cannot be strictly based on the result of one particular unit root test or even a combination of such tests.
  • asked a question related to Time Series Econometrics
Question
8 answers
Dear all,
As per the theory, when the variables are not all integrated of the same order, ARDL is applied to test the relationship between the variables.
My doubt is whether the relationship tested in ARDL indicates a long-run cointegration relationship or not.
Thanks in advance.
Relevant answer
Answer
You have to perform the bounds test, which indicates whether there is a long-run or only a short-run relationship among the variables. If there is only a short-run relationship, you estimate the ARDL for the short run; if there is cointegration, you should proceed with a VECM or ECM, depending on the results the bounds test shows, Srikanth Potharla.
  • asked a question related to Time Series Econometrics
Question
4 answers
I would appreciate it very much if you could recommend the newest and most applied Non-linear Time series Econometrics Techniques and some articles. Many thanks. Kind regards, Sule
Relevant answer
Answer
I can recommend "Elements of Nonlinear Time Series Analysis and Forecasting" by Jan De Gooijer. It was published by Springer in 2017. It provides a description of the different nonlinear time series models with many references, examples and also pieces of code. It covers parametric and non parametric approaches for univariate models and even a few multivariate models.
  • asked a question related to Time Series Econometrics
Question
4 answers
Hi,
There are arguments for and against adjusting data for seasonality before estimating a VAR model (and then Granger causality). I have monthly tourist arrival data for three countries (over 18 years) and am interested in spill-over effects or causality among the arrivals. I would appreciate your views on the following.
1. Is seasonal adjustment compulsory before estimating a VAR?
2. If I take 12-month seasonally differenced data without otherwise adjusting for seasonality, will that be okay?
Kind regards
Thushara
Relevant answer
Answer
Hi, my answers to your questions:
1. No, it is not compulsory, but if you do not do something to deal with the seasonal behaviour of your data, your estimates will reflect two effects: those coming from common seasonality and any other effect that is not seasonal and could be linked to your tourist arrivals. So, if you want a deeper understanding of the spillovers in tourist arrivals, beyond those contained in seasonal effects, you will have to deal with this seasonality in some way. One way is to estimate your VAR with and without seasonal adjustment of your data; that will help you see how the seasonal patterns affect your results. Another avenue would be to include seasonal dummies as exogenous variables in your VAR. I would suggest following all three approaches to get a more comprehensive view of what is going on.
2. Taking 12-month seasonal differences is one of the simplest and most intuitive ways to try to remove the seasonal pattern in your data. Sometimes it helps a lot, sometimes it does not. Let me explain one of the problems with this approach. Most of our time series can be thought of as the sum of at least two components: a seasonal component s(t) and a non-seasonal component x(t). So let's say that your series looks like this: y(t) = x(t) + s(t). Once you take the 12-month difference you have y(t) - y(t-12) = x(t) - x(t-12) + s(t) - s(t-12). The good thing is that you have now removed the seasonal component from s(t) - s(t-12); nevertheless, the cost you pay is to GENERATE a previously nonexistent seasonal component in the term x(t) - x(t-12). So, you try to solve a problem only to generate another one.
I agree with the previous response, this is probably one of the reasons why the airline model does a good job when forecasting series in 12 month differences.
Hope it helps!
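To illustrate the seasonal-dummy route mentioned above, here is a minimal Python sketch with statsmodels; the file name, the country column names A, B, C and the lag limit are illustrative assumptions:

# Hedged sketch: VAR on monthly arrivals with monthly dummies as exogenous regressors.
import pandas as pd
from statsmodels.tsa.api import VAR

arrivals = pd.read_csv("arrivals.csv", index_col=0, parse_dates=True)   # columns A, B, C
dummies = pd.get_dummies(arrivals.index.month, prefix="m", drop_first=True).astype(float)
dummies.index = arrivals.index

res = VAR(arrivals, exog=dummies).fit(maxlags=12, ic="aic")
print(res.summary())
# Spill-over check, e.g. does country B help predict country A?
print(res.test_causality("A", ["B"], kind="f"))

Re-running the same model without the dummies, or on seasonally adjusted data, then shows how much of the apparent causality is driven by common seasonality.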
  • asked a question related to Time Series Econometrics
Question
2 answers
Consider the long-run covariance matrix estimator for time series as proposed in Newey and West (1987). Ledoit and Wolf (2004) proposed an estimator for the covariance matrix (but not the long-run covariance matrix) based on shrinkage that is well-conditioned, and I would like a similar estimator for the long-run covariance matrix. Anyone aware of any papers addressing this?
Relevant answer
Answer
Analysis of mixed variance and variance using SPSS by Dr. Samir Khaled Safi on 7/7/2019
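Going back to the question itself: as a baseline, here is a minimal numpy sketch of the plain Newey-West (Bartlett-kernel) long-run covariance estimator; a Ledoit-Wolf-style shrinkage step would then be applied on top of this matrix, and how to choose the shrinkage intensity in the long-run case is exactly the open point in the question. Names are illustrative.

# Hedged sketch: Bartlett-kernel (Newey-West) long-run covariance of a T x k array.
import numpy as np

def long_run_cov(u, lags):
    u = np.asarray(u, dtype=float)
    u = u - u.mean(axis=0)               # demean each series
    T = u.shape[0]
    S = u.T @ u / T                       # lag-0 term
    for l in range(1, lags + 1):
        w = 1.0 - l / (lags + 1.0)        # Bartlett weight
        G = u[l:].T @ u[:-l] / T          # lag-l autocovariance
        S += w * (G + G.T)
    return S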
  • asked a question related to Time Series Econometrics
Question
4 answers
Hello dears,
In the regression modelling process, we sometimes have to categorize a continuous variable (DV or IVs). What are the potential problems inherent in such a transformation for:
  1. Estimation results
  2. Precision and accuracy
  3. Hypothesis tests...
Thank you so much for any response and clarification
Relevant answer
Answer
Hi,
Please see these attached links:
Best
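A small simulation makes the cost of categorizing a continuous predictor concrete. This sketch, with made-up illustrative data, compares a regression on the continuous variable with one on its median split:

# Hedged sketch: what a median split of a continuous predictor does to fit and power.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)

cont = sm.OLS(y, sm.add_constant(x)).fit()
xd = (x > np.median(x)).astype(float)            # categorized version of x
binned = sm.OLS(y, sm.add_constant(xd)).fit()

print(cont.rsquared, binned.rsquared)            # R-squared drops after the split
print(cont.tvalues[1], binned.tvalues[1])        # so does the t-statistic

The binned fit typically loses explanatory power and the t-statistic shrinks, which is the loss of precision and of test power the question asks about.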
  • asked a question related to Time Series Econometrics
Question
2 answers
Metric Modeling and Representation of 4D Data through 3D Vector Tensoring
In representational, metric analytics, binary vectors are often used, classically drafted in two-dimensional (2D) grids. However, these do not reflect the complexity of a manifest, partly observable reality.
Tensors can be computed in 4D in order to be represented 3D-geometrically. This can be done by applying data to a three-dimensional grid containing dynamic, non-binary-graph, 3D vector objects that change, in different scenarios, along the time axis, generating multiple grids. Scenarios can be limited by applying probability weighting or by targeting specific results, by eliminating transitions and breaks, or else intermittent vector changes.
If one attempted to simplify the tensor representation by neglecting the shift representations in time, one would replace many 3D, non-binary-graph vector objects with fewer complex, but 4D-graphically representable, objects in time, resulting in vector folding. This is of benefit a posteriori, for typical aggregates. In practical applications, and in comparative settings, one would rather apply unfolded 3D objects in stage representations, adhering to 3D Vector Tensoring, or Plurigrafity.
Relevant answer
Answer
Dear Richard Collins,
Indeed, my discussion note is rather general. I am using a new term, 3D Vector Plurigrafity, in order to promote a new quality of graphic modeling based on statistical analysis. I am referring in my discussion note to the challenge of using form, color, and changes (or shifts) in vector graphics to portray complex phenomena as they evolve or, more specifically, as their analyses evolve. 3D is not a term I endorse for most anything plastic, since a 3D grid without the time factor, not to mention color, is already 3D, however static. 3D Vector Plurigrafity is a process, not an object.
If you conduct a search for my name, you will find a comment to one of my articles, on Brexit, saying that it is dated. In fact, the paper was a short synthesis paper, graded summa cum laude by University of Mainz, uniting much of the available research on the linkage between Brexit and migration before the popular EU referendum. And I changed my mind about the UK leaving the EU.
The Internet, contrary to your allusion, is not an absolute reflection of the world as a whole, and when it comes to the Brexit discussion, which has little to do with my discussion note, it is more a question of the National Health Service (NHS) and other sectors than of the online world.
I disagree with your judgement that ResearchGate is disorganized. Since you provided a factual comment, I am quite sure you are an academic capable of drawing conclusions about my discussion note, which you did. I grant you that. And I grant the ResearchGate algorithm that you were selected.
Composite words as in German are rare in English, and are often reduced to parts of their original elements, take 'Edutainment,' for instance. As long as a new term is defined, which I think is sufficiently the case with 'Plurigrafity,' or more fully 3D Vector Plurigrafity, given my note along with my comment, I do not see why, per se, it should not be a lasting word. Of course, you have the right, as everyone, to forget, Richard Collins. I thank you for your reply.
Sincerely
Thorsten Koch, MA, PgDip
  • asked a question related to Time Series Econometrics
Question
3 answers
Hi.
I am running two time-series regressions; both regressions contain 3 variables in common, along with several other variables.
Set 1: A B C D E F G
Set 2: A B C H I J K L M
I am computing a covariance matrix (looking at the correlation among variables) for each regression. However, since the 3 common variables are present in both regressions, do I still include these 3 variables in the covariance matrices for both regressions? I am not sure, because the correlation coefficients for the same common variables are different.
For instance,
Set 1: A-B = 0.5678
Set 2: A-B = -0.4892
Any help will be greatly appreciated.
Thank you!
Relevant answer
Answer
You need the correlation matrix to check the relationship between the response variable and the explanatory variables and to investigate multicollinearity among the explanatory variables. If your data are time series, I think you should consider an ARIMA model with exogenous variables.
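In Python, for example, the correlation check and an ARIMA model with exogenous variables (ARIMAX/SARIMAX) can be sketched as follows; the data frame df and the column names are illustrative assumptions:

# Hedged sketch: correlation matrix of the regressors, then an ARIMA with exogenous variables.
import statsmodels.api as sm

X = df[["A", "B", "C", "D"]]
print(X.corr())                                   # pairwise correlations / multicollinearity check

mod = sm.tsa.SARIMAX(df["y"], exog=X, order=(1, 0, 1))
res = mod.fit(disp=False)
print(res.summary())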
  • asked a question related to Time Series Econometrics
Question
6 answers
I estimated an ARIMA model on a daily gold price series. The residuals' correlogram is flat, but the correlogram of the squared residuals is not. I already ran the EViews heteroskedasticity (ARCH effect) test and found a p-value of 0.00, so there is heteroskedasticity. Can I continue with ARIMA? (The flat correlogram is for the residuals; the other is for the squared residuals.) (Is it permissible to continue with ARIMA in this case?)
Relevant answer
Answer
Dear Murat Tuna
If the ARCH test is significant (p-value = 0.000), that means a heteroskedasticity problem is present. You should examine the residual series: if periods of high volatility are followed by periods of low volatility and so on (volatility clustering), you should estimate a GARCH-family model.
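As an illustration of that two-step route, here is a minimal Python sketch that fits the ARIMA mean equation and then a GARCH(1,1) on its residuals with the arch package; the series name and the model orders are illustrative assumptions, not a recommendation for the gold data:

# Hedged sketch: ARIMA for the mean, GARCH(1,1) for the conditional variance of the residuals.
import statsmodels.api as sm
from arch import arch_model

arima = sm.tsa.ARIMA(gold, order=(1, 1, 1)).fit()
resid = arima.resid                                # these showed the ARCH effect

garch = arch_model(resid, mean="Zero", vol="GARCH", p=1, q=1)
garch_res = garch.fit(disp="off")
print(garch_res.summary())

Alternatively, EViews lets you estimate the ARMA mean and GARCH variance equations jointly, which is usually preferable to the two-step fit.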
  • asked a question related to Time Series Econometrics
Question
6 answers
I recently spoke to an econometrics professor in my department and he said that in certain cases you can ignore the time dimension in long-T panels (specifically, this referred to a probit model for working out the winners of tennis matches, but I would like this discussion to be more general).
He suggested that this was possible so long as:
1. I controlled for the time dimension by including "lags" of the dependent variable (obviously these are not lags in the usual "subscript t case" but rather cross-sectional variables that state whether player i won their previous game(s))
2. Use cluster-robust standard errors, to take into account the correlation in the residuals for each player.
I'd be interested to hear whether:
1. You agree with this approach; and
2. In what circumstances you think this is appropriate and when it is inappropriate.
Thanks in advance!
Relevant answer
Answer
When N > T, cross-sectional issues such as fixed effects and random effects become more important because the cross-sectional (between) variation in the data dominates the within (time-series) variation. Moreover, studies often average the data over five-year windows, so the time dimension becomes even less important.
Yes, it can be appropriate to ignore the time dimension in an unbalanced panel when the number of cross-sectional units is much greater than the number of time periods.
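A minimal sketch of the setup the professor describes, assuming a pandas data frame df with illustrative column names (win, won_last_match, ranking_gap, surface, player_id), is below; it is a pooled probit with a lagged-outcome regressor and standard errors clustered by player:

# Hedged sketch: pooled probit, controlling for recent form, with player-clustered SEs.
import statsmodels.formula.api as smf

mod = smf.probit("win ~ won_last_match + ranking_gap + surface", data=df)
res = mod.fit(cov_type="cluster", cov_kwds={"groups": df["player_id"]})
print(res.summary())

Whether this is adequate then comes down to whether the lagged-win controls really absorb the serial dependence within players; if they do not, the pooled estimates can still be biased even with clustered standard errors.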
  • asked a question related to Time Series Econometrics
Question
11 answers
I am using time series for my research.
Relevant answer
Answer
Hi, I have a related question on this topic. I am currently working on a time series analysis. In the first ADF test on the raw data I did not include any lags, and all the results indicated non-stationarity. After that I ran the ADF test on the first differences and they were stationary, but when I ran the test on the first differences with the lags suggested by the varsoc command, they were again non-stationary. Which is the right process? Thanks
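One way to make the lag choice less arbitrary is to let an information criterion pick it inside the ADF test itself, and to report the test both in levels and in first differences. A minimal Python sketch (x is an illustrative pandas Series):

# Hedged sketch: ADF test with AIC-selected lag length, on levels and on first differences.
from statsmodels.tsa.stattools import adfuller

stat, pval, usedlag, nobs, crit, icbest = adfuller(x, autolag="AIC")
print("levels:", stat, pval, "lags used:", usedlag)

stat_d, pval_d, usedlag_d, *_ = adfuller(x.diff().dropna(), autolag="AIC")
print("first difference:", stat_d, pval_d, "lags used:", usedlag_d)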
  • asked a question related to Time Series Econometrics
Question
7 answers
I am working on testing the Arbitrage Pricing Theory. So, I have 5 independent variables, or factors (oil price, inflation, industrial production, FX, stock index), that affect the dependent variable (stock return), and I have raw data on these factors for the last 7 years to be processed. I need to find the betas by regressing the stock return on these factors in statistical software (maybe Stata), but I have no idea how to start. Should I create my own .dta file to run the regression in Stata if I don't have a ready-to-use data file? Is that possible at all?
p/s: I am considering the US stock market, actually S&P500 index. Thanks a lot.
Relevant answer
Answer
From the data you have collected, these are time series data. Rather than just running standard statistical tests in Stata, SPSS, SAS, R, etc., I would strongly recommend that you perform econometric tests such as the unit root test, the Johansen cointegration test, Granger causality tests, and ordinary least squares regression. The most suitable software I can recommend is EViews.
Time series data are often non-stationary, so a unit root test should be performed first; if the series are non-stationary, they are usually made stationary by taking first differences (or, in rare cases, second differences). Note also that the data for oil price, inflation, industrial production, FX, the stock index and so on were collected from secondary sources and are measured in different units.
Use the EViews (Econometric Views) software for your analysis.
Best regards
Owan
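If you do end up working outside EViews, the time-series regression that delivers the factor betas is straightforward elsewhere too. A minimal Python sketch with illustrative file and column names, using Newey-West (HAC) standard errors to guard against serial correlation:

# Hedged sketch: APT factor betas from a time-series OLS regression with HAC standard errors.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("apt_data.csv", index_col=0, parse_dates=True)
X = sm.add_constant(df[["oil", "inflation", "ind_prod", "fx", "sp500"]])
res = sm.OLS(df["stock_return"], X).fit(cov_type="HAC", cov_kwds={"maxlags": 6})
print(res.params)        # the estimated betas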
  • asked a question related to Time Series Econometrics
Question
10 answers
I'm testing the Fama & French three-factor model on the Italian stock market. After running my 16 time-series regressions I applied the usual diagnostic tests on the error terms, and what I see is that the errors are not normally distributed. I checked for an ARCH effect, but there is none. What can I do (I can't use dummy variables in this context), taking into account that each of the 16 time series is made up of 96 monthly observations?
Relevant answer
Answer
Normality is assumed for convenience in linear regression models, but it is not necessary.
The Gauss-Markov Theorem, which is a sufficient condition for OLS to be BLUE, does not assume normality.
Kruskal's Theorem, which is a necessary and sufficient condition for OLS to be BLUE, does not rely on normality.
If one is concerned about non-normality, the Box-Cox transformation can be used as a filter to transform both non-normal and heteroskedastic errors.
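A minimal sketch of the Box-Cox step in Python is below; note that the transformation requires a strictly positive variable, so a return series may need shifting or another remedy before it can be applied. The series name is illustrative.

# Hedged sketch: Box-Cox transformation as a normalizing / variance-stabilizing filter.
from scipy import stats

y_bc, lam = stats.boxcox(y)        # y must be strictly positive
print("estimated lambda:", lam)
# refit the time-series regressions with y_bc as the dependent variable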
  • asked a question related to Time Series Econometrics
Question
4 answers
Suppose I have a regression model:
gdp = a + b·x + c·(z/gdp) + e
Here x is a vector of explanatory variables and z/gdp is one of my control variables. In this regression equation GDP appears on both sides, and my data are time series. There is an obvious issue in estimating this equation. How can I rectify it for estimation? Can I use the one-period lag of z/gdp to overcome this issue?
Relevant answer
Answer
I don't think your model makes sense. Why is z/gdp in the model instead of just z? If you do a little algebra on your model, the actual model has a function of gdp on the left-hand side. I don't know what function, because you give no information on your variables. If you only say your equation is z = ax + by, nobody can really help, because there is no real information. David Booth
  • asked a question related to Time Series Econometrics
Question
3 answers
Could someone help me understand the quasi-maximum likelihood parameter estimation method for a univariate GARCH model?
If you have a reference on it, please share the link/PDF. It would help me.
Thanks :)
Relevant answer
Answer
Thank you for the references, Mariel Gullian Klanian. I have finally finished the research project for my undergraduate thesis.
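For later readers with the same question, here is a minimal sketch of Gaussian QMLE for a GARCH(1,1): the normal log-likelihood is maximized even if the true errors are not normal, and in practice the point estimates would be paired with robust (Bollerslev-Wooldridge) standard errors. Here r is an illustrative demeaned return series (numpy array).

# Hedged sketch: quasi-maximum likelihood for GARCH(1,1) via the Gaussian log-likelihood.
import numpy as np
from scipy.optimize import minimize

def neg_qll(params, r):
    omega, alpha, beta = params
    T = r.shape[0]
    h = np.empty(T)
    h[0] = r.var()                                  # start the variance recursion
    for t in range(1, T):
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi) + np.log(h) + r ** 2 / h)

x0 = np.array([0.1 * np.var(r), 0.05, 0.90])
bounds = [(1e-8, None), (0.0, 1.0), (0.0, 1.0)]
opt = minimize(neg_qll, x0, args=(r,), bounds=bounds, method="L-BFGS-B")
print("omega, alpha, beta:", opt.x)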
  • asked a question related to Time Series Econometrics
Question
10 answers
I want to use the GMM technique to estimate the parameters of the Fama-French three-factor model. I don't have a Stata license, so how can I do it in EViews? I saw GMM among the available estimation methods, but I don't understand what to write in the "instruments" field. Has anyone used EViews to apply the GMM estimation technique? Thank you in advance.
Relevant answer
Answer
You might try R. Tutorial here: https://www.youtube.com/watch?v=dc1L3zjsgiY
R, of course, is free.
  • asked a question related to Time Series Econometrics
Question
4 answers
I am working on a project that uses an ECM to inspect the short-run dynamics of money supply (m(t)) with respect to loans (l(t)), since both variables are I(1). Excluding the error correction term, is it appropriate to aggregate the coefficients on l(t) to obtain its overall short-run effect? That is, for a·l(t-2) + b·l(t-1) + c·l(t), I obtain the total short-run coefficient as a + b + c. I am not sure whether this is an appropriate way to proceed.
Please, I really need technical assistance from experts here.
Regards,
Eli
Relevant answer
Answer
Guechari Yasmina , Anton Rainer and Salman Haider thank you all, for your valuable feedback. I will try and apply each recommendation.
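For later readers: rather than just adding the short-run coefficients by hand, the sum and its standard error can be obtained with a Wald/t test on the fitted ECM. A minimal statsmodels sketch with illustrative variable names (d_m and d_l for the differenced series, d_l_lag1 etc. for their lags, ect_lag1 for the error-correction term):

# Hedged sketch: test the total short-run effect (sum of the lag coefficients) in an ECM.
import statsmodels.formula.api as smf

ecm = smf.ols("d_m ~ d_l + d_l_lag1 + d_l_lag2 + ect_lag1", data=df).fit()
print(ecm.t_test("d_l + d_l_lag1 + d_l_lag2 = 0"))   # total short-run effect and its s.e.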
  • asked a question related to Time Series Econometrics
Question
1 answer
Hi,
When the DCC-GARCH is estimated in Stata, pairwise quasi-correlations are reported at the end of the output. What do they mean in practice? Are they the mean values of the dynamic correlations, or something else?
Much appreciated if anybody could clarify this.
Kind regards
Thushara
Relevant answer
  • asked a question related to Time Series Econometrics
Question
8 answers
I am estimating the relationships between Economic Growth (GDP), Public Debt and Private Debt through a PVAR model in which my panel data consists of 20 countries across 22 years.
First of all, how can I know what is the optimal lag length I should be using for such an analysis?
Then, using the IPS test for stationarity, two of my variables turn out to have unit roots in the panels, and that is why I am thinking of taking first differences of both of them to use in the big PVAR model. Do you think this is the correct way to proceed?
Given that, does it mean I should be worrying about cointegration? If yes, what do you advise me to do?
Finally, when analyzing my estimations after different changes, I cannot compare the models since there is no R-squared. Do you think there is a specific test I can run to compare models?
Thanks a lot!
Relevant answer
Answer
Dear Ahmed, for your optimal lag length, you should first run a PVAR and from there go to the lag structure to see the optimal lag length chosen by the information criteria: AIC, SIC/BIC and HQIC. But I would suggest that you set a maximum lag length within which any of these information criteria can choose the optimal lag automatically.
For your panel unit root test, there are various procedures that come together when you run the test. The LLC, IPS, Fisher-ADF, PP tests. If there is a mixed result of unit root test among those procedures, you can go with the outcome of the majority.
If your variables have unit root or they are non-stationary and require differencing, then you need to run a panel cointegration test to find out if long-run relations exist. The procedure for cointegration test is the Johansen-Fisher combined test and the Kao residual-based test.
For model comparison between fixed and random effects, you may have to run the Hausman model selection test.
I hope this helps...
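As an illustration of the information-criterion step, here is a minimal statsmodels sketch of lag-order selection for a single country's VAR; a panel-VAR routine (e.g. the user-written pvar command in Stata) applies the same logic to the stacked panel. Column names are illustrative.

# Hedged sketch: lag-order selection by AIC / BIC / HQIC / FPE for one country's VAR.
from statsmodels.tsa.api import VAR

sel = VAR(country_df[["gdp_growth", "public_debt", "private_debt"]]).select_order(maxlags=4)
print(sel.summary())          # information criteria by lag length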
  • asked a question related to Time Series Econometrics
Question
5 answers
I know that the series is stationary and has the long-memory property if the fractional parameter d is between 0 and 0.5. But is that criterion for stationarity, or for testing that the series has long memory? Moreover, if we estimate d by the GPH test and the estimate is d = 0.20, can we say that the series has long memory? I just have doubts about whether the estimated value of d can be used for stationarity and hypothesis testing as well.
Relevant answer
Answer
In the case of the GPH test, H0: d = 0 (short memory).
If 0 < d < 0.5, it indicates long memory, and the series is still stationary.
So d = 0.2 implies long memory and stationarity.
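For completeness, here is a minimal numpy sketch of the GPH log-periodogram regression that produces such an estimate of d; the bandwidth choice m = floor(sqrt(T)) is a common convention, and x is an illustrative series:

# Hedged sketch: GPH (log-periodogram) estimate of the fractional differencing parameter d.
import numpy as np

def gph_d(x, m=None):
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    T = x.shape[0]
    if m is None:
        m = int(np.floor(np.sqrt(T)))                           # number of low frequencies
    lam = 2 * np.pi * np.arange(1, m + 1) / T                   # Fourier frequencies
    I = np.abs(np.fft.fft(x)[1:m + 1]) ** 2 / (2 * np.pi * T)   # periodogram ordinates
    reg = -np.log(4 * np.sin(lam / 2) ** 2)                     # GPH regressor
    reg_c = reg - reg.mean()
    return np.sum(reg_c * np.log(I)) / np.sum(reg_c ** 2)       # OLS slope = d_hat

The estimate can then be compared against 0 (short memory) and 0.5 (the stationarity boundary), in line with the answer above.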
  • asked a question related to Time Series Econometrics