Econometrics - Science topic
Econometrics has been defined as "the application of mathematics and statistical methods to economic data" and described as the branch of economics "that aims to give empirical content to economic relations."
Questions related to Econometrics
- Extension education plays a crucial role in disseminating knowledge and promoting sustainable development in various fields. To enhance the effectiveness and impact of extension programs, the integration of econometric approaches is gaining importance. In this article, we will explore how econometric analysis can be leveraged in extension education to inform evidence-based outreach strategies and facilitate rigorous program evaluation. By harnessing data and statistical techniques, extension professionals can gain valuable insights into the factors influencing outreach outcomes and make informed decisions to improve program effectiveness.
Hey there! I would like to conduct an event study using the buy-and-hold approach, and that website was suggested to me. It covers long-run event studies, but I could not find them on the ARC interface, and I was wondering whether the feature is there and I am just missing it. Thank you very much!
In 2007 I did an Internet search for others using cutoff sampling, and found a number of examples, noted at the first link below. However, it was not clear that many used regressor data to estimate model-based variance. Even if a cutoff sample has nearly complete 'coverage' for a given attribute, it is best to estimate the remainder and have some measure of accuracy. Coverage could change. (Some definitions are found at the second link.)
Please provide any examples of work in this area that may be of interest to researchers.
We know that correlation and cointegration are two different things. The question I want to put here and share with you is whether this remains true when we consider the wavelet framework.
Most empirical papers use a single econometric method to demonstrate a relationship between two variables. For robustness, is it not safer to draw conclusions from a variety of methods (cointegration, IV, models with thresholds, wavelets)?
Hey there. I am trying to walk through this concept in my head but unfortunately do not have the name for the type of model I am attempting to create. Let me describe the data:
1) One dependent variable on an hourly interval
2) Multiple exogenous "shock" variables that have positive effects on the dependent variable.
3) The dependent variable has 0 effect on the exogenous shock variables.
The dependent variable can be modeled as a function of its own lags and the exogenous shock variables.
I would like to model the variable with an ARIMA model with exogenous variables that have an immediate impact at time T and lagging effects for a short period of time. (This is similar to a VAR's IRFs, except that the exogenous variables are independent of the dependent variable.)
The assumption is that without the exogenous shock variables, there is an underlying behavior of the data series. It is this underlying behavior that I would like to capture with the ARIMA. The exogenous shock variables are essentially subtracted from the series in order to predict what the series would look like without exogenous interference.
The problem:
I am worried that the ARIMA will use information from the "exogenous" shocks within the dependent series when estimating the AR and MA terms. That would mean a positive bias in those terms. For example: if an exogenous shock is estimated to produce a 100-unit increase in the dependent variable, then this 100-unit increase should NOT affect the estimation of the AR or MA terms, since it is considered unrelated to the underlying behavior of the dependent variable.
I've attempted to write this out mathematically as well.
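For readers of this thread: what is described sounds like a regression with ARIMA errors (often called ARIMAX or a transfer-function model), in which the shock variables enter as regressors so the AR/MA terms are fitted to the shock-free remainder. A minimal sketch in R with the forecast package, on simulated stand-in data (all names and magnitudes are illustrative assumptions):

library(forecast)

set.seed(1)
n     <- 500
shock <- rbinom(n, 1, 0.05) * 100                 # simulated exogenous shock, ~100-unit jump
y     <- 10 + as.numeric(arima.sim(list(ar = 0.6), n)) + shock

# Contemporaneous and one-lag shock effects enter via xreg, so the AR/MA
# terms are estimated on the series net of the shock contribution --
# which is exactly the contamination concern raised above.
X   <- cbind(shock = shock, shock_lag1 = c(0, head(shock, -1)))
fit <- Arima(y, order = c(1, 0, 0), xreg = X)
summary(fit)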
I need a non-linear causality test or model to use with panel data, in R, Stata, or Python.
Dear Colleagues,
QQ regression is perhaps one of the latest econometric estimation approaches. If you have expertise, could you please help me with useful information on how to perform QQ regression using R or Stata?
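For what it is worth, quantile-on-quantile (QQ) regression in the spirit of Sim and Zhou (2015) can be pieced together in R from the quantreg package, since it amounts to kernel-weighted quantile regressions around each quantile of the regressor. A rough sketch under that reading (the bandwidth, grids, and data are illustrative assumptions, not a vetted implementation):

# Rough QQ-regression sketch: for each quantile theta of x, run a
# kernel-weighted quantile regression of y on x at quantile tau,
# weighting observations near the theta-quantile of x.
library(quantreg)

qq_regression <- function(y, x, taus = seq(0.1, 0.9, 0.1),
                          thetas = seq(0.1, 0.9, 0.1), h = 0.05) {
  Fx  <- rank(x) / length(x)                     # empirical CDF of x
  out <- matrix(NA, length(taus), length(thetas),
                dimnames = list(taus, thetas))
  for (j in seq_along(thetas)) {
    w <- dnorm((Fx - thetas[j]) / h)             # Gaussian kernel weights
    for (i in seq_along(taus)) {
      fit <- rq(y ~ x, tau = taus[i], weights = w)
      out[i, j] <- coef(fit)["x"]                # slope at (tau, theta)
    }
  }
  out
}

# Example with simulated data:
set.seed(42)
x <- rnorm(300); y <- 0.5 * x + rnorm(300)
qq_regression(y, x)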
Dear Scholars, I am measuring the effect of a policy on firms' performance. I found a common (in both treatment and control groups) structural break 4 years before the policy intervention. I used a difference-in-differences model to estimate the impact of the policy, with 20 years of firm-level panel data. What are your suggestions?
Hi, data scientists. I am an aspiring data scientist in the field of econometrics. I need the book Using Python for Introductory Econometrics; if anyone would care to share it, I would be grateful. Thanks
Hello, I am looking for an econometric study examining the relationship of the Financial Inclusion Index to post-office-related variables. Can you help me?
I ran the ARDL approach in EViews 9, and it turns out that the independent variable has a small coefficient, but it appears as zero in the long-run equation shown in the table, despite having a very small (nonzero) value under ordinary least squares. How can I display this small value clearly?

Using EViews 9, I ran the ARDL test, which reports one R-squared value in the initial ARDL model output and another R-squared value under the bounds test. What is the difference between these two R-squared values?
Hi! I would like an opinion on something, rather than a straight-out answer, so to speak. In time-series econometrics, it is common to present both the long-run coefficients from the cointegrating equation and the short-run coefficients from the error correction model. Since I have a lot of specifications, and since I am really only interested in the long run, I present only the long-run coefficients from the cointegrating equation in a paper I am writing. Would you say that is defensible? I am using the Phillips-Ouliaris single-equation approach to cointegration.
I'm implementing a pairs trading strategy.
Consider the linear model:
ln(wage) = a + b*D + u
where D is a binary variable (say, female); the textbook says that when D changes from 0 to 1, then b (equal to 0.067) has to be interpreted as follows:
b x 100 = 6.7 percentage points of change in wage
However, I would say in this case:
6.7 percent change in wage (or a change in wage of 6.7%)
Am I right, or is the textbook right? Many thanks.
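For context on this thread: the coefficient on a dummy in a log-wage equation is a change in log points; the exact implied percent change is 100*(exp(b) - 1), which is close to 100*b only when b is small. A quick check in R:

# A dummy coefficient b in a log-wage equation is a change in log points;
# the exact implied percent change is 100 * (exp(b) - 1).
b <- 0.067
100 * b              # approximation: 6.7 percent
100 * (exp(b) - 1)   # exact: about 6.93 percent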
Hi Dear All,
I have a dataset of one country from 2010 to 2021 and the dataset doesn't have regions of the country.
How can we test for an endogeneity problem in cross-sectional data? Are there any tests? I don't know much about econometrics.
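One standard cross-sectional answer is the Durbin-Wu-Hausman test, which requires at least one instrument for the suspect regressor. A minimal sketch in R with the AER package (the variables and the instrument are simulated stand-ins):

# Durbin-Wu-Hausman sketch: y, x, and the instrument z are simulated.
library(AER)

set.seed(1)
z <- rnorm(200)                      # instrument
u <- rnorm(200)                      # structural error
x <- z + u + rnorm(200)              # x is endogenous: correlated with u
y <- 1 + 2 * x + u

fit <- ivreg(y ~ x | z)
summary(fit, diagnostics = TRUE)     # reports the Wu-Hausman endogeneity test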
For a beginner who wants to try his/her hand at econometric calculations such as vector autoregression (VAR) functions, generalised additive models, spline interpolation, the ARDL test, etc., which of the following tools/software would you recommend?
1. Eviews
2. Gretl
3. Stata
4. R
5. Matlab
Dear colleagues:
I would be so grateful if you could provide me with an answer or a solution to my problem.
I am using an ARDL model, and all the pre- and post-estimation tests are fine and satisfy the model's assumptions. However, the error correction term is positive and significant, while according to econometric theory it has to be negative, between 0 and -1. What are your recommendations?
Thank you in advance
Sorry for the layman question, but I am currently taking an introductory course in econometrics at my school. In an exercise I am working on, I have specified the following linear model in R: Averagehourlyincome = B0 + B1*age + B2*female + B3*bachelor + B4*age*female
where female and bachelor are dummy variables (1 if female, 0 if male; 1 if holding a bachelor's degree, 0 if not). One of the questions asks that, after defining the model and interpreting the coefficients, I test the differences between males and females in this model. To be frank, I don't really know how to proceed; I would be very thankful if someone could tell me what I need to do.
Thanks in advance,
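For readers of this thread: testing male/female differences in such a model is a joint F-test that every coefficient involving the female dummy is zero (here B2 = B4 = 0). A sketch in R on simulated stand-in data, using the car package:

library(car)

set.seed(1)
n  <- 200
df <- data.frame(age      = rnorm(n, 35, 8),    # simulated stand-in data
                 female   = rbinom(n, 1, 0.5),
                 bachelor = rbinom(n, 1, 0.4))
df$income <- 10 + 0.3 * df$age + 2 * df$female + 3 * df$bachelor -
             0.1 * df$age * df$female + rnorm(n)

fit <- lm(income ~ age + female + bachelor + age:female, data = df)
# Joint test that the female dummy makes no difference: H0: B2 = B4 = 0.
linearHypothesis(fit, c("female = 0", "age:female = 0"))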
Hi,
Could anyone suggest material (PPT, PDF) on applied econometrics with R (intro level)?
Thanks in advance.
I am using two-step system GMM to estimate the impact of two business-related independent variables on the governance of an economy.
Dataset size: 8 years of data for 71 countries.
I am not using any control variables in the main models, but I report 20 different results for different types of countries using the independent variables and their one-year lags. For example, for the full dataset I checked the impact of x1, x2, L1.x1, and L1.x2; then I ran similar regressions only for the developed countries, and so on.
If I find fairly consistent results, and discuss the impact of the different control variables only in additional analyses, would that suffice?
(Prior literature used 2-4 control variables, but only with OLS. Many of those papers were published in ABDC A/B-ranked journals.)
Thanks for your cooperation.
Where can I find the heterogeneity of a panel data model in EViews?
I'm studying this issue and would like to model it econometrically, and moreover to model it with various energy models.
I have converted a decommissioned PowerEdge R610 server for use in computational econometrics. Any advice for keeping it cool?
I am using an ARDL model; however, I am having some difficulty interpreting the results. I found that there is cointegration in the long run. I have provided pictures below.
I am using a panel dataset (N=73; T=9). Dataset Timeframe: 2010-2018
In the GMM estimate on the total dataset, the AR(1) and AR(2) values are fine.
But to investigate the impact of the European crisis, I had to split the data (the 5 years during and immediately after the crisis, and the subsequent 4 years). When GMM is run on the second subset (2015-2018), in one of the models the AR(1) and AR(2) values were not generated.
Is the result still usable? What are the potential problems of using this specific result?
I ran PCA on some variables that were Likert-scaled questions (1-5). After predicting, I get negative values for some observations. Is something wrong? Should I take the absolute values of the predicted variable?
Note: the eigenvalues were positive.
The values of the new (predicted) variable are given in the attachments.
The Stata output is as follows:
pca v1 v2 v3 v4
Principal components/correlation          Number of obs   =      302
                                          Number of comp. =        4
                                          Trace           =        4
Rotation: (unrotated = principal)         Rho             =   1.0000

--------------------------------------------------------------------
   Component |   Eigenvalue   Difference   Proportion   Cumulative
-------------+------------------------------------------------------
       Comp1 |      1.50368       .51615       0.3759       0.3759
       Comp2 |      .987535       .16249       0.2469       0.6228
       Comp3 |      .825045      .141311       0.2063       0.8291
       Comp4 |      .683735            .       0.1709       1.0000
--------------------------------------------------------------------

Principal components (eigenvectors)

------------------------------------------------------------------
    Variable |    Comp1     Comp2     Comp3     Comp4 | Unexplained
-------------+----------------------------------------+------------
          v1 |   0.6017    0.0522   -0.3627   -0.7097 |           0
          v2 |   0.5426   -0.4256   -0.3741    0.6200 |           0
          v3 |   0.5034   -0.1359    0.8531   -0.0192 |           0
          v4 |   0.3001    0.8931   -0.0273    0.3340 |           0
------------------------------------------------------------------
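On the negative predicted values in this thread: principal-component scores are linear combinations of centered variables, so they are mean-zero by construction, and negative scores are expected rather than a sign of error (taking absolute values would destroy that information). A quick illustration in R with simulated Likert-type data:

# PC scores are computed from centered data, so they straddle zero.
set.seed(1)
likert <- replicate(4, sample(1:5, 302, replace = TRUE))  # simulated 1-5 items
pc <- prcomp(likert, center = TRUE, scale. = TRUE)

colMeans(pc$x)      # essentially 0 for every component
range(pc$x[, 1])    # first-component scores are both negative and positive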

Economics has treated econometrics as a universal solvent: a technique that can be applied to any economic question, that is sufficient, and that therefore makes other applied techniques redundant.
Peter Swann, in his book, examines the place of econometrics, argues against this notion, and even regards it as a severe error. He advises fellow economists to learn to respect and assimilate what he calls vernacular knowledge of the economy. His top message to economists is what the great French composer, Paul Dukas, advised his pupil: “Listen to the birds, they are great masters.” If a fellow economist asks, “don't most economists do this already?”, then Swann's answer is clear: “… some economists do use vernacular knowledge some of the time to underpin what they do … incidentally to make a piece of high technique more approachable … outside this limited context, economists do not tend to take the vernacular seriously."
Any argument for or against it?
Dear all, I estimated my VAR and tested its stationarity through the inverse roots; it is stationary.
Also, I checked the autocorrelation of the residuals up to the 7th lag (I am running my VAR with 6 lags), and there is no autocorrelation at the sixth lag. (There is autocorrelation at the 2nd and 5th lags; is that also a problem? As far as I have checked, only the 6th lag matters.)
But my residuals are not normal. Can I apply the CLT and say that they are asymptotically normal?
Also, my residuals are heteroskedastic. Can I continue with this model? What are my limitations?
Thanks in advance!
What is the most efficient method of measuring the impact of the volatility of one variable on another variable?
In the book Mastering 'Metrics, Joshua D. Angrist and Jörn-Steffen Pischke, in an example of the causal effect of having health insurance or not on health levels, describe: "... This in turn leads to a simple but important conclusion about the difference in average health by insurance status:
Difference in group means = Avg_n[Yi|Di=1] - Avg_n[Yi|Di=0]
= Avg_n[Y1i|Di=1] - Avg_n[Y0i|Di=0],  (1.2)"
Next, they show that the difference in group means differs from the causal effect: "The constant-effects assumption allows us to write:
Y1i = Y0i + k,  (1.3)
or, equivalently, Y1i - Y0i = k. In other words, k is both the individual and average causal effect of insurance on health. Using the constant-effects model (equation (1.3)) to substitute for
Avg_n[Y1i|Di = 1] in equation (1.2), we have:
Avg_n[Y1i|Di = 1] - Avg_n[Y0i|Di = 0]
= {k + Avg_n[Y0i|Di = 1]} - Avg_n[Y0i|Di = 0]
= k + {Avg_n[Y0i|Di = 1] - Avg_n[Y0i|Di = 0]},"
From this they obtain: "Difference in group means = Average causal effect + Selection bias."
I think there are some confusing aspects of this process. The difference between the difference in group means and the ATE should be described in more detail as:
Difference in group means
= Avg_n[Yi|Di=1] - Avg_n[Yi|Di=0] = Avg_n[Y1i|Di=1] - Avg_n[Y0i|Di=0]
= Avg_n[Y1i-Y0i] + Avg_n[Y0i|Di=1] - Avg_n[Y0i|Di=0] + {1-Pr[Di=1]} * {Avg_n[Y1i-Y0i|Di=1] - Avg_n[Y1i-Y0i|Di=0]},
where Avg_n[Y1i-Y0i] is the ATE;
or,
= Avg_n[Y1i-Y0i|Di=1] + Avg_n[Y0i|Di=1] - Avg_n[Y0i|Di=0],
where Avg_n[Y1i-Y0i|Di=1] is the ATT;
or,
= Avg_n[Y1i-Y0i|Di=0] + Avg_n[Y1i|Di=1] - Avg_n[Y1i|Di=0],
where Avg_n[Y1i-Y0i|Di=0] is the ATC.
Under the constant-effects assumption (Y1i-Y0i = k), Avg_n[Y1i-Y0i] = Avg_n[Y1i-Y0i|Di=1] = Avg_n[Y1i-Y0i|Di=0] = k.
So,
Difference in group means
= Avg_n[Y1i-Y0i] + Avg_n[Y0i|Di=1] - Avg_n[Y0i|Di=0] + 0
= k + Avg_n[Y0i|Di=1] - Avg_n[Y0i|Di=0];
or,
= Avg_n[Y1i-Y0i|Di=1] + Avg_n[Y0i|Di=1] - Avg_n[Y0i|Di=0]
= k + Avg_n[Y0i|Di=1] - Avg_n[Y0i|Di=0];
or,
= Avg_n[Y1i-Y0i|Di=0] + Avg_n[Y1i|Di=1] - Avg_n[Y1i|Di=0]
= k + Avg_n[Y1i|Di=1] - Avg_n[Y1i|Di=0].
I am not sure if my derivation process above is correct.
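One way to check the algebra above is numerically: the first decomposition (difference in means = ATE + selection bias + {1-Pr[Di=1]} * (ATT - ATC)) holds as an exact identity in sample averages, which a short simulation of potential outcomes confirms (all data simulated):

# Numerical check: Diff-in-means = ATE + selection bias
#                                  + (1 - Pr[D=1]) * (ATT - ATC).
set.seed(1)
n  <- 1e5
d  <- rbinom(n, 1, 0.4)
y0 <- rnorm(n, mean = 1 + 0.5 * d)   # selection bias: Y0 differs by group
y1 <- y0 + 2 + 0.3 * d               # heterogeneous effect: ATT != ATC
y  <- ifelse(d == 1, y1, y0)

diff_means <- mean(y[d == 1]) - mean(y[d == 0])
ate  <- mean(y1 - y0)
att  <- mean(y1[d == 1] - y0[d == 1])
atc  <- mean(y1[d == 0] - y0[d == 0])
bias <- mean(y0[d == 1]) - mean(y0[d == 0])

diff_means
ate + bias + (1 - mean(d)) * (att - atc)   # identical, as the algebra says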
At the moment, I have one explanation in mind. If labor productivity is high, firms may focus more on reinvestment prospects instead of repaying loan installments. But I cannot find any prior literature in support of this claim. I would appreciate it if you could provide any other explanations or refer me to any relevant literature.
An econometric regression of the relationship between the daily income and daily expenditure of a market vendor in the Lae vegetable market.
Hey folks,
New Year's Greetings!
I have 2019 undergraduate entrance exam data (7,891 observations), and I am interested in using regression methods to study relationships among the available variables. I have data on the following:
math score, English score, average score (i.e., (math + English)/2), age, sex, nationality, college program, marital status, school type, and college major.
This is what I started doing: first, a multiple linear regression, taking English score as Y, math score and age as the main X variables, and all the other categorical variables as controls.
I got some interesting results from the MLR approach, but I am afraid that the math and English scores may be jointly determined, which could make my analysis spurious.
Second, I am thinking about using quantile regression, but I am not so sure about this approach.
Finally, I am contemplating whether a SUR model would be applicable to a situation like this.
Please let me know which methods will be feasible to explore.
Thank you for taking the time to read this.
All the best,
JG
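On the SUR idea floated above: if the math and English scores are thought to share unobserved determinants, they can be modeled as a system of equations with correlated errors. A minimal sketch in R with the systemfit package (all data simulated as stand-ins):

# SUR sketch: math and English scores as a two-equation system.
library(systemfit)

set.seed(1)
n  <- 500
df <- data.frame(age = rnorm(n, 19, 1.5),
                 female = rbinom(n, 1, 0.5))
common <- rnorm(n)                               # shared unobserved ability
df$math    <- 50 + 2.0 * df$age + 5 * common + rnorm(n)
df$english <- 48 + 1.5 * df$age + 3 * df$female + 4 * common + rnorm(n)

eqs <- list(math    = math ~ age + female,
            english = english ~ age + female)
fit <- systemfit(eqs, method = "SUR", data = df)
summary(fit)    # cross-equation error correlation motivates SUR over OLS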
Econometric analysis is a common way to answer economic research questions, and various software packages such as Stata, EViews, R, and Python are used for this purpose. Which sources would you suggest to economics students for learning econometrics with software?
Your answer could help many students who want to start learning, so I would appreciate it if you shared your experience or shared this question with other researchers and teachers.
There are many statistical software packages that researchers use in their work. In your opinion, which one is best? Which one would you suggest for starting out?
Your opinions and experience can help others, particularly younger researchers, in making a selection.
Sincerely
The literature regarding the impact of oil prices on oil and gas company stocks mostly uses a multi-factor asset pricing model that includes the interest rate, a market index rate, and the oil price (Sadorsky, 2001; Mohanty and Nandha, 2011; Cardoso, 2014; Boyer and Fillion, 2007).
The literature on the impact of oil prices on an index mostly uses a VAR model (Abhyankar and Wang, 2008; Papapetrou, 2001; Park and Ratti, 2008).
More rarely, a VECM is used (Lanza et al., 2004; Hammoudeh et al., 2004).
I have once seen a regression calculation but cannot find the explanation for it (Gupta, 2016).
Because the renewable investments of individual companies are only available through news events or annual reports, I am unsure which method to follow.
Is using a VAR model for monthly Oil returns, monthly O&G company index returns, and a dummy variable for renewable investment events in the news a possibility?
I would also be open to splitting my question into 3 sub-questions if this makes it easier to apply methodology.
How does oil price impact UK and US market index?
How does oil price impact the stock price of public oil and gas companies?
How does renewable investment impact the stock price of oil and gas majors?
I would also like to ask whether a correlation coefficient or a simple regression analysis can be used to establish a relationship.
Any help and advice would be highly appreciated.
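On the VAR-with-event-dummy idea raised above: in R's vars package, an event dummy can enter as an exogenous regressor through the exogen argument. A minimal sketch on simulated monthly returns (all names and the event frequency are placeholders):

# VAR on oil and O&G index returns with a renewable-investment news dummy
# entering as an exogenous regressor (all series simulated).
library(vars)

set.seed(1)
n <- 240                                   # 20 years of monthly data
returns <- data.frame(oil = rnorm(n), og_index = rnorm(n))
news <- matrix(rbinom(n, 1, 0.1), ncol = 1,
               dimnames = list(NULL, "renew_news"))

fit <- VAR(returns, p = 2, type = "const", exogen = news)
summary(fit)
causality(fit, cause = "oil")              # Granger-type test within the VAR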
Dear All,
I’m conducting an event study for the yearly inclusion and exclusion of some stocks (from different industry sectors) in an index.
I need to calculate the abnormal return per each stock upon inclusion or exclusion from the index.
I have some questions:
1- How do I decide on the length of the backward window to use for the estimation window, and how do I justify it?
2- Stock return is calculated by:
(price today – price yesterday)/(price yesterday)
OR
LN(price today/price yesterday)?
I see both ways are used, although they give different results.
Can any of them be used to calculate CAR?
3- When calculating the abnormal return as the difference between the stock return and a benchmark (market) return, should the benchmark return be the index itself (to which the stocks are added or from which they are removed), or the sector index related to the stock?
Appreciate your advice with justification.
Many thanks in advance.
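On question 2 above: both return definitions are in common use; they nearly coincide for small moves and diverge for large ones, so one practical point is to apply the same definition consistently to the stock and the benchmark when building CARs. A quick comparison in R on an illustrative price path:

# Simple vs log returns on an illustrative price path.
p      <- c(100, 101, 95, 110)
simple <- diff(p) / head(p, -1)   # (price today - price yesterday) / price yesterday
logret <- diff(log(p))            # LN(price today / price yesterday)
cbind(simple, logret)             # close at +1%, visibly apart at +15.8%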
Some people say it is not necessary to run unit root tests on panel data covering fewer than 15 years. When I ask for a reference, they point me to Baltagi's book, but I did not find anything there. Can someone please tell me which book confirms this claim?
If you can give the exact page, I will be thankful; I need it for my thesis.
By the way, I have a panel of 83 countries and 7 time periods (2011-2017).
Hi colleagues,
I use Stata 13 and I want to run a panel ARDL on the impact of institutional quality on inequality for 20 SSA countries. I have never used the technique, so I am reading the available articles that have applied it. But I need help with a Stata do-file, because I still don't know what commands to apply, how to arrange my variables in the model, and what diagnostics to conduct.
Any help or suggestion will do....thanks in anticipation!!!
I am writing an article in which I determine the effect of import substitution. Should I include the diagnostic test results, or are the short-run ECM and long-run bounds test results alone enough to interpret?
Hello everyone
I have doubts about the interpretation of the following cases; please help with that.
1- The dependent variable is the infant mortality rate (per 1,000 live births) and the independent variable is health expenditure (% of GDP), with a coefficient of -0.39.
2- The dependent variable is the natural logarithm of the infant mortality rate (per 1,000 live births) and the independent variable is urbanization (% of total population), with a coefficient of -0.42 (the independent variable is not in logarithmic form).
Can I use the Granger causality test on a monetary variable only, or do I need non-monetary variables as well?
Also, do I need to run any tests before the Granger test, such as a unit root test, or can I just use the raw data?
What free programs can I use to compute the results?
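For context: Granger causality is defined between at least two series, and stationarity is usually checked first; R is one free option (gretl is another). A minimal sketch with the tseries and lmtest packages on simulated stand-in series:

# Unit-root pre-tests, then a bivariate Granger test (simulated data).
library(tseries)   # adf.test
library(lmtest)    # grangertest

set.seed(1)
x <- as.numeric(arima.sim(list(ar = 0.5), 201))
y <- 0.4 * head(x, -1) + rnorm(200)   # y depends on lagged x
x <- tail(x, 200)                     # align the two samples

adf.test(x); adf.test(y)              # unit-root pre-tests
grangertest(y ~ x, order = 2)         # do lags of x help predict y?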
I'm having a problem deciding how to run the regression.
I want to see whether banks looked at different characteristics when pricing loans before and after COVID. If, for example, loan-specific characteristics were driving the interest rate charged in the pre-COVID period, I would like to understand whether the same relationship still holds after COVID, or whether other variables became more relevant; in my case, whether firm-specific characteristics carry more weight in the period after COVID.
I created a dummy variable for the COVID period (2020-2021), and my dataset runs from 2011 to 2021.
To sum up, I would like to understand whether and how the independent variables of my model changed their relevance after the COVID-19 event.
How can I investigate this hypothesis?
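One common way to formalize this is to interact every pricing determinant with the COVID dummy and test the interactions jointly (a Chow-type test of coefficient stability). A hedged sketch in R on simulated stand-ins (variable names are hypothetical):

# Chow-type approach: interact regressors with the covid dummy and test
# whether the interaction terms are jointly significant (simulated data).
library(car)

set.seed(1)
n  <- 400
df <- data.frame(loan_size = rnorm(n), firm_lev = rnorm(n),
                 covid = rep(0:1, each = n / 2))
df$rate <- 3 + 0.5 * df$loan_size +
           (0.2 + 0.6 * df$covid) * df$firm_lev + rnorm(n)

fit <- lm(rate ~ (loan_size + firm_lev) * covid, data = df)
linearHypothesis(fit, c("loan_size:covid = 0", "firm_lev:covid = 0"))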
This is for personal academic benefit.
I am looking to learn econometrics.
Angrist's new paradigm for dealing with experimental data is very efficient; I would like to verify whether the same could hold for topics dealing with macroeconomic data.
In time series modeling and volatility estimation, it is necessary first to remove the autocorrelation of the time series and then to estimate the volatility model (like GARCH).
The autocorrelation is assessed via the ACF, but in some situations (such as a small sample or noisy data) this procedure may yield a bad estimate of the autocorrelation structure.
For example, suppose the true model is AR(3)-GARCH(1,1) but we use AR(1)-GARCH(1,1).
Are the GARCH parameters biased in this situation?
Thanks in advance.
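The question can be explored by simulation: generate an AR(3)-GARCH(1,1) process, fit it with an AR(1) and with an AR(3) mean equation, and compare the estimated GARCH parameters. A sketch with the rugarch package (a single replication with illustrative parameter values; a proper study would repeat this many times):

# Simulate AR(3)-GARCH(1,1), then compare GARCH estimates under a
# misspecified AR(1) mean vs the correct AR(3) mean.
library(rugarch)

set.seed(1)
n <- 3000
omega <- 0.1; alpha <- 0.1; beta <- 0.8
e <- numeric(n); sig2 <- rep(omega / (1 - alpha - beta), n)
for (t in 2:n) {
  sig2[t] <- omega + alpha * e[t - 1]^2 + beta * sig2[t - 1]
  e[t]    <- sqrt(sig2[t]) * rnorm(1)
}
y <- as.numeric(arima.sim(list(ar = c(0.3, 0.2, 0.2)), n, innov = e))

fit_spec <- function(p) {
  spec <- ugarchspec(variance.model = list(garchOrder = c(1, 1)),
                     mean.model = list(armaOrder = c(p, 0)))
  coef(ugarchfit(spec, y))[c("omega", "alpha1", "beta1")]
}
rbind(AR1 = fit_spec(1), AR3 = fit_spec(3))   # compare to 0.1, 0.1, 0.8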
What if I wanted to match 2 individuals based on their Likert scores in a survey?
Example: Imagine a 3 question dating app where each respondent chooses one from the following choices in response to 3 different statements about themselves:
Strongly Disagree - Disagree - Neutral - Agree - Strongly Agree
1) I like long walks on the beach.
2) I always know where I want to eat.
3) I will be 100% faithful.
Assuming both subjects answer truthfully and that the 3 questions have equal weights, what is their % match for each question and overall? How would I calculate it for the following answers?
Example Answers:
Lucy's answers:
1) Strongly Agree
2) Strongly Disagree
3) Agree
Ricky's answers:
1) Agree
2) Strongly Agree
3) Strongly Disagree
What if I want to change the weight of each question?
Thanks!
Terry
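One simple (and certainly not unique) metric for the question above: score each answer 1-5, define the per-question match as 1 - |a - b| / 4, and combine with weights. Under that assumption, in R:

# Per-question match = 1 - |a - b| / 4 on a 1-5 scale (one possible metric).
likert <- c("Strongly Disagree" = 1, "Disagree" = 2, "Neutral" = 3,
            "Agree" = 4, "Strongly Agree" = 5)
lucy  <- likert[c("Strongly Agree", "Strongly Disagree", "Agree")]
ricky <- likert[c("Agree", "Strongly Agree", "Strongly Disagree")]

match_q <- 1 - abs(lucy - ricky) / 4   # per-question: 0.75, 0.00, 0.25
w <- c(1, 1, 1) / 3                    # change these weights to reweight questions
sum(w * match_q)                       # overall match: about 33%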
What is the most widely accepted method of measuring the impact of regulation/policy so far?
I only know that difference-in-differences (DID), propensity score matching (PSM), and two-step system GMM (for dynamic settings) are common methods. I would appreciate your opinion for a 20-year panel of firm-level data.
Hello everyone.
I'm working on my Master's thesis in my first years of study, and I think these topics might be interesting and inspiring enough:
- economic benefits from the introduction of renewable systems
- the dependence of the organization of a "smart" city on various factors, that is, what factors influence the implementation of the system; here I can conduct an econometric study
I am also attracted by these topics:
optimization of energy resources, and smart energy systems as part of the Industry 4.0 direction.
What do you think: which topics are the best?
For which topics is it easy enough to find an adequate amount of information?
P.S. As I understand it, this work shouldn't be super complex, but it should be reasonably smart and elaborated.
I am a master's student in statistics. I have been in the field of econometrics and have taken on projects in machine learning. However, I wish to change fields. Can I find a supervisor who would be willing to mentor me in bioinformatics, taking my previous and current research areas into consideration? Or do I need another master's degree in bioinformatics or a related field before I can proceed to a PhD?
Dear everyone,
I am in great distress and desperately need your advice. I have the cumulated (disaggregated) data of a survey of an industry (total exports, total labour costs, etc.) covering 380 firms. The original paper uses a two-stage least squares (TSLS) model in order to analyze several industries, with one independent variable having a relationship with the dependent variable, which, according to the author, was the limitation that ruled out an OLS approach. However, I want to conduct a single-industry analysis and exclude the variable with the relationship, BUT instead analyze the model over 3 years. What is the best econometric model to use? Can I use an OLS regression over a period of 3 years? If yes, what tests are applicable then?
Thank you so much for your help; you are helping me out so much!
Hello everyone,
I would like to analyze the effect of innovation in one industry over a time period of 10 years. The dependent variable is exports and the independent variables are R&D and labour costs.
What is the best model to use? I am planning to use a log-linear model.
Thank you very much for your greatly needed help!
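On the log-linear plan above: with exports, R&D, and labour costs all in logs, the coefficients read directly as elasticities. A minimal sketch in R (simulated stand-in data; with only ~10 annual observations, inference is fragile):

# Log-linear (log-log) sketch: coefficients are elasticities (simulated data).
set.seed(1)
df <- data.frame(rd = runif(10, 1, 5), labour = runif(10, 10, 20))
df$export <- exp(1 + 0.6 * log(df$rd) + 0.8 * log(df$labour) + rnorm(10, 0, 0.1))

fit <- lm(log(export) ~ log(rd) + log(labour), data = df)
summary(fit)    # e.g. the log(rd) coefficient is the export elasticity of R&D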
Dear colleagues,
I am planning to investigate a panel data set containing three countries and 10 variables. The time frame is a bit short, which concerns me (2011-2020 for each country). What should the sample size be in this case? Can I apply fixed effects, random effects, or pooled OLS?
Thank you for your responses beforehand.
Best
Ibrahim
Hi Everyone,
I am investigating the change of a dependent variable (Y) over time (years). I have plotted the dependent variable across time as a line graph, and it seems to be correlated with time (i.e., Y increases over time, but not in every year).
I was wondering whether there is a formal statistical test to determine if this relationship between the time variable and Y exists.
Any help would be greatly appreciated!
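Two common formal checks for the question above are a Mann-Kendall-type test (Kendall's tau of Y against time, robust to non-linearity) and a simple linear trend regression. A sketch in R with simulated stand-in data:

# Trend tests on a simulated upward-drifting series.
set.seed(1)
year <- 2000:2020
y <- 5 + 0.3 * (year - 2000) + rnorm(21)

cor.test(y, year, method = "kendall")   # Mann-Kendall-type monotonic trend test
summary(lm(y ~ year))                   # linear trend with a t-test on the slope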
ML estimation is one of the methods for estimating autoregressive parameters in univariate and multivariate time series.
My research aims to find the determinants of FDI. I am running a bounds test to check for a long-run relationship, a cointegration test, and other diagnostic tests.
I am to write an econometric research report. I've tried to find some relevant and contributing references, but it seems I cannot find any. Attached are the criteria for what should be included. I need some references.


Dear Colleagues,
I ran an error correction model, obtaining the results depicted below. The model comes from the literature, where Dutch disease effects were tested for the case of Russia. My dependent variable was the real effective exchange rate, while oil prices (OIL_Prices), terms of trade (TOT), the public deficit (GOV), and industrial productivity (PR) were the independent variables. My main concern is that only the error correction term, the dummy variable, and the intercept are statistically significant. Moreover, the residuals are not normally distributed and are heteroscedastic. There is no serial correlation issue according to the LM test. How can I improve my findings? Thank you beforehand.
Best
Ibrahim

Hello All,
Wooldridge's Introductory Econometrics (5th ed.) states that "Because maximum likelihood estimation is based on the distribution of y given x, the heteroskedasticity in Var(y|x) is automatically accounted for."
Does this hold also for bias-corrected or penalized maximum likelihood estimation under Firth logistic regression?
Please advise.
Thanks in advance!
I have non-stationary time-series data for variables such as energy consumption, trade, oil prices, etc., and I want to study the impact of these variables on the growth of electricity generation from renewable sources (I have taken natural logarithms of all the variables).
I performed a linear regression, which gave me spurious results (R-squared > 0.9).
After testing these time series for unit roots using the augmented Dickey-Fuller test, all of them were found to be non-stationary, hence the spurious regression. However, the first differences of some of them, and the second differences of the others, were found to be stationary.
Now, when I estimate the new linear regressions with the proper order of integration for each variable (in order to have a stationary model), the statistical results are not good (high p-values for some variables and a low R-squared of 0.25).
My question is: how should I proceed now? Should I change my variables?
Hi! I have a model for panel data, and my teacher told me to estimate the model with different coefficients for one of the explanatory variables. She gave me an example:
lpop @expand(@crossid) linv(-1) lvab lcost (for different coefficients for intercept)
or
lnpop c lninv @expand(@crossid)*lnvab lncost (for different coefficients for this variable).
Can someone explain to me how to do that? I tried, but it didn't work.
Dear colleagues,
I applied the Granger Causality test in my paper and the reviewer wrote me the following: the statistical analysis was a bit short – usually the Granger-causality is followed by some vector autoregressive modeling...
What can I respond in this case?
P.S. I had a small sample size and serious data limitation.
Best
Ibrahim
In my work, I use a multivariate GARCH model (DCC-GARCH). I am testing for autocorrelation in the variance model. The Ljung-Box (Q) tests for the standardized residuals and for the squared standardized residuals give different results.
Should I rely on the Ljung-Box test on the residuals or on the squared residuals?
N=1500
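For context on this thread: the two tests answer different questions. The Ljung-Box test on standardized residuals checks leftover autocorrelation in the mean equation, while the test on squared standardized residuals checks leftover ARCH effects, so for judging the variance model it is the squared version that is informative. An illustration in base R on stand-in residuals:

# Ljung-Box on levels vs squares of (pretend) standardized residuals.
set.seed(1)
z <- rnorm(1500)                            # stand-in standardized residuals
Box.test(z,   lag = 20, type = "Ljung-Box") # autocorrelation in the mean
Box.test(z^2, lag = 20, type = "Ljung-Box") # remaining ARCH effects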
Hello, dear network. I need some help.
I'm working on research using the event study approach. I have a couple of doubts about the significance of the treatment variable's lead and lag coefficients.
I'm not sure I am satisfying the pre-treatment parallel trends assumption: all the lags are statistically insignificant and lie around the 0 line. Is that enough to satisfy the identification assumption?
Also, I'm not sure about the significance of the lead coefficients and their interpretation. The table with the coefficients is attached.
Thank you so much for your help.
Hi everyone! I am writing a paper about economic growth during the pandemic. I have panel data with 24 countries and 4 periods. I received a recommendation to use the EGLS method. I used this method in EViews, but I don't know enough about it to justify choosing it or to interpret it. I know it is used when we have heteroskedasticity and autocorrelation.
Thank you, all!
I am currently working on my master's thesis and have faced some problems in designing a survey. The goal is to analyze a transition from ordinary offline retailing towards physical showrooms, with fulfillment of products effected through an online shop.
I use customer satisfaction (ranging from 1-10) as the dependent variable and the following independent variables:
F = fulfillment (1/0): 1 = now, 0 = in 3 days
A = assortment (from 10 to 20 units per shop)
P = price (from 25 down to 25 x 0.7 with discount = 17.5)
Is it possible to design a survey/experiment in a way that yields the data needed for this equation?
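If the three attributes are varied factorially, the full set of profiles for respondents to rate can be enumerated directly; a minimal sketch in R (the assortment grid is an assumed discretization of the 10-20 range):

# Full-factorial profiles over the three attributes; each respondent
# then rates profiles on the 1-10 satisfaction scale.
profiles <- expand.grid(
  fulfillment = c(1, 0),             # 1 = now, 0 = in 3 days
  assortment  = c(10, 15, 20),       # assumed grid between 10 and 20 units
  price       = c(25, 17.5)          # full price vs 30% discount
)
profiles                             # 2 x 3 x 2 = 12 candidate profiles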