
# Econometrics - Science topic

Econometrics has been defined as "the application of mathematics and statistical methods to economic data" and described as the branch of economics "that aims to give empirical content to economic relations."

Questions related to Econometrics

Greetings to everyone

I would appreciate your help with the following.

**According to panel econometrics articles and textbooks, including Professor Gujarati's book, can I conclude that when estimating a panel data model in which the cross-sectional dimension (186 countries) exceeds the time dimension (24 years), there is no need to perform a stationarity test?**

**I would be grateful if you could point me to a passage proving that no stationarity test is needed in such cases, citing the exact wording, page, and title of the book or article.**

Dear academic colleagues, I have a question about econometrics. I evaluated the relationship between the number of researchers in the health field and patents using a balanced panel analysis over 11 countries and 10 years. The data are regular; I estimated the model with least squares and then performed causality and cointegration analyses. However, one peer reviewer insists that the dependent variable should be treated as a count and recommends a count-data panel analysis. I looked into the subject, but saw no need for such an analysis, so I proceeded according to the suitability of the econometric evaluations and diagnostic tests. How can I perform such a count-data analysis in EViews? Thanks.
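Software aside, the mechanics behind a count-data (Poisson) regression can be sketched in a few lines. Below is a minimal pure-Python Newton-Raphson fit of log E[y|x] = a + b·x; the data and starting values are purely illustrative, not from the question, and a real application would use a packaged estimator:

```python
import math

# Illustrative count data (made up): y are counts, x a single regressor.
x = [0, 1, 2, 3, 4]
y = [1, 2, 4, 7, 12]

# Poisson regression: E[y|x] = exp(a + b*x), fitted by Newton-Raphson.
a, b = math.log(sum(y) / len(y)), 0.0  # start at the intercept-only fit
for _ in range(50):
    mu = [math.exp(a + b * xi) for xi in x]
    # Gradient of the log-likelihood and its 2x2 Hessian (information matrix).
    g0 = sum(yi - mi for yi, mi in zip(y, mu))
    g1 = sum((yi - mi) * xi for yi, mi, xi in zip(y, mu, x))
    h00 = sum(mu)
    h01 = sum(mi * xi for mi, xi in zip(mu, x))
    h11 = sum(mi * xi * xi for mi, xi in zip(mu, x))
    det = h00 * h11 - h01 * h01
    # Closed-form 2x2 solve for the Newton step.
    a += (h11 * g0 - h01 * g1) / det
    b += (h00 * g1 - h01 * g0) / det

print(a, b)  # fitted intercept and slope on the log scale
```

A fixed-effects count panel adds unit-specific intercepts to the same likelihood; in practice one would use a packaged Poisson/negative-binomial panel estimator rather than hand-rolled Newton steps.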

Using secondary data collected from the central bank to study the impact of digitalization on banks' profitability, how can this impact be measured more precisely with econometric tools?

The purpose of this question is to find out how far the science of statistics and econometrics has progressed in the world, so that it can make it easier for researchers to explore ideas.

I am currently exploring advanced mathematical techniques within the field of econometrics and am particularly interested in understanding how third-order differential equations, commonly referred to as "jerk equations," may be applied in time series forecasting.

Has anyone come across any scientific papers or research where jerk equations have been applied or considered for econometric time series forecasting? Any insight into the methodologies employed, advantages, disadvantages, or applications in various economic contexts would be greatly appreciated. If available, please provide the references or links to the relevant literature.

Thank you in advance for your assistance!

Are there new projects and studies that consider recursive approaches to the energy transition and resources, empirically tested with regression analysis and models?

Here is the case: as I said, I am working on how macroeconomic variables affect REIT index returns. Which tests or estimation methods should I use to understand this relationship?

I know I can use OLS, but

**is there any other method to use?** All my time series are stationary at I(0). I also need a non-linear causality test or model for panel data, in R, Stata, or Python.

I have panel data with two waves, where the same individuals answer in wave 1 and wave 2. In my regression model with individual fixed effects, I want to add a time trend variable that increases by one for each day since the start of the survey. Does the time-demeaning step of the fixed-effects estimator ruin this time trend variable? I am using PanelOLS from linearmodels.panel (Python package) to implement the fixed-effects model.
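Whether the within transformation wipes out the trend can be checked directly: entity demeaning removes only variation that is constant within an individual, so a day-count trend survives as long as individuals are observed on different days across waves. A minimal sketch with made-up two-individual, two-wave data (not the poster's dataset):

```python
# Entity (individual) demeaning of a day-count time trend.
# Two individuals, two waves; trend = days since survey start.
data = {
    "ind1": [0, 30],   # answered on day 0 and day 30
    "ind2": [5, 42],   # answered on day 5 and day 42
}

demeaned = {}
for ind, trend in data.items():
    mean = sum(trend) / len(trend)
    demeaned[ind] = [t - mean for t in trend]

print(demeaned)
# The demeaned trend still varies within each individual,
# so its coefficient remains identified after fixed effects.
```

The trend would only be absorbed if it were collinear with the fixed effects (e.g. everyone answered each wave on the same day and you also included wave dummies).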

Dear research community,

I was wondering whether there is some empirical literature on how to distinguish between different forms of competition, e.g. Bertrand, Cournot, Bertrand-Edgeworth, Cournot-Bertrand, etc. The majority of papers seem to center on the airline industry, e.g. Brander and Zhang (1990, 1993), and the literature is mainly from the 1990s (see e.g. Haskel and Martin, 1994, for another empirical contribution). However, as the econometric toolkit evolves, I guess there must be more recent empirical research out there, e.g. evidence of regime-switching between different forms of competition. I would be grateful for any hints to the empirical literature/econometric approaches.

Best,

Volker

References:

Brander, J.A. and Zhang, A. (1990): Market conduct in the airline industry: An empirical investigation, in: RAND Journal of Economics, Vol. 21 (4), 567-583

Brander, J.A. and Zhang, A. (1993): Dynamic oligopoly behaviour in the airline industry, in: International Journal of Industrial Organization, Vol. 11, 407-435

Haskel, J. and Martin, C. (1994): Capacity and competition: Empirical evidence on UK panel data, in: Journal of Industrial Economics, Vol. 42 (1), 23-44

There are two ways to test the moderating effect of a variable (assuming the moderator is a dummy variable). One is to add an interaction term to the regression equation, Y = b0 + b1*D + b2*M + b3*D*M + u, and test whether the coefficient on the interaction term is significant. An alternative is to treat the interaction model as equivalent to a grouped regression, which has the advantage of directly showing the causal effect within each group. However, we still need to test the statistical significance of the estimated D*M coefficient via the interaction model: such a test is always necessary, because between-group heterogeneity cannot be settled by intuitive judgement.

One technical detail is that if the grouped regression includes control variables, the corresponding interaction model must include all the interaction terms between the control variables and the moderator in order for the two sets of estimates to be equivalent.

If in the equation Y=b0+b1*D+b2*M+b3*D*M+u I do not add the interaction terms between the moderator and the control variables, but only the control variables alone, is the estimate of the interaction coefficient still accurate? And can b1 still be interpreted as the average effect of D on Y when M = 0?

**In other words, when I want to test the moderating effect of M in the causal effect of D on Y, should I use Y=b0+b1*D+b2*M+b3*D*M+b4*C+u or should I use Y=b0+b1*D+b2*M+b3*D*M+b4*C+b5*M*C+u?**

Reference: Jiang, T. (2022). Mediation and moderation effects in empirical studies of causal inference. China Industrial Economics, (05), 100-120. DOI: 10.19581/j.cnki.ciejournal.2022.05.005.
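The equivalence claim in the question — that the fully interacted model reproduces the grouped regressions only if the control-moderator interactions are included — can be checked numerically. A minimal pure-Python sketch (made-up data, one control C; the OLS solver is hand-rolled only to keep the example self-contained):

```python
import random

random.seed(1)

def ols(X, y):
    """Least squares via normal equations with Gaussian elimination."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for i in range(k):                      # forward elimination with pivoting
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * ai for a, ai in zip(A[r], A[i])]
            b[r] -= f * b[i]
    beta = [0.0] * k
    for i in reversed(range(k)):            # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return beta

n = 400
D = [random.randint(0, 1) for _ in range(n)]
M = [random.randint(0, 1) for _ in range(n)]
C = [random.gauss(0, 1) for _ in range(n)]
Y = [1 + 2 * d + 0.5 * m + 1.5 * d * m + (0.3 + 0.7 * m) * c + random.gauss(0, 1)
     for d, m, c in zip(D, M, C)]

# Grouped regression on the M = 1 subsample: regressors (1, D, C).
X1 = [[1, d, c] for d, m, c in zip(D, M, C) if m == 1]
y1 = [yi for yi, m in zip(Y, M) if m == 1]
g = ols(X1, y1)

# Fully interacted pooled model: (1, D, M, D*M, C, M*C).
Xf = [[1, d, m, d * m, c, m * c] for d, m, c in zip(D, M, C)]
f = ols(Xf, Y)

# Implied M = 1 coefficients from the pooled model equal the grouped ones.
print(g[1], f[1] + f[3])   # D effect in the M = 1 group, computed two ways
```

Dropping the M*C column from Xf makes the two sets of estimates diverge, which is exactly the technical detail noted above.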

I decided to learn about such an area as the use of neural networks in econometrics, regardless of subsequent employment. One PhD explained to me that:

"In econometric research, the explainability of models is important; neural networks do not provide this. For time series, neural networks can be used, but only with a special architecture, for example, LSTM. For macroeconomic forecasting tasks, as a rule, neural networks are not used. ARIMA/SARIMA, VAR, ECM are used."

But on one forum they explained to me that

"A typical task in the field of time series analysis is to predict, from a sequence of previous values of a time series, the most likely next/future value. The Large Language Model (LLM), which underlies the same ChatGPT, predicts which word or phrase will be next in a sentence or phrase, i.e. in a sequence of words in natural language. The current ChatGPT is implemented using so-called transformers - neural networks, which after 2017 began to actively replace the older, but also neural network and also sequence-oriented LSTM (long short-term memory networks) architecture, and not only in text processing tasks, but also in other areas."

That is, might the use of transformers in time series forecasting be promising? It seems this is a relatively young area, still little studied?

I ran PCA on some variables that were Likert-scaled questions (1-5). After predicting the component scores, some of the values are negative. Is something wrong? Should I take absolute values of the predicted variable?

Note: the eigenvalues were positive.

The values of the new variable (after prediction) are given as attachments.

The Stata output is as follows:

```
. pca v1 v2 v3 v4

Principal components/correlation          Number of obs   =      302
                                          Number of comp. =        4
                                          Trace           =        4
    Rotation: (unrotated = principal)     Rho             =   1.0000

    --------------------------------------------------------------------
       Component |   Eigenvalue   Difference   Proportion   Cumulative
    -------------+------------------------------------------------------
           Comp1 |      1.50368       .51615       0.3759       0.3759
           Comp2 |      .987535       .16249       0.2469       0.6228
           Comp3 |      .825045      .141311       0.2063       0.8291
           Comp4 |      .683735            .       0.1709       1.0000
    --------------------------------------------------------------------

Principal components (eigenvectors)

    ------------------------------------------------------------------
        Variable |   Comp1    Comp2    Comp3    Comp4 |   Unexplained
    -------------+------------------------------------+---------------
              v1 |  0.6017   0.0522  -0.3627  -0.7097 |             0
              v2 |  0.5426  -0.4256  -0.3741   0.6200 |             0
              v3 |  0.5034  -0.1359   0.8531  -0.0192 |             0
              v4 |  0.3001   0.8931  -0.0273   0.3340 |             0
    ------------------------------------------------------------------
```
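Negative component scores are to be expected here: scores are linear combinations of the variables after centering (for a correlation-matrix PCA, after standardization), so any response below a variable's mean contributes negatively. A small illustration with made-up Likert responses and an arbitrary illustrative weight (not the eigenvectors above):

```python
# Component scores are built from centered (standardized) data.
responses = [1, 2, 3, 4, 5, 3, 2, 4]          # made-up 1-5 Likert answers
mean = sum(responses) / len(responses)
sd = (sum((r - mean) ** 2 for r in responses) / (len(responses) - 1)) ** 0.5
z = [(r - mean) / sd for r in responses]       # standardized values

weight = 0.6                                   # illustrative loading
scores = [weight * zi for zi in z]

print(scores)  # below-mean answers give negative scores by construction
```

Because the standardized values sum to zero, the scores are centered around zero as well, so roughly half of them will be negative regardless of how the raw Likert items are coded.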

Dear researchers,

Is it possible to consider Cost-Benefit Analysis (CBA) as a suitable **form** of robustness check in econometric analysis, taking into account its effectiveness in assessing the resilience and reliability of the findings? There are many ways to check the robustness of a regression, such as changing the key variables or using another econometric method. However, is it possible to use results calculated by CBA to support the results produced by the econometric analysis? The CBA results could serve as additional evidence to defend the econometric conclusions.

What is the best Basic Econometrics textbook for undergraduate Level students?

Usually, to assess the financial performance of a co-operative society, we use ratio analysis. Beyond that, please suggest any other tools available for measuring the financial performance or growth of the organization.

Dear colleagues, I need to build a matrix A containing some values and a lot of zeros.

Thanks a lot for your help

The elements, by matrix row:

a_1 = 1 0.89 0.75 0.61 0.46 0.35 0.28 0.18 0.08, followed by 91 zeros

a_2 = 0.89 1 0.89 0.75 0.61 0.46 0.35 0.28 0.18 0.08, followed by 90 zeros

a_3 = 0.75 0.89 1 0.89 0.75 0.61 0.46 0.35 0.28 0.18 0.08, followed by 89 zeros

a_4 = 0.61 0.75 0.89 1 0.89 0.75 0.61 0.46 0.35 0.28 0.18 0.08, followed by 88 zeros

a_5 = 0.46 0.61 0.75 0.89 1 0.89 0.75 0.61 0.46 0.35 0.28 0.18 0.08, followed by 87 zeros

a_6 = 0.35 0.46 0.61 0.75 0.89 1 0.89 0.75 0.61 0.46 0.35 0.28 0.18 0.08, followed by 86 zeros

a_7 = 0.28 0.35 0.46 0.61 0.75 0.89 1 0.89 0.75 0.61 0.46 0.35 0.28 0.18 0.08, followed by 85 zeros

a_8 = 0.18 0.28 0.35 0.46 0.61 0.75 0.89 1 0.89 0.75 0.61 0.46 0.35 0.28 0.18 0.08, followed by 84 zeros

a_9 = 0.08 0.18 0.28 0.35 0.46 0.61 0.75 0.89 1 0.89 0.75 0.61 0.46 0.35 0.28 0.18 0.08, followed by 83 zeros
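The pattern described is a symmetric banded (Toeplitz) matrix: the entry in position (i, j) depends only on |i − j|, with band values 1, 0.89, …, 0.08 and zeros beyond lag 8. Assuming the intended size is 100×100 (9 band entries plus 91 zeros in the first row), it can be built directly:

```python
# Band values: c[k] is the entry at |i - j| = k; zero beyond lag 8.
c = [1, 0.89, 0.75, 0.61, 0.46, 0.35, 0.28, 0.18, 0.08]
n = 100  # 9 band entries + 91 zeros per row, as in a_1

A = [[c[abs(i - j)] if abs(i - j) < len(c) else 0.0 for j in range(n)]
     for i in range(n)]

print(A[0][:10])  # first row: the band values, then zeros
```

The same one-liner generalizes to any size: only `c` and `n` need to change.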

The code includes an interaction term between X1 and Z.

There are many statistics software packages that researchers typically use in their work.

**In your opinion, which one is better? Which one would you suggest starting with?** **Your opinions and experience can help others, particularly younger researchers, in making a selection.**

Sincerely

*Extension education plays a crucial role in disseminating knowledge and promoting sustainable development in various fields. To enhance the effectiveness and impact of extension programs, the integration of econometric approaches is gaining importance. In this article, we will explore how econometric analysis can be leveraged in extension education to inform evidence-based outreach strategies and facilitate rigorous program evaluation. By harnessing data and statistical techniques, extension professionals can gain valuable insights into the factors influencing outreach outcomes and make informed decisions to improve program effectiveness*

Hey there! I would like to conduct an event study with the buy-and-hold approach, and that website was suggested to me. It discusses long-run event studies, but I could not find the option on the ARC interface, and I was wondering whether it is there and I am just missing it. Thank you very much!

In 2007 I did an Internet search for others using cutoff sampling, and found a number of examples, noted at the first link below. However, it was not clear that many used regressor data to estimate model-based variance. Even if a cutoff sample has nearly complete 'coverage' for a given attribute, it is best to estimate the remainder and have some measure of accuracy. Coverage could change. (Some definitions are found at the second link.)

Please provide any examples of work in this area that may be of interest to researchers.

We know that correlation and cointegration are two different things. The question that I want to put here and share with you is whether this remains true when we consider the wavelet concept.

Most empirical papers use a single econometric method to demonstrate a relationship between two variables. For robustness, isn't it safer to use a variety of methods (cointegration, IV models with thresholds, wavelets) before concluding?

Hey there. I am trying to walk through this concept in my head but unfortunately do not have the name for the type of model I am attempting to create. Let me describe the data:

1) One dependent variable on an hourly interval

2) Multiple exogenous "shock" variables that have positive effects on the dependent variable.

3) The dependent variable has 0 effect on the exogenous shock variables.

The dependent variable can be modeled by a function of its own lags and the exogenous shock variables.

I would like to model the variable with an ARIMA model with exogenous variables that have an immediate impact at time T and lagged effects for a short period of time. (This is similar to a VAR's IRFs, except that the exogenous variables are independent of the dependent variable.)

The assumption is that without the exogenous shock variables, there is an underlying behavior of the data series. It is this underlying behavior that I would like to capture with the ARIMA. The exogenous shock variables are essentially subtracted from the series in order to predict what the series would look like without exogenous interference.

The problem:

I am worried that the ARIMA will use information from the "exogenous" shocks within the dependent series when estimating the AR and MA terms. That would mean a positive bias in those terms. For example: if an exogenous shock is estimated to increase the dependent variable by 100 units, then this 100-unit increase should NOT affect the estimation of the AR or MA terms, since it is considered unrelated to the underlying behavior of the dependent variable.

I've attempted to write this out mathematically as well.
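This worry can be illustrated with a small simulation: when shocks have lagged effects and are omitted, the lag coefficient soaks up shock persistence and is biased; including the shock terms as regressors recovers the true dynamics. A minimal sketch (an AR(1) with a two-period shock effect, made-up parameters, plain OLS rather than full ARIMA; the OLS solver is hand-rolled for self-containment):

```python
import random

random.seed(0)

def ols(X, y):
    """Least squares via normal equations (Gaussian elimination)."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for i in range(k):
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * ai for a, ai in zip(A[r], A[i])]
            b[r] -= f * b[i]
    beta = [0.0] * k
    for i in reversed(range(k)):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return beta

# Simulate y_t = 0.5*y_{t-1} + 5*s_t + 3*s_{t-1} + e_t,
# where s_t is an exogenous shock (Bernoulli) with a lagged effect.
n = 5000
s = [1 if random.random() < 0.2 else 0 for _ in range(n)]
y = [0.0] * n
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 5 * s[t] + 3 * s[t - 1] + random.gauss(0, 1)

rows = range(2, n)
# Naive AR(1): the omitted lagged shock leaks into the lag coefficient.
naive = ols([[1, y[t - 1]] for t in rows], [y[t] for t in rows])
# Controlling for s_t and s_{t-1} recovers the true 0.5.
full = ols([[1, y[t - 1], s[t], s[t - 1]] for t in rows], [y[t] for t in rows])

print(naive[1], full[1])  # biased vs. (approximately) unbiased lag coefficient
```

In other words, estimating the ARMA terms jointly with the exogenous transfer terms (an ARIMAX/transfer-function setup) addresses the bias; subtracting pre-estimated shock effects and then fitting a plain ARIMA does not.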

Dear Colleagues,

QQ (quantile-on-quantile) regression is perhaps one of the latest econometric estimation approaches. If you have expertise, could you please help me by providing useful information on how to perform QQ regression using R or Stata?

Dear Scholars, I am measuring the effect of policy on firms' performance. I found a common (in both treatment and control groups) structural break 4 years before the policy intervention. I used the difference-in-difference model to find the impact of the policy. I am using 20 years of firm-level panel data. What are your suggestions?

Hi data scientists, I am an aspiring data scientist in the field of econometrics. I need the book Using Python for Introductory Econometrics; if anyone cares to share it, I will be grateful. Thanks.

Dear esteemed senior colleagues, I am currently investigating systemic risk in the banking industry. I need help on how to empirically estimate these systemic risk measures: Conditional Value at Risk (CoVaR), introduced by Adrian & Brunnermeier (2016); the Long Run Marginal Expected Shortfall (LRMES); and the SRISK method of Acharya, Engle & Richardson (2012).

Thanks for your usual help.

Hello, I am looking for an econometric study examining the relationship between a Financial Inclusion Index and post-office-related variables. Can you help me?

I ran the ARDL approach in EViews 9, and it turns out that the independent variable has a small coefficient, which is displayed as zero in the long-run equation shown in the table, despite having a very small (nonzero) value under ordinary least squares. How can I display this low value clearly?

Using EViews 9, I ran the ARDL test, which produces one R-squared value in the initial ARDL model output and another R-squared value under the bounds test. What is the difference between these two R-squared values?

Hi! I would like to have an opinion on something, rather than a straight-out answer, so to speak. In time-series econometrics, it is common to present both the long-run coefficients from the cointegrating equation and the short-run coefficients from the error correction model. Since I have a lot of specifications, and since I'm really only interested in the long run, I only present the long-run coefficients from the cointegrating equation in a paper I'm writing. Would you say that is feasible? I'm using the Phillips-Ouliaris single-equation approach to cointegration.

I'm implementing a pairs trading strategy.

Consider the linear model:

ln(wage)= a + b D + u

where D is a binary variable (say, female); the textbook says that when D changes from 0 to 1, then b (equal to 0.067) has to be interpreted as follows:

b×100 = 6.7 percentage points of change in wage

However, I would say in this case:

6.7 percent change in wage (or a change in wage of 6.7%)

Am I right, or is the textbook right? Many thanks.
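For what it is worth, the exact implied change can be computed from the log-linear form: for a discrete change in D, the proportional change in the wage is exp(b) − 1, and 100·b is only an approximation to it (good for small b):

```python
import math

b = 0.067  # coefficient on the dummy D in ln(wage) = a + b*D + u

# Exact proportional wage change when D goes from 0 to 1:
exact = math.exp(b) - 1
print(round(100 * b, 1), round(100 * exact, 2))  # approximate vs exact percent
```

So 6.7 is the approximate percent change, while the exact change is about 6.93 percent; neither is a "percentage point" change in the wage itself.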

2nd generation panel unit root tests

Hi Dear All,

I have a dataset for one country from 2010 to 2021, and the dataset does not include the country's regions.

How can we test for an endogeneity problem in cross-sectional data? Are there any tests? I don't know much about econometrics.

For a beginner who wants to try his or her hand at econometric calculations such as vector autoregression (VAR), generalised additive models, spline interpolation, the ARDL test, etc., which of the following tools/software would you recommend?

**1. Eviews**

**2. Gretl**

**3. Stata**

**4. R**

**5. Matlab**

Dear colleagues:

I would be so grateful if you could provide me an answer or a solution to my problem.

I am using an ARDL model, and all the pre- and post-estimation tests are fine and satisfy the model's assumptions. However, the error correction term is positive and significant, whereas according to econometric theory it should be negative and between 0 and -1. What are your recommendations?

Thank you in advance

Sorry for the layman question, but I am currently taking an introductory course in econometrics at my school. In an exercise I'm currently working on, I have specified the following linear model in R: Averagehourlyincome = B0 + B1*age + B2*female + B3*bachelor + B4*age*female

where female and bachelor are dummy variables (0 if male, 1 if female; 1 if the person has a bachelor's degree, 0 if not). One of the questions is that, after defining the model and interpreting the coefficients, I should test the differences between males and females in this model. To be frank, I don't really know how to proceed; I would be very thankful if someone could tell me what I need to do.

Thanks in advance,

Hi,

Could anyone suggest material (ppt, pdf) related to Applied Econometrics with R (intro level)?

Thanks in advance.

I am using two-step system GMM to find the impact of two independent business-related variables on the governance of an economy.

Dataset size: 8 Year data for 71 countries

I am not using any control variables in the main models, but I am reporting 20 different results for different types of countries using the independent variables and their one-year lags. For example, for the full dataset, I checked the impact of x1, x2, L1.x1, and L1.x2. Then I ran a similar regression only for the developed countries, and so on.

If I find fairly consistent results, and shed discussion on impact of different control variables only in additional analysis, would it suffice?

(Prior literature used 2-4 control variables but they used only OLS. Many of those papers were published in ABDC A/B ranked journals).

Thanks for your cooperation.

Where can I find the heterogeneity tests for a panel data model in EViews?

I'm studying this issue and would like to model it in econometrics, and moreover to model it with various energy-modeling approaches.

I have converted a decommissioned PowerEdge R610 server for use in computational econometrics. Any advice for keeping it cool?

I am using an ARDL model; however, I am having some difficulties interpreting the results. I found that there is cointegration in the long run. I have provided pictures below.

I am using a panel dataset (N=73; T=9). Dataset Timeframe: 2010-2018

In the GMM estimate on the total dataset, the AR(1) and AR(2) values are fine.

But to investigate the impact of the European crisis, I had to split the data (5 Years during and immediately after the crisis, and the subsequent 4 years). But when GMM is run on the second set of data, (2015-2018), in one of the models, AR(1) and AR(2) values were not generated.

Is the result still usable? What are the potential problems of using this specific result?

Economics has treated econometrics as

**a universal solvent**: a technique that can be applied to any economic question, that is sufficient on its own, and that therefore makes other applied techniques redundant. Peter Swann, in his book, indicates the proper place of econometrics, argues against this notion, and even regards it as a severe error. He advises fellow economists to learn to respect and assimilate what he calls

**vernacular knowledge of the economy**. His top message to economists is what the great French composer Paul Dukas told his pupil: "Listen to the birds, they are great masters." If a fellow economist asks, "don't most economists do this already?", then Swann's answer is clear: "… some economists do use vernacular knowledge some of the time to underpin what they do … incidentally to make a piece of high technique more approachable … outside this limited context, economists do not tend to take the vernacular seriously." Any argument for or against it?

Dear all, I estimated my VAR and tested its stationarity through the inverse roots, and it is stationary.

Also, I checked the autocorrelation of the residuals up to the 7th lag (I am estimating my VAR with 6 lags), and there is no autocorrelation at the sixth lag. (There is autocorrelation at the 2nd and 5th lags; is that also a problem? As far as I have checked, only the 6th lag matters.)

But my residuals are not normal. Can I apply CLT and say that it is asymptotically normal?

Also, my residuals are heteroskedastic, can I continue with this model? What are my limitations?

Thanks in advance!

What is the most efficient method of measuring the impact of the volatility of one variable on another variable?

In the book **Mastering 'Metrics**, **Joshua D. Angrist and Jörn-Steffen Pischke**, in an example of the causal effect of having health insurance on health levels, describe: "... This in turn leads to a simple but important conclusion about the difference in average health by insurance status:

Difference in group means = Avgn[Yi|Di=1] - Avgn[Yi|Di=0]

= Avgn[Y1i|Di=1] - Avgn[Y0i|Di=0], (1.2)"

Next, they prove that Difference in group means differs from causal effects "The constant-effects assumption allows us to write:

Y1i=Y0i+k, (1.3)

or, equivalently, Y1i - Y0i = k. In other words, k is both the individual and average causal effect of insurance on health. Using the constant-effects model (equation (1.3)) to substitute for

Avgn[Y1i|Di = 1] in equation (1.2), we have:

Avgn[Y1i|Di = 1] - Avgn[Y0i|Di = 0]

={k+ Avgn[Y0i|Di = 1]}- Avgn[Y0i|Di = 0]

=k+ {Avgn[Y0i|Di = 1]-Avgn[Y0i|Di = 0]},”

From this they obtain that: "Difference in group means = Average causal effect + Selection bias."

I think there are some confusing aspects of this process. The difference between the difference in group means and the ATE should be described in more detail as:

**Difference in group means**

= Avgn[Yi|Di=1] - Avgn[Yi|Di=0] = Avgn[Y1i|Di=1] - Avgn[Y0i|Di=0]

**= Avgn[Y1i-Y0i] + Avgn[Y0i|Di=1] - Avgn[Y0i|Di=0] + {1-Pr[Di=1]}*{Avgn[Y1i-Y0i|Di=1] - Avgn[Y1i-Y0i|Di=0]}**

where **Avgn[Y1i-Y0i]** is the **ATE**; or,

**= Avgn[Y1i-Y0i|Di=1] + Avgn[Y0i|Di=1] - Avgn[Y0i|Di=0]**

where **Avgn[Y1i-Y0i|Di=1]** is the **ATT**; or,

**= Avgn[Y1i-Y0i|Di=0] + Avgn[Y1i|Di=1] - Avgn[Y1i|Di=0]**

where **Avgn[Y1i-Y0i|Di=0]** is the **ATC**.

Under the constant-effects assumption (Y1i - Y0i = k),

**Avgn[Y1i-Y0i] = Avgn[Y1i-Y0i|Di=1] = Avgn[Y1i-Y0i|Di=0].** So,

**Difference in group means**

= Avgn[Y1i-Y0i]+Avgn[Y0i|Di=1]-Avgn[Y0i|Di=0]+0

= Avgn[Y1i-Y0i]+Avgn[Y0i|Di=1]-Avgn[Y0i|Di=0]

**= k+Avgn[Y0i|Di=1]-Avgn[Y0i|Di=0].**

or, = Avgn[Y1i-Y0i|Di=1]+Avgn[Y0i|Di=1]-Avgn[Y0i|Di=0]

=Avgn[Y1i-Y0i]+Avgn[Y0i|Di=1]-Avgn[Y0i|Di=0]

**=k+Avgn[Y0i|Di=1]-Avgn[Y0i|Di=0].**

or, = Avgn[Y1i-Y0i|Di=0]+Avgn[Y1i|Di=1]-Avgn[Y1i|Di=0]

= Avgn[Y1i-Y0i]+Avgn[Y1i|Di=1]-Avgn[Y1i|Di=0]

**= k+Avgn[Y1i|Di=1]-Avgn[Y1i|Di=0].**

I am not sure if my derivation process above is correct.
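For reference, the derivation above can be compared against the standard decomposition in the causal inference literature, stated compactly (with \(\pi = \Pr[D_i = 1]\)); since \(\mathrm{ATT} - \mathrm{ATE} = (1-\pi)(\mathrm{ATT} - \mathrm{ATC})\), the two lines agree, and under constant effects ATE = ATT = ATC so the last term vanishes:

```latex
\begin{aligned}
\underbrace{E[Y_i \mid D_i=1] - E[Y_i \mid D_i=0]}_{\text{difference in group means}}
&= \underbrace{E[Y_{1i}-Y_{0i} \mid D_i=1]}_{\text{ATT}}
 + \underbrace{E[Y_{0i} \mid D_i=1] - E[Y_{0i} \mid D_i=0]}_{\text{selection bias}} \\
&= \underbrace{E[Y_{1i}-Y_{0i}]}_{\text{ATE}}
 + \text{selection bias}
 + (1-\pi)\bigl(\mathrm{ATT}-\mathrm{ATC}\bigr).
\end{aligned}
```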

At the moment, I have one explanation in mind. If labor productivity is high, firms may focus more on reinvestment prospects instead of repaying loan installments. But I cannot find any prior literature in support of this claim. I would appreciate it if you can provide any other explanations or refer to any relevant literature.

An econometric regression of the relationship between the daily income and daily expenditure of a market vendor in the Lae vegetable market.

Hey folks,

New Year's Greetings!

I have 2019 undergraduate entrance exam data (i.e. 7,891 obs), and I am interested in using regression methods to study relationships among the limited data variables. I have data on the following:

math score, english score, average score (i.e. math+engl)/2, age, sex, nationality, college program, marital status, school type, and college major.

This is what I started doing: first, I ran a multiple linear regression with English score as Y, math score and age as the main X variables, and all the other categorical variables as controls.

I got some interesting results from the MLR approach, but I am afraid the math score may not interplay with the English score, which could make my analysis spurious.

Second, I am thinking about using quantile regression, but I am not so sure about this approach.

Finally, I am contemplating if SURE model would be applicable for such a situation like this.

Please let me know which methods will be feasible to explore.

Thank you for taking the time to read this.

All the best,

JG

Econometric analysis is a common way to answer economic questions in research, and various software packages such as "

*Stata*", "*Eviews*", "*R*", "*Python*", etc. are used for this aim. Hence, which sources do you suggest for economics students learning econometrics by software? **Your answer could help many students who want to start learning, so I would appreciate it if you could share your experience or share this question with other researchers and teachers.**

The various literature regarding oil price impact on oil and gas company stock uses a multi-factor asset pricing model, that includes the interest rate, a market index rate, and oil price. (Sadorsky, 2001) (Mohanty and Nandha, 2011) (Cardoso, 2014) (Boyer and Fillion, 2007)

The literature surrounding oil price impact on an index mostly uses a VAR model. (Abhyankar and Wang, 2008) (Papapetrou, 2001) (Park and Ratti, 2008)

Rarely a VECM model is used. (Lanza et al, 2004) (Hammoudeh et al, 2004)

I have once seen a regression calculation but can't find the explanation for it (Gupta, 2016)

Due to renewable investment of individual companies only being available in news events or annual reports I am unsure which method to follow.

Is using a VAR model for monthly Oil returns, monthly O&G company index returns, and a dummy variable for renewable investment events in the news a possibility?

I would also be open to splitting my question into 3 sub-questions if this makes it easier to apply methodology.

**How does oil price impact UK and US market index?**

**How does oil price impact the stock price of public oil and gas companies?**

**How does renewable investment impact the stock price of oil and gas majors?**

I would also like to ask whether a correlation coefficient or a simple regression analysis can be used to establish a relationship.

Any help and advice would be highly appreciated.

Dear All,

I’m conducting an event study for the yearly inclusion and exclusion of some stocks (from different industry sectors) in an index.

I need to calculate the abnormal return per each stock upon inclusion or exclusion from the index.

I have some questions:

1- How do I decide on the length of the lookback period for the "estimation window", and how do I justify it?

2- Stock return is calculated by:

(price today – price yesterday)/(price yesterday)

OR

LN(price today/price yesterday)?

I see both ways are used, although they give different results.

Can any of them be used to calculate CAR?

3- When calculating the abnormal return as the difference between the stock return and a benchmark (market) return, should the benchmark be the index itself (to which stocks are added or from which they are excluded), or the sector index related to the stock?

Appreciate your advice with justification.

Many thanks in advance.
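On question 2, the two return definitions can be compared directly: for small price moves they nearly coincide, and log returns have the convenience that they add over time while simple returns compound. A quick check with made-up prices:

```python
import math

prices = [100, 102, 99, 103]  # made-up daily prices

# Simple and log (continuously compounded) daily returns.
simple = [(p1 - p0) / p0 for p0, p1 in zip(prices, prices[1:])]
logret = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]

# Cumulating over the window: log returns add, simple returns compound.
cum_from_logs = math.exp(sum(logret)) - 1
cum_from_simple = math.prod(1 + r for r in simple) - 1

print(cum_from_logs, cum_from_simple)  # identical total return, two routes
```

Either convention can be used for CAR-type aggregation as long as abnormal returns are computed and cumulated consistently under the same convention.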

Some people say it is not necessary to do a unit root test when a panel covers fewer than 15 years. When I ask for a reference, they point me to Baltagi's book, but I didn't find anything there. Can someone please tell me which book confirms this claim?

If you can give me the exact page, I'll be thankful; I need it for my thesis.

By the way, I have a panel dataset with 83 countries and 7 time periods (2011-2017).

Hi colleagues,

I use Stata 13 and I want to run a panel ARDL on the impact of institutional quality on inequality for 20 SSA countries. I have never used the technique, so I am reading the available articles that used it. But I need help with a Stata do-file, because I still don't know what commands to apply, how to arrange my variables in the model, or what diagnostics to conduct.

Any help or suggestion will do....thanks in anticipation!!!

I am writing an article in which I determine the effect of import substitution. Should I include the diagnostic test results, or are only the short-run ECM and long-run bounds test results enough to interpret?

Hello everyone

I have doubts about the interpretation of the following cases, please help with that.

1-Dependent variable is infant mortality rate (per 1000 live births) and independent variable is health expenditure (%GDP) with -0.39 coefficient.

2- Dependent variable is natural logarithm of infant mortality rate (per 1000 live births) and independent variable is urbanization (% of total population) with -0.42 coefficient (independent variable is not in logarithm form).

Can I use the Granger causality test on a monetary variable only, or do I need non-monetary variables as well?

Also, do I need to run any test before Granger, such as a unit root test, or can I just use the raw data?

What free programs can I use to compute the data?

I'm having a problem deciding how to run the regression.

I want to see whether banks look at different characteristics when pricing loans before and after COVID. If, for example, in the pre-COVID period loan-specific characteristics were the ones driving the interest rate charged, I would like to understand whether the same relationship still holds after COVID, or whether other variables become more relevant; in my case, whether firm-specific characteristics carry more weight in the period after COVID.

I created a dummy variable for the COVID period (2020-2021), and my dataset runs from 2011 to 2021.

To sum up, I would like to understand whether and how the independent variables of my model changed their relevance after COVID-19.

How can I investigate this hypothesis?
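One common way to formalize this is to interact each characteristic with the COVID dummy and test the interaction coefficients (a Chow-type test): the interaction coefficient in the pooled model equals the difference between the two period-specific slopes. A minimal sketch with one characteristic and made-up data:

```python
import random

random.seed(2)

def slope(x, y):
    """Simple-regression slope: cov(x, y) / var(x)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

# Made-up loan data: the rate depends on a characteristic x, with a
# steeper slope in the COVID period (d = 1).
n = 500
d = [0] * n + [1] * n
x = [random.gauss(0, 1) for _ in range(2 * n)]
rate = [2 + (0.5 + 1.0 * di) * xi + random.gauss(0, 0.3)
        for di, xi in zip(d, x)]

pre = slope([xi for xi, di in zip(x, d) if di == 0],
            [ri for ri, di in zip(rate, d) if di == 0])
post = slope([xi for xi, di in zip(x, d) if di == 1],
             [ri for ri, di in zip(rate, d) if di == 1])

# post - pre is what the x*dummy interaction coefficient estimates
# in the pooled regression rate ~ x + d + x*d.
print(pre, post, post - pre)
```

With several characteristics, the same logic extends to a joint test that all interaction coefficients are zero, which is the formal version of "did the pricing relationship change after COVID".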

This is for personal academic benefit.

I am looking to learn econometrics.

Angrist's new paradigm for dealing with experimental data is very efficient; I would like to verify whether the same holds for topics dealing with macroeconomic data.